I'd like to see a Solaris client.
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dennis Chen
Sent: Wednesday, March 04, 2015 2:00 AM
To: ceph-devel; ceph-users; Sage Weil; Loic Dachary
Subject: [ceph-users] The project of ceph client file
I have a somewhat interesting scenario. I have an RBD of 17TB formatted using
XFS. I would like it accessible from two different hosts, one mapped/mounted
read-only, and one mapped/mounted as read-write. Both are shared using Samba
4.x. One Samba server gives read-only access to the world
CephFS, yes, but it's not considered production-ready.
You can also use an RBD volume and place OCFS2 on it and share it that way, too.
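A minimal sketch of what that OCFS2-on-RBD setup might look like; the pool and image names ("rbd/shared-img"), the mount point, and the two-node limit are all placeholder assumptions, and the o2cb cluster stack must already be configured on both hosts:

```shell
# On each host: map the same image (read-write on both sides;
# OCFS2's DLM coordinates the concurrent access, not RBD)
rbd map rbd/shared-img

# Once, on one host only: create the clustered file system
# (-N 2 = allow up to two simultaneous nodes)
mkfs.ocfs2 -N 2 /dev/rbd/rbd/shared-img

# On each host, with o2cb services running:
mount -t ocfs2 /dev/rbd/rbd/shared-img /mnt/shared
```

The key point is that an ordinary local file system (ext4, XFS) on a shared block device will corrupt itself under concurrent writers; only a cluster-aware file system makes the two-host mapping safe.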
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of yang.bi...@zte.com.cn
Sent: Friday, October 31, 2014 12:22 AM
To:
and
data pools to eliminate the HEALTH_WARN issue.
-----Original Message-----
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Thursday, September 11, 2014 2:09 PM
To: McNamara, Bradley
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgraded now MDS won't start
On Wed, Sep 10, 2014 at 4
Hello,
This is my first real issue since running Ceph for several months. Here's the
situation:
I've been running an Emperor cluster for several months. All was good. I
decided to upgrade since I'm running Ubuntu 13.10 and 0.72.2. I decided to
first upgrade Ceph to 0.80.4, which was the
The other thing to note is that it appears you're trying to decrease the
PG/PGP_num parameters, which is not supported. In order to decrease those
settings, you'll need to delete and recreate the pools. All new pools created
will use the settings defined in the ceph.conf file.
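The delete-and-recreate step described above might look like the sketch below. The pool name "mypool" and the PG counts are placeholders, and deleting a pool destroys its data, so this only makes sense for an empty or expendable pool:

```shell
# pg_num/pgp_num can only be increased in place; to "shrink" a pool,
# delete it and recreate it with the desired counts.
# WARNING: this destroys everything stored in the pool.
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it

# Recreate with smaller values (arguments: name, pg_num, pgp_num)
ceph osd pool create mypool 256 256
```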
The underlying file system on the RBD needs to be a clustered file system, like
OCFS2, GFS2, etc., and a cluster between the two, or more, iSCSI target servers
needs to be created to manage the clustered file system.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
The formula was designed to be used on a per-pool basis. Having said that,
though, when looking at the number of PG's from a system-wide perspective, one
does not want too many total PG's. So, it's a balancing act, and it has been
suggested that it's better to have slightly more PG's than you need.
Do you have all of the cluster IP's defined in the host file on each OSD
server? As I understand it, the mon's do not use a cluster network, only the
OSD servers.
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gandalf
I believe any kernel greater than 3.9 supports format 2 RBD's. I'm sure
someone will correct me if this is a misstatement.
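A sketch of creating and mapping a format 2 image with the kernel client; the pool and image names are placeholders. Format 2 is what enables layering (clones from snapshots), which is why it matters here:

```shell
# Create a 10 GiB image using RBD image format 2
rbd create mypool/myimage --size 10240 --image-format 2

# Map it with the kernel RBD client (requires a kernel
# recent enough to understand format 2 images)
rbd map mypool/myimage
```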
Brad
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dyweni - Ceph-Users
Sent: Thursday, April
Take a look at ProxmoxVE. Has full support for Ceph, is supported, and uses
KVM/QEMU.
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Brian Candler
Sent: Friday, April 04, 2014 1:44 AM
To: Brian Beverage;
There was a very recent thread discussing PG calculations, and it made me doubt
my cluster setup. So, Inktank, please provide some clarification.
I followed the documentation, and interpreted that documentation to mean that
PG and PGP calculation was based upon a per-pool calculation. The
I'm confused...
The bug tracker says this was resolved ten days ago. Also, I actually used
ceph-deploy on 2/12/2014 to add two monitors to my cluster, and it worked, and
the documentation says it can be done. However, I believe that I added the new
mon's to the ceph.conf in the
I have a test cluster that is up and running. It consists of three mons, and
three OSD servers, with each OSD server having eight OSD's and two SSD's for
journals. I'd like to move from the flat crushmap to a crushmap with typical
depth using most of the predefined types. I have the current
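The usual round trip for moving from a flat CRUSH map to one with host/rack/root depth is to decompile, edit, and re-inject the map; the file names below are placeholders:

```shell
# Fetch the current compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile it to an editable text form
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt: add host/rack/root buckets using the
# predefined types, and point the rules at the new hierarchy ...

# Recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Injecting a new map triggers data movement as placements are recalculated, so this is best done on a test cluster first, exactly as described above.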
Just for clarity, since I didn't see it explained: how are you accessing Ceph using ESXi? Is it via iSCSI or NFS? Thanks.
Brad McNamara
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Maciej Bonin
Sent: Tuesday,
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Peter Matulis
Sent: Wednesday, January 29, 2014 8:11 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG's and Pools
On 01/28/2014 09:46 PM, McNamara, Bradley wrote:
I finally have my first test cluster up and running. No data on it, yet. The
config is: three mons, and three OSDS servers. Each OSDS server has eight 4TB
SAS drives and two SSD journal drives.
The cluster is healthy, so I started playing with PG and PGP values. By the
provided
Correct me if I'm wrong, I'm new to this, but I think the distinction between
the two methods is that using 'qemu-img create -f rbd' creates an RBD for
either a VM to boot from, or for mounting within a VM. Whereas, the OP wants a
single RBD, formatted with a cluster file system, to use as a
Instead of using ext4 for the file system, you need to use a clustered file
system on the RBD device.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jon
Sent: Wednesday, May 29, 2013 7:55 AM
To: Igor Laskovy
Cc: ceph-users
Subject: Re:
...@gmail.com]
Sent: Wednesday, May 29, 2013 11:47 AM
To: McNamara, Bradley
Cc: ceph-users
Subject: Might Be Spam -RE: [ceph-users] Mounting a shared block device on
multiple hosts
Hello Bradley,
Please excuse my ignorance, I am new to CEPH and what I thought was a good
understanding of file