[ceph-users] Increasing osd pg bits and osd pgp bits after cluster has been setup

2014-12-03 Thread J-P Methot
Hi, I'm trying to reach 4096 PGs as suggested in the docs, but I can't have more than 32 PGs per OSD. I suspect this is caused by the default of 6 pg bits (2^5 = 32, the first bit being for 2^0). Is there a command to increase it once the OSDs have been linked to the cluster and
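For context, the PG count of an existing pool can normally be raised in place without touching osd pg bits; a minimal sketch, assuming a pool named "volumes" (pg_num has to be raised before pgp_num):

    ceph osd pool set volumes pg_num 4096
    ceph osd pool set volumes pgp_num 4096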

[ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread J-P Methot
Hi, I've been looking into increasing the performance of my ceph cluster for openstack that will be moved into production soon. It's a full 1TB SSD cluster with 16 OSDs per node over 6 nodes. As I searched for possible tweaks to implement, I stumbled upon unitedstack's presentation at the
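The tweak in question is the filestore fiemap option, which is off by default because FIEMAP has been unreliable on some kernels and filesystems; if one wanted to experiment with it anyway, the ceph.conf snippet would look roughly like this:

    [osd]
    filestore fiemap = true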

Re: [ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread J-P Methot
with fiemap set to true? On 2/2/2015 10:47 AM, Haomai Wang wrote: There exists a more recent discussion in PR (https://github.com/ceph/ceph/pull/1665). On Mon, Feb 2, 2015 at 11:05 PM, J-P Methot jpmet...@gtcomm.net wrote: Hi, I've been looking into increasing the performance of my ceph cluster

[ceph-users] client unable to access files after caching pool addition

2015-02-03 Thread J-P Methot
Hi, I tried to add a caching pool in front of openstack vms and volumes pools. I believed that the process was transparent, but as soon as I set the caching for both of these pools, the VMs could not find their volumes anymore. Obviously when I undid my changes, everything went back to
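For reference, a writeback cache tier is usually wired up along these lines (pool names are placeholders, and on Hammer the hit_set and target-size parameters also need to be set); one commonly reported pitfall is that the OpenStack client keyrings need capabilities on the cache pool as well, otherwise VMs lose access to their volumes:

    ceph osd tier add volumes volumes-cache
    ceph osd tier cache-mode volumes-cache writeback
    ceph osd tier set-overlay volumes volumes-cache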

[ceph-users] OSDs not getting mounted back after reboot

2015-01-28 Thread J-P Methot
Hi, I'm having an issue quite similar to this old bug: http://tracker.ceph.com/issues/5194, except that I'm using centos 6. Basically, I set up a cluster using ceph-deploy to save some time (this is a 90+ OSD cluster). I rebooted a node earlier today and now all the drives are unmounted and
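As a stopgap on sysvinit-era systems, GPT-partitioned OSDs that did not come back after a reboot can often be remounted and started with ceph-disk, roughly:

    ceph-disk list
    ceph-disk activate-all
    ceph-disk activate /dev/sdb1    (per device; device name is a placeholder)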

[ceph-users] Ceph configuration on multiple public networks.

2015-01-09 Thread J-P Methot
Hi, We've set up ceph and openstack on a fairly peculiar network configuration (or at least I think it is) and I'm looking for information on how to make it work properly. Basically, we have 3 networks: a management network, a storage network and a cluster network. The management network is
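For reference, public network accepts a comma-separated list of subnets, so a layout with separate management and storage client networks plus a dedicated cluster network can usually be expressed in ceph.conf along these lines (the subnets below are placeholders):

    [global]
    public network = 10.10.1.0/24, 10.10.2.0/24
    cluster network = 10.10.3.0/24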

Re: [ceph-users] Ceph Cluster Address

2015-03-03 Thread J-P Methot
I had to go through the same experience of changing the public network address and it's not easy. Ceph seems to keep a record of which IP address is associated with which OSD, along with a port number for the process. I was never able to find out where this record is kept or how to change it manually.
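For what it's worth, the addresses in question live in the cluster maps rather than in a config file: OSD addresses are recorded in the OSD map and re-registered from the configured public network when each OSD restarts, while monitor addresses sit in the monmap. They can at least be inspected with:

    ceph osd dump | grep '^osd\.'
    ceph mon dump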

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread J-P Methot
let me do that. Additionally, since the compute component also has its own ceph pool, I'm pretty sure it won't let me migrate the data through openstack. On 3/26/2015 3:54 PM, Steffen W Sørensen wrote: On 26/03/2015, at 20.38, J-P Methot jpmet...@gtcomm.net wrote: Lately I've been going back

[ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread J-P Methot
Hi, Lately I've been going back to work on one of my first ceph setups and now I see that I have created way too many placement groups for the pools on that setup (about 10,000 too many). I believe this may impact performance negatively, as the performance on this ceph cluster is abysmal.
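On the Ceph releases of this era pg_num cannot be decreased, so the usual way out is to create a correctly sized pool and copy the data over; a rough sketch (names and PG count are placeholders, and rados cppool has known limitations around snapshots):

    ceph osd pool create volumes-new 512
    rados cppool volumes volumes-new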

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-21 Thread J-P Methot
From: Eneko Lacunza elacu...@binovo.es To: J-P Methot jpmet...@gtcomm.net, Christian Balzer ch...@gol.com, ceph-users@lists.ceph.com Sent: Tuesday, 21 April, 2015 8:18:20 AM Subject: Re: [ceph-users] Possible improvements for a slow write speed (excluding

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-20 Thread J-P Methot
ratio 1:1. The replication size is 3, yes. The pools are replicated. On 4/20/2015 10:43 AM, Barclay Jameson wrote: Are your journals on separate disks? What is your ratio of journal disks to data disks? Are you doing replication size 3 ? On Mon, Apr 20, 2015 at 9:30 AM, J-P Methot jpmet

[ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-20 Thread J-P Methot
Hi, This is similar to another thread running right now, but since our current setup is completely different from the one described in the other thread, I thought it may be better to start a new one. We are running Ceph Firefly 0.80.8 (soon to be upgraded to 0.80.9). We have 6 OSD hosts
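Before tuning anything, a repeatable baseline helps; rados bench against a test pool gives a rough idea of raw write and read throughput (pool name, duration and thread count are placeholders):

    rados bench -p testbench 60 write -t 32 --no-cleanup
    rados bench -p testbench 60 seq -t 32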

Re: [ceph-users] How to backup hundreds or thousands of TB

2015-05-06 Thread J-P Methot
Case in point, here's a little story as to why backup outside ceph is necessary: I was working on modifying journal locations for a running test ceph cluster when, after bringing back a few OSD nodes, two PGs started being marked as incomplete. That made all operations on the pool hang as,

Re: [ceph-users] Bad performances in recovery

2015-08-19 Thread J-P Methot
op priority = 1 If you are concerned about *recovery performance*, you may want to bump this up, but I doubt it will help much from default settings.. Thanks Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of J-P Methot
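On Hammer the recovery and backfill knobs being discussed can be adjusted at runtime with injectargs, for example (values purely illustrative; lower values throttle recovery in favour of client I/O, higher values do the opposite):

    ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 2 --osd-recovery-op-priority 3'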

[ceph-users] Bad performances in recovery

2015-08-19 Thread J-P Methot
Hi, Our setup currently comprises 5 OSD nodes with 12 OSDs each, for a total of 60 OSDs. All of these are SSDs, with 4 SSD journals on each. The ceph version is hammer v0.94.1. There is a performance overhead because we're using SSDs (I've heard it gets better in infernalis, but we're not

Re: [ceph-users] Bad performances in recovery

2015-08-21 Thread J-P Methot
if scrubbing started in the cluster or not. That may considerably slow down the cluster. -Original Message- From: Somnath Roy Sent: Wednesday, August 19, 2015 1:35 PM To: 'J-P Methot'; ceph-us...@ceph.com Subject: RE: [ceph-users] Bad performances in recovery All the writes will go

Re: [ceph-users] Bad performances in recovery

2015-08-20 Thread J-P Methot
or not. That may considerably slow down the cluster. -Original Message- From: Somnath Roy Sent: Wednesday, August 19, 2015 1:35 PM To: 'J-P Methot'; ceph-us...@ceph.com Subject: RE: [ceph-users] Bad performances in recovery All the writes will go through the journal. It may happen your

[ceph-users] Strange logging behaviour for ceph

2015-09-02 Thread J-P Methot
Hi, We're using Ceph Hammer 0.94.1 on centOS 7. On the monitor, when we set log_to_syslog = true Ceph starts shooting logs at stdout. I thought at first it might be rsyslog that is wrongly configured, but I did not find a rule that could explain this behavior. Can anybody else replicate this? If
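For reference, these are the syslog-related options usually toggled together in ceph.conf; whether they explain the stdout behaviour described here is exactly the open question in this thread:

    [global]
    log to syslog = true
    err to syslog = true
    clog to syslog = true
    mon cluster log to syslog = true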

[ceph-users] Backing up ceph rbd content to an external storage

2015-09-22 Thread J-P Methot
Hi, We've been considering periodically backing up rbds from ceph to a different storage backend, just in case. I've thought of a few ways this could be possible, but I am curious if anybody on this list is currently doing that. Are you currently backing up data that is contained in ceph? What
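One common approach is snapshot-based export: take an RBD snapshot and stream it, or just the delta since the previous snapshot, to the external storage with rbd export / rbd export-diff. A rough sketch, with pool, image, snapshot and host names as placeholders:

    rbd snap create volumes/vm-disk@backup1
    rbd export volumes/vm-disk@backup1 /mnt/backup/vm-disk.img
    rbd export-diff --from-snap backup1 volumes/vm-disk@backup2 - | ssh backuphost 'cat > vm-disk.diff'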

Re: [ceph-users] Ceph RBD bench has a strange behaviour when RBD client caching is active

2016-01-25 Thread J-P Methot
Ceph package is 0.94.5, which is hammer. So yes it could very well be this bug. Must I assume then that it only affects rbd bench and not the general functionality of the client? On 2016-01-25 1:59 PM, Jason Dillaman wrote: > What release are you testing? You might be hitting this issue [1]

[ceph-users] Ceph RBD bench has a strange behaviour when RBD client caching is active

2016-01-25 Thread J-P Methot
Hi, We've run into a weird issue on our current test setup. We're currently testing a small low-cost Ceph setup, with SATA drives, 1Gbps ethernet and an Intel SSD for journaling per host. We've linked this to an openstack setup. Ceph is the latest Hammer release. We notice that when we do rbd
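The benchmark in question is presumably of this form (image name and sizes are placeholders):

    rbd bench-write volumes/test-image --io-size 4096 --io-threads 16

and the client-side caching being toggled is controlled by these ceph.conf options, where "writethrough until flush" keeps the cache in writethrough mode until the guest issues its first flush:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true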