optimize librbd for iops

2012-11-12 Thread Stefan Priebe - Profihost AG
Hello list, are there any plans to optimize librbd for IOPS? Right now I'm able to get 50,000 IOPS via iSCSI and 100,000 IOPS using multipathing with iSCSI. With librbd I'm stuck at around 18,000 IOPS. Since this scales with more hosts but not with more disks in a VM, it must be limited by rbd

Re: Disabling journal

2012-11-12 Thread Sage Weil
On Sun, 11 Nov 2012, Stefan Priebe wrote: Hi Sage, With btrfs, yes, although this isn't something we have tested in a while. I'm not using btrfs as long as the devs claim it is not ready for prod. In that case, the journal is needed for consistency of the fs; we rely on writeahead
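
For context, the journal mode is chosen per OSD in ceph.conf. A minimal sketch of the two relevant options from this era (illustrative only; check the option names against the release in use):

    [osd]
    ; non-btrfs filestores (xfs, ext4) need writeahead journaling for consistency
    filestore journal writeahead = true
    ; btrfs can journal in parallel instead, but the journal itself stays
    ;filestore journal parallel = true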

Re: Disabling journal

2012-11-12 Thread Stefan Priebe - Profihost AG
On 12.11.2012 15:42, Sage Weil wrote: On Sun, 11 Nov 2012, Stefan Priebe wrote: Hi Sage, With btrfs, yes, although this isn't something we have tested in a while. I'm not using btrfs as long as the devs claim it is not ready for prod. In that case, the journal is needed for consistency

Re: Disabling journal

2012-11-12 Thread Sage Weil
On Mon, 12 Nov 2012, Stefan Priebe - Profihost AG wrote: On 12.11.2012 15:42, Sage Weil wrote: On Sun, 11 Nov 2012, Stefan Priebe wrote: Hi Sage, With btrfs, yes, although this isn't something we have tested in a while. I'm not using btrfs as long as the devs claim it is not

ceph cluster hangs when rebooting one node

2012-11-12 Thread Stefan Priebe - Profihost AG
Hello list, I was checking what happens if I reboot a ceph node. Sadly, if I reboot one node, the whole ceph cluster hangs and no I/O is possible. ceph -w looks like this: 2012-11-12 16:03:58.191106 mon.0 [INF] pgmap v19013: 7032 pgs: 7032 active+clean; 91615 MB data, 174 GB used, 4294 GB /

Re: [BUG] ceph-mon crashes

2012-11-12 Thread Stefan Priebe - Profihost AG
On 12.11.2012 15:58, Joao Eduardo Luis wrote: Hi Stefan, Any chance you can get me a larger chunk of the log from the monitor that was the leader by the time you issued those commands until the point the monitor crashed (from the excerpt you provided, that should be mon.b)? Sure:

Re: ceph cluster hangs when rebooting one node

2012-11-12 Thread Sage Weil
On Mon, 12 Nov 2012, Stefan Priebe - Profihost AG wrote: Hello list, I was checking what happens if I reboot a ceph node. Sadly, if I reboot one node, the whole ceph cluster hangs and no I/O is possible. If you are using the current master, the new 'min_size' may be biting you; ceph osd
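
The suggestion is cut off at "ceph osd"; presumably it refers to lowering the pool's min_size so writes can proceed while a replica is down. A hedged sketch of that kind of command (pool name and value are placeholders, not taken from the thread):

    # allow I/O with only one replica available on pool 'rbd' (placeholder values)
    ceph osd pool set rbd min_size 1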

Re: ceph cluster hangs when rebooting one node

2012-11-12 Thread Stefan Priebe - Profihost AG
On 12.11.2012 16:11, Sage Weil wrote: On Mon, 12 Nov 2012, Stefan Priebe - Profihost AG wrote: Hello list, I was checking what happens if I reboot a ceph node. Sadly, if I reboot one node, the whole ceph cluster hangs and no I/O is possible. If you are using the current master, the new

Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Stefan Priebe - Profihost AG
Adding this to ceph.conf on the KVM host adds another 2,000 IOPS (20,000 IOPS with one VM). I'm sure most of them are useless on a client KVM/rbd host, but I don't know which ones make sense ;-) [global] debug ms = 0/0 debug rbd = 0/0 debug lockdep = 0/0 debug
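
For reference, a fuller sketch of the kind of client-side [global] stanza being discussed; which subsystems are actually worth zeroing on a KVM/rbd client is exactly the open question in this thread, so treat the list below as illustrative rather than a confirmed answer:

    [global]
    debug ms = 0/0
    debug rbd = 0/0
    debug rados = 0/0
    debug objecter = 0/0
    debug auth = 0/0
    debug lockdep = 0/0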

improve speed with auth supported=none

2012-11-12 Thread Stefan Priebe - Profihost AG
Hello list, I'm still trying to improve ceph speed. Disabling logging on the host and the rbd client gives me an additional 5,000 IOPS, which is great. But I also wanted to try disabling authentication using: auth supported=none. How does this work? Do I just have to place this line under the global section
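
For what it's worth, the usual pattern with the pre-split "auth supported" option is to put it in the [global] section of ceph.conf on every node (monitors, OSDs, and clients) and restart the daemons, since all sides must agree on the setting; a sketch:

    [global]
    ; disable cephx cluster-wide; daemons and clients must all use the same value
    auth supported = none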

changed rbd cp behavior in 0.53

2012-11-12 Thread Andrey Korolyov
Hi, for this version, rbd cp assumes that the destination pool is the same as the source, not 'rbd', if the pool in the destination path is omitted: 'rbd cp install/img testimg' followed by 'rbd ls install' now lists both img and testimg. Is this change permanent? Thanks!
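
Either way, naming the destination pool explicitly sidesteps the ambiguity; for example (image and pool names taken from the example above):

    # copy into the default 'rbd' pool explicitly instead of relying on a default
    rbd cp install/img rbd/testimg
    rbd ls rbd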

Re: [BUG] ceph-mon crashes

2012-11-12 Thread Joao Eduardo Luis
On 11/12/2012 03:10 PM, Stefan Priebe - Profihost AG wrote: On 12.11.2012 15:58, Joao Eduardo Luis wrote: Hi Stefan, Any chance you can get me a larger chunk of the log from the monitor that was the leader by the time you issued those commands until the point the monitor crashed (from the

pull request: ceph-qa-suite branch wip-java

2012-11-12 Thread Joe Buck
This patch adds a yaml file to add the libcephfs-java tests to the nightly qa test set. Best, -Joe Buck

Re: improve speed with auth supported=none

2012-11-12 Thread Sébastien Han
I guess you can refer to that link on the list: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/9776. By the way, do you get the 5,000 IOPS on the rbd kernel client or on a VM disk? Cheers. On Mon, Nov 12, 2012 at 4:37 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Hello list, I'm

Re: [BUG] ceph-mon crashes

2012-11-12 Thread Stefan Priebe
Hi Joao, On 12.11.2012 18:05, Joao Eduardo Luis wrote: Can you please confirm that sometime between you issuing the out command and mon.b failing, you had yet another monitor (maybe mon.a) that was the leader but for some reason it was down by the time that mon.b failed? If so, could you

Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Josh Durgin
On 11/12/2012 07:33 AM, Stefan Priebe - Profihost AG wrote: Adding this to ceph.conf on the KVM host adds another 2,000 IOPS (20,000 IOPS with one VM). I'm sure most of them are useless on a client KVM/rbd host, but I don't know which ones make sense ;-) [global] debug ms = 0/0

Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Stefan Priebe
Hi Josh, For the client side you'd use these settings to disable all debug logging: ... Thanks! Stefan

Re: [BUG] ceph-mon crashes

2012-11-12 Thread Joao Eduardo Luis
On 11/12/2012 06:30 PM, Stefan Priebe wrote: Hi Joao, On 12.11.2012 18:05, Joao Eduardo Luis wrote: Can you please confirm that sometime between you issuing the out command and mon.b failing, you had yet another monitor (maybe mon.a) that was the leader but for some reason it was down

Re: Build regressions/improvements in v3.7-rc5

2012-11-12 Thread Geert Uytterhoeven
On Mon, Nov 12, 2012 at 9:58 PM, Geert Uytterhoeven ge...@linux-m68k.org wrote: JFYI, when comparing v3.7-rc5 to v3.7-rc4[3], the summaries are: - build errors: +14/-4 14 regressions: + drivers/virt/fsl_hypervisor.c: error: 'MSR_GS' undeclared (first use in this function): = 799:93 +

Re: [BUG] ceph-mon crashes

2012-11-12 Thread Stefan Priebe
Thanks, I'm subscribed to the tracker now. Stefan. On 12.11.2012 21:40, Joao Eduardo Luis wrote: On 11/12/2012 06:30 PM, Stefan Priebe wrote: Hi Joao, On 12.11.2012 18:05, Joao Eduardo Luis wrote: Can you please confirm that sometime between you issuing the out command and mon.b

Re: improve speed with auth supported=none

2012-11-12 Thread Stefan Priebe
Thanks, this gives another boost in IOPS. I'm now at 23,000 IOPS ;-) So for random 4k IOPS, ceph auth and especially the logging add a lot of overhead. Greets, Stefan. On 12.11.2012 19:26, Sébastien Han wrote: I guess you can refer to that link on the list:

Re: rbd map command hangs for 15 minutes during system start up

2012-11-12 Thread Nick Bartos
After removing 8-libceph-protect-ceph_con_open-with-mutex.patch, it seems we no longer have this hang. On Thu, Nov 8, 2012 at 5:43 PM, Josh Durgin josh.dur...@inktank.com wrote: On 11/08/2012 02:10 PM, Mandell Degerness wrote: We are seeing a somewhat random, but frequent hang on our systems

Re: changed rbd cp behavior in 0.53

2012-11-12 Thread Josh Durgin
On 11/12/2012 08:30 AM, Andrey Korolyov wrote: Hi, for this version, rbd cp assumes that the destination pool is the same as the source, not 'rbd', if the pool in the destination path is omitted: 'rbd cp install/img testimg' followed by 'rbd ls install' now lists both img and testimg. Is this change permanent? Thanks! This is a

Re: rbd map command hangs for 15 minutes during system start up

2012-11-12 Thread Sage Weil
On Mon, 12 Nov 2012, Nick Bartos wrote: After removing 8-libceph-protect-ceph_con_open-with-mutex.patch, it seems we no longer have this hang. Hmm, that's a bit disconcerting. Did this series come from our old 3.5 stable series? I recently prepared a new one that backports *all* of the

ceph osd crush set command under 0.53

2012-11-12 Thread Mandell Degerness
Did the syntax and behavior of the ceph osd crush set ... command change between 0.48 and 0.53? When trying out ceph 0.53, I get the following in my log when trying to add the first OSD to a new cluster (similar behavior for osds 2 and 3). It appears that the ceph osd crush command fails, but
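
For comparison, the 0.48 (argonaut) documentation gave the command in roughly this shape; whether 0.53 still accepts the same arguments is exactly what is in question here, so the id, names, and weight below are placeholders and the usage output of the installed version should be trusted over this sketch:

    # 0.48-style invocation: id, name, weight, then CRUSH location key=value pairs
    ceph osd crush set 0 osd.0 1.0 pool=default rack=unknownrack host=node1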

Re: ceph osd crush set command under 0.53

2012-11-12 Thread Sage Weil
On Mon, 12 Nov 2012, Mandell Degerness wrote: Did the syntax and behavior of the ceph osd crush set ... command change between 0.48 and 0.53? When trying out ceph 0.53, I get the following in my log when trying to add the first OSD to a new cluster (similar behavior for osds 2 and 3). It

[Help] Use Ceph RBD as primary storage in CloudStack 4.0

2012-11-12 Thread Alex Jiang
Hi all, has somebody used Ceph RBD in CloudStack as primary storage? I see that among the new features of CS 4.0, RBD is supported for KVM. So I tried using RBD as primary storage but ran into some problems. I use a CentOS 6.3 server as the host. First I erased the stock qemu-kvm (0.12.1) and libvirt (0.9.10)
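
The first thing worth checking after replacing those packages is whether the new qemu and libvirt builds actually include RBD support, since CloudStack 4.0's KVM/RBD integration depends on both; a quick hedged check (binary names assumed to be on the PATH):

    # qemu must list 'rbd' among its supported image formats
    qemu-img --help | grep rbd
    # libvirt must be new enough to provide the 'rbd' storage pool type
    # (0.9.10 predates that support, which is presumably why it was removed)
    virsh --version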

Re: improve speed with auth supported=none

2012-11-12 Thread Josh Durgin
On 11/12/2012 01:57 PM, Stefan Priebe wrote: Thanks, this gives another boost in IOPS. I'm now at 23,000 IOPS ;-) So for random 4k IOPS, ceph auth and especially the logging add a lot of overhead. How much difference did disabling auth make vs only disabling logging? Josh Greets, Stefan Am

Re: optimize librbd for iops

2012-11-12 Thread Josh Durgin
On 11/12/2012 05:50 AM, Stefan Priebe - Profihost AG wrote: Hello list, are there any plans to optimize librbd for IOPS? Right now I'm able to get 50,000 IOPS via iSCSI and 100,000 IOPS using multipathing with iSCSI. With librbd I'm stuck at around 18,000 IOPS. Since this scales with more hosts

Re: improve speed with auth supported=none

2012-11-12 Thread Stefan Priebe
On 13.11.2012 08:42, Josh Durgin wrote: On 11/12/2012 01:57 PM, Stefan Priebe wrote: Thanks, this gives another boost in IOPS. I'm now at 23,000 IOPS ;-) So for random 4k IOPS, ceph auth and especially the logging add a lot of overhead. How much difference did disabling auth make vs only

Re: optimize librbd for iops

2012-11-12 Thread Stefan Priebe
On 13.11.2012 08:51, Josh Durgin wrote: On 11/12/2012 05:50 AM, Stefan Priebe - Profihost AG wrote: Hello list, are there any plans to optimize librbd for IOPS? Right now I'm able to get 50,000 IOPS via iSCSI and 100,000 IOPS using multipathing with iSCSI. With librbd I'm stuck at around