Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Hi, So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2 will test tomorrow. Thanks. Can we pass this to the qemu drive option? Stefan Am 22.05.2012 23:11, schrieb Greg Farnum: On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote: Am
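[With qemu >= 0.15, librbd options can be appended to the -drive file specification. A minimal sketch, assuming a hypothetical image "myimage" in pool "rbd":

    # pool/image names are placeholders; rbd option keys follow the image spec
    qemu -drive file=rbd:rbd/myimage:rbd_cache=true,format=raw,if=virtio
]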

Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Am 23.05.2012 08:30, schrieb Josh Durgin: On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote: Hi, So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2 will test tomorrow. Thanks. Can we pass this to the qemu drive option? Yup, see

Re: how to debug slow rbd block device

2012-05-23 Thread Josh Durgin
On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote: Am 23.05.2012 08:30, schrieb Josh Durgin: On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote: Hi, So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2 will test tomorrow. Thanks.

Re: how to debug slow rbd block device

2012-05-23 Thread Andrey Korolyov
Hi, For Stefan: Increasing socket memory gained me a few percent on fio tests inside the VM (I have measured the 'max-iops-until-ceph-throws-message-about-delayed-write' parameter). What is more important, the osd process should, if possible, be pinned to a dedicated core or two, and all other processes
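[Both knobs are plain sysctl and scheduler-affinity settings; a hedged sketch with arbitrary example values, not Andrey's actual numbers:

    # raise the maximum socket buffer sizes (example values only)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    # pin the osd process to two dedicated cores (0 and 1 here)
    taskset -cp 0,1 $(pidof ceph-osd)
]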

Re: how to debug slow rbd block device

2012-05-23 Thread Josh Durgin
On 05/23/2012 12:22 AM, Stefan Priebe - Profihost AG wrote: Am 23.05.2012 09:19, schrieb Josh Durgin: On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote: You can use any of the rbd-specific options (like rbd_cache_max_dirty) with qemu >= 0.15. You can set them in a global ceph.conf file,
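[In a global ceph.conf that looks roughly like the following; the sizes are illustrative, not recommended values:

    [client]
        rbd cache = true
        rbd cache size = 33554432        ; 32 MB total cache (illustrative)
        rbd cache max dirty = 16777216   ; 16 MB dirty limit (illustrative)
]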

Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Am 23.05.2012 09:22, schrieb Andrey Korolyov: Hi, For Stefan: Increasing socket memory gained me a few percent on fio tests inside the VM (I have measured the 'max-iops-until-ceph-throws-message-about-delayed-write' parameter). What is more important, the osd process, if possible, should be

Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Am 23.05.2012 09:19, schrieb Josh Durgin: On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote: Am 23.05.2012 08:30, schrieb Josh Durgin: On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote: Hi, So try enabling RBD writeback caching — see http://marc.info

Re: how to debug slow rbd block device

2012-05-23 Thread Josh Durgin
On 05/23/2012 01:20 AM, Stefan Priebe - Profihost AG wrote: Am 23.05.2012 09:19, schrieb Josh Durgin: On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote: Am 23.05.2012 08:30, schrieb Josh Durgin: On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote: Hi, So try enabling RBD

Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Am 22.05.2012 23:11, schrieb Greg Farnum: On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote: Am 22.05.2012 22:49, schrieb Greg Farnum: Anyway, it looks like you're just paying a synchronous write penalty. What does that mean exactly? Shouldn't one threaded write to four 260MB/s
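[A synchronous write penalty means each write must be acknowledged before the next is issued, so a single thread is bounded by round-trip latency rather than aggregate bandwidth. A queue-depth-1 fio run, sketched below with a placeholder device path, measures exactly that case:

    # placeholder device; --sync=1 and --iodepth=1 force one outstanding write
    fio --name=syncwrite --filename=/dev/rbd0 --rw=write --bs=4k \
        --direct=1 --sync=1 --iodepth=1 --runtime=60 --time_based
]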

I have some problem to mount ceph file system

2012-05-23 Thread Frank
Hello, I have a question about ceph. When I mount ceph, I run the command as follows: # mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv Now I create a user foo and make a secret key with ceph-authtool like this: # ceph-authtool /etc/ceph/keyring.bin -n client.foo --gen-key
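[For context, the usual flow after --gen-key is to attach capabilities to the key, register it with the monitors, and then mount with that name; a hedged sketch, where the capability strings are illustrative examples:

    # ceph-authtool /etc/ceph/keyring.bin -n client.foo --gen-key
    # ceph-authtool /etc/ceph/keyring.bin -n client.foo --cap mon 'allow r' --cap osd 'allow rw' --cap mds 'allow'
    # ceph auth add client.foo -i /etc/ceph/keyring.bin
    # mount -t ceph -o name=foo,secret=<key> 10.1.0.1:6789:/ /mnt/ceph
]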

Re: RGW, future directions

2012-05-23 Thread Henry C Chang
2012/5/23 Yehuda Sadeh yeh...@inktank.com: RGW is maturing. Besides looking at performance, which ties heavily into RADOS performance, we'd like to hear whether there are certain pain points or future directions that you (you as in the ceph community) would like to see us taking. There are a

Multiple named clusters on same nodes

2012-05-23 Thread Amon Ott
Hello all! We would like to have two independent clusters on the same cluster nodes, specifically one for user home directories (called homeuser) and one for backups (backup). The reason is that in case the homeuser cephfs breaks (as it has done several times in our tests), we still have

Re: how to debug slow rbd block device

2012-05-23 Thread Stefan Priebe - Profihost AG
Am 23.05.2012 10:30, schrieb Stefan Priebe - Profihost AG: Am 22.05.2012 23:11, schrieb Greg Farnum: On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote: Am 22.05.2012 22:49, schrieb Greg Farnum: Anyway, it looks like you're just paying a synchronous write penalty. What does that

Re: RGW, future directions

2012-05-23 Thread Kiran Patil
Hello, How about this CloudFS management system? The CloudFS management system consists of two parts: a very simple web-based management daemon called cloudfsd, and scripts to perform various discrete functions. http://git.fedorahosted.org/git/?p=CloudFS.git;a=summary

Re: RGW, future directions

2012-05-23 Thread Kiran Patil
Sorry, I forgot to send this link. http://git.fedorahosted.org/git/?p=CloudFS.git;a=blob;f=doc/mgmt_manual.md;h=bfcbbe9769f8726ecd1aefcf19e1159074971110;hb=HEAD On Wed, May 23, 2012 at 3:47 PM, Kiran Patil kirantpa...@gmail.com wrote: Hello, How about this CloudFS management system? The

Re: Huge MDS log crashing the cluster

2012-05-23 Thread Madhusudhana U
Tommi Virtanen tv at inktank.com writes: The default logrotate script installed by ceph.deb rotates log files daily and preserves 7 days of logs. If your /var is tiny, or you have heavy debugging turned on, you probably need to rotate more often and retain fewer log files. Or, if you're not
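[Shortening the cycle is a small edit to the packaged logrotate config; a sketch with illustrative values (the packaged file typically also has a postrotate step to make the daemons reopen their logs, omitted here):

    /var/log/ceph/*.log {
        daily
        rotate 3
        compress
        missingok
        notifempty
    }
]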

Re: RGW, future directions

2012-05-23 Thread Xiaopong Tran
I would like to see radosgw supporting: - Object versioning - Bucket policy +1 for these. 7. libradosgw Yes, please :) Cheers, Xiaopong -- To unsubscribe from this list: send the line unsubscribe ceph-devel in the body of a message to majord...@vger.kernel.org More majordomo info at

Re: how to debug slow rbd block device

2012-05-23 Thread Mark Nelson
On 5/23/12 2:22 AM, Andrey Korolyov wrote: Hi, For Stefan: Increasing socket memory gained me a few percent on fio tests inside the VM (I have measured the 'max-iops-until-ceph-throws-message-about-delayed-write' parameter). What is more important, the osd process, if possible, should be pinned to

Re: MDS crash, wont startup again

2012-05-23 Thread Felix Feinhals
Hey, OK, I installed libc-dbg and ran your commands; now this comes up: gdb /usr/bin/ceph-mds core snip GNU gdb (GDB) 7.0.1-debian Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html This is free software: you are free to
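[To pull the actual backtrace out of that core once the debug symbols are in place, something like the following is the usual next step (binary and core paths as in the session above):

    gdb /usr/bin/ceph-mds core
    (gdb) bt
    (gdb) thread apply all bt
]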

Re: Ceph on btrfs 3.4rc

2012-05-23 Thread Josef Bacik
On Wed, May 23, 2012 at 02:34:43PM +0200, Christian Brunner wrote: 2012/5/22 Josef Bacik jo...@redhat.com: Yeah you would also need to change orphan_meta_reserved. I fixed this by just taking the BTRFS_I(inode)->lock when messing with these since we don't want to take up all that

Re: Huge MDS log crashing the cluster

2012-05-23 Thread Sage Weil
On Wed, 23 May 2012, Madhusudhana U wrote: Tommi Virtanen tv at inktank.com writes: The default logrotate script installed by ceph.deb rotates log files daily and preserves 7 days of logs. If your /var is tiny, or you have heavy debugging turned on, you probably need to rotate more often

Re: RGW, future directions

2012-05-23 Thread Guilhem LETTRON
Hi, 2012/5/22 Yehuda Sadeh yeh...@inktank.com: RGW is maturing. Besides looking at performance, which ties heavily into RADOS performance, we'd like to hear whether there are certain pain points or future directions that you (you as in the ceph community) would like to see us taking. There

Re: [PATCH] Update ceph.spec for ceph-0.47

2012-05-23 Thread Sage Weil
On Tue, 22 May 2012, Alexandre Oliva wrote: Add BuildRequires: libxml2-devel. Move BuildRequires: libcurl-devel to a more proper place. Uninstall libs3, incorrectly installed in RPM_BUILD_ROOT. We fixed the Makefile so that libs3 is no longer installed in the first place, took that part out.

Re: Multiple named clusters on same nodes

2012-05-23 Thread Tommi Virtanen
On Wed, May 23, 2012 at 2:00 AM, Amon Ott a@m-privacy.de wrote: So I started experimenting with the new cluster variable, but it does not seem to be well supported so far. mkcephfs does not even know about it and always uses ceph as cluster name. Setting a value for cluster in global
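[Until the cluster name is supported end to end, a hedged workaround is to give each cluster its own conf file with disjoint ports and data paths, and start every daemon with -c; file and instance names here are illustrative:

    # each conf must define its own mon addresses/ports and data directories
    ceph-mon -c /etc/ceph/homeuser.conf -i a
    ceph-mon -c /etc/ceph/backup.conf -i a
]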

Re: Ceph on btrfs 3.4rc

2012-05-23 Thread Martin Mailand
Hi Josef, this patch has been running for 3 hours without a BUG and without the WARNING. I will let it run overnight and report tomorrow. It looks very good ;-) -martin Am 23.05.2012 17:02, schrieb Josef Bacik: Ok give this a shot, it should do it. Thanks, -- To unsubscribe from this list: send

Re: I have some problem to mount ceph file system

2012-05-23 Thread Gregory Farnum
On Wed, May 23, 2012 at 1:51 AM, Frank frankwoo@gmail.com wrote: Hello, I have a question about ceph. When I mount ceph, I run the command as follows: # mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv Now I create a user foo and make a secret key with ceph-authtool

Re: I have some problem to mount ceph file system

2012-05-23 Thread Sage Weil
On Wed, 23 May 2012, Gregory Farnum wrote: On Wed, May 23, 2012 at 1:51 AM, Frank frankwoo@gmail.com wrote: Hello, I have a question about ceph. When I mount ceph, I run the command as follows: # mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv Now I create

Re: RGW, future directions

2012-05-23 Thread Yehuda Sadeh
On Wed, May 23, 2012 at 1:59 AM, Henry C Chang henry.cy.ch...@gmail.com wrote: 2012/5/23 Yehuda Sadeh yeh...@inktank.com: RGW is maturing. Besides looking at performance, which ties heavily into RADOS performance, we'd like to hear whether there are certain pain points or future directions that

Re: Designing a cluster guide

2012-05-23 Thread Gregory Farnum
On Wed, May 23, 2012 at 12:47 PM, Jerker Nyberg jer...@update.uu.se wrote: On Tue, 22 May 2012, Gregory Farnum wrote: Direct users of the RADOS object store (i.e., librados) can do all kinds of things with the integrity guarantee options. But I don't believe there's currently a way to make

Re: RGW, future directions

2012-05-23 Thread Yehuda Sadeh
On Wed, May 23, 2012 at 3:17 AM, Kiran Patil kirantpa...@gmail.com wrote: Hello, How about this CloudFS management system ? The CloudFS management system consists of two parts: a very simple web-based management daemon called cloudfsd, and scripts to perform various discrete functions.

v0.47.2 released

2012-05-23 Thread Sage Weil
This point release fixes a librbd bug and a few small items: * librbd: possible livelock with caching enabled * libs3 compilation/install confusion (when building from source) * ceph.spec updates You can get it from the usual places: * Git at git://github.com/ceph/ceph.git * Tarball at

write-through cache

2012-05-23 Thread Mandell Degerness
I would like to test the effect of using the new write-through cache on RBD volumes mounted to OpenStack VMs. What, precisely, are the changes I need to make to the volume XML in order to do so? -Mandell -- To unsubscribe from this list: send the line unsubscribe ceph-devel in the body of a

Re: write-through cache

2012-05-23 Thread Josh Durgin
On 05/23/2012 04:44 PM, Mandell Degerness wrote: I would like to test the effect of using the new write-through cache on RBD volumes mounted to OpenStack VMs. What, precisely, are the changes I need to make to the volume XML in order to do so? If your only volumes are rbd, you can append
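[A hedged sketch of what the libvirt disk element can look like; pool/image and target names are placeholders, and whether the cache attribute actually reaches librbd depends on the qemu version, which is why appending the rbd option to the source name is shown as the fallback:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writethrough'/>
      <!-- the :rbd_cache suffix is the fallback for older qemu -->
      <source protocol='rbd' name='rbd/volume-0001:rbd_cache=true'/>
      <target dev='vda' bus='virtio'/>
    </disk>
]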

NFS re-exporting CEPH cluster

2012-05-23 Thread Madhusudhana U
Hi all, Has anyone tried re-exporting a Ceph cluster via NFS with success (I mean, mount the Ceph cluster on one machine and then export it via NFS to clients)? I need to do this because of my client kernel version and some EDA tool compatibility issues. Can someone suggest how I can
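[For what it's worth, the basic pattern is a gateway host that mounts cephfs and exports it over knfsd; a hedged sketch where paths and the fsid value are illustrative (an explicit fsid is typically needed because the exported filesystem has no stable device number):

    # on the gateway: /etc/exports
    /mnt/ceph  *(rw,no_subtree_check,fsid=99)

    # reload exports on the gateway, then mount on a client:
    exportfs -ra
    mount -t nfs gateway:/mnt/ceph /mnt/nfs
]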

Re: RGW, future directions

2012-05-23 Thread Wido den Hollander
On 22-05-12 20:07, Yehuda Sadeh wrote: RGW is maturing. Besides looking at performance, which ties heavily into RADOS performance, we'd like to hear whether there are certain pain points or future directions that you (you as in the ceph community) would like to see us taking. There are a few

Re: RGW, future directions

2012-05-23 Thread Henry C Chang
2012/5/24 Yehuda Sadeh yeh...@inktank.com: On Wed, May 23, 2012 at 1:59 AM, Henry C Chang henry.cy.ch...@gmail.com wrote: 2012/5/23 Yehuda Sadeh yeh...@inktank.com: RGW is maturing. Besides looking at performance, which ties heavily into RADOS performance, we'd like to hear whether there are