Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-29 Thread Kasper Dieter
Hi Sébastien, On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote: Hey all, (...) We have been able to reproduce this on 3 distinct platforms with some deviations (because of the hardware) but the behaviour is the same. Any thoughts will be highly appreciated, only getting 3,2k out

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-23 Thread Kasper Dieter
On Thu, Sep 18, 2014 at 03:36:48PM +0200, Alexandre DERUMIER wrote: Has anyone ever tested multi-volume performance on a *FULL* SSD setup? I know that Stefan Priebe runs full SSD clusters in production and has done benchmarks. (As far as I remember, he has benched around 20k peak with

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-24 Thread Kasper Dieter
On Wed, Sep 24, 2014 at 08:49:21PM +0200, Alexandre DERUMIER wrote: What about writes with Giant? I'm around - 4k iops (4k random) with 1 osd (1 node - 1 osd) - 8k iops (4k random) with 2 osd (1 node - 2 osd) - 16k iops (4k random) with 4 osd (2 nodes - 2 osd per node) - 22k iops (4k
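For reference, numbers like the ones above are typically measured with fio; a minimal sketch of such a 4k random-write job against an RBD image (pool name, image name, queue depth and runtime are placeholders, and fio must be built with the rbd ioengine):

  # 4k random writes against an RBD image via fio's rbd engine
  fio --name=rbd-4k-randwrite \
      --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --direct=1 --runtime=60 --time_based --group_reporting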

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Kasper Dieter
On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote: On 09/29/2014 03:58 AM, Dan Van Der Ster wrote: Hi Emmanuel, This is interesting, because we've had sales guys telling us that those Samsung drives are definitely the best for a Ceph journal O_o ! Our sales guys or Samsung

[ceph-users] cephfs set_layout / setfattr ... does not work anymore for pools

2014-08-18 Thread Kasper Dieter
Hi Sage, a couple of months ago (maybe last year) I was able to change the assignment of directories and files of CephFS to different pools back and forth (with cephfs set_layout as well as with setfattr). Now (with ceph v0.81 and kernel 3.10 on the client side) neither 'cephfs set_layout' nor

Re: [ceph-users] setfattr ... does not work anymore for pools

2014-08-18 Thread Kasper Dieter
' commands) and it is the same with both ceph-fuse and the kernel client. sage On Mon, 18 Aug 2014, Kasper Dieter wrote: Hi Sage, a couple of months ago (maybe last year) I was able to change the assignment of directories and files of CephFS to different pools back and forth

Re: [ceph-users] setfattr ... works after 'ceph mds add_data_pool'

2014-08-18 Thread Kasper Dieter
Hi Sage, it seems the pools must be added to the MDS first: ceph mds add_data_pool 3 # = SSD-r2, ceph mds add_data_pool 4 # = SAS-r2. After these commands the setfattr -n ceph.dir.layout.pool worked. Thanks, -Dieter On Mon, Aug 18, 2014 at 10:19:08PM +0200, Kasper Dieter wrote
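For readers hitting the same problem, the sequence looks roughly as follows (pool IDs and names match the ones quoted above; the mount point and directory are examples):

  # 1) make the additional data pools known to the MDS
  ceph mds add_data_pool 3   # SSD-r2
  ceph mds add_data_pool 4   # SAS-r2
  # 2) only then can a directory layout point at them
  setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-data
  getfattr -n ceph.dir.layout.pool /mnt/cephfs/ssd-data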

[ceph-users] snapshots on CephFS

2013-10-16 Thread Kasper Dieter
Hi Greg, on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/1705 I found a statement from you regarding snapshots on cephfs: ---snip--- Filesystem snapshots exist and you can experiment with them on CephFS (there's a hidden .snaps folder; you can create or remove snapshots by
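The mechanism being referred to is directory-level snapshots; a small sketch, assuming the hidden directory is spelled .snap (as in current CephFS) and an example mount point:

  # create a snapshot of a CephFS directory by creating a subdirectory in .snap
  mkdir /mnt/cephfs/mydata/.snap/before-upgrade
  # list and remove snapshots the same way
  ls /mnt/cephfs/mydata/.snap
  rmdir /mnt/cephfs/mydata/.snap/before-upgrade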

Re: [ceph-users] Number of threads for osd processes

2013-11-27 Thread Kasper Dieter
On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote: On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson mark.nel...@inktank.com wrote: On 11/27/2013 09:25 AM, Gregory Farnum wrote: On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer jens-christian.fisc...@switch.ch wrote: The
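For anyone wanting to reproduce the thread counts discussed here, a quick sketch (OSD id 0 is an example):

  # threads of a single ceph-osd process
  ls /proc/$(pgrep -f 'ceph-osd -i 0' | head -1)/task | wc -l
  # total number of ceph-osd threads on the node
  ps -eLf | grep '[c]eph-osd' | wc -l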

[ceph-users] rbd create ... STRIPINGV2 and format 2 or later required

2014-03-11 Thread Kasper Dieter
When using rbd create ... --image-format 2, in some cases this command is rejected with EINVAL and the message librbd: STRIPINGV2 and format 2 or later required for non-default striping. But in v0.61.9 STRIPINGV2 and format 2 should be supported: [root@rx37-3 ~]# rbd create --pool SSD-r2 --size 20480
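For context, the error only appears when non-default striping parameters are passed; a hedged reproduction sketch (pool, size and striping values are illustrative):

  # default striping: accepted with --image-format 2
  rbd create --pool SSD-r2 --size 20480 --image-format 2 img-default
  # non-default striping: requires STRIPINGV2 support, otherwise
  # librbd rejects the command with EINVAL
  rbd create --pool SSD-r2 --size 20480 --image-format 2 \
      --stripe-unit 65536 --stripe-count 16 img-striped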

Re: [ceph-users] rbd create ... STRIPINGV2 and format 2 or later required

2014-03-11 Thread Kasper Dieter
On Tue, Mar 11, 2014 at 7:16 PM, Kasper Dieter dieter.kas...@ts.fujitsu.com wrote

Re: [ceph-users] rbd create ... STRIPINGV2 and format 2 or later required

2014-03-11 Thread Kasper Dieter
images on RADOS as block storage. On Tue, Mar 11, 2014 at 7:37 PM, Kasper Dieter dieter.kas

[ceph-users] rbd format 2 stripe-count != 1 cannot be mapped with rbd.ko kernel 3.13.5

2014-03-12 Thread Kasper Dieter
of --stripe-unit to 8192 for example, or increase the order so that it is bigger than your stripe unit and contains a multiple of stripe units (e.g. 21), and it will work without any problem. JC On Mar 11, 2014, at 07:22, Kasper Dieter dieter.kas...@ts.fujitsu.com wrote: So
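The constraint JC describes is that the object size (2^order bytes) has to be at least as large as the stripe unit and hold a whole number of stripe units; a sketch of the two workarounds (all values illustrative):

  # workaround 1: keep the default order (22, i.e. 4 MiB objects) and use a small stripe unit
  rbd create --pool SSD-r2 --size 20480 --image-format 2 \
      --stripe-unit 8192 --stripe-count 4 img-a
  # workaround 2: raise the order so the object spans a whole multiple of the stripe unit
  rbd create --pool SSD-r2 --size 20480 --image-format 2 \
      --order 23 --stripe-unit 1048576 --stripe-count 4 img-b
  # note: as per the thread subject, rbd.ko in kernel 3.13 still cannot map
  # images with stripe-count != 1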

Re: [ceph-users] rbd format 2 stripe-count != 1 cannot be mapped with rbd.ko kernel 3.13.5

2014-03-12 Thread Kasper Dieter
Please see this Email on ceph-devel ---snip--- Date: Thu, 15 Aug 2013 14:30:24 +0200 From: Damien Churchill dam...@gmail.com To: Kasper, Dieter dieter.kas...@ts.fujitsu.com CC: ceph-de...@vger.kernel.org ceph-de...@vger.kernel.org Subject: Re: rbd: format 2 support in rbd.ko ? On 15 August 2013

Re: [ceph-users] OSD down after PG increase

2014-03-13 Thread Kasper Dieter
We have observed a very similar behavior. In a 140 OSD cluster (newly created and idle) ~8000 PGs are available. After adding two new pools (each with 2 PGs), 100 out of 140 OSDs go down and out. The cluster never recovers. This problem can be reproduced every time with v0.67 and v0.72.
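For anyone trying to reproduce this, pools are created with an explicit PG count; a sketch (pool names and PG counts are placeholders, not the exact values from the report):

  # add two pools with an explicit pg_num / pgp_num
  ceph osd pool create testpool-1 2048 2048
  ceph osd pool create testpool-2 2048 2048
  # then watch whether OSDs start flapping or get marked down/out
  ceph -w
  ceph osd tree | grep down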

Re: [ceph-users] OSD down after PG increase

2014-03-13 Thread Kasper Dieter
On Thu, Mar 13, 2014 at 11:16:45AM +0100, Gandalf Corvotempesta wrote: 2014-03-13 10:53 GMT+01:00 Kasper Dieter dieter.kas...@ts.fujitsu.com: After adding two new pools (each with 2 PGs) 100 out of 140 OSDs are going down + out. The cluster never recovers. In my case, cluster

Re: [ceph-users] 0.61 Cuttlefish / ceph-deploy missing

2013-05-15 Thread Kasper Dieter
Hi Sage, I'm a little bit confused about 'ceph-deploy' in 0.61: the 0.61 release note says ceph-deploy: our new deployment tool to replace 'mkcephfs', while http://ceph.com/docs/master/rados/deployment/mkcephfs/ says To deploy a test or development cluster, you can use the mkcephfs tool.
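For orientation, the ceph-deploy flow that replaced mkcephfs looked roughly like the following at the time (hostnames and the disk device are placeholders; exact sub-commands varied between ceph-deploy releases):

  ceph-deploy new mon1
  ceph-deploy install mon1 osd1 osd2
  ceph-deploy mon create mon1
  ceph-deploy gatherkeys mon1
  ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb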

Re: [ceph-users] Any concern about Ceph on CentOS

2013-07-17 Thread Kasper Dieter
] On Behalf Of Kasper Dieter Sent: Wednesday, July 17, 2013 2:17 PM To: Chen, Xiaoxi Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com Subject: Re: Any concern about Ceph on CentOS Hi Xiaoxi, we are really running Ceph on CentOS-6.4 (6 server nodes, 3 client nodes, 160 OSDs). We put

[ceph-users] subscribe

2013-07-17 Thread Kasper Dieter
subscribe Thanks, Dieter

Re: [ceph-users] ceph rbd io tracking (rbdtop?)

2013-08-12 Thread Kasper Dieter
On Mon, Aug 12, 2013 at 03:19:04PM +0200, Jeff Moskow wrote: Hi, The activity on our ceph cluster has gone up a lot. We are using exclusively RBD storage right now. Is there a tool/technique that could be used to find out which rbd images are receiving the most activity (something like
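There was no rbd top at the time, but the per-daemon admin socket gives at least a rough picture of where the load lands; a sketch (default socket path, OSD id 0 as an example):

  # per-OSD performance counters
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
  # in-flight operations; the object names (rb.0.* / rbd_data.*) can be
  # mapped back to an image via its block_name_prefix (rbd info <image>)
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight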

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-28 Thread Kasper Dieter
On Wed, Aug 28, 2013 at 04:24:59PM +0200, Gandalf Corvotempesta wrote: 2013/6/20 Matthew Anderson manderson8...@gmail.com: Hi All, I've had a few conversations on IRC about getting RDMA support into Ceph and thought I would give it a quick attempt to hopefully spur some interest. What I
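One commonly cited way to try rsockets without touching the Ceph code is to preload the rsocket shim from librdmacm; a sketch, assuming the library lives at the path shown (it is distribution dependent):

  # run a daemon with TCP sockets transparently replaced by rsockets
  LD_PRELOAD=/usr/lib64/rsocket/librspreload.so ceph-osd -i 0 -f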

[ceph-users] from whom and when will rbd_cache* be read

2013-09-01 Thread Kasper Dieter
Hi, under http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/ I found a good description of the RBD cache parameters. But I am missing information on by whom these parameters are evaluated and when this happens. My assumption: the rbd_cache* parameters will be read by MONs
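For what it's worth, the rbd_cache* options are client-side librbd settings: they are evaluated in the process that opens the image (qemu, fio, etc.) at image open time, not by the MONs or OSDs. A minimal ceph.conf sketch (whether to enable the cache is your choice; the sizes shown match the documented defaults):

  [client]
      rbd cache = true
      rbd cache size = 33554432          # 32 MiB
      rbd cache max dirty = 25165824     # 24 MiB
      rbd cache target dirty = 16777216  # 16 MiB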