Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread Somnath Roy
I generally do a 1M sequential write to fill up the device. Block size doesn't matter here, but a bigger block size fills the device faster, which is why people use it.
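
A sketch of that kind of fill pass, using the device names from this thread (the exact command was not shown in the preview; oflag=direct is an assumption so the fill itself bypasses the client page cache):

    # fill both mapped images end to end with 1M sequential writes;
    # dd stops on its own with "No space left on device" once the image is full
    dd if=/dev/zero of=/dev/rbd0 bs=1M oflag=direct
    dd if=/dev/zero of=/dev/rbd1 bs=1M oflag=direct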

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread Udo Lembke
Hi, but I assume you are also measuring cache in this scenario - the OSD nodes have cached the writes in the file buffer (because of this the latency should be very small). Udo
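
If the OSD-side page cache is the concern, one generic Linux way to take it out of the picture (not something suggested in the thread itself) is to drop the caches on every OSD node between the write phase and the read test:

    # run as root on each OSD node after the fill, before starting the fio reads
    sync && echo 3 > /proc/sys/vm/drop_caches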

Re: [ceph-users] 2x replication: A BIG warning

2016-12-11 Thread Wido den Hollander
> On 9 December 2016 at 22:31, Oliver Humpage wrote: >> On 7 Dec 2016, at 15:01, Wido den Hollander wrote: >>> I would always run with min_size = 2 and manually switch to min_size = 1 if the situation really requires it at that moment.
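
For reference, changing min_size on an existing pool is a single command; the pool name "rbd" below is only a placeholder:

    ceph osd pool set rbd min_size 2
    # and only temporarily, if the situation really requires it:
    ceph osd pool set rbd min_size 1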

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread V Plus
Thanks! One more question: what do you mean by "bigger"? Do you mean a bigger block size (say, if I run the read test with bs=4K, do I need to first write the rbd with bs>4K?), or a size that is big enough to cover the area where the test will be executed?

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread Somnath Roy
A block needs to be written before it is read, otherwise you will get funny results. For example, in the case of flash (depending on how the FW is implemented), it will mostly return 0 if a block has not been written. Now, I have seen some flash FW that is really inefficient at manufacturing this data (say, 0) if not

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread V Plus
Hi Udo, I am not sure I understood what you said. Did you mean that the 'dd' writes also got cached on the OSD nodes? Or?

[ceph-users] ceph erasure code profile

2016-12-11 Thread rmichel
Hi! Some questions about EC profiles: is it possible to create a profile that spreads the chunks over 3 racks with 6 hosts inside each rack (k=8, m=3, plugin=jerasure)? If so, how? Thanks! Michel
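
Not an authoritative answer, but a sketch of how this is usually approached: the profile alone cannot place the 11 chunks across only 3 racks with a rack failure domain, so the pool gets a custom CRUSH rule that first picks 3 racks and then 4 hosts in each rack (3 x 4 = 12 >= k+m = 11). The names below are made up, and the failure-domain parameter is spelled ruleset-failure-domain in the releases current at the time (crush-failure-domain in later ones). Note that losing a whole rack loses up to 4 chunks, which is more than m=3, so this spreads chunks across racks but does not survive a full rack failure.

    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 plugin=jerasure ruleset-failure-domain=host

    # custom rule, added to the decompiled CRUSH map and injected back with crushtool / ceph osd setcrushmap
    rule ec_8_3_rack {
        ruleset 1
        type erasure
        min_size 3
        max_size 11
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type rack
        step chooseleaf indep 4 type host
        step emit
    }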

Re: [ceph-users] Sandisk SSDs

2016-12-11 Thread Mike Miller
Hi, some time ago, when starting a Ceph evaluation cluster, I used SSDs with similar specs. I would strongly recommend against it: during normal operation things might be fine, but wait until the first disk fails and things have to be backfilled. If you still try, please let me know how

Re: [ceph-users] rsync kernel client cepfs mkstemp no space left on device

2016-12-11 Thread Mike Miller
Hi, you have given up too early. rsync is not a nice workload for CephFS; in particular, with most Linux kernel clients CephFS will end up caching all inodes/dentries. The result is that MDS servers crash due to memory limitations. And rsync basically scans all inodes/dentries, so it is the
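
On the MDS memory point: if memory serves, the cache limit in the releases current at the time was an inode/dentry count rather than a byte size, so the usual mitigation (assuming the MDS host has RAM to spare) is raising that limit in ceph.conf; the value below is only an example:

    [mds]
        # inode/dentry count, default 100000; later releases replace this with a byte-based limit
        mds cache size = 500000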

[ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread V Plus
Hi guys, we have a Ceph cluster with 6 machines (6 OSDs per host). 1. I created 2 images in Ceph and mapped them to another host A (*outside* the Ceph cluster). On host A, I got */dev/rbd0* and */dev/rbd1*. 2. I started two fio jobs to perform a READ test on rbd0 and rbd1. (fio job descriptions can be
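
The job files themselves are cut off in this preview; a minimal sketch of the kind of job being described, with every parameter an assumption rather than the poster's original:

    ; a.fio - read test against the first image (a matching b.fio would point at /dev/rbd1)
    [global]
    ioengine=libaio
    direct=1
    rw=read
    bs=4k
    runtime=60
    [rbd0-read]
    filename=/dev/rbd0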

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread Somnath Roy
Fill up the image with big writes (say, 1M) first, before reading, and you should see sane throughput. Thanks & Regards Somnath

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread V Plus
Thanks Somnath! As you recommended, I executed:
    dd if=/dev/zero bs=1M count=4096 of=/dev/rbd0
    dd if=/dev/zero bs=1M count=4096 of=/dev/rbd1
Then the output results look more reasonable! Could you tell me why? Btw, the purpose of my run is to test the performance of RBD in Ceph. Does my case

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread JiaJia Zhong
>> 3. After the test, in a.txt we got bw=1162.7MB/s, and in b.txt we got bw=3579.6MB/s. Mostly due to the kernel buffer on your client host.
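
If the client-side page cache is the suspect, it can be flushed on host A between the write and read phases, and the fio jobs can use direct=1 so reads bypass it entirely; these are generic Linux commands, not taken from the thread:

    # on host A, before starting the read jobs
    blockdev --flushbufs /dev/rbd0
    blockdev --flushbufs /dev/rbd1
    sync && echo 3 > /proc/sys/vm/drop_caches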

Re: [ceph-users] Ceph performance is too good (impossible..)...

2016-12-11 Thread V Plus
Thanks. Then how can we avoid this if I want to test Ceph RBD performance? BTW, it seems that is not the case: I followed what Somnath said and got reasonable results. But I am still confused.

Re: [ceph-users] rsync kernel client cepfs mkstemp no space left on device

2016-12-11 Thread John Spray
On Sun, Dec 11, 2016 at 4:38 PM, Mike Miller wrote: > Hi, > > you have given up too early. rsync is not a nice workload for cephfs, in > particular, most linux kernel clients cephfs will end up caching all > inodes/dentries. The result is that mds servers crash due to