Re: [ceph-users] very different performance on two volumes in the same pool #2

2015-05-10 Thread Somnath Roy
Two things.. 1. You should always precondition SSD drives before benchmarking them. 2. After creating and mapping an rbd LUN, you need to write data first and read it afterward; otherwise the fio output will be misleading. In fact, I think you will see the IO is not even hitting the cluster (check with
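A minimal sketch of that write-then-read workflow, assuming a hypothetical mapped kernel rbd device at /dev/rbd0 (device name and parameters are placeholders, not from the thread):

    # Precondition: fill the volume once with large sequential writes.
    fio --name=precondition --filename=/dev/rbd0 --rw=write --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=16
    # Only then measure random reads; reads of never-written extents
    # may be answered without ever touching the OSDs.
    fio --name=randread --filename=/dev/rbd0 --rw=randread --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based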

Re: [ceph-users] very different performance on two volumes in the same pool #2

2015-05-10 Thread Nikola Ciprich
On Mon, May 11, 2015 at 05:20:25AM +0000, Somnath Roy wrote: Two things.. 1. You should always precondition SSD drives before benchmarking them. well, I don't really understand... ? 2. After creating and mapping an rbd LUN, you need to write data first and read it afterward

[ceph-users] civetweb lockups

2015-05-10 Thread Daniel Hoffman
Hi All. We have a weird issue where civetweb just locks up: it fails to respond to HTTP, and a restart resolves the problem. This happens anywhere from every 60 seconds to every 4 hours, with no apparent pattern. We have run the gateway in full debug mode and there is nothing there that seems
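For reference, a hedged ceph.conf fragment for running civetweb with verbose RGW logging and its own log files while chasing such lockups (the section name and paths are hypothetical; adjust to the deployment):

    [client.radosgw.gateway]
    rgw frontends = civetweb port=7480 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log
    # verbose logging while debugging; lower these afterwards
    debug rgw = 20
    debug civetweb = 20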

[ceph-users] very different performance on two volumes in the same pool #2

2015-05-10 Thread Nikola Ciprich
Hello ceph developers and users, some time ago I posted here a question regarding very different performance for two volumes in one pool (backed by SSD drives). After some examination, I have probably got to the root of the problem. When I create a fresh volume (i.e. rbd create --image-format 2 --size
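A short sketch of how a fresh format-2 volume can be prefilled so that every backing RADOS object exists before benchmarking (pool and image names are hypothetical):

    rbd create --image-format 2 --size 10240 ssdpool/vol1
    rbd map ssdpool/vol1
    # fill the image end to end; a freshly created image is thin-provisioned,
    # so its backing objects are only allocated on first write
    dd if=/dev/zero of=/dev/rbd/ssdpool/vol1 bs=1M oflag=direct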

Re: [ceph-users] A pesky unfound object

2015-05-10 Thread Eino Tuominen
Hello again, Just an update on this; I restarted all the acting osd daemons, and the unfound message is now gone. There must have been some sort of bookkeeping error which got fixed by the daemon restart. -Original Message- From: Eino Tuominen Sent: 4 May 2015 13:27 To: Eino
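The usual commands for inspecting such a state before and after the restarts, as a hedged sketch (the PG id 2.5f is a placeholder; take the real one from ceph health detail):

    ceph health detail
    ceph pg 2.5f list_unfound
    ceph pg 2.5f query
    # restart each acting OSD; the exact service command varies by distro
    service ceph restart osd.3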

[ceph-users] about rgw region sync

2015-05-10 Thread TERRY
I built two ceph clusters. For the first cluster, I do the following steps: 1) create pools: sudo ceph osd pool create .us-east.rgw.root 64 64 sudo ceph osd pool create .us-east.rgw.control 64 64 sudo ceph osd pool create .us-east.rgw.gc 64 64 sudo ceph osd pool create .us-east.rgw.buckets 64 64 sudo
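After the pools, a federated setup of this era typically continues with region and zone configuration; a hedged sketch with hypothetical file and instance names (not taken from the thread):

    radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
    radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
    radosgw-admin regionmap update --name client.radosgw.us-east-1
    radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1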

Re: [ceph-users] Shadow Files

2015-05-10 Thread Daniel Hoffman
Any updates on when this is going to be released? Daniel On Wed, May 6, 2015 at 3:51 AM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote: Yes, so it seems. The librados::nobjects_begin() call expects at least a Hammer (0.94) backend. Probably need to add a try/catch there to catch this issue,
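Since librados::nobjects_begin() needs Hammer-or-newer OSDs, a quick way to confirm the whole cluster is ready for the fixed tool (a hedged aside, not from the thread):

    # every OSD should report a version >= 0.94
    ceph tell osd.* version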

Re: [ceph-users] Crush rule freeze cluster

2015-05-10 Thread Timofey Titovets
Georgios, oh, sorry for my poor English, maybe I expressed poorly what I want =] I know how to write a simple CRUSH rule and how to use it. I want several things: 1. To understand why, after injecting a bad map, my test node went offline. This is unexpected. 2. Maybe somebody can explain what and
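One way to avoid the bad-map situation altogether is to test the edited map offline before injecting it; a hedged sketch using the standard tooling (rule number and replica count are placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt   # decompile, then edit the text
    crushtool -c crushmap.txt -o crushmap.new
    # dry-run the mapping before touching the cluster
    crushtool -i crushmap.new --test --rule 0 --num-rep 2 --show-statistics
    ceph osd setcrushmap -i crushmap.new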

Re: [ceph-users] osd does not start when object store is set to newstore

2015-05-10 Thread Srikanth Madugundi
Hi, Thanks a lot Somnath for the help. I tried changing ./autogen.sh to ./do_autogen.sh -r but I see this error during the build. I tried searching: CC libosd_tp_la-osd.lo CC libosd_tp_la-pg.lo CC librbd_tp_la-librbd.lo CC librados_tp_la-librados.lo CC
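For context, a hedged sketch of the rebuild sequence under discussion (the -r flag is taken from the thread; everything else is a guess for a Ceph source tree of that era):

    git clean -fdx       # start from a pristine checkout
    ./do_autogen.sh -r   # flag suggested upthread for a newstore-capable build
    make -j4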

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-10 Thread Ultral
Hello Jason, but to me it sounds like you are saying that there are no/minimal deltas between snapshots move2db24-20150428 and 2015-05-05 (both from the export-diff and from your clone). Yep, that's correct. The difference between snapshots move2db24-20150428 and 2015-05-05 is too small: 4 KB instead of
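For reference, the kind of command that produces that delta, as a hedged reconstruction (the pool/image spec is hypothetical; the snapshot names are from the thread):

    rbd export-diff --from-snap move2db24-20150428 rbd/image@2015-05-05 delta.diff
    ls -lh delta.diff   # ~4 KB here, versus the expected 200-600 GB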

Re: [ceph-users] Crush rule freeze cluster

2015-05-10 Thread Georgios Dimitrakakis
Timofey, maybe your best chance is to connect directly to the server and see what is going on. Then you can try to debug why the problem occurred. If you don't want to wait until tomorrow, you may try to see what is happening via the server's direct remote console access. The majority of the