[ceph-users] Reply: Re: RBD read-ahead didn't improve 4K read performance

2014-11-21 Thread duan . xufeng
Hi, I test in VM with fio, here is the config: [global] direct=1 ioengine=aio iodepth=1 [sequence read 4K] rw=read bs=4K size=1024m directory=/mnt filename=test sequence read 4K: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1 fio-2.1.3 Starting 1 process sequence read 4K:
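The job file quoted inline above, reflowed into fio's usual ini layout (the preview shows both "aio" and "libaio"; fio's engine name is libaio, as its own output line confirms):

```ini
[global]
direct=1
ioengine=libaio
iodepth=1

[sequence read 4K]
rw=read
bs=4K
size=1024m
directory=/mnt
filename=test
```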

[ceph-users] Ceph inconsistency after deep-scrub

2014-11-21 Thread Paweł Sadowski
Hi, During deep-scrub Ceph discovered some inconsistency between OSDs on my cluster (size 3, min size 2). I have found the broken object and calculated the md5sum of it on each OSD (osd.195 is acting_primary): osd.195 - md5sum_ osd.40 - md5sum_ osd.314 - md5sum_ I ran ceph pg repair and
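Before running `ceph pg repair` it is worth knowing which copy is the outlier, because on replicated pools of that era repair tended to copy from the acting primary regardless of which replica was actually corrupt. A minimal sketch of the majority check (the checksum values are placeholders, not the real md5sums from the post):

```shell
#!/bin/sh
# Given one checksum per replica, report which replica disagrees with the
# majority - the same comparison done by hand on osd.195, osd.40 and osd.314.
odd_one_out() {
    # $1 $2 $3 = checksums of the primary and the two other replicas
    if [ "$1" = "$2" ] && [ "$2" = "$3" ]; then
        echo "consistent"
    elif [ "$1" = "$2" ]; then
        echo "replica 3 differs"
    elif [ "$1" = "$3" ]; then
        echo "replica 2 differs"
    elif [ "$2" = "$3" ]; then
        echo "replica 1 (primary) differs"
    else
        echo "no majority"
    fi
}

odd_one_out aaa aaa bbb   # prints: replica 3 differs
```

If the primary is the odd one out, blindly repairing risks propagating the bad copy; manually restoring the object on the primary first is the safer path.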

[ceph-users] ceph-announce list

2014-11-21 Thread JuanFra Rodriguez Cardoso
Hi all: As was asked a few weeks ago, what is the way the ceph community stays tuned on new features and bug fixes? Thanks! Best, --- JuanFra Rodriguez Cardoso es.linkedin.com/in/jfrcardoso/

[ceph-users] rest-bench ERROR: failed to create bucket: XmlParseFailure

2014-11-21 Thread Frank Li
Hi, Can anyone help me resolve the following error? Thanks a lot. rest-bench --api-host=172.20.10.106 --bucket=test --access-key=BXXX --secret=z --protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write host=172.20.10.106 ERROR: failed to

[ceph-users] OSD in uninterruptible sleep

2014-11-21 Thread Jon Kåre Hellan
We are testing a Giant cluster - on virtual machines for now. We have seen the same problem two nights in a row: One of the OSDs gets stuck in uninterruptible sleep. The only way to get rid of it is apparently to reboot - kill -9, -11 and -15 have all been tried. The monitor apparently
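A task in uninterruptible sleep shows state `D` and ignores all signals, including SIGKILL, until the blocking kernel operation returns, which is why -9, -11 and -15 have no effect. A small sketch for confirming the state and capturing the kernel stack of the stuck process (the PID is a stand-in; substitute the stuck ceph-osd PID, and note /proc/PID/stack is readable by root only):

```shell
#!/bin/sh
# PID of the stuck ceph-osd; this shell's own PID is used as a stand-in.
pid=$$

# The third field of /proc/PID/stat is the scheduler state letter:
# R=running, S=interruptible sleep, D=uninterruptible sleep, Z=zombie.
state=$(cut -d' ' -f3 "/proc/$pid/stat")
echo "pid $pid state: $state"

if [ "$state" = "D" ]; then
    # The kernel stack shows the syscall/driver path the task is blocked
    # in - often a hung disk, controller, or network mount.
    cat "/proc/$pid/stack"
fi
```

The kernel stack is usually the most useful thing to attach to a bug report for a D-state OSD, since the cause is in the kernel or storage layer rather than in ceph-osd itself.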

[ceph-users] RBD Cache Considered Harmful? (on all-SSD pools, at least)

2014-11-21 Thread Florian Haas
Hi everyone, been trying to get to the bottom of this for a few days; thought I'd take this to the list to see if someone had insight to share. Situation: Ceph 0.87 (Giant) cluster with approx. 250 OSDs. One set of OSD nodes with just spinners put into one CRUSH ruleset assigned to a spinner

Re: [ceph-users] RBD Cache Considered Harmful? (on all-SSD pools, at least)

2014-11-21 Thread Mark Nelson
On 11/21/2014 08:14 AM, Florian Haas wrote: Hi everyone, been trying to get to the bottom of this for a few days; thought I'd take this to the list to see if someone had insight to share. Situation: Ceph 0.87 (Giant) cluster with approx. 250 OSDs. One set of OSD nodes with just spinners put

[ceph-users] Calamari install issues

2014-11-21 Thread Shain Miley
Hello all, I followed the setup steps provided here: http://karan-mj.blogspot.com/2014/09/ceph-calamari-survival-guide.html I was able to build and install everything correctly as far as I can tell...however I am still not able to get the server to see the cluster. I am getting the

Re: [ceph-users] pg's degraded

2014-11-21 Thread JIten Shah
Thanks Michael. That was a good idea. I did: 1. sudo service ceph stop mds 2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs for metadata and data) 3. ceph health (It was healthy now!!!) 4. sudo service ceph start mds.$(hostname -s) And I am back in business. Thanks

Re: [ceph-users] Calamari install issues

2014-11-21 Thread Michael Kuriger
I had to run salt-call state.highstate on my ceph nodes. Also, if you’re running giant you’ll have to make a small change to get your disk stats to show up correctly. /opt/calamari/venv/lib/python2.6/site-packages/calamari_rest_api-0.1-py2.6.egg/calamari_rest/views/v1.py $ diff v1.py

Re: [ceph-users] pg's degraded

2014-11-21 Thread Michael Kuriger
I have started over from scratch a few times myself ;-) Michael Kuriger mk7...@yp.com 818-649-7235 MikeKuriger (IM) From: JIten Shah jshah2...@me.commailto:jshah2...@me.com Date: Friday, November 21, 2014 at 9:44 AM To: Michael Kuriger mk7...@yp.commailto:mk7...@yp.com Cc: Craig Lewis

Re: [ceph-users] OSD in uninterruptible sleep

2014-11-21 Thread Gregory Farnum
On Fri, Nov 21, 2014 at 4:56 AM, Jon Kåre Hellan jon.kare.hel...@uninett.no wrote: We are testing a Giant cluster - on virtual machines for now. We have seen the same problem two nights in a row: One of the OSDs gets stuck in uninterruptible sleep. The only way to get rid of it is apparently

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-21 Thread Mark Kirkwood
On 21/11/14 16:05, Mark Kirkwood wrote: On 21/11/14 15:52, Mark Kirkwood wrote: On 21/11/14 14:49, Mark Kirkwood wrote: The only things that look odd in the destination zone logs are 383 requests getting 404 rather than 200: $ grep http_status=404 ceph-client.radosgw.us-west-1.log ...

Re: [ceph-users] Calamari install issues

2014-11-21 Thread Shain Miley
Michael, Thanks for the info. We are running ceph version 0.80.7 so I don't think the 2nd part applies here. However when I run the salt command on the ceph nodes it fails: root@hqceph1:~# salt-call state.highstate [INFO] Loading fresh modules for state activity local: --

[ceph-users] Multiple MDS servers...

2014-11-21 Thread JIten Shah
I am trying to set up 3 MDS servers (one on each MON), but after I am done setting up the first one, it gives me the error below when I try to start it on the other ones. I understand that only 1 MDS is functional at a time, but I thought you could have multiple of them up, in case the first one dies? Or

Re: [ceph-users] mds cluster degraded

2014-11-21 Thread JIten Shah
This got taken care of after I deleted the pools for metadata and data and started again. I did: 1. sudo service ceph stop mds 2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs for metadata and data) 3. ceph health (It was healthy now!!!) 4. sudo service ceph start