[ceph-users] OSD not marked as down or out

2015-02-20 Thread Sudarshan Pathak
Hello everyone, I have a cluster running with OpenStack. It has 6 OSDs (3 at each of 2 locations). Each pool has a replication size of 3, with 2 copies at the primary location and 1 copy at the secondary location. Everything is running as expected, but the OSDs are not marked down when I power off an OSD
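
Not part of the original message, but the monitor settings that govern this behaviour can be inspected at runtime; a minimal sketch, assuming a monitor named mon.a:

    # how many OSDs must report a peer before the monitor marks it down
    ceph daemon mon.a config show | grep mon_osd_min_down_reporters
    # grace period before a silent OSD is reported as failed
    ceph daemon mon.a config show | grep osd_heartbeat_grace
    # watch whether failure reports actually reach the monitor
    ceph -w | grep -i fail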

[ceph-users] running giant/hammer mds with firefly osds

2015-02-20 Thread Dan van der Ster
Hi all, Back in the dumpling days, we were able to run the emperor MDS with dumpling OSDs -- this was an improvement over the dumpling MDS. Now we have stable firefly OSDs, but I was wondering if we can reap some of the recent CephFS developments by running a giant or ~hammer MDS with our
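
Before mixing releases like this, it helps to confirm what each daemon is actually running; a minimal sketch, assuming daemon IDs osd.0 and mds.a:

    ceph --version               # client/package version on this host
    ceph tell osd.0 version      # version reported by a running OSD
    ceph daemon mds.a version    # version via the MDS admin socket (run on the MDS host)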

Re: [ceph-users] running giant/hammer mds with firefly osds

2015-02-20 Thread Luis Periquito
Hi Dan, I remember http://tracker.ceph.com/issues/9945 introducing some issues when running CephFS across different versions of giant/firefly. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14257.html So if you upgrade, please be aware that you'll also have to update the clients. On

Re: [ceph-users] new ssd intel s3610, has somebody tested them ?

2015-02-20 Thread Dan van der Ster
Interesting, thanks for the link. I hope the quality of the 3610/3710 is as good as the 3700's... we haven't yet seen a single failure in production. Cheers, Dan On Fri, Feb 20, 2015 at 8:06 AM, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Intel has just released a new SSD, the S3610:

Re: [ceph-users] new ssd intel s3610, has somebody tested them ?

2015-02-20 Thread Christian Balzer
Hello, On Fri, 20 Feb 2015 09:30:56 +0100 Dan van der Ster wrote: Interesting, thanks for the link. Interesting indeed, more for a non-Ceph project of mine, but still. ^o^ I hope the quality of the 3610/3710 is as good as the 3700's... we haven't yet seen a single failure in production.
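
Not discussed in the thread itself, but a common sanity check before adopting a new SSD model for journals is to measure small synchronous writes, since that is the pattern the OSD journal generates. A sketch, assuming /dev/sdX is a scratch device (this overwrites data on it):

    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
    # or the same pattern with fio: 4k O_DSYNC writes at queue depth 1
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based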

[ceph-users] unsubscribe

2015-02-20 Thread Konstantin Khatskevich
unsubscribe -- Best regards, Konstantin Khatskevich

[ceph-users] erasure coded pool

2015-02-20 Thread Deneau, Tom
Is it possible to run an erasure coded pool using the default k=2, m=2 profile on a single node? (This is just for functionality testing.) The single node has 3 OSDs. Replicated pools run fine. ceph.conf does contain: osd crush chooseleaf type = 0 -- Tom Deneau

Re: [ceph-users] CephFS and data locality?

2015-02-20 Thread Jake Kugel
Okay thanks for pointing me in the right direction. From a quick read I think this will work but will take a look in detail. Thanks! Jake On Tue, Feb 17, 2015 at 3:16 PM, Gregory Farnum wrote: On Tue, Feb 17, 2015 at 10:36 AM, Jake Kugel jkugel@... wrote: Hi, I'm just starting to look

Re: [ceph-users] re: Upgrade 0.80.5 to 0.80.8 -- the VM's read requests become too slow

2015-02-20 Thread Xu (Simon) Chen
Any update on this matter? I've been thinking of upgrading from 0.80.7 to 0.80.8 - lucky that I saw this thread first... On Thu, Feb 12, 2015 at 10:39 PM, 杨万元 yangwanyuan8...@gmail.com wrote: thanks very much for your advice. Yes, as you said, disabling rbd_cache will improve the read
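
For reference, the cache behaviour being discussed is a per-client setting in ceph.conf; a minimal sketch of the relevant section (values are examples, not a recommendation):

    [client]
        rbd cache = false
        # or, if keeping the cache enabled:
        rbd cache writethrough until flush = true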

Re: [ceph-users] initially conf calamari to know about my Ceph cluster(s)

2015-02-20 Thread Dan Mick
By the way, you may want to put these sorts of questions on ceph-calam...@lists.ceph.com, which is specific to calamari. On 02/16/2015 01:08 PM, Steffen Winther wrote: Steffen Winther ceph.user@... writes: Trying to figure out how to initially configure calamari clients to know about my

Re: [ceph-users] Calamari build in vagrants

2015-02-20 Thread Dan Mick
On 02/16/2015 12:57 PM, Steffen Winther wrote: Dan Mick dmick@... writes: 0cbcfbaa791baa3ee25c4f1a135f005c1d568512 on the 1.2.3 branch has the change to yo 1.1.0. I've just cherry-picked that to v1.3 and master. Do you mean that you merged 1.2.3 into master and branch 1.3? I put just that
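
For anyone wanting the same fix locally, the cherry-pick being described corresponds roughly to the following (a sketch using the commit and branch names from the message, not the exact commands that were run):

    git checkout v1.3
    git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512
    git checkout master
    git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512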

[ceph-users] Cluster never reaching clean after osd out

2015-02-20 Thread Yves
I have a cluster of 3 hosts, running giant on Debian wheezy and backports kernel 3.16.0-0.bpo.4-amd64. For testing I did a ~# ceph osd out 20 from a clean state. Ceph starts rebalancing; watching ceph -w, one sees the number of pgs stuck unclean rise and then drop to about 11. Shortly after
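
A few commands that usually narrow down why PGs stay unclean after an osd out on a small cluster; a sketch, the stuck PGs and pool names will differ:

    ceph health detail
    ceph pg dump_stuck unclean
    # on 3 hosts with size-3 pools, legacy CRUSH tunables can fail to
    # find a third host once an OSD is out; check what is in effect
    ceph osd crush show-tunables
    ceph osd tree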

Re: [ceph-users] Fixing a crushmap

2015-02-20 Thread Kyle Hutson
Here was the process I went through. 1) I created an EC pool which created ruleset 1 2) I edited the crushmap to approximately its current form 3) I discovered my previous EC pool wasn't doing what I meant for it to do, so I deleted it. 4) I created a new EC pool with the parameters I wanted and

Re: [ceph-users] Fixing a crushmap

2015-02-20 Thread Kyle Hutson
Oh, and I don't yet have any important data here, so I'm not worried about losing anything at this point. I just need to get my cluster happy again so I can play with it some more. On Fri, Feb 20, 2015 at 11:00 AM, Kyle Hutson kylehut...@ksu.edu wrote: Here was the process I went through. 1) I

Re: [ceph-users] ceph-osd pegging CPU on giant, no snapshots involved this time

2015-02-20 Thread Mark Nelson
On 02/19/2015 10:56 AM, Florian Haas wrote: On Wed, Feb 18, 2015 at 10:27 PM, Florian Haas flor...@hastexo.com wrote: On Wed, Feb 18, 2015 at 9:32 PM, Mark Nelson mnel...@redhat.com wrote: On 02/18/2015 02:19 PM, Florian Haas wrote: Hey everyone, I must confess I'm still not fully

[ceph-users] Fixing a crushmap

2015-02-20 Thread Kyle Hutson
I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ I have SSDs and HDDs in the same box and wanted to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238 I had it
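
For anyone following along, the usual round trip for hand-editing a crushmap, plus a dry run before injecting it back (file names here are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt, then recompile
    crushtool -c crushmap.txt -o crushmap.new
    # dry-run a rule before touching the cluster (rule 1, 3 replicas here)
    crushtool -i crushmap.new --test --rule 1 --num-rep 3 --show-statistics
    ceph osd setcrushmap -i crushmap.new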

Re: [ceph-users] Fixing a crushmap

2015-02-20 Thread Luis Periquito
Creating an erasure coded pool is slightly different from creating a replicated one. You can use Sebastien's guide to create/manage the osd tree, but you should follow this guide http://ceph.com/docs/giant/dev/erasure-coded-pool/ to create the EC pool. I'm not sure (i.e. I never tried) to

Re: [ceph-users] erasure coded pool

2015-02-20 Thread Loic Dachary
Hi Tom, On 20/02/2015 22:59, Deneau, Tom wrote: Is it possible to run an erasure coded pool using default k=2, m=2 profile on a single node? (this is just for functionality testing). The single node has 3 OSDs. Replicated pools run fine. For k=2 m=2 to work you need four (k+m) OSDs. As
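
One way to make a single node with only 3 OSDs work for this kind of functional test (an assumption based on the giant docs, not something stated in the thread) is a smaller profile with the failure domain dropped to the OSD level:

    ceph osd erasure-code-profile set testprofile k=2 m=1 ruleset-failure-domain=osd
    ceph osd pool create ecpool 64 64 erasure testprofile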

Re: [ceph-users] Power failure recovery woes (fwd)

2015-02-20 Thread Jeff
Should I infer from the silence that there is no way to recover from the FAILED assert(last_e.version.version < e.version.version) errors? Thanks, Jeff - Forwarded message from Jeff j...@usedmoviefinder.com - Date: Tue, 17 Feb 2015 09:16:33 -0500 From: Jeff

Re: [ceph-users] Minor version difference between monitors and OSDs

2015-02-20 Thread Gregory Farnum
On Thu, Feb 19, 2015 at 8:30 PM, Christian Balzer ch...@gol.com wrote: Hello, I have a cluster currently at 0.80.1 and would like to upgrade it to 0.80.7 (Debian as you can guess), but for a number of reasons I can't really do it all at the same time. In particular I would like to upgrade
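
The usual staged order for a point release like this is monitors first, then OSDs host by host; a rough sketch (package and service commands vary by distro):

    # upgrade and restart the monitors one at a time, then:
    ceph osd set noout
    # on each OSD host in turn: upgrade the packages, restart its OSDs,
    # and wait for the cluster to settle before moving to the next host
    ceph osd unset noout
    ceph tell osd.* version    # confirm every OSD ended up on 0.80.7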

Re: [ceph-users] OSD not marked as down or out

2015-02-20 Thread Gregory Farnum
That's pretty strange, especially since the monitor is getting the failure reports. What version are you running? Can you bump up the monitor debugging and provide its output from around that time? -Greg On Fri, Feb 20, 2015 at 3:26 AM, Sudarshan Pathak sushan@gmail.com wrote: Hello
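
Bumping the monitor debugging as requested can be done at runtime, roughly like this (mon ID, debug levels, and log path are the usual defaults, not taken from the thread):

    ceph tell mon.a injectargs '--debug-mon 10 --debug-ms 1'
    # reproduce the power-off, then collect /var/log/ceph/ceph-mon.a.log
    ceph tell mon.a injectargs '--debug-mon 1 --debug-ms 0'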

Re: [ceph-users] running giant/hammer mds with firefly osds

2015-02-20 Thread Dan van der Ster
On Fri, Feb 20, 2015 at 7:56 PM, Gregory Farnum g...@gregs42.com wrote: On Fri, Feb 20, 2015 at 3:50 AM, Luis Periquito periqu...@gmail.com wrote: Hi Dan, I remember http://tracker.ceph.com/issues/9945 introducing some issues with running cephfs between different versions of giant/firefly.

Re: [ceph-users] Power failure recovery woes (fwd)

2015-02-20 Thread Gregory Farnum
You can try searching the archives and tracker.ceph.com for hints about repairing these issues, but your disk stores have definitely been corrupted and it's likely to be an adventure. I'd recommend examining your local storage stack underneath Ceph and figuring out which part was ignoring

[ceph-users] Cluster never reaching clean after osd out

2015-02-20 Thread Yves Kretzschmar
I have a cluster of 3 hosts, running Debian wheezy and backports kernel 3.16.0-0.bpo.4-amd64. For testing I did a ~# ceph osd out 20 from a clean state. Ceph starts rebalancing; watching ceph -w, one sees the number of pgs stuck unclean rise and then drop to about 11. Shortly after that the

Re: [ceph-users] running giant/hammer mds with firefly osds

2015-02-20 Thread Gregory Farnum
On Fri, Feb 20, 2015 at 3:50 AM, Luis Periquito periqu...@gmail.com wrote: Hi Dan, I remember http://tracker.ceph.com/issues/9945 introducing some issues with running cephfs between different versions of giant/firefly. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14257.html
