[ceph-users] Sunday's Ceph based business model

2015-03-15 Thread Loic Dachary
Hi Ceph, Disclaimer: I'm no entrepreneur, the business model idea that came to me this Sunday should not be taken seriously ;-) Let's say individuals can buy hardware that is Ceph-ready (i.e. contains some variation of https://wiki.ceph.com/Clustering_a_few_NAS_into_a_Ceph_cluster) and build a

[ceph-users] HEALTH_WARN too few pgs per osd (0 min 20)

2015-03-15 Thread Jesus Chavez (jeschave)
Hi all, does anybody know why I still get the WARN message status? I don't even have pools yet so I am not sure why it is warning me… [root@capricornio ceph-cluster]# ceph status cluster d39f6247-1543-432d-9247-6c56f65bb6cd health HEALTH_WARN too few pgs per osd (0 min 20) monmap e1: 3
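
The warning clears once at least one pool exists with a reasonable number of placement groups per OSD. A minimal sketch with the ceph CLI; the pool name "rbd" and the count of 128 are illustrative assumptions, not values from the thread:

  ceph osd pool create rbd 128 128   # create a pool with an explicit pg_num/pgp_num
  ceph status                        # re-check cluster health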

Re: [ceph-users] HEALTH_WARN too few pgs per osd (0 min 20)

2015-03-15 Thread Jesus Chavez (jeschave)
yeah, that's what I thought. I had the same WARN message with RHEL 6.6, but I had a pool, and when I changed the value of pgs the message was just gone... Jesus Chavez SYSTEMS ENGINEER-C.SALES jesch...@cisco.com Phone: +52 55 5267 3146
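
For an existing pool, the placement group count is raised in two steps, pg_num first and then pgp_num. A rough sketch; the pool name "rbd" and the target value 128 are assumptions:

  ceph osd pool set rbd pg_num 128
  ceph osd pool set rbd pgp_num 128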

Re: [ceph-users] HEALTH_WARN too few pgs per osd (0 min 20)

2015-03-15 Thread Loic Dachary
Hi, On 15/03/2015 16:23, Jesus Chavez (jeschave) wrote: Hi all, does anybody know why I still get the WARN message status? I don't even have pools yet so I am not sure why it is warning me… [root@capricornio ceph-cluster]# ceph status cluster d39f6247-1543-432d-9247-6c56f65bb6cd

[ceph-users] FW: More than 50% osds down, CPUs still busy; will the cluster recover without help?

2015-03-15 Thread Chris Murray
Apologies if anyone receives this twice. I didn't see this e-mail come back through to the list ... -Original Message- From: Chris Murray Sent: 14 March 2015 08:56 To: 'Gregory Farnum' Cc: 'ceph-users' Subject: RE: [ceph-users] More than 50% osds down, CPUs still busy; will the cluster
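
When OSDs flap during recovery, a common first step is to stop the cluster from marking the down OSDs out while they are brought back, then follow recovery. A sketch with standard ceph commands; whether it is appropriate for this particular cluster is an assumption:

  ceph osd set noout      # do not mark down OSDs out while working on them
  ceph health detail      # list affected PGs and OSDs
  ceph -w                 # watch recovery progress
  ceph osd unset noout    # clear the flag once the OSDs are stable again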

[ceph-users] Ceph release timeline

2015-03-15 Thread Loic Dachary
Hi Ceph, In an attempt to clarify which Ceph release is stable, LTS or development, a new page was added to the documentation: http://ceph.com/docs/master/releases/ It is a matrix where each cell is a release number linked to the release notes from http://ceph.com/docs/master/release-notes/.

Re: [ceph-users] Ceph release timeline

2015-03-15 Thread Lindsay Mathieson
Thanks, that's quite helpful. On 16 March 2015 at 08:29, Loic Dachary l...@dachary.org wrote: Hi Ceph, In an attempt to clarify which Ceph release is stable, LTS or development, a new page was added to the documentation: http://ceph.com/docs/master/releases/ It is a matrix where each cell is

Re: [ceph-users] Ceph release timeline

2015-03-15 Thread Georgios Dimitrakakis
Indeed it is! Thanks! George Thanks, that's quite helpful. On 16 March 2015 at 08:29, Loic Dachary wrote: Hi Ceph, In an attempt to clarify which Ceph release is stable, LTS or development, a new page was added to the documentation: http://ceph.com/docs/master/releases/ [1] It is a matrix

Re: [ceph-users] Shadow files

2015-03-15 Thread Ben
It is either a problem with Ceph, Civetweb or something else in our configuration, but deletes in user buckets are still leaving a high number of old shadow files. Since we have millions and millions of objects, it is hard to reconcile what should and shouldn't exist. Looking at our cluster
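
One rough way to gauge how many shadow objects are left is to list the RGW data pool and count names containing "shadow". A sketch only; the pool name ".rgw.buckets" is an assumption and the listing is slow with millions of objects:

  rados -p .rgw.buckets ls | grep -c shadow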

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/09/2015 11:15 AM, Nick Fisk wrote: Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I was definitely seeing the IOs being coalesced into larger ones on the krbd. I am not sure what you
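
For reference, exporting a krbd device through tgt with the aio backing store looks roughly like the following; the target id, LUN and /dev/rbd0 path are placeholders, not values from the thread:

  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --backing-store /dev/rbd0 --bstype aio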

[ceph-users] osd goes down

2015-03-15 Thread Jesus Chavez (jeschave)
Hi guys! After getting a cluster of 10 servers and building an image for a pool of 450 TB, it got stuck at the mkfs moment, and what I noticed was that my entire cluster was failing in a really weird way: some OSDs go down and up on different nodes repeatedly, so now I have a lot of PGs degraded
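
A few standard commands help narrow down which OSDs are flapping and which PGs are affected; how far they get with this particular failure is an assumption:

  ceph osd tree               # which OSDs are down, and on which hosts
  ceph health detail          # degraded and stuck PGs
  ceph pg dump_stuck unclean  # PGs that are not recovering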

Re: [ceph-users] CephFS: delayed objects deletion ?

2015-03-15 Thread Yan, Zheng
On Sat, Mar 14, 2015 at 5:22 PM, Florent B flor...@coppint.com wrote: Hi, What do you call an old MDS? I'm on the Giant release, it is not very old... And I tried restarting both but it didn't solve my problem. Will it be OK in Hammer? On 03/13/2015 04:27 AM, Yan, Zheng wrote: On Fri, Mar 13,
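
CephFS purges the objects of deleted files asynchronously, so the data pool only shrinks once the MDS works through its queue. A rough way to watch that happening; the pool name "cephfs_data" is an assumption:

  rados df                          # per-pool object counts, repeated over time
  rados -p cephfs_data ls | wc -l   # or count data pool objects directly (slow on large pools)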

Re: [ceph-users] Sunday's Ceph based business model

2015-03-15 Thread Anthony D'Atri
Interesting idea. I'm not sure, though, that Ceph is designed with this sort of latency in mind. Crashplan does let you do something very similar for free, as I understand it, though it's more of a nearline thing.

Re: [ceph-users] Firefly, cephfs issues: different unix rights depending on the client and ls are slow

2015-03-15 Thread Yan, Zheng
On Sat, Mar 14, 2015 at 7:03 AM, Scottix scot...@gmail.com wrote: ... The time variation is caused by cache coherence. When the client has valid information in its cache, the 'stat' operation will be fast. Otherwise the client needs to send a request to the MDS and wait for the reply, which will be slow.
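
The effect is easy to reproduce by timing the same stat twice from one client: the first call may need a round trip to the MDS, the second is answered from the client cache. The path is just a placeholder:

  time stat /mnt/cephfs/dir/file   # cold cache: may wait on the MDS
  time stat /mnt/cephfs/dir/file   # warm cache: served locally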

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/15/2015 07:54 PM, Mike Christie wrote: On 03/09/2015 11:15 AM, Nick Fisk wrote: Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I was definitely seeing the IOs being coalesced into
