Hi Ceph,
Disclaimer: I'm no entrepreneur; the business model idea that came to me this
Sunday should not be taken seriously ;-)
Let's say individuals can buy hardware that is Ceph-ready (i.e. contains some
variation of https://wiki.ceph.com/Clustering_a_few_NAS_into_a_Ceph_cluster)
and build a
Hi all, does anybody know why I still get a WARN status message?
I don't even have pools yet, so I am not sure why it is warning me…
[root@capricornio ceph-cluster]# ceph status
cluster d39f6247-1543-432d-9247-6c56f65bb6cd
health HEALTH_WARN too few pgs per osd (0 < min 20)
monmap e1: 3
Yeah, that's what I thought. I had the same WARN message with RHEL 6.6, but I had
a pool, and when I changed the value of pgs the message just went away...
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
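For reference, that warning usually clears once a pool exists with a sensible
PG count. A minimal sketch, assuming a pool named "rbd" and a small cluster
where 128 PGs is appropriate (both are placeholders, not from the thread):

    # create a pool with 128 placement groups (pg_num and pgp_num)
    ceph osd pool create rbd 128 128
    # or raise the PG count on an existing pool (raise both values)
    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256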
Hi,
On 15/03/2015 16:23, Jesus Chavez (jeschave) wrote:
Hi all, does anybody know why I still get a WARN status message?
I don't even have pools yet, so I am not sure why it is warning me…
[root@capricornio ceph-cluster]# ceph status
cluster d39f6247-1543-432d-9247-6c56f65bb6cd
Apologies if anyone receives this twice. I didn't see this e-mail come back
through to the list ...
-Original Message-
From: Chris Murray
Sent: 14 March 2015 08:56
To: 'Gregory Farnum'
Cc: 'ceph-users'
Subject: RE: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster
Hi Ceph,
In an attempt to clarify which Ceph releases are stable, LTS or development, a new
page was added to the documentation: http://ceph.com/docs/master/releases/ It
is a matrix where each cell is a release number linked to the release notes
from http://ceph.com/docs/master/release-notes/.
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
In an attempt to clarify which Ceph releases are stable, LTS or development,
a new page was added to the documentation:
http://ceph.com/docs/master/releases/ It is a matrix where each cell is
Indeed it is!
Thanks!
George
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary wrote:
Hi Ceph,
In an attempt to clarify which Ceph releases are stable, LTS or
development, a new page was added to the documentation:
http://ceph.com/docs/master/releases/ It is a matrix
It is either a problem with Ceph, Civetweb or something else in our
configuration.
But deletes in user buckets are still leaving a high number of old shadow
files. Since we have millions and millions of objects, it is hard to
reconcile what should and shouldn't exist.
Looking at our cluster
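As a general aside, and not necessarily the cause here: old shadow objects
are normally reclaimed by the RGW garbage collector, which can be inspected
and triggered by hand:

    # list objects queued for garbage collection, including unexpired entries
    radosgw-admin gc list --include-all
    # run a garbage-collection pass now instead of waiting for the timer
    radosgw-admin gc process

Shadow files that survive repeated gc passes would suggest something beyond
normal deferred cleanup.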
On 03/09/2015 11:15 AM, Nick Fisk wrote:
Hi Mike,
I was using bs_aio with the krbd and still saw a small caching effect. I'm
not sure if it was on the ESXi or tgt/krbd page cache side, but I was
definitely seeing the IOs being coalesced into larger ones on the krbd
I am not sure what you
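For context, a minimal sketch of the kind of tgt targets.conf being discussed
(the IQN and device path are placeholders, not from the thread):

    <target iqn.2015-03.com.example:krbd>
        driver iscsi
        # native Linux AIO backing store; as I understand it, this opens
        # the device with O_DIRECT, bypassing the tgt-side page cache
        bs-type aio
        backing-store /dev/rbd0
    </target>

If bs_aio really does bypass the page cache, any coalescing observed would
have to happen on the ESXi side or inside the krbd layer.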
Hi guys!
After getting a cluster of 10 servers and building an image on a 450 TB pool, it
got stuck at the mkfs step, and what I noticed was that my entire cluster was
failing in a really weird way: some OSDs go down and come back up on different
nodes, over and over, so now I have a lot of degraded PGs
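One hedged guess at a mitigation, assuming the image is mapped at /dev/rbd0
(a placeholder): mkfs on a very large RBD image can flood the cluster with
discard and zeroing IO, and both common filesystems have flags to skip that
work at mkfs time:

    # XFS: -K skips issuing discards at mkfs time
    mkfs.xfs -K /dev/rbd0
    # ext4: skip the discard pass and defer inode-table zeroing
    mkfs.ext4 -E nodiscard,lazy_itable_init=1 /dev/rbd0

OSDs flapping under that load is still worth investigating on its own.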
On Sat, Mar 14, 2015 at 5:22 PM, Florent B flor...@coppint.com wrote:
Hi,
What do you call an old MDS? I'm on the Giant release, it is not very old...
And I tried restarting both but it didn't solve my problem.
Will it be OK in Hammer ?
On 03/13/2015 04:27 AM, Yan, Zheng wrote:
On Fri, Mar 13,
Interesting idea. I'm not sure, though, that Ceph is designed with this sort of
latency in mind.
Crashplan does let you do something very similar for free, as I understand it,
though it's more of a nearline thing.
On Sat, Mar 14, 2015 at 7:03 AM, Scottix scot...@gmail.com wrote:
...
The time variation is caused by cache coherence. When the client has valid
information in its cache, the 'stat' operation will be fast. Otherwise the
client needs to send a request to the MDS and wait for the reply, which will
be slow.
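A quick way to see this from a CephFS client (the mount point and file name
are placeholders):

    # first stat: the client has no cached caps, so it round-trips to the MDS
    time stat /mnt/cephfs/somefile
    # second stat: answered from the client's cache, noticeably faster
    time stat /mnt/cephfs/somefile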
On 03/15/2015 07:54 PM, Mike Christie wrote:
On 03/09/2015 11:15 AM, Nick Fisk wrote:
Hi Mike,
I was using bs_aio with the krbd and still saw a small caching effect. I'm
not sure if it was on the ESXi or tgt/krbd page cache side, but I was
definitely seeing the IOs being coalesced into