I know I'm not even close to this type of problem yet with my small
clusters (both test and production), but it would be great if
something like that could appear as a cluster HEALTH_WARN: Ceph could
determine the number of processes in use and compare it against the current
limit
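(Ceph has no such check today as far as I know, but here is a minimal sketch
of what I mean, reading Linux /proc; the pid argument and warn threshold are
made up for illustration:)

    #!/usr/bin/env python3
    # Sketch only, not an actual Ceph health check: compare a daemon's
    # thread count against its "Max processes" soft limit from /proc.
    # Strictly, NPROC is a per-user limit, so this is an approximation.
    import os

    def nproc_limit(pid):
        # The "Max processes" row of /proc/<pid>/limits holds the soft limit.
        with open(f"/proc/{pid}/limits") as f:
            for line in f:
                if line.startswith("Max processes"):
                    soft = line.split()[2]
                    return None if soft == "unlimited" else int(soft)

    def thread_count(pid):
        # Each entry under /proc/<pid>/task is one thread of the process.
        return len(os.listdir(f"/proc/{pid}/task"))

    def check(pid, warn_ratio=0.8):
        limit = nproc_limit(pid)
        used = thread_count(pid)
        if limit is not None and used > warn_ratio * limit:
            print(f"HEALTH_WARN: pid {pid} uses {used} threads; limit is {limit}")

    # e.g. check(1234)  # 1234 standing in for a ceph-osd pid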
Hi all,
After the previous thread, I'm doing my SSD shopping and I came across
an SSD called the Edge Boost Pro w/ Power Fail. It seems to have some
impressive specs and, in most places, decent user reviews (in one place a poor
one). I was wondering if anyone has had any experience with these.
Hi all,
In my reading on the net about various implementations of Ceph, I came
across this blog page (it really doesn't give a lot of good
information, but it made me wonder):
http://avengermojo.blogspot.com/2014/12/cubieboard-cluster-ceph-test.html
near the bottom, the person did a rados
Hi,
I've sent a couple of emails to the list since subscribing, but I've never
seen them reach the list; I was just wondering if there was something wrong?
On Sun, Mar 1, 2015 at 11:19 PM, Christian Balzer ch...@gol.com wrote:
I'll be honest, the pricing on Intel's website is far from reality. I
haven't been able to find any OEMs, and retail pricing on the 200GB 3610
is ~$231 (the $300 must have been a different model in the line).
Hi all,
I've only been using Ceph for a few months now and currently have a small
cluster (3 nodes, 18 OSDs). I get decent performance for the
configuration.
My question is: should I have the larger pipe on the client/public network or
on the Ceph cluster (private) network? I can only have
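(For context, the public/cluster split is set in ceph.conf; a minimal sketch,
with made-up subnets:)

    [global]
        # client <-> OSD/MON traffic (reads, plus the first copy of each write)
        public network = 192.168.1.0/24
        # OSD <-> OSD traffic (replication, recovery, backfill)
        cluster network = 192.168.2.0/24

Since each client write is re-sent (replica count - 1) more times between
OSDs, the usual argument is that the cluster/private side deserves the
bigger pipe.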
I was subscribed with a Yahoo email address, but it was getting some grief,
so I decided to try using my Gmail address; hopefully this one is working.
-Tony
Hi all,
I have a small cluster put together and it's running fairly well (3 nodes, 21
OSDs). I'm looking to improve the write performance a bit though, which I
was hoping that using SSDs for journals would do. But I was wondering
what people had as recommendations for SSDs to act as journals
(the speeds of the 240GB models are a bit better than the
120GB ones), and I've left 50% of each drive unprovisioned. I've got 10GB for
journals and I am using 4 OSDs per SSD.
Andrei
--
From: Tony Harris neth...@gmail.com
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users@lists.ceph.com
On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer ch...@gol.com wrote:
On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote:
On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer ch...@gol.com wrote:
Again, ultimately you will need to sit down, compile, and compare the
numbers.
Christian
On Sun, 1 Mar 2015 15:08:10 -0600 Tony Harris wrote:
Now, I've never set up a journal on a separate disk. I assume you have 4
partitions at 10GB per partition; I noticed the docs referred to 10GB
as a good starting point. Would it be better to have 4 partitions @ 10GB each
or 4 @ 20GB?
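(For what it's worth, the FileStore docs suggest deriving journal size from
throughput rather than picking a fixed number; a worked example with assumed
figures:

    osd journal size = 2 * (expected throughput * filestore max sync interval)
                     = 2 * (100 MB/s * 5 s)
                     = 1000 MB, i.e. ~1 GB

so with the default 5 s sync interval, even 10GB per journal is generous
headroom for the spinner behind it; 20GB mostly buys slack if you raise the
sync interval.)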
On Sat, Feb 28, 2015 at 11:21 PM, Christian Balzer ch...@gol.com wrote:
On Sat, 28 Feb 2015 20:42:35 -0600 Tony Harris wrote:
Hi all,
I have a small cluster put together and it's running fairly well (3 nodes, 21
OSDs). I'm looking to improve the write performance a bit though, which
I
per $ ratio out of all the drives.
Andrei
--
From: Tony Harris neth...@gmail.com
To: Christian Balzer ch...@gol.com
Cc: ceph-users@lists.ceph.com
Sent: Sunday, 1 March, 2015 4:19:30 PM
Subject: Re: [ceph-users] SSD selection
Well, although I have 7
Hi all,
I have a cluster currently on Giant - is Hammer stable/ready for production
use?
-Tony
So with this, will even numbers then be LTS? Since 9.0.0 is following
0.94.x/Hammer, and every other release is normally LTS, I'm guessing
10.x.x, 12.x.x, etc. will be LTS...
On Tue, May 5, 2015 at 11:45 AM, Sage Weil sw...@redhat.com wrote:
On Tue, 5 May 2015, Joao Eduardo Luis wrote:
On
Hi all,
I have a cluster of 3 nodes and 18 OSDs. I used pgcalc to get a
suggested number of PGs; here is my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4: 2 rep, 18 OSDs, 5% data, 256 PGs
Group5: 2
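(For anyone reading along: the rule of thumb behind pgcalc is roughly the
sketch below. This is my guess at its logic, not the actual tool; with a
target of 200 PGs per OSD it reproduces the 3-rep groups above.)

    # Guess at the pgcalc rule of thumb (not the actual tool): weight a
    # target PGs-per-OSD count by the pool's data share, divide by the
    # replica count, and round up to the next power of two.
    def suggested_pgs(osds, replicas, data_share, target_per_osd=200):
        raw = osds * target_per_osd * data_share / replicas
        pgs = 1
        while pgs < raw:
            pgs *= 2
        return pgs

    # suggested_pgs(18, 3, 0.30) -> 512, matching Group1-3; the real
    # calculator applies extra minimums, so Group4's 256 differs from
    # the naive 128 this gives.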
Sounds to me like you've put yourself at too much risk - *if* I'm reading
your message right about your configuration, you have multiple hosts
accessing OSDs that are stored on a single shared box - so if that single
shared box (a single point of failure for multiple nodes) goes down, it's
possible