Re: [ceph-users] Ceph BIG outage : 200+ OSD are down , OSD cannot create thread

2015-03-09 Thread Tony Harris
I know I'm not even close to this type of problem yet with my small clusters (both test and production) - but it would be great if something like that could appear in the cluster HEALTH_WARN, if Ceph could determine the number of processes/threads in use and compare it against the current limit
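As a rough sketch of that idea (nothing Ceph actually ships; the 80% threshold and the reliance on kernel.pid_max and /proc are assumptions on my part), an external per-OSD-host check might look like:

#!/usr/bin/env python
# Hypothetical external check approximating the HEALTH_WARN idea above:
# compare the number of threads in use on an OSD host against the kernel
# limit that is typically behind "cannot create thread" errors.
import glob

def total_threads():
    """Sum the Threads: count from every /proc/<pid>/status entry."""
    count = 0
    for status in glob.glob('/proc/[0-9]*/status'):
        try:
            with open(status) as f:
                for line in f:
                    if line.startswith('Threads:'):
                        count += int(line.split()[1])
                        break
        except IOError:
            pass  # process exited while we were scanning
    return count

def pid_max():
    # thread IDs come out of the same space as PIDs, so this caps total threads
    with open('/proc/sys/kernel/pid_max') as f:
        return int(f.read())

if __name__ == '__main__':
    used, limit = total_threads(), pid_max()
    pct = 100.0 * used / limit
    print('threads in use: %d / %d (%.1f%%)' % (used, limit, pct))
    if pct > 80:
        print('WARN: approaching kernel.pid_max; OSDs may fail to create threads')

In the outage described in the thread the practical fix is raising kernel.pid_max / kernel.threads-max; a check like this would mainly buy early warning.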

[ceph-users] New SSD Question

2015-03-02 Thread Tony Harris
Hi all, After the previous thread, I'm doing my SSD shopping and I came across an SSD called an Edge Boost Pro w/ Power Fail. It seems to have some impressive specs - decent user reviews in most places, a poor one in one place - and I was wondering if anyone has had any experience with these

[ceph-users] Question about rados bench

2015-03-03 Thread Tony Harris
Hi all, In my reading on the net about various implementations of Ceph, I came across this blog page (it really doesn't give a lot of good information, but it caused me to wonder): http://avengermojo.blogspot.com/2014/12/cubieboard-cluster-ceph-test.html - near the bottom, the person did a rados

[ceph-users] Mail not reaching the list?

2015-02-28 Thread Tony Harris
Hi, I've sent a couple of emails to the list since subscribing, but I've never seen them reach the list; I was just wondering if there was something wrong?

Re: [ceph-users] SSD selection

2015-03-02 Thread Tony Harris
On Sun, Mar 1, 2015 at 11:19 PM, Christian Balzer ch...@gol.com wrote: I'll be honest, the pricing on Intel's website is far from reality. I haven't been able to find any OEMs, and retail pricing on the 200GB 3610 is ~$231 (the $300 must have been a different model in the line).

[ceph-users] Ceph - networking question

2015-02-27 Thread Tony Harris
Hi all, I've only been using Ceph for a few months now and currently have a small cluster (3 nodes, 18 OSDs). I get decent performance with the current configuration. My question is: should I have the larger pipe on the client/public network or on the Ceph cluster (private) network? I can only have
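For context on why the answer usually favors the cluster (private) network, here is a back-of-the-envelope sketch; the replication size and traffic figure below are assumptions, not numbers from the thread:

# Why the cluster (private) network usually needs at least as much bandwidth
# as the public one: every client write is re-sent to the replica OSDs.
replication_size = 3          # assumed pool size
client_write_mbps = 1000      # assumed aggregate client write traffic (Mbit/s)

# The primary OSD receives the write on the public network...
public_traffic = client_write_mbps
# ...and forwards a copy to each of the (size - 1) replicas over the cluster network.
cluster_traffic = client_write_mbps * (replication_size - 1)

print("public network:  %d Mbit/s" % public_traffic)
print("cluster network: %d Mbit/s (replication only, before recovery/backfill)"
      % cluster_traffic)

Recovery and backfill also ride the cluster network, which pushes the ratio further in its favor.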

[ceph-users] Am I reaching the list now?

2015-02-28 Thread Tony Harris
I was subscribed with a Yahoo email address, but it was getting some grief, so I decided to try using my Gmail address; hopefully this one is working. -Tony

[ceph-users] SSD selection

2015-02-28 Thread Tony Harris
Hi all, I have a small cluster put together and it's running fairly well (3 nodes, 21 OSDs). I'm looking to improve the write performance a bit though, which I was hoping that using SSDs for journals would do. But I was wondering what people had as recommendations for SSDs to act as journal

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
(the speed of the 240s is a bit better than the 120s), and I've left 50% underprovisioned. I've got 10GB for journals and I am using 4 OSDs per SSD. Andrei -- From: Tony Harris neth...@gmail.com To: Andrei Mikhailovsky and...@arhont.com Cc: ceph-users@lists.ceph.com
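A quick sanity check on the 4-OSDs-per-SSD layout (the spinner throughput below is an assumed figure, not from the thread): with FileStore, every write hits the journal first, so the shared SSD has to sustain the combined write stream of all the OSDs behind it:

# Rough check of whether one journal SSD can keep up with the OSDs sharing it.
osds_per_ssd = 4            # as described in the thread
hdd_write_mbs = 110         # assumed sustained write speed of each spinner (MB/s)

required_ssd_write = osds_per_ssd * hdd_write_mbs
print("journal SSD needs roughly %d MB/s sustained writes "
      "to avoid becoming the bottleneck" % required_ssd_write)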

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer ch...@gol.com wrote: On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote: On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer ch...@gol.com wrote: Again, penultimately you will need to sit down, compile and compare the numbers

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
Christian, On Sun, 1 Mar 2015 15:08:10 -0600 Tony Harris wrote: Now, I've never set up a journal on a separate disk. I assume you have 4 partitions at 10GB per partition; I noticed the docs referred to 10 GB as a good starting point. Would it be better to have 4 partitions @ 10 GB each or 4 @ 20
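For the 10 GB vs 20 GB question, the FileStore docs' sizing rule of thumb can be sketched like this (the throughput figure is an assumption; filestore max sync interval defaults to 5 seconds):

# FileStore journal sizing rule of thumb:
#   osd journal size >= 2 * expected_throughput * filestore_max_sync_interval
expected_throughput_mbs = 110       # assumed per-disk throughput, MB/s
filestore_max_sync_interval = 5     # seconds (default)

journal_size_mb = 2 * expected_throughput_mbs * filestore_max_sync_interval
print("suggested minimum journal size: ~%d MB per OSD" % journal_size_mb)

By that rule ~1-2 GB already suffices for a single spinner, so 10 GB partitions leave ample headroom and 20 GB mostly just spends SSD capacity.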

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
On Sat, Feb 28, 2015 at 11:21 PM, Christian Balzer ch...@gol.com wrote: On Sat, 28 Feb 2015 20:42:35 -0600 Tony Harris wrote: Hi all, I have a small cluster together and it's running fairly well (3 nodes, 21 osds). I'm looking to improve the write performance a bit though, which I

Re: [ceph-users] SSD selection

2015-03-01 Thread Tony Harris
per $ ratio out of all the drives. Andrei -- From: Tony Harris neth...@gmail.com To: Christian Balzer ch...@gol.com Cc: ceph-users@lists.ceph.com Sent: Sunday, 1 March 2015 4:19:30 PM Subject: Re: [ceph-users] SSD selection Well, although I have 7

[ceph-users] Ceph Hammer question..

2015-04-22 Thread Tony Harris
Hi all, I have a cluster currently on Giant - is Hammer stable/ready for production use? -Tony

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Tony Harris
So with this, will even numbers then be LTS? Since 9.0.0 is following 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, 12.x.x, etc. will be LTS... On Tue, May 5, 2015 at 11:45 AM, Sage Weil sw...@redhat.com wrote: On Tue, 5 May 2015, Joao Eduardo Luis wrote: On

[ceph-users] Do I have enough pgs?

2015-04-15 Thread Tony Harris
Hi all, I have a cluster of 3 nodes, 18 OSDs. I used pgcalc to give a suggested number of PGs - here was my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4: 2 rep, 18 OSDs, 5% data, 256 PGs
Group5: 2
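For reference, the per-pool calculation pgcalc performs can be sketched roughly as follows (the target of 200 PGs per OSD is an assumption, and pgcalc's power-of-two rounding is slightly more nuanced than a plain round-up):

# Per pool: raw = target_pgs_per_osd * osd_count * percent_data / replica_size,
# then round up to a power of two.
import math

def suggested_pgs(osd_count, percent_data, size, target_per_osd=200):
    raw = target_per_osd * osd_count * percent_data / float(size)
    return 2 ** int(math.ceil(math.log(max(raw, 1), 2)))

# e.g. a pool holding ~30% of the data on 18 OSDs at 3x replication:
print(suggested_pgs(18, 0.30, 3))   # -> 512, matching the first groups above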

Re: [ceph-users] Enclosure power failure pausing client IO till all connected hosts up

2015-07-09 Thread Tony Harris
Sounds to me like you've put yourself at too much risk - *if* I'm reading your message right about your configuration, you have multiple hosts accessing OSDs that are stored on a single shared box - so if that single shared box (single point of failure for multiple nodes) goes down it's possible
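A toy illustration of that risk (the host/enclosure layout and pool size below are assumptions, not the poster's actual setup): CRUSH can spread replicas across hosts and still lose every copy of some PGs when several hosts share one failure domain it doesn't know about:

# Simulate host-level replica placement when hosts share an enclosure.
import random

hosts = ['host1', 'host2', 'host3', 'host4']
enclosure = {'host1': 'encA', 'host2': 'encA',   # two hosts share one enclosure
             'host3': 'encB', 'host4': 'encB'}

random.seed(0)
# 100 PGs in a size=2 pool, replicas placed on distinct hosts
pgs = [random.sample(hosts, 2) for _ in range(100)]

# PGs whose replicas all sit in enclosure A vanish if that enclosure loses power.
lost = [pg for pg in pgs if set(enclosure[h] for h in pg) == {'encA'}]
print('%d of %d PGs would lose every replica if enclosure A fails' % (len(lost), len(pgs)))

The usual remedy is to model the enclosure as a CRUSH bucket (e.g. the chassis type) and have the rule choose leaves across it.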