[ceph-users] multiple osd failure

2015-01-22 Thread Rob Antonello
We have a 16-node cluster which reported a number of OSDs losing heartbeat connections and then reporting osd down (even though it was up). This caused a number of PGs to go peering or down, and the cluster stopped serving data. We are running ceph version 0.87. The OSDs that reported down

Re: [ceph-users] Journals on all SSD cluster

2015-01-22 Thread Christian Balzer
Hello, On Thu, 22 Jan 2015 08:32:13 +0100 (CET) Alexandre DERUMIER wrote: Hi, From my last benchmark, Using which version of Ceph? I was around 12 iops rand read 4k, 2 iops rand write 4k (3 nodes with 2 SSD osd + journal SSD, Intel 3500). That was with replication of 1, if I

Re: [ceph-users] Journals on all SSD cluster

2015-01-22 Thread Andrew Thrift
Thanks for the insight. I am aware from the threads on the mailing list that currently Ceph is unable to make use of all of the performance of the SSDs. However, while we will not get maximum performance, we will certainly get better latency than we experience with spinning disks. All our

Re: [ceph-users] Journals on all SSD cluster

2015-01-22 Thread Alexandre DERUMIER
From my last benchmark, Using which version of Ceph? It was with Giant (big improvements with thread sharding, so you can use more cores per OSD). That was with replication of 1, if I remember right? For reads, I don't see too much difference with replication 1, 2 or 3, but my client was cpu

Re: [ceph-users] erasure coded pool why ever k1?

2015-01-22 Thread Loic Dachary
Hi, On 22/01/2015 16:37, Chad William Seys wrote: Hi Loic, The size of each chunk is object size / K. If you have K=1 and M=2 it will be the same as 3 replicas with none of the advantages ;-) Interesting! I did not see this explained so explicitly. So is the general explanation of k
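The chunk-size arithmetic Loic describes can be sketched in a few lines (a minimal illustration of the storage math only, not Ceph code; the object size and pool parameters are hypothetical):

```python
def erasure_overhead(object_size, k, m):
    """Per-object storage for an erasure-coded pool with k data
    chunks and m coding chunks: each chunk is object_size / k,
    and k + m chunks are stored in total."""
    chunk = object_size / k
    total = chunk * (k + m)
    return chunk, total

# K=1, M=2: each "chunk" is a full copy of the object and three
# chunks are stored -- the same raw cost as 3x replication.
print(erasure_overhead(4096, 1, 2))   # (4096.0, 12288.0)

# K=4, M=2: still survives the loss of any two chunks, but stores
# only 1.5x the object size instead of 3x.
print(erasure_overhead(4096, 4, 2))   # (1024.0, 6144.0)
```

This is why K=1 gives "3 replicas with none of the advantages": the space saving of erasure coding only appears once the object is actually split across K > 1 data chunks.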

Re: [ceph-users] 4 GB mon database?

2015-01-22 Thread Brian Rak
On 1/21/2015 5:56 PM, Gregory Farnum wrote: On Mon, Jan 19, 2015 at 2:48 PM, Brian Rak b...@gameservers.com wrote: A while ago, I ran into this issue: http://tracker.ceph.com/issues/10411 I did manage to solve that by deleting the PGs, but ever since that issue my mon databases have been

Re: [ceph-users] erasure coded pool why ever k1?

2015-01-22 Thread Chad William Seys
Hi Loic, The size of each chunk is object size / K. If you have K=1 and M=2 it will be the same as 3 replicas with none of the advantages ;-) Interesting! I did not see this explained so explicitly. So is the general explanation of k and m something like: k, m: fault tolerance of m+1

[ceph-users] how to remove storage tier

2015-01-22 Thread Chad William Seys
Hi all, I've got a tiered pool arrangement with a replicated pool and an erasure-coded pool. I set it up such that the replicated pool is in front of the erasure-coded pool. I now want to change the properties of the erasure-coded pool. Is there a way of switching which erasure profile
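One possible sequence for detaching a cache tier looks roughly like the following. This is a sketch, not a tested recipe: the pool names "hot-pool" and "cold-ec" are placeholders, and note that the erasure profile of an existing pool cannot be changed in place, so switching profiles generally means creating a new pool and migrating the data.

```shell
# Stop new writes landing in the cache tier, then flush and
# evict everything back to the backing pool.
ceph osd tier cache-mode hot-pool forward
rados -p hot-pool cache-flush-evict-all

# Detach the cache tier from the backing erasure-coded pool.
ceph osd tier remove-overlay cold-ec
ceph osd tier remove cold-ec hot-pool

# A pool's erasure profile is fixed at creation, so to switch
# profiles: define a new profile, create a new EC pool with it,
# and copy the data across (pg counts here are hypothetical).
ceph osd erasure-code-profile set newprofile k=4 m=2
ceph osd pool create cold-ec-new 128 128 erasure newprofile
rados cppool cold-ec cold-ec-new
```

After verifying the copy, the old pool can be deleted and the tier re-attached in front of the new pool with `ceph osd tier add`.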