We have a 16-node cluster which reported a number of OSDs losing heartbeat
connections and then marking OSDs down (even though they were up).
This caused a number of PGs to go peering or down and the cluster to stop
serving data.
We are running Ceph version 0.87.
The OSDs that reported down
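While investigating flapping of this kind, a common first step is to relax the heartbeat timing and require more down reports before the monitors act. A hedged sketch of the relevant knobs (the values are illustrative, not recommendations, and can be injected at runtime):

```shell
# Illustrative only: relax heartbeat timing while investigating.
# Defaults around 0.87 are osd_heartbeat_grace=20 and
# mon_osd_min_down_reporters=1.
ceph tell osd.* injectargs '--osd_heartbeat_grace 60'
ceph tell mon.* injectargs '--mon_osd_min_down_reporters 5'

# Temporarily stop the monitors from marking OSDs down at all
# (remember to clear the flag once the cluster is stable again):
ceph osd set nodown
ceph osd unset nodown
```

The `nodown` flag only masks the symptom; the heartbeat failures themselves (often network or overloaded OSD hosts) still need to be tracked down.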
Hello,
On Thu, 22 Jan 2015 08:32:13 +0100 (CET) Alexandre DERUMIER wrote:
Hi,
From my last benchmark,
Using which version of Ceph?
I was around 12 iops rand read 4k, 2 iops rand write 4k (3
nodes with 2 SSD OSDs + journal on SSD, Intel 3500).
That was with replication of 1, if I remember right?
Thanks for the insight,
I am aware from the threads on the mailing list that currently Ceph is
unable to make use of the full performance of the SSDs. However, while
we will not get maximum performance, we will certainly get better latency
than we experience with spinning disks.
All our
From my last benchmark,
Using which version of Ceph?
It was with Giant (big improvements with thread sharding, so you can use more
cores per OSD).
That was with replication of 1, if I remember right?
For reads, I don't see much difference with replication 1, 2, or 3, but my
client was CPU
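For comparing numbers like these across clusters, `rados bench` gives a quick, repeatable small-block measurement against a pool (the pool name below is a placeholder, and the block size/concurrency are illustrative):

```shell
# Hypothetical pool name "testpool"; 4 KiB objects, 16 concurrent ops.
# Write phase -- keep the objects so the read phase has data to read:
rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup

# Sequential-read phase over the objects written above:
rados bench -p testpool 60 seq -t 16

# Remove the benchmark objects afterwards:
rados -p testpool cleanup
```

`rados bench` measures the RADOS layer directly, so it sidesteps client-side bottlenecks like the CPU-bound client mentioned above; for a client-side view (e.g. RBD), a tool such as fio is the usual choice.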
Hi,
On 22/01/2015 16:37, Chad William Seys wrote:
Hi Loic,
The size of each chunk is object size / K. If you have K=1 and M=2 it will
be the same as 3 replicas with none of the advantages ;-)
Interesting! I did not see this explained so explicitly.
So is the general explanation of k
On 1/21/2015 5:56 PM, Gregory Farnum wrote:
On Mon, Jan 19, 2015 at 2:48 PM, Brian Rak b...@gameservers.com wrote:
A while ago, I ran into this issue: http://tracker.ceph.com/issues/10411
I did manage to solve that by deleting the PGs; however, ever since that
issue my mon databases have been
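When monitor stores grow after events like this, the usual knobs are an on-demand compaction or compaction at startup. A sketch (the monitor ID is a placeholder, and this is a general remedy, not a confirmed fix for the linked tracker issue):

```shell
# Ask a monitor to compact its leveldb store on demand
# ("a" is a placeholder for your monitor's name):
ceph tell mon.a compact

# Or have each monitor compact its store whenever it starts,
# via ceph.conf on the monitor hosts:
#   [mon]
#   mon compact on start = true
```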
Hi Loic,
The size of each chunk is object size / K. If you have K=1 and M=2 it will
be the same as 3 replicas with none of the advantages ;-)
Interesting! I did not see this explained so explicitly.
So is the general explanation of k and m something like:
k, m: fault tolerance of m+1
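The arithmetic behind Loic's remark can be checked directly: with k data chunks and m coding chunks, each chunk is object size / k, raw usage is (k+m)/k times the object, and the pool survives the loss of any m chunks. A small sketch with illustrative numbers:

```shell
# Illustrative numbers: a 4 MiB object.
object_size=$((4 * 1024 * 1024))

# k=2, m=1: chunks are half the object; raw usage is 1.5x the object.
k=2; m=1
chunk=$((object_size / k))
raw=$(( (k + m) * chunk ))
echo "k=$k m=$m chunk=$chunk raw=$raw survives=$m lost chunks"

# k=1, m=2: every chunk is the whole object, so raw usage is 3x --
# the same on-disk cost as size=3 replication, as noted above.
k=1; m=2
chunk=$((object_size / k))
raw=$(( (k + m) * chunk ))
echo "k=$k m=$m chunk=$chunk raw=$raw survives=$m lost chunks"
```

So the fault tolerance of a k+m pool is m lost chunks, which is comparable to replication with m+1 copies, but at (k+m)/k raw usage instead of (m+1)x.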
Hi all,
I've got a tiered pool arrangement with a replicated pool and an erasure-coded
pool, set up such that the replicated pool is in front of the erasure-coded
pool. I now want to change the properties of the erasure-coded pool.
Is there a way of switching which erasure profile
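As far as I know, an erasure-code profile is baked into a pool at creation time and cannot be swapped on an existing pool; the usual route is to create a new profile and a new pool, then migrate the data. A sketch (profile/pool names and parameters are placeholders):

```shell
# Define a new profile (k/m values here are examples, not recommendations):
ceph osd erasure-code-profile set newprofile k=4 m=2 \
    ruleset-failure-domain=host

# Create a fresh EC pool that uses it (128 PGs as a placeholder):
ceph osd pool create ecpool-new 128 128 erasure newprofile

# Data then has to be copied from the old EC pool into the new one;
# the profile of the existing pool itself cannot be edited in place.
```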