On Mon, 18 Feb 2013, Ben Rowland wrote:

Hi Sam,

I can still reproduce it. I'm not clear if this is actually the
expected behaviour of Ceph: if reads/writes are done at the primary
OSD, and if a new primary can't be 'elected' (say due to a net-split
between failure domains), then is a failure expected, for consistency
guarantees? Or am
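
(For reference, one way to check which OSD is the acting primary for a
given object or PG is roughly as below; the pool name, object name and
pgid are placeholders:)

  # map an object to its PG and to the up/acting OSD sets (the primary is listed first)
  $ ceph osd map <pool> <object>

  # detailed peering state for that PG, including the current acting set
  $ ceph pg <pgid> query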

On Fri, Feb 15, 2013 at 6:29 AM, Ben Rowland wrote:

Further to my question about reads on a degraded PG, my tests show
that indeed reads from rgw fail when not all OSDs in a PG are up, even
when the data is physically available on an up/in OSD.

I have a "size" and "min_size" of 2 on my pool, and 2 hosts with 2
OSDs on each. Crush map is set to wri
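
(For reference, the replication settings involved can be inspected and
adjusted roughly as below; the pool name is a placeholder, and the CRUSH
rule is only a sketch of a typical "one replica per host" rule, not
necessarily the one in use here. With size = min_size = 2, a PG that has
only one replica available will not serve I/O, which would match the
failed reads above; lowering min_size to 1 lets the PG stay active on the
surviving copy, at the cost of weaker guarantees.)

  $ ceph osd pool get <pool> size        # number of replicas
  $ ceph osd pool get <pool> min_size    # replicas required for the PG to serve I/O
  $ ceph osd pool set <pool> min_size 1  # allow I/O with a single surviving replica

  # sketch of a CRUSH rule placing each replica on a different host
  rule replicated_across_hosts {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host
          step emit
  }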

On Wed, Feb 13, 2013 at 3:40 AM, Ben Rowland wrote:

Hi,

Apologies that this is a fairly long post, but hopefully all my
questions are similar (or even invalid!)

Does Ceph allow writes to proceed if it's not possible to satisfy the
rules for replica placement across failure domains, a
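
(One way to observe what actually happens to writes when the placement
rules cannot be satisfied is to stop an OSD and write directly with the
rados tool; pool and object names below are placeholders, and the stop
command assumes sysvinit-managed daemons:)

  # stop one OSD so its PGs become degraded
  $ service ceph stop osd.1

  # try a write and a read against the affected pool
  $ rados -p <pool> put testobj /etc/hosts
  $ rados -p <pool> get testobj /tmp/testobj.out

  # see whether the affected PGs are still active or are blocking I/O
  $ ceph -s
  $ ceph health detail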