I still don't understand why I get any clean PGs in the erasure-coded
pool, when with two OSDs down there is no more redundancy, and
therefore all PGs should be undersized (or so I think).
I repeated the experiment by bringing the two remaining OSDs online, and
then killing them, and got results
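(As a side note for anyone reproducing this: the PGs of a single pool can
be listed by state, which makes it easy to see which ones stayed clean and
which went undersized. The pool name cephfs_data below is just a
placeholder.)

    ceph health detail
    ceph pg ls-by-pool cephfs_data undersized
    ceph pg ls-by-pool cephfs_data clean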
On Wed, May 9, 2018 at 4:37 PM, Maciej Puzio wrote:
My setup consists of two pools on 5 OSDs, and is intended for cephfs:
1. erasure-coded data pool: k=3, m=2, size=5, min_size=3 (originally
4), number of PGs=128
2. replicated metadata pool: size=3, min_size=2, number of PGs=100
When all OSDs were online, all PGs from both pools had status
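(For reference, a setup along these lines could be created roughly as
follows; the profile, pool, and filesystem names are placeholders, and
crush-failure-domain=osd is assumed because there are only 5 OSDs.)

    ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=osd
    ceph osd pool create cephfs_data 128 128 erasure ec32
    ceph osd pool set cephfs_data allow_ec_overwrites true   # needed for an EC cephfs data pool (BlueStore)
    ceph osd pool set cephfs_data min_size 3                 # default would be k+1 = 4
    ceph osd pool create cephfs_metadata 100 100 replicated
    ceph fs new cephfs cephfs_metadata cephfs_data           # may require --force with an EC data pool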
On Tue, May 8, 2018 at 2:16 PM Maciej Puzio wrote:
Thank you everyone for your replies. However, I feel that at least
part of the discussion deviated from the topic of my original post. As
I wrote before, I am dealing with a toy cluster, whose purpose is not
to provide resilient storage, but to evaluate ceph and its behavior
in the event of a
On Tue, May 8, 2018 at 12:07 PM, Dan van der Ster wrote:
On Tue, May 8, 2018 at 7:35 PM, Vasu Kulkarni wrote:
On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio wrote:
> I am an admin in a research lab looking for a cluster storage
> solution, and a newbie to ceph. I have set up a mini toy cluster on
> some VMs, to familiarize myself with ceph and to test failure
> scenarios. I am using ceph
You talked about "using default settings wherever possible"... Well, Ceph's
default settings, wherever they exist, are to not allow you to write unless
you have at least 1 more copy that you can lose without data loss.
If your bosses require you to be able to lose 2 servers and still serve
2018-05-08 1:46 GMT+02:00 Maciej Puzio :
> Paul, many thanks for your reply.
> Thinking about it, I can't decide if I'd prefer to operate the storage
> server without redundancy, or have it automatically force a downtime,
> subjecting me to a rage of my users and my boss.
>
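(The size and min_size a pool actually ended up with can be checked
directly; cephfs_data is again a placeholder pool name.)

    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_data min_size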
It's a very bad idea to accept data if you can't guarantee that it will be
stored in a way that tolerates a disk outage without data loss. Just don't.
Increase the number of coding chunks to 3 if you want to withstand two
simultaneous disk
failures without impacting availability.
Paul
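(A sketch of what that would look like; note that the erasure-code profile
of an existing pool cannot be changed, so this means creating a new pool
and migrating data, and k=3 m=3 needs at least 6 OSDs when the failure
domain is osd. Names are placeholders.)

    ceph osd erasure-code-profile set ec33 k=3 m=3 crush-failure-domain=osd
    ceph osd pool create ecpool_k3m3 128 128 erasure ec33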
Paul, many thanks for your reply.
Thinking about it, I can't decide if I'd prefer to operate the storage
server without redundancy, or have it automatically force a downtime,
subjecting me to the rage of my users and my boss.
But I think that the typical expectation is that the system serves the
data
The docs seem wrong here. min_size is available for erasure coded pools and
works like you'd expect it to work.
Still, it's not a good idea to reduce it to the number of data chunks.
Paul
2018-05-07 23:26 GMT+02:00 Maciej Puzio :
> I am an admin in a research lab looking
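(min_size on an erasure-coded pool is set the same way as on a replicated
one; per the advice above, k+1 = 4 is the safer value for k=3, m=2. The
pool name is a placeholder.)

    ceph osd pool set cephfs_data min_size 4   # k+1 for k=3, m=2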