Hello,
On Wed, 10 Apr 2019 20:09:58 +0200 Paul Emmerich wrote:
> On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote:
> > [...]

On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote:
> [...]

On 10/04/2019 18.11, Christian Balzer wrote:
> [...]

Hello,
Another thing that crossed my mind aside from failure probabilities caused
by actual HDDs dying is of course the little detail that most Ceph
installations will have WAL/DB (journal) on SSDs, the most typical
ratio being 1:4.
And given the current thread about compaction killing
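To put a number on the 1:4 point above: if each SSD carries the WAL/DB for 4 HDD OSDs, one SSD failure downs 4 OSDs at once. A minimal sketch of the combinatorics, assuming a 20-OSD layout with 5 WAL/DB SSDs (illustrative numbers matching the test cluster mentioned later in the thread, not something measured):

```python
from math import comb

n_osds = 20          # assumed: 20 HDD OSDs
wal_db_ratio = 4     # 4 OSDs share one WAL/DB SSD -> 5 SSDs
n_ssds = n_osds // wal_db_ratio

# One SSD dying takes its 4 OSDs down together, so in a replica-2 pool
# every PG whose two OSDs sit behind the same SSD loses both copies
# from a single device failure.
pairs_total = comb(n_osds, 2)           # 190 possible OSD pairs
pairs_per_ssd = comb(wal_db_ratio, 2)   # 6 pairs behind one SSD
pairs_at_risk = n_ssds * pairs_per_ssd  # 30 pairs each killable by one SSD

print(pairs_total, pairs_per_ssd, pairs_at_risk)  # -> 190 6 30
```

If the CRUSH failure domain is the host and each SSD only serves OSDs within one host, these co-SSD pairs are already excluded from placement; the exposure is real when replicas can land behind the same SSD.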
On Tue, 2 Apr 2019 19:04:28 +0900 Hector Martin wrote:
> [...]

On 02/04/2019 18.27, Christian Balzer wrote:
> I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2
> pool with 1024 PGs.

(20 choose 2) is 190, so you're never going to have more than that many
unique sets of OSDs.
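That bound is easy to check, and it has a direct consequence for the 1024-PG pool: with far more PGs than possible pairs, essentially every pair of OSDs backs some PG. A sketch assuming uniform, independent placement (real CRUSH placement is neither, so this is only illustrative):

```python
from math import comb

n_osds, n_pgs = 20, 1024
pairs = comb(n_osds, 2)          # 190 distinct replica-2 sets
print(pairs)                     # -> 190

# Chance that one particular pair is used by no PG at all, if each PG
# picked its pair uniformly and independently (an approximation):
p_unused = (1 - 1 / pairs) ** n_pgs
print(p_unused)                  # ~0.0045, i.e. nearly every pair is used
```

If CRUSH additionally forces the two replicas onto different hosts (5 hosts of 4 OSDs each), the same-host pairs drop out and the bound tightens from 190 to 190 - 5*C(4,2) = 160.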
I just looked at the OSD distribution for a replica 3 pool
Hello Hector,
Firstly I'm so happy somebody actually replied.
On Tue, 2 Apr 2019 16:43:10 +0900 Hector Martin wrote:
> [...]

On 31/03/2019 17.56, Christian Balzer wrote:
> Am I correct that unlike with replication there isn't a maximum size
> of the critical path OSDs?

As far as I know, the math for calculating the probability of data loss
wrt placement groups is the same for EC and for replication. Replication
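The shape of that math can be sketched numerically. Under the simplifying assumption that each PG's placement set is uniform and independent (CRUSH is neither, so this only illustrates the form of the calculation), the chance that a given set of failed OSDs wipes out some PG depends on the set size, which for replication is the replica count and for an EC profile is k+m with up to m losses survivable:

```python
from math import comb

def p_loss(n_osds, set_size, n_pgs):
    """Probability that one specific set of `set_size` failed OSDs is
    exactly the placement set of at least one of `n_pgs` PGs, assuming
    uniform independent placement (an approximation of CRUSH)."""
    p_one = 1 / comb(n_osds, set_size)   # one PG maps onto the failed set
    return 1 - (1 - p_one) ** n_pgs      # at least one of n_pgs does

# Replica 2 on a 20-OSD cluster with 1024 PGs: almost any two
# simultaneous OSD failures destroy some PG (value close to 1).
print(p_loss(20, 2, 1024))

# Replica 3, same cluster: C(20,3) = 1140 possible sets, so the chance
# that 3 simultaneous failures hit a full set is noticeably lower.
print(p_loss(20, 3, 1024))
```

The `p_loss` helper and its parameters are hypothetical names for illustration; the takeaway is only that the calculation is driven by the number of PGs versus the number of possible placement sets, regardless of whether the set comes from replication or EC.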
Hello,

considering erasure coding for the first time (so excuse the seemingly
obvious questions) and staring at the various previous posts and
documentation, in particular:

http://docs.ceph.com/docs/master/dev/osd_internals/erasure_coding/

Am I correct that unlike with replication there isn't a maximum size
of the critical path OSDs?
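For reference while reading the EC docs above: the size of an EC PG's acting set is k+m, fixed by the erasure-code profile, so the "critical path" for each PG is k+m OSDs of which any m may be lost. A hypothetical 4+2 profile (the profile and pool names, and the PG count, are illustrative, not from the thread):

```shell
# Create a 4+2 profile: each PG spans 6 OSDs (k=4 data, m=2 coding
# shards), with shards separated across hosts.
ceph osd erasure-code-profile set ec42 \
    k=4 m=2 crush-failure-domain=host

# Pool using it: every object is striped over 6 OSDs, and up to m=2
# of them can fail without losing data.
ceph osd pool create ecpool 64 64 erasure ec42
```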