Re: [ceph-users] Re-weight Entire Cluster?

2017-05-30 Thread Anthony D'Atri
> From: Anthony D'Atri <a...@dreamsnake.net>
> Date: Tuesday, May 30, 2017 at 1:10 PM
> To: ceph-users <ceph-users@lists.ceph.com>
> Cc: Cave Mike <mc...@uvic.ca>
> Subject: Re: [ceph-users] Re-weight Entire Cluster?
>
>> It appears the curre

Re: [ceph-users] Re-weight Entire Cluster?

2017-05-30 Thread Mike Cave
May 30, 2017 at 1:10 PM
To: ceph-users <ceph-users@lists.ceph.com>
Cc: Cave Mike <mc...@uvic.ca>
Subject: Re: [ceph-users] Re-weight Entire Cluster?

> It appears the current best practice is to weight each OSD according to its
> size (3.64 for 4TB drive, 7.45 for 8TB drive,

Re: [ceph-users] Re-weight Entire Cluster?

2017-05-30 Thread Anthony D'Atri
> It appears the current best practice is to weight each OSD according to its
> size (3.64 for 4TB drive, 7.45 for 8TB drive, etc).

OSDs are created with those sorts of CRUSH weights by default, yes. Which is
convenient, but it's important to know that those weights are arbitrary, and what
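The figures quoted above come from expressing a drive's marketed decimal capacity in binary TiB, which is the unit Ceph's OSD creation tooling uses for the default CRUSH weight. A minimal sketch of that arithmetic (the function name is mine, not from the thread; real drives report an exact sector count, so in practice the value can differ slightly from a pure 4TB or 8TB conversion):

```python
def default_crush_weight(size_tb: float) -> float:
    """Approximate default CRUSH weight for a drive of `size_tb` decimal TB.

    Ceph's tooling sets the initial CRUSH weight to the device size in TiB,
    so we convert marketed decimal terabytes (10**12 bytes) to binary
    tebibytes (2**40 bytes) and round to two decimals.
    """
    size_bytes = size_tb * 10**12
    return round(size_bytes / 2**40, 2)

print(default_crush_weight(4))  # -> 3.64, matching the 4TB figure in the thread
print(default_crush_weight(8))  # -> 7.28 by pure conversion; an 8TB drive with
                                #    slightly more raw capacity yields ~7.45
```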

Re: [ceph-users] Re-weight Entire Cluster?

2017-05-29 Thread Udo Lembke
Hi Mike,

On 30.05.2017 01:49, Mike Cave wrote:
>
> Greetings All,
>
> I recently started working with our ceph cluster here and have been
> reading about weighting.
>
> It appears the current best practice is to weight each OSD according
> to its size (3.64 for 4TB drive, 7.45 for
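For reference, adjusting a CRUSH weight after the fact is done per OSD; a hedged sketch of the commands involved (the OSD id and weight here are example values, not from the thread, and these must be run against a live cluster with admin credentials):

```shell
# Inspect current CRUSH weights for the whole tree.
ceph osd tree

# Set a permanent CRUSH weight (in TiB-ish units) for one OSD,
# e.g. a 4TB drive weighted by its size.
ceph osd crush reweight osd.12 3.64
```

Note that `ceph osd crush reweight` changes the CRUSH weight (long-term data placement share) and is distinct from `ceph osd reweight`, which applies a temporary 0-1 override.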