From: Sage Weil sw...@redhat.com
To: Quenten Grasso qgra...@onq.com.au
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 17 July, 2014 4:44:45 PM
Subject: Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

On Thu, 17 Jul 2014, Quenten Grasso wrote:
Hi Sage
From: Quenten Grasso qgra...@onq.com.au
To: Andrija Panic andrija.pa...@gmail.com, Sage Weil sw...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, 16 July, 2014 1:20:19 PM
Subject: Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

Hi Sage, Andrija, list,

I have seen the tunables issue on our cluster when I upgraded to firefly.
I ended up going back
On Tue, 15 Jul 2014, Andrija Panic wrote:

Hi Sage,

Since this problem is tunables-related, do we need to expect the same
behavior or not when we do regular data rebalancing caused by adding or
removing an OSD? I guess not, but I would like your confirmation.
I'm already on optimal tunables, but I'm afraid to test this by e.g.
shutting down 1 OSD.
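For reference, a minimal sketch of how such a test is commonly done
without triggering a full rebalance - these are standard ceph CLI
commands as of firefly, and the throttle values below are just
conservative examples:

  # keep the cluster from marking the down OSD "out" and re-replicating
  ceph osd set noout

  # throttle backfill/recovery so client I/O stays responsive
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # ...stop and restart the OSD under test, watching `ceph -s`...

  # restore normal behavior afterwards
  ceph osd unset noout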
Sent: Sunday, 13 July, 2014 9:54:17 PM
Subject: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

Hi,

After the ceph upgrade (0.72.2 to 0.80.3) I issued ceph osd crush
tunables optimal, and after only a few minutes I added 2 more OSDs to
the Ceph cluster...
So these 2 changes were more or less done at the same time - rebalancing
because of tunables optimal, and rebalancing because of the newly added
OSDs.
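As an aside, a rough sketch of how to keep two rebalance-triggering
changes from overlapping (standard firefly-era CLI; the OSD ids, weights,
and host names below are hypothetical):

  # step 1: change the tunables and let that remapping finish first
  ceph osd crush tunables optimal
  ceph -s    # watch until the cluster returns to HEALTH_OK

  # step 2: only then add the new OSDs (daemons already prepared)
  ceph osd crush add osd.10 1.0 host=node1
  ceph osd crush add osd.11 1.0 host=node2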
I've added some additional notes/warnings to the upgrade and release
notes:
https://github.com/ceph/ceph/commit/fc597e5e3473d7db6548405ce347ca7732832451
If there is somewhere else where you think a warning flag would be useful,
let me know!
Generally speaking, we want to be able to cope with
Perhaps here: http://ceph.com/releases/v0-80-firefly-released/
Thanks
Hi,

Which values are changed by ceph osd crush tunables optimal?
Is it perhaps possible to change some of the parameters on the weekend
before the upgrade, to have more time? (This depends on whether the
parameters are available in 0.72...)
The warning says it can take days... we have a cluster
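For the record, the values in effect can be inspected directly - these
are standard ceph CLI commands (show-tunables may require a recent
release), and as far as I understand the firefly 'optimal' profile is
the bobtail-era values plus the new chooseleaf_vary_r = 1, which is what
causes the large data movement:

  # summary of the crush tunables currently in effect
  ceph osd crush show-tunables

  # or decompile the crush map and read the 'tunable' lines at the top
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
  head /tmp/crushmap.txt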
On Mon, 14 Jul 2014, Udo Lembke wrote:
Hi,
Which values are changed by ceph osd crush tunables optimal?

There are some brand-new crush tunables that fix... I don't even remember
off hand. In general, you probably want to stay away from 'optimal'
unless this is a fresh cluster and all
Udo, I had all VMs completely inoperable - so don't set optimal for
now...
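If anyone else gets caught by this, the tunables can be set back to an
explicit profile - standard firefly-era commands; note that changing
tunables in either direction triggers data movement:

  # revert to pre-tunables legacy behavior
  ceph osd crush tunables legacy

  # or pin an older explicit profile instead of 'optimal'
  ceph osd crush tunables bobtail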