Re: [ceph-users] Crush Bucket move crashes mons

2018-03-20 Thread warren.jeffs
Hi, I made the changes directly to the crush map, i.e., (1) deleting all the weight_set blocks and then moving the bucket
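For context, the offline crush map edit cycle described here usually looks roughly like the following (file names are illustrative, and hand-removing the weight_set blocks should be tried on a test copy first):

    # grab and decompile the current crush map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # edit crush.txt: delete the weight_set blocks (in the choose_args
    # section of the decompiled map), then move the new rack buckets
    # under root=default

    # recompile and inject the edited map
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new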

Re: [ceph-users] Crush Bucket move crashes mons

2018-03-16 Thread warren.jeffs
From: Paul Emmerich [paul.emmer...@croit.io]: Hi, looks like it fails to adjust the number of weight set entries when moving the entries. The good
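If the per-bucket weight_set entries are indeed the trigger, a less drastic route than hand-editing the map may be the weight-set CLI (a sketch, assuming a Luminous-or-later cluster; verify the subcommands against your release first):

    # list any weight sets defined in the crush map
    ceph osd crush weight-set ls

    # dump the crush map as JSON and inspect the choose_args section,
    # which is where the weight_set values live
    ceph osd crush dump

    # drop the compatibility weight set entirely; it can be recreated later
    ceph osd crush weight-set rm-compat
    ceph osd crush weight-set create-compat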

Re: [ceph-users] Crush Bucket move crashes mons

2018-03-16 Thread Paul Emmerich
the CLI I can at least cancel the command and the monitor comes back up fine. Many thanks. Warren

Re: [ceph-users] Crush Bucket move crashes mons

2018-03-16 Thread warren.jeffs
From: Paul Emmerich [mailto:paul.emmer...@croit.io]: Hi, the error looks like there might be something wrong with the device classes

Re: [ceph-users] Crush Bucket move crashes mons

2018-03-16 Thread Paul Emmerich
Hi, the error looks like there might be something wrong with the device classes (which are managed via separate trees with magic names behind the scenes). Can you post your crush map and the command that you are trying to run? Paul
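Those per-class shadow trees can be inspected directly, which may help when checking whether a bucket exists in one class tree but not another (a quick sketch, Luminous or later):

    # list the device classes defined in the cluster
    ceph osd crush class ls

    # show the crush tree including the per-class shadow buckets
    # (they appear with names like rack1~hdd)
    ceph osd crush tree --show-shadow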

[ceph-users] Crush Bucket move crashes mons

2018-03-15 Thread warren.jeffs
Hi All, I am having some interesting challenges. I am trying to move 2 new nodes + 2 new racks into my default root; I have added them to the cluster outside of root=default. They are all in and up, and seem happy. The new nodes have all 12 OSDs in them and they are all 'up'. So when going to
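For reference, moving a rack bucket (with its hosts and OSDs) into the default root is normally one crush move per bucket; the names below are illustrative, and since this is the step that triggered the mon crash in this thread, keeping a copy of the current crush map first is cheap insurance:

    # keep a backup of the current crush map
    ceph osd getcrushmap -o crush.backup

    # move a new rack, with its hosts and OSDs, under root=default
    ceph osd crush move rack3 root=default

    # confirm the placement
    ceph osd tree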