Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-19 Thread Simon Leinen
cephmailinglist writes: > e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0 chown ceph:ceph > [...] > [...] Also at that time one of our pools got a lot of extra data; those files were stored with root permissions since we had not restarted the Ceph daemons yet, the 'find' in step e
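For reference, the ownership fix quoted here is roughly the following, a sketch assuming the Debian/Ubuntu packages (where the ceph user has UID 64045) and that the daemons are restarted afterwards so they pick up the new ownership:

    # re-own everything under /var/lib/ceph that is not already owned by UID 64045
    # (the 'ceph' user on Debian/Ubuntu); -print0 | xargs -0 copes with odd filenames
    # and batches many paths into each chown invocation
    find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph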

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-14 Thread George Mihaiescu
Hi, We initially upgraded from Hammer to Jewel while keeping the ownership unchanged, by adding "setuser match path = /var/lib/ceph/$type/$cluster-$id" in ceph.conf. Later, we used the following steps to change from running as root to running as ceph. On the storage nodes, we ran the following
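The per-node switch described here might look like the sketch below; the exact commands are assumptions (the original steps are cut off above), and upstart-based nodes would use stop/start jobs instead of systemctl:

    # one storage node at a time: stop the OSDs on this node only
    systemctl stop ceph-osd.target
    # hand the data and log directories over to the ceph user
    chown -R ceph:ceph /var/lib/ceph /var/log/ceph
    # remove the "setuser match path = ..." line from ceph.conf on this node,
    # then bring the OSDs back, now running as ceph:ceph
    systemctl start ceph-osd.target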

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-14 Thread Richard Arends
On 03/13/2017 02:02 PM, Christoph Adomeit wrote: Christoph, Thanks for the detailed upgrade report. We have another scenario: We have already upgraded to Jewel 10.2.6 but we are still running all our monitors and OSD daemons as root using the setuser match path directive. What would be the

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-14 Thread Richard Arends
On 03/12/2017 07:54 PM, Florian Haas wrote: Florian, For others following this thread who still have the Hammer→Jewel upgrade ahead: there is a ceph.conf option you can use here; no need to fiddle with the upstart scripts. setuser match path = /var/lib/ceph/$type/$cluster-$id Ah, I did not
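For those looking for the exact snippet, the option is a one-liner in ceph.conf; placing it under [osd] (or [global] to cover all daemon types) is an assumption here, as the thread only quotes the option itself:

    [osd]
    # run each daemon as the owner of its data directory, so OSDs whose files
    # are still owned by root keep running as root until they are re-owned
    setuser match path = /var/lib/ceph/$type/$cluster-$id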

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-13 Thread Christoph Adomeit
Thanks for the detailed upgrade report. We have another scenario: We have already upgraded to Jewel 10.2.6 but we are still running all our monitors and OSD daemons as root using the setuser match path directive. What would be the recommended way to have all daemons running as ceph:ceph user

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-13 Thread Piotr Dałek
On 03/13/2017 11:07 AM, Dan van der Ster wrote: On Sat, Mar 11, 2017 at 12:21 PM, wrote: The next and biggest problem we encountered had to do with the CRC errors on the OSD map. On every map update, the OSDs that were not upgraded yet got that CRC error and

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-13 Thread Dan van der Ster
On Sat, Mar 11, 2017 at 12:21 PM, wrote: > The next and biggest problem we encountered had to do with the CRC errors on the OSD map. On every map update, the OSDs that were not upgraded yet got that CRC error and asked the monitor for a full OSD map instead of

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread Christian Balzer
Hello, On Sun, 12 Mar 2017 19:54:10 +0100 Florian Haas wrote: > On Sat, Mar 11, 2017 at 12:21 PM, wrote: > > The upgrade of our biggest cluster, nr 4, did not go without problems. Since we were expecting a lot of "failed to encode map e with expected crc"

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread Christian Balzer
Hello, On Sun, 12 Mar 2017 19:52:12 +1000 Brad Hubbard wrote: > On Sun, Mar 12, 2017 at 6:36 AM, Christian Theune wrote: > > Hi, thanks for that report! Glad to hear a mostly happy report. I’m still on the fence … ;) I have had reports that Qemu

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread Florian Haas
On Sat, Mar 11, 2017 at 12:21 PM, wrote: > The upgrade of our biggest cluster, nr 4, did not go without problems. Since we were expecting a lot of "failed to encode map e with expected crc" messages, we disabled clog to monitors with 'ceph tell osd.* injectargs
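The setting being injected here is clog_to_monitors; a sketch of the command (the exact invocation is cut off above, so treat the flag as an assumption):

    # stop every OSD from forwarding its cluster-log entries to the monitors,
    # which silences the flood of "failed to encode map ... with expected crc" messages
    ceph tell osd.* injectargs '--clog_to_monitors=false'
    # once the whole cluster is on Jewel, restart the OSDs or reset the option
    # to its shipped default ("default=true") to get cluster logging back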

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread Brad Hubbard
On Sun, Mar 12, 2017 at 6:36 AM, Christian Theune wrote: > Hi, thanks for that report! Glad to hear a mostly happy report. I’m still on the fence … ;) I have had reports that Qemu (librbd connections) will require updates/restarts before upgrading. What was your

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread cephmailinglist
On 03/11/2017 09:49 PM, Udo Lembke wrote: Hi Udo, Perhaps a "find /var/lib/ceph/ ! -uid 64045 -exec chown ceph:ceph" would do a better job?! We did exactly that (and also tried other combinations) and that is a workaround for the 'argument too long' problem, but then it would call an exec
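As a side note on the two variants discussed here: GNU find can batch the chown itself, so the per-file exec can be avoided without piping to xargs; a small sketch, not part of the original procedure:

    # -exec ... {} \;  runs one chown per file (very slow with millions of objects),
    # -exec ... {} +   batches arguments like xargs -0 and also avoids the
    #                  "argument list too long" error of a plain shell glob
    find /var/lib/ceph/ ! -uid 64045 -exec chown ceph:ceph {} +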

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-12 Thread cephmailinglist
On 03/11/2017 09:36 PM, Christian Theune wrote: Hello, I have had reports that Qemu (librbd connections) will require updates/restarts before upgrading. What was your experience on that side? Did you upgrade the clients? Did you start using any of the new RBD features, like fast diff? We
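For context, fast-diff on an already existing image depends on exclusive-lock and object-map; a sketch using the Jewel rbd CLI, with pool and image names as placeholders:

    # enable the prerequisite features, then fast-diff itself
    rbd feature enable rbd/myimage exclusive-lock
    rbd feature enable rbd/myimage object-map fast-diff
    # rebuild the object map so fast-diff results are trustworthy
    rbd object-map rebuild rbd/myimage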

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-11 Thread Udo Lembke
Hi, thanks for the useful info. On 11.03.2017 12:21, cephmailingl...@mosibi.nl wrote: > Hello list, > A week ago we upgraded our Ceph clusters from Hammer to Jewel and with this email we want to share our experiences. > ... > e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-11 Thread Christian Theune
Hi, thanks for that report! Glad to hear a mostly happy report. I’m still on the fence … ;) I have had reports that Qemu (librbd connections) will require updates/restarts before upgrading. What was your experience on that side? Did you upgrade the clients? Did you start using any of the new

[ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-11 Thread cephmailinglist
Hello list, A week ago we upgraded our Ceph clusters from Hammer to Jewel and with this email we want to share our experiences. We have four clusters: 1) Test cluster for all the fun things, completely virtual. 2) Test cluster for OpenStack: 3 monitors and 9 OSDs, all bare metal 3) Cluster
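The general Hammer-to-Jewel order recommended by the upstream release notes (a generic sketch, not a quote from this message) is: upgrade and restart the monitors first, then the OSDs, then MDS/RGW daemons, and finally the clients; only once every OSD runs Jewel should the corresponding flags be set:

    # only after all OSDs have been upgraded and restarted on Jewel
    ceph osd set sortbitwise
    ceph osd set require_jewel_osds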