Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
Systems Administrator, Research Computing Services Team, University of Victoria

From: Martin Verges
Date: Friday, November 15, 2019 at 11:52 AM
To: Janne Johansson
Cc: Cave Mike, ceph-users
Subject: Re: [ceph-users] Migrating from block to lvm

> I would consider doing it host-by-host wise, as you

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
University of Victoria, O: 250.472.4997

From: Janne Johansson
Date: Friday, November 15, 2019 at 11:46 AM
To: Cave Mike
Cc: Paul Emmerich, ceph-users
Subject: Re: [ceph-users] Migrating from block to lvm

On Fri 15 Nov 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:

> So would you recommend

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Martin Verges
I would consider doing it host by host, as you should always be able to handle the complete loss of a node. This would be much faster in the end, as you save a lot of time by not migrating data back and forth. However, this can lead to problems if your cluster is not configured according to the
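
A host-at-a-time rebuild could look roughly like the sketch below. All device paths (/dev/mapper/mpatha, /dev/mapper/mpathb) and OSD IDs (10, 11) are placeholders, and the sketch assumes the cluster has enough spare capacity to re-replicate a full node's worth of data:

    # Sketch only: substitute your host's real devices and OSD IDs.
    ceph osd set noout                        # don't mark OSDs out while we work
    systemctl stop ceph-osd.target            # stop every OSD on this host
    for id in 10 11; do
        umount /var/lib/ceph/osd/ceph-$id     # release the old ceph-disk mounts
        ceph osd destroy $id --yes-i-really-mean-it   # keep ID and CRUSH position
    done
    for dev in /dev/mapper/mpatha /dev/mapper/mpathb; do
        ceph-volume lvm zap --destroy $dev    # wipe the old ceph-disk layout
    done
    # Recreate each OSD on LVM, reusing its previous ID
    ceph-volume lvm create --data /dev/mapper/mpatha --osd-id 10
    ceph-volume lvm create --data /dev/mapper/mpathb --osd-id 11
    ceph osd unset noout                      # let backfill refill the new OSDs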

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Janne Johansson
On Fri 15 Nov 2019 at 19:40, Mike Cave wrote:

> So would you recommend doing an entire node at the same time or per-OSD?

You should be able to do it per-OSD (or per-disk, in case you run more than one OSD per disk) to minimize data movement over the network, letting other OSDs on the same
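
Per-OSD, the same idea might look like the following sketch; osd.42 and /dev/mapper/mpatha are hypothetical, and reusing the ID via --osd-id keeps the CRUSH weight so only that one OSD's PGs need backfilling:

    systemctl stop ceph-osd@42                    # stop just this OSD
    umount /var/lib/ceph/osd/ceph-42              # release its old ceph-disk mount
    ceph osd destroy 42 --yes-i-really-mean-it    # keep its ID and CRUSH position
    ceph-volume lvm zap --destroy /dev/mapper/mpatha   # wipe the old layout
    ceph-volume lvm create --data /dev/mapper/mpatha --osd-id 42   # rebuild on LVM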

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
So would you recommend doing an entire node at the same time or per-OSD?

Senior Systems Administrator, Research Computing Services Team, University of Victoria, O: 250.472.4997

On 2019-11-15, 10:28 AM, "Paul Emmerich" wrote:

> You'll have to tell LVM about multi-path, otherwise LVM gets

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Paul Emmerich
You'll have to tell LVM about multi-path; otherwise LVM gets confused. But that should be the only thing.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Nov
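
The usual place to do that is the devices section of /etc/lvm/lvm.conf. The snippet below is only a sketch; the mpath* names depend on how your multipath.conf aliases the devices:

    # /etc/lvm/lvm.conf -- sketch, adjust names to your multipath setup
    devices {
        # Detect multipath components so LVM ignores the underlying /dev/sd* paths
        multipath_component_detection = 1
        # Scan only the multipath devices, reject everything else
        filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
    }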

[ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
Greetings all! I am looking at upgrading to Nautilus in the near future (we are currently on Mimic). We have a cluster built on 480 OSDs, all using multipath and simple block devices. I see that the ceph-disk tool is now deprecated and that the ceph-volume tool doesn’t do everything that ceph-disk did for
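
Nautilus does ship a stopgap for existing ceph-disk OSDs: ceph-volume's "simple" mode, which records each OSD's layout as JSON under /etc/ceph/osd/ and activates from that, without converting anything to LVM. A minimal sketch, assuming the OSDs are still running when you scan:

    ceph-volume simple scan               # capture metadata for all running ceph-disk OSDs
    ceph-volume simple activate --all     # re-enable the OSDs from the scanned JSON files

This keeps the old OSDs bootable across the upgrade, so the per-OSD or per-host LVM migration discussed above can happen gradually afterwards.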