Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On Mon, 16 Oct 2017 18:32:06 -0400 (EDT) Anthony Verevkin wrote:

> > From: "Sage Weil" <s...@newdream.net>
> > To: "Alfredo Deza" <ad...@redhat.com>
> > Cc: "ceph-devel" <ceph-de...@vger.kernel.org>, ceph-users@lists.ceph.com
> > Sent: Monday, October 9, 2017 11:09:29 AM
> > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration
> > and disk partition support]
> >
> > To put this in context, the goal here is to kill ceph-disk in mimic.
> >
> > Perhaps the "out" here is to support a "dir" option where the user can
> > manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> > 'journal' or 'block' symlinks, and ceph-volume will do the last bits
> > that initialize the filestore or bluestore OSD from there. Then if
> > someone has a scenario that isn't captured by LVM (or whatever else we
> > support) they can always do it manually?
>
> In fact, now that bluestore only requires a few small files and symlinks
> to remain in /var/lib/ceph/osd/* without the extra requirements for
> xattrs support and xfs, why not simply leave those folders on the OS root
> filesystem and only point symlinks at the bluestore block and db devices?
> That would simplify OSD deployment so much - and the symlinks can then
> point to /dev/disk/by-uuid or by-path or an LVM path or whatever. The
> only downside to this approach that I see is that the disks themselves
> would no longer be transferable between hosts, as those few files that
> describe the OSD are no longer on the disk itself.

If the OS is on a RAID1, the chances of things being lost entirely are much
reduced, so moving OSDs to another host becomes a trivial exercise, one
would assume.

But yeah, this sounds fine to me, as it's extremely flexible.

Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Rakuten Communications
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On Mon, 16 Oct 2017, Anthony Verevkin wrote:

> > From: "Sage Weil" <s...@newdream.net>
> > To: "Alfredo Deza" <ad...@redhat.com>
> > Cc: "ceph-devel" <ceph-de...@vger.kernel.org>, ceph-users@lists.ceph.com
> > Sent: Monday, October 9, 2017 11:09:29 AM
> > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration
> > and disk partition support]
> >
> > To put this in context, the goal here is to kill ceph-disk in mimic.
> >
> > Perhaps the "out" here is to support a "dir" option where the user can
> > manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> > 'journal' or 'block' symlinks, and ceph-volume will do the last bits
> > that initialize the filestore or bluestore OSD from there. Then if
> > someone has a scenario that isn't captured by LVM (or whatever else we
> > support) they can always do it manually?
>
> In fact, now that bluestore only requires a few small files and symlinks
> to remain in /var/lib/ceph/osd/* without the extra requirements for
> xattrs support and xfs, why not simply leave those folders on the OS root
> filesystem and only point symlinks at the bluestore block and db devices?
> That would simplify OSD deployment so much - and the symlinks can then
> point to /dev/disk/by-uuid or by-path or an LVM path or whatever. The
> only downside to this approach that I see is that the disks themselves
> would no longer be transferable between hosts, as those few files that
> describe the OSD are no longer on the disk itself.

:) This is exactly what we're doing, actually:

        https://github.com/ceph/ceph/pull/18256

We plan to backport this to luminous, hopefully in time for the next point
release. dm-crypt is still slightly annoying to set up, but it will still
be much easier.

sage
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
> From: "Sage Weil" <s...@newdream.net> > To: "Alfredo Deza" <ad...@redhat.com> > Cc: "ceph-devel" <ceph-de...@vger.kernel.org>, ceph-users@lists.ceph.com > Sent: Monday, October 9, 2017 11:09:29 AM > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and > disk partition support] > > To put this in context, the goal here is to kill ceph-disk in mimic. > > Perhaps the "out" here is to support a "dir" option where the user > can > manually provision and mount an OSD on /var/lib/ceph/osd/*, with > 'journal' > or 'block' symlinks, and ceph-volume will do the last bits that > initialize > the filestore or bluestore OSD from there. Then if someone has a > scenario > that isn't captured by LVM (or whatever else we support) they can > always > do it manually? > In fact, now that bluestore only requires a few small files and symlinks to remain in /var/lib/ceph/osd/* without the extra requirements for xattrs support and xfs, why not simply leave those folders on OS root filesystem and only point symlinks to bluestore block and db devices? That would simplify the osd deployment so much - and the symlinks can then point to /dev/disk/by-uuid or by-path or lvm path or whatever. The only downside for this approach that I see is that disks themselves would no longer be transferable between the hosts as those few files that describe the OSD are no longer on the disk itself. Regards, Anthony ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
Hi,

On 09/10/17 16:09, Sage Weil wrote:

> To put this in context, the goal here is to kill ceph-disk in mimic.
>
> One proposal is to make it so new OSDs can *only* be deployed with LVM,
> and old OSDs with the ceph-disk GPT partitions would be started via
> ceph-volume support that can only start (but not deploy new) OSDs in that
> style.
>
> Is the LVM-only-ness concerning to anyone?
>
> Looking further forward, NVMe OSDs will probably be handled a bit
> differently, as they'll eventually be using SPDK and kernel-bypass
> (hence, no LVM). For the time being, though, they would use LVM.

This seems the best point to jump in on this thread. We have a Ceph
(Jewel / Ubuntu 16.04) cluster with around 3k OSDs, deployed with
ceph-ansible. They are plain-disk OSDs with journals on NVMe partitions. I
don't think this is an unusual configuration :)

I think to get rid of ceph-disk, we would want at least some of the
following:

* solid scripting for "move slowly through the cluster migrating OSDs from
  disk to LVM" - 1 OSD at a time isn't going to produce unacceptable
  rebalance load, but it is going to take a long time, so such scripting
  would have to cope with being stopped and restarted and suchlike (and be
  able to use the correct journal partitions); a rough sketch of such a
  loop follows this message

* ceph-ansible support for "some LVM, some plain disk" arrangements -
  presuming a "create new OSDs as LVM" approach when adding new OSDs or
  replacing failed disks

* support for plain disk (regardless of what provides it) that remains
  solid for some time yet

On Fri, 6 Oct 2017, Alfredo Deza wrote:

> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.

These issues seem mostly to be down to timeouts being too short and the
single global lock for activating OSDs.

> IMO we can't require any kind of data migration in order to upgrade,
> which means we either have to (1) keep ceph-disk around indefinitely, or
> (2) teach ceph-volume to start existing GPT-style OSDs. Given all of the
> flakiness around udev, I'm partial to #2. The big question for me is
> whether #2 alone is sufficient, or whether ceph-volume should also know
> how to provision new OSDs using partitions and no LVM. Hopefully not?

I think this depends on how well tools such as ceph-ansible can cope with
mixed OSD types (my feeling at the moment is "not terribly well", but I may
be being unfair).

Regards,
Matthew

--
The Wellcome Trust Sanger Institute is operated by Genome Research Limited,
a charity registered in England with number 1021457 and a company
registered in England with number 2742969, whose registered office is 215
Euston Road, London, NW1 2BE.
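[Editor's note] As a very rough illustration of the one-OSD-at-a-time
migration loop described above: the sketch below marks an OSD out, waits for
the cluster to settle, calls a site-specific redeploy step (ceph-volume lvm,
ceph-ansible, or whatever fits the site), and records progress so it can be
stopped and restarted. The state file name and the redeploy stub are
invented for the example; this is a sketch, not a supported tool.

#!/usr/bin/env python
"""Sketch of a resumable, one-OSD-at-a-time disk-to-LVM migration loop."""
import json
import subprocess
import time

STATE_FILE = "migration-state.json"  # remembers OSDs already redeployed


def health_ok():
    out = subprocess.check_output(["ceph", "health", "--format", "json"])
    health = json.loads(out)
    # 'status' on luminous; older releases report 'overall_status'
    return (health.get("status") or health.get("overall_status")) == "HEALTH_OK"


def wait_for_health_ok(poll=60):
    while not health_ok():
        time.sleep(poll)


def redeploy_osd(osd_id):
    # Site-specific: tear down the ceph-disk OSD and recreate it with
    # `ceph-volume lvm create` (or ceph-ansible), reusing the correct
    # journal/db partition for this OSD. Left as a stub here.
    raise NotImplementedError


def migrate(osd_ids):
    try:
        done = set(json.load(open(STATE_FILE)))
    except (IOError, ValueError):
        done = set()
    for osd_id in osd_ids:
        if osd_id in done:
            continue                      # already migrated on a previous run
        wait_for_health_ok()
        subprocess.check_call(["ceph", "osd", "out", str(osd_id)])
        wait_for_health_ok()              # let data drain off the old OSD
        redeploy_osd(osd_id)
        wait_for_health_ok()              # let backfill finish onto the new OSD
        done.add(osd_id)
        with open(STATE_FILE, "w") as f:
            json.dump(sorted(done), f)


if __name__ == "__main__":
    migrate(osd_ids=[0, 1, 2])            # placeholder list of OSD ids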
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On 10-10-2017 14:21, Alfredo Deza wrote:

> On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen wrote:
>> On 10-10-2017 13:51, Alfredo Deza wrote:
>>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer wrote:
>>>>
>>>> Hello,
>>>>
>>>> (pet peeve alert)
>>>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>>>
>>>>> To put this in context, the goal here is to kill ceph-disk in mimic.
>>
>> Right, that means we need a ceph-volume zfs before things get shot down.
>> Fortunately there is little history to carry over.
>>
>> But then still somebody needs to do the work. ;-|
>> Haven't looked at ceph-volume, but I'll put it on the agenda.
>
> An interesting take on zfs (and anything else we didn't set up from the
> get-go) is that we envisioned developers might want to craft plugins for
> ceph-volume and expand its capabilities, without placing the burden of
> coming up with new device technology to support.
>
> The other nice aspect of this is that a plugin would get to re-use all
> the tooling in place in ceph-volume. The plugin architecture exists but
> it isn't fully developed/documented yet.

I was part of the original discussion when ceph-volume said it was going to
be pluggable... And I would be a great proponent of the plugins, if only
because ceph-disk is rather convoluted to add to. Not that it cannot be
done, but the code is rather loaded with Linuxisms for its devices. And it
takes some care not to upset the old code, even to the point that the code
for a routine is refactored into 3 new routines: one OS selector, then the
old code for Linux, and the new code for FreeBSD. And that starts to look
like a poor man's plugin. :)

But still I need to find the time, and sharpen my Python skills. Luckily
mimic is 9 months away. :)

--WjW
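[Editor's note] For anyone wondering what such a plugin might look like in
practice, here is a bare-bones sketch of an out-of-tree "ceph-volume zfs"
subcommand registered through a setuptools entry point. The entry-point
group name ('ceph_volume_handlers') and the handler interface are
assumptions about the still under-documented plugin mechanism, so treat this
as a shape rather than a recipe.

# setup.py for a hypothetical out-of-tree "ceph-volume zfs" plugin.
from setuptools import setup

setup(
    name="ceph-volume-zfs",
    version="0.0.1",
    packages=["ceph_volume_zfs"],
    entry_points={
        # Assumed entry-point group; the real name may differ.
        "ceph_volume_handlers": [
            "zfs = ceph_volume_zfs.main:ZFS",
        ],
    },
)

# ceph_volume_zfs/main.py - hypothetical subcommand handler.
import argparse


class ZFS(object):
    help = "Deploy OSDs on ZFS datasets/zvols (sketch only)"

    def __init__(self, argv):
        self.argv = argv

    def main(self):
        parser = argparse.ArgumentParser(prog="ceph-volume zfs",
                                         description=self.help)
        parser.add_argument("--osd-id", help="existing OSD id to activate")
        args = parser.parse_args(self.argv)
        # Real work (creating the zpool/dataset, mounting it, running
        # ceph-osd --mkfs, enabling the systemd unit) would go here.
        print("would handle OSD %s on ZFS" % args.osd_id)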
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen wrote:

> On 10-10-2017 13:51, Alfredo Deza wrote:
>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer wrote:
>>>
>>> Hello,
>>>
>>> (pet peeve alert)
>>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>>
>>>> To put this in context, the goal here is to kill ceph-disk in mimic.
>
> Right, that means we need a ceph-volume zfs before things get shot down.
> Fortunately there is little history to carry over.
>
> But then still somebody needs to do the work. ;-|
> Haven't looked at ceph-volume, but I'll put it on the agenda.

An interesting take on zfs (and anything else we didn't set up from the
get-go) is that we envisioned developers might want to craft plugins for
ceph-volume and expand its capabilities, without placing the burden of
coming up with new device technology to support.

The other nice aspect of this is that a plugin would get to re-use all the
tooling in place in ceph-volume. The plugin architecture exists but it isn't
fully developed/documented yet.

> --WjW
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On 10-10-2017 13:51, Alfredo Deza wrote:

> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer wrote:
>>
>> Hello,
>>
>> (pet peeve alert)
>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>
>>> To put this in context, the goal here is to kill ceph-disk in mimic.

Right, that means we need a ceph-volume zfs before things get shot down.
Fortunately there is little history to carry over.

But then still somebody needs to do the work. ;-|
Haven't looked at ceph-volume, but I'll put it on the agenda.

--WjW
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer wrote:
>
> Hello,
>
> (pet peeve alert)
> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>
>> To put this in context, the goal here is to kill ceph-disk in mimic.
>>
>> One proposal is to make it so new OSDs can *only* be deployed with LVM,
>> and old OSDs with the ceph-disk GPT partitions would be started via
>> ceph-volume support that can only start (but not deploy new) OSDs in
>> that style.
>>
>> Is the LVM-only-ness concerning to anyone?
>>
> If the provision below is met, not really.
>
>> Looking further forward, NVMe OSDs will probably be handled a bit
>> differently, as they'll eventually be using SPDK and kernel-bypass
>> (hence, no LVM). For the time being, though, they would use LVM.
>>
> And so it begins.
> LVM does a lot of nice things, but not everything for everybody.
> It is also another layer added, with all the (minor) reductions in
> performance (with normal storage, not NVMe) and of course potential bugs.
>

ceph-volume was crafted in a way that we wouldn't be forcing anyone to a
single backend (e.g. LVM). Initially it went even further, as just being a
simple orchestrator for getting devices mounted and starting the OSD with
minimal configuration and *regardless* of what type of devices were being
used.

The current status of the LVM portion is *very* robust, although it is
lacking a big chunk of feature parity with ceph-disk.

I anticipate potential bugs anyway :)

>>
>> On Fri, 6 Oct 2017, Alfredo Deza wrote:
>> > Now that ceph-volume is part of the Luminous release, we've been able
>> > to provide filestore support for LVM-based OSDs. We are making use of
>> > LVM's powerful mechanisms to store metadata which allows the process
>> > to no longer rely on UDEV and GPT labels (unlike ceph-disk).
>> >
>> > Bluestore support should be the next step for `ceph-volume lvm`, and
>> > while that is planned we are thinking of ways to improve the current
>> > caveats (like OSDs not coming up) for clusters that have deployed OSDs
>> > with ceph-disk.
>> >
>> > --- New clusters ---
>> > The `ceph-volume lvm` deployment is straightforward (currently
>> > supported in ceph-ansible), but there isn't support for plain disks
>> > (with partitions) currently, like there is with ceph-disk.
>> >
>> > Is there a pressing interest in supporting plain disks with
>> > partitions? Or is only supporting LVM-based OSDs fine?
>>
>> Perhaps the "out" here is to support a "dir" option where the user can
>> manually provision and mount an OSD on /var/lib/ceph/osd/*, with
>> 'journal' or 'block' symlinks, and ceph-volume will do the last bits
>> that initialize the filestore or bluestore OSD from there. Then if
>> someone has a scenario that isn't captured by LVM (or whatever else we
>> support) they can always do it manually?
>>
> Basically this.
> Since all my old clusters were deployed like this, with no
> chance/intention to upgrade to GPT or even LVM.
> How would symlinks work with Bluestore, the tiny XFS bit?

In this case, we are looking to allow ceph-volume to scan currently deployed
OSDs and get all the information needed, saving it as a plain configuration
file that will be read at boot time. That is the only other option that is
not dependent on udev/ceph-disk and doesn't mean redoing an OSD from
scratch.

It would be a one-time operation to get out of the old deployment's tie into
udev/gpt/ceph-disk.

>
>> > --- Existing clusters ---
>> > Migration to ceph-volume, even with plain disk support, means
>> > re-creating the OSD from scratch, which would end up moving data.
>> > There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
>> > without starting from scratch.
>> >
>> > A temporary workaround would be to provide a way for existing OSDs to
>> > be brought up without UDEV and ceph-disk, by creating logic in
>> > ceph-volume that could load them with systemd directly. This wouldn't
>> > make them lvm-based, nor would it mean there is direct support for
>> > them, just a temporary workaround to make them start without UDEV and
>> > ceph-disk.
>> >
>> > I'm interested in what current users might look for here: is it fine
>> > to provide this workaround if the issues are that problematic? Or is
>> > it OK to plan a migration towards ceph-volume OSDs?
>>
>> IMO we can't require any kind of data migration in order to upgrade,
>> which means we either have to (1) keep ceph-disk around indefinitely, or
>> (2) teach ceph-volume to start existing GPT-style OSDs. Given all of the
>> flakiness around udev, I'm partial to #2. The big question for me is
>> whether #2 alone is sufficient, or whether ceph-volume should also know
>> how to provision new OSDs using partitions and no LVM. Hopefully not?
>>
> I really disliked the udev/GPT stuff from the get-go and flakiness is
> being kind for sometimes completely indeterministic behavior.
>
Yep, forcing users to always fit one model seemed annoying to me. I
understand the
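[Editor's note] To sketch what such a one-time scan might capture, here is a
toy version: walk an existing OSD directory, record the small files and the
targets of its symlinks, and persist that as JSON so activation at boot can
recreate the setup without udev. The file layout and field names are
illustrative assumptions, not the format ceph-volume actually writes.

import json
import os


def scan_osd_dir(osd_path):
    """Collect what a ceph-disk style OSD needs in order to start later."""
    meta = {"path": osd_path, "links": {}, "files": {}}
    for name in os.listdir(osd_path):
        full = os.path.join(osd_path, name)
        if os.path.islink(full):
            # 'block', 'block.db', 'block.wal' or 'journal' symlinks
            meta["links"][name] = os.path.realpath(full)
        elif os.path.isfile(full) and os.path.getsize(full) < 4096:
            # small text files: fsid, whoami, type, keyring, ...
            with open(full) as f:
                meta["files"][name] = f.read().strip()
    meta["type"] = meta["files"].get("type", "filestore")
    return meta


def save_scan(osd_id, meta, out_dir="/etc/ceph/osd"):
    """Persist the scan so something at boot can rebuild mounts/symlinks."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    out = os.path.join(out_dir, "{}.json".format(osd_id))
    with open(out, "w") as f:
        json.dump(meta, f, indent=2)
    return out


# Example (paths are illustrative): scan OSD 3 and write /etc/ceph/osd/3.json
# save_scan(3, scan_osd_dir("/var/lib/ceph/osd/ceph-3"))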
Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
Hello,

(pet peeve alert)
On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:

> To put this in context, the goal here is to kill ceph-disk in mimic.
>
> One proposal is to make it so new OSDs can *only* be deployed with LVM,
> and old OSDs with the ceph-disk GPT partitions would be started via
> ceph-volume support that can only start (but not deploy new) OSDs in that
> style.
>
> Is the LVM-only-ness concerning to anyone?
>
If the provision below is met, not really.

> Looking further forward, NVMe OSDs will probably be handled a bit
> differently, as they'll eventually be using SPDK and kernel-bypass
> (hence, no LVM). For the time being, though, they would use LVM.
>
And so it begins.
LVM does a lot of nice things, but not everything for everybody.
It is also another layer added, with all the (minor) reductions in
performance (with normal storage, not NVMe) and of course potential bugs.

> On Fri, 6 Oct 2017, Alfredo Deza wrote:
> > Now that ceph-volume is part of the Luminous release, we've been able
> > to provide filestore support for LVM-based OSDs. We are making use of
> > LVM's powerful mechanisms to store metadata which allows the process
> > to no longer rely on UDEV and GPT labels (unlike ceph-disk).
> >
> > Bluestore support should be the next step for `ceph-volume lvm`, and
> > while that is planned we are thinking of ways to improve the current
> > caveats (like OSDs not coming up) for clusters that have deployed OSDs
> > with ceph-disk.
> >
> > --- New clusters ---
> > The `ceph-volume lvm` deployment is straightforward (currently
> > supported in ceph-ansible), but there isn't support for plain disks
> > (with partitions) currently, like there is with ceph-disk.
> >
> > Is there a pressing interest in supporting plain disks with
> > partitions? Or is only supporting LVM-based OSDs fine?
>
> Perhaps the "out" here is to support a "dir" option where the user can
> manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> 'journal' or 'block' symlinks, and ceph-volume will do the last bits that
> initialize the filestore or bluestore OSD from there. Then if someone has
> a scenario that isn't captured by LVM (or whatever else we support) they
> can always do it manually?
>
Basically this.
Since all my old clusters were deployed like this, with no chance/intention
to upgrade to GPT or even LVM.
How would symlinks work with Bluestore, the tiny XFS bit?

> > --- Existing clusters ---
> > Migration to ceph-volume, even with plain disk support, means
> > re-creating the OSD from scratch, which would end up moving data.
> > There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> > without starting from scratch.
> >
> > A temporary workaround would be to provide a way for existing OSDs to
> > be brought up without UDEV and ceph-disk, by creating logic in
> > ceph-volume that could load them with systemd directly. This wouldn't
> > make them lvm-based, nor would it mean there is direct support for
> > them, just a temporary workaround to make them start without UDEV and
> > ceph-disk.
> >
> > I'm interested in what current users might look for here: is it fine
> > to provide this workaround if the issues are that problematic? Or is
> > it OK to plan a migration towards ceph-volume OSDs?
>
> IMO we can't require any kind of data migration in order to upgrade,
> which means we either have to (1) keep ceph-disk around indefinitely, or
> (2) teach ceph-volume to start existing GPT-style OSDs. Given all of the
> flakiness around udev, I'm partial to #2. The big question for me is
> whether #2 alone is sufficient, or whether ceph-volume should also know
> how to provision new OSDs using partitions and no LVM. Hopefully not?
>
I really disliked the udev/GPT stuff from the get-go, and flakiness is being
kind for sometimes completely indeterministic behavior.

Since there never was a (non-disruptive) upgrade process from non-GPT based
OSDs to GPT based ones, I wonder what changed minds here.
Not that the GPT based users won't appreciate it.

Christian

> sage

--
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Rakuten Communications
[ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
To put this in context, the goal here is to kill ceph-disk in mimic.

One proposal is to make it so new OSDs can *only* be deployed with LVM, and
old OSDs with the ceph-disk GPT partitions would be started via ceph-volume
support that can only start (but not deploy new) OSDs in that style.

Is the LVM-only-ness concerning to anyone?

Looking further forward, NVMe OSDs will probably be handled a bit
differently, as they'll eventually be using SPDK and kernel-bypass (hence,
no LVM). For the time being, though, they would use LVM.

On Fri, 6 Oct 2017, Alfredo Deza wrote:
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
>
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.
>
> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
>
> Is there a pressing interest in supporting plain disks with
> partitions? Or is only supporting LVM-based OSDs fine?

Perhaps the "out" here is to support a "dir" option where the user can
manually provision and mount an OSD on /var/lib/ceph/osd/*, with 'journal'
or 'block' symlinks, and ceph-volume will do the last bits that initialize
the filestore or bluestore OSD from there. Then if someone has a scenario
that isn't captured by LVM (or whatever else we support) they can always do
it manually?

> --- Existing clusters ---
> Migration to ceph-volume, even with plain disk support, means
> re-creating the OSD from scratch, which would end up moving data.
> There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> without starting from scratch.
>
> A temporary workaround would be to provide a way for existing OSDs to
> be brought up without UDEV and ceph-disk, by creating logic in
> ceph-volume that could load them with systemd directly. This wouldn't
> make them lvm-based, nor would it mean there is direct support for
> them, just a temporary workaround to make them start without UDEV and
> ceph-disk.
>
> I'm interested in what current users might look for here: is it fine
> to provide this workaround if the issues are that problematic? Or is
> it OK to plan a migration towards ceph-volume OSDs?

IMO we can't require any kind of data migration in order to upgrade, which
means we either have to (1) keep ceph-disk around indefinitely, or (2) teach
ceph-volume to start existing GPT-style OSDs. Given all of the flakiness
around udev, I'm partial to #2. The big question for me is whether #2 alone
is sufficient, or whether ceph-volume should also know how to provision new
OSDs using partitions and no LVM. Hopefully not?

sage
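[Editor's note] To make the "dir" option concrete, below is a rough outline
of manually provisioning a bluestore OSD in a plain directory and then
letting the tooling do the last initialization bits. It follows the general
shape of Ceph's manual deployment flow (allocate an id with `ceph osd new`,
create the directory and 'block' symlink, create a key, then
`ceph-osd --mkfs`), but exact commands and flags vary between releases, so
treat it as an outline under those assumptions rather than a verified
procedure; the device path is a placeholder.

#!/usr/bin/env python
"""Outline of manually provisioning a bluestore OSD in a plain directory
with a 'block' symlink, then letting ceph-osd lay down the metadata."""
import os
import subprocess
import uuid

block_dev = "/dev/disk/by-partuuid/replace-with-a-real-partuuid"  # placeholder

osd_uuid = str(uuid.uuid4())
# Ask the cluster for a new OSD id tied to this uuid.
osd_id = subprocess.check_output(["ceph", "osd", "new", osd_uuid]).strip().decode()

osd_dir = "/var/lib/ceph/osd/ceph-{}".format(osd_id)
os.makedirs(osd_dir, exist_ok=True)
os.symlink(block_dev, os.path.join(osd_dir, "block"))

# Create and install the OSD's cephx key.
subprocess.check_call([
    "ceph", "auth", "get-or-create", "osd.{}".format(osd_id),
    "osd", "allow *", "mon", "allow profile osd",
    "-o", os.path.join(osd_dir, "keyring"),
])

# The "last bits": let ceph-osd initialize the bluestore OSD.
subprocess.check_call([
    "ceph-osd", "-i", osd_id, "--mkfs", "--osd-uuid", osd_uuid,
])

# Chowning to the ceph user and starting the OSD (systemctl start
# ceph-osd@<id>) would follow here.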