On Mon, 16 Oct 2017 18:32:06 -0400 (EDT) Anthony Verevkin wrote:

> > From: "Sage Weil" <s...@newdream.net>
> > To: "Alfredo Deza" <ad...@redhat.com>
> > Cc: "ceph-devel" <ceph-de...@vger.kernel.org>, ceph-users@lists.ceph.com
> > Sent: Monday, October 9, 2017 11:09:29 AM
> > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and 
> > disk partition support]
> > 
> > To put this in context, the goal here is to kill ceph-disk in mimic.
> >   
> 
>  
> > Perhaps the "out" here is to support a "dir" option where the user
> > can
> > manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> > 'journal'
> > or 'block' symlinks, and ceph-volume will do the last bits that
> > initialize
> > the filestore or bluestore OSD from there.  Then if someone has a
> > scenario
> > that isn't captured by LVM (or whatever else we support) they can
> > always
> > do it manually?
> >   
> 
> 
> In fact, now that bluestore only requires a few small files and symlinks to
> remain in /var/lib/ceph/osd/*, without the extra requirements for xattr
> support and xfs, why not simply leave those folders on the OS root
> filesystem and just point the symlinks at the bluestore block and db
> devices? That would simplify OSD deployment considerably, and the symlinks
> can then point to /dev/disk/by-uuid, by-path, an LVM path, or whatever.
> The only downside I see to this approach is that the disks themselves would
> no longer be transferable between hosts, since those few files that
> describe the OSD are no longer on the disk itself.
> 
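For what it's worth, the layout being described boils down to something like
the sketch below (Python, purely illustrative; the OSD id and the by-uuid
device paths are made-up examples, and the small metadata files would still
have to be written by ceph-osd --mkfs / the activation step):

#!/usr/bin/env python3
# Rough sketch of the layout discussed above: a plain directory on the
# OS root filesystem plus symlinks to the bluestore devices.  The OSD id
# and device paths are hypothetical placeholders, not output of any real
# ceph-volume command.
import os

osd_id = "0"                                    # hypothetical OSD id
osd_dir = f"/var/lib/ceph/osd/ceph-{osd_id}"    # conventional OSD directory
block_dev = "/dev/disk/by-uuid/1111-2222"       # example data device
db_dev = "/dev/disk/by-uuid/3333-4444"          # example RocksDB device

os.makedirs(osd_dir, exist_ok=True)

# Bluestore only needs these two symlinks plus a handful of small files
# (keyring, fsid, type, ...) that the mkfs/activation step would create
# inside osd_dir.
for name, target in (("block", block_dev), ("block.db", db_dev)):
    link = os.path.join(osd_dir, name)
    if not os.path.islink(link):
        os.symlink(target, link)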

If the OS is on a RAID1, the chances of those files being lost entirely are
greatly reduced, so one would assume that moving OSDs to another host remains
a trivial exercise.

But yeah, this sounds fine to me, as it's extremely flexible.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Rakuten Communications
