Hi!
After getting some other stuff done, I finally got around to continuing here.
I set up a whole new cluster with ceph-deploy, but adding the first OSD fails:
ceph-deploy osd create --bluestore ${HOST}:/dev/sdc --block-wal /dev/cl/ceph-waldb-sdc --block-db /dev/cl/ceph-waldb-sdc
.
.
.
[WARNIN] get_partition_dev: Try 9/10 : partition 1 for /dev/cl/ceph-waldb-sdc does not exist in /sys/block/dm-2
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid path is /sys/dev/block/253:2/dm/uuid
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid is LVM-2r0bGcoyMB0VnWeGGS77eOD5IOu8wAPN3wPX4OWSS1XGkYZYoziXhfAFMjJf4FJR
[WARNIN]
[WARNIN] get_partition_dev: Try 10/10 : partition 1 for /dev/cl/ceph-waldb-sdc does not exist in /sys/block/dm-2
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid path is /sys/dev/block/253:2/dm/uuid
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid is LVM-2r0bGcoyMB0VnWeGGS77eOD5IOu8wAPN3wPX4OWSS1XGkYZYoziXhfAFMjJf4FJR
[WARNIN]
[WARNIN] Traceback (most recent call last):
[WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5687, in run
[WARNIN]     main(sys.argv[1:])
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5638, in main
[WARNIN]     args.func(args)
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2004, in main
[WARNIN]     Prepare.factory(args).prepare()
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1993, in prepare
[WARNIN]     self._prepare()
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2074, in _prepare
[WARNIN]     self.data.prepare(*to_prepare_list)
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2807, in prepare
[WARNIN]     self.prepare_device(*to_prepare_list)
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2983, in prepare_device
[WARNIN]     to_prepare.prepare()
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2216, in prepare
[WARNIN]     self.prepare_device()
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2310, in prepare_device
[WARNIN]     partition = device.get_partition(num)
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1714, in get_partition
[WARNIN]     dev = get_partition_dev(self.path, num)
[WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 717, in get_partition_dev
[WARNIN]     (pnum, dev, error_msg))
[WARNIN] ceph_disk.main.Error: Error: partition 1 for /dev/cl/ceph-waldb-sdc does not appear to exist in /sys/block/dm-2
[ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --block.wal /dev/cl/ceph-waldb-sdc --block.db /dev/cl/ceph-waldb-sdc --bluestore --cluster ceph --fs-type xfs -- /dev/sdc
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
I found some open issues about this: http://tracker.ceph.com/issues/6042 and
http://tracker.ceph.com/issues/5461. Could this be related?
Cheers,
Martin
-----Original Message-----
From: Loris Cuoghi [mailto:[email protected]]
Sent: Monday, 3 July 2017 15:48
To: Martin Emrich <[email protected]>
Cc: [email protected]; Vasu Kulkarni <[email protected]>
Subject: Re: [ceph-users] How to set up bluestore manually?
On Mon, 3 Jul 2017 12:32:20 +0000,
Martin Emrich <[email protected]> wrote:
> Hi!
>
> Thanks for the super-fast response!
>
> That did work somehow... Here's my command line (as Bluestore seems to
> still require a journal,
No, it doesn't. :D
> I repurposed the SSD partitions for it and put the DB/WAL on the
> spinning disk):
On the contrary, Bluestore's DB/WAL are good candidates for low-latency storage
like an SSD.
>
> ceph-deploy osd create --bluestore <hostname>:/dev/sdc:/dev/mapper/cl-ceph_journal_sdc
Just
ceph-deploy osd create --bluestore ${hostname}:/device/path
should be sufficient to set up the device with:
* 1 small (~100 MB) XFS partition
* 1 big (remaining space) partition formatted as bluestore
Additional options like:
--block-wal /path/to/ssd/partition
--block-db /path/to/ssd/partition
allow you to put the WAL and DB on an SSD.
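For instance, a complete invocation combining both options could look like
this (just a sketch — "node1", /dev/sdb1 and /dev/sdb2 are placeholders for
your OSD host and two SSD partitions):

ceph-deploy osd create --bluestore --block-wal /dev/sdb1 --block-db /dev/sdb2 node1:/dev/sdc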
> But it created two (!) new OSDs instead of one, and placed them under
> the default CRUSH rule (thus making my cluster start rebalancing; they
> should be under a different rule)...
Default stuff is applied... by default :P
Have a good read of:
http://docs.ceph.com/docs/master/rados/operations/crush-map/
In particular, the section on editing an existing CRUSH map:
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
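If you prefer to edit the map offline, a minimal sketch of that workflow
(file names are placeholders):

ceph osd getcrushmap -o crushmap.bin        # dump the current map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to text
# edit crushmap.txt: move the new OSDs under the desired bucket/rule
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the modified map

Online commands like "ceph osd crush move" and "ceph osd crush set" can
achieve the same without decompiling the map.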
-- Loris
> Did I do something wrong, or are
> the two OSDs part of the bluestore concept? If so, how do I handle them
> in the CRUSH map? (I have different categories of OSD hosts for
> different use cases, split by appropriate CRUSH rules.)
>
> Thanks
>
> Martin
>
> -----Original Message-----
> From: Loris Cuoghi [mailto:[email protected]]
> Sent: Monday, 3 July 2017 13:39
> To: Martin Emrich <[email protected]>
> Cc: Vasu Kulkarni <[email protected]>; [email protected]
> Subject: Re: [ceph-users] How to set up bluestore manually?
>
> On Mon, 3 Jul 2017 11:30:04 +0000,
> Martin Emrich <[email protected]> wrote:
>
> > Hi!
> >
> > Thanks for the hint, but I get this error:
> >
> > [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No
> > such file or directory: 'ceph.conf'; has `ceph-deploy new` been run
> > in this directory?
> >
> > Obviously, ceph-deploy only works if the cluster has been managed
> > with ceph-deploy all along (and I won’t risk messing with my cluster
> > by attempting to “retrofit” ceph-deploy to it)… I’ll set up a single
> > VM cluster to squeeze out the necessary commands and check back…
>
> No need to start with ceph-deploy in order to use it :)
>
> You can:
>
> - create a working directory (e.g. ~/ceph-deploy)
> - cd ~/ceph-deploy
> - copy the ceph.conf from /etc/ceph/ceph.conf into the working directory
> - execute ceph-deploy gatherkeys to obtain the necessary keys
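>
> A minimal sketch of those steps (assuming default paths; "mon1" stands in
> for one of your monitor hosts):
>
> mkdir ~/ceph-deploy && cd ~/ceph-deploy
> cp /etc/ceph/ceph.conf .
> ceph-deploy gatherkeys mon1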
>
> et voilà ;)
>
> Give it a try!
>
> -- Loris
>
> >
> > Regards,
> >
> > Martin
> >
> > From: Vasu Kulkarni [mailto:[email protected]]
> > Sent: Friday, 30 June 2017 17:58
> > To: Martin Emrich <[email protected]>
> > Cc: [email protected]
> > Subject: Re: [ceph-users] How to set up bluestore manually?
> >
> >
> >
> > On Fri, Jun 30, 2017 at 8:31 AM, Martin Emrich
> > <[email protected]> wrote:
> > Hi!
> >
> > I’d like to set up new OSDs with bluestore: the real data (“block”)
> > on a spinning disk, and DB+WAL on an SSD partition.
> >
> > But I do not use ceph-deploy, and never used ceph-disk (I set up the
> > filestore OSDs manually). Google tells me that ceph-disk does not
> > (yet) support splitting the components across multiple block
> > devices, and I also had no luck when attempting it anyway (using
> > Ceph 12.1 RC1).
> >
> > Why not give ceph-deploy a chance? It has command-line options to
> > specify the DB and WAL devices. You don't have to worry about what
> > options to pass to ceph-disk, as it encapsulates that.
> >
> > ex: ceph-deploy osd create --bluestore --block-wal /dev/nvme0n1 --block-db /dev/nvme0n1 p30:sdb
> >
> >
> >
> > I just can’t find documentation on how to set up a bluestore OSD
> > manually:
> >
> >
> > * How do I “prepare” the block, block.wal and block.db
> > block devices? Just directing ceph-osd to the block devices via
> > ceph.conf does not seem to be enough.
> > * Do bluestore OSDs still use/need a separate journal
> > file/device? Or is that replaced by the WAL?
> >
> > http://docs.ceph.com/docs/master/ also does not have very much
> > information on using bluestore; is this documentation deprecated?
> >
> > The documentation has fallen behind for manual deployment, and for
> > ceph-deploy usage with bluestore as well, but if you use the above
> > command, it will clearly print out what it is doing and how ceph-disk
> > is being called; those are essentially the manual steps that should go
> > into the document. I think we have a tracker for the update, which has
> > been pending for some time.
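> >
> > For illustration only (a sketch, not the exact command ceph-deploy will
> > build for you): the prepare step it hands to ceph-disk looks roughly
> > like this, with placeholder WAL/DB paths and data device:
> >
> > ceph-disk -v prepare --block.wal /path/to/wal --block.db /path/to/db --bluestore --cluster ceph --fs-type xfs -- /dev/sdX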
> >
> >
> > Thanks for any hints,
> >
> > Martin
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]<mailto:[email protected]>
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com