I can run the following command without issue, however, and have done so for
multiple OSDs, which work fine; it just creates an sdd1 and sdd2:


ceph-disk prepare --bluestore /dev/sdg --block.wal /dev/sdd --block.db /dev/sdd
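
For reference, a quick way to confirm the two partitions ceph-disk carved out of
the shared WAL/DB device is to list its partition table afterwards (a minimal
sketch; the device name just matches the command above):

# print the GPT partition table ceph-disk created on the shared device
sgdisk -p /dev/sdd
# or show the resulting sdd1/sdd2 partitions and their sizes
lsblk -o NAME,SIZE,TYPE /dev/sdd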


, Ashley

________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Martin Emrich 
<martin.emr...@empolis.com>
Sent: 07 July 2017 08:03:44
To: Vasu Kulkarni
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to set up bluestore manually?

Hi!

It looks like I found the problem: The example suggested I can share the same 
block device for both WAL and DB, but apparently this is not the case.
Analyzing the log output of ceph-deploy and the OSD hosts' logs, I am making
progress in working out the exact steps to set up a bluestore OSD without
convenience tools like ceph-deploy and ceph-disk.

The error below occurs during “ceph-osd --mkfs” if both block.db and block.wal
point to the same block device. If I use different block devices, creating the
bluestore fs works fine.

Nevertheless, I created a ticket: http://tracker.ceph.com/issues/20540

One more question: What are the size requirements for the WAL and the DB?
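
(For reference, the partition sizes ceph-disk uses when it creates block.db and
block.wal can apparently be pinned explicitly in ceph.conf before preparing the
OSD; a rough sketch with placeholder values, not a sizing recommendation:)

cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
# placeholder sizes: 10 GiB DB, 1 GiB WAL
bluestore block db size = 10737418240
bluestore block wal size = 1073741824
EOF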

Cheers,

Martin


From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: Thursday, 6 July 2017 20:45
To: Martin Emrich <martin.emr...@empolis.com>
Cc: Loris Cuoghi <loris.cuo...@artificiale.net>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to set up bluestore manually?

I recommend you file a tracker issue at http://tracker.ceph.com/ with all the
details (Ceph version, steps you ran, and the output, redacting anything you
don't want to share). I doubt it's a ceph-deploy issue, but we can try to
replicate it in our lab.

On Thu, Jul 6, 2017 at 5:25 AM, Martin Emrich
<martin.emr...@empolis.com> wrote:
Hi!

I changed the partitioning scheme to use a "real" primary partition instead of
a logical volume. ceph-deploy seems to run fine now, but the OSD does not start.
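
In case it matters, by a "real" partition I mean something along these lines
(device and size here are only illustrative; sgdisk is shown, but any
partitioner should do):

# create a GPT partition to hold the WAL/DB and let the kernel pick it up
sgdisk --new=1:0:+20G --change-name=1:'ceph-waldb-sdc' /dev/sdd
partprobe /dev/sdd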

I see lots of these in the journal:

Jul 06 13:53:42  sh[9768]: 0> 2017-07-06 13:53:42.794027 7fcf9918fb80 -1 *** 
Caught signal (Aborted) **
Jul 06 13:53:42  sh[9768]: in thread 7fcf9918fb80 thread_name:ceph-osd
Jul 06 13:53:42  sh[9768]: ceph version 12.1.0 
(262617c9f16c55e863693258061c5b25dea5b086) luminous (dev)
Jul 06 13:53:42  sh[9768]: 1: (()+0x9cd6af) [0x7fcf99b776af]
Jul 06 13:53:42  sh[9768]: 2: (()+0xf370) [0x7fcf967d9370]
Jul 06 13:53:42  sh[9768]: 3: (gsignal()+0x37) [0x7fcf958031d7]
Jul 06 13:53:42  sh[9768]: 4: (abort()+0x148) [0x7fcf958048c8]
Jul 06 13:53:42  sh[9768]: 5: (ceph::__ceph_assert_fail(char const*, char 
const*, int, char const*)+0x284) [0x7fcf99bb5394]
Jul 06 13:53:42  sh[9768]: 6: (BitMapAreaIN::reserve_blocks(long)+0xb6) 
[0x7fcf99b6c486]
Jul 06 13:53:42  sh[9768]: 7: (BitMapAllocator::reserve(unsigned long)+0x80) 
[0x7fcf99b6a240]
Jul 06 13:53:42  sh[9768]: 8: (BlueFS::_allocate(unsigned char, unsigned long,
std::vector<bluefs_extent_t, mempool::pool_allocator<(mempool::pool_index_t)9,
bluefs_extent_t> >*)+0xee) [0x7fcf99b31c0e]
Jul 06 13:53:42  sh[9768]: 9: 
(BlueFS::_flush_and_sync_log(std::unique_lock<std::mutex>&, unsigned long, 
unsigned long)+0xbc4) [0x7fcf99b38be4]
Jul 06 13:53:42  sh[9768]: 10: (BlueFS::sync_metadata()+0x215) [0x7fcf99b3d725]
Jul 06 13:53:42  sh[9768]: 11: (BlueFS::umount()+0x74) [0x7fcf99b3dc44]
Jul 06 13:53:42  sh[9768]: 12: (BlueStore::_open_db(bool)+0x579) 
[0x7fcf99a62859]
Jul 06 13:53:42  sh[9768]: 13: (BlueStore::fsck(bool)+0x39b) [0x7fcf99a9581b]
Jul 06 13:53:42  sh[9768]: 14: (BlueStore::mkfs()+0x1168) [0x7fcf99a6d118]
Jul 06 13:53:42  sh[9768]: 15: (OSD::mkfs(CephContext*, ObjectStore*, 
std::string const&, uuid_d, int)+0x29b) [0x7fcf9964b75b]
Jul 06 13:53:42  sh[9768]: 16: (main()+0xf83) [0x7fcf99590573]
Jul 06 13:53:42  sh[9768]: 17: (__libc_start_main()+0xf5) [0x7fcf957efb35]
Jul 06 13:53:42  sh[9768]: 18: (()+0x4826e6) [0x7fcf9962c6e6]
Jul 06 13:53:42  sh[9768]: NOTE: a copy of the executable, or `objdump -rdS 
<executable>` is needed to interpret this.
Jul 06 13:53:42  sh[9768]: Traceback (most recent call last):
Jul 06 13:53:42  sh[9768]: File "/usr/sbin/ceph-disk", line 9, in <module>
Jul 06 13:53:42  sh[9768]: load_entry_point('ceph-disk==1.0.0', 
'console_scripts', 'ceph-disk')()


Also interesting is the message "-1 rocksdb: Invalid argument: db: does not 
exist (create_if_missing is false)"... Looks to me as if ceph-deploy did not 
create the RocksDB?

So still no success with bluestore :(

Thanks,

Martin

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin Emrich
Sent: Tuesday, 4 July 2017 22:02
To: Loris Cuoghi <loris.cuo...@artificiale.net>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to set up bluestore manually?

Hi!

After getting some other stuff done, I finally got around to continuing here.

I set up a whole new cluster with ceph-deploy, but adding the first OSD fails:

ceph-deploy osd create --bluestore ${HOST}:/dev/sdc --block-wal /dev/cl/ceph-waldb-sdc --block-db /dev/cl/ceph-waldb-sdc
.
.
.
[WARNIN] get_partition_dev: Try 9/10 : partition 1 for /dev/cl/ceph-waldb-sdc does not exist in /sys/block/dm-2
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid path is /sys/dev/block/253:2/dm/uuid
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid is LVM-2r0bGcoyMB0VnWeGGS77eOD5IOu8wAPN3wPX4OWSS1XGkYZYoziXhfAFMjJf4FJR
[WARNIN]
[WARNIN] get_partition_dev: Try 10/10 : partition 1 for /dev/cl/ceph-waldb-sdc does not exist in /sys/block/dm-2
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid path is /sys/dev/block/253:2/dm/uuid
[WARNIN] get_dm_uuid: get_dm_uuid /dev/cl/ceph-waldb-sdc uuid is LVM-2r0bGcoyMB0VnWeGGS77eOD5IOu8wAPN3wPX4OWSS1XGkYZYoziXhfAFMjJf4FJR
[WARNIN]
 [WARNIN] Traceback (most recent call last):
 [WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
 [WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 
'ceph-disk')()
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
5687, in run
 [WARNIN]     main(sys.argv[1:])
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
5638, in main
 [WARNIN]     args.func(args)
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2004, in main
 [WARNIN]     Prepare.factory(args).prepare()
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
1993, in prepare
 [WARNIN]     self._prepare()
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2074, in _prepare
 [WARNIN]     self.data.prepare(*to_prepare_list)
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2807, in prepare
 [WARNIN]     self.prepare_device(*to_prepare_list)
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2983, in prepare_device
 [WARNIN]     to_prepare.prepare()
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2216, in prepare
 [WARNIN]     self.prepare_device()
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
2310, in prepare_device
 [WARNIN]     partition = device.get_partition(num)
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
1714, in get_partition
 [WARNIN]     dev = get_partition_dev(self.path, num)
 [WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 
717, in get_partition_dev
 [WARNIN]     (pnum, dev, error_msg))
 [WARNIN] ceph_disk.main.Error: Error: partition 1 for /dev/cl/ceph-waldb-sdc does not appear to exist in /sys/block/dm-2
 [ERROR ] RuntimeError: command returned non-zero exit status: 1
 [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --block.wal /dev/cl/ceph-waldb-sdc --block.db /dev/cl/ceph-waldb-sdc --bluestore --cluster ceph --fs-type xfs -- /dev/sdc
 [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
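
(A side note on the "partition 1 ... does not appear to exist" part: since
/dev/cl/ceph-waldb-sdc is an LVM logical volume, the kernel does not
automatically create device nodes for partitions inside it. A quick check,
purely as a diagnostic sketch:)

# the error above shows ceph-disk looking for the partition under /sys/block/dm-2
ls /sys/block/dm-2/
# if a partition table exists inside the LV but no device node was created,
# kpartx can map the partitions manually
kpartx -av /dev/cl/ceph-waldb-sdc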

I found some open issues about this: http://tracker.ceph.com/issues/6042 and
http://tracker.ceph.com/issues/5461. Could this be related?

Cheers,

Martin


-----Original Message-----
From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net]
Sent: Monday, 3 July 2017 15:48
To: Martin Emrich <martin.emr...@empolis.com>
Cc: ceph-users@lists.ceph.com; Vasu Kulkarni <vakul...@redhat.com>
Subject: Re: [ceph-users] How to set up bluestore manually?

On Mon, 3 Jul 2017 12:32:20 +0000,
Martin Emrich <martin.emr...@empolis.com> wrote:

> Hi!
>
> Thanks for the super-fast response!
>
> That did work somehow... Here's my command line (as Bluestore seems to
> still require a journal,

No, it doesn't. :D

> I repurposed the SSD partitions for it and put the DB/WAL on the
> spinning disk):

On the contrary, Bluestore's DB/WAL are good candidates for low-latency storage 
like an SSD.

>
>    ceph-deploy osd create --bluestore
> <hostname>:/dev/sdc:/dev/mapper/cl-ceph_journal_sdc

Just
        ceph-deploy osd create --bluestore ${hostname}:/device/path

should be sufficient to create a device composed of:

        * 1 small (~100 MB) XFS partition
        * 1 big (remaining space) partition formatted as bluestore

Additional options like:

        --block-wal /path/to/ssd/partition
        --block-db /path/to/ssd/partition

allow having SSD-backed WAL and DB.
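
For example (host and device names here are only placeholders):

        ceph-deploy osd create --bluestore \
                --block-wal /dev/nvme0n1 \
                --block-db /dev/nvme0n1 \
                ${hostname}:/dev/sdc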

> But it created two (!) new OSDs instead of one, and placed them under
> the default CRUSH rule (thus making my cluster do stuff; they
> should be under a different rule)...

Default stuff is applied... by default :P

Take a good read:

http://docs.ceph.com/docs/master/rados/operations/crush-map/

In particular, on how to edit an existing CRUSH map:

http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map

-- Loris

> Did I do something wrong, or are
> the two OSDs part of the bluestore concept? If so, how do I handle them
> in the CRUSH map? (I have different categories of OSD hosts for
> different use cases, split by appropriate CRUSH rules.)
>
> Thanks
>
> Martin
>
> -----Original Message-----
> From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net]
> Sent: Monday, 3 July 2017 13:39
> To: Martin Emrich <martin.emr...@empolis.com>
> Cc: Vasu Kulkarni <vakul...@redhat.com>; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] How to set up bluestore manually?
>
> On Mon, 3 Jul 2017 11:30:04 +0000,
> Martin Emrich <martin.emr...@empolis.com> wrote:
>
> > Hi!
> >
> > Thanks for the hint, but I get this error:
> >
> > [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No
> > such file or directory: 'ceph.conf'; has `ceph-deploy new` been run
> > in this directory?
> >
> > Obviously, ceph-deploy only works if the cluster has been managed
> > with ceph-deploy all along (and I won’t risk messing with my cluster
> > by attempting to “retrofit” ceph-deploy to it)… I’ll set up a single
> > VM cluster to squeeze out the necessary commands and check back…
>
> No need to start with ceph-deploy in order to use it :)
>
> You can:
>
> - create a working directory (e.g. ~/ceph-deploy)
> - cd ~/ceph-deploy
> - copy the ceph.conf from /etc/ceph/ceph.conf in the working directory
> - execute ceph-deploy gatherkeys to obtain the necessary keys
>
> et voilà ;)
>
> Give it a try!
>
> -- Loris
>
> >
> > Regards,
> >
> > Martin
> >
> > From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> > Sent: Friday, 30 June 2017 17:58
> > To: Martin Emrich <martin.emr...@empolis.com>
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] How to set up bluestore manually?
> >
> >
> >
> > On Fri, Jun 30, 2017 at 8:31 AM, Martin Emrich
> > <martin.emr...@empolis.com> wrote:
> > Hi!
> >
> > I’d like to set up new OSDs with bluestore: the real data (“block”)
> > on a spinning disk, and DB+WAL on an SSD partition.
> >
> > But I do not use ceph-deploy, and never used ceph-disk (I set up the
> > filestore OSDs manually). Google tells me that ceph-disk does not
> > (yet) support splitting the components across multiple block
> > devices, and I also had no luck when attempting it anyway (using Ceph
> > 12.1 RC1).
> >
> > Why not give ceph-deploy a chance? It has options to specify the DB
> > and WAL on the command line. You don't have to worry about what
> > options to pass to ceph-disk, as it encapsulates that.
> >
> > ex: ceph-deploy osd create --bluestore --block-wal /dev/nvme0n1
> > --block-db /dev/nvme0n1 p30:sdb
> >
> >
> >
> > I just can’t find documentation on how to set up a bluestore OSD
> > manually:
> >
> >
> >   *   How do I “prepare” the block, block.wal and block.db
> > block devices? Just directing ceph-osd to the block devices via
> > ceph.conf does not seem to be enough.
> >   *   Do bluestore OSDs still use/need a separate journal
> > file/device? Or is that replaced by the WAL?
> >
> > http://docs.ceph.com/docs/master/ also does not have very much
> > information on using bluestore; is this documentation deprecated?
> >
> > The documentation has fallen behind for manual deployment, and for
> > ceph-deploy usage with bluestore as well, but if you use the above
> > command, it will clearly print out what it is doing and how ceph-disk
> > is being called; those are essentially the manual steps that should go
> > into the documentation. I think we have a tracker issue for that
> > update, which has been pending for some time.
> >
> >
> > Thanks for any hints,
> >
> > Martin
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
