Re: [ceph-users] luminous ubuntu 16.04 HWE (4.10 kernel). ceph-disk can't prepare a disk

2017-10-24 Thread Webert de Souza Lima
When you unmount the device, is the error that is raised still the same?
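
If it is, something like this is what I would check first (assuming /dev/sdy5
from your traceback is the partition that ceph-disk complains about):

# see whether anything on /dev/sdy is still mounted
$ lsblk -o NAME,TYPE,MOUNTPOINT /dev/sdy
$ grep sdy /proc/mounts

# unmount the reported partition, then run ceph-disk prepare again
$ sudo umount /dev/sdy5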


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*

On Mon, Oct 23, 2017 at 4:46 AM, Wido den Hollander  wrote:

>
> > On 22 October 2017 at 18:45, Sean Sullivan wrote:
> >
> >
> > On freshly installed Ubuntu 16.04 servers with the HWE kernel (4.10)
> > selected, I cannot use ceph-deploy or ceph-disk to provision OSDs.
> >
> > Whenever I try, I get the following:
> >
> > ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys
> > --bluestore --cluster ceph --fs-type xfs -- /dev/sdy
> > command: Running command: /usr/bin/ceph-osd --cluster=ceph
> > --show-config-value=fsid
> > get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> > set_type: Will colocate block with data on /dev/sdy
> > command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> > --lookup bluestore_block_size
> > command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> > --lookup bluestore_block_db_size
> > command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> > --lookup bluestore_block_size
> > command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> > --lookup bluestore_block_wal_size
> > get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> > get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> > get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> > Traceback (most recent call last):
> >   File "/usr/sbin/ceph-disk", line 9, in <module>
> >     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5704, in run
> >     main(sys.argv[1:])
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5655, in main
> >     args.func(args)
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2091, in main
> >     Prepare.factory(args).prepare()
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2080, in prepare
> >     self._prepare()
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2154, in _prepare
> >     self.lockbox.prepare()
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2842, in prepare
> >     verify_not_in_use(self.args.lockbox, check_partitions=True)
> >   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 950, in verify_not_in_use
> >     raise Error('Device is mounted', partition)
> > ceph_disk.main.Error: Error: Device is mounted: /dev/sdy5
> >
> > Unmounting the disk does not seem to help either. I'm assuming something
> > is triggering too early, but I'm not sure how to delay it or figure that
> > out.
> >
> > Has anyone deployed on Xenial with the 4.10 kernel? Am I missing
> > something important?
>
> Yes, I have without any issues. I did:
>
> $ ceph-disk prepare /dev/sdb
>
> Luminous defaults to BlueStore and that worked just fine.
>
> Yes, this is with a 4.10 HWE kernel from Ubuntu 16.04.
>
> Wido
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] luminous ubuntu 16.04 HWE (4.10 kernel). ceph-disk can't prepare a disk

2017-10-23 Thread Wido den Hollander

> On 22 October 2017 at 18:45, Sean Sullivan wrote:
> 
> 
> On freshly installed Ubuntu 16.04 servers with the HWE kernel (4.10)
> selected, I cannot use ceph-deploy or ceph-disk to provision OSDs.
>
> Whenever I try, I get the following:
> 
> ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys
> --bluestore --cluster ceph --fs-type xfs -- /dev/sdy
> command: Running command: /usr/bin/ceph-osd --cluster=ceph
> --show-config-value=fsid
> get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> set_type: Will colocate block with data on /dev/sdy
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup bluestore_block_size
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup bluestore_block_db_size
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup bluestore_block_size
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup bluestore_block_wal_size
> get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 9, in <module>
>     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5704, in run
>     main(sys.argv[1:])
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5655, in main
>     args.func(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2091, in main
>     Prepare.factory(args).prepare()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2080, in prepare
>     self._prepare()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2154, in _prepare
>     self.lockbox.prepare()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2842, in prepare
>     verify_not_in_use(self.args.lockbox, check_partitions=True)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 950, in verify_not_in_use
>     raise Error('Device is mounted', partition)
> ceph_disk.main.Error: Error: Device is mounted: /dev/sdy5
> 
> Unmounting the disk does not seem to help either. I'm assuming something is
> triggering too early, but I'm not sure how to delay it or figure that out.
>
> Has anyone deployed on Xenial with the 4.10 kernel? Am I missing something
> important?

Yes, I have without any issues. I did:

$ ceph-disk prepare /dev/sdb

Luminous defaults to BlueStore and that worked just fine.

Yes, this is with a 4.10 HWE kernel from Ubuntu 16.04.
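
For comparison, something along these lines should show what ceph-disk created
and where it got mounted (device name as in my example above, adjust to yours):

$ sudo ceph-disk list
# partition layout and mountpoints of the freshly prepared disk
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb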

Wido

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] luminous ubuntu 16.04 HWE (4.10 kernel). ceph-disk can't prepare a disk

2017-10-22 Thread Sean Sullivan
On freshly installed Ubuntu 16.04 servers with the HWE kernel (4.10)
selected, I cannot use ceph-deploy or ceph-disk to provision OSDs.

Whenever I try, I get the following:

ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys
--bluestore --cluster ceph --fs-type xfs -- /dev/sdy
command: Running command: /usr/bin/ceph-osd --cluster=ceph
--show-config-value=fsid
get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
set_type: Will colocate block with data on /dev/sdy
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
--lookup bluestore_block_size
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
--lookup bluestore_block_db_size
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
--lookup bluestore_block_size
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
--lookup bluestore_block_wal_size
get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdy uuid path is /sys/dev/block/65:128/dm/uuid
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5704, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5655, in main
    args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2091, in main
    Prepare.factory(args).prepare()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2080, in prepare
    self._prepare()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2154, in _prepare
    self.lockbox.prepare()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2842, in prepare
    verify_not_in_use(self.args.lockbox, check_partitions=True)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 950, in verify_not_in_use
    raise Error('Device is mounted', partition)
ceph_disk.main.Error: Error: Device is mounted: /dev/sdy5

Unmounting the disk does not seem to help either. I'm assuming something is
triggering too early, but I'm not sure how to delay it or figure that out.
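
Is watching udev block events while prepare runs the right way to catch
whatever mounts that partition? This is the kind of thing I had in mind
(generic tooling only, nothing ceph-specific):

# in one terminal, watch block-device events while ceph-disk prepare runs
$ udevadm monitor --udev --subsystem-match=block

# in another, check whether /dev/sdy5 is currently mounted and where
$ findmnt --source /dev/sdy5
$ grep sdy /proc/mounts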

Has anyone deployed on Xenial with the 4.10 kernel? Am I missing something
important?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com