Created a tracker for this issue: http://tracker.ceph.com/issues/22354
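
For anyone hitting the same error: the "_check_or_set_bdev_label ... does
not match our fsid" message below suggests the second prepare is tripping
over the bluestore label left on the block partition by the first run. A
quick way to check (assuming /dev/sde2 is the block partition that
ceph-disk created) is:

    # ceph-bluestore-tool show-label --dev /dev/sde2

If the fsid printed there is the old OSD's uuid rather than the new one,
that would explain the mkfs fsck failure. As a workaround, zapping the
disk before re-preparing should clear the stale label:

    # ceph-disk zap /dev/sde

(ceph-volume lvm zap /dev/sde should also work on 12.2.2.)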

Thanks
Jayaram

On Fri, Dec 8, 2017 at 9:49 PM, nokia ceph <[email protected]> wrote:

> Hello Team,
>
> We are aware that ceph-disk is deprecated in 12.2.2. As part of my
> testing, I can still use the ceph-disk utility to create OSDs in
> 12.2.2.
>
> Here I'm getting an activation error from the second attempt onwards.
>
> On the first attempt, OSDs are created without any issue.
> =======================================
>
> # ceph-disk prepare --bluestore --cluster ceph --cluster-uuid b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 /dev/sde; ceph-disk activate /dev/sde1
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5653: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> Creating new GPT entries.
> The operation has completed successfully.
> The operation has completed successfully.
> The operation has completed successfully.
> meta-data=/dev/sde1              isize=2048   agcount=4, agsize=6336 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=0, sparse=0
> data     =                       bsize=4096   blocks=25344, imaxpct=25
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=1728, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> Warning: The kernel is still using the old partition table.
> The new table will be used at the next reboot.
> The operation has completed successfully.
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5685: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5653: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> got monmap epoch 3
> 2017-12-08 16:07:57.262854 7fe58f6e6d00 -1 key
> 2017-12-08 16:07:57.769048 7fe58f6e6d00 -1 created object store
> /var/lib/ceph/tmp/mnt.dTiXMX for osd.16 fsid b2f1b9b9-eecc-4c17-8b92-cfa60b31c121
> Removed symlink /run/systemd/system/ceph-osd.target.wants/[email protected].
> Created symlink from /run/systemd/system/ceph-osd.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
>
> ---
>
>
> On the second attempt, I'm getting the issue below:
> ========================================
>
> # ceph-disk prepare --bluestore --cluster ceph --cluster-uuid b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 /dev/sde; ceph-disk activate /dev/sde1
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5653: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> Creating new GPT entries.
> The operation has completed successfully.
> The operation has completed successfully.
> The operation has completed successfully.
> meta-data=/dev/sde1              isize=2048   agcount=4, agsize=6336 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=0, sparse=0
> data     =                       bsize=4096   blocks=25344, imaxpct=25
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=1728, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> Warning: The kernel is still using the old partition table.
> The new table will be used at the next reboot.
> The operation has completed successfully.
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5685: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> /usr/lib/python2.7/site-packages/ceph_disk/main.py:5653: UserWarning:
> *******************************************************************************
> This tool is now deprecated in favor of ceph-volume.
> It is recommended to use ceph-volume for OSD deployments. For details see:
>
>     http://docs.ceph.com/docs/master/ceph-volume/#migrating
>
> *******************************************************************************
>
>   warnings.warn(DEPRECATION_WARNING)
> got monmap epoch 3
> 2017-12-08 16:09:07.518454 7fa64e12fd00 -1 bluestore(/var/lib/ceph/tmp/mnt.7x0kCL/block) _check_or_set_bdev_label bdev /var/lib/ceph/tmp/mnt.7x0kCL/block fsid 54954cfd-b7f3-4f74-9b2e-2ef57c5143cc does not match our fsid 29262e99-12ff-4c45-9113-8f69830a1a5e
> 2017-12-08 16:09:07.772688 7fa64e12fd00 -1 bluestore(/var/lib/ceph/tmp/mnt.7x0kCL) mkfs fsck found fatal error: (5) Input/output error
> 2017-12-08 16:09:07.772723 7fa64e12fd00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error
> 2017-12-08 16:09:07.772823 7fa64e12fd00 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.7x0kCL: (5) Input/output error
> mount_activate: Failed to activate
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 9, in <module>
>     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736,
> in run
>     main(sys.argv[1:])
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682,
> in main
>     main_catch(args.func, args)
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5710,
> in main_catch
>     func(args)
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3761,
> in main_activate
>     reactivate=args.reactivate,
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3524,
> in mount_activate
>     (osd_id, cluster) = activate(path, activate_key_template, init)
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3701,
> in activate
>     keyring=keyring,
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3153,
> in mkfs
>     '--setgroup', get_ceph_group(),
>   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 570, in
> command_check_call
>     return subprocess.check_call(arguments)
>   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
>     raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster',
> 'ceph', '--mkfs', '-i', u'16', '--monmap', 
> '/var/lib/ceph/tmp/mnt.7x0kCL/activate.monmap',
> '--osd-data', '/var/lib/ceph/tmp/mnt.7x0kCL', '--osd-uuid',
> u'29262e99-12ff-4c45-9113-8f69830a1a5e', '--setuser', 'ceph',
> '--setgroup', 'ceph']' returned non-zero exit status 1
>
>
> Do you think this is a bug? I can reproduce it every time, on any OSD.
> The first time, I can recreate the OSD without any problem; from the
> second time onwards the issue occurs.
>
> Need your views on this.
>
> Thanks
> Jayaram
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
