[ceph-users] Issues with device-mapper drive partition names.

2015-02-13 Thread Tyler Bishop
When trying to zap and prepare a disk, it fails to find the partitions.



[ceph@ceph0-mon0 ~]$ ceph-deploy -v disk zap ceph0-node1:/dev/mapper/35000c50031a1c08b

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy -v disk zap ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][DEBUG ] zapping /dev/mapper/35000c50031a1c08b on ceph0-node1
[ceph0-node1][DEBUG ] connection detected need for sudo
[ceph0-node1][DEBUG ] connected to host: ceph0-node1
[ceph0-node1][DEBUG ] detect platform information from remote host
[ceph0-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.0.1406 Core
[ceph0-node1][DEBUG ] zeroing last few blocks of device
[ceph0-node1][DEBUG ] find the location of an executable
[ceph0-node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk zap /dev/mapper/35000c50031a1c08b
[ceph0-node1][DEBUG ] Creating new GPT entries.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph0-node1][DEBUG ] other utilities.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO  ] calling partx on zapped device /dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][INFO  ] re-reading known partitions will display errors
[ceph0-node1][INFO  ] Running command: sudo partx -a /dev/mapper/35000c50031a1c08b




Now running prepare fails because it can't find the newly created partitions. 




[ceph@ceph0-mon0 ~]$ ceph-deploy -v osd prepare ceph0-node1:/dev/mapper/35000c50031a1c08b

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy -v osd prepare ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph0-node1:/dev/mapper/35000c50031a1c08b:
[ceph0-node1][DEBUG ] connection detected need for sudo
[ceph0-node1][DEBUG ] connected to host: ceph0-node1
[ceph0-node1][DEBUG ] detect platform information from remote host
[ceph0-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.0.1406 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph0-node1
[ceph0-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph0-node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph0-node1 disk /dev/mapper/35000c50031a1c08b journal None activate False
[ceph0-node1][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph0-node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 1 on /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:1M --change-name=2:ceph journal --partition-guid=2:b9202d1b-63be-4deb-ad08-0a143a31f4a9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/mapper/35000c50031a1c08b
[ceph0-node1][DEBUG ] Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] The operation has completed successfully.
[ceph0-node1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN]
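A likely factor here (my assumption; not confirmed anywhere in this thread): `partx -a` tells the kernel about new partitions, but for device-mapper devices it does not create the `/dev/mapper` partition nodes -- that is what `kpartx -a` does -- and dm partition nodes don't follow the plain-disk `sdb1` pattern either. kpartx's naming convention, as I understand it, appends the partition number directly unless the base name already ends in a digit, in which case it inserts a `p` delimiter. A sketch of that rule (the device names are from the log above):

```shell
# Sketch of kpartx-style partition-node naming for device-mapper devices.
# Assumption: the 'p' delimiter is inserted only when the name ends in a digit.
dm_part() {
  base=$1
  num=$2
  case "$base" in
    *[0-9]) echo "${base}p${num}" ;;  # name ends in a digit: insert 'p'
    *)      echo "${base}${num}"  ;;  # otherwise: append the number directly
  esac
}

dm_part /dev/mapper/35000c50031a1c08b 2  # -> /dev/mapper/35000c50031a1c08b2
dm_part /dev/mapper/mpath1 1             # -> /dev/mapper/mpath1p1
```

If `ls /dev/mapper` shows no partition nodes after prepare, running `kpartx -a /dev/mapper/35000c50031a1c08b` on the node may create them; as far as I know, ceph-disk of this vintage had limited support for device-mapper/multipath paths.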

Re: [ceph-users] Issues with device-mapper drive partition names.

2015-02-13 Thread Stephen Hindle
I ran into something similar when messing with my test cluster -
basically, it doesn't like existing GPT tables on devices.
I got in the habit of running 'gdisk /dev/sdX' and using the 'x'
(expert) and 'z' (zap) commands to get rid of the GPT table
prior to doing ceph setup.
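For scripting this, `sgdisk --zap-all /dev/sdX` is the non-interactive equivalent of gdisk's x (expert) then z (zap). For anyone curious what zapping actually removes, here is a rough sketch using dd against a scratch image file (not a real disk -- the offsets assume the standard GPT layout: protective MBR at LBA 0, primary header at LBA 1 plus 32 entry sectors, and a 33-sector backup at the end):

```shell
# Create a 10 MiB image standing in for the disk.
truncate -s 10M disk.img

# Simulate a leftover GPT: write the "EFI PART" signature at LBA 1.
printf 'EFI PART' | dd of=disk.img bs=512 seek=1 conv=notrunc 2>/dev/null

# Zap: zero the protective MBR plus primary GPT header and entries (LBAs 0-33)...
dd if=/dev/zero of=disk.img bs=512 count=34 conv=notrunc 2>/dev/null

# ...and the backup entries and header in the last 33 LBAs.
SECTORS=$(( $(stat -c %s disk.img) / 512 ))
dd if=/dev/zero of=disk.img bs=512 seek=$((SECTORS - 33)) count=33 conv=notrunc 2>/dev/null
```

`sgdisk --zap-all` (or gdisk's z command) does all of this for you; the sketch is only to show why a fresh zap leaves nothing for the kernel to misread.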

On Thu, Feb 12, 2015 at 3:09 PM, Tyler Bishop
tyler.bis...@beyondhosting.net wrote:
 When trying to zap and prepare a disk it fails to find the partitions.

 [...]