Loic,
Thank you very much for the partprobe workaround. I rebuilt the cluster using
0.94.2.
I've created partitions on the journal SSDs with parted and then used ceph-disk
prepare as below. When I check 'mount' I don't see the tmp mounts for all of the
disks, and I don't see any of the mount points under /var/lib/ceph/osd either.
Prepare gives the output below, and when I attempt to 'activate' it errors out
saying the devices don't exist.
ceph-disk prepare --cluster ceph --cluster-uuid
b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs --zap-disk
/dev/disk/by-id/wwn-0x500003959bd02f56
/dev/disk/by-id/wwn-0x500080d91010024b-part1
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
partx: specified range <1:0> does not make sense
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
WARNING:ceph-disk:Journal /dev/disk/by-id/wwn-0x500080d91010024b-part1 was not prepared with ceph-disk. Symlinking directly.
The operation has completed successfully.
partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding partition 1
meta-data=/dev/sdw1              isize=2048   agcount=4, agsize=244188597 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=976754385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=476930, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding partition 1
[root@ceph0 ceph]# ceph -v
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
[root@ceph0 ceph]# rpm -qa | grep ceph
ceph-radosgw-0.94.2-0.el7.x86_64
libcephfs1-0.94.2-0.el7.x86_64
ceph-common-0.94.2-0.el7.x86_64
python-cephfs-0.94.2-0.el7.x86_64
ceph-0.94.2-0.el7.x86_64
[root@ceph0 ceph]#
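For reference, the zap / partprobe / prepare sequence from Loic's workaround can be sketched as a small script. The wwn device paths below are illustrative placeholders (not my actual disks), and DRY_RUN defaults to 1 so it only echoes the commands instead of touching any device:

```shell
#!/bin/sh
# Sketch of the suggested workaround: run partprobe between
# `ceph-disk zap` and `ceph-disk prepare` so the kernel re-reads
# the partition table before prepare runs.
# DATA_DEV and JOURNAL_PART are placeholder paths, not real disks.
set -e

DATA_DEV=${DATA_DEV:-/dev/disk/by-id/wwn-0xDATADISK}
JOURNAL_PART=${JOURNAL_PART:-/dev/disk/by-id/wwn-0xJOURNALSSD-part1}
DRY_RUN=${DRY_RUN:-1}   # default to dry-run: echo commands, don't run them

run() {
    echo "+ $*"
    [ "$DRY_RUN" = 1 ] || "$@"
}

run ceph-disk zap "$DATA_DEV"
run partprobe "$DATA_DEV"          # force the kernel to re-read the new GPT
run ceph-disk prepare --cluster ceph --fs-type xfs \
    "$DATA_DEV" "$JOURNAL_PART"
run partprobe "$DATA_DEV"          # re-read again before activate looks for the partition
```

With DRY_RUN=0 and real device paths this would actually run the sequence; as written it just prints each step.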
> -----Original Message-----
> From: Loic Dachary [mailto:[email protected]]
> Sent: Friday, June 26, 2015 3:29 PM
> To: Bruce McFarland; [email protected]
> Subject: Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD
>
> Hi,
>
> Prior to firefly v0.80.8 ceph-disk zap did not call partprobe and that was
> causing the kind of problems you're experiencing. It was fixed by
> https://github.com/ceph/ceph/commit/e70a81464b906b9a304c29f474e6726762b63a7c
> and is described in more detail at
> http://tracker.ceph.com/issues/9665. Rebooting the machine ensures the
> partition table is up to date and that's what you probably want to do after
> that kind of failure. You can however avoid the failure by running:
>
> * ceph-disk zap
> * partprobe
> * ceph-disk prepare
>
> Cheers
>
> P.S. The "partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error adding
> partition 1" can be ignored, it does not actually matter. A message was
> added later to avoid confusion with a real error.
> .
> On 26/06/2015 17:09, Bruce McFarland wrote:
> > I have moved storage nodes to RHEL 7.1 and used the basic server install. I
> installed ceph-deploy and used the ceph.repo/epel.repo for installation of
> ceph 0.80.7. I have tried ceph-disk issuing "zap" on the same command
> line as "prepare" and on a separate command line immediately before the
> ceph-disk prepare. I consistently run into the partition errors and am unable
> to create OSD's on RHEL 7.1.
> >
> >
> >
> > ceph-disk prepare --cluster ceph --cluster-uuid 373a09f7-2070-4d20-8504-c8653fb6db80 --fs-type xfs --zap-disk /dev/disk/by-id/wwn-0x500003959ba80a4e /dev/disk/by-id/wwn-0x500080d9101001d6-part1
> >
> > Caution: invalid backup GPT header, but valid main header; regenerating
> > backup header from main header.
> >
> > ****************************************************************************
> > Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
> > verification and recovery are STRONGLY recommended.
> > ****************************************************************************
> >
> > GPT data structures destroyed! You may now partition the disk using fdisk or
> > other utilities.
> > The operation has completed successfully.
> >
> > WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
> >
> > The operation has completed successfully.
> >
> > meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=244188597 blks
> >          =                       sectsz=512   attr=2, projid32bit=1
> >          =                       crc=0        finobt=0
> > data     =                       bsize=4096   blocks=976754385, imaxpct=5
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> > log      =internal log           bsize=4096   blocks=476930, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> >
> > The operation has completed successfully.
> >
> > partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error adding partition 1
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com