I already ran "ceph-volume lvm activate --all" right after I prepared those
OSDs (using "ceph-volume lvm prepare").  Do I need to run the "activate"
command again?
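For what it's worth, re-running "ceph-volume lvm activate" for a single OSD should be safe; it just re-creates the tmpfs mount, symlinks, and systemd units. A rough sketch of re-activating one OSD, assuming osd.60 and pulling the fsid out of "ceph-volume lvm list" output (the "osd id" / "osd fsid" field labels are from memory and may vary by release):

```shell
# List the OSDs ceph-volume knows about; each entry includes
# "osd id" and "osd fsid" lines.
ceph-volume lvm list

# Print "id fsid" pairs from that listing (field labels assumed):
ceph-volume lvm list | awk '/osd id/ {id=$NF} /osd fsid/ {print id, $NF}'

# Re-activate one OSD by id and fsid, e.g.:
#   ceph-volume lvm activate 60 <fsid printed above>
```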



On Mon, Nov 5, 2018 at 1:24 PM, Alfredo Deza <ad...@redhat.com> wrote:

> On Mon, Nov 5, 2018 at 12:54 PM Hayashida, Mami <mami.hayash...@uky.edu>
> wrote:
> >
> > I commented out those lines and, yes, I was able to restart the system,
> > and all the Filestore OSDs are now running.  But I cannot start the
> > converted Bluestore OSDs (the service fails to start).  When I look at
> > the log for osd.60, this is what I see:
>
> Something/someone may have added those entries in fstab. It is
> certainly not something that ceph-disk or ceph-volume would ever do.
> Glad you found them and removed them.
>
>
> >
> > 2018-11-05 12:47:00.756794 7f1f2775ae00  0 set uid:gid to 64045:64045 (ceph:ceph)
> > 2018-11-05 12:47:00.756821 7f1f2775ae00  0 ceph version 12.2.9 (9e300932ef8a8916fb3fda78c58691a6ab0f4217) luminous (stable), process ceph-osd, pid 33706
> > 2018-11-05 12:47:00.776554 7f1f2775ae00  0 pidfile_write: ignore empty --pid-file
> > 2018-11-05 12:47:00.788716 7f1f2775ae00  0 load: jerasure load: lrc load: isa
> > 2018-11-05 12:47:00.788803 7f1f2775ae00  1 bdev create path /var/lib/ceph/osd/ceph-60/block type kernel
> > 2018-11-05 12:47:00.788818 7f1f2775ae00  1 bdev(0x564350f4ad80 /var/lib/ceph/osd/ceph-60/block) open path /var/lib/ceph/osd/ceph-60/block
> > 2018-11-05 12:47:00.789179 7f1f2775ae00  1 bdev(0x564350f4ad80 /var/lib/ceph/osd/ceph-60/block) open size 10737418240 (0x280000000, 10GiB) block_size 4096 (4KiB) rotational
> > 2018-11-05 12:47:00.789257 7f1f2775ae00  1 bluestore(/var/lib/ceph/osd/ceph-60) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
> > 2018-11-05 12:47:00.789286 7f1f2775ae00  1 bdev(0x564350f4ad80 /var/lib/ceph/osd/ceph-60/block) close
> > 2018-11-05 12:47:01.075002 7f1f2775ae00  1 bluestore(/var/lib/ceph/osd/ceph-60) _mount path /var/lib/ceph/osd/ceph-60
> > 2018-11-05 12:47:01.075069 7f1f2775ae00  1 bdev create path /var/lib/ceph/osd/ceph-60/block type kernel
> > 2018-11-05 12:47:01.075078 7f1f2775ae00  1 bdev(0x564350f4afc0 /var/lib/ceph/osd/ceph-60/block) open path /var/lib/ceph/osd/ceph-60/block
> > 2018-11-05 12:47:01.075391 7f1f2775ae00  1 bdev(0x564350f4afc0 /var/lib/ceph/osd/ceph-60/block) open size 10737418240 (0x280000000, 10GiB) block_size 4096 (4KiB) rotational
> > 2018-11-05 12:47:01.075450 7f1f2775ae00  1 bluestore(/var/lib/ceph/osd/ceph-60) _set_cache_sizes cache_size 1073741824 meta 0.4 kv 0.4 data 0.2
> > 2018-11-05 12:47:01.075536 7f1f2775ae00  1 bdev create path /var/lib/ceph/osd/ceph-60/block.db type kernel
> > 2018-11-05 12:47:01.075544 7f1f2775ae00  1 bdev(0x564350f4b200 /var/lib/ceph/osd/ceph-60/block.db) open path /var/lib/ceph/osd/ceph-60/block.db
> > 2018-11-05 12:47:01.075555 7f1f2775ae00 -1 bdev(0x564350f4b200 /var/lib/ceph/osd/ceph-60/block.db) open open got: (13) Permission denied
> > 2018-11-05 12:47:01.075573 7f1f2775ae00 -1 bluestore(/var/lib/ceph/osd/ceph-60) _open_db add block device(/var/lib/ceph/osd/ceph-60/block.db) returned: (13) Permission denied
> > 2018-11-05 12:47:01.075589 7f1f2775ae00  1 bdev(0x564350f4afc0 /var/lib/ceph/osd/ceph-60/block) close
> > 2018-11-05 12:47:01.346356 7f1f2775ae00 -1 osd.60 0 OSD:init: unable to mount object store
> > 2018-11-05 12:47:01.346378 7f1f2775ae00 -1  ** ERROR: osd init failed: (13) Permission denied
>
> If this has already been converted and it is ceph-volume trying to
> start it, could you please try activating that one OSD with:
>
>     ceph-volume lvm activate <OSD ID> <OSD FSID>
>
> And paste the output here, and then if possible, capture the relevant
> part of that CLI call from /var/log/ceph/ceph-volume.log
> >
> >
> >
> >
> > On Mon, Nov 5, 2018 at 12:34 PM, Hector Martin <hec...@marcansoft.com>
> wrote:
> >>
> >> On 11/6/18 2:01 AM, Hayashida, Mami wrote:
> >> > I did find in /etc/fstab entries like this for those 10 disks
> >> >
> >> > /dev/sdh1   /var/lib/ceph/osd/ceph-60  xfs noatime,nodiratime 0 0
> >> >
> >> > Should I comment all 10 of them out (for osd.{60-69}) and try
> rebooting
> >> > again?
> >>
> >> Yes. Anything that references any of the old partitions that don't exist
> >> (/dev/sdh1 etc) should be removed. The disks are now full-disk LVM PVs
> >> and should have no partitions.
> >>
> >> --
> >> Hector Martin (hec...@marcansoft.com)
> >> Public Key: https://mrcn.st/pub
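
A note for anyone following along later: those stale fstab entries can be commented out in one pass before rebooting, and pvs/lsblk will confirm the converted disks are now whole-disk PVs. A sketch, assuming the ten disks are /dev/sdh through /dev/sdq (only /dev/sdh1 is named above, so that device range is a guess; adjust it to the actual devices):

```shell
# Comment out the stale xfs mounts for the converted OSDs
# (a backup is written to /etc/fstab.bak). The sd[h-q] range
# is an assumption.
sed -i.bak -E 's|^(/dev/sd[h-q]1[[:space:]]+/var/lib/ceph/osd/ceph-6[0-9])|#\1|' /etc/fstab

# Verify the disks now show up as whole-disk LVM PVs with no partitions:
pvs
lsblk /dev/sdh
```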
> >
> >
> >
> >
> > --
> > Mami Hayashida
> > Research Computing Associate
> >
> > Research Computing Infrastructure
> > University of Kentucky Information Technology Services
> > 301 Rose Street | 102 James F. Hardymon Building
> > Lexington, KY 40506-0495
> > mami.hayash...@uky.edu
> > (859)323-7521
>
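
One common cause of a "(13) Permission denied" on block.db in a log like the one above is that the device node behind the block.db symlink is still owned by root rather than the ceph user. A sketch of checking and, if needed, fixing that, assuming the standard ceph:ceph service user and the osd.60 path from the log:

```shell
# Check what the block.db symlink points at, and who owns the
# underlying device node:
ls -l  /var/lib/ceph/osd/ceph-60/block.db   # the symlink itself
ls -lL /var/lib/ceph/osd/ceph-60/block.db   # its target device

# If the device shows root:root, handing it to the ceph user
# lets the OSD open it (assumes the usual ceph:ceph user):
chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-60/block.db)"
```

(Note that a plain chown may not survive a reboot; a udev rule or re-running activate is the durable fix.)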



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
