Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread kevin parrikar
Thank you all for your suggestions:


This is what I followed for the upgrade:

Hammer to Jewel:
apt-get dist-upgrade on each node separately.
stopped monitor process;
stopped osd;
changed permission to ceph:ceph recursively for /var/lib/ceph/
restarted monitor process;
restarted osd;

ceph osd set require_jewel_osds;
ceph osd set sortbitwise;
verified with
ceph -s
rados bench
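Put together as one script, that Hammer-to-Jewel pass on each node looks roughly like this (a sketch assuming Upstart service names on Ubuntu Trusty; adjust the stop/start commands for your init system):

```shell
# Hammer -> Jewel, run on each node in turn
apt-get update && apt-get dist-upgrade -y

stop ceph-mon-all && stop ceph-osd-all     # stop mon and osd daemons
chown -R ceph:ceph /var/lib/ceph/          # jewel daemons run as ceph:ceph
start ceph-mon-all && start ceph-osd-all

# once every daemon is on jewel:
ceph osd set require_jewel_osds
ceph osd set sortbitwise

ceph -s                                    # verify health before moving on
```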

Jewel to Luminous
apt-get dist-upgrade on each node.
stopped monitor process;
stopped osd process;
restarted monitor;
restarted osd process;
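Per the Luminous release notes, the Jewel-to-Luminous pass has two extra steps the list above omits: deploying a ceph-mgr daemon and setting the release flag at the end. A sketch (the mgr hostname is a placeholder):

```shell
# Jewel -> Luminous
apt-get update && apt-get dist-upgrade -y

restart ceph-mon-all                  # upgrade and restart the monitors first
ceph-deploy mgr create mon-host-1     # Luminous requires a ceph-mgr daemon
restart ceph-osd-all

# only after "ceph osd versions" shows 12.2.x everywhere:
ceph osd require-osd-release luminous
```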

Result:
osd is not coming up

Steps tried to resolve:
rebooted all nodes;


The upgrade from Hammer to Jewel was almost smooth, but after Jewel to
Luminous the OSD is not coming up.

Any suggestions on where to check for clues?
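A few places to start looking (a sketch; paths assume the default cluster name and osd.0):

```shell
ceph versions                                   # which daemons report which release
tail -n 50 /var/log/ceph/ceph-osd.0.log         # the OSD's own log usually names the failure
ls -l /var/lib/ceph/osd/ceph-0/                 # files left owned by root stop jewel+ OSDs
ceph osd dump | grep -E 'sortbitwise|require'   # confirm the upgrade flags are set
```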


Regards,
Kev

On Wed, Sep 13, 2017 at 1:17 AM, Lincoln Bryant wrote:

> Did you set the sortbitwise flag, fix OSD ownership (or use the "setuser
> match path" option) and such after upgrading from Hammer to Jewel? I am not
> sure if that matters here, but it might help if you elaborate on your
> upgrade process a bit.
>
> --Lincoln
>
> > On Sep 12, 2017, at 2:22 PM, kevin parrikar 
> wrote:
> >
> > Can some one please help me on this.I have no idea how to bring up the
> cluster to operational state.
> >
> > Thanks,
> > Kev
> >

Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread Lincoln Bryant
Did you set the sortbitwise flag, fix OSD ownership (or use the "setuser match 
path" option) and such after upgrading from Hammer to Jewel? I am not sure if 
that matters here, but it might help if you elaborate on your upgrade process a 
bit.
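Concretely, the checks Lincoln suggests (a sketch; the config option goes in ceph.conf and is the documented alternative to chowning the data directories):

```shell
# Are sortbitwise and require_jewel_osds actually set?
ceph osd dump | grep -E 'sortbitwise|require_jewel_osds'

# Set them if missing
ceph osd set sortbitwise
ceph osd set require_jewel_osds

# Did the OSD data dirs get chowned to ceph:ceph?
ls -ld /var/lib/ceph/osd/ceph-*

# Alternative to chowning: keep the old (root) owner and add to ceph.conf:
#   [osd]
#   setuser match path = /var/lib/ceph/$type/$cluster-$id
```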

--Lincoln

> On Sep 12, 2017, at 2:22 PM, kevin parrikar  wrote:
> 
> Can some one please help me on this.I have no idea how to bring up the 
> cluster to operational state.
> 
> Thanks,
> Kev
> 
> On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar  
> wrote:
> hello All,
> I am trying to upgrade a small test setup having one monitor and one osd node 
> which is in hammer release .

Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread Steve Taylor
It seems like I've seen similar behavior in the past with the changing of the 
osd user context between hammer and jewel. Hammer ran osds as root, and they 
switched to running as the ceph user in jewel. That doesn't really seem to 
match your scenario perfectly, but I think the errors you're seeing in the logs 
match what I've seen in that situation before.

If that's the issue, you need to chown everything under /var/lib/ceph/osd to be 
owned by ceph instead of root as documented in the jewel release notes.
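The chown Steve refers to, as given in the Jewel release notes (a sketch; stop the daemons first, Upstart service names assumed):

```shell
stop ceph-osd-all
chown -R ceph:ceph /var/lib/ceph/osd   # and the journal device/file, if separate
start ceph-osd-all
```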






Steve Taylor | Senior Software Engineer | StorageCraft Technology 
Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799 |






On Wed, 2017-09-13 at 00:52 +0530, kevin parrikar wrote:
Can some one please help me on this.I have no idea how to bring up the cluster 
to operational state.

Thanks,
Kev

On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar 
> wrote:
hello All,
I am trying to upgrade a small test setup having one monitor and one osd node 
which is in hammer release .


I updating from hammer to jewel using package update commands and things are 
working.
How ever after updating from Jewel to Luminous, i am facing issues with osd 
failing to start .


Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread kevin parrikar
Can some one please help me on this.I have no idea how to bring up the
cluster to operational state.

Thanks,
Kev

On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar 
wrote:

> hello All,
> I am trying to upgrade a small test setup having one monitor and one osd
> node which is in hammer release .
>
>
> I updating from hammer to jewel using package update commands and things
> are working.
> How ever after updating from Jewel to Luminous, i am facing issues with
> osd failing to start .
>
> upgraded packages on both nodes and i can see in "ceph mon versions" is
> successful

[ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-11 Thread kevin parrikar
hello All,
I am trying to upgrade a small test setup having one monitor and one OSD
node which is on the hammer release.

I updated from hammer to jewel using package update commands and things
are working.
However, after updating from Jewel to Luminous, I am facing issues with
the OSD failing to start.

I upgraded packages on both nodes and can see that "ceph mon versions" is
successful:





 ceph mon versions
{
    "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 1
}

but ceph osd versions returns an empty string:

ceph osd versions
{}

dpkg --list|grep ceph
ii  ceph           12.2.0-1trusty  amd64  distributed storage and file system
ii  ceph-base      12.2.0-1trusty  amd64  common ceph daemon libraries and management tools
ii  ceph-common    12.2.0-1trusty  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy    1.5.38          all    Ceph-deploy is an easy to use configuration tool
ii  ceph-mgr       12.2.0-1trusty  amd64  manager for the ceph distributed storage system
ii  ceph-mon       12.2.0-1trusty  amd64  monitor server for the ceph storage system
ii  ceph-osd       12.2.0-1trusty  amd64  OSD server for the ceph storage system
ii  libcephfs1     10.2.9-1trusty  amd64  Ceph distributed file system client library
ii  libcephfs2     12.2.0-1trusty  amd64  Ceph distributed file system client library
ii  python-cephfs  12.2.0-1trusty  amd64  Python 2 libraries for the Ceph libcephfs library


from OSD log:
2017-09-12 05:38:10.618023 7fc307a10d00  0 set uid:gid to 64045:64045
(ceph:ceph)
2017-09-12 05:38:10.618618 7fc307a10d00  0 ceph version 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process
(unknown), pid 21513
2017-09-12 05:38:10.624473 7fc307a10d00  0 pidfile_write: ignore empty
--pid-file
2017-09-12 05:38:10.633099 7fc307a10d00  0 load: jerasure load: lrc load:
isa
2017-09-12 05:38:10.633657 7fc307a10d00  0
filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
2017-09-12 05:38:10.635164 7fc307a10d00  0
filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
2017-09-12 05:38:10.637503 7fc307a10d00  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP
ioctl is disabled via 'filestore fiemap' config option
2017-09-12 05:38:10.637833 7fc307a10d00  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features:
SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-09-12 05:38:10.637923 7fc307a10d00  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice()
is disabled via 'filestore splice' config option
2017-09-12 05:38:10.639047 7fc307a10d00  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features:
syncfs(2) syscall fully supported (by glibc and kernel)
2017-09-12 05:38:10.639501 7fc307a10d00  0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_feature: extsize is
disabled by conf
2017-09-12 05:38:10.640417 7fc307a10d00  0
filestore(/var/lib/ceph/osd/ceph-0) start omap initiation
2017-09-12 05:38:10.640842 7fc307a10d00  1 leveldb: Recovering log #102
2017-09-12 05:38:10.642690 7fc307a10d00  1 leveldb: Delete type=0 #102

2017-09-12 05:38:10.643128 7fc307a10d00  1 leveldb: Delete type=3 #101

2017-09-12 05:38:10.649616 7fc307a10d00  0
filestore(/var/lib/ceph/osd/ceph-0) mount(1758): enabling WRITEAHEAD
journal mode: checkpoint is not enabled
2017-09-12 05:38:10.654071 7fc307a10d00 -1 journal FileJournal::_open:
disabling aio for non-block journal.  Use journal_force_aio to force use of
aio anyway
2017-09-12 05:38:10.654590 7fc307a10d00  1 journal _open
/var/lib/ceph/osd/ceph-0/journal fd 28: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
2017-09-12 05:38:10.655353 7fc307a10d00  1 journal _open
/var/lib/ceph/osd/ceph-0/journal fd 28: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
2017-09-12 05:38:10.656985 7fc307a10d00  1
filestore(/var/lib/ceph/osd/ceph-0) upgrade(1365)
2017-09-12 05:38:10.657798 7fc307a10d00  0 _get_class not permitted to load
sdk
2017-09-12 05:38:10.658675 7fc307a10d00  0 _get_class not permitted to load
lua
2017-09-12 05:38:10.658931 7fc307a10d00  0 
/build/ceph-12.2.0/src/cls/cephfs/cls_cephfs.cc:197: loading cephfs
2017-09-12 05:38:10.659320 7fc307a10d00  0 
/build/ceph-12.2.0/src/cls/hello/cls_hello.cc:296: loading cls_hello
2017-09-12 05:38:10.662854 7fc307a10d00  0 _get_class not permitted to load
kvs
2017-09-12 05:38:10.663621 7fc307a10d00 -1 osd.0 0 failed to load OSD map
for epoch 32, got 0 bytes
2017-09-12 05:38:10.70 7fc307a10d00 -1
/build/ceph-12.2.0/src/osd/OSD.h: In function 'OSDMapRef
OSDService::get_map(epoch_t)' thread 7fc307a10d00 time 2017-09-12