Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-20 Thread Dai Xiang
On Tue, Nov 14, 2017 at 11:12:47AM +0100, Iban Cabrillo wrote:
> Hi,
>    You should do something like `ceph osd in osd.${num}`.
>    But if this is your tree, I do not see any OSD available in your cluster
> at the moment; it should look something like this example:
> 
> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
> -1   58.21509 root default
>
> -2   29.12000     host cephosd01
>  1   hdd  3.64000         osd.1      up      1.0     1.0
> ..
> -3   29.09509     host cephosd02
>  0   hdd  3.63689         osd.0      up      1.0     1.0
> ..
> 
> Please have a look at the guide:
> http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/


In fact I installed Ceph inside Docker. Since Docker does not support creating
partitions at runtime, I use `parted` on the host to create the partition first,
then start the container and run the Ceph create step again. The debug log looks
all right; where else can I get more detailed information?
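
Concretely, a minimal sketch of that workflow (the device name /dev/sdb and the
single data partition are only examples, and the exact ceph-deploy syntax
depends on the ceph-deploy release):

# on the host, before starting the container, since the container
# cannot repartition the disk at runtime:
parted --script /dev/sdb mklabel gpt mkpart primary 0% 100%

# then inside the container, retry the OSD creation and verify:
ceph-deploy osd create 172.17.0.4:/dev/sdb1
ceph osd tree    # the OSD should now appear under its host, up and in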
> 
> 
> Regards, I
> 
> 2017-11-14 10:58 GMT+01:00 Dai Xiang :
> 
> > On Tue, Nov 14, 2017 at 10:52:00AM +0100, Iban Cabrillo wrote:
> > > Hi Dai Xiang,
> > >   There is no OSD available in your cluster at this moment, so you can't
> > > read/write or mount anything. Maybe the OSDs are configured but they are
> > > out. Could you please paste the output of the `ceph osd tree` command
> > > so we can see your OSD status?
> >
> > ID CLASS WEIGHT TYPE NAME     STATUS REWEIGHT PRI-AFF
> > -1       0 root default
> >
> > It is out indeed, but I really do not know how to fix it.
> >
> > --
> > Best Regards
> > Dai Xiang
> > >
> > > Regards, I
> > >
> > >
> > > 2017-11-14 10:39 GMT+01:00 Dai Xiang :
> > >
> > > > On Tue, Nov 14, 2017 at 09:21:56AM +, Linh Vu wrote:
> > > > > Odd, you only got 2 mons and 0 osds? Your cluster build looks
> > incomplete.
> > > >
> > > > But from the log, the OSDs seem normal:
> > > > [172.17.0.4][INFO  ] checking OSD status...
> > > > [172.17.0.4][DEBUG ] find the location of an executable
> > > > [172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > > > --format=json
> > > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
> > > > ...
> > > >
> > > > [172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
> > > > [172.17.0.5][INFO  ] checking OSD status...
> > > > [172.17.0.5][DEBUG ] find the location of an executable
> > > > [172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > > > --format=json
> > > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
> > > >
> > > > --
> > > > Best Regards
> > > > Dai Xiang
> > > > >
> > > > > Get Outlook for Android
> > > > >
> > > > > 
> > > > > From: Dai Xiang 
> > > > > Sent: Tuesday, November 14, 2017 6:12:27 PM
> > > > > To: Linh Vu
> > > > > Cc: ceph-users@lists.ceph.com
> > > > > Subject: Re: mount failed since failed to load ceph kernel module
> > > > >
> > > > > On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> > > > > > Your kernel is way too old for CephFS Luminous. I'd use one of the
> > > > newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL
> > 7.4.
> > > > >
> > > > > I have updated the kernel to the newest version:
> > > > > [root@d32f3a7b6eb8 ~]$ uname -a
> > > > > Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12
> > 20:21:04
> > > > EST 2017 x86_64 x86_64 x86_64 GNU/Linux
> > > > > [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> > > > > CentOS Linux release 7.2.1511 (Core)
> > > > >
> > > > > But it still failed:
> > > > > [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t
> > > > ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > > > failed to load ceph kernel module (1)
> > > > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > > > mount error 2 = No such file or directory
> > > > > [root@d32f3a7b6eb8 ~]$ ll /cephfs
> > > > > total 0
> > > > >
> > > > > [root@d32f3a7b6eb8 ~]$ ceph -s
> > > > >   cluster:
> > > > > id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > > > > health: HEALTH_WARN
> > > > > Reduced data availability: 128 pgs inactive
> > > > > Degraded data redundancy: 128 pgs unclean
> > > > >
> > > > >   services:
> > > > > mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > > > > mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > > > > mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > > > > osd: 0 osds: 0 up, 0 in
> > > > >
> > > > >   data:
> > > > > pools:   2 pools, 128 pgs
> > > > > objects: 0 objects, 0 bytes
> > > > > usage:   0 kB used, 0 kB / 0 kB avail
> > > > > pgs: 100.000% pgs unknown
> > > > >  128 unknown
> > > > >
> > > > > [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> > > > > ceph  372736  0
> > > > > libceph   315392  1 ceph
> > > > > fscache    65536  3 ceph,nfsv4,nfs
> > > > > libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat

Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-14 Thread Iban Cabrillo
Hi,
   You should do something like `ceph osd in osd.${num}`.
   But if this is your tree, I do not see any OSD available in your cluster
at the moment; it should look something like this example:

ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1   58.21509 root default

-2   29.12000     host cephosd01
 1   hdd  3.64000         osd.1      up      1.0     1.0
..
-3   29.09509     host cephosd02
 0   hdd  3.63689         osd.0      up      1.0     1.0
..

Please have a look at the guide:
http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/
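
Condensed from that guide, creating an OSD with ceph-deploy looks roughly like
this (the hostname and device are examples, and the flags vary a little between
ceph-deploy releases):

ceph-deploy disk list cephosd01
ceph-deploy disk zap cephosd01:/dev/sdb
ceph-deploy osd create cephosd01:/dev/sdb
ceph osd tree    # the new OSD should appear under the host, up and in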


Regards, I

2017-11-14 10:58 GMT+01:00 Dai Xiang :

> On Tue, Nov 14, 2017 at 10:52:00AM +0100, Iban Cabrillo wrote:
> > Hi Dai Xiang,
> >   There is no OSD available in your cluster at this moment, so you can't
> > read/write or mount anything. Maybe the OSDs are configured but they are
> > out. Could you please paste the output of the `ceph osd tree` command
> > so we can see your OSD status?
>
> ID CLASS WEIGHT TYPE NAME     STATUS REWEIGHT PRI-AFF
> -1       0 root default
>
> It is out indeed, but I really do not know how to fix it.
>
> --
> Best Regards
> Dai Xiang
> >
> > Regards, I
> >
> >
> > 2017-11-14 10:39 GMT+01:00 Dai Xiang :
> >
> > > On Tue, Nov 14, 2017 at 09:21:56AM +, Linh Vu wrote:
> > > > Odd, you only got 2 mons and 0 osds? Your cluster build looks
> incomplete.
> > >
> > > But from the log, the OSDs seem normal:
> > > [172.17.0.4][INFO  ] checking OSD status...
> > > [172.17.0.4][DEBUG ] find the location of an executable
> > > [172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > > --format=json
> > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
> > > ...
> > >
> > > [172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
> > > [172.17.0.5][INFO  ] checking OSD status...
> > > [172.17.0.5][DEBUG ] find the location of an executable
> > > [172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > > --format=json
> > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
> > >
> > > --
> > > Best Regards
> > > Dai Xiang
> > > >
> > > > Get Outlook for Android
> > > >
> > > > 
> > > > From: Dai Xiang 
> > > > Sent: Tuesday, November 14, 2017 6:12:27 PM
> > > > To: Linh Vu
> > > > Cc: ceph-users@lists.ceph.com
> > > > Subject: Re: mount failed since failed to load ceph kernel module
> > > >
> > > > On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> > > > > Your kernel is way too old for CephFS Luminous. I'd use one of the
> > > newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL
> 7.4.
> > > >
> > > > I have updated the kernel to the newest version:
> > > > [root@d32f3a7b6eb8 ~]$ uname -a
> > > > Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12
> 20:21:04
> > > EST 2017 x86_64 x86_64 x86_64 GNU/Linux
> > > > [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> > > > CentOS Linux release 7.2.1511 (Core)
> > > >
> > > > But it still failed:
> > > > [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t
> > > ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > > failed to load ceph kernel module (1)
> > > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > > mount error 2 = No such file or directory
> > > > [root@d32f3a7b6eb8 ~]$ ll /cephfs
> > > > total 0
> > > >
> > > > [root@d32f3a7b6eb8 ~]$ ceph -s
> > > >   cluster:
> > > > id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > > > health: HEALTH_WARN
> > > > Reduced data availability: 128 pgs inactive
> > > > Degraded data redundancy: 128 pgs unclean
> > > >
> > > >   services:
> > > > mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > > > mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > > > mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > > > osd: 0 osds: 0 up, 0 in
> > > >
> > > >   data:
> > > > pools:   2 pools, 128 pgs
> > > > objects: 0 objects, 0 bytes
> > > > usage:   0 kB used, 0 kB / 0 kB avail
> > > > pgs: 100.000% pgs unknown
> > > >  128 unknown
> > > >
> > > > [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> > > > ceph  372736  0
> > > > libceph   315392  1 ceph
> > > > fscache    65536  3 ceph,nfsv4,nfs
> > > > libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
> > > >
> > > >
> > > > --
> > > > Best Regards
> > > > Dai Xiang
> > > > >
> > > > >
> > > > > Hi!
> > > > >
> > > > > I ran into a confusing issue in Docker, as below:
> > > > >
> > > > > After installing Ceph successfully, I wanted to mount CephFS but it failed:
> > > > >
> > > > > [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v

Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-14 Thread Dai Xiang
On Tue, Nov 14, 2017 at 10:52:00AM +0100, Iban Cabrillo wrote:
> Hi Dai Xiang,
>   There is no OSD available in your cluster at this moment, so you can't
> read/write or mount anything. Maybe the OSDs are configured but they are
> out. Could you please paste the output of the `ceph osd tree` command
> so we can see your OSD status?

ID CLASS WEIGHT TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0 root default

It is out indeed, but I really do not know how to fix it.

-- 
Best Regards
Dai Xiang
> 
> Regards, I
> 
> 
> 2017-11-14 10:39 GMT+01:00 Dai Xiang :
> 
> > On Tue, Nov 14, 2017 at 09:21:56AM +, Linh Vu wrote:
> > > Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.
> >
> > But from the log, the OSDs seem normal:
> > [172.17.0.4][INFO  ] checking OSD status...
> > [172.17.0.4][DEBUG ] find the location of an executable
> > [172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > --format=json
> > [ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
> > ...
> >
> > [172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
> > [172.17.0.5][INFO  ] checking OSD status...
> > [172.17.0.5][DEBUG ] find the location of an executable
> > [172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> > --format=json
> > [ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
> >
> > --
> > Best Regards
> > Dai Xiang
> > >
> > > Get Outlook for Android
> > >
> > > 
> > > From: Dai Xiang 
> > > Sent: Tuesday, November 14, 2017 6:12:27 PM
> > > To: Linh Vu
> > > Cc: ceph-users@lists.ceph.com
> > > Subject: Re: mount failed since failed to load ceph kernel module
> > >
> > > On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> > > > Your kernel is way too old for CephFS Luminous. I'd use one of the
> > newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.
> > >
> > > I have updated the kernel to the newest version:
> > > [root@d32f3a7b6eb8 ~]$ uname -a
> > > Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04
> > EST 2017 x86_64 x86_64 x86_64 GNU/Linux
> > > [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> > > CentOS Linux release 7.2.1511 (Core)
> > >
> > > But it still failed:
> > > [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t
> > ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > failed to load ceph kernel module (1)
> > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > mount error 2 = No such file or directory
> > > [root@d32f3a7b6eb8 ~]$ ll /cephfs
> > > total 0
> > >
> > > [root@d32f3a7b6eb8 ~]$ ceph -s
> > >   cluster:
> > > id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > > health: HEALTH_WARN
> > > Reduced data availability: 128 pgs inactive
> > > Degraded data redundancy: 128 pgs unclean
> > >
> > >   services:
> > > mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > > mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > > mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > > osd: 0 osds: 0 up, 0 in
> > >
> > >   data:
> > > pools:   2 pools, 128 pgs
> > > objects: 0 objects, 0 bytes
> > > usage:   0 kB used, 0 kB / 0 kB avail
> > > pgs: 100.000% pgs unknown
> > >  128 unknown
> > >
> > > [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> > > ceph  372736  0
> > > libceph   315392  1 ceph
> > > fscache    65536  3 ceph,nfsv4,nfs
> > > libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
> > >
> > >
> > > --
> > > Best Regards
> > > Dai Xiang
> > > >
> > > >
> > > > Hi!
> > > >
> > > > I ran into a confusing issue in Docker, as below:
> > > >
> > > > After installing Ceph successfully, I wanted to mount CephFS but it failed:
> > > >
> > > > [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > > failed to load ceph kernel module (1)
> > > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > > mount error 5 = Input/output error
> > > >
> > > > But the Ceph-related kernel modules do exist:
> > > >
> > > > [root@dbffa72704e4 ~]$ lsmod | grep ceph
> > > > ceph  327687  0
> > > > libceph   287066  1 ceph
> > > > dns_resolver   13140  2 nfsv4,libceph
> > > > libcrc32c  12644  3 xfs,libceph,dm_persistent_data
> > > >
> > > > Check the Ceph state (I only set a data disk for the OSD):
> > > >
> > > > [root@dbffa72704e4 ~]$ ceph -s
> > > >   cluster:
> > > > id: 20f51975-303e-446f-903f-04e1feaff7d0
> > > > health: HEALTH_WARN
> > > > Reduced data availability: 128 pgs inactive
> > > > Degraded data redundancy: 128 pgs unclean
> > > >
> > > >   services:
> > > > mon: 2 daemons, quorum dbffa72704e4,5807d12f920e

Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-14 Thread Iban Cabrillo
Hi Dai Xiang,
  There is no OSD available in your cluster at this moment, so you can't
read/write or mount anything. Maybe the OSDs are configured but they are
out. Could you please paste the output of the `ceph osd tree` command
so we can see your OSD status?

Regards, I


2017-11-14 10:39 GMT+01:00 Dai Xiang :

> On Tue, Nov 14, 2017 at 09:21:56AM +, Linh Vu wrote:
> > Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.
>
> But from the log, the OSDs seem normal:
> [172.17.0.4][INFO  ] checking OSD status...
> [172.17.0.4][DEBUG ] find the location of an executable
> [172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> --format=json
> [ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
> ...
>
> [172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
> [172.17.0.5][INFO  ] checking OSD status...
> [172.17.0.5][DEBUG ] find the location of an executable
> [172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat
> --format=json
> [ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
>
> --
> Best Regards
> Dai Xiang
> >
> > Get Outlook for Android
> >
> > 
> > From: Dai Xiang 
> > Sent: Tuesday, November 14, 2017 6:12:27 PM
> > To: Linh Vu
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: mount failed since failed to load ceph kernel module
> >
> > On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> > > Your kernel is way too old for CephFS Luminous. I'd use one of the
> newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.
> >
> > I have updated the kernel to the newest version:
> > [root@d32f3a7b6eb8 ~]$ uname -a
> > Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04
> EST 2017 x86_64 x86_64 x86_64 GNU/Linux
> > [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> >
> > But it still failed:
> > [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t
> ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > failed to load ceph kernel module (1)
> > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > mount error 2 = No such file or directory
> > [root@d32f3a7b6eb8 ~]$ ll /cephfs
> > total 0
> >
> > [root@d32f3a7b6eb8 ~]$ ceph -s
> >   cluster:
> > id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > health: HEALTH_WARN
> > Reduced data availability: 128 pgs inactive
> > Degraded data redundancy: 128 pgs unclean
> >
> >   services:
> > mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > osd: 0 osds: 0 up, 0 in
> >
> >   data:
> > pools:   2 pools, 128 pgs
> > objects: 0 objects, 0 bytes
> > usage:   0 kB used, 0 kB / 0 kB avail
> > pgs: 100.000% pgs unknown
> >  128 unknown
> >
> > [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> > ceph  372736  0
> > libceph   315392  1 ceph
> > fscache    65536  3 ceph,nfsv4,nfs
> > libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
> >
> >
> > --
> > Best Regards
> > Dai Xiang
> > >
> > >
> > > Hi!
> > >
> > > I ran into a confusing issue in Docker, as below:
> > >
> > > After installing Ceph successfully, I wanted to mount CephFS but it failed:
> > >
> > > [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > failed to load ceph kernel module (1)
> > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > mount error 5 = Input/output error
> > >
> > > But the Ceph-related kernel modules do exist:
> > >
> > > [root@dbffa72704e4 ~]$ lsmod | grep ceph
> > > ceph  327687  0
> > > libceph   287066  1 ceph
> > > dns_resolver   13140  2 nfsv4,libceph
> > > libcrc32c  12644  3 xfs,libceph,dm_persistent_data
> > >
> > > Check the Ceph state (I only set a data disk for the OSD):
> > >
> > > [root@dbffa72704e4 ~]$ ceph -s
> > >   cluster:
> > > id: 20f51975-303e-446f-903f-04e1feaff7d0
> > > health: HEALTH_WARN
> > > Reduced data availability: 128 pgs inactive
> > > Degraded data redundancy: 128 pgs unclean
> > >
> > >   services:
> > > mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> > > mgr: dbffa72704e4(active), standbys: 5807d12f920e
> > > mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> > > osd: 0 osds: 0 up, 0 in
> > >
> > >   data:
> > > pools:   2 pools, 128 pgs
> > > objects: 0 objects, 0 bytes
> > > usage:   0 kB used, 0 kB / 0 kB avail
> > > pgs: 100.000% pgs unknown
> > >  128 unknown
> > >
> > > [root@dbffa72704e4 ~]$ ceph version
> > > ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-14 Thread Dai Xiang
On Tue, Nov 14, 2017 at 09:21:56AM +, Linh Vu wrote:
> Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.

But from the log, the OSDs seem normal:
[172.17.0.4][INFO  ] checking OSD status...
[172.17.0.4][DEBUG ] find the location of an executable
[172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat 
--format=json
[ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
...

[172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
[172.17.0.5][INFO  ] checking OSD status...
[172.17.0.5][DEBUG ] find the location of an executable
[172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat 
--format=json
[ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
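
For reference: "Host ... is now ready for osd use" only means that ceph-deploy
finished preparing the host; it does not prove that an OSD daemon was created
and registered in the cluster map. Standard commands that would show the
difference:

ceph osd stat                      # expect "N osds: N up, N in", not 0
ceph osd tree                      # OSDs should be listed under their hosts
systemctl list-units 'ceph-osd@*'  # on each OSD host: any daemons running?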

-- 
Best Regards
Dai Xiang
> 
> Get Outlook for Android
> 
> 
> From: Dai Xiang 
> Sent: Tuesday, November 14, 2017 6:12:27 PM
> To: Linh Vu
> Cc: ceph-users@lists.ceph.com
> Subject: Re: mount failed since failed to load ceph kernel module
> 
> On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> > Your kernel is way too old for CephFS Luminous. I'd use one of the newer 
> > kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.
> 
> I have updated the kernel to the newest version:
> [root@d32f3a7b6eb8 ~]$ uname -a
> Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
> 
> But it still failed:
> [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t ceph -o 
> name=admin,secretfile=/etc/ceph/admin.secret -v
> failed to load ceph kernel module (1)
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> mount error 2 = No such file or directory
> [root@d32f3a7b6eb8 ~]$ ll /cephfs
> total 0
> 
> [root@d32f3a7b6eb8 ~]$ ceph -s
>   cluster:
> id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> health: HEALTH_WARN
> Reduced data availability: 128 pgs inactive
> Degraded data redundancy: 128 pgs unclean
> 
>   services:
> mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> osd: 0 osds: 0 up, 0 in
> 
>   data:
> pools:   2 pools, 128 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  128 unknown
> 
> [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> ceph  372736  0
> libceph   315392  1 ceph
> fscache    65536  3 ceph,nfsv4,nfs
> libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
> 
> 
> --
> Best Regards
> Dai Xiang
> >
> >
> > Hi!
> >
> > I ran into a confusing issue in Docker, as below:
> >
> > After installing Ceph successfully, I wanted to mount CephFS but it failed:
> >
> > [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > failed to load ceph kernel module (1)
> > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > mount error 5 = Input/output error
> >
> > But the Ceph-related kernel modules do exist:
> >
> > [root@dbffa72704e4 ~]$ lsmod | grep ceph
> > ceph  327687  0
> > libceph   287066  1 ceph
> > dns_resolver   13140  2 nfsv4,libceph
> > libcrc32c  12644  3 xfs,libceph,dm_persistent_data
> >
> > Check the Ceph state (I only set a data disk for the OSD):
> >
> > [root@dbffa72704e4 ~]$ ceph -s
> >   cluster:
> > id: 20f51975-303e-446f-903f-04e1feaff7d0
> > health: HEALTH_WARN
> > Reduced data availability: 128 pgs inactive
> > Degraded data redundancy: 128 pgs unclean
> >
> >   services:
> > mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> > mgr: dbffa72704e4(active), standbys: 5807d12f920e
> > mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> > osd: 0 osds: 0 up, 0 in
> >
> >   data:
> > pools:   2 pools, 128 pgs
> > objects: 0 objects, 0 bytes
> > usage:   0 kB used, 0 kB / 0 kB avail
> > pgs: 100.000% pgs unknown
> >  128 unknown
> >
> > [root@dbffa72704e4 ~]$ ceph version
> > ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
> > (stable)
> >
> > My container is based on centos:centos7.2.1511; the kernel is
> > 3.10.0-514.el7.x86_64 (hostname 3e0728877e22).
> >
> > I saw some Ceph-related images on Docker Hub, so I think the procedure
> > above should be fine. Did I miss something important?
> >
> > --
> > Best Regards
> > Dai Xiang
> 



Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-14 Thread Linh Vu
Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.

Get Outlook for Android


From: Dai Xiang 
Sent: Tuesday, November 14, 2017 6:12:27 PM
To: Linh Vu
Cc: ceph-users@lists.ceph.com
Subject: Re: mount failed since failed to load ceph kernel module

On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> Your kernel is way too old for CephFS Luminous. I'd use one of the newer 
> kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.

I have updated the kernel to the newest version:
[root@d32f3a7b6eb8 ~]$ uname -a
Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 
2017 x86_64 x86_64 x86_64 GNU/Linux
[root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

But it still failed:
[root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t ceph -o 
name=admin,secretfile=/etc/ceph/admin.secret -v
failed to load ceph kernel module (1)
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
mount error 2 = No such file or directory
[root@d32f3a7b6eb8 ~]$ ll /cephfs
total 0

[root@d32f3a7b6eb8 ~]$ ceph -s
  cluster:
id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
health: HEALTH_WARN
Reduced data availability: 128 pgs inactive
Degraded data redundancy: 128 pgs unclean

  services:
mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
osd: 0 osds: 0 up, 0 in

  data:
pools:   2 pools, 128 pgs
objects: 0 objects, 0 bytes
usage:   0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs unknown
 128 unknown

[root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
ceph  372736  0
libceph   315392  1 ceph
fscache    65536  3 ceph,nfsv4,nfs
libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat


--
Best Regards
Dai Xiang
>
>
> Hi!
>
> I ran into a confusing issue in Docker, as below:
>
> After installing Ceph successfully, I wanted to mount CephFS but it failed:
>
> [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> failed to load ceph kernel module (1)
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> mount error 5 = Input/output error
>
> But the Ceph-related kernel modules do exist:
>
> [root@dbffa72704e4 ~]$ lsmod | grep ceph
> ceph  327687  0
> libceph   287066  1 ceph
> dns_resolver   13140  2 nfsv4,libceph
> libcrc32c  12644  3 xfs,libceph,dm_persistent_data
>
> Check the Ceph state (I only set a data disk for the OSD):
>
> [root@dbffa72704e4 ~]$ ceph -s
>   cluster:
> id: 20f51975-303e-446f-903f-04e1feaff7d0
> health: HEALTH_WARN
> Reduced data availability: 128 pgs inactive
> Degraded data redundancy: 128 pgs unclean
>
>   services:
> mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> mgr: dbffa72704e4(active), standbys: 5807d12f920e
> mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> osd: 0 osds: 0 up, 0 in
>
>   data:
> pools:   2 pools, 128 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  128 unknown
>
> [root@dbffa72704e4 ~]$ ceph version
> ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
> (stable)
>
> My container is based on centos:centos7.2.1511; the kernel is
> 3.10.0-514.el7.x86_64 (hostname 3e0728877e22).
>
> I saw some Ceph-related images on Docker Hub, so I think the procedure
> above should be fine. Did I miss something important?
>
> --
> Best Regards
> Dai Xiang



Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-13 Thread Dai Xiang
On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> Your kernel is way too old for CephFS Luminous. I'd use one of the newer 
> kernels from elrepo.org. :) We're on 4.12 here on RHEL 7.4.

I have updated the kernel to the newest version:
[root@d32f3a7b6eb8 ~]$ uname -a
Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 
2017 x86_64 x86_64 x86_64 GNU/Linux
[root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 

But it still failed:
[root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t ceph -o 
name=admin,secretfile=/etc/ceph/admin.secret -v
failed to load ceph kernel module (1)
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
mount error 2 = No such file or directory
[root@d32f3a7b6eb8 ~]$ ll /cephfs
total 0

[root@d32f3a7b6eb8 ~]$ ceph -s
  cluster:
id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
health: HEALTH_WARN
Reduced data availability: 128 pgs inactive
Degraded data redundancy: 128 pgs unclean
 
  services:
mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
osd: 0 osds: 0 up, 0 in
 
  data:
pools:   2 pools, 128 pgs
objects: 0 objects, 0 bytes
usage:   0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs unknown
 128 unknown

[root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
ceph  372736  0 
libceph   315392  1 ceph
fscache    65536  3 ceph,nfsv4,nfs
libcrc32c  16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
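
One plausible reading of these symptoms (an interpretation, not something
confirmed in this thread): with 0 OSDs up, the metadata pool has nowhere to
store objects, the MDS stays stuck in up:creating, and the filesystem root is
never written, so the kernel client's lookup can fail with ENOENT (mount
error 2) even though the module loads fine. Standard commands that reflect
this picture:

ceph osd stat   # 0 osds: 0 up, 0 in -> nothing can be stored
ceph mds stat   # stays in up:creating until PGs become active
ceph pg stat    # all PGs unknown/inactive while there are no OSDs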


-- 
Best Regards
Dai Xiang
> 
> 
> Hi!
> 
> I ran into a confusing issue in Docker, as below:
>
> After installing Ceph successfully, I wanted to mount CephFS but it failed:
> 
> [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs 
> -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> failed to load ceph kernel module (1)
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> mount error 5 = Input/output error
> 
> But the Ceph-related kernel modules do exist:
> 
> [root@dbffa72704e4 ~]$ lsmod | grep ceph
> ceph  327687  0
> libceph   287066  1 ceph
> dns_resolver   13140  2 nfsv4,libceph
> libcrc32c  12644  3 xfs,libceph,dm_persistent_data
> 
> Check the Ceph state (I only set a data disk for the OSD):
> 
> [root@dbffa72704e4 ~]$ ceph -s
>   cluster:
> id: 20f51975-303e-446f-903f-04e1feaff7d0
> health: HEALTH_WARN
> Reduced data availability: 128 pgs inactive
> Degraded data redundancy: 128 pgs unclean
> 
>   services:
> mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> mgr: dbffa72704e4(active), standbys: 5807d12f920e
> mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> osd: 0 osds: 0 up, 0 in
> 
>   data:
> pools:   2 pools, 128 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  128 unknown
> 
> [root@dbffa72704e4 ~]$ ceph version
> ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
> (stable)
> 
> My container is based on centos:centos7.2.1511; the kernel is
> 3.10.0-514.el7.x86_64 (hostname 3e0728877e22).
>
> I saw some Ceph-related images on Docker Hub, so I think the procedure
> above should be fine. Did I miss something important?
> 
> --
> Best Regards
> Dai Xiang



Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-13 Thread Dai Xiang
On Tue, Nov 14, 2017 at 02:24:06AM +, Linh Vu wrote:
> Your kernel is way too old for CephFS Luminous. I'd use one of the newer 
> kernels from elrepo.org. :) We're on 4.12 here on RHEL 7.4.

There is still a question:
why does CephFS mount and the kernel module load normally on my
host (3.10.0-327.el7.x86_64)?
Does it mean the kernel used by Docker must be above 4.12.* to enable
CephFS?
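
One likely answer (not confirmed in this thread): a container shares the
host's kernel, so the kernel version inside the image is irrelevant; what
matters is the host kernel and whether the ceph module is loaded there.
"failed to load ceph kernel module" usually just means that modprobe failed
inside the container, which an unprivileged container cannot do. A sketch of
the usual workaround, using standard Docker and modprobe options:

# on the host:
modprobe ceph
lsmod | grep ceph

# then run the container with the privilege that mount(2) needs:
docker run --cap-add SYS_ADMIN ...   # or --privileged, which is broader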

-- 
Best Regards
Dai Xiang
> 
> 
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of 
> xiang@sky-data.cn <xiang@sky-data.cn>
> Sent: Tuesday, 14 November 2017 1:13:47 PM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] mount failed since failed to load ceph kernel module
> 
> Hi!
> 
> I ran into a confusing issue in Docker, as below:
>
> After installing Ceph successfully, I wanted to mount CephFS but it failed:
> 
> [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> failed to load ceph kernel module (1)
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> mount error 5 = Input/output error
> 
> But the Ceph-related kernel modules do exist:
> 
> [root@dbffa72704e4 ~]$ lsmod | grep ceph
> ceph  327687  0
> libceph   287066  1 ceph
> dns_resolver   13140  2 nfsv4,libceph
> libcrc32c  12644  3 xfs,libceph,dm_persistent_data
> 
> Check the Ceph state (I only set a data disk for the OSD):
> 
> [root@dbffa72704e4 ~]$ ceph -s
>   cluster:
> id: 20f51975-303e-446f-903f-04e1feaff7d0
> health: HEALTH_WARN
> Reduced data availability: 128 pgs inactive
> Degraded data redundancy: 128 pgs unclean
> 
>   services:
> mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> mgr: dbffa72704e4(active), standbys: 5807d12f920e
> mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> osd: 0 osds: 0 up, 0 in
> 
>   data:
> pools:   2 pools, 128 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  128 unknown
> 
> [root@dbffa72704e4 ~]$ ceph version
> ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
> (stable)
> 
> My container is based on centos:centos7.2.1511; the kernel is
> 3.10.0-514.el7.x86_64 (hostname 3e0728877e22).
>
> I saw some Ceph-related images on Docker Hub, so I think the procedure
> above should be fine. Did I miss something important?
> 
> --
> Best Regards
> Dai Xiang



Re: [ceph-users] mount failed since failed to load ceph kernel module

2017-11-13 Thread Linh Vu
Your kernel is way too old for CephFS Luminous. I'd use one of the newer 
kernels from elrepo.org. :) We're on 4.12 here on RHEL 7.4.
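
For example, installing a mainline kernel from ELRepo on EL7 typically looks
like this (the release-RPM URL below is from memory and may have moved since):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0   # boot the newest installed kernel, then reboot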


From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of 
xiang@sky-data.cn <xiang@sky-data.cn>
Sent: Tuesday, 14 November 2017 1:13:47 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] mount failed since failed to load ceph kernel module

Hi!

I ran into a confusing issue in Docker, as below:

After installing Ceph successfully, I wanted to mount CephFS but it failed:

[root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
failed to load ceph kernel module (1)
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
mount error 5 = Input/output error

But the Ceph-related kernel modules do exist:

[root@dbffa72704e4 ~]$ lsmod | grep ceph
ceph  327687  0
libceph   287066  1 ceph
dns_resolver   13140  2 nfsv4,libceph
libcrc32c  12644  3 xfs,libceph,dm_persistent_data

Check the Ceph state (I only set a data disk for the OSD):

[root@dbffa72704e4 ~]$ ceph -s
  cluster:
id: 20f51975-303e-446f-903f-04e1feaff7d0
health: HEALTH_WARN
Reduced data availability: 128 pgs inactive
Degraded data redundancy: 128 pgs unclean

  services:
mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
mgr: dbffa72704e4(active), standbys: 5807d12f920e
mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
osd: 0 osds: 0 up, 0 in

  data:
pools:   2 pools, 128 pgs
objects: 0 objects, 0 bytes
usage:   0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs unknown
 128 unknown

[root@dbffa72704e4 ~]$ ceph version
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

My container is based on centos:centos7.2.1511; the kernel is
3.10.0-514.el7.x86_64 (hostname 3e0728877e22).

I saw some Ceph-related images on Docker Hub, so I think the procedure
above should be fine. Did I miss something important?

--
Best Regards
Dai Xiang


[ceph-users] mount failed since failed to load ceph kernel module

2017-11-13 Thread xiang....@sky-data.cn
Hi! 

I ran into a confusing issue in Docker, as below:

After installing Ceph successfully, I wanted to mount CephFS but it failed:

[root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o 
name=admin,secretfile=/etc/ceph/admin.secret -v 
failed to load ceph kernel module (1) 
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret 
mount error 5 = Input/output error 

But the Ceph-related kernel modules do exist:

[root@dbffa72704e4 ~]$ lsmod | grep ceph 
ceph          327687  0
libceph       287066  1 ceph
dns_resolver   13140  2 nfsv4,libceph
libcrc32c      12644  3 xfs,libceph,dm_persistent_data

Check the Ceph state (I only set a data disk for the OSD):

[root@dbffa72704e4 ~]$ ceph -s 
  cluster:
    id: 20f51975-303e-446f-903f-04e1feaff7d0
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive
            Degraded data redundancy: 128 pgs unclean

  services:
    mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
    mgr: dbffa72704e4(active), standbys: 5807d12f920e
    mds: cephfs-1/1/1 up {0=5807d12f920e=up:creating}, 1 up:standby
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   2 pools, 128 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     100.000% pgs unknown
             128 unknown

[root@dbffa72704e4 ~]$ ceph version 
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
(stable) 

My container is based on centos:centos7.2.1511; the kernel is
3.10.0-514.el7.x86_64 (hostname 3e0728877e22).

I saw some Ceph-related images on Docker Hub, so I think the procedure
above should be fine. Did I miss something important?
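
For completeness, a typical kernel-mount invocation spells out the default
monitor port and the secret file explicitly (standard mount.ceph usage; the
failure above is more likely the module load and the empty OSD set than the
mount syntax):

mount -t ceph 172.17.0.4:6789:/ /cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret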

-- 
Best Regards 
Dai Xiang 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com