Hi,
after I copied /lib/lsb/* over (it did not exist on my new CentOS 7.2
system), the next attempt gave:
# service ceph start
Error EINVAL: entity osd.18 exists but key does not match
ERROR:ceph-disk:Failed to activate
ceph-disk: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name',
'client.bootstrap-osd', '--keyring',
'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.18',
'-i', '/var/lib/ceph/tmp/mnt.aK93bJ/keyring', 'osd', 'allow *', 'mon',
'allow profile osd']' returned non-zero exit status 22
ceph-disk: Error: One or more partitions failed to activate
After I deleted the old ceph auth entries with:

ceph auth del osd.<id>

and repeated the whole procedure, it started to work.
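For the record, a sketch of the cleanup that got me past the EINVAL (osd.18
and the device paths are from my setup; adapt the id to whichever OSD shows
"exists but key does not match"):

```shell
# Remove the stale auth entry whose key no longer matches the newly
# prepared OSD, so 'auth add' can register a fresh key for it.
ceph auth del osd.18

# Then repeat the OSD creation; activation should now succeed.
ceph-deploy osd create newceph2:/dev/sdc:/dev/sda
```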
So, all in all, in the very end:
Thank you, dear vendors, for managing to use neither systemd, nor upstart,
nor sysv consistently, but all of them at once, mixed, and partly removed
within a major release...
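For anyone hitting the same missing /lib/lsb/init-functions: rather than
copying the files over from another box, the file should come from a package
(assumption: stock CentOS 7 repos, where redhat-lsb-core provides it):

```shell
# /etc/init.d/ceph sources /lib/lsb/init-functions, which a minimal
# CentOS 7 install does not ship. It is provided by redhat-lsb-core:
yum install -y redhat-lsb-core

# Verify the file the init script needs is now in place.
test -f /lib/lsb/init-functions && echo "init-functions present"
```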
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402, registered at the district court (Amtsgericht) of Hanau
Managing director: Oliver Dzombic
Tax no.: 35 236 3622 1
VAT ID: DE274086107
On 24.03.2016 at 02:34, Oliver Dzombic wrote:
> Hi,
>
> I try to add a node to an existing cluster:
>
> ceph-deploy install newceph2 --release hammer
>
> works fine.
>
> Then I try to add an OSD:
>
> ceph-deploy osd create newceph2:/dev/sdc:/dev/sda
>
> works fine:
>
> [newceph2][WARNIN] Executing /sbin/chkconfig ceph on
> [newceph2][INFO ] checking OSD status...
> [newceph2][INFO ] Running command: ceph --cluster=ceph osd stat
> --format=json
> [ceph_deploy.osd][DEBUG ] Host newceph2 is now ready for osd use.
>
> ceph -s will show:
>
> osdmap e20602: 19 osds: 18 up, 18 in
>
> ceph osd tree will show:
>
> ceph osd tree
> ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 97.31982 root default
> -2 16.13997 host ceph1
> 0 5.37999 osd.0 up 1.0 1.0
> 1 5.37999 osd.1 up 1.0 1.0
> 2 5.37999 osd.2 up 1.0 1.0
> -3 16.13997 host ceph2
> 3 5.37999 osd.3 up 1.0 1.0
> 4 5.37999 osd.4 up 1.0 1.0
> 5 5.37999 osd.5 up 1.0 1.0
> -4 16.13997 host ceph3
> 6 5.37999 osd.6 up 1.0 1.0
> 7 5.37999 osd.7 up 1.0 1.0
> 8 5.37999 osd.8 up 1.0 1.0
> -5 16.13997 host ceph4
> 9 5.37999 osd.9 up 1.0 1.0
> 10 5.37999 osd.10 up 1.0 1.0
> 11 5.37999 osd.11 up 1.0 1.0
> -6 16.37997 host ceph5
> 12 5.45999 osd.12 up 1.0 1.0
> 13 5.45999 osd.13 up 1.0 1.0
> 14 5.45999 osd.14 up 1.0 1.0
> -7 16.37997 host ceph6
> 15 5.45999 osd.15 up 1.0 1.0
> 16 5.45999 osd.16 up 1.0 1.0
> 17 5.45999 osd.17 up 1.0 1.0
> 18 0 osd.18 down 0 1.0
>
>
> The last lines of the osd log on the node will show:
>
> 2016-03-24 11:25:57.454637 7fa994474880 -1
> filestore(/var/lib/ceph/tmp/mnt.gFf0AJ) could not find
> 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
>
> 2016-03-24 11:25:57.621629 7fa994474880 1 journal close
> /var/lib/ceph/tmp/mnt.gFf0AJ/journal
>
> 2016-03-24 11:25:57.626038 7fa994474880 -1 created object store
> /var/lib/ceph/tmp/mnt.gFf0AJ journal
> /var/lib/ceph/tmp/mnt.gFf0AJ/journal for osd.18 fsid
> 292e15e5-bc38-41b0-9e7b-6f5ef1cf2e53
>
> 2016-03-24 11:25:57.626131 7fa994474880 -1 auth: error reading file:
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring: can't open
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring: (2) No such file or directory
>
> 2016-03-24 11:25:57.631470 7fa994474880 -1 created new key in keyring
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring
>
>
> That's hammer:
>
> ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
>
> on centos 7.2
>
> No systemctl / service commands are working:
>
> # service ceph
> /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or
> directory
>
>
> --
>
> 2-3 months ago this was just working fine. In the meantime something
> seems to have changed in the ceph-deploy code.
>
> Any suggestions? I more or less urgently need to add OSDs :/
>
> Thank you !
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com