Hi,
I have followed the storage cluster Quick Start instructions on my CentOS 7 box
more than 10 times, including complete cleanup and reinstallation, and I fail at
the same step every time: "ceph-deploy osd activate ...". On my last attempt I
created the OSD directory on the local drive to avoid some permission warnings,
ran "ceph-deploy osd prepare ..", and then:
[albert@admin-node my-cluster]$ ceph-deploy osd activate
admin-node:/home/albert/my-cluster/cephd2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/albert/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd activate
admin-node:/home/albert/my-cluster/cephd2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
<ceph_deploy.conf.cephdeploy.Conf instance at 0xe82518>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at
0xe75c08>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('admin-node',
'/home/albert/my-cluster/cephd2', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
admin-node:/home/albert/my-cluster/cephd2:
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host admin-node disk
/home/albert/my-cluster/cephd2
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate
--mark-init systemd --mount /home/albert/my-cluster/cephd2
[admin-node][WARNIN] main_activate: path = /home/albert/my-cluster/cephd2
[admin-node][WARNIN] activate: Cluster uuid is
8f9bf207-6c6a-4764-8b9e-63f70810837b
[admin-node][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph
--show-config-value=fsid
[admin-node][WARNIN] Traceback (most recent call last):
[admin-node][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[admin-node][WARNIN] load_entry_point('ceph-disk==1.0.0',
'console_scripts', 'ceph-disk')()
[admin-node][WARNIN] File
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
[admin-node][WARNIN] main(sys.argv[1:])
[admin-node][WARNIN] File
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
[admin-node][WARNIN] args.func(args)
[admin-node][WARNIN] File
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3277, in
main_activate
[admin-node][WARNIN] init=args.mark_init,
[admin-node][WARNIN] File
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3097, in activate_dir
[admin-node][WARNIN] (osd_id, cluster) = activate(path,
activate_key_template, init)
[admin-node][WARNIN] File
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3173, in activate
[admin-node][WARNIN] ' with fsid %s' % ceph_fsid)
[admin-node][WARNIN] ceph_disk.main.Error: Error: No cluster conf found in
/etc/ceph with fsid 8f9bf207-6c6a-4764-8b9e-63f70810837b
[admin-node][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
/usr/sbin/ceph-disk -v activate --mark-init systemd --mount
/home/albert/my-cluster/cephd2
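As far as I can tell from the traceback, ceph-disk is comparing the fsid in
/etc/ceph/ceph.conf against the cluster uuid stored in the prepared OSD
directory (the ceph_fsid file), and failing because they do not match. A
minimal sketch of that comparison, using temp files as stand-ins for the real
paths (the uuid value here is just the one from the log above):

```shell
# Stand-ins for /etc/ceph/ceph.conf and /home/albert/my-cluster/cephd2/ceph_fsid
conf=$(mktemp); osd=$(mktemp)
printf 'fsid = 8f9bf207-6c6a-4764-8b9e-63f70810837b\n' > "$conf"  # from ceph.conf [global]
printf '8f9bf207-6c6a-4764-8b9e-63f70810837b\n' > "$osd"          # from the OSD dir

# Extract and compare the two values
conf_fsid=$(awk -F' = ' '/^fsid/ {print $2}' "$conf")
osd_fsid=$(cat "$osd")
if [ "$conf_fsid" = "$osd_fsid" ]; then
    echo "match"
else
    echo "MISMATCH: conf=$conf_fsid osd=$osd_fsid"
fi
rm -f "$conf" "$osd"
```

On the real machine that would mean checking `grep fsid /etc/ceph/ceph.conf`
against `cat /home/albert/my-cluster/cephd2/ceph_fsid`; a mismatch would
suggest the conf in /etc/ceph is stale from an earlier install attempt.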
I could use some help here; any pointers would be really appreciated.
Albert
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com