Thanks for the pointer about not installing Ceph on the admin node. Should I
explicitly remove it?
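(If it helps, I can check what is actually installed on the admin node with something like the following, assuming an RPM-based host:

rpm -qa | grep ceph     # list any installed Ceph packages
sudo yum remove ceph    # would remove only the ceph package, leaving ceph-deploy in place

I'll hold off on removing anything until you confirm.)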
Regarding /etc/ceph, it's there:
[ceph@domUca s2cCluster]$ ls -la /etc/ceph
total 20
drwxr-xr-x. 2 root root 4096 Mar 12 01:02 .
drwxr-xr-x. 77 root root 4096 Mar 12 03:41 ..
-rw-r--r--. 1 root root 64 Mar 12 01:02 ceph.client.admin.keyring
-rw-r--r--. 1 root root 229 Mar 12 01:02 ceph.conf
-rwxr-xr-x. 1 root root 92 Dec 20 22:47 rbdmap
[ceph@domUca s2cCluster]$ ceph-deploy admin domUca
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy admin domUca
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to domUca
[domUca][DEBUG ] connected to host: domUca
[domUca][DEBUG ] detect platform information from remote host
[domUca][DEBUG ] detect machine type
[domUca][DEBUG ] get remote short hostname
[domUca][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
[ceph@domUca s2cCluster]$
This looks like a bug to me.
On 2014 Mar 12, at 07:52, Ashish Chandra wrote:
Hi,
Have you created the /etc/ceph directory on your "domUca" node? It seems to me this
directory is missing.
Also, when installing Ceph from the quick start, we are not supposed to install Ceph
on the admin node.
If the above directory is missing, please create it and run the command
ceph-deploy admin domUca (to update the conf on that node).
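For example, on the admin node (a minimal sketch, assuming the default paths and the hostname from this thread):

sudo mkdir -p /etc/ceph      # only needed if the directory is missing
ceph-deploy admin domUca     # push ceph.conf and the admin keyring to domUca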
Thanks and Regards
Ashish Chandra
Cloud Engineer, Reliance Jio
On Wed, Mar 12, 2014 at 6:42 AM, Mark s2c
<[email protected]> wrote:
Hello
I followed everything in the setup documentation while setting up a test cluster on
an XCP install and got this:
Invoked (1.3.5): /usr/bin/ceph-deploy admin domUs1 domUs2 domUs3 domUca
Pushing admin keys and conf to domUs1
connected to host: domUs1
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUs2
connected to host: domUs2
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUs3
connected to host: domUs3
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUca
connected to host: domUca
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Unhandled exception in thread started by <function run_and_release at 0xc7bb90>
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib64/python2.6/site-packages/abrt_exception_handler.py", line
204,,
in <lambda>
sys.excepthook = lambda etype, value, tb: handleMyException((etype, value,
tt
b))
TypeError: 'NoneType' object is not callable
Original exception was:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
with self._running_lock:
File "/usr/lib64/python2.6/threading.py", line 117, in acquire
me = _get_ident()
TypeError: 'NoneType' object is not callable
I must've missed the advice about installing Ceph on the admin node, because I hadn't
done that. When I did, thinking this might be a spurious error, I got this:
2014-03-12 01:10:08.094837 7fe8c8626700 0 -- :/1011655 >> 192.168.10.25:6789/0 pipe(0x7fe8c4024440 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe8c40246a0).fault
2014-03-12 01:10:11.091931 7fe8c8525700 0 -- :/1011655 >> 192.168.10.25:6789/0 pipe(0x7fe8b8000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe8b8000e60).fault
...
I specified 2 OSDs on 2 virtual disks plugged into each of domUs1-3. The setup
is running on a new HP DL360p with h/w RAID across 4 x 1 TB disks.
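For reference, the OSDs were created roughly along these lines (the device names are placeholders, assumed here as xvdb/xvdc, and may not match the actual virtual disks):

ceph-deploy osd create domUs1:xvdb domUs1:xvdc
ceph-deploy osd create domUs2:xvdb domUs2:xvdc
ceph-deploy osd create domUs3:xvdb domUs3:xvdc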
Anyone seen this before?
Thanks
Mark
PS: note that I captured the above output with a typescript, so there may be duplicate
chars in certain places.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com