Hello
I followed everything in the setup documentation while setting up a test
cluster on an XCP install, and got this:
Invoked (1.3.5): /usr/bin/ceph-deploy admin domUs1 domUs2 domUs3 domUca
Pushing admin keys and conf to domUs1
connected to host: domUs1
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUs2
connected to host: domUs2
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUs3
connected to host: domUs3
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to domUca
connected to host: domUca
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Unhandled exception in thread started by <function run_and_release at 0xc7bb90>
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib64/python2.6/site-packages/abrt_exception_handler.py", line
204,,
in <lambda>
sys.excepthook = lambda etype, value, tb: handleMyException((etype, value, tb))
TypeError: 'NoneType' object is not callable
Original exception was:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
with self._running_lock:
File "/usr/lib64/python2.6/threading.py", line 117, in acquire
me = _get_ident()
TypeError: 'NoneType' object is not callable
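As far as I can tell, that final TypeError is the classic Python 2 symptom of
a background thread still running while the interpreter shuts down: CPython 2
rebinds module globals to None during teardown, so a name like _get_ident in
threading.py is suddenly None by the time the thread calls it. A minimal
sketch of that failure mode (hypothetical; nothing here is ceph-deploy's
actual code):

    # shutdown_race.py -- illustrates Python 2 interpreter-teardown errors
    # (hypothetical example, not ceph-deploy code)
    from time import sleep
    import threading

    def worker():
        while True:
            # During shutdown CPython 2 sets module globals to None, so
            # 'sleep' can become None mid-loop and calling it raises:
            #   TypeError: 'NoneType' object is not callable
            sleep(0.01)

    t = threading.Thread(target=worker)
    t.setDaemon(True)  # daemon thread keeps running into shutdown
    t.start()
    # the main thread falls off the end here, racing the worker
    # against interpreter teardown

Which would be consistent with the traceback above firing from the thread
started by run_and_release as ceph-deploy exits.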
I must have missed the advice to install ceph on the admin node, because I
hadn't done that. When I did install it, thinking the above might be a
spurious error, I got this:
2014-03-12 01:10:08.094837 7fe8c8626700 0 -- :/1011655 >> 192.168.10.25:6789/0 pipe(0x7fe8c4024440 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe8c40246a0).fault
2014-03-12 01:10:11.091931 7fe8c8525700 0 -- :/1011655 >> 192.168.10.25:6789/0 pipe(0x7fe8b8000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe8b8000e60).fault
...
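My reading of those .fault lines is that the client simply can't establish a
session with the monitor it expects at 192.168.10.25:6789 (6789 being the
default mon port), so either the mon never came up or something is blocking
the connection. A quick probe I can run from the admin node (a hypothetical
helper of mine, not part of ceph; the address is taken from the log above):

    # mon_probe.py -- minimal check that something answers on the mon port
    # (hypothetical helper, not part of ceph or ceph-deploy)
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect(('192.168.10.25', 6789))
        print 'mon port reachable'
    except socket.error as e:
        print 'mon port unreachable:', e
    finally:
        s.close()

If nothing answers there, the next step would be to check whether ceph-mon is
actually running on the mon host, and to re-run 'ceph-deploy mon
create-initial' if it isn't.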
I specified 2 OSDs on 2 virtual disks plugged into each of domUs1-3. The
setup is running on a new HP DL360p with hardware RAID across 4 x 1 TB disks.
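For completeness, the OSD step boils down to ceph-deploy commands along these
lines (a sketch from memory; /dev/xvdb and /dev/xvdc are my guess at how XCP
presents the two virtual disks inside each domU):

    ceph-deploy osd prepare domUs1:/dev/xvdb domUs1:/dev/xvdc
    ceph-deploy osd prepare domUs2:/dev/xvdb domUs2:/dev/xvdc
    ceph-deploy osd prepare domUs3:/dev/xvdb domUs3:/dev/xvdc

followed by the matching 'ceph-deploy osd activate' calls.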
Anyone seen this before?
Thanks
Mark
PS: note that I captured the above output with a typescript, so there may be
duplicate chars in certain places.