Hi Ken,
Thanks for your reply. The Ceph cluster runs well:
:~$ sudo ceph -s
    cluster 285441d6-c059-405d-9762-86bd91f279d0
     health HEALTH_OK
     monmap e1: 1 mons at {strony-pc=10.132.138.233:6789/0}
            election epoch 9, quorum 0 strony-pc
     osdmap e200: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v225126: 256 pgs, 1 pools, 345 bytes data, 10 objects
            10326 MB used, 477 GB / 488 GB avail
                 256 active+clean
  client io 0 B/s rd, 193 op/s rd, 0 op/s wr
$ ceph osd lspools
6 rbd,
I previously deleted some pools, so the ID of the remaining pool, 'rbd', is now 6. I
guess the client probably tries to access the first pool by default and then
gets stuck. So, how can I change the pool ID back to '0'?
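(For what it's worth, librados looks up pools by name, e.g. open_ioctx('rbd'), so the numeric ID being 6 instead of 0 should not by itself cause a hang; a hang more often means the client cannot reach the monitors or OSDs. One way to turn a silent hang into an explicit error is to set client-side timeouts in ceph.conf on host B. A sketch; the option names are standard librados settings, the 30-second values are just illustrative:

```ini
[client]
# Fail connect() with an error instead of blocking forever
# if the monitors are unreachable.
client_mount_timeout = 30
# Fail RADOS operations (such as listing RBD images) instead
# of blocking when the mons/OSDs do not respond.
rados_mon_op_timeout = 30
rados_osd_op_timeout = 30
```

With these set, the stuck rbd_inst.list(ioctx) call should raise a timeout error, which makes the underlying connectivity problem visible.)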
Thanks,
Strony
On Monday, June 6, 2016 1:46 AM, Ken Peng <[email protected]> wrote:
hello,
Does the Ceph cluster work correctly? Run 'ceph -s' and 'ceph -w' to see more details.
2016-06-06 16:17 GMT+08:00 strony zhang <[email protected]>:
Hi,
I am new to Ceph. I have installed an all-in-one Ceph cluster on host A,
and I am trying to access it from another host B, which has librados and librbd
installed.
From host B, I run python to access the Ceph cluster on host A:

>>> import rados
>>> cluster1 = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>> cluster1.connect()
>>> print cluster1.get_fsid()
285441d6-c059-405d-9762-86bd91f279d0
>>> import rbd
>>> rbd_inst = rbd.RBD()
>>> ioctx = cluster1.open_ioctx('rbd')
>>> rbd_inst.list(ioctx)
.... stuck here; it never returns until the python process is killed manually.
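(As a debugging aid, a blocking call like this can be bounded from the script itself so it fails loudly instead of hanging. A minimal stdlib sketch; the blocking function below is a stand-in for rbd.RBD().list(ioctx), since the real call needs a reachable cluster:

```python
import threading
import time

result = []

def list_images():
    # Stand-in for rbd.RBD().list(ioctx); it just blocks here, to
    # simulate a client that cannot reach any OSD.
    time.sleep(30)
    result.append([])

# Run the call in a daemon thread and give up after a bounded wait.
t = threading.Thread(target=list_images, daemon=True)
t.start()
t.join(timeout=2)
if t.is_alive():
    print("rbd list timed out - check that host B can reach the OSDs on host A")
else:
    print("images:", result[0])
```

If the wrapped call times out while plain rados calls like get_fsid() succeed, that points at the OSD addresses being unreachable from host B rather than at the monitor.)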
But on host A, I don't find any error info:

zq@zq-ubuntu:~$ rbd list -l
NAME SIZE  PARENT FMT PROT LOCK
z1   1024M          2
z2   1024M          2
z3   1024M          2
The ceph.conf and ceph.client.admin.keyring on host B are the same as those on
host A. Any comments are appreciated.

Thanks,
Strony
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com