Greg,
        I verified on all cluster nodes that rbd_secret_uuid matches the
UUID shown by virsh secret-list, and if I run virsh secret-get-value on this
UUID, I get back the auth key for client.volumes. What did you mean by the
same configuration? Did you mean the same secret on all compute nodes?
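
        For reference, this is roughly what I checked on each compute node
(the <uuid> below is a placeholder for whatever rbd_secret_uuid is set to in
my setup):

            # UUID configured for the RBD driver
            grep rbd_secret_uuid /etc/cinder/cinder.conf
            # the same UUID should appear in libvirt's list
            virsh secret-list
            # and its value should be the client.volumes key
            virsh secret-get-value <uuid>
            ceph auth get-key client.volumes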
        When we log in as admin, there is a column in the admin panel which
shows the 'host' where the volumes lie. I know that volumes are striped
across the cluster, but it shows the same host for all volumes. That is why
I got a little confused.
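
        (The attribute I am looking at is, I believe, the one cinder exposes
as os-vol-host-attr:host, e.g.:

            cinder show <volume-id> | grep os-vol-host-attr:host

and it shows 'slave1' for every volume.)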


On Fri, Jul 26, 2013 at 9:23 AM, Gregory Farnum <[email protected]> wrote:

> On Fri, Jul 26, 2013 at 9:17 AM, johnu <[email protected]> wrote:
> > Hi all,
> >         I would like to know whether anyone else has faced the same issue.
> >
> >
> > I tried the OpenStack + Ceph integration. I can create volumes from
> > Horizon, and they are created in RADOS.
> >
> > When I check the created volumes in the admin panel, all volumes are
> > shown as created on the same host. (I tried creating 10 volumes, but all
> > were created on the same host, 'slave1'.) I haven't changed the crushmap;
> > I am using the default one that came with ceph-deploy.
>
> RBD volumes don't live on a given host in the cluster; they are
> striped across all of them. What do you mean the volume is "in"
> slave1?
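>
> (As a quick sanity check, you can look at where the data actually lands;
> the pool name "volumes" below is just an assumption about your setup:
>
>     rbd -p volumes ls                    # the images backing your volumes
>     ceph osd map volumes <object-name>   # the PG and OSDs holding one object
>
> Different objects of the same image will generally map to different OSDs.)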
>
> > Second issue:
> > I am not able to attach volumes to instances if the hosts differ. E.g.:
> > if the volumes are created on host 'slave1', instance1 is created on host
> > 'master', and instance2 is created on host 'slave1', I am able to attach
> > volumes to instance2 but not to instance1.
>
> This sounds like maybe you don't have quite the same configuration on
> both hosts. Due to the way OpenStack and virsh handle their config
> fragments and secrets, you need to have the same virsh secret-IDs both
> configured (in the OpenStack config files) and set (in virsh's
> internal database) on every compute host and the Cinder/Nova manager.
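> A minimal sketch of what that looks like on each compute host (the UUID
> and the client.volumes user are placeholders for whatever your setup uses):
>
>     cat > secret.xml <<EOF
>     <secret ephemeral='no' private='no'>
>       <uuid>YOUR-RBD-SECRET-UUID</uuid>
>       <usage type='ceph'>
>         <name>client.volumes secret</name>
>       </usage>
>     </secret>
>     EOF
>     # register the secret ID with libvirt and set its value to the Ceph key
>     virsh secret-define --file secret.xml
>     virsh secret-set-value --secret YOUR-RBD-SECRET-UUID \
>         --base64 $(ceph auth get-key client.volumes)
>
> with rbd_secret_uuid in the OpenStack config set to that same UUID on
> every host.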
>
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>