Hi,

I am working on an HA setup with multiple cinder-volume workers. However, I
have a single Ceph cluster that I want both workers to front.

With a relatively standard configuration, the volumes created seem to be tied
to a specific worker. This means that if a volume is created by
cinder-volume-1 and that worker goes down, a delete request for that volume
stays stuck in the "deleting" state until cinder-volume-1 comes back. It
would be nice for cinder-volume-2 to recognize that it is operating on the
same pool as the other node, and thus delete the volume right away.
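For reference, I assume the relevant part of cinder.conf looks roughly like
the sketch below (backend section and pool names are illustrative, not my
actual config). My understanding is that the binding comes from each worker
reporting its own hostname as the volume's owner; the backend_host option is
the one I suspect is relevant here, since it would make both workers report
the same identity:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# Hypothetical shared identity: if backend_host is set to the same value on
# both workers, volumes would be owned by "rbd-shared" rather than by the
# hostname of whichever worker happened to create them.
backend_host = rbd-shared
```

Is that the intended way to handle this, or is there a better-supported
approach?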

Any thoughts?

-Simon
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack