On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
Hi,

Looks like we need to be able to set an AZ per backend. What do you think about such an option?
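
Something along these lines in cinder.conf is what I have in mind; a rough sketch only, where backend_availability_zone is an illustrative name rather than an existing option:

    [DEFAULT]
    enabled_backends = lvm-az1, ceph-az2

    [lvm-az1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm-az1
    # hypothetical per-backend option: pin this backend to AZ "az1"
    backend_availability_zone = az1

    [ceph-az2]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-az2
    # hypothetical per-backend option: a second backend in AZ "az2"
    backend_availability_zone = az2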

I dislike such an option.

The whole premise behind an AZ is that it's a failure domain. The node running the cinder services is in exactly one such failure domain. If you have two backends in two different AZs, then the cinder services managing those backends should be running on nodes that are also in those AZs. If you do it any other way, you create a situation where a failure in one AZ causes loss of services in a different AZ, which is exactly what the AZ feature is trying to avoid.

If you do the correct thing and run the cinder services on nodes in the AZs they're managing, then you will never have a problem with the one-AZ-per-cinder.conf design we have today.

-Ben



Regards,
Ivan Kolodyazhny

On Mon, Aug 10, 2015 at 7:07 PM, John Griffith <john.griffi...@gmail.com> wrote:



    On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal
    <michal.du...@intel.com> wrote:

        Hi,

        In the Kilo cycle, [1] was merged. It started passing the AZ of
        a booted VM to Cinder so that volumes end up in the same AZ as
        the VM. This is certainly a good approach, but I wonder how to
        deal with a use case where the administrator cares about the AZ
        of the VM's compute node but wants to ignore the AZ of the
        volume. One such case is when fault tolerance of storage is
        maintained at another level - for example using Ceph
        replication and failure domains.
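
        For the attach path specifically there is also a Nova-side knob
        - if I remember correctly it is cross_az_attach in the [cinder]
        section of nova.conf (treat the exact name and section as an
        assumption) - which controls whether the instance and volume
        AZs must match, though it does not help with the
        create-during-boot path that [1] changed:

            [cinder]
            # If False, Nova refuses to attach a volume whose AZ differs
            # from the instance's AZ; if True, the mismatch is ignored.
            cross_az_attach = True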

        Normally I would simply disable the AvailabilityZoneFilter in
        cinder.conf, but it turns out cinder-api validates whether the
        availability zone is correct [2]. This means that if Cinder has
        no AZs configured, all requests from Nova will fail at the API
        level.
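
        For reference, this is what I mean by disabling the filter - a
        minimal sketch using the existing scheduler_default_filters
        option. The point is that dropping AvailabilityZoneFilter from
        it only affects the scheduler, while the API-level check in [2]
        still rejects unknown AZs before scheduling even happens:

            [DEFAULT]
            # AvailabilityZoneFilter removed from the scheduler filters;
            # cinder-api still validates the requested AZ earlier on.
            scheduler_default_filters = CapacityFilter, CapabilitiesFilter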

        Configuring fake AZs in Cinder is also problematic, because an
        AZ cannot be configured on a per-backend basis. I can only
        configure it per c-vol node, so to achieve that I would need N
        extra nodes running c-vol, where N is the number of AZs.
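
        To illustrate the limitation: the AZ comes from the node-wide
        storage_availability_zone option, which applies to every
        backend a given c-vol node serves, so faking N AZs means N
        separate cinder.conf files on N nodes, roughly:

            # cinder.conf on c-vol node 1 - every backend here lands in az1
            [DEFAULT]
            storage_availability_zone = az1
            enabled_backends = backend-a

            # cinder.conf on c-vol node 2 - a separate node just to expose az2
            [DEFAULT]
            storage_availability_zone = az2
            enabled_backends = backend-b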

        Is there any solution to satisfy such use case?

        [1] https://review.openstack.org/#/c/157041
        [2] https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282

        


    Seems like we could introduce the capability in Cinder to ignore
    that if it's desired?  It would probably be worth looking, on the
    Cinder side, at being able to configure multiple AZs for a volume
    (perhaps even an aggregate zone just for Cinder). That way we
    still honor the setting but provide a way to get around it for
    those who know what they're doing.
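
    As a strawman, that could be as simple as a cinder.conf toggle;
    the option names below are hypothetical, just sketching the idea
    of falling back instead of failing when the requested AZ doesn't
    exist:

        [DEFAULT]
        # hypothetical: if the AZ requested at volume-create time is
        # unknown to Cinder, fall back to the default AZ instead of
        # rejecting the request at the API layer
        allow_availability_zone_fallback = True
        default_availability_zone = nova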

    John





