Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Thursday, August 27, 2015 8:11 PM
 To: OpenStack Development Mailing List (not for usage questions)
 
 On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
 
 
   Hi,
 
   Looks like we need to be able to set an AZ per backend. What do you
 think about such an option?
 
 
 
 I dislike such an option.
 
 The whole premise behind an AZ is that it's a failure domain. The node
 running the cinder services is in exactly one such failure domain. If you
 have 2 backends in 2 different AZs, then the cinder services managing those
 backends should be running on nodes that are also in those AZs. If you do it
 any other way then you create a situation where a failure in one AZ causes
 loss of services in a different AZ, which is exactly what the AZ feature is
 trying to avoid.
 
 If you do the correct thing and run cinder services on nodes in the AZs that
 they're managing then you will never have a problem with the
 one-AZ-per-cinder.conf design we have today.
 
 -Ben

I disagree. You may have failure domains handled at a different level, for 
example using Ceph's mechanisms for that. In such a case you want to provide 
the user with a single backend regardless of compute AZ partitioning. To 
address such needs you would need to be able to set multiple AZs per backend.
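
To make that concrete, what I have in mind is something roughly like the 
sketch below (backend_availability_zones is a made-up option name purely for 
illustration - no such option exists in cinder.conf today):

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    # hypothetical option: let one backend advertise itself in several AZs,
    # so a single Ceph backend can serve volumes for any compute AZ
    backend_availability_zones = az1,az2,az3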



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Duncan Thomas
Except that your failure domain includes the cinder volume service, independent
of the resiliency of your backend, so if they're all on one node then you
don't really have availability zones.

I have historically strongly espoused the same view as Ben, though there
are lots of people who want fake availability zones... no strong use cases,
though.
On 28 Aug 2015 11:59, Dulko, Michal michal.du...@intel.com wrote:

  From: Ben Swartzlander [mailto:b...@swartzlander.org]
  Sent: Thursday, August 27, 2015 8:11 PM
  To: OpenStack Development Mailing List (not for usage questions)
 
  On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
 
 
    Hi,
  
    Looks like we need to be able to set an AZ per backend. What do you
  think about such an option?
  
  
  
  I dislike such an option.
  
  The whole premise behind an AZ is that it's a failure domain. The node
  running the cinder services is in exactly one such failure domain. If you
  have 2 backends in 2 different AZs, then the cinder services managing those
  backends should be running on nodes that are also in those AZs. If you do
  it any other way then you create a situation where a failure in one AZ
  causes loss of services in a different AZ, which is exactly what the AZ
  feature is trying to avoid.
  
  If you do the correct thing and run cinder services on nodes in the AZs
  that they're managing then you will never have a problem with the
  one-AZ-per-cinder.conf design we have today.
 
  -Ben

 I disagree. You may have failure domains handled at a different level, for
 example using Ceph's mechanisms for that. In such a case you want to provide
 the user with a single backend regardless of compute AZ partitioning. To
 address such needs you would need to be able to set multiple AZs per backend.



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Friday, August 28, 2015 2:31 PM
 
 Except that your failure domain includes the cinder volume service, independent
 of the resiliency of your backend, so if they're all on one node then you don't
 really have availability zones.
 
 I have historically strongly espoused the same view as Ben, though there are
 lots of people who want fake availability zones... no strong use cases, though.

In case you have a Ceph backend (actually I think this applies to any non-LVM 
backend), you normally run c-vol on your controller nodes in an active/passive 
manner. c-vol becomes more of a control-plane service, and we don't provide 
AZs for the control plane. Nova doesn't do it either - AZs are only for 
compute nodes.

Given that Nova now assumes Cinder has the same set of AZs, we should be able 
to create fake ones (or have a fallback option like in the patch provided by 
Ned).
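
For reference, if the fallback approach from that patch lands, I'd expect the 
configuration to look something along these lines (option names taken from 
the review, so treat them as tentative until it actually merges):

    [DEFAULT]
    # if the AZ requested by Nova doesn't exist in Cinder, fall back to
    # default_availability_zone instead of rejecting the request at the API
    allow_availability_zone_fallback = True
    default_availability_zone = nova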


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Our use case for fake AZs (and why I pushed 
https://review.openstack.org/#/c/217857/ to enable that sort of behavior) is 
what Michal outlined, namely that we use Ceph and do not need or want Cinder to 
add itself to the mix when we're dealing with our failure domains. We already 
handle that via our Ceph crush map, so Cinder doesn't need to worry about it. 
It should just throw volumes at the configured RBD pool for the requested 
backend and not concern itself with what's going on behind the scenes.
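
As a rough illustration of that kind of setup (values made up, option names 
are the standard RBD driver ones), the whole cloud ends up with one Cinder 
backend and one nominal AZ, while the crush map does the real failure-domain 
work:

    [DEFAULT]
    enabled_backends = ceph
    storage_availability_zone = nova

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    # replication and failure domains are handled entirely by the Ceph
    # crush map, not by Cinder AZs
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder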

From: openstack-dev@lists.openstack.org 
Subject: Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones


Except that your failure domain includes the cinder volume service, independent of 
the resiliency of your backend, so if they're all on one node then you don't 
really have availability zones.
I have historically strongly espoused the same view as Ben, though there are 
lots of people who want fake availability zones... no strong use cases, though.
On 28 Aug 2015 11:59, Dulko, Michal michal.du...@intel.com wrote:

 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Thursday, August 27, 2015 8:11 PM
 To: OpenStack Development Mailing List (not for usage questions)

 On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:


   Hi,

   Looks like we need to be able to set an AZ per backend. What do you
 think about such an option?



 I dislike such an option.

 The whole premise behind an AZ is that it's a failure domain. The node
 running the cinder services is in exactly one such failure domain. If you
 have 2 backends in 2 different AZs, then the cinder services managing those
 backends should be running on nodes that are also in those AZs. If you do it
 any other way then you create a situation where a failure in one AZ causes
 loss of services in a different AZ, which is exactly what the AZ feature is
 trying to avoid.

 If you do the correct thing and run cinder services on nodes in the AZs that
 they're managing then you will never have a problem with the
 one-AZ-per-cinder.conf design we have today.

 -Ben

I disagree. You may have failure domains handled at a different level, for 
example using Ceph's mechanisms for that. In such a case you want to provide 
the user with a single backend regardless of compute AZ partitioning. To 
address such needs you would need to be able to set multiple AZs per backend.



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Ivan Kolodyazhny
Hi,

Looks like we need to be able to set an AZ per backend. What do you think
about such an option?

Regards,
Ivan Kolodyazhny

On Mon, Aug 10, 2015 at 7:07 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal michal.du...@intel.com
 wrote:

 Hi,

 In the Kilo cycle, [1] was merged. It started passing the AZ of a booted VM to
 Cinder to make volumes appear in the same AZ as the VM. This is certainly a
 good approach, but I wonder how to deal with a use case where the
 administrator cares about the AZ of the VM's compute node but wants to ignore
 the AZ of the volume. Such a case would be when fault tolerance of storage is
 maintained at another level - for example using Ceph replication and failure
 domains.

 Normally I would simply disable AvailabilityZoneFilter in cinder.conf,
 but it turns out cinder-api validates whether the availability zone is
 correct [2]. This means that if Cinder has no AZs configured, all requests
 from Nova will fail at the API level.

 Configuring fake AZs in Cinder is also problematic, because an AZ cannot be
 configured in a per-backend manner. I can only configure it per c-vol node,
 so I would need N extra nodes running c-vol, where N is the number of AZs,
 to achieve that.

 Is there any solution to satisfy such a use case?

 [1] https://review.openstack.org/#/c/157041
 [2]
 https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282



 Seems like we could introduce the capability in cinder to ignore that if
 it's desired? It would probably be worth looking on the Cinder side at
 being able to configure multiple AZs for a volume (perhaps even an
 aggregate zone just for Cinder). That way we still honor the setting but
 provide a way to get around it for those that know what they're doing.

 John




Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Ben Swartzlander

On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:

Hi,

Looks like we need to be able to set an AZ per backend. What do you think 
about such an option?


I dislike such an option.

The whole premise behind an AZ is that it's a failure domain. The node 
running the cinder services is in exactly one such failure domain. If 
you have 2 backends in 2 different AZs, then the cinder services 
managing those backends should be running on nodes that are also in 
those AZs. If you do it any other way then you create a situation where 
a failure in one AZ causes loss of services in a different AZ, which is 
exactly what the AZ feature is trying to avoid.


If you do the correct thing and run cinder services on nodes in the AZs 
that they're managing then you will never have a problem with the 
one-AZ-per-cinder.conf design we have today.
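
Concretely, that design is just the node-wide AZ setting with one c-vol per 
AZ - a minimal sketch (real option name, made-up values):

    # cinder.conf on the c-vol node that lives in az1
    [DEFAULT]
    storage_availability_zone = az1
    enabled_backends = backend-az1

    # cinder.conf on the c-vol node that lives in az2
    [DEFAULT]
    storage_availability_zone = az2
    enabled_backends = backend-az2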


-Ben




Regards,
Ivan Kolodyazhny

On Mon, Aug 10, 2015 at 7:07 PM, John Griffith 
john.griffi...@gmail.com wrote:




On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal
michal.du...@intel.com wrote:

Hi,

In the Kilo cycle, [1] was merged. It started passing the AZ of a booted
VM to Cinder to make volumes appear in the same AZ as the VM. This is
certainly a good approach, but I wonder how to deal with a use case where
the administrator cares about the AZ of the VM's compute node but wants
to ignore the AZ of the volume. Such a case would be when fault tolerance
of storage is maintained at another level - for example using Ceph
replication and failure domains.

Normally I would simply disable AvailabilityZoneFilter in cinder.conf,
but it turns out cinder-api validates whether the availability zone is
correct [2]. This means that if Cinder has no AZs configured, all
requests from Nova will fail at the API level.

Configuring fake AZs in Cinder is also problematic, because an AZ cannot
be configured in a per-backend manner. I can only configure it per c-vol
node, so I would need N extra nodes running c-vol, where N is the number
of AZs, to achieve that.

Is there any solution to satisfy such a use case?

[1] https://review.openstack.org/#/c/157041
[2] https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282




Seems like we could introduce the capability in cinder to ignore that if
it's desired? It would probably be worth looking on the Cinder side at
being able to configure multiple AZs for a volume (perhaps even an
aggregate zone just for Cinder). That way we still honor the setting but
provide a way to get around it for those that know what they're doing.

John




Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Dulko, Michal
There was a little IRC discussion on that [1] and I've started working on a 
spec for Mitaka. I've gotten a bit busy lately, but finishing it is still in 
my backlog. I'll make sure to post it up for review once the Mitaka specs 
bucket opens.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-08-11.log.html#t2015-08-11T14:48:49

 -Original Message-
 From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
 Sent: Thursday, August 27, 2015 4:44 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Rogon, Kamil
 Subject: Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability
 zones
 
 Hi,
 
 Looks like we need to be able to set an AZ per backend. What do you think
 about such an option?
 
 
 Regards,
 Ivan Kolodyazhny
 
 On Mon, Aug 10, 2015 at 7:07 PM, John Griffith john.griffi...@gmail.com
 wrote:
 
 
 
 
   On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal
 michal.du...@intel.com wrote:
 
 
   Hi,
 
   In the Kilo cycle, [1] was merged. It started passing the AZ of a booted
 VM to Cinder to make volumes appear in the same AZ as the VM. This is
 certainly a good approach, but I wonder how to deal with a use case where
 the administrator cares about the AZ of the VM's compute node but wants to
 ignore the AZ of the volume. Such a case would be when fault tolerance of
 storage is maintained at another level - for example using Ceph replication
 and failure domains.
 
   Normally I would simply disable AvailabilityZoneFilter in cinder.conf,
 but it turns out cinder-api validates whether the availability zone is
 correct [2]. This means that if Cinder has no AZs configured, all requests
 from Nova will fail at the API level.
 
   Configuring fake AZs in Cinder is also problematic, because an AZ cannot
 be configured in a per-backend manner. I can only configure it per c-vol
 node, so I would need N extra nodes running c-vol, where N is the number of
 AZs, to achieve that.
 
   Is there any solution to satisfy such a use case?
 
   [1] https://review.openstack.org/#/c/157041
   [2] https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282
 
 
   
 
 
 
   Seems like we could introduce the capability in cinder to ignore that if
 it's desired? It would probably be worth looking on the Cinder side at being
 able to configure multiple AZs for a volume (perhaps even an aggregate zone
 just for Cinder). That way we still honor the setting but provide a way to
 get around it for those that know what they're doing.
 
 
   John
 
 
   
 
 
 



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-10 Thread John Griffith
On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal michal.du...@intel.com
wrote:

 Hi,

 In the Kilo cycle, [1] was merged. It started passing the AZ of a booted VM to
 Cinder to make volumes appear in the same AZ as the VM. This is certainly a
 good approach, but I wonder how to deal with a use case where the
 administrator cares about the AZ of the VM's compute node but wants to ignore
 the AZ of the volume. Such a case would be when fault tolerance of storage is
 maintained at another level - for example using Ceph replication and failure
 domains.

 Normally I would simply disable AvailabilityZoneFilter in cinder.conf, but
 it turns out cinder-api validates whether the availability zone is correct [2].
 This means that if Cinder has no AZs configured, all requests from Nova will
 fail at the API level.

 Configuring fake AZs in Cinder is also problematic, because an AZ cannot be
 configured in a per-backend manner. I can only configure it per c-vol node,
 so I would need N extra nodes running c-vol, where N is the number of AZs, to
 achieve that.

 Is there any solution to satisfy such a use case?

 [1] https://review.openstack.org/#/c/157041
 [2]
 https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282
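
 For concreteness, the relevant knobs today look roughly like this (real
 option names, illustrative values); the AZ is node-wide, and the scheduler
 filter list has no bearing on the API-level check in [2]:

     [DEFAULT]
     # applies to every backend on this c-vol node; there is no per-backend
     # equivalent, hence one extra node per desired AZ
     storage_availability_zone = az1
     # removing AvailabilityZoneFilter here only changes scheduling;
     # cinder-api still validates the requested AZ before scheduling happens
     scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter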



Seems like we could introduce the capability in cinder to ignore that if
it's desired? It would probably be worth looking on the Cinder side at
being able to configure multiple AZs for a volume (perhaps even an
aggregate zone just for Cinder). That way we still honor the setting but
provide a way to get around it for those that know what they're doing.

John
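
For completeness, the behaviour from [1] is driven by the cross-AZ attach 
setting on the Nova side - roughly the following in nova.conf, though the 
exact option group is worth double-checking for your release:

    [cinder]
    # when False, Nova passes the instance's AZ on volume create, which is
    # what triggers the Cinder-side AZ validation discussed above
    cross_az_attach = False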