Someone else expressed this more gracefully than I:

*'Because sans Ironic, compute-nodes still have physical characteristics*
*that make grouping on them attractive for things like anti-affinity. I*
*don't really want my HA instances "not on the same compute node", I want*
*them "not in the same failure domain". It becomes a way for all*
*OpenStack workloads to have more granularity than "availability zone".*
(https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg14891.html)

^That guy definitely has a good head on his shoulders ;)

-James


On Wed, Dec 16, 2015 at 12:40 PM, James Penick <jpen...@gmail.com> wrote:

> >Affinity is mostly meaningless with baremetal. It's entirely a
> >virtualization related thing. If you try and group things by TOR, or
> >chassis, or anything else, it's going to start meaning something entirely
> >different than it means in Nova,
>
> I disagree; in fact, we need TOR and power affinity/anti-affinity for VMs
> as well as for baremetal. As an example, there are cases where certain
> compute resources move significant amounts of data between one or two
> other instances, but you want to ensure those instances are not on the
> same hypervisor. In that scenario it makes sense to put the instances on
> different hypervisors, but on the same TOR, to reduce unnecessary traffic
> across the fabric.
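>
> To make that concrete, here's a minimal standalone sketch of that placement
> constraint in Python. The Host records and attribute names are hypothetical
> stand-ins for a site's own inventory data, not the Nova scheduler API:
>
>     from collections import namedtuple
>
>     # Hypothetical inventory records; in practice this data would come
>     # from the operator's own inventory system.
>     Host = namedtuple('Host', ['name', 'tor', 'power_domain'])
>
>     def same_tor_different_host(hosts, anchor):
>         """Candidate hosts sharing the anchor's TOR, excluding the anchor."""
>         return [h for h in hosts
>                 if h.tor == anchor.tor and h.name != anchor.name]
>
>     hosts = [
>         Host('hv-a', 'tor-12', 'pdu-1'),
>         Host('hv-b', 'tor-12', 'pdu-2'),
>         Host('hv-c', 'tor-47', 'pdu-2'),
>     ]
>     # hv-b is the only host on the same TOR as hv-a: anti-affinity on the
>     # hypervisor, affinity on the TOR.
>     print(same_tor_different_host(hosts, hosts[0]))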
>
> >and it would probably be better to just
> >make lots of AZ's and have users choose their AZ mix appropriately,
> >since that is the real meaning of AZ's.
>
> Yes, at some level certain things should be expressed in the form of an
> AZ; power seems like a good candidate for that. But expressing something
> like a TOR as an AZ in an environment with hundreds of thousands of
> physical hosts would not scale. Further, it would require users to have a
> deeper understanding of datacenter topology, which is exactly the opposite
> of why IaaS exists.
>
> The whole point of a service-oriented infrastructure is to give the end
> user the ability to boot compute resources that match a variety of
> constraints, and to have those resources selected and provisioned for
> them, e.g. "Give me 12 instances of m1.blah, all running Linux, and make
> sure they're spread across 6 different TORs and 2 different power domains
> in network zone Blah."
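>
> None of the following is an existing Nova API (the hint names and request
> shape are made up), but as a sketch, a request like that could be expressed
> and spread roughly like this:
>
>     # Hypothetical request shape; the 'spread:' hints are illustrative only.
>     request = {
>         'count': 12,
>         'flavor': 'm1.blah',
>         'image': 'linux',
>         'hints': {'spread:tor': 6, 'spread:power_domain': 2,
>                   'network_zone': 'Blah'},
>     }
>
>     def spread(count, tors, pdus):
>         """Round-robin instances across the requested failure domains."""
>         return [(i, tors[i % len(tors)], pdus[i % len(pdus)])
>                 for i in range(count)]
>
>     for idx, tor, pdu in spread(request['count'],
>                                 ['tor-%d' % t for t in range(6)],
>                                 ['pdu-0', 'pdu-1']):
>         print('instance-%02d -> %s / %s' % (idx, tor, pdu))
>
> The point isn't the exact syntax; it's that the user states the spread they
> need and the scheduler owns the mapping to actual hardware.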
>
> On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum <cl...@fewbar.com> wrote:
>
>> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
>> > Nobody is talking about running a compute per flavor or capability. All
>> > compute hosts will be able to handle all ironic nodes. We *do* still
>> > need to figure out how to handle availability zones or host aggregates,
>> > but I expect we would pass along that data to be matched against. I
>> > think it would just be metadata on a node. Something like
>> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>> > you. Ditto for host aggregates - add the metadata to ironic to match
>> > what's in the host aggregate. I'm honestly not sure what to do about
>> > (anti-)affinity filters; we'll need help figuring that out.
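>> >
>> > A rough sketch of that matching, with plain dicts standing in for an
>> > Ironic node's properties and a host aggregate's metadata (not the real
>> > client APIs, just the idea):
>> >
>> >     node = {'properties': {'availability_zone': 'rackspace-iad-az3'}}
>> >     aggregate = {'name': 'iad-az3',
>> >                  'metadata': {'availability_zone': 'rackspace-iad-az3'}}
>> >
>> >     def node_matches_aggregate(node, aggregate):
>> >         """True if every aggregate metadata key/value is set on the node."""
>> >         props = node.get('properties', {})
>> >         return all(props.get(k) == v
>> >                    for k, v in aggregate['metadata'].items())
>> >
>> >     print(node_matches_aggregate(node, aggregate))  # True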
>> >
>>
>> Affinity is mostly meaningless with baremetal. It's entirely a
>> virtualization related thing. If you try and group things by TOR, or
>> chassis, or anything else, it's going to start meaning something entirely
>> different than it means in Nova, and it would probably be better to just
>> make lots of AZ's and have users choose their AZ mix appropriately,
>> since that is the real meaning of AZ's.
>>
>
>
