Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Andrew Laski


On Mon, Aug 1, 2016, at 08:08 AM, Jay Pipes wrote:
> On 07/31/2016 10:03 PM, Alex Xu wrote:
> > 2016-07-28 22:31 GMT+08:00 Jay Pipes:
> >
> > On 07/20/2016 11:25 PM, Alex Xu wrote:
> >
> > One more for end users: Capabilities Discovery API, it should be
> > 'GET
> > /resource_providers/tags'. Or a proxy API from nova to the placement
> > API?
> >
> >
> > I would imagine that it should be a `GET
> > /resource-providers/{uuid}/capabilities` call on the placement API,
> > only visible to cloud administrators.
> >
> > When an end user requests a capability that the cloud doesn't support,
> > they have to wait a while after sending the boot request (because Nova
> > handles it asynchronously) and then end up with an instance in an error
> > state; the error info is just "no valid host". If this is the only way
> > for users to discover the capabilities of the cloud, that sounds bad. So
> > we need an API that lets end users discover which capabilities the cloud
> > supports, so they can query it before sending a boot request.
> 
> Ah, yes, totally agreed. I'm not sure if that is something that we'd 
> want to put as a normal-end-user-callable API endpoint in the placement 
> API, but certainly we could do something like this in the placement API:
> 
>   GET /capabilities
> 
> Would return a list of capability strings representing the distinct set 
> of capabilities that any resource provider in the system exposed. It 
> would not give the user any counts of resource providers that expose the 
> capabilities, nor would it provide any information regarding which 
> resource providers had any available inventory for a consumer to use.

This is what I had imagined based on the midcycle discussion of this
topic. Just information about what is possible to request, and no
information about what is available.
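
For illustration only, a response to such a call might look something like the
following (the payload shape and the capability strings are just examples;
nothing specific was proposed at this point):

  GET /capabilities

  {
    "capabilities": [
      "COMPUTE_HW_CAP_CPU_AVX",
      "COMPUTE_HV_CAP_LIVE_MIGRATION",
      "STORAGE_WITH_SSD"
    ]
  }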

> 
> Nova could then either have a proxy API call that would add the normal 
> end-user interface to that information or completely hide it from end 
> users via the existing flavors interface?

Please no more proxy APIs :)

> 
> Thoughts?
> 
> Best,
> -jay
> 



Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 1, 2016 1:09 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> Capabilities with ResourceProvider
> 
> On 07/31/2016 10:03 PM, Alex Xu wrote:
> > 2016-07-28 22:31 GMT+08:00 Jay Pipes <jaypi...@gmail.com
> > <mailto:jaypi...@gmail.com>>:
> >
> > On 07/20/2016 11:25 PM, Alex Xu wrote:
> >
> > One more for end users: Capabilities Discovery API, it should be
> > 'GET
> > /resource_providers/tags'. Or a proxy API from nova to the placement
> > API?
> >
> >
> > I would imagine that it should be a `GET
> > /resource-providers/{uuid}/capabilities` call on the placement API,
> > only visible to cloud administrators.
> >
> > When an end user requests a capability that the cloud doesn't support,
> > they have to wait a while after sending the boot request (because Nova
> > handles it asynchronously) and then end up with an instance in an error
> > state; the error info is just "no valid host". If this is the only way
> > for users to discover the capabilities of the cloud, that sounds bad. So
> > we need an API that lets end users discover which capabilities the cloud
> > supports, so they can query it before sending a boot request.
> 
> Ah, yes, totally agreed. I'm not sure if that is something that we'd want
> to put as a normal-end-user-callable API endpoint in the placement API, but
> certainly we could do something like this in the placement API:
> 
>   GET /capabilities
> 
> Would return a list of capability strings representing the distinct set of
> capabilities that any resource provider in the system exposed. It would not
> give the user any counts of resource providers that expose the capabilities,
> nor would it provide any information regarding which resource providers had
> any available inventory for a consumer to use.
> 
> Nova could then either have a proxy API call that would add the normal
> end-user interface to that information or completely hide it from end users
> via the existing flavors interface?
[Mooney, Sean K] The main drawback with that, from an end-user perspective, is
that you cannot tell which combinations of capabilities will work together.
For example, a cloud might provide SSDs and GPUs, but they may not be offered
on the same host, or may no longer be available together on the same host,
though in the latter case "no valid host" would be the expected behavior.
That said, this can be somewhat mitigated by operators creating flavors that
work with their infrastructure, which is a reasonable requirement for us to
ask them to fulfill, but tenants could still upload images with capability
requests, or craft boot requests, that would still fail.
You would basically need to return a list of capability adjacency lists so
that the end user could build the matrix of which features can be requested
together. That would potentially be computationally intensive in the API, but
MySQL should be able to compute it efficiently.
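
A rough sketch of what such an adjacency-list response could look like (the
shape and the capability names are purely illustrative; no payload was ever
specified in the thread): each capability maps to the set of capabilities that
co-exist with it on at least one resource provider, so an end user can derive
which combinations are requestable together.

  GET /capabilities

  {
    "capabilities": {
      "STORAGE_WITH_SSD": ["COMPUTE_HW_CAP_CPU_AVX"],
      "COMPUTE_HW_CAP_CPU_AVX": ["STORAGE_WITH_SSD",
                                 "COMPUTE_HV_CAP_LIVE_MIGRATION"],
      "COMPUTE_HV_CAP_LIVE_MIGRATION": ["COMPUTE_HW_CAP_CPU_AVX"]
    }
  }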
> 
> Thoughts?
> 
> Best,
> -jay
> 



Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Jay Pipes

On 07/31/2016 10:03 PM, Alex Xu wrote:

2016-07-28 22:31 GMT+08:00 Jay Pipes:

On 07/20/2016 11:25 PM, Alex Xu wrote:

One more for end users: Capabilities Discovery API, it should be
'GET
/resource_providers/tags'. Or a proxy API from nova to the placement
API?


I would imagine that it should be a `GET
/resource-providers/{uuid}/capabilities` call on the placement API,
only visible to cloud administrators.

When an end user requests a capability that the cloud doesn't support, they
have to wait a while after sending the boot request (because Nova handles it
asynchronously) and then end up with an instance in an error state; the error
info is just "no valid host". If this is the only way for users to discover
the capabilities of the cloud, that sounds bad. So we need an API that lets
end users discover which capabilities the cloud supports, so they can query it
before sending a boot request.


Ah, yes, totally agreed. I'm not sure if that is something that we'd 
want to put as a normal-end-user-callable API endpoint in the placement 
API, but certainly we could do something like this in the placement API:


 GET /capabilities

Would return a list of capability strings representing the distinct set 
of capabilities that any resource provider in the system exposed. It 
would not give the user any counts of resource providers that expose the 
capabilities, nor would it provide any information regarding which 
resource providers had any available inventory for a consumer to use.


Nova could then either have a proxy API call that would add the normal 
end-user interface to that information or completely hide it from end 
users via the existing flavors interface?


Thoughts?

Best,
-jay



Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Alex Xu
The Nova spec has been submitted: https://review.openstack.org/345138. Reviews
and comments are welcome!

2016-07-11 19:08 GMT+08:00 Alex Xu :

> This proposal is about using ResourceProviderTags as a solution for managing
> capabilities (the qualitative side) on ResourceProviders. ResourceProviderTags
> describe capabilities that are defined by OpenStack services (compute,
> storage, networking, etc.) and by users. A ResourceProvider provides resources
> exposed by a single compute node, some shared resource pool, or an external
> resource-providing service of some sort. As such, ResourceProviderTags are
> expected to describe the capabilities of a single ResourceProvider as well as
> the capabilities of a ResourcePool.
>
> ResourceProviderTags are similar to ServerTags [0], which are already
> implemented in Nova; the only difference is that the tags are attached to a
> ResourceProvider. The API endpoint will be "/ResourceProvider/{uuid}/tags",
> and it will follow the API-WG guideline on tags [1].
>
> Since the tags are just strings, the meaning of a tag isn't defined by the
> scheduler; it is defined by OpenStack services or by users. The
> ResourceProviderTags will only be used for scheduling, via a
> ResourceProviderTags filter.
>
> ResourceProviderTags easily cover the cases of a single ResourceProvider, a
> ResourcePool, and dynamic resources. Let's look at those cases one by one.
>
> For the single-ResourceProvider case, consider how Nova would report a
> compute node's capabilities. First, Nova is expected to define a standard way
> to describe the capabilities provided by the hypervisor or the hardware, so
> that those capability descriptions can be used across OpenStack deployments.
> Nova will therefore define a set of tags. Those tags should carry a prefix
> indicating that they come from Nova, and the prefix naming rules can also be
> used to categorize the capabilities. For example, the capabilities can be
> defined as:
>
> COMPUTE_HW_CAP_CPU_AVX
> COMPUTE_HW_CAP_CPU_SSE
> 
> COMPUTE_HV_CAP_LIVE_MIGRATION
> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> 
>
> (The COMPUTE part means the tag comes from Nova, HW means it is a
> hardware-related capability, and HV means it is a hypervisor capability. The
> cataloguing of capabilities can be discussed separately; this proposal focuses
> on the ResourceTags themselves. We also have another idea: instead of using a
> PREFIX to manage the tags, we could add attributes to the tags, which would
> give us more control over them. That is described separately at the bottom.)
>
> Nova will create a ResourceProvider for the compute node, report the
> quantitative data, and at the same time report capabilities by adding those
> defined tags to the ResourceProvider. Those capabilities are then exposed by
> Nova automatically.
>
> The capabilities of a compute node can then be queried through the API "GET
> /ResourceProviders/{uuid}/tags".
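
As an illustration, assuming the representation follows the API-WG tags
guideline, such a call could return something like the following (the tag
values are just examples):

  GET /resource_providers/{uuid}/tags

  {
    "tags": [
      "COMPUTE_HW_CAP_CPU_AVX",
      "COMPUTE_HV_CAP_LIVE_MIGRATION"
    ]
  }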
>
> For the ResourcePool case, let's use a shared storage pool as an example.
> Different storage pools may have different capabilities; perhaps one of the
> pools uses SSDs. To expose that capability, an admin can do the following:
>
> 1. Define the aggregates
>   AGG_UUID=`openstack aggregate create r1rck0610`
>
> 2. Create resource pool for shared storage
>   RP_UUID=`openstack resource-provider create "/mnt/nfs/row1racks0610/" \
> --aggregate-uuid=$AGG_UUID`
>
> 3. Update the capacity of shared storage
>   openstack resource-provider set inventory $RP_UUID \
> --resource-class=DISK_GB \
> --total=10 --reserved=1000 \
> --min-unit=50 --max-unit=1 --step-size=10 \
> --allocation-ratio=1.0
>
> 4. Add the Capabilities of shared storage
>   openstack resource-provider add tags $RP_UUID --tag STORAGE_WITH_SSD
>
> In this case, 'STORAGE_WITH_SSD' is defined by the admin. This mirrors the
> quantitative side: when there is no agent to report the quantitative data,
> there is none to report the qualitative data either.
>
> This also easily covers the dynamic-resource case. Think of Ironic: the
> admin will create a ResourcePool for bare-metal machines that share the same
> hardware configuration. Those machines will have the same set of
> capabilities, so those capabilities will be added to the ResourcePool as
> tags, much as in the shared-storage-pool case.
>
> To expose cloud capabilities to users, there is one more API endpoint, 'GET
> /ResourceProviders/Tags'. Users can retrieve all the tags and thereby learn
> what capabilities the cloud provides. A query parameter will allow users to
> filter the tags by the prefix rules.
>
> This proposal is intended as a solution for managing capabilities in the
> scheduler with ResourceProviders. But yes, looking at how Nova implements
> capability management, this is only part of the solution. The whole solution
> still needs other proposals (like [2]) to describe how to model capabilities
> inside the compute node, and proposals (like [3]) to describe how

Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-31 Thread Alex Xu
2016-07-28 22:31 GMT+08:00 Jay Pipes :

> On 07/20/2016 11:25 PM, Alex Xu wrote:
>
>> One more for end users: Capabilities Discovery API, it should be 'GET
>> /resource_providers/tags'. Or a proxy API from nova to the placement
>> API?
>>
>
> I would imagine that it should be a `GET
> /resource-providers/{uuid}/capabilities` call on the placement API, only
> visible to cloud administrators.
>
>
When an end user requests a capability that the cloud doesn't support, they
have to wait a while after sending the boot request (because Nova handles it
asynchronously) and then end up with an instance in an error state; the error
info is just "no valid host". If this is the only way for users to discover
the capabilities of the cloud, that sounds bad. So we need an API that lets
end users discover which capabilities the cloud supports, so they can query it
before sending a boot request.


>
> Best,
> -jay
>


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-28 Thread Jay Pipes

On 07/20/2016 11:25 PM, Alex Xu wrote:

One more for end users: Capabilities Discovery API, it should be 'GET
/resource_providers/tags'. Or a proxy API from nova to the placement
API?


I would imagine that it should be a `GET 
/resource-providers/{uuid}/capabilities` call on the placement API, only 
visible to cloud administrators.


Best,
-jay



Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Alex Xu
2016-07-20 11:43 GMT-07:00 Mooney, Sean K <sean.k.moo...@intel.com>:

>
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Wednesday, July 20, 2016 7:16 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> > Capabilities with ResourceProvider
> >
> > On 07/13/2016 01:37 PM, Ed Leafe wrote:
> > > On Jul 11, 2016, at 6:08 AM, Alex Xu <sou...@gmail.com> wrote:
> > >
> > >> For example, the capabilities can be defined as:
> > >>
> > >> COMPUTE_HW_CAP_CPU_AVX
> > >> COMPUTE_HW_CAP_CPU_SSE
> > >> 
> > >> COMPUTE_HV_CAP_LIVE_MIGRATION
> > >> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> > >> 
> > >>
> > >> ( The COMPUTE means this is coming from Nova. HW means this is
> > >> hardware related Capabilities. HV means this is  capabilities of
> > >> Hypervisor. But the catalog of Capabilities can be discussed
> > >> separated. This propose focus on the  ResourceTags. We also have
> > >> another idea about not using 'PREFIX' to manage the Tags. We can add
> > >> attributes to the  Tags. Then we have more control on the Tags. This
> > >> will describe separately in the bottom. )
> > >
> > > I was ready to start ranting about using horribly mangled names to
> > represent data, and then saw your comment about attributes for tags.
> > Yes, a thousand times yes to attributes! There can be several
> > standards, such as ‘compute’ or ‘networking’ that we use for some basic
> > cross-cloud compatibility, but making them flexible is a must for
> > adoption.
> >
> > I disagree :) Adoption -- at least interoperable cloud adoption -- of
> > this functionality will likely be hindered by super-flexible
> > description of capabilities. I think having a set of "standard"
> > capabilities that can be counted on to be cross-OpenStack-cloud
> > compatible and a set of "dynamic" capabilities that are custom to a
> > deployment would be a good thing to do.
>
> [Mooney, Sean K] I know there are bad memories for many on the Nova team
> when I mention CIM (http://www.dmtf.org/standards/cim), but if we are going
> to use standard names we should probably assess whether there are existing
> standards we could adopt, instead of defining our own standard names for the
> resources in Nova. For example,
> http://schemas.dmtf.org/wbem/cim-html/2/CIM_ProcessorAllocationSettingData.html
> defines names for the different instruction set extensions; AVX, for
> instance, is DMTF:x86:AVX. Some work has also been done in Glance to allow
> importing CIM metadata from OVF files:
> https://specs.openstack.org/openstack/glance-specs/specs/mitaka/implemented/cim-namespace-metadata-definitions.html
>
> While I don't think using the full CIM information model is useful in this
> case, using the names would be valuable from an interoperability point of
> view, as we would not only have standard names in OpenStack but those names
> would also conform to an existing standard.
>

Thanks! This is a good suggestion. For 'DMTF:x86:AVX' we could reference the
'x86:AVX' part and then add a prefix like 'COMPUTE_HW_CPU', so the tag might
end up looking something like 'COMPUTE_HW_CPU_X86_AVX'.


>
> We could still allow custom attributes, but I see value in standardizing
> what can be standardized.
>
>
> >
> > Best,
> > -jay
> >
> > > I can update the qualitative request spec to add ResourceProviderTags
> > as a possible implementation.
> >


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Alex Xu
2016-07-20 11:08 GMT-07:00 Jay Pipes :

> On 07/18/2016 01:45 PM, Matt Riedemann wrote:
>
>> On 7/15/2016 8:06 PM, Alex Xu wrote:
>>
>>>
>>> Actually I still think aggregates isn't good for Manage Caps, just as I
>>> said in previous reply about Aggregates. One of reason is just same with
>>> #2 you said :) And It's totally not managable. User is even no way to
>>> query a specific host in which host-aggregate. And there isn't a
>>> interface to query what metadata was related to the host by
>>> host-aggregate. I prefer just keep the Aggregate as tool to group the
>>> hosts. But yes, user still can use host-aggregate to manage host with
>>> old way, let's user decide what is more convenient.
>>>
>>
>> +1 to Alex's point. I just read through this thread and had the same
>> thought. If the point is to reduce complexity in the system and surface
>> capabilities to the end user, let's do that with resource provider tags,
>> not a mix of host aggregate metadata and resource provider tags so that
>> an operator has to set both, but also know in what situations he/she has
>> to set it and where.
>>
>
> Yeah, having the resource provider be tagged with capabilities versus
> having to manage aggregate tags may make some of the qualitative matching
> queries easier to grok. The query performance won't necessarily be any
> better, but they will likely be easier to read...
>
> I'm hoping Jay or someone channeling Jay can hold my hand and walk me
>> safely through the evil forest that is image properties / flavor extra
>> specs / scheduler hints / host aggregates / resource providers / and the
>> plethora of scheduler filters that use them to build a concrete
>> picture/story tying this all together. I'm thinking like use cases, what
>> does the operator need to do
>>
>
> Are you asking how things are *currently* done in Nova? If so, I'll need
> to find some alcohol.
>
> If you are asking about how we'd *like* all of the qualitative things to
> be requested and queried in the new placement API, then less alcohol is
> required.
>
> The schema I'm thinking about on the placement engine side looks like this:
>
> CREATE TABLE tags (
>   id INT NOT NULL,
>   name VARCHAR(200) NOT NULL,
>   PRIMARY KEY (id),
>   UNIQUE INDEX (name)
> );
>
> CREATE TABLE resource_provider_tags (
>   resource_provider_id INT NOT NULL,
>   tag_id INT NOT NULL,
>   PRIMARY KEY (resource_provider_id, tag_id),
>   INDEX (tag_id)
> );
>
> On the Nova side, we need a mechanism of associating a set of capabilities
> that may either be required or preferred. The thing that we currently use
> for associating requested things in Nova is the flavor, so we'd need to
> define a mapping in Nova for the tags a flavor would require or prefer.
>
> CREATE TABLE flavor_tags (
>   flavor_id INT NOT NULL,
>   tag_name VARCHAR(200) NOT NULL,
>   is_required INT NOT NULL
> );
>
> We would need to have a call in the placement REST API to find the
> resource providers that matched a particular set of required or preferred
> capability tags. Such a call might look like the following:
>
> GET /resource_providers
> {
>   "resources": {
> "VCPU": 2,
> "MEMORY_MB": 2048,
> "DISK_GB": 100
>   },
>   "requires": [
> "storage:ssd",
> "compute:hw:x86:avx2",
>   ],
>   "prefers": [
> "compute:virt:accelerated_whizzybang"
>   ]
> }
>

so GET with a request body?


>
> Disregard the quantitative side of the above request right now. We could
> answer the qualitative side of the equation with the following SQL query in
> the placement engine:
>
> SELECT rp.uuid
> FROM resource_providers AS rp
> INNER JOIN resource_provider_tags AS rpt1
> ON rp.id = rpt1.resource_provider_id
> INNER JOIN tags AS t1
> ON rpt1.tag_id = t1.id
> AND t1.name = 'storage:ssd'
> INNER JOIN resource_provider_tags AS rpt2
> ON rp.id = rpt2.resource_provider_id
> INNER JOIN tags AS t2
> ON rpt2.tag_id = t2.id
> AND t2.name = 'compute:hw:x86:avx2'
> LEFT JOIN resource_provider_tags AS rpt3
> ON rp.id = rpt3.resource_provider_id
> AND rpt3.tag_id IN (
> SELECT id FROM tags
> WHERE name IN ('compute:virt:accelerated_whizzybang'))
> GROUP BY rp.uuid
> ORDER BY COUNT(rpt3.resource_provider_id) DESC
>
> The above returns all resource providers having the 'storage:ssd' and
> 'compute:hw:x86:avx2' tags and returns resource providers *first* that have
> the 'compute:virt:accelerated_whizzybang' tag.
>
> , what does the end user of the cloud need
>> to do, etc. I think if we're going to do resource providers tags for
>> capabilities we also need to think about what we're replacing. Maybe
>> that's just host aggregate metadata, but what's the deprecation plan for
>> that?
>>
>
> Good question, as usual. My expectation would be that in Ocata, when we
> start adding the qualitative aspects to the placement REST API, we would
> introduce documentation that operators could use to translate common use
> cases that they were using flavor extra_specs and aggregate metadata for 

Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, July 20, 2016 7:16 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> Capabilities with ResourceProvider
> 
> On 07/13/2016 01:37 PM, Ed Leafe wrote:
> > On Jul 11, 2016, at 6:08 AM, Alex Xu <sou...@gmail.com> wrote:
> >
> >> For example, the capabilities can be defined as:
> >>
> >> COMPUTE_HW_CAP_CPU_AVX
> >> COMPUTE_HW_CAP_CPU_SSE
> >> 
> >> COMPUTE_HV_CAP_LIVE_MIGRATION
> >> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> >> 
> >>
> >> ( The COMPUTE means this is coming from Nova. HW means this is
> >> hardware related Capabilities. HV means this is  capabilities of
> >> Hypervisor. But the catalog of Capabilities can be discussed
> >> separated. This propose focus on the  ResourceTags. We also have
> >> another idea about not using 'PREFIX' to manage the Tags. We can add
> >> attributes to the  Tags. Then we have more control on the Tags. This
> >> will describe separately in the bottom. )
> >
> > I was ready to start ranting about using horribly mangled names to
> represent data, and then saw your comment about attributes for tags.
> Yes, a thousand times yes to attributes! There can be several
> standards, such as ‘compute’ or ‘networking’ that we use for some basic
> cross-cloud compatibility, but making them flexible is a must for
> adoption.
> 
> I disagree :) Adoption -- at least interoperable cloud adoption -- of
> this functionality will likely be hindered by super-flexible
> description of capabilities. I think having a set of "standard"
> capabilities that can be counted on to be cross-OpenStack-cloud
> compatible and a set of "dynamic" capabilities that are custom to a
> deployment would be a good thing to do.

[Mooney, Sean K]
I know there are bad memories for many on the Nova team when I mention CIM
(http://www.dmtf.org/standards/cim), but if we are going to use standard names
we should probably assess whether there are existing standards we could adopt,
instead of defining our own standard names for the resources in Nova.
For example,
http://schemas.dmtf.org/wbem/cim-html/2/CIM_ProcessorAllocationSettingData.html
defines names for the different instruction set extensions; AVX, for instance,
is DMTF:x86:AVX.
Some work has also been done in Glance to allow importing CIM metadata from
OVF files:
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/implemented/cim-namespace-metadata-definitions.html

While I don't think using the full CIM information model is useful in this
case, using the names would be valuable from an interoperability point of
view, as we would not only have standard names in OpenStack but those names
would also conform to an existing standard.

We could still allow custom attributes, but I see value in standardizing what
can be standardized.


> 
> Best,
> -jay
> 
> > I can update the qualitative request spec to add ResourceProviderTags
> as a possible implementation.
> 


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Jay Pipes

On 07/13/2016 01:37 PM, Ed Leafe wrote:

On Jul 11, 2016, at 6:08 AM, Alex Xu  wrote:


For example, the capabilities can be defined as:

COMPUTE_HW_CAP_CPU_AVX
COMPUTE_HW_CAP_CPU_SSE

COMPUTE_HV_CAP_LIVE_MIGRATION
COMPUTE_HV_CAP_LIVE_SNAPSHOT


( The COMPUTE means this is coming from Nova. HW means this is hardware related 
Capabilities. HV means this is
 capabilities of Hypervisor. But the catalog of Capabilities can be discussed 
separated. This propose focus on the
 ResourceTags. We also have another idea about not using 'PREFIX' to manage the 
Tags. We can add attributes to the
 Tags. Then we have more control on the Tags. This will describe separately in 
the bottom. )


I was ready to start ranting about using horribly mangled names to represent 
data, and then saw your comment about attributes for tags. Yes, a thousand 
times yes to attributes! There can be several standards, such as ‘compute’ or 
‘networking’ that we use for some basic cross-cloud compatibility, but making 
them flexible is a must for adoption.


I disagree :) Adoption -- at least interoperable cloud adoption -- of 
this functionality will likely be hindered by super-flexible description 
of capabilities. I think having a set of "standard" capabilities that 
can be counted on to be cross-OpenStack-cloud compatible and a set of 
"dynamic" capabilities that are custom to a deployment would be a good 
thing to do.


Best,
-jay


I can update the qualitative request spec to add ResourceProviderTags as a 
possible implementation.




Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Jay Pipes

On 07/18/2016 01:45 PM, Matt Riedemann wrote:

On 7/15/2016 8:06 PM, Alex Xu wrote:


Actually I still don't think aggregates are a good fit for managing
capabilities, just as I said in my previous reply about aggregates. One reason
is the same as your point #2 :) It's also simply not manageable: a user has no
way to query which host aggregates a specific host belongs to, and there is no
interface to query what metadata has been associated with a host through its
aggregates. I'd prefer to keep aggregates as just a tool for grouping hosts.
But yes, users can still manage hosts with host aggregates the old way; let
the user decide what is more convenient.


+1 to Alex's point. I just read through this thread and had the same
thought. If the point is to reduce complexity in the system and surface
capabilities to the end user, let's do that with resource provider tags,
not a mix of host aggregate metadata and resource provider tags so that
an operator has to set both, but also know in what situations he/she has
to set it and where.


Yeah, having the resource provider be tagged with capabilities versus 
having to manage aggregate tags may make some of the qualitative 
matching queries easier to grok. The query performance won't necessarily 
be any better, but they will likely be easier to read...



I'm hoping Jay or someone channeling Jay can hold my hand and walk me
safely through the evil forest that is image properties / flavor extra
specs / scheduler hints / host aggregates / resource providers / and the
plethora of scheduler filters that use them to build a concrete
picture/story tying this all together. I'm thinking like use cases, what
does the operator need to do


Are you asking how things are *currently* done in Nova? If so, I'll need 
to find some alcohol.


If you are asking about how we'd *like* all of the qualitative things to 
be requested and queried in the new placement API, then less alcohol is 
required.


The schema I'm thinking about on the placement engine side looks like this:

CREATE TABLE tags (
  id INT NOT NULL,
  name VARCHAR(200) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE INDEX (name)
);

CREATE TABLE resource_provider_tags (
  resource_provider_id INT NOT NULL,
  tag_id INT NOT NULL,
  PRIMARY KEY (resource_provider_id, tag_id),
  INDEX (tag_id)
);

On the Nova side, we need a mechanism of associating a set of 
capabilities that may either be required or preferred. The thing that we 
currently use for associating requested things in Nova is the flavor, so 
we'd need to define a mapping in Nova for the tags a flavor would 
require or prefer.


CREATE TABLE flavor_tags (
  flavor_id INT NOT NULL,
  tag_name VARCHAR(200) NOT NULL,
  is_required INT NOT NULL
);

We would need to have a call in the placement REST API to find the 
resource providers that matched a particular set of required or 
preferred capability tags. Such a call might look like the following:


GET /resource_providers
{
  "resources": {
"VCPU": 2,
"MEMORY_MB": 2048,
"DISK_GB": 100
  },
  "requires": [
"storage:ssd",
"compute:hw:x86:avx2",
  ],
  "prefers": [
"compute:virt:accelerated_whizzybang"
  ]
}
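
(As an aside, no response format was proposed here; presumably such a call
would simply return the list of matching providers, along the lines of:

  {
    "resource_providers": [
      {"uuid": "b6ee1bae-0000-4000-8000-000000000001"},
      {"uuid": "b6ee1bae-0000-4000-8000-000000000002"}
    ]
  }

with the UUIDs above being purely illustrative placeholders.)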

Disregard the quantitative side of the above request right now. We could 
answer the qualitative side of the equation with the following SQL query 
in the placement engine:


SELECT rp.uuid
FROM resource_providers AS rp
INNER JOIN resource_provider_tags AS rpt1
ON rp.id = rpt1.resource_provider_id
INNER JOIN tags AS t1
ON rpt1.tag_id = t1.id
AND t1.name = 'storage:ssd'
INNER JOIN resource_provider_tags AS rpt2
ON rp.id = rpt2.resource_provider_id
INNER JOIN tags AS t2
ON rpt2.tag_id = t2.id
AND t2.name = 'compute:hw:x86:avx2'
LEFT JOIN resource_provider_tags AS rpt3
ON rp.id = rpt3.resource_provider_id
AND rpt3.tag_id IN (
SELECT id FROM tags
WHERE name IN ('compute:virt:accelerated_whizzybang'))
GROUP BY rp.uuid
ORDER BY COUNT(rpt3.resource_provider_id) DESC

The above returns all resource providers having the 'storage:ssd' and 
'compute:hw:x86:avx2' tags and returns resource providers *first* that 
have the 'compute:virt:accelerated_whizzybang' tag.



, what does the end user of the cloud need
to do, etc. I think if we're going to do resource providers tags for
capabilities we also need to think about what we're replacing. Maybe
that's just host aggregate metadata, but what's the deprecation plan for
that?


Good question, as usual. My expectation would be that in Ocata, when we 
start adding the qualitative aspects to the placement REST API, we would 
introduce documentation that operators could use to translate common use 
cases that they were using flavor extra_specs and aggregate metadata for 
in the pre-placement world to the resource provider tags setup that 
would replace that functionality in the placement API world. Unlike most 
of the quantitative side of things, there really isn't a good way to 
"autoheal" or "autosetup" these things.



There is a ton to talk about here, so I'll leave that for the midcycle.
But 

Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-19 Thread Alex Xu
2016-07-18 13:45 GMT-07:00 Matt Riedemann :

> On 7/15/2016 8:06 PM, Alex Xu wrote:
>
>>
>> Actually I still think aggregates isn't good for Manage Caps, just as I
>> said in previous reply about Aggregates. One of reason is just same with
>> #2 you said :) And It's totally not managable. User is even no way to
>> query a specific host in which host-aggregate. And there isn't a
>> interface to query what metadata was related to the host by
>> host-aggregate. I prefer just keep the Aggregate as tool to group the
>> hosts. But yes, user still can use host-aggregate to manage host with
>> old way, let's user decide what is more convenient.
>>
>>
> +1 to Alex's point. I just read through this thread and had the same
> thought. If the point is to reduce complexity in the system and surface
> capabilities to the end user, let's do that with resource provider tags,
> not a mix of host aggregate metadata and resource provider tags so that an
> operator has to set both, but also know in what situations he/she has to
> set it and where.
>
> I'm hoping Jay or someone channeling Jay can hold my hand and walk me
> safely through the evil forest that is image properties / flavor extra
> specs / scheduler hints / host aggregates / resource providers / and the
> plethora of scheduler filters that use them to build a concrete
> picture/story tying this all together. I'm thinking like use cases, what
> does the operator need to do, what does the end user of the cloud need to
> do, etc. I think if we're going to do resource providers tags for
> capabilities we also need to think about what we're replacing. Maybe that's
> just host aggregate metadata, but what's the deprecation plan for that?
>

Yes, there is a lot of confusion around the existing image properties and
extra_specs. I have tried to list all of the properties and extra_specs here:
https://etherpad.openstack.org/p/nova_existed_extra_spec_and_metadata

But looking at them, I think none of them are capabilities (after Jay pointed
out to me that disk_type isn't a capability). They are very hypervisor-specific
or are VM hardware configuration details.

The Nova API shouldn't expose any hypervisor-specific details, nor VM hardware
configuration details. Users shouldn't have to care about those details; they
should just request capabilities, and Nova then decides the VM hardware
configuration based on those capabilities.

My initial thought is that we leave the existing properties and extra_specs
alone and deal with capabilities separately. I'm just dumping my thoughts here.

As for the deprecation of host aggregate metadata, I haven't thought that
through yet. Presumably we can keep it for a release after we have
ResourceTags? Anyway, I will think about it more; thanks for pointing this out.



>
> There is a ton to talk about here, so I'll leave that for the midcycle.
> But let's think about what, if anything, needs to land in Newton to enable
> us working on this in Ocata - but our priority for the midcycle is really
> going to be focused on what things we need to get done yet in Newton based
> on what we said we'd do in Austin.
>
> Also, a final nit - can we please be specific about roles in this thread
> and any specs? I see 'user' thrown around a lot, but there are different
> kinds of users. Only admins can see host aggregates and their metadata. And
> when we're talking about how these tags will be used, let's be clear about
> who the actors are - admins or cloud users. It helps avoid some confusion.


Got it, I will clarify the user roles in the spec later. Thanks for pointing
this out too.


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-18 Thread Matt Riedemann

On 7/15/2016 8:06 PM, Alex Xu wrote:


Actually I still don't think aggregates are a good fit for managing
capabilities, just as I said in my previous reply about aggregates. One reason
is the same as your point #2 :) It's also simply not manageable: a user has no
way to query which host aggregates a specific host belongs to, and there is no
interface to query what metadata has been associated with a host through its
aggregates. I'd prefer to keep aggregates as just a tool for grouping hosts.
But yes, users can still manage hosts with host aggregates the old way; let
the user decide what is more convenient.



+1 to Alex's point. I just read through this thread and had the same 
thought. If the point is to reduce complexity in the system and surface 
capabilities to the end user, let's do that with resource provider tags, 
not a mix of host aggregate metadata and resource provider tags so that 
an operator has to set both, but also know in what situations he/she has 
to set it and where.


I'm hoping Jay or someone channeling Jay can hold my hand and walk me 
safely through the evil forest that is image properties / flavor extra 
specs / scheduler hints / host aggregates / resource providers / and the 
plethora of scheduler filters that use them to build a concrete 
picture/story tying this all together. I'm thinking like use cases, what 
does the operator need to do, what does the end user of the cloud need 
to do, etc. I think if we're going to do resource providers tags for 
capabilities we also need to think about what we're replacing. Maybe 
that's just host aggregate metadata, but what's the deprecation plan for 
that?


There is a ton to talk about here, so I'll leave that for the midcycle. 
But let's think about what, if anything, needs to land in Newton to 
enable us working on this in Ocata - but our priority for the midcycle 
is really going to be focused on what things we need to get done yet in 
Newton based on what we said we'd do in Austin.


Also, a final nit - can we please be specific about roles in this thread 
and any specs? I see 'user' thrown around a lot, but there are different 
kinds of users. Only admins can see host aggregates and their metadata. 
And when we're talking about how these tags will be used, let's be clear 
about who the actors are - admins or cloud users. It helps avoid some 
confusion.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-15 Thread Alex Xu
2016-07-14 10:38 GMT+08:00 Cheng, Yingxin :

>
> On 7/14/16, 05:42, "Ed Leafe"  wrote:
> On Jul 12, 2016, at 2:43 AM, Cheng, Yingxin 
> wrote:
> > 4. Capabilities are managed/grouped/abstracted by namespaces, and
> scheduler can make decisions based on either cap_names or cap_namespaces
> > 5. Placement service DON’T have any specific knowledge of a
> capability, it only know the its name, namespaces, its relationship to
> resource providers. They are used for scheduling, capability management and
> report.
>
> Thinking about that a bit, it would seem that a host aggregate could
> also be represented as a namespace:name tag. That makes sense, since the
> fact that a host belongs to a particular aggregate is a qualitative aspect
> of that host.
>
>
> Thanks for the feedback!
>
> We’ve thought about the relationship between capability tags and host
> aggregates carefully. And we decide not to blend it with host aggregates,
> for several reasons below:
> 1. We want to manage capabilities in only ONE place, either in host
> aggregates, compute_node records or with resource_provider records.
> 2. Compute services may need to attach discovered capabilities to its
> host. It is inconvenient if we store caps with host aggregates, because
> nova-compute needs to create/search host aggregates first, it can’t
> directly attach caps.
> 3. Other services may need to attach discovered capabilities to its
> resources. So the best place is to its related resource pool, not
> aggregates, nor compute_node records. Note the relationship between
> resource pools and host aggregates are N:N.
> 4. It’s logically correct to store caps with resource_providers, because
> caps are actually owned by nodes or resource pools.
> 5. Scheduling will be faster if resource-providers are directly attached
> with caps.
>
> However, for user-defined caps, it still seems easier to manage them with
> aggregates. We may want to manage them in a way different from pre-defined
> caps. Or we can indirectly manage them through aggregates, but they are
> actually stored with compute-node resource-providers in placement db.
>

Actually I still don't think aggregates are a good fit for managing
capabilities, just as I said in my previous reply about aggregates. One reason
is the same as your point #2 :) It's also simply not manageable: a user has no
way to query which host aggregates a specific host belongs to, and there is no
interface to query what metadata has been associated with a host through its
aggregates. I'd prefer to keep aggregates as just a tool for grouping hosts.
But yes, users can still manage hosts with host aggregates the old way; let
the user decide what is more convenient.


>
>
> --
> Regards
> Yingxin
>


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-14 Thread Cheng, Yingxin
On 7/14/16, 12:18, "Edward Leafe"  wrote:
On Jul 13, 2016, at 9:38 PM, Cheng, Yingxin  wrote:
>>Thinking about that a bit, it would seem that a host aggregate could 
also be represented as a namespace:name tag. That makes sense, since the fact 
that a host belongs to a particular aggregate is a qualitative aspect of that 
host.
>> 
> 
> Thanks for the feedback!
> 
> We’ve thought about the relationship between capability tags and host 
aggregates carefully. And we decide not to blend it with host aggregates, for 
several reasons below:
> 1. We want to manage capabilities in only ONE place, either in host 
aggregates, compute_node records or with resource_provider records.
> 2. Compute services may need to attach discovered capabilities to its 
host. It is inconvenient if we store caps with host aggregates, because 
nova-compute needs to create/search host aggregates first, it can’t directly 
attach caps.
> 3. Other services may need to attach discovered capabilities to its 
resources. So the best place is to its related resource pool, not aggregates, 
nor compute_node records. Note the relationship between resource pools and host 
aggregates are N:N.
> 4. It’s logically correct to store caps with resource_providers, because 
caps are actually owned by nodes or resource pools.
> 5. Scheduling will be faster if resource-providers are directly attached 
with caps.
> 
> However, for user-defined caps, it still seems easier to manage them with 
aggregates. We may want to manage them in a way different from pre-defined 
caps. Or we can indirectly manage them through aggregates, but they are 
actually stored with compute-node resource-providers in placement db.

Oh, I think you misunderstood me. Capabilities definitely belong with 
resource providers, not host aggregates, because not all RPs are hosts.
I'm thinking that host aggregates themselves are equivalent to capabilities 
for hosts. Imagine we have 10 hosts, and put 3 of them in an aggregate. How is 
that different than if we give those three a tag with the 'host_agg' namespace, 
and with tag named for the agg?
I'm just thinking out loud here. There might be opportunities to simplify a 
lot of the code between capability tags and host aggregates in the future, 
since it looks like host aggs are a structural subset of RP capability tags.

-- Ed Leafe


Your concerns are correct. The major goal of “Capability Tags” series is to 
*replace* existing capability-like functionalities in Nova and Scheduler with a 
more generic and extensible implementation.

As you said, host aggregates themselves are equivalent to capabilities for 
hosts. We should continue support this way with the new “Capability Tags” 
implementation. Currently users can write free-formed metadata to host 
aggregates, then scheduler can process them through 
“AggregateImagePropertiesIsolation” and “AggregateInstanceExtraSpecsFilter”, 
when users can specify those caps in image-props and flavor extra-specs. This 
means we need to support capability tags in group granularity, i.e. to tag caps 
to host aggregates. It can be in a separate implementation called “Aggregate 
Capability Tags”, replacing current implementation with the two mentioned 
aggregate filters.

As for “Resource Provider Capability Tags”, we are managing capabilities in a 
finer granularity: host and resource pool level. Currently users can only use 
pre-defined caps such as “architecture”, “hypervisor-types”, 
“hypervisor-versions” and “vm-mode” in host states, which can be processed by 
“ImagePropertiesFilter” and “ComputeCapabilitiesFilter” and “JsonFilter”, when 
users can specify them in image-props, flavor extra-specs and scheduler hints. 
We are designing “Resource Provider Capability Tags” to replace them and 
providing extensibility to add more service-defined and user-defined caps in a 
generic way.

The above also means we may want to manage caps in a separate table, and 
maintain their relationship with resource providers and host aggregates. So we 
can query existing caps, validate them in image-props, flavor extra-specs and 
scheduler hints, and manage them in a consistent way.


---
Regards
Yingxin







Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-13 Thread Edward Leafe
On Jul 13, 2016, at 9:38 PM, Cheng, Yingxin  wrote:

>>Thinking about that a bit, it would seem that a host aggregate could also 
>> be represented as a namespace:name tag. That makes sense, since the fact 
>> that a host belongs to a particular aggregate is a qualitative aspect of 
>> that host.
>> 
> 
> Thanks for the feedback!
> 
> We’ve thought about the relationship between capability tags and host 
> aggregates carefully. And we decide not to blend it with host aggregates, for 
> several reasons below:
> 1. We want to manage capabilities in only ONE place, either in host 
> aggregates, compute_node records or with resource_provider records.
> 2. Compute services may need to attach discovered capabilities to its host. 
> It is inconvenient if we store caps with host aggregates, because 
> nova-compute needs to create/search host aggregates first, it can’t directly 
> attach caps.
> 3. Other services may need to attach discovered capabilities to its 
> resources. So the best place is to its related resource pool, not aggregates, 
> nor compute_node records. Note the relationship between resource pools and 
> host aggregates are N:N.
> 4. It’s logically correct to store caps with resource_providers, because caps 
> are actually owned by nodes or resource pools.
> 5. Scheduling will be faster if resource-providers are directly attached with 
> caps.
> 
> However, for user-defined caps, it still seems easier to manage them with 
> aggregates. We may want to manage them in a way different from pre-defined 
> caps. Or we can indirectly manage them through aggregates, but they are 
> actually stored with compute-node resource-providers in placement db.

Oh, I think you misunderstood me. Capabilities definitely belong with resource 
providers, not host aggregates, because not all RPs are hosts.

I'm thinking that host aggregates themselves are equivalent to capabilities for 
hosts. Imagine we have 10 hosts, and put 3 of them in an aggregate. How is that 
different than if we give those three a tag with the 'host_agg' namespace, and 
with tag named for the agg?

I'm just thinking out loud here. There might be opportunities to simplify a lot 
of the code between capability tags and host aggregates in the future, since it 
looks like host aggs are a structural subset of RP capability tags.

-- Ed Leafe









Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-13 Thread Cheng, Yingxin

On 7/14/16, 05:42, "Ed Leafe"  wrote:
On Jul 12, 2016, at 2:43 AM, Cheng, Yingxin  wrote:
> 4. Capabilities are managed/grouped/abstracted by namespaces, and 
scheduler can make decisions based on either cap_names or cap_namespaces
> 5. Placement service DON’T have any specific knowledge of a capability, 
it only know the its name, namespaces, its relationship to resource providers. 
They are used for scheduling, capability management and report.

Thinking about that a bit, it would seem that a host aggregate could also 
be represented as a namespace:name tag. That makes sense, since the fact that a 
host belongs to a particular aggregate is a qualitative aspect of that host.


Thanks for the feedback!

We’ve thought about the relationship between capability tags and host 
aggregates carefully. And we decide not to blend it with host aggregates, for 
several reasons below:
1. We want to manage capabilities in only ONE place, either in host aggregates, 
compute_node records or with resource_provider records.
2. Compute services may need to attach discovered capabilities to its host. It 
is inconvenient if we store caps with host aggregates, because nova-compute 
needs to create/search host aggregates first, it can’t directly attach caps.
3. Other services may need to attach discovered capabilities to its resources. 
So the best place is to its related resource pool, not aggregates, nor 
compute_node records. Note the relationship between resource pools and host 
aggregates are N:N.
4. It’s logically correct to store caps with resource_providers, because caps 
are actually owned by nodes or resource pools.
5. Scheduling will be faster if resource-providers are directly attached with 
caps.

However, for user-defined caps, it still seems easier to manage them with 
aggregates. We may want to manage them in a way different from pre-defined 
caps. Or we can indirectly manage them through aggregates, but they are 
actually stored with compute-node resource-providers in placement db. 


-- 
Regards
Yingxin 



Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-13 Thread Ed Leafe
On Jul 12, 2016, at 2:43 AM, Cheng, Yingxin  wrote:

> 4. Capabilities are managed/grouped/abstracted by namespaces, and scheduler 
> can make decisions based on either cap_names or cap_namespaces
> 5. Placement service DON’T have any specific knowledge of a capability, it 
> only know the its name, namespaces, its relationship to resource providers. 
> They are used for scheduling, capability management and report.

Thinking about that a bit, it would seem that a host aggregate could also be 
represented as a namespace:name tag. That makes sense, since the fact that a 
host belongs to a particular aggregate is a qualitative aspect of that host.


-- Ed Leafe








Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-13 Thread Ed Leafe
On Jul 11, 2016, at 6:08 AM, Alex Xu  wrote:

> For example, the capabilities can be defined as:
> 
> COMPUTE_HW_CAP_CPU_AVX
> COMPUTE_HW_CAP_CPU_SSE
> 
> COMPUTE_HV_CAP_LIVE_MIGRATION
> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> 
> 
> ( The COMPUTE means this is coming from Nova. HW means this is hardware 
> related Capabilities. HV means this is
>  capabilities of Hypervisor. But the catalog of Capabilities can be discussed 
> separated. This propose focus on the
>  ResourceTags. We also have another idea about not using 'PREFIX' to manage 
> the Tags. We can add attributes to the
>  Tags. Then we have more control on the Tags. This will describe separately 
> in the bottom. )

I was ready to start ranting about using horribly mangled names to represent 
data, and then saw your comment about attributes for tags. Yes, a thousand 
times yes to attributes! There can be several standards, such as ‘compute’ or 
‘networking’, that we use for some basic cross-cloud compatibility, but making 
them flexible is a must for adoption.
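
To make the idea concrete, a tag with attributes might look roughly like the sketch 
below. The field names are invented purely for illustration; nothing here is a settled 
schema.

    # Hypothetical shape of a tag carrying attributes instead of encoding
    # everything into a prefixed name; a sketch only, not a proposed schema.
    tag = {
        'name': 'CPU_AVX',
        'namespace': 'compute.hw',   # grouping that would replace the COMPUTE_HW_ prefix
        'origin': 'nova',            # which service (or user) defined the capability
    }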

I can update the qualitative request spec to add ResourceProviderTags as a 
possible implementation.


-- Ed Leafe




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-12 Thread Cheng, Yingxin
Some thoughts on “Capability” use cases:
1. The nova-compute service can discover host capabilities and tag them onto its 
compute-node resource provider record.
2. Other services such as ironic and cinder may want to manage their own resource 
pools, and they can tag capabilities onto those pools themselves.
3. Admins/users can register user-defined capabilities on resource providers 
(i.e. a pool or a host).
4. Capabilities are managed/grouped/abstracted by namespaces, and the scheduler can 
make decisions based on either cap_names or cap_namespaces.
5. The placement service DOESN'T have any specific knowledge of a capability; it only 
knows its name, its namespaces, and its relationship to resource providers. These are 
used for scheduling, capability management and reporting.
6. The placement service needs to know where a capability comes from (user-defined, 
nova-defined, or others), so it can control modification of capabilities and list 
existing capabilities by type (a rough sketch of such a record follows below).

I think the above covers the normal use cases; please correct me if there are 
mistakes, or add more items.
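
As a rough illustration of item 6, the record placement keeps for a capability could 
carry its origin; something like the sketch below, where the field names are invented 
for illustration only.

    # Sketch only: one possible way for placement to track where a capability
    # came from, so it can control who may modify it.
    capability = {
        'name': 'COMPUTE_HV_CAP_LIVE_MIGRATION',
        'origin': 'nova',          # vs. 'user', 'ironic', 'cinder', ...
        'mutable_by': ['nova'],    # only the defining service may change it
    }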


Regards,
-Yingxin

From: Alex Xu [mailto:sou...@gmail.com]
Sent: Monday, July 11, 2016 11:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bhandaru, Malini K; Cheng, Yingxin; Jin, Yuntong; Tan, Lin
Subject: Re: [Nova] [RFC] ResourceProviderTags - Manage Capabilities with 
ResourceProvider

Matt mentioned Aggregates in the scheduler meeting; that is a good question, and it 
also reminded me that I should explain the relationship between Aggregates and 
ResourceProviderTags.

Aggregates are expected to remain a tool for grouping hosts, and just for grouping 
hosts. People used to manage Capabilities with Aggregates: they put the hosts with 
some kind of Capability into the same Aggregate and used the metadata to identify the 
Capabilities. But Aggregates with metadata are really not easy to manage.

Consider this case:

Host1 has Capability1
Host2 has Capability1 and Capability2
Host3 has Capability2 and Capability3.

For this case we need one aggregate per Capability, three in total: agg_cap1, agg_cap2, 
agg_cap3. Then we need to add the hosts to the aggregates as below:

agg_cap1: host1, host2
agg_cap2: host2, host3
agg_cap3: host3

When there are more capabilities and more hosts to manage, the mapping between hosts 
and aggregates becomes more complicated. And there isn't an easy interface to let a 
user know what kind of capabilities a specific host has.

ResourceProviderTags will be a substitute for Aggregates for managing capabilities. 
With tags, there is no complex mapping to maintain.

For the same case, we just need to add tags to the ResourceProviders. The tags 
interface is pretty simple; check out the api-wg guideline 
https://github.com/openstack/api-wg/blob/master/guidelines/tags.rst. And the 
query parameters make management easy.

There are also users who want to write scripts to manage the Capabilities. With 
aggregates, such a script is very hard to write because it has to manage the mapping 
between aggregates and hosts. With tags the script is very easy, because tags are just 
a set of strings.
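
As a small sketch of the script case (host and capability data below are made up, and 
hard-coded instead of coming from the placement API): with tags, finding the hosts that 
provide a set of capabilities is a plain set check, with no aggregate mapping to walk.

    # Illustrative only.
    host_tags = {
        'host1': {'CAP1'},
        'host2': {'CAP1', 'CAP2'},
        'host3': {'CAP2', 'CAP3'},
    }

    def hosts_with(required):
        # A host qualifies when its tag set is a superset of the required caps.
        return sorted(h for h, tags in host_tags.items() if required <= tags)

    print(hosts_with({'CAP1', 'CAP2'}))   # ['host2']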


2016-07-11 19:08 GMT+08:00 Alex Xu :
This proposal is about using ResourceProviderTags as a solution to manage 
Capabilities (Qualitative) in ResourceProvider.
ResourceProviderTags describe the capabilities which are defined by an OpenStack 
service (Compute Service, Storage Service, Network Service etc.) or by users. A 
ResourceProvider provides resources exposed by a single compute node, a shared 
resource pool or an external resource-providing service of some sort.  As such, 
ResourceProviderTags are also expected to describe the capabilities of a single 
ResourceProvider or the capabilities of a ResourcePool.

ResourceProviderTags are similar to ServerTags [0], which are already implemented 
in Nova. The only difference is that the tags are attached to the ResourceProvider. 
The API endpoint will be "/ResourceProvider/{uuid}/tags", and it will follow the 
API-WG guideline about Tags [1].

As the Tags are just strings, the meaning of a Tag isn't defined by the Scheduler. 
The meaning of a Tag is defined by OpenStack services or Users. ResourceProviderTags 
will only be used for scheduling, with a ResourceProviderTags filter.

ResourceProviderTags easily cover the cases of a single ResourceProvider, a 
ResourcePool and DynamicResources. Let's look at those cases one by one.

For the single ResourceProvider case, just look at how Nova reports a ComputeNode's 
Capabilities. Firstly, Nova is expected to define a standard way to describe the 
Capabilities provided by the Hypervisor or Hardware, so those descriptions of 
Capabilities can be used across an OpenStack deployment. So Nova will define a set 
of Tags. Those Tags should include a prefix to 

Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-11 Thread Alex Xu
Matt mentioned Aggregates in the scheduler meeting; that is a good question, and it
also reminded me that I should explain the relationship between Aggregates and
ResourceProviderTags.

Aggregates are expected to remain a tool for grouping hosts, and just for grouping
hosts. People used to manage Capabilities with Aggregates: they put the hosts with
some kind of Capability into the same Aggregate and used the metadata to identify
the Capabilities. But Aggregates with metadata are really not easy to manage.

Consider this case:

Host1 has Capability1
Host2 has Capability1 and Capability2
Host3 has Capability2 and Capability3.

For this case we need one aggregate per Capability, three in total: agg_cap1,
agg_cap2, agg_cap3. Then we need to add the hosts to the aggregates as below:

agg_cap1: host1, host2
agg_cap2: host2, host3
agg_cap3: host3

When there are more capabilities and more hosts to manage, the mapping between
hosts and aggregates becomes more complicated. And there isn't an easy interface
to let a user know what kind of capabilities a specific host has.

ResourceProviderTags will be a substitute for Aggregates for managing
capabilities. With tags, there is no complex mapping to maintain.

For the same case, we just need to add tags to the ResourceProviders. The tags
interface is pretty simple; check out the api-wg guideline
https://github.com/openstack/api-wg/blob/master/guidelines/tags.rst. And
the query parameters make management easy.

There are also users who want to write scripts to manage the Capabilities. With
aggregates, such a script is very hard to write because it has to manage the
mapping between aggregates and hosts. With tags the script is very easy, because
tags are just a set of strings.


2016-07-11 19:08 GMT+08:00 Alex Xu :

> This proposal is about using ResourceProviderTags as a solution to manage
> Capabilities (Qualitative) in ResourceProvider.
> ResourceProviderTags describe the capabilities which are defined by an OpenStack
> service (Compute Service, Storage Service, Network Service etc.) or by users. A
> ResourceProvider provides resources exposed by a single compute node, a shared
> resource pool or an external resource-providing service of some sort.  As such,
> ResourceProviderTags are also expected to describe the capabilities of a single
> ResourceProvider or the capabilities of a ResourcePool.
>
> ResourceProviderTags are similar to ServerTags [0], which are already implemented
> in Nova. The only difference is that the tags are attached to the ResourceProvider.
> The API endpoint will be "/ResourceProvider/{uuid}/tags", and it will follow the
> API-WG guideline about Tags [1].
>
> As the Tags are just strings, the meaning of a Tag isn't defined by the Scheduler.
> The meaning of a Tag is defined by OpenStack services or Users.
> ResourceProviderTags will only be used for scheduling, with a
> ResourceProviderTags filter.
>
> ResourceProviderTags easily cover the cases of a single ResourceProvider, a
> ResourcePool and DynamicResources. Let's look at those cases one by one.
>
> For the single ResourceProvider case, just look at how Nova reports a
> ComputeNode's Capabilities. Firstly, Nova is expected to define a standard way to
> describe the Capabilities provided by the Hypervisor or Hardware, so those
> descriptions of Capabilities can be used across an OpenStack deployment. So Nova
> will define a set of Tags. Those Tags should include a prefix to indicate that
> they come from Nova. Also, the naming rule of the prefix can be used to catalog
> the Capabilities. For example, the capabilities can be defined as:
>
> COMPUTE_HW_CAP_CPU_AVX
> COMPUTE_HW_CAP_CPU_SSE
> 
> COMPUTE_HV_CAP_LIVE_MIGRATION
> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> 
>
> ( The COMPUTE prefix means this comes from Nova. HW means these are
> hardware-related Capabilities. HV means these are capabilities of the Hypervisor.
> But the catalog of Capabilities can be discussed separately. This proposal focuses
> on the ResourceTags. We also have another idea about not using a 'PREFIX' to
> manage the Tags: we can add attributes to the Tags. Then we have more control
> over the Tags. This is described separately at the bottom. )
>
> Nova will create a ResourceProvider for each compute node, report the
> quantitative stuff, and report capabilities by adding those defined tags to the
> ResourceProvider at the same time. Then those Capabilities are exposed by Nova
> automatically.
>
> The capabilities of a ComputeNode can be queried through the API "GET
> /ResourceProviders/{uuid}/tags".
>
> For the ResourcePool case, let us use a Shared Storage Pool as an example.
> Different Storage Pools may have different capabilities; maybe one of the Pools
> uses SSDs. To expose that Capability, an admin user can do as below:
>
> 1. Define the aggregates
>   $AGG_UUID=`openstack aggregate create r1rck0610`
>
> 2. Create resource pool for shared storage
>   $RP_UUID=`openstack 

[openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-11 Thread Alex Xu
This proposal is about using ResourceProviderTags as a solution to manage
Capabilities (Qualitative) in ResourceProvider.
ResourceProviderTags describe the capabilities which are defined by an OpenStack
service (Compute Service, Storage Service, Network Service etc.) or by users. A
ResourceProvider provides resources exposed by a single compute node, a shared
resource pool or an external resource-providing service of some sort.  As such,
ResourceProviderTags are also expected to describe the capabilities of a single
ResourceProvider or the capabilities of a ResourcePool.

ResourceProviderTags are similar to ServerTags [0], which are already implemented
in Nova. The only difference is that the tags are attached to the ResourceProvider.
The API endpoint will be "/ResourceProvider/{uuid}/tags", and it will follow the
API-WG guideline about Tags [1].

As the Tags are just strings, the meaning of a Tag isn't defined by the Scheduler.
The meaning of a Tag is defined by OpenStack services or Users.
ResourceProviderTags will only be used for scheduling, with a
ResourceProviderTags filter.

ResourceProviderTags easily cover the cases of a single ResourceProvider, a
ResourcePool and DynamicResources. Let's look at those cases one by one.

For the single ResourceProvider case, just look at how Nova reports a
ComputeNode's Capabilities. Firstly, Nova is expected to define a standard way to
describe the Capabilities provided by the Hypervisor or Hardware, so those
descriptions of Capabilities can be used across an OpenStack deployment. So Nova
will define a set of Tags. Those Tags should include a prefix to indicate that
they come from Nova. Also, the naming rule of the prefix can be used to catalog
the Capabilities. For example, the capabilities can be defined as:

COMPUTE_HW_CAP_CPU_AVX
COMPUTE_HW_CAP_CPU_SSE

COMPUTE_HV_CAP_LIVE_MIGRATION
COMPUTE_HV_CAP_LIVE_SNAPSHOT


( The COMPUTE prefix means this comes from Nova. HW means these are
hardware-related Capabilities. HV means these are capabilities of the Hypervisor.
But the catalog of Capabilities can be discussed separately. This proposal focuses
on the ResourceTags. We also have another idea about not using a 'PREFIX' to
manage the Tags: we can add attributes to the Tags. Then we have more control
over the Tags. This is described separately at the bottom. )
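
As a small illustration of the prefix/catalog idea, the capability names above can be
grouped by their catalog purely from the naming rule. The grouping logic below is just
a sketch; only the tag names come from the example above.

    # Group capability tags by catalog, using the <SERVICE>_<CATALOG>_CAP_<NAME>
    # naming rule described above.
    tags = [
        'COMPUTE_HW_CAP_CPU_AVX',
        'COMPUTE_HW_CAP_CPU_SSE',
        'COMPUTE_HV_CAP_LIVE_MIGRATION',
        'COMPUTE_HV_CAP_LIVE_SNAPSHOT',
    ]

    catalogs = {}
    for tag in tags:
        service, catalog, _, name = tag.split('_', 3)
        catalogs.setdefault((service, catalog), []).append(name)

    # {('COMPUTE', 'HW'): ['CPU_AVX', 'CPU_SSE'],
    #  ('COMPUTE', 'HV'): ['LIVE_MIGRATION', 'LIVE_SNAPSHOT']}
    print(catalogs)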

Nova will create a ResourceProvider for each compute node, report the
quantitative stuff, and report capabilities by adding those defined tags to the
ResourceProvider at the same time. Then those Capabilities are exposed by Nova
automatically.

The capabilities of a ComputeNode can be queried through the API "GET
/ResourceProviders/{uuid}/tags".
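
A rough sketch of what such a query might look like; the exact path spelling, the
endpoint host and the response shape (the {"tags": [...]} form from the api-wg tags
guideline) are assumptions of this proposal, not a finished API.

    # Illustrative only: list the capability tags of one resource provider.
    import requests

    resp = requests.get('http://placement.example.com/resource_providers/'
                        '4fff331e-0000-0000-0000-000000000000/tags')
    print(resp.json())
    # e.g. {"tags": ["COMPUTE_HW_CAP_CPU_AVX", "COMPUTE_HV_CAP_LIVE_MIGRATION"]}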

For the ResourcePool case, let us use a Shared Storage Pool as an example.
Different Storage Pools may have different capabilities; maybe one of the Pools
uses SSDs. To expose that Capability, an admin user can do as below:

1. Define the aggregates
  $AGG_UUID=`openstack aggregate create r1rck0610`

2. Create resource pool for shared storage
  $RP_UUID=`openstack resource-provider create "/mnt/nfs/row1racks0610/" \
--aggregate-uuid=$AGG_UUID`

3. Update the capacity of shared storage
  openstack resource-provider set inventory $RP_UUID \
--resource-class=DISK_GB \
--total=10 --reserved=1000 \
--min-unit=50 --max-unit=1 --step-size=10 \
--allocation-ratio=1.0

4. Add the Capabilities of shared storage
  openstack resource-provider add tags $RP_UUID --tag STORAGE_WITH_SSD

In this case, 'STORAGE_WITH_SSD' is defined by the admin user. This is the same as
on the Quantitative side: there is no agent to report the Quantitative data, and
likewise none for the Qualitative.

This also easily covers the DynamicResource case. Thinking of Ironic, the admin
will create a ResourcePool for bare-metal machines with the same hardware
configuration. Those machines will have the same set of capabilities, so those
capabilities will be added to the ResourcePool as tags; this is pretty much the
same as the SharedStoragePool case.

To expose cloud capabilities to users, there is one more API endpoint, 'GET
/ResourceProviders/Tags'. A user can get all the tags and then know what kind of
Capabilities the cloud provides. Query parameters will allow the user to filter
the Tags by the prefix rules.
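
For illustration, discovering capabilities cloud-wide and narrowing them by prefix
could look roughly like the sketch below. The endpoint host and path spelling follow
the proposal above, and since no filter query parameter has been agreed, the prefix
match is done client-side here.

    # Illustrative only: ask for every tag the cloud exposes, then keep the
    # hypervisor-capability namespace.
    import requests

    all_tags = requests.get(
        'http://placement.example.com/resource_providers/tags').json()['tags']
    hv_caps = [t for t in all_tags if t.startswith('COMPUTE_HV_')]
    print(hv_caps)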

This proposal is intended to be a solution for managing Capabilities in the
scheduler with ResourceProvider. But yes, looking at how Nova implements the
management of Capabilities, this is just part of the solution. The whole solution
still needs other proposals (like [2]) to describe how to model capabilities
inside the compute node, and proposals (like [3]) to describe how to request
capabilities.

Manage Tags with attributes
===========================
As described above, we add a prefix to Tags to mark which service a Tag comes
from and which catalog or namespace of Capabilities the Tag belongs to. An
alternative idea is adding attributes to the Tags. We can use one attribute of
the Tags to mark the origin of