Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams
I think I understand your confusion, Justin.  Extensions are not there to bind 
APIs together.  The examples I gave were probably a bit misleading.  Extensions 
are there to support niche functionality and to allow developers to innovate 
without having to wait for some centralized group to approve.

You're right,  things should become clearer as we move towards code :-)

-jOrGe W.



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Justin Santa Barbara
I find this even more confusing than before.  On the one hand, we talk about
a suite of independent APIs, and on the other hand we talk about binding
them together using extensions.  We talk about standardizing around one API,
and we talk about letting a thousand flowers bloom as extensions.

I'm going to wait till there's concrete code here before commenting further,
I think, so that we can talk in specifics.

Justin



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Erik Carlin
Whoops, the extension presentation link was broken.  Here is a working one: 
<http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Erik Carlin
The way I see it, there isn't a singular OpenStack API (even today there is 
swift, nova, and glance).  OpenStack is a suite of IaaS services, each with its 
own API – so there is a SUITE of standard OS APIs.  And each OS service should 
strive to define the canonical API for automating that particular service.  If 
I just want to run an image repo, I deploy glance.  If my SAN guy can't get 
storage provisioned fast enough, I deploy the OS block storage service (once we 
have it).  And if I want a full cloud suite, I deploy all the services.  They 
are loosely coupled and (ideally) independent building blocks.  Whether one 
chooses to front the different service endpoints with a proxy to unify them or 
have separate service endpoints is purely a deployment decision.  Either way, 
there are no competing OS APIs.  Support for 3rd-party APIs (e.g. EC2) is 
secondary IMO, and to some degree, detrimental.  Standards are defined in large 
part by ubiquity.  We want OS to become ubiquitous and we want the OS APIs to 
become de facto standards.  Supporting additional APIs (or even variations of 
the same API, like AMQP per the other thread) doesn't help us here.  I would 
love to see the community rally behind a per-service standard OS REST API that 
we can own and drive.

To that end, the goal as I see it is to launch canonical OpenStack Compute 
(nova) and Image (glance) APIs with Cactus.  In Diablo, we would then work to 
introduce separate network and block storage services with REST APIs as well.  
All APIs would be independently versioned and stable.  I'm ALL for per-language 
OpenStack bindings that implement support for the entire suite of services.
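Erik's point about independently versioned APIs can be sketched concretely. The snippet below is a hypothetical illustration (the discovery-document shape, the status values, and the version strings are assumptions made for this sketch, not anything specified in the thread): a client reads the versions a service advertises at its root endpoint and picks the newest one both sides support.

```python
# Hypothetical sketch: select the newest mutually supported API version
# from a service's (assumed) version-discovery document.

def pick_version(discovery_doc, client_supported):
    """Return the newest CURRENT/SUPPORTED version both sides speak."""
    server_versions = {
        v["id"] for v in discovery_doc["versions"]
        if v["status"] in ("CURRENT", "SUPPORTED")
    }
    common = server_versions & set(client_supported)
    if not common:
        raise RuntimeError("no mutually supported API version")
    # Compare numerically so that "v1.10" sorts after "v1.9".
    return max(common, key=lambda v: tuple(int(p) for p in v[1:].split(".")))

# Canned response standing in for GET / on a glance-like service.
doc = {"versions": [
    {"id": "v1.0", "status": "SUPPORTED"},
    {"id": "v1.1", "status": "CURRENT"},
    {"id": "v2.0", "status": "EXPERIMENTAL"},
]}

print(pick_version(doc, ["v1.0", "v1.1"]))  # -> v1.1
```

A consumer that only understands v1.0 would keep getting v1.0 from the same call, which is the loose coupling Erik is after.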

Re: extensions, it's actually the technical aspects that are driving it.  There 
is a tension between standards and innovation that needs to be resolved.  In 
addition, we need to be able to support niche functionality (e.g. Rackspace may 
want to support API operations related to managed services) without imposing it 
on everyone.  These problems are not new.  We've seen the exact same thing with 
OpenGL, which has a very successful extension model that has solved this.  
Jorge studied this while doing his PhD and has designed extensions with that in 
mind.  He has a presentation on extensions here 
<http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf> 
if you haven't seen it.  I think extensions are critically important and would 
encourage dialog amongst the community to come to a consensus on this.  Per my 
points above, I would prefer to avoid separate APIs for the same service.  
Let's see if we can get behind a per-service API that becomes THE de facto 
standard way for automating that service.

Erik
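The OpenGL-style extension model Erik references can likewise be sketched. Everything here is illustrative (the `/extensions` resource, the `alias` field, and the `rax-managed` vendor alias are assumptions for this sketch): a client discovers which extensions a particular deployment offers and only uses niche functionality it actually finds advertised.

```python
# Hypothetical sketch: feature-detect a vendor extension before using it,
# so deployments without the extension are unaffected.

def has_extension(extensions_doc, alias):
    """True if the deployment advertises an extension with this alias."""
    return any(e["alias"] == alias for e in extensions_doc["extensions"])

# Canned response standing in for GET /extensions.
doc = {"extensions": [
    {"alias": "rax-managed", "name": "Managed Services",
     "description": "Operations specific to managed accounts."},
]}

if has_extension(doc, "rax-managed"):
    # Safe to call the vendor-specific operations here; clients talking
    # to a deployment without the extension simply skip this branch.
    pass
```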







Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams

On Feb 18, 2011, at 11:53 AM, Jay Pipes wrote:

I think your points are all valid, Jorge. Not disagreeing with them;
more just outlining that while saying all services must *publish* a
REST interface, services can listen and respond on more than one
protocol.

I'm glad we're *mostly* in agreement :-)


So, I agree with you basically, just pointing out that while having a
REST interface is a good standard, it shouldn't be the *only* way that
services can communicate with each other :)


Again, I'm not saying it's the *only* way services should communicate with one 
another, especially if there exist protocols that make no sense to replicate in 
REST.  That said, I don't like the idea of having to maintain different 
protocols otherwise.  I'm not convinced that doing so is necessary; it muddies 
the water on what exactly the true service interface is, it keeps us from 
consuming the same dog food we're selling, and I'm afraid it may lead to added 
work for service teams.



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jay Pipes
I think your points are all valid, Jorge. Not disagreeing with them;
more just outlining that while saying all services must *publish* a
REST interface, services can listen and respond on more than one
protocol.

So, I agree with you basically, just pointing out that while having a
REST interface is a good standard, it shouldn't be the *only* way that
services can communicate with each other :)

-jay

On Fri, Feb 18, 2011 at 12:46 PM, Jorge Williams wrote:
>
> On Feb 18, 2011, at 10:27 AM, Jay Pipes wrote:
>
>> Hi Jorge! Thanks for the detailed response. Comments inline. :)
>>
>> On Fri, Feb 18, 2011 at 11:02 AM, Jorge Williams wrote:
>>> There are lots of advantages:
>>>
>>> 1) It allows services to be more autonomous, and gives us clearly defined 
>>> service boundaries. Each service can be treated as a black box.
>>
>> Agreed.
>>
>>> 2) All service communication becomes versioned, not just the public API but 
>>> also the admin API.  This means looser coupling which can help us work in 
>>> parallel.  So glance can be on 1.2 of their API, but another API that 
>>> depends on it (say compute) can continue to consume 1.1 until they're ready 
>>> to switch -- we don't have the bottlenecks of everyone having to update 
>>> everything together.
>>
>> Agreed.
>>
>>> 3) Also because things are loosely coupled and there are clearly defined 
>>> boundaries  it positions us to have many other services (LBaaS, FWaaS, 
>>> DBaaS, DNSaaS, etc).
>>
>> Agreed.
>>
>>> 4) It also becomes easier to deploy a subset of functionality ( you want 
>>> compute and image, but not block).
>>
>> Agreed.
>>
>>> 5) Interested developers can get involved in only the services that they 
>>> care about without worrying about other services.
>>
>> Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
>> the communication protocol between internal Nova services (network,
>> compute, and volume) right now. Developers can currently get involved
>> in the services they want to without messing with the other services.
>>
>
> I'm saying we can even package/deploy/run each service separately.  I 
> suppose you can also do this with AMQP, I just see fewer roadblocks to doing 
> this with HTTP.  So for example, AMQP requires a message bus which is 
> external to the service.  That affects autonomy.  With an HTTP/REST approach, 
> I can simply talk to the service directly.  I suppose things could be a 
> little different if we had a queuing service.  But even then, do we really 
> want all of our messages to go to the queue service first?
>
>
>>> 6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
>>> integration as it is, it makes sense for us to standardize on it.
>>
>> Unless I'm mistaken, we're not talking about APIs. We're talking about
>> protocols. AMQP vs. HTTP.
>
> What we call APIs are really protocols, so the OpenStack compute API is 
> really a protocol for talking to compute.  Keep in mind we intimately use 
> HTTP in our RESTful protocol: content negotiation, headers, status codes, 
> etc.  All of these are part of the API.
>
> Another thing I should note is that I see benefits in keeping the interface 
> to a service the same regardless of whether it's a user or another service 
> that's making the call.  This allows us to eat our own dog food.  That is, 
> there's no separate protocol for developers and another for clients.  Sure 
> there may be an Admin API, but the difference between the Admin API and the 
> Public API is really defined in terms of security policies by the operator.
>
>>
>>> We are certainly changing the way we are doing things, but I don't really 
>>> think we are throwing away a lot of functionality.  As PVO mentioned, 
>>> things should work very similar to the way they are working now.  You still 
>>> have compute workers, you may still have an internal queue, the only 
>>> difference is that cross-service communication is now happening by issuing 
>>> REST calls.
>>
>> I guess I'm on the fence with this one. I agree that:
>>
>> * Having clear boundaries between services is A Good Thing
>> * Having versioning in the interfaces between services is A Good Thing
>>
>> I'm just not convinced that services shouldn't be able to communicate
>> on different protocols. REST over HTTP is a fine interface. Serialized
>> messages over AMQP is similarly a fine interface.
>
> I don't think we're saying you can't use any protocol besides HTTP.  If it 
> makes sense to use something like AMQP **within  your service boundary** use 
> it.  One of the nice things about services being autonomous and loosely 
> coupled is that you have a lot of freedom within your black box.  So if you 
> want to use AMQP to talk to your compute nodes within your boundary go for it.
>
> I do think we need to standardize communication *between services* and 
> standardizing on REST is not a bad choice.  We learned this lesson the hard 
> way at Rackspace.  Today we have services that use REST, RMI, XML-RPC
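The cross-service REST calls Jorge describes can be sketched as follows. The endpoint address, port, and payload shape are made up for illustration; the point is that a compute-like service pins the glance API version it consumes (1.1, per the thread's example) independently of whatever glance currently publishes, and no HTTP round trip is actually performed here.

```python
# Hypothetical sketch: compute fetching image metadata from glance over a
# versioned HTTP URL instead of a shared message bus.
import json

GLANCE_ENDPOINT = "http://glance.internal:9292"  # made-up internal address
CONSUMED_VERSION = "v1.1"                        # pinned by the consumer

def image_url(image_id):
    """The versioned URL compute would GET; no network I/O happens here."""
    return f"{GLANCE_ENDPOINT}/{CONSUMED_VERSION}/images/{image_id}"

def parse_image(body):
    """Parse the (assumed) JSON metadata a glance-like service returns."""
    image = json.loads(body)["image"]
    return image["id"], image["status"]

print(image_url("42"))  # -> http://glance.internal:9292/v1.1/images/42
# A canned response stands in for the real HTTP round trip:
print(parse_image('{"image": {"id": "42", "status": "ACTIVE"}}'))
```

Because the version lives in the URL, glance can ship v1.2 without compute noticing until compute chooses to move.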

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Justin Santa Barbara
> How is the 1.1 api proposal breaking this?

Because if we launch an OpenStack API, the expectation is that this will be
the OpenStack API :-)

If we support a third-party API (CloudServers or EC2), then people will
continue to use their existing wrappers (e.g. jclouds).  Once there's an
OpenStack API, then end-users will want to find a library for that, and we
don't want that to be a poor experience.  To maintain a good experience, we
either can't break the API, or we need to write and maintain a lot of
proxying code to maintain compatibility.  We know we're not ready for the
first commitment, and I don't think we get enough to justify the second.

> I think the proxy would make sense if you wanted to have a single api. Not
all service providers will but I see this as entirely optional, not required
to use the services.

But then we have two OpenStack APIs?  Our ultimate end users don't use the
API, they use a wrapper library.  They want a stable library that works and
is kept up to date with recent changes and don't care about what's going on
under the covers.  Wrapper library authors want an API that is (1) one API
and (2) stable with reasonable evolution, otherwise they'll abandon their
wrapper or not update it.

> The extensions mechanism is the biggest change, iirc.

I'm not a big fan of the extensions idea, because it feels more like a
reflection of a management goal rather than a technical decision
("OpenStack is open to extensions").  Supporting separate APIs feels like a
better way to do that.  I'm very open to being corrected here, but I think we
need to see code that wants to use the extension API and isn't better done
as a separate API.  Right now I haven't seen any patches, and that makes me
uneasy.





On Fri, Feb 18, 2011 at 9:29 AM, Paul Voccio wrote:

>  The spec for 1.0 and 1.1 are pretty close. The extensions mechanism is
> the biggest change, iirc.
>
>  I think the proxy would make sense if you wanted to have a single api.
> Not all service providers will but I see this as entirely optional, not
> required to use the services.
>
>  The push to get a completed compute API is driven by the desire to move
> away from the EC2 API to something that we can guide, extend, and vote on as
> a community.  The sooner we do, the better.
>
>  How is the 1.1 api proposal breaking this?
>
>   From: Justin Santa Barbara 
> Date: Fri, 18 Feb 2011 09:10:19 -0800
> To: Paul Voccio 
> Cc: Jay Pipes , "openstack@lists.launchpad.net" <
> openstack@lists.launchpad.net>
>
> Subject: Re: [Openstack] OpenStack Compute API 1.1
>
>  Jay: The AMQP->REST was the re-architecting I was referring to, which
> would not be customer-facing (other than likely introducing new bugs.)
>  Spinning off the services, if this is visible at the API level, is much
> more concerning to me.
>
>  So Paul, I think the proxy is good because it acknowledges the importance
> of keeping a consistent API.  But - if our API isn't finalized - why push it
> out at all, particularly if we're then going to have the overhead of
> maintaining another translation layer?  For Cactus, let's just support EC2
> and/or CloudServers 1.0 API compatibility (again a translation layer, but
> one we probably have to support anyway.)  Then we can design the right
> OpenStack API at our leisure and meet all of our goals: a stable Cactus and
> stable APIs.  If anyone ends up coding to a Cactus OpenStack API, we
> shouldn't have them become second-class citizens 3 months later.
>
> Justin
>
>
>
>
>
> On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio wrote:
>
>> Jay,
>>
>> I understand Justin's concern if we move /network and /images and /volume
>> to their own endpoints then it would be a change to the customer. I think
>> this could be solved by putting a proxy in front of each endpoint and
>> routing back to the appropriate service endpoint.
>>
>> I added another image on the wiki page to describe what I'm trying to say.
>> http://wiki.openstack.org/api_transition
>>
>>  I think it might not be as bad of a transition since the compute worker
>> would
>> receive a request for a new compute node then it would proxy over to the
>> admin or public api of the network or volume node to request information.
>> It would work very similar to how the queues work now.
>>
>> pvo
>>
>> On 2/17/11 8:33 PM, "Jay Pipes"  wrote:
>>
>> >Sorry, I don't view the proposed changes from AMQP to REST as being
>> >"customer facing API changes". Could you explain? These are internal
>> >interfaces, no?
>> >
>> >-jay
>> >
>> >On Thu, Feb 17, 

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams

On Feb 18, 2011, at 10:27 AM, Jay Pipes wrote:

> Hi Jorge! Thanks for the detailed response. Comments inline. :)
> 
> On Fri, Feb 18, 2011 at 11:02 AM, Jorge Williams
>  wrote:
>> There are lots of advantages:
>> 
>> 1) It allows services to be more autonomous, and gives us clearly defined 
>> service boundaries. Each service can be treated as a black box.
> 
> Agreed.
> 
>> 2) All service communication becomes versioned, not just the public API but 
>> also the admin API.  This means looser coupling which can help us work in 
>> parallel.  So glance can be on 1.2 of their API, but another API that 
>> depends on it (say compute) can continue to consume 1.1 until they're ready 
>> to switch -- we don't have the bottlenecks of everyone having to update 
>> everything together.
> 
> Agreed.
> 
>> 3) Also because things are loosely coupled and there are clearly defined 
>> boundaries  it positions us to have many other services (LBaaS, FWaaS, 
>> DBaaS, DNSaaS, etc).
> 
> Agreed.
> 
>> 4) It also becomes easier to deploy a subset of functionality ( you want 
>> compute and image, but not block).
> 
> Agreed.
> 
>> 5) Interested developers can get involved in only the services that they 
>> care about without worrying about other services.
> 
> Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
> the communication protocol between internal Nova services (network,
> compute, and volume) right now. Developers can currently get involved
> in the services they want to without messing with the other services.
> 

I'm saying we can even package/deploy/run each service separately.  I suppose 
you can also do this with AMQP, I just see fewer roadblocks to doing this with 
HTTP.  So for example, AMQP requires a message bus which is external to the 
service.  That affects autonomy.  With an HTTP/REST approach, I can simply talk 
to the service directly. I suppose things could be a little different if we had a 
queuing service.  But even then, do we really want all of our messages to go to 
the queue service first? 
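To make the contrast concrete, here is a minimal, self-contained sketch of that direct-to-service model (the image-service endpoint and payload are hypothetical, Python standard library only): a stub service exposes a versioned HTTP endpoint, and a consumer talks to it directly, with no message bus in between.

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ImageAPI(BaseHTTPRequestHandler):
    """Stub image service exposing a versioned HTTP endpoint."""

    def do_GET(self):
        if self.path == "/v1.1/images":
            body = json.dumps({"images": [{"id": 42, "name": "ubuntu"}]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Each service runs on its own; there is no external message bus to stand up.
server = HTTPServer(("127.0.0.1", 0), ImageAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer (an end user or another service) talks to the service directly.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/v1.1/images")
resp = conn.getresponse()
images = json.loads(resp.read())["images"]
server.shutdown()
```

The same call path serves both cross-service traffic and end users, which is the autonomy argument in a nutshell.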


>> 6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
>> integration as it is, it makes sense for us to standardize on it.
> 
> Unless I'm mistaken, we're not talking about APIs. We're talking about
> protocols. AMQP vs. HTTP.

What we call APIs are really protocols, so the OpenStack compute API is really 
a protocol for talking to compute.  Keep in mind we intimately use HTTP in our 
RESTful protocol: content negotiation, headers, status codes, etc. All of 
these are part of the API.

Another thing I should note is that I see benefits in keeping the interface 
to a service the same regardless of whether it's a user or another service that's 
making the call.  This allows us to eat our own dog food. That is, there's no 
separate protocol for developers versus clients.  Sure, there may be 
an Admin API, but the difference between the Admin API and the Public API is 
really defined in terms of security policies by the operator.
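As a toy illustration of that last point (the action names, roles, and policy format here are invented, not OpenStack's actual policy machinery): the admin/public split can be a pure policy decision over one shared interface.

```python
# Hypothetical policy table: which roles may perform which actions.
POLICY = {
    "compute:reboot":  {"user", "admin"},   # exposed "publicly"
    "compute:migrate": {"admin"},           # effectively the "Admin API"
}

def check(action, role):
    """Return True if the operator's policy lets `role` perform `action`."""
    return role in POLICY.get(action, set())

# Same interface for every caller; only the operator's policy differs.
user_can_reboot = check("compute:reboot", "user")
user_can_migrate = check("compute:migrate", "user")
```

Swapping the policy table changes who sees "admin" functionality without introducing a second protocol.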

> 
>> We are certainly changing the way we are doing things, but I don't really 
>> think we are throwing away a lot of functionality.  As PVO mentioned, things 
>> should work very similar to the way they are working now.  You still have 
>> compute workers, you may still have an internal queue, the only difference 
>> is that cross-service communication is now happening by issuing REST calls.
> 
> I guess I'm on the fence with this one. I agree that:
> 
> * Having clear boundaries between services is A Good Thing
> * Having versioning in the interfaces between services is A Good Thing
> 
> I'm just not convinced that services shouldn't be able to communicate
> on different protocols. REST over HTTP is a fine interface. Serialized
> messages over AMQP is similarly a fine interface.

I don't think we're saying you can't use any protocol besides HTTP.  If it 
makes sense to use something like AMQP **within your service boundary**, use 
it.  One of the nice things about services being autonomous and loosely coupled 
is that you have a lot of freedom within your black box.  So if you want to use 
AMQP to talk to your compute nodes within your boundary, go for it.

I do think we need to standardize communication *between services* and 
standardizing on REST is not a bad choice.  We learned this lesson the hard way 
at Rackspace.  Today we have services that use REST, RMI, XML-RPC, and SOAP.  
Because there's a lot of diversity in the protocols, we have services that 
expose multiple protocols to different clients (say RMI and SOAP), and often a 
feature will make it to one protocol but never gets exposed in the other. 
Having to support multiple protocols adds a lot of extra work for the service 
team and for teams like the control panel team that need to integrate with 
all sorts of services in all sorts of ways.  We've come to the conclusion that 
supporting a single protocol is a good thing, and that HTTP/REST is not a bad 
choice.
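A small sketch of that boundary (the handler name, status strings, and payloads are made up for illustration): REST at the service edge, a private queue for the compute workers inside it.

```python
import queue
import threading

# Private work queue: an implementation detail inside the service boundary.
work = queue.Queue()
results = {}

def compute_worker():
    # Stand-in for a compute worker consuming the internal queue.
    while True:
        req = work.get()
        if req is None:
            break
        results[req["id"]] = "server-%d-BUILT" % req["id"]
        work.task_done()

threading.Thread(target=compute_worker, daemon=True).start()

def create_server(server_id):
    """What a REST handler at the service edge would do: accept and enqueue."""
    work.put({"id": server_id})
    return 202  # Accepted: the build completes asynchronously

status = create_server(1)
work.join()  # wait for the worker to drain the queue
```

Swapping `queue.Queue` for AMQP changes nothing outside the boundary; callers only ever see the REST interface and its 202.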

No

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
The specs for 1.0 and 1.1 are pretty close. The extensions mechanism is the 
biggest change, iirc.

I think the proxy would make sense if you wanted to have a single API. Not all 
service providers will, but I see this as entirely optional, not required to use 
the services.

The push to get a completed compute API is the desire to move away from the EC2 
API to something that we can guide, extend and vote on as a community. The 
sooner we do, the better.

How is the 1.1 api proposal breaking this?

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Fri, 18 Feb 2011 09:10:19 -0800
To: Paul Voccio <paul.voc...@rackspace.com>
Cc: Jay Pipes <jaypi...@gmail.com>, "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Jay: The AMQP->REST was the re-architecting I was referring to, which would not 
be customer-facing (other than likely introducing new bugs.)  Spinning off the 
services, if this is visible at the API level, is much more concerning to me.

So Paul, I think the proxy is good because it acknowledges the importance of 
keeping a consistent API.  But - if our API isn't finalized - why push it out 
at all, particularly if we're then going to have the overhead of maintaining 
another translation layer?  For Cactus, let's just support EC2 and/or 
CloudServers 1.0 API compatibility (again a translation layer, but one we 
probably have to support anyway.)  Then we can design the right OpenStack API 
at our leisure and meet all of our goals: a stable Cactus and stable APIs.  If 
anyone ends up coding to a Cactus OpenStack API, we shouldn't have them become 
second-class citizens 3 months later.

Justin





On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio <paul.voc...@rackspace.com> wrote:
Jay,

I understand Justin's concern if we move /network and /images and /volume
to their own endpoints then it would be a change to the customer. I think
this could be solved by putting a proxy in front of each endpoint and
routing back to the appropriate service endpoint.

I added another image on the wiki page to describe what I'm trying to say.
http://wiki.openstack.org/api_transition

I think it might not be as bad of a transition since the compute worker would
receive a request for a new compute node then it would proxy over to the
admin or public api of the network or volume node to request information.
It would work very similar to how the queues work now.

pvo

On 2/17/11 8:33 PM, "Jay Pipes" <jaypi...@gmail.com> wrote:

>Sorry, I don't view the proposed changes from AMQP to REST as being
>"customer facing API changes". Could you explain? These are internal
>interfaces, no?
>
>-jay
>
>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
><jus...@fathomdb.com> wrote:
>> An API is for life, not just for Cactus.
>> I agree that stability is important.  I don't see how we can claim to
>> deliver 'stability' when the plan is then immediately to destabilize
>> everything with a very disruptive change soon after, including customer
>> facing API changes and massive internal re-architecting.
>>
>>
>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes 
>> <jaypi...@gmail.com> wrote:
>>>
>>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>> <jus...@fathomdb.com> wrote:
>>> > Pulling volumes & images out into separate services (and moving from
>>> > AMQP to
>>> > REST) sounds like a huge breaking change, so if that is indeed the
>>>plan,
>>> > let's do that asap (i.e. Cactus).
>>>
>>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>>> is supposed to be about stability and the only features going into
>>> Cactus should be to achieve API parity of the OpenStack Compute API
>>> with the Rackspace Cloud Servers API. Doing such a huge change like
>>> moving communication from AMQP to HTTP for volume and network would be
>>> a change that would likely undermine the stability of the Cactus
>>> release severely.
>>>
>>> -jay
>>
>>



Confidentiality Notice: This e-mail message (including any attached or
embedded documents) is intended for the exclusive and confidential use of the
individual or entity to which this message is addressed, and unless otherwise
expressly indicated, is confidential and privileged information of Rackspace.
Any dissemination, distribution or copying of the enclosed material is 
prohibited.
If you receive this transmission in error, please notify us immediately by 
e-mail
at ab...@rackspace.com<mailto:ab...@racks

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Justin Santa Barbara
Jay: The AMQP->REST was the re-architecting I was referring to, which would
not be customer-facing (other than likely introducing new bugs.)  Spinning
off the services, if this is visible at the API level, is much more
concerning to me.

So Paul, I think the proxy is good because it acknowledges the importance of
keeping a consistent API.  But - if our API isn't finalized - why push it
out at all, particularly if we're then going to have the overhead of
maintaining another translation layer?  For Cactus, let's just support EC2
and/or CloudServers 1.0 API compatibility (again a translation layer, but
one we probably have to support anyway.)  Then we can design the right
OpenStack API at our leisure and meet all of our goals: a stable Cactus and
stable APIs.  If anyone ends up coding to a Cactus OpenStack API, we
shouldn't have them become second-class citizens 3 months later.

Justin





On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio wrote:

> Jay,
>
> I understand Justin's concern if we move /network and /images and /volume
> to their own endpoints then it would be a change to the customer. I think
> this could be solved by putting a proxy in front of each endpoint and
> routing back to the appropriate service endpoint.
>
> I added another image on the wiki page to describe what I'm trying to say.
> http://wiki.openstack.org/api_transition
>
> I think it might not be as bad of a transition since the compute worker would
> receive a request for a new compute node then it would proxy over to the
> admin or public api of the network or volume node to request information.
> It would work very similar to how the queues work now.
>
> pvo
>
> On 2/17/11 8:33 PM, "Jay Pipes"  wrote:
>
> >Sorry, I don't view the proposed changes from AMQP to REST as being
> >"customer facing API changes". Could you explain? These are internal
> >interfaces, no?
> >
> >-jay
> >
> >On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
> > wrote:
> >> An API is for life, not just for Cactus.
> >> I agree that stability is important.  I don't see how we can claim to
> >> deliver 'stability' when the plan is then immediately to destabilize
> >> everything with a very disruptive change soon after, including customer
> >> facing API changes and massive internal re-architecting.
> >>
> >>
> >> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
> >>>
> >>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
> >>>  wrote:
> >>> > Pulling volumes & images out into separate services (and moving from
> >>> > AMQP to
> >>> > REST) sounds like a huge breaking change, so if that is indeed the
> >>>plan,
> >>> > let's do that asap (i.e. Cactus).
> >>>
> >>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
> >>> is supposed to be about stability and the only features going into
> >>> Cactus should be to achieve API parity of the OpenStack Compute API
> >>> with the Rackspace Cloud Servers API. Doing such a huge change like
> >>> moving communication from AMQP to HTTP for volume and network would be
> >>> a change that would likely undermine the stability of the Cactus
> >>> release severely.
> >>>
> >>> -jay
> >>
> >>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
More inline. I trimmed your agrees.

On 2/18/11 10:27 AM, "Jay Pipes"  wrote:

>
>> 5) Interested developers can get involved in only the services that
>>they care about without worrying about other services.
>
>Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
>the communication protocol between internal Nova services (network,
>compute, and volume) right now. Developers can currently get involved
>in the services they want to without messing with the other services.

I think it means other services will have APIs that sit on a different
endpoint than compute. To talk to one, you use the HTTP interface instead of a
queue message. 


>
>> 6) We already have 3 APIs (nova, swift, glance), we need to do this
>>kind of integration as it is, it makes sense for us to standardize on it.
>
>Unless I'm mistaken, we're not talking about APIs. We're talking about
>protocols. AMQP vs. HTTP.

It's a bit of both. To break out into separate APIs we wouldn't use AMQP to
communicate between services.

>
>> We are certainly changing the way we are doing things, but I don't
>>really think we are throwing away a lot of functionality.  As PVO
>>mentioned, things should work very similar to the way they are working
>>now.  You still have compute workers, you may still have an internal
>>queue, the only difference is that cross-service communication is now
>>happening by issuing REST calls.
>
>I guess I'm on the fence with this one. I agree that:
>
>* Having clear boundaries between services is A Good Thing
>* Having versioning in the interfaces between services is A Good Thing
>
>I'm just not convinced that services shouldn't be able to communicate
>on different protocols. REST over HTTP is a fine interface. Serialized
>messages over AMQP is similarly a fine interface. The standardization
>should occur at the *message* level, not the *protocol* level. REST
>over HTTP, combined with the Atom Publishing Protocol, has those
>messages already defined. Having standard message definitions that are
>sent via AMQP seems to me to be the "missing link" in the
>standardization process.

Wouldn't you be designing the same thing over two interfaces then? You'd
have to standardize on both AMQP and HTTP?

>
>Just some thoughts,
>jay




Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jay Pipes
Hi Jorge! Thanks for the detailed response. Comments inline. :)

On Fri, Feb 18, 2011 at 11:02 AM, Jorge Williams
 wrote:
> There are lots of advantages:
>
> 1) It allows services to be more autonomous, and gives us clearly defined 
> service boundaries. Each service can be treated as a black box.

Agreed.

> 2) All service communication becomes versioned, not just the public API but 
> also the admin API.  This means looser coupling which can help us work in 
> parallel.  So glance can be on 1.2 of their API, but another API that depends 
> on it (say compute) can continue to consume 1.1 until they're ready to switch 
> -- we don't have the bottlenecks of everyone having to update everything 
> together.

Agreed.

> 3) Also because things are loosely coupled and there are clearly defined 
> boundaries  it positions us to have many other services (LBaaS, FWaaS, DBaaS, 
> DNSaaS, etc).

Agreed.

> 4) It also becomes easier to deploy a subset of functionality ( you want 
> compute and image, but not block).

Agreed.

> 5) Interested developers can get involved in only the services that they care 
> about without worrying about other services.

Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
the communication protocol between internal Nova services (network,
compute, and volume) right now. Developers can currently get involved
in the services they want to without messing with the other services.

> 6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
> integration as it is, it makes sense for us to standardize on it.

Unless I'm mistaken, we're not talking about APIs. We're talking about
protocols. AMQP vs. HTTP.

> We are certainly changing the way we are doing things, but I don't really 
> think we are throwing away a lot of functionality.  As PVO mentioned, things 
> should work very similar to the way they are working now.  You still have 
> compute workers, you may still have an internal queue, the only difference is 
> that cross-service communication is now happening by issuing REST calls.

I guess I'm on the fence with this one. I agree that:

* Having clear boundaries between services is A Good Thing
* Having versioning in the interfaces between services is A Good Thing

I'm just not convinced that services shouldn't be able to communicate
on different protocols. REST over HTTP is a fine interface. Serialized
messages over AMQP is similarly a fine interface. The standardization
should occur at the *message* level, not the *protocol* level. REST
over HTTP, combined with the Atom Publishing Protocol, has those
messages already defined. Having standard message definitions that are
sent via AMQP seems to me to be the "missing link" in the
standardization process.
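A sketch of that message-level standardization (the field names and event string are hypothetical): define the message once, then let either transport carry the same bytes.

```python
import json

def make_message(event, payload, version="1.1"):
    """One standard message definition, independent of transport."""
    return json.dumps(
        {"version": version, "event": event, "payload": payload},
        sort_keys=True,
    ).encode()

msg = make_message("compute.instance.create", {"instance_id": 42})

# The same bytes could travel either way:
#   HTTP: msg is the body of a POST to the target service
#   AMQP: msg is the body handed to the broker for the target queue
decoded = json.loads(msg)
```

With the message pinned down, the HTTP-vs-AMQP question becomes a per-deployment transport choice rather than an interface change.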

Just some thoughts,
jay



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Josh Kleinpeter

On Feb 18, 2011, at 9:34 AM, Jay Pipes wrote:

> OK, fair enough.
> 
> Can I ask what the impetus for moving from AMQP to REST for all
> internal APIs is? Seems to me we will be throwing away a lot of
> functionality for the benefit of cross-WAN REST communication?
> 
> -jay


Not to mention building a queueing service whilst moving from AMQP to REST. 
Shouldn't we eat our own dog food? Mmm...kibbles.

> 
> On Fri, Feb 18, 2011 at 9:31 AM, Paul Voccio  
> wrote:
>> Jay,
>> 
>> I understand Justin's concern if we move /network and /images and /volume
>> to their own endpoints then it would be a change to the customer. I think
>> this could be solved by putting a proxy in front of each endpoint and
>> routing back to the appropriate service endpoint.
>> 
>> I added another image on the wiki page to describe what I'm trying to say.
>> http://wiki.openstack.org/api_transition
>> 
>> I think it might not be as bad of a transition since the compute worker would
>> receive a request for a new compute node then it would proxy over to the
>> admin or public api of the network or volume node to request information.
>> It would work very similar to how the queues work now.
>> 
>> pvo
>> 
>> On 2/17/11 8:33 PM, "Jay Pipes"  wrote:
>> 
>>> Sorry, I don't view the proposed changes from AMQP to REST as being
>>> "customer facing API changes". Could you explain? These are internal
>>> interfaces, no?
>>> 
>>> -jay
>>> 
>>> On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
>>>  wrote:
 An API is for life, not just for Cactus.
 I agree that stability is important.  I don't see how we can claim to
 deliver 'stability' when the plan is then immediately to destabilize
 everything with a very disruptive change soon after, including customer
 facing API changes and massive internal re-architecting.
 
 
 On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
> 
> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>  wrote:
>> Pulling volumes & images out into separate services (and moving from
>> AMQP to
>> REST) sounds like a huge breaking change, so if that is indeed the
> plan,
>> let's do that asap (i.e. Cactus).
> 
> Sorry, I have to disagree with you here, Justin :)  The Cactus release
> is supposed to be about stability and the only features going into
> Cactus should be to achieve API parity of the OpenStack Compute API
> with the Rackspace Cloud Servers API. Doing such a huge change like
> moving communication from AMQP to HTTP for volume and network would be
> a change that would likely undermine the stability of the Cactus
> release severely.
> 
> -jay
 
 
>> 
>> 
>> 
> 




Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams
There are lots of advantages:

1) It allows services to be more autonomous, and gives us clearly defined 
service boundaries. Each service can be treated as a black box.
2) All service communication becomes versioned, not just the public API but 
also the admin API.  This means looser coupling which can help us work in 
parallel.  So glance can be on 1.2 of their API, but another API that depends 
on it (say compute) can continue to consume 1.1 until they're ready to switch 
-- we don't have the bottlenecks of everyone having to update everything 
together.
3) Also, because things are loosely coupled and there are clearly defined 
boundaries, it positions us to have many other services (LBaaS, FWaaS, DBaaS, 
DNSaaS, etc.).
4) It also becomes easier to deploy a subset of functionality (you want 
compute and image, but not block).
5) Interested developers can get involved in only the services that they care 
about without worrying about other services.
6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
integration as it is, it makes sense for us to standardize on it.

We are certainly changing the way we are doing things, but I don't really think 
we are throwing away a lot of functionality.  As PVO mentioned, things should 
work very similar to the way they are working now.  You still have compute 
workers, you may still have an internal queue, the only difference is that 
cross-service communication is now happening by issuing REST calls.
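Point (2) can be sketched like this (the handlers, routes, and payloads are hypothetical): a service keeps serving its old versioned interface while a newer one runs alongside it, so dependent services upgrade on their own schedule.

```python
def list_images_v11():
    return {"images": [{"id": 42, "name": "ubuntu"}]}

def list_images_v12():
    # v1.2 adds a field; pinned v1.1 consumers are unaffected.
    return {"images": [{"id": 42, "name": "ubuntu", "status": "ACTIVE"}]}

# Both versions stay mounted until every consumer has moved over.
ROUTES = {
    ("GET", "/v1.1/images"): list_images_v11,
    ("GET", "/v1.2/images"): list_images_v12,
}

def dispatch(method, path):
    handler = ROUTES.get((method, path))
    return (200, handler()) if handler else (404, None)

# Compute can keep consuming 1.1 while glance already serves 1.2.
status, body = dispatch("GET", "/v1.1/images")
```

The version lives in the URL, so nothing has to coordinate a flag-day upgrade across services.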

-jOrGe W.


On Feb 18, 2011, at 9:34 AM, Jay Pipes wrote:

> OK, fair enough.
> 
> Can I ask what the impetus for moving from AMQP to REST for all
> internal APIs is? Seems to me we will be throwing away a lot of
> functionality for the benefit of cross-WAN REST communication?
> 
> -jay
> 
> On Fri, Feb 18, 2011 at 9:31 AM, Paul Voccio  
> wrote:
>> Jay,
>> 
>> I understand Justin's concern if we move /network and /images and /volume
>> to their own endpoints then it would be a change to the customer. I think
>> this could be solved by putting a proxy in front of each endpoint and
>> routing back to the appropriate service endpoint.
>> 
>> I added another image on the wiki page to describe what I'm trying to say.
>> http://wiki.openstack.org/api_transition
>> 
>> I think it might not be as bad of a transition since the compute worker would
>> receive a request for a new compute node then it would proxy over to the
>> admin or public api of the network or volume node to request information.
>> It would work very similar to how the queues work now.
>> 
>> pvo
>> 
>> On 2/17/11 8:33 PM, "Jay Pipes"  wrote:
>> 
>>> Sorry, I don't view the proposed changes from AMQP to REST as being
>>> "customer facing API changes". Could you explain? These are internal
>>> interfaces, no?
>>> 
>>> -jay
>>> 
>>> On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
>>>  wrote:
 An API is for life, not just for Cactus.
 I agree that stability is important.  I don't see how we can claim to
 deliver 'stability' when the plan is then immediately to destabilize
 everything with a very disruptive change soon after, including customer
 facing API changes and massive internal re-architecting.
 
 
 On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
> 
> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>  wrote:
>> Pulling volumes & images out into separate services (and moving from
>> AMQP to
>> REST) sounds like a huge breaking change, so if that is indeed the
> plan,
>> let's do that asap (i.e. Cactus).
> 
> Sorry, I have to disagree with you here, Justin :)  The Cactus release
> is supposed to be about stability and the only features going into
> Cactus should be to achieve API parity of the OpenStack Compute API
> with the Rackspace Cloud Servers API. Doing such a huge change like
> moving communication from AMQP to HTTP for volume and network would be
> a change that would likely undermine the stability of the Cactus
> release severely.
> 
> -jay
 
 
>> 
>> 
>> 
> 



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jay Pipes
OK, fair enough.

Can I ask what the impetus for moving from AMQP to REST for all
internal APIs is? Seems to me we will be throwing away a lot of
functionality for the benefit of cross-WAN REST communication?

-jay

On Fri, Feb 18, 2011 at 9:31 AM, Paul Voccio  wrote:
> Jay,
>
> I understand Justin's concern if we move /network and /images and /volume
> to their own endpoints then it would be a change to the customer. I think
> this could be solved by putting a proxy in front of each endpoint and
> routing back to the appropriate service endpoint.
>
> I added another image on the wiki page to describe what I'm trying to say.
> http://wiki.openstack.org/api_transition
>
> I think it might not be as bad of a transition since the compute worker would
> receive a request for a new compute node then it would proxy over to the
> admin or public api of the network or volume node to request information.
> It would work very similar to how the queues work now.
>
> pvo
>
> On 2/17/11 8:33 PM, "Jay Pipes"  wrote:
>
>>Sorry, I don't view the proposed changes from AMQP to REST as being
>>"customer facing API changes". Could you explain? These are internal
>>interfaces, no?
>>
>>-jay
>>
>>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
>> wrote:
>>> An API is for life, not just for Cactus.
>>> I agree that stability is important.  I don't see how we can claim to
>>> deliver 'stability' when the plan is then immediately to destabilize
>>> everything with a very disruptive change soon after, including customer
>>> facing API changes and massive internal re-architecting.
>>>
>>>
>>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:

 On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
  wrote:
 > Pulling volumes & images out into separate services (and moving from
 > AMQP to
 > REST) sounds like a huge breaking change, so if that is indeed the
plan,
 > let's do that asap (i.e. Cactus).

 Sorry, I have to disagree with you here, Justin :)  The Cactus release
 is supposed to be about stability and the only features going into
 Cactus should be to achieve API parity of the OpenStack Compute API
 with the Rackspace Cloud Servers API. Doing such a huge change like
 moving communication from AMQP to HTTP for volume and network would be
 a change that would likely undermine the stability of the Cactus
 release severely.

 -jay
>>>
>>>
>
>
>
> Confidentiality Notice: This e-mail message (including any attached or
> embedded documents) is intended for the exclusive and confidential use of the
> individual or entity to which this message is addressed, and unless otherwise
> expressly indicated, is confidential and privileged information of Rackspace.
> Any dissemination, distribution or copying of the enclosed material is 
> prohibited.
> If you receive this transmission in error, please notify us immediately by 
> e-mail
> at ab...@rackspace.com, and delete the original message.
> Your cooperation is appreciated.
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
Jay,

I understand Justin's concern if we move /network and /images and /volume
to their own endpoints then it would be a change to the customer. I think
this could be solved by putting a proxy in front of each endpoint and
routing back to the appropriate service endpoint.

I added another image on the wiki page to describe what I'm trying to say.
http://wiki.openstack.org/api_transition

I think this might not be as bad of a transition, since the compute worker
would receive a request for a new compute node and then proxy over to the
admin or public API of the network or volume node to request information.
It would work very similarly to how the queues work now.
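For illustration, the prefix-based routing described above could look roughly like the sketch below. The backend hostnames and ports are hypothetical placeholders, not actual OpenStack endpoints:

```python
# Hypothetical routing table a front-end proxy might use; the hostnames
# and ports are illustrative only, not real OpenStack service addresses.
BACKENDS = {
    "/servers": "http://compute.internal:8774",
    "/images": "http://glance.internal:9292",
    "/volumes": "http://volume.internal:8776",
    "/network": "http://network.internal:9696",
}

def route(path):
    """Return the backend URL the proxy would forward this request to."""
    for prefix, backend in BACKENDS.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError("no backend registered for %s" % path)
```

The point is that clients keep hitting one endpoint; only the proxy knows the services were split apart.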

pvo

On 2/17/11 8:33 PM, "Jay Pipes"  wrote:

>Sorry, I don't view the proposed changes from AMQP to REST as being
>"customer facing API changes". Could you explain? These are internal
>interfaces, no?
>
>-jay
>
>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
> wrote:
>> An API is for life, not just for Cactus.
>> I agree that stability is important.  I don't see how we can claim to
>> deliver 'stability' when the plan is then immediately to destabilize
>> everything with a very disruptive change soon after, including customer
>> facing API changes and massive internal re-architecting.
>>
>>
>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
>>>
>>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>>  wrote:
>>> > Pulling volumes & images out into separate services (and moving from
>>> > AMQP to
>>> > REST) sounds like a huge breaking change, so if that is indeed the
>>>plan,
>>> > let's do that asap (i.e. Cactus).
>>>
>>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>>> is supposed to be about stability and the only features going into
>>> Cactus should be to achieve API parity of the OpenStack Compute API
>>> with the Rackspace Cloud Servers API. Doing such a huge change like
>>> moving communication from AMQP to HTTP for volume and network would be
>>> a change that would likely undermine the stability of the Cactus
>>> release severely.
>>>
>>> -jay
>>
>>







Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Jay Pipes
Sorry, I don't view the proposed changes from AMQP to REST as being
"customer facing API changes". Could you explain? These are internal
interfaces, no?

-jay

On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
 wrote:
> An API is for life, not just for Cactus.
> I agree that stability is important.  I don't see how we can claim to
> deliver 'stability' when the plan is then immediately to destabilize
> everything with a very disruptive change soon after, including customer
> facing API changes and massive internal re-architecting.
>
>
> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
>>
>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>  wrote:
>> > Pulling volumes & images out into separate services (and moving from
>> > AMQP to
>> > REST) sounds like a huge breaking change, so if that is indeed the plan,
>> > let's do that asap (i.e. Cactus).
>>
>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>> is supposed to be about stability and the only features going into
>> Cactus should be to achieve API parity of the OpenStack Compute API
>> with the Rackspace Cloud Servers API. Doing such a huge change like
>> moving communication from AMQP to HTTP for volume and network would be
>> a change that would likely undermine the stability of the Cactus
>> release severely.
>>
>> -jay
>
>



Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Justin Santa Barbara
An API is for life, not just for Cactus.

I agree that stability is important.  I don't see how we can claim to
deliver 'stability' when the plan is then immediately to destabilize
everything with a very disruptive change soon after, including customer
facing API changes and massive internal re-architecting.



On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:

> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>  wrote:
> > Pulling volumes & images out into separate services (and moving from AMQP
> to
> > REST) sounds like a huge breaking change, so if that is indeed the plan,
> > let's do that asap (i.e. Cactus).
>
> Sorry, I have to disagree with you here, Justin :)  The Cactus release
> is supposed to be about stability and the only features going into
> Cactus should be to achieve API parity of the OpenStack Compute API
> with the Rackspace Cloud Servers API. Doing such a huge change like
> moving communication from AMQP to HTTP for volume and network would be
> a change that would likely undermine the stability of the Cactus
> release severely.
>
> -jay
>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Jay Pipes
On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
 wrote:
> Pulling volumes & images out into separate services (and moving from AMQP to
> REST) sounds like a huge breaking change, so if that is indeed the plan,
> let's do that asap (i.e. Cactus).

Sorry, I have to disagree with you here, Justin :)  The Cactus release
is supposed to be about stability and the only features going into
Cactus should be to achieve API parity of the OpenStack Compute API
with the Rackspace Cloud Servers API. A change as large as moving
communication from AMQP to HTTP for volume and network would likely
severely undermine the stability of the Cactus release.

-jay



Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Justin Santa Barbara
Pulling volumes & images out into separate services (and moving from AMQP to
REST) sounds like a huge breaking change, so if that is indeed the plan,
let's do that asap (i.e. Cactus).




On Thu, Feb 17, 2011 at 3:44 PM, Paul Voccio wrote:

>  I wanted to put out into the open where we think the evolution of the
> apis will go over the next few releases. This is by no means the only way to
> do this, but I thought it would be a start of conversation.
>
>  http://wiki.openstack.org/api_transition
>
>  I also wanted to clear up some confusion that I think came out of our
> email thread the other day. With the OpenStack 1.1 API proposal, this is
> really an OpenStack Compute 1.1 proposal. While volumes and images are
> currently in, I think longer term they would be pulled out. The network and
> volume services should be able to scale independently of each other.
>
> If you look at the diagram, the changes would entail moving from the AMQP
> protocol to an HTTP protocol that a worker would hit on the public/admin
> interfaces to accomplish the same work as before.
>
>  Let's keep the thread going.
>
>  Pvo
>
>
>   From: Justin Santa Barbara 
> Date: Tue, 15 Feb 2011 11:38:37 -0800
> To: Troy Toman 
>
> Cc: "openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] OpenStack Compute API 1.1
>
>  Sounds great - when the patch comes in we can discuss whether this should
> be an extension or whether scheduled snapshots / generic tasks have broader
> applicability across OpenStack (and thus would be better in the core API)
>
>  Is there a blueprint?
>
>
>
> On Tue, Feb 15, 2011 at 11:32 AM, Troy Toman wrote:
>
>>
>>  On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:
>>
>>
>>  OK - so it sounds like volumes are going to be in the core API (?) -
>> good.  Let's get that into the API spec.  It also sounds like extensions
>> (swift / glance?) are not going to be in the same API long-term.  So why do
>> we have the extensions mechanism?
>>
>>  Until we have an implemented use case (i.e. a patch) that uses the
>> extensions element, I don't see how we can spec it out or approve it.  So if
>> you want it in v1.1, we better find a team that wants to use it and write
>> code.  If there is such a patch, I stand corrected and let's get it reviewed
>> and merged.
>>
>>  I would actually expect that the majority of the use cases that we want
>> in the API but don't _want_ to go through core would be more simply
>> addressed by well-known metadata (e.g. RAID-5, multi-continent replication,
>> HPC, HIPAA).
>>
>>
>>  I don't agree that the lack of a coded patch means we can't discuss an
>> extension mechanism. But, if you want a specific use case, we have at least
>> one we intend to deliver. It may be more of a one-off than a general case
>> because it is required to give us a reasonable transition path from our
>> current codebase to Nova. But, it is not an imagined need.
>>
>>  In the Rackspace Cloud Servers 1.0 API, we support a concept of backup
>> schedules with a series of API calls to manage them. In drafting the
>> OpenStack compute API, this was something that didn't feel generally
>> applicable or useful in the core API. So, you don't see it as part of the
>> CORE API spec. That said, for transition purposes, we will need a way to
>> provide this capability to our customers when we move to Nova. Our current
>> plan is to do this using the extension mechanism in the proposed API.
>>
>>  If there is a better way to handle this need, then let's discuss
>> further. But, I didn't want the lack of a specific example to squash the
>> idea of extensions.
>>
>>  Troy Toman
>>
>>
>
>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Paul Voccio
I wanted to put out into the open where we think the evolution of the apis will 
go over the next few releases. This is by no means the only way to do this, but 
I thought it would be a start of conversation.

http://wiki.openstack.org/api_transition

I also wanted to clear up some confusion that I think came out of our email 
thread the other day. With the OpenStack 1.1 API proposal, this is really an
OpenStack Compute 1.1 proposal. While volumes and images are currently in, I 
think longer term they would be pulled out. The network and volume services 
should be able to scale independently of each other.

If you look at the diagram, the changes would entail moving from the AMQP 
protocol to an HTTP protocol that a worker would hit on the public/admin 
interfaces to accomplish the same work as before.
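As a rough sketch of what that AMQP-to-HTTP move could mean for a worker, the function below builds (but does not send) the HTTP request a compute worker might issue instead of casting a message on the queue. The endpoint URL and payload fields are assumptions for illustration only, not part of any spec:

```python
import json
import urllib.request

def build_volume_request(endpoint, server_id, size_gb):
    # Hypothetical payload; a real volume API would define its own schema.
    body = json.dumps({"server_id": server_id, "size": size_gb}).encode("utf-8")
    return urllib.request.Request(
        url=endpoint + "/volumes",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The worker would pass this to urllib.request.urlopen() and handle the
# response, much as it handles a queue reply today.
req = build_volume_request("http://volume.internal:8776/v1", "srv-123", 10)
```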

Let's keep the thread going.

Pvo


From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Tue, 15 Feb 2011 11:38:37 -0800
To: Troy Toman <troy.to...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Sounds great - when the patch comes in we can discuss whether this should be an 
extension or whether scheduled snapshots / generic tasks have broader 
applicability across OpenStack (and thus would be better in the core API)

Is there a blueprint?



On Tue, Feb 15, 2011 at 11:32 AM, Troy Toman <troy.to...@rackspace.com> wrote:

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


OK - so it sounds like volumes are going to be in the core API (?) - good.  
Let's get that into the API spec.  It also sounds like extensions (swift / 
glance?) are not going to be in the same API long-term.  So why do we have the 
extensions mechanism?

Until we have an implemented use case (i.e. a patch) that uses the extensions 
element, I don't see how we can spec it out or approve it.  So if you want it 
in v1.1, we better find a team that wants to use it and write code.  If there 
is such a patch, I stand corrected and let's get it reviewed and merged.

I would actually expect that the majority of the use cases that we want in the 
API but don't _want_ to go through core would be more simply addressed by 
well-known metadata (e.g. RAID-5, multi-continent replication, HPC, HIPAA).

I don't agree that the lack of a coded patch means we can't discuss an 
extension mechanism. But, if you want a specific use case, we have at least one 
we intend to deliver. It may be more of a one-off than a general case because 
it is required to give us a reasonable transition path from our current 
codebase to Nova. But, it is not an imagined need.

In the Rackspace Cloud Servers 1.0 API, we support a concept of backup 
schedules with a series of API calls to manage them. In drafting the OpenStack 
compute API, this was something that didn't feel generally applicable or useful 
in the core API. So, you don't see it as part of the CORE API spec. That said, 
for transition purposes, we will need a way to provide this capability to our 
customers when we move to Nova. Our current plan is to do this using the 
extension mechanism in the proposed API.

If there is a better way to handle this need, then let's discuss further. But, 
I didn't want the lack of a specific example to squash the idea of extensions.

Troy Toman






Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Jorge Williams

I like the idea of scheduling actions overall.  The idea of a generic scheduling 
service also appeals to me a lot.  The question is how you generalize the 
service.  I'd love to see your write-up.
-jOrGe W.


On Feb 16, 2011, at 4:35 PM, Adrian Otto wrote:

Glen,

I definitely recognize the value in having scheduling capability. I wrote a 
high level draft of a REST API for a generic scheduler to be used for batch job 
processing. Scheduled events are discussed regularly by users of queue systems 
that want certain things to happen on regular intervals. Considering that a 
scheduler function is useful, and could be used for many different services 
within the OpenStack system, I suggest thinking about a separate service that's 
dedicated to executing scheduled jobs that may need to interact with multiple 
services within OpenStack. This way it could be used to act upon not only 
/servers, but any other resource(s) in any service(s). Embedding the 
functionality within nova is probably an architectural mistake. Filing a 
blueprint for a separate scheduler service sounds like a good idea.

Adrian

On Feb 16, 2011, at 2:02 PM, Glen Campbell wrote:

The proposed compute API 1.1 has a specification for server actions (Sec. 4.4) 
with the endpoint:

   /servers/{id}/action

The actual action is specified as the body of the POST request, and the 
implication is that the action is performed immediately, or as soon as possible.

I'd like us to consider changing this "action" resource into a "calendar" or 
perhaps "schedule" resource:

   /servers/{id}/schedule{/year{/month{/day{/hour{/minute}}}}}

This would provide a generalized way of performing actions on a scheduled basis.

For example, instead of having to wake up at 2AM to reboot a server (for 
whatever reason), the administrator could schedule that event:

   /servers/{id}/schedule/2011/2/17/02/00

By using the default resource (without the day or time specified), the meaning 
would be synonymous with the proposed "/action" resource; i.e., perform it NOW, 
or as soon as possible.

The schedule resource could have additional uses; for example, a GET request 
could return the currently-scheduled actions for a particular server.
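As a sketch of the proposal above (illustrative only; this resource is not in the spec), a client helper might build the URL like this, falling back to the immediate "act now" form when no time is given:

```python
def schedule_url(server_id, when=None):
    """Build the proposed schedule URL; 'when' is an optional
    (year, month, day, hour, minute) tuple in GMT."""
    base = "/servers/%s/schedule" % server_id
    if when is None:
        return base  # no time given: synonymous with acting now
    return base + "/%04d/%d/%d/%02d/%02d" % when

# Reboot at 2011-02-17 02:00 versus immediately:
later = schedule_url("42", (2011, 2, 17, 2, 0))
now = schedule_url("42")
```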

Glen










Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Ed Leafe
On Feb 16, 2011, at 5:11 PM, Michael Mayo wrote:

> I like this idea, but I would suggest going with a unix timestamp in GMT 
> instead of /2011/xx/xx/etc.

Whether you use a timestamp or a year/month/day format, *always* use GMT. We 
all know how much fun it is when someone in Europe sends a request to reboot 
their server at 2am, and the data center in Chicago does just that, except that 
it's now 8am for the requester.
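The rule in code: normalize the caller's local time to GMT/UTC before storing the schedule. A small standard-library illustration:

```python
from datetime import datetime, timezone, timedelta

# A "reboot at 2am" request from Central European Time (UTC+1)...
cet = timezone(timedelta(hours=1))
local = datetime(2011, 2, 17, 2, 0, tzinfo=cet)

# ...stored in UTC, so a data center anywhere fires at the moment the
# requester meant, not at its own local 2am.
utc = local.astimezone(timezone.utc)
```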



-- Ed Leafe






Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Adrian Otto
Glen,

I definitely recognize the value in having scheduling capability. I wrote a 
high level draft of a REST API for a generic scheduler to be used for batch job 
processing. Scheduled events are discussed regularly by users of queue systems 
that want certain things to happen on regular intervals. Considering that a 
scheduler function is useful, and could be used for many different services 
within the OpenStack system, I suggest thinking about a separate service that's 
dedicated to executing scheduled jobs that may need to interact with multiple 
services within OpenStack. This way it could be used to act upon not only 
/servers, but any other resource(s) in any service(s). Embedding the 
functionality within nova is probably an architectural mistake. Filing a 
blueprint for a separate scheduler service sounds like a good idea.

Adrian

On Feb 16, 2011, at 2:02 PM, Glen Campbell wrote:

The proposed compute API 1.1 has a specification for server actions (Sec. 4.4) 
with the endpoint:

   /servers/{id}/action

The actual action is specified as the body of the POST request, and the 
implication is that the action is performed immediately, or as soon as possible.

I'd like us to consider changing this "action" resource into a "calendar" or 
perhaps "schedule" resource:

   /servers/{id}/schedule{/year{/month{/day{/hour{/minute}}}}}

This would provide a generalized way of performing actions on a scheduled basis.

For example, instead of having to wake up at 2AM to reboot a server (for 
whatever reason), the administrator could schedule that event:

   /servers/{id}/schedule/2011/2/17/02/00

By using the default resource (without the day or time specified), the meaning 
would be synonymous with the proposed "/action" resource; i.e., perform it NOW, 
or as soon as possible.

The schedule resource could have additional uses; for example, a GET request 
could return the currently-scheduled actions for a particular server.

Glen










Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Jay Pipes
On Wed, Feb 16, 2011 at 5:29 PM, Brian Waldon
 wrote:
> -Original Message-
> From: "Jay Pipes" 
> Sent: Wednesday, February 16, 2011 5:09pm
> To: "Glen Campbell" 
> Cc: "openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] OpenStack Compute API 1.1 - server actions
>
> On Wed, Feb 16, 2011 at 5:02 PM, Glen Campbell
>  wrote:
>> The proposed compute API 1.1 has a specification for server actions (Sec.
>> 4.4) with the endpoint:
>>
>>    /servers/{id}/action
>>
>> The actual action is specified as the body of the POST request, and the
>> implication is that the action is performed immediately, or as soon as
>> possible.
>
> Hmm, do you mean the GET request? The above URL implies the action is
> part of the GET URL...
>
> /servers/{id}/action only accepts POST requests with an action entity as the
> body, "action" is not a replaceable string

OK.

>> I'd like us to consider changing this "action" resource into a "calendar"
>> or
>> perhaps "schedule" resource:
>>
>>    /servers/{id}/schedule{/year{/month{/day{/hour{/minute}}}}}
>>
>> This would provide a generalized way of performing actions on a scheduled
>> basis.
>>
>> For example, instead of having to wake up at 2AM to reboot a server (for
>> whatever reason), the administrator could schedule that event:
>>
>>    /servers/{id}/schedule/2011/2/17/02/00
>>
>> By using the default resource (without the day or time specified), the
>> meaning would be synonymous with the proposed "/action" resource; i.e.,
>> perform it NOW, or as soon as possible.
>
> Why not /servers/{id}/{action}/schedule/2011/2/17/02/00 instead? That
> way no POST would be required.
>
> Changing a POST to a GET may seem convenient, but to me GET != POST and
> should never be used that way.

Hmm, I suppose, yes, you're right.

> Why not add a "schedule_at" property to the
> action entity and keep the url short?

++ that would work, too.

-jay



Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Brian Waldon

 
 
-Original Message-
From: "Jay Pipes" 
Sent: Wednesday, February 16, 2011 5:09pm
To: "Glen Campbell" 
Cc: "openstack@lists.launchpad.net" 
Subject: Re: [Openstack] OpenStack Compute API 1.1 - server actions

On Wed, Feb 16, 2011 at 5:02 PM, Glen Campbell
 wrote:
> The proposed compute API 1.1 has a specification for server actions (Sec.
> 4.4) with the endpoint:
>
>    /servers/{id}/action
>
> The actual action is specified as the body of the POST request, and the
> implication is that the action is performed immediately, or as soon as
> possible.

Hmm, do you mean the GET request? The above URL implies the action is
part of the GET URL...
 
 
/servers/{id}/action only accepts POST requests with an action entity as the 
body, "action" is not a replaceable string
 


> I'd like us to consider changing this "action" resource into a "calendar" or
> perhaps "schedule" resource:
>
>    /servers/{id}/schedule{/year{/month{/day{/hour{/minute}}}}}
>
> This would provide a generalized way of performing actions on a scheduled
> basis.
>
> For example, instead of having to wake up at 2AM to reboot a server (for
> whatever reason), the administrator could schedule that event:
>
>    /servers/{id}/schedule/2011/2/17/02/00
>
> By using the default resource (without the day or time specified), the
> meaning would be synonymous with the proposed "/action" resource; i.e.,
> perform it NOW, or as soon as possible.

Why not /servers/{id}/{action}/schedule/2011/2/17/02/00 instead? That
way no POST would be required.
 
 
Changing a POST to a GET may seem convenient, but to me GET != POST and should 
never be used that way. Why not add a "schedule_at" property to the action 
entity and keep the url short?
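That suggestion keeps the action URL as-is and moves the timing into the entity; roughly, the POST body might carry a field like this (the `schedule_at` name is from the suggestion above; the rest of the shape is a guess):

```python
import json

# Hypothetical action entity with the proposed schedule_at property,
# given as seconds since the epoch in UTC per the earlier suggestion.
action = {
    "reboot": {"type": "HARD"},
    "schedule_at": 1297908000,  # 2011-02-17 02:00:00 UTC
}
body = json.dumps(action)
```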
 


> The schedule resource could have additional uses; for example, a GET request
> could return the currently-scheduled actions for a particular server.

Sure. So, GET /servers/{id}/schedule would return a list of scheduled actions?

-jay

> Glen
>
>
>
>



Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Michael Mayo
I like this idea, but I would suggest going with a unix timestamp in GMT 
instead of /2011/xx/xx/etc.

Also, how would this affect error handling?  It seems like you'd basically need 
to have some sort of way to query all the server actions you've ever done 
before with their HTTP responses.



On Feb 16, 2011, at 2:02 PM, Glen Campbell wrote:

> The proposed compute API 1.1 has a specification for server actions (Sec. 
> 4.4) with the endpoint:
>  
>/servers/{id}/action
>  
> The actual action is specified as the body of the POST request, and the 
> implication is that the action is performed immediately, or as soon as 
> possible.
>  
> I'd like us to consider changing this "action" resource into a "calendar" or 
> perhaps "schedule" resource:
>  
>/servers/{id}/schedule{/year{/month{/day{/hour{/minute}}}}}
>  
> This would provide a generalized way of performing actions on a scheduled 
> basis.
>  
> For example, instead of having to wake up at 2AM to reboot a server (for 
> whatever reason), the administrator could schedule that event:
>  
>/servers/{id}/schedule/2011/2/17/02/00
>  
> By using the default resource (without the day or time specified), the 
> meaning would be synonymous with the proposed "/action" resource; i.e., 
> perform it NOW, or as soon as possible.
>  
> The schedule resource could have additional uses; for example, a GET request 
> could return the currently-scheduled actions for a particular server.
> 
> Glen
> 
> Confidentiality Notice: This e-mail message (including any attached or
> embedded documents) is intended for the exclusive and confidential use of the
> individual or entity to which this message is addressed, and unless otherwise
> expressly indicated, is confidential and privileged information of Rackspace.
> Any dissemination, distribution or copying of the enclosed material is 
> prohibited.
> If you receive this transmission in error, please notify us immediately by 
> e-mail
> at ab...@rackspace.com, and delete the original message.
> Your cooperation is appreciated.

Mike Mayo
901-299-9306
@greenisus





Re: [Openstack] OpenStack Compute API 1.1 - server actions

2011-02-16 Thread Jay Pipes
On Wed, Feb 16, 2011 at 5:02 PM, Glen Campbell wrote:
> The proposed compute API 1.1 has a specification for server actions (Sec.
> 4.4) with the endpoint:
>
>    /servers/{id}/action
>
> The actual action is specified as the body of the POST request, and the
> implication is that the action is performed immediately, or as soon as
> possible.

Hmm, do you mean the GET request? The above URL implies the action is
part of the GET URL...

> I'd like us to consider changing this "action" resource into a "calendar" or
> perhaps "schedule" resource:
>
>    /servers/{id}/schedule{/year{/month{/day{/hour{/minute}
>
> This would provide a generalized way of performing actions on a scheduled
> basis.
>
> For example, instead of having to wake up at 2AM to reboot a server (for
> whatever reason), the administrator could schedule that event:
>
>    /servers/{id}/schedule/2011/2/17/02/00
>
> By using the default resource (without the day or time specified), the
> meaning would be synonymous with the proposed "/action" resource; i.e.,
> perform it NOW, or as soon as possible.

Why not /servers/{id}/{action}/schedule/2011/2/17/02/00 instead? That
way no POST would be required.

> The schedule resource could have additional uses; for example, a GET request
> could return the currently-scheduled actions for a particular server.

Sure. So, GET /servers/{id}/schedule would return a list of scheduled actions?
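To make the semantics concrete, here is a small sketch of how a client might build both forms of the proposed resource path; the helper name and the zero-padding of hour/minute are assumptions based on Glen's example URL, not anything in a spec.

```python
from datetime import datetime

def schedule_path(server_id, when=None):
    """Build the proposed schedule resource path.

    With no timestamp the path collapses to the bare schedule resource,
    which Glen's proposal treats as "perform the action now".
    """
    base = "/servers/%s/schedule" % server_id
    if when is None:
        return base
    # Year/month/day are unpadded, hour/minute zero-padded, matching the
    # example /servers/{id}/schedule/2011/2/17/02/00 in the proposal.
    return "%s/%d/%d/%d/%02d/%02d" % (
        base, when.year, when.month, when.day, when.hour, when.minute)
```

A GET on the bare path could then return the currently-scheduled actions, as suggested.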

-jay

> Glen
>



Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Jorge Williams

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


How would this work if someone didn't run  a volume service or glance? Should 
the api listen for that?

My expectation is that if someone didn't run a volume service, we should expose 
that just as if there were insufficient resources (because that's not far from 
the case.)  We'd return an error like "no resources to satisfy your request".  
That way there's only one code path for quota exhausted / zero quota / no 
volume service / all disks full / no 'HIPAA compliant' or 'earthquake proof' 
volumes available when a user requests that.


A better approach is to simply provide a service catalog with a list of 
endpoints; then you can easily detect whether a volume service is available, etc. 
This allows you to detect what services are available with a single request, 
rather than probing for multiple failures.  Think of writing a control panel 
against a long list of services (image, volumes, network, etc.).  Do you want 
to make separate calls for each?  Do you have images?  Volumes?  Networks?  A 
service catalog lets you make a single call that gives you an inventory of 
what's available; you can then decide what to enable and disable in your 
control panel.
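As a sketch of the single-call detection Jorge describes: given a catalog document listing endpoints (the field names below are purely illustrative, since no catalog schema existed at this point), a control panel can decide what to enable without probing each service.

```python
# A hypothetical service catalog payload; the exact schema was still an
# open question on this thread, so these field names are illustrative.
CATALOG = {
    "services": [
        {"type": "compute", "endpoint": "https://example.com/v1.1/servers"},
        {"type": "image",   "endpoint": "https://example.com/v1.1/images"},
    ]
}

def available(catalog, service_type):
    """True if the catalog advertises an endpoint for the given service."""
    return any(s["type"] == service_type for s in catalog["services"])

def panel_features(catalog, wanted=("compute", "image", "volume", "network")):
    """One pass over the catalog decides what the control panel enables."""
    return {t: available(catalog, t) for t in wanted}
```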

For glance, I don't know - how is it even possible to boot an instance without 
an image?

You may have a list of stock images that you support within compute and do 
without a full image service implementation (translation, cataloging, etc).


 We shouldn't be relying on extensions for Cactus.  In fact, I'd rather leave 
out extensions until we have a solid use case.  You may be saying that volumes 
will be our test-use case, but I think that will yield a sub-optimal API.


I see extensions doing a few things. First, it gives a way for other developers 
to work on and promote additions to the api without fighting to get them into 
core at first.

While I agree that's a good goal, I don't think we should rely on it for our 
core services, because it will give a sub-optimal experience.  I also think 
that this extension element may well be worse than simply having separate APIs. 
 Right now I think we're designing in a vacuum, as you say.

I don't see our core services relying on extensions.  Once we design APIs for 
our core services, the core APIs should have enough functionality to support 
core implementation.  Extensions are there to support functionality that is out 
of the core.  New features etc.  It allows us to innovate.


Can you explain how it would yield a sub-optimal api?

It would yield a sub-optimal API precisely because the process of fighting to 
get things into core makes them better.  If you don't believe that, then we 
should shut down the mailing list and develop closed-source.


I totally believe that fighting to get things into core will make things 
better.  Having extensions doesn't prevent this from happening; I would argue 
that it encourages folks to develop stuff and show it off. If you have a great 
idea, you can show it working; if clients like it they will code against it, 
and you can create incentive for getting it into the core.

Another thing I would note is that not everything belongs in the core -- 
there's always a need for niche functionality that may be applicable only to a 
single operator or group of operators.  Troy gave a really great example for 
Rackspace with backup schedules -- and that's just one example, there are 
others -- for Rackspace there are features that will likely never make it to 
the core because they require a very specific support infrastructure behind 
them.  With extensions we can add these features without breaking clients.


A less meta reasoning would be that when we design two things together, we're 
able to ensure they work together.  The screw and the screwdriver didn't evolve 
independently.  If we're designing them together, we shouldn't complicate 
things by use of extensions.

Again, our core services should stand on their own -- without extensions -- 
extensions are there to support new features in a backwards compatible way, to 
allow operators to differentiate themselves, and to offer support for niche 
functionality.



I don't think that anyone is proposing that a volume API be entirely defined as 
an extension to OpenStack compute. The volume extension serves simply as an 
example, and it covers the case for mounting and un-mounting a volume.  If we 
can figure out a way of doing this in a general way we can always promote the 
functionality to the core.

I don't disagree that there should be core apis for each service, but that in 
the long run, there may not be a single api. Glance already doesn't have an api 
in the openstack 1.1 spec. What about Swift?

OK - so it sounds like volumes are going to be in the core API (?) - good.

No, more like: there will be a core API for managing volumes that is different 
from the Compute API.


Let's get that into the API spec.  It also sounds like extensions (swift / 

Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Justin Santa Barbara
Sounds great - when the patch comes in we can discuss whether this should be
an extension or whether scheduled snapshots / generic tasks have broader
applicability across OpenStack (and thus would be better in the core API)

Is there a blueprint?



On Tue, Feb 15, 2011 at 11:32 AM, Troy Toman wrote:

>
>  On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:
>
>
>  OK - so it sounds like volumes are going to be in the core API (?) -
> good.  Let's get that into the API spec.  It also sounds like extensions
> (swift / glance?) are not going to be in the same API long-term.  So why do
> we have the extensions mechanism?
>
>  Until we have an implemented use case (i.e. a patch) that uses the
> extensions element, I don't see how we can spec it out or approve it.  So if
> you want it in v1.1, we better find a team that wants to use it and write
> code.  If there is such a patch, I stand corrected and let's get it reviewed
> and merged.
>
>  I would actually expect that the majority of the use cases that we want
> in the API but don't _want_ to go through core would be more simply
> addressed by well-known metadata (e.g. RAID-5, multi-continent replication,
> HPC, HIPAA).
>
>
>  I don't agree that the lack of a coded patch means we can't discuss an
> extension mechanism. But, if you want a specific use case, we have at least
> one we intend to deliver. It may be more of a one-off than a general case
> because it is required to give us a reasonable transition path from our
> current codebase to Nova. But, it is not an imagined need.
>
>  In the Rackspace Cloud Servers 1.0 API, we support a concept of backup
> schedules with a series of API calls to manage them. In drafting the
> OpenStack compute API, this was something that didn't feel generally
> applicable or useful in the core API. So, you don't see it as part of the
> CORE API spec. That said, for transition purposes, we will need a way to
> provide this capability to our customers when we move to Nova. Our current
> plan is to do this using the extension mechanism in the proposed API.
>
>  If there is a better way to handle this need, then let's discuss further.
> But, I didn't want the lack of a specific example to squash the idea of
> extensions.
>
>  Troy Toman
>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Troy Toman

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


OK - so it sounds like volumes are going to be in the core API (?) - good.  
Let's get that into the API spec.  It also sounds like extensions (swift / 
glance?) are not going to be in the same API long-term.  So why do we have the 
extensions mechanism?

Until we have an implemented use case (i.e. a patch) that uses the extensions 
element, I don't see how we can spec it out or approve it.  So if you want it 
in v1.1, we better find a team that wants to use it and write code.  If there 
is such a patch, I stand corrected and let's get it reviewed and merged.

I would actually expect that the majority of the use cases that we want in the 
API but don't _want_ to go through core would be more simply addressed by 
well-known metadata (e.g. RAID-5, multi-continent replication, HPC, HIPAA).

I don't agree that the lack of a coded patch means we can't discuss an 
extension mechanism. But, if you want a specific use case, we have at least one 
we intend to deliver. It may be more of a one-off than a general case because 
it is required to give us a reasonable transition path from our current 
codebase to Nova. But, it is not an imagined need.

In the Rackspace Cloud Servers 1.0 API, we support a concept of backup 
schedules with a series of API calls to manage them. In drafting the OpenStack 
compute API, this was something that didn't feel generally applicable or useful 
in the core API. So, you don't see it as part of the CORE API spec. That said, 
for transition purposes, we will need a way to provide this capability to our 
customers when we move to Nova. Our current plan is to do this using the 
extension mechanism in the proposed API.

If there is a better way to handle this need, then let's discuss further. But, 
I didn't want the lack of a specific example to squash the idea of extensions.

Troy Toman






Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Justin Santa Barbara
> How would this work if someone didn't run  a volume service or glance?
> Should the api listen for that?
>
>
My expectation is that if someone didn't run a volume service, we should
expose that just as if there were insufficient resources (because that's not
far from the case.)  We'd return an error like "no resources to satisfy your
request".  That way there's only one code path for quota exhausted / zero
quota / no volume service / all disks full / no 'HIPAA compliant' or
'earthquake proof' volumes available when a user requests that.

For glance, I don't know - how is it even possible to boot an instance
without an image?

  We shouldn't be relying on extensions for Cactus.  In fact, I'd rather
> leave out extensions until we have a solid use case.  You may be saying that
> volumes will be our test-use case, but I think that will yield a sub-optimal
> API.
>
>
>  I see extensions doing a few things. First, it gives a way for other
> developers to work on and promote additions to the api without fighting to
> get them into core at first.
>
>
While I agree that's a good goal, I don't think we should rely on it for our
core services, because it will give a sub-optimal experience.  I also think
that this extension element may well be worse than simply having separate
APIs.  Right now I think we're designing in a vacuum, as you say.

Can you explain how it would yield a sub-optimal api?
>
> It would yield a sub-optimal API precisely because the process of fighting
to get things into core makes them better.  If you don't believe that, then
we should shut down the mailing list and develop closed-source.

A less meta reasoning would be that when we design two things together,
we're able to ensure they work together.  The screw and the screwdriver
didn't evolve independently.  If we're designing them together, we shouldn't
complicate things by use of extensions.


>  I don't think that anyone is proposing that a volume API be entirely
> defined as an extension to OpenStack compute. The volume extension serves
> simply as an example and it covers the case for mounting and un-mounting a
> volume.  If we can figure out a way of doing this in a general way we can
> always promote the functionality to the core.
>


> I don't disagree that there should be core apis for each service, but that
> in the long run, there may not be a single api. Glance already doesn't have
> an api in the openstack 1.1 spec. What about Swift?
>

OK - so it sounds like volumes are going to be in the core API (?) - good.
 Let's get that into the API spec.  It also sounds like extensions (swift /
glance?) are not going to be in the same API long-term.  So why do we have
the extensions mechanism?

Until we have an implemented use case (i.e. a patch) that uses the
extensions element, I don't see how we can spec it out or approve it.  So if
you want it in v1.1, we better find a team that wants to use it and write
code.  If there is such a patch, I stand corrected and let's get it reviewed
and merged.

I would actually expect that the majority of the use cases that we want in
the API but don't _want_ to go through core would be more simply addressed
by well-known metadata (e.g. RAID-5, multi-continent replication, HPC,
HIPAA).



In general, I like the organic model with promotion of useful extensions
into core.  I'm 100% opposed to extensions being used for our initial
functionality.  I'm also very concerned by the idea that we think we can
radically change these APIs.  If we're talking about spinning images out
into a separate API in future, or moving volumes into the core, that would
be a major revision, because we'd break clients beyond simply requiring them
to tolerate additional elements.  Versioning helps here, but note that AWS
has _never_ deprecated a call afaik.  Versioning is a purely linear
evolution, and so if I want a new feature, that means I have to use the
refactored APIs?  It sounds like we're designing the OpenStack API v0.1,
otherwise we're going to have some very angry developers here.

One way to mitigate this is to create C, PHP, Python, Ruby and Java wrappers
as part of OpenStack, and make _that_ the stable API for now.  The wrappers
would have to hide any API changes, and thus we're not externalizing the
pain of API changes.  I think we desperately need this anyway (well, at
least one language binding) to support simple integration testing.
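A minimal sketch of the wrapper idea, assuming a hypothetical client class: the binding owns URL construction and request shapes, so a wire-level change between API versions is absorbed in one place rather than in every caller.

```python
class ComputeClient:
    """Illustrative thin binding; a real one would issue HTTP requests
    and translate errors, which is omitted here."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def reboot(self, server_id, hard=False):
        # If the wire-level path or body changes in a later API version,
        # only this method changes; callers keep calling client.reboot().
        body = {"reboot": {"type": "HARD" if hard else "SOFT"}}
        return ("POST",
                "%s/servers/%s/action" % (self.base_url, server_id),
                body)
```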

Justin


Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Jorge Williams
Additional comments inline:

On Feb 14, 2011, at 6:47 PM, Paul Voccio wrote:

Thoughts below

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 15:40:04 -0800
To: Paul Voccio <paul.voc...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>

Ah - well, I was sort of expecting that we'd all go the other way and agree 
some core functionality, and I thought that volumes should definitely be part 
of that.  I'd hope that the core functionality would always be part of the core 
API, and I'd include images & volumes in that list.

I'm all for having the discussion. How would this work if someone didn't run  a 
volume service or glance? Should the api listen for that? I don't disagree that 
there should be core apis for each service, but that in the long run, there may 
not be a single api. Glance already doesn't have an api in the openstack 1.1 
spec. What about Swift?


Right.




I think that building an extensible API is an ambitious proposition.  AWS seems 
to have some pretty rough edges in their API because they've built everything 
incrementally, and I would hope that we could do better, even if it does mean 
'big design up front'.

I think the block storage / volumes, networking, images and compute should all 
be part of the core API and should work well together.

Of course they all have to work well together. I do think we need to discuss 
how it works when someone isn't using these services. Is it an OS API 
implementation then?


 We shouldn't be relying on extensions for Cactus.  In fact, I'd rather leave 
out extensions until we have a solid use case.  You may be saying that volumes 
will be our test-use case, but I think that will yield a sub-optimal API.


I see extensions doing a few things. First, it gives a way for other developers 
to work on and promote additions to the api without fighting to get them into 
core at first.  Can you explain how it would yield a sub-optimal api?


I also don't understand what you mean by sub-optimal API.

As PVO mentioned, extensions allow the core to be a well-defined standard while 
giving operators the ability to add features that may distinguish them from one 
another. If an extension is particularly useful it can eventually be promoted 
to the core.  That means that the API grows bottom-up rather than top-down.  I 
see this as a very positive thing; it's hard to develop a core API in a vacuum, 
and extensions allow us to try things out -- maybe in different ways -- before 
we make it to the core.

I don't think that anyone is proposing that a volume API be entirely defined as 
an extension to OpenStack compute. The volume extension serves simply as an 
example, and it covers the case for mounting and un-mounting a volume.  If we 
can figure out a way of doing this in a general way we can always promote the 
functionality to the core.





With regards to the difference between the CloudServers API and the OpenStack 
API, I really do think there should be separate documents.  I'd like for the 
OpenStack API to basically just have the JSON & XML interfaces in there, and 
none of the operational stuff that Rackspace needs to do to operate a public 
cloud (such as caching).  That is important stuff, but we need to divide and 
conquer.  I'd also like to see a third document, by NASA/Anso, which describes 
a deployment profile for a private cloud (probably no caching or rate limits).  
I think the division will actually help us here.

I think we would want to have the same operational aspects in both public and 
private clouds. It gives a consistent experience between what is deployed in a 
smaller implementation and what is deployed in large implementations. What we 
should do is make these levers very easy to find and tune. Maybe they are tuned 
to high defaults when deployed, but the functionality should ship in the api.



Right.  We need to have consistent ways of handling those problems. Nothing in 
the spec says you must support caching, but if you do support caching  you will 
do so as described in the spec. I'll clear that up in the text.   Likewise, you 
could deploy OpenStack with 0 rate and 0 absolute limits.  A query to /limits 
may return an empty set, but you must have the ability to support limits in a 
consistent manner between deployments -- this is especially important when we 
consider language bindings etc.
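A sketch of what "consistent limits, possibly empty" means for a client; the document shape below is an assumption, since the /limits format was still being revised at this point.

```python
def effective_limits(limits_doc):
    """Normalize a /limits response; a deployment with 0 rate and 0
    absolute limits may legitimately return an empty set, which a
    client should treat as 'no limit advertised'."""
    limits = limits_doc.get("limits", {})
    return {"rate": limits.get("rate", []),
            "absolute": limits.get("absolute", {})}
```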



- I don't think anyone will argue with Rackspace's expertise on their 
deployment needs, nor with NASA's on theirs, and we can just have the core 
behavior in the OpenStack API spec.

Justin





On Mon, Feb 14, 2011 at 3:18 PM, Paul Voccio <paul.voc...@rackspace.com> wrote:
Justin -

Thought s

Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Jorge Williams
Additional comments inline:

On Feb 14, 2011, at 4:59 PM, Paul Voccio wrote:

Thoughts below:

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

I think it is fair to say the api comes with default limits. There is nothing 
in the spec or the code that says you can't alter these limits.

Right,  I'll modify the spec to denote that operators can define their own 
limits and the limits of the specs are simply sample limits.  You can query 
limits programmatically and that should be the only way to determine your 
limits for a particular deployment.

As for the differences between the OpenStack API and Cloud Servers API: moving 
forward, there is no Cloud Servers API.  This spec covers the OpenStack Compute 
API.  The API should really stand on its own.  I agree with PVO that the 1.2 
spec or other future specs will move images out of this API and into the glance 
API.  There should be a separate API for volume storage etc.  Again, images are 
still part of the 1.1 spec because we don't want a complete rewrite.




Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal

Should the limit be operator specific?  Or should it just be higher?  One of 
the nice things about the 5-item limit today is that it's part of the schema.  
That means that as we are parsing the message -- in a stream-like fashion -- we 
can reject messages that go over the limit.  This allows us to catch abuse 
early.  If we make it operator specific we'd likely lose that feature -- not 
that big of a deal -- but worth mentioning.  Another issue is how do we prevent 
people from going metadata crazy -- that could add a lot of data, and we'd need 
to support pagination of metadata.  Again, I can totally handle that, but it's 
worth mentioning; it's not a big deal today.

The idea for the metadata here is that it's user defined metadata.  Hints 
should  be defined using another mechanism.


  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?

That's a really good question.  On a server rebuild we will replace metadata if 
it's specified as part of the rebuild action, otherwise we leave the metadata 
alone -- this is mentioned in section 4.4.3. I believe an update (4.3.4) should 
do a merge.  Updates will replace individual metadata items with the same key.  
Does this make sense?  I'll modify the spec to reflect this.
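The two behaviours described here can be stated precisely in dictionary terms. This is a sketch of the proposed semantics, not spec text:

```python
def metadata_update(existing, changes):
    """Update (sec. 4.3.4 as described above): merge, replacing only
    items whose keys collide; other existing items survive."""
    merged = dict(existing)
    merged.update(changes)
    return merged

def metadata_on_rebuild(existing, supplied=None):
    """Rebuild (sec. 4.4.3 as described above): replace wholesale when
    metadata is supplied, otherwise leave it alone."""
    return dict(supplied) if supplied is not None else dict(existing)
```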


  *   Can this return the new metadata values instead of no-return-value?

That's a great idea. I'll add that.


  *   Should we allow custom metadata on all items?  Should we replace some 
properties with well-known metadata?  e.g. on flavors, should the disk property 
move to openstack:disk metadata?  This way we don't need to define the exact 
set of metadata on all items for eternity (e.g. authors on extensions)

I like the idea of adding custom metadata to other items -- especially images 
(I'll go ahead and add these to the spec) -- but keep in mind these are 
user-defined metadata items.  Operator-defined items should probably go through 
extensions -- the reason for this is that you can define these in the schema, 
so you can introspect them and document them correctly, you can check to see 
what is available, you can validate the messages, etc.  Lots of advantages for 
users if operator metadata is nicely defined -- and users can manage their own 
metadata items.


  *   Are duplicate metadata keys allowed?

I'm leaning towards no.  But I could probably be convinced otherwise.


  *   Can we please reserve the openstack: prefix, just like AWS reserves the 
aws: prefix


Again, metadata items are for users only.  We should have a prefix defined 
for extensions.
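If a prefix were reserved for non-user metadata, server-side validation is nearly a one-liner. This sketch assumes a reserved openstack: prefix by analogy with aws:; nothing in the spec defines this.

```python
RESERVED_PREFIX = "openstack:"  # assumed, by analogy with AWS reserving aws:

def check_user_metadata(items):
    """Reject user-supplied metadata keys squatting on the reserved prefix."""
    bad = sorted(k for k in items if k.startswith(RESERVED_PREFIX))
    if bad:
        raise ValueError("reserved metadata keys: %s" % ", ".join(bad))
    return items
```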

IP Addresses:


  *   Instead of just supporting a public and private network, how about 
specifying  and .  This way we can 
also support more networks e.g. SAN, private VPN networks, HPC interconnects etc

This could be a good idea. This way someone might not return a private network, 
or could return additional management networks.


I like this idea as well, let me work this into the spec.
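One way the idea could look on the wire: an addresses document keyed by network name instead of the fixed public/private pair. The network names and address values below are invented for illustration.

```python
# Hypothetical addresses document keyed by arbitrary network name.
ADDRESSES = {
    "public":  ["67.23.10.132"],
    "private": ["10.176.42.16"],
    "san":     ["192.168.1.4"],   # e.g. a storage interconnect
}

def network_names(addresses):
    """Clients discover networks by key rather than assuming two fixed ones."""
    return sorted(addresses)
```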



  *   Is it use

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Thoughts below

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 15:40:04 -0800
To: Paul Voccio <paul.voc...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Ah - well, I was sort of expecting that we'd all go the other way and agree 
some core functionality, and I thought that volumes should definitely be part 
of that.  I'd hope that the core functionality would always be part of the core 
API, and I'd include images & volumes in that list.

I'm all for having the discussion. How would this work if someone didn't run  a 
volume service or glance? Should the api listen for that? I don't disagree that 
there should be core apis for each service, but that in the long run, there may 
not be a single api. Glance already doesn't have an api in the openstack 1.1 
spec. What about Swift?




I think that building an extensible API is an ambitious proposition.  AWS seems 
to have some pretty rough edges in their API because they've built everything 
incrementally, and I would hope that we could do better, even if it does mean 
'big design up front'.

I think the block storage / volumes, networking, images and compute should all 
be part of the core API and should work well together.

Of course they all have to work well together. I do think we need to discuss 
how it works when someone isn't using these services. Is it an OS API 
implementation then?


 We shouldn't be relying on extensions for Cactus.  In fact, I'd rather leave 
out extensions until we have a solid use case.  You may be saying that volumes 
will be our test-use case, but I think that will yield a sub-optimal API.


I see extensions doing a few things. First, it gives a way for other developers 
to work on and promote additions to the api without fighting to get them into 
core at first.  Can you explain how it would yield a sub-optimal api?




With regards to the difference between the CloudServers API and the OpenStack 
API, I really do think there should be separate documents.  I'd like for the 
OpenStack API to basically just have the JSON & XML interfaces in there, and 
none of the operational stuff that Rackspace needs to do to operate a public 
cloud (such as caching).  That is important stuff, but we need to divide and 
conquer.  I'd also like to see a third document, by NASA/Anso, which describes 
a deployment profile for a private cloud (probably no caching or rate limits).  
I think the division will actually help us here.

I think we would want to have the same operational aspects in both public and 
private clouds. It gives a consistent experience between what is deployed in a 
smaller implementation and what is deployed in large implementations. What we 
should do is make these levers very easy to find and tune. Maybe the defaults 
are tuned high when deployed, but the functionality should ship in the api.
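To make the point concrete, here is a minimal sketch of that idea: limits ship with the api and an operator tunes them per deployment. All names (RateLimit, DEFAULT_LIMITS, effective_limits) are hypothetical, not from the Nova codebase.

```python
from dataclasses import dataclass

@dataclass
class RateLimit:
    verb: str   # HTTP verb the limit applies to
    uri: str    # resource pattern
    value: int  # requests allowed per unit
    unit: str   # e.g. "MINUTE", "DAY"

# A private cloud might ship with deliberately high defaults, so the
# machinery is present but effectively unintrusive out of the box.
DEFAULT_LIMITS = [
    RateLimit("POST", "/servers", 1_000_000, "DAY"),
    RateLimit("GET", "*", 1_000_000, "MINUTE"),
]

def effective_limits(overrides=None):
    """Merge operator overrides (keyed by verb and uri) over the defaults."""
    overrides = overrides or {}
    return [overrides.get((lim.verb, lim.uri), lim) for lim in DEFAULT_LIMITS]

# A public provider would tune the same lever downward instead of running
# a different api.
tuned = effective_limits(
    {("POST", "/servers"): RateLimit("POST", "/servers", 10, "MINUTE")})
```

The point of the sketch is that public and private deployments differ only in configuration, not in which functionality the api ships.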


- I don't think anyone will argue with Rackspace's expertise on their 
deployment needs, nor with NASA's on theirs, and we can just have the core 
behavior in the OpenStack API spec.

Justin





On Mon, Feb 14, 2011 at 3:18 PM, Paul Voccio <paul.voc...@rackspace.com> wrote:
Justin -

Thought some more on your comments wrt images being in the 1.1 api spec. I 
agree with you that it doesn't make sense in the long term to have them in the 
compute api since the service will delegate to glance in the long term. I would 
propose that in the 1.2 or other future spec that /images move to an action on 
/compute since that’s really what is happening. I don't know that it makes 
sense to do so in 1.1 as changes are contentious enough without being a total 
rewrite.

Looking forward to your feedback,
pvo

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?  Can this return the new metadata values 
instead of no-return-value?

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Justin Santa Barbara
Ah - well, I was sort of expecting that we'd all go the other way and agree
some core functionality, and I thought that volumes should definitely be
part of that.  I'd hope that the core functionality would always be part of
the core API, and I'd include images & volumes in that list.

I think that building an extensible API is an ambitious proposition.  AWS
seems to have some pretty rough edges in their API because they've built
everything incrementally, and I would hope that we could do better, even if
it does mean 'big design up front'.

I think the block storage / volumes, networking, images and compute should
all be part of the core API and should work well together.  We shouldn't be
relying on extensions for Cactus.  In fact, I'd rather leave out extensions
until we have a solid use case.  You may be saying that volumes will be our
test-use case, but I think that will yield a sub-optimal API.



With regards to the difference between the CloudServers API and the
OpenStack API, I really do think there should be separate documents.  I'd
like for the OpenStack API to basically just have the JSON & XML interfaces
in there, and none of the operational stuff that Rackspace needs to do to
operate a public cloud (such as caching).  That is important stuff, but we
need to divide and conquer.  I'd also like to see a third document, by
NASA/Anso, which describes a deployment profile for a private cloud
(probably no caching or rate limits).  I think the division will actually
help us here - I don't think anyone will argue with Rackspace's expertise on
their deployment needs, nor with NASA's on theirs, and we can just have the
core behavior in the OpenStack API spec.

Justin





On Mon, Feb 14, 2011 at 3:18 PM, Paul Voccio wrote:

>  Justin -
>
>  Thought some more on your comments wrt images being in the 1.1 api spec.
> I agree with you that it doesn't make sense in the long term to have them in
> the compute api since the service will delegate to glance in the long term.
> I would propose that in the 1.2 or other future spec that /images move to an
> action on /compute since that’s really what is happening. I don't know that
> it makes sense to do so in 1.1 as changes are contentious enough without
> being a total rewrite.
>
>  Looking forward to your feedback,
> pvo
>
>   From: Justin Santa Barbara 
> Date: Mon, 14 Feb 2011 14:32:52 -0800
> To: 
> Subject: Re: [Openstack] OpenStack Compute API 1.1
>
>  Some thoughts...
>
>  General:
>
>
>- Are we writing the OpenStack API, or are we writing the document for
>the next version of Cloud Servers?  In my opinion, the two need to be
>separate.  For example, specifications of resource limits and rate limits,
>supported compression encodings, timeout on persistent connections,
>pagination, caching, polling and resize confirmation windows don't belong 
> in
>the core OpenStack API.  These should be put in the CloudServers v1.1
>documentation, but a different OpenStack provider will not impose the same
>limitations that Rackspace will.
>
>
>  Metadata:
>
>
>- The 5 item limit will probably need to be raised if we start using
>the metadata for hints etc, but this is no big deal
>- What is the behaviour of the metadata collection update when metadata
>is already present (merge or replace)?  Can this return the new metadata
>values instead of no-return-value?
>- Should we allow custom metadata on all items?  Should we replace some
>properties with well-known metadata?  e.g. on flavors, should the disk
>property move to openstack:disk metadata?  This way we don't need to define
>the exact set of metadata on all items for eternity (e.g. authors on
>extensions)
>- Are duplicate metadata keys allowed?
>- Can we please reserve the openstack: prefix, just like AWS reserves
>the aws: prefix
>
>
>  IP Addresses:
>
>
>- Instead of just supporting a public and private network, how about
>specifying  and .  This way we
>can also support more networks e.g. SAN, private VPN networks, HPC
>interconnects etc
>- Is it useful to know which IPV4 addresses and IPV6 addresses map to
>network cards?  Right now if there are multiple addresses on the same
>network, the correspondence is undefined.
>- What happens when a machine has a block of addresses?  Is each
>address listed individually?  What happens in IPv6 land where a machine
>could well have a huge block?  I think we need a netmask.
>
>
>  Extensions:
>
>
>- How are the XML schemas going to work with extension elements?  Right
>now, it's very free-form, which can cause problems with useful schemas

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Justin -

Thought some more on your comments wrt images being in the 1.1 api spec. I 
agree with you that it doesn't make sense in the long term to have them in the 
compute api since the service will delegate to glance in the long term. I would 
propose that in the 1.2 or other future spec that /images move to an action on 
/compute since that’s really what is happening. I don't know that it makes 
sense to do so in 1.1 as changes are contentious enough without being a total 
rewrite.

Looking forward to your feedback,
pvo

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?  Can this return the new metadata values 
instead of no-return-value?
  *   Should we allow custom metadata on all items?  Should we replace some 
properties with well-known metadata?  e.g. on flavors, should the disk property 
move to openstack:disk metadata?  This way we don't need to define the exact 
set of metadata on all items for eternity (e.g. authors on extensions)
  *   Are duplicate metadata keys allowed?
  *   Can we please reserve the openstack: prefix, just like AWS reserves the 
aws: prefix
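As an illustration of the open questions in the metadata list above (merge vs. replace semantics, the 5-item limit, returning the new values, and a reserved openstack: prefix), here is a hedged sketch; the function and its behaviour are hypothetical, not anything the spec has settled.

```python
RESERVED_PREFIX = "openstack:"  # proposed reserved namespace, like aws:
MAX_ITEMS = 5                   # current spec limit, likely to be raised

def update_metadata(current, incoming, merge=True):
    """Apply a metadata collection update and return the new values.

    merge=True keeps existing keys and overlays the incoming ones;
    merge=False replaces the whole collection. Returning the result
    answers the no-return-value question above.
    """
    for key in incoming:
        if key.startswith(RESERVED_PREFIX):
            raise ValueError(f"{key!r} is in the reserved {RESERVED_PREFIX} namespace")
    new = {**current, **incoming} if merge else dict(incoming)
    if len(new) > MAX_ITEMS:
        raise ValueError(f"metadata limited to {MAX_ITEMS} items")
    return new

merged = update_metadata({"role": "web"}, {"tier": "frontend"})           # merge
replaced = update_metadata({"role": "web"}, {"tier": "frontend"}, False)  # replace
```

Using a dict also implies an answer to the duplicate-keys question: duplicates are disallowed, the last write wins.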

IP Addresses:


  *   Instead of just supporting a public and private network, how about 
specifying  and .  This way we can 
also support more networks e.g. SAN, private VPN networks, HPC interconnects etc
  *   Is it useful to know which IPV4 addresses and IPV6 addresses map to 
network cards?  Right now if there are multiple addresses on the same network, 
the correspondence is undefined.
  *   What happens when a machine has a block of addresses?  Is each address 
listed individually?  What happens in IPv6 land where a machine could well have 
a huge block?  I think we need a netmask.

Extensions:


  *   How are the XML schemas going to work with extension elements?  Right 
now, it's very free-form, which can cause problems with useful schemas.  Are 
the proposed schemas available?

Volumes:


  *   Volume support is core to OpenStack (and has been since launch).  This 
needs therefore to be in the core API, not in an extension.  Or if it is an 
extension then compute, images and flavors should all be in extensions also 
(which would be cool, if a little complicated.)



Justin





On Mon, Feb 14, 2011 at 11:30 AM, John Purrier <j...@openstack.org> wrote:

Bumping this to the top of the list. One of the key deliverables for Cactus is 
a complete and usable OpenStack Compute API. This means that using only the API 
and tools that interact with the OpenStack Compute API Nova can be installed 
and configured; once running all of the Nova features and functions for VM, 
Network, and Volume provisioning and management are accessible and operable 
through the API.



We need your eyes on this, to ensure that the spec is correct. Please take the 
time to review and comment, the more up-front work we do here the better the 
implementation will be.



Thanks,



John



-Original Message-
From: openstack-bounces+john=openstack.org@lists.launchpad.net
[mailto:openstack-bounces+john=openstack.org@lists.launchpad.net] On Behalf Of Gabe Westmaas
Sent: Wednesday, February 09, 2011 3:03 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack API 1.1



A blueprint and proposed spec for OpenStack API 1.1 has been posted and I would 
love to get feedback on the specification.



Blueprint:

https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1



Spec wiki:

http://wiki.openstack.org/OpenStackAPI_1-1



Detailed Spec:

http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf



We'd like to finish up as much of the API implementation for cactus as 
possible, and in particular we want to make sure that we get API extensions 
correct as early as possible.

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Thoughts below:

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

I think it is fair to say the api comes with default limits. There is nothing 
in the spec or the code that says you can't alter these limits.



Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?  Can this return the new metadata values 
instead of no-return-value?
  *   Should we allow custom metadata on all items?  Should we replace some 
properties with well-known metadata?  e.g. on flavors, should the disk property 
move to openstack:disk metadata?  This way we don't need to define the exact 
set of metadata on all items for eternity (e.g. authors on extensions)
  *   Are duplicate metadata keys allowed?
  *   Can we please reserve the openstack: prefix, just like AWS reserves the 
aws: prefix

IP Addresses:


  *   Instead of just supporting a public and private network, how about 
specifying  and .  This way we can 
also support more networks e.g. SAN, private VPN networks, HPC interconnects etc

This could be a good idea. That way it would still work if someone doesn't 
return a private network, or returns additional management networks.


  *   Is it useful to know which IPV4 addresses and IPV6 addresses map to 
network cards?  Right now if there are multiple addresses on the same network, 
the correspondence is undefined.

Not sure we'd know, depending on the network topology, where the address maps 
to a particular card; not sure I follow. If there are multiple addresses on the 
same network, the addresses could float between nics, so knowing which nic they 
were originally bound to isn't important and could also be confusing.



  *   What happens when a machine has a block of addresses?  Is each address 
listed individually?  What happens in IPv6 land where a machine could well have 
a huge block?  I think we need a netmask.

Netmask makes sense.
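A sketch of what returning address blocks with a prefix length could look like, using Python's ipaddress module to make the IPv6 case concrete; the wire format shown is purely illustrative, not anything from the spec.

```python
import ipaddress

# An IPv6 /64 holds 2**64 addresses, so listing each address individually
# is impossible; a base address plus prefix length (netmask) is required.
block = ipaddress.ip_network("2001:db8:1234::/64")
assert block.num_addresses == 2 ** 64

def serialize_network(net):
    """Hypothetical wire form: base address plus prefix, not an address list."""
    return {"addr": str(net.network_address),
            "prefix": net.prefixlen,
            "version": net.version}

v4_block = serialize_network(ipaddress.ip_network("10.0.0.0/24"))
v6_block = serialize_network(block)
```

The same shape covers a single IPv4 address (a /32), a small IPv4 block, and a huge IPv6 allocation.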



Extensions:


  *   How are the XML schemas going to work with extension elements?  Right 
now, it's very free-form, which can cause problems with useful schemas.  Are 
the proposed schemas available?

Volumes:


  *   Volume support is core to OpenStack (and has been since launch).  This 
needs therefore to be in the core API, not in an extension.  Or if it is an 
extension then compute, images and flavors should all be in extensions also 
(which would be cool, if a little complicated.)

I think this is in preparation for the separation of apis in the future. 
Flavors would always tie to a compute api since they don't really make sense 
outside of a compute context. Glance is getting the images api, and I think the 
compute images context will eventually move there.

pvo





On Mon, Feb 14, 2011 at 11:30 AM, John Purrier <j...@openstack.org> wrote:

Bumping this to the top of the list. One of the key deliverables for Cactus is 
a complete and usable OpenStack Compute API. This means that using only the API 
and tools that interact with the OpenStack Compute API Nova can be installed 
and configured; once running all of the Nova features and functions for VM, 
Network, and Volume provisioning and management are accessible and operable 
through the API.



We need your eyes on this, to ensure that the spec is correct. Please take the 
time to review and comment, the more up-front work we do here the better the 
implementation will be.



Thanks,



John



-Original Message-
From: openstack-bounces+john=openstack.org@lists.launchpad.net
[mailto:openstack-bounces+john=openstack.org@lists.launchpad.net] On Behalf Of Gabe Westmaas
Sent: Wednesday, February 09, 2011 3:03 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack API 1.1



A blueprint and proposed spec for OpenStack API 1.1 has been posted and I would 
love to get feedback on the specification.



Blueprint:

https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Justin Santa Barbara
Some thoughts...

General:


   - Are we writing the OpenStack API, or are we writing the document for
   the next version of Cloud Servers?  In my opinion, the two need to be
   separate.  For example, specifications of resource limits and rate limits,
   supported compression encodings, timeout on persistent connections,
   pagination, caching, polling and resize confirmation windows don't belong in
   the core OpenStack API.  These should be put in the CloudServers v1.1
   documentation, but a different OpenStack provider will not impose the same
   limitations that Rackspace will.


Metadata:


   - The 5 item limit will probably need to be raised if we start using the
   metadata for hints etc, but this is no big deal
   - What is the behaviour of the metadata collection update when metadata
   is already present (merge or replace)?  Can this return the new metadata
   values instead of no-return-value?
   - Should we allow custom metadata on all items?  Should we replace some
   properties with well-known metadata?  e.g. on flavors, should the disk
   property move to openstack:disk metadata?  This way we don't need to define
   the exact set of metadata on all items for eternity (e.g. authors on
   extensions)
   - Are duplicate metadata keys allowed?
   - Can we please reserve the openstack: prefix, just like AWS reserves the
   aws: prefix


IP Addresses:


   - Instead of just supporting a public and private network, how about
   specifying  and .  This way we
   can also support more networks e.g. SAN, private VPN networks, HPC
   interconnects etc
   - Is it useful to know which IPV4 addresses and IPV6 addresses map to
   network cards?  Right now if there are multiple addresses on the same
   network, the correspondence is undefined.
   - What happens when a machine has a block of addresses?  Is each address
   listed individually?  What happens in IPv6 land where a machine could well
   have a huge block?  I think we need a netmask.


Extensions:


   - How are the XML schemas going to work with extension elements?  Right
   now, it's very free-form, which can cause problems with useful schemas.  Are
   the proposed schemas available?


Volumes:


   - Volume support is core to OpenStack (and has been since launch).  This
   needs therefore to be in the core API, not in an extension.  Or if it is an
   extension then compute, images and flavors should all be in extensions also
   (which would be cool, if a little complicated.)




Justin





On Mon, Feb 14, 2011 at 11:30 AM, John Purrier  wrote:

> Bumping this to the top of the list. One of the key deliverables for Cactus
> is a complete and usable OpenStack Compute API. This means that using only
> the API and tools that interact with the OpenStack Compute API Nova can be
> installed and configured; once running all of the Nova features and
> functions for VM, Network, and Volume provisioning and management are
> accessible and operable through the API.
>
>
>
> We need your eyes on this, to ensure that the spec is correct. Please take
> the time to review and comment, the more up-front work we do here the better
> the implementation will be.
>
>
>
> Thanks,
>
>
>
> John
>
>
>
> -Original Message-
> From: openstack-bounces+john=openstack@lists.launchpad.net [mailto:
> openstack-bounces+john=openstack@lists.launchpad.net] On Behalf Of
> Gabe Westmaas
> Sent: Wednesday, February 09, 2011 3:03 PM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] OpenStack API 1.1
>
>
>
> A blueprint and proposed spec for OpenStack API 1.1 has been posted and I
> would love to get feedback on the specification.
>
>
>
> Blueprint:
>
> https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1
>
>
>
> Spec wiki:
>
> http://wiki.openstack.org/OpenStackAPI_1-1
>
>
>
> Detailed Spec:
>
>
> http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf
>
>
>
> We'd like to finish up as much of the API implementation for cactus as
> possible, and in particular we want to make sure that we get API extensions
> correct as early as possible.  Other new features in the 1.1 spec include
> the ability to view both IPv4 and v6 addresses, migration to the OpenStack
> namespace and moving from image IDs in responses to URIs (imageRef) for the
> image.  There may be some additional changes as well, please jump in if I
> missed some.
>
>
>
> I will add details to the wiki page as needed based on discussions on the
> mailing list.
>
>
>
> Thanks, and let me know if you have questions.
>
>
>
> Gabe
>
>
>
>
>
> ___
>
> Mailing list: https://launchpad.net/~openstack
>
> Post to : openstack@lists.launchpad.net
>
> Unsubscribe : https://launchpad.net/~openstack
>
> More help   : https://help.launchpad.net/ListHelp
>

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jay Pipes
On Mon, Feb 14, 2011 at 4:39 PM, Gabe Westmaas
 wrote:
> Thanks Jay, I promise I will make more useful wikis soon :)

Hehe, sorry if I came across as grumpy. Was just doing some old
fashioned rib-poking, that's all ;)

> Jorge answered most of the questions you had, I just wanted to point out that 
> I pulled the main IPv6 items out and put them in the wiki now.  There isn't 
> too much that changed there, but wanted to point out what was new anyway.

Sweet. Thanks!
jay



Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Gabe Westmaas
Thanks Jay, I promise I will make more useful wikis soon :)  Jorge answered 
most of the questions you had, I just wanted to point out that I pulled the 
main IPv6 items out and put them in the wiki now.  There isn't too much that 
changed there, but wanted to point out what was new anyway.

Gabe

On Monday, February 14, 2011 4:08pm, "Jay Pipes"  said:

> The reason I haven't responded yet is because it's difficult for me to:
> 
> diff -u some.pdf other.pdf
> 
> In all seriousness, the wiki spec page says this about the differences
> in the 1.1 OpenStack API:
> 
> ==start wiki==
> 
> OS API 1.1 Features
> 
> IPv6
> 
> Extensions
> 
> Migrate to OpenStack namespace
> 
> ==end wiki==
> 
> There's just not much detail to go on. I had to go through the PDF to
> see what the proposed changes to the CS 1.0 API looked like.
> 
> After looking at the PDF, I have a couple suggestions for improvement,
> but overall things look good :)
> 
> 1) Give extensions a way to version themselves. Currently, the main
> fields in the API response to GET /extensions looks like this:
> 
> {
> "extensions" : [
> {
> "name" : "Public Image Extension",
> "namespace" : "http://docs.rackspacecloud.com/servers/api/ext/pie/v1.0";,
> "alias" : "RS-PIE",
> "updated" : "2011-01-22T13:25:27-06:00",
> "description" : "Adds the capability to share an image with other users.",
> "links" : [
> {
> "rel" : "describedby",
> "type" : "application/pdf",
> "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie-2011.pdf";
> },
> {
> "rel" : "describedby",
> "type" : "application/vnd.sun.wadl+xml",
> "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie.wadl";
> }
> ]
> }, ... ]}
> 
> I would suggest adding a "version" field to the extension resource
> definition so that extension developers will have a way of identifying
> the version of their extension the OpenStack deployment has installed.
> 
> 2) I would suggest leaving the "links" collection off of the main
> result returned by GET /extensions and instead only returned the
> "links" collection when a specific extension is queried with a call to
> GET /extensions/. You could even mimick the rest of the CS API
> and do a GET /extensions/detail that could return those fields?
> 
> 3) IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
> was looking at the examples on pages 29 and 30. Was there a specific
> section that spoke to IPv6 changes; I could not find one.
> 
> Other than those little suggestions, looks like a good start. Would be
> great to get the work going on the spec wiki page instead of the PDF,
> which nobody on the open source project can modify. I recognize the
> PDF comes from the internal Rackspace documents for Cloud Servers, of
> course, and that it's not your fault :)  Just encouraging a move to a
> format we can edit fluidly.
> 
> Cheers, and thanks Gabe!
> 
> -jay
> 
> On Mon, Feb 14, 2011 at 2:30 PM, John Purrier  wrote:
>> Bumping this to the top of the list. One of the key deliverables for Cactus
>> is a complete and usable OpenStack Compute API. This means that using only
>> the API and tools that interact with the OpenStack Compute API Nova can be
>> installed and configured; once running all of the Nova features and
>> functions for VM, Network, and Volume provisioning and management are
>> accessible and operable through the API.
>>
>>
>>
>> We need your eyes on this, to ensure that the spec is correct. Please take
>> the time to review and comment, the more up-front work we do here the better
>> the implementation will be.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> John
>>
>>
>>
>> -Original Message-
>> From: openstack-bounces+john=openstack@lists.launchpad.net
>> [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
>> Of Gabe Westmaas
>> Sent: Wednesday, February 09, 2011 3:03 PM
>> To: openstack@lists.launchpad.net
>> Subject: [Openstack] OpenStack API 1.1
>>
>>
>>
>> A blueprint and proposed spec for OpenStack API 1.1 has been posted and I
>> would love to get feedback on the specification.
>>
>>
>>
>> Blueprint:
>>
>> https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1
>>
>>
>>
>> Spec wiki:
>>
>> http://wiki.openstack.org/OpenStackAPI_1-1
>>
>>
>>
>> Detailed Spec:
>>
>> http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf
>>
>>
>>
>> We'd like to finish up as much of the API implementation for cactus as
>> possible, and in particular we want to make sure that we get API extensions
>> correct as early as possible.  Other new features in the 1.1 spec include
>> the ability to view both IPv4 and v6 addresses, migration to the OpenStack
>> namespace and moving from image IDs in responses to URIs (imageRef) for the
>> image.  There may be some additional changes as well, please jump in if I
>> missed some.
>>
>>
>>
>> I will add details to the wiki page as needed based on discussions on the
>> mailing list.
>>
>>
>>
>> Thanks, and let me know if you have questions.

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jorge Williams

On Feb 14, 2011, at 3:35 PM, Jay Pipes wrote:

> On Mon, Feb 14, 2011 at 4:27 PM, Jorge Williams
>  wrote:
>> On Feb 14, 2011, at 3:08 PM, Jay Pipes wrote:
>> I'll work with Anne to make the source documents available to you guys so 
>> you can do a diff etc.  Give me a couple of days to get this working, 
>> existing docs are built into the implementation, this is a nice thing 
>> because our unit tests use the samples from the docs to make sure they're 
>> always correct...anyway now  I need to separate these out.
> 
> Cool, thanks Jorge! :)
> 
>>> I would suggest adding a "version" field to the extension resource
>>> definition so that extension developers will have a way of identifying
>>> the version of their extension the OpenStack deployment has installed.
>> 
>> Do we want to deal with extension versions?  If you need to version your 
>> extension because it's not backwards compatible simply create a new 
>> extension and append a version number to it. So RS-CBS and RS-CBS2, etc. 
>> This is how things work with OpenGL which served as a reference for our 
>> extension mechanism.
> 
> Hmm, I suppose that's possible, too.  I'd prefer a unique field that
> has version information, but either could work.
> 
> Another field that could be nice is "author" or "authors" to allow the
> developers or developer company/organization to be listed?

Another great idea.  I'll get that in there.

> 
>>> 2) I would suggest leaving the "links" collection off of the main
>>> result returned by GET /extensions and instead only returned the
>>> "links" collection when a specific extension is queried with a call to
>>> GET /extensions/. You could even mimick the rest of the CS API
>>> and do a GET /extensions/detail that could return those fields?
>> 
>> I like this idea.
> 
> Cool :)
> 
>>> 3) IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
>>> was looking at the examples on pages 29 and 30. Was there a specific
>>> section that spoke to IPv6 changes; I could not find one.
>>> 
>> 
>> I'm working to flesh this out a bit. Also I've gotten a bunch of comments on 
>> Etherpad (http://etherpad.openstack.org/osapi1-1), which I'm incorporating 
>> into the spec.  Expect more comments on Etherpad, and a new revision of the 
>> spec soon --  as well as access to the source :-).  In the meantime keep 
>> comments coming.
> 
> Gotcha. Will do :)
> 
> Cheers,
> jay





Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jay Pipes
On Mon, Feb 14, 2011 at 4:27 PM, Jorge Williams
 wrote:
> On Feb 14, 2011, at 3:08 PM, Jay Pipes wrote:
> I'll work with Anne to make the source documents available to you guys so you 
> can do a diff etc.  Give me a couple of days to get this working, existing 
> docs are built into the implementation, this is a nice thing because our unit 
> tests use the samples from the docs to make sure they're always 
> correct...anyway now  I need to separate these out.

Cool, thanks Jorge! :)

>> I would suggest adding a "version" field to the extension resource
>> definition so that extension developers will have a way of identifying
>> the version of their extension the OpenStack deployment has installed.
>
> Do we want to deal with extension versions?  If you need to version your 
> extension because it's not backwards compatible simply create a new extension 
> and append a version number to it. So RS-CBS and RS-CBS2, etc. This is how 
> things work with OpenGL which served as a reference for our extension 
> mechanism.

Hmm, I suppose that's possible, too.  I'd prefer a unique field that
has version information, but either could work.

Another field that could be nice is "author" or "authors", to allow the
developers or the developing company/organization to be listed.

>> 2) I would suggest leaving the "links" collection off of the main
>> result returned by GET /extensions and instead only returning the
>> "links" collection when a specific extension is queried with a call to
>> GET /extensions/. You could even mimic the rest of the CS API
>> and do a GET /extensions/detail that could return those fields?
>
> I like this idea.

Cool :)

>> 3) The IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
>> was looking at the examples on pages 29 and 30. Was there a specific
>> section that spoke to the IPv6 changes? I could not find one.
>>
>
> I'm working to flesh this out a bit. I've also gotten a bunch of comments on 
> the Etherpad (http://etherpad.openstack.org/osapi1-1), which I'm incorporating 
> into the spec.  Expect more comments on the Etherpad, and a new revision of the 
> spec soon -- as well as access to the source :-).  In the meantime, keep the 
> comments coming.

Gotcha. Will do :)

Cheers,
jay



Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jorge Williams

On Feb 14, 2011, at 3:08 PM, Jay Pipes wrote:

> The reason I haven't responded yet is because it's difficult for me to:
> 
> diff -u some.pdf other.pdf
> 
> In all seriousness, the wiki spec page says this about the differences
> in the 1.1 OpenStack API:
> 


I'll work with Anne to make the source documents available to you guys so you 
can do a diff etc.  Give me a couple of days to get this working; the existing 
docs are built into the implementation, which is nice because our unit tests 
use the samples from the docs to make sure they're always correct. Anyway, now 
I need to separate these out.


> ==start wiki==
> 
> OS API 1.1 Features
> 
> IPv6
> 
> Extensions
> 
> Migrate to OpenStack namespace
> 
> ==end wiki==
> 
> There's just not much detail to go on. I had to go through the PDF to
> see what the proposed changes to the CS 1.0 API looked like.
> 
> After looking at the PDF, I have a couple suggestions for improvement,
> but overall things look good :)
> 
> 1) Give extensions a way to version themselves. Currently, the main
> fields in the API response to GET /extensions look like this:
> 
> {
>   "extensions" : [
>     {
>       "name" : "Public Image Extension",
>       "namespace" : "http://docs.rackspacecloud.com/servers/api/ext/pie/v1.0",
>       "alias" : "RS-PIE",
>       "updated" : "2011-01-22T13:25:27-06:00",
>       "description" : "Adds the capability to share an image with other users.",
>       "links" : [
>         {
>           "rel" : "describedby",
>           "type" : "application/pdf",
>           "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie-2011.pdf"
>         },
>         {
>           "rel" : "describedby",
>           "type" : "application/vnd.sun.wadl+xml",
>           "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie.wadl"
>         }
>       ]
>     }, ... ]}
> 
> I would suggest adding a "version" field to the extension resource
> definition so that extension developers will have a way of identifying
> the version of their extension the OpenStack deployment has installed.

Do we want to deal with extension versions?  If you need to version your 
extension because it's not backwards compatible, simply create a new extension 
and append a version number to it. So RS-CBS and RS-CBS2, etc. This is how 
things work in OpenGL, which served as a reference for our extension mechanism.

> 
> 2) I would suggest leaving the "links" collection off of the main
> result returned by GET /extensions and instead only returning the
> "links" collection when a specific extension is queried with a call to
> GET /extensions/. You could even mimic the rest of the CS API
> and do a GET /extensions/detail that could return those fields?

I like this idea.

> 
> 3) The IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
> was looking at the examples on pages 29 and 30. Was there a specific
> section that spoke to the IPv6 changes? I could not find one.
> 

I'm working to flesh this out a bit. I've also gotten a bunch of comments on 
the Etherpad (http://etherpad.openstack.org/osapi1-1), which I'm incorporating 
into the spec.  Expect more comments on the Etherpad, and a new revision of the 
spec soon -- as well as access to the source :-).  In the meantime, keep the 
comments coming.

Thanks,

jOrGe W.













Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jay Pipes
The reason I haven't responded yet is because it's difficult for me to:

diff -u some.pdf other.pdf

In all seriousness, the wiki spec page says this about the differences
in the 1.1 OpenStack API:

==start wiki==

OS API 1.1 Features

IPv6

Extensions

Migrate to OpenStack namespace

==end wiki==

There's just not much detail to go on. I had to go through the PDF to
see what the proposed changes to the CS 1.0 API looked like.

After looking at the PDF, I have a couple suggestions for improvement,
but overall things look good :)

1) Give extensions a way to version themselves. Currently, the main
fields in the API response to GET /extensions look like this:

{
  "extensions" : [
    {
      "name" : "Public Image Extension",
      "namespace" : "http://docs.rackspacecloud.com/servers/api/ext/pie/v1.0",
      "alias" : "RS-PIE",
      "updated" : "2011-01-22T13:25:27-06:00",
      "description" : "Adds the capability to share an image with other users.",
      "links" : [
        {
          "rel" : "describedby",
          "type" : "application/pdf",
          "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie-2011.pdf"
        },
        {
          "rel" : "describedby",
          "type" : "application/vnd.sun.wadl+xml",
          "href" : "http://docs.rackspacecloud.com/servers/api/ext/cs-pie.wadl"
        }
      ]
    }, ... ]}

I would suggest adding a "version" field to the extension resource
definition so that extension developers will have a way of identifying
the version of their extension the OpenStack deployment has installed.
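As a rough sketch, the suggested "version" field might look like the following. The field itself and the comparison helper are hypothetical; only the other keys come from the draft response above.

```python
# Hypothetical extension entry carrying the proposed "version" field,
# plus a client-side check against a minimum required version.
extension = {
    "name": "Public Image Extension",
    "alias": "RS-PIE",
    "namespace": "http://docs.rackspacecloud.com/servers/api/ext/pie/v1.0",
    "updated": "2011-01-22T13:25:27-06:00",
    "version": "1.0",  # the proposed addition; not in the current draft
}

def supports(ext, alias, min_version):
    """True if the extension matches the alias at or above min_version."""
    if ext["alias"] != alias:
        return False
    have = tuple(int(p) for p in ext["version"].split("."))
    need = tuple(int(p) for p in min_version.split("."))
    return have >= need

print(supports(extension, "RS-PIE", "1.0"))  # True
```

A tuple comparison like this handles multi-part versions ("1.10" > "1.9") correctly, which a plain string comparison would not.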

2) I would suggest leaving the "links" collection off of the main
result returned by GET /extensions and instead only returning the
"links" collection when a specific extension is queried with a call to
GET /extensions/. You could even mimic the rest of the CS API
and do a GET /extensions/detail that could return those fields?
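The summary/detail split suggested above could look roughly like this. The handler shape and the data are illustrative only; the actual routing would live in the Nova API layer.

```python
# Sketch of the proposed split: the summary listing (GET /extensions)
# omits "links", while the detail listing (GET /extensions/detail)
# returns the full records. Data is a stand-in, not from the spec.
EXTENSIONS = [
    {
        "name": "Public Image Extension",
        "alias": "RS-PIE",
        "updated": "2011-01-22T13:25:27-06:00",
        "description": "Adds the capability to share an image with other users.",
        "links": [
            {"rel": "describedby", "type": "application/pdf",
             "href": "http://docs.rackspacecloud.com/servers/api/ext/cs-pie-2011.pdf"},
        ],
    },
]

def list_extensions(detail=False):
    """Summary listing drops 'links'; detail listing returns everything."""
    if detail:
        return {"extensions": EXTENSIONS}
    summary = [{k: v for k, v in e.items() if k != "links"} for e in EXTENSIONS]
    return {"extensions": summary}

assert "links" not in list_extensions()["extensions"][0]
assert "links" in list_extensions(detail=True)["extensions"][0]
```

This mirrors the existing CS pattern of GET /servers vs. GET /servers/detail, which is the precedent the suggestion invokes.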

3) IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
was looking at the examples on pages 29 and 30. Was there a specific
section that spoke to IPv6 changes; I could not find one.

Other than those little suggestions, it looks like a good start. It would be
great to get the work going on the spec wiki page instead of the PDF,
which nobody on the open source project can modify. I recognize the
PDF comes from the internal Rackspace documents for Cloud Servers, of
course, and that it's not your fault :)  Just encouraging a move to a
format we can edit fluidly.

Cheers, and thanks Gabe!

-jay

On Mon, Feb 14, 2011 at 2:30 PM, John Purrier  wrote:
> Bumping this to the top of the list. One of the key deliverables for Cactus
> is a complete and usable OpenStack Compute API. This means that, using only
> the API and tools that interact with the OpenStack Compute API, Nova can be
> installed and configured; once it is running, all of the Nova features and
> functions for VM, Network, and Volume provisioning and management are
> accessible and operable through the API.
>
>
>
> We need your eyes on this, to ensure that the spec is correct. Please take
> the time to review and comment, the more up-front work we do here the better
> the implementation will be.
>
>
>
> Thanks,
>
>
>
> John
>
>
>
> -Original Message-
> From: openstack-bounces+john=openstack@lists.launchpad.net
> [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
> Of Gabe Westmaas
> Sent: Wednesday, February 09, 2011 3:03 PM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] OpenStack API 1.1
>
>
>
> A blueprint and proposed spec for OpenStack API 1.1 has been posted and I
> would love to get feedback on the specification.
>
>
>
> Blueprint:
>
> https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1
>
>
>
> Spec wiki:
>
> http://wiki.openstack.org/OpenStackAPI_1-1
>
>
>
> Detailed Spec:
>
> http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf
>
>
>
> We'd like to finish up as much of the API implementation for Cactus as
> possible, and in particular we want to make sure that we get API extensions
> correct as early as possible.  Other new features in the 1.1 spec include
> the ability to view both IPv4 and IPv6 addresses, migration to the OpenStack
> namespace, and a move from image IDs in responses to image URIs (imageRef).
> There may be some additional changes as well; please jump in if I
> missed some.
>
>
>
> I will add details to the wiki page as needed based on discussions on the
> mailing list.
>
>
>
> Thanks, and let me know if you have questions.
>
>
>
> Gabe
>
>
>
>
>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread John Purrier
Bumping this to the top of the list. One of the key deliverables for Cactus
is a complete and usable OpenStack Compute API. This means that, using only
the API and tools that interact with the OpenStack Compute API, Nova can be
installed and configured; once it is running, all of the Nova features and
functions for VM, Network, and Volume provisioning and management are
accessible and operable through the API.

 

We need your eyes on this, to ensure that the spec is correct. Please take
the time to review and comment, the more up-front work we do here the better
the implementation will be.

 

Thanks,

 

John

 

-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Gabe Westmaas
Sent: Wednesday, February 09, 2011 3:03 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack API 1.1

 

A blueprint and proposed spec for OpenStack API 1.1 has been posted and I
would love to get feedback on the specification.

 

Blueprint:

https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1

 

Spec wiki:

http://wiki.openstack.org/OpenStackAPI_1-1

 

Detailed Spec:

http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf

 

We'd like to finish up as much of the API implementation for Cactus as
possible, and in particular we want to make sure that we get API extensions
correct as early as possible.  Other new features in the 1.1 spec include
the ability to view both IPv4 and IPv6 addresses, migration to the OpenStack
namespace, and a move from image IDs in responses to image URIs (imageRef).
There may be some additional changes as well; please jump in if I
missed some.

 

I will add details to the wiki page as needed based on discussions on the
mailing list.

 

Thanks, and let me know if you have questions.

 

Gabe

 

 
