Re: [Openstack] [Doc] Talk shop: API Docs

2013-04-16 Thread Jorge Williams
We are outside by the bell at the top of the stairs.

Sent from my Motorola Smartphone on the Now Network from Sprint!


-Original message-
From: thingee thin...@gmail.com
To: Anne Gentle a...@openstack.org
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net
Sent: Tue, Apr 16, 2013 15:23:37 PDT
Subject: Re: [Openstack] [Doc] Talk shop: API Docs

Is this happening? I've been standing at the pendulum and didn't see you.

On Tuesday, April 16, 2013, Anne Gentle wrote:
Hi all,
The doc track contained great discussions but I would also like to hold an 
informal session to talk about the OpenStack API docs at the Summit.

If you're interested, please come to the tables at the top of the escalators on 
the A side of the convention center, above the pendulum at 3:00 today (Tuesday 
4/16).

Some ideas and questions for discussion started here: 
https://etherpad.openstack.org/api-docs

I also wrote a blog entry about how the API reference page is put together:
http://justwriteclick.com/2013/04/14/how-its-made-the-openstack-api-reference-page/

Looking forward to talking about API docs -
Anne


--

-Mike Perez!
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-15 Thread Jorge Williams
No, it's optional.

Token validation returns what it normally does.  The only thing belongsTo does 
is make token validation fail if the given tenant is not covered by the 
scope of the token.

-jOrGe W.

On Nov 14, 2012, at 11:18 PM, Yee, Guang wrote:

 Is belongsTo mandatory? If not, what will token validation API return?
 
 {access: [list of tokens]}
 
 ?
 
 
 Guang
 
 
 -Original Message-
 From: Jorge Williams [mailto:jorge.willi...@rackspace.com] 
 Sent: Wednesday, November 14, 2012 2:47 PM
 To: OpenStack Development Mailing List
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: Re: [openstack-dev] [Openstack] Fwd: [keystone] Tokens representing
 authorization to projects/tenants in the Keystone V3 API
 
 From an API perspective the changes required are the following:
 
   1.  The validate call returns a list of tenants instead of a single
 tenant.
 
 If the tenant id is in the URI of the API, then the validation middleware
 can assert that the tenant id is in the list of IDs.
 
 Not sure if there's any additional changes, but I don't think so.
 
 An alternative approach is to use the belongsTo query parameter in the
 validate call.  So if you know the tenantId of the resource, you can issue a
 validate with ?belongsTo=tenantId and validation fails if the tenant is not in
 the list of tenantIds for the token.  The belongsTo query parameter is in the
 validate token call in the API today:
 
 http://docs.openstack.org/api/openstack-identity-service/2.0/content/GET_validateToken_v2.0_tokens__tokenId__Admin_API_Service_Developer_Operations-d1e1356.html
 
 And we use it quite a bit in our implementation, when we validate tokens --
 that is in the case where a token may have access to multiple tenants.
 
 Thoughts?
 
 -jOrGe W.
 
 
 On Nov 14, 2012, at 3:53 PM, heckj wrote:
 
 If we're going to assert it's supported, it's an incredible disservice to
 write a spec and not implement that aspect of it, as that kind of setup
 just leads to incompatibilities and confusion when asserting how the spec
 should be used to provide interoperability.
 
 If we accept this as a spec addition, then we MUST have an implementation
 that makes it clear how we expect to interoperate with that aspect of the
 specification, even if it's a configuration option that we don't normally
 enable. If we don't test and validate it to prove interoperability, then the
 spec is a worthless digital piece of paper.
 
 So under that pretext, I welcome suggestions on how to translate the spec
 you're proposing into some concrete implementations that can be verified for
 interoperability, and that are compatible with the existing and/or upcoming
 implementations of the V3 API.
 
 -joe
 
 On Nov 14, 2012, at 1:35 PM, Joe Savak joe.sa...@rackspace.com wrote:
 Hi Joe,
 If I'm working across multiple tenants, I'd prefer one token that I
 can securely handle that proves access rights to the tenants I'm working
 with. Handling multiple tokens increases the complexity of clients needing
 to provide multi-tenancy access to an authenticated identity. It also adds
 more calls to keystone. 
 
 Again, I think that having the keystone reference implementation restrict
 tokens to 1 tenant is fine. We shouldn't have such arbitrary restrictions in
 the API contract though. It needs to be extensible and flexible to allow for
 all sorts of use cases that are likely to occur.
 
 Thanks,
 joe
 
 -Original Message-
 From: heckj [mailto:he...@mac.com] 
 Sent: Tuesday, November 13, 2012 3:59 PM
 To: Joe Savak
 Cc: OpenStack Development Mailing List; openstack@lists.launchpad.net
 (openstack@lists.launchpad.net)
 Subject: Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens
 representing authorization to projects/tenants in the Keystone V3 API
 
 Hey Joe:
 
 Currently a user scoped token doesn't include a service catalog - mostly
 because I think the service catalog generally requires tenant_id's to
 interpolate into the values to provide it. That doesn't mean we can't put
 in/include service catalog endpoints where that value doesn't need to be
 determined.
 
 I'm also questioning the value of providing a token scoped to all tenants
 associated with a user - that seems to have the same value as just using a
 user token. 
 
 In fact, even if we allow some arbitrary set of tenants to be scoped into
 a token along with a user, what on earth should be in the service catalog?
 Endpoints relevant to every possible tenant?
 
 This just seems to be a potential explosion of data that is poorly scoped
 from a security perspective.
 
 -joe
 
 On Nov 13, 2012, at 1:42 PM, Joe Savak joe.sa...@rackspace.com wrote:
 Will user-scoped token include the full service catalog? 
 
 Also, I thought the consensus was to allow the API contract to be
 flexible on how many tenants we can scope the token to. The ref impl can
 enforce 1 tenant-scoped token. Are we diverging from this?
 
 Thanks,
 joe
 
 -Original Message

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-15 Thread Jorge Williams
(inline)

On Nov 15, 2012, at 2:06 PM, Dolph Mathews wrote:

Without belongsTo, you can still validate the tenant scope client-side, so it's 
a bit redundant.

Not sure what you mean.  Can you be more specific?

However, if you're making a HEAD call to validate the token, you obviously need 
the server to do that additional validation for you.


Right.
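The distinction being drawn here (GET vs. HEAD validation) could be sketched roughly as follows. The response shapes and function names are hypothetical, not the actual Keystone middleware:

```python
# Rough illustration of the point above: a GET validate returns the token
# body, so the caller can check tenant scope itself; a HEAD validate
# returns no body, so the server must enforce ?belongsTo. The shapes and
# names here are assumptions for illustration, not the real Keystone API.

def client_side_scope_check(get_body, tenant_id):
    # GET /v2.0/tokens/{tokenId}: the body includes the token's tenant(s),
    # so the caller can verify scope without the belongsTo parameter.
    token = get_body["access"]["token"]
    return any(t["id"] == tenant_id for t in token["tenants"])

def head_scope_check(status_code):
    # HEAD /v2.0/tokens/{tokenId}?belongsTo={tenantId}: no body comes back,
    # only a status code, so the scope check has to happen server-side.
    return 200 <= status_code < 300

body = {"access": {"token": {"id": "tok", "tenants": [{"id": "tenant-1"}]}}}
client_side_scope_check(body, "tenant-1")  # True
head_scope_check(404)                      # False: tenant not in scope
```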


-Dolph


On Thu, Nov 15, 2012 at 8:20 AM, Jorge Williams 
jorge.willi...@rackspace.com wrote:
No, it's optional.

Token validation returns what it normally does.  The only thing belongsTo does 
is make token validation fail if the given tenant is not covered by the 
scope of the token.

-jOrGe W.

On Nov 14, 2012, at 11:18 PM, Yee, Guang wrote:

 Is belongsTo mandatory? If not, what will token validation API return?

 {access: [list of tokens]}

 ?


 Guang


 -Original Message-
 From: Jorge Williams [mailto:jorge.willi...@rackspace.com]
 Sent: Wednesday, November 14, 2012 2:47 PM
 To: OpenStack Development Mailing List
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: Re: [openstack-dev] [Openstack] Fwd: [keystone] Tokens representing
 authorization to projects/tenants in the Keystone V3 API

 From an API perspective the changes required are the following:

   1.  The validate call returns a list of tenants instead of a single
 tenant.

 If the tenant id is in the URI of the API, then the validation middleware
 can assert that the tenant id is in the list of IDs.

 Not sure if there's any additional changes, but I don't think so.

 An alternative approach is to use the belongsTo query parameter in the
 validate call.  So if you know the tenantId of the resource, you can issue a
 validate with ?belongsTo=tenantId and validation fails if the tenant is not in
 the list of tenantIds for the token.  The belongsTo query parameter is in the
 validate token call in the API today:

 http://docs.openstack.org/api/openstack-identity-service/2.0/content/GET_validateToken_v2.0_tokens__tokenId__Admin_API_Service_Developer_Operations-d1e1356.html

 And we use it quite a bit in our implementation, when we validate tokens --
 that is in the case where a token may have access to multiple tenants.

 Thoughts?

 -jOrGe W.


 On Nov 14, 2012, at 3:53 PM, heckj wrote:

 If we're going to assert it's supported, it's an incredible disservice to
 write a spec and not implement that aspect of it, as that kind of setup
 just leads to incompatibilities and confusion when asserting how the spec
 should be used to provide interoperability.

 If we accept this as a spec addition, then we MUST have an implementation
 that makes it clear how we expect to interoperate with that aspect of the
 specification, even if it's a configuration option that we don't normally
 enable. If we don't test and validate it to prove interoperability, then the
 spec is a worthless digital piece of paper.

 So under that pretext, I welcome suggestions on how to translate the spec
 you're proposing into some concrete implementations that can be verified for
 interoperability, and that are compatible with the existing and/or upcoming
 implementations of the V3 API.

 -joe

 On Nov 14, 2012, at 1:35 PM, Joe Savak joe.sa...@rackspace.com wrote:
 Hi Joe,
 If I'm working across multiple tenants, I'd prefer one token that I
 can securely handle that proves access rights to the tenants I'm working
 with. Handling multiple tokens increases the complexity of clients needing
 to provide multi-tenancy access to an authenticated identity. It also adds
 more calls to keystone.

 Again, I think that having the keystone reference implementation restrict
 tokens to 1 tenant is fine. We shouldn't have such arbitrary restrictions in
 the API contract though. It needs to be extensible and flexible to allow for
 all sorts of use cases that are likely to occur.

 Thanks,
 joe

 -Original Message-
 From: heckj [mailto:he...@mac.com]
 Sent: Tuesday, November 13, 2012 3:59 PM
 To: Joe Savak
 Cc: OpenStack Development Mailing List; openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens
 representing authorization to projects/tenants in the Keystone V3 API

 Hey Joe:

 Currently a user scoped token doesn't include a service catalog - mostly
 because I think the service catalog generally requires tenant_id's to
 interpolate into the values to provide it. That doesn't mean we can't put
 in/include service catalog endpoints where that value doesn't need to be
 determined.

 I'm also questioning the value of providing a token scoped to all tenants
 associated with a user - that seems to have the same value as just using a
 user token

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-14 Thread Jorge Williams
From an API perspective the changes required are the following:

1.  The validate call returns a list of tenants instead of a single 
tenant.

If the tenant id is in the URI of the API, then the validation middleware can 
assert that the tenant id is in the list of IDs.

Not sure if there's any additional changes, but I don't think so.

An alternative approach is to use the belongsTo query parameter in the validate 
call.  So if you know the tenantId of the resource, you can issue a validate 
with ?belongsTo=tenantId and validation fails if the tenant is not in the list of 
tenantIds for the token.  The belongsTo query parameter is in the validate token 
call in the API today:

http://docs.openstack.org/api/openstack-identity-service/2.0/content/GET_validateToken_v2.0_tokens__tokenId__Admin_API_Service_Developer_Operations-d1e1356.html

And we use it quite a bit in our implementation, when we validate tokens -- 
that is in the case where a token may have access to multiple tenants.
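The middleware assertion described above (the validate call returns a list of tenants, and the tenant id taken from the request URI must be in that list) might be sketched like this. The response shape, exception, and function names are illustrative assumptions, not the actual Keystone middleware:

```python
# Hypothetical sketch of the middleware check described above. The
# validate-call response is assumed to carry a list of tenants instead of
# a single tenant; all names and shapes here are illustrative only.

class TokenValidationError(Exception):
    """Token is not valid for the requested tenant."""

def assert_tenant_in_scope(validation_response, tenant_id_from_uri):
    # Collect the ids of all tenants the token is scoped to.
    tenants = validation_response["access"]["token"]["tenants"]
    tenant_ids = {t["id"] for t in tenants}
    # Fail validation if the tenant taken from the URI is not among them,
    # mirroring the server-side ?belongsTo=tenantId behavior.
    if tenant_id_from_uri not in tenant_ids:
        raise TokenValidationError(
            "tenant %s not covered by token scope" % tenant_id_from_uri)

response = {"access": {"token": {"id": "abc123", "tenants": [
    {"id": "tenant-1"}, {"id": "tenant-2"}]}}}
assert_tenant_in_scope(response, "tenant-1")  # passes silently
```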

Thoughts?

-jOrGe W.


On Nov 14, 2012, at 3:53 PM, heckj wrote:

 If we're going to assert it's supported, it's an incredible disservice to 
 write a spec and not implement that aspect of it, as that kind of setup 
 just leads to incompatibilities and confusion when asserting how the spec 
 should be used to provide interoperability.
 
 If we accept this as a spec addition, then we MUST have an implementation 
 that makes it clear how we expect to interoperate with that aspect of the 
 specification, even if it's a configuration option that we don't normally 
 enable. If we don't test and validate it to prove interoperability, then the 
 spec is a worthless digital piece of paper.
 
 So under that pretext, I welcome suggestions on how to translate the spec 
 you're proposing into some concrete implementations that can be verified for 
 interoperability, and that are compatible with the existing and/or upcoming 
 implementations of the V3 API.
 
 -joe
 
 On Nov 14, 2012, at 1:35 PM, Joe Savak joe.sa...@rackspace.com wrote:
 Hi Joe,
  If I'm working across multiple tenants, I'd prefer one token that I can 
 securely handle that proves access rights to the tenants I'm working with. 
 Handling multiple tokens increases the complexity of clients needing to 
 provide multi-tenancy access to an authenticated identity. It also adds more 
 calls to keystone. 
 
 Again, I think that having the keystone reference implementation restrict 
 tokens to 1 tenant is fine. We shouldn't have such arbitrary restrictions in 
 the API contract though. It needs to be extensible and flexible to allow for 
 all sorts of use cases that are likely to occur.
 
 Thanks,
 joe
 
 -Original Message-
 From: heckj [mailto:he...@mac.com] 
 Sent: Tuesday, November 13, 2012 3:59 PM
 To: Joe Savak
 Cc: OpenStack Development Mailing List; openstack@lists.launchpad.net 
 (openstack@lists.launchpad.net)
 Subject: Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing 
 authorization to projects/tenants in the Keystone V3 API
 
 Hey Joe:
 
 Currently a user scoped token doesn't include a service catalog - mostly 
 because I think the service catalog generally requires tenant_id's to 
 interpolate into the values to provide it. That doesn't mean we can't put 
 in/include service catalog endpoints where that value doesn't need to be 
 determined.
 
 I'm also questioning the value of providing a token scoped to all tenants 
 associated with a user - that seems to have the same value as just using a 
 user token. 
 
 In fact, even if we allow some arbitrary set of tenants to be scoped into a 
 token along with a user, what on earth should be in the service catalog? 
 Endpoints relevant to every possible tenant?
 
 This just seems to be a potential explosion of data that is poorly scoped 
 from a security perspective.
 
 -joe
 
 On Nov 13, 2012, at 1:42 PM, Joe Savak joe.sa...@rackspace.com wrote:
 Will user-scoped token include the full service catalog? 
 
 Also, I thought the consensus was to allow the API contract to be flexible 
 on how many tenants we can scope the token to. The ref impl can enforce 1 
 tenant-scoped token. Are we diverging from this?
 
 Thanks,
 joe
 
 -Original Message-
 From: openstack-bounces+joe.savak=rackspace@lists.launchpad.net 
 [mailto:openstack-bounces+joe.savak=rackspace@lists.launchpad.net] On 
 Behalf Of heckj
 Sent: Tuesday, November 13, 2012 1:34 PM
 To: OpenStack Development Mailing List
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens 
 representing authorization to projects/tenants in the Keystone V3 API
 
 
 On Nov 13, 2012, at 11:01 AM, Jorge Williams jorge.willi...@rackspace.com 
 wrote:
 On Nov 13, 2012, at 11:35 AM, heckj wrote:
 So maintaining a token scoped to just the user, and a mechanism to scope 
 it to a tenant sound like all goodness. We can absolutely keep the API

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-13 Thread Jorge Williams
 the same scope won't they? In
 which case there is no need for both concepts.
 
 let's compare with Kerberos:  In my view an unscoped token is
 comparable with a ticket-granting ticket:  it cannot be used with any
 service other than the KDC, and it can only be used to get service
 tickets. A service ticket can only be used with a specific service.  If
 that service gets compromised, any tickets it has are useless for access
 to other resources.
 
 
 If an unscoped token can be used against a wide array of services, we
 have just provided a path for an elevation of privileges attack. If I
 know that a service consumes tokens which can be used on a wide number
 of other services, I can target my attacks against that service in order
 to get access everywhere.
 
 If we are going to provide this functionality, it should be turned off
 by default.
 
 
 Comments please
 
 regards
 
 David
 
 On 23/10/2012 06:25, Jorge Williams wrote:
 Here's my view:
 
 On making the default token a configuration option:  Like the idea.
  Disabling the option by default.  That's fine too.
 
 On scoping a token to a specific endpoint:  That's fine, though I
 believe that that's in the API today.  Currently, the way that we scope
 tokens to endpoints is by validating against the service catalog. I'm
 not sure if the default middleware checks for this yet, but the Repose
 middleware does.  If you try to use a token in an endpoint that's not in
 the service catalog the request fails -- well, if the check is turned
 on.
 
 Obviously, I'd like the idea of scoping a single token to multiple
 tenants / endpoints.
 
 I don't like the idea of calling tokens sloppy tokens -- it's
 confusing.   All you have to say is that a token has a scope -- and the
 scope of the token is the set of resources that the token can provide
 access to.  You can limit the scope of a token to a tenant, to an
 endpoint, to a set of endpoints or tenants, etc. -- what limits you place
 on the scope of an individual token should be up to the operator.
 
 Keep in mind that as we start digging into delegation and fine grained
 authorization (after Grizzly, I'm sure), we'll end up with tokens that
 have a scope of a subset of resources in a single or multiple tenants.
  So calling them sloppy now is just confusing.  Simply stating that a
 token has a scope (as I've defined above) should suffice.  This is part
 of the reason why I've never liked the term unscoped token, because an
 unscoped token does have a scope. It just so happens that the scope of
 that token is the resource that provides a list of available tenants.
 
 -jOrGe W.
 
 On Oct 22, 2012, at 9:57 PM, Adam Young wrote:
 
 Are you guys +1 ing the original Idea, my suggestion to make it
 optional, the fact that I think we should call these sloppy tokens?
 
 On 10/22/2012 03:40 PM, Jorge Williams wrote:
 +1 here too.
 
 At the end of the day, we'd like the identity API to be flexible
 enough to allow the token to be scoped in a manner that the deployer
 sees fit.  What the keystone implementation does by default is a
 different matter -- and disabling multiple-tenant scope by default
 would be fine by me.
 
 -jOrGe W.
 
 
 On Oct 21, 2012, at 11:10 AM, Joe Savak wrote:
 
 +1. ;)
 
 So the issue is that the v2 API contract allows a token to be scoped
 to multiple tenants. For v3, I'd like to have the same flexibility.
 I don't see security issues, as if a token were to be sniffed you
 can change the password of the account using it and use those creds
 to scope tokens to any tenant you wish.
 Scope should always be kept as limited as possible. Personally, I
 don't feel like limiting the tenant list makes much difference.  The
 more I think about it, the real benefit comes from limiting the
 endpoints.
 
 
 
 
 
 On Oct 20, 2012, at 21:07, Adam Young ayo...@redhat.com wrote:
 
 On 10/20/2012 01:50 PM, heckj wrote:
 I sent this to the openstack-dev list, and thought I'd double post
 this onto the openstack list at Launchpad for additional feedback.
 
 -joe
 
 Begin forwarded message:
 *From: *heckj he...@mac.com
 *Subject: **[openstack-dev] [keystone] Tokens representing
 authorization to projects/tenants in the Keystone V3 API*
 *Date: *October 19, 2012 1:51:16 PM PDT
 *To: *OpenStack Development Mailing List openstack-...@lists.openstack.org
 *Reply-To: *OpenStack Development Mailing List openstack-...@lists.openstack.org
 
 The topic of what a token can or can't represent for the upcoming
 V3 Keystone API  came up - and I wanted to share the conversation
 a bit more broadly to get feedback.
 
 
 A bit of history:
 
 In the V2 API, when you authenticated with just a username and
 password, the token that was provided wasn't entirely clearly
 defined. The reference implementation that Keystone used was to
 create what's been called an 'unscoped' token - which was
 generally

Re: [Openstack] Fwd: [openstack-dev] [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-10-23 Thread Jorge Williams
I'm okay with Starting Tokens.

-jOrGe W.

On Oct 23, 2012, at 7:25 AM, Adam Young wrote:

On 10/23/2012 01:25 AM, Jorge Williams wrote:
Here's my view:

On making the default token a configuration option:  Like the idea.  Disabling 
the option by default.  That's fine too.

On scoping a token to a specific endpoint:  That's fine, though I believe that 
that's in the API today.  Currently, the way that we scope tokens to endpoints 
is by validating against the service catalog. I'm not sure if the default 
middleware checks for this yet, but the Repose middleware does.  If you try to 
use a token in an endpoint that's not in the service catalog the request fails 
-- well, if the check is turned on.

Obviously, I'd like the idea of scoping a single token to multiple tenants / 
endpoints.

I don't like the idea of calling tokens sloppy tokens -- it's confusing.   
All you have to say is that a token has a scope -- and the scope of the token 
is the set of resources that the token can provide access to.  You can limit 
the scope of a token to a tenant, to an endpoint, to a set of endpoints or 
tenants, etc. -- what limits you place on the scope of an individual token should 
be up to the operator.

Keep in mind that as we start digging into delegation and fine grained 
authorization (after Grizzly, I'm sure), we'll end up with tokens that have a 
scope of a subset of resources in a single or multiple tenants.  So calling 
them sloppy now is just confusing.  Simply stating that a token has a scope (as 
I've defined above) should suffice.  This is part of the reason why I've never 
liked the term unscoped token, because an unscoped token does have a scope. 
It just so happens that the scope of that token is the resource that provides a 
list of available tenants.
This is a pretty good distinction.  What we were calling Unscoped is, to me, 
the equivalent of a TGT in Kerberos:  a starting point, that has not been 
specified to any resources.  I'd be willing to entertain a different name than 
Unscoped.  Let me throw out Starting Tokens as a straw man, and we can beat 
it up to come up with a better term.

Sloppy was never meant seriously, but more a way to tweak the noses of the 
project members named Joe.



-jOrGe W.

On Oct 22, 2012, at 9:57 PM, Adam Young wrote:

Are you guys +1 ing the original Idea, my suggestion to make it optional, the 
fact that I think we should call these sloppy tokens?

On 10/22/2012 03:40 PM, Jorge Williams wrote:
+1 here too.

At the end of the day, we'd like the identity API to be flexible enough to 
allow the token to be scoped in a manner that the deployer sees fit.  What the 
keystone implementation does by default is a different matter -- and disabling 
multiple-tenant scope by default would be fine by me.

-jOrGe W.


On Oct 21, 2012, at 11:10 AM, Joe Savak wrote:

+1. ;)

So the issue is that the v2 API contract allows a token to be scoped to 
multiple tenants. For v3, I'd like to have the same flexibility. I don't see 
security issues, as if a token were to be sniffed you can change the password 
of the account using it and use those creds to scope tokens to any tenant you 
wish.
Scope should always be kept as limited as possible. Personally, I don't feel 
like limiting the tenant list makes much difference.  The more I think about 
it, the real benefit comes from limiting the endpoints.





On Oct 20, 2012, at 21:07, Adam Young ayo...@redhat.com wrote:

On 10/20/2012 01:50 PM, heckj wrote:
I sent this to the openstack-dev list, and thought I'd double post this onto 
the openstack list at Launchpad for additional feedback.

-joe

Begin forwarded message:
From: heckj he...@mac.com
Subject: [openstack-dev] [keystone] Tokens representing authorization to 
projects/tenants in the Keystone V3 API
Date: October 19, 2012 1:51:16 PM PDT
To: OpenStack Development Mailing List openstack-...@lists.openstack.org
Reply-To: OpenStack Development Mailing List openstack-...@lists.openstack.org

The topic of what a token can or can't represent for the upcoming V3 Keystone 
API  came up - and I wanted to share the conversation a bit more broadly to get 
feedback.


A bit of history:

In the V2 API, when you authenticated with just a username and password, the 
token that was provided wasn't entirely clearly defined. The reference 
implementation that Keystone used was to create what's been called an 
'unscoped' token - which was generally limited to only being able to get a list 
of possible tenants/projects and the capability of getting a token specific to 
a user & tenant/project (what's been called a scoped token).

Likewise, the reference implementation of the rest of the OpenStack projects 
all require tenant information to be included within the token, as that token 
was the identity reference information - and most OpenStack services were 
wanting to know

Re: [Openstack] Fwd: [openstack-dev] [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-10-22 Thread Jorge Williams
+1 here too.

At the end of the day, we'd like the identity API to be flexible enough to 
allow the token to be scoped in a manner that the deployer sees fit.  What the 
keystone implementation does by default is a different matter -- and disabling 
multiple-tenant scope by default would be fine by me.

-jOrGe W.


On Oct 21, 2012, at 11:10 AM, Joe Savak wrote:

+1. ;)

So the issue is that the v2 API contract allows a token to be scoped to 
multiple tenants. For v3, I'd like to have the same flexibility. I don't see 
security issues, as if a token were to be sniffed you can change the password 
of the account using it and use those creds to scope tokens to any tenant you 
wish.



On Oct 20, 2012, at 21:07, Adam Young ayo...@redhat.com wrote:

On 10/20/2012 01:50 PM, heckj wrote:
I sent this to the openstack-dev list, and thought I'd double post this onto 
the openstack list at Launchpad for additional feedback.

-joe

Begin forwarded message:
From: heckj he...@mac.com
Subject: [openstack-dev] [keystone] Tokens representing authorization to 
projects/tenants in the Keystone V3 API
Date: October 19, 2012 1:51:16 PM PDT
To: OpenStack Development Mailing List openstack-...@lists.openstack.org
Reply-To: OpenStack Development Mailing List openstack-...@lists.openstack.org

The topic of what a token can or can't represent for the upcoming V3 Keystone 
API  came up - and I wanted to share the conversation a bit more broadly to get 
feedback.


A bit of history:

In the V2 API, when you authenticated with just a username and password, the 
token that was provided wasn't entirely clearly defined. The reference 
implementation that Keystone used was to create what's been called an 
'unscoped' token - which was generally limited to only being able to get a list 
of possible tenants/projects and the capability of getting a token specific to 
a user & tenant/project (what's been called a scoped token).

Likewise, the reference implementation of the rest of the OpenStack projects 
all require tenant information to be included within the token, as that token 
was the identity reference information - and most OpenStack services were 
wanting to know the tenant associated with the token for 
authorization/ownership purposes.

Apparently Rackspace's internal implementation provided a token that was 
immediately valid for all possible tenants to which the user was associated, 
and presumably their internal implementations of openstack do whatever work is 
appropriate to discern and provide that information to the various openstack 
services.

The quandary:

In the V3 API, we started off with, and currently define the token as being 
specifically mandated to a single tenant, with a new requirement that if you 
authorize with just a username and password, a default tenant is used. If for 
some reason you have no tenant associated with the userid, the authorization is 
to be refused. If the user is associated with more than one tenant/project, 
it's possible to use the token to get a list of other tenants/projects and 
request a new token specific to one of those other tenant/projects, but the 
implementation is expected to respect and provide a default.

I would like to make default tenant a configuration option, and have it 
disabled by default.  Unscoped tokens are a very useful construct.  In the case 
where the user has many roles across a multitude of projects, it is possible to 
create huge tokens.  I would prefer unscoped tokens to remain, and to be 
associated with no tenant.  The only operation Keystone should provide with 
them is the ability to enumerate tenants, so something like Horizon can then 
request an appropriately scoped token.
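The flow described above (a starting/unscoped token that can only enumerate tenants, which a client such as Horizon then exchanges for a scoped token) might look roughly like this. Everything here is a toy model under made-up names, not the Keystone implementation:

```python
# Toy model of the exchange flow described above: authenticate to get an
# unscoped (starting) token, enumerate tenants with it, then rescope it
# to one chosen tenant. All names and data are hypothetical.

USERS = {"alice": {"password": "s3cret", "tenants": ["tenant-1", "tenant-2"]}}

def authenticate(user, password):
    # Returns a starting token: valid, but with no tenant attached.
    assert USERS[user]["password"] == password
    return {"user": user, "tenant": None}

def list_tenants(token):
    # The one operation a starting token permits.
    return USERS[token["user"]]["tenants"]

def rescope(token, tenant):
    # Exchange the starting token for a token scoped to a single tenant.
    if tenant not in list_tenants(token):
        raise ValueError("user has no role in %s" % tenant)
    return {"user": token["user"], "tenant": tenant}

start = authenticate("alice", "s3cret")
scoped = rescope(start, "tenant-1")
```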

I am also in favor of limiting the scope of a token to an endpoint.  Even 
more-so than tenants, scoping a token to an end point increases security.  Once 
a token has been scoped to an endpoint, it can only be used on that endpoint.  
If an endpoint gets compromised, the damage is limited to resources that 
endpoint already has access to.  This, in conjunction with pre-auths, could 
allow a user to perform an action with a minimum of risk in a public cloud 
environment.
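A minimal sketch of the endpoint check described above, with illustrative field names (the real token format and catalog lookup are not shown here):

```python
# Minimal sketch of endpoint scoping as argued above: a token carries the
# endpoints it is scoped to (e.g. derived from the service catalog), and
# each service rejects any token whose scope does not include its own
# endpoint. Field names are assumptions for illustration.

def endpoint_allowed(token, service_endpoint):
    # If one endpoint is compromised, tokens it has seen are useless
    # elsewhere, because every other endpoint performs this same check.
    return service_endpoint in token.get("endpoints", ())

token = {"id": "tok-1", "endpoints": ["https://compute.example.com"]}
endpoint_allowed(token, "https://compute.example.com")  # True
endpoint_allowed(token, "https://storage.example.com")  # False
```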



A few folks from Rackspace touched on this at the very tail end of the V3 API 
review session on Thursday, bringing up that they had an issue with the token 
being scoped to a single tenant. Since this has significant implications to 
both security and a potential user experience flow, I wanted to bring the issue 
up across the broader community for discussion.

The request outstanding:

Rackspace folks are requesting that the token not be limited to a single 
tenant/project, but instead provide a list of potential tenants against which 
the token should be considered valid.
I would like the world to know that we are affectionately calling such tokens 
"sloppy tokens" and Joe Savak has 

Re: [Openstack] Fwd: [openstack-dev] [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-10-22 Thread Jorge Williams
Here's my view:

On making the default token a configuration option:  Like the idea.  Disabling 
the option by default.  That's fine too.

On scoping a token to a specific endpoint:  That's fine, though I believe that 
that's in the API today.  Currently, the way that we scope tokens to endpoints 
is by validating against the service catalog. I'm not sure if the default 
middleware checks for this yet, but the Repose middleware does.  If you try to 
use a token in an endpoint that's not in the service catalog the request fails 
-- well, if the check is turned on.
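That catalog-based check -- rejecting a token at an endpoint that isn't in its service catalog -- might be sketched roughly like this. The catalog shape below is a simplified stand-in, not the exact Keystone structure:

```python
def endpoint_allowed(token_catalog, request_url):
    """Return True if request_url falls under any endpoint URL listed in
    the token's service catalog.

    Sketch only: real middleware (e.g. Repose) would also normalize
    URLs and take regions/interfaces into account.
    """
    for service in token_catalog:
        for endpoint in service.get("endpoints", []):
            url = endpoint.get("publicURL", "")
            if url and request_url.startswith(url):
                return True
    return False

catalog = [{"type": "compute",
            "endpoints": [{"publicURL": "https://compute.example.com/v2/t1"}]}]

assert endpoint_allowed(catalog, "https://compute.example.com/v2/t1/servers")
assert not endpoint_allowed(catalog, "https://volume.example.com/v1/t1")
```

With a check like this in the validation path, a token leaked from one endpoint is useless at any endpoint outside its catalog.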

Obviously, I'd like the idea of scoping a single token to multiple tenants / 
endpoints.

I don't like the idea of calling tokens "sloppy tokens" -- it's confusing.  
All you have to say is that a token has a scope -- and the scope of the token 
is the set of resources that the token can provide access to.  You can limit 
the scope of a token to a tenant, to an endpoint, to a set of endpoints or 
tenants, etc. -- what limits you place on the scope of an individual token 
should be up to the operator.

Keep in mind that as we start digging into delegation and fine-grained 
authorization (after Grizzly, I'm sure), we'll end up with tokens that have a 
scope of a subset of resources in a single or multiple tenants.  So calling 
them "sloppy" now is just confusing.  Simply stating that a token has a scope 
(as I've defined above) should suffice.  This is part of the reason why I've 
never liked the term "unscoped token", because an unscoped token does have a 
scope.  It just so happens that the scope of that token is the resource that 
provides a list of available tenants.

-jOrGe W.

On Oct 22, 2012, at 9:57 PM, Adam Young wrote:

Are you guys +1-ing the original idea, my suggestion to make it optional, or 
the fact that I think we should call these "sloppy tokens"?

On 10/22/2012 03:40 PM, Jorge Williams wrote:
+1 here too.

At the end of the day, we'd like the identity API to be flexible enough to 
allow the token to be scoped in a manner that the deployer sees fit.  What the 
keystone implementation does by default is a different matter -- and disabling 
multiple-tenant scope by default would be fine by me.

-jOrGe W.


On Oct 21, 2012, at 11:10 AM, Joe Savak wrote:

+1. ;)

So the issue is that the v2 API contract allows a token to be scoped to 
multiple tenants. For v3, I'd like to have the same flexibility. I don't see 
security issues, as if a token were to be sniffed you can change the password 
of the account using it and use those creds to scope tokens to any tenant you 
wish.
Scope should always be kept as limited as possible. Personally, I don't feel 
like limiting the tenant list makes much difference.  The more I think about 
it, the real benefit comes from limiting the endpoints.





On Oct 20, 2012, at 21:07, Adam Young 
ayo...@redhat.com wrote:

On 10/20/2012 01:50 PM, heckj wrote:
I sent this to the openstack-dev list, and thought I'd double post this onto 
the openstack list at Launchpad for additional feedback.

-joe

Begin forwarded message:
From: heckj he...@mac.com
Subject: [openstack-dev] [keystone] Tokens representing authorization to 
projects/tenants in the Keystone V3 API
Date: October 19, 2012 1:51:16 PM PDT
To: OpenStack Development Mailing List openstack-...@lists.openstack.org
Reply-To: OpenStack Development Mailing List openstack-...@lists.openstack.org

The topic of what a token can or can't represent for the upcoming V3 Keystone 
API  came up - and I wanted to share the conversation a bit more broadly to get 
feedback.


A bit of history:

In the V2 API, when you authenticated with just a username and password, the 
token that was provided wasn't entirely clearly defined. The reference 
implementation that Keystone used was to create what's been called an 
'unscoped' token - which was generally limited to only being able to get a list 
of possible tenants/projects and the capability of getting a token specific to 
a user and tenant/project (what's been called a scoped token)

Likewise, the reference implementations of the rest of the OpenStack projects 
all require tenant information to be included within the token, as that token 
was the identity reference information -- and most OpenStack services were 
wanting to know the tenant associated with the token for 
authorization/ownership purposes.

Apparently Rackspace's internal implementation provided a token that was 
immediately valid for all possible tenants to which the user was associated, 
and presumably their internal implementations of OpenStack do whatever work is 
appropriate to discern and provide that information to the various OpenStack 
services.

The quandary:

In the V3 API, we started off with, and currently define the token as being 
specifically mandated to a single tenant, with a new requirement that if you 
authorize with just

Re: [Openstack] [keystone] Rate limit middleware

2012-07-11 Thread Jorge Williams
More info on the Repose rate limiter here:

http://wiki.openrepose.org/display/REPOSE/Rate+Limiting+Filter

The rate limiter has the concept of limit groups -- you can specify rate limits 
for a particular group, then introspect the request to see which group 
applies.  Typically a user can be placed in a particular group, etc.  When rate 
limiting Keystone, you might want to rate limit authentication attempts.  The 
issue there is that the user has not gone through an auth process, so you can't 
necessarily ID the user. We use the concept of quality, where different 
middleware components take a guess about which limit group to use.  This allows 
the rate limiter to rate limit by, say, IP address, data in the URI, or the 
content of the message, etc.
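A minimal sketch of the quality idea: each middleware component asserts a limit group together with a confidence value, and the highest-quality claim wins. The group names and values below are made up for illustration and are not Repose's actual header format:

```python
def pick_limit_group(claims):
    """Choose the limit group with the highest quality value.

    'claims' maps group name -> quality in [0, 1], as asserted by
    different middleware components (sketch of the Repose idea).
    Returns None when no component made a claim.
    """
    if not claims:
        return None
    return max(claims, key=claims.get)

# An identity filter that recognized the user asserts high quality;
# an IP-based fallback filter asserts low quality.
claims = {"authenticated_user": 0.9, "ip_address": 0.1}
assert pick_limit_group(claims) == "authenticated_user"

# Before authentication succeeds, only the IP-based claim exists.
assert pick_limit_group({"ip_address": 0.1}) == "ip_address"
```

This is what lets the same rate-limiting filter fall back gracefully from per-user limits to per-IP limits on unauthenticated requests.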

See:  http://wiki.openrepose.org/display/REPOSE/Header+Value+Quality
And:   http://wiki.openrepose.org/display/REPOSE/Identity+Filters

Sorry, our docs are a little sparse.

-jOrGe W.


On Jul 11, 2012, at 10:56 AM, Dolph Mathews wrote:

REPOSE would be worth taking a look at, as well (includes rate limiting):

  https://github.com/rackspace/repose
  http://openrepose.org/documentation.html

-Dolph

On Wed, Jul 11, 2012 at 9:19 AM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:
On Wed, 2012-07-11 at 01:50 +0200, Rafael Durán Castañeda wrote:
 I'm working on a blueprint [1] and implementation [2] doing rate limit
 middleware for Keystone; after discussing it at keystone's meeting
 today I was suggested to ask for some feedback from the community.

Have you taken a look at Turnstile and the related integration package,
nova_limits?  Unfortunately, trunk Turnstile doesn't support
multiprocess, but I intend to address that as soon as job
responsibilities permit.

URLs:

  * http://pypi.python.org/pypi/turnstile
  * http://pypi.python.org/pypi/nova_limits
  * https://github.com/klmitch/turnstile
  * https://github.com/klmitch/nova_limits
--
Kevin L. Mitchell 
kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] WADL [was: v3 API draft (update and questions to the community)]

2012-06-15 Thread Jorge Williams
Totally agree.

Note that we are using WADL today to create documentation artifacts.  So 
http://api.openstack.org/ is generated from WADLs, as are good chunks of the 
books on http://docs.openstack.org/.  We're also using WADL for validation and 
testing at Rackspace internally, and I'm sure other folks are doing similar 
things.  There are definite practical advantages *today* to having a machine 
readable artifact.  Given that, I don't think Liem's request for a WADL is 
unreasonable.  Sure, WADL has its problems, and once a viable alternative 
emerges, I'm sure it will be supported.  In fact, having a machine readable 
artifact has the advantage in this regard, in that we can auto-convert away 
from WADL when/if we need to.

-jOrGe W.

On Jun 15, 2012, at 7:35 AM, Doug Davis wrote:


I don't see this as an either-or type of thing.

Totally agree with Mark that the APIs need to be more clearly documented and 
that should be independent of any kind of IDL (a la WADL) artifact.  I say this 
mainly because I think we always need to have something that's human readable, 
not just machine readable.  There will always be semantics that cannot be 
expressed via the machine readable artifacts. Having said that, there are 
people that like to have IDL-like artifacts for some kind of tooling.  So, 
along with the well-documented APIs should be whatever artifacts can make 
people's lives easier.  This means XSD, WADL, WSDL, etc... whatever -- pick your 
favorite.

No matter what artifact you choose to guide your coding (even if it's just the 
well-documented, human-readable API doc), you're still bound to that 
particular version of the APIs, which means a change in the APIs/server-code 
might break your client.  In this respect I don't think WADL or docs are more 
or less brittle than one another.  To me the key aspects are the extensibility 
points.  Once the APIs are deemed 'stable', we just need to make sure that new 
stuff is backwards compatible, which usually means defining and leveraging well 
placed extensibility points.

thanks
-Doug
__
STSM |  Standards Architect  |  IBM Software Group
(919) 254-6905  |  IBM 444-6905  |  d...@us.ibm.com
The more I'm around some people, the more I like my dog.


Mark Nottingham m...@mnot.net
Sent by: 
openstack-bounces+dug=us.ibm@lists.launchpad.net

06/14/2012 08:20 PM


To
Nguyen, Liem Manh liem_m_ngu...@hp.com
cc
openstack@lists.launchpad.net
Subject
[Openstack] WADL [was: v3 API draft (update and questions to the community)]







Hi Liem,

I'm one of the folks who helped Marc get WADL off of the ground. At the time, 
my use cases were exactly as you describe: documentation (e.g., 
https://github.com/mnot/wadl_stylesheets) and testing.

Even back then, there was a lot of discussion in the community; e.g., see:
  http://bitworking.org/news/193/Do-we-need-WADL
  
http://old.nabble.com/Is-it-a-good-idea-to-make-your-WADL-available--tc6087155r1.html
  
http://www.25hoursaday.com/weblog/CommentView.aspx?guid=f88dc5a6-0aff-44ca-ba42-38c651612092

I think many of the concerns that were expressed then are still valid -- some 
even within these limited uses. In no particular order:

* People can and will use WADL to represent a contract to a service (really, 
an IDL), and bake client code to a snapshot of it in time. While it's true 
that the client and server need to have agreement about what goes on the wire 
and what it means, the assumptions around what guarantees WADL makes are not 
well-thought-out (in a manner similar to WSDL), making clients generated from 
it very tightly bound to the snapshot of the server they saw at some point in 
the past. This, in turn, makes evolution / extension of the API a lot harder 
than it needs to be.

* WADL's primitives are XML Schema datatypes. This is a horrible match for 
dynamic languages like Python.

* WADL itself embodies certain patterns of use that tend to show through if you 
design for it; these may or may not be the best patterns for a particular use 
case. This is because HTTP and URLs are very flexible things, and it isn't 
expressive enough to cover all of that space. As a result, you can end up with 
convoluted APIs that are designed to fit WADL, rather than do the task at hand.

From what I've seen, many developers in OpenStack are profoundly uninterested 
in working with WADL. YMMV, but AFAICT this results in the WADL being done by 
other folks, and not matching the reality of the implementation; not a good 
situation for anyone.

What we need, I think, is a specification of the API that's precise, 
unambiguous, and easy to understand and maintain. I personally don't think WADL 
is 

Re: [Openstack] WADL [was: v3 API draft (update and questions to the community)]

2012-06-15 Thread Jorge Williams
All of the XSDs produced so far use XSD 1.1.

-jOrGe W.


On Jun 15, 2012, at 8:57 AM, Christopher B Ferris wrote:

+1

Over-reliance on WADL will only make it more challenging to gracefully evolve 
the APIs such that implementations can be forwards and/or backwards compatible, 
especially when exchanging XML based on an XSD that is not carefully crafted 
with proper extensibility points incorporated throughout the schema design, 
unless we were to adopt XSD1.1 which has an optional open content model (but 
which has not yet seen wide adoption, sadly).

Cheers,

Christopher Ferris
IBM Distinguished Engineer, CTO Industry and Cloud Standards
Member, IBM Academy of Technology
IBM Software Group, Standards Strategy
email: chris...@us.ibm.com
Twitter: christo4ferris
phone: +1 508 234 2986


-openstack-bounces+chrisfer=us.ibm@lists.launchpad.net wrote: -
To: Nguyen, Liem Manh liem_m_ngu...@hp.com
From: Mark Nottingham
Sent by: 
openstack-bounces+chrisfer=us.ibm@lists.launchpad.net
Date: 06/14/2012 08:34PM
Cc: openstack@lists.launchpad.net
Subject: [Openstack] WADL [was: v3 API draft (update and questions to the 
community)]

Hi Liem,

I'm one of the folks who helped Marc get WADL off of the ground. At the time, 
my use cases were exactly as you describe: documentation (e.g., 
https://github.com/mnot/wadl_stylesheets) and testing.

Even back then, there was a lot of discussion in the community; e.g., see:
   http://bitworking.org/news/193/Do-we-need-WADL
   
http://old.nabble.com/Is-it-a-good-idea-to-make-your-WADL-available--tc6087155r1.html
   
http://www.25hoursaday.com/weblog/CommentView.aspx?guid=f88dc5a6-0aff-44ca-ba42-38c651612092

I think many of the concerns that were expressed then are still valid -- some 
even within these limited uses. In no particular order:

* People can and will use WADL to represent a contract to a service (really, 
an IDL), and bake client code to a snapshot of it in time. While it's true 
that the client and server need to have agreement about what goes on the wire 
and what it means, the assumptions around what guarantees WADL makes are not 
well-thought-out (in a manner similar to WSDL), making clients generated from 
it very tightly bound to the snapshot of the server they saw at some point in 
the past. This, in turn, makes evolution / extension of the API a lot harder 
than it needs to be.

* WADL's primitives are XML Schema datatypes. This is a horrible match for 
dynamic languages like Python.

* WADL itself embodies certain patterns of use that tend to show through if you 
design for it; these may or may not be the best patterns for a particular use 
case. This is because HTTP and URLs are very flexible things, and it isn't 
expressive enough to cover all of that space. As a result, you can end up with 
convoluted APIs that are designed to fit WADL, rather than do the task at hand.

From what I've seen, many developers in OpenStack are profoundly uninterested 
in working with WADL. YMMV, but AFAICT this results in the WADL being done by 
other folks, and not matching the reality of the implementation; not a good 
situation for anyone.

What we need, I think, is a specification of the API that's precise, 
unambiguous, and easy to understand and maintain. I personally don't think WADL 
is up to that task (at least as a primary artefact), so (as I mentioned), I'm 
going to be proposing another approach.

Cheers,



On 15/06/2012, at 2:08 AM, Nguyen, Liem Manh wrote:

 IMHO, a well-documented WADL + XSD would say a thousand words (maybe more)... 
  And can serve as a basis for automated testing as well.  I understand that 
 the v3 API draft is perhaps not at that stage yet; but, would like to see a 
 WADL + XSD set as soon as the concepts are solidified.

 Liem

 -Original Message-
 From: 
 openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net
  [mailto:openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net] On 
 Behalf Of Mark Nottingham
 Sent: Tuesday, June 12, 2012 8:43 PM
 To: Gabriel Hurley
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [keystone] v3 API draft (update and questions to the 
 community)


 On 13/06/2012, at 1:24 PM, Gabriel Hurley wrote:

 Totally agree with all of Jay's points, and I also couldn't agree more with 
 Mark on the importance of being crystal clear, and not operating on just a 
 common understanding which is quickly misunderstood or forgotten.

 Ideally I'd like to see an OpenStack API feature contract of some sort... 
 essentially a document describing the FULL list of features, how those 
 parameters are 

Re: [Openstack] Nova API Specification

2012-05-30 Thread Jorge Williams

On May 30, 2012, at 8:33 AM, Day, Phil wrote:

Hi Folks,

I was looking for the full definition of the API requests, and I’m a tad 
confused by what I find here:

http://api.openstack.org/

Specifically for Server Create there is both a “Server – Create” and a “Server 
– Extended Create”, although as far as I can see the “Extended Create” isn’t 
actually an extension as such (the additional parameters are supported in the 
core servers module).

Also there seem to be a number of parameter values that aren’t specified in 
either interface entry, such as:

min_count
max_count
networks
key_name

So is the API document intended to be:

-  A formal specification of the Interface
-  A set of examples  (but if you want the details you need to read the 
code)


You should also look at:

http://docs.openstack.org/api/


Are there plans to define the validation semantics of interface parameters?


There are WADL and XML Schemas for the core here:

https://github.com/openstack/compute-api/blob/master/openstack-compute-api-2/src/os-compute-2.wadl

and here:

https://github.com/openstack/compute-api/tree/master/openstack-compute-api-2/src/xsd

nothing to validate the JSON...yet
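Until such an artifact exists, clients and test harnesses can only hand-roll checks. A minimal illustrative validator for a server-create body; the required-field list here is an assumption drawn from the parameters mentioned above, not a spec:

```python
def validate_server_create(doc):
    """Minimal hand-rolled check for a server-create request body.

    Sketch only: there is no official JSON schema for the compute API,
    so the field names checked here are illustrative assumptions.
    Returns a list of error strings (empty means the body passed).
    """
    errors = []
    server = doc.get("server")
    if not isinstance(server, dict):
        return ["missing 'server' object"]
    for field in ("name", "imageRef", "flavorRef"):
        if field not in server:
            errors.append("missing required field: %s" % field)
    # Optional fields still deserve type checks when present.
    if not isinstance(server.get("min_count", 1), int):
        errors.append("'min_count' must be an integer")
    return errors

assert validate_server_create({"server": {"name": "s1", "imageRef": "img",
                                          "flavorRef": "1"}}) == []
assert validate_server_create({}) == ["missing 'server' object"]
```

A real solution would presumably derive checks like these from a published schema rather than duplicating them by hand.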


I have another specific question on what seems to be an inconsistency between 
the XML and JSON output of get server details:

The XML response defines the names of networks as values within the addresses 
section:

<addresses>
    <network id="public">
        <ip version="4" addr="67.23.10.132"/>

But in the JSON response it looks as if the network name is a structural 
element of the response:
"addresses": {
    "public": [
        {
            "version": 4,
            "addr": "67.23.10.132"
        },

i.e. depending on the value of the “label” field in the networks table of the 
nova database, the structure of the JSON response seems to change. (I may not be 
expressing that very well; my point is that “addresses” is fixed by the API 
definition, but “public” is defined per implementation?)



“addresses” is fixed in both the XML and JSON.  “public” is implementation 
specific in both as well.
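Since the network name appears as an attribute value in XML but as an object key in JSON, a client can normalize both responses into the same structure. A rough sketch (fragment shapes taken from the examples above):

```python
import json
import xml.etree.ElementTree as ET

def addresses_from_xml(xml_text):
    """Normalize the XML form: network names are 'id' attribute values."""
    result = {}
    for network in ET.fromstring(xml_text):
        result[network.get("id")] = [(ip.get("version"), ip.get("addr"))
                                     for ip in network]
    return result

def addresses_from_json(json_text):
    """Normalize the JSON form: network names are object keys."""
    result = {}
    for name, ips in json.loads(json_text)["addresses"].items():
        result[name] = [(str(ip["version"]), ip["addr"]) for ip in ips]
    return result

xml_doc = ('<addresses>'
           '<network id="public"><ip version="4" addr="67.23.10.132"/></network>'
           '</addresses>')
json_doc = '{"addresses": {"public": [{"version": 4, "addr": "67.23.10.132"}]}}'

# Both serializations yield the same network-name -> address list mapping.
assert addresses_from_xml(xml_doc) == addresses_from_json(json_doc)
```

The point is that "public" is data in both cases; only where it lives in the document differs between the two serializations.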


-jOrGe W.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone service catalogue has non-core services?

2012-05-29 Thread Jorge Williams
Hey Liem,

We had a brief conversation about this at the summit.  Ec2 and volume are core 
services, not extension services -- this was described in a wiki somewhere.  
Carlos has gone through the contracts, cleaned them up, and updated them to 
reflect reality -- and they include this particular change.
His changes are still pending review:

https://review.openstack.org/#/c/7774/

That said, the rule of having extension services use the "extension 
prefix:service type" format still applies.

-jOrGe W.


On May 29, 2012, at 12:25 PM, Nguyen, Liem Manh wrote:

Perhaps the Nova folks may know the answer to this question…

Are the “ec2” and “nova-volume” services part of the core services now?  Or are 
they extension services?

Thanks,
Liem

From: 
openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net
 [mailto:openstack-bounces+liem_m_nguyen=hp@lists.launchpad.net] On Behalf 
Of Nguyen, Liem Manh
Sent: Friday, May 18, 2012 9:52 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Keystone service catalogue has non-core services?

Hi Stackers,

I ran the sample_data.sh script in Keystone and saw that we have populated a 
few more services, such as ec2, dashboard and nova-volume.  Are these meant to 
be “core” services or extension services?  The definition of “core” services is 
defined here:

https://github.com/openstack/identity-api/blob/3d2e8a470733979b792d04bcfe3745731befbe8d/openstack-identity-api/src/docbkx/common/xsd/services.xsd

Extension services should be in the format of "extension prefix:service type"

Thanks,
Liem


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Just JSON, and extensibility

2012-04-13 Thread Jorge Williams

On Apr 13, 2012, at 8:47 AM, Mark Nottingham wrote:

 [ Full disclosure -- I'm using my personal address with Launchpad, etc., but 
 I work for Rackspace. ]
 
 On 12/04/2012, at 7:28 PM, Jorge Williams wrote:
 
 Generally, I agree with a lot of what you're saying, but I want to point out 
 a couple of things:
 
 1.  Static language folks gravitate to XML, not simply because they're 
 invested in it, but because it solves a real problem:
 […]
 and I should see those errors at compile time or while I'm authoring my code 
 if I have a good editor or IDE...I shouldn't have to wait until my program 
 is run to catch those errors. 
 […]
 but at that point, there's very little benefit to having a static language, 
 because I don't have the assurances and guarantees that the language 
 provides. So I don't see a lot of Java folks dealing with JSON in that 
 manner.  Most devs will need to build a class beforehand.  So, you 
 decrease barriers for static language clients because there's a set of tools 
 that can extract the relevant info from XML schema languages and generate a 
 set of class files at compile time.   There's nothing saying you can't do 
 something similar with JSON, but those sort of tools aren't there  yet.
 
 Great -- I think it's good to level set here. We're talking about supporting 
 an entire serialisation, all of its supporting code, the QA cycle, schema 
 authoring, etc., and the #1 benefit is making the programming experience for 
 a subset of our audience more comfortable. 
 

There are a lot more clients than servers.  You have to weigh the cost of 
lowering barriers for those clients at the server side against the cost of 
getting those clients to successfully integrate with the system.  This is 
typically the argument I make against SOAP and towards REST.  In static 
language circles creating a SOAP service is really, really easy...but using 
SOAP typically introduces high barriers for dynamic language folks.  Making the 
move from SOAP to REST definitely introduces some complexity, in dev cycles, on 
the service side, but you have to compare this cost against the cost that's 
saved on the client side and multiply that by the number of clients that 
benefit. 

Having said that, I understand the cost of supporting a different media type 
is not insignificant, but it's also not rocket science. I reckon that the state 
of XML support today may have more to do with the fact that our dev community 
doesn't see much value in XML than with what it actually takes to get it 
right...and like I said before, I can understand that perspective. 

 
 2.  Then there's the issue of extensibilityespecially distributed 
 extensibility. XML has that notion built in
 
 Heh; you mean bolted on with Namespaces in XML, and then completely messed up 
 by XML Schema 1.0, then only partially fixed by 1.1. But yes.
 
 JSON has no concept of it...and we are building extensible APIs. There is 
 no standard way in JSON to introduce a new property while guaranteeing that 
 there won't be a clash.  You've mentioned the need for namespaces in JSON 
 precisely to deal with this sort of issue 
 (http://www.mnot.net/blog/2011/10/12/thinking_about_namespaces_in_json). 
 
 First of all, as we've discussed many times, I don't think extensibility == 
 good in all cases; if we allow/encourage too much extensibility, the 
 platform we're building will fragment, and people won't see its full value. 
 Extensions should be allowed where the make sense, not everywhere, and 
 extensions should be encouraged to eventually converge.

I totally agree, encouraging  convergence is important.

 
 In the absence of a standard method, we've been using prefixes, which has 
 worked out well, but most JSON tools don't know how to deal with them
 
 What does that *mean*? The prefix is an opaque part of the name -- what 
 should a tool do with it?
 

It messes up some of the syntactic sugar that folks are used to using:

server.name    vs    server["foo:name"]


 and they seem alien to folk that are used to using JSON day to day.
 
 Perhaps, but doing something like RAX_FOO isn't that onerous. Or using a 
 registry. Or just leaving it to the community to coordinate and document; try 
 opening a shell, typing "set" and pondering how *that* namespace is 
 managed...
 
 This is a big deal because dynamic language folks are more likely to deal 
 with the JSON directly...Static language folks are generally not dealing 
 with XML in the same way.  In XML, the notion of extensibility is built into 
 parsers and data binding tools directly.
 
 Not really; it's a syntactic artefact of namespaces. I'd say that the tools 
 manage it really, really badly, given how hard it is to author extensible 
 schemas (and don't get me started on XML Schema).
 

That hasn't entirely been my experience.  For the most part you can set things 
up so you don't have to think much about namespaces.

 Most folks don't have to worry too much

Re: [Openstack] Just JSON, and extensibility

2012-04-13 Thread Jorge Williams

On Apr 13, 2012, at 3:20 PM, Justin Santa Barbara wrote:

My understanding is that the solution we have now is that any extension goes 
into its own namespace; we assign a prefix to the namespace and have a way to 
map that prefix to the full namespace.  (Similar to XML schemas).  Currently 
prefixes are hard-coded, but we may not be able to keep doing this forever (XML 
has per-document prefixes to avoid the need for a central authority).

I see 3 questions:
1) Is my summary correct?

It's pretty close.  I did propose that we maintain a registry of prefixes, but 
that's never taken off.

I did a write up a while back on extensions, you can find it here:  
https://github.com/RackerWilliams/OpenStack-Extensions/blob/master/apix-intro.pdf

Take the document that's in GitHub with a grain of salt; it doesn't entirely 
reflect the reality of things as they stand now, and I feel, after working with 
extensions for a while, that we need to make some slight modifications. 
It wasn't clear in the beginning where extensions would be most useful, so we 
added extensibility to everything.  As Mark mentioned, it's clear now that we 
need to scale down the points of extensibility.  In some cases we may introduce 
barriers to devs if we make absolutely everything extensible. In other cases, 
defining our own extensibility doesn't make sense.  For example, there's no 
need to define a way of extending HTTP headers -- first because no one is 
writing those kinds of extensions, and also because there already exists a 
method of extending headers in HTTP, so there's no need for us to reinvent the 
wheel.  Stuff like that.


2) Are there any problems with the solution?

Yes, a couple.  Especially when you consider what happens when an extension 
gets promoted to a full feature in a new version of the API.  I'm now leaning 
towards keeping prefixes forever...and providing a way for folks to write 
extensions without a prefix, provided that they register the extension with the 
PTL of the project -- let's face it, this is the sort of stuff that's happening 
now anyway.  Mind you, these are all just thoughts at the moment; we should 
have a larger discussion.

3) Are there any other problems we're not addressing?

Probably  :-)

We're having a panel on extensions at the summit.  We should discuss in detail 
then.

As one of the two authors of the Java binding, I can tell you how I plan on 
dealing with extensions:


  *   Map the JSON/XML/HPSTR to a strongly-typed model (so the representation 
is irrelevant to the consumer of the Java library).
  *   Each (known) extension has its own strongly-typed model object.
  *   These are stored in a registry.
  *   When we come across an extension, we look it up in the registry and 
either bind or ignore it.
  *   Every model object has an Extensions collection, which can be queried 
by type, to see if that extension data was present.

(Note: this has mostly been tested with the XML)


When you say registry, do you mean Maven?


The nice thing about this is that a consumer of the library can write a binding 
for an extension, and register it with the client library, and it just works. 
 So, even if you have a private extension, which you tell nobody about and run 
only on your private cloud, you can use it with the stock Java library.

That sounds like a really great approach!  I'd love to check out the 
nitty-gritty details...Are you documenting this somewhere?


Now, how you would do something that awesome in Python, I don't know ;-)


I'm sure there are Pythonistas out there working to figure it out.
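For what it's worth, the registry idea Justin outlines translates to a dynamic language fairly directly. A rough Python analogue; the class names, the RAX-DCF alias, and the diskConfig field are hypothetical stand-ins, not a real binding:

```python
class ExtensionRegistry:
    """Maps an extension alias/prefix to a binding class (sketch)."""

    def __init__(self):
        self._bindings = {}

    def register(self, alias, cls):
        self._bindings[alias] = cls

    def bind(self, alias, data):
        """Bind known extension data; unknown extensions are ignored."""
        cls = self._bindings.get(alias)
        return cls(data) if cls else None

class DiskConfig:
    """Hypothetical binding for a hypothetical RAX-DCF extension."""

    def __init__(self, data):
        self.disk_config = data.get("diskConfig")

registry = ExtensionRegistry()
registry.register("RAX-DCF", DiskConfig)

ext = registry.bind("RAX-DCF", {"diskConfig": "AUTO"})
assert ext.disk_config == "AUTO"

# A private extension nobody has registered simply binds to nothing.
assert registry.bind("UNKNOWN-EXT", {}) is None
```

As in the Java case, a consumer with a private extension can write and register its own binding class without touching the stock library.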

-jOrGe W.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Image API v2 Draft 4

2012-04-12 Thread Jorge Williams
Generally, I agree with a lot of what you're saying, but I want to point out a 
couple of things:

1.  Static language folks gravitate to XML, not simply because they're invested 
in it, but because it solves a real problem:

In a static language, I want to say something like:

myServer.name = "hello";

if I misspell "name"

myServer.nam = "hello";

I should see an error...if I assign it to an integer...

myServer.name = 10;

I should also see an error...

and I should see those errors at compile time or while I'm authoring my code if 
I have a good editor or IDE...I shouldn't have to wait until my program is run 
to catch those errors. 

Sure, I can parse some JSON, turn the result into a hashtable, and if I'm using 
something like Scala, I can do this:

myServer("name") = "hello"

but at that point, there's very little benefit to having a static language, 
because I don't have the assurances and guarantees that the language provides. 
So I don't see a lot of Java folks dealing with JSON in that manner.  Most devs 
will need to build a class beforehand.  So, you decrease barriers for static 
language clients because there's a set of tools that can extract the relevant 
info from XML schema languages and generate a set of class files at compile 
time.   There's nothing saying you can't do something similar with JSON, but 
those sorts of tools aren't there yet.

2.  Then there's the issue of extensibility...especially distributed 
extensibility. XML has that notion built in; JSON has no concept of it...and we 
are building extensible APIs. There is no standard way in JSON to introduce a 
new property while guaranteeing that there won't be a clash.  You've mentioned 
the need for namespaces in JSON precisely to deal with this sort of issue 
(http://www.mnot.net/blog/2011/10/12/thinking_about_namespaces_in_json). 
In the absence of a standard method, we've been using prefixes, which has 
worked out well, but most JSON tools don't know how to deal with them and they 
seem alien to folks who use JSON day to day. This is a big deal 
because dynamic language folks are more likely to deal with the JSON 
directly...Static language folks are generally not dealing with XML in the same 
way.  In XML, the notion of extensibility is built into parsers and data 
binding tools directly.  Most folks don't have to worry too much about it.  In 
fact, extensible protocols like XMPP and Atom Pub generally benefit from the 
extensibility that's already built in:  
http://metajack.im/2010/02/01/json-versus-xml-not-as-simple-as-you-think/ 
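As a rough illustration of the prefix convention mentioned above (the `RAX-EXT` prefix and the property names below are made up for this sketch, not taken from any real contract), a client has to split properties itself, since JSON tooling gives no help here:

```python
def split_properties(obj):
    """Separate core properties from prefixed extension properties.

    Keys of the form "PREFIX:name" are treated as extension properties,
    grouped by prefix; everything else is considered core.
    """
    core, extensions = {}, {}
    for key, value in obj.items():
        if ":" in key:
            prefix, name = key.split(":", 1)
            extensions.setdefault(prefix, {})[name] = value
        else:
            core[key] = value
    return core, extensions


image = {"id": "42", "name": "precise", "RAX-EXT:weight": 10}
core, exts = split_properties(image)
```

The prefix plays the role a namespace declaration would play in XML, but nothing in JSON itself enforces or even recognizes it.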

Given that, if we're going to go the route of just picking one format, I think 
the fact that our API is extensible means that we might want to ask ourselves 
whether XML isn't a better fit :-)

Having said all of that,  I realize that our devs are working in a dynamic 
language, and don't see a lot of value in XML.  It's important to take that 
into consideration, but we should also be asking whether there's value to our 
current clients and potential clients.  Like it or not, there's a lot of folks 
out there using static languages. 

You're right in stating that if we had really good language bindings for Java 
and .Net, then issue #1 would essentially go away -- especially if we had a 
language binding that was flexible enough to remove the need to go down to the 
HTTP level.  Also, if the language binding itself was extensible we could also 
deal with issue 2.  As things stand today, though, I don't think that we are 
even remotely there yet.

-jOrGe W.


On Apr 12, 2012, at 2:58 PM, Mark Nottingham wrote:

 A little fuel for the fire / entertainment before the summit:
  http://www.mnot.net/blog/2012/04/13/json_or_xml_just_decide
 
 Cheers,
 
 
 On 10/04/2012, at 3:56 PM, Vishvananda Ishaya wrote:
 
 On Apr 10, 2012, at 2:26 AM, Thierry Carrez wrote:
 
 Jay Pipes wrote:
 I take it you didn't attend the glorious JSON debate of a couple of
 summits ago :-)
 
 Glorious it was indeed.
 
 I think the key quote was something like:
 Please don't bastardize my JSON with your XML crap
 
 According to my twitter, the actual quote was: Don't bring your XML filth 
 into my JSON
 
 Vish
 
 
 
 
 --
 Mark Nottingham
 http://www.mnot.net/
 
 
 
 
 


Re: [Openstack] Image API v2 Draft 4

2012-04-10 Thread Jorge Williams
I'm also a strong supporter of XML. XML does a good job of lowering barriers 
for a key group of clients, specifically those that work with statically typed 
languages.  It offers key benefits in terms of extensibility and validation.  
I'd hate to lose it.

-jOrGe W.

On Apr 10, 2012, at 12:57 PM, Justin Santa Barbara wrote:

It definitely has improved - thank you for all your work;  I didn't mean to put 
down anyone's work here.  It's simply a Sisyphean task.

Either way, though, if I had the choice, I'd rip all of nova's XML support out 
tomorrow…

As a strong supporter of XML, who thinks JSON is for kids that haven't figured 
out that the Easter bunny isn't real yet :-)...  +1

Justin



Re: [Openstack] Image API v2 Draft 4

2012-04-09 Thread Jorge Williams
Justin,

From a JAX-RS / Java perspective, starting with an XML schema and having that 
dictate what the JSON will look like doesn't just make sense -- it makes 
life *A LOT* easier. And a lot of services written in Java do just that.   
Unfortunately, as you pointed out, this approach has the tendency to create 
very unJSONic JSON -- in that case JSON is just an afterthought...it's a 
second class citizen -- essentially it's XML with curly braces...and that 
doesn't jive well with dynamic languages.

You can go the other route and start from JSON to create XML -- and some well 
intended folks do this, but you end up in the other direction...essentially you 
have <JSON/>, which is essentially JSON in XML form.  That kind of XML sucks 
and it's pretty useless if you work with static languages like Java and .Net -- 
where having a static (but extensible) schema is really useful.

That's the world we live in today...static languages work really well with XML 
and dynamic languages work really well with JSON.  In my opinion, you have to 
account for both and treat both as first class citizens.   I'd vote for having 
an abstract model and treating the XML and JSON as different renderings.   
Doing so is not easy as pie -- but it's not rocket science either.

I am totally with you on a couple of points though:

1.  We do need to differentiate between user data and core attributes. Jay and 
I have had debates about this before.
2.  Having an entirely dynamic, discoverable schema seems nice, but it has the 
potential to introduce a lot of complexity.   In particular, I'm not exactly 
sure how to process the schema and keep the goodness that comes from using a 
statically typed language.   And, as you point out, it's not clear how one 
would support multiple media types given that approach.

How about we discuss this further at the summit :-)

-jOrGe W.


On Apr 9, 2012, at 4:14 PM, Justin Santa Barbara wrote:

When you're designing JSON considering only JSON, you'd probably use
{ "key1": "value1" } - as you have done.  If you're designing generically,
you'd probably use { "key": "key1", "value": "value1" }.

You mean we'd have to do dumb crap because XML doesn't have the native concept 
of a list? ;)

XML has lists, as does Avro, ProtocolBuffers & Thrift.  XML supports extensible 
lists, which is why the syntax is different.


You'd *think* this would work. In practice, however, it really doesn't. Neither 
does (good, valid) code generation...

Of course it works!  Every JAX-RS webserver does this.  You just can't start 
with JSON first and expect everything to magically be OK.

If you think it doesn't work, can you provide an example?

You start with an abstract model, and then check what it looks like in JSON, in 
XML, in ProtocolBuffers, in Avro, in Thrift, in HPSTR, etc...  If you start 
with JSON, then of course it won't work.  If we're going to treat XML as an 
afterthought, then I'd rather we just didn't support XML at all (and yes, I 
absolutely mean that - it is good that Glance is honest that they don't support 
XML.)

Even ignoring XML, I can't help but think that not having a strict delineation 
between user-provided data and the structure of your document is a pretty risky 
idea.


In the 2.0 API we *are* specifying it in JSON. JSON Schema, specifically...

Are JSON schemas an April Fool's joke?  Once you introduce schemas, you might 
as well just go with XML ;-)

 I think the only thing you need to avoid is
no changing-at-runtime keys; I think this makes it compatible
with XML, Avro, ProtocolBuffers & Thrift.

That is antithetical to having dynamic, but discoverable, schemas. JSON Schema 
(and XSD, fwiw) provide things like additionalProperties and xsd:any for just 
this sort of thing. Making a schema entirely static is really only useful for 
generating (bad and soon-to-be-outdated) client code.

Having dynamic and discoverable schemas enables clients to respond to 
backwards-compatible schema changes (like the addition of standard properties 
or the addition of extra-standard additionalProperties) without having to 
recompile a client or change any client code at all...

I couldn't disagree more: what does it mean?  There's the implicit contract 
underlying the interface; the semantics that underpin the syntax.  e.g. syntax: 
a glance image id is a string, semantics: the id is unique to a glance 
installation and is used to refer to an image in REST calls.

xsd:any allows you to put elements _from another schema_ into your XML 
document.  That foreign schema defines the semantics of those elements.  It's 
schemas all the way down, giving semantics to your syntax.

If your additional properties in Glance are properly schematized, then that's 
great.  But true cross-representation schemas are an open problem, I believe, 
so you're really painting yourself into a corner (how do you support XML 
output, if you let people upload arbitrary JSON schemas?)


Incidentally, I just heard about yet another new format - 

Re: [Openstack] Image API v2 Draft 4

2012-04-09 Thread Jorge Williams

On Apr 9, 2012, at 6:03 PM, Justin Santa Barbara wrote:

How about we discuss this further at the summit :-)

I think that's a sensible proposal.  We're not likely to reach a good 
conclusion here.  I think my viewpoint is that even json-dressed-as-xml is 
fine; no end-user gives two hoots what our JSON/XML/HPSTR looks like.  I'd 
wager most users of the EC2 API have never even seen the data representation!


I take it you didn't attend the glorious JSON debate of a couple of summits ago 
:-)

I'm up for round two,

-jOrGe W.



Re: [Openstack] OpenStack Java API

2012-02-21 Thread Jorge Williams
Some thoughts,

Using the binding to generate WADLs and XSDs would definitely be useful -- 
especially since a lot of the extensions are currently undocumented.  Certainly 
we can use these as a starting point for our documentation efforts.

Keep in mind, though, that extensions are optional and the extensions your 
binding will encounter will vary from one deployment to the next.  You should 
be making a call to /extensions to auto detect what extensions are available on 
the server side and adjust accordingly.

Don't make the assumption that because an auto generated schema validates 
against a stock OpenStack install, that it will validate against all OpenStack 
deployments.  Vendors may choose to remove and add features, only the core API 
is guaranteed.

The schemas that we're currently publishing in

https://github.com/openstack/compute-api

have forward compatibility rules that keep them ticking in the presence of new 
extensions. Any schemas you produce should have similar rules.  See the

@XmlAnyElement and @XmlAnyAttribute

as a means of capturing and exposing these extra attributes with JAXB.
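In Python terms (this is an illustration of the pattern, not of JAXB's actual API; the field names are invented), the forward-compatibility rule amounts to capturing unknown fields instead of rejecting them:

```python
# Fields the binding was built against; real bindings derive these
# from the schema rather than a hand-written set.
KNOWN_FIELDS = {"id", "name", "status"}


class Image:
    """Binds known fields; preserves the rest, like @XmlAnyElement."""

    def __init__(self, payload):
        self.extras = {}  # catch-all for fields added by extensions
        for key, value in payload.items():
            if key in KNOWN_FIELDS:
                setattr(self, key, value)
            else:
                # an unrecognized field is data to keep, not an error
                self.extras[key] = value


img = Image({"id": "1", "name": "precise", "NEW-EXT:flag": True})
```

A binding built this way keeps validating and round-tripping payloads even when a deployment it has never seen adds extension fields.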

Once you produce these XSDs/WADLs, would you mind sending them to our Docs team?

Things like:

http://api.openstack.org/ and
http://docs.openstack.org/api/

are all driven by them.

Thanks,

-jOrGe W.

On Feb 16, 2012, at 9:16 AM, Luis Gervaso wrote:

You can use schemagen to generate the XSD from the Java classes. You can try 
it; it's working now.

We have not published them yet because the schemas are unstable. I plan to 
publish them starting with the Essex release.

We will not publish any WADL since we are on the client side. I'm evaluating 
creating extra services for the server side for the billing part, and those 
will have WADLs.

Regards

On Thu, Feb 16, 2012 at 8:48 AM, Craig Vyvial cp16...@gmail.com wrote:
Once you have the API implemented in with Jersey you can get the XSD like you 
said and also a valid up to date WADL. That could be very useful for docs 
and/or other devs.

Great work!

-Craig Vyvial


On Wed, Feb 15, 2012 at 8:19 PM, Luis Gervaso l...@woorea.es wrote:
Hi Justin,

Great!

I have tried a variety of options to implement this in a clean way. As you can 
see Jersey afford it in the most clean way.

My thoughts on making this are:

1. Start with handcoded JAXB annotations, since the schemas are out of date, 
and then we will create the XSD super easily.
2. I have seen that in the servers endpoint the structure of the XML and JSON 
are not the same (networks / addresses part). So I decided to start the binding 
from XML, in order to have the generated schemas as soon as possible, and then 
apply any patches needed to make it work with JSON as well.
3. From my point of view HATEOAS is a must. The client API must be easy to 
integrate with any business process or workflow. I know this will be fun since 
we are on the cutting edge; I think the next version of Jersey has early 
support for this.

Cheers!

Luis



On Thu, Feb 16, 2012 at 12:14 AM, Justin Santa Barbara jus...@fathomdb.com wrote:
This is awesome.  I was working on a binding myself, but your use of jersey 
makes for much less code.

I've extended your work in a github fork.  I added a CLI so that I could test 
it out; the few bits of functionality that I added work great and I'm going to 
try using it as my primary interface and fixing/adding things that aren't 
working.

One goal I have is to do extensions right.  So we should allow people to code 
extensions without changing the core API code (equivalently, we shouldn't 
assume that we know all the extensions when we build the API).  I have an 
example of how this can be done where extra XML attributes are returned (which 
happens on an out-of-the-box server listing); I'm going to do more work on more 
advanced scenarios (extra elements, extra REST endpoints).  I would eventually 
like to use the (hand-coded) Java models to generate valid XSD files.

My fork is here:  https://github.com/justinsb/openstack-java-sdk  I'd like to 
work together on this!

Justin

---

Justin Santa Barbara
Founder, FathomDB




On Mon, Feb 13, 2012 at 8:53 AM, Luis Gervaso l...@woorea.es wrote:
The Dasein architecture is great and the code is very clean. Congrats on it.

I can't find a full implementation of the OS API.

Are you using the EC2 API to talk to OpenStack?

Cheers!





On Sat, Feb 11, 2012 at 8:15 PM, George Reese george.re...@enstratus.com wrote:
There's also Dasein Cloud if you are interested at http://dasein-cloud.sf.net/.

-George

On Feb 11, 2012, at 12:28 AM, Monty Taylor wrote:

Hi!

Awesome, and thanks for the work!

Just in case you didn't know about it:

http://www.jclouds.org/

Is a Java library with multi-cloud support, including OpenStack, which
might be a fun place for you to hack - and I know Adrian loves contributors.

On the 

Re: [Openstack] Keystone: is revoke token API officially supported

2012-01-26 Thread Jorge Williams
Moving it to an extension makes sense to me.  Ziad, does it make sense to add 
it to OS-KSADM...or is this a different extension altogether...a revoke token 
extension?

-jOrGe W.

On Jan 26, 2012, at 11:43 AM, Dolph Mathews wrote:

It is definitely not a documented call (hence the "should this be removed?" 
comment in the implementation); if it were to be promoted from "undocumented" 
to an extension, I imagine it would belong in OS-KSADM.

- Dolph

On Thu, Jan 26, 2012 at 10:51 AM, Yee, Guang guang@hp.com wrote:
I see it implemented in the code as

DELETE /v2.0/tokens/{tokenId}

But it doesn’t appear to be documented in any of the WADLs.


Thanks!

Guang




Re: [Openstack] Keystone: is revoke token API officially supported

2012-01-26 Thread Jorge Williams

On Jan 26, 2012, at 4:39 PM, Ziad Sawalha wrote:

If a client has bound to the contract XSD, they will break if we add this, 
won't they?


No.  XSD only concerns itself with the attributes and elements of the message.  
This is just adding a delete.  That's a separate method, it shouldn't break any 
clients.  It's a WADL only extension.


But… I don't know how many clients would have bound to the OS-KSADM contracts. 
We've been diligent and strict about not changing the core contract, but this 
is the first time we've been presented with a change to an extension like this.

I'd still lean towards the correct practice of adding this as another 
extension. Especially since that extension would only be adding a new method on 
an existing resource, so would not require complex naming changes…

Open to alternative points of view..


I agree.


Z





Re: [Openstack] Keystone: is revoke token API officially supported

2012-01-26 Thread Jorge Williams

On Jan 26, 2012, at 5:17 PM, Dolph Mathews wrote:

A) This wasn't documented at all (AFAIK), so there's no concern of breaking 
contracts.

I agree, it shouldn't break anything.


B) Even if it's moved to an extension, would the call change from it's current 
form?:

DELETE /tokens/{token_id}

I'm not sure what the extension convention is here.


We could put that in a separate URI, but I don't think we have to in this case. 
DELETE has very well understood semantics.  I can't see a DELETE that would 
work in any way different from this one.


-Dolph Mathews




Re: [Openstack] Keystone: is revoke token API officially supported

2012-01-26 Thread Jorge Williams
Okay just to make things clear...

Totally agree with everything you said.  I don't think we should just put the 
functionality in core.  The safest thing to do is to put it in a separate 
extension rather than modifying the existing management extension, and to move 
the functionality to a separate URI space as well.  If you do all of this you 
will have no chance of breaking clients or of running into future conflicts.

I'm glad to see you protecting the contract :-)

Having said all of that, this *particular* change is not likely to break folks 
because it introduces new functionality rather than changing existing 
functionality, and I don't think conflicts with DELETE token are very likely.

-jOrGe W.


On Jan 26, 2012, at 5:29 PM, Ziad Sawalha wrote:

A) It sounds like you're making an assumption about what type of client it is. 
Some clients use WADL to generate stubs or validate contracts. Consider clients 
like JAX-RS/CXF clients. If you change the WADL, you've changed the contract. 
Like I said, I think this would be an edge case, but a key reason we offer API 
contracts is to allow for predictability from the client side. You break that 
if you change the contract.

B) No, the HTTP call would not change. An alternative would be for us to add 
this to OS-KSVALIDATE which we just shipped. The call would then be:

DELETE /OS-KSVALIDATE/token
X-Auth_token: …
X-Subject-Token: {token_id}
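The header-based alternative above, again sketched as request construction only (the header names come from the message itself; `X-Auth-Token` is assumed to be the intended spelling, and the token values are invented):

```python
def revoke_via_validate(auth_token, subject_token):
    """Build (method, path, headers) for the OS-KSVALIDATE variant above."""
    headers = {"X-Auth-Token": auth_token,      # caller's own token
               "X-Subject-Token": subject_token}  # token being revoked
    return "DELETE", "/OS-KSVALIDATE/token", headers


method, path, headers = revoke_via_validate("admintok", "subjtok")
```

Moving the subject token into a header keeps it out of the URI, which is the point of the OS-KSVALIDATE style.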


Re: [Openstack] WADL for compute API v1.1

2012-01-25 Thread Jorge Williams
They should point to the correct links.

I believe that the PDFs and WADL are published on docs.openstack.org, and the 
links should point to the artifacts there.  Or you can do what keystone is 
doing and host the stuff locally.

-jOrGe W.



On Jan 25, 2012, at 10:08 AM, Eoghan Glynn wrote:

 
 
 Hi Folks,
 
 The describedby links in nova/api/openstack/compute/versions.py
 contain broken hrefs to a v1.1 WADL document[1] and PDF[1].
 
 Looks like a copy'n'paste from the corresponding 1.0 versions of the
 WADL[3] and PDF[4], both of which are present and correct.
 
 So I was wondering whether there was an intention to publish a v1.1
 (AKA v2.0) WADL or whether these links are purely a throw-back to 1.0
 and should be removed?
 
 Cheers,
 Eoghan
 
 
 [1] http://docs.rackspacecloud.com/servers/api/v1.1/application.wadl
 [2] http://docs.rackspacecloud.com/servers/api/v1.1/cs-devguide-20110125.pdf
 [3] http://docs.rackspacecloud.com/servers/api/v1.0/application.wadl
 [4] http://docs.rackspacecloud.com/servers/api/v1.0/cs-devguide-20110415.pdf
 
 


Re: [Openstack] WADL for compute API v1.1

2012-01-25 Thread Jorge Williams
I don't think that it would be too nasty, given the way that Anne has structured it:

https://github.com/openstack/compute-api

Where we have a different directory for each version of the API.

-jOrGe W.

On Jan 25, 2012, at 10:30 AM, Eoghan Glynn wrote:

 
 
 So I was wondering whether there was an intention to publish a v1.1 WADL ...
 
 
 Follow up question: would it be nasty to serve out that WADL directly from 
 github?
 
 e.g 
 https://github.com/openstack/compute-api/blob/essex-final-tag/openstack-compute-api-1.1/src/os-compute-1.1.wadl
 


Re: [Openstack] Supporting start/stop compute api from OpenStack API

2012-01-17 Thread Jorge Williams
Tomoe,

Once you get the extension up and running you'd want to document it :-)

There are a set of templates for documenting the extension here:

https://github.com/RackerWilliams/extension-doc-templates

More (high level) details on API extensions here:

http://docs.rackspace.com/openstack-extensions/apix-intro/content/Overview.html

Interim Extension Registry here:

http://docs.rackspace.com/openstack-extensions/

-jOrGe W.


On Jan 17, 2012, at 11:50 PM, Chris Behrens wrote:

 Vish:  Looks like it's only in ec2 api.
 
 Tomoe: The support will need to be added in an extension, since it's not in 
 the current API spec.  Ie, it'll need to go under 
 nova/api/openstack/compute/contrib in the tree, not directly in 
 compute/servers.py.
 
 - Chris
 
 On Jan 17, 2012, at 9:21 PM, Vishvananda Ishaya wrote:
 
 You can propose code without an approved blueprint. Are you sure there isn't 
 already a server action for stop/start?
 
 Vish
 
 On Jan 17, 2012, at 7:46 PM, Tomoe Sugihara wrote:
 
 Hi,
 
 I have put up a blueprint
 (https://blueprints.launchpad.net/nova/+spec/start-stop-methods-support-in-os-servers-api)
 for supporting start/stop compute api, which should work well for
 boot-from-volume from OpenStack API, and
 I'm happy to contribute code for Essex release.
 
 Could someone tell me what the next step would be? I thought the
 blueprint should be approved to push the code in, but I couldn't
 find how to get an approval from here:
 http://wiki.openstack.org/HowToContribute
 
 Comment, guidance appreciated.
 
 Cheers,
 Tomoe
 


Re: [Openstack] Configure Rate limits on OS API

2012-01-10 Thread Jorge Williams
Hi Blake,

Repose is capable of rate limiting based on group.  It also supports querying 
limits and keeping the limits consistent even as nodes are scaled 
horizontally.

You can find the code on git hub:

https://github.com/rackspace/repose

Here's the presentation I gave on the subject on Essex: 
https://github.com/rackspace/repose/raw/master/documentation/presentations/OpenStack_Essex_2011/ReposePresentation.pdf

Our mailing lists if you have further questions here:

http://lists.openrepose.org/mailman/listinfo

-jOrGe W.


On Jan 10, 2012, at 4:06 PM, Blake Yeager wrote:

On Tue, Dec 27, 2011 at 2:33 PM, Nirmal Ranganathan 
rnir...@gmail.com wrote:
You can configure those values thru the paste conf.

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory
limits = (POST, *, .*, 10, MINUTE);(POST, */servers, ^/servers, 50, DAY);(PUT, *, .*, 10, MINUTE);(GET, *changes-since*, .*changes-since.*, 3, MINUTE);(DELETE, *, .*, 100, MINUTE)
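
For illustration, each rule in a limits string like the one above is a parenthesized (verb, URI, regex, count, unit) tuple, separated by semicolons. A minimal sketch of a parser for that shape (illustrative only, not nova's actual implementation, and it assumes no commas appear inside the regexes):

```python
import re

def parse_limits(limits_str):
    """Split a paste-config style limits string into
    (verb, uri, regex, count, unit) tuples."""
    rules = []
    for match in re.finditer(r"\(([^)]+)\)", limits_str):
        verb, uri, regex, count, unit = [p.strip() for p in match.group(1).split(",")]
        rules.append((verb, uri, regex, int(count), unit))
    return rules

limits = ("(POST, *, .*, 10, MINUTE);(POST, */servers, ^/servers, 50, DAY);"
          "(PUT, *, .*, 10, MINUTE);(GET, *changes-since*, .*changes-since.*, 3, MINUTE);"
          "(DELETE, *, .*, 100, MINUTE)")

for rule in parse_limits(limits):
    print(rule)  # e.g. ('POST', '*', '.*', 10, 'MINUTE')
```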


Am I correct in assuming that this will only work with setting the global 
limits?  Is there any way to specify different limits for different accounts or 
groups of accounts?

-Blake


On Mon, Dec 19, 2011 at 1:28 PM, Day, Phil 
philip@hp.com wrote:
Hi Folks,

Is there a file that can be used to configure the API rate limits for the OS 
API on a per-user basis?

I can see where the default values are set in the code, but it looks as if 
there should be a less brutal configuration mechanism to go along with this?

Thanks
Phil





--
Nirmal

http://rnirmal.com/





Re: [Openstack] Automatically confirmed after 24 hours on Resize API

2011-12-27 Thread Jorge Williams

I'm with Waldon on this.  This is a spec...the implementation hasn't caught up.


Sent from my Motorola Smartphone on the Now Network from Sprint!


-Original message-
From: Brian Waldon brian.wal...@rackspace.com
To: Anne Gentle annegen...@justwriteclick.com
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net
Sent: Tue, Dec 27, 2011 10:20:30 CST
Subject: Re: [Openstack] Automatically confirmed after 24 hours on Resize API

I'm more inclined to not call this a doc bug. There isn't anything wrong with 
the docs, we just have an incomplete implementation.

Brian

On Dec 27, 2011, at 11:10 AM, Anne Gentle wrote:

 Let's call it a doc bug - although that doc is a spec to indicate how the api 
 should work. Still, we need to track this type of discrepancy so that we can 
 be accurate on the API site.

 I can't log it easily this week (on vacay with only my cell phone) but I 
 please log it on Openstack-manuals with a compute-API tag.

 Thanks,

 Anne Gentle
 Content Stacker
 a...@openstack.org


 On Dec 27, 2011, at 7:48 AM, Brian Waldon brian.wal...@rackspace.com wrote:

 Hi Nachi!

 You are 100% correct, we have not yet implemented that in Nova. I don't see 
 any bugs/blueprints referencing this, so maybe if one was created we could 
 make sure it gets done.

 Thanks!
 Brian

 On Dec 26, 2011, at 11:48 PM, Nachi Ueno wrote:

 Hi folks

 The doc says *All resizes are automatically confirmed after 24 hours
 if they are not explicitly confirmed or reverted*, but I couldn't
 found such implementations.
 Is this a bug of Nova? or Is this a bug of doc?

 The resize function converts an existing server to a different flavor,
 in essence, scaling the server up or down. The original server is
 saved for a period of time to allow rollback if there is a problem.
 All resizes should be tested and explicitly confirmed, at which time
 the original server is removed. *All resizes are automatically
 confirmed after 24 hours if they are not explicitly confirmed or
 reverted*.

 http://docs.openstack.org/api/openstack-compute/1.1/content/Resize_Server-d1e3707.html

 Cheers
 Nachi Ueno



Re: [Openstack] Extension Documentation

2011-12-16 Thread Jorge Williams
Joe,

I fully support your effort in creating a central reference page like the
one in your mockup, and in fact I'm working with some of the doc tools
folks to help make that happen.

I think that the extension site contains a different kind of docs for a
different audience -- made up of implementors and language binding
builders. The need there is to evaluate exactly how an individual
extension modifies the core -- you can certainly build a centralized
reference page from the info contained therein.

-jOrGe W.


-Original Message-
From: Joseph Heck he...@mac.com
Date: Fri, 9 Dec 2011 11:47:27 -0800
To: Brian Waldon brian.wal...@rackspace.com
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
openstack@lists.launchpad.net
Subject: Re: [Openstack] Extension Documentation

I totally agree with Anne that the documentation in this split up
format is very hard to both find and parse. It's not inaccurate, so much
as it leaves a gaping hole in understanding what is and isn't available
when you have 9+ documents to read and they're not really interlinked.

The effort I kicked off, but haven't had a lot of time to put into
lately, to create a single unified portal/page for the API was an idea to
address this weakness with the current structure.

I've created a github pages site to stub out how this might work -
https://github.com/heckj/api-site-mock, with the generated site at
http://heckj.github.com/api-site-mock/. It's very much a work in
progress, which I hope to resume work on in a few weeks when I should be
able to free up some additional time. I have documented my intention for
the site's goals 
(https://github.com/heckj/api-site-mock/blob/master/GOALS.md) and design
(https://github.com/heckj/api-site-mock/blob/master/DESIGN.md) - tl;dr,
making a unified API directory for immediate web-based consumption (i.e.
browser) along the lines of:
 * https://www.parse.com/docs/rest
 * https://dev.twitter.com/docs/api

If anyone else would be interested in collaborating on this site live to
move it forward, I would be happy to add your accounts to directly push
into the repository. And of course I'm happy to take pull requests.

-joe

On Dec 9, 2011, at 6:29 AM, Brian Waldon wrote:
 Hey Anne,
 
 Great feedback! As for number 8, I think the nova-api team might be the
best group to be tasked with reviewing code and documentation for any
extensions proposed to Nova's codebase. And we can absolutely discuss
this at the meeting today!
 
 Brian
 
 
 On Dec 9, 2011, at 9:17 AM, Anne Gentle wrote:
 
 Hi everyone -
 Overall I support this effort and have discussed it at length with the
 Rackers working on it.
 
 I'd really like to get feedback from everyone who thinks they'll
 consume this type of information. I don't find it easy to use from an
 API consumer's perspective, but it is an absolute must for the
 projects to have a way to describe what parts of their API is an
 extension.
 
 Here are my suggestions on this first iteration, which I've talked to
 Jorge about but also want to share with the list to get input.
 
 1. The header - at first it may confuse people since it's an OpenStack
 header on a Rackspace domain name. I understand this convention was
 chosen since you intend to give it over to OpenStack.
 2. In the header, I don't believe Extensions Documentation is the
 correct label, probably just highlight Documentation.
 3. I don't have a good sense of how readers will get to API
 documentation from this page. With the API site also being worked on,
 we'll need to find a good secondary nav for these types of sites.
 4. All of the links need to add an additional /content/ to the link to
 avoid redirects.
 5. All of these mini-docs need to use a processing instruction or
 pom directive to avoid the tiny chunking and only chunk at the chapter
 level.
 6. I made some minor changes to the DocBook template so that people
 can find the WADL normalizer tool.
 7. For the API site we're constructing, we're not yet sure how to
 handle extensions for the API reference. Right now we need to fill in
 a lot of reference information. Suggestions for integration are
 welcomed.
 8. We need a discussion about who will review these extension
 submissions and ensure they get built.
 
 Based on the struggle to get these docs written, I also want to know
 if you all find the templates useful and think you'll author these.
 Any suggestions for the authoring side?
 
 Brian, can we discuss at the nova-api meeting tomorrow at 3:00 CST in
 #openstack-manuals as well? I'll also discuss at the Doc Team meeting
 Monday 12/12 at 2:00 CST (20:00 UTC).
 
 Thanks for all the work here. Let's iterate on this site.
 Thanks,
 Anne
 
 On Thu, Dec 8, 2011 at 10:58 AM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 
 Hi All,
 
 I've started putting together a site to hold extension documentation.
 You can see it here:
 
 http://docs.rackspace.com/openstack-extensions/
 
 The idea is to have a repository for all extensions

[Openstack] Extension Documentation

2011-12-08 Thread Jorge Williams

Hi All,

I've started putting together a site to hold extension documentation.  You can 
see it here:

http://docs.rackspace.com/openstack-extensions/

The idea is to have a repository for all extensions, whether the extension is 
an OpenStack extension or a vendor specific extension. It makes sense to me 
that users can go to a central place to get extension docs regardless of where 
the extension came from or who wrote it, etc.  I'm putting this out as somewhat 
of a proposal with the hopes that we can roll this page into our own docs site. 
The site is *somewhat* automated. If you drop in the webhelp output that comes out 
of our docs tool chain, the site will reach into the extension description sample for 
info about the extension and just roll it into the index page.  The idea is 
that we can have something like Jenkins spit out some webhelp to a directory 
and things will just work. The script that does this is written in Perl, though; 
if anyone wants to take a stab at rewriting it in another language, I'm all for 
it.  You can find it here:

https://github.com/RackerWilliams/extension-docs

What I'm really interested in right now is getting good, accurate docs for 
our extensions and putting them out there in a central place where people can 
see them.  If you have info about a particular extension, send it over.  There are 
pointers to doc templates at the bottom of the page, and I know that I'll be 
working on documenting some of the extensions that are currently out there for 
compute. BTW: take the compute extensions that are out there at this very 
moment with a grain of salt, as some of these are still under development.

Finally, I've updated the extension guide based on feedback from folks.  You 
can find the guide here:

http://docs.rackspace.com/openstack-extensions/apix-intro/content/Overview.html

Note that the document is still a draft, so things are likely to change -- 
though I don't reckon dramatically. We're planning on having a wider discussion 
about extensions in the next nova-api team meeting on friday.

Questions?  Thoughts?

-jOrGe W.




Re: [Openstack] OSAPI and Zones

2011-11-15 Thread Jorge Williams
Inline:

On Nov 15, 2011, at 3:36 AM, Doude wrote:

 Thanks a lot for your answers.
 
 But why do you want to move the Zone code into the extension part ?
 It's a core part of OpenStack, why it doesn't stay in the core code ?

If something is in core then it's guaranteed to be available always.  A client 
should be able to count on the functionality in core for a particular version 
of an API.  Zones offer admin-level functionality that may not be available to 
all users.  I don't think that Rackspace will expose Zones to its customers 
right away, for example. By having Zones as an extension, a client can detect 
whether zone support is available or not.

 
 Another question about extensions. I had understand that an extension
 will be integrated to the core OSAPI when it will be mature, is that
 true ? The extension mechanism is like an incubator for OSAPI
 functionalities ?

Yes, the extension mechanism can be used as an incubator for functionality.   
But not all extensions are destined to make it to the core.  Some extensions, 
for example, will cover niche functionality that may be useful for a small set 
of users. Other extensions may cover functionality that would set the bar high 
for folks deploying OS clouds. Other extensions may expose functionality that's 
applicable to a specific hypervisor, etc.

That said, at the end of the day, the PTL decides which of these extensions 
make it to core and which stay as extensions.

I've got a draft of a write up to explain extensions in some detail here:

https://github.com/RackerWilliams/OpenStack-Extensions/blob/master/apix-intro.pdf

Nothing there is set entirely in stone, but it drafts the concept as we're 
currently thinking of it.  I'm in the process of setting up an extension 
registry and documenting a number of extensions. These certainly inform the doc 
above -- so expect some changes.  As soon as things stabilize I'll publish the 
doc in our doc site.

-jOrGe W.




Re: [Openstack] OSAPI and Zones

2011-11-14 Thread Jorge Williams
Last time I had a conversation about this, I believe the goal was to refactor 
and document Zone support as an extension to the core API.  We're just not 
there yet.

-jOrGe W.

On Nov 14, 2011, at 9:49 AM, Doude wrote:

 Hi all,
 
 I'm trying to understand the multi-zone architecture of OpenStack.
 I saw zone commands (list, show, select ...) have been added to the
 OSAPI v1.1 (not as an extension but as a core component of the API)
 but I cannot find any documentations in the OSAPI book:
 http://docs.openstack.org/trunk/openstack-compute/developer/openstack-compute-api-1.1/content/
 
 Where I can find this documentation ? In OpenStack wiki ? Where I can
 open a bug about this lack of documentation ?
 
 Regards,
 Édouard.
 


Re: [Openstack] describing APIs for OpenStack consumers

2011-11-14 Thread Jorge Williams
The core API WADL is here:

https://github.com/openstack/compute-api/blob/master/openstack-compute-api-1.1/src/os-compute-1.1.wadl

Keystone also has a number of WADLs here:

https://github.com/openstack/keystone/tree/master/keystone/content

-jOrGe W.

On Nov 14, 2011, at 2:21 PM, Rupak Ganguly wrote:

Is the WADL for Nova and or its extensions available somewhere to look at?

Thanks,
Rupak Ganguly
Ph: 678-648-7434


On Fri, Oct 28, 2011 at 3:17 AM, Bryan Taylor 
btay...@rackspace.com wrote:
On 10/27/2011 05:52 PM, Mark Nottingham wrote:
Generating WADL (or anything else) from code is fine, as long as we have the 
processes / tools (e.g., CI) in place to assure that a trivial code change 
doesn't make a backwards-incompatible change in what we expose to clients.
You bring up a really good point here.
Do we?

I doubt it. I vaguely recall there were WSDL backwards compatibility checkers, 
which implies there must be XSD backwards compatibility checkers.  I don't know 
of anything that can do this for WADL. And without some mechanism to define a 
JSON format in a machine readable way, I'm not even sure how you could possibly 
accomplish this for JSON.


(really, we should have these in place regardless of how things are generated)
We should.






Re: [Openstack] Push vs Polling (from Versioning Thread)

2011-10-28 Thread Jorge Williams

On Oct 28, 2011, at 8:11 AM, George Reese wrote:

Push notifications don't make your core system any more complex. You push the 
change to a message queue and rely on another system to do the work.

The other system is scalable. It has no need to be stateless and can be run in 
an on-demand format using agents to handle the growing/shrinking notification 
needs.

Bryan brings up the point that some of these subscription endpoints may go 
away. That's a total red-herring. You have mechanisms in place to detect failed 
deliveries and unsubscribe after a time (among other strategies).


I think what Bryan is saying is this.  Someone on another system, let's call 
it a hub, has to do the work of tracking what messages have been received by a 
particular client.  The failure scenarios there can cause a lot of headaches.

You can try to scale hubs out horizontally, but each hub will be handling a 
different set of clients at a particular point in time.  So that data needs to 
be tracked.  The best you can do is to have a central data store tracking when 
a client has received and acknowledged a particular message.  If there are a 
lot of clients, that's a lot of data to sort through and partition.  If you 
don't have a central store then a particular hub will be responsible for a 
certain set of clients. And in this case, how many clients should be tracked by 
a hub? 100? 1000? 100,000?  The more clients a hub handles the more memory it 
needs to use to track those clients.  If a hub is at capacity but your 
monitoring system is starting to detect disk failures, how do you migrate those 
clients to another hub? Do you split the clients up among existing hubs, if so 
what's the algorithm there?  Or do you have to stand up a new hub?

As for the other failure states, the issue isn't just about detecting failed 
deliveries, it's about tracking down successful deliveries too.  Say after 
immediately sending a message to client A, that hub goes down.  There's no 
record in the system that the message was sent  to client A.  How do we detect 
that that happened? If we do detect it, should we resend the message here? Keep 
in mind,  the client may have received it but may or may not have acknowledged 
it.  If we do resend the message, will that mess up the client?  Does the 
client even care?

There's a whole lot of inefficiency too.  Consider that there are some cases 
where the client also needs to track what messages have been received. Both the 
client and the hub are tracking the state in this scenario and that's pretty 
inefficient.  I would argue far more inefficient than the polling scenario 
because it involves memory and potentially storage space.  If the client 
doesn't really care to track state we are tracking it at the hub for no reason.

Say we have a client that's tracking state, maybe saving it to the datastore. 
(We have a lot of customers that do this.)  The client receives a message, but 
before it can save it, it goes down.  Upon coming up again, it has no awareness 
of the lost message; will it be delivered again? How?  How does the client 
inform the hub of its state?

Other questions arise:  How long should you track clients before you 
unsubscribe them? etc...etc...

There's just so many similar scenarios that add a lot of complexity and I would 
argue, at cloud scale, far greater inefficiencies into the system.

With the polling scenario, the work is split between the server and the client. 
 The server keeps track of the messages.  The client keeps track of its own 
state (what was the last message received? etc.).  It's scalable and, I would 
argue, more efficient, because it allows the client to track state if it wants 
to, when it wants to, how it wants to.  On the server end, statelessness means 
that each pubsub node is a carbon copy of another -- if one goes down another 
can replace it with no problem -- no need to transfer state.  What's more, the 
memory usage of the node is constant, no matter how many clients are hitting it.

That's not to say that polling is always the right choice.  As Mark said, there 
are a lot of factors to consider.  In cases where there are a large number of 
messages, latencies may increase dramatically. It's just that when we're talking 
web scale, it is *usually* a good choice.
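
To make the division of labor concrete, here's a minimal in-memory sketch of the polling model described above: the server keeps only the shared event log, while each client remembers just the id of the last event it processed. All names here are invented for the example; this is not an actual OpenStack API.

```python
class EventFeed:
    """Server side: an append-only event log.  It answers 'give me
    everything after this marker' and tracks nothing per client."""
    def __init__(self):
        self.events = []

    def publish(self, event_id, payload):
        self.events.append({"id": event_id, "payload": payload})

    def since(self, marker=None):
        if marker is None:
            return list(self.events)
        ids = [e["id"] for e in self.events]
        return self.events[ids.index(marker) + 1:]


class PollingClient:
    """Client side: remembers only the id of the last event it saw."""
    def __init__(self, feed):
        self.feed = feed
        self.marker = None   # a real client would persist this
        self.seen = []

    def poll(self):
        for event in self.feed.since(self.marker):
            self.seen.append(event["payload"])
            self.marker = event["id"]


feed = EventFeed()
client = PollingClient(feed)
feed.publish("e1", "server ACTIVE")
client.poll()
feed.publish("e2", "server RESIZED")
feed.publish("e3", "server DELETED")
client.poll()
print(client.seen)  # each event delivered exactly once, in order
```

Because every feed node can serve the same log, any node can answer any client's poll; there is no per-client state to migrate when a node dies, which is the property argued for above.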

-jOrGe W.






Re: [Openstack] Push vs Polling (from Versioning Thread)

2011-10-28 Thread Jorge Williams

On Oct 28, 2011, at 10:33 AM, George Reese wrote:

 You are describing an all-purpose system, not one that supports the narrow 
 needs of IaaS state notifications. 
 
 There's no reason in this scenario to guarantee message delivery. 

Like I said, there are a lot of factors to consider.  And guaranteed delivery 
may or may not be a requirement based on your use case. For example, wouldn't 
we want to monitor based on state changes? Missing a message could mean not 
sending an alert, right? How do you compensate for that if you don't guarantee 
delivery?







Re: [Openstack] Push vs Polling (from Versioning Thread)

2011-10-28 Thread Jorge Williams
Huh?  I didn't write that.  George did.

On Oct 28, 2011, at 11:35 AM, Caitlin Bestler wrote:

 Jorge Williams wrote:
 
 Push notifications don't make your core system any more complex. You push 
 the change to a message queue and rely on another system to do the work.
 
 That is only true if the messaging system and the core system are largely 
 independent, which could have some implications that would probably be fine 
 for
 most human users but could be quite problematic for applications.
 
 Can the push notification system block the core system? If not the push 
 notifications ultimately become unreliable. A human who is not notified that
 a given update once in 10,000 times is probably just going to shrug it off.  
 But an application that needs to know it is looking at the most recent version
 of a document before it modifies it is ultimately going to have to rely on 
 polling, or have the notification be built into the core system complete with
 throttling of updates when absolutely necessary to ensure that notifications 
 are sent.
 
 
 


Re: [Openstack] API Versioning and Extensibility

2011-10-27 Thread Jorge Williams
Response inline:

On Oct 27, 2011, at 12:50 AM, Bryan Taylor wrote:

 On 10/26/2011 04:45 PM, Jorge Williams wrote:
 
 On Oct 26, 2011, at 1:19 PM, Bryan Taylor wrote:
 
 So no pdfs or excel spreadsheets without conneg.
 
 But PDFs and excel spreadsheets are precisely why you want variants!
 
 Reports and spreadsheets are presentation layer resources that should come 
 from control panels and dashboards and not from a web services API layer.
 
 In fact, it's with some reluctance that I even suggested having HTML  in the 
 services layer, but we said an API goal was to target developers eyeballing 
 our data formats in a browser. HTML is the best media type to use for this, 
 leveraging the pre element, perhaps with some syntax highlighting eye candy.
 
 Here's my usage stats for 2009...
 
 http://usage.api.acme.com/v1.0/jorgew/2009/usage.pdf;
 
 That shouldn't be coming directly from an openstack API.
 
 We're actually building a usage service on top of OpenStack and we don't have 
 any PDFs in it. Dashboards, control panels, BI systems etc, should host that 
 resource, not our APIs.
 
 You mean to tell me that I can't send that out as an e-mail? Instead I
 have to say
 
 Please run this command to see my usage stats for 2009
 
 Our use case is to show *developers* what the openstack API payloads look 
 like, not to deal with arbitrary end user presentation desires.
 
 curl -H Accept: application/vnd.acme.com+pdf;version=1.0
 http://usage.api.acme.com/jorgew/2009/usage;
 
 That seems silly to me, we're missing an important feature, the ability
 to click.
 
 We are adding an important feature by leaving it out: separation of 
 presentation and data.
 

In this case you can think of the PDF or Excel spreadsheet as simply an 
alternate representation of the data.  Providing these alternate 
representations can lower barriers for a lot of clients, and personally I think 
they make sense in some cases.  It's a pattern I've seen used quite 
successfully.

That said, I'm not too worried about generating PDFs from our current APIs 
because we're just not likely to run into that use case.

What I'm more worried about is being able to support things like feeds. A feed 
can be an alternate representation of an existing collection of servers, for 
example, and here again we have to deal with the browser as a user agent that 
may not participate in the content negotiation process as we'd like.  The pre 
element approach you're suggesting won't work in this case either.

The current load balancer API uses feeds as an alternate representation for 
most collection types so that you can track changes; here's an example call:

http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/List_Virtual_IPs-d1e2809.html

The API uses a variant (.atom).

You also see this pattern in stuff like the Netflix API as well. See 
/users/{user_id}/feeds in:

http://developer.netflix.com/docs/read/REST_API_Reference

Here the parameter output=atom and a (read only) token are placed in the URI as 
well, so that one can get access to the feed from a browser.
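
The variant pattern itself is simple to sketch: a URI extension, when present, wins over the Accept header. The mapping below is illustrative only; the actual mappings in any given API may differ.

```python
# URI-extension variants take precedence over the Accept header.
VARIANTS = {
    ".json": "application/json",
    ".xml": "application/xml",
    ".atom": "application/atom+xml",
}

def negotiate(path, accept_header="application/json"):
    """Return (path without variant extension, selected media type)."""
    for ext, media_type in VARIANTS.items():
        if path.endswith(ext):
            return path[:-len(ext)], media_type
    return path, accept_header

print(negotiate("/v1.0/1234/loadbalancers/56/virtualips.atom"))
# -> ('/v1.0/1234/loadbalancers/56/virtualips', 'application/atom+xml')
```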

-jOrGe W.









Re: [Openstack] +1, All services should have WADLs

2011-10-27 Thread Jorge Williams
Ah yes,

API reference pages that span all the projects.  That's totally doable; I know 
we had plans for doing such a thing, but not sure where those plans are.  We 
were planning on using WADL for that.  Maybe we should get together with some of 
the doc folks (Anne, David) to come up with a strategy. How about we discuss at 
the next Doc Team Meeting?  http://wiki.openstack.org/Meetings/DocTeamMeeting

-jOrGe W.


On Oct 27, 2011, at 1:20 PM, Joseph Heck wrote:

Jorge -

It's way back the beginning of this thread - A consolidated single website with 
API docs as HTML pages that is easy for developers to consume. I'm looking 
forward to seeing the WADL parser, already on that thread with David Cramer 
directly. I can wait until he's got it in github, which he said would likely be 
next week.

The docs generated on doc.openstack.org are all in 
docbook format - neat, but not what I'm after. As I mentioned some 40 msgs back 
(now quite lost, I'm sure), what I'm looking to create is something like these 
sites provide:

https://dev.twitter.com/docs/api
http://developer.netflix.com/docs/REST_API_Reference
http://code.google.com/p/bitly-api/wiki/ApiDocumentation#REST_API
http://upcoming.yahoo.com/services/api/

That we can generate (ideally dynamically, but I'm not wedded to that) from the 
API's of all of Nova, Glance, Keystone and Quantum - both what we've labelled 
as core and extensions.

My goal isn't to make, parse, or manually read WADL's - it's to make this set 
of web pages. If WADL helps me get there expediently, I'm all over it.

-joe

On Oct 27, 2011, at 11:03 AM, Jorge Williams wrote:
As I stated in previous emails, we are pulling data from the WADL to grab 
human-consumable REST API docs that live at 
docs.openstack.org today.  We can certainly expand 
that capability to create a unified API documentation set rather than 
individual guides.  A lot of the hard work for parsing is already done, and 
we'll be releasing a WADL normalizer that puts the WADL in an easier-to-process 
form.

Joe, I'd love to hear more about what you're trying to accomplish.  Maybe we 
can help you leverage the tools we have to accomplish them.

-jOrGe W.


On Oct 27, 2011, at 10:51 AM, Joseph Heck wrote:

Yeah, that's what I've been poking at and the original start of this rather 
lengthy thread. Unfortunately, WADL, while it appears complete, is rather 
obnoxious for pulling out data. Or more accurately, I haven't fully understood 
the WADL specification in order to write a WADL parser to allow me to do just 
that. I'm poking at it now, but my original goal wasn't to write an XML parser 
but to just create a unified API documentation set on a web site to make it 
easier to consume OpenStack services.

-joe

On Oct 27, 2011, at 8:04 AM, Lorin Hochstein wrote:
It would be great if we could do some kind of transform of the IDL to generate 
(some of) the human-consumable REST API documentation that lives at 
docs.openstack.org. That would simplify the task of 
keeping those docs up to date.

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin


On Oct 27, 2011, at 9:54 AM, Sandy Walsh wrote:
Sounds awesome!

I've done an application like this in the past where an entire web UI was data 
driven using a custom IDL. It had to have presentation hints associated with it 
(acceptable values, display widget, etc). Not something WADL supports 
inherently I'm sure. But, I know from experience this can work.

I don't really care what the IDL is, so long as we don't have to write a parser 
for it in 10 different languages ... which is why XML/JSON hold such appeal 
(although JSON in C keeps me awake at night).

-S


From: Mark Nottingham [m...@mnot.net]
Sent: Thursday, October 27, 2011 10:38 AM
To: Sandy Walsh
Cc: Mellquist, Peter; Joseph Heck; openstack@lists.launchpad.net
Subject: Re: [Openstack] +1,  All services should have WADLs

I'm totally on board with having the interface being machine-consumable at 
runtime -- see the previous discussion on versioning and extensibility -- but 
WADL isn't really designed for this. I'm sketching up something more 
appropriate, and will be able to talk about it soon (hopefully).




Re: [Openstack] +1, All services should have WADLs

2011-10-26 Thread Jorge Williams
I don't mind generating a WADL so long as we have a good, expressive tool for 
doing so.  I haven't found one yet. There was a project a while back for doing 
so called REST Describe and Compile that seemed to be heading in the right 
direction, but it hasn't been worked on in a while.

http://tomayac.de/rest-describe/latest/RestDescribe.html

You used a GUI-like interface to describe your REST service and it filled in 
all the XML for you.  I really like that approach.

SoapUI also allows you to develop a WADL via a UI.

http://sourceforge.net/projects/soapui/

The UI there is kinda crappy, and there are a lot of features of WADL that 
don't get exposed.  Still, you can use it to build a template of your WADL.

What I would like to have in a WADL-generating tool is the ability to add RST, 
wiki, or DocBook descriptions and documentation, as well as example JSON and 
XML payloads.  That would be really cool, but there's no tool that does that 
just yet.

In the interim, though, we have to develop in XML.  Tools like Oxygen help a 
lot, especially if you configure it with David's WADL plugin, which helps catch a 
lot of errors early and helps you fill in the blanks.  I know that we have 
plans to extend the plugin to add new features, so that should help as 
well.

-jOrGe W.

On Oct 26, 2011, at 7:17 AM, Sandy Walsh wrote:

 As discussed at the summit, I agree there should be some form of IDL (WADL 
 being the likely candidate for REST), but I think manually crafting/maintaining a 
 WADL (or XML in general) is a fool's errand. This stuff is made for machine 
 consumption and should be machine generated. Whatever solution we adopt, we 
 should keep that requirement in mind.
 
 $0.02
 
 -S
 
 
 From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
 [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf 
 of Mellquist, Peter [peter.mellqu...@hp.com]
 Sent: Wednesday, October 26, 2011 2:06 AM
 To: Joseph Heck; openstack@lists.launchpad.net
 Subject: [Openstack] +1,  All services should have WADLs
 
 Excellent topic Joe, thanks for bringing this up.
 
 There are two main perspectives on WADLs: WADLs from a service developer 
 point of view and WADLs from a cloud developer point of view. I consider the 
 latter the most important, since we need to ensure that developers who write 
 all the killer OpenStack apps have first-class API definitions. WADLs allow 
 developers to utilize a standard definition of the APIs rather than dig 
 through API documents which are often out of sync with the code. As shown in 
 other projects, it is definitely possible to define all REST APIs in WADLs 
 and then generate docs and code .. keeping everything in sync. Some 
 implementation frameworks do not support REST / WADLs very well, and this is 
 where we hear the most complaining from service developers for reasons to not 
 support WADLs.
 
 'all the services should have a WADL somewhere describing the API.'  100% 
 AGREE.
 
 The topic of when an API should be defined is also important. Do we define an 
 API / WADL 1) up front before the service is implemented, 2) in parallel with 
 the impl, or 3) after the impl? I am an advocate of #1 or perhaps #2, but not 
 #3, since #3 is just retrofitting an API onto an existing impl without any real 
 API design considerations.
 
 Peter.
 
 
 
 
 
 -Original Message-
 From: openstack-bounces+peter.mellquist=hp@lists.launchpad.net 
 [mailto:openstack-bounces+peter.mellquist=hp@lists.launchpad.net] On 
 Behalf Of Joseph Heck
 Sent: Tuesday, October 25, 2011 12:42 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] describing APIs for OpenStack consumers
 
 I expect this is going to open a nasty can of worms... today we don't have a 
 consistent way of describing the APIs for the various services. I saw Nati's 
 bug (https://launchpad.net/bugs/881621), which implies that all the services 
 should have a WADL somewhere describing the API.
 
 I'm not a huge fan of WADL, but the only other thing I've found is swagger 
 (http://swagger.wordnik.com/spec).  I have been working towards trying to 
 create a comprehensive OpenStack API documentation set that can be published 
 as HTML, not unlike some of these:
 
https://dev.twitter.com/docs/api
http://developer.netflix.com/docs/REST_API_Reference
http://code.google.com/p/bitly-api/wiki/ApiDocumentation#REST_API
http://upcoming.yahoo.com/services/api/
 
 To make this sort of web-page documentation effective, I think it's best to 
 drive it from descriptions on each of the projects (if we can). I've checked 
 with some friends who've done similar, and learned that most of the those API 
 doc sets are maintained by hand - not generated from description files.
 
 What do you all think about standardizing on WADL (or swagger) as a 
 description of the API and generating comprehensive web-site-based API 
 documentation from 

Re: [Openstack] +1, All services should have WADLs

2011-10-26 Thread Jorge Williams
++Totally agree with that approach.

Looking forward to looking over the Images 2.0 API :-)

-jOrGe W.

On Oct 26, 2011, at 10:23 AM, Jay Pipes wrote:

 On Wed, Oct 26, 2011 at 1:06 AM, Mellquist, Peter
 peter.mellqu...@hp.com wrote:
 The topic of when an API should be defined is also important. Do we define 
 an API / WADL 1) up front before the service is implemented, 2) in parallel 
 with the impl, 3) or after the impl? I am an advocate of #1 or perhaps #2 
 but not #3 since #3 is just retrofitting an API on existing impl without any 
 real API design considerations.
 
 Wow, +10. We had a rousing discussion about this at the design
 summit... I'm in the process of finalizing the proposal for an
 OpenStack Images API 2.0 which will be sent to the mailing list
 shortly (just got some excellent feedback from Mark Nottingham this
 morning on some pieces that I'm going to change, thanks Mark!). We
 (the Glance contribs) will ask the community for feedback over a 3-4
 week RFC period. At the same time, we'll begin implementing the
 proposal in a separate branch of Glance, providing more feedback to
 the mailing list if we run into issues where the implementation of the
 proposed API is cumbersome or we recommend changes to the proposal. At
 the same time, we'll incorporate feedback as we get it on the mailing
 list and try working that feedback into the implementation we'll be
 working on.
 
 Once the community decides to accept some iterated-over proposed 2.0
 API, we'll work with Anne to put the API into
 http://github.com/openstack/images-api and teams like the QA team can
 get busy writing tests *against the proposed 2.0 API, without worrying
 that the API will change three times a day*.
 
 Cheers!
 -jay
 




Re: [Openstack] API Versioning and Extensibility

2011-10-26 Thread Jorge Williams

On Oct 26, 2011, at 1:19 PM, Bryan Taylor wrote:

So no PDFs or Excel spreadsheets without conneg.

But PDFs and Excel spreadsheets are precisely why you want variants!

Here's my usage stats for 2009...

"http://usage.api.acme.com/v1.0/jorgew/2009/usage.pdf"

You mean to tell me that I can't send that out as an e-mail?  Instead I have to 
say

Please run this command to see my usage stats for 2009

curl -H "Accept: application/vnd.acme.com+pdf;version=1.0" \
"http://usage.api.acme.com/jorgew/2009/usage"

That seems silly to me, we're missing an important feature, the ability to 
click.
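To sketch the idea in code - this is illustrative only, and the acme.com media type below is the hypothetical one from the example above, not a real API - a server can treat the URI variant as pure sugar for content negotiation:

```python
# Illustrative sketch: treat a recognized URI "extension" as shorthand
# for an Accept header, so variant URIs stay clickable while content
# negotiation still does the real work. The acme.com media type is the
# hypothetical one from the usage-report example above.
EXTENSION_MEDIA_TYPES = {
    ".pdf": "application/vnd.acme.com+pdf;version=1.0",
    ".xml": "application/xml",
    ".json": "application/json",
}

def effective_accept(uri, accept_header=None):
    """Pick the media type to negotiate for a request.

    An explicit Accept header wins; otherwise a recognized URI
    extension implies one; otherwise anything goes.
    """
    if accept_header:
        return accept_header
    for ext, media_type in EXTENSION_MEDIA_TYPES.items():
        if uri.endswith(ext):
            return media_type
    return "*/*"
```

With something like that in place, the clickable usage.pdf URI and the curl command above can negotiate the same representation.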

-jOrGe W.




Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Jorge Williams
Some of that dev guide documentation is generated from a WADL :-)  The purpose 
of a WADL is that it is machine readable, so it opens up a lot of possibilities 
for creating docs, testing, validation, etc.

-jOrGe W.

On Oct 25, 2011, at 4:14 PM, Daryl Walleck wrote:

Hi everyone,

This is just my opinion, but I've only found WADLs very useful when using 
tool-based automation. To me they're a huge headache to read. The current dev 
guide style of documentation has been far more helpful in developing automation.

Daryl

On Oct 25, 2011, at 3:24 PM, Anne Gentle wrote:

Hi all -

Would also love Swagger. Nati looked into it and he thought it would require a 
Python client generator, based on reading that Client generators are currently 
available for Scala, Java, Javascript, Ruby, PHP, and Actionscript 3. So in 
the meantime the QA list and Nati suggested WADL as a starting point for 
auto-generating simple API documentation while also looking towards Swagger for 
a way to document a public cloud like the Free Cloud. At the last OpenStack 
hackathon in the Bay Area (California), Nati worked through a simple WADL 
reader, he may be able to describe it better.

Hope that helps - sorry it's not more detailed than that but wanted to give 
some background, sounds like we all want similar outcomes and the resources for 
tasks to get us to outcomes is all we're lacking. QA Team, let me know how the 
Docs Team can work with you here.

Anne
Anne Gentle
a...@openstack.org
my blog (http://justwriteclick.com/) | my book 
(http://xmlpress.net/publications/conversation-community/) | LinkedIn 
(http://www.linkedin.com/in/annegentle) | Delicious 
(http://del.icio.us/annegentle) | Twitter (http://twitter.com/annegentle)
On Tue, Oct 25, 2011 at 2:41 PM, Joseph Heck he...@mac.com wrote:
I expect this is going to open a nasty can of worms... today we don't have a 
consistent way of describing the APIs for the various services. I saw Nati's 
bug (https://launchpad.net/bugs/881621), which implies that all the services 
should have a WADL somewhere describing the API.

I'm not a huge fan of WADL, but the only other thing I've found is swagger 
(http://swagger.wordnik.com/spec).  I have been working towards trying to 
create a comprehensive OpenStack API documentation set that can be published 
as HTML, not unlike some of these:

   https://dev.twitter.com/docs/api
   http://developer.netflix.com/docs/REST_API_Reference
   http://code.google.com/p/bitly-api/wiki/ApiDocumentation#REST_API
   http://upcoming.yahoo.com/services/api/

To make this sort of web-page documentation effective, I think it's best to 
drive it from descriptions on each of the projects (if we can). I've checked 
with some friends who've done similar, and learned that most of those API 
doc sets are maintained by hand - not generated from description files.

What do you all think about standardizing on WADL (or swagger) as a description 
of the API and generating comprehensive web-site-based API documentation from 
those description files? Does anyone have any other description formats that 
would work for this as an alternative?

(I admit I don't want to get into XML parsing hell, which is what it appears 
that WADL might lead too)

-joe







Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Jorge Williams
Keystone is using it more than Nova, especially to document their extensions.  
It's working with our existing docs tool chain.

You can reference a WADL directly from the DocBook source; you can go in and 
reference particular resources and methods, and it will parse stuff out and put it 
in the right place.  For example, in:

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/docbkx/identity-service-api.xml

You see something like this:


<section xml:id="Tenant_Operations">
    <title>Tenant Operations</title>
    <wadl:resources xmlns:wadl="http://wadl.dev.java.net/2009/02">
        <wadl:resource href="identity-admin.wadl#tenants">
            <wadl:method href="listTenants"/>
            <wadl:method href="getTenantByName"/>
        </wadl:resource>
        <wadl:resource href="identity-admin.wadl#tenantById">
            <wadl:method href="getTenantById"/>
        </wadl:resource>
        <wadl:resource href="identity-admin.wadl#userRolesForTenant">
            <wadl:method href="listRolesForUserOnTenant"/>
        </wadl:resource>
    </wadl:resources>
</section>


And that's saying: reach into the WADL identity-admin.wadl, look at the resources 
and methods listed, and generate the docs here.  That produces section 3.2.3 
(http://docs.openstack.org/api/openstack-identity-service/2.0/content/Tenant_Operations.html)
 and all of the related subsections 3.2.3.1-3.2.3.4. For some reason the team 
has decided to put the WADL and DocBook in separate projects.  You can see the 
WADL that's being referred to here:

https://github.com/openstack/keystone/blob/master/keystone/content/admin/identity-admin.wadl

You can also embed WADL directly into the DocBook instead of referencing it 
from a separate file. Additionally, you can process the WADL directly (this is a 
new feature) and generate something like an appendix.  WADL isn't narrative, so 
the DocBook is there to glue the operations into a narrative form.

-jOrGe W.

On Oct 25, 2011, at 5:30 PM, Joseph Heck wrote:

Which dev docs and how? I haven't spotted those scripts or systems...

-joe

On Oct 25, 2011, at 2:58 PM, Jorge Williams wrote:

Some of that dev guide documentation is generated from a WADL :-)  The purpose 
of a WADL is that it is machine readable so it opens up a lot of possibilities, 
for creating docs, testing, validation, etc.

-jOrGe W.


Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Jorge Williams
Totally agree.  The goal is to create narrative documents that devs can read 
etc.  The WADL is just there to fill in the nitty gritty details in a 
consistent way.

-jOrGe W.

On Oct 25, 2011, at 5:34 PM, Caitlin Bestler wrote:

WADL sounds like a wonderful validation tool.

But shouldn’t our primary goal be finding a consistent way to describe the APIs
for *application developers*.

Syntax tools, whether ancient notations like BNF or the latest XML concoction, 
only tell you the syntax of the operation.
There also has to be consistent information that tells the reader when and why 
they would use this specific operation, not just how to format it.

There is also a tendency of syntax oriented tools to omit vital state 
information,  particularly the expected sequence of operations.




Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Jorge Williams
The hard thing about processing a WADL is that WADL uses links and references.

For example, WADL A may refer to a method defined in WADL B; that's useful 
when you're defining extensions.  Or WADL A may define two resources that share 
GET, PUT, and POST operations.  You see this with metadata in servers and images in 
the compute API: /servers/{id}/metadata and /images/{id}/metadata work exactly 
the same way, and in WADL you don't need to define those operations twice - you just 
link them in to different URIs.

Another issue is that there are different ways of defining resources.  You can 
take a flat approach, much like Swagger uses:

<resource path="/resource"/>
<resource path="/resource/level2"/>
<resource path="/resource/level2/level3"/>

Or you can take a hierarchical approach:

<resource path="/resource">
    <resource path="level2">
        <resource path="level3"/>
    </resource>
</resource>

What's worse, you can have a mixture of the two:

<resource path="/resource">
    <resource path="level2">
        <resource path="level3/level4/level5"/>
    </resource>
</resource>

The hard bit is that you need to resolve all of those links and normalize the 
paths if you want to process the WADL.  We've (and by we I mean David Cramer) 
created a command-line tool that can process the WADL and give you a flat, 
normalized WADL that does just that. There are options for flattening the paths or 
expanding and resolving links, etc.  The tool just runs the WADL through a couple 
of XSLT transformations and you end up with an easy-to-process WADL on the 
other end.  You should run this as a preprocessing step if you plan on writing 
a script to extract data.  We do this when we process the documents.
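As a toy sketch of just the path-flattening step - the real normalizer is a set of XSLT transformations and also resolves cross-document links, neither of which this does - here's the idea using only the stdlib ElementTree:

```python
# Toy illustration of flattening nested WADL <resource> elements into
# full paths. A real normalizer must also resolve hrefs across WADL
# files; this only collapses the hierarchy.
import xml.etree.ElementTree as ET

WADL_NS = "{http://wadl.dev.java.net/2009/02}"

SAMPLE_WADL = """
<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.test">
    <resource path="/resource">
      <resource path="level2">
        <resource path="level3"/>
      </resource>
    </resource>
  </resources>
</application>
"""

def flatten_paths(wadl_xml):
    """Return every resource's full path, with the hierarchy collapsed."""
    root = ET.fromstring(wadl_xml)
    paths = []

    def walk(resource, prefix):
        segment = (resource.get("path") or "").strip("/")
        full = prefix.rstrip("/") + "/" + segment if segment else prefix
        paths.append(full)
        for child in resource.findall(WADL_NS + "resource"):
            walk(child, full)

    for resources in root.findall(WADL_NS + "resources"):
        for resource in resources.findall(WADL_NS + "resource"):
            walk(resource, "")
    return paths

# Flat, hierarchical, or mixed spellings of the same tree all
# normalize to the same list:
# ['/resource', '/resource/level2', '/resource/level2/level3']
```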

I know that the WADL normalizer is open source but I'm not sure where.  David, is it 
on github?

-jOrGe W.

On Oct 25, 2011, at 6:05 PM, Nati Ueno wrote:

 Hi Joe
 
 2011/10/25 Joseph Heck he...@mac.com:
 It sounds like even though most of us hate WADL, it's what we're expending
 effort after to make a consolidated API set. So unless Nati and Ravi want to
 switch to using Swagger (or something else), WADL is the direction we're
 heading.
 
 I'm voting WADL for sure :)
 
 I totally agree with Daryl that reading it is a PITA, and am
 finding (from my part) that the only definitive way to know about writing
 the docs and documenting the authoritative API is to read the underlying
 code. (which is what I suspect Nati likely did with the pull request that
 adds in WADL for the Nova/OpenCompute extension API)
 Nati - do you have your WADL parsing/reading code stashed in a public git
 repo somewhere that I could work with and help expand upon? I'd like to see
 what I can do to modify it to generate some of the interactive docs.
 
 Sorry, it may take time to open source it because of some paperwork.
 But it is just a 300-line script.
 
 I used lxml.objectify
 http://lxml.de/objectify.html
 
 You can read the WADL as a Python object.
 It is very easy to generate something from the WADL if you know the WADL 
 structure.
 
 from lxml import objectify
 
 xsd_root = objectify.parse(PATH2WADL).getroot()
 xsd_root.resource_type     # get resource types
 xsd_root.iterchildren()    # get children
 xsd_root.get('attribute')  # get attributes
 
 
 On Oct 25, 2011, at 2:56 PM, Jorge Williams wrote:
 
 We've done quite a bit of work to get high quality documentation from a
 WADL,  in fact we are using some of this today.  We've taken most of the
 hard work re: parsing the WADL, at least for the purpose of generating docs
 from it and of writing code that can read it (though that latter part can use
 a bit more work).
 We are also working to add WADL support in Repose, which we presented at the
 summit, you can find the presentation here:
 https://github.com/rackspace/repose/tree/master/documentation/presentations.
 The plan there is to have an HTTP proxy that can do validation of a service
 on the fly.  When it's done, you could, for example, turn this on when you
 run functional tests and get a gauge as to what your API coverage is and
 track both client and service API errors.
 Other API tools like Apigee and Mashery already have support for WADL.  In
 fact apigee maintains an extensive wadl library for common
 APIs: https://github.com/apigee/wadl-library.  There is some WADL support in
 python as well, though I haven't tested it first hand.
 So obviously, I'd vote for WADL.
 I haven't looked at Swagger too deeply, at first glance it *seems* to be
 missing some stuff -- but I'll have to study it in detail to be sure. (How
 do you define acceptable media types, matrix parameters, that a particular
 HTTP header is required?)
 I don't like the fact that it tries to describe the format of the message as
 well as the HTTP operations.  I'd rather take the approach that WADL takes
 which is to rely on existing schema languages like XML Schema and JSON
 Schema.
 What I do like about Swagger is that you seem to be able to generate some
 really cool interactive documentation from it.  I really like their API
 explorer feature for example:   You

Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Jorge Williams
The WADL should be complete for Nova.  There are a couple of error fixes that 
I've completed but haven't pushed up yet.  I'll try to get to those tomorrow 
and I'll look over Nachi's contributions as well.

What's not done in Nova is documenting all of the extensions.  I'm working on 
that and will be checking those in soon as well.

-jOrGe W.

On Oct 25, 2011, at 9:56 PM, Joseph Heck wrote:

 The WADL is unfortunately not complete for Nova, Glance, and Quantum  - I 
 believe Keystone has been keeping it quite up to date as the changes to the 
 API have been being made. Nachi's made a couple of pull requests today for 
 updates to the WADL related to the OpenStack API, and offered to help create 
 a WADL (which didn't exist previously) for Quantum.
 
 -joe
 
 On Oct 25, 2011, at 7:25 PM, Ziad Sawalha wrote:
 Hi Nati - I might be opening a can of worms here, but I thought the API spec 
 and WADL were complete and we were working on implementing it. It sounds to 
 me like you are doing the reverse and matching the WADL to the current state 
 of the code. There's value in that, but I know it will cause problems for 
 anyone trying to rely on and code to the spec (which I know we are).
 
 Z
 
 
 
 On Oct 25, 2011, at 4:00 PM, Nati Ueno nati.u...@gmail.com wrote:
 
 Hi Joe, Anne
 
 I'm working on the WADL of OpenStack Diablo in order to generate both a
 test list and API docs from the WADL.
 I wrote a simple script which generates a simple API list from a WADL. It
 is very helpful.
 
 Nova and Keystone have WADLs, and Ravi@HP is working on one for Glance.
 Nova's WADL is inconsistent with the code of Nova; I am also fixing it.
 And also, I wrote an admin API WADL and extensions WADL for Nova. (The
 bug Joe mentioned:
 https://bugs.launchpad.net/openstack-manuals/+bug/881621)
 
 Personally, I hate WADL!!  Authoring WADL is terrible.
 However, I don't know of any other way to describe API specs clearly.
 
 Generating something automatically may be kind of a dream (or a
 nightmare :) ).
 However, clear specs are definitely needed for QA.
 
 QA Team, let me know how the Docs Team can work with you here.
 Thanks! Anne
 

Re: [Openstack] Some updates to REST API specs

2011-10-20 Thread Jorge Williams
We had extended discussions about the HTTP error code that we returned for rate 
limiting while discussing the compute API.  The issue is that we allow users to 
discover and query their rate limits, so an over-limit response should be in 
the 400 range because we see it as a client error.  None of the codes fit 
exactly right, but we felt 413 fit the best for that use case, because it 
provided for a Retry-After header, and if you think about it, Request Entity Too 
Large makes sense because if you've been rate limited we'd allow 0 bytes on the 
request.  Still, it was somewhat ambiguous. Services that don't provide 
queryable rate limits return an error in the 500 range (503), essentially 
saying: hey, I'm being overloaded right now, back off.

The nice thing about 429, is that it's a 400 level code specific for rate 
limiting that's completely unambiguous.  If there's even an effort to 
standardize around that code I think that we should support it in the next 
revision of our APIs.
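As an illustration of the behavior under discussion -- an over-limit request answered with a 4xx code plus a Retry-After header -- here is a toy fixed-window limiter. This is a sketch only: the class name, limit, and window are made up and it is not Repose or any real middleware:

```python
import time

class RateLimiter:
    """Toy fixed-window limiter illustrating the 413-vs-429 discussion:
    an over-limit request gets a 4xx client-error code plus a Retry-After
    header telling the caller when to come back."""

    def __init__(self, limit=10, window=60, clock=time.time):
        self.limit, self.window, self.clock = limit, window, clock
        self.window_start, self.count = clock(), 0

    def check(self):
        now = self.clock()
        if now - self.window_start >= self.window:
            # New window: reset the counter.
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.limit:
            retry_after = int(self.window - (now - self.window_start)) + 1
            # 429 Too Many Requests is unambiguous; 413 was the earlier
            # compromise because it, too, allowed a Retry-After header.
            return 429, {"Retry-After": str(retry_after)}
        return 200, {}
```

A client that understands the header can sleep for the indicated seconds and retry, rather than hammering the service.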

-jOrGe W.

On Oct 20, 2011, at 11:35 AM, Caitlin Bestler wrote:

 Russel Bryant wrote:
 
 We need to add these codes to maintain compliance.
 
 https://tools.ietf.org/html/draft-nottingham-http-new-status-02
 
 To maintain compliance with what?  I ask since the linked document is a 
 draft.
 
 Waiting until a draft was an official RFC would be an excessive delay. The 
 IETF process is thorough but far from fleet of foot.
 However, waiting for a draft to be a workgroup product before saying that a 
 project SHOULD comply with it makes sense.
 Of course if we agree that it is a good idea then we MAY comply with it. I'm 
 not enough of an HTTP/REST expert to have an opinion on that.
 
 

This email may include confidential information. If you received it in error, 
please delete it.




Re: [Openstack] Guidelines for OpenStack APIs

2011-10-11 Thread Jorge Williams
++ Like the idea..yes I think we should aim to include all OpenStack APIs -- 
whatever that means :-)

-jOrGe W.

On Oct 11, 2011, at 9:52 AM, Jay Pipes wrote:

 On Tue, Oct 11, 2011 at 10:08 AM, Mark McLoughlin mar...@redhat.com wrote:
 On Tue, 2011-10-11 at 16:11 +1100, Mark Nottingham wrote:
 +1 (sorry for the lag, been travelling).
 
 I'd like to start two wiki pages; one collecting goals for the APIs,
 one for collecting common patterns of use in the APIs (not rules, not
 even guidelines).
 
 Yeah, it'd be awesome to have the common patterns described somewhere.
 It's hard to discuss potential guidelines without a crisp summary of the
 current patterns employed by the APIs.
 
 FWIW, we made some effort to do this for a bunch of related REST APIs.
 The result is neither complete nor perfect, but it's something like what
 I'd imagine would work well here:
 
  http://fedoraproject.org/wiki/Cloud_APIs_REST_Style_Guide
 
 Btw, which APIs are we talking about here? Just compute and storage. Or
 image and identity too?
 
 Definitely Images and Identity too. :)
 
 -jay
 





[Openstack] Repose now on GitHub

2011-10-03 Thread Jorge Williams

Thanks to all who attended our chat Repose today.  Just wanted to send a quick 
message to let you know that the code is available today on GitHub!

https://github.com/rackspace/repose

-jOrGe W.




Re: [Openstack] Guidelines for OpenStack APIs

2011-09-22 Thread Jorge Williams
Starting from a set of goals makes sense to me as well.  I had put
together a sample set of goals for the PPB proposal a week or so ago and
some sample guidelines.  You can find them here. Standards for standards
sake don't make sense to me either.

http://wiki.openstack.org/Governance/Proposed/APIManagement-sampleGuidelines

Mind you these are just samples.

I also think it's hard to do this on etherpad.  I think that something
like a wiki with Disqus-like commenting capabilities would work best.  That way we
can separate the text from the discussion a bit better.  I understand
etherpad has the chat thing on the side, but it doesn't support threading.
jOrGe W.


-Original Message-
From: Bryan Taylor btay...@rackspace.com
Date: Thu, 22 Sep 2011 13:29:07 -0500
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Guidelines for OpenStack APIs

The etherpad thing is already somewhat hard to read. I wonder if we
could try first to simply get a list of topics that we want guidelines
on without first trying to say what the standard is. My experience
trying to come up with such standards internally is that they will
generate a huge amount of discussion.

Also, we should have some goals for *why* we are creating standards, so
that we can push back if people go too far, or defend it if we get push
back from people who just don't want any standards. We don't want
standards for standards' sake, but standards that deliver some specific,
tangible goals.

On 09/18/2011 10:38 PM, Jonathan Bryce wrote:
 After the mailing list discussion around APIs a few weeks back, several
 community members asked the Project Policy Board to come up with a
 position on APIs. The conclusion of the PPB was that each project's PTL
 will own the definition and implementation of the project's official
 API, and APIs across all OpenStack projects should follow a set of
 guidelines that the PPB will approve. This will allow the APIs to be
 tied to the functionality in the project while ensuring a level of
 consistency and familiarity across all projects for API consumers.

 We've started an Etherpad to collect input and comments on suggested
 guidelines. It's a little messy but proposed guidelines are set off with
 an asterisk (*):

 http://etherpad.openstack.org/RFC-API-Guidelines

 Feel free to add comments on the Etherpad, the list or give me feedback
 directly.

 Jonathan.











Re: [Openstack] Guidelines for OpenStack APIs

2011-09-19 Thread Jorge Williams


On 9/19/11 1:03 AM, Mark McLoughlin mar...@redhat.com wrote:

The spec is actually quite clear on the difference between PUT and POST:

  The fundamental difference between the POST and PUT requests is
   reflected in the different meaning of the Request-URI. The URI in a
   POST request identifies the resource that will handle the enclosed
   entity. That resource might be a data-accepting process, a gateway to
   some other protocol, or a separate entity that accepts annotations.
   In contrast, the URI in a PUT request identifies the entity enclosed
   with the request

Right.  Another important difference between PUT and POST is that PUT is
idempotent -- see section 9.1.2.  In reality both PUT and POST can be used
for create and update. Some further thoughts on this in
http://etherpad.openstack.org/RFC-API-Guidelines
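A minimal sketch of the distinction (hypothetical in-memory store, not any real OpenStack code): POST to the collection lets the server mint the URI, so repeating it creates two resources, while PUT to a known URI is create-or-replace and can be safely retried:

```python
import uuid

class ServerStore:
    """Sketch of the PUT/POST semantics quoted above: POST to a collection
    creates (the server assigns the URI; not idempotent), PUT to a known
    URI creates-or-replaces (idempotent)."""

    def __init__(self):
        self.servers = {}

    def post(self, body):
        # The collection URI handles the entity; a new id is minted each time,
        # so two identical POSTs yield two distinct resources.
        server_id = str(uuid.uuid4())
        self.servers[server_id] = body
        return 201, server_id

    def put(self, server_id, body):
        # The request URI *is* the entity; repeating the same PUT leaves the
        # store in the same state (idempotent per HTTP/1.1 section 9.1.2).
        created = server_id not in self.servers
        self.servers[server_id] = body
        return (201 if created else 204), server_id
```

So a client can blindly retry a timed-out PUT, but must check before retrying a POST.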

-jOrGe W.





Re: [Openstack] API Spec

2011-08-23 Thread Jorge Williams
I'm not proposing that the API be frozen solid.  I'm all for making changes to 
the API in a backward compatible manner.  In fact, I think most changes should 
come in this way, which is why versions of the *core* API shouldn't have to 
change frequently, but the API itself may be under constant rapid development.  
New features should be added in a backward compatible manner as frequently as 
you can dream them up.  Go to town.

We are in a different position than github, though.  To my knowledge,  GitHub, 
Inc is in complete control of the API.  They can add backward compatible 
changes without jumping through hoops. In our case, anybody has access to the 
source code for our APIs and different vendors may build solutions based off of 
them and may extend them. The vendors may not want to contribute the changes 
back to the main repo -- either because it doesn't make a lot of sense (think 
pricing extensions) or because the vendor wants a way in which to differentiate 
their deployment from other deployments in order to find a niche and remain 
competitive. Certainly that's their prerogative -- if we can make them 
successful then OpenStack will be successful.  This means, however that the 
development of our APIs, unlike the github APIs,  can happen in a completely 
decentralized manner. Imagine that Rackspace comes up with a feature to perform 
backups and places it in /backups.  HP comes up with its own backup feature 
and also puts it in /backups. The features are different so a client expecting 
Rackspace backup will break when it encounters an HP /backup.  The idea of 
extensions is to prevent this from happening.   What's actually in the core is 
what gets protected  because that's the functionality that the client can 
*always* depend on.  If Rackspace and HP want to get together and work out the 
differences between their backup features and propose them as part of the core 
-- there is a method by which this can be accomplished but the process then 
becomes more deliberate and centralized.  That said the feature could have been 
developed rapidly and in a completely decentralized manner.
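The collision scenario above is exactly what vendor-prefixed extensions are meant to prevent. Here is a hypothetical sketch -- the prefixes, function names, and routing table are illustrative, not the actual OpenStack extension mechanism:

```python
# Hypothetical sketch of vendor-prefixed extension routing: instead of both
# vendors claiming /backups, each mounts its feature under its own prefix,
# so clients can never confuse the two incompatible features.
ROUTES = {}

def register_extension(vendor_prefix, resource, handler):
    """Mount an extension resource under a vendor namespace."""
    path = "/%s-%s" % (vendor_prefix.lower(), resource)
    if path in ROUTES:
        raise ValueError("extension path already claimed: " + path)
    ROUTES[path] = handler
    return path

# Two different backup features coexist without clashing:
rax_path = register_extension("RAX", "backups", lambda req: "rackspace backup")
hp_path = register_extension("HP", "backups", lambda req: "hp backup")
```

If the two vendors later converge on a common design, the agreed feature can be promoted into core at an unprefixed path through the deliberate process described above.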

-jOrGe W.


On Aug 23, 2011, at 12:46 AM, Christopher MacGown wrote:

This is just a robustness principle argument. If a client breaks because we've 
added a new key to a JSON dict, it's the fault of the client's developer. If 
the client breaks because the primitives have changed and a dictionary has been 
changed into a list, then it's ours. The features that have been proposed that 
would drive functionality in the API from nova-core aren't changing the 
semantics of the API, they're adding additional fields.
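A sketch of what such a robust client looks like in practice (the field names are illustrative): it copies only the keys it knows, ignores any new ones a later API revision may add, and fails loudly only when a known field's type changes:

```python
import json

# Fields this client version understands; anything else is ignored.
KNOWN_FIELDS = ("id", "name", "status")

def parse_server(payload):
    """Robustness-principle sketch: tolerate additive changes (new keys),
    but treat a type change on a known field -- e.g. a dict turning into a
    list -- as a genuine server-side incompatibility."""
    doc = json.loads(payload)["server"]
    server = {k: doc[k] for k in KNOWN_FIELDS if k in doc}
    if not isinstance(server.get("id"), (str, int)):
        raise TypeError("server id changed type; incompatible API change")
    return server

# A payload with an extra, unknown field does not break this client:
newer = ('{"server": {"id": "42", "name": "web1", '
         '"status": "ACTIVE", "flavorRef": "2"}}')
```

A client written this way survives additive API growth; only a change to the primitives breaks it, which is the provider's fault, not the client's.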

There are examples of well-written APIs that commonly add new features to their 
API without breaking backward compatibility. If you take, for example, github 
(not to start a git/bzr argument, it's just late and I know the API), they 
regularly add features to their v2 and v3 API. The well-written clients don't 
break when new features come out, even the mediocrely written ones like mine 
don't break when new features come out.

If even mediocre programmers like me can handle programming against a constantly 
changing API without causing any breakage in my client, I'm not sure what 
justification there is to freeze the API solid and not allow additional 
features. We shouldn't be building our API for people who write terrible 
clients that break when a feature gets added, because it'll waste everyone's 
time waiting for new features that may not get common support across service 
providers, and because no one will bother using those clients anyway.


Christopher MacGown
Piston Cloud Computing, Inc.
w: (650) 24-CLOUD
m: (415) 300-0944
ch...@pistoncloud.com

On Aug 22, 2011, at 9:18 PM, Jorge Williams wrote:

Comments inline

On Aug 22, 2011, at 9:05 PM, Vishvananda Ishaya wrote:

Inline
On Aug 22, 2011, at 4:59 PM, Jorge Williams wrote:

Hi Vish,

I don't have a problem moving the spec out of docs manuals and into another 
project even the nova repo.   But, I do have a number of issues with the 
approach that you're proposing. First, I think that fundamentally there should 
be a decoupling of the spec and the implementation.   If you have the spec 
generated from the code than essentially the spec is whatever the code does. 
It's very difficult to interoperate with specs that are generated this way as 
the specs tend to be very brittle and opaque (since you have to study the 
code). If you introduce a bug in the code, that bug filters its way all the 
way to the spec (this was a big problem with SOAP and CORBA). It's difficult to 
detect errors because you cant validate. By keeping the implementation and the 
spec separate you can validate one against the other.

The spec SHOULD BE exactly what the code does, verifiably.  I'm proposing that 
we have exactly the existing document, only that the xml and json examples in 
the spec are actually generated from the code so that we know they are accurate.

Re: [Openstack] API Spec

2011-08-22 Thread Jorge Williams
Hi Vish,

I don't have a problem moving the spec out of docs manuals and into another 
project even the nova repo.   But, I do have a number of issues with the 
approach that you're proposing. First, I think that fundamentally there should 
be a decoupling of the spec and the implementation.   If you have the spec 
generated from the code than essentially the spec is whatever the code does. 
It's very difficult to interoperate with specs that are generated this way as 
the specs tend to be very brittle and opaque (since you have to study the 
code). If you introduce a bug in the code, that bug filters its way all the 
way to the spec (this was a big problem with SOAP and CORBA). It's difficult to 
detect errors because you cant validate. By keeping the implementation and the 
spec separate you can validate one against the other.

Second, I don't think that the core OpenStack API should change with every 
OpenStack release. There are a number of efforts to provide multiple 
implementation of an existing OpenStack API.  We should encourage this, but it 
becomes difficult if the core spec is in constant flux.  Certainly you can use 
the extension mechanism to bring functionality out to market quickly, but the 
process of deciding what goes into the core should be more deliberate. Really 
good specs, shouldn't need to change very often, think HTTP, X11, SMTP, etc. We 
need to encourage clients to write support for our spec and we need to also 
encourage other implementors to write implementations for it. These efforts 
become very difficult if the spec is in constant flux.

-jOrGe W.

On Aug 22, 2011, at 5:43 PM, Vishvananda Ishaya wrote:

Hey Everyone,

We discussed at the Diablo design summit having API spec changes be proposed 
along with code changes and reviewed according to the merge process that we use 
for code.  This has been impossible up until now because the canonical spec has 
been in the openstack-manuals project.

My suggestion is that we move the openstack-compute spec into the nova source 
tree.  During a six-month release we can propose changes to the spec by 
proposing along with the code that changes it.  In the final freeze for the 
release, we can increment the spec version number and copy the current version 
of the spec into openstack-manuals and that will be the locked down spec for 
that release.

This means that openstack 1.1 will be the official spec for diablo, at which 
point we will start working on a new api (we can call it 1.2 but it might be 
best to use a temporary name like 'latest') during the essex release cycle, 
then at essex release we lock the spec down and it becomes the new version of 
the openstack api.

Ultimately I would like the spec to be generated from the code, but as a first 
pass, we should at least be able to edit the future version of the spec as we 
make changes.  I've proposed the current version of the spec here:

https://code.launchpad.net/~vishvananda/nova/add-api-docs/+merge/72506

Are there any issues with this approach?

Vish



Re: [Openstack] API Spec

2011-08-22 Thread Jorge Williams
Comments inline

On Aug 22, 2011, at 9:05 PM, Vishvananda Ishaya wrote:

 Inline
 On Aug 22, 2011, at 4:59 PM, Jorge Williams wrote:
 
 Hi Vish,
 
 I don't have a problem moving the spec out of docs manuals and into another 
 project even the nova repo.   But, I do have a number of issues with the 
 approach that you're proposing. First, I think that fundamentally there 
 should be a decoupling of the spec and the implementation.   If you have the 
 spec generated from the code than essentially the spec is whatever the code 
 does. It's very difficult to interoperate with specs that are generated this 
 way as the specs tend to be very brittle and opaque (since you have to study 
 the code). If you introduce a bug in the code, that bug filters its way all 
 the way to the spec (this was a big problem with SOAP and CORBA). It's 
 difficult to detect errors because you cant validate. By keeping the 
 implementation and the spec separate you can validate one against the other. 
 
 The spec SHOULD BE exactly what the code does, verifiably.  I'm proposing 
 that we have exactly the existing document, only that the xml and json 
 examples in the spec are actually generated from the code so that we know 
 they are accurate.  This is a minor point to me though, and could be 
 accomplished by testing the spec against the code as well.

Let's say you generate the samples from the code.  Everything works great.  The 
samples represent exactly what the code is doing.  A client developer takes a 
look at the spec samples and develops an app against it.  Then a merge request 
goes through that changes the format of the JSON inadvertently in a subtle way. 
 Now the client is out of synch and breaks. The client dev looks back at the 
spec: not only has the code changed, but the spec has changed too, since it's 
generated -- the client therefore assumes the bug is on his end -- so he 
changes the code to match the spec.  Things work great, until the service 
developer notices that the merge changed the format of the JSON and changes it 
back to what it used to be -- now the client is broken again.  Anyway, that 
illustrates why these types of approaches are brittle. 

Here's what I think is a better approach.  Design your API and write the 
samples by hand. You may design schema for your XML representation and verify 
that your samples validate. Write tests that check the generated XML and JSON  
that comes out of your service against the hand written versions and, if 
applicable, against the schema.  You know the API is done when the tests pass.  
If a merge  comes in that changes the representation then the tests will fail. 
Either way there's not much you can do in your code that changes the spec 
from under your client's feet because the spec and the code are separate.  Sure 
during early development the spec and the code may be out of synch, but at 
least the client knows where you're headed and can work to meet you there.

So yes, you should be using those samples for testing. You just shouldn't be 
generating the samples from the code. The code SHOULD DO what the spec says 
verifiably -- not the other way around.


 
 
 Second, I don't think that the core OpenStack API should change with every 
 OpenStack release. There are a number of efforts to provide multiple 
 implementation of an existing OpenStack API.  We should encourage this, but 
 it becomes difficult if the core spec is in constant flux.  Certainly you 
 can use the extension mechanism to bring functionality out to market 
 quickly, but the process of deciding what goes into the core should be more 
 deliberate. Really good specs, shouldn't need to change very often, think 
 HTTP, X11, SMTP, etc. We need to encourage clients to write support for our 
 spec and we need to also encourage other implementors to write 
 implementations for it. These efforts become very difficult if the spec is 
 in constant flux.
 
 Incrementing the version number is only a problem if we fail to support the 
 old versions.  At the rate we are adding new functionality, I think we will 
 easily need a new spec every six months for the foreseeable future.  If there 
 are no reasonable changes in a six month period, we can skip a release and 
 not have a new version of the spec
 

Things have been changing because we've been working hard to realize what the 
core API should be.  Once we have this settled, I don't see a big reason why 
the core spec should change every 6 months. In fact, functionality wise the 
core API you see in 1.1 today hasn't really changed all that much in comparison 
to the 1.0 API.


-jOrGe W.





Re: [Openstack] API Spec

2011-08-22 Thread Jorge Williams

On Aug 22, 2011, at 8:59 PM, Vishvananda Ishaya wrote:

 Inline
 
 On Aug 22, 2011, at 4:15 PM, Jay Pipes wrote:
 
 It may be just me, but having DocBookXML in the source tree is hideous
 to me. Not only does it clutter the source tree with non-RST
 documentation, but as you know, reviewing diffs of XML just about
 makes people want to slit their throats with a spoon. There is a
 special type of person (Anne!) who seems to be impervious to the
 throat-slitting urge and can successfully digest such a review
 request, but for many, the process is too painful.
 
 I hate xml changes as well, but anne + people in manuals don't really have 
 the expertise to know if something belongs in the api.  *-core should be 
 responsible for the spec for the project.


Agreed there should be someone in the loop that can help validate the changes. 
A manuals person alone should not suffice to let those type of changes go into 
the spec.  I think we can adjust our process to be able to accomplish this.   
Doesn't gerrit help for this sort of stuff? 

 
 In addition to the above gripe, the development, stability, and
 enhancement of the OpenStack APIs can and should (IMHO) be managed
 separately from the source code of the project. The API can evolve in
 the openstack-manuals project and developers can code the OpenStack
 subproject to match the API documented in openstack-manuals project
 (at the tagged version). So, for instance, if the compute API needs to
 change, the API documentation changes would be proposed in
 openstack-manuals, reviewed, discussed and approved. The new API docs
 would be published to something like
 http://docs.openstack.org/compute/2.0/ and developers coding Essex
 features having to do with implementing such a 2.0 API would refer to
 the API documentation there while writing the feature...
 
 If we want to separate the xml code out, we can do a separate nova-spec repo 
 for the spec, but this should be managed by nova-core

Having a separate nova-spec repo is a good idea. I've never been a fan of 
having them all in a single repo.  You still want a writer in the loop to check 
style, apply the latest templates, do general editing etc.

-jOrGe W.




Re: [Openstack] API Spec

2011-08-22 Thread Jorge Williams
Inline

On Aug 22, 2011, at 9:12 PM, Anne Gentle wrote:

I think it makes sense to have an openstack-api project with all the API docs 
(specs and learning materials) gathered in one place. I think it's preferable 
to have the API separated out from the code for several reasons - ease of 
review, ease of check out, also for learning materials for the API itself.


+1

I'd envision these would go in it for starters:

Compute API (docbook, core + extensions)
Glance API (RST to docbook, core)
Keystone API (docbook, incubation, core + extensions)
Swift API (docbook, core)

Notes:
- Yep, Keystone moved their docbook out of the core code base due to the 
overhead associated with a large-ish document ensconced with the code.
- Glance's API is documented in a single RST page. We have a simple RST to 
docbook tool inhouse at Rackspace that lets me take updates and move them into 
docbook.
- Just today I had a request to bring the Load Balancing API into 
openstack-manuals for comments and review from 
http://wiki.openstack.org/Atlas-LB, since our wiki doesn't enable comments. I'm 
not sure what to do with nascent APIs for review that aren't yet in incubation.

So these are some of my observations having worked with the API docs for a 
while, to consider while seeking the ideal solution:
Incubation - how do we indicate an API is incubated?
Pre-incubation/review period - how could a team offer an API up for community 
review and commenting?
Core - how do we indicate what is core and how to get extensions? At Rackspace 
the Compute API team is working on a solution to getting extensions and telling 
people how to use them once they're available.
Source - DocBook is the source for the two biggest API docs, Compute and Object 
Storage, Keystone is a close third, and I can get DocBook out of Glance. Do we 
need to set DocBook as the standard source?
Output - I'd also like to focus on not only API specs but also deliverables 
that help people learn the APIs, such as the frameworks recently opensourced by 
Mashery (example: http://developer.klout.com/iodocs) and Wordnik 
(http://swagger.wordnik.com/). If we also deliver this type of web tool, we'd 
also need JSON or XML as source files (many of which are already embedded into 
the DocBook).

I'd like the best of both worlds - API specs and self-documenting APIs. I think 
we can get there, and I think a separate API project with a core review team 
moves us in that direction.


+1

Thanks for the good discussion here.
Anne


Anne Gentle
http://www.facebook.com/conversationandcommunity
my blog http://justwriteclick.com/ | my book
http://xmlpress.net/publications/conversation-community/ |
LinkedIn http://www.linkedin.com/in/annegentle |
Delicious http://del.icio.us/annegentle |
Twitter http://twitter.com/annegentle

On Mon, Aug 22, 2011 at 7:49 PM, Jan Drake jan_dr...@hotmail.com wrote:
+1




On Aug 22, 2011, at 5:06 PM, Jay Pipes jaypi...@gmail.com wrote:

 ++

 On Mon, Aug 22, 2011 at 7:59 PM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 Hi Vish,
 I don't have a problem moving the spec out of docs manuals and into another
 project even the nova repo.   But, I do have a number of issues with the
 approach that you're proposing. First, I think that fundamentally there
 should be a decoupling of the spec and the implementation.   If you have the
 spec generated from the code than essentially the spec is whatever the code
 does. It's very difficult to interoperate with specs that are generated this
 way as the specs tend to be very brittle and opaque (since you have to study
 the code). If you introduce a bug in the code, that bug filters its way all
 the way to the spec (this was a big problem with SOAP and CORBA). It's
 difficult to detect errors because you cant validate. By keeping the
 implementation and the spec separate you can validate one against the other.

 Second, I don't think that the core OpenStack API should change with every
 OpenStack release. There are a number of efforts to provide multiple
 implementation of an existing OpenStack API.  We should encourage this, but
 it becomes difficult if the core spec is in constant flux.  Certainly you
 can use the extension mechanism to bring functionality out to market
 quickly, but the process of deciding what goes into the core should be more
 deliberate. Really good specs, shouldn't need to change very often, think
 HTTP, X11, SMTP, etc. We need to encourage clients to write support for our
 spec and we need to also encourage other implementors to write
 implementations for it. These efforts become very difficult if the spec is
 in constant flux.
 -jOrGe W.
 On Aug 22, 2011, at 5:43 PM, Vishvananda Ishaya wrote:

 Hey Everyone,
 We discussed at the Diablo design summit having API spec changes be proposed
 along with code changes and reviewed according to the merge process that we
 use for code.  This has been

Re: [Openstack] API Spec

2011-08-22 Thread Jorge Williams

I say we up the version number when we can't ensure backward compatibility.  As 
to how long older versions should be supported -- hard to say.  It depends on a 
lot of factors, and at the end of the day it may come down to how popular a 
version is and how willing and able operators and client devs are to upgrade.

-jOrGe W.


On Aug 22, 2011, at 8:49 PM, Thor Wolpert wrote:

 I agree the Specs shouldn't change often ... but just to use your
 examples, they were all simplifications of larger specs that took
 years to create.
 
 If an API changes and is deprecated, how long does backwards
 compatibility stay in place?
 
 Thanks,
 Thor W
 
 On Mon, Aug 22, 2011 at 5:49 PM, Jan Drake jan_dr...@hotmail.com wrote:
 +1
 
 
 
 
 On Aug 22, 2011, at 5:06 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 ++
 
 On Mon, Aug 22, 2011 at 7:59 PM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 Hi Vish,
 I don't have a problem moving the spec out of docs manuals and into another
 project even the nova repo.   But, I do have a number of issues with the
 approach that you're proposing. First, I think that fundamentally there
 should be a decoupling of the spec and the implementation.   If you have 
 the
 spec generated from the code than essentially the spec is whatever the code
 does. It's very difficult to interoperate with specs that are generated 
 this
 way as the specs tend to be very brittle and opaque (since you have to 
 study
 the code). If you introduce a bug in the code, that bug filters its way 
 all
 the way to the spec (this was a big problem with SOAP and CORBA). It's
 difficult to detect errors because you cant validate. By keeping the
 implementation and the spec separate you can validate one against the 
 other.
 
 Second, I don't think that the core OpenStack API should change with every
 OpenStack release. There are a number of efforts to provide multiple
 implementation of an existing OpenStack API.  We should encourage this, but
 it becomes difficult if the core spec is in constant flux.  Certainly you
 can use the extension mechanism to bring functionality out to market
 quickly, but the process of deciding what goes into the core should be more
 deliberate. Really good specs, shouldn't need to change very often, think
 HTTP, X11, SMTP, etc. We need to encourage clients to write support for our
 spec and we need to also encourage other implementors to write
 implementations for it. These efforts become very difficult if the spec is
 in constant flux.
 -jOrGe W.
 On Aug 22, 2011, at 5:43 PM, Vishvananda Ishaya wrote:
 
 Hey Everyone,
 We discussed at the Diablo design summit having API spec changes be 
 proposed
 along with code changes and reviewed according to the merge process that we
 use for code.  This has been impossible up until now because the canonical
 spec has been in the openstack-manuals project.
 My suggestion is that we move the openstack-compute spec into the nova
 source tree.  During a six-month release we can propose changes to the spec
 by proposing along with the code that changes it.  In the final freeze for
 the release, we can increment the spec version number and copy the current
 version of the spec into openstack-manuals and that will be the locked down
 spec for that release.
 This means that openstack 1.1 will be the official spec for diablo, at 
 which
 point we will start working on a new api (we can call it 1.2 but it might 
 be
 best to use a temporary name like 'latest') during the essex release cycle,
 then at essex release we lock the spec down and it becomes the new version
 of the openstack api.
 Ultimately I would like the spec to be generated from the code, but as a
 first pass, we should at least be able to edit the future version of the
 spec as we make changes.  I've proposed the current version of the spec
 here:
 https://code.launchpad.net/~vishvananda/nova/add-api-docs/+merge/72506
 Are there any issues with this approach?
 Vish
 
 This email may include confidential information. If you received it in
 error, please delete it.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help

Re: [Openstack] Physical host identification

2011-07-16 Thread Jorge Williams
Right, so we should really be hashing this with the tenant ID as well.

-jOrGe W.
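For illustration, a minimal sketch of the tenant-salted hashing being suggested here (the function name, hash algorithm, and encoding are assumptions made for the sketch, not necessarily what Nova actually ships):

```python
import hashlib

def host_id(tenant_id: str, host: str) -> str:
    # Salt the physical hostname with the tenant ID so the same host
    # maps to different opaque IDs for different tenants: a tenant can
    # still tell which of its own servers share a host, but cannot
    # correlate (or count) hosts across the whole deployment.
    return hashlib.sha224((tenant_id + host).encode("utf-8")).hexdigest()

a = host_id("tenant-a", "compute-node-01")
b = host_id("tenant-b", "compute-node-01")
assert a != b                                     # no cross-tenant correlation
assert a == host_id("tenant-a", "compute-node-01")  # stable per tenant
```

With a scheme like this, two tenants whose servers land on the same physical machine see different hostId values, which addresses the concern Chris raises below about revealing a provider's global host count.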

On Jul 15, 2011, at 6:16 PM, Chris Behrens wrote:

 I think it's sensitive because one could figure out how many hosts a SP has 
 globally... which a SP might not necessarily want to reveal.
 
 - Chris
 
 
 On Jul 15, 2011, at 3:34 PM, karim.allah.ah...@gmail.com wrote:
 
 On Fri, Jul 15, 2011 at 11:31 PM, Chris Behrens 
 chris.behr...@rackspace.com wrote:
 Nevermind.  Just found a comment in the API spec that says hostID is 
 unique per account, not globally.  Hmmm...
 
  This is weird ! I can't find anything in the code that says so !! hostID is 
  just a hashed version of the 'host', which is set as the 'hostname' of the 
  physical machine, and this isn't user sensitive. So, it's supposed to be a 
  global thing !
  
  Can somebody explain how this is user sensitive ?
 
 
 
 On Jul 15, 2011, at 2:27 PM, Chris Behrens wrote:
 
 I see the v1.1 API spec talks about a 'hostId' item returned when you list 
 your instances (section 4.1.1 in the spec).  These should be the same 
 thing, IMO.
 
 I think you're right, though.  I don't believe we have any sort of 'hostId' 
 today, since hosts just become available by attaching to AMQP.
 
 - Chris
 
 On Jul 15, 2011, at 1:16 PM, Glen Campbell wrote:
 
 I understand that we're all familiar with virtualization and its benefits. 
 However, in the Real World, those of us who run clouds often need to work 
 with physical devices. I've proposed a blueprint and spec for a /hosts 
 admin API resource that would return information on physical hosts. 
 However, I don't believe that there's any way for us to actually identify 
 a specific server (I'm actually hoping I'm mistaken about this, because 
 that would make my life easier).
 
 So, to get information about a specific host, you'd use /host/{id} — but 
 what should go in the {id} slot?
 
 We'd also like to include this data elsewhere; for example, in error 
 messages, it might help to know the physical device on which a server is 
 created.
 
 
 This email may include confidential information. If you received it in 
 error, please delete it.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 This email may include confidential information. If you received it in 
 error, please delete it.
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 -- 
 Karim Allah Ahmed.
 LinkedIn
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 This email may include confidential information. If you received it in error, 
 please delete it.
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cross-zone instance identifiers in EC2 API - Is it worth the effort?

2011-07-08 Thread Jorge Williams
I'm with Ewan on this point:   One of the nice things about having a contract is 
that it clearly designates what's a bug and what isn't.  If the spec says the 
ID is a string and the client assumes it's an integer, then the client is at 
fault.  End of story.  It would be a different issue if the contract didn't 
specify what an ID was or if the contract only allowed for integers.

It's bad enough that we are spending resources trying to support an API which 
isn't open and which we don't control, now on top of that we want to support 
buggy clients that don't follow the spec?  Where do we draw the line? I'm all 
for being flexible and forgiving in what we expect from clients, but I don't 
think we should be making serious engineering decisions based on the fact that 
a client developer made a bad assumption or didn't read the spec.

If we know that there are clients out there that make the assumptions then 
contact the folks that maintain the client and ask them to adjust their code.  
If they give you grief, point to the contract and that should settle the issue. 
It's to their interest to support as many deployments of the API as possible. 
It's not our responsibility to support their buggy code.

Though I have some reservations about it, I'm okay offering some support for 
the EC2 contract. What I'm not okay with is in being in the business of reverse 
engineering Amazon's EC2 implementation.  Those are two very different things 
and I think the latter is orders of magnitude more difficult.  In fact I would 
argue that reverse engineering EC2 is a project onto itself. That means that 
when EC2 has a bug, we need to replicate it etc.  That's almost impossible and 
it makes it really easy for Amazon to disrupt our efforts if they so wish.  
What's more, it gets in the way of our ability to innovate and break new ground.

-jOrGe W.

On Jul 8, 2011, at 7:39 AM, Soren Hansen wrote:

 2011/7/8 Ewan Mellor ewan.mel...@eu.citrix.com:
 The whole point of supporting the EC2 API is to support people's
 existing tools and whatnot. If we follow the spec, but the clients
 don't work, we're doing it wrong.
 True enough.  However, in the case where we've got a demonstrated divergence 
 from the spec, we should report that against the client.  I agree that we 
 want to support pre-existing tools, but it's less clear that we want to 
 support _buggy_ tools.
 
 We do. We have to. We have no way to know what sort of clients people
 are using. We can only attempt to check the open source ones, but
 there's likely loads of other ones that people have built themselves
 and never shared. Not only do people have to be able, motivated and
 allowed to change their tools to work with OpenStack, they also need
 to *realise* that this is something that needs to happen. We can't
 assume the symptoms they'll experience even gives as much as a hint
 that the ID's they're getting back is too long. They may just get a
 general error of some sort.
 
 If we a) expect people to consume the EC2 API we expose, and (more
 importantly) b) expect ISP's to offer this API to their customers, it
 needs to be as close to just another EC2 region as possible.
 
 If Amazon turn out to be resistant to fixing that problem, then we'll 
 obviously have to accept that and move on, but we should at least give them 
 a chance to respond on that.
 
 Amazon is not the problem. At least not the only problem. I'm not even
 going to begin to guess how many different tools exist to talk to the
 EC2 API.
 
 -- 
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Refocusing the Lunr Project

2011-07-08 Thread Jorge Williams
Chuck,

What does this mean in terms of APIs?  Will there be a separate Volume API?  
Will volumes be embedded in the compute API?

-jOrGe W.


On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:

Openstack Community,

Through the last few months the Lunr team has learned many things.  This
week, it has become clear to us that it would be better to integrate
with the existing Nova Volume code. It is upon these reflections that we
have decided to narrow the focus of the Lunr Project.

Lunr will continue to focus on delivering an open commodity storage
platform that will integrate with the Nova Volume service.  This will
be accomplished by implementing a Nova Volume driver. We will work
with the Nova team, and other storage vendors, to drive the features
needed to provide a flexible volume service.

I believe that this new direction will ensure a bright future for storage
in Nova, and look forward to continuing to work with everyone in making this
possible.

Sincerely,

Chuck Thier (@creiht)
Lunr Team Lead ___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

This email may include confidential information. If you received it in error, 
please delete it.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cross-zone instance identifiers in EC2 API - Is it worth the effort?

2011-07-08 Thread Jorge Williams
HTTP, SMTP, and IMAP and even ANSI C are all open standards.  The specs were 
developed and continue to be developed in the open -- and both clients and 
servers (proprietary and open source)  are very compliant to them.  I'd like to 
propose that our APIs take the same approach. 

You are proposing something different than simply implementing HTTP or SMTP.  
What you are proposing that we try to achieve with EC2 what the  Wine folks 
want to achieve with the Windows API.  It's a different problem. It's a much 
harder problem because it involves reverse engineering and it's prone to more 
risk.

-jOrGe W.

On Jul 8, 2011, at 3:05 PM, Soren Hansen wrote:

 One thing that keeps coming up in this discussion is the issue of
 being tied to an API we don't control.
 
 People... We're *fantastically* privileged that we get to define an
 API of our own. Lots and lots and lots of people and projects spend
 all their time implementing existing (open, but completely static)
 protocols and specifications.
 
 Every HTTP, SMTP, and IMAP server on the planet does it. Every single
 C compiler on the planet does it. All of these are things that have
 been defined a long time ago. You can have all the opinions you want
 about IMAP, but that doesn't mean you can just implement it
 differently. At least not if you expect people to support your stuff.
 When there are ambiguities in the spec, sure, you can insist on taking
 one path even though everyone else has taken a different one, but
 don't expect the rest of the world to change to accommodate you. If
 you want to do offer something better by doing something differently,
 offer it as an alternative that people can switch to once you've won
 them over. Don't make it a prerequisite.
 
 There's a golden rule when implementing things according to an
 existing specification: Be very conservative in what you deliver, and
 be very liberal in what you accept. Otherwise, people. will. use.
 something. else. period.
 
 -- 
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cross-zone instance identifiers in EC2 API - Is it worth the effort?

2011-07-08 Thread Jorge Williams
If the implementation doesn't match the specification, then they are admitting that 
they have a bug.  They broke the contract; we can lobby them to change it or write 
an implementation that does the right thing.  Or we should not be using the 
spec at all.  I'd much rather encourage clients to implement against an Open 
API.

-jOrGe W.


On Jul 8, 2011, at 2:41 PM, Soren Hansen wrote:

 2011/7/8 Jorge Williams jorge.willi...@rackspace.com:
 I'm with Ewan on this point:   One of the nice thing about having a contract 
 is that it clearly designates what's a bug and what isn't.  If the spec says 
 the ID is a string and the client assumes it's an integer, then the client 
 is at fault.  End of story.  It would be a different issue if the contract 
 didn't specify what an ID was or if the contract only allowed for integers.
 
 Answer me this: If the spec says a particular method was called Foo,
 but EC2 actually calls it Bar and every client on the planet calls it
 Bar, would you still insist on following the spec and ignoring what
 the clients do?
 
 Also, if the spec said that two arguments to a method needed to be
 passed in one order, but they actually needed to be passed the other
 way around and every single  client on the planet had figured this out
 and followed what EC2 *actually* does rather than what it says in the
 spec would you insist on following the spec? What good could that
 possibly serve?
 
 The EC2 API isn't a published, open standard with many implementations
 of both server and client. What EC2 *actually* does is the real spec.
 And what the clients *actually* expect is what we need to deliver. We
 can argue all day long that the spec says this or that, but if every
 client expects something else (or something more specific), that's
 what we need to deal with. We're not on a mission to create
 something that is a stricter implementation of the EC2 API. We're
 trying to provide something that people can use.
 
 If someone was trying to offer me something, claiming it's compatible
 with something I've used for years, but as we're closing the deal he
 says oh, by the way.. you're probably going to have to change your
 tools for this to *actually* work, I'd tell him to take his
 compatibility and stick it somewhere (in)appropriate.
 
 It's bad enough that we are spending resources trying to support an API 
 which isn't open and which we don't control, now on top of that we want to 
 support buggy clients that don't follow the spec?
 
 The spec is completely irrelevant. You can call it a compatibility
 layer all day long, but if it's *actually* incompatible with what the
 clients expect, it's worthless.
 
 If we know that there are clients out there that make the assumptions then 
 contact the folks that maintain the client and ask them to adjust their code.
 
 How do you suggest we find them?
 
 
  If they give you grief, point to the contract and that should settle the 
 issue.
 
 Nonsense. They might be perfectly happy with EC2. If you want them to
 switch to OpenStack, that's going to be a tough sell if they can't
 reuse their code. To them, it's *you* who's doing something wrong,
 because you're doing something different from what EC2 does.
 
 Though I have some reservations about it, I'm okay offering some support for 
 the EC2 contract. What I'm not okay with is in being in the business of 
 reverse engineering Amazon's EC2 implementation.  Those are two very 
 different things and I think the latter is orders of magnitude more 
 difficult.
 
 Making useful software is difficult. Anyone claiming otherwise is full of it.
 
 Of course we should follow the spec, but if there are well-known
 places where reality is different, we need to follow reality. That's
 what the clients do.
 
 -- 
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cross-zone instance identifiers in EC2 API - Is it worth the effort?

2011-07-08 Thread Jorge Williams
On Jul 8, 2011, at 10:44 PM, Sandy Walsh wrote:
 
 Wow, really? Is EC2 really that sporadic/chaotic? 
 
 I have to plead ignorance because I don't know where the rubber meets the 
 road, but that kinda surprises me.


I'm not saying that.  In fact let me say that I don't think the Windows API 
itself is sporadic or chaotic. I used to be a Windows dev way back in the day 
and I never got that impression.

The problem is that the Windows API is not open and is not really designed to 
be implemented by others.  The Wine folks (and the ReactOS folks) have been 
working really hard to implement it for a long time.  And with good reason: 
there are a lot of incentives to have a free Windows-compatible OS.  The task 
the Wine folks have is very hard, though. There are no reference implementations 
for the Windows API, so you can't look at the code; you have to replicate bugs 
in the implementation and bugs in client apps, etc. Oh, and do you really think 
MS wants a free Windows-compatible OS on the market? -- you have to account for 
them messing with you as well.

Soren was suggesting that supporting EC2 was much like writing an 
implementation of HTTP or SMTP (both open specs with open reference 
implementations).  All I'm saying is that reverse engineering a living, rapidly 
changing, closed system and writing another system that behaves exactly like it 
(bugs and all) is not the same thing as implementing an open spec -- it's 
harder.

-jOrGe W.

This email may include confidential information. If you received it in error, 
please delete it.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Compute API 1.1 -- Seeking Community Input

2011-06-29 Thread Jorge Williams
Hello All,

New version of OpenStack Compute 1.1 API spec is out.

PDF:  
http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.1/os-compute-devguide-cactus.pdf
WebHelp: 
http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.1/content/index.html

See the Document Change History section for a list of changes.

I've gotten a lot of suggestions since the summit, and I've tried to take them 
all into account. The only changes that I have planed is to update some of the 
status codes as suggested by Mark Nottingham. That said, I'd like to go through 
one final round of reviews before we fix the contract. 

Please submit your comments by July 11th after which I propose we freeze the 
core API -- new changes can come in as extensions or will have to wait until 
the next version. 

You can contribute by leaving comments in the WebHelp version or if  you find 
something broken, or want to make another change, you can submit a merge 
request to the openstack-manuals project.

Thank you very much for your input,

jOrGe W.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Identity: Keystone API Proposal

2011-06-21 Thread Jorge Williams

On Jun 21, 2011, at 11:49 PM, Ziad Sawalha wrote:

Hi Bryan -

A general comment, which colors much of the thinking on Keystone, is that we 
are looking to have pluggable backends and protocols. We provide a sqlite 
implementation for reference and for ease of deployment, but it is not the 
intention for Keystone to be a comprehensive Identity Management solution.


On groups:
My understanding of the idea of a core API call is that it is implemented by 
all services. We're looking for Keystone to allow plugging in different 
back-ends (such as LDAP, PAM, Active Directory, or even simple, flat text 
files) and adding groups to the core APIs means all the backends will have to 
support them and map to the model we add. We don't want to force the model we 
have out there on all backends, so we'll take it out and let the best model win 
using the extension mechanism.

On tenant:
Your ITIL description is quite accurate, except that we're not extending the 
definition to include the authorization concern mentioned in ITIL. There may be 
other aspects to how a tenant is used (ex. Isolation of resources from neighbor 
effect in compute) that do not fit in that description. Let me attempt a 
modification to make it better describe what we have in mind for tenant in 
OpenStack:
A tenant is a configuration within the service that holds configuration items 
for a specific customer of the service, where customer is defined by the 
operator and where the service provides the tenant identifier in relevant logs 
to allow per-tenant accounting (per 
http://wiki.openstack.org/openstack-accounting).
Note: This is key. Thanks for the valuable input on this, Bryan. I'd like to 
include this in the docs when this conversation/thread ends…

On pagination:
Deferring to those spending many more hours on the APIs than I am. I will go by 
the spec as it evolves. This should probably be an OpenStack standard as 
opposed to a Keystone-specific behaviour (also included in docs like 
http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=get&target=c11-devguide-20110209.pdf
 and http://docs.openstack.org/ ).

I agree that we can get away without exposing the URI structure.  Still it 
makes sense to expose the marker if only to ensure that implementors provide 
clients with a way in which they can iterate through the list in a stable 
manner. For the compute API we are considering requiring only next links -- and 
requiring that these lists reference a marker.
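As a toy illustration of the marker scheme (not the actual implementation; the names and page size are made up), a collection that only hands out next links can page stably like this:

```python
def paginate(items, marker=None, limit=2):
    # Marker-based pagination: return the slice of items after `marker`
    # plus the marker a client should pass to fetch the following page
    # (None once the list is exhausted).  Because the client always asks
    # for "what comes after id X", iteration stays stable even if items
    # are appended between requests.
    ids = [item["id"] for item in items]
    start = ids.index(marker) + 1 if marker is not None else 0
    page = items[start:start + limit]
    next_marker = page[-1]["id"] if start + limit < len(items) else None
    return page, next_marker

servers = [{"id": n} for n in (1, 2, 3, 4, 5)]
page, marker = paginate(servers)          # ids 1 and 2, marker 2
page, marker = paginate(servers, marker)  # ids 3 and 4, marker 4
```

In the API itself, `next_marker` would be embedded in the `href` of a `rel="next"` link rather than returned directly.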


On PUT operations:
The identifier for users right now (username) is supplied in the payload, so it 
is a PUT. Same with groups.

On ATOM:
I agree with the principle, but the challenge will be picking up changes on a 
back-end store (like LDAP) and publishing ATOMs on those.

On clear text password and RFC2617:
Valid concern on clear text password. Can you elaborate on the bar you think we 
need to measure up against? RFC2617 is a pretty low bar…
Also, the 401 response that responds with WWW-Authenticate options is what we 
would expect would come back from the service (ex. Nova) and not necessarily 
from Keystone. Although Keystone may respond back with a list of protocols it 
supports. The idea of making the list of available protocols discoverable is 
logged here https://github.com/khussein/keystone/issues/31 .

On roleRef:
The fact that you ask reaffirms that we need to make them clearer. Wondering if 
we're gaining RESTfulness at the expense of understandability here. I'll add 
an issue to track and follow up on this. 
(https://github.com/rackspace/keystone/issues/56)

On BaseURLs:
We renamed them to endpoints and endpoint-templates and will be updating the 
documentation soon. (https://github.com/rackspace/keystone/issues/57)



Bryan – thank you so much for your diligent review and solid feedback. I've 
added a ticket with most of the content of your email to make sure we 
address/incorporate all the items you raise. 
(https://github.com/rackspace/keystone/issues/58)

Regards,

Ziad



From: Bryan Taylor btay...@rackspace.com
Date: Mon, 20 Jun 2011 23:17:02 -0500
To: Ziad Sawalha ziad.sawa...@rackspace.com, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Identity: Keystone API Proposal

I'm reading through the dev guide for the first time. I hope my comments are 
timely.

I'm glad to see section 2 begin by defining concepts. I'd suggest adding all 
these concepts to the openstack glossary at http://wiki.openstack.org/Glossary. 
The github site for keystone defines these concepts in the README, but also 
includes group, so I'd add that to section 2. I see groups and tenant groups 
are defined in the appendix as an extension. Why an extension? If an operator 
doesn't want to use them, can't they just ignore them?

The tenant concept is interesting. This may 

[Openstack] Extensions Guide

2011-06-10 Thread Jorge Williams
Hey Guys,

I've been working on a guide to extensions that will form the basis of a 
proposal for a standard extensions mechanism to the PPB.

You can find the doc here:

http://docs.openstack.org/trunk/

I only have the high level overview sections for now, but I wanted to get 
comments sooner rather than later.  The nitty-gritty REST details are coming 
soon, I'll keep the docs in /trunk/  up to date as I work on it so keep an eye 
out for them.

I appreciate your comments and suggestions, please place them in the web-help 
version:

http://docs.openstack.org/trunk/openstack-compute/developer/openstack-api-extensions/content/index.html

Thank You,

jOrGe W.

Confidentiality Notice: This e-mail message (including any attached or
embedded documents) is intended for the exclusive and confidential use of the
individual or entity to which this message is addressed, and unless otherwise
expressly indicated, is confidential and privileged information of Rackspace.
Any dissemination, distribution or copying of the enclosed material is 
prohibited.
If you receive this transmission in error, please notify us immediately by 
e-mail
at ab...@rackspace.com, and delete the original message.
Your cooperation is appreciated.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Feedback on HTTP APIs

2011-06-03 Thread Jorge Williams

The whole idea behind the bookmark links is to give you the functionality 
that you want -- that is a URL without a version number in it.  Looks like the 
implementation hasn't yet added support for that, but it will.

-jOrGe W.

On Jun 2, 2011, at 5:04 PM, Thorsten von Eicken wrote:

We neither hate nor love UUIDs, but we like them when they provide value and we 
also accept alternatives. What we do hate is ambiguity and in certain cases 
UUIDs help.

Look at the hrefs returned in this sample resource and picture what you'd store 
in your database as unique identifier to refer to each one:
{"server":
  {"name": "kd_test_3",
   "flavorRef": "http://50.56.22.22:8774/v1.1/flavors/3",
   "addresses": {"public": [], "private": [{"version": 4, "addr": "5.5.2.5"}]},
   "metadata": {"v2": "d2", "4": "14", "5": "17"},
   "imageRef": "http://50.56.22.22:8774/v1.1/images/1",
   "id": 26,
   "hostId": "4e6200284bc7bd28e49016aa047fbdc6a3f5",
   "links":
     [{"href": "http://50.56.22.22:8774/v1.1/servers/26", "rel": "self"},
      {"href": "http://50.56.22.22:8774/v1.1/servers/26",
       "rel": "bookmark",
       "type": "application/json"},
      {"href": "http://50.56.22.22:8774/v1.1/servers/26",
       "rel": "bookmark",
       "type": "application/xml"}],
   "status": "ACTIVE"
  }
}

Are the hostnames significant? Are the port numbers significant? Are the 
version IDs significant? Is the next URI component significant? Is the integer 
ID significant? Mhhh, maybe it's obvious to the OpenStack implementers, but it 
leaves quite some room for error for all the users out there. We end up having 
to write a little algorithm that throws away the hostname and port, then throws 
away the version number if there is one -- it really should NOT be part of the 
URL! -- then looks at the next path component to decide whether its the 
resource type and whether the path component after that is the resource id, or 
whether there is further nesting of path components. Then we can assemble a 
unique ID and see whether we know about that resource or need to fetch it. It 
would be really nice to have a UUID attribute that eliminates this guesswork 
and we like UUIDs that start with a type-specific prefix, such as inst-12345 or 
img-12345.

Our recommendation:
 - option 1: use canonical hrefs that can be used as unique IDs, which means 
without host/port and without version number
 - option 2: use a unique ID with a type prefix and include that as attribute 
in hrefs, we like small IDs, but we don't care that much

WRT UUIDs, we try to use integer IDs when we can easily generate them 
centrally, but switch to UUIDs when we need to distribute the ID generation and 
we keep them as short as practical.

Thanks much!
Thorsten - CTO RightScale


On 6/2/2011 12:40 PM, Jay Pipes wrote:

On Thu, Jun 2, 2011 at 1:18 PM, George Reese 
george.re...@enstratus.com wrote:


I hate UUIDs with a passion.

* They are text fields, which means slower database indexes


Text fields are stored on disk/in memory as bytes the same as any
integer. It's that the number of bytes needed to store it is greater,
resulting in larger indexes and more bytes to store the keys. But, as
Jorge mentioned, some databases have native support for large-integer
types like UUIDs.



* They are completely user-unfriendly. The whole copy and paste argument 
borders on silliness


Yes, it's not easy to remember UUIDs. That's why virtually every
resource has some other way of identifying themselves. Typically, this
is a name attribute, though not all resources enforce uniqueness on
the name attribute, thus the need for a unique identifier.

I don't see people manually looking up resources based on UUIDs. I see
*machines* manually looking up resources based on UUIDs, and humans
looking up resources by, say, name, or (name, tenant_id) or (name,
user_id), etc.



* And uniqueness across regions for share nothing can be managed with a 
variety of alternative options without resorting to the ugliness that is UUIDs


Like URIs? I don't know of any other options that would work. Please
let us know what you think about in this respect.

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Re: [Openstack] XML and JSON for API's

2011-06-03 Thread Jorge Williams

On Jun 2, 2011, at 10:41 PM, Mark Nottingham wrote:

 The problem I mentioned before, though, is that XML Schema brings more issues 
 to the table than it solves.
 
 1) People inevitably use schema to generate bindings to [insert language], 
 and because of the complexity of the underlying data model of XML (Infoset), 
 the mapping of information items to objects can happen in a variety of 
 different ways. This is an endless source of bugs.


I understand where you're coming from Mark.  I'm still suffering PTSD from the 
SOAP days.  One of the lessons learned there was that auto-generated language 
bindings are a bad idea.  Unless you strictly control the client and server 
implementations -- it all falls apart really quickly.   That's not an XML 
thing, honestly, I think an auto-generated JSON client would suffer from 
similar interoperability problems -- there really needs to be a human in the 
loop. 

Given that, we should be building and distributing language bindings for common 
languages with all our APIs -- it's well worth the investment in my opinion.

Also, I really don't see people generating language bindings for REST services 
the way they did for SOAP.  Note that XML Schema isn't going to give you a 
language binding in the first place because it describes data types not 
operations -- and I don't see people using WADL in that way.  We use this sort 
of stuff, internally, for machine processable documentation and validation -- 
and there are many benefits in both of those cases.

 
 2) It's very, very hard to define an XML Schema that's reasonably extensible; 
 unless you use exactly the right design patterns in your schema (which are 
 absurdly convoluted, btw), you'll end up locking out future 
 backwards-compatible changes. The authority in this space is Dave Orchard; 
 see his conclusions at  
 http://www.pacificspirit.com/Authoring/Compatibility/ProvidingCompatibleSchemaEvolution.html.

A lot of this has changed with XSD 1.1 -- and we are using it to define our 
extensible contracts.  In particular a lot of restrictions based on ordering 
have gone away, the unique particle attribution issue is now also gone.  
Frankly, I'm running into more issues with extensibility and JSON, I don't know 
a lot of truly extensible JSON media types, where different vendors may define 
different extensions and you need to prevent clashes etc. We can and will make 
things work in JSON, it's our default format and it should remain so. But this 
level of extensibility with JSON  is a bit uncharted  at the moment -- and we 
still need to figure out the best approach -- in XML this sort of extensibility 
is a no brainer. 

 
 3) An XML Schema can never express all of the constraints on the format. So, 
 you'll still need to document those that aren't captured in the schema.
 

XSD 1.1 goes pretty far in this regard as well in that it includes the ability 
to add schematron like assertions. Most of what can't be captured in the XSD 
directly can be included as an assertion.


 I suppose the central question is what people are using the schema for. If 
 it's just to document the format, that's great; we can have a discussion 
 about how to do that. If they're using it for databinding, I'd suggest that 
 JSON is far superior, as a separate databinding step isn't needed. Finally, 
 if they're using it for runtime validation, I'd agree with Jay below; it's 
 much easier to use json parse + runtime value checks for validation 
 (especially in HTTP, where clients always have to be ready for errors anyway).


The validation that Jay is proposing works great when there is a single 
implementation.  This isn't always going to be the case.  If our API's are 
going to become the ubiquitous cloud APIs we want them to be, then others are 
going to want/have to implement them.  This is happening with compute today -- 
there will literally be two implementations of the compute 1.1 API from day 
one.   We need assurances that a client that works with one implementation can 
work with any of them seamlessly.  The validation rules can't simply be defined 
in the code itself  -- they need to be described outside of it -- being able to 
describe these rules in a formal language and use this for validation and 
conformance testing is very useful.  This isn't strictly an XML vs JSON thing 
-- though today there are better tools for doing this sort of thing with XML.
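
The "json parse + runtime value checks" style of validation being compared here might look like this minimal sketch (the required fields and allowed status values are illustrative assumptions, not the compute API's actual rules):

```python
import json

def validate_server(payload):
    """Parse a JSON server representation and apply runtime value checks.

    A minimal sketch of validation-in-code, the approach that works well
    for a single implementation; the field names and status vocabulary
    here are assumptions for illustration.
    """
    doc = json.loads(payload)
    server = doc.get("server")
    if not isinstance(server, dict):
        raise ValueError("missing 'server' object")
    for field in ("id", "name", "status"):
        if field not in server:
            raise ValueError("missing required field %r" % field)
    if server["status"] not in ("ACTIVE", "BUILD", "ERROR"):
        raise ValueError("unexpected status %r" % server["status"])
    return server

ok = validate_server('{"server": {"id": "26", "name": "web1", "status": "ACTIVE"}}')
```

The point being debated is that these checks live only in the code: a second, independent implementation has no formal artifact to validate against.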


 
 Just my .02.
 
 Cheers,
 
 
 On 03/06/2011, at 5:20 AM, Jorge Williams wrote:
 
  It's not just about the service itself validating it, it's as Joseph said, 
 making sure that the data structures themselves are documented in detail to 
 the client.  To my knowledge there is no accepted schema language in JSON  
 though JSON schema is starting to catch on.
 
 At the end of the day it should be a matter of providing our customers with 
 a representation that they can readily use.  It could be that my perception 
 is wrong, but it seems to me that there's support

Re: [Openstack] Feedback on HTTP APIs

2011-06-02 Thread Jorge Williams

On Jun 2, 2011, at 12:18 PM, George Reese wrote:

 I hate UUIDs with a passion.
 
 * They are text fields, which means slower database indexes

They are not text fields they are large integers and you should store them as 
such.  Some databases offer direct support for them. 

 * They are completely user-unfriendly. The whole copy and paste argument 
 borders on silliness

If you supply links  in the rest api -- you fix the problem of users having to 
deal with them for the most part.

 * The bursting scenario makes no sense to me. Why do two different clouds 
 need to agree on uniqueness of resource IDs? (said as one of the few people 
 actually doing bursting)
 * And uniqueness across regions for share nothing can be managed with a 
 variety of alternative options without resorting to the ugliness that is UUIDs
 

Would love to hear your ideas on this.

 On Jun 2, 2011, at 7:40 PM, Glen Campbell wrote:
 
 
 There was another specific use case, where someone with a private
 OpenStack cloud was bursting into a public cloud. UUIDs would help ensure
 the uniqueness of identifiers in that case.
 
 
 
 On 5/29/11 8:43 PM, Mark Nottingham m...@mnot.net wrote:
 
 Ah -- makes sense. Thanks.
 
 On 30/05/2011, at 11:40 AM, Ed Leafe wrote:
 
 On May 29, 2011, at 9:01 PM, Mark Nottingham wrote:
 
 WIth regards to UUIDs -- I'm not sure what the use cases being
 discussed are (sorry for coming in late), but in my experience UUIDs
 are good fits for cases where you truly need distributed extensibility
 without coordination. In other uses, they can be a real burden for
 developers, if for no other reason than their extremely unwieldy
 syntax. What are the use cases here?
 
 
The primary use case I can think of is a deployment with several zones
 that are geographically dispersed. Since each zone is shared-nothing
 with other zones, UUIDs are the most logical choice for instance IDs
 that need to be unique across zones. This is precisely the use case that
 UUIDs were created for.
 
In my experience, UUIDs are no more of a programmatic burden than any
 other sort of PK; the only place where they are unwieldy is when
 humans have to type them into a command line or a browser URL. But since
 most humans doing that would have access to copy/paste, it isn't nearly
 as bad as it might seem.
 
 
 
 -- Ed Leafe
 
 
 
 Confidentiality Notice: This e-mail message (including any attached or
 embedded documents) is intended for the exclusive and confidential use
 of the
 individual or entity to which this message is addressed, and unless
 otherwise
 expressly indicated, is confidential and privileged information of
 Rackspace.
 Any dissemination, distribution or copying of the enclosed material is
 prohibited.
 If you receive this transmission in error, please notify us immediately
 by e-mail
 at ab...@rackspace.com, and delete the original message.
 Your cooperation is appreciated.
 
 
 --
 Mark Nottingham   http://www.mnot.net/
 
 
 
 
 
 
 
 
 
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comt: @GeorgeReesep: +1.207.956.0217f: 
 +1.612.338.5041
 enStratus: Governance for Public, Private, and Hybrid Clouds - @enStratus - 
 http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Conclusion on Pagination (hopefully!) :)

2011-05-27 Thread Jorge Williams

Jay,

+1 on this,  however, I would also add linking to the API layer -- as we do in 
compute 1.1. Links make it supper easy for language bindings to traverse pages 
-- especially in the case that Thorsten points to, where you want to traverse 
all items in the collection.  In this case, a client can keep following the 
next link until there aren't any. Links are less error prone, and easier to 
use in that they keep clients from having to mess with markers at all.  The 
href is based on a marker of course, but the client doesn't have to construct 
the URL. 
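
A binding that walks a collection by following next links could be as simple as this sketch (the page shape with "servers" and "links" follows the compute 1.1 style, but the exact field names are assumptions here):

```python
def all_items(fetch, first_url):
    """Traverse a paginated collection by following 'next' links.

    'fetch' is any callable mapping a URL to a parsed page of the form
    {"servers": [...], "links": [{"rel": "next", "href": ...}]}.
    The client never constructs a marker URL itself.
    """
    url = first_url
    while url:
        page = fetch(url)
        for item in page.get("servers", []):
            yield item
        url = next((l["href"] for l in page.get("links", [])
                    if l.get("rel") == "next"), None)

# toy in-memory "service" with two pages
pages = {
    "/servers?limit=2": {"servers": [1, 2],
                         "links": [{"rel": "next",
                                    "href": "/servers?limit=2&marker=2"}]},
    "/servers?limit=2&marker=2": {"servers": [3], "links": []},
}
print(list(all_items(pages.__getitem__, "/servers?limit=2")))  # [1, 2, 3]
```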

-jOrGe W.

On May 27, 2011, at 11:00 AM, Jay Pipes wrote:

 Thanks all for some awesome input on the pagination thread. I wanted
 to summarize what I think were the conclusions to come out of it.
 Please do let me know if I got it right.
 
 Proposal:
 
 1) Push the LIMIT variable into the database API layer
 2) Ensure that all queries that return a set of results have an ORDER
 BY expression to them
 3) Push the marker into the database API layer. Continue to have the
 marker variable be a value of a unique key (primary key for now at
 least). Use a WHERE field > $marker LIMIT $pagesize construct.
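
The marker scheme in the proposal can be sketched like this (the table name, column names, and page size are illustrative, not Nova's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [(i, "inst-%d" % i) for i in range(1, 8)])

def get_page(marker, pagesize):
    # WHERE id > $marker ... ORDER BY ... LIMIT $pagesize, as proposed:
    # the marker is the unique-key value of the last row already seen.
    return conn.execute(
        "SELECT id, name FROM instances WHERE id > ? ORDER BY id LIMIT ?",
        (marker, pagesize)).fetchall()

page1 = get_page(0, 3)             # ids 1..3
page2 = get_page(page1[-1][0], 3)  # ids 4..6, marker taken from page1
```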
 
 I *think* this is what's agreed upon? It's basically the Swift model
 with a variation that the order of results is not static (it can be
 specified by the user).
 
 Please ++ if that looks good and I'll draw up a blueprint
 
 Thanks!
 jay
 





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
Hi Sandy,

My understanding (Correct me if i'm wrong here guys) is that creating multiple 
instances with a single call is not in scope for the 1.1 API.  Same thing for 
changing the way in which flavors work.  Both features can be brought in as 
extensions though.

I should note that when creating single instances the instance id should really 
be equivalent to a reservation id.  That is, the create should be asynchronous 
and the instance id can be used to poll for changes.  Because of this, a user 
can create multiple instances in very rapid succession.   Additionally, the 
changes-since feature in the API allows a user to efficiently monitor the 
creation of multiple instances simultaneously.
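
A client using changes-since to watch several builds at once might poll along these lines (the callable standing in for GET /servers/detail?changes-since=<ts>, and the BUILD/ACTIVE statuses, are assumptions for this sketch):

```python
import time

def poll_changes(list_servers, cycles=3):
    """Track many concurrent builds with a changes-since style query.

    'list_servers(since)' stands in for a detail listing filtered to
    servers changed after the given timestamp, so each poll only
    returns what moved -- the efficiency point made above.
    """
    since = 0.0
    seen = {}
    for _ in range(cycles):
        for server in list_servers(since):
            seen[server["id"]] = server["status"]
        since = time.time()
        if seen and all(s != "BUILD" for s in seen.values()):
            break
    return seen

# deterministic stand-in for three successive polls
events = iter([[{"id": "a", "status": "BUILD"}, {"id": "b", "status": "BUILD"}],
               [{"id": "a", "status": "ACTIVE"}],
               [{"id": "b", "status": "ACTIVE"}]])
result = poll_changes(lambda since: next(events, []))
```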

-jOrGe W.

On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:

Hi everyone,

We're deep into the Zone / Distributed Scheduler merges and stumbling onto an 
interesting problem.

EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
- Reservation ID
- Number of Instances to create

Typical use case: Create 1000 instances. The API allocates a Reservation ID 
and all the instances are created under this ID. The ID is immediately returned 
to the user who can later query on this ID to check status.

From what I can see, the OS API only deals with single instance creation and 
returns the Instance ID from the call. Both of these need to change to support 
Reservation ID's and creating N instances. The value of the distributed 
scheduler comes from being able to create N instances load balanced across 
zones.

Anyone have any suggestions how we can support this?

Additionally, and less important at this stage, users at the summit expressed 
an interest in being able to specify instances with something richer than 
Flavors. We have some mockups in the current host-filter code for doing this 
using a primitive little JSON grammar. So, let's assume the Flavor-like query 
would just be a string. Thoughts?

-S





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
Comments inline:

On May 23, 2011, at 8:59 AM, Jay Pipes wrote:

 Hi Jorge! Comments inline :)
 
 On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 Hi Sandy,
 My understanding (Correct me if i'm wrong here guys) is that creating
 multiple instances with a single call is not in scope for the 1.1 API.
 
 Actually, I don't think we *could* do this without issuing a 2.0 API.
 The reason is because changing POST /servers to return a reservation
 ID instead of the instance ID would break existing clients, and
 therefore a new major API version would be needed.

Why?  Clients just see an ID.  I'm suggesting that for single instances, the 
instanceID == the reservationID.
In the API you query based on Some ID.

http://my.openstack-compute.net/v1.1/2233/servers/{Some unique ID}

 
 Same
 thing for changing the way in which flavors work.  Both features can be
 brought in as extensions though.
 
 Sorry, I'm not quite sure I understand what you mean by changing the
 way flavours work. Could you elaborate a bit on that?

Sandy was suggesting we employ a method richer than Flavors.  I'll let him 
elaborate.

 
 I should note that when creating single instances the instance id should
 really be equivalent to a reservation id.  That is, the create should be
 asynchronous and the instance id can be used to poll for changes.
 
 Hmm, it's actually a bit different. In one case, you need to actually
 get an identifier for the instance from whatever database (zone db?)
 would be responsible for creating the instance. In the other case, you
 merely create a token/task that can then be queried for a status of
 the operation. In the former case, you unfortunately make the
 scheduler's work synchronous, since the instance identifier would need
 to be determined from the zone the instance would be created in. :(
 

If we make the instance ID a unique ID -- which we probably should.   Why not 
also treat it as a reservation id and generate/assign it up front?

 Because
 of this, a user can create multiple instances in very rapid succession.
 
 Not really the same as issuing a request to create 100 instances. Not
 only would the user interface implications be different, but you can
 also do all-or-nothing scheduling with a request for 100 instances
 versus 100 requests for a single instance. All-or-nothing allows a
 provider to pin a request to a specific SLA or policy. For example, if
 a client requests 100 instances be created with requirements X, Y, and
 Z, and you create 88 instances and 12 instances don't get created
 because there is no more available room that meets requirements X, Y,
 and Z, then you have failed to service the entire request...
 


I totally understand this.  I'm just suggesting that since this is not in scope 
for 1.1 -- you should be able to launch individual instances as an alternative.

Also, keep in mind that the all-or-nothing requires a compensation when 
something fails.



 Additionally, the changes-since feature in the API allows a user to
 efficiently monitor the creation of multiple instances simultaneously.
 
 Agreed, but I think that is tangential to the above discussion.
 
 Cheers!
 jay
 
 -jOrGe W.
 On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
 
 Hi everyone,
 We're deep into the Zone / Distributed Scheduler merges and stumbling onto
 an interesting problem.
 EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
 - Reservation ID
 - Number of Instances to create
 Typical use case: Create 1000 instances. The API allocates a Reservation
  ID and all the instances are created under this ID. The ID is immediately
 returned to the user who can later query on this ID to check status.
 From what I can see, the OS API only deals with single instance creation and
 returns the Instance ID from the call. Both of these need to change to
 support Reservation ID's and creating N instances. The value of the
 distributed scheduler comes from being able to create N instances load
 balanced across zones.
 Anyone have any suggestions how we can support this?
 Additionally, and less important at this stage, users at the summit
 expressed an interest in being able to specify instances with something
 richer than Flavors. We have some mockups in the current host-filter code
 for doing this using a primitive little JSON grammar. So, let's assume the
 Flavor-like query would just be a string. Thoughts?
 -S
 
 

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 10:15 AM, Jay Pipes wrote:

 /me wishes you were on IRC ;)
 
 Discussing this with Mark Wash on IRC...
 

I'll stop by :-)

 Basically, I'm cool with using a UUID-like pregenerated instance ID
 and returning that as a reservation ID in the 1.X API.

Cool.

 I was really
 just brainstorming about a future, request-centric 2.0 API that would
 allow for more atomic operations on the instance creation level.
 

Okay, I'll follow up.

 Cheers!
 jay
 
 On Mon, May 23, 2011 at 10:35 AM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 Comments inline:
 
 On May 23, 2011, at 8:59 AM, Jay Pipes wrote:
 
 Hi Jorge! Comments inline :)
 
 On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 Hi Sandy,
 My understanding (Correct me if i'm wrong here guys) is that creating
 multiple instances with a single call is not in scope for the 1.1 API.
 
 Actually, I don't think we *could* do this without issuing a 2.0 API.
 The reason is because changing POST /servers to return a reservation
 ID instead of the instance ID would break existing clients, and
 therefore a new major API version would be needed.
 
 Why?  Clients just see an ID.  I'm suggesting that for single instances, the 
 instanceID == the reservationID.
 In the API you query based on Some ID.
 
 http://my.openstack-compute.net/v1.1/2233/servers/{Some unique ID}
 
 
 Same
 thing for changing the way in which flavors work.  Both features can be
 brought in as extensions though.
 
 Sorry, I'm not quite sure I understand what you mean by changing the
 way flavours work. Could you elaborate a bit on that?
 
 Sandy was suggesting we employ a method richer than Flavors.  I'll let him 
 elaborate.
 
 
 I should note that when creating single instances the instance id should
 really be equivalent to a reservation id.  That is, the create should be
 asynchronous and the instance id can be used to poll for changes.
 
 Hmm, it's actually a bit different. In one case, you need to actually
 get an identifier for the instance from whatever database (zone db?)
 would be responsible for creating the instance. In the other case, you
 merely create a token/task that can then be queried for a status of
 the operation. In the former case, you unfortunately make the
 scheduler's work synchronous, since the instance identifier would need
 to be determined from the zone the instance would be created in. :(
 
 
 If we make the instance ID a unique ID -- which we probably should.   Why 
 not also treat it as a reservation id and generate/assign it up front?
 
 Because
 of this, a user can create multiple instances in very rapid succession.
 
 Not really the same as issuing a request to create 100 instances. Not
 only would the user interface implications be different, but you can
 also do all-or-nothing scheduling with a request for 100 instances
 versus 100 requests for a single instance. All-or-nothing allows a
 provider to pin a request to a specific SLA or policy. For example, if
 a client requests 100 instances be created with requirements X, Y, and
 Z, and you create 88 instances and 12 instances don't get created
 because there is no more available room that meets requirements X, Y,
 and Z, then you have failed to service the entire request...
 
 
 
  I totally understand this.  I'm just suggesting that since this is not in 
 scope for 1.1 -- you should be able to launch individual instances as an 
 alternative.
 
 Also, keep in mind that the all-or-nothing requires a compensation when 
 something fails.
 
 
 
 Additionally, the changes-since feature in the API allows a user to
 efficiently monitor the creation of multiple instances simultaneously.
 
 Agreed, but I think that is tangential to the above discussion.
 
 Cheers!
 jay
 
 -jOrGe W.
 On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
 
 Hi everyone,
 We're deep into the Zone / Distributed Scheduler merges and stumbling onto
 an interesting problem.
 EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
 - Reservation ID
 - Number of Instances to create
 Typical use case: Create 1000 instances. The API allocates a Reservation
  ID and all the instances are created under this ID. The ID is immediately
 returned to the user who can later query on this ID to check status.
 From what I can see, the OS API only deals with single instance creation 
 and
 returns the Instance ID from the call. Both of these need to change to
 support Reservation ID's and creating N instances. The value of the
 distributed scheduler comes from being able to create N instances load
 balanced across zones.
 Anyone have any suggestions how we can support this?
 Additionally, and less important at this stage, users at the summit
 expressed an interest in being able to specify instances with something
 richer than Flavors. We have some mockups in the current host-filter code
 for doing this using a primitive little JSON grammar. So, let's assume the
 Flavor-like query would

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 10:15 AM, Ed Leafe wrote:

 On May 23, 2011, at 10:35 AM, Jorge Williams wrote:
 
 If we make the instance ID a unique ID -- which we probably should.   Why 
 not also treat it as a reservation id and generate/assign it up front?
 
 
   Because that precludes the 1:M relationship of a reservation to created 
 instances. 
 
   If I request 100 instances, they are all created with unique IDs, but 
 with a single reservation ID. 
 

I don't see how that precludes anything.  Treat the instance id as the 
reservation id on single-instance creations -- have a separate reservation id 
when launching multiple instances.  At the end of the day, even if you have the 
capability to launch multiple instances at once, you should be able to poll a 
specific instance for changes.  

 
 
 -- Ed Leafe
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 11:25 AM, Sandy Walsh wrote:

From: Jorge Williams

 So this is 2.0 API stuff -- right.

Well, we need it now ... so we have to find a short term solution.

 Why not simply have a request on the server list with the reservation id as a 
 parameter.
 This can easily be supported as an extension.

 So GET  /servers/detail?RID=3993882

 I would probably call it a build ID.  That would narrow the response to only 
 those that are
 currently being built with a single request (3993882).

I'm cool with that ... why does it need to be an extension, per se? It's just 
an additional parameter which will be ignored until something goes looking for 
it.

To prevent clashes.  To detect if the feature is available -- it probably won't 
be available in our legacy system.


How about the POST /zones/server idea?


I'll have to think about it.

-S

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
+1

On May 23, 2011, at 11:54 AM, Vishvananda Ishaya wrote:

 So I think we've identified the real problem...
 
 :)
 
 sounds like we really need to do the UUID switchover to optimize here.
 
 Vish
 
 On May 23, 2011, at 9:42 AM, Jay Pipes wrote:
 
 On Mon, May 23, 2011 at 12:33 PM, Brian Schott
 brian.sch...@nimbisservices.com wrote:
 Why does getting the instance id require the API to block?  I can create 1 
 or 1000 UUIDs in O(1) time in the API server and hand back 1000 
 instance ids in a list of server entries in the same amount of time.
 
 Instance IDs aren't currently UUIDs :) They are auto-increment
 integers that are local to the zone database. And because they are
 currently assigned by the zone, the work of identifying the
 appropriate zone to place a requested instance in would need to be a
 synchronous operation (you can't have the instance ID until you find
 the zone to put it in).
 
 -jay
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Notifications proposal

2011-05-10 Thread Jorge Williams

On May 10, 2011, at 11:07 AM, Matt Dietz wrote:

Alright, I'll buy it. Simply adding a UUID would be trivial


Cool.

Regarding categories, I tend to agree with Jay on this. I think it would
be treacherous to try to account for any number of possibilities, and I
also think that we need to keep this as simple as possible.


Okay fair enough,  the external publisher may create categories as needed.

On 5/10/11 10:35 AM, Jay Pipes 
jaypi...@gmail.com wrote:

On Mon, May 9, 2011 at 11:58 PM, Jorge Williams
jorge.willi...@rackspace.com wrote:
On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,
  Thanks for the feedback!
  Regarding the message format, we actually don't need the unique id
in the
generic event format because that's implementation specific. The
external
publisher I've implemented actually does all of the pubsubhubbub
specific
heavy lifting for you. The idea behind keeping this system simple at the
nova layer is allowing people to implement anything they'd like, such as
emails or paging.

I guess, I'm not seeing the whole picture.  So these are internal
messages?
Will they cross service boundaries / zones?  I'm sorry I missed the
conversation at the summit :-) Is there a blueprint I should be reading?

On this particular point, I agree with Jorge. A unique identifier
should be attached to a message *before* it leaves Nova via the
publisher. Otherwise, subscribers will not be able to distinguish
between different messages if more than one publisher is publishing
the message and tacking on their own unique identifier.

For instance, if a Rabbit publisher and email publisher are both
enabled, and both attach a unique identifier in a different way,
there's no good way to determine two messages are the same.

For categories, were you considering this to be a list? Could you give
an
example of an event that would span multiple categories?

From an Atom perspective, I suppose anything a client might want to key
in
on or subscribe to may be a category.  So create may be a category --
a
billing layer may key in on all create messages and ignore others.
compute
may also be a category -- you can aggregate messages from other
services, so it'd be nice for messages from compute to have their own category.  To
my
knowledge, atom doesn't have the concept of priority so WARN may also
be a
category.  I suppose if these are internal messages an external
publisher
can split the event_type and priority into individual categories.

I disagree with this assessment, Jorge, for this reason: attempting to
identify all the possible categories that an organization may wish to
assign to a particular event may be near impossible, and in all
likelihood, different deployers will have different categories for
events.

I think a solution of codifying the event_type in the message to a
singular set of strings, with a single dotted group notation (like
instance.create or something like that) is the best we can do. The
subscriber of messages can later act as a translation or aggregator
based on the business rules in place at the deployer. For example,
let's say a deployer wanted to aggregate messages with event_type of
instance.create into two categories instance and create. A
custom-written subscriber could either do the aggregation itself, or
modify the message payload to include these custom deployer-specific
categories.
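The aggregation Jay describes fits in a few lines; this Python sketch derives deployer-specific categories from the dotted event_type (the mapping rules and names here are hypothetical, not from any actual subscriber):

```python
# Sketch of a subscriber that derives deployer-specific categories
# from a dotted event_type such as "instance.create". The rules are
# hypothetical; a real deployment would supply its own.

def categorize(message):
    """Return a copy of the message with a 'categories' list added."""
    event_type = message["event_type"]      # e.g. "instance.create"
    categories = event_type.split(".")      # ["instance", "create"]
    # A deployer might also key on priority as a category.
    categories.append(message.get("priority", "INFO").lower())
    return dict(message, categories=categories)

msg = {"event_type": "instance.create", "priority": "WARN", "payload": {}}
print(categorize(msg)["categories"])        # ['instance', 'create', 'warn']
```

The point is that the translation lives entirely in the subscriber, so the message format itself stays a single event_type string.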

Hope that makes sense.

-jay




Re: [Openstack] [Glance] New Glance API changes .. feedback needed

2011-05-10 Thread Jorge Williams

On May 10, 2011, at 5:52 PM, Jay Pipes wrote:

 Hey all,
 
 We've been working to improve the Glance API. The first step to
 improving the API, however, is to add versioning to it.
 
 We've gotten a lot of the work done on this versioning of the API (see
 https://code.launchpad.net/~jaypipes/glance/api-version/+merge/60130).
 
 However, there is an issue that remains unresolved that we would like
 some community input on.
 
 We have a choice of using the following two API URIs:
 
 /v1/images
 /v1.0/images
 
 I coded the latter (/v1.0/images) because I was copying the way that
 swauth and Nova's OpenStack API do it, but Brian Waldon brought up the
 fact that major versions of APIs should never break clients written to
 that major version of the API, so there is no real reason to specify
 the minor version in the URLs.

Whether or not you use a .0 at the end of the URI version is at your 
discretion -- it may still be useful to have it denote that the API hasn't 
changed radically from one version to the next.  Thus the 1.1 compute API has 
only minor additions and subtractions but no dramatic changes.  That said, in 
Rackspace, we consider the entire version in the URI (v1.0, v1.1) a major 
version number -- I know that's confusing.   There's nothing saying you can't 
start with v1 and move to v1.5 or v2 if need be.   The important thing to 
consider is that the whole thing is a major version number.

We do use minor version numbers for other aspects of the service.  Here are the 
rules that we're considering at least internally at Rackspace:

WADLs -- major and minor version numbers; minor revision changes are backward 
compatible.
XSDs -- major and minor version numbers, with minor revisions denoting backward 
compatible changes.
Media Types -- major number only
XML Namespaces -- major number only
Version URIs -- major number only

Backward compatible changes with the Media Type, the XML Namespace, and the 
Version URI always fall within the same major version number.  There's really 
no benefit, for example,  in having a separate URI if we have backward 
compatible changes.

 I would prefer the shorter /v1/images API, myself.


This is what we're considering at Rackspace as a standard :-)  Fair warning 
though, there may be non-technical (i.e. marketing) reasons for using a 
pseudo-minor version number in the future.

-jOrGe W.



Confidentiality Notice: This e-mail message (including any attached or
embedded documents) is intended for the exclusive and confidential use of the
individual or entity to which this message is addressed, and unless otherwise
expressly indicated, is confidential and privileged information of Rackspace.
Any dissemination, distribution or copying of the enclosed material is 
prohibited.
If you receive this transmission in error, please notify us immediately by 
e-mail
at ab...@rackspace.com, and delete the original message.
Your cooperation is appreciated.




Re: [Openstack] Notifications proposal

2011-05-09 Thread Jorge Williams

On May 9, 2011, at 5:20 PM, Matt Dietz wrote:

Message example:

{ 'publisher_id': 'compute.host1',
  'timestamp': '2011-05-09 22:00:14.621831',
  'priority': 'WARN',
  'event_type': 'compute.create_instance',
  'payload': {'instance_id': 12, ... }}

There was a lot of concern voiced over messages backing up in any of the 
queueing implementations, as well as the intended priority of one message over 
another. There are a couple of immediately obvious solutions to this. We think 
the simplest solution is to implement N queues, where N is equal to the number 
of priorities. Afterwards, consuming those queues is implementation specific 
and dependent on the solution that works best for the user.

The current plan for the Rackspace specific implementation is to use 
PubSubHubBub, with a dedicated worker consuming the notification queues and 
providing the glue necessary to work with a standard Hub implementation. I have 
a very immature worker implementation at https://github.com/Cerberus98/yagi if 
you're interested in checking that out.


Some thoughts:

In order to support PubSubHubBub you'll also need each message to also contain 
a globally unique ID.  It would also be nice if you had the concept of 
categories.  I realize you kinda get that with the event type 
compute.create_instance but there are always going to be messages that may 
belong to multiple categories. Also, ISO timestamps with a T :  
2011-05-09T22:00:14.621831 are way more interoperable -- I would also include 
a timezone designator Z for standard time 2011-05-09T22:00:14.621831Z -- 
otherwise some implementation assume the local timezone.

-jOrGe W.


Re: [Openstack] Notifications proposal

2011-05-09 Thread Jorge Williams

On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,

   Thanks for the feedback!

   Regarding the message format, we actually don't need the unique id in the 
generic event format because that's implementation specific. The external 
publisher I've implemented actually does all of the pubsubhubbub specific heavy 
lifting for you. The idea behind keeping this system simple at the nova layer 
is allowing people to implement anything they'd like, such as emails or paging.

I guess, I'm not seeing the whole picture.  So these are internal messages? 
Will they cross service boundaries / zones?  I'm sorry I missed the 
conversation at the summit :-) Is there a blueprint I should be reading?


For categories, were you considering this to be a list? Could you give an 
example of an event that would span multiple categories?


From an Atom perspective, I suppose anything a client might want to key in on 
or subscribe to may be a category.  So create may be a category -- a billing 
layer may key in on all create messages and ignore others. compute may also 
be a category -- you can aggregate messages from other services, so it'd be 
nice for messages from compute to have their own category.  To my knowledge, 
Atom doesn't have the concept of priority, so WARN may also be a category.  I 
suppose if these are internal messages, an external publisher can split the 
event_type and priority into individual categories.

Finally, I can make the changes to the timestamp. This as just a hypothetical 
example, anyway.


Okay cool, thanks Matt.




-jOrGe W.



Re: [Openstack] Problem with values in JSON responses

2011-05-03 Thread Jorge Williams

On May 3, 2011, at 6:29 PM, Eldar Nugaev wrote:

 Hi gents.
 
 At this moment we have problem in OS API 1.1. Any JSON response with
 values doesn't meet specification.

What specification are you referring to?

 Could you please provide information - why we want to see values
 field in JSON and who is responsable for implementation this
 specification in OS API 1.1?
 

We use values as a first attempt to add extensibility to collections in JSON. 
 We had a lengthy discussion about this at the summit.  To summarize:  there's 
no really easy/clean way of doing this as JSON is not extensible.  We're 
currently exploring other approaches, so values may go away in the long term.  
The 1.1 spec is not set in stone in this regard.


 Also we have broken documentation on openstack.org OS API 1.0
 http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.0/content/index.html
 

Right, Anne -- can you look into this?


 -- 
 Eldar
 Skype: eldar.nugaev
 







Re: [Openstack] Proposal to defer existing OS API v1.1 to Diablo, for greater stability in Cactus

2011-03-31 Thread Jorge Williams

I agree with Justin on 1.  JSON may be acceptable, but I don't believe we are 
validating thoroughly.  For example, in the JSON below...active is not a valid 
status, flavorId should be an integer, etc.  That said, the JSON seems to be 
pretty close.  The XML is not that far behind; it has the same problems as the 
JSON plus a missing namespace and some bad elements. Personally, I think we may 
be able to fix the XML with some creative WSGI and XSLT hacking, but I'm not 
sure how easy it would be incorporating that into the code.

We really need to be writing tests around this stuff.  What we do in Rackspace 
today is: we generate the example in the DevGuide in XML, validate the XML 
against the schema, then translate to JSON, then compare the translated JSON 
against the JSON example in the DevGuide.  Or we take the JSON, translate it to 
XML, validate the XML against the schema, and compare the XML against the 
example in the DevGuide.  For instances where we expect input and output 
(servers, images), we iterate through multiple iterations of this.  Because 
there's a schema in the loop, we can catch errors like the ones I mentioned 
above even in the JSON.  Little validation errors are easy to sneak in, and 
they do break clients.  The only real protection that we have is good testing.
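A minimal sketch of that round-trip comparison using only the Python standard library. Real schema validation would need an XSD-aware library such as lxml, and the translation rules here are simplified assumptions, not the actual DevGuide tooling:

```python
import xml.etree.ElementTree as ET

def server_xml_to_dict(xml_text):
    """Translate a (simplified) server XML example to the JSON shape,
    so the two DevGuide examples can be compared field by field."""
    root = ET.fromstring(xml_text)
    return {"server": {
        "id": int(root.get("id")),
        "name": root.get("name"),
        "status": root.get("status"),
        "metadata": {m.get("key"): m.text
                     for m in root.find("metadata")},
    }}

xml_example = """<server id="6" name="metaserver" status="active">
  <metadata><meta key="data1">value1</meta></metadata>
</server>"""
json_example = {"server": {"id": 6, "name": "metaserver",
                           "status": "active",
                           "metadata": {"data1": "value1"}}}

# The comparison catches drift between the two example formats.
assert server_xml_to_dict(xml_example) == json_example
```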

As for the missing features in 1.0, I don't think that they're a big deal.  As 
Justin stated, the underlying implementation is missing AND they've been 
dropped from the 1.1 spec.  We should wait till the implementation is done -- I 
have a feeling they'll get implemented because I don't see Rackspace moving to 
Nova without them :-)   At that point, we can bring then in as extensions to 
1.1, then put them into the core in a later version if we think they're worth 
it.

As for changes in 1.1.  I don't think we should be focused on maximizing 
compatibility when we switch from one version to another. The whole purpose of 
having different versions is that they introduce features that would otherwise 
break clients. That is client developers will have to go back to the code and 
make some changes in order to get things to work. If we always made compatible 
changes there would be no reason for changing the version.  I agree that we 
should be able to support at least two active versions at one time to give 
people a chance to migrate to the latest version.  I also think we should be 
investing in language bindings to help people along.   Let's talk about the 
changes that are coming in 1.1 and discuss exactly what we want those to look 
like in the summit.

Finally, I'd also rather XML support be marked as experimental in 1.0 rather 
than excluding it.  We have a large number of clients would rather use XML, 
because it's more testable or because they have better tooling.

-jOrGe W.


On Mar 31, 2011, at 7:24 PM, Gabe Westmaas wrote:

 Thanks Justin, appreciate the input!  Answers inline.
 
 On Thursday, March 31, 2011 5:31pm, Justin Santa Barbara 
 jus...@fathomdb.com said:
 
 I think there are a few distinct issues:
 
 1) XML support.  I was thinking that most XML issues would also be issues in
 the JSON, so validating the XML will also serve as validating the JSON.  I'd
 appreciate an example here, but I agree in general that focusing on JSON is
 acceptable - as long as its not just that we don't see the problems in JSON
 because we're not validating it as thoroughly.
 
 
 So the XML is generated based on the JSON, but it goes through an additional 
 transformation, so just checking the XML does not ensure that the JSON is 
 correct.
 
 Good point about an example, I should have provided one!  Below is the output 
 for a JSON and XML request on the same resource (/servers/id).  Things are 
 mostly correct until you get down to the IP address and metadata level.
 
 {server: 
  {status: active, 
   hostId: 84fd63700cb981fed0d55e7a7eca3b25d111477b5b67e70efcf39b93, 
   addresses: {public: [], private: [172.19.1.2]}, 
   name: metaserver, 
   flavorId: m1.tiny, 
   imageId: 1, 
   id: 6, 
   metadata: {data1: value1}
  }
 }
 
 <server flavorId="m1.tiny" 
  hostId="84fd63700cb981fed0d55e7a7eca3b25d111477b5b67e70efcf39b93" id="6" 
  imageId="1" name="metaserver" status="active">
   <addresses>
     <public/>
     <private>
       <item>172.19.1.2</item>
     </private>
   </addresses>
   <metadata>
     <data1>value1</data1>
   </metadata>
 </server>
 
 Correct XML would be:
 <server flavorId="m1.tiny" 
  hostId="84fd63700cb981fed0d55e7a7eca3b25d111477b5b67e70efcf39b93" id="6" 
  imageId="1" name="metaserver" status="active">
   <addresses>
     <public/>
     <private>
       <ip addr="10.176.42.16"/>
     </private>
   </addresses>
   <metadata>
     <meta key="data1">value1</meta>
   </metadata>
 </server>
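The fix Gabe is pointing at amounts to emitting attribute-bearing elements rather than dumping dict keys as element names. A hedged sketch of the idea (not the actual Nova serializer):

```python
import xml.etree.ElementTree as ET

def addresses_to_xml(addresses):
    """Render {"private": ["172.19.1.2"], "public": []} in the documented
    form: <ip addr="..."/> children rather than bare <item> text nodes."""
    elem = ET.Element("addresses")
    for network, ips in sorted(addresses.items()):
        net = ET.SubElement(elem, network)
        for ip in ips:
            # Attribute-bearing element, per the documented format.
            ET.SubElement(net, "ip", addr=ip)
    return ET.tostring(elem, encoding="unicode")

print(addresses_to_xml({"public": [], "private": ["172.19.1.2"]}))
```

The same pattern applies to metadata: emit `<meta key="data1">value1</meta>` instead of a `<data1>` element named after the key.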
 
 
 2) Missing features.  I don't think of this as an API issue, unless these
 are supported features which we simply aren't exposing.  My understanding is
 that it's the underlying support that's 

Re: [Openstack] Openstack API - Volumes?

2011-03-22 Thread Jorge Williams
Describing the full Volume API as an extension may work in the short term.  Long 
term, though, we're moving towards having a suite of APIs.  From this perspective, 
adding an extension to the OpenStack Compute API seems a little strange to me.  But 
it could work short term; my concern is that we should really be thinking of 
Volumes as a core API, not additional features to Compute. Certainly we may 
need to extend Compute to do things like mount/unmount a volume.

-jOrGe W.


On Mar 22, 2011, at 3:20 PM, Jay Pipes wrote:

 On Tue, Mar 22, 2011 at 3:21 PM, Justin Santa Barbara
 jus...@fathomdb.com wrote:
 So: When can we expect volume support in nova?  If I repackaged my volumes
 API as an extension, can we get it merged into Cactus?
 
 I would personally support this.
 
 Wasn't one of the ideas of the extensions API to provide a bit of a
 playground for features to bake that, at some future time, be made a
 core resource endpoint in the OpenStack API? This would be a perfect
 example of that, no?
 
 -jay
 




Re: [Openstack] Openstack API - Volumes?

2011-03-22 Thread Jorge Williams

On Mar 22, 2011, at 3:33 PM, Justin Santa Barbara wrote:

 The thing I don't like about the extension promotion process is that it 
 breaks clients when the volume API is promoted


Promotion only occurs between version changes in the API, that is, when we're 
moving from, say, 1.1 to 2.0.  Version changes by definition break clients 
anyway.  The extension will stay an extension in 1.1 always, even if the 
extension is promoted in 2.0.  So clients built against 1.1 will continue to 
consume the extension as is and will not be broken -- code will have to be 
changed regardless to move to 2.0, at which point the extension is core.

-jOrGe W.


Re: [Openstack] State of OpenStack Auth

2011-03-04 Thread Jorge Williams

On Mar 4, 2011, at 10:29 AM, Jay Pipes wrote:

 Would the best option be if the OpenStack API supported both auth
 mechanisms (signature, basic HTTP) and allowed the deployers to pick
 which ones were best for which clients? For instance, if OpenStack
 supported both auth mechanisms simultaneously, mobile apps could
 choose signatures whereas other clients, say a simple web dashboard,
 could choose HTTP basic auth an re-auth every N hours?


We proposed a blueprint that addressed the very issue of supporting multiple 
authentication schemes simultaneously.  We also proposed a default dead simple 
authentication component -- based on basic auth -- though we removed the 
details of this part at the request of the swift team.  We also supported a 
clear separation between auth and individual services so that teams can 
concentrate on their components without worrying about auth. 

We didn't go so far as proposing a default authentication system, but with the 
default authentication component you wouldn't need one.  Khaled implemented this 
code to support both the default auth component and to integrate the blueprint 
with swift.  The response to our blueprint, from the swift guys, was that it 
wasn't needed.

Now that we are focused on auth perhaps the blueprint is worth another look:

https://blueprints.launchpad.net/nova/+spec/nova-authn
http://wiki.openstack.org/openstack-authn

-jOrGe W.





Re: [Openstack] State of OpenStack Auth

2011-03-04 Thread Jorge Williams

On Mar 4, 2011, at 12:09 PM, Greg wrote:

 On Mar 4, 2011, at 11:05, Jorge Williams jorge.willi...@rackspace.com wrote:
 
 
 though we removed the details of this part at the request of the swift team. 
Khaled implement this code to support both the default auth component  
 and to integrate the blueprint with swift.  The response to our blueprint, 
 from the swift guys, was that it wasn't needed.
 
 
 
 Yes, the swift team is horrible about that. :P To be fair to us though, we do 
 prefer integration code that actually integrates with something instead of 
 just the idea of something. Our actual recommendation was to implement a 
 working system before submitting integration pieces to existing projects. 
 Also existing tests have to pass. :P
 

I don't want this to deteriorate into a flame war but let's get our facts 
straight:

1)  It completely integrated with the  existing auth system and it integrated 
with our default Auth component.
2)  All tests passed. I was looking over your shoulder when you ran them.

These are facts.  They are provable. We can check out the code from  the day we 
made the submit and show you.

-jOrGe W.


Re: [Openstack] State of OpenStack Auth

2011-03-03 Thread Jorge Williams

On Mar 3, 2011, at 5:45 PM, Chuck Thier wrote:

 The problem with this logic is that you are optimizing wrong.  In a token 
 based auth system, the tokens are valid generally for a period of time (24 
 hours normally with Rackspace auth), and it is a best practice to cache this. 
  Saying that you are reducing HTTP requests for 1 request that has to happen 
 every 24 hours isn't saving you that much.
 
 But back to the auth questions in general, I would like to comment on a 
 couple of things that have come up:
 
 1.  Basic Auth - I'm not fond of this mainly because auth credentials 
 (identification and secret) are sent, and have to be verified on every single 
 request.  This also means that every endpoint is going to be handling the 
 users' secrets for every request.  I think there is good precedent with no 
 major service providers using basic auth (even including twitter moving away 
 from basic auth, to OAuth)

We could do something like Digest auth, which is also easy to use and has really 
good support.

 
 2. Signed vs. Token based auth - Why not support both?  It isn't that 
 complex.  It is also interesting that OAuth v1 was signature based, while 
 OAuth v2 has moved to a token based auth system, so there is broad support in 
 the general community for both methods.

We're not going to avoid OAuth -- that's something that we're going to 
eventually have to support because delegation is such a compelling use case.  
Both OAuth v1 and v2 were token based if I recall correctly.  V2 dropped the 
requirement that everything be signed -- a really good move in my opinion.  
You're right in that signatures are not *that* complicated, but they do  raise 
the barrier of entry to an API.  There are also a lot of subtleties associated 
with them --  Cn14 comes to mind 
(http://en.wikipedia.org/wiki/XML_Signature#XML_Canonicalization), I believe 
there is a  similar problem with JSON(?)  I also potentially see performance 
issues. Just speaking as someone who's had to maintain day to day an API, I can 
already feel the headaches.  If signed request were optional, as they are in 
OAuth 2, I would vote to not use them and just secure everything with SSL.

-jOrGe W.





Re: [Openstack] server affinity

2011-03-02 Thread Jorge Williams
Metadata in the OpenStack Compute/Cloud Servers API is meant to describe 
user-defined properties.  That's it.  So in that case, I agree with Brian that we 
shouldn't be overloading that functionality by performing some action based on 
user-defined metadata.

Speaking more generally though, any attribute that you associate with a 
resource can be thought of as metadata as well.  Isn't the name of an instance 
metadata about the instance?  Should operators be able to define arbitrary 
metadata and then be able to act on it in some way?  I think so, that's a very 
powerful feature. That said,  I would be cautious about exposing this as an 
arbitrary set of name value pairs because it provides a means by which you can 
bypass the contract and that will cause grief for our clients.  Additionally, 
there's the possibility of clashing metadata names between deployments.  The 
idea behind extensions is that you can define arbitrary metadata about a 
resource, while maintaining a contract and while avoiding conflicts with other 
operators/deployments/implementations.  I should note that the approach really 
isn't that different from AWS in that essentially as an operator you use a 
prefix to separate your metadata from customer metadata...the prefix is simply 
defined by the extension and  you can present your metadata in a separate 
attribute or element in the message.
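That prefix separation can be sketched as follows; the "rax:" prefix is a made-up example, not a real extension name:

```python
# Sketch of prefix separation between operator and customer metadata,
# along the lines of the extension approach described above. The
# "rax:" prefix is hypothetical.
OPERATOR_PREFIX = "rax:"

def split_metadata(metadata):
    """Partition a metadata dict into operator and customer portions."""
    operator, customer = {}, {}
    for key, value in metadata.items():
        target = operator if key.startswith(OPERATOR_PREFIX) else customer
        target[key] = value
    return operator, customer

op, cust = split_metadata({"rax:affinity": "group-1", "owner": "alice"})
```

The prefix keeps operator metadata out of the customer namespace, so the two can never clash, which is the contract-preserving property the extension mechanism is after.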

Given that, I'm still a little fuzzy about whether we've reached a decision as 
to whether affinity id:

1) Should be part of the core Compute API
2) Should be a more general concept that may span different services, as Eric 
Day proposes 
3) Should be introduced as an extension, which can later be promoted to the 
core...or not :-)

As Erik Carlin noted, instances with the same affinity id will likely be placed 
in the same public subnet in Rackspace. Other operators may interpret affinity 
id differently. Is that concept general enough to be in the core?  I'd rather 
not introduce something in the core and then have to take it out.

-jOrGe W.


On Mar 1, 2011, at 11:26 AM, Mark Washenberger wrote:

 Are we using the name metadata to describe a different feature than the one 
 that exists in the CloudServers api?
 
 It seems like a different feature for the user-properties metadata to have 
 meaning to the api other than store this information so I can read it later.
 
 Justin Santa Barbara jus...@fathomdb.com said:
 
 We decided in the merge to call it Metadata, despite being fully aware of
 the semantic issues, because that's what the CloudServers / OpenStack API
 uses.
 
 There are many better terms, but for the sake of avoiding a Tower of Babel,
 let's just call it Metadata.
 
 
 
 On Tue, Mar 1, 2011 at 6:56 AM, Sandy Walsh sandy.wa...@rackspace.comwrote:
 
 Was just speaking with dabo about this and we agree that metadata is a bad
 name for this capability.
 
 I don't really care about what we call it, but metadata has some
 preconceived notions/meanings. Perhaps Criteria?
 
 Currently we have *some* criteria for creating a new instance on the
 Openstack API side: flavors and operating system. But I think the OS API
 payload allows for additional criteria to be passed in with the request
 (not sure).
 
 Eventually all this stuff will have to make it to the Scheduler for
 Server-Best-Match/Zone-Best-Match. That's somewhere on our task list for
 Cactus :/
 
 $0.02
 
 -S
 
 
 
 
 
 
 
 
 
 
 
 
 




[Openstack] OpenStack Compute 1.1

2011-03-02 Thread Jorge Williams

Hey guys,

New version of OpenStack Compute 1.1 is out.

PDF:  http://docs.openstack.org/openstack-compute/developer/cs-devguide.pdf
WebHelp: http://docs.openstack.org/openstack-compute/developer/content/

See the Document Change History section for a list of changes.

The API is now in Launchpad in the openstack-manuals project.   I checked it in 
3 stages

1) Cloud Servers 1.0 :  This is the version of Cloud Servers we're running on 
Rackspace
2) Open Stack Compute 1.1 (2/9/11) :  This is the version first shared on 
OpenStack 
3) Open Stack Compute 1.1 (3/1/11):  This is the current version

I did this so that you can run diffs against the three versions and see exactly 
what's changed.  From now on all changes are going directly into Launchpad.

I've gotten a lot of suggestions over the past couple of weeks, and I've tried 
to take them all into account.  There are still a couple of changes coming 
based on those suggestions but they're not very big -- mostly cosmetic.

I realize we're still having a debate about affinity id.  Affinity id is 
still mentioned in the spec, but I'm totally open to removing it if we decide 
that's not the best approach.

I appreciate your input.  You can contribute by leaving comments in the WebHelp 
version (I don't think Etherpad is going to work for this sort of thing).  Or 
if you find something broken, or want to make another change, you can make 
changes to the openstack-manuals project and submit a merge request.

Thanks,

jOrGe W.





Re: [Openstack] server affinity

2011-03-02 Thread Jorge Williams

On Mar 2, 2011, at 11:43 AM, Eric Day wrote:

 Now the arguments stated by many folks is that service_metadata
 is really instance properties or instance attributes and should
 instead be part of the instance object/record directly (like size,
 flavor id, etc. are). I don't disagree, but unfortunately there is
 a little more overhead since we're using a structured data store,
 and this requires an alter table for every change at this point.
 It's more difficult to introduce, test, and remove service attributes. If
 we want deployments to be able to define service-specific metadata
 and use that in pluggable schedulers, a DB schema change is not a
 very elegant way to support this.

How you store the data internally and how you present it in the API are two 
different issues.  You don't necessarily have to store extension data where you 
store standard attributes in order to present things as instance properties. 
You can store this data in a completely different table or in a flat file, or 
in memory, whatever.  You can have middleware that inserts it into the object 
before you present it to the user.  In fact, this is a big plus because it makes 
extensions pluggable and because it allows each one to map its data as it sees 
fit.
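As a rough sketch of that middleware approach (all names and stores here are hypothetical, for illustration only, not actual Nova code):

```python
# Sketch: extension data lives in a completely separate store, and a
# middleware step merges it into the instance representation before the
# object is presented to the user. "RS-META" is a hypothetical namespace.

def load_instance(instance_id):
    # Standard attributes, e.g. from the main instances table.
    return {"id": instance_id, "flavorId": 1, "status": "ACTIVE"}

# Extension data kept somewhere else entirely (another table, a flat
# file, memory, whatever).
EXTENSION_STORE = {
    "42": {"RS-META:backupSchedule": "daily"},
}

def apply_extensions(instance):
    """Merge any extension attributes into the instance dict."""
    extras = EXTENSION_STORE.get(str(instance["id"]), {})
    merged = dict(instance)
    merged.update(extras)
    return merged

server = apply_extensions(load_instance("42"))
```

The standard attributes never move; each extension decides for itself how to map its data into the presented object.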


-jOrGe W.




Re: [Openstack] server affinity

2011-03-02 Thread Jorge Williams

On Mar 2, 2011, at 3:54 PM, Eric Day wrote:

 On Wed, Mar 02, 2011 at 09:48:27PM +, Jorge Williams wrote:
 On Mar 2, 2011, at 11:43 AM, Eric Day wrote:
 Now the arguments stated by many folks is that service_metadata
 is really instance properties or instance attributes and should
 instead be part of the instance object/record directly (like size,
 flavor id, etc. are). I don't disagree, but unfortunately there is
 a little more overhead since we're using a structured data store,
 and this requires an alter table for every change at this point.
 It's more difficult to introduce, test, and remove service attributes. If
 we want deployments to be able to define service-specific metadata
 and use that in pluggable schedulers, a DB schema change is not a
 very elegant way to support this.
 
 How you store the data internally and how you present it in the API are two 
 different issues.  You don't necessarily have to store extension data where 
 you store standard attributes in order to present things as instance 
 properties. You can store this data in a completely different table or in a 
 flat file, or in memory, whatever.  You can have middleware that inserts it 
 into the object before you present it to the user.  In fact, this is a big 
 plus because it makes extensions pluggable and because it allows each one to 
 map its data as it sees fit.
 
 Agreed, but we're talking about how to actually store both in
 nova. Justin added a metadata table in the nova.db SQL schema, but
 we're trying to decide if that should be user only (and add another
 table) or both user and service with prefixes. The 1.1 API spec won't
 change either way.
 
 -Eric

Got it :-)




Re: [Openstack] OpenStack Compute 1.1

2011-03-02 Thread Jorge Williams
https://launchpad.net/openstack-manuals

On Mar 2, 2011, at 10:42 AM, Justin Santa Barbara wrote:

Looks like some good changes.  Where is this in launchpad, so the community can 
help develop it?  For example, I'm willing to document the reservation of the 
aws: prefix.

Justin




On Wed, Mar 2, 2011 at 8:29 AM, Jorge Williams 
jorge.willi...@rackspace.com wrote:

Hey guys,

New version of OpenStack Compute 1.1 is out.

PDF:  http://docs.openstack.org/openstack-compute/developer/cs-devguide.pdf
WebHelp: http://docs.openstack.org/openstack-compute/developer/content/

See the Document Change History section for a list of changes.

The API is now in Launchpad in the openstack-manuals project.   I checked it in 
3 stages

1) Cloud Servers 1.0 :  This is the version of Cloud Servers we're running on 
Rackspace
2) Open Stack Compute 1.1 (2/9/11) :  This is the version first shared on 
OpenStack
3) Open Stack Compute 1.1 (3/1/11):  This is the current version

I did this so that you can run diffs against the three versions and see exactly 
what's changed.  From now on all changes are going directly into Launchpad.

I've gotten a lot of suggestions over the past couple of weeks, and I've tried 
to take them all into account.  There are still a couple of changes coming 
based on those suggestions but they're not very big -- mostly cosmetic.

I realize we're still having a debate about affinity id.  Affinity id is 
still mentioned in the spec, but I'm totally open to removing it if we decide 
that's not the best approach.

I appreciate your input.  You can contribute by leaving comments in the WebHelp 
version (I don't think Etherpad is going to work for this sort of thing).  Or 
if you find something broken, or want to make another change, you can make 
changes to the openstack-manuals project and submit a merge request.

Thanks,

jOrGe W.










Re: [Openstack] OpenStack Compute 1.1

2011-03-02 Thread Jorge Williams
I'd prefer the comments sections so that we have a reference when we discuss. 
But hey, I'll take suggestions from anywhere :-)

-jOrGe W.

On Mar 2, 2011, at 11:16 AM, Michael Mayo wrote:

 Should comments/suggestions go in this email thread or in the comments 
 sections of the web help?  I hate to spam this list :)
 
 Mike
 
 On Mar 2, 2011, at 8:29 AM, Jorge Williams wrote:
 
 
 Hey guys,
 
 New version of OpenStack Compute 1.1 is out.
 
 PDF:  http://docs.openstack.org/openstack-compute/developer/cs-devguide.pdf
 WebHelp: http://docs.openstack.org/openstack-compute/developer/content/
 
 See the Document Change History section for a list of changes.
 
 The API is now in Launchpad in the openstack-manuals project.   I checked it 
 in 3 stages
 
 1) Cloud Servers 1.0 :  This is the version of Cloud Servers we're running 
 on Rackspace
 2) Open Stack Compute 1.1 (2/9/11) :  This is the version first shared on 
 OpenStack 
 3) Open Stack Compute 1.1 (3/1/11):  This is the current version
 
 I did this so that you can run diffs against the three versions and see 
 exactly what's changed.  From now on all changes are going directly into 
 Launchpad.
 
 I've gotten a lot of suggestions over the past couple of weeks, and I've 
 tried to take them all into account.  There are still a couple of changes 
 coming based on those suggestions but they're not very big -- mostly 
 cosmetic.
 
 I realize we're still having a debate about affinity id.  Affinity id is 
 still mentioned in the spec, but I'm totally open to removing it if we 
 decide that's not the best approach.
 
 I appreciate your input.  You can contribute by leaving comments in the 
 WebHelp version (I don't think Etherpad is going to work for this sort of 
 thing).  Or if you find something broken, or want to make another change, 
 you can make changes to the openstack-manuals project and submit a merge request.
 
 Thanks,
 
 jOrGe W.
 
 
 
 
 Mike Mayo
 901-299-9306
 @greenisus
 
 
 







Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams
There are lots of advantages:

1) It allows services to be more autonomous, and gives us clearly defined 
service boundaries. Each service can be treated as a black box.
2) All service communication becomes versioned, not just the public API but 
also the admin API.  This means looser coupling which can help us work in 
parallel.  So glance can be on 1.2 of their API, but another API that depends 
on it (say compute) can continue to consume 1.1 until they're ready to switch 
-- we don't have the bottlenecks of everyone having to update everything 
together.
3) Also because things are loosely coupled and there are clearly defined 
boundaries  it positions us to have many other services (LBaaS, FWaaS, DBaaS, 
DNSaaS, etc).
4) It also becomes easier to deploy a subset of functionality ( you want 
compute and image, but not block).
5) Interested developers can get involved in only the services that they care 
about without worrying about other services.
6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
integration as it is, it makes sense for us to standardize on it.

We are certainly changing the way we are doing things, but I don't really think 
we are throwing away a lot of functionality.  As PVO mentioned, things should 
work very similarly to the way they are working now.  You still have compute 
workers, and you may still have an internal queue; the only difference is that 
cross-service communication now happens by issuing REST calls.
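To make the idea concrete, here is a minimal sketch of what versioned cross-service REST communication might look like (the endpoint and URL layout are hypothetical, for illustration only, not the actual glance API):

```python
# Sketch: compute consumes the image service through a versioned REST URL.
# Because the version is pinned explicitly, compute can stay on v1.1 until
# it is ready to move to a newer glance API.
GLANCE_ENDPOINT = "http://glance.example.com"   # hypothetical endpoint
GLANCE_API_VERSION = "v1.1"                     # version compute consumes

def image_url(image_id):
    """Build the versioned URL compute would GET to fetch image metadata."""
    return "%s/%s/images/%s" % (GLANCE_ENDPOINT, GLANCE_API_VERSION, image_id)

url = image_url("ubuntu-10.04")
```

The point is the loose coupling: the consuming service encodes which protocol version it speaks, so services can upgrade independently.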

-jOrGe W.


On Feb 18, 2011, at 9:34 AM, Jay Pipes wrote:

 OK, fair enough.
 
 Can I ask what the impetus for moving from AMQP to REST for all
 internal APIs is? Seems to me we will be throwing away a lot of
 functionality for the benefit of cross-WAN REST communication?
 
 -jay
 
 On Fri, Feb 18, 2011 at 9:31 AM, Paul Voccio paul.voc...@rackspace.com 
 wrote:
 Jay,
 
 I understand Justin's concern if we move /network and /images and /volume
 to their own endpoints then it would be a change to the customer. I think
 this could be solved by putting a proxy in front of each endpoint and
 routing back to the appropriate service endpoint.
 
 I added another image on the wiki page to describe what I'm trying to say.
 http://wiki.openstack.org/api_transition
 
I think this might not be as bad of a transition since the compute worker would
 receive a request for a new compute node then it would proxy over to the
 admin or public api of the network or volume node to request information.
 It would work very similar to how the queues work now.
 
 pvo
 
 On 2/17/11 8:33 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 Sorry, I don't view the proposed changes from AMQP to REST as being
 customer facing API changes. Could you explain? These are internal
 interfaces, no?
 
 -jay
 
 On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
 jus...@fathomdb.com wrote:
 An API is for life, not just for Cactus.
 I agree that stability is important.  I don't see how we can claim to
 deliver 'stability' when the plan is then immediately to destabilize
 everything with a very disruptive change soon after, including customer
 facing API changes and massive internal re-architecting.
 
 
 On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
 jus...@fathomdb.com wrote:
 Pulling volumes  images out into separate services (and moving from
 AMQP to
 REST) sounds like a huge breaking change, so if that is indeed the
 plan,
 let's do that asap (i.e. Cactus).
 
 Sorry, I have to disagree with you here, Justin :)  The Cactus release
 is supposed to be about stability and the only features going into
 Cactus should be to achieve API parity of the OpenStack Compute API
 with the Rackspace Cloud Servers API. Doing such a huge change like
 moving communication from AMQP to HTTP for volume and network would be
 a change that would likely undermine the stability of the Cactus
 release severely.
 
 -jay
 
 
 
 
 
 
 
 



Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams

On Feb 18, 2011, at 11:53 AM, Jay Pipes wrote:

I think your points are all valid, Jorge. Not disagreeing with them;
more just outlining that while saying all services must *publish* a
REST interface, services can listen and respond on more than one
protocol.

I'm glad we're *mostly* in agreement :-)


So, I agree with you basically, just pointing out that while having a
REST interface is a good standard, it shouldn't be the *only* way that
services can communicate with each other :)


Again, I'm not saying it's the *only* way services should communicate with one 
another especially if there exist protocols that make no sense replicating in 
REST.  That said, I don't like the idea of having to maintain different 
protocols otherwise.  I'm not convinced that doing so is necessary: it muddies 
the water on what exactly the true service interface is, it keeps us from 
consuming the same dog food we're selling, and I'm afraid it may lead to added 
work for service teams.


-jay

On Fri, Feb 18, 2011 at 12:46 PM, Jorge Williams
jorge.willi...@rackspace.com wrote:

On Feb 18, 2011, at 10:27 AM, Jay Pipes wrote:

Hi Jorge! Thanks for the detailed response. Comments inline. :)

On Fri, Feb 18, 2011 at 11:02 AM, Jorge Williams
jorge.willi...@rackspace.com wrote:
There are lots of advantages:

1) It allows services to be more autonomous, and gives us clearly defined 
service boundaries. Each service can be treated as a black box.

Agreed.

2) All service communication becomes versioned, not just the public API but 
also the admin API.  This means looser coupling which can help us work in 
parallel.  So glance can be on 1.2 of their API, but another API that depends 
on it (say compute) can continue to consume 1.1 until they're ready to switch 
-- we don't have the bottlenecks of everyone having to update everything 
together.

Agreed.

3) Also because things are loosely coupled and there are clearly defined 
boundaries  it positions us to have many other services (LBaaS, FWaaS, DBaaS, 
DNSaaS, etc).

Agreed.

4) It also becomes easier to deploy a subset of functionality ( you want 
compute and image, but not block).

Agreed.

5) Interested developers can get involved in only the services that they care 
about without worrying about other services.

Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
the communication protocol between internal Nova services (network,
compute, and volume) right now. Developers can currently get involved
in the services they want to without messing with the other services.


I'm saying we can even package/deploy/run each service separately.  I suppose 
you can also do this with AMQP; I just see fewer roadblocks to doing this with 
HTTP.  So for example, AMQP requires a message bus which is external to the 
service.  That affects autonomy.  With an HTTP/REST approach, I can simply talk 
to the service directly. I suppose things could be a little different if we had a 
queuing service.  But even then, do we really want all of our messages to go to 
the queue service first?


6) We already have 3 APIs (nova, swift, glance), we need to do this kind of 
integration as it is, it makes sense for us to standardize on it.

Unless I'm mistaken, we're not talking about APIs. We're talking about
protocols. AMQP vs. HTTP.

What we call APIs are really protocols, so the OpenStack compute API is really 
a protocol for talking to compute.  Keep in mind that we use HTTP intimately in 
our RESTful protocol: content negotiation, headers, status codes, and so on; all of 
these are part of the API.

Another thing I should note is that I see benefits in keeping the interface 
to a service the same regardless of whether it's a user or another service that's 
making a call.  This allows us to eat our own dog food. That is, there's no 
separate protocol for developers versus clients.  Sure, there may be 
an Admin API, but the difference between the Admin API and the Public API is 
really defined in terms of security policies by the operator.


We are certainly changing the way we are doing things, but I don't really think 
we are throwing away a lot of functionality.  As PVO mentioned, things should 
work very similarly to the way they are working now.  You still have compute 
workers, and you may still have an internal queue; the only difference is that 
cross-service communication now happens by issuing REST calls.

I guess I'm on the fence with this one. I agree that:

* Having clear boundaries between services is A Good Thing
* Having versioning in the interfaces between services is A Good Thing

I'm just not convinced that services shouldn't be able to communicate
on different protocols. REST over HTTP is a fine interface. Serialized
messages over AMQP is similarly a fine interface.

I don't think we're saying you can't use any protocol besides HTTP.  If it 
makes sense to use something like AMQP **within  your service

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Jorge Williams
I think I understand your confusion, Justin.  Extensions are not there to bind 
APIs together.  The examples I gave were probably a bit misleading.  Extensions 
are there to support niche functionality and to allow developers to innovate 
without having to wait for some centralized group to approve.

You're right,  things should become clearer as we move towards code :-)

-jOrGe W.


On Feb 18, 2011, at 5:08 PM, Justin Santa Barbara wrote:

I find this even more confusing than before.  On the one hand, we talk about a 
suite of independent APIs, and on the other hand we talk about binding them 
together using extensions.  We talk about standardizing around one API, and we 
talk about letting a thousand flowers bloom as extensions.

I'm going to wait till there's concrete code here before commenting further, I 
think, so that we can talk in specifics.

Justin


On Fri, Feb 18, 2011 at 2:32 PM, Erik Carlin 
erik.car...@rackspace.com wrote:
The way I see it, there isn't a singular OpenStack API (even today there is 
swift, nova, and glance).  OpenStack is a suite of IaaS each with their own API 
– so there is a SUITE of standard OS APIs.  And each OS service should strive 
to define the canonical API for automating that particular service.  If I just 
want to run an image repo, I deploy glance.  If my SAN guy can't get storage 
provisioned fast enough, I deploy the OS block storage service (once we have 
it).  And if I want a full cloud suite, I deploy all the services.  They are 
loosely coupled and (ideally) independent building blocks.  Whether one chooses 
to front the different service endpoints with a proxy to unify them or have 
separate service endpoints is purely a deployment decision.  Either way, there 
are no competing OS APIs.  Support for 3rd party APIs (e.g. EC2) is secondary 
IMO, and to some degree, detrimental.  Standards are defined largely in part by 
ubiquity.  We want OS to become ubiquitous and we want the OS APIs to become 
defacto.  Supporting additional APIs (or even variations of the same API like 
AMQP per the other thread) doesn't help us here.  I would love to see the 
community rally behind a per service standard OS REST API that we can own and 
drive.

To that end, the goal as I see it is to launch canonical OpenStack Compute 
(nova) and Image (glance) APIs with Cactus.  In Diablo, we would then work to 
introduce separate network and block storage services with REST APIs as well.  
All APIs would be independently versioned and stable.  I'm ALL for per language 
OpenStack bindings that implement support for the entire suite of services.

Re: extensions, it's actually the technical aspects that are driving it.  There 
is a tension between standards and innovation that needs to be resolved.  In 
addition, we need to be able to support niche functionality (e.g. Rackspace may 
want to support API operations related to managed services) without imposing it 
on everyone.  These problems are not new.  We've seen the same exact thing with 
OpenGL and they have a very successful extension model that has solved this.  
Jorge studied this when did his PhD and has designed extensions with that in 
mind.  He has a presentation on extensions here if you haven't seen it.  I 
think extensions are critically important and would encourage dialog amongst 
the community to come to a consensus on this.  Per my points above, I would 
prefer to avoid separate APIs for the same service.  Let's see if we can get 
behind a per service API that becomes THE defacto standard way for automating 
that service.

Erik

From: Justin Santa Barbara jus...@fathomdb.com
Date: Fri, 18 Feb 2011 09:57:12 -0800

To: Paul Voccio paul.voc...@rackspace.com
Cc: openstack@lists.launchpad.net

Subject: Re: [Openstack] OpenStack Compute API 1.1

 How is the 1.1 api proposal breaking this?

Because if we launch an OpenStack API, the expectation is that this will be the 
OpenStack API :-)

If we support a third-party API (CloudServers or EC2), then people will 
continue to use their existing wrappers (e.g. jclouds)  Once there's an 
OpenStack API, then end-users will want to find a library for that, and we 
don't want that to be a poor experience.  To maintain a good experience, we 
either can't break the API, or we need to write and maintain a lot of proxying 
code to maintain compatibility.  We know we're not ready for the first 
commitment, and I don't think we get enough to justify the second.

 I think the proxy would make sense if you wanted to have a single api. Not 
 all service providers will but I see this as entirely optional, not required 
 to use the services.

But then we have two OpenStack APIs?  Our ultimate end users don't use the API, 
they use a wrapper library.  They want a stable library that works and is kept 
up to 

Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Jorge Williams

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


How would this work if someone didn't run  a volume service or glance? Should 
the api listen for that?

My expectation is that if someone didn't run a volume service, we should expose 
that just as if there were insufficient resources (because that's not far from 
the case.)  We'd return an error like no resources to satisfy your request.  
That way there's only one code path for quota exhausted / zero quota / no 
volume service / all disks full / no 'HIPAA compliant' or 'earthquake proof' 
volumes available when a user requests that.


A better approach is to simply provide a service catalog with a list of 
endpoints; you can easily detect whether a volume service is available, etc. 
This allows you to detect what services are available with a single request, 
rather than polling for multiple failures.  Think of writing a control panel 
and having a long list of services (image, volumes, network, etc.).  Do you want 
to make separate calls?  Do you have images? Volumes? Networks? And so on.  A 
service catalog allows you to make a single call that gives you an inventory of 
what's available; you can then decide what to enable and disable in 
your control panel.
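A minimal sketch of what such a catalog lookup might look like (the catalog structure and endpoints here are hypothetical, for illustration only, not an actual Keystone format):

```python
# Sketch: one call returns an inventory of service endpoints, so a control
# panel can decide what to show up front instead of probing each service
# and waiting for failures.
CATALOG = {
    "serviceCatalog": [
        {"type": "compute", "endpoint": "http://compute.example.com/v1.1"},
        {"type": "image",   "endpoint": "http://glance.example.com/v1.1"},
        # no "volume" entry: this deployment runs no volume service
    ]
}

def available(catalog, service_type):
    """True if the catalog lists an endpoint for the given service type."""
    return any(s["type"] == service_type
               for s in catalog["serviceCatalog"])

has_images = available(CATALOG, "image")    # True
has_volumes = available(CATALOG, "volume")  # False
```

One request, one code path; the absence of a volume service shows up as a missing catalog entry rather than as a polling failure.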

For glance, I don't know - how is it even possible to boot an instance without 
an image?

You may have a list of stock images that you support within compute and do 
without a full image service implementation (translation, cataloging, etc).


 We shouldn't be relying on extensions for Cactus.  In fact, I'd rather leave 
out extensions until we have a solid use case.  You may be saying that volumes 
will be our test-use case, but I think that will yield a sub-optimal API.


I see extensions doing a few things. First, it gives a way for other developers 
to work on and promote additions to the api without fighting to get them into 
core at first.

While I agree that's a good goal, I don't think we should rely on it for our 
core services, because it will give a sub-optimal experience.  I also think 
that this extension element may well be worse than simply having separate APIs. 
 Right now I think we're designing in a vacuum, as you say.

I don't see our core services relying on extensions.  Once we design APIs for 
our core services, the core APIs should have enough functionality to support 
core implementation.  Extensions are there to support functionality that is out 
of the core.  New features etc.  It allows us to innovate.


Can you explain how it would yield a sub-optimal api?
It would yield a sub-optimal API precisely because the process of fighting to 
get things into core makes them better.  If you don't believe that, then we 
should shut down the mailing list and develop closed-source.


I totally believe that fighting to get things into core will make things 
better.  Having extensions doesn't prevent this from happening; I would argue 
that it encourages folks to develop stuff and show it off. If you have a great 
idea, you can show it working; if clients like it, they will code against it, 
and you can create an incentive for getting it into the core.

Another thing I would note is that not everything belongs in the core -- 
there's always a need  for niche functionality that may be applicable only to a 
single operator or group of operators.  Troy gave a really great example for 
Rackspace with backup schedules -- and that's just one example there are others 
-- for Rackspace there are  features will likely never make it to the core 
because they require a very specific support infrastructure behind them.  With 
extensions we can add these features without breaking clients.


A less meta reasoning would be that when we design two things together, we're 
able to ensure they work together.  The screw and the screwdriver didn't evolve 
independently.  If we're designing them together, we shouldn't complicate 
things by use of extensions.

Again, our core services should stand on their own -- without extensions.  
Extensions are there to support new features in a backwards-compatible way, to 
allow operators to differentiate themselves, and to offer support for niche 
functionality.
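
To make the "without breaking clients" point concrete, here's a rough sketch of how an extension can decorate a core resource: extended attributes are prefixed with the extension alias, so clients that don't recognize the extension can safely ignore them.  (The RS-BACKUP alias and its fields are made up for illustration, not an actual extension.)

```json
{
  "server": {
    "id": 1234,
    "name": "web-01",
    "status": "ACTIVE",
    "RS-BACKUP:backupSchedule": {
      "enabled": true,
      "weekly": "THURSDAY",
      "daily": "H_0400_0600"
    }
  }
}
```

A client written against the core API sees a perfectly valid server representation; a client that knows the extension gets the extra data.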



I don't think that anyone is proposing that a volume API be entirely defined as 
an extension to OpenStack Compute. The volume extension serves simply as an 
example, and it covers the case for mounting and unmounting a volume.  If we 
can figure out a way of doing this in a general way, we can always promote the 
functionality to the core.

I don't disagree that there should be core APIs for each service, but in the 
long run there may not be a single API. Glance already doesn't have an API in 
the OpenStack 1.1 spec. What about Swift?

OK - so it sounds like volumes are going to be in the core API (?) - good.

No, more like: there will be a core API for managing volumes that is different 
from the Compute API.


Let's get that into the API spec.  It also sounds like extensions (swift / 

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jorge Williams

On Feb 14, 2011, at 3:08 PM, Jay Pipes wrote:

 The reason I haven't responded yet is because it's difficult for me to:
 
 diff -u some.pdf other.pdf
 
 In all seriousness, the wiki spec page says this about the differences
 in the 1.1 OpenStack API:
 


I'll work with Anne to make the source documents available to you guys so you 
can do a diff, etc.  Give me a couple of days to get this working; the existing 
docs are built into the implementation, which is a nice thing because our unit 
tests use the samples from the docs to make sure they're always correct. 
Anyway, now I need to separate these out.


 ==start wiki==
 
 OS API 1.1 Features
 
 IPv6
 
 Extensions
 
 Migrate to OpenStack namespace
 
 ==end wiki==
 
 There's just not much detail to go on. I had to go through the PDF to
 see what the proposed changes to the CS 1.0 API looked like.
 
 After looking at the PDF, I have a couple suggestions for improvement,
 but overall things look good :)
 
 1) Give extensions a way to version themselves. Currently, the main
 fields in the API response to GET /extensions looks like this:
 
 {
   "extensions": [
     {
       "name": "Public Image Extension",
       "namespace": "http://docs.rackspacecloud.com/servers/api/ext/pie/v1.0",
       "alias": "RS-PIE",
       "updated": "2011-01-22T13:25:27-06:00",
       "description": "Adds the capability to share an image with other users.",
       "links": [
         {
           "rel": "describedby",
           "type": "application/pdf",
           "href": "http://docs.rackspacecloud.com/servers/api/ext/cs-pie-2011.pdf"
         },
         {
           "rel": "describedby",
           "type": "application/vnd.sun.wadl+xml",
           "href": "http://docs.rackspacecloud.com/servers/api/ext/cs-pie.wadl"
         }
       ]
     }, ...
   ]
 }
 
 I would suggest adding a version field to the extension resource
 definition so that extension developers will have a way of identifying
 the version of their extension the OpenStack deployment has installed.

Do we want to deal with extension versions?  If you need to version your 
extension because it's not backwards compatible, simply create a new extension 
and append a version number to it: so RS-CBS and RS-CBS2, etc. This is how 
things work with OpenGL, which served as a reference for our extension 
mechanism.
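
Under this convention, a backwards-incompatible revision would simply show up as a second entry in the extensions list.  A hypothetical GET /extensions response (the names, dates, and descriptions here are illustrative, not real extensions):

```json
{
  "extensions": [
    {
      "name": "Cloud Block Storage",
      "alias": "RS-CBS",
      "updated": "2010-08-01T00:00:00-06:00",
      "description": "Original block storage extension."
    },
    {
      "name": "Cloud Block Storage v2",
      "alias": "RS-CBS2",
      "updated": "2011-02-01T00:00:00-06:00",
      "description": "Incompatible revision; clients opt in by coding against the new alias."
    }
  ]
}
```

A client that only understands RS-CBS keeps working unchanged, since the original alias is still advertised.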

 
 2) I would suggest leaving the links collection off of the main
 result returned by GET /extensions and instead only returning the
 links collection when a specific extension is queried with a call to
 GET /extensions/ALIAS. You could even mimic the rest of the CS API
 and do a GET /extensions/detail that could return those fields?

I like this idea.
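
Sketching out Jay's suggestion, the per-alias resource might look like this (payload is illustrative): GET /extensions would return just the summary fields, while GET /extensions/RS-PIE (or GET /extensions/detail) would carry the full record, including links:

```json
{
  "extension": {
    "name": "Public Image Extension",
    "alias": "RS-PIE",
    "updated": "2011-01-22T13:25:27-06:00",
    "description": "Adds the capability to share an image with other users.",
    "links": [
      {
        "rel": "describedby",
        "type": "application/vnd.sun.wadl+xml",
        "href": "http://docs.rackspacecloud.com/servers/api/ext/cs-pie.wadl"
      }
    ]
  }
}
```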

 
 3) IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
 was looking at the examples on pages 29 and 30. Was there a specific
 section that spoke to IPv6 changes; I could not find one.
 

I'm working to flesh this out a bit. Also, I've gotten a bunch of comments on 
Etherpad (http://etherpad.openstack.org/osapi1-1), which I'm incorporating 
into the spec.  Expect more comments on Etherpad, and a new revision of the 
spec soon -- as well as access to the source :-).  In the meantime keep the 
comments coming.

Thanks,

jOrGe W.









Confidentiality Notice: This e-mail message (including any attached or
embedded documents) is intended for the exclusive and confidential use of the
individual or entity to which this message is addressed, and unless otherwise
expressly indicated, is confidential and privileged information of Rackspace. 
Any dissemination, distribution or copying of the enclosed material is 
prohibited.
If you receive this transmission in error, please notify us immediately by 
e-mail
at ab...@rackspace.com, and delete the original message. 
Your cooperation is appreciated.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Jorge Williams

On Feb 14, 2011, at 3:35 PM, Jay Pipes wrote:

 On Mon, Feb 14, 2011 at 4:27 PM, Jorge Williams
 jorge.willi...@rackspace.com wrote:
 On Feb 14, 2011, at 3:08 PM, Jay Pipes wrote:
 I'll work with Anne to make the source documents available to you guys so 
 you can do a diff etc.  Give me a couple of days to get this working, 
 existing docs are built into the implementation, this is a nice thing 
 because our unit tests use the samples from the docs to make sure they're 
 always correct...anyway now  I need to separate these out.
 
 Cool, thanks Jorge! :)
 
 I would suggest adding a version field to the extension resource
 definition so that extension developers will have a way of identifying
 the version of their extension the OpenStack deployment has installed.
 
 Do we want to deal with extension versions?  If you need to version your 
 extension because it's not backwards compatible simply create a new 
 extension and append a version number to it. So RS-CBS and RS-CBS2, etc. 
 This is how things work with OpenGL which served as a reference for our 
 extension mechanism.
 
 Hmm, I suppose that's possible, too.  I'd prefer a unique field that
 has version information, but either could work.
 
 Another field that could be nice is author or authors to allow the
 developers or developer company/organization to be listed?

Another great idea.  I'll get that in there.
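
A sketch of what an extension entry might look like with both of Jay's suggestions applied -- a version field and an authors list.  (The field names and values here are proposals for discussion, not part of the current spec.)

```json
{
  "extension": {
    "name": "Public Image Extension",
    "alias": "RS-PIE",
    "version": "1.0",
    "authors": [
      {
        "name": "Rackspace",
        "href": "http://www.rackspace.com/"
      }
    ],
    "updated": "2011-01-22T13:25:27-06:00",
    "description": "Adds the capability to share an image with other users."
  }
}
```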

 
 2) I would suggest leaving the links collection off of the main
 result returned by GET /extensions and instead only returned the
 links collection when a specific extension is queried with a call to
 GET /extensions/ALIAS. You could even mimick the rest of the CS API
 and do a GET /extensions/detail that could return those fields?
 
 I like this idea.
 
 Cool :)
 
 3) IPv6 stuff in the PDF looked good as far as I could tell. Mostly, I
 was looking at the examples on pages 29 and 30. Was there a specific
 section that spoke to IPv6 changes; I could not find one.
 
 
 I'm working to flesh this out a bit. Also, I've gotten a bunch of comments on 
 Etherpad (http://etherpad.openstack.org/osapi1-1), which I'm incorporating 
 into the spec.  Expect more comments on Etherpad, and a new revision of the 
 spec soon -- as well as access to the source :-).  In the meantime keep the 
 comments coming.
 
 Gotcha. Will do :)
 
 Cheers,
 jay






