Re: [openstack-dev] [nova][heat][keystone] RFC: introducing "request identification"

2013-11-22 Thread Mitsuru Kanabuchi
n" (we can
> > call it 'request-token', for example).
> >Before sending actual API request to nova-api, client sends a
> > request to keystone to get 'request-token'.
> >
> >
> > -2
> >
> >Then client sends an actual API request with 'request-token'.
> >Nova-api will check with Keystone whether it was really
> > generated.
> >It sounds like an auth-token generated by Keystone; the differences are:
> >  [lifecycle] auth-token is used for multiple API requests
> > before it expires,
> > 'request-token' is used for only single API request.
> >  [reusing] if the same 'request-token' was specified twice or
> > more times,
> > nova-api simply returns 20x (works like client token in
> > AWS[6]).
> > Keystone needs to maintain 'request-tokens' until they expire.
> >For backward compatibility, actual API request without
> > 'request-token' should work as before.
> >We can consider several options for uniqueness of 'request-token':
> >  a UUID, any string unique per tenant, etc.
> >
> > IMO, since adding a new implementation to Keystone is fairly
> > hard work, implementing option 1 seems reasonable to me; just an idea.
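As an illustration of the two-phase flow described above, here is a minimal sketch. All class, endpoint, and parameter names are hypothetical stand-ins; neither Keystone nor nova-api exposes such an API today.

```python
import uuid

class FakeKeystone:
    """Stand-in for the proposed Keystone side: issues and validates
    single-use request-tokens."""
    def __init__(self):
        self._issued = set()

    def issue_request_token(self):
        token = uuid.uuid4().hex
        self._issued.add(token)
        return token

    def validate(self, token):
        return token in self._issued

class FakeNova:
    """Stand-in for nova-api: consults Keystone, and replays the first
    result when the same request-token arrives again (AWS client-token style)."""
    def __init__(self, keystone):
        self._keystone = keystone
        self._seen = {}  # request-token -> (status_code, server_id)

    def create_server(self, request_token, name):
        if not self._keystone.validate(request_token):
            return (401, None)
        if request_token in self._seen:      # duplicate request: return 20x again
            return self._seen[request_token]
        result = (202, uuid.uuid4().hex)     # actually create the server
        self._seen[request_token] = result
        return result

keystone = FakeKeystone()
nova = FakeNova(keystone)
token = keystone.issue_request_token()                 # phase 1: get a request-token
status1, server1 = nova.create_server(token, "web01")  # phase 2: actual API request
status2, server2 = nova.create_server(token, "web01")  # client retry after a lost response
assert (status1, server1) == (status2, server2)        # no duplicate server created
```

The point of the sketch is the lifecycle difference in the quoted text: the request-token authorizes exactly one logical operation, so a retry replays the original result rather than creating a second resource.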
> >
> > Any comments will be appreciated.
> >
> > Sincerely, Haruka Tanizawa
> >
> > [0] https://blueprints.launchpad.net/nova/+spec/instance-tasks-api
> > [1] https://wiki.openstack.org/wiki/Support-retry-with-idempotency
> > [2] https://blueprints.launchpad.net/nova/+spec/cancel-swap-volume
> > [3]
> > 
> > http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg09023.html
> > [4]
> > https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token
> > [5] https://review.openstack.org/#/c/29480/
> > [6]
> > 
> > http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > <mailto:OpenStack-dev@lists.openstack.org>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > -- 
> >
> > -Dolph
> >
> >


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [heat] Stack convergence first steps

2013-12-08 Thread Mitsuru Kanabuchi

On Thu, 5 Dec 2013 22:13:18 -0600
Christopher Armstrong  wrote:

> On Thu, Dec 5, 2013 at 7:25 PM, Randall Burt 
> wrote:
> 
> >  On Dec 5, 2013, at 6:25 PM, Christopher Armstrong <
> > chris.armstr...@rackspace.com>
> >  wrote:
> >
> >   On Thu, Dec 5, 2013 at 3:50 PM, Anderson Mesquita <
> > anderson...@thoughtworks.com> wrote:
> >
> >> Hey stackers,
> >>
> >> We've been working towards making stack convergence (
> >> https://blueprints.launchpad.net/heat/+spec/stack-convergence) one step
> >> closer to being ready at a time.  After the first patch was submitted we
> >> got positive feedback on it as well as some good suggestions as to how to
> >> move it forward.
> >>
> >> The first step (https://blueprints.launchpad.net/heat/+spec/stack-check)
> >> is to get all the statuses back from the real world resources and update
> >> our stacks accordingly so that we'll be able to move on to the next step:
> >> converge it to the desired state, fixing any errors that may have happened.
> >>
> >> We just submitted another WiP for review, and as we were doing it, a few
> >> questions were raised and we'd like to get everybody's input on them. Our
> >> main concern is around the use and purpose of the `status` of a
> >> stack/resource.  `status` currently appears to represent the status of the
> >> last action taken, and it seems that we may need to repurpose it or
> >> possibly create something else to represent a stack's "health" (i.e.
> >> everything is up and running as expected, something smells fishy, something
> >> broke, the stack is doomed).  We described this thoroughly here:
> >> https://etherpad.openstack.org/p/heat-convergence
> >>
> >> Any thoughts?
> >>
> >> Cheers,
> >>
> >>
> >  I think a lot of OpenStack projects use "status" fields as "status of
> > the most recent operation", and I think it's totally wrong. "status" should
> > be a known state of the _object_, not an action, and if we need statuses
> > for operations, then those operations should be addressable REST objects.
> > Of course there are cases where object status should be updated to reflect
> > an operating status if it's a completely exclusive operation (BUILDING and
> > DELETING make sense, for example).
> >
> >  Actually, I think most projects are the opposite where "status" means
> > "what's the state of the resource" (Nova, Trove, Cinder, etc), whereas Heat
> > uses status as the state of the last operation. Probably wouldn't be too
> > terrible to have a new "state" for stacks and their resources then perhaps
> > deprecate and use "status" in the accepted way in the v2 API?
> 
> Well, my point is that it's done inconsistently. Yes, it's mostly used as
> an object status, but nova for example uses it as an operation status for
> things like resize.

Nova's statuses during a resize are "RESIZE" and "VERIFY_RESIZE".
These statuses mean the instance is currently ACTIVE, with a resize in progress.
I think Heat can assume the resource status is "ACTIVE" in this case.

Thus, the statuses that encode an operation status have to be mapped to a
resource (object) status. In my impression, however, there aren't many
statuses that need such a mapping.

In my opinion, a status mapping table is a reasonable approach for now.
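A minimal sketch of that mapping-table idea (the table contents here are illustrative assumptions, not actual Heat code):

```python
# Map Nova statuses that encode an in-progress operation onto the
# resource (object) status Heat cares about; anything not listed is
# assumed to already be an object status and passes through unchanged.
OPERATION_TO_OBJECT_STATUS = {
    "RESIZE": "ACTIVE",         # instance is ACTIVE, resize in progress
    "VERIFY_RESIZE": "ACTIVE",  # resized, awaiting confirmation
    "REBOOT": "ACTIVE",
}

def object_status(nova_status):
    return OPERATION_TO_OBJECT_STATUS.get(nova_status, nova_status)

assert object_status("VERIFY_RESIZE") == "ACTIVE"
assert object_status("ERROR") == "ERROR"  # already an object status
```

Because only a handful of statuses need translating, the table stays small, which matches the observation above that "there aren't many statuses that need such a mapping."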

Regards

--
Mitsuru Kanabuchi




Re: [openstack-dev] [Ceilometer] time consuming of listing resource

2014-02-18 Thread Mitsuru Kanabuchi
the code level:
> > >
> > >On the schema level:
> > >
> > >* The indexes, especially on sourceassoc, are wrong:
> > >  ** The order of the columns in the multi-column indexes like idx_sr, 
> > >idx_sm, idx_su, idx_sp is incorrect. Columns used in predicates should 
> > >*precede* columns (like source_id) that are used in joins. The way the 
> > >indexes are structured now makes them unusable by the query optimizer 
> > >for 99% of queries on the sourceassoc table, which means any queries on 
> > >sourceassoc trigger a full table scan of the hundreds of millions of 
> > >records in the table. Things are made worse by the fact that INSERT 
> > >operations are slowed for each index on a table, and the fact that none 
> > >of these indexes are used just means we're wasting cycles on each INSERT 
> > >for no reason.
> > >  ** The indexes are across the entire VARCHAR(255) field width. This 
> > >isn't necessary (and I would argue that the base field type should be 
> > >smaller). Index width can be reduced (and performance increased) by 
> > >limiting the indexable width to 32 (or smaller).
> > >
> > >The solution to the main indexing issues is to do the following:
> > >
> > >DROP INDEX idx_sr ON sourceassoc;
> > >CREATE INDEX idx_sr ON sourceassoc (resource_id(32), source_id(32));
> > >DROP INDEX idx_sp ON sourceassoc;
> > >CREATE INDEX idx_sp ON sourceassoc (project_id(32), source_id(32));
> > >DROP INDEX idx_su ON sourceassoc;
> > >CREATE INDEX idx_su ON sourceassoc (user_id(32), source_id(32));
> > >DROP INDEX idx_sm ON sourceassoc;
> > >CREATE INDEX idx_sm ON sourceassoc (meter_id, source_id(32));
> > >
> > >Keep in mind if you have (hundreds of) millions of records in the 
> > >sourceassoc table, the above will take a long time to run. It will take 
> > >hours, but you'll be happy you did it. You'll see the database 
> > >performance increase dramatically.
> > >
> > >* The columns that refer to IDs of various kinds should not be UTF8. 
> > >Changing these columns to a latin1 or even binary charset would cut the 
> > >space requirements for the data and index storage by 65%. This means you 
> > >can fit around 3x as many records in the same data and index pages. The 
> > >more records you fit into an index page, the faster seeks and scans will 
> > >be.
> > >
> > >* sourceassoc has no primary key.
> > >
> > >* The meter table has the following:
> > >
> > >   KEY ix_meter_id (id)
> > >
> > >   which is entirely redundant (id is the primary key) and does nothing 
> > >but slow down insert operations for every record in the meter table.
> > >
> > >* The meter table mixes frequently searched and aggregated fields (like 
> > >timestamp, counter_type, project_id) with infrequently accessed fields 
> > >(like resource_metadata, which is a VARCHAR(5000)). This leads to poorer 
> > >performance of aggregate queries on the meter table that use the 
> > >clustered index (primary key) in aggregation (for an example, see the 
> > >particular line of code that we comment out of Ceilometer above). A 
> > >better performing schema would consolidate slim, frequently accessed 
> > >fields into the main meter table and move infrequently accessed or 
> > >searched fields into a meter_extra table. This would mean many more 
> > >records of the main meter table can fit into a single InnoDB data page 
> > >(the clustered index), which means faster seeks and scans for 99% of 
> > >queries on that table.
> > >
> > >On the code level there are a variety of inefficient queries that are 
> > >generated, and there are a number of places where using something like a 
> > >memcache caching layer for common lookup queries could help reduce load 
> > >on the DB server.
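The kind of read-through caching layer described above could be sketched as follows. A plain dict stands in for memcached here, and all names are hypothetical; this is only a shape sketch, not Ceilometer code.

```python
import time

class ReadThroughCache:
    """Cache common lookup queries so repeated lookups skip the database."""
    def __init__(self, ttl_seconds=60):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]              # cache hit: no DB round trip
        value = loader()               # cache miss: e.g. a SELECT on sourceassoc
        self._store[key] = (now + self._ttl, value)
        return value

calls = []
def fake_db_lookup():
    calls.append(1)                    # stands in for an expensive query
    return ["resource-1", "resource-2"]

cache = ReadThroughCache()
first = cache.get_or_load("sources:resource-1", fake_db_lookup)
second = cache.get_or_load("sources:resource-1", fake_db_lookup)
assert first == second and len(calls) == 1  # second call served from cache
```

With memcached the `_store` dict would be replaced by network gets/sets, but the load-on-miss, expire-by-TTL pattern is the same.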
> > >
> > >I'm hoping to push some patches in the early part of 2014 that address 
> > >performance and scalability issues in the SQL driver for Ceilometer.
> > >
> > >Best,
> > >-jay
> > >


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [Heat][Neutron] Refactoring heat LBaaS architecture according Neutron API

2014-02-20 Thread Mitsuru Kanabuchi

Hi Sergey,

On Thu, 20 Feb 2014 19:58:14 +0400
Sergey Kraynev  wrote:

> Hello community.
> 
> I'd like to discuss feature of Neutron LBaaS in Heat.
> Currently Heat resources are not identical to Neutron's.
> There are four resources here:
> 'OS::Neutron::HealthMonitor'
> 'OS::Neutron::Pool'
> 'OS::Neutron::PoolMember'
> 'OS::Neutron::LoadBalancer'
> 
> In this representation the VIP is part of the LoadBalancer resource,
> whereas Neutron has a separate VIP object.  I think it should
> be changed to conform with Neutron's implementation.
> So the main question: what is the best way to change it? I see following
> options:
> 
> 1. Move VIP in separate resource in icehouse release (without any
> additions).
> Possibly we should support both (old and new) implementation for users too.
>  IMO, it also carries one big risk: we now have a stable version of it,
> and not enough time to vet a new approach.
> Also I think it does not make sense now, because Neutron team are
> discussing new object model (
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027480.html)
> and it will be implemented in Juno.
> 
> 2. The second idea is to wait for all the architecture changes planned for
> Neutron in Juno (see the link above).
> Then we could recreate or change the Heat LBaaS architecture wholesale.

+1

In my understanding, Heat resources don't necessarily have to be identical
to the underlying resources. In fact, several Heat resources already differ
from their underlying resources, mainly for dependency reasons.

IMO, we should wait for the change to Neutron's model definition, and then
consider what resource model is appropriate from Heat's perspective. Juno
would be an appropriate time to consider refactoring the LBaaS model.

> Your feedback and other ideas about better implementation plan are welcome.
> 
> Regards,
> Sergey.

Regards,


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




[openstack-dev] [Heat]Blueprint for retry function with idempotency in Heat

2013-10-16 Thread Mitsuru Kanabuchi

Hi all,

We have proposed a blueprint that adds an API retry function with idempotency
to Heat. Please review the blueprint.

  https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency

Any comments will be gratefully appreciated. 

Regards.


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [Heat]Blueprint for retry function with idempotency in Heat

2013-10-17 Thread Mitsuru Kanabuchi

Hello Mr. Clint,

Thank you for your comment and for the prioritization.
I'm glad to discuss this with you, since you feel the same pain.

> I took the liberty of targeting your blueprint at icehouse. If you don't
> think it is likely to get done in icehouse, please raise that with us at
> the weekly meeting if you can and we can remove it from the list.

Basically, this blueprint targets the Icehouse release.

However, the schedule depends on the following blueprint:
  https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token

We're going to start the Heat implementation after ClientToken is implemented.
I think ClientToken is a necessary function for this blueprint, and an
important function for other callers too!


On Wed, 16 Oct 2013 23:32:22 -0700
Clint Byrum  wrote:

> Excerpts from Mitsuru Kanabuchi's message of 2013-10-16 04:47:08 -0700:
> > 
> > Hi all,
> > 
> > We have proposed a blueprint that adds an API retry function with
> > idempotency to Heat. Please review the blueprint.
> > 
> >   https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency
> > 
> 
> This looks great. It addresses some of what I've struggled with while
> thinking of how to handle the retry problem.
> 
> I went ahead and linked bug #1160052 to the blueprint, as it is one that
> I've been trying to get a solution for.
> 
> I took the liberty of targeting your blueprint at icehouse. If you don't
> think it is likely to get done in icehouse, please raise that with us at
> the weekly meeting if you can and we can remove it from the list.
> 
> Note that there is another related blueprint here:
> 
> https://blueprints.launchpad.net/heat/+spec/retry-failed-update
> 



  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [Heat]Blueprint for retry function with idempotency in Heat

2013-10-18 Thread Mitsuru Kanabuchi

On Fri, 18 Oct 2013 10:34:11 +0100
Steven Hardy  wrote:
> IMO we don't want to go down the path of retry-loops in Heat, or scheduled
> self-healing. We should just allow the user to trigger a stack update from
> a failed state (CREATE_FAILED, or UPDATE_FAILED), and then they can define
> their own policy on when recovery happens by triggering a stack update.

I think "retry" has two different meanings in this topic,
so I'd like to disentangle them.

=
1) Stack Creation retry

  proposed here:
https://blueprints.launchpad.net/heat/+spec/retry-failed-update

  - trigger: stack update to failed stack
  - function: replace failed resource and go ahead

2) API retry

  proposed here(Our blueprint):
https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency

  - trigger: no API response arrives, or an unexpected response code is returned
  - function: retry the API request until an expected response code arrives or
a retry limit is reached
=

Our proposal is 2).
Once the retry limit is exceeded, the stack changes to an XXX_FAILED status.
I think this matches Heat's current behavior; we won't change the stack
state transition mechanism.

I understand that proposal 1) aims to restart processing of a failed stack.
These are concerns at different layers, and both functionalities can exist
together.
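The API-retry behaviour of proposal 2) can be sketched as follows. The function and parameter names are hypothetical; actual Heat code would look different.

```python
def call_with_retry(request, expected_codes=(200, 201, 202), retry_limit=3):
    """Retry an API request until an expected response code arrives or the
    retry limit is reached; the caller then moves the stack to XXX_FAILED."""
    last_error = None
    for attempt in range(retry_limit):
        try:
            code = request()
        except IOError as exc:          # trigger: no response at all
            last_error = exc
            continue
        if code in expected_codes:
            return code                 # success: stop retrying
        last_error = RuntimeError("unexpected code %s" % code)
    raise last_error                    # over the limit -> stack goes FAILED

# A request that times out, then errors, then succeeds:
responses = iter([IOError("timeout"), 500, 202])
def flaky_request():
    r = next(responses)
    if isinstance(r, Exception):
        raise r
    return r

assert call_with_retry(flaky_request) == 202
```

Note that only the exhaustion of the retry limit is surfaced to the caller, which is why the stack state machine itself needs no changes.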


On Fri, 18 Oct 2013 10:34:11 +0100
Steven Hardy  wrote:

> On Fri, Oct 18, 2013 at 12:13:45PM +1300, Steve Baker wrote:
> > On 10/18/2013 01:54 AM, Mitsuru Kanabuchi wrote:
> > > Hello Mr. Clint,
> > >
> > > Thank you for your comment and prioritization.
> > > I'm glad to discuss you who feel same issue.
> > >
> > >> I took the liberty of targeting your blueprint at icehouse. If you don't
> > >> think it is likely to get done in icehouse, please raise that with us at
> > >> the weekly meeting if you can and we can remove it from the list.
> > > Basically, this blueprint is targeted IceHouse release.
> > >
> > > However, the schedule is depend on follows blueprint:
> > >   https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token
> > >
> > > We're going to start implementation to Heat after ClientToken implemented.
> > > I think ClientToken is necessary function for this blueprint, and 
> > > important function for other callers!
> > Can there not be a default retry implementation which deletes any
> > ERRORed resource and attempts the operation again? Then specific
> > resources can switch to ClientToken as they become available.
> 
> Yes, I think this is the way to go - have logic in every resources
> handle_update (which would probably be common with check_create_complete),
> which checks the status of the underlying physical resource, and if it's
> not in the expected status, we replace it.
> 
> This probably needs to be a new flag or API operation, as it clearly has
> the possibility to be more destructive than a normal update (may delete
> resources which have not changed in the template, but are in a bad state)
> 
> > > On Wed, 16 Oct 2013 23:32:22 -0700
> > > Clint Byrum  wrote:
> > >
> > >> Excerpts from Mitsuru Kanabuchi's message of 2013-10-16 04:47:08 -0700:
> > >>> Hi all,
> > >>>
> > >>> We have proposed a blueprint that adds an API retry function with
> > >>> idempotency to Heat. Please review the blueprint.
> > >>>
> > >>>   
> > >>> https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency
> > >>>
> > >> This looks great. It addresses some of what I've struggled with while
> > >> thinking of how to handle the retry problem.
> > >>
> > >> I went ahead and linked bug #1160052 to the blueprint, as it is one that
> > >> I've been trying to get a solution for.
> > >>
> > >> I took the liberty of targeting your blueprint at icehouse. If you don't
> > >> think it is likely to get done in icehouse, please raise that with us at
> > >> the weekly meeting if you can and we can remove it from the list.
> > >>
> > >> Note that there is another related blueprint here:
> > >>
> > >> https://blueprints.launchpad.net/heat/+spec/retry-failed-update
> > >>
> > >>
> > 
> > Has any thought been given to where the policy should be specified for
> > how many retries to attempt?
> > 
> > Maybe sensible defaults should be defined in the python resou

Re: [openstack-dev] [Nova]Ideas of idempotentcy-client-token

2013-10-30 Thread Mitsuru Kanabuchi

On Tue, 29 Oct 2013 10:32:18 +
Joe Gordon  wrote:
> * Can you fill out the questions found in
> http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/

I think the ClientToken (idempotency) implementation is a nice idea for API requesters.

In my opinion, after sending a POST, a requester has to stop processing if it
cannot get a response, because it doesn't know whether the resource was
actually created.
In this case, a naive retry is a bad approach, because it might create
duplicate resources.

I think ClientToken provides the solution.
If the resource already exists, the server skips resource creation based on
the ClientToken, so a requester can retry without creating duplicate
resources.
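A hedged sketch of the client-side pattern (the API shape is hypothetical; today's novaclient accepts no such ClientToken parameter):

```python
import uuid

def create_server_idempotently(api_call, retries=3):
    """Send the same ClientToken on every attempt, so a retry after a
    lost response cannot create a second server."""
    client_token = uuid.uuid4().hex
    for attempt in range(retries):
        try:
            return api_call(client_token)
        except IOError:
            continue   # response lost: safe to retry with the same token
    raise RuntimeError("retry limit exceeded; resolve via the token")

# Simulate a server whose first response is lost in transit:
created = {}
def fake_api(client_token):
    if client_token not in created:
        created[client_token] = "server-%s" % len(created)
        raise IOError("response lost")   # created, but we never heard back
    return created[client_token]          # duplicate request: same server back

server = create_server_idempotently(fake_api)
assert len(created) == 1  # only one server exists despite the retry
```

Without the token, the retry in this scenario would have created a second server, which is exactly the duplicate-resource problem described above.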

I hope this blueprint will be implemented. I have also proposed a related blueprint:

  https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency

Regards


On Tue, 29 Oct 2013 10:32:18 +
Joe Gordon  wrote:

> On Tue, Oct 29, 2013 at 8:50 AM, haruka tanizawa wrote:
> 
> > Hi all!
> >
> >
> > I proposed 'Idempotency for OpenStack API' as before.
> > In this time, I rewrote BP(
> > https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token )
> > and I implemented proto of it.
> >
> >
> > I imagine the use case below:
> > a user can't know the instance ID if the client goes away before the
> > 'create server' response to the request arrives.
> > So the user needs something like a token they can specify as a marker.
> > For a service, this can also become a billing problem.
> >
> > In this case, an idempotency client token is very useful.
> > By specifying the token themselves, the user can learn the status of the
> > server: no matter how many times the user sends the POST, the result is
> > guaranteed to be the same as the return of the user's first POST request.
> >
> >
> > Moreover, I found that this BP(
> > https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency)
> > is based on my blueprint.
> >
> >
> > If you have any idea about or question, please feel free to discuss
> > anything.
> > ** Also, I will attend HK summit.
> >
> 
> I like the idea, but a few comments:
> 
> * Can you fill out the questions found in
> http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/
> * Can you break down the blueprint into work items, so we can see what
> steps are involved
> * Since this is for OpenStack APIs only, the name client-token makes me
> think of keystone tokens, so I think we need a better name.
> 
> > Sincerely,
> > Haruka Tanizawa
> >
> >
> >



  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




[openstack-dev] [Heat]Updated summit etherpad: API-retry-with-idempotency

2013-11-11 Thread Mitsuru Kanabuchi

Hello Heat folks.

Thank you very much for a lot of discussion about API-retry-with-idempotency
at Hong Kong summit.

I have written up the discussion points in the summit etherpad.
Sorry for the delay.

  https://etherpad.openstack.org/p/icehouse-summit-heat-convergence

Please comment if I have misunderstood anything.

I will also update the details of API-retry-with-idempotency here.

  https://etherpad.openstack.org/p/kgpc00uuQr

I will email again after it is updated.

Regards

----
  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [Heat]Updated summit etherpad: API-retry-with-idempotency

2013-11-13 Thread Mitsuru Kanabuchi

Thank you for comments!

On Thu, 14 Nov 2013 11:02:34 +1300
Steve Baker  wrote:
> On 11/12/2013 07:40 PM, Mitsuru Kanabuchi wrote:
> (snip)
> Just to confirm, my understanding of the outcome of that session was
> that pythonclients should implement retries of failed requests with the
> idempotency token.
> 
> Which means that no changes are required in heat, since the clients are
> attempting the retries inside a single client call.

In my understanding, the conclusion of the summit discussion didn't cover the
implementation target (Heat or the python client).
I think this needs more discussion.

In my opinion, the API-retry function should be implemented in Heat, for the
following reasons:

  1) Heat has to judge whether an API retry is needed when it does receive an
 HTTP response.
  2) (As Zane commented) Heat has to delete the underlying resource, via the
 idempotency key, when the POST retry limit is exceeded.

I think this processing (judging the response code and cleaning up the
resource) isn't the python client's job.
What do you think?

On Wed, 13 Nov 2013 23:19:00 +0100
Zane Bitter  wrote:
> (snip)
> Assuming this can still fail eventually (even after retries) we still 
> need a way in Heat to make sure we can delete the resource by looking it 
> up from the idempotency token.
> 
> Of course the idempotency token *should* be just the name, but since 
> most projects have inexplicably chosen not to enforce unique names (in 
> tenant scope), we're in the odd position of requiring 3 ways to look up 
> any resource (by name, UUID, and idempotency token). That's bonkers, but 
> what can you do?

I agree with you.

We don't want to add new lookup keys if we can avoid it.
Our objective is to solve the problem that occurs when the API response is
lost and the client never gets the resource ID.

The existing parameters (UUID and name) aren't suitable for this purpose,
because they cannot be obtained when the API response is lost;
there is no way to check for the resource's existence from the client side.

I think a client token (idempotency token?) is the best way to cope with that
situation.

  https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token

This blueprint will give us Amazon-like idempotency functionality.
I really hope discussion of the blueprint becomes more active.

On Wed, 13 Nov 2013 16:35:15 -0600
Chris Friesen  wrote:
> On 11/13/2013 04:19 PM, Zane Bitter wrote:
> (snip)
> Why would the idempotency token not be the UUID?  Presumably that should 
> be unique.

I think so too; the idempotency token has to be unique.
In addition, since the token would be generated by each user, it has to be
unique per user to avoid token conflicts.

Regards


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp




Re: [openstack-dev] [Heat]Updated summit etherpad: API-retry-with-idempotency

2013-11-18 Thread Mitsuru Kanabuchi

On Fri, 15 Nov 2013 12:46:44 +0100
Zane Bitter  wrote:

> Yes, but you don't know the UUID until you know it, and by then it's too 
> late (the resource has been created). So the idempotency token has to be 
> something passed in by the user.

I completely agree with you that the token has to be something passed in by
the user.

> You could allow the user to supply the UUID (you would obviously check 
> it for uniqueness). There is however, many possible ways in which that 
> could go horribly wrong (e.g. if you sharded based on UUID, an attacker 
> might be able to exploit that to overload one of your machines; the 
> uniqueness check leaks information from other tenants, &c.)

Umm...
Thank you for the important comments.

I understand your comment to imply that the idempotency token has to be
generated by a trusted service (e.g. Keystone?).

On the other hand, I'm thinking about an easier way to implement the
idempotency token. In my idea, the idempotency token should:

  - be a plain string (not a UUID)
# to avoid the UUID problems above

  - be isolated per tenant
# to avoid leaking uniqueness-check information

What do you think about that?
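One way to realize those two properties (free-form string, isolated per tenant) is simply to namespace the user-supplied string by tenant before using it as a lookup key. A hypothetical sketch, not any project's actual scheme:

```python
import hashlib

def scoped_token(tenant_id, user_token):
    """Combine the tenant ID with the user's free-form token so tokens
    never collide (or leak existence information) across tenants."""
    raw = "%s:%s" % (tenant_id, user_token)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The same user-chosen string yields distinct keys per tenant:
a = scoped_token("tenant-A", "deploy-web-01")
b = scoped_token("tenant-B", "deploy-web-01")
assert a != b
# ...and is stable within a tenant, so retries map to the same key:
assert scoped_token("tenant-A", "deploy-web-01") == a
```

Because the uniqueness check is performed only within the tenant's own namespace, one tenant can never probe whether another tenant has used a given token.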


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp

