Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-06 Thread Ladislav Smola

On 12/20/2013 05:51 PM, Clint Byrum wrote:

Excerpts from Ladislav Smola's message of 2013-12-20 05:48:40 -0800:

On 12/20/2013 02:37 PM, Imre Farkas wrote:

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
The stack is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating.
Once we pack in all the operations that currently happen outside of it, we should
be alright. And that should be doable in I.
So we should push towards this, rather than building some temporary
locking solution in Tuskar-API.

It's not an issue of locking: the goal of Tuskar's
Provision button is more than a single stack creation. After Heat's job
is done, the overcloud needs to be properly configured: Keystone needs
to be initialized, the services need to be registered, etc. I don't
think Horizon wants to add a background worker to handle such operations.


Yes, that is a valid point. I hope we will be able to pack it all into a
Heat template in I. This could be the way:
https://blueprints.launchpad.net/heat/+spec/hot-software-config

Seems like the consensus is: It belongs to Heat. We are just not able to
do it that way now.

So there is a question whether we should try to solve it in Tuskar-API
temporarily, or rather focus on Heat.


Interestingly enough, what Imre has just mentioned isn't necessarily
covered by hot-software-config. That blueprint is specifically about
configuring machines, but not APIs.

I think we actually need multi-cloud to support what Imre is talking
about. These are API operations that need to follow the entire stack
bring-up, but happen in a different cloud (the new one).

Assuming single servers instead of loadbalancers and stuff for simplicity:


resources:
  keystone:
    type: OS::Nova::Server
  glance:
    type: OS::Nova::Server
  nova:
    type: OS::Nova::Server
  cloud-setup:
    type: OS::Heat::Stack
    properties:
      cloud-endpoint: str_join [ 'https://', get_attribute [ 'keystone', 'first_ip' ], ':35357/' ]
      cloud-credentials: get_parameter ['something']
      template:
        keystone-catalog:
          type: OS::Keystone::Catalog
          properties:
            endpoints:
              - type: Compute
                publicUrl: str_join [ 'https://', get_attribute [ 'nova', 'first_ip' ], ':8447/' ]
              - type: Image
                publicUrl: str_join [ 'https://', get_attribute [ 'glance', 'first_ip' ], ':12345/' ]

What I mean is, you want the Heat stack to be done not when the hardware
is up, but when the APIs have been orchestrated.



Thanks for pointing that out, we should discuss it with Heat guys.




Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-03 Thread Jiří Stránský

On 21.12.2013 06:10, Jay Pipes wrote:

On 12/20/2013 11:34 AM, Clint Byrum wrote:

Excerpts from Radomir Dopieralski's message of 2013-12-20 01:13:20 -0800:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.


Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.


I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
"transaction" may not be running on the same host or have much awareness
of each other at all.


Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.



Radomir, what Jay is suggesting is that it seems pretty unlikely that
two individuals would be given a directive to deploy OpenStack into a
single pool of hardware at such a scale where they will both use the
whole thing.

Worst case, if it does happen, they both run out of hardware, one
individual deletes their deployment, the other one resumes. This is the
optimistic position and it will work fine. Assuming you are driving this
all through Heat (which, AFAIK, Tuskar still uses Heat) there's even a
blueprint to support you that I'm working on:

https://blueprints.launchpad.net/heat/+spec/retry-failed-update

Even if both operators put the retry in a loop, one would actually
finish at some point.


Yes, thank you Clint. That is precisely what I was saying.


Trying to make a complex series of related but distributed actions --
like the underlying actions of the Tuskar -> Ironic API calls -- into an
atomic operation is just not a good use of programming effort, IMO.
Instead, I'm advocating that programming effort should instead be spent
coding a workflow/taskflow pipeline that can gracefully retry failed
operations and report the state of the total taskflow back to the user.


Sure, there are many ways to solve any particular synchronisation
problem. Let's say that we have one that can actually be solved by
retrying. Do you want to retry infinitely? Would you like to increase
the delays between retries exponentially? If so, where are you going to
keep the shared counters for the retries? Perhaps in tuskar-api, hmm?



I don't think a sane person would retry more than maybe once without
checking with the other operators.


Or are you just saying that we should pretend that the nondeterministic
bugs appearing due to the lack of synchronization simply don't exist?
They cannot be easily reproduced, after all. We could just close our
eyes, cover our ears, sing "lalalala" and close any bug reports with
such errors with "could not reproduce on my single-user, single-machine
development installation". I know that a lot of software companies do
exactly that, so I guess it's a valid business practice, I just want to
make sure that this is actually the tactic that we are going to take,
before committing to an architectural decision that will make those bugs
impossible to fix.



OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity. We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.


+1. I very much agree with Jay and Clint on this matter.

Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-02 Thread Jay Pipes

On 01/02/2014 06:41 AM, Radomir Dopieralski wrote:

On 20/12/13 17:34, Clint Byrum wrote:

OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity.


I wonder how you are going to debug a non-deterministic system :-)


Very carefully.


We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.


I fail to see how leaving ourselves the ability to add locks when they
become needed, by keeping tuskar-api in place, conflicts with actually
having a complete UI for deploying OpenStack. Can you elaborate on that?


I think all Clint was saying is that completing the UI for base 
OpenStack deployment (Tuskar UI) is a higher priority than trying to add 
a pessimistic lock model/concurrency to any particular part of the 
existing UI.


That doesn't mean you can't work on a pessimistic locking model. It just 
means that Clint (and I) think that completing the as-yet-unfinished UI 
work is a more important task.


Best, and Happy New Year!
-jay




Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-02 Thread Radomir Dopieralski
On 20/12/13 17:34, Clint Byrum wrote:
> OpenStack is non-deterministic. Deterministic systems are rigid and unable
> to handle failure modes of any kind of diversity.

I wonder how you are going to debug a non-deterministic system :-)

> We tend to err toward
> pushing problems back to the user and giving them tools to resolve the
> problem. Avoiding spurious problems is important too, no doubt. However,
> what Jay has been suggesting is that the situation a pessimistic locking
> system would avoid is entirely user created, and thus lower priority
> than say, actually having a complete UI for deploying OpenStack.

I fail to see how leaving ourselves the ability to add locks when they
become needed, by keeping tuskar-api in place, conflicts with actually
having a complete UI for deploying OpenStack. Can you elaborate on that?

-- 
Radomir Dopieralski



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Jay Pipes

On 12/20/2013 11:34 AM, Clint Byrum wrote:

Excerpts from Radomir Dopieralski's message of 2013-12-20 01:13:20 -0800:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.


Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.


I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
"transaction" may not be running on the same host or have much awareness
of each other at all.


Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.



Radomir, what Jay is suggesting is that it seems pretty unlikely that
two individuals would be given a directive to deploy OpenStack into a
single pool of hardware at such a scale where they will both use the
whole thing.

Worst case, if it does happen, they both run out of hardware, one
individual deletes their deployment, the other one resumes. This is the
optimistic position and it will work fine. Assuming you are driving this
all through Heat (which, AFAIK, Tuskar still uses Heat) there's even a
blueprint to support you that I'm working on:

https://blueprints.launchpad.net/heat/+spec/retry-failed-update

Even if both operators put the retry in a loop, one would actually
finish at some point.


Yes, thank you Clint. That is precisely what I was saying.


Trying to make a complex series of related but distributed actions --
like the underlying actions of the Tuskar -> Ironic API calls -- into an
atomic operation is just not a good use of programming effort, IMO.
Instead, I'm advocating that programming effort should instead be spent
coding a workflow/taskflow pipeline that can gracefully retry failed
operations and report the state of the total taskflow back to the user.


Sure, there are many ways to solve any particular synchronisation
problem. Let's say that we have one that can actually be solved by
retrying. Do you want to retry infinitely? Would you like to increase
the delays between retries exponentially? If so, where are you going to
keep the shared counters for the retries? Perhaps in tuskar-api, hmm?



I don't think a sane person would retry more than maybe once without
checking with the other operators.


Or are you just saying that we should pretend that the nondeterministic
bugs appearing due to the lack of synchronization simply don't exist?
They cannot be easily reproduced, after all. We could just close our
eyes, cover our ears, sing "lalalala" and close any bug reports with
such errors with "could not reproduce on my single-user, single-machine
development installation". I know that a lot of software companies do
exactly that, so I guess it's a valid business practice, I just want to
make sure that this is actually the tactic that we are going to take,
before committing to an architectural decision that will make those bugs
impossible to fix.



OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity. We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.


Bingo.

Thanks,
-jay




Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Clint Byrum
Excerpts from Ladislav Smola's message of 2013-12-20 05:48:40 -0800:
> On 12/20/2013 02:37 PM, Imre Farkas wrote:
> > On 12/20/2013 12:25 PM, Ladislav Smola wrote:
> >> 2. Heat stack create, update
> >> The stack is locked for the duration of the operation, so nobody can mess with
> >> it while it is updating or creating.
> >> Once we pack in all the operations that currently happen outside of it, we should
> >> be alright. And that should be doable in I.
> >> So we should push towards this, rather than building some temporary
> >> locking solution in Tuskar-API.
> >
> > It's not an issue of locking: the goal of Tuskar's
> > Provision button is more than a single stack creation. After Heat's job
> > is done, the overcloud needs to be properly configured: Keystone needs 
> > to be initialized, the services need to be registered, etc. I don't 
> > think Horizon wants to add a background worker to handle such operations.
> >
> 
> Yes, that is a valid point. I hope we will be able to pack it all into a
> Heat template in I. This could be the way:
> https://blueprints.launchpad.net/heat/+spec/hot-software-config
> 
> Seems like the consensus is: It belongs to Heat. We are just not able to 
> do it that way now.
> 
> So there is a question whether we should try to solve it in Tuskar-API
> temporarily, or rather focus on Heat.
> 

Interestingly enough, what Imre has just mentioned isn't necessarily
covered by hot-software-config. That blueprint is specifically about
configuring machines, but not APIs.

I think we actually need multi-cloud to support what Imre is talking
about. These are API operations that need to follow the entire stack
bring-up, but happen in a different cloud (the new one).

Assuming single servers instead of loadbalancers and stuff for simplicity:


resources:
  keystone:
    type: OS::Nova::Server
  glance:
    type: OS::Nova::Server
  nova:
    type: OS::Nova::Server
  cloud-setup:
    type: OS::Heat::Stack
    properties:
      cloud-endpoint: str_join [ 'https://', get_attribute [ 'keystone', 'first_ip' ], ':35357/' ]
      cloud-credentials: get_parameter ['something']
      template:
        keystone-catalog:
          type: OS::Keystone::Catalog
          properties:
            endpoints:
              - type: Compute
                publicUrl: str_join [ 'https://', get_attribute [ 'nova', 'first_ip' ], ':8447/' ]
              - type: Image
                publicUrl: str_join [ 'https://', get_attribute [ 'glance', 'first_ip' ], ':12345/' ]

What I mean is, you want the Heat stack to be done not when the hardware
is up, but when the APIs have been orchestrated.



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Clint Byrum
Excerpts from Radomir Dopieralski's message of 2013-12-20 01:13:20 -0800:
> On 20/12/13 00:17, Jay Pipes wrote:
> > On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:
> >> On 14/12/13 16:51, Jay Pipes wrote:
> >>
> >> [snip]
> >>
> >>> Instead of focusing on locking issues -- which I agree are very
> >>> important in the virtualized side of things where resources are
> >>> "thinner" -- I believe that in the bare-metal world, a more useful focus
> >>> would be to ensure that the Tuskar API service treats related group
> >>> operations (like "deploy an undercloud on these nodes") in a way that
> >>> can handle failures in a graceful and/or atomic way.
> >>
> >> Atomicity of operations can be achieved by introducing critical sections.
> >> You basically have two ways of doing that, optimistic and pessimistic.
> >> A pessimistic critical section is implemented with a locking mechanism
> >> that prevents all other processes from entering the critical section
> >> until it is finished.
> > 
> > I'm familiar with the traditional non-distributed software concept of a
> > mutex (or in Windows world, a critical section). But we aren't dealing
> > with traditional non-distributed software here. We're dealing with
> > highly distributed software where components involved in the
> > "transaction" may not be running on the same host or have much awareness
> > of each other at all.
> 
> Yes, that is precisely why you need to have a single point where they
> can check if they are not stepping on each other's toes. If you don't,
> you get race conditions and non-deterministic behavior. The only
> difference with traditional, non-distributed software is that since the
> components involved are communicating over a, relatively slow, network,
> you have a much, much greater chance of actually having a conflict.
> Scaling the whole thing to hundreds of nodes practically guarantees trouble.
> 

Radomir, what Jay is suggesting is that it seems pretty unlikely that
two individuals would be given a directive to deploy OpenStack into a
single pool of hardware at such a scale where they will both use the
whole thing.

Worst case, if it does happen, they both run out of hardware, one
individual deletes their deployment, the other one resumes. This is the
optimistic position and it will work fine. Assuming you are driving this
all through Heat (which, AFAIK, Tuskar still uses Heat) there's even a
blueprint to support you that I'm working on:

https://blueprints.launchpad.net/heat/+spec/retry-failed-update

Even if both operators put the retry in a loop, one would actually
finish at some point.

> > Trying to make a complex series of related but distributed actions --
> > like the underlying actions of the Tuskar -> Ironic API calls -- into an
> > atomic operation is just not a good use of programming effort, IMO.
> > Instead, I'm advocating that programming effort should instead be spent
> > coding a workflow/taskflow pipeline that can gracefully retry failed
> > operations and report the state of the total taskflow back to the user.
> 
> Sure, there are many ways to solve any particular synchronisation
> problem. Let's say that we have one that can actually be solved by
> retrying. Do you want to retry infinitely? Would you like to increase
> the delays between retries exponentially? If so, where are you going to
> keep the shared counters for the retries? Perhaps in tuskar-api, hmm?
> 

I don't think a sane person would retry more than maybe once without
checking with the other operators.
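
For what it's worth, if we did want automated retries with backoff, the
counters would not need to be shared anywhere -- each client can keep its
own. A minimal sketch (hypothetical helper, not Tuskar code):

import time


def retry_with_backoff(operation, max_attempts=3, base_delay=1.0):
    # Run operation(), retrying on failure with exponential backoff.
    # The retry counter lives in this local loop -- no shared state in
    # tuskar-api (or anywhere else) is required.
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up and surface the error to the user
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...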

> Or are you just saying that we should pretend that the nondeterministic
> bugs appearing due to the lack of synchronization simply don't exist?
> They cannot be easily reproduced, after all. We could just close our
> eyes, cover our ears, sing "lalalala" and close any bug reports with
> such errors with "could not reproduce on my single-user, single-machine
> development installation". I know that a lot of software companies do
> exactly that, so I guess it's a valid business practice, I just want to
> make sure that this is actually the tactic that we are going to take,
> before committing to an architectural decision that will make those bugs
> impossible to fix.
> 

OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity. We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Jay Dobies



On 12/20/2013 08:40 AM, Ladislav Smola wrote:

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

OK, after the last meeting we are ready to say what goes into Tuskar-API.

Who wants to start that thread? :-)


I'm writing something up, but I won't have anything worth showing until 
after the New Year (sounds so far away when I say it that way; it's 
simply that I'm on vacation starting today until the 6th).











Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:37 PM, Imre Farkas wrote:

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
The stack is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating.
Once we pack in all the operations that currently happen outside of it, we should
be alright. And that should be doable in I.
So we should push towards this, rather than building some temporary
locking solution in Tuskar-API.


It's not an issue of locking: the goal of Tuskar's
Provision button is more than a single stack creation. After Heat's job
is done, the overcloud needs to be properly configured: Keystone needs 
to be initialized, the services need to be registered, etc. I don't 
think Horizon wants to add a background worker to handle such operations.




Yes, that is a valid point. I hope we will be able to pack it all into a
Heat template in I. This could be the way:
https://blueprints.launchpad.net/heat/+spec/hot-software-config


Seems like the consensus is: It belongs to Heat. We are just not able to 
do it that way now.


So there is a question whether we should try to solve it in Tuskar-API
temporarily, or rather focus on Heat.




Imre






Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

OK, after the last meeting we are ready to say what goes into Tuskar-API.

Who wants to start that thread? :-)





Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 01:04 PM, Radomir Dopieralski wrote:

On 20/12/13 12:25, Ladislav Smola wrote:

May I propose we keep the conversation Icehouse-related. I don't think
we can make any sort of locking
mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may as well never materialize.



Well, I expect that there will be decisions whether we should not land
a feature because it's not ready, or whether we should make some temporary hack
that will make it work.

I am just a little worried about having temporary hacks in a stable version,
because then the update to the next version will be hard. And we will most
likely have to support these hacks for backwards compatibility.

I wouldn't say we are forfeiting the ability to create it. I would say we are
forfeiting the ability to create hacked-together temporary solutions that
might go against how upstream wants to do it. That is a good thing, I
think. :-)



Though it would be worth creating some wiki page that would present it
whole in some consistent
manner. I am kind of lost in these emails. :-)

So, what do you think are the biggest issues for the Icehouse tasks we
have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources
from multiple APIs together. I think
it's perfectly fine that something will be deleted in the process. Even
right now we join together only things
that exist. And we can handle it when something is missing. There is no need
of locking or retrying here AFAIK.
2. Heat stack create, update
The stack is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating.
Once we pack in all the operations that currently happen outside of it, we should
be alright. And that should be doable in I.
So we should push towards this, rather than building some temporary
locking solution in Tuskar-API.

3. Reservation of resources
As we can deploy only one stack now, I think it shouldn't be a
problem with multiple users there. When
somebody deletes the resources from the 'free pool' in the process, it
will fail with 'Not enough free resources'.
I guess that is fine.
Also, I'm not sure how it is now, but it should be possible to deploy smartly,
so the stack will be working even
with a smaller number of resources. Then we would just heat stack-update
with the numbers it ended up with,
and it would switch to OK status without changing anything.

So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or can handle the ones we do without any need for
synchronization. You probably have a much better idea of what the whole
system will look like. Even then, I think it still makes sense to keep
that door open and leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?


Well, yeah, I guess for some J features we might need to do
something like this. I have no idea right now. So the doors are
always open. :-)



As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?


resource classes: it's definitely J, and we are not yet sure how it will
look


node_profiles: it's a Nova flavor in I, and it will stay that way because
of the scheduler.
From the start we will have just one flavor. And even if we had more
flavors, I don't think there would be issues here.
This heavily relies on how we are going to build the Heat template. But
adding flavors should be separated from creating or updating a Heat template.

creating and updating the Heat template: seems like we will be doing this in
Tuskar-API. Do you see any potential problems here?

node-discovery: will be in Ironic. Should also be a separate operation.
So I don't see problems here.



Every time we will have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already, and which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the "OK"
button is pressed, it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If, by the time he finishes the wizard and presses
OK, the situation has changed, you are risking doing something else than
the user intended.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Imre Farkas

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
The stack is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating.
Once we pack in all the operations that currently happen outside of it, we should
be alright. And that should be doable in I.
So we should push towards this, rather than building some temporary
locking solution in Tuskar-API.


It's not an issue of locking: the goal of Tuskar's Provision
button is more than a single stack creation. After Heat's job is done,
the overcloud needs to be properly configured: Keystone needs to be 
initialized, the services need to be registered, etc. I don't think 
Horizon wants to add a background worker to handle such operations.
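
To make that concrete, the post-provision step is roughly this kind of
work -- a sketch using the python-keystoneclient v2.0 bindings against the
freshly deployed overcloud; the token, URLs and service names are only
illustrative:

from keystoneclient.v2_0 import client as keystone_client

# Bootstrap-token connection to the Keystone that Heat just brought up.
keystone = keystone_client.Client(
    token='ADMIN_BOOTSTRAP_TOKEN',  # illustrative
    endpoint='http://overcloud-keystone:35357/v2.0')

# Register the compute service and its endpoint in the catalog.
service = keystone.services.create(
    name='nova', service_type='compute',
    description='OpenStack Compute Service')
keystone.endpoints.create(
    region='regionOne',
    service_id=service.id,
    publicurl='http://overcloud-nova:8774/v2/%(tenant_id)s',
    internalurl='http://overcloud-nova:8774/v2/%(tenant_id)s',
    adminurl='http://overcloud-nova:8774/v2/%(tenant_id)s')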


Imre



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.

-- 
Radomir Dopieralski



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 12:25, Ladislav Smola wrote:
> May I propose we keep the conversation Icehouse-related. I don't think
> we can make any sort of locking
> mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may as well never materialize.

> Though it would be worth creating some wiki page that would present it
> whole in some consistent
> manner. I am kind of lost in these emails. :-)
> 
> So, what do you think are the biggest issues for the Icehouse tasks we
> have?
> 
> 1. GET operations?
> I don't think we need to be atomic here. We basically join resources
> from multiple APIs together. I think
> it's perfectly fine that something will be deleted in the process. Even
> right now we join together only things
> that exist. And we can handle it when something is missing. There is no need
> of locking or retrying here AFAIK.
> 2. Heat stack create, update
> The stack is locked for the duration of the operation, so nobody can mess with
> it while it is updating or creating.
> Once we pack in all the operations that currently happen outside of it, we should
> be alright. And that should be doable in I.
> So we should push towards this, rather than building some temporary
> locking solution in Tuskar-API.
> 
> 3. Reservation of resources
> As we can deploy only one stack now, I think it shouldn't be a
> problem with multiple users there. When
> somebody deletes the resources from the 'free pool' in the process, it
> will fail with 'Not enough free resources'.
> I guess that is fine.
> Also, I'm not sure how it is now, but it should be possible to deploy smartly,
> so the stack will be working even
> with a smaller number of resources. Then we would just heat stack-update
> with the numbers it ended up with,
> and it would switch to OK status without changing anything.
> 
> So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or can handle the ones we do without any need for
synchronization. You probably have a much better idea of what the whole
system will look like. Even then, I think it still makes sense to keep
that door open and leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?

As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?

Every time we will have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already, and which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the "OK"
button is pressed, it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If, by the time he finishes the wizard and presses
OK, the situation has changed, you are risking doing something else than
the user intended. Will we need to implement such interface elements,
and thus need synchronization mechanisms for it?

I simply don't know. And when I'm not sure, I like to have an option.

As I said, perhaps I just don't understand that there is a large cost
involved in keeping the logic inside tuskar-api instead of somewhere
else. Perhaps that cost is significant enough to justify this difficult
decision and limit our options. In the discussion I saw I didn't see
anything like that pointed out, but maybe it's just so obvious that
everybody takes it for granted and it's just me that can't see it. In
that case I will rest my case.
-- 
Radomir Dopieralski





Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola
May I propose we keep the conversation Icehouse-related. I don't think
we can make any sort of locking
mechanism in I.

Though it would be worth creating some wiki page that would present it
whole in some consistent manner. I am kind of lost in these emails. :-)

So, what do you think are the biggest issues for the Icehouse tasks we have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources 
from multiple APIs together. I think
it's perfectly fine that something will be deleted in the process. Even 
right now we join together only things
that exist. And we can handle it when something is missing. There is no need
of locking or retrying here AFAIK.
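
In code, such a join is about this simple -- a rough sketch where ironic
and nova stand in for python-ironicclient and python-novaclient instances:

from novaclient import exceptions as nova_exceptions


def list_nodes_with_instances(ironic, nova):
    # Join Ironic nodes with their Nova instances, tolerating races: a
    # node whose instance disappears between the two API calls is simply
    # shown without instance details -- no locking or retrying needed.
    result = []
    for node in ironic.node.list():
        instance = None
        if node.instance_uuid:
            try:
                instance = nova.servers.get(node.instance_uuid)
            except nova_exceptions.NotFound:
                pass  # deleted mid-request; render the node alone
        result.append((node, instance))
    return result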


2. Heat stack create, update
The stack is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating.
Once we pack in all the operations that currently happen outside of it, we should
be alright. And that should be doable in I.
So we should push towards this, rather than building some temporary
locking solution in Tuskar-API.
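
Heat already enforces this: a stack in an IN_PROGRESS state refuses
further operations, so the 'lock' is just its status. A sketch with
python-heatclient (template and parameters are placeholders):

def update_overcloud(heat, stack_id, template, parameters):
    # Heat itself rejects updates to an IN_PROGRESS stack; this check
    # just lets us give the user a friendlier message up front.
    stack = heat.stacks.get(stack_id)
    if stack.stack_status.endswith('IN_PROGRESS'):
        raise RuntimeError('deployment is still %s, try again later'
                           % stack.stack_status)
    heat.stacks.update(stack_id, template=template, parameters=parameters)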


3. Reservation of resources
As we can deploy only one stack now, I think it shouldn't be a
problem with multiple users there. When
somebody deletes the resources from the 'free pool' in the process, it
will fail with 'Not enough free resources'.
I guess that is fine.
Also, I'm not sure how it is now, but it should be possible to deploy smartly,
so the stack will be working even
with a smaller number of resources. Then we would just heat stack-update
with the numbers it ended up with,
and it would switch to OK status without changing anything.

So, are there any other critical sections you see?

I know we did it the wrong way in the previous Tuskar-API, and I think we are
avoiding that now. And we will avoid
it in the future, by simply not doing this kind of stuff until there is
a proper way to do it.


Thanks,
Ladislav


On 12/20/2013 10:13 AM, Radomir Dopieralski wrote:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.

I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
"transaction" may not be running on the same host or have much awareness
of each other at all.

Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.


And, in any case (see below), I don't think that this is a problem that
needs to be solved in Tuskar.


Perhaps you have some other way of making them atomic that I can't
think of?

I should not have used the term atomic above. I actually do not think
that the things that Tuskar/Ironic does should be viewed as an atomic
operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.


For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

The issue I am getting at is that, in the real world, the problem of
multiple users of Tuskar attempting to deploy an undercloud on the exact
same set of bare metal machines is just not going to happen.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 00:17, Jay Pipes wrote:
> On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:
>> On 14/12/13 16:51, Jay Pipes wrote:
>>
>> [snip]
>>
>>> Instead of focusing on locking issues -- which I agree are very
>>> important in the virtualized side of things where resources are
>>> "thinner" -- I believe that in the bare-metal world, a more useful focus
>>> would be to ensure that the Tuskar API service treats related group
>>> operations (like "deploy an undercloud on these nodes") in a way that
>>> can handle failures in a graceful and/or atomic way.
>>
>> Atomicity of operations can be achieved by introducing critical sections.
>> You basically have two ways of doing that, optimistic and pessimistic.
>> A pessimistic critical section is implemented with a locking mechanism
>> that prevents all other processes from entering the critical section
>> until it is finished.
> 
> I'm familiar with the traditional non-distributed software concept of a
> mutex (or in Windows world, a critical section). But we aren't dealing
> with traditional non-distributed software here. We're dealing with
> highly distributed software where components involved in the
> "transaction" may not be running on the same host or have much awareness
> of each other at all.

Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.

> And, in any case (see below), I don't think that this is a problem that
> needs to be solved in Tuskar.
>
>> Perhaps you have some other way of making them atomic that I can't
>> think of?
> 
> I should not have used the term atomic above. I actually do not think
> that the things that Tuskar/Ironic does should be viewed as an atomic
> operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.

>>> For example, if the construction or installation of one compute worker
>>> failed, adding some retry or retry-after-wait-for-event logic would be
>>> more useful than trying to put locks in a bunch of places to prevent
>>> multiple sysadmins from trying to deploy on the same bare-metal nodes
>>> (since it's just not gonna happen in the real world, and IMO, if it did
>>> happen, the sysadmins/deployers should be punished and have to clean up
>>> their own mess ;)
>>
>> I don't see why they should be punished, if the UI was assuring them
>> that they are doing exactly the thing that they wanted to do, at every
>> step, and in the end it did something completely different, without any
>> warning. If anyone deserves punishment in such a situation, it's the
>> programmers who wrote the UI in such a way.
> 
> The issue I am getting at is that, in the real world, the problem of
> multiple users of Tuskar attempting to deploy an undercloud on the exact
> same set of bare metal machines is just not going to happen. If you
> think this is actually a real-world problem, and have seen two sysadmins
> actively trying to deploy an undercloud on bare-metal machines at the
> same time, unbeknownst to each other, then I feel bad for the
> sysadmins that found themselves in such a situation, but I feel its
> their own fault for not knowing about what the other was doing.

How can it be their fault, when at every step of their interaction with
the user interface, the user interface was assuring them that they are
going to do the right thing (deploy a certain set of nodes), but when
they finally hit the confirmation button, did a completely different
thing (deployed a different set of nodes)? The only fault I see is in
them using such software. Or are you suggesting that they should
implement the lock themselves, through e-mails or some other means of
communication?

Don't get me wrong, the deploy button is just one easy example of this
problem. We have it all over the user interface. Even such a simple
operation as retrieving a list of node ids and then displaying the
corresponding information to the user has a race condition in it -- what
if some of the nodes get deleted after we get the list of ids, but
before we make the call to get node details about them? This should be
done as an atomic operation that either locks, or fails if there was a
change in the middle of it, and since the calls are to different
systems, the only place where you can set a lock or check if there was a
change, is the tuskar-api. And no, trying to get again the information
about a deleted node won't help -- you can keep retrying for years, and
the node will still remain deleted. This is all over the place. And,
saying that "this is the user's fault" doesn't help.

> Trying to make a complex series of related but distributed actions --
> like the underlying actions of the Tuskar -> Ironic API calls -- into an
> atomic operation is just not a good use of programming effort, IMO.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-19 Thread Jay Pipes

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.


Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.


I'm familiar with the traditional non-distributed software concept of a 
mutex (or in Windows world, a critical section). But we aren't dealing 
with traditional non-distributed software here. We're dealing with 
highly distributed software where components involved in the 
"transaction" may not be running on the same host or have much awareness 
of each other at all.


And, in any case (see below), I don't think that this is a problem that 
needs to be solved in Tuskar.



Perhaps you have some other way of making them atomic that I can't think of?


I should not have used the term atomic above. I actually do not think 
that the things that Tuskar/Ironic does should be viewed as an atomic 
operation. More below.



For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)


I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.


The issue I am getting at is that, in the real world, the problem of 
multiple users of Tuskar attempting to deploy an undercloud on the exact 
same set of bare metal machines is just not going to happen. If you 
think this is actually a real-world problem, and have seen two sysadmins 
actively trying to deploy an undercloud on bare-metal machines at the 
same time, unbeknownst to each other, then I feel bad for the 
sysadmins that found themselves in such a situation, but I feel its 
their own fault for not knowing about what the other was doing.


Trying to make a complex series of related but distributed actions -- 
like the underlying actions of the Tuskar -> Ironic API calls -- into an 
atomic operation is just not a good use of programming effort, IMO. 
Instead, I'm advocating that programming effort should instead be spent 
coding a workflow/taskflow pipeline that can gracefully retry failed 
operations and report the state of the total taskflow back to the user.
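
With the taskflow library, that shape looks roughly like the sketch below.
This is an illustration, not a worked-out Tuskar design; the task bodies
and flow name are placeholders:

from taskflow import engines
from taskflow import retry
from taskflow import task
from taskflow.patterns import linear_flow


class RegisterNodes(task.Task):
    def execute(self):
        print('registering nodes with Ironic...')


class CreateStack(task.Task):
    def execute(self):
        print('asking Heat to create the overcloud stack...')


# Retry the pipeline a bounded number of times instead of locking;
# the engine tracks per-task state that can be reported back to the user.
flow = linear_flow.Flow('deploy-undercloud', retry=retry.Times(3))
flow.add(RegisterNodes(), CreateStack())
engines.run(flow)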


Hope that makes more sense,
-jay



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-19 Thread Radomir Dopieralski
On 14/12/13 16:51, Jay Pipes wrote:

[snip]

> Instead of focusing on locking issues -- which I agree are very
> important in the virtualized side of things where resources are
> "thinner" -- I believe that in the bare-metal world, a more useful focus
> would be to ensure that the Tuskar API service treats related group
> operations (like "deploy an undercloud on these nodes") in a way that
> can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished. An optimistic one is implemented using transactions,
which assume that there will be no conflict, and just roll back all the
changes if there was one. Since none of the OpenStack services that we use
expose any kind of transaction mechanism (mostly because they have
REST, stateless APIs, and transactions imply state), we are left with
locks as the only tool to assure atomicity. Thus, your sentence above is
a little bit contradictory, advocating ignoring locking issues and
proposing making operations atomic at the same time.
Perhaps you have some other way of making them atomic that I can't think of?
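
To spell out the two shapes (illustrative Python, not any existing Tuskar
code; compare_and_set stands in for the atomic primitive the store would
have to provide):

import threading

_lock = threading.Lock()


def pessimistic(critical_section):
    # Nobody else may even enter the section while we hold the lock.
    with _lock:
        critical_section()


def optimistic(read, compare_and_set, mutate):
    # Assume no conflict; detect one at commit time and start over.
    # This needs the backing store to offer an atomic compare-and-set,
    # i.e. a transaction -- exactly what the stateless REST APIs above
    # lack, which is why only locks remain.
    while True:
        state, version = read()
        if compare_and_set(mutate(state), expected_version=version):
            return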

> For example, if the construction or installation of one compute worker
> failed, adding some retry or retry-after-wait-for-event logic would be
> more useful than trying to put locks in a bunch of places to prevent
> multiple sysadmins from trying to deploy on the same bare-metal nodes
> (since it's just not gonna happen in the real world, and IMO, if it did
> happen, the sysadmins/deployers should be punished and have to clean up
> their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

-- 
Radomir Dopieralski



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Jay Pipes
On Sat, 2013-12-14 at 10:52 -0500, Tzu-Mainn Chen wrote:
> > On Wed, 2013-12-11 at 17:48 +0100, Jiří Stránský wrote:
> > 
> > > >> When you say python- clients, is there a distinction between the CLI 
> > > >> and
> > > >> a bindings library that invokes the server-side APIs? In other words,
> > > >> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> > > 
> > > python-tuskarclient = Python bindings to tuskar-api + CLI, in one project
> > > 
> > > tuskar-ui doesn't have its own bindings, it depends on
> > > python-tuskarclient for bindings to tuskar-api (and other clients for
> > > bindings to other APIs). UI makes use just of the Python bindings part
> > > of clients and doesn't interact with the CLI part. This is the general
> > > OpenStack way of doing things.
> > 
> > Please everyone excuse my relative lateness in joining this discussion,
> > but I'm wondering if someone could point me to discussions (or summit
> > session etherpads?) where the decision was made to give Tuskar a
> > separate UI from Horizon? I'm curious what the motivations were around
> > this?
> > 
> > Thanks, and again, sorry for being late to the party! :)
> > 
> > -jay
> > 
> 
> Heya - just to clear up what I think might be a possible misconception
> here - the Tuskar-UI is built on top of Horizon. It's developed as a
> separate Horizon dashboard - Infrastructure - that can be added into the
> OpenStack dashboard alongside the existing dashboards - Project, Admin.
> The Tuskar-UI developers are active within Horizon, and there's currently
> an effort underway to get the UI placed under the Horizon program.
> 
> Does that answer your question, or did I miss the thrust of it?

That precisely answers my questions! :) Thanks, Mainn!

-jay




Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Tzu-Mainn Chen
> On Wed, 2013-12-11 at 17:48 +0100, Jiří Stránský wrote:
> 
> > >> When you say python- clients, is there a distinction between the CLI and
> > >> a bindings library that invokes the server-side APIs? In other words,
> > >> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> > 
> > python-tuskarclient = Python bindings to tuskar-api + CLI, in one project
> > 
> > tuskar-ui doesn't have its own bindings, it depends on
> > python-tuskarclient for bindings to tuskar-api (and other clients for
> > bindings to other APIs). UI makes use just of the Python bindings part
> > of clients and doesn't interact with the CLI part. This is the general
> > OpenStack way of doing things.
> 
> Please everyone excuse my relative lateness in joining this discussion,
> but I'm wondering if someone could point me to discussions (or summit
> session etherpads?) where the decision was made to give Tuskar a
> separate UI from Horizon? I'm curious what the motivations were around
> this?
> 
> Thanks, and again, sorry for being late to the party! :)
> 
> -jay
> 

Heya - just to clear up what I think might be a possible misconception
here - the Tuskar-UI is built on top of Horizon. It's developed as a
separate Horizon dashboard - Infrastructure - that can be added into the
OpenStack dashboard alongside the existing dashboards - Project, Admin.
The Tuskar-UI developers are active within Horizon, and there's currently
an effort underway to get the UI placed under the Horizon program.

Does that answer your question, or did I miss the thrust of it?

Mainn


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Jay Pipes
On Thu, 2013-12-12 at 15:22 +0100, Hugh O. Brock wrote:
> On Thu, Dec 12, 2013 at 03:11:14PM +0100, Ladislav Smola wrote:
> > Agree with this.
> > 
> > Though I am an optimist, I believe that this time we can avoid
> > calling multiple services in one request that depend on each other.
> > About the multiple users at once, this should be solved inside the
> > API calls of the services.
> > 
> > So I think we should forbid building these complex API call
> > composites in the Tuskar-API. If we want something like this, we
> > should implement it properly inside the services themselves. If we
> > are not able to convince the community about it, maybe it's just not
> > that good a feature. :-D
> > 
> 
> It's worth adding that in the particular case Radomir cites (the
> "Deploy" button), even with all the locks in the world, the resources
> that we have supposedly requisitioned in the undercloud for the user may
> have already been allocated to someone else by Nova -- because Nova
> currently doesn't allow reservation of resources. (There is work under
> way to allow this but it is quite a way off.) So we could find ourselves
> claiming for the user that we're going to deploy an overcloud at a
> certain scale and then find ourselves unable to do so.
> 
> Frankly I think the whole multi-user case for Tuskar is far enough off
> that I would consider wrapping a single-login restriction around the
> entire thing and calling it a day... except that that would be
> crazy. I'm just trying to make the point that making these operations
> really safe for multiple users is way harder than just putting a lock on
> the tuskar API.

That's actually not that crazy, Hugh :) We've deployed more than a half
dozen availability zones, and I've never seen anyone trample over each
other trying to deploy OpenStack to the same set of bare-metal machines
at the same time... it simply doesn't happen in the real world -- or at
least, it would be so exceedingly rare that trying to deal with this
kind of thing is more of an academic exercise than anything else.

Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.

For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)
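
A minimal sketch of what such retry-after-wait-for-event handling could
look like, in Python; build_node and wait_for_event are hypothetical
helpers standing in for the real provisioning and event APIs, not
existing Tuskar or Nova calls:

    import time

    MAX_RETRIES = 3

    def build_node_with_retry(node, build_node, wait_for_event):
        """Build one node, retrying after failure instead of locking."""
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                return build_node(node)
            except Exception:
                # Wait until the failed node reaches a terminal state
                # before retrying, rather than holding a global lock.
                wait_for_event(node, states=('ERROR', 'DELETED'), timeout=60)
                if attempt == MAX_RETRIES:
                    raise
                time.sleep(2 ** attempt)  # back off between attempts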

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Jay Pipes
On Wed, 2013-12-11 at 17:48 +0100, Jiří Stránský wrote:

> >> When you say python-*clients, is there a distinction between the CLI and
> >> a bindings library that invokes the server-side APIs? In other words,
> >> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> 
> python-tuskarclient = Python bindings to tuskar-api + CLI, in one project
> 
> tuskar-ui doesn't have its own bindings, it depends on 
> python-tuskarclient for bindings to tuskar-api (and other clients for 
> bindings to other APIs). UI makes use just of the Python bindings part 
> of clients and doesn't interact with the CLI part. This is the general 
> OpenStack way of doing things.

Please everyone excuse my relative lateness in joining this discussion,
but I'm wondering if someone could point me to discussions (or summit
session etherpads?) where the decision was made to give Tuskar a
separate UI from Horizon? I'm curious what the motivations were around
this?

Thanks, and again, sorry for being late to the party! :)

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Jay Pipes
On Wed, 2013-12-11 at 09:32 -0500, Jay Dobies wrote:
> On 12/11/2013 07:33 AM, Jiří Stránský wrote:
> > 3) Keep tuskar-api and python-tuskarclient thin, make another library
> > sitting between Tuskar UI and all python-***clients. This new project
> > would contain the logic of using undercloud services to provide the
> > "tuskar experience" it would expose python bindings for Tuskar UI and
> > contain a CLI. (Think of it like traditional python-*client but instead
> > of consuming a REST API, it would consume other python-*clients. I
> > wonder if this is overengineering. We might end up with too many
> > projects doing too few things? :) )
> 
> This is the sort of thing I was describing with the facade image above. 
> Rather than beefing up python-tuskarclient, I'd rather we have a 
> specific logic layer that isn't the CLI nor is it the bindings, but is 
> specifically for the purposes of coordination across multiple APIs.
> 
> That said, I'm -1 to my own facade diagram. I think that should live 
> service-side in the API.

And this is what Dean said should be implemented as WSGI middleware,
which I agree is likely the most flexible and architecturally-sound way
of doing this.
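
To illustrate the placement only (the route and response here are made
up, not an agreed Tuskar interface), a minimal WSGI middleware sketch:

    import json

    def coordination_middleware(app):
        # The cross-API coordination logic sits server-side, between the
        # client and the backend API, so UI and CLI get identical behaviour.
        def middleware(environ, start_response):
            if (environ.get('PATH_INFO') == '/v1/overclouds'
                    and environ.get('REQUEST_METHOD') == 'POST'):
                # Intercept the composite operation and fan out to the
                # underlying services (Heat, Ironic, ...) here.
                body = json.dumps({'status': 'deploying'})
                start_response('202 Accepted',
                               [('Content-Type', 'application/json')])
                return [body]
            return app(environ, start_response)
        return middleware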

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Tzu-Mainn Chen
> On 12.12.2013 17:10, Mark McLoughlin wrote:
> > On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
> >> Hi all,
> >>
> >> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> >> CLI for managing the deployment providing the same fundamental features
> >> as UI." With the planned architecture changes (making tuskar-api thinner
> >> and getting rid of proxying to other services), there's not an obvious
> >> way to achieve that. We need to figure this out. I present a few options
> >> and look forward to feedback.
> > ..
> >
> >> 1) Make a thicker python-tuskarclient and put the business logic there.
> >> Make it consume other python-*clients. (This is an unusual approach
> >> though, i'm not aware of any python-*client that would consume and
> >> integrate other python-*clients.)
> >>
> >> 2) Make a thicker tuskar-api and put the business logic there. (This is
> >> the original approach with consuming other services from tuskar-api. The
> >> feedback on this approach was mostly negative though.)
> >
> > FWIW, I think these are the two most plausible options right now.
> >
> > My instinct is that tuskar could be a stateless service which merely
> > contains the business logic between the UI/CLI and the various OpenStack
> > services.
> >
> > That would be a first (i.e. an OpenStack service which doesn't have a
> > DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
> > as a purely client-side library used by the UI/CLI (i.e. 1) as far as it
> > can go until we hit actual cases where we need (2).
> 
> For the features that we identified for Icehouse, we probably don't need
> to store any data necessarily. But going forward, it's not entirely
> certain. We had a chat and identified some data that is probably not suited
> for storing in any of the other services (at least in their current state):
> 
> * Roles (like Compute, Controller, Object Storage, Block Storage) - for
> Icehouse we'll have these 4 roles hardcoded. Going forward, it's
> probable that we'll want to let admins define their own roles. (Is there
> an existing OpenStack concept that we could map Roles onto? Something
> similar to using Flavors as hardware profiles? I'm not aware of any.)
> 
> * Links to Flavors to use with the roles - to define on what hardware
> can a particular Role be deployed. For Icehouse we assume homogeneous
> hardware.
> 
> * Links to Images for use with the Role/Flavor pairs - we'll have
> hardcoded Image names for those hardcoded Roles in Icehouse. Going
> forward, having multiple undercloud Flavors associated with a Role,
> maybe each [Role-Flavor] pair should have its own image link defined -
> some hardware types (Flavors) might require special drivers in the image.
> 
> * Overcloud heat template - for Icehouse it's quite possible it might be
> hardcoded as well and we could just use heat params to set it up,
> though i'm not 100% sure about that. Going forward, assuming dynamic
> Roles, we'll need to generate it.


One more (possible) item to this list: "# of nodes per role in a deployment" -
we'll need this if we want to stage the deployment, although that could
potentially be done on the client-side UI/CLI.


> ^ So all these things could probably be hardcoded for Icehouse, but not
> in the future. Guys suggested that if we'll be storing them eventually
> anyway, we might build these things into Tuskar API right now (and
> return hardcoded values for now, allow modification post-Icehouse). That
> seems ok to me. The other approach of having all this hardcoding
> initially done in a library seems ok to me too.
> 
> I'm not 100% sure that we cannot store some of this info in existing
> APIs, but it didn't seem so to me (to us). We've talked briefly about
> using Swift for it, but looking back on the list i wrote, it doesn't
> seem like very Swift-suited data.
> 
> >
> > One example worth thinking through though - clicking "deploy my
> > overcloud" will generate a Heat template and sent to the Heat API.
> >
> > The Heat template will be fairly closely tied to the overcloud images
> > (i.e. the actual image contents) we're deploying - e.g. the template
> > will have metadata which is specific to what's in the images.
> >
> > With the UI, you can see that working fine - the user is just using a UI
> > that was deployed with the undercloud.
> >
> > With the CLI, it is probably not running on undercloud machines. Perhaps
> > your undercloud was deployed a while ago and you've just installed the
> > latest TripleO client-side CLI from PyPI. With other OpenStack clients
> > we say that newer versions of the CLI should support all/most older
> > versions of the REST APIs.
> >
> > Having the template generation behind a (stateless) REST API could allow
> > us to define an API which expresses "deploy my overcloud" and not have
> > the client so tied to a specific undercloud version.
> 
> Yeah i see that advantage of making it an API, Dean pointed this out
> too. The combination of this and the fact that we'll need to store the
> Roles and related data eventually anyway might be the tipping point.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Jiří Stránský

On 12.12.2013 17:10, Mark McLoughlin wrote:

On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

..


1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


FWIW, I think these are the two most plausible options right now.

My instinct is that tuskar could be a stateless service which merely
contains the business logic between the UI/CLI and the various OpenStack
services.

That would be a first (i.e. an OpenStack service which doesn't have a
DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
as a purely client-side library used by the UI/CLI (i.e. 1) as far as it
can go until we hit actual cases where we need (2).


For the features that we identified for Icehouse, we probably don't need 
to store any data necessarily. But going forward, it's not entirely 
certain. We had a chat and identified some data that is probably not suited 
for storing in any of the other services (at least in their current state):


* Roles (like Compute, Controller, Object Storage, Block Storage) - for 
Icehouse we'll have these 4 roles hardcoded. Going forward, it's 
probable that we'll want to let admins define their own roles. (Is there 
an existing OpenStack concept that we could map Roles onto? Something 
similar to using Flavors as hardware profiles? I'm not aware of any.)


* Links to Flavors to use with the roles - to define on what hardware 
can a particular Role be deployed. For Icehouse we assume homogeneous 
hardware.


* Links to Images for use with the Role/Flavor pairs - we'll have 
hardcoded Image names for those hardcoded Roles in Icehouse. Going 
forward, having multiple undercloud Flavors associated with a Role, 
maybe each [Role-Flavor] pair should have its own image link defined - 
some hardware types (Flavors) might require special drivers in the image.


* Overcloud heat template - for Icehouse it's quite possible it might be 
hardcoded as well and we could just use heat params to set it up, 
though i'm not 100% sure about that. Going forward, assuming dynamic 
Roles, we'll need to generate it.


^ So all these things could probably be hardcoded for Icehouse, but not 
in the future. Guys suggested that if we'll be storing them eventually 
anyway, we might build these things into Tuskar API right now (and 
return hardcoded values for now, allow modification post-Icehouse). That 
seems ok to me. The other approach of having all this hardcoding 
initially done in a library seems ok to me too.
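
To make the "hardcode now, store later" idea concrete, a sketch of what
the Tuskar API side could return initially (field and image names are
illustrative, not an agreed schema):

    # Hardcoded Icehouse roles; post-Icehouse these would come from the
    # Tuskar DB and be admin-definable.
    HARDCODED_ROLES = [
        {'name': 'Controller',     'flavor_id': None, 'image': 'overcloud-control'},
        {'name': 'Compute',        'flavor_id': None, 'image': 'overcloud-compute'},
        {'name': 'Object Storage', 'flavor_id': None, 'image': 'overcloud-swift'},
        {'name': 'Block Storage',  'flavor_id': None, 'image': 'overcloud-cinder'},
    ]

    def list_roles():
        return HARDCODED_ROLES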


I'm not 100% sure that we cannot store some of this info in existing 
APIs, but it didn't seem so to me (to us). We've talked briefly about 
using Swift for it, but looking back on the list i wrote, it doesn't 
seem like very Swift-suited data.




One example worth thinking through though - clicking "deploy my
overcloud" will generate a Heat template and sent to the Heat API.

The Heat template will be fairly closely tied to the overcloud images
(i.e. the actual image contents) we're deploying - e.g. the template
will have metadata which is specific to what's in the images.

With the UI, you can see that working fine - the user is just using a UI
that was deployed with the undercloud.

With the CLI, it is probably not running on undercloud machines. Perhaps
your undercloud was deployed a while ago and you've just installed the
latest TripleO client-side CLI from PyPI. With other OpenStack clients
we say that newer versions of the CLI should support all/most older
versions of the REST APIs.

Having the template generation behind a (stateless) REST API could allow
us to define an API which expresses "deploy my overcloud" and not have
the client so tied to a specific undercloud version.


Yeah i see that advantage of making it an API, Dean pointed this out 
too. The combination of this and the fact that we'll need to store the 
Roles and related data eventually anyway might be the tipping point.



Thanks! :)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Mark McLoughlin
On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
> Hi all,
> 
> TL;DR: I believe that "As an infrastructure administrator, Anna wants a 
> CLI for managing the deployment providing the same fundamental features 
> as UI." With the planned architecture changes (making tuskar-api thinner 
> and getting rid of proxying to other services), there's not an obvious 
> way to achieve that. We need to figure this out. I present a few options 
> and look forward to feedback.
..

> 1) Make a thicker python-tuskarclient and put the business logic there. 
> Make it consume other python-*clients. (This is an unusual approach 
> though, i'm not aware of any python-*client that would consume and 
> integrate other python-*clients.)
> 
> 2) Make a thicker tuskar-api and put the business logic there. (This is 
> the original approach with consuming other services from tuskar-api. The 
> feedback on this approach was mostly negative though.)

FWIW, I think these are the two most plausible options right now.

My instinct is that tuskar could be a stateless service which merely
contains the business logic between the UI/CLI and the various OpenStack
services.

That would be a first (i.e. an OpenStack service which doesn't have a
DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
as a purely client-side library used by the UI/CLI (i.e. 1) as far as it
can go until we hit actual cases where we need (2).

One example worth thinking through though - clicking "deploy my
overcloud" will generate a Heat template and sent to the Heat API.

The Heat template will be fairly closely tied to the overcloud images
(i.e. the actual image contents) we're deploying - e.g. the template
will have metadata which is specific to what's in the images.

With the UI, you can see that working fine - the user is just using a UI
that was deployed with the undercloud.

With the CLI, it is probably not running on undercloud machines. Perhaps
your undercloud was deployed a while ago and you've just installed the
latest TripleO client-side CLI from PyPI. With other OpenStack clients
we say that newer versions of the CLI should support all/most older
versions of the REST APIs.

Having the template generation behind a (stateless) REST API could allow
us to define an API which expresses "deploy my overcloud" and not have
the client so tied to a specific undercloud version.
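
A sketch of what that could look like from a thin client's point of
view; the endpoint path and payload are hypothetical, but the shape is
the point - the server generates the Heat template to match its own
images, so the client isn't tied to a specific undercloud version:

    import json
    import urllib2  # Python 2, matching the clients of the time

    def deploy_overcloud(tuskar_endpoint, token, counts):
        req = urllib2.Request(
            tuskar_endpoint + '/v1/overclouds',
            data=json.dumps({'counts': counts}),
            headers={'Content-Type': 'application/json',
                     'X-Auth-Token': token})
        return json.loads(urllib2.urlopen(req).read())

    # e.g. deploy_overcloud(url, token, {'control': 1, 'compute': 3})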

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Hugh O. Brock
On Thu, Dec 12, 2013 at 03:11:14PM +0100, Ladislav Smola wrote:
> Agree with this.
> 
> Though I am an optimist, I believe that this time we can avoid
> calling multiple services in one request that depend on each other.
> About the multiple users at once, this should be solved inside the
> API calls of the services.
> 
> So I think we should forbid building these complex API call
> composites in the Tuskar-API. If we want something like this, we
> should implement it properly inside the services themselves. If we
> are not able to convince the community about it, maybe it's just not
> that good a feature. :-D
> 

> It's worth adding that in the particular case Radomir cites (the
"Deploy" button), even with all the locks in the world, the resources
that we have supposedly requisitioned in the undercloud for the user may
have already been allocated to someone else by Nova -- because Nova
currently doesn't allow reservation of resources. (There is work under
way to allow this but it is quite a way off.) So we could find ourselves
claiming for the user that we're going to deploy an overcloud at a
certain scale and then find ourselves unable to do so.

Frankly I think the whole multi-user case for Tuskar is far enough off
that I would consider wrapping a single-login restriction around the
entire thing and calling it a day... except that that would be
crazy. I'm just trying to make the point that making these operations
really safe for multiple users is way harder than just putting a lock on
the tuskar API.

--H

> 
> On 12/12/2013 02:35 PM, Jiří Stránský wrote:
> >On 12.12.2013 14:26, Jiří Stránský wrote:
> >>On 12.12.2013 11:49, Radomir Dopieralski wrote:
> >>>On 11/12/13 13:33, Jiří Stránský wrote:
> >>>
> >>>[snip]
> >>>
> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> CLI for managing the deployment providing the same fundamental features
> as UI." With the planned architecture changes (making tuskar-api thinner
> and getting rid of proxying to other services), there's not an obvious
> way to achieve that. We need to figure this out. I present a few options
> and look forward to feedback.
> >>>
> >>>[snip]
> >>>
> 2) Make a thicker tuskar-api and put the business logic there. (This is
> the original approach with consuming other services from tuskar-api. The
> feedback on this approach was mostly negative though.)
> >>>
> >>>This is a very simple issue, actually. We don't have any choice. We need
> >>>locks. We can't make the UI, CLI and API behave in consistent and
> >>>predictable manner when multiple people (and cron jobs on top of that)
> >>>are using them, if we don't have locks for the more complex operations.
> >>>And in order to have locks, we need to have a single point where the
> >>>locks are applied. We can't have it on the client side, or in the UI --
> >>>it has to be a single, shared place. It has to be Tuskar-API, and I
> >>>really don't see any other option.
> >>>
> >>
> >>You're right that we should strive for atomicity, but I'm afraid putting
> >>the complex operations (which call other services) into tuskar-api will
> >>not solve the problem for us. (Jay and Ladislav already discussed the
> >>issue.)
> >>
> >>If we have to do multiple API calls to perform a complex action, then
> >>we're in the same old situation. To get back to the rack creation
> >>example that Ladislav posted: it could still happen that Tuskar API
> >>would return an error to the UI like: "We haven't created the rack in
> >>Tuskar because we tried to modify info about 8 nodes in Ironic, but
> >>only 5 modifications succeeded. So we've tried to revert those 5
> >>modifications but we only managed to revert 2. Please figure this out
> >>and come back." We moved the problem, but didn't solve it.
> >>
> >>I think that if we need something to be atomic, we'll need to make sure
> >>that one operation only "writes" to one service, where the "single
> >>source of truth" for that data lies, and make sure that the operation is
> >>atomic within that service. (See Ladislav's example with overcloud
> >>deployment via Heat in this thread.)
> >>
> >>Thanks :)
> >>
> >>Jirka
> >>
> >
> >And just to make it clear how that relates to locking: Even if i
> >can lock something within Tuskar API, i cannot lock the related
> >data (which i need to use in the complex operation) in the other
> >API (say Ironic). Things can still change under Tuskar API's
> >hands. Again, we just move the unpredictability, but not remove
> >it.
> >
> >Jirka
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
== Hugh Brock, 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Ladislav Smola

Agree with this.

Though I am an optimist, I believe that this time we can avoid calling 
multiple services in one request that depend on each other.
About the multiple users at once, this should be solved inside the API 
calls of the services.


So I think we should forbid building these complex API call composites
in the Tuskar-API. If we want something like this, we should implement
it properly inside the services themselves. If we are not able to
convince the community about it, maybe it's just not that good a feature. :-D


Ladislav

On 12/12/2013 02:35 PM, Jiří Stránský wrote:

On 12.12.2013 14:26, Jiří Stránský wrote:

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.


[snip]

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting
the complex operations (which call other services) into tuskar-api will
not solve the problem for us. (Jay and Ladislav already discussed the
issue.)

If we have to do multiple API calls to perform a complex action, then
we're in the same old situation. To get back to the rack creation
example that Ladislav posted: it could still happen that Tuskar API
would return an error to the UI like: "We haven't created the rack in
Tuskar because we tried to modify info about 8 nodes in Ironic, but
only 5 modifications succeeded. So we've tried to revert those 5
modifications but we only managed to revert 2. Please figure this out
and come back." We moved the problem, but didn't solve it.

I think that if we need something to be atomic, we'll need to make sure
that one operation only "writes" to one service, where the "single
source of truth" for that data lies, and make sure that the operation is
atomic within that service. (See Ladislav's example with overcloud
deployment via Heat in this thread.)

Thanks :)

Jirka



And just to make it clear how that relates to locking: Even if i can 
lock something within Tuskar API, i cannot lock the related data 
(which i need to use in the complex operation) in the other API (say 
Ironic). Things can still change under Tuskar API's hands. Again, we 
just move the unpredictability, but not remove it.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jiří Stránský

On 12.12.2013 14:26, Jiří Stránský wrote:

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]


TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.


[snip]


2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting
the complex operations (which call other services) into tuskar-api will
not solve the problem for us. (Jay and Ladislav already discussed the
issue.)

If we have to do multiple API calls to perform a complex action, then
we're in the same old situation. To get back to the rack creation
example that Ladislav posted: it could still happen that Tuskar API
would return an error to the UI like: "We haven't created the rack in
Tuskar because we tried to modify info about 8 nodes in Ironic, but
only 5 modifications succeeded. So we've tried to revert those 5
modifications but we only managed to revert 2. Please figure this out
and come back." We moved the problem, but didn't solve it.

I think that if we need something to be atomic, we'll need to make sure
that one operation only "writes" to one service, where the "single
source of truth" for that data lies, and make sure that the operation is
atomic within that service. (See Ladislav's example with overcloud
deployment via Heat in this thread.)

Thanks :)

Jirka



And just to make it clear how that relates to locking: Even if i can 
lock something within Tuskar API, i cannot lock the related data (which 
i need to use in the complex operation) in the other API (say Ironic). 
Things can still change under Tuskar API's hands. Again, we just move 
the unpredictability, but not remove it.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Radomir Dopieralski
On 12/12/13 11:49, Radomir Dopieralski wrote:
> On 11/12/13 13:33, Jiří Stránský wrote:
> 
> [snip]
> 
>> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
>> CLI for managing the deployment providing the same fundamental features
>> as UI." With the planned architecture changes (making tuskar-api thinner
>> and getting rid of proxying to other services), there's not an obvious
>> way to achieve that. We need to figure this out. I present a few options
>> and look forward to feedback.
> 
> [snip]
> 
>> 2) Make a thicker tuskar-api and put the business logic there. (This is
>> the original approach with consuming other services from tuskar-api. The
>> feedback on this approach was mostly negative though.)
> 
> This is a very simple issue, actually. We don't have any choice. We need
> locks. We can't make the UI, CLI and API behave in consistent and
> predictable manner when multiple people (and cron jobs on top of that)
> are using them, if we don't have locks for the more complex operations.
> And in order to have locks, we need to have a single point where the
> locks are applied. We can't have it on the client side, or in the UI --
> it has to be a single, shared place. It has to be Tuskar-API, and I
> really don't see any other option.
> 

Ok, it seems that not everyone is convinced that we will actually need
locks, transactions, sessions or some other way of keeping the
operations synchronized, so I will give you a couple of examples. For
clarity, I will talk about what we have in Tuskar-UI right now, not
about something that is just planned. Please don't respond with "but we
will do this particular thing differently this time". We will hit the
same issue again in a different place, because the whole nature of
Tuskar is to provide large operations that abstract away the smaller
operations that could be done without Tuskar.

One example of which I spoke already is the Resource Class creation
workflow. In that workflow in the first step we fill in the information
about the particular Resource Class -- its name, kind, network subnet,
etc. In the second step, we add the nodes that should be included in
that Resource Class. Then we hit "OK": the nodes are created, one by one,
then the nodes are assigned to the newly created Resource Class.

There are several concurrency-related problems here if you assume that
multiple users are using the UI:
* In the mean time, someone can create a Resource Class with the same
  name, but different ID and different set of nodes. Our new nodes will
  get created, but creating the resource class will fail (as the name
  has to be unique) and we will have a bunch of unassigned nodes.
* In the mean time, someone can add one of our nodes to a different
  Resource Class. The creation of nodes will fail at some point (as
  the MAC addresses need to be unique), and we will have a bunch of
  unassigned nodes, no Resource Class, and lost user input for the
  nodes that didn't get created.
* Someone adds one of our nodes to a different Resource Class, but does
  it in a moment between our creating the nodes, and creating the
  Resource Class. Hard to tell in which Resource Class the node is now.

The only way around such a problem is to have a critical section there.
This can be done in multiple ways, but they all require some means of
synchronization and would be preferably implemented in a single place.
The Tuskar-API is that place.

Another example is the "deploy" button. When you press it, you are
presented with a list of undeployed nodes that will be deployed, and are
asked for confirmation. But if any nodes are created or deleted in
the mean time, the list you saw is not the list of nodes that actually
are going to be deployed -- you have been lied to. You may accidentally
deploy nodes that you didn't want.

This sort of problem will pop up again and again -- it's common in user
interfaces. Without a single point of synchronization where we can check
for locks, sessions, operation serial numbers and state, there is no way
to solve it. That's why we need to have all operations go through
Tuskar-API.
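
For illustration, a minimal sketch of that single synchronization point
(the lock is per-process and the ironic/tuskar_db calls are hypothetical;
note, per Jirka's reply elsewhere in this thread, that it serializes
Tuskar-side callers but cannot freeze the data Ironic owns):

    import threading

    _resource_class_lock = threading.Lock()

    def create_resource_class(name, nodes, ironic, tuskar_db):
        # Critical section: the whole composite operation runs under one
        # lock, so two concurrent creations cannot interleave.
        with _resource_class_lock:
            if tuskar_db.resource_class_exists(name):
                raise ValueError('resource class %r already exists' % name)
            created = [ironic.node.create(**n) for n in nodes]
            return tuskar_db.create_resource_class(name, created)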

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jiří Stránský

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]


TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.


[snip]


2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting 
the complex operations (which call other services) into tuskar-api will 
not solve the problem for us. (Jay and Ladislav already discussed the 
issue.)


If we have to do multiple API calls to perform a complex action, then 
we're in the same old situation. To get back to the rack creation
example that Ladislav posted: it could still happen that Tuskar API
would return an error to the UI like: "We haven't created the rack in
Tuskar because we tried to modify info about 8 nodes in Ironic, but
only 5 modifications succeeded. So we've tried to revert those 5 
modifications but we only managed to revert 2. Please figure this out 
and come back." We moved the problem, but didn't solve it.


I think that if we need something to be atomic, we'll need to make sure 
that one operation only "writes" to one service, where the "single 
source of truth" for that data lies, and make sure that the operation is 
atomic within that service. (See Ladislav's example with overcloud 
deployment via Heat in this thread.)
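
A sketch of that single-writer shape, assuming python-heatclient's usual
calling convention (template and parameters are placeholders):

    from heatclient.client import Client

    def deploy_overcloud(heat_endpoint, token, template, params):
        # One write, to the one service that owns the stack's "single
        # source of truth" -- no cross-service rollback to invent.
        heat = Client('1', endpoint=heat_endpoint, token=token)
        return heat.stacks.create(stack_name='overcloud',
                                  template=template,
                                  parameters=params)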


Thanks :)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Radomir Dopieralski
On 11/12/13 13:33, Jiří Stránský wrote:

[snip]

> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> CLI for managing the deployment providing the same fundamental features
> as UI." With the planned architecture changes (making tuskar-api thinner
> and getting rid of proxying to other services), there's not an obvious
> way to achieve that. We need to figure this out. I present a few options
> and look forward to feedback.

[snip]

> 2) Make a thicker tuskar-api and put the business logic there. (This is
> the original approach with consuming other services from tuskar-api. The
> feedback on this approach was mostly negative though.)

This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jan Provaznik

On 12/11/2013 01:33 PM, Jiří Stránský wrote:

Hi all,



Hi Jirka,


TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.

This meant that the "integration logic" of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) APIs directly from the UI, much as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).

Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)

Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)



I would prefer this solution in cases where it can't be part of
tuskar-api (option 2). It makes sense to me to define an API which
represents a set of methods/actions from the Tuskar POV (even if these
methods just wrap API calls to other OpenStack APIs).



2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients. This new project
would contain the logic of using undercloud services to provide the
"tuskar experience" it would expose python bindings for Tuskar UI and
contain a CLI. (Think of it like traditional python-*client but instead
of consuming a REST API, it would consume other python-*clients. I
wonder if this is overengineering. We might end up with too many
projects doing too few things? :) )

4) Keep python-tuskarclient thin, but build a separate CLI app that
would provide same integration features as Tuskar UI does. (This would
lead to code duplication. Depends on the actual amount of logic to
duplicate if this is bearable or not.)


Which of the options do you see as best? Did i miss some better option? Am
i just being crazy and trying to solve a non-issue? Please tell me :)

Please don't consider the time aspect of this, focus rather on what's
the right approach, where we want to get eventually. (We might want to
keep a thick Tuskar UI for Icehouse so as not to let hell break loose; there
will be enough refactoring already.)


Thanks

Jirka



Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Ladislav Smola

On 12/12/2013 10:49 AM, Jiří Stránský wrote:

On 12.12.2013 09:48, Ladislav Smola wrote:

On 12/11/2013 06:15 PM, Jiří Stránský wrote:

On 11.12.2013 17:13, Ladislav Smola wrote:

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why have we
needed the tuskar-api. It has done some more complex logic (e.g.
building a heat template) or storing additional info, not supported by
the services we use (like rack associations).
That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we
don't need tuskar-api for that kind of stuff. Can you please list what
complex operations are left, that should be done in tuskar? I think
discussing concrete stuff would be best.


I believe this is an orthogonal discussion. Regardless if we have
tuskar-api or not, Tuskar UI is going to be an "integrated face" over
multiple services (Heat, Ironic, maybe others), and i'd think we could
use a CLI counterpart too.



Well that is how the dashboard works. I think the point of service-oriented
architecture is to use the services, not to try to integrate them on the
other end.


Yeah i don't want to integrate it on the API side. But if there's some 
logic we're building on top of the APIs (and i believe there is, i 
gave an example in my initial e-mail), i'd like to have the same logic 
code used in CLI and UI. And the easiest way to do this is to pull the 
logic out of UI into some library, ideally directly into 
python-tuskarclient, since UI already depends on it anyway (and i also 
believe tuskarclient is one of the possible correct places for that 
code to live in, if we make it a separate namespace).





Let's build the whole story in
UI, then we can see if there are abstractions that are usable for both
CLI and UI. In the mean time, you will have a CLI that is maybe a little harder to
use, but more general.


Yeah i think i'd be fine with that. As i wrote in my initial e-mail, 
we might want to keep a thicker UI initially (maybe for the Icehouse 
release) to avoid doing too much refactoring at the same time. But 
eventually, i think we should pull the logic out, so that CLI and UI 
have comparable capabilities, including ease of use.


Thanks for the feedback :)

Jirka


That sounds reasonable. Let's cross that bridge when we come to it. Like 
late I3 or J1.


Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jiří Stránský

On 12.12.2013 09:48, Ladislav Smola wrote:

On 12/11/2013 06:15 PM, Jiří Stránský wrote:

On 11.12.2013 17:13, Ladislav Smola wrote:

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why have we
needed the tuskar-api. It has done some more complex logic (e.g.
building a heat template) or storing additional info, not supported by
the services we use (like rack associations).
That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we
don't need tuskar-api for that kind of stuff. Can you please list what
complex operations are left, that should be done in tuskar? I think
discussing concrete stuff would be best.


I believe this is an orthogonal discussion. Regardless if we have
tuskar-api or not, Tuskar UI is going to be an "integrated face" over
multiple services (Heat, Ironic, maybe others), and i'd think we could
use a CLI counterpart too.



Well that is how the dashboard works. I think the point of service-oriented
architecture is to use the services, not to try to integrate them on the
other end.


Yeah i don't want to integrate it on the API side. But if there's some 
logic we're building on top of the APIs (and i believe there is, i gave 
an example in my initial e-mail), i'd like to have the same logic code 
used in CLI and UI. And the easiest way to do this is to pull the logic 
out of UI into some library, ideally directly into python-tuskarclient, 
since UI already depends on it anyway (and i also believe tuskarclient 
is one of the possible correct places for that code to live in, if we 
make it a separate namespace).





Let's build the whole story in
UI, then we can see if there are abstractions that are usable for both
CLI and UI. In the mean time, you will have a CLI that is maybe a little harder to
use, but more general.


Yeah i think i'd be fine with that. As i wrote in my initial e-mail, we 
might want to keep a thicker UI initially (maybe for the Icehouse 
release) to avoid doing too much refactoring at the same time. But 
eventually, i think we should pull the logic out, so that CLI and UI 
have comparable capabilities, including ease of use.


Thanks for the feedback :)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Ladislav Smola

On 12/11/2013 05:41 PM, Jay Dobies wrote:
> I will take it a little sideways. I think we should be asking why have
> we needed the tuskar-api. It has done some more complex logic (e.g.
> building a heat template) or storing additional info, not supported
> by the services we use (like rack associations).
>
> That is a perfectly fine use-case of introducing tuskar-api.
>
> Although now, when everything is shifting to the services themselves,
> we don't need tuskar-api for that kind of stuff. Can you please list
> what complex operations are left, that should be done in tuskar? I
> think discussing concrete stuff would be best.


This is a good call to circle back on; I'm not sure of it either. 
The wireframes I've seen so far largely revolve around node listing 
and allocation, but I 100% know I'm oversimplifying it and missing 
something bigger there.



Also, as I have been talking with rdopieralsky, there have been some
problems in the past with tuskar doing more steps in one, like creating a
rack and registering new nodes at the same time. As those have been
separate API calls and there is no transaction handling, we should not
do this kind of thing in the first place. If we have actions that
depend on each other, they should go from the UI one by one. Otherwise we
will be showing messages like, "The rack has not been created, but 5
of 8 nodes have been added. We have tried to delete those added nodes,
but 2 of the 5 deletions have failed. Please figure this out, then you
can run this awesome action that calls multiple dependent APIs without
real rollback again." (or something like that, depending on what gets
created first)


This is what I expected to see as the primary argument against it, the 
lack of a good transactional model for calling the dependent APIs. And 
it's certainly valid.


But what you're describing is the exact same problem regardless if you 
go from the UI or from the Tuskar API. If we're going to do any sort 
of higher level automation of things for the user that spans APIs, 
we're going to run into it. The question is if the client(s) handle it 
or the API. The alternative is to not have the awesome action in the 
first place, in which case we're not really giving the user as much 
value as an application.




Well, not necessarily. You can have the whole deployment story in 2 steps.

1. Get the nodes by manually typing MAC addresses (there can be bulk
adding), or by auto discovery.


2. Create a stack via heat. If the hardware was discovered correctly,
Heat will just magically do this according to the template declaration.


This is just enough magic for me. There doesn't have to be a button 'Get
the hardware and build the Cloud for me please'. If there has to be, heat
stack-create can have a parameter autodiscover_nodes=True. I think the
automation should always be done inside one API call, so we can handle
transactions correctly. Otherwise we are just building it wrong.
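
A sketch of that two-step story, assuming the era's python-ironicclient
and python-heatclient conventions (driver name, credentials and template
are placeholders):

    from heatclient.client import Client as Heat
    from ironicclient.client import get_client as ironic_client

    def two_step_deploy(ironic_kwargs, heat_endpoint, token, macs, template):
        # Step 1: enroll the hardware, one atomic call per node/port.
        ironic = ironic_client('1', **ironic_kwargs)
        for mac in macs:
            node = ironic.node.create(driver='pxe_ipmitool')
            ironic.port.create(node_uuid=node.uuid, address=mac)
        # Step 2: one heat stack-create; Heat does the rest.
        heat = Heat('1', endpoint=heat_endpoint, token=token)
        return heat.stacks.create(stack_name='overcloud',
                                  template=template, parameters={})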


But it's just what I think...

Thanks for the feedback.


I am not saying we should not have tuskar-api. Just put things there
that belong there; don't proxy everything.

>

btw. the real path of the diagram is

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heatclient <-> heat-api|ironic|etc.

My conclusion
--

I say if it can be tuskar-ui <-> heatclient <-> heat-api, lets keep it
that way.


I'm still fuzzy on what OpenStack means when it says *client. Is that 
just a bindings library that invokes a remote API or does it also 
contain the CLI bits?



If we realize we are putting some business logic into the UI that also
needs to be in the CLI, or we need to store some additional data that
doesn't belong anywhere else, let's put it in Tuskar-API.

Kind Regards,
Ladislav


Thanks for the feedback  :)




On 12/11/2013 03:32 PM, Jay Dobies wrote:

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. At the best case, that's simply
knowing where to look for da

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Ladislav Smola

On 12/11/2013 06:15 PM, Jiří Stránský wrote:

On 11.12.2013 17:13, Ladislav Smola wrote:

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why have we
needed the tuskar-api. It has done some more complex logic (e.g.
building a heat template) or storing additional info, not supported by
the services we use (like rack associations).
That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we
don't need tuskar-api for that kind of stuff. Can you please list what
complex operations are left, that should be done in tuskar? I think
discussing concrete stuff would be best.


I believe this is an orthogonal discussion. Regardless if we have 
tuskar-api or not, Tuskar UI is going to be an "integrated face" over 
multiple services (Heat, Ironic, maybe others), and i'd think we could 
use a CLI counterpart too.




Well that is how the dashboard works. I think the point of service-oriented
architecture is to use the services, not to try to integrate them on the
other end.




There can be a CLI or API deployment story using OpenStack services, not
necessarily calling only tuskar-cli and api as proxies.
E.g. in documentation you will have

now create the stack by: heat stack-create params

it's much better than:
You can create a stack by tuskar-deploy params, which actually calls heat
stack-create params

What is wrong about calling the original services? Why do we want to
hide it?


Well, imho that's a bit like asking "why should we have command-line 
e-mail clients if the terminal users can simply write SMTP protocol by 
hand" :) Or, to be closer to our topic: "Why build a Tuskar UI when 
user can use Dashboard to deploy overcloud - just upload the heat 
template, and provide the params."




Well it seems to me like writing an email client over an email client, 
not over SMTP.


"Why build a Tuskar UI when user can use Dashboard to deploy overcloud - 
just upload the heat template, and provide the params."
Well yeah, I think it's where we can be heading. User should be able to 
switch the Heat template (e.g. a Heat template with kickass 
distributed architecture). User will be able to have many overclouds 
(stacks) each with different Heat template.


So yes, I still see OpenStack as a complex application you deploy via 
Heat. I can be wrong though and we might need some CLI abstraction. 
Though that is why the sysadmins are writing scripts for themselves, 
they like to build their own abstractions from the atomic operations.


There's nothing essentially wrong about using heat command line or 
Dashboard for deploying overcloud i believe, other than that it's not 
very user friendly. That's the whole reason why we build Tuskar UI in 
the first place, i'd say. And my subjective opinion is that CLI users 
should be able to work on a similar level of abstraction.




Dashboard doesn't fit very well for deploying and managing hardware. 
That is why we built a separate UI, I think. Though that doesn't mean we 
can't just use the APIs in a different way. All I am saying is, let's 
not build this abstraction in advance. Let's build the whole story in 
UI, then we can see if there are abstractions that are usable for both 
CLI and UI. In the mean time, you will have CLI maybe little harder to 
use, but more general.



Jirka



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský
I'm going to reply to Dean's and James' posts in one shot because it 
imho makes most sense.


On 11.12.2013 17:00, Dean Troyer wrote:

On Wed, Dec 11, 2013 at 9:35 AM, James Slagle wrote:


On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


To be clear, tuskarclient is just a library, right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?


Currently, tuskarclient is a library (Python bindings around tuskar-api)
and a CLI on top of that library (though the CLI is not as complete as the
bindings).
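For illustration, a minimal sketch of those two layers (the module path,
manager name and arguments below are assumptions, not the actual
tuskarclient API):

    # Hypothetical use of the Python bindings (names assumed):
    from tuskarclient import client as tuskar_client

    tuskar = tuskar_client.get_client('1', tuskar_url='http://tuskar:8585',
                                      os_auth_token='TOKEN')
    for rack in tuskar.racks.list():
        print(rack.name)

    # The CLI is a thin wrapper over the same bindings, roughly:
    #   $ tuskar rack-list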



I would expect tuskarclient above to be the Python API bindings without the
business logic.


Yes.





I don't think we want the business logic in the UI.



+1



Now this raises a question - how do we get the CLI reasonably on par with
the abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)


IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.


+1




At a minimum you need to be sure that all of the atomic operations in your
business logic are exposed via _some_ API.  I.e., to script something, the
script may be where the business logic exists.

Building on that is moving that logic into a library that calls multiple
Python client APIs.  This may or may not be part of tuskarclient.

The next step up is to put your business logic into what we used to call
middleware, the layer between client and backend.  This is server-side and
IMHO where it belongs.  This is really the ONLY way you can ensure that
various clients get the same experience.



python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).



:) No, not really.  But it is also developing some 'value-added' functions
that are cross-project APIs and has a similar problem.  So far that is just
smoke and mirrors hiding the duct tape behind the scenes but it is not
unlike some of the things that Horizon does for user convenience.



This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.




Yeah making tuskarclient consume other clients seems most appealing to 
me as well. Solution 3 is very similar.




If you do this, keep the low-level API bindings separate from the
higher-level logical API.


Agreed, this would be an essential part of such a solution. We'd need to have
separate namespaces for the low-level Python API (a thin wrapper over the
Tuskar REST API) vs. the high-level Python API (business logic on top of
the internal low-level Python API and other python-*clients).
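Roughly like this (the whole layout below is made up to illustrate the split):

    # Low-level namespace: thin wrappers over the Tuskar REST API, e.g.
    #   tuskarclient/v1/racks.py, tuskarclient/v1/resource_classes.py
    #
    # High-level namespace: business logic composing tuskar-api and the
    # other python-*clients (heat, ironic, ...), e.g.
    #   tuskarclient/logic/overcloud.py
    #
    # Callers then pick the layer they need:
    #   from tuskarclient.v1 import client        # low-level, tuskar-api only
    #   from tuskarclient.logic import overcloud  # high-level, many services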






2) Make a thicker tuskar-api and put the business logic there. (This is the
original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


So, typically, I would say this is the right approach.  However given
what you pointed out above - that sometimes we can use other APIs
directly - we then have a separation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.



Consider that pushing that out to the client requires that the client be in sync
with what is deployed.  You'll have to make sure your client logic can
handle the multiple versions of server APIs that it will encounter.
  Putting that server-side lets you stay in sync with the other OpenStack
APIs you need to use.


Hmm, this is quite a good argument for the server-side approach... But I
wonder if it's worth the complexity of proxying some (possibly a lot of)
API calls. If we don't need to keep any additional data about entities
(by entities I mean Heat stacks, Ironic nodes, ...), then I think having
a REST API just to ensure we can stay in sync with other services in the
release is a bit overkill. I don't think maintaining compatibility of
the client layer will be easy, but maintaining "the proxy" wouldn't be
easy either imho.






3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients. This new project would
contain the logic of using undercloud services to provide the "tuskar
experience"; it would expose python bindings for Tuskar UI and contain a CLI.
(Think of it like a traditional python-*client but instead of consuming a REST
API, it would consume other python-*clients. I wonder if this is
overengineering. We might end up with too many projects doing too few
things? :) )


I don't follow how this new library would be different from
python-tuskarclient.  Unless I'm just misinterpreting what
python-tuskarclient is meant to be, which may very well be true :).


1 and 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

On 11.12.2013 17:13, Ladislav Smola wrote:

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why we have
needed the tuskar-api. It has done some more complex logic (e.g.
building a Heat template) or stored additional info not supported by
the services we use (like rack associations).
That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we
don't need tuskar-api for that kind of stuff. Can you please list what
complex operations are left that should be done in Tuskar? I think
discussing concrete stuff would be best.


I believe this is an orthogonal discussion. Regardless of whether we have
tuskar-api or not, Tuskar UI is going to be an "integrated face" over
multiple services (Heat, Ironic, maybe others), and I'd think we could
use a CLI counterpart too.




There can be a CLI or API deployment story using OpenStack services, not
necessarily calling only tuskar-cli and tuskar-api as proxies.
E.g. in documentation you will have

now create the stack by: heat stack-create params

it's much better than:
You can create the stack by tuskar-deploy params, which actually calls heat
stack-create params

What is wrong with calling the original services? Why do we want to
hide them?


Well, imho that's a bit like asking "why should we have command-line
e-mail clients if terminal users can simply write the SMTP protocol by
hand" :) Or, to be closer to our topic: "Why build a Tuskar UI when the user
can use Dashboard to deploy the overcloud - just upload the heat template,
and provide the params."


There's nothing essentially wrong with using the heat command line or
Dashboard for deploying the overcloud, I believe, other than that it's not
very user friendly. That's the whole reason why we built Tuskar UI in
the first place, I'd say. And my subjective opinion is that CLI users
should be able to work at a similar level of abstraction.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Dean Troyer
On Wed, Dec 11, 2013 at 10:41 AM, Jay Dobies wrote:
>
> I'm still fuzzy on what OpenStack means when it says *client. Is that just
> a bindings library that invokes a remote API or does it also contain the
> CLI bits?


For the older python-*client projects, they are both Python API bindings and
a thin CLI on top of them.  Some of the newer clients may not include a
CLI.  By default, I think most people mean the library API when referring to
clients without 'CLI'.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

On 12/11/2013 04:35 PM, James Slagle wrote:

On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a CLI
for managing the deployment providing the same fundamental features as the UI."
With the planned architecture changes (making tuskar-api thinner and getting
rid of proxying to other services), there's not an obvious way to achieve
that. We need to figure this out. I present a few options and look forward
to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.

To be clear, tuskarclient is just a library, right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?


This meant that the "integration logic" of how to use heat, ironic and other
services to manage an OpenStack deployment lay within *tuskar-api*. This
gave us an easy way towards having a CLI - just build tuskarclient to wrap
abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.

I think we should do that wherever we can, for sure.  For example, to
get the status of a deployment we can do the same API call as "heat
stack-status ..." does, no need to write a new Tuskar API to do that.


But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which means
there's a natural parity between what the Dashboard and the CLIs can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at all).
We're building a separate UI because we need *additional logic* on top of
the APIs. E.g. instead of directly working with Heat templates and Heat
stacks to deploy the overcloud, the user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker than
Dashboard is, and the natural parity between CLI and UI vanishes. By having
this logic in UI, we're effectively preventing its use from CLI. (If I were
bold I'd also think about integrating Tuskar with other software, which would
be prevented too if we keep the business logic in UI, but I'm not absolutely
positive about use cases here).

I don't think we want the business logic in the UI.


Can you specify what kind of business logic?

Like, we do validations in the UI before we send things to the API (both on
server and client).
We occasionally do some joins. E.g. the list of nodes is a join of nova
baremetal-list and nova list.
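A sketch of such a join (assuming the novaclient baremetal contrib
extension is loaded so that nova.baremetal is available; attribute names
are best-effort, not verified):

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://localhost:5000/v2.0')

    # Join "nova baremetal-node-list" with "nova list" on the instance id,
    # so each row shows the hardware node plus the instance it runs.
    servers = dict((s.id, s) for s in nova.servers.list())
    for node in nova.baremetal.list():
        server = servers.get(getattr(node, 'instance_uuid', None))
        print('%s -> %s' % (node.id, server.name if server else '(free)'))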


That is considered to be business logic. Though if it is only for UI
purposes, it should stay in the UI.


Other than this, it's just API calls.




Now this raises a question - how do we get the CLI reasonably on par with
the abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)

IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.


Sure, we have that. It's just API calls. Though e.g. when you want a massive
instance delete, you will write a script for that in the CLI. In the UI you
will filter it and use checkboxes.

So the equivalence is in the API calls, not in the complex operations.


Here are some options I see:

1) Make a thicker python-tuskarclient and put the business logic there. Make
it consume other python-*clients. (This is an unusual approach though; I'm
not aware of any python-*client that would consume and integrate other
python-*clients.)

python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).

This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.


Guys, I am not sure about this. I thought python-xxxclient should follow
the Remote Proxy pattern, being an object wrapper for the service API calls.


Even if you do this, it should rather call e.g. python-heatclient than the
API directly. Though I haven't seen this done before in OpenStack.
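To make the dispatch James describes concrete, a sketch (manager names are
illustrative; the point is only who decides which service gets called):

    def stack_status(heat, stack_name):
        # Plain passthrough: no Tuskar knowledge needed, so the client
        # library goes straight to Heat.
        return heat.stacks.get(stack_name).stack_status

    def create_rack(tuskar, **spec):
        # Tuskar-specific data: only tuskar-api knows about racks, so
        # the same library routes this call there instead.
        return tuskar.racks.create(**spec)

Both functions would live in the shared client layer, so neither the UI
nor the CLI has to know which backend serves which call.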




2) Make a thicker tuskar-api and put the business logic there. (This is the
original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

So, typically, I would say this is the right approach.  However given
what you pointed out above - that sometimes we can use other APIs
directly - we then have a separation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.


Shouldn't general libs be in Oslo, rather than in the client?


3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

On 11.12.2013 16:43, Tzu-Mainn Chen wrote:

Thanks for writing this all out!

- Original Message -

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. In the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that and
there are workflows that will be implemented for tuskar needs. If the
tuskar API can't call out to other APIs, that workflow implementation
needs to be done at a higher layer, which means in each client.

Something I'm going to talk about later in this e-mail but I'll mention
here so that the diagrams sit side-by-side is the potential for a facade
layer that hides away the multiple APIs. Lemme see if I can do this in
ASCII:

tuskar-ui -+                   +-tuskar-api
           |                   |
           +---client-facade---+-nova-api
           |                   |
tuskar-cli-+                   +-heat-api

The facade layer runs client-side and contains the business logic that
calls across APIs and adds in the tuskar magic. That keeps the tuskar
API from calling into other APIs* but keeps all of the API call logic
abstracted away from the UX pieces.

* Again, I'm not 100% up to speed with the API discussion, so I'm going
off the assumption that we want to avoid API to API calls. If that isn't
as strict of a design principle as I'm understanding it to be, then the
above picture probably looks kinda silly, so keep in mind the context
I'm going from.

For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +-tuskar-api-+-nova-api
           |            |
tuskar-cli-+            +-heat-api

Where a tuskar client talked to the tuskar API to do tuskar things.
Whatever was needed to do anything tuskar-y was hidden away behind the
tuskar API.


This meant that the "integration logic" of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python-*clients, is there a distinction between the CLI and
a bindings library that invokes the server-side APIs? In other words,
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?


python-tuskarclient = Python bindings to tuskar-api + CLI, in one project

tuskar-ui doesn't have its own bindings; it depends on
python-tuskarclient for bindings to tuskar-api (and on other clients for
bindings to other APIs). The UI makes use just of the Python bindings part
of the clients and doesn't interact with the CLI part. This is the general
OpenStack way of doing things.





We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy the overcloud, the user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If I were bold I'd also think about integrating Tuskar with other
software, which would be prevented too if we keep the business logic in
UI, but I'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more
disconcerting IMO is that it just doesn't belong in the UI. 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
> I will take it a little sideways. I think we should be asking why we have
> needed the tuskar-api. It has done some more complex logic (e.g.
> building a Heat template) or stored additional info not supported
> by the services we use (like rack associations).

> That is a perfectly fine use-case of introducing tuskar-api.

> Although now, when everything is shifting to the services themselves,
> we don't need tuskar-api for that kind of stuff. Can you please list
> what complex operations are left that should be done in Tuskar? I
> think discussing concrete stuff would be best.


This is a good call to circle back on; I'm not sure of it either.
The wireframes I've seen so far largely revolve around node listing and 
allocation, but I 100% know I'm oversimplifying it and missing something 
bigger there.



Also, as I have been talking with rdopieralsky, there have been some
problems in the past with tuskar doing more steps in one, like creating a
rack and registering new nodes at the same time. As those have been
separate API calls and there is no transaction handling, we should not
do this kind of thing in the first place. If we have actions that
depend on each other, it should go from the UI one by one. Otherwise we
will be showing messages like, "The rack has not been created, but 5
of 8 nodes have been added. We have tried to delete those added nodes,
but 2 of the 5 deletions have failed. Please figure this out, then you
can run this awesome action that calls multiple dependent APIs without
real rollback again." (or something like that, depending on what gets
created first)
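In code, the problem looks roughly like this (the manager names are
schematic, not a real client API):

    def create_rack_with_nodes(tuskar, rack_spec, node_specs):
        # Two dependent API calls with no transaction around them: if a
        # later step fails, all we can do is best-effort compensation,
        # and the compensating deletes can fail as well.
        rack = tuskar.racks.create(**rack_spec)
        added = []
        try:
            for spec in node_specs:
                added.append(tuskar.nodes.create(rack_id=rack.id, **spec))
        except Exception:
            for node in added:
                try:
                    tuskar.nodes.delete(node.id)   # may itself fail
                except Exception:
                    pass                           # ...and then what?
            tuskar.racks.delete(rack.id)
            raise
        return rack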


This is what I expected to see as the primary argument against it, the 
lack of a good transactional model for calling the dependent APIs. And 
it's certainly valid.


But what you're describing is the exact same problem regardless if you 
go from the UI or from the Tuskar API. If we're going to do any sort of 
higher level automation of things for the user that spans APIs, we're 
going to run into it. The question is whether the client(s) handle it or the
API. The alternative is to not have the awesome action in the first 
place, in which case we're not really giving the user as much value as 
an application.



I am not saying we should not have tuskar-api. Just put in it the things
that belong there; don't proxy everything.

>

btw. the real path of the diagram is

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heatclient <-> heat-api|ironic|etc.

My conclusion
--

I say if it can be tuskar-ui <-> heatclient <-> heat-api, let's keep it
that way.


I'm still fuzzy on what OpenStack means when it says *client. Is that 
just a bindings library that invokes a remote API or does it also 
contain the CLI bits?



If we realize we are putting some business logic into the UI that needs to
be done also in the CLI, or we need to store some additional data that
doesn't belong anywhere else, let's put it in Tuskar-API.

Kind Regards,
Ladislav


Thanks for the feedback  :)




On 12/11/2013 03:32 PM, Jay Dobies wrote:

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. In the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that
and there are workflows that will be implemented for tuskar needs. If
the tuskar API can't call out to other APIs, that workflow
implementation needs to be done at a higher layer, which means in each
client.

Something I'm going to talk about later in this e-mail but I'll
mention here so that the diagrams sit side-by-side is the potential
for a facade layer that hides away the multiple APIs. Lemme see if I
can do this in ASCII:

tuskar-ui -+                   +-tuskar-api
           |                   |
           +---client-facade---+-nova-api
           |                   |
tuskar-cli-+                   +-heat-api

The facade layer runs client-side and contains the business logic that
calls across APIs and adds in the tuskar magic. That keeps the tuskar
API from calling into other APIs but keeps all of the API call logic
abstracted away from the UX pieces.

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why we have
needed the tuskar-api. It has done some more complex logic (e.g.
building a Heat template) or stored additional info not supported by
the services we use (like rack associations).

That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we
don't need tuskar-api for that kind of stuff. Can you please list what
complex operations are left that should be done in Tuskar? I think
discussing concrete stuff would be best.


There can be a CLI or API deployment story using OpenStack services, not
necessarily calling only tuskar-cli and tuskar-api as proxies.

E.g. in documentation you will have

now create the stack by: heat stack-create params

it's much better than:
You can create the stack by tuskar-deploy params, which actually calls heat
stack-create params


What is wrong with calling the original services? Why do we want to
hide them?



Also, as I have been talking with rdopieralsky, there have been some
problems in the past with tuskar doing more steps in one, like creating a
rack and registering new nodes at the same time. As those have been
separate API calls and there is no transaction handling, we should not
do this kind of thing in the first place. If we have actions that
depend on each other, it should go from the UI one by one. Otherwise we
will be showing messages like, "The rack has not been created, but 5
of 8 nodes have been added. We have tried to delete those added nodes,
but 2 of the 5 deletions have failed. Please figure this out, then you
can run this awesome action that calls multiple dependent APIs without
real rollback again." (or something like that, depending on what gets
created first)


I am not saying we should not have tuskar-api. Just put in it the things
that belong there; don't proxy everything.


btw. the real path of the diagram is

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heatclient <-> heat-api|ironic|etc.


My conclusion
--

I say if it can be tuskar-ui <-> heatclient <-> heat-api, let's keep it
that way.


If we realize we are putting some business logic into the UI that needs to
be done also in the CLI, or we need to store some additional data that
doesn't belong anywhere else, let's put it in Tuskar-API.


Kind Regards,
Ladislav



On 12/11/2013 03:32 PM, Jay Dobies wrote:
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. In the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that 
and there are workflows that will be implemented for tuskar needs. If 
the tuskar API can't call out to other APIs, that workflow 
implementation needs to be done at a higher layer, which means in each 
client.


Something I'm going to talk about later in this e-mail but I'll 
mention here so that the diagrams sit side-by-side is the potential 
for a facade layer that hides away the multiple APIs. Lemme see if I 
can do this in ASCII:


tuskar-ui -+                   +-tuskar-api
           |                   |
           +---client-facade---+-nova-api
           |                   |
tuskar-cli-+                   +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm 
going off the assumption that we want to avoid API to API calls. If 
that isn't as strict of a design principle as I'm understanding it to 
be, then the above picture probably looks kinda silly, so keep in mind 
the context I'm going from.


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +-tuskar-api-+-nova-api

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Dean Troyer
On Wed, Dec 11, 2013 at 9:35 AM, James Slagle wrote:

> On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:
> > Previously, we had planned the Tuskar architecture like this:
> >
> > tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.
>
> To be clear, tuskarclient is just a library, right?  So both the UI and
> CLI use tuskarclient, at least was that the original plan?


I would expect tuskarclient above to be the Python API bindings without the
business logic.


> I don't think we want the business logic in the UI.


+1


> > Now this raises a question - how do we get the CLI reasonably on par with
> > the abilities of the UI? (Or am I wrong that Anna the infrastructure
> > administrator would want that?)
>
> IMO, we want an equivalent CLI and UI.  A big reason is so that it can
> be sanely scripted/automated.


At a minimum you need to be sure that all of the atomic operations in your
business logic are exposed via _some_ API.  I.e., to script something, the
script may be where the business logic exists.

Building on that is moving that logic into a library that calls multiple
Python client APIs.  This may or may not be part of tuskarclient.

The next step up is to put your business logic into what we used to call
middleware, the layer between client and backend.  This is server-side and
IMHO where it belongs.  This is really the ONLY way you can ensure that
various clients get the same experience.
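As a sketch of that middle level - one logical operation built purely from
atomic client calls, callable from UI, CLI and scripts alike (the flow and
the parameter names are illustrative, not a real TripleO workflow):

    def deploy_overcloud(ironic, heat, template_path):
        # Business logic as a composition of atomic, API-exposed steps
        # against already-authenticated clients.
        node_count = len(ironic.node.list())
        with open(template_path) as f:
            template_body = f.read()
        return heat.stacks.create(stack_name='overcloud',
                                  template=template_body,
                                  parameters={'compute_scale': node_count})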


> python-openstackclient consumes other clients :).  Ok, that's probably
> not a great example :).
>

:) No, not really.  But it is also developing some 'value-added' functions
that are cross-project APIs and has a similar problem.  So far that is just
smoke and mirrors hiding the duct tape behind the scenes but it is not
unlike some of the things that Horizon does for user convenience.


> This approach makes the most sense to me.  python-tuskarclient would
> make the decisions about if it can call the heat api directly, or the
> tuskar api, or some other api.  The UI and CLI would then both use
> python-tuskarclient.


If you do this, keep the low-level API bindings separate from the
higher-level logical API.


> > 2) Make a thicker tuskar-api and put the business logic there. (This is the
> > original approach with consuming other services from tuskar-api. The
> > feedback on this approach was mostly negative though.)
>
> So, typically, I would say this is the right approach.  However given
> what you pointed out above - that sometimes we can use other APIs
> directly - we then have a separation where sometimes you have to use
> tuskar-api and sometimes you'd use heat/etc api.  By using
> python-tuskarclient, you're really just pushing that abstraction into
> a library instead of an API, and I think that makes some sense.


Consider that pushing that out to the client requires that the client be in sync
with what is deployed.  You'll have to make sure your client logic can
handle the multiple versions of server APIs that it will encounter.
 Putting that server-side lets you stay in sync with the other OpenStack
APIs you need to use.


> > 3) Keep tuskar-api and python-tuskarclient thin, make another library
> > sitting between Tuskar UI and all python-***clients. This new project would
> > contain the logic of using undercloud services to provide the "tuskar
> > experience"; it would expose python bindings for Tuskar UI and contain a CLI.
> > (Think of it like a traditional python-*client but instead of consuming a REST
> > API, it would consume other python-*clients. I wonder if this is
> > overengineering. We might end up with too many projects doing too few
> > things? :) )
>
> I don't follow how this new library would be different from
> python-tuskarclient.  Unless I'm just misinterpreting what
> python-tuskarclient is meant to be, which may very well be true :).


This is essentially what I suggested above.  It need not be a separate repo
or installable package, but the internal API should have its own
namespace/modules/whatever.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
On Wed, Dec 11, 2013 at 10:35 AM, James Slagle  wrote:
> On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:
>> 1) Make a thicker python-tuskarclient and put the business logic there. Make
>> it consume other python-*clients. (This is an unusual approach though; I'm
>> not aware of any python-*client that would consume and integrate other
>> python-*clients.)
>
> python-openstackclient consumes other clients :).  Ok, that's probably
> not a great example :).
>
> This approach makes the most sense to me.  python-tuskarclient would
> make the decisions about if it can call the heat api directly, or the
> tuskar api, or some other api.  The UI and CLI would then both use
> python-tuskarclient.

Another example:

Each python-*client also uses keystoneclient to do auth and get
endpoints.  So, it's not like each client has reimplemented the code
to make HTTP requests to keystone; they reuse the keystone Client
class object.

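Roughly (2013-era keystoneclient v2.0; the other python-*clients follow the
same pattern of reusing it for the token and the service catalog):

    from keystoneclient.v2_0 import client as ksclient
    from heatclient import client as heat_client

    # Authenticate once via keystoneclient...
    ks = ksclient.Client(username='admin', password='secret',
                         tenant_name='admin',
                         auth_url='http://localhost:5000/v2.0')
    # ...then look the service endpoint up in the catalog and hand the
    # token to another python-*client instead of redoing auth by hand.
    endpoint = ks.service_catalog.url_for(service_type='orchestration',
                                          endpoint_type='publicURL')
    heat = heat_client.Client('1', endpoint, token=ks.auth_token)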
-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Tzu-Mainn Chen
Thanks for writing this all out!

- Original Message -
> Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
> new to the project. I only mention it again because it's relevant in
> that I missed any of the discussion on why proxying from tuskar API to
> other APIs is looked down upon. Jiri and I had been talking yesterday
> and he mentioned it to me when I started to ask these same sorts of
> questions.
> 
> On 12/11/2013 07:33 AM, Jiří Stránský wrote:
> > Hi all,
> >
> > TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> > CLI for managing the deployment providing the same fundamental features
> > as the UI." With the planned architecture changes (making tuskar-api thinner
> > and getting rid of proxying to other services), there's not an obvious
> > way to achieve that. We need to figure this out. I present a few options
> > and look forward to feedback.
> >
> > Previously, we had planned the Tuskar architecture like this:
> >
> > tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.
> 
> My biggest concern was that having each client call out to the
> individual APIs directly put a lot of knowledge into the clients that
> had to be replicated across clients. In the best case, that's simply
> knowing where to look for data. But I suspect it's bigger than that and
> there are workflows that will be implemented for tuskar needs. If the
> tuskar API can't call out to other APIs, that workflow implementation
> needs to be done at a higher layer, which means in each client.
> 
> Something I'm going to talk about later in this e-mail but I'll mention
> here so that the diagrams sit side-by-side is the potential for a facade
> layer that hides away the multiple APIs. Lemme see if I can do this in
> ASCII:
> 
> tuskar-ui -+                   +-tuskar-api
>            |                   |
>            +---client-facade---+-nova-api
>            |                   |
> tuskar-cli-+                   +-heat-api
> 
> The facade layer runs client-side and contains the business logic that
> calls across APIs and adds in the tuskar magic. That keeps the tuskar
> API from calling into other APIs* but keeps all of the API call logic
> abstracted away from the UX pieces.
> 
> * Again, I'm not 100% up to speed with the API discussion, so I'm going
> off the assumption that we want to avoid API to API calls. If that isn't
> as strict of a design principle as I'm understanding it to be, then the
> above picture probably looks kinda silly, so keep in mind the context
> I'm going from.
> 
> For completeness, my gut reaction was expecting to see something like:
> 
> tuskar-ui -+
>            |
>            +-tuskar-api-+-nova-api
>            |            |
> tuskar-cli-+            +-heat-api
> 
> Where a tuskar client talked to the tuskar API to do tuskar things.
> Whatever was needed to do anything tuskar-y was hidden away behind the
> tuskar API.
> 
> > This meant that the "integration logic" of how to use heat, ironic and
> > other services to manage an OpenStack deployment lay within
> > *tuskar-api*. This gave us an easy way towards having a CLI - just build
> > tuskarclient to wrap abilities of tuskar-api.
> >
> > Nowadays we talk about using heat and ironic (and neutron? nova?
> > ceilometer?) apis directly from the UI, similarly as Dashboard does.
> > But our approach cannot be exactly the same as in Dashboard's case.
> > Dashboard is quite a thin wrapper on top of python-...clients, which
> > means there's a natural parity between what the Dashboard and the CLIs
> > can do.
>
> When you say python-*clients, is there a distinction between the CLI and
> a bindings library that invokes the server-side APIs? In other words,
> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> 
> > We're not wrapping the APIs directly (if wrapping them directly would be
> > sufficient, we could just use Dashboard and not build Tuskar API at
> > all). We're building a separate UI because we need *additional logic* on
> > top of the APIs. E.g. instead of directly working with Heat templates
> > and Heat stacks to deploy the overcloud, the user will get to pick how many
> > control/compute/etc. nodes he wants to have, and we'll take care of Heat
> > things behind the scenes. This makes Tuskar UI significantly thicker
> > than Dashboard is, and the natural parity between CLI and UI vanishes.
> > By having this logic in UI, we're effectively preventing its use from
> > CLI. (If I were bold I'd also think about integrating Tuskar with other
> > software, which would be prevented too if we keep the business logic in
> > UI, but I'm not absolutely positive about use cases here).
> 
> I see your point about preventing its use from the CLI, but more
> disconcerting IMO is that it just doesn't belong in the UI. That sort of
> logic, the "Heat things behind the scenes", sounds like the jurisdiction
> of the API (if I'm reading into what that entails correctly).
> 
> > Now this raises a question - 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:
> Hi all,
>
> TL;DR: I believe that "As an infrastructure administrator, Anna wants a CLI
> for managing the deployment providing the same fundamental features as the UI."
> With the planned architecture changes (making tuskar-api thinner and getting
> rid of proxying to other services), there's not an obvious way to achieve
> that. We need to figure this out. I present a few options and look forward
> to feedback.
>
> Previously, we had planned the Tuskar architecture like this:
>
> tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.

To be clear, tuskarclient is just a library, right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?

> This meant that the "integration logic" of how to use heat, ironic and other
> services to manage an OpenStack deployment lay within *tuskar-api*. This
> gave us an easy way towards having a CLI - just build tuskarclient to wrap
> abilities of tuskar-api.
>
>
> Nowadays we talk about using heat and ironic (and neutron? nova?
> ceilometer?) apis directly from the UI, similarly as Dashboard does.

I think we should do that wherever we can, for sure.  For example, to
get the status of a deployment we can do the same API call as "heat
stack-status ..." does, no need to write a new Tuskar API to do that.

> But our approach cannot be exactly the same as in Dashboard's case.
> Dashboard is quite a thin wrapper on top of python-...clients, which means
> there's a natural parity between what the Dashboard and the CLIs can do.
>
> We're not wrapping the APIs directly (if wrapping them directly would be
> sufficient, we could just use Dashboard and not build Tuskar API at all).
> We're building a separate UI because we need *additional logic* on top of
> the APIs. E.g. instead of directly working with Heat templates and Heat
> stacks to deploy the overcloud, the user will get to pick how many
> control/compute/etc. nodes he wants to have, and we'll take care of Heat
> things behind the scenes. This makes Tuskar UI significantly thicker than
> Dashboard is, and the natural parity between CLI and UI vanishes. By having
> this logic in UI, we're effectively preventing its use from CLI. (If I were
> bold I'd also think about integrating Tuskar with other software, which would
> be prevented too if we keep the business logic in UI, but I'm not absolutely
> positive about use cases here).

I don't think we want the business logic in the UI.

> Now this raises a question - how do we get the CLI reasonably on par with
> the abilities of the UI? (Or am I wrong that Anna the infrastructure
> administrator would want that?)

IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.

>
> Here are some options I see:
>
> 1) Make a thicker python-tuskarclient and put the business logic there. Make
> it consume other python-*clients. (This is an unusual approach though; I'm
> not aware of any python-*client that would consume and integrate other
> python-*clients.)

python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).

This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.

> 2) Make a thicker tuskar-api and put the business logic there. (This is the
> original approach with consuming other services from tuskar-api. The
> feedback on this approach was mostly negative though.)

So, typically, I would say this is the right approach.  However given
> what you pointed out above - that sometimes we can use other APIs
> directly - we then have a separation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.

> 3) Keep tuskar-api and python-tuskarclient thin, make another library
> sitting between Tuskar UI and all python-***clients. This new project would
> contain the logic of using undercloud services to provide the "tuskar
> experience" it would expose python bindings for Tuskar UI and contain a CLI.
> (Think of it like traditional python-*client but instead of consuming a REST
> API, it would consume other python-*clients. I wonder if this is
> overengineering. We might end up with too many projects doing too few
> things? :) )

I don't follow how this new library would be different from
python-tuskarclient.  Unless I'm just misinterpreting what
python-tuskarclient is meant to be, which may very well be true :).

> 4) Keep python-tuskarclient thin, but build a separate CLI app that would
> provide the same integration features as Tuskar UI does. (This would lead to
> code duplication. Whether this is bearable depends on the actual amount of
> logic to duplicate.)

-1

>
>
> Which of the options do you see as best?

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. In the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that and 
there are workflows that will be implemented for tuskar needs. If the 
tuskar API can't call out to other APIs, that workflow implementation 
needs to be done at a higher layer, which means in each client.


Something I'm going to talk about later in this e-mail but I'll mention 
here so that the diagrams sit side-by-side is the potential for a facade 
layer that hides away the multiple APIs. Lemme see if I can do this in 
ASCII:


tuskar-ui -+                   +-tuskar-api
           |                   |
           +---client-facade---+-nova-api
           |                   |
tuskar-cli-+                   +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm going 
off the assumption that we want to avoid API to API calls. If that isn't 
as strict of a design principle as I'm understanding it to be, then the 
above picture probably looks kinda silly, so keep in mind the context 
I'm going from.


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +-tuskar-api-+-nova-api
           |            |
tuskar-cli-+            +-heat-api

Where a tuskar client talked to the tuskar API to do tuskar things. 
Whatever was needed to do anything tuskar-y was hidden away behind the 
tuskar API.



This meant that the "integration logic" of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python-*clients, is there a distinction between the CLI and
a bindings library that invokes the server-side APIs? In other words, 
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?



We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy the overcloud, the user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If I were bold I'd also think about integrating Tuskar with other
software, which would be prevented too if we keep the business logic in
UI, but I'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more 
disconcerting IMO is that it just doesn't belong in the UI. That sort of 
logic, the "Heat things behind the scenes", sounds like the jurisdiction 
of the API (if I'm reading into what that entails correctly).



Now this raises a question - how do we get the CLI reasonably on par with
the abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)


To reiterate my point above, I see the idea of getting the CLI on par, 
but I also see it as striving for a cleaner design as well.



Here are some 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský
A few clarifications added; next time I'll need to triple-read after
myself :)


On 11.12.2013 13:33, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.

This meant that the "integration logic" of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all).


Sorry, this should have said "not build Tuskar *UI* at all".


We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy the overcloud, the user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If I were bold I'd also think about integrating Tuskar with other
software, which would be prevented too if we keep the business logic in
UI, but I'm not absolutely positive about use cases here).

Now this raises a question - how do we get the CLI reasonably on par with
the abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)

Here are some options I see:

1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though; I'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients. This new project
would contain the logic of using undercloud services to provide the
"tuskar experience"; it would expose python bindings for Tuskar UI and


"expose python bindings for Tuskar UI" is double-meaning - to be more 
precise: "expose python bindings for use within Tuskar UI".



contain a CLI. (Think of it like a traditional python-*client but instead
of consuming a REST API, it would consume other python-*clients. I
wonder if this is overengineering. We might end up with too many
projects doing too few things? :) )

4) Keep python-tuskarclient thin, but build a separate CLI app that
would provide the same integration features as Tuskar UI does. (This would
lead to code duplication. Whether this is bearable depends on the actual
amount of logic to duplicate.)


Which of the options do you see as best? Did I miss some better option? Am
I just being crazy and trying to solve a non-issue? Please tell me :)

Please don't consider the time aspect of this; focus rather on what's
the right approach and where we want to get eventually. (We might want to
keep a thick Tuskar UI for Icehouse so as not to set hell loose; there
will be enough refactoring already.)


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.


Previously, we had planned the Tuskar architecture like this:

tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.

This meant that the "integration logic" of how to use heat, ironic and 
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build 
tuskarclient to wrap abilities of tuskar-api.



Nowadays we talk about using heat and ironic (and neutron? nova? 
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case. 
Dashboard is quite a thin wrapper on top of python-...clients, which 
means there's a natural parity between what the Dashboard and the CLIs 
can do.


We're not wrapping the APIs directly (if wrapping them directly would be 
sufficient, we could just use Dashboard and not build Tuskar API at 
all). We're building a separate UI because we need *additional logic* on 
top of the APIs. E.g. instead of directly working with Heat templates 
and Heat stacks to deploy the overcloud, the user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat 
things behind the scenes. This makes Tuskar UI significantly thicker 
than Dashboard is, and the natural parity between CLI and UI vanishes. 
By having this logic in UI, we're effectively preventing its use from 
CLI. (If I were bold I'd also think about integrating Tuskar with other
software, which would be prevented too if we keep the business logic in
UI, but I'm not absolutely positive about use cases here).
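For a concrete feel of that additional logic, roughly (the parameter names
are invented; the real template's knobs would differ):

    def deploy(heat, template_body, control_count, compute_count):
        # The user only picks node counts; the Heat template and its
        # parameters stay behind the scenes.
        return heat.stacks.create(
            stack_name='overcloud',
            template=template_body,
            parameters={'control_scale': control_count,
                        'compute_scale': compute_count})

Keeping a function like this inside the UI means a CLI user has to
reinvent it; keeping it in a shared place means both get it for free.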


Now this raises a question - how do we get the CLI reasonably on par with
the abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)


Here are some options I see:

1) Make a thicker python-tuskarclient and put the business logic there. 
Make it consume other python-*clients. (This is an unusual approach 
though; I'm not aware of any python-*client that would consume and
integrate other python-*clients.)


2) Make a thicker tuskar-api and put the business logic there. (This is 
the original approach with consuming other services from tuskar-api. The 
feedback on this approach was mostly negative though.)


3) Keep tuskar-api and python-tuskarclient thin, make another library 
sitting between Tuskar UI and all python-***clients. This new project 
would contain the logic of using undercloud services to provide the
"tuskar experience"; it would expose python bindings for Tuskar UI and
contain a CLI. (Think of it like a traditional python-*client but instead
of consuming a REST API, it would consume other python-*clients. I 
wonder if this is overengineering. We might end up with too many 
projects doing too few things? :) )


4) Keep python-tuskarclient thin, but build a separate CLI app that 
would provide the same integration features as Tuskar UI does. (This would
lead to code duplication. Whether this is bearable depends on the actual
amount of logic to duplicate.)



Which of the options do you see as best? Did I miss some better option? Am
I just being crazy and trying to solve a non-issue? Please tell me :)


Please don't consider the time aspect of this; focus rather on what's
the right approach and where we want to get eventually. (We might want to
keep a thick Tuskar UI for Icehouse so as not to set hell loose; there
will be enough refactoring already.)



Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev