Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-28 Thread Tzu-Mainn Chen
> On 27 January 2016 at 08:10, Tzu-Mainn Chen < tzuma...@redhat.com > wrote:
> > > Okay, so I initially thought we weren't making much progress on this
> > > discussion, but after some more thought and reading of the existing PoC,
> > > we're (maybe?) less far apart than I initially thought.
> > >
> > > I think there are kind of three different designs being discussed.
> > >
> > > 1) Rewrite a bunch of stuff into MistrYAML, with the idea that users
> > > could edit our workflows. I think this is what I've been most
> > > strenuously objecting to, and for the most part my previous arguments
> > > pertain to this model.
> > >
> > > 2) However, I think there's another thing going on/planned with at least
> > > some of the actions. It sounds like some of our workflows are going to
> > > essentially be a single action that just passes the REST params into our
> > > Python code. This sort of API as a Service would be more palatable to
> > > me, as it doesn't really split our implementation between YAML and
> > > Python (the YAML is pretty much only defining the REST API in this
> > > model), but it still gives us a quick and easy REST interface to the
> > > existing code. It also keeps a certain amount of separation between
> > > Mistral and the TripleO code in case we decide some day that we need a
> > > proper API service and need to swap out the Mistral frontend for a
> > > different one. This should also be the easiest to implement since it
> > > doesn't involve rewriting anything - we're mostly just moving the
> > > existing code into Mistral actions and creating some pretty trivial
> > > Mistral workflows.
> > >
> > > 3) The thing I _want_ to see, which is a regular Python-based API
> > > service. Again, you can kind of see my arguments around why I think we
> > > should do this elsewhere in the thread. It's also worth noting that
> > > there is already an initial implementation of this proposed to
> > > tripleo-common, so it's not like we'd be starting from zero here either.
> > >
> > > I'm still not crazy about 2, but if it lets me stop spending excessive
> > > amounts of time on this topic it might be worth it. :-)

> > I'm kinda with Ben here; I'm strongly for 3), but 2) is okay-ish - with a
> > few caveats. This thread has raised a lot of interesting points that, if
> > clarified, might help me feel more comfortable about 2), so I'm hoping
> > that Dan/Steve, you'd be willing to help me understand a few things:
> >
> > a) One argument against the TripleO API is that the Tuskar API tied us
> > far too strongly with one way of doing things. However, the Mistral
> > solution would create a set of workflows that essentially have the same
> > interface as the TripleO API, correct? If so, would we support those
> > workflows the same as if they were an API, with extensive documentation
> > and guaranteeing stability from version to version of TripleO?

> I believe we would, yes. This needs to be a stable interface; if we really
> need to make breaking changes, we could use versions in the workflow names,
> which Dan suggested at one point.

> > b) For simple features that we might expose through the Mistral API as
> > one-step workflows calling a single function (getting parameters for a
> > deployment, say): when we expose these features through the CLI, would we
> > also enforce the CLI going through Mistral to access those features rather
> > than calling that single function?

> I think we should; it is just much simpler to have everyone use the same
> interface than to decide on a case-by-case basis. However, I could be
> persuaded otherwise if others object to this.

> > c) Is there consideration to the fact that multiple OpenStack projects
> > have created their own REST APIs, to the point that it seems like more of
> > a known technology than using Mistral to front everything? Or are we
> > going to argue that other projects should also switch to using Mistral?

> Projects are so different that I don't think this can really be compared so
> widely; it doesn't really make sense. Projects should be free to choose
> which approach 

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Tzu-Mainn Chen
> > > > > > > It integrates very nicely with OpenStack services and is very
> > > > > > > customizable with custom actions. The fact that Mistral sits
> > > > > > > much closer to OpenStack and is essentially a light shim on top
> > > > > > > of it is to our advantage (being TripleO). To think that we can
> > > > > > > build up a proxy API in such a manner that we might be able to
> > > > > > > swap in an entirely new backend (without even having a fully
> > > > > > > implemented backend yet to begin with) is for me a bit of a
> > > > > > > stretch. We've got a lot of "TripleO API" maturing before we'll
> > > > > > > get to this point. Which is why I lean towards using a generic
> > > > > > > workflow API to accomplish the same task.
> > > > > > > 
> > > > > > > I actually think rather than shielding users we should be more
> > > > > > > transparent about the actual workflows that are driving
> > > > > > > deployment. Smaller, more focused workflows that we string
> > > > > > > together to drive the deployment.
> > > > > > > 
> > > > > > > > 4) It raises the bar even further for both new deployers and
> > > > > > > > developers.  You already need to have a pretty firm grasp of
> > > > > > > > Puppet and Heat templates to understand how our stuff works,
> > > > > > > > not to mention a decent understanding of quite a number of
> > > > > > > > OpenStack services.
> > > > > > > > 
> > > > > > > > This presents a big chicken-and-egg problem for people new to
> > > > > > > > OpenStack.  It's great that we're based on OpenStack and that
> > > > > > > > allows people to peek under the hood and do some tinkering,
> > > > > > > > but it can't be required for everyone.  A lot of our deployers
> > > > > > > > are going to have little to no OpenStack experience, and
> > > > > > > > TripleO is already a daunting task for those people (hell,
> > > > > > > > it's daunting for people who _are_ experienced).
> > > > > > > > 
> > > > > > > And on the flipside you will get more of a community around
> > > > > > > using an OpenStack project than you ever would going off and
> > > > > > > building your own "Deployment/Workflow API".
> > > > > > > 
> > > > > > > I would actually argue this is less of a deployers thing and
> > > > > > > more of a development tool choice. IMO most deployers will use
> > > > > > > python-tripleoclient or some UI and not mistralclient directly.
> > > > > > > The code I've posted this week shows a prototype of just thi

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Tzu-Mainn Chen
> Okay, so I initially thought we weren't making much progress on this
> discussion, but after some more thought and reading of the existing PoC,
> we're (maybe?) less far apart than I initially thought.
> 
> I think there are kind of three different designs being discussed.
> 
> 1) Rewrite a bunch of stuff into MistrYAML, with the idea that users
> could edit our workflows.  I think this is what I've been most
> strenuously objecting to, and for the most part my previous arguments
> pertain to this model.
> 
> 2) However, I think there's another thing going on/planned with at least
> some of the actions.  It sounds like some of our workflows are going to
> essentially be a single action that just passes the REST params into our
> Python code.  This sort of API as a Service would be more palatable to
> me, as it doesn't really split our implementation between YAML and
> Python (the YAML is pretty much only defining the REST API in this
> model), but it still gives us a quick and easy REST interface to the
> existing code.  It also keeps a certain amount of separation between
> Mistral and the TripleO code in case we decide some day that we need a
> proper API service and need to swap out the Mistral frontend for a
> different one.  This should also be the easiest to implement since it
> doesn't involve rewriting anything - we're mostly just moving the
> existing code into Mistral actions and creating some pretty trivial
> Mistral workflows.
> 
> 3) The thing I _want_ to see, which is a regular Python-based API
> service.  Again, you can kind of see my arguments around why I think we
> should do this elsewhere in the thread.  It's also worth noting that
> there is already an initial implementation of this proposed to
> tripleo-common, so it's not like we'd be starting from zero here either.
> 
> I'm still not crazy about 2, but if it lets me stop spending excessive
> amounts of time on this topic it might be worth it. :-)
> 
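As a side note, here is a minimal sketch of what option 2 above describes - a
Mistral action that does nothing but pass the REST params into existing Python
code. This assumes Mistral's pluggable action interface
(mistral.actions.base.Action) and a hypothetical tripleo_common.deployment
module; every name below is illustrative, not the actual proposal:

    # Thin pass-through Mistral action; the YAML workflow wrapping it
    # would be a trivial single task, so all real logic stays in the
    # TripleO library.
    from mistral.actions import base

    from tripleo_common import deployment  # hypothetical module


    class GetDeploymentParametersAction(base.Action):

        def __init__(self, plan_name):
            self.plan_name = plan_name

        def run(self):
            # Mistral supplies the REST entry point and parameter
            # plumbing; tripleo-common keeps the business logic.
            return deployment.get_parameters(self.plan_name)

        def test(self):
            # Dry-run hook expected by the action interface.
            return {}

The corresponding workflow would simply map its input parameters onto this
action's arguments.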

I'm kinda with Ben here; I'm strongly for 3), but 2) is okay-ish - with a
few caveats.  This thread has raised a lot of interesting points that, if
clarified, might help me feel more comfortable about 2), so I'm hoping
that Dan/Steve, you'd be willing to help me understand a few things:

a) One argument against the TripleO API is that the Tuskar API tied us
far too strongly with one way of doing things.  However, the Mistral
solution would create a set of workflows that essentially have the same
interface as the TripleO API, correct?  If so, would we support those
workflows the same as if they were an API, with extensive documentation
and guaranteeing stability from version to version of TripleO?

b) For simple features that we might expose through the Mistral API as
one-step workflows calling a single function (getting parameters for a
deployment, say): when we expose these features through the CLI, would we
also enforce the CLI going through Mistral to access those features rather
than calling that single function?

c) Is there consideration to the fact that multiple OpenStack projects
have created their own REST APIs, to the point that it seems like more of
a known technology than using Mistral to front everything?  Or are we
going to argue that other projects should also switch to using Mistral?

d) If we proceed down the Mistral path and run into issues, is there a
point when we'd be willing to back away?


Mainn



Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-26 Thread Tzu-Mainn Chen
- Original Message -
> On Tue, Jan 26, 2016 at 09:23:00AM -0500, Tzu-Mainn Chen wrote:
> > - Original Message -
> > > On Mon, Jan 25, 2016 at 05:45:30PM -0600, Ben Nemec wrote:
> > > > On 01/25/2016 03:56 PM, Steven Hardy wrote:
> > > > > On Fri, Jan 22, 2016 at 11:24:20AM -0600, Ben Nemec wrote:
> > > > >> So I haven't weighed in on this yet, in part because I was on
> > > > >> vacation
> > > > >> when it was first proposed and missed a lot of the initial
> > > > >> discussion,
> > > > >> and also because I wanted to take some time to order my thoughts on
> > > > >> it.
> > > > >>  Also because my initial reaction...was not conducive to calm and
> > > > >> rational discussion. ;-)
> > > > >>
> > > > >> The tldr is that I don't like it.  To explain why, I'm going to make
> > > > >> a
> > > > >> list (everyone loves lists, right? Top $NUMBER reasons we should
> > > > >> stop
> > > > >> expecting other people to write our API for us):
> > > > >>
> > > > >> 1) We've been down this road before.  Except last time it was with
> > > > >> Heat.
> > > > >>  I'm being somewhat tongue-in-cheek here, but expecting a general
> > > > >> service to provide us a user-friendly API for our specific use case
> > > > >> just
> > > > >> doesn't make sense to me.
> > > > > 
> > > > > Actually, we've been down this road before with Tuskar, and
> > > > > discovered
> > > > > that
> > > > > designing and maintaining a bespoke API for TripleO is really hard.
> > > > 
> > > > My takeaway from Tuskar was that designing an API that none of the
> > > > developers on the project use is doomed to fail.  Tuskar also suffered
> > > > from a lack of some features in Heat that the new API is explicitly
> > > > depending on in an attempt to avoid many of the problems Tuskar had.
> > > > 
> > > > Problem #1 is still developer apathy IMHO though.
> > > 
> > > I think the main issue is developer capacity - we're a small community
> > > and
> > > I for one am worried about the effort involved with building and
> > > maintaining a bespoke API - thus this whole discussion is essentially
> > > about
> > > finding a quicker and easier way to meet the needs of those needing an
> > > API.
> > > 
> > 
> > Just a quick note about this; developer capacity works both ways.  On a
> > practical level, if we were to get involved with Mistral wouldn't we need
> > developers to get deeply involved with the Mistral community?  If Mistral
> > were to become the effective REST API interface for the whole deployment
> > API, bugs like https://bugs.launchpad.net/mistral/+bug/1423054 which affect
> > load would have to be fixed, right?
> 
> Well, sure, the advantage of not solving only for TripleO is that some
> time can presumably be spent working on helping to improve Mistral instead
> of writing a new API completely from scratch.  In general this is a good
> thing, and to be encouraged, right? :)
>

I don't think this is an entirely fair comparison.  Using Mistral as our REST
API feels like a new use of Mistral, and we're not expecting other OpenStack
projects to follow suit, are we?  I think there may be unknown consequences to
putting a workflow engine in front of every single REST API request.

> 
> You've mentioned that bug before, have you seen this?
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
> 
> I'm not sure it's even a bug; it's not confirmed as such, and it sounds like
> the configuration issues mentioned in that thread to me.
> 

Fair enough, thanks for pointing that out!

> But, sure, there will be bugs; are you suggesting we'll have fewer if we
> start from scratch with a TripleO-specific API?  I'd be interested to
> understand your reasoning if so :)
> 

I think there's a slightly subtle point being missed.  Sure, the Mistral API
exists, but presumably we'd create workflows corresponding to each proposed
TripleO API function.  Those workflows would be our true 'API', and I expect
that TripleO would have to provide a guarantee of stability there.  I would
expect development of those workflows to have roughly the same amount of
issues as creating a TripleO API.

If we're talking solely about creating a REST API - then yeah, I think that
creating a REST API is well-demonstrated throughout OpenStack, and using
Mistral as the REST API interface is less so.
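To make the "workflows are our true API" point concrete: the stable surface a
consumer codes against would be a workflow name plus its input dictionary,
invoked through python-mistralclient, rather than a REST resource. A purely
illustrative sketch, in which the workflow name, inputs, and client arguments
are all hypothetical:

    from mistralclient.api import client as mistral

    # Client setup depends on the Keystone configuration; these
    # arguments are placeholders.
    cli = mistral.client(mistral_url='http://undercloud:8989/v2',
                         auth_token='...')

    # The moral equivalent of GET /deployment_parameters?plan=overcloud
    # on a bespoke REST API: start a (possibly one-task) workflow by name.
    execution = cli.executions.create(
        'tripleo.get_deployment_parameters',       # hypothetical name
        workflow_input={'plan_name': 'overcloud'})

    # A caller polls execution.state until SUCCESS and reads the output;
    # the workflow name and input schema are what would need to stay
    # stable from version to version.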


Mainn

> Cheers,
> 
> Steve
> 



Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-26 Thread Tzu-Mainn Chen
- Original Message -
> On Mon, Jan 25, 2016 at 05:45:30PM -0600, Ben Nemec wrote:
> > On 01/25/2016 03:56 PM, Steven Hardy wrote:
> > > On Fri, Jan 22, 2016 at 11:24:20AM -0600, Ben Nemec wrote:
> > >> So I haven't weighed in on this yet, in part because I was on vacation
> > >> when it was first proposed and missed a lot of the initial discussion,
> > >> and also because I wanted to take some time to order my thoughts on it.
> > >>  Also because my initial reaction...was not conducive to calm and
> > >> rational discussion. ;-)
> > >>
> > >> The tldr is that I don't like it.  To explain why, I'm going to make a
> > >> list (everyone loves lists, right? Top $NUMBER reasons we should stop
> > >> expecting other people to write our API for us):
> > >>
> > >> 1) We've been down this road before.  Except last time it was with Heat.
> > >>  I'm being somewhat tongue-in-cheek here, but expecting a general
> > >> service to provide us a user-friendly API for our specific use case just
> > >> doesn't make sense to me.
> > > 
> > > Actually, we've been down this road before with Tuskar, and discovered
> > > that
> > > designing and maintaining a bespoke API for TripleO is really hard.
> > 
> > My takeaway from Tuskar was that designing an API that none of the
> > developers on the project use is doomed to fail.  Tuskar also suffered
> > from a lack of some features in Heat that the new API is explicitly
> > depending on in an attempt to avoid many of the problems Tuskar had.
> > 
> > Problem #1 is still developer apathy IMHO though.
> 
> I think the main issue is developer capacity - we're a small community and
> I for one am worried about the effort involved with building and
> maintaining a bespoke API - thus this whole discussion is essentially about
> finding a quicker and easier way to meet the needs of those needing an API.
> 

Just a quick note about this; developer capacity works both ways.  On a
practical level, if we were to get involved with Mistral wouldn't we need
developers to get deeply involved with the Mistral community?  If Mistral
were to become the effective REST API interface for the whole deployment
API, bugs like https://bugs.launchpad.net/mistral/+bug/1423054 which affect
load would have to be fixed, right?

Mainn

>
> In terms of apathy, I think as a developer I don't need an abstraction
> between me, my templates and heat.  Some advanced operators will feel
> likewise, others won't.  What I would find useful sometimes is a general
> purpose workflow engine, which is where I think the more pluggable mistral
> based solution may have advantages in terms of developer and advanced
> operator uptake.
> 
> > > I somewhat agree that heat as an API is insufficient, but that doesn't
> > > necessarily imply you have to have a TripleO specific abstraction, just
> > > that *an* abstraction is required.
> > > 
> > >> 2) The TripleO API is not a workflow API.  I also largely missed this
> > >> discussion, but the TripleO API is a _Deployment_ API.  In some cases
> > >> there also happens to be a workflow going on behind the scenes, but
> > >> honestly that's not something I want our users to have to care about.
> > > 
> > > How would you differentiate between "deployment" in a generic sense in
> > > contrast to a generic workflow?
> > > 
> > > Every deployment I can think of involves a series of steps, involving
> > > some
> > > choices and interactions with other services.  That *is* a workflow?
> > 
> > Well, I mean if we want to take this to extremes then pretty much every
> > API is a workflow API.  You make a REST call, a "workflow" happens in
> > the service, and you get back a result.
> > 
> > Let me turn this around: Would you implement Heat's API on Mistral?  All
> > that happens when I call Heat is that a series of OpenStack calls are
> > made from heat-engine, after all.  Or is that a gross oversimplification
> > of what's happening?  I could argue that the same is true of this
> > discussion. :-)
> 
> As Hugh has mentioned the main thing Heat does is actually manage
> dependencies.  It processes the templates, builds a graph, then walks the
> graph running a "workflow" to create/update/delete/etc each resource.
> 
> I could imagine a future where we interface to some external workflow tool
> to e.g. do each resource action (e.g. create a nova server, poll until it's
> active); however, that's actually a pretty high-overhead approach, and it'd
> probably be better to move towards better use of notifications instead (e.g.
> less internal workflow)
> 
> > >> 3) It ties us 100% to a given implementation.  If Mistral proves to be a
> > >> poor choice for some reason, or insufficient for a particular use case,
> > >> we have no alternative.  If we have an API and decide to change our
> > >> implementation, nobody has to know or care.  This is kind of the whole
> > >> point of having an API - it shields users from all the nasty
> > >> implementation details under the surface.
> > > 
> > > Thi

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-21 Thread Tzu-Mainn Chen
- Original Message -

> On 21 January 2016 at 14:46, Dougal Matthews < dou...@redhat.com > wrote:
> > On 20 January 2016 at 20:05, Tzu-Mainn Chen < tzuma...@redhat.com > wrote:
> > > - Original Message -
> > > > On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
> > > > > - Original Message -
> > > > >> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> > > > >>>
> > > > >>> - Original Message -
> > > > >>>> On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > > >>>>> Hey all,
> > > > >>>>>
> > > > >>>>> I realize now from the title of the other TripleO/Mistral thread
> > > > >>>>> [1] that the discussion there may have gotten confused. I think
> > > > >>>>> using Mistral for TripleO processes that are obviously workflows
> > > > >>>>> - stack deployment, node registration - makes perfect sense.
> > > > >>>>> That thread is exploring practicalities for doing that, and I
> > > > >>>>> think that's great work.
> > > > >>>>>
> > > > >>>>> What I inappropriately started to address in that thread was a
> > > > >>>>> somewhat orthogonal point that Dan asked in his original email,
> > > > >>>>> namely:
> > > > >>>>>
> > > > >>>>> "what it might look like if we were to use Mistral as a
> > > > >>>>> replacement for the TripleO API entirely"
> > > > >>>>>
> > > > >>>>> I'd like to create this thread to talk about that; more of a
> > > > >>>>> 'should we' than 'can we'. And to do that, I want to indulge in
> > > > >>>>> a thought exercise stemming from an IRC discussion with Dan and
> > > > >>>>> others. All, please correct me if I've misstated anything.
> > > > >>>>>
> > > > >>>>> The IRC discussion revolved around one use case: deploying a
> > > > >>>>> Heat stack directly from a Swift container. With an updated
> > > > >>>>> patch, the Heat CLI can support this functionality natively.
> > > > >>>>> Then we don't need a TripleO API; we can use Mistral to access
> > > > >>>>> that functionality, and we're done, with no need for additional
> > > > >>>>> code within TripleO. And, as I understand it, that's the true
> > > > >>>>> motivation for using Mistral instead of a TripleO API: avoiding
> > > > >>>>> custom code within TripleO.
> > > > >>>>>
> > > > >>>>> That's definitely a worthy goal... except from my perspective,
> > > > >>>>> the story doesn't quite end there. A GUI needs additional
> > > > >>>>> functionality,

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-20 Thread Tzu-Mainn Chen
- Original Message -
> On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
> > - Original Message -
> >> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> >>>
> >>> - Original Message -
> >>>> On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> >>>>> Hey all,
> >>>>>
> >>>>> I realize now from the title of the other TripleO/Mistral thread
> >>>>> [1] that
> >>>>> the discussion there may have gotten confused.  I think using
> >>>>> Mistral for
> >>>>> TripleO processes that are obviously workflows - stack
> >>>>> deployment, node
> >>>>> registration - makes perfect sense.  That thread is exploring
> >>>>> practicalities
> >>>>> for doing that, and I think that's great work.
> >>>>>
> >>>>> What I inappropriately started to address in that thread was a
> >>>>> somewhat
> >>>>> orthogonal point that Dan asked in his original email, namely:
> >>>>>
> >>>>> "what it might look like if we were to use Mistral as a
> >>>>> replacement for the
> >>>>> TripleO API entirely"
> >>>>>
> >>>>> I'd like to create this thread to talk about that; more of a
> >>>>> 'should we'
> >>>>> than 'can we'.  And to do that, I want to indulge in a thought
> >>>>> exercise
> >>>>> stemming from an IRC discussion with Dan and others.  All, please
> >>>>> correct
> >>>>> me
> >>>>> if I've misstated anything.
> >>>>>
> >>>>> The IRC discussion revolved around one use case: deploying a Heat
> >>>>> stack
> >>>>> directly from a Swift container.  With an updated patch, the Heat
> >>>>> CLI can
> >>>>> support this functionality natively.  Then we don't need a
> >>>>> TripleO API; we
> >>>>> can use Mistral to access that functionality, and we're done,
> >>>>> with no need
> >>>>> for additional code within TripleO.  And, as I understand it,
> >>>>> that's the
> >>>>> true motivation for using Mistral instead of a TripleO API:
> >>>>> avoiding custom
> >>>>> code within TripleO.
> >>>>>
> >>>>> That's definitely a worthy goal... except from my perspective,
> >>>>> the story
> >>>>> doesn't quite end there.  A GUI needs additional functionality,
> >>>>> which boils
> >>>>> down to: understanding the Heat deployment templates in order to
> >>>>> provide
> >>>>> options for a user; and persisting those options within a Heat
> >>>>> environment
> >>>>> file.
> >>>>>
> >>>>> Right away I think we hit a problem.  Where does the code for
> >>>>> 'understanding
> >>>>> options' go?  Much of that understanding comes from the
> >>>>> capabilities map
> >>>>> in tripleo-heat-templates [2]; it would make sense to me that
> >>>>> responsibility
> >>>>> for that would fall to a TripleO library.
> >>>>>
> >>>>> Still, perhaps we can limit the amount of TripleO code.  So to
> >>>>> give API
> >>>>> access to 'getDeploymentOptions', we can create a Mistral
> >>>>> workflow.
> >>>>>
> >>>>>Retrieve Heat templates from Swift -> Parse capabilities map
> >>>>>
> >>>>> Which is fine-ish, except from an architectural perspective
> >>>>> 'getDeploymentOptions' violates the abstraction layer between
> >>>>> storage and
> >>>>> business logic, a problem that is compounded because
> >>>>> 'getDeploymentOptions'
> >>>>> is not the only functionality that accesses the Heat templates
> >>>>> and needs
> >>>>> exposure through an API.  And, as has been discussed on a
> >>>>> separate TripleO
> >>>>> thread, we're not even sure Swift is sufficient for our needs;
> >>

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-18 Thread Tzu-Mainn Chen
- Original Message -
> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> > 
> > - Original Message -
> > > On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > > Hey all,
> > > > 
> > > > I realize now from the title of the other TripleO/Mistral thread
> > > > [1] that
> > > > the discussion there may have gotten confused.  I think using
> > > > Mistral for
> > > > TripleO processes that are obviously workflows - stack
> > > > deployment, node
> > > > registration - makes perfect sense.  That thread is exploring
> > > > practicalities
> > > > for doing that, and I think that's great work.
> > > > 
> > > > What I inappropriately started to address in that thread was a
> > > > somewhat
> > > > orthogonal point that Dan asked in his original email, namely:
> > > > 
> > > > "what it might look like if we were to use Mistral as a
> > > > replacement for the
> > > > TripleO API entirely"
> > > > 
> > > > I'd like to create this thread to talk about that; more of a
> > > > 'should we'
> > > > than 'can we'.  And to do that, I want to indulge in a thought
> > > > exercise
> > > > stemming from an IRC discussion with Dan and others.  All, please
> > > > correct
> > > > me
> > > > if I've misstated anything.
> > > > 
> > > > The IRC discussion revolved around one use case: deploying a Heat
> > > > stack
> > > > directly from a Swift container.  With an updated patch, the Heat
> > > > CLI can
> > > > support this functionality natively.  Then we don't need a
> > > > TripleO API; we
> > > > can use Mistral to access that functionality, and we're done,
> > > > with no need
> > > > for additional code within TripleO.  And, as I understand it,
> > > > that's the
> > > > true motivation for using Mistral instead of a TripleO API:
> > > > avoiding custom
> > > > code within TripleO.
> > > > 
> > > > That's definitely a worthy goal... except from my perspective,
> > > > the story
> > > > doesn't quite end there.  A GUI needs additional functionality,
> > > > which boils
> > > > down to: understanding the Heat deployment templates in order to
> > > > provide
> > > > options for a user; and persisting those options within a Heat
> > > > environment
> > > > file.
> > > > 
> > > > Right away I think we hit a problem.  Where does the code for
> > > > 'understanding
> > > > options' go?  Much of that understanding comes from the
> > > > capabilities map
> > > > in tripleo-heat-templates [2]; it would make sense to me that
> > > > responsibility
> > > > for that would fall to a TripleO library.
> > > > 
> > > > Still, perhaps we can limit the amount of TripleO code.  So to
> > > > give API
> > > > access to 'getDeploymentOptions', we can create a Mistral
> > > > workflow.
> > > > 
> > > >   Retrieve Heat templates from Swift -> Parse capabilities map
> > > > 
> > > > Which is fine-ish, except from an architectural perspective
> > > > 'getDeploymentOptions' violates the abstraction layer between
> > > > storage and
> > > > business logic, a problem that is compounded because
> > > > 'getDeploymentOptions'
> > > > is not the only functionality that accesses the Heat templates
> > > > and needs
> > > > exposure through an API.  And, as has been discussed on a
> > > > separate TripleO
> > > > thread, we're not even sure Swift is sufficient for our needs;
> > > > one possible
> > > > consideration right now is allowing deployment from templates
> > > > stored in
> > > > multiple places, such as the file system or git.
> > > 
> > > Actually, that whole capabilities map thing is a workaround for a
> > > missing
> > > feature in Heat, which I have proposed, but am having a hard time
> > > reaching
> > > consensus on within the Heat community:
> > > 
> > > https://review.openstack.org/#/c/196656/
> > > 
> >

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-14 Thread Tzu-Mainn Chen


- Original Message -
> On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > Hey all,
> > 
> > I realize now from the title of the other TripleO/Mistral thread [1] that
> > the discussion there may have gotten confused.  I think using Mistral for
> > TripleO processes that are obviously workflows - stack deployment, node
> > registration - makes perfect sense.  That thread is exploring
> > practicalities
> > for doing that, and I think that's great work.
> > 
> > What I inappropriately started to address in that thread was a somewhat
> > orthogonal point that Dan asked in his original email, namely:
> > 
> > "what it might look like if we were to use Mistral as a replacement for the
> > TripleO API entirely"
> > 
> > I'd like to create this thread to talk about that; more of a 'should we'
> > than 'can we'.  And to do that, I want to indulge in a thought exercise
> > stemming from an IRC discussion with Dan and others.  All, please correct
> > me
> > if I've misstated anything.
> > 
> > The IRC discussion revolved around one use case: deploying a Heat stack
> > directly from a Swift container.  With an updated patch, the Heat CLI can
> > support this functionality natively.  Then we don't need a TripleO API; we
> > can use Mistral to access that functionality, and we're done, with no need
> > for additional code within TripleO.  And, as I understand it, that's the
> > true motivation for using Mistral instead of a TripleO API: avoiding custom
> > code within TripleO.
> > 
> > That's definitely a worthy goal... except from my perspective, the story
> > doesn't quite end there.  A GUI needs additional functionality, which boils
> > down to: understanding the Heat deployment templates in order to provide
> > options for a user; and persisting those options within a Heat environment
> > file.
> > 
> > Right away I think we hit a problem.  Where does the code for
> > 'understanding
> > options' go?  Much of that understanding comes from the capabilities map
> > in tripleo-heat-templates [2]; it would make sense to me that
> > responsibility
> > for that would fall to a TripleO library.
> > 
> > Still, perhaps we can limit the amount of TripleO code.  So to give API
> > access to 'getDeploymentOptions', we can create a Mistral workflow.
> > 
> >   Retrieve Heat templates from Swift -> Parse capabilities map
> > 
> > Which is fine-ish, except from an architectural perspective
> > 'getDeploymentOptions' violates the abstraction layer between storage and
> > business logic, a problem that is compounded because 'getDeploymentOptions'
> > is not the only functionality that accesses the Heat templates and needs
> > exposure through an API.  And, as has been discussed on a separate TripleO
> > thread, we're not even sure Swift is sufficient for our needs; one possible
> > consideration right now is allowing deployment from templates stored in
> > multiple places, such as the file system or git.
> 
> Actually, that whole capabilities map thing is a workaround for a missing
> feature in Heat, which I have proposed, but am having a hard time reaching
> consensus on within the Heat community:
> 
> https://review.openstack.org/#/c/196656/
> 
> Given that is a large part of what's anticipated to be provided by the
> proposed TripleO API, I'd welcome feedback and collaboration so we can move
> that forward, vs solving only for TripleO.
> 
> > Are we going to have duplicate 'getDeploymentOptions' workflows for each
> > storage mechanism?  If we consolidate the storage code within a TripleO
> > library, do we really need a *workflow* to call a single function?  Is a
> > thin TripleO API that contains no additional business logic really so bad
> > at that point?
> 
> Actually, this is an argument for making the validation part of the
> deployment a workflow - then the interface with the storage mechanism
> becomes more easily pluggable vs baked into an opaque-to-operators API.
> 
> E.g, in the long term, imagine the capabilities feature exists in Heat, you
> then have a pre-deployment workflow that looks something like:
> 
> 1. Retrieve golden templates from a template store
> 2. Pass templates to Heat, get capabilities map which defines features user
> must/may select.
> 3. Prompt user for input to select required capabilities
> 4. Pass user input to Heat, validate the configuration, get a mapping of
> required options for the selected c

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-13 Thread Tzu-Mainn Chen
- Original Message -
> On Wed, 2016-01-13 at 04:41 -0500, Tzu-Mainn Chen wrote:
> > Hey all,
> > 
> > I realize now from the title of the other TripleO/Mistral thread [1]
> > that
> > the discussion there may have gotten confused.  I think using Mistral
> > for
> > TripleO processes that are obviously workflows - stack deployment,
> > node
> > registration - makes perfect sense.  That thread is exploring
> > practicalities
> > for doing that, and I think that's great work.
> > 
> > What I inappropriately started to address in that thread was a
> > somewhat
> > orthogonal point that Dan asked in his original email, namely:
> > 
> > "what it might look like if we were to use Mistral as a replacement
> > for the
> > TripleO API entirely"
> > 
> > I'd like to create this thread to talk about that; more of a 'should
> > we'
> > than 'can we'.  And to do that, I want to indulge in a thought
> > exercise
> > stemming from an IRC discussion with Dan and others.  All, please
> > correct me
> > if I've misstated anything.
> > 
> > The IRC discussion revolved around one use case: deploying a Heat
> > stack
> > directly from a Swift container.  With an updated patch, the Heat CLI
> > can
> > support this functionality natively.  Then we don't need a TripleO
> > API; we
> > can use Mistral to access that functionality, and we're done, with no
> > need
> > for additional code within TripleO.  And, as I understand it, that's
> > the
> > true motivation for using Mistral instead of a TripleO API: avoiding
> > custom
> > code within TripleO.
> 
> The True motivation for investigating Mistral was to counter the
> assertion that we needed to build our own REST API for workflows in the
> TripleO API spec. This:
> 
>  "We need a REST API that supports the overcloud deployment workflow."
> 
> https://review.openstack.org/#/c/230432/13/specs/mitaka/tripleo-overcloud-deployment-api.rst
> 
> In doing that I was trying to wrap some of the existing code in
> tripleo-common in Mistral actions and it occurred to me that some
> things (like the ability to deploy heat templates from Swift) might
> benefit others more if we put them in the respective client libraries
> (like heatclient) instead of carrying them in tripleo-common where they
> only benefit us. Especially since in the heatclient case it already had
> a --template-object option to begin with
> (https://bugs.launchpad.net/python-heatclient/+bug/1532326)
> 
> This goes towards a similar pattern we follow with other components
> like Puppet which is that instead of trying to create our own
> functionality in puppet-tripleo we are trying to as much as possible
> add the features we need to the individual modules for each project
> (puppet-nova, puppet-swift, etc).
> 
> In other words avoiding custom code within TripleO is just good
> practice in general, not something that is specific to the Mistral vs
> TripleO API discussion. When someone looks at TripleO as a project I
> would very much like them to admire our architecture... not what we've
> had to build to support it. As an example I'm very glad to see TripleO
> be a bit more friendly to config management tooling... rather than
> trying to build our own version (os-apply-config, etc.). And adding
> code to the right place usually works out better for everyone and can
> help build up the larger OpenStack community too.
> 
> > 
> > That's definitely a worthy goal... except from my perspective, the
> > story
> > doesn't quite end there.  A GUI needs additional functionality, which
> > boils
> > down to: understanding the Heat deployment templates in order to
> > provide
> > options for a user; and persisting those options within a Heat
> > environment
> > file.
> 
> TripleO API was previously described as what would be our "workflows
> API" for TripleO. A place for common workflows that are used by the
> CLI and UI together (one code path for all!).
> 
> What you describe here sounds like a different sort of idea which is a
> GUI helper sort of API. FWIW I think it is totally fine for a GUI to
> maintain its own API for caching reasons etc if the members of that
> team find it to be a requirement. Please don't feel that any of the
> workflow API comparisons block requirements for things to build a GUI
> though.
> 
> 
> > 
> > Right away I think we hit a problem.  Where does the code for
> > 'understanding
> > options' go?  Much o

[openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-13 Thread Tzu-Mainn Chen
Hey all,

I realize now from the title of the other TripleO/Mistral thread [1] that
the discussion there may have gotten confused.  I think using Mistral for
TripleO processes that are obviously workflows - stack deployment, node
registration - makes perfect sense.  That thread is exploring practicalities
for doing that, and I think that's great work.

What I inappropriately started to address in that thread was a somewhat
orthogonal point that Dan asked in his original email, namely:

"what it might look like if we were to use Mistral as a replacement for the
TripleO API entirely"

I'd like to create this thread to talk about that; more of a 'should we'
than 'can we'.  And to do that, I want to indulge in a thought exercise
stemming from an IRC discussion with Dan and others.  All, please correct me
if I've misstated anything.

The IRC discussion revolved around one use case: deploying a Heat stack
directly from a Swift container.  With an updated patch, the Heat CLI can
support this functionality natively.  Then we don't need a TripleO API; we
can use Mistral to access that functionality, and we're done, with no need
for additional code within TripleO.  And, as I understand it, that's the
true motivation for using Mistral instead of a TripleO API: avoiding custom
code within TripleO.

That's definitely a worthy goal... except from my perspective, the story
doesn't quite end there.  A GUI needs additional functionality, which boils
down to: understanding the Heat deployment templates in order to provide
options for a user; and persisting those options within a Heat environment
file.

Right away I think we hit a problem.  Where does the code for 'understanding
options' go?  Much of that understanding comes from the capabilities map
in tripleo-heat-templates [2]; it would make sense to me that responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code.  So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

  Retrieve Heat templates from Swift -> Parse capabilities map
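
As a plain-Python sketch of the logic such a workflow would string together -
assuming an already-authenticated swiftclient Connection, with the container
name purely hypothetical:

    import yaml


    def get_deployment_options(conn, container='overcloud-templates'):
        # Step 1: retrieve the capabilities map from the Swift container
        # that holds the Heat templates.
        _headers, body = conn.get_object(container, 'capabilities_map.yaml')
        # Step 2: parse the map into the deployment options that a user
        # would be prompted with.
        return yaml.safe_load(body)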

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage and
business logic, a problem that is compounded because 'getDeploymentOptions'
is not the only functionality that accesses the Heat templates and needs
exposure through an API.  And, as has been discussed on a separate TripleO
thread, we're not even sure Swift is sufficient for our needs; one possible
consideration right now is allowing deployment from templates stored in
multiple places, such as the file system or git.

Are we going to have duplicate 'getDeploymentOptions' workflows for each
storage mechanism?  If we consolidate the storage code within a TripleO
library, do we really need a *workflow* to call a single function?  Is a
thin TripleO API that contains no additional business logic really so bad
at that point?

My gut reaction is to say that proposing Mistral in place of a TripleO API
is to look at the engineering concerns from the wrong direction.  The
Mistral alternative comes from a desire to limit custom TripleO code at all
costs.  I think that is an extremely dangerous attitude that leads to
compromises and workarounds that will quickly lead to a shaky code base
full of design flaws that make it difficult to implement or extend any
functionality cleanly.

I think the correct attitude is to simply look at the problem we're
trying to solve and find the correct architecture.  For these get/set
methods that the API needs, it's pretty simple: storage -> some logic ->
a REST API.  Adding a workflow engine on top of that is unneeded, and I
believe that means it's an incorrect solution.
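
For contrast, here is a purely illustrative sketch of that storage -> some
logic -> REST API shape with no workflow engine in between; Flask is used only
for brevity (it is not necessarily what the proposed implementation uses), and
the endpoint and helper names are hypothetical:

    from flask import Flask, jsonify

    from tripleo_common import deployment  # hypothetical library module

    app = Flask(__name__)


    @app.route('/v1/deployment_options/<plan_name>')
    def deployment_options(plan_name):
        # Thin layer: the library handles storage access and business
        # logic; the API only routes and serializes.
        return jsonify(deployment.get_deployment_options(plan_name))


    if __name__ == '__main__':
        app.run(port=8585)  # port is arbitrary for this sketch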


Thanks,
Tzu-Mainn Chen



[1] http://lists.openstack.org/pipermail/openstack-dev/2016-January/083757.html
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities_map.yaml



Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Tzu-Mainn Chen
> On 01/11/2016 11:09 PM, Tzu-Mainn Chen wrote:
> > - Original Message -
> >> Background info:
> >>
> >> We've got a problem in TripleO at the moment where many of our
> >> workflows can be driven by the command line only. This causes some
> >> problems for those trying to build a UI around the workflows in that
> >> they have to duplicate deployment logic in potentially multiple places.
> >> There are specs up for review which outline how we might solve this
> >> problem by building what is called TripleO API [1].
> >>
> >> Late last year I began experimenting with an OpenStack service called
> >> Mistral which contains a generic workflow API. Mistral supports
> >> defining workflows in YAML and then creating, managing, and executing
> >> them via an OpenStack API. Initially the effort was focused around the
> >> idea of creating a workflow in Mistral which could supplant our
> >> "baremetal introspection" workflow which currently lives in python-
> >> tripleoclient. I create a video presentation which outlines this effort
> >> [2]. This particular workflow seemed to fit nicely within the Mistral
> >> tooling.
> >>
> >> 
> >>
> >> More recently I've turned my attention to what it might look like if we
> >> were to use Mistral as a replacement for the TripleO API entirely. This
> >> brings forth the question of would TripleO be better off building out
> >> its own API... or would relying on existing OpenStack APIs be a better
> >> solution?
> >>
> >> Some things I like about the Mistral solution:
> >>
> >> - The API already exists and is generic.
> >>
> >> - Mistral already supports interacting with many of the OpenStack APIs
> >> we require [3]. Integration with keystone is baked in. Adding support
> >> for new clients seems straightforward (I've had no issues in adding
> >> support for ironic, inspector, and swift actions).
> >>
> >> - Mistral actions are pluggable. We could fairly easily wrap some of
> >> our more complex workflows (perhaps those that aren't easy to replicate
> >> with pure YAML workflows) by creating our own TripleO Mistral actions.
> >> This approach would be similar to creating a custom Heat resource...
> >> something we have avoided with Heat in TripleO but I think it is
> >> perhaps more reasonable with Mistral and would allow us to again build
> >> out our YAML workflows to drive things. This might allow us to build
> >> off some of the tripleo-common consolidation that is already underway
> >> ...
> >>
> >> - We could achieve a "stable API" by simply maintaining input
> >> parameters for workflows in a stable manner. Or perhaps workflows get
> >> versioned like a normal API would be as well.
> >>
> >> - The purist part of me likes Mistral quite a bit. It fits nicely with
> >> the "deploy OpenStack with OpenStack" vision. I sort of feel like if we have to
> >> build our own API in TripleO part of this vision has failed and could
> >> even be seen as a massive technical debt which would likely be hard to
> >> build a community around outside of TripleO.
> >>
> >> - Some of the proposed validations could perhaps be implemented as new
> >> Mistral actions as well. I'm not convinced we require TripleO API just
> >> to support a validations mechanism yet. Perhaps validations seem hard
> >> because we are simply trying to do them in the wrong places anyway?
> >> (like for example perhaps we should validate network connectivity at
> >> inspection time rather than during provisioning).
> >>
> >> - Power users might find a workflow built around a Mistral API more
> >> easy to interact with and expand upon. Perhaps this ends up being
> >> something that gets submitted as a patchset back to TripleO that we
> >> accept into our upstream "stock" workflow sets.
> >>
> >
> > Hiya!  Thanks for putting down your thoughts.
> >
> > I think I fundamentally disagree with the idea of using Mistral, simply
> > because many of the actions we'd like to expose through a REST API
> > (described in the tripleo-common deployment library spec [1]) aren't
> > workflows; they're straightforward get/set methods.
> 
> Right, because this spec describes nearly nothing of what is present
> in tripleoclient now. And what we realistically have now is workflows,
> w

Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-11 Thread Tzu-Mainn Chen
- Original Message -
> Background info:
> 
> We've got a problem in TripleO at the moment where many of our
> workflows can be driven by the command line only. This causes some
> problems for those trying to build a UI around the workflows in that
> they have to duplicate deployment logic in potentially multiple places.
> There are specs up for review which outline how we might solve this
> problem by building what is called TripleO API [1].
> 
> Late last year I began experimenting with an OpenStack service called
> Mistral which contains a generic workflow API. Mistral supports
> defining workflows in YAML and then creating, managing, and executing
> them via an OpenStack API. Initially the effort was focused around the
> idea of creating a workflow in Mistral which could supplant our
> "baremetal introspection" workflow which currently lives in python-
> tripleoclient. I created a video presentation which outlines this effort
> [2]. This particular workflow seemed to fit nicely within the Mistral
> tooling.
> 
> 
> 
> More recently I've turned my attention to what it might look like if we
> were to use Mistral as a replacement for the TripleO API entirely. This
> brings forth the question of would TripleO be better off building out
> its own API... or would relying on existing OpenStack APIs be a better
> solution?
> 
> Some things I like about the Mistral solution:
> 
> - The API already exists and is generic.
> 
> - Mistral already supports interacting with many of the OpenStack APIs
> we require [3]. Integration with keystone is baked in. Adding support
> for new clients seems straightforward (I've had no issues in adding
> support for ironic, inspector, and swift actions).
> 
> - Mistral actions are pluggable. We could fairly easily wrap some of
> our more complex workflows (perhaps those that aren't easy to replicate
> with pure YAML workflows) by creating our own TripleO Mistral actions.
> This approach would be similar to creating a custom Heat resource...
> something we have avoided with Heat in TripleO but I think it is
> perhaps more reasonable with Mistral and would allow us to again build
> out our YAML workflows to drive things. This might allow us to build
> off some of the tripleo-common consolidation that is already underway
> ...
> 
> - We could achieve a "stable API" by simply maintaining input
> parameters for workflows in a stable manner. Or perhaps workflows get
> versioned like a normal API would be as well.
> 
> - The purist part of me likes Mistral quite a bit. It fits nicely with
> the "deploy OpenStack with OpenStack" vision. I sort of feel like if we have to
> build our own API in TripleO part of this vision has failed and could
> even be seen as a massive technical debt which would likely be hard to
> build a community around outside of TripleO.
> 
> - Some of the proposed validations could perhaps be implemented as new
> Mistral actions as well. I'm not convinced we require TripleO API just
> to support a validations mechanism yet. Perhaps validations seem hard
> because we are simply trying to do them in the wrong places anyway?
> (like for example perhaps we should validate network connectivity at
> inspection time rather than during provisioning).
> 
> - Power users might find a workflow built around a Mistral API more
> easy to interact with and expand upon. Perhaps this ends up being
> something that gets submitted as a patchset back to TripleO that we
> accept into our upstream "stock" workflow sets.
> 

Hiya!  Thanks for putting down your thoughts.

I think I fundamentally disagree with the idea of using Mistral, simply
because many of the actions we'd like to expose through a REST API
(described in the tripleo-common deployment library spec [1]) aren't
workflows; they're straightforward get/set methods.  Putting a workflow
engine in front of that feels like overkill and an added complication
that simply isn't needed.  And added complications can lead to unneeded
complications: for instance, one open Mistral bug details how it may
not scale well [2].

The Mistral solution feels like we're trying to force a circular peg
into a round-ish hole.  In a vacuum, if we were to consider the
engineering problem of exposing a code base to outside consumers in a
non-language specific fashion - I'm pretty sure we'd just suggest the
creation of a REST API and be done with it; the thought of using a
workflow engine as the frontend would not cross our minds.

I don't really agree with the 'purist' argument.  We already have custom
business logic written in the TripleO CLI; accepting that within TripleO,
but not a very thin API layer, feels like an arbitrary line to me.  And
if that line exists, I'd argue that if it forces overcomplicated
solutions to straightforward engineering problems, then that line needs
to be moved.


Mainn


[1] 
https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/tripleo-overcloud-deployment-library.rst
[2] https://bugs.launchpad.net/mis

Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-07 Thread Tzu-Mainn Chen


- Original Message -
> On 12/07/2015 04:33 AM, Tzu-Mainn Chen wrote:
> > 
> >
> > On 04/12/15 23:04, Dmitry Tantsur wrote:
> >
> >     On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:
> >
> 
> >
> >
> > 5. In-Progress Development
> >
> > The initial spec for the tripleo-common library has already
> > been approved, and
> > various people have been pushing work forward.  Here's a
> > summary:
> >
> > * Move shared business logic out of CLI
> > * https://review.openstack.org/249134 - simple
> > validations (WIP)
> >
> >
> > When is this going to be finished? It's going to get me a huge
> > merge conflict in https://review.openstack.org/#/c/250405/ (and
> > make it impossible to backport to liberty btw).
> >
> > This plan would be fine if Mitaka development was the only
> > consideration but I hope that it can be adapted a little bit to take
> > into account the Liberty branches, and the significant backports
> > that will be happening there. The rdomanager-plugin->tripleoclient
> > transition made backports painful, and having moved on for that it
> > would be ideal if we didn't create the same situation again.
> >
> > What I would propose is the following:
> > - the tripleo_common repo is renamed to tripleo and consumed by Mitaka
> > - the tripleo_common repo continues to exist in Liberty
> > - the change to rename the package tripleo_common to tripleo occurs
> > on the tripleo repo in the master branch using oslo-style wildcard
> > imports[1], and initially no deprecation message
> > - this change is backported to the tripleo_common repo on the
> > stable/liberty branch
> >
> >
> > Heya - apologies for the bit of churn here.  I re-visited the
> > repo-renaming issue in IRC, and it sounds like
> > the vast majority of people are actually in favor of putting the
> > relevant library and API code in the
> > tripleo-common repository, and revisiting the creation of a tripleo
> > repository later, once code has had a
> > chance to settle.  I personally like this idea, as it reduces some
> > disruptive activity at a time when we're trying
> > to add quite a bit of new code.
> >
> > I double-checked with the people who originally raised objections to the
> > idea of putting API code into
> > tripleo-common.  One said that his objection had to do with package
> > naming, and he removed his objection
> > once it was realized that the package name could be independent of the
> > repo name.  The other clarified his
> > objection as simply a consistency issue that he thought was okay to
> > resolve until after the API code settled a
> > bit.
> >
> > So: I'm putting the idea of *not* creating a tripleo repo just quite yet
> > out there on the mailing list, and I'm
> > hopeful we can come to agreement in Tuesday's tripleo weekly IRC
> > meeting.  That would resolve a lot of the
> > concerns mentioned here, correct?
> 
> It does not seem to resolve my concern, though. I'm still wondering
> where I should continue the major profiles refactoring. If it moves to
> tripleo-common/tripleo (whatever the name), how do I backport it?
> 

This part of the move is still pending; your concerns are resolved if your
PR goes in first, correct?  I think that should be fine.

Mainn

>
> >
> >
> > Mainn
> >
> >
> > Once this is in place, stable/liberty tripleoclient can gradually
> > move from the tripleo_common to the tripleo package, and parts of
> > then tripleoclient -> tripleo_common business logic move can also be
> > backported where appropriate.
> >
> > I'm planning on adding some liberty backportable commands as part of
> > blueprint tripleo-manage-software-deployments [2] and this approach
> > would greatly ease the backport process, and allow the business
> > logic to start in the tripleo repo.
> >
> > * https://review.openstack.org/228991 - deployment code
> > (ready for review)
> >
> > * Additional TripleO business logic
> > * rename tripleo-common repo to tripleo
> >   * https://review.openstack.org/#/c/249521/ (ready for
> > review)
> >  

Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-06 Thread Tzu-Mainn Chen
- Original Message -

> On 04/12/15 23:04, Dmitry Tantsur wrote:

> > On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:
> > > Hey all,
> > >
> > > Over the past few months, there's been a lot of discussion and work around
> > > creating a new REST API-supported TripleO deployment workflow.  However most
> > > of that discussion has been fragmented within spec reviews and weekly IRC
> > > meetings, so I thought it might make sense to provide a high-level overview
> > > of what's been going on.  Hopefully it'll provide some useful perspective for
> > > those that are curious!
> > >
> > > Thanks,
> > > Tzu-Mainn Chen
> > >
> > > --
> > > 1. Explanation for Deployment Workflow Change
> > >
> > > TripleO uses Heat to deploy clouds.  Heat allows tremendous flexibility at
> > > the cost of enormous complexity.  Fortunately TripleO has the space to allow
> > > developers to create tools to simplify the process tremendously, resulting
> > > in a deployment process that is both simple and flexible to user needs.
> > >
> > > The current CLI-based TripleO workflow asks the deployer to modify a base
> > > set of Heat environment files directly before calling Heat's stack-create
> > > command.  This requires much knowledge and precision, and is a process prone
> > > to error.
> > >
> > > However this process can be eased by understanding that there is a pattern
> > > to these modifications; for example, if a deployer wishes to enable network
> > > isolation, a specific set of modifications must be made.  These modification
> > > sets can be encapsulated through pre-created Heat environment files, and
> > > TripleO contains a library of these
> > > (https://github.com/openstack/tripleo-heat-templates/tree/master/environments).
> > >
> > > These environments are further categorized through the proposed environment
> > > capabilities map (https://review.openstack.org/#/c/242439).  This mapping
> > > file contains programmatic metadata, adding items such as user-friendly text
> > > around environment files and marking certain environments as mutually
> > > exclusive.
> > >
> > > 2. Summary of Updated Deployment Workflow
> > >
> > > Here's a summary of the updated TripleO deployment workflow.
> > >
> > > 1. Create a Plan: Upload a base set of heat templates and environment files
> > >    into a Swift container.  This Swift container will be versioned to allow
> > >    for future work with respect to updates and upgrades.
> > >
> > > 2. Environment Selection: Select the appropriate environment files for your
> > >    deployment.
> > >
> > > 3. Modify Parameters: Modify additional deployment parameters.  These
> > >    parameters are influenced by the environment selection in step 2.
> > >
> > > 4. Deploy: Send the contents of the plan's Swift container to Heat for
> > >    deployment.
> > >
> > > Note that the current CLI-based workflow still fits here: a deployer can
> > > modify Heat files directly prior to step 1, follow step 1, and then skip
> > > directly to step 4.  This also allows for trial deployments with test
> > > configurations.
> > >
> > > 3. TripleO Python Library, REST API, and GUI
> > >
> > > Right now, much of the existing TripleO deployment logic lives within the
> > > TripleO CLI code, making it inaccessible to non-Python based UIs.  Putting
> > > both old and new deployment logic into tripleo-common and then cr

[openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-03 Thread Tzu-Mainn Chen
Hey all,

Over the past few months, there's been a lot of discussion and work around
creating a new REST API-supported TripleO deployment workflow.  However most
of that discussion has been fragmented within spec reviews and weekly IRC
meetings, so I thought it might make sense to provide a high-level overview
of what's been going on.  Hopefully it'll provide some useful perspective for
those that are curious!

Thanks,
Tzu-Mainn Chen

--
1. Explanation for Deployment Workflow Change

TripleO uses Heat to deploy clouds.  Heat allows tremendous flexibility at the
cost of enormous complexity.  Fortunately TripleO has the space to allow
developers to create tools to simplify the process tremendously, resulting in
a deployment process that is both simple and flexible to user needs.

The current CLI-based TripleO workflow asks the deployer to modify a base set
of Heat environment files directly before calling Heat's stack-create command.
This requires much knowledge and precision, and is a process prone to error.

However this process can be eased by understanding that there is a pattern to
these modifications; for example, if a deployer wishes to enable network
isolation, a specific set of modifications must be made.  These modification
sets can be encapsulated through pre-created Heat environment files, and TripleO
contains a library of these
(https://github.com/openstack/tripleo-heat-templates/tree/master/environments).

These environments are further categorized through the proposed environment
capabilities map (https://review.openstack.org/#/c/242439).  This mapping file
contains programmatic metadata, adding items such as user-friendly text around
environment files and marking certain environments as mutually exclusive.
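
As a purely illustrative sketch - the real capabilities map is a YAML file
whose exact schema is still being settled in the review above - the metadata
might be shaped roughly like this, expressed here as a Python structure with
guessed field names:

    # Hypothetical shape of one capabilities-map entry; the field names are
    # illustrative guesses, not the schema from the review.
    CAPABILITIES_MAP = {
        'environment_groups': [{
            'title': 'Network Isolation',
            'description': 'Enable isolated networks for overcloud traffic.',
            'mutually_exclusive': True,
            'environments': [
                {'file': 'environments/network-isolation.yaml',
                 'title': 'Network Isolation'},
            ],
        }],
    }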


2. Summary of Updated Deployment Workflow

Here's a summary of the updated TripleO deployment workflow.

1. Create a Plan: Upload a base set of heat templates and environment files
   into a Swift container.  This Swift container will be versioned to allow
   for future work with respect to updates and upgrades.

2. Environment Selection: Select the appropriate environment files for your
   deployment.

3. Modify Parameters: Modify additional deployment parameters.  These
   parameters are influenced by the environment selection in step 2.

4. Deploy: Send the contents of the plan's Swift container to Heat for
   deployment.

Note that the current CLI-based workflow still fits here: a deployer can modify
Heat files directly prior to step 1, follow step 1, and then skip directly to
step 4.  This also allows for trial deployments with test configurations.
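
In rough Python terms, the four steps look something like the untested sketch
below, using python-swiftclient and python-heatclient.  SWIFT_URL, HEAT_URL,
TOKEN, and the object names are simplified placeholders, and this is not the
tripleo-common implementation itself:

    import yaml
    import heatclient.client
    import swiftclient.client

    swift = swiftclient.client.Connection(preauthurl=SWIFT_URL,
                                          preauthtoken=TOKEN)

    # 1. Create a Plan: upload templates into a versioned Swift container.
    swift.put_container('overcloud',
                        headers={'X-Versions-Location': 'overcloud_versions'})
    with open('overcloud.yaml') as f:
        swift.put_object('overcloud', 'overcloud.yaml', f.read())

    # 2. and 3. Environment selection and parameter edits amount to adding
    # further environment-file objects to the container and updating their
    # parameter_defaults before deploying.

    # 4. Deploy: hand the container contents to Heat.
    heat = heatclient.client.Client('1', endpoint=HEAT_URL, token=TOKEN)
    _headers, template = swift.get_object('overcloud', 'overcloud.yaml')
    heat.stacks.create(stack_name='overcloud',
                       template=yaml.safe_load(template))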


3. TripleO Python Library, REST API, and GUI

Right now, much of the existing TripleO deployment logic lives within the TripleO
CLI code, making it inaccessible to non-Python based UIs.  Putting both old and
new deployment logic into tripleo-common and then creating a REST API on top of
that logic will enable modern Javascript-based GUIs to create cloud deployments
using TripleO.


4. Future Work - Validations

A possible next step is to add validations to the TripleO toolkit: scripts that
can be used to check the validity of your deployment pre-, in-, and post-flight.
These validations will be runnable and queryable through a REST API.  Note that
the above deployment workflow should not be a requirement for validations to be
run.


5. In-Progress Development

The initial spec for the tripleo-common library has already been approved, and
various people have been pushing work forward.  Here's a summary:

* Move shared business logic out of CLI
  * https://review.openstack.org/249134 - simple validations (WIP)
  * https://review.openstack.org/228991 - deployment code (ready for review)

* Additional TripleO business logic
  * rename tripleo-common repo to tripleo
* https://review.openstack.org/#/c/249521/ (ready for review)
* https://review.openstack.org/#/c/249524/ (ready for review)
* https://review.openstack.org/#/c/247834/ (ready for review)
  * https://review.openstack.org/#/c/242439 - capabilities map (ready for review)
  * https://review.openstack.org/#/c/227297/ - base tripleo library code (ready for review)
  * https://review.openstack.org/#/c/232534/ - utility functions to manage environments (ready for review)
  * after the above is merged, plan.py will need to be updated to include environment methods

* TripleO API
  * https://review.openstack.org/#/c/230432/ - API spec (ready for review)
  * https://review.openstack.org/#/c/243737/  - API (WIP)
  * after the library code is fully merged, API will need to be updated to allow access to a plan's environment manipulation methods


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-17 Thread Tzu-Mainn Chen
- Original Message -

> On 10 November 2015 at 15:08, Tzu-Mainn Chen < tzuma...@redhat.com > wrote:

> > Hi all,
> 

> > At the last IRC meeting it was agreed that the new TripleO REST API
> 
> > should forgo the Tuskar name, and simply be called... the TripleO
> 
> > API. There's one more point of discussion: where should the API
> 
> > live? There are two possibilities:
> 

> > a) Put it in tripleo-common, where the business logic lives. If we
> 
> > do this, it would make sense to rename tripleo-common to simply
> 
> > tripleo.
> 

> +1 - I think this makes most sense if we are not going to support the tripleo
> repo as a library.

Okay, this seems to be the consensus, which is great. 

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.  In IRC, bnemec and trown
suggested splitting the renamed repo into two packages - 'python-tripleo' and
'tripleo-api' - which seems sensible to me.

What do others think? 

Mainn 

> > b) Put it in its own repo, tripleo-api
> 

> > The first option made a lot of sense to people on IRC, as the proposed
> 
> > API is a very thin layer that's bound closely to the code in tripleo-
> 
> > common. The major objection is that renaming is not trivial; however
> 
> > it was mentioned that renaming might not be *too* bad... as long as
> 
> > it's done sooner rather than later.
> 

> > What do people think?
> 

> > Thanks,
> 
> > Tzu-Mainn Chen
> 



[openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Tzu-Mainn Chen
Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.

b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the proposed
API is a very thin layer that's bound closely to the code in tripleo-
common.  The major objection is that renaming is not trivial; however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.

What do people think?


Thanks,
Tzu-Mainn Chen



[openstack-dev] [tripleo] overcloud deployment workflow spec

2015-09-16 Thread Tzu-Mainn Chen
Hey all,

I've been working on GUIs that use the TripleO deployment methodology
for a while now.  Recent changes have started to diverge the
CLI-supported deployment workflow from an API supported workflow.
There is an increased amount of business logic in the CLI that GUIs
cannot access.

To fix this divergence, I've created a spec targeted for mitaka
(https://review.openstack.org/#/c/219754/ ) that proposes to take the
business logic out of the CLI and place it into the tripleo-common
library; and later, create a PoC REST API that allows GUIs to easily
use the TripleO deployment workflow.

What do people think?


Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [TripleO] Encapsulating logic and state in the client

2015-08-17 Thread Tzu-Mainn Chen
- Original Message -
> It occurs to me that there has never been a detailed exposition of the
> purpose of the tripleo-common library here, and that this might be a
> good time to rectify that.
> 
> Basically, there are two things that it sucks to have in the client:
> 
> First, logic - that is, any code that is not related to the core client
> functionality of taking input from the user, making ReST API calls, and
> returning output to the user. This sucks because anyone needing to
> connect to a ReST API using a language other than Python has to
> reimplement the logic in their own language. It also creates potential
> versioning issues, because there's nothing to guarantee that the client
> code and anything it interacts with on the server are kept in sync.
> 
> Secondly, state. This sucks because the state is contained in a user's
> individual session, which not only causes all sorts of difficulties for
> anyone trying to implement a web UI but also means that you're at risk
> of losing some state if you e.g. accidentally Ctrl-C the CLI client.
> 
> Unfortunately, as undesirable as these are, they're sometimes necessary
> in the world we currently live in. The only long-term solution to this
> is to put all of the logic and state behind a ReST API where it can be
> accessed from any language, and where any state can be stored
> appropriately, possibly in a database. In principle that could be
> accomplished either by creating a tripleo-specific ReST API, or by
> finding native OpenStack undercloud APIs to do everything we need. My
> guess is that we'll find a use for the former before everything is ready
> for the latter, but that's a discussion for another day. We're not there
> yet, but there are things we can do to keep our options open to make
> that transition in the future, and this is where tripleo-common comes in.
> 
> I submit that anything that adds logic or state to the client should be
> implemented in the tripleo-common library instead of the client plugin.
> This offers a couple of advantages:
> 
> - It provides a defined boundary between code that is CLI-specific and
> code that is shared between the CLI and GUI, which could become the
> model for a future ReST API once it has stabilised and we're ready to
> take that step.
> - It allows for an orderly transition when that happens - we can have a
> deprecation period during which the tripleo-common library is imported
> into both the client and the (future, hypothetical) ReST API.
> 
> cheers,
> Zane.
> 

+1; as someone who's done work integrating OpenStack with external code bases
(Ruby), I like this idea a lot.  People have been extremely impressed with
how easy it's been to integrate services like Nova or Ironic, and I think
eventually enabling that sort of ease of integration with TripleO will be a
good step in encouraging wider adoption of TripleO.
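
To make the boundary concrete, here's a hypothetical sketch - all names are
invented - of business logic living in tripleo-common with the client reduced
to a shim around it:

    # tripleo_common/scale.py -- the logic lives here, behind the boundary
    # that could later become the ReST API surface.
    def scale_out(orchestration_client, stack_id, role, count):
        stack = orchestration_client.stacks.get(stack_id)
        params = dict(stack.parameters)
        params['%sCount' % role] = count
        # PATCH-style update so unrelated parameters are left untouched.
        orchestration_client.stacks.update(stack_id, existing=True,
                                           parameters=params)

    # tripleoclient -- a thin cliff command that only parses input and
    # delegates; assumes an openstackclient-style app with a client_manager.
    from cliff import command

    class ScaleOut(command.Command):
        def take_action(self, parsed_args):
            from tripleo_common import scale
            scale.scale_out(self.app.client_manager.orchestration,
                            parsed_args.stack, parsed_args.role,
                            parsed_args.count)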


Mainn





Re: [openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment

2015-04-02 Thread Tzu-Mainn Chen


- Original Message -
> On Thu, Apr 02, 2015 at 10:34:29AM -0400, Dan Prince wrote:
> > On Wed, 2015-04-01 at 21:31 -0400, Tzu-Mainn Chen wrote:
> > > Hey all,
> > > 
> > > I've run into a requirement where it'd be useful if, as an end user, I
> > > could inject
> > > a personal ssh key onto all provisioned overcloud nodes.
> > > 
> > > Obviously this is something that not every user would need or want.  I
> > > talked about
> > > some options with Dan Prince on IRC, and (besides suggesting that I bring
> > > the
> > > discussion to the mailing list) he proposed some generic solutions - and
> > > Dan, please
> > > feel free to correct me if I misunderstood any of your ideas.
> > > 
> > > The first is to specify a pre-set custom puppet manifest to be run when
> > > the Heat
> > > stack is created by adding a post_deployment_customizations.pp puppet
> > > manifest to
> > > be run by all roles.  Users would simply override this manifest.
> > > 
> > > The second solution is essentially the same as the first, except we'd
> > > perform
> > > the override at the Heat resource registry level: the user would update
> > > the
> > > resource reference to point to a their custom manifest (rather than
> > > overriding
> > > the default post-deployment customization manifest).
> > > 
> > > Do either of these solutions seem acceptable to others?  Would one be
> > > preferred?
> > 
> > Talking about this a bit more on IRC this morning we all realized that
> > Puppet isn't a hard requirement. Just simply providing a pluggable
> > mechanism to inject this sort of information into the nodes in a clean
> > way is all we need.
> > 
> > Steve Hardy's suggestion here is probably the cleanest way to support
> > this sort of configuration in a generic fashion.
> > 
> > https://review.openstack.org/170137
> > 
> > I don't believe this solution runs post deployment however. So if
> > running a hook post deployment is a requirement we may need to wire in a
> > similar generic config parameter for that as well.
> 
> No that's correct, this will only run when the initial node boot happens
> and cloud-init runs, so it is pre-deployment only.
> 
> If we need post-deployment hooks too, then we could add a similar hook at
> the end of *-post.yaml, which pulls in some deployer defined additional
> post-deployment config to apply.
> 
> Steve

Post-deployment hooks would definitely be useful; one of the things we'd like
to do is create a user with very specific permissions on various openstack-
related files and executables.

Mainn



[openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment

2015-04-01 Thread Tzu-Mainn Chen
Hey all,

I've run into a requirement where it'd be useful if, as an end user, I could
inject a personal ssh key onto all provisioned overcloud nodes.

Obviously this is something that not every user would need or want.  I talked
about some options with Dan Prince on IRC, and (besides suggesting that I bring
the discussion to the mailing list) he proposed some generic solutions - and
Dan, please feel free to correct me if I misunderstood any of your ideas.

The first is to specify a pre-set custom puppet manifest to be run when the
Heat stack is created by adding a post_deployment_customizations.pp puppet
manifest to be run by all roles.  Users would simply override this manifest.

The second solution is essentially the same as the first, except we'd perform
the override at the Heat resource registry level: the user would update the
resource reference to point to their custom manifest (rather than overriding
the default post-deployment customization manifest).

Do either of these solutions seem acceptable to others?  Would one be preferred?


Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-10 Thread Tzu-Mainn Chen
> On 10/11/14 14:20, Tzu-Mainn Chen wrote:
> > How about 'horizon_dashboard'?  I think pairing that with 'horizon_lib'
> > would make the purpose of each very clear.
> > 
> Wouldn't that imply there exists an openstack_dashboard as well?
> 
> Matthias
> 

Well, the original renaming idea of openstack_dashboard -> horizon was
suggested because most people think of the openstack_dashboard when
they say 'horizon', so I think it might be okay.  But it's just an
opinion!


Mainn



> 


Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-10 Thread Tzu-Mainn Chen
> On Thu, Oct 30, 2014 at 01:13:48PM +0100, Matthias Runge wrote:
> > Hi,
> > 
> > tl;dr: how to proceed in separating horizon and openstack_dashboard
> > 
> > About a year ago now we agreed it makes sense to separate horizon and
> > openstack_dashboard.
> 
> At the past summit, we discussed this again. Currently, our repo
> contains two directories: horizon and openstack_dashboard, they both
> will need new names.
> 
> We discussed a renaming in the past; the former consensus was:
> rename horizon to horizon_lib and
> rename openstack_dashboard to horizon.
> 
> IMHO that doesn't make any sense and will confuse people a lot. I
> wouldn't object to rename horizon to horizon_lib, although any other
> name, e.g django-horizon should be fine as well.
> 
> openstack_dashboard is our official name; people from outside refer to
> the Dashboard as Horizon, why not rename to openstack_horizon here?
> 
> Thoughts? Opinions? Suggestions?

How about 'horizon_dashboard'?  I think pairing that with 'horizon_lib'
would make the purpose of each very clear.

Mainn

> --
> Matthias Runge 
> 


[openstack-dev] [Horizon] Custom Gerrit Dashboard for Juno-2

2014-07-01 Thread Tzu-Mainn Chen
Heya,

I was trying to help with the Horizon Juno-2 reviews, and found the launchpad
link (https://launchpad.net/horizon/+milestone/juno-2) a bit unhelpful in
figuring out which items actually need reviews, as some of the items marked
'Needs Code Review' are works in progress, or already reviewed and waiting for
approval, or abandoned.

So I've created a custom Gerrit dashboard for Horizon Juno-2 blueprints and bugs:

http://goo.gl/hmhMXW (the long URL is excessively long)

It is limited to reviews that are related to Juno-2 blueprints and bugs, and
relies on the topic branch being set correctly.  The top section lists changes
that only need a +A.  The bottom section lists changes that are waiting for a
+2 review, and includes changes with -1s for informational purposes (it
excludes changes with a -2, though).

The URL was created using Sean Dague's gerrit-dash-creator using a
script-generated dashboard definition that takes in a milestone argument and
scrapes the appropriate Horizon launchpad milestone page.  That means that the
above URL won't update if and when items are added or removed from Juno-2.
However, it can be easily regenerated; and can be run for Juno-3 and future
milestones.

Hope this is helpful!


Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [Horizon] Quick Survey: Horizon Mid-Cycle Meetup

2014-06-24 Thread Tzu-Mainn Chen
> On 6/20/14, 6:24 AM, "Radomir Dopieralski"  wrote:
> 
> >On 20/06/14 13:56, Jaromir Coufal wrote:
> >> On 2014/19/06 09:58, Matthias Runge wrote:
> >>> On Wed, Jun 18, 2014 at 10:55:59AM +0200, Jaromir Coufal wrote:
>  My quick questions are:
>  * Who would be interested (and able) to get to the meeting?
>  * What topics do we want to discuss?
> 
>  https://etherpad.openstack.org/p/horizon-juno-meetup
> 
> >>> Thanks for bringing this up!
> >>>
> >>> Do we really have items to discuss, where it needs a meeting in person?
> >>>
> >>> Matthias
> >> 
> >> I am not sure TBH, that's why I also added the Topic section to figure
> >> out if there is something that needs to be discussed. Though I don't see
> >> much interest yet.
> >
> >Apart from the split, I also work on configuration files rework, which
> >could benefit from discussion, but I think it's better done here or on
> >the wiki/etherpad, as that leaves tangible traces. I will post a
> >detailed e-mail in a few days. Other than that, I don't see a compelling
> >reason to organize it.
> >
> >--
> >Radomir Dopieralski
> >
> 
> I don't think the split warrants a mid-cycle meetup. A topic that would
> benefit from several people being in the room is client side architecture,
> but I'm not entirely sure we're ready to work through that yet, and the
> dates are a little aggressive.  If we have interest in that, we could look
> to a slightly later date.
> 
> David

This was talked about a bit in today's Horizon weekly IRC meeting, and the
outcome was that it might make sense to see if people have the interest or
the time to attend such a meetup.  In order to gauge interest, here's an
etherpad where interested parties can put down their names next to dates
when they'd be available to attend.

https://etherpad.openstack.org/p/juno-horizon-meetup

Mainn



Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-06-09 Thread Tzu-Mainn Chen
Hiya,

That makes sense.  Just to take a concrete example - in tuskar-ui, our
flavors' table code 
(https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/infrastructure/flavors/tables.py)
uses the following code from openstack_dashboard.dashboards.admin.flavors.tables
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/flavors/tables.py):

1) extends CreateFlavor LinkAction to modify the linked url
2) extends DeleteFlavor DeleteAction to add an 'allowed' check
3) uses FlavorFilterAction FilterAction
4) uses get_size and get_disk_size methods for formatting a column value

Would it be suggested that all of the above from the admin dashboard go
into openstack_dashboard/common?

I could see arguments for tuskar-ui not to extend 1) and simply create a
new LinkAction since we're using an entirely new url.  4) seems to me
that it might possibly belong in the api code.  So would we just take
2) and 3) and stick it in openstack_dashboard/common/flavors/tables.py
or something. . . ?
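
For concreteness, here's a rough sketch of what 1) and 2) look like today;
the url and the extra 'allowed' check are invented for illustration:

    from openstack_dashboard.dashboards.admin.flavors import \
        tables as flavor_tables

    class CreateNodeProfile(flavor_tables.CreateFlavor):
        # 1) same action, different link target (this url is hypothetical)
        url = "horizon:infrastructure:node_profiles:create"

    class DeleteNodeProfile(flavor_tables.DeleteFlavor):
        # 2) same action, with an extra 'allowed' check layered on top;
        # the 'in_use' attribute is hypothetical.
        def allowed(self, request, flavor=None):
            return flavor is None or not getattr(flavor, 'in_use', False)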

Thanks,
Tzu-Mainn Chen


- Original Message -
> I think this falls in line with other items we are working toward in
> Horizon, namely more pluggable components on panels.
> 
> I think creating a directory in openstack_dashboard for these reusable
> components makes a lot of sense, and usage should eventually be moved
> there.
> I would suggest something as mundane as "openstack_dashboard/common".
> 
> David
> 
> On 5/28/14, 10:36 AM, "Tzu-Mainn Chen"  wrote:
> 
> >Heya,
> >
> >Tuskar-UI is currently extending classes directly from
> >openstack-dashboard.  For example, right now
> >our UI for Flavors extends classes in both
> >openstack_dashboard.dashboards.admin.flavors.tables and
> >openstack_dashboard.dashboards.admin.flavors.workflows.  In the future,
> >this sort of pattern will
> >increase; we anticipate doing similar things with Heat code in
> >openstack-dashboard.
> >
> >However, since tuskar-ui is intended to be a separate dashboard that has
> >the potential to live
> >away from openstack-dashboard, it does feel odd to directly extend
> >openstack-dashboard dashboard
> >components.  Is there a separate place where such code might live?
> >Something similar in concept
> >to
> >https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage
> > ?
> >
> >
> >Thanks,
> >Tzu-Mainn Chen
> >


Re: [openstack-dev] [UX] [Ironic] [Ceilometer] [Horizon] [TripleO] Nodes Management UI - designs

2014-05-28 Thread Tzu-Mainn Chen
Hi Jarda,

These look pretty good!  However, I'm having trouble evaluating from a purely
functional point of view, as I'm not entirely sure what requirements are
driving these designs.  Would it be possible to list those out. . . ?

Thanks,
Tzu-Mainn Chen

- Original Message -
> Hi All,
> 
> There are a lot of tags in the subject of this e-mail but believe me that
> all listed projects (and even more) are relevant for the designs which I
> am sending out.
> 
> Nodes management section in Horizon has been expected for a while, and
> finally I am sharing the results of designing around it.
> 
> http://people.redhat.com/~jcoufal/openstack/horizon/nodes/2014-05-28_nodes-ui.pdf
> 
> These views are based on modular approach and combination of multiple
> services together; for example:
> * Ironic - HW details and management
> * Ceilometer - Monitoring graphs
> * TripleO/Tuskar - Deployment Roles
> etc.
> 
> Whenever some service is missing, that particular functionality should
> be disabled and not displayed to a user.
> 
> I am sharing this without any bigger description so that I can get
> feedback whether people can get oriented in the UI without hints. Of
> course you cannot get each and every detail without exploring, having
> tooltips, etc. But the goal for each view is to manage to express at
> least the main purpose without explanation. If it does not, it needs to
> be fixed.
> 
> Next week I will organize a recorded broadcast where I will walk you
> through the designs, explain high-level vision, details and I will try
> to answer questions if you have any. So feel free to comment anything or
> ask whatever comes to your mind here in this thread, so that I can cover
> your concerns. Any feedback is very welcome - positive so that I know
> what you think that works, as well as negative so that we can improve
> the result before implementation.
> 
> Thank you all
> -- Jarda
> 


Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-28 Thread Tzu-Mainn Chen
Hi Doug,

Thanks for the response!  I agree with you in the cases where we are extending
things like panels; if you're extending those, you're extending the dashboard
itself.  However, things such as workflows feel like they could reasonably live
independently of the dashboard for re-use elsewhere.

Incidentally, I know that within openstack_dashboard there are cases where, say,
the admin dashboard extends instances tables from the project dashboard.  That
feels a bit odd to me; wouldn't it be cleaner to have both dashboards extend
some common instances table that lives independently of either dashboard?

Thanks,
Tzu-Mainn Chen

- Original Message -
> Hey Tzu-Mainn,
> 
> I've actually discouraged people from doing this sort of thing when
> customizing Horizon.  IMO it's risky to extend those panels because they
> really aren't intended as extension points.  We intend Horizon to be
> extensible by adding additional panels or dashboards.  I know you are
> closely involved in Horizon development, so you are better able to manage
> that better than most customizers.
> 
> Still, I wonder if we can better address this for Tuskar-UI as well as
> other situations by defining extensibility points in the dashboard panels
> and workflows themselves.  Like well defined ways to add/show a column of
> data, add/hide row actions, add/skip a workflow step, override text
> elements, etc.  Is it viable to create a few well defined extension points
> and meet your need to modify existing dashboard panels?
> 
> In any case, it seems to me that if you are overriding the dashboard
> panels, it's reasonable that tuskar-ui should be dependent on the
> dashboard.
> 
> Doug Fish
> 
> 
> 
> 
> 
> From: Tzu-Mainn Chen 
> To:   "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/28/2014 11:40 AM
> Subject:  [openstack-dev] [Horizon][Tuskar-UI] Location for common
> dashboard code?
> 
> 
> 
> Heya,
> 
> Tuskar-UI is currently extending classes directly from openstack-dashboard.
> For example, right now
> our UI for Flavors extends classes in both
> openstack_dashboard.dashboards.admin.flavors.tables and
> openstack_dashboard.dashboards.admin.flavors.workflows.  In the future,
> this sort of pattern will
> increase; we anticipate doing similar things with Heat code in
> openstack-dashboard.
> 
> However, since tuskar-ui is intended to be a separate dashboard that has
> the potential to live
> away from openstack-dashboard, it does feel odd to directly extend
> openstack-dashboard dashboard
> components.  Is there a separate place where such code might live?
> Something similar in concept
> to
> https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage
> ?
> 
> 
> Thanks,
> Tzu-Mainn Chen
> 


[openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-28 Thread Tzu-Mainn Chen
Heya,

Tuskar-UI is currently extending classes directly from openstack-dashboard.
For example, right now our UI for Flavors extends classes in both
openstack_dashboard.dashboards.admin.flavors.tables and
openstack_dashboard.dashboards.admin.flavors.workflows.  In the future, this
sort of pattern will increase; we anticipate doing similar things with Heat
code in openstack-dashboard.

However, since tuskar-ui is intended to be a separate dashboard that has the
potential to live away from openstack-dashboard, it does feel odd to directly
extend openstack-dashboard dashboard components.  Is there a separate place
where such code might live?  Something similar in concept to
https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage ?


Thanks,
Tzu-Mainn Chen



[openstack-dev] [TripleO][Tuskar][Summit] Tuskar Session Etherpad

2014-05-07 Thread Tzu-Mainn Chen
Hey all,

Here's a link to the etherpad for the summit session on Tuskar:

https://etherpad.openstack.org/p/juno-summit-tripleo-tuskar-planning

Please feel free to discuss or add suggestions or make changes!

Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [Horizon] Question regarding new panels and tests

2014-04-23 Thread Tzu-Mainn Chen
> I have a question about adding new panels with respect to tests.
> 
> I have a new panel (for the Sahara project, data processing) that lives (for
> now at least) under the projects folder.  The plan is to have it activated
> only by using the new "enabled" mechanism to define a new panel group, "data
> processing", where it and several other panels will eventually show up.
> 
> When I did this, it works fine for me when I run it, but running the tests
> (tox -e py27) fails because my panel is not registered anywhere.  Is there a
> config for the tests that I can tweak so that my panel can be registered for
> the tests (tests/enabled or something like that?
> 
> Thanks,
> Chad
> 

Hiya - I haven't tested this, but one thing you could try. . . if you look at 
horizon/openstack_dashboard/settings.py, there's a section that determines
where the dashboard looks for plugins 
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/settings.py#L220)

   # Load the pluggable dashboard settings
   import openstack_dashboard.enabled
   import openstack_dashboard.local.enabled
   from openstack_dashboard.utils import settings

   INSTALLED_APPS = list(INSTALLED_APPS) # Make sure it's mutable
   settings.update_dashboards([
   openstack_dashboard.enabled,
   openstack_dashboard.local.enabled,
   ], HORIZON_CONFIG, INSTALLED_APPS)

If you edit horizon/openstack_dashboard/settings.py to add a similar section,
only using, say, openstack_dashboard.test.local.enabled, you may be able to 
enable
the panels for the tests there.
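
Something like this, perhaps (untested, and assuming an
openstack_dashboard/test/local/enabled/ directory exists for it to scan):

   # Load the pluggable dashboard settings, including test-local panels
   import openstack_dashboard.enabled
   import openstack_dashboard.local.enabled
   import openstack_dashboard.test.local.enabled
   from openstack_dashboard.utils import settings

   INSTALLED_APPS = list(INSTALLED_APPS)  # Make sure it's mutable
   settings.update_dashboards([
       openstack_dashboard.enabled,
       openstack_dashboard.local.enabled,
       openstack_dashboard.test.local.enabled,
   ], HORIZON_CONFIG, INSTALLED_APPS)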

Mainn





[openstack-dev] [Tuskar][TripleO] Tuskar Planning for Juno

2014-04-07 Thread Tzu-Mainn Chen
Hi all,

One of the topics of discussion during the TripleO midcycle meetup a few weeks
ago was the direction we'd like to take Tuskar during Juno.  Based on the ideas
presented there, we've created a tentative list of items we'd like to address:

https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning 

Please feel free to take a look and question, comment, or criticize!


Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Tzu-Mainn Chen
> Hi OpenStackers,
> 
> User interface which is managing the OpenStack Infrastructure is
> currently named Tuskar-UI because of historical reasons. Tuskar itself
> is a small service, which is giving logic into generating and managing
> Heat templates and helps user to model and manage his deployment. The
> user interface, which is the subject of this call, is based on TripleO
> approach and resembles OpenStack Dashboard (Horizon) with the way of how
> it consumes other services. The UI is consuming not just Tuskar API, but
> also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc in order
> to design, deploy, manage and monitor your OpenStack deployments.
> Because of this I find the name Tuskar-UI improper (it's more closer to
> say TripleO-UI) and I would like the community to help to find better
> name for it. After brainstorming, we can start voting on the final
> project's name.
> 
> https://etherpad.openstack.org/p/openstack-management-ui-names
> 
> Thanks
> -- Jarda (jcoufal)

Thanks for starting this thread!  I wonder if it might make sense to have
some of this discussion during the weekly horizon meeting, as that might
help clarify whether there are existing or desired policies around UI
naming.

Mainn



Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Tzu-Mainn Chen
> Hello,
> 
> I think if we use the OpenStack CLI, it has to be something like this:
> https://github.com/dtroyer/python-oscplugin.
> Otherwise we are not OpenStack on OpenStack.
> 
> Btw. abstracting it all into one big CLI will just be more confusing when
> people debug issues, so it would have to be done very well.
> 
> E.g. calling 'openstack-client net-create' fails.
> Where do you find the error log?
> Are you using nova-networking or Neutron?
> ..
> 
> Call 'neutron net-create' and you just know.
> 
> Btw. who would actually hire a sysadmin who will start to use a CLI and
> have no idea what he is doing? They need to know what each service does,
> how to correctly use them, and how to debug them when something is wrong.
> 
> 
> For flavors just use flavors; we call them flavors in the code too. It
> just has a nicer face in the UI.

Actually, don't we call them node_profiles in the UI code?  Personally,
I'd much prefer that we call them flavors in the code.

Mainn



Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Tzu-Mainn Chen
Multiple flavors, but a single flavor per role, correct?

Mainn

- Original Message -
> I think we still are going to multiple flavors for I, e.g.:
> https://review.openstack.org/#/c/74762/
> On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> > 
> > On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > > Hi.
> > >
> > > While implementing CRUD operations for node profiles in Tuskar (which
> > > are essentially Nova flavors renamed) I encountered editing of flavors
> > > and I have some doubts about it.
> > >
> > > Editing of nova flavors in Horizon is implemented as
> > > deleting-then-creating with a _new_ flavor ID.
> > > For us it essentially means that all links to flavor/profile (e.g. from
> > > overcloud role) will become broken. We had the following proposals:
> > > - Update links automatically after editing by e.g. fetching all
> > > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > > concurrent editing of either node profiles or overcloud roles.
> > >Even worse, are we sure that user really wants overcloud roles to be
> > > updated?
> > 
> > This is a big question. Editing has always been a complicated concept in
> > Tuskar. How soon do you want the effects of the edit to be made live?
> > Should it only apply to future creations or should it be applied to
> > anything running off the old configuration? What's the policy on how to
> > apply that (canary v. the-other-one-i-cant-remember-the-name-for v.
> > something else)?
> > 
> > > - The same as previous but with confirmation from user. Also risk of
> > > race conditions.
> > > - Do not update links. User may be confused: operation called "edit"
> > > should not delete anything, nor is it supposed to invalidate links. One
> > > of the ideas was to show also deleted flavors/profiles in a separate
> > > table.
> > > - Implement clone operation instead of editing. Shows user a creation
> > > form with data prefilled from original profile. Original profile will
> > > stay and should be deleted manually. All links also have to be updated
> > > manually.
> > > - Do not implement editing, only creating and deleting (that's what I
> > > did for now in https://review.openstack.org/#/c/73576/ ).
> > 
> > I'm +1 on not implementing editing. It's why we wanted to standardize on
> > a single flavor for Icehouse in the first place, the use cases around
> > editing or multiple flavors are very complicated.
> > 
> > > Any ideas on what to do?
> > >
> > > Thanks in advance,
> > > Dmitry Tantsur
> > >
> > >


[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2014-02-05 Thread Tzu-Mainn Chen
Hi,

In parallel to Jarda's updated wireframes, and based on various discussions 
over the past
weeks, here are the updated Tuskar requirements for Icehouse:

https://wiki.openstack.org/wiki/TripleO/TuskarIcehouseRequirements

Any feedback is appreciated.  Thanks!

Tzu-Mainn Chen



Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Tzu-Mainn Chen
> On 1 February 2014 10:03, Tzu-Mainn Chen  wrote:
> > So after reading the replies on this thread, it seems like I (and others
> > advocating
> > a custom scheduler) may have overthought things a bit.  The reason this
> > route was
> > suggested was because of conflicting goals for Icehouse:
> >
> > a) homogeneous nodes (to simplify requirements)
> > b) support diverse hardware sets (to allow as many users as possible to try
> > Tuskar)
> 
> > Option b) requires either a custom scheduler or forcing nodes to have the
> > same attributes,
> > and the answer to that question is where much of the debate lies.
> 
> Not really. It all depends on how you define 'support diverse hardware
> sets'. The point I've consistently made is that by working within the
> current scheduler we can easily deliver homogeneous support *within* a
> given 'service role'. So that is (a), not 'every single node is
> identical'.
> 
> A (b) of supporting arbitrary hardware within a single service role is
> significantly more complex, and while I think it's entirely doable, it
> would be a mistake to tackle this within I (and possibly J). I don't
> think users will be impaired by us delaying however.
> 
> > However, taking a step back, maybe the real answer is:
> >
> > a) homogeneous nodes
> > b) document. . .
> >- **unsupported** means of "demoing" Tuskar (set node attributes to
> >match flavors, hack
> >  the scheduler, etc)
> >- our goals of supporting heterogeneous nodes for the J-release.
> >
> > Does this seem reasonable to everyone?
> 
> No, because a) is overly scoped.
> 
> I think we should have a flavor attribute in the definition of a
> service role, and no unsupported hacks needed; and J goals should be
> given a chunk of time to refine in Atlanta.

Fair enough.  It's my fault for being imprecise, but in my email I meant
"homogeneous" as "homogeneous per service role".

That being said, are people on board with:

a) a single flavor per service role for Icehouse?
b) documentation as suggested above?

A single flavor per service role shouldn't be significantly harder than a
single flavor for all service roles (multiple flavors per service role is
where tricky issues start to creep in).

Mainn

> -Rob
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-31 Thread Tzu-Mainn Chen
So after reading the replies on this thread, it seems like I (and others
advocating a custom scheduler) may have overthought things a bit.  The reason
this route was suggested was because of conflicting goals for Icehouse:

a) homogeneous nodes (to simplify requirements)
b) support diverse hardware sets (to allow as many users as possible to try Tuskar)

Option b) requires either a custom scheduler or forcing nodes to have the same
attributes, and the answer to that question is where much of the debate lies.

However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document. . .
   - **unsupported** means of "demoing" Tuskar (set node attributes to match
     flavors, hack the scheduler, etc)
   - our goals of supporting heterogeneous nodes for the J-release.

Does this seem reasonable to everyone?


Mainn

- Original Message -
> On 30 January 2014 23:26, Tomas Sedovic  wrote:
> > Hi all,
> >
> > I've seen some confusion regarding the homogenous hardware support as the
> > first step for the tripleo UI. I think it's time to make sure we're all on
> > the same page.
> >
> > Here's what I think is not controversial:
> >
> > 1. Build the UI and everything underneath to work with homogenous hardware
> > in the Icehouse timeframe
> > 2. Figure out how to support heterogenous hardware and do that (may or may
> > not happen within Icehouse)
> >
> > The first option implies having a single nova flavour that will match all
> > the boxes we want to work with. It may or may not be surfaced in the UI (I
> > think that depends on our undercloud installation story).
> 
> I don't agree that (1) implies a single nova flavour. In the context
> of the discussion it implied avoiding doing our own scheduling, and
> due to the many moving parts we never got beyond that.
> 
> My expectation is that (argh naming of things) a service definition[1]
> will specify a nova flavour, right from the get go. That gives you
> homogeneous hardware for any service
> [control/network/block-storage/object-storage].
> 
> Jaromir's wireframes include the ability to define multiple such
> definitions, so two definitions for compute, for instance (e.g. one
> might be KVM, one Xen, or one w/GPUs and the other without, with a
> different host aggregate configured).
> 
> As long as each definition has a nova flavour, users with multiple
> hardware configurations can just create multiple definitions, done.
> 
> That is not entirely policy driven, so for longer term you want to be
> able to say 'flavour X *or* Y can be used for this', but as an early
> iteration it seems very straightforward to me.
> 
> > Now, someone (I don't honestly know who or when) proposed a slight step up
> > from point #1 that would allow people to try the UI even if their hardware
> > varies slightly:
> 
> > 1.1 Treat similar hardware configuration as equal
> 
> I think this is a problematic idea, because of the points raised
> elsewhere in the thread.
> 
> But more importantly, it's totally unnecessary. If one wants to handle
> minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
> them as being identical, with the lowest common denominator - Nova
> will then treat them as equal.
> 
> -Rob
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tzu-Mainn Chen
- Original Message -
> On 30 January 2014 23:26, Tomas Sedovic  wrote:
> > Hi all,
> >
> > I've seen some confusion regarding the homogenous hardware support as the
> > first step for the tripleo UI. I think it's time to make sure we're all on
> > the same page.
> >
> > Here's what I think is not controversial:
> >
> > 1. Build the UI and everything underneath to work with homogenous hardware
> > in the Icehouse timeframe
> > 2. Figure out how to support heterogenous hardware and do that (may or may
> > not happen within Icehouse)
> >
> > The first option implies having a single nova flavour that will match all
> > the boxes we want to work with. It may or may not be surfaced in the UI (I
> > think that depends on our undercloud installation story).
> 
> I don't agree that (1) implies a single nova flavour. In the context
> of the discussion it implied avoiding doing our own scheduling, and
> due to the many moving parts we never got beyond that.
> 
> My expectation is that (argh naming of things) a service definition[1]
> will specify a nova flavour, right from the get go. That gives you
> homogeneous hardware for any service
> [control/network/block-storage/object-storage].
> 
> Jaromir's wireframes include the ability to define multiple such
> definitions, so two definitions for compute, for instance (e.g. one
> might be KVM, one Xen, or one w/GPUs and the other without, with a
> different host aggregate configured).
> 
> As long as each definition has a nova flavour, users with multiple
> hardware configurations can just create multiple definitions, done.
> 
> That is not entirely policy driven, so for longer term you want to be
> able to say 'flavour X *or* Y can be used for this', but as an early
> iteration it seems very straightforward to me.
> 
> > Now, someone (I don't honestly know who or when) proposed a slight step up
> > from point #1 that would allow people to try the UI even if their hardware
> > varies slightly:
> 
> > 1.1 Treat similar hardware configuration as equal
> 
> I think this is a problematic idea, because of the points raised
> elsewhere in the thread.
> 
> But more importantly, it's totally unnecessary. If one wants to handle
> minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
> them as being identical, with the lowest common denominator - Nova
> will then treat them as equal.

Thanks for the reply!  So if I understand correctly, the proposal is for:

Icehouse: one flavor per service role, so nodes are homogeneous per role
J: multiple flavors per service role

That sounds reasonable; the part that gives me pause is when you talk about
handling variations in hardware by registering the nodes as equal.  If those
differences vanish, then won't there be problems in the future when we might
be able to properly handle those variations?

Or do you propose that we only allow minor variations to be registered as
equal, so that the UI has to understand the concept of minor variances?

Mainn



Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tzu-Mainn Chen
Wouldn't lying about the hardware specs when registering nodes be problematic
for upgrades? Users would have to re-register their nodes.

One reason why a custom filter feels attractive is that it provides us with a
clear upgrade path:

Icehouse
* nodes are registered with correct attributes
* create a custom scheduler filter that allows any node to match
* users are informed that, for this release, Tuskar will not differentiate
  between heterogeneous hardware

J-Release
* implement the proper use of flavors within Tuskar, allowing Tuskar to
  work with heterogeneous hardware
* work with nova regarding scheduler filters (if needed)
* remove the custom scheduler filter
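[For concreteness, such a pass-through filter could be as small as the
following sketch, assuming the Icehouse-era Nova scheduler filter interface;
the class name is hypothetical:]

    # Hypothetical sketch of the proposed Icehouse-only custom filter,
    # assuming the Icehouse-era Nova scheduler filter interface.
    from nova.scheduler import filters


    class AnyNodeFilter(filters.BaseHostFilter):
        """Deliberately permissive: let any registered node match."""

        def host_passes(self, host_state, filter_properties):
            # Every node passes, so Tuskar does not differentiate
            # between heterogeneous hardware in this release.
            return True

[Enabling it would presumably just mean listing it in nova.conf's
scheduler_default_filters, and the J-release step is then simply deleting
it again.]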

Mainn 

- Original Message -

> As far as nova-scheduler and Ironic go, I believe this is a solved problem.
> Steps are:
> - enroll hardware with proper specs (CPU, RAM, disk, etc)
> - create flavors based on hardware specs
> - scheduler filter matches requests exactly

> There are, I suspect, three areas where this would fall short today:
> - exposing to the user when certain flavors shouldn't be picked, because
> there is no more hardware available which could match it
> - ensuring that hardware is enrolled with the proper specs //
> trouble-shooting when it is not
> - a UI that does these well

> If I understand your proposal correctly, you're suggesting that we introduce
> non-deterministic behavior. If the scheduler filter falls back to >$flavor
> when $flavor is not available, even if the search is in ascending order and
> upper-bounded by some percentage, the user is still likely to get something
> other than what they requested. From a utilization and inventory-management
> standpoint, this would be a headache, and from a user standpoint, it would
> be awkward. Also, your proposal is only addressing the case where hardware
> variance is small; it doesn't include a solution for deployments with
> substantially different hardware.

> I don't think introducing a non-deterministic hack when the underlying
> services already work, just to provide a temporary UI solution, is
> appropriate. But that's just my opinion.

> Here's an alternate proposal to support same-arch but different cpu/ram/disk
> hardware environments:
> - keep the scheduler filter doing an exact match
> - have the UI only allow the user to define one flavor, and have that be the
> lowest common denominator of available hardware
> - assign that flavor's properties to all nodes -- basically lie about the
> hardware specs when enrolling them
> - inform the user that, if they have heterogeneous hardware, they will get
> randomly chosen nodes from their pool, and that scheduling on heterogeneous
> hardware will be added in a future UI release
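[A tiny sketch of that lowest-common-denominator computation; pure
illustration, and the field names are hypothetical rather than any existing
Tuskar/Ironic schema:]

    # Illustrative only: compute the "lowest common denominator" flavor
    # from the real specs of a heterogeneous pool.
    def lowest_common_denominator(nodes):
        return {
            'cpus': min(n['cpus'] for n in nodes),
            'ram_mb': min(n['ram_mb'] for n in nodes),
            'disk_gb': min(n['disk_gb'] for n in nodes),
        }

    pool = [
        {'cpus': 8, 'ram_mb': 16384, 'disk_gb': 1000},
        {'cpus': 8, 'ram_mb': 24576, 'disk_gb': 1500},
    ]
    print(lowest_common_denominator(pool))
    # -> {'cpus': 8, 'ram_mb': 16384, 'disk_gb': 1000}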

> This will allow folks who are using TripleO at the commandline to take
> advantage of their heterogeneous hardware, instead of crippling
> already-existing functionality, while also allowing users who have slightly
> (or wildly) different hardware specs to still use the UI.

> Regards,
> Devananda

> On Thu, Jan 30, 2014 at 7:14 AM, Tomas Sedovic < tsedo...@redhat.com > wrote:

> > On 30/01/14 15:53, Matt Wagner wrote:
> > > On 1/30/14, 5:26 AM, Tomas Sedovic wrote:
> > > > Hi all,
> > > >
> > > > I've seen some confusion regarding the homogenous hardware support as
> > > > the first step for the tripleo UI. I think it's time to make sure we're
> > > > all on the same page.
> > > >
> > > > Here's what I think is not controversial:
> > > >
> > > > 1. Build the UI and everything underneath to work with homogenous
> > > > hardware in the Icehouse timeframe
> > > > 2. Figure out how to support heterogenous hardware and do that (may or
> > > > may not happen within Icehouse)
> > > >
> > > > The first option implies having a single nova flavour that will match
> > > > all the boxes we want to work with. It may or may not be surfaced in
> > > > the UI (I think that depends on our undercloud installation story).
> > > >
> > > > Now, someone (I don't honestly know who or when) proposed a slight step
> > > > up from point #1 that would allow people to try the UI even if their
> > > > hardware varies slightly:
> > > >
> > > > 1.1 Treat similar hardware configuration as equal
> > > >
> > > > The way I understand it is this: we use a scheduler filter that
> > > > wouldn't do a strict match on the hardware in Ironic. E.g. if our
> > > > baremetal flavour said 16GB ram and 1TB disk, it would also match a
> > > > node with 24GB ram or 1.5TB disk.
> > > >
> > > > The UI would still assume homogenous hardware and treat it as such.
> > > > It's just that we would allow for small differences.
> > > >
> > > > This *isn't* proposin[...]

Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Tzu-Mainn Chen
Yep, although the reason why - that no end-user will know what these terms
mean - has never been entirely convincing to me.  But even if we don't use
the word 'overcloud', I think we should use *something*.  Deployment is just
so vague that without some context, it could refer to anything.

As a side note, the original terminology thread ended with a general upstream
consensus that we should "call things what they are in OpenStack".  That's why
the 'deployment' model is actually called 'overcloud' in the UI/api; others
strongly favored using that term to make it clear to developers what we
were modeling.

Part of the difficulty here is the perception that developers and end-users
have different needs when it comes to terminology.


Mainn

- Original Message -
> I thought we were avoiding using overcloud and undercloud within the UI?
> 
> -J
> 
> On 01/28/2014 03:04 AM, Tzu-Mainn Chen wrote:
> > I've spent some time thinking about this, and I have a clarification.
> > 
> > I don't like the use of the word 'deployment', because it's not exact
> > enough for me.  Future plans for the tuskar-ui include management of the
> > undercloud as well, and at that point, 'deployment role' becomes vague, as
> > it could also logically apply to the undercloud.
> > 
> > For that reason, I think we should call it an 'overcloud deployment role',
> > or 'overcloud role' for short.
> > 
> > That being said, I think that the UI could get away with just displaying
> > 'Role', as presumably the user would be in a page with enough context to
> > make it clear that he's in the overcloud section.
> > 
> > 
> > Mainn
> > 
> > - Original Message -
> >> I'd argue that we should call it 'overcloud role' - at least from the
> >> modeling
> >> point of view - since the tuskar-api calls a deployment an overcloud.
> >>
> >> But I like the general direction of the term-renaming!
> >>
> >> Mainn
> >>
> 
> 
> 
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack Management UI
> Red Hat, Inc.
> +1.919.754.4048
> Freenode: jrist
> github/identi.ca: knowncitizen
> 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Tzu-Mainn Chen
I've spent some time thinking about this, and I have a clarification.

I don't like the use of the word 'deployment', because it's not exact
enough for me.  Future plans for the tuskar-ui include management of the
undercloud as well, and at that point, 'deployment role' becomes vague, as
it could also logically apply to the undercloud.

For that reason, I think we should call it an 'overcloud deployment role',
or 'overcloud role' for short.

That being said, I think that the UI could get away with just displaying
'Role', as presumably the user would be in a page with enough context to
make it clear that he's in the overcloud section.


Mainn

- Original Message -
> I'd argue that we should call it 'overcloud role' - at least from the
> modeling
> point of view - since the tuskar-api calls a deployment an overcloud.
> 
> But I like the general direction of the term-renaming!
> 
> Mainn
> 
> - Original Message -
> > Based on this thread, which didn't seem to reach a clear outcome, I have
> > one last suggestion:
> > 
> > * Deployment Role
> > 
> > It looks like it might satisfy the participants of this discussion. When
> > I talked to people internally, it got the best reactions of the already
> > suggested terms.
> > 
> > Depending on your reactions to this suggestion, if we don't reach
> > majority agreement by the end of the week, I will call for a vote
> > starting next week.
> > 
> > Thanks
> > -- Jarda
> > 
> > On 2014/21/01 15:19, Jaromir Coufal wrote:
> > > Hi folks,
> > >
> > > when I was getting feedback on wireframes and we talked about Roles,
> > > there were various objections and not much suggestions. I would love to
> > > call for action and think a bit about the term for concept currently
> > > known as Role (= Resource Category).
> > >
> > > Definition:
> > > Role is a representation of a group of nodes, with specific behavior.
> > > Each role contains (or will contain):
> > > * one or more Node Profiles (which specify HW which is going in)
> > > * association with image (which will be provisioned on new coming nodes)
> > > * specific service settings
> > >
> > > So far suggested terms:
> > > * Role *
> > >- short name - plus points
> > >- quite overloaded term (user role, etc)
> > >
> > > * Resource Category *
> > >- pretty long (devs already shorten it - confusing)
> > >- Heat specific term
> > >
> > > * Resource Class *
> > >- older term
> > >
> > > Are there any other suggestions (ideally something short and accurate)?
> > > Or do you prefer any of already suggested terms?
> > >
> > > Any ideas are welcome - we are not very good in finding the best match
> > > for this particular term.
> > >
> > > Thanks
> > > -- Jarda
> > >


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-27 Thread Tzu-Mainn Chen
I'd argue that we should call it 'overcloud role' - at least from the modeling
point of view - since the tuskar-api calls a deployment an overcloud.

But I like the general direction of the term-renaming!

Mainn

- Original Message -
> Based on this thread, which didn't seem to reach a clear outcome, I have
> one last suggestion:
> 
> * Deployment Role
> 
> It looks like it might satisfy the participants of this discussion. When
> I talked to people internally, it got the best reactions of the already
> suggested terms.
> 
> Depending on your reactions to this suggestion, if we don't reach
> majority agreement by the end of the week, I will call for a vote
> starting next week.
> 
> Thanks
> -- Jarda
> 
> On 2014/21/01 15:19, Jaromir Coufal wrote:
> > Hi folks,
> >
> > when I was getting feedback on wireframes and we talked about Roles,
> > there were various objections and not much suggestions. I would love to
> > call for action and think a bit about the term for concept currently
> > known as Role (= Resource Category).
> >
> > Definition:
> > Role is a representation of a group of nodes, with specific behavior.
> > Each role contains (or will contain):
> > * one or more Node Profiles (which specify HW which is going in)
> > * association with image (which will be provisioned on new coming nodes)
> > * specific service settings
> >
> > So far suggested terms:
> > * Role *
> >- short name - plus points
> >- quite overloaded term (user role, etc)
> >
> > * Resource Category *
> >- pretty long (devs already shorten it - confusing)
> >- Heat specific term
> >
> > * Resource Class *
> >- older term
> >
> > Are there any other suggestions (ideally something short and accurate)?
> > Or do you prefer any of already suggested terms?
> >
> > Any ideas are welcome - we are not very good in finding the best match
> > for this particular term.
> >
> > Thanks
> > -- Jarda
> >


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-23 Thread Tzu-Mainn Chen


- Original Message -
> On 23 January 2014 21:39, Jaromir Coufal  wrote:
> > On 2014/22/01 19:46, Tzu-Mainn Chen wrote:
> 
> >
> > So... For now, the attributes are:
> > - Name
> > - Description
> > - Image (Image was discussed on the level of a Role, not Node Profile)
> > - Node Profile(s)
> >
> > Future:
> > + Service Specific Configuration (?)
> > + Policies (spin up new instance, if...)
> 
> http://docs.openstack.org/developer/heat/template_guide/openstack.html
> 
> Is the list of 'resource types' in heat. You can see that a resource
> is [roughly] anything that can be addressed by an API. The instances
> we deploy are indeed resources, but not all resources are instances.
> 
> It seems to me that there are two ways to think about the naming problem.
> 
> A) What if we were writing (we're not, but this is a gedanken) a
> generic Heat deployment designer.
> 
> B) What if we are not :)
> 
> If we are, not only should we use heat terms, we should probably use
> the most generic ones, because we need to expose all of heat.
> 
> However, we aren't. So while I think we *should* use heat terms when
> referring to something heat based, we don't need to use the most
> generic such term: Instances is fine to describe what we deploy.
> Instances on nodes.

The issue I have with Instances is that it gets fairly confusing when
working within the UI.  From the UI, we have calls to the Heat client
grabbing Resources; we also have calls to the Nova client from which we
get Instances.  When we get information about the Overcloud, we query
from Stack -> Resources -> Instance -> Node.

So calling it Instance would imply (to me) that a Resource has no
specificity - which I don't think is the case, as there are attributes of
a Heat Resource that mark it as a Compute/Controller/whatever.  Calling it
Instance and explaining that it *does* apply to a Heat resource (but that
we just renamed it) feels simply confusing.
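[A hedged sketch of that lookup chain, assuming already-authenticated
python-heatclient/novaclient/ironicclient objects and that each Resource is
an OS::Nova::Server whose physical_resource_id is the Nova instance UUID:]

    # Sketch of the Stack -> Resources -> Instance -> Node lookup chain.
    # heat / nova / ironic are assumed to be authenticated client objects.
    def walk_overcloud(heat, nova, ironic, stack='overcloud'):
        for res in heat.resources.list(stack):                   # Resources
            server = nova.servers.get(res.physical_resource_id)  # Instance
            node = ironic.node.get_by_instance_uuid(server.id)   # Node
            yield res.resource_name, server.name, node.uuid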

Mainn

> Separately, what we've got in the template is essentially a tree:
> 
> root:
>  parameters:
>  resources:
>   thing:
> type: OS::Service::Thing
> ...
>   thing2:
> type: OS::Service::Thing
> 
> And Tuskar's job is to hide the plumbing from that tree (e.g. that we
> need an OS::Heat::AccessPolicy there, because there is a single right
> answer for our case, and we can programatically generate it.
> 
> The implementation is going to change as we move from merge.py to HOT
> etc, but in principle we have one key under resources for each thing
> that we scale out.
> 
> I don't know if that helps with the naming of things,but there you go :)
> 
> -Rob
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen


- Original Message -
> Oh dear user... :)
> 
> I'll step a little bit back. We need to agree on whether we want to name
> concepts one way in the background and another way in the UI for the user
> (did we already agree on this point?). We all know the pros and cons. And I
> will still fight for users to get global infrastructure terminology (e.g.
> he is going to define Node Profiles instead of Flavors). Because I received

Jarda, side point - could you explain again what the attributes of a node
profile are?  Beyond the Flavor, does it also define an image...?

Mainn


> a lot of negative feedback on mixing overcloud terms into the undercloud,
> confusion about the overcloud/undercloud term itself, etc. If it would be
> easier for developers to name the concepts in the background differently,
> then that's fine - we just need to talk about 2 terms per concept then.
> And I would be a bit afraid of schizophrenia...
> 
> 
> On 2014/22/01 15:10, Tzu-Mainn Chen wrote:
> > That's a fair question; I'd argue that it *should* be resources.  When we
> > update an overcloud deployment, it'll create additional resources.
> 
> Honestly, it would get super confusing for me if somebody told me - you
> have 5 compute resources. (And I am talking from the user's world, not
> from the developer's one.) But a resource itself can be anything.
> 
> -- Jarda
> 


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-01-22 Thread Tzu-Mainn Chen
Thanks for the update!

One question - is it possible to move the "Deploying Status" bar from page 12
to page 14?

The reason is that the former looks like a "Deployment Creation" page, while
the latter is a "Deployment Detail" page.  For me, the former is about
sending parameters to the API to create an overcloud; the latter is for
information about an already created overcloud.  And if we have a
'deployment status' then we already have an overcloud (a stack with a
CREATE_IN_PROGRESS status).

There are also other times when the detail page might need that status bar -
when one user scales the deployment upwards, say - so I think it makes sense
to establish the status bar there.

If we do move the Deploying Status bar, then maybe we can also include the
expected Resource count in the details page?  Right now, that page only
shows the actual Resource count (from Heat); adding the expected Resource
count (from Tuskar) could provide verification that expectations match
reality.

Mainn


- Original Message -
> Hey everybody,
> 
> I am sending updated wireframes.
> 
> http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-22_tripleo-ui-icehouse.pdf
> 
> Updates:
> * p15-18 for down-scaling deployment
> 
> Any questions are welcome, I am happy to answer them.
> -- Jarda
> 
> 
> On 2014/16/01 01:50, Jaromir Coufal wrote:
> > Hi folks,
> >
> > thanks everybody for feedback. Based on that I updated wireframes and
> > tried to provide a minimum scope for Icehouse timeframe.
> >
> > http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf
> >
> >
> > Hopefully we are able to deliver described set of features. But if you
> > find something what is missing which is critical for the first release
> > (or that we are implementing a feature which should not have such high
> > priority), please speak up now.
> >
> > The wireframes are very close to implementation. In time, there will
> > appear more views and we will see if we can get them in as well.
> >
> > Thanks all for participation
> > -- Jarda
> >


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen
> On Jan 22, 2014, at 4:02 AM, Jaromir Coufal  wrote:
> 
> > 
> > 
> > On 2014/22/01 00:56, Tzu-Mainn Chen wrote:
> >> Hiya - Resource is actually a Heat term that corresponds to what we're
> >> deploying within
> >> the Overcloud Stack - i.e., if we specify that we want an Overcloud with 1
> >> Controller
> >> and 3 Compute, Heat will create a Stack that contains 1 Controller and 3
> >> Compute
> >> Resources.
> > 
> > Then a quick question - why do we design deployment by
> > increasing/decreasing number of *instances* instead of resources?
> 
> Yeah, great question Jarda. When I test out the “Stacks” functionality in
> Horizon the user doesn’t create a Stack that spins up resources, it spins up
> instances. Maybe there is a difference around the terms being used behind
> the scenes and in Horizon?

Maybe we're looking at different parts of the UI, but when I look at a Stack
detail page in Horizon, I see a tab for Resources, and not Instances.  The
resource table might link to an Instance, but that information is retrieved
from the Resource.

Mainn

> Liz
> 
> > 
> > -- Jarda
> > 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen
> On Jan 22, 2014, at 7:09 AM, Jaromir Coufal  wrote:
> 
> > 
> > 
> > On 2014/22/01 10:00, Jaromir Coufal wrote:
> >> 
> >> 
> >> On 2014/22/01 00:56, Tzu-Mainn Chen wrote:
> >>> Hiya - Resource is actually a Heat term that corresponds to what we're
> >>> deploying within
> >>> the Overcloud Stack - i.e., if we specify that we want an Overcloud
> >>> with 1 Controller
> >>> and 3 Compute, Heat will create a Stack that contains 1 Controller and
> >>> 3 Compute
> >>> Resources.
> >> 
> >> Then a quick question - why do we design deployment by
> >> increasing/decreasing number of *instances* instead of resources?
> >> 
> >> -- Jarda
> > 
> > And one more thing - Resource is very broad term as well as Role is. The
> > only difference is that Heat accepted 'Resource' as specific term for them
> > (you see? they used broad term for their concept). So I am asking myself,
> > where is difference between generic term Resource and Role? Why cannot we
> > accept Roles? It's short, well describing...
> > 
> > I am leaning towards Role. We can be more specific with adding some extra
> > word, e.g.:
> > * Node Role
> 
> +1 to Node Role. I agree that “role” is being used as a generic term here.
> I’m still convinced it’s important to use “Node” in the name since this is
> the item we are describing by assigning it a certain type of role.

I'm *strongly* against Node Role.  In Ironic, a Node has no explicit Role
assigned to it; whatever Role it has is implicit through the Instance
running on it (which maps to a Heat Resource).

In that sense, we're not really monitoring Nodes; we're monitoring
Resources, and a Node "just happens" to be one attribute of a Resource.

Mainn

> Liz
> 
> > * Deployment Role
> > ... and if we are in the context of undercloud, people can shorten it to
> > just Roles. But 'Resource Category' seems to me that it doesn't solve
> > anything.
> > 
> > -- Jarda
> > 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen
> > On 2014/22/01 10:00, Jaromir Coufal wrote:
> > >
> > >
> > > On 2014/22/01 00:56, Tzu-Mainn Chen wrote:
> > >> Hiya - Resource is actually a Heat term that corresponds to what we're
> > >> deploying within
> > >> the Overcloud Stack - i.e., if we specify that we want an Overcloud
> > >> with 1 Controller
> > >> and 3 Compute, Heat will create a Stack that contains 1 Controller and
> > >> 3 Compute
> > >> Resources.
> > >
> > > Then a quick question - why do we design deployment by
> > > increasing/decreasing number of *instances* instead of resources?
> > >
> > > -- Jarda
> > 
> > And one more thing - Resource is very broad term as well as Role is. The
> > only difference is that Heat accepted 'Resource' as specific term for
> > them (you see? they used broad term for their concept). So I am asking
> > myself, where is difference between generic term Resource and Role? Why
> > cannot we accept Roles? It's short, well describing...
> 
> True, but Heat was creating something new, while it seems (to me) that
> our intention is mostly to consume other OpenStack APIs and expose the
> results in the UI.  If I call a Heat API which returns something that
> they call a Resource, I think it's confusing to developers to rename
> that.
> 
> > I am leaning towards Role. We can be more specific with adding some
> > extra word, e.g.:
> > * Node Role
> > * Deployment Role
> > ... and if we are in the context of undercloud, people can shorten it to
> > just Roles. But 'Resource Category' seems to me that it doesn't solve
> > anything.
> 
> I'd be okay with Resource Role!

Actually - didn't someone raise the objection that Role was a defined term
within Keystone and potentially a source of confusion?

Mainn

> > -- Jarda
> > 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen


- Original Message -
> 
> 
> On 2014/22/01 10:00, Jaromir Coufal wrote:
> >
> >
> > On 2014/22/01 00:56, Tzu-Mainn Chen wrote:
> >> Hiya - Resource is actually a Heat term that corresponds to what we're
> >> deploying within
> >> the Overcloud Stack - i.e., if we specify that we want an Overcloud
> >> with 1 Controller
> >> and 3 Compute, Heat will create a Stack that contains 1 Controller and
> >> 3 Compute
> >> Resources.
> >
> > Then a quick question - why do we design deployment by
> > increasing/decreasing number of *instances* instead of resources?
> >
> > -- Jarda
> 
> And one more thing - Resource is very broad term as well as Role is. The
> only difference is that Heat accepted 'Resource' as specific term for
> them (you see? they used broad term for their concept). So I am asking
> myself, where is difference between generic term Resource and Role? Why
> cannot we accept Roles? It's short, well describing...

True, but Heat was creating something new, while it seems (to me) that
our intention is mostly to consume other OpenStack APIs and expose the
results in the UI.  If I call a Heat API which returns something that
they call a Resource, I think it's confusing to developers to rename
that.

> I am leaning towards Role. We can be more specific with adding some
> extra word, e.g.:
> * Node Role
> * Deployment Role
> ... and if we are in the context of undercloud, people can shorten it to
> just Roles. But 'Resource Category' seems to me that it doesn't solve
> anything.

I'd be okay with Resource Role!

> -- Jarda
> 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-22 Thread Tzu-Mainn Chen
That's a fair question; I'd argue that it *should* be resources.  When we
update an overcloud deployment, it'll create additional resources.

Mainn

- Original Message -
> 
> 
> On 2014/22/01 00:56, Tzu-Mainn Chen wrote:
> > Hiya - Resource is actually a Heat term that corresponds to what we're
> > deploying within
> > the Overcloud Stack - i.e., if we specify that we want an Overcloud with 1
> > Controller
> > and 3 Compute, Heat will create a Stack that contains 1 Controller and 3
> > Compute
> > Resources.
> 
> Then a quick question - why do we design deployment by
> increasing/decreasing number of *instances* instead of resources?
> 
> -- Jarda
> 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-21 Thread Tzu-Mainn Chen
> On Jan 21, 2014, at 9:40 AM, Dougal Matthews  wrote:
> 
> > On 21/01/14 14:19, Jaromir Coufal wrote:
> >> when I was getting feedback on wireframes and we talked about Roles,
> >> there were various objections and not much suggestions. I would love to
> >> call for action and think a bit about the term for concept currently
> >> known as Role (= Resource Category).
> > 
> > This is indeed a bit confusing; I think Role has mostly been rejected and
> > I've seen Resource Category used the most since.
> > 
> > 
> >> So far suggested terms:
> >> * Role *
> >>   - short name - plus points
> >>   - quite overloaded term (user role, etc)
> > 
> > -1 for Role, I don't think short is a good enough reason.
> 
> Agreed. Role is overloaded IMO.
> > 
> > 
> >> * Resource Category *
> >>   - pretty long (devs already shorten it - confusing)
> >>   - Heat specific term
> > 
> > +0, I've gotten used to this; it's quite long, but it's not that bad.
> > 
> > 
> >> 
> >> * Resource Class *
> >>   - older term
> > 
> > -0, this strikes me as confusing as what we are defining now is somewhat
> > different to what a Resource Class was. However, if we can clear it up
> > this name is otherwise fine.
> > 
> > 
> >> Are there any other suggestions (ideally something short and accurate)?
> > 
> > I'll throw out a couple to start ideas -
> > 
> > - Resource Role (people seem to like Resource and role! ;-) )
> > - Resource Group
> > - Role Type
> 
> How about Instance Type? I’m looking at page 10 of the latest wireframes [1]
> and I see we are using the terms “Resource”, “Node”, and “Instance” to
> label certain items. I’m pretty sure Node and Instance are different, but
> I’m wondering if we need to introduce Resource as a new term.

Hiya - Resource is actually a Heat term that corresponds to what we're
deploying within the Overcloud Stack - i.e., if we specify that we want an
Overcloud with 1 Controller and 3 Compute, Heat will create a Stack that
contains 1 Controller and 3 Compute Resources.


Mainn

> My thoughts,
> Liz
> [1]http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-20_tripleo-ui-icehouse.pdf
> 
> > 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-21 Thread Tzu-Mainn Chen
Thanks for starting this!  Comments in-line:

> Hi folks,
> 
> when I was getting feedback on wireframes and we talked about Roles,
> there were various objections and not much suggestions. I would love to
> call for action and think a bit about the term for concept currently
> known as Role (= Resource Category).
> 
> Definition:
> Role is a representation of a group of nodes, with specific behavior.

I don't think this is technically correct; according to the wireframes, a
'Role' is a representation of a group of Heat resources from an overcloud
stack - a Controller Resource, Compute Resource, etc.  Free nodes have no
Role.

> Each role contains (or will contain):
> * one or more Node Profiles (which specify HW which is going in)
> * association with image (which will be provisioned on new coming nodes)
> * specific service settings
> 
> So far suggested terms:
> * Role *
>- short name - plus points
>- quite overloaded term (user role, etc)
> 
> * Resource Category *
>- pretty long (devs already shorten it - confusing)
>- Heat specific term

That's why I suggested this term after others objected to Role; it seems to
me that whatever term we use should have the word 'Resource' in it, in order
to make the correspondence clear.


Mainn

> * Resource Class *
>- older term
> 
> Are there any other suggestions (ideally something short and accurate)?
> Or do you prefer any of already suggested terms?
> 
> Any ideas are welcome - we are not very good in finding the best match
> for this particular term.
> 
> Thanks
> -- Jarda
> 


Re: [openstack-dev] Tuskar-UI navigation

2014-01-11 Thread Tzu-Mainn Chen
Thanks!  Just wanted to check before we went deeper into our coding.

- Original Message -
> The Resources(Nodes) item that is collapsible on the left hand side in that
> attached wireframes is a Panel Group in the Infrastructure Dashboard.  The
> plan is to make Panel Groups expandable/collapsible with the UI
> improvements.  There is nothing in Horizon's implementation that prevents
> the Panels under Resources(Nodes) to be in separate directories.  Currently,
> each Panel in a Dashboard is in an separate directory in the Dashboard
> directory.  As the potential number of panels in a Dashboard grows, I see no
> reason to not make a subdirectory for each panel group.

Just to be clear, we're not talking about making a subdirectory per panel
group; we're talking about making a subdirectory for each panel within that
panel group.  We've already tested that as a solution and it works, but I
guess my question was more about what Horizon standards exist around this,
if any.

Changing from the following. . .

nodes/urls.py - contains IndexView, FreeNodesView, ResourceNodesView

. . . to. . .

nodes/
 |
 + overview/urls.py - contains IndexView
 |
 + free/urls.py - contains FreeNodesView
 |
 + resource/urls.py - contains ResourcesNodesView

. . . purely for the sake of navigation - seems a bit - ugly? - to me, but
if it's acceptable by Horizon standards, then we're fine with it as well :)
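[For concreteness, a hedged sketch of what one of those per-panel panel.py
files might look like, following the usual Horizon Panel registration
pattern; the dashboard module path and names here are illustrative, not the
actual tuskar-ui code:]

    # Hypothetical nodes/free/panel.py; the dashboard import is assumed.
    from django.utils.translation import ugettext_lazy as _

    import horizon

    from tuskar_ui.infrastructure import dashboard


    class FreeNodes(horizon.Panel):
        name = _("Free Nodes")
        slug = "free"


    # Registering each panel separately is what makes it show up as its
    # own entry in the left-hand navigation.
    dashboard.Infrastructure.register(FreeNodes)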


Mainn

> David
> 
> > -Original Message-
> > From: Tzu-Mainn Chen [mailto:tzuma...@redhat.com]
> > Sent: Saturday, January 11, 2014 12:50 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [Horizon][Tuskar] Tuskar-UI navigation
> > 
> > Hey all,
> > 
> > I have a question regarding the development of the tuskar-ui navigation.
> > 
> > So, to give some background: we are currently working off the wireframes
> > that Jaromir Coufal has developed:
> > 
> > http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-03_tripleo-
> > ui_02-resources.pdf
> > 
> > In these wireframes, you can see a left-hand navigation for Resources
> > (which
> > we have since renamed Nodes).  This
> > left-hand navigation includes sub-navigation for Resources: Overview,
> > Resource Nodes, Unallocated, etc.
> > 
> > It seems like the "Horizon way" to implement this would be to create a
> > 'nodes/' directory within our dashboard.
> > We would create a tabs.py with a Tab for Overview, Resource Nodes,
> > Unallocated, etc, and views.py would contain
> > a single TabbedTableView populated by our tabs.
> > 
> > However, this prevents us from using left-handed navigation.  As a result,
> > our nodes/ directory currently appears
> > as such: https://github.com/openstack/tuskar-
> > ui/tree/master/tuskar_ui/infrastructure/nodes
> > 
> > 'overview', 'resource', and 'free' are subdirectories within nodes, and
> > they
> > each define their own panel.py,
> > enabling them to appear in the left-handed navigation.
> > 
> > This leads to the following questions:
> > 
> > * Would our current workaround be acceptable?  Or should we follow
> > Horizon precedent more closely?
> > * I understand that a more flexible navigation system is currently under
> > development
> >   (https://blueprints.launchpad.net/horizon/+spec/navigation-
> > enhancement) - would it be preferred that
> >   we follow Horizon precedent until that navigation system is ready, rather
> > than use our own workarounds?
> > 
> > Thanks in advance for any opinions!
> > 
> > 
> > Tzu-Mainn Chen
> > 


[openstack-dev] [Horizon][Tuskar] Tuskar-UI navigation

2014-01-10 Thread Tzu-Mainn Chen
Hey all,

I have a question regarding the development of the tuskar-ui navigation.

So, to give some background: we are currently working off the wireframes
that Jaromir Coufal has developed:

http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-03_tripleo-ui_02-resources.pdf

In these wireframes, you can see a left-hand navigation for Resources (which
we have since renamed Nodes).  This left-hand navigation includes
sub-navigation for Resources: Overview, Resource Nodes, Unallocated, etc.

It seems like the "Horizon way" to implement this would be to create a
'nodes/' directory within our dashboard.  We would create a tabs.py with a
Tab for Overview, Resource Nodes, Unallocated, etc, and views.py would
contain a single TabbedTableView populated by our tabs.

However, this prevents us from using left-handed navigation.  As a result,
our nodes/ directory currently appears as such:
https://github.com/openstack/tuskar-ui/tree/master/tuskar_ui/infrastructure/nodes

'overview', 'resource', and 'free' are subdirectories within nodes, and they
each define their own panel.py, enabling them to appear in the left-handed
navigation.

This leads to the following questions:

* Would our current workaround be acceptable?  Or should we follow Horizon
  precedent more closely?
* I understand that a more flexible navigation system is currently under
  development
  (https://blueprints.launchpad.net/horizon/+spec/navigation-enhancement) -
  would it be preferred that we follow Horizon precedent until that
  navigation system is ready, rather than use our own workarounds?

Thanks in advance for any opinions!


Tzu-Mainn Chen



Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen
- Original Message -
> > The UI will also need to be able to look at the Heat resources running
> > within the overcloud stack and classify them according to a resource
> > category.  How do you envision that working?
> 
> There's a way in a Heat template to specify arbitrary metadata on a
> resource. We can add flags in there and key off of those.

That seems reasonable, but I wonder - given a resource in a template,
how do we know what category metadata needs to be set?  Is it simply based
off the resource name?  If that's the case, couldn't we use that
name <-> category mapping directly, without the need to set metadata?
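[If the metadata route were taken anyway, the classification could
presumably be as simple as this sketch; the 'tuskar_category' key is
hypothetical, and python-heatclient's resources.metadata call is assumed:]

    # Hedged sketch: bucket overcloud resources by a hypothetical
    # 'tuskar_category' flag stored in each resource's Heat metadata.
    from collections import defaultdict


    def resources_by_category(heat, stack='overcloud'):
        # `heat` is assumed to be an authenticated python-heatclient object.
        buckets = defaultdict(list)
        for res in heat.resources.list(stack):
            meta = heat.resources.metadata(stack, res.resource_name)
            buckets[meta.get('tuskar_category', 'unknown')].append(res)
        return buckets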

Mainn

> >> Next steps for me are to start to work on the Tuskar APIs around
> >> Resource Category CRUD and their conversion into a Heat template.
> >> There's some discussion to be had there as well, but I don't want to put
> >> too much into one e-mail.
> >>
> >
> > I'm looking forward to seeing the API specification, as Resource Category
> > CRUD is currently a big unknown in the tuskar-ui api.py file.
> >
> >
> > Mainn
> >
> >
> >>
> >> Thoughts?
> >>


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen


- Original Message -
> On Thu, 2014-01-09 at 16:02 -0500, Tzu-Mainn Chen wrote:
> > > There are a number of other models in the tuskar code[1], do we need to
> > > consider these now too?
> > > 
> > > [1]:
> > > https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py
> > 
> > Nope, these are gone now, in favor of Tuskar interacting directly with
> > Ironic, Heat, etc.
> 
> Hmm, not quite.
> 
> If you compare the models in Ironic [1] to Tuskar's (link above), there are
> some dramatic differences. Notably:
> 
> * No Rack model in Ironic. Closest model seems to be the Chassis model
> [2], but the Ironic Chassis model doesn't have nearly the entity
> specificity that Tuskar's Rack model has. For example, the following
> (important) attributes are missing from Ironic's Chassis model:
>  - slots (how does Ironic know how many RU are in a chassis?)
>  - location (very important for integration with operations inventory
> management systems, trust me)
>  - subnet (based on my experience, I've seen deployers use a
> rack-by-rack or paired-rack control and data plane network static IP
> assignment. While Tuskar's single subnet attribute is not really
> adequate for describing production deployments that typically have 3+
> management, data and overlay network routing rules for each rack, at
> least Tuskar has the concept of networking rules in its Rack model,
> while Ironic does not)
>  - state (how does Ironic know whether a rack is provisioned fully or
> not? Must it query each Node's power_state field that has a
> chassis_id matching the Chassis' id field?)
> * The Tuskar Rack model has a field "chassis_id". I have no idea what
> this is... or its relation to the Ironic Chassis model.
> 
> As much as the Ironic Chassis model is lacking compared to the Tuskar
> Rack model, the opposite problem exists for each project's model of
> Node. In Tuskar, the Node model is pretty bare and useless, whereas
> Ironic's Node model is much richer.
> 
> So, it's not as simple as it may initially seem :)

Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.

Mainn

> Best,
> -jay
> 
> [1]
> https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
> [2]
> https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83
> 
> 
> 


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen


- Original Message -
> I'm glad we are hashing this out as I think there is still some debate
> around if Tuskar will need a database at all.
> 
> One thing to bear in mind, I think we need to make sure the terminology
> matches that described in the previous thread. I think it mostly does
> here but I'm not sure the Tuskar models do.
> 
> A few comments below.
> 
> On 09/01/14 17:22, Jay Dobies wrote:
> > = Nodes =
> > A node is a baremetal machine on which the overcloud resources will be
> > deployed. The ownership of this information lies with Ironic. The Tuskar
> > UI will accept the needed information to create them and pass it to
> > Ironic. Ironic is consulted directly when information on a specific node
> > or the list of available nodes is needed.
> >
> >
> > = Resource Categories =
> > A specific type of "thing" that will be deployed into the overcloud.
> 
> nit - Won't they be deployed into the undercloud to form the overcloud?
> 
> 
> > These are static definitions that describe the entities the user will
> > want to add to the overcloud and are owned by Tuskar. For Icehouse, the
> > categories themselves are added during installation for the four types
> > listed in the wireframes.
> >
> > Since this is a new model (as compared to other things that live in
> > Ironic or Heat), I'll go into some more detail. Each Resource Category
> > has the following information:
> >
> > == Metadata ==
> > My intention here is that we do things in such a way that if we change
> > one of the original 4 categories, or more importantly add more or allow
> > users to add more, the information about the category is centralized and
> > not reliant on the UI to provide the user information on what it is.
> >
> > ID - Unique ID for the Resource Category.
> > Display Name - User-friendly name to display.
> > Description - Equally self-explanatory.
> >
> > == Count ==
> > In the Tuskar UI, the user selects how many of each category is desired.
> > This is stored in Tuskar's domain model for the category and is used when
> > generating the template to pass to Heat to make it happen.
> >
> > These counts are what is displayed to the user in the Tuskar UI for each
> > category. The staging concept has been removed for Icehouse. In other
> > words, the wireframes that cover the "waiting to be deployed" aren't
> > relevant for now.
> >
> > == Image ==
> > For Icehouse, each category will have one image associated with it. Last
> > I remember, there was discussion on whether or not we need to support
> > multiple images for a category, but for Icehouse we'll limit it to 1 and
> > deal with it later.
> 
> +1, that matches my recollection.
> 
> >
> > Metadata for each Resource Category is owned by the Tuskar API. The
> > images themselves are managed by Glance, with each Resource Category
> > keeping track of just the UUID for its image.
> >
> >
> > = Stack =
> > There is a single stack in Tuskar, the "overcloud". The Heat template
> > for the stack is generated by the Tuskar API based on the Resource
> > Category data (image, count, etc.). The template is handed to Heat to
> > execute.
> >
> > Heat owns information about running instances and is queried directly
> > when the Tuskar UI needs to access that information.
> >
> > --
> >
> > Next steps for me are to start to work on the Tuskar APIs around
> > Resource Category CRUD and their conversion into a Heat template.
> > There's some discussion to be had there as well, but I don't want to put
> > too much into one e-mail.
> >
> >
> > Thoughts?
> 
> There are a number of other models in the tuskar code[1], do we need to
> consider these now too?
> 
> [1]:
> https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py

Nope, these are gone now, in favor of Tuskar interacting directly with Ironic, 
Heat, etc.

Mainn



Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen
Thanks!  This is very informative.  From a high-level perspective, this
maps well with my understanding of how Tuskar will interact with various
OpenStack services.  A question or two inline:

- Original Message -
> I'm trying to hash out where data will live for Tuskar (both long term
> and for its Icehouse deliverables). Based on the expectations for
> Icehouse (a combination of the wireframes and what's in Tuskar client's
> api.py), we have the following concepts:
> 
> 
> = Nodes =
> A node is a baremetal machine on which the overcloud resources will be
> deployed. The ownership of this information lies with Ironic. The Tuskar
> UI will accept the needed information to create them and pass it to
> Ironic. Ironic is consulted directly when information on a specific node
> or the list of available nodes is needed.
> 
> 
> = Resource Categories =
> A specific type of "thing" that will be deployed into the overcloud.
> These are static definitions that describe the entities the user will
> want to add to the overcloud and are owned by Tuskar. For Icehouse, the
> categories themselves are added during installation for the four types
> listed in the wireframes.
> 
> Since this is a new model (as compared to other things that live in
> Ironic or Heat), I'll go into some more detail. Each Resource Category
> has the following information:
> 
> == Metadata ==
> My intention here is that we do things in such a way that if we change
> one of the original 4 categories, or more importantly add more or allow
> users to add more, the information about the category is centralized and
> not reliant on the UI to provide the user information on what it is.
> 
> ID - Unique ID for the Resource Category.
> Display Name - User-friendly name to display.
> Description - Equally self-explanatory.
> 
> == Count ==
> In the Tuskar UI, the user selects how many of each category is desired.
> This is stored in Tuskar's domain model for the category and is used when
> generating the template to pass to Heat to make it happen.
> 
> These counts are what is displayed to the user in the Tuskar UI for each
> category. The staging concept has been removed for Icehouse. In other
> words, the wireframes that cover the "waiting to be deployed" aren't
> relevant for now.
> 
> == Image ==
> For Icehouse, each category will have one image associated with it. Last
> I remember, there was discussion on whether or not we need to support
> multiple images for a category, but for Icehouse we'll limit it to 1 and
> deal with it later.
> 
> Metadata for each Resource Category is owned by the Tuskar API. The
> images themselves are managed by Glance, with each Resource Category
> keeping track of just the UUID for its image.
> 
> 
> = Stack =
> There is a single stack in Tuskar, the "overcloud". The Heat template
> for the stack is generated by the Tuskar API based on the Resource
> Category data (image, count, etc.). The template is handed to Heat to
> execute.
> 
> Heat owns information about running instances and is queried directly
> when the Tuskar UI needs to access that information.

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?

> --
> 
> Next steps for me are to start to work on the Tuskar APIs around
> Resource Category CRUD and their conversion into a Heat template.
> There's some discussion to be had there as well, but I don't want to put
> too much into one e-mail.
> 

I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn


> 
> Thoughts?
> 


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Tzu-Mainn Chen
+1 to this!

- Original Message -
> So after a lot of consideration, my opinion is the two code bases should stay
> in separate repos under the Horizon Program, for a few reasons:
> -Adding a large chunk of code for an incubated project is likely going to
> cause the Horizon delivery some grief due to dependencies and packaging
> issues at the distro level.
> -The code in Tuskar-UI is currently in a large state of flux/rework.  The
> Tuskar-UI code needs to be able to move quickly and at times drastically,
> this could be detrimental to the stability of Horizon.  And conversely, the
> stability needs of Horizon and be detrimental to the speed at which
> Tuskar-UI can change.
> -Horizon Core can review changes in the Tuskar-UI code base and provide
> feedback without the code needing to be integrated in Horizon proper.
> Obviously, with an eye to the code bases merging in the long run.
> 
> As far as core group organization, I think the current Tuskar-UI core should
> maintain their +2 for only Tuskar-UI.  Individuals who make significant
> review contributions to Horizon will certainly be considered for Horizon
> core in time.  I agree with Gabriel's suggestion of adding Horizon Core to
> tuskar-UI core.  The idea being that Horizon core is looking for
> compatibility with Horizon initially and working toward a deeper
> understanding of the Tuskar-UI code base.  This will help insure the
> integration process goes as smoothly as possible when Tuskar/TripleO comes
> out of incubation.
> 
> I look forward to being able to merge the two code bases, but I don't think
> the time is right yet and Horizon should stick to only integrating code into
> OpenStack Dashboard that is out of incubation.  We've made exceptions in the
> past, and they tend to have unfortunate consequences.
> 
> -David
> 
> 
> > -Original Message-
> > From: Jiri Tomasek [mailto:jtoma...@redhat.com]
> > Sent: Thursday, December 19, 2013 4:40 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge
> > 
> > On 12/19/2013 08:58 AM, Matthias Runge wrote:
> > > On 12/18/2013 10:33 PM, Gabriel Hurley wrote:
> > >
> > >> Adding developers to Horizon Core just for the purpose of reviewing
> > >> an incubated umbrella project is not the right way to do things at
> > >> all.  If my proposal of two separate groups having the +2 power in
> > >> Gerrit isn't technically feasible then a new group should be created
> > >> for management of umbrella projects.
> > > Yes, I totally agree.
> > >
> > > Having two separate projects with separate cores should be possible
> > > under the umbrella of a program.
> > >
> > > Tuskar differs somewhat from other projects to be included in horizon,
> > > because other projects contributed a view on their specific feature.
> > > Tuskar provides an additional dashboard and is talking with several apis
> > > below. It's a something like a separate dashboard to be merged here.
> > >
> > > When having both under the horizon program umbrella, my concern is,
> > that
> > > both projects wouldn't be coupled so tight, as I would like it.
> > >
> > > Especially, I'd love to see an automatic merge of Horizon commits into a
> > > (combined) Tuskar and Horizon repository, thus making sure Tuskar will
> > > work in a fresh (updated) Horizon environment.
> > 
> > Please correct me if I am wrong, but I think this is not an issue.
> > Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork
> > we create a symlink to the local tuskar-ui clone, and to run Horizon with
> > Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI
> > runs on the latest version of Horizon (if you pull regularly, of course).
> > 
> > >
> > > Matthias
> > >
> > 
> > Jirka
> 
> 


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-17 Thread Tzu-Mainn Chen
> On 2013/13/12 23:11, Jordan OMara wrote:
> > On 13/12/13 16:20 +1300, Robert Collins wrote:
> >> However, on instance - 'instance' is a very well defined term in Nova
> >> and thus OpenStack: Nova boot gets you an instance, nova delete gets
> >> rid of an instance, nova rebuild recreates it, etc. Instances run
> >> [virtual|baremetal] machines managed by a hypervisor. So
> >> nova-scheduler is not ever going to be confused with instance in the
> >> OpenStack space IMO. But it brings up a broader question, which is -
> >> what should we do when terms that are well defined in OpenStack - like
> >> Node, Instance, Flavor - are not so well defined for new users? We
> >> could use different terms, but that may confuse 'stackers, and will
> >> mean that our UI needs its own dedicated terminology to map back to
> >> e.g. the manuals for Nova and Ironic. I'm inclined to suggest that as
> >> a principle, where there is a well defined OpenStack concept, that we
> >> use it, even if it is not ideal, because the consistency will be
> >> valuable.
> >
> > I think this is a really important point. I think the consistency is a
> > powerful tool for teaching new users how they should expect
> > tripleo/tuskar to work and should lessen the learning curve, as long as
> > they've used OpenStack before.
> I don't 100% agree here. Yes, it is important for the user to keep
> consistency in naming - but only as long as he is working within the same
> domain. The problem is that our TripleO/Tuskar UI user is very different
> from the OpenStack UI user. They are operators, and they may often be
> used to different terminology - globally used and known in their field
> (for example, Flavor is a very OpenStack-specific term; they might prefer
> HW profile, or similar).
> 
> I think that mixing these terms in overcloud and undercloud might lead
> to problems and users' confusion. They already are confused about the
> whole 'over/under cloud' stuff. They are not working with this approach
> daily as we are. They care about deploying OpenStack, not about how it
> works under the hood. Bringing another more complicated level of
> terminology (explaining what is and belongs to under/overcloud) will
> increase the confusion here.
> 
> For developers, it might be easier to deal with the same terms that
> OpenStack already has, or with what is used in the background to make
> things happen. But for the user, it will be confusing to go to the
> infrastructure/hardware management part and see the very same terms.
> 
> Therefore I lean more toward broadly accepted general terminology, and
> not reusing OpenStack terms (at least in the UI).
> 
> -- Jarda

I think you're correct with respect to the end-user, and I can see the argument
for terminology changes at the view level; it is important not to confuse the
end-user.

But at the level where developers are working with OpenStack APIs, I think
it's important not to confuse the developers and reviewers, and that's most
easily done by sticking with established OpenStack terminology.


Mainn



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-14 Thread Tzu-Mainn Chen
> On Wed, 2013-12-11 at 17:48 +0100, Jiří Stránský wrote:
> 
> > >> When you say python- clients, is there a distinction between the CLI and
> > >> a bindings library that invokes the server-side APIs? In other words,
> > >> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> > 
> > python-tuskarclient = Python bindings to tuskar-api + CLI, in one project
> > 
> > tuskar-ui doesn't have its own bindings; it depends on
> > python-tuskarclient for bindings to tuskar-api (and on other clients for
> > bindings to other APIs). The UI makes use of just the Python bindings part
> > of the clients and doesn't interact with the CLI part. This is the general
> > OpenStack way of doing things.
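(As an aside, the bindings/CLI split described above might look roughly like
the following sketch; the module layout and the /racks resource are purely
illustrative, not the actual python-tuskarclient code.)

    # --- bindings part: what tuskar-ui would import ---
    import requests


    class TuskarClient(object):
        """Thin Python bindings over a tuskar-api REST endpoint (hypothetical)."""

        def __init__(self, endpoint, token):
            self.endpoint = endpoint.rstrip('/')
            self.session = requests.Session()
            self.session.headers['X-Auth-Token'] = token

        def list_racks(self):
            # GET /racks -> list of rack dicts (illustrative resource name)
            resp = self.session.get(self.endpoint + '/racks')
            resp.raise_for_status()
            return resp.json()


    # --- CLI part: a thin shell over the same bindings ---
    def main():
        import argparse
        parser = argparse.ArgumentParser(prog='tuskar')
        parser.add_argument('--endpoint', required=True)
        parser.add_argument('--token', required=True)
        parser.add_argument('command', choices=['rack-list'])
        args = parser.parse_args()

        client = TuskarClient(args.endpoint, args.token)
        if args.command == 'rack-list':
            for rack in client.list_racks():
                print(rack.get('name'))


    if __name__ == '__main__':
        main()

The UI imports only TuskarClient; the CLI entry point wraps the very same
bindings.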
> 
> Please everyone excuse my relative lateness in joining this discussion,
> but I'm wondering if someone could point me to discussions (or summit
> session etherpads?) where the decision was made to give Tuskar a
> separate UI from Horizon? I'm curious what the motivations were around
> this?
> 
> Thanks, and again, sorry for being late to the party! :)
> 
> -jay
> 

Heya - just to clear up what I think might be a possible misconception here -
the Tuskar-UI is built on top of Horizon.  It's developed as a separate
Horizon dashboard - Infrastructure - that can be added into the OpenStack
dashboard alongside the existing dashboards - Project, Admin.  The Tuskar-UI
developers are active within Horizon, and there's currently an effort
underway to get the UI placed under the Horizon program.

Does that answer your question, or did I miss the thrust of it?

Mainn




Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Tzu-Mainn Chen
> Thanks Mainn, comments inline :)
> 
> On Fri, 2013-12-13 at 19:31 -0500, Tzu-Mainn Chen wrote:
> > Thanks for the reply!  Let me try and address one particular section for
> > now,
> > since it seems to be the part causing the most confusion:
> > 
> > > >  * SERVICE CLASS - a further categorization within a service role
> > > >  for a
> > > >  particular deployment.
> > > > 
> > > >   * NODE PROFILE - a set of requirements that specify what
> > > >   attributes a node must have in order to be mapped to
> > > >a service class
> > > 
> > > I think I still need some more information about the above two. They
> > > sound vaguely like Cobbler's system profiles... :)
> > 
> > I admit that this concept was a bit fuzzy for me as well, but after a few
> > IRC
> > chats, I think I have a better handle on this.  Let me begin with my
> > understanding of what
> > happens in Heat, using Heat terminology:
> > 
> > A Heat stack template defines RESOURCES.  When a STACK is deployed using
> > that template,
> > the resource information in the template is used to instantiate an INSTANCE
> > of that
> > resource on a NODE.  Heat can pass a FLAVOR (flavors?) to nova-scheduler in
> > order to
> > filter for appropriate nodes.
> > 
> > So: based on that explanation, here's what Tuskar has been calling the
> > above:
> > 
> > HEAT TERM == TUSKAR TERM
> > 
> > NODE == NODE
> > STACK == DEPLOYMENT
> > INSTANCE == INSTANCE
> > RESOURCE == SERVICE CLASS (at the very least, it's a one-to-one
> > correspondence)
> > FLAVOR == NODE PROFILE
> > ???== ROLE
> > 
> > The ??? is because ROLE is entirely a Tuskar concept, based on what TripleO
> > views
> > as the fundamental kinds of building blocks for an overcloud: Compute,
> > Controller,
> > Object Storage, Block Storage.  A ROLE essentially categorizes
> > RESOURCES/SERVICE CLASSES;
> > for example, the Control ROLE might contain a control-db resource,
> > control-secure resource,
> > control-api resource, etc.
> 
> So, based on your explanation above, perhaps it makes sense to just
> ditch the concept of roles entirely? Is the concept useful more than
> just being a list of workloads that a node is running?

I think it's still useful from a UI perspective, especially for large
deployments with lots of running instances.  Quickly separating out
control/compute/storage instances seems like a good feature.

> > Heat cares only about the RESOURCE and not the ROLE; if the roles were
> > named Foo1, Foo2, Foo3,
> > and Barney, Heat would not care.  Also, if the UI miscategorized, say, the
> > control-db resource
> > under the Block Storage category - Heat would again not care, and the
> > deploy action would work.
> > 
> > From that perspective, I think the above terminology should either *be* the
> > Heat term, or be
> > a word that closely corresponds to the intended purpose.  For example, I
> > think DEPLOYMENT reasonably
> > describes a STACK, but I don't think SERVICE CLASS works for RESOURCE.  I
> > also think ROLE should be
> > RESOURCE CATEGORY, since that seems to me to be the most straightforward
> > description of its purpose.
> 
> I agree with you that either the Tuskar terminology should match the
> Heat terminology, or there should be a good reason for it not to match
> the Heat terminology.
> 
> With regards to "stack" vs. "deployment", perhaps it's best to just
> stick with "stack".
> 
> For "service class", "node profile", and "role", perhaps it may be
> useful to scrap those terms entirely and use a term that the Solum
> project has adopted for describing an application deployment: the
> "plan".
> 
> In Solum-land, the "plan" is simply the instructions for deploying the
> application. In Tuskar-land, the "plan" would simply be the instructions
> for setting up the undercloud.
> 
> So, instead of "SIZE THE ROLES", you would just be defining the "plan"
> in Tuskar.

I'm not against the use of the word "plan" - it's accurate and generic,
which are both pluses. But even if we use that term, we still need to name
the internals of the plan, which would then have several components -
"sizing the roles" is just one step the user needs to perform.  And we still
need terms for the objects within the plan - the resources/service classes and
flavors/node profiles - because the UI and API still need to manipulate them.

Mainn


> Thoughts?
> -jay
> 
> 


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Tzu-Mainn Chen
Thanks for the reply!  Let me try and address one particular section for now,
since it seems to be the part causing the most confusion:

> >  * SERVICE CLASS - a further categorization within a service role for a
> >  particular deployment.
> > 
> >   * NODE PROFILE - a set of requirements that specify what
> >   attributes a node must have in order to be mapped to
> >a service class
> 
> I think I still need some more information about the above two. They
> sound vaguely like Cobbler's system profiles... :)

I admit that this concept was a bit fuzzy for me as well, but after a few IRC
chats, I think I have a better handle on this.  Let me begin with my
understanding of what happens in Heat, using Heat terminology:

A Heat stack template defines RESOURCES.  When a STACK is deployed using that
template, the resource information in the template is used to instantiate an
INSTANCE of that resource on a NODE.  Heat can pass a FLAVOR (flavors?) to
nova-scheduler in order to filter for appropriate nodes.
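(For the curious, here's a minimal sketch of that flow, assuming
python-heatclient's v1 interface; the endpoint, token, image and flavor names
are placeholders:)

    from heatclient.client import Client

    # The template defines RESOURCES; the flavor property is what reaches
    # nova-scheduler when Heat asks Nova to boot the INSTANCE on some NODE.
    template = {
        'heat_template_version': '2013-05-23',
        'resources': {
            'compute_0': {
                'type': 'OS::Nova::Server',
                'properties': {
                    'image': 'overcloud-compute',   # placeholder image name
                    'flavor': 'baremetal-compute',  # placeholder flavor name
                },
            },
        },
    }

    heat = Client('1', endpoint='http://undercloud:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')

    # Deploying a STACK from the template instantiates each resource.
    heat.stacks.create(stack_name='overcloud', template=template)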

So: based on that explanation, here's what Tuskar has been calling the above:

HEAT TERM == TUSKAR TERM

NODE == NODE
STACK == DEPLOYMENT
INSTANCE == INSTANCE
RESOURCE == SERVICE CLASS (at the very least, it's a one-to-one correspondence)
FLAVOR == NODE PROFILE
???== ROLE

The ??? is because ROLE is entirely a Tuskar concept, based on what TripleO
views as the fundamental kinds of building blocks for an overcloud: Compute,
Controller, Object Storage, Block Storage.  A ROLE essentially categorizes
RESOURCES/SERVICE CLASSES; for example, the Control ROLE might contain a
control-db resource, a control-secure resource, a control-api resource, etc.

Heat cares only about the RESOURCE and not the ROLE; if the roles were named
Foo1, Foo2, Foo3, and Barney, Heat would not care.  Also, if the UI
miscategorized, say, the control-db resource under the Block Storage category,
Heat would again not care, and the deploy action would work.

From that perspective, I think the above terminology should either *be* the
Heat term, or be a word that closely corresponds to the intended purpose.  For
example, I think DEPLOYMENT reasonably describes a STACK, but I don't think
SERVICE CLASS works for RESOURCE.  I also think ROLE should be RESOURCE
CATEGORY, since that seems to me to be the most straightforward description
of its purpose.

People with more experience in Heat, please correct any of my misunderstandings!


Mainn

> 
> Best,
> -jay
> 
> 


Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Tzu-Mainn Chen
> Quick note - I want to keep this discussion a bit high-level and not get
> into big implementation details. For everyone, please, let's agree
> in this thread on the direction and approach, and we can start follow-up
> threads with more detail on how to get those things done.

I'm not sure how the items listed below are implementation details; they seem
like scoping the requirements to me.

> On 2013/13/12 12:04, Tzu-Mainn Chen wrote:
> >> *VERSION 0*
> >> ===
> >> Enable the user to deploy OpenStack in the simplest TripleO way, with no
> >> difference between hardware.
> >>
> >> Target:
> >> - end of icehouse-2
> >
> > My impression was that some of these features required features to be
> > developed in other
> > OpenStack services - if so, should we call those out so that we can see if
> > they'll be
> > available in the icehouse-2 timeframe?
> As for the features listed below for v0 - it is the smallest set of what we
> have to have in the UI - if there is some delay in other services, we
> have to pay attention there as well. But I don't think there is anything
> blocking us at the moment.
> 
> >> Features we need to get in:
> >> - Enable manual node registration (Ironic)
> >> - Get images available for user (Glance)
> >
> > Are we still providing the Heat template?  If so, are there image
> > requirements that we
> > need to take into account?
> I am not aware of any special requirements, but I will let the experts
> answer here...
> 
> >
> >> - Node roles (hardcode): Controller, Compute, Object Storage, Block
> >> Storage
> >> - Design deployment (number of nodes per role)
> >
> > We're only allowing a single deployment, right?
> Correct. For the whole of Icehouse. I don't think we can get multiple
> deployments in time; there are much more important features.
> 
> >> - Deploy (Heat + Nova)
> >
> > What parameters are we passing in for deploy?  Is it limited to the # of
> > nodes/role, or
> > are we also passing in the image?
> I think it is # nodes/role and image as well. Though images might be
> hardcoded for the very first iteration. Soon we should be able to let the
> user assign images to roles.
> 
> > Do we also need the following?
> >
> > * unregister a node in Ironic
> > * update a deployment (add or destroy instances)
> > * destroy a deployment
> > * view information about management node (instance?)
> > * list nodes/instances by role
> > * view deployment configuration
> > * view status of deployment as it's being deployed
> Some of that is part of the above mentioned, some comes a bit later down the
> road (not far away though). We need all of that, but let's enable the user
> to deploy first; we can add further features after we get something
> working.

Well, these are requirements that I previously pulled from your wireframes
that aren't listed anywhere, so I don't know if they were forgotten, descoped,
or assumed to be part of release 0.  If it's assumed, I think it's important
that we call it out; otherwise, I'm not sure how we can appropriately evaluate
whether the feature list fits into the icehouse-2 timeframe.

I'm also not sure what we consider "needed".  To me, it seems like:

a) NEEDED
   * list nodes/instances by role

b) STANDARD (but possibly not needed?)
   * unregister a node
   * update a deployment
   * destroy a deployment
   * view status of deployment as it's being deployed

c) UNSURE
   * view information about a management node
   * view deployment configuration - I vaguely recall someone saying that it
     was important for the user to view the options being used when creating
     the overcloud, even if those options were uneditable defaults.


Regarding the split in features between icehouse-2 and icehouse-3 - I'm not
sure it makes sense.  We're re-architecting all of tuskar, and as such, I
think it's more important to call out the features we want for *all* of
icehouse.  Otherwise, it's possible that we'll create an architecture that
works for icehouse-2 but which needs to be significantly reworked for
icehouse-3.

For that reason, I think it might make more sense to work towards a single
deadline within icehouse.

Mainn


> -- Jarda
> 


Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-13 Thread Tzu-Mainn Chen
> On 2013/13/12 11:20, Tzu-Mainn Chen wrote:
> > These look good!  Quick question - can you explain the purpose of Node
> > Tags?  Are they
> > an additional way to filter nodes through nova-scheduler (is that even
> > possible?), or
> > are they there solely for display in the UI?
> >
> > Mainn
> 
> We start easy, so that's solely for the UI needs of filtering and monitoring
> (grouping of nodes). It is already in Ironic, so there is no reason not to
> take advantage of it.
> -- Jarda

Okay, great.  Just for further clarification, are you expecting this UI
filtering to be present in release 0?  I don't think Ironic natively supports
filtering by node tag, so that would be further work that would have to be
done.
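(For illustration, the client-side filtering this implies might look like the
sketch below; treating node tags as a simple 'tags' list on each node record
is an assumption here, since Ironic doesn't expose native tag filtering:)

    def filter_nodes_by_tag(nodes, tag):
        """Return only the node records carrying the given UI tag."""
        return [node for node in nodes if tag in node.get('tags', [])]


    nodes = [
        {'uuid': 'a1', 'tags': ['rack-1', 'ssd']},
        {'uuid': 'b2', 'tags': ['rack-2']},
    ]
    print(filter_nodes_by_tag(nodes, 'ssd'))  # -> only the first node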

Mainn



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Tzu-Mainn Chen
> On 12.12.2013 17:10, Mark McLoughlin wrote:
> > On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
> >> Hi all,
> >>
> >> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> >> CLI for managing the deployment providing the same fundamental features
> >> as UI." With the planned architecture changes (making tuskar-api thinner
> >> and getting rid of proxying to other services), there's not an obvious
> >> way to achieve that. We need to figure this out. I present a few options
> >> and look forward to feedback.
> > ..
> >
> >> 1) Make a thicker python-tuskarclient and put the business logic there.
> >> Make it consume other python-*clients. (This is an unusual approach
> >> though, i'm not aware of any python-*client that would consume and
> >> integrate other python-*clients.)
> >>
> >> 2) Make a thicker tuskar-api and put the business logic there. (This is
> >> the original approach with consuming other services from tuskar-api. The
> >> feedback on this approach was mostly negative though.)
> >
> > FWIW, I think these are the two most plausible options right now.
> >
> > My instinct is that tuskar could be a stateless service which merely
> > contains the business logic between the UI/CLI and the various OpenStack
> > services.
> >
> > That would be a first (i.e. an OpenStack service which doesn't have a
> > DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
> > as a purely client-side library used by the UI/CLI (i.e. 1) as far it
> > can go until we hit actual cases where we need (2).
> 
> For the features that we identified for Icehouse, we probably don't need
> to store any data, necessarily. But going forward, it's not entirely
> certain. We had a chat and identified some data that is probably not suited
> to storing in any of the other services (at least in their current state):
> 
> * Roles (like Compute, Controller, Object Storage, Block Storage) - for
> Icehouse we'll have these 4 roles hardcoded. Going forward, it's
> probable that we'll want to let admins define their own roles. (Is there
> an existing OpenStack concept that we could map Roles onto? Something
> similar to using Flavors as hardware profiles? I'm not aware of any.)
> 
> * Links to Flavors to use with the roles - to define what hardware a
> particular Role can be deployed on. For Icehouse we assume homogeneous
> hardware.
> 
> * Links to Images for use with the Role/Flavor pairs - we'll have
> hardcoded Image names for those hardcoded Roles in Icehouse. Going
> forward, having multiple undercloud Flavors associated with a Role,
> maybe each [Role-Flavor] pair should have its own image link defined -
> some hardware types (Flavors) might require special drivers in the image.
> 
> * Overcloud heat template - for Icehouse it's quite possible it might be
> hardcoded as well and we could just use heat params to set it up,
> though I'm not 100% sure about that. Going forward, assuming dynamic
> Roles, we'll need to generate it.


One more (possible) item for this list: "# of nodes per role in a deployment" -
we'll need this if we want to stage the deployment, although that could
potentially be done client-side in the UI/CLI.
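(To make the list above concrete, the data being discussed might look roughly
like this; none of these field names are settled Tuskar API, they are purely
illustrative:)

    deployment_plan = {
        'roles': [
            {
                'name': 'Compute',                   # hardcoded role for Icehouse
                'flavor': 'baremetal-compute',       # link to an undercloud Flavor
                'image': 'overcloud-compute-image',  # link to a Glance image
                'node_count': 8,                     # # of nodes for this role
            },
            {
                'name': 'Controller',
                'flavor': 'baremetal-control',
                'image': 'overcloud-control-image',
                'node_count': 1,
            },
        ],
        # For Icehouse the overcloud Heat template may simply be hardcoded;
        # later it would be generated from the roles above.
        'overcloud_template': 'overcloud.yaml',
    }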


> ^ So all these things could probably be hardcoded for Icehouse, but not
> in the future. Guys suggested that if we'll be storing them eventually
> anyway, we might build these things into Tuskar API right now (and
> return hardcoded values for now, allow modification post-Icehouse). That
> seems ok to me. The other approach of having all this hardcoding
> initially done in a library seems ok to me too.
> 
> I'm not 100% sure that we cannot store some of this info in existing
> APIs, but it didn't seem so to me (to us). We've talked briefly about
> using Swift for it, but looking back on the list I wrote, it doesn't
> seem like very Swift-suited data.
> 
> >
> > One example worth thinking through though - clicking "deploy my
> > overcloud" will generate a Heat template and send it to the Heat API.
> >
> > The Heat template will be fairly closely tied to the overcloud images
> > (i.e. the actual image contents) we're deploying - e.g. the template
> > will have metadata which is specific to what's in the images.
> >
> > With the UI, you can see that working fine - the user is just using a UI
> > that was deployed with the undercloud.
> >
> > With the CLI, it is probably not running on undercloud machines. Perhaps
> > your undercloud was deployed a while ago and you've just installed the
> > latest TripleO client-side CLI from PyPI. With other OpenStack clients
> > we say that newer versions of the CLI should support all/most older
> > versions of the REST APIs.
> >
> > Having the template generation behind a (stateless) REST API could allow
> > us to define an API which expresses "deploy my overcloud" and not have
> > the client so tied to a specific undercloud version.
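(As an illustration, the stateless template generation being described could
reduce to a pure function like the sketch below; the function name, the role
map, and the hardcoded flavor are all placeholders:)

    def generate_overcloud_template(role_counts, images):
        """Build a HOT template from role -> node-count and role -> image maps."""
        resources = {}
        for role, count in role_counts.items():
            for i in range(count):
                resources['%s_%d' % (role.lower(), i)] = {
                    'type': 'OS::Nova::Server',
                    'properties': {
                        'image': images[role],
                        'flavor': 'baremetal',  # homogeneous hardware assumed
                    },
                }
        return {'heat_template_version': '2013-05-23', 'resources': resources}


    # A REST layer could accept {"Compute": 2, "Controller": 1} and return this.
    print(generate_overcloud_template(
        {'Compute': 2, 'Controller': 1},
        {'Compute': 'overcloud-compute', 'Controller': 'overcloud-control'}))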
> 
> Yeah, I see the advantage of making it an API; Dean pointed this out
> too. The combination of 

Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Tzu-Mainn Chen
Nice list!  Questions and comments in-line:

- Original Message -
> Hey folks,
> 
> after a rather long but very useful discussion about requirements (almost
> 60 e-mails so far), I think we need to come to some conclusion about what
> to deliver in Icehouse, what the steps are, and what we should keep
> discussing to reach a better understanding. Time is running mercilessly
> fast.
> 
> I think there was general agreement about the need for TripleO's approach
> in the UI as well. We should start from simple use cases and add smartness
> and more functionality over time. I believe we all have consensus on this
> part and we should all focus here. Giving the operator full control over
> his deployment is still under discussion - there are some fuzzy areas, so
> I think the solution is to continue that discussion in the meantime and
> get more feedback from operators and potential users, so we get a clearer
> vision of exactly what to put in. But it shouldn't block us at the moment.
> 
> Based on the above, I propose the two following Icehouse milestones:
> 
> *VERSION 0*
> ===
> Enable the user to deploy OpenStack in the simplest TripleO way, with no
> difference between hardware.
> 
> Target:
> - end of icehouse-2

My impression was that some of these features required features to be
developed in other OpenStack services - if so, should we call those out so
that we can see if they'll be available in the icehouse-2 timeframe?

> 
> Features we need to get in:
> - Enable manual node registration (Ironic)
> - Get images available for user (Glance)

Are we still providing the Heat template?  If so, are there image
requirements that we need to take into account?

> - Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
> - Design deployment (number of nodes per role)

We're only allowing a single deployment, right?

> - Deploy (Heat + Nova)

What parameters are we passing in for deploy?  Is it limited to the # of
nodes/role, or are we also passing in the image?

Do we also need the following?

* unregister a node in Ironic
* update a deployment (add or destroy instances)
* destroy a deployment
* view information about management node (instance?)
* list nodes/instances by role
* view deployment configuration
* view status of deployment as it's being deployed

> 
> 
> *VERSION 1*
> ===
> Enable the user to auto-discover his hardware, specify rules for roles,
> and get OpenStack deployed based on those rules. Give the user the ability
> to monitor the state of services and Nodes and to track issues easily.
> 
> Target:
> - end of icehouse-3
> 
> Features we need to get in:
> - Node-profiles (Nova-scheduler filters)
> - Auto-discovery (Ironic)
> - Monitoring (Ceilometer)
> 
> *VERSION 2*
> ===
> ... TBD based on how fast we can get the previously mentioned features
> 
> I'd like to ask everybody to ACK or NACK the proposed milestones and list
> of features, so that we have a clear direction and a free path to develop
> our solution. In case of a NACK, please suggest a clear alternative or
> solution, since we need to move forward fast. Keep in mind that we have
> only about 2 months left.
> 
> Fairly soon, I'd like to kick off discussion about future direction and
> next steps. But please, at the moment, let's stay focused on our current
> deliverables.
> 
> Thanks
> -- Jarda
> 



Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-13 Thread Tzu-Mainn Chen
These look good!  Quick question - can you explain the purpose of Node Tags?
Are they an additional way to filter nodes through nova-scheduler (is that
even possible?), or are they there solely for display in the UI?

Mainn

- Original Message -
> Thanks everybody for your feedback.
> 
> Based on it, here is final version for Nodes registration.
> 
> Thread:
> http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/?answer=113#post-id-113
> 
> Wireframes:
> http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-13_tripleo-ui_nodes-registration.pdf
> 
> (mostly just revised set of fields in Nodes' detail)
> 
> Cheers
> -- Jarda
> 
> On 2013/06/12 15:12, Jaromir Coufal wrote:
> > Hey everybody,
> >
> > based on feedback, I updated wireframes for resource management and
> > summarized changes in Askbot tool:
> >
> > http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/?answer=110#post-id-110
> >
> > Feel free to follow the discussion there.
> >
> > I am passing these wireframes to devel team, so that we can start
> > working on that.
> >
> > Thanks all for contribution
> > -- Jarda
> >
> >


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-12 Thread Tzu-Mainn Chen
Thanks for all the replies so far!  Let me try and distill the thread down
to the points of interest and contention:

1) NODE vs INSTANCE

This is a distinction that Robert brings up, and I think it's a good one
that people agree with.  The various ROLES can apply to either.

What isn't clear to me (and it's orthogonal to this particular thread) is
whether the wireframes are correct in classifying things by NODES, when it
feels like we might want to classify things by INSTANCE?

2) UNDEPLOYED NODE - a node that has not been deployed with an instance

Other suggestions included UNASSIGNED, UNMAPPED, FREE, and AVAILABLE.  Some
people (I'm one of them) find AVAILABLE to be a bit of an overloaded term, as
it can also be construed to mean that, say, a service instance is now running
on a node and is now "available for use".  I'm in favor of an "UN" word, and
it sounds like "UNDEPLOYED" was the most generally acceptable?

3) SIZE THE DEPLOYMENT - the act of deciding how many nodes will need to be
assigned to each role in a deployment

Other suggestions included "SET NODE COUNT", "DESIGN THE DEPLOYMENT", and
"SIZE THE CLOUD".  SIZE THE DEPLOYMENT sounds like the suggested option that
used the most words from the other choices, so I picked it as a likely
candidate :)  I also like that it uses the word deployment, as that's what
we're calling the end result.

4) SERVICE CLASS/RESOURCE CLASS/ROLE CONFIGURATION

So, an example of this was

> KVM Compute is a role configuration. KVM compute(GPU) might be another

I'm personally somewhat averse to "role configuration".  Although there are
aspects of configuration to this, "configuration" seems somewhat misleading
to me when the purpose of this classification is something more like - a
subrole?

5) NODE PROFILE/FLAVOR

Flavor seemed generally acceptable, although there was some mention of it
being an overloaded term.  Is there a case for using a more "user-friendly"
term in the UI (like node profile or hardware configuration or ...)?  Or
should we just expect users to be familiar with OpenStack terminology?


Please feel free to correct me if I've left anything out or misrepresented
anything!

Mainn



- Original Message -
> Hi,
> 
> I'm trying to clarify the terminology being used for Tuskar, which may be
> helpful so that we're sure
> that we're all talking about the same thing :)  I'm copying responses from
> the requirements thread
> and combining them with current requirements to try and create a unified
> view.  Hopefully, we can come
> to a reasonably rapid consensus on any desired changes; once that's done, the
> requirements can be
> updated.
> 
> * NODE a physical general purpose machine capable of running in many roles.
> Some nodes may have hardware layout that is particularly
>useful for a given role.
> 
>  * REGISTRATION - the act of creating a node in Ironic
> 
>  * ROLE - a specific workload we want to map onto one or more nodes.
>  Examples include 'undercloud control plane', 'overcloud control
>plane', 'overcloud storage', 'overcloud compute' etc.
> 
>  * MANAGEMENT NODE - a node that has been mapped with an undercloud
>  role
>  * SERVICE NODE - a node that has been mapped with an overcloud role
> * COMPUTE NODE - a service node that has been mapped to an
> overcloud compute role
> * CONTROLLER NODE - a service node that has been mapped to an
> overcloud controller role
> * OBJECT STORAGE NODE - a service node that has been mapped to an
> overcloud object storage role
> * BLOCK STORAGE NODE - a service node that has been mapped to an
> overcloud block storage role
> 
>  * UNDEPLOYED NODE - a node that has not been mapped with a role
>   * another option - UNALLOCATED NODE - a node that has not been
>   allocated through nova scheduler (?)
>- (after reading lifeless's explanation, I
>agree that "allocation" may be a
>   misleading term under TripleO, so I
>   personally vote for UNDEPLOYED)
> 
>  * INSTANCE - A role deployed on a node - this is where work actually
>  happens.
> 
> * DEPLOYMENT
> 
>  * SIZE THE ROLES - the act of deciding how many nodes will need to be
>  assigned to each role
>* another option - DISTRIBUTE NODES (?)
>  - (I think the former is more accurate, but
>  perhaps there's a better way to say it?)
> 
>  * SCHEDULING - the process of deciding which role is deployed on which
>  node
> 
>  * SERVICE CLASS - a further categorization within a service role for a
>  particular deployment.
> 
>   * NODE PROFILE - a set of requirements that specify what attributes
>   a node mu

Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Tzu-Mainn Chen
> >Hi,
> >
> >I'm trying to clarify the terminology being used for Tuskar, which may be
> >helpful so that we're sure
> >that we're all talking about the same thing :)  I'm copying responses from
> >the requirements thread
> >and combining them with current requirements to try and create a unified
> >view.  Hopefully, we can come
> >to a reasonably rapid consensus on any desired changes; once that's done,
> >the requirements can be
> >updated.
> >
> >* NODE a physical general purpose machine capable of running in many roles.
> >Some nodes may have hardware layout that is particularly
> >   useful for a given role.
> >
> > * REGISTRATION - the act of creating a node in Ironic
> >
> > * ROLE - a specific workload we want to map onto one or more nodes.
> > Examples include 'undercloud control plane', 'overcloud control
> >   plane', 'overcloud storage', 'overcloud compute' etc.
> >
> > * MANAGEMENT NODE - a node that has been mapped with an undercloud
> > role
> > * SERVICE NODE - a node that has been mapped with an overcloud role
> >* COMPUTE NODE - a service node that has been mapped to an
> >overcloud compute role
> >* CONTROLLER NODE - a service node that has been mapped to an
> >overcloud controller role
> >* OBJECT STORAGE NODE - a service node that has been mapped to
> >an overcloud object storage role
> >* BLOCK STORAGE NODE - a service node that has been mapped to an
> >overcloud block storage role
> >
> > * UNDEPLOYED NODE - a node that has not been mapped with a role
> >  * another option - UNALLOCATED NODE - a node that has not been
> >  allocated through nova scheduler (?)
> >   - (after reading lifeless's explanation,
> >   I agree that "allocation" may be a
> >  misleading term under TripleO, so I
> >  personally vote for UNDEPLOYED)
> >
> > * INSTANCE - A role deployed on a node - this is where work actually
> > happens.
> >
> >* DEPLOYMENT
> >
> > * SIZE THE ROLES - the act of deciding how many nodes will need to be
> > assigned to each role
> >   * another option - DISTRIBUTE NODES (?)
> > - (I think the former is more accurate, but
> > perhaps there's a better way to say it?)
> >
> > * SCHEDULING - the process of deciding which role is deployed on which
> > node
> >
> > * SERVICE CLASS - a further categorization within a service role for a
> > particular deployment.
> >
> >  * NODE PROFILE - a set of requirements that specify what
> >  attributes a node must have in order to be mapped to
> >   a service class
> >
> >
> >
> >Does this seem accurate?  All feedback is appreciated!
> >
> >Mainn
> >
> 
> Thanks for doing this! Presumably this is going to go on a wiki where
> we can look at it forever and ever?


Yep, if consensus is reached, I'd replace the current tuskar glossary on the
wiki with this (as well as update the requirements).

Mainn


> --
> Jordan O'Mara 
> Red Hat Engineering, Raleigh
> 


[openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Tzu-Mainn Chen
Hi,

I'm trying to clarify the terminology being used for Tuskar, which may be
helpful so that we're sure that we're all talking about the same thing :)
I'm copying responses from the requirements thread and combining them with
current requirements to try and create a unified view.  Hopefully, we can
come to a reasonably rapid consensus on any desired changes; once that's
done, the requirements can be updated.

* NODE - a physical general purpose machine capable of running in many roles.
  Some nodes may have a hardware layout that is particularly useful for a
  given role.

 * REGISTRATION - the act of creating a node in Ironic

 * ROLE - a specific workload we want to map onto one or more nodes.
   Examples include 'undercloud control plane', 'overcloud control plane',
   'overcloud storage', 'overcloud compute' etc.

 * MANAGEMENT NODE - a node that has been mapped with an undercloud role
 * SERVICE NODE - a node that has been mapped with an overcloud role
    * COMPUTE NODE - a service node that has been mapped to an overcloud
      compute role
    * CONTROLLER NODE - a service node that has been mapped to an overcloud
      controller role
    * OBJECT STORAGE NODE - a service node that has been mapped to an
      overcloud object storage role
    * BLOCK STORAGE NODE - a service node that has been mapped to an
      overcloud block storage role

 * UNDEPLOYED NODE - a node that has not been mapped with a role
    * another option - UNALLOCATED NODE - a node that has not been allocated
      through nova scheduler (?)
      - (after reading lifeless's explanation, I agree that "allocation" may
        be a misleading term under TripleO, so I personally vote for
        UNDEPLOYED)

 * INSTANCE - A role deployed on a node - this is where work actually
   happens.

* DEPLOYMENT

 * SIZE THE ROLES - the act of deciding how many nodes will need to be
   assigned to each role
    * another option - DISTRIBUTE NODES (?)
      - (I think the former is more accurate, but perhaps there's a better
        way to say it?)

 * SCHEDULING - the process of deciding which role is deployed on which node

 * SERVICE CLASS - a further categorization within a service role for a
   particular deployment.

    * NODE PROFILE - a set of requirements that specify what attributes a
      node must have in order to be mapped to a service class



Does this seem accurate?  All feedback is appreciated!

Mainn



Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-11 Thread Tzu-Mainn Chen
> On 2013/10/12 19:39, Tzu-Mainn Chen wrote:
> >>
> >> Ideally, we don't. But with this approach we would take out the
> >> possibility to change something or decide something from the user.
> >>
> >> The 'easiest' way is to support bigger companies with huge deployments,
> >> tailored infrastructure, everything connected properly.
> >>
> >> But there are tons of companies/users who are running on old
> >> heterogeneous hardware. Very likely even more than the number of
> >> companies having already mentioned large deployments. And giving them
> >> only the way of 'setting up rules' in order to get the service on the
> >> node - this type of user is not gonna use our deployment system.
> >>
> >> Somebody might argue - why do we care? If user doesn't like TripleO
> >> paradigm, he shouldn't use the UI and should use another tool. But the
> >> UI is not only about TripleO. Yes, it is underlying concept, but we are
> >> working on future *official* OpenStack deployment tool. We should care
> >> to enable people to deploy OpenStack - large/small scale,
> >> homo/heterogeneous hardware, typical or a bit more specific use-cases.
> >
> > I think this is a very important clarification, and I'm glad you made it.
> > It sounds
> > like manual assignment is actually a sub-requirement, and the feature
> > you're arguing
> > for is: supporting non-TripleO deployments.
>
> Mostly but not only. The other argument is - keeping control on stuff I
> am doing. Note that undercloud user is different from overcloud user.

Sure, but again, that argument seems to me to be a non-TripleO approach.
I'm not saying that it's not a possible use case; I'm saying that you're
advocating for a deployment strategy that fundamentally diverges from the
TripleO philosophy - and as such, that strategy will likely require a
separate UI, underlying architecture, etc., and should not be planned for in
the Icehouse timeframe.

> > That might be a worthy goal, but I think it's a distraction for the
> > Icehouse timeframe.
> > Each new deployment strategy requires not only a new UI, but different
> > deployment
> > architectures that could have very little common with each other.
> > Designing them all
> > to work in the same space is a recipe for disaster, a convoluted gnarl of
> > code that
> > doesn't do any one thing particularly well.  To use an analogy: there's a
> > reason why
> > no one makes a flying boat car.
> >
> > I'm going to strongly advocate that for Icehouse, we focus exclusively on
> > large scale
> > TripleO deployments, working to make that UI and architecture as sturdy as
> > we can.  Future
> > deployment strategies should be discussed in the future, and if they're not
> > TripleO based,
> > they should be discussed with the proper OpenStack group.
> One concern here is - it is quite likely that we'll get people excited
> about this approach - it will be a new boom - 'wow', there is automagic
> doing everything for me. But then the question is reality - how
> many of those excited users will actually use TripleO for their real
> deployments (I mean in the early stages)? Would it be only a couple of
> them (because of the covered use cases, concerns about maturity, or lack
> of control)? Can we assure them that if anything goes wrong, they
> have control over it?
> -- Jarda
> 


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Tzu-Mainn Chen
Thanks for writing this all out!

- Original Message -
> Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
> new to the project. I only mention it again because it's relevant in
> that I missed any of the discussion on why proxying from tuskar API to
> other APIs is looked down upon. Jiri and I had been talking yesterday
> and he mentioned it to me when I started to ask these same sorts of
> questions.
> 
> On 12/11/2013 07:33 AM, Jiří Stránský wrote:
> > Hi all,
> >
> > TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> > CLI for managing the deployment providing the same fundamental features
> > as UI." With the planned architecture changes (making tuskar-api thinner
> > and getting rid of proxying to other services), there's not an obvious
> > way to achieve that. We need to figure this out. I present a few options
> > and look forward to feedback.
> >
> > Previously, we had planned Tuskar arcitecture like this:
> >
> > tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.
> 
> My biggest concern was that having each client call out to the
> individual APIs directly put a lot of knowledge into the clients that
> had to be replicated across clients. At the best case, that's simply
> knowing where to look for data. But I suspect it's bigger than that and
> there are workflows that will be implemented for tuskar needs. If the
> tuskar API can't call out to other APIs, that workflow implementation
> needs to be done at a higher layer, which means in each client.
> 
> Something I'm going to talk about later in this e-mail but I'll mention
> here so that the diagrams sit side-by-side is the potential for a facade
> layer that hides away the multiple APIs. Lemme see if I can do this in
> ASCII:
> 
> tuskar-ui -+   +-tuskar-api
> |   |
> +-client-facade-+-nova-api
> |   |
> tuskar-cli-+   +-heat-api
> 
> The facade layer runs client-side and contains the business logic that
> calls across APIs and adds in the tuskar magic. That keeps the tuskar
> API from calling into other APIs* but keeps all of the API call logic
> abstracted away from the UX pieces.
> 
> * Again, I'm not 100% up to speed with the API discussion, so I'm going
> off the assumption that we want to avoid API to API calls. If that isn't
> as strict of a design principle as I'm understanding it to be, then the
> above picture probably looks kinda silly, so keep in mind the context
> I'm going from.
> 
> For completeness, my gut reaction was expecting to see something like:
> 
> tuskar-ui -+
> |
> +-tuskar-api-+-nova-api
> ||
> tuskar-cli-++-heat-api
> 
> Where a tuskar client talked to the tuskar API to do tuskar things.
> Whatever was needed to do anything tuskar-y was hidden away behind the
> tuskar API.
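(For illustration, the facade idea might look like the sketch below; the
class, the roles manager, and the capacity check are all hypothetical, and
the novaclient/heatclient calls are only indicative of the cross-API
orchestration such a layer would hide:)

    class DeploymentFacade(object):
        """Client-side business logic spanning tuskar, nova and heat."""

        def __init__(self, tuskar, nova, heat):
            self.tuskar = tuskar  # python-tuskarclient bindings
            self.nova = nova      # python-novaclient bindings
            self.heat = heat      # python-heatclient bindings

        def deploy_overcloud(self, role_counts):
            # 1. tuskar-y things: fetch role definitions (hypothetical manager)
            roles = self.tuskar.roles.list()
            # 2. nova things: sanity-check that capacity exists
            available = len(self.nova.hypervisors.list())
            if available < sum(role_counts.values()):
                raise RuntimeError('not enough registered nodes')
            # 3. heat things: build and launch the stack
            template = self._build_template(roles, role_counts)
            return self.heat.stacks.create(stack_name='overcloud',
                                           template=template)

        def _build_template(self, roles, role_counts):
            # Reduced placeholder; the real logic here is the "tuskar magic".
            return {'heat_template_version': '2013-05-23', 'resources': {}}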
> 
> > This meant that the "integration logic" of how to use heat, ironic and
> > other services to manage an OpenStack deployment lied within
> > *tuskar-api*. This gave us an easy way towards having a CLI - just build
> > tuskarclient to wrap abilities of tuskar-api.
> >
> > Nowadays we talk about using heat and ironic (and neutron? nova?
> > ceilometer?) apis directly from the UI, similarly as Dashboard does.
> > But our approach cannot be exactly the same as in Dashboard's case.
> > Dashboard is quite a thin wrapper on top of python-...clients, which
> > means there's a natural parity between what the Dashboard and the CLIs
> > can do.
>
> When you say python- clients, is there a distinction between the CLI and
> a bindings library that invokes the server-side APIs? In other words,
> the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
> 
> > We're not wrapping the APIs directly (if wrapping them directly would be
> > sufficient, we could just use Dashboard and not build Tuskar API at
> > all). We're building a separate UI because we need *additional logic* on
> > top of the APIs. E.g. instead of directly working with Heat templates
> > and Heat stacks to deploy the overcloud, the user will get to pick how many
> > control/compute/etc. nodes he wants to have, and we'll take care of Heat
> > things behind the scenes. This makes Tuskar UI significantly thicker
> > than Dashboard is, and the natural parity between CLI and UI vanishes.
> > By having this logic in the UI, we're effectively preventing its use from
> > the CLI. (If I were bold I'd also think about integrating Tuskar with
> > other software, which would be prevented too if we keep the business logic
> > in the UI, but I'm not absolutely positive about the use cases here.)
> 
> I see your point about preventing its use from the CLI, but more
> disconcerting IMO is that it just doesn't belong in the UI. That sort of
> logic, the "Heat things behind the scenes", sounds like the jurisdiction
> of the API (if I'm reading into what that entails correctly).
> 
> > Now this raises a question - 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-10 Thread Tzu-Mainn Chen
Thanks for the reply!  Comments in-line:

> > The disagreement comes from whether we need manual node assignment or not.
> > I would argue that we
> > need to step back and take a look at the real use case: heterogeneous
> > nodes.  If there are literally
> > no characteristics that differentiate nodes A and B, then why do we care
> > which gets used for what?  Why
> > do we need to manually assign one?
> 
> Ideally, we don't. But with this approach we would take out the
> possibility to change something or decide something from the user.
> 
> The 'easiest' way is to support bigger companies with huge deployments,
> tailored infrastructure, everything connected properly.
> 
> But there are tons of companies/users who are running on old
> heterogeneous hardware. Very likely even more than the number of
> companies having already mentioned large deployments. And giving them
> only the way of 'setting up rules' in order to get the service on the
> node - this type of user is not gonna use our deployment system.
> 
> Somebody might argue - why do we care? If user doesn't like TripleO
> paradigm, he shouldn't use the UI and should use another tool. But the
> UI is not only about TripleO. Yes, it is underlying concept, but we are
> working on future *official* OpenStack deployment tool. We should care
> to enable people to deploy OpenStack - large/small scale,
> homo/heterogeneous hardware, typical or a bit more specific use-cases.

I think this is a very important clarification, and I'm glad you made it.
It sounds like manual assignment is actually a sub-requirement, and the
feature you're arguing for is: supporting non-TripleO deployments.

That might be a worthy goal, but I think it's a distraction for the Icehouse
timeframe.  Each new deployment strategy requires not only a new UI, but
different deployment architectures that could have very little in common
with each other.  Designing them all to work in the same space is a recipe
for disaster, a convoluted gnarl of code that doesn't do any one thing
particularly well.  To use an analogy: there's a reason why no one makes a
flying boat car.

I'm going to strongly advocate that for Icehouse, we focus exclusively on
large scale TripleO deployments, working to make that UI and architecture as
sturdy as we can.  Future deployment strategies should be discussed in the
future, and if they're not TripleO based, they should be discussed with the
proper OpenStack group.


> As an underlying paradigm of how to install cloud - awesome idea,
> awesome concept, it works. But user doesn't care about how it is being
> deployed for him. He cares about getting what he wants/needs. And we
> shouldn't go that far that we violently force him to treat his
> infrastructure as cloud. I believe that possibility to change/control -
> if needed - is very important and we should care.
> 
> And what is key for us is to *enable* users - not to prevent them from
> using our deployment tool, because it doesn't work for their requirements.
> 
> 
> > If we can agree on that, then I think it would be sufficient to say that we
> > want a mechanism to allow
> > UI users to deal with heterogeneous nodes, and that mechanism must use
> > nova-scheduler.  In my mind,
> > that's what resource classes and node profiles are intended for.
> 
> Not arguing on this point. Though that mechanism should support also
> cases, where user specifies a role for a node / removes node from a
> role. The rest of nodes which I don't care about should be handled by
> nova-scheduler.
> 
> > One possible objection might be: nova scheduler doesn't have the
> > appropriate filter that we need to
> > separate out two nodes.  In that case, I would say that needs to be taken
> > up with nova developers.
> 
> Give it to Nova guys to fix it... What if that user's need would be
> undercloud specific requirement?  Why should Nova guys care? What should
> our unhappy user do until then? Use other tool? Will he be willing to
> get back to use our tool once it is ready?
> 
> I can also see other use-cases. It can be distribution based on power
> sockets, networking connections, etc. We can't think about all the ways
> which our user will need.

In this case - it would be our job to make the Nova guys care and to work
with them to develop the feature.  Creating parallel services with the same
fundamental purpose - I think that runs counter to what OpenStack is
designed for.

> 
> > b) Terminology
> >
> > It feels a bit like some of the disagreement comes from people using
> > different words for the same thing.
> > For example, the wireframes already detail a UI where Robert's roles come
> > first, but I think that message
> > was confused because I mentioned "node types" in the requirements.
> >
> > So could we come to some agreement on what the most exact terminology would
> > be?  I've listed some examples below,
> > but I'm sure there are more.
> >
> > node type | role
> +1 role
> 
> > management node | ?
> > resource node | ?

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Tzu-Mainn Chen
Thanks for the explanation!

I'm going to claim that the thread revolves around two main areas of
disagreement.  Then I'm going to propose a way through:

a) Manual Node Assignment

I think that everyone is agreed that automated node assignment through
nova-scheduler is by far the most ideal case; there's no disagreement there.

The disagreement comes from whether we need manual node assignment or not.  I
would argue that we need to step back and take a look at the real use case:
heterogeneous nodes.  If there are literally no characteristics that
differentiate nodes A and B, then why do we care which gets used for what?  Why
do we need to manually assign one?

If we can agree on that, then I think it would be sufficient to say that we
want a mechanism to allow UI users to deal with heterogeneous nodes, and that
mechanism must use nova-scheduler.  In my mind, that's what resource classes
and node profiles are intended for.

One possible objection might be: nova-scheduler doesn't have the appropriate
filter that we need to separate out two nodes.  In that case, I would say that
needs to be taken up with the nova developers.
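
(For what it's worth, the ask there is fairly small: a nova scheduler filter is
essentially just a host_passes() predicate.  A rough sketch of the kind of
filter being discussed - the filter class and the 'node_profile' hint are
hypothetical, not existing nova code; only the BaseHostFilter extension point
is real:

    # Hypothetical sketch of a nova-scheduler filter keyed off a node profile.
    from nova.scheduler import filters

    class NodeProfileFilter(filters.BaseHostFilter):
        """Pass only hosts whose advertised profile matches the request."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('node_profile')
            if not wanted:
                # No profile requested - don't constrain the scheduler.
                return True
            capabilities = getattr(host_state, 'capabilities', None) or {}
            return capabilities.get('node_profile') == wanted

Something along these lines would be enabled through the scheduler's filter
configuration, so nothing else in the deployment flow would have to change.)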


b) Terminology

It feels a bit like some of the disagreement comes from people using different
words for the same thing.  For example, the wireframes already detail a UI
where Robert's roles come first, but I think that message was confused because
I mentioned "node types" in the requirements.

So could we come to some agreement on what the most exact terminology would be?
I've listed some examples below, but I'm sure there are more.

node type | role
management node | ?
resource node | ?
unallocated | available | undeployed
create a node distribution | size the deployment
resource classes | ?
node profiles | ?

Mainn

- Original Message -
> On 10 December 2013 09:55, Tzu-Mainn Chen  wrote:
> >> >* created as part of undercloud install process
> 
> >> By that note I meant, that Nodes are not resources, Resource instances
> >> run on Nodes. Nodes are the generic pool of hardware we can deploy
> >> things onto.
> >
> > I don't think "resource nodes" is intended to imply that nodes are
> > resources; rather, it's supposed to
> > indicate that it's a node where a resource instance runs.  It's supposed to
> > separate it from "management node"
> > and "unallocated node".
> 
> So the question is are we looking at /nodes/ that have a /current
> role/, or are we looking at /roles/ that have some /current nodes/.
> 
> My contention is that the role is the interesting thing, and the nodes
> is the incidental thing. That is, as a sysadmin, my hierarchy of
> concerns is something like:
>  A: are all services running
>  B: are any of them in a degraded state where I need to take prompt
> action to prevent a service outage [might mean many things: - software
> update/disk space criticals/a machine failed and we need to scale the
> cluster back up/too much load]
>  C: are there any planned changes I need to make [new software deploy,
> feature request from user, replacing a faulty machine]
>  D: are there long term issues sneaking up on me [capacity planning,
> machine obsolescence]
> 
> If we take /nodes/ as the interesting thing, and what they are doing
> right now as the incidental thing, it's much harder to map that onto
> the sysadmin concerns. If we start with /roles/ then we can answer:
>  A: by showing the list of roles and the summary stats (how many
> machines, service status aggregate), role level alerts (e.g. nova-api
> is not responding)
>  B: by showing the list of roles and more detailed stats (overall
> load, response times of services, tickets against services)
>  and a list of in-trouble instances in each role - instances with
> alerts against them - low disk, overload, failed service,
> early-detection alerts from hardware
>  C: probably out of our remit for now in the general case, but we need
> to enable some things here like replacing faulty machines
>  D: by looking at trend graphs for roles (not machines), but also by
> looking at the hardware in aggregate - breakdown by age of machines,
> summary data for tickets filed against instances that were deployed to
> a particular machine
> 
> C: and D: are (F) category work, but for all but the very last thing,
> it seems clear how to approach this from a roles perspective.
> 
> I've tried to approach this using /nodes/ as the starting point, and
> after two terrible drafts I've deleted the section. I'd love it if
> someone could show me how it would work :)
> 
> >> > * Unallocated nodes
> >> >
> >> > This implies an 'allocation' step,

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Tzu-Mainn Chen
> >* created as part of undercloud install process
> >* can create additional management nodes (F)
> > * Resource nodes
> >
> > ^ nodes is again confusing layers - nodes are
> > what things are deployed to, but they aren't the entry point
> >
> > Can you please be a bit more specific here? I don't understand this note.
> 
> By the way, can you get your email client to insert > before the text
> you are replying to rather than HTML | marks? Hard to tell what I
> wrote and what you did :).
> 
> By that note I meant, that Nodes are not resources, Resource instances
> run on Nodes. Nodes are the generic pool of hardware we can deploy
> things onto.

I don't think "resource nodes" is intended to imply that nodes are resources; 
rather, it's supposed to
indicate that it's a node where a resource instance runs.  It's supposed to 
separate it from "management node"
and "unallocated node".

> > * Unallocated nodes
> >
> > This implies an 'allocation' step, that we don't have - how about
> > 'Idle nodes' or something.
> >
> > It can be auto-allocation. I don't see a problem with the 'unallocated' term.
> 
> Ok, it's not a biggy. I do think it will frame things poorly and lead
> to an expectation about how TripleO works that doesn't match how it
> does, but we can change it later if I'm right, and if I'm wrong, well
> it won't be the first time :).
> 

I'm interested in what the distinction you're making here is.  I'd rather get
things defined correctly the first time, and it's very possible that I'm
missing a fundamental definition here.


Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Tzu-Mainn Chen
> > - As an infrastructure administrator, Anna wants to be able to unallocate a
> > node from a deployment.
> 
> > Why? Whats her motivation. One plausible one for me is 'a machine
> 
> > needs to be serviced so Anna wants to remove it from the deployment to
> 
> > avoid causing user visible downtime.'  So lets say that: Anna needs to
> 
> > be able to take machines out of service so they can be maintained or
> 
> > disposed of.
> 

> Node being serviced is a different user story for me.

> I believe we are still 'fighting' here with two approaches, and I believe we
> need both. We can't only provide a way of 'give us resources, we will do the
> magic'. Yes, this is the preferred way - especially for large deployments -
> but we also need a fallback so that the user can say - no, this node doesn't
> belong to the class, I don't want it there - unassign. Or I need to have this
> node there - assign.
Just for clarification - the wireframes don't cover individual nodes being
manually assigned, do they? I thought the concession to manual control was
entirely through resource classes and node profiles, which are still parameters
to be passed through to the nova-scheduler filter. To me, that's very different
from manual assignment.
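
(To make "parameters to be passed through to the nova-scheduler filter"
concrete, here's a sketch of what that plumbing could look like from the
undercloud client side - the 'node_profile' hint and all the names here are
assumptions for illustration; scheduler_hints itself is the standard novaclient
mechanism for passing such data:

    # Sketch: boot an overcloud instance with a hint carrying the node
    # profile; a matching scheduler filter would consume it.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'password', 'undercloud',
                         'http://undercloud:5000/v2.0')
    image = nova.images.find(name='overcloud-compute')   # assumed names
    flavor = nova.flavors.find(name='baremetal')

    nova.servers.create(name='overcloud-compute-0',
                        image=image,
                        flavor=flavor,
                        scheduler_hints={'node_profile': 'high-memory'})

The point being that the UI only ever hands parameters to the scheduler; it
never picks a node itself.)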

Mainn 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-08 Thread Tzu-Mainn Chen
Thanks to all who replied, it's extremely helpful.  I'll add a focus on
integration tests to the list of requirements.

Mainn

- Original Message -
> Hey, you've already got a bunch of answers, but FWIW:
> 
> a) I think it's fine to do a few big patches deleting stuff you don't
> want. You can always bring it back from git history. OTOH if you bring
> it back it will be reviewed again :).
> 
> b) I think UI + API + pythonclient in parallel is ok, but:
>  - please get tempest (which implies devstack too) API tests up as
> quickly as possible. Tempest provides the contract definition to
> detect regressions in API usage. You can't deploy a production cloud
> on top of devstack, but you can use Heat and Keystone and Glance and
> Ironic APIs sensibly, which will allow meaningful tests of the Tuskar API.
> I'm not sure where the JS / Horizon tests fit precisely, but again -
> lets make sure that we have functional tests in Tempest as quickly as
> possible: this is crucial for when we start 'Integration'.
> 
> Cheers,
> Rob
> 
> On 7 December 2013 04:37, Tzu-Mainn Chen  wrote:
> > Hey all,
> >
> > We're starting to work on the UI for tuskar based on Jarda's wireframes,
> > and as we're doing so, we're realizing that we're not quite sure what
> > development methodology is appropriate.  Some questions:
> >
> > a) Because we're essentially doing a tear-down and re-build of the whole
> > architecture (a lot of the concepts in tuskar will simply disappear), it's
> > difficult to do small incremental patches that support existing
> > functionality.  Is it okay to have patches that break functionality?  Are
> > there good alternatives?
> >
> > b) In the past, we allowed parallel development of the UI and API by having
> > well-documented expectations of what the API would provide.  We would then
> > mock those calls in the UI, replacing them with real API calls as they
> > became available.  Is this acceptable?
> >
> > If there are precedents for this kind of stuff, we'd be more than happy to
> > follow them!
> >
> > Thanks,
> > Tzu-Mainn Chen
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
> On 7 December 2013 08:15, Jay Dobies  wrote:
> > Disclaimer: I'm very new to the project, so apologies if some of my
> > questions have been already answered or flat out don't make sense.
> 
> 
> NP :)
> 
> 
> >>  * optional node profile for a resource class (M)
> >>  * acts as filter for nodes that can be allocated to that
> >> class (M)
> >
> >
> > To my understanding, once this is in Icehouse, we'll have to support
> > upgrades. If this filtering is pushed off, could we get into a situation
> > where an allocation created in Icehouse would no longer be valid in
> > Icehouse+1 once these filters are in place? If so, we might want to make it
> > more of a priority to get them in place earlier and not eat the headache of
> > addressing these sorts of integrity issues later.
> 
> We need to be wary of over-implementing now; a lot of the long term
> picture is moving Tuskar prototype features into proper homes like
> Heat and Nova; so the more we implement now the more we have to move.
> 
> >>  * Unallocated nodes
> >
> >
> > Is there more still being fleshed out here? Things like:
> >  * Listing unallocated nodes
> >  * Unallocating a previously allocated node (does this make it a vanilla
> > resource or does it retain the resource type? is this the only way to
> > change
> > a node's resource type?)
> 
> Nodes don't have resource types. Nodes are machines Ironic knows
> about, and that's all they are.

Once nodes are assigned by nova scheduler, would it be accurate to say that they
have an implicit resource type?  Or am I missing the point entirely?

> >  * Unregistering nodes from Tuskar's inventory (I put this under
> >  unallocated
> > under the assumption that the workflow will be an explicit unallocate
> > before
> > unregister; I'm not sure if this is the same as "archive" below).
> 
> Tuskar shouldn't have an inventory of nodes.

Would it be correct to say that Ironic has an inventory of nodes, and that we
may want to remove a node from Ironic's inventory?

Mainn

> -Rob
> 
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
Thanks for the comments and questions!  I fully expect that this list of
requirements will need to be fleshed out, refined, and heavily modified, so the
more the merrier.

Comments inline:

> >
> > *** Requirements are assumed to be targeted for Icehouse, unless marked
> > otherwise:
> >(M) - Maybe Icehouse, dependency on other in-development features
> >(F) - Future requirement, after Icehouse
> >
> > * NODES
> 
> Note that everything in this section should be Ironic API calls.
> 
> >* Creation
> >   * Manual registration
> >  * hardware specs from Ironic based on mac address (M)
> 
> Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory
> stats
> 
> >  * IP auto populated from Neutron (F)
> 
> Do you mean IPMI IP ? I'd say IPMI address managed by Neutron here.
> 
> >   * Auto-discovery during undercloud install process (M)
> >* Monitoring
> >* assignment, availability, status
> >* capacity, historical statistics (M)
> 
> Why is this under 'nodes'? I challenge the idea that it should be
> there. We will need to surface some stuff about nodes, but the
> underlying idea is to take a cloud approach here - so we're monitoring
> services, that happen to be on nodes. There is room to monitor nodes,
> as an undercloud feature set, but let's be very, very specific about
> what is sitting at what layer.

That's a fair point.  At the same time, the UI does want to monitor both
services and the nodes that the services are running on, correct?  I would
think that a user would want this.

Would it be better to explicitly split this up into two separate requirements?

> >* Management node (where triple-o is installed)
> 
> This should be plural :) - TripleO isn't a single service to be
> installed - We've got Tuskar, Ironic, Nova, Glance, Keystone, Neutron,
> etc.

I misspoke here - this should be "where the undercloud is installed".  My
current understanding is that our initial release will only support the
undercloud being installed onto a single node, but my understanding could very
well be flawed.

> >* created as part of undercloud install process
> >* can create additional management nodes (F)
> > * Resource nodes
> 
> ^ nodes is again confusing layers - nodes are
> what things are deployed to, but they aren't the entry point
> 
> > * searchable by status, name, cpu, memory, and all attributes from
> > ironic
> > * can be allocated as one of four node types
> 
> Not by users though. We need to stop thinking of this as 'what we do
> to nodes' - Nova/Ironic operate on nodes, we operate on Heat
> templates.

Right, I didn't mean to imply that users would be doing this allocation.  But
once Nova does this allocation, the UI does want to be aware of how the
allocation is done, right?  That's what this requirement meant.

> > * compute
> > * controller
> > * object storage
> > * block storage
> > * Resource class - allows for further categorization of a node type
> > * each node type specifies a single default resource class
> > * allow multiple resource classes per node type (M)
> 
> Whats a node type?

Compute/controller/object storage/block storage.  Is another term besides
"node type" more accurate?

> 
> > * optional node profile for a resource class (M)
> > * acts as filter for nodes that can be allocated to that
> > class (M)
> 
> I'm not clear on this - you can list the nodes that have had a
> particular thing deployed on them; we probably can get a good answer
> to being able to see what nodes a particular flavor can deploy to, but
> we don't want to be second guessing the scheduler..

Correct; the goal here is to provide a way through the UI to send additional
filtering requirements that will eventually be passed into the scheduler,
allowing the scheduler to apply additional filters.

> > * nodes can be viewed by node types
> > * additional group by status, hardware specification
> 
> *Instances* - e.g. hypervisors, storage, block storage etc.
> 
> > * controller node type
> 
> Again, need to get away from node type here.
> 
> >* each controller node will run all openstack services
> >   * allow each node to run specified service (F)
> >* breakdown by workload (percentage of cpu used per node) (M)
> > * Unallocated nodes
> 
> This implies an 'allocation' step, that we don't have - how about
> 'Idle nodes' or something.

Is it imprecise to say that nodes are allocated by the scheduler?  Would
something like 'active/idle' be better?

> > * Archived nodes (F)
> > * Will be separate openstack service (F)
> >
> > * DEPLOYMENT
> >* multiple deployments allowed (F)
> >  * initially just one
> >* deployment specifies a node distribution across node types

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
The relevant wiki page is here:

https://wiki.openstack.org/wiki/TripleO/Tuskar#Icehouse_Planning


- Original Message -
> That looks really good, thanks for putting that together!
> 
> I'm going to put together a wiki page that consolidates the various Tuskar
> planning documents - requirements, user stories, wireframes, etc - so it's
> easier to see the whole planning picture.
> 
> Mainn
> 
> - Original Message -
> > 
> > On Dec 5, 2013, at 9:31 PM, Tzu-Mainn Chen  wrote:
> > 
> > > Hey all,
> > > 
> > > I've attempted to spin out the requirements behind Jarda's excellent
> > > wireframes
> > > (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
> > > Hopefully this can add some perspective on both the wireframes and the
> > > needed changes to the tuskar-api.
> > 
> > This list is great, thanks very much for taking the time to write this up!
> > I
> > think a big part of the User Experience design is to take a step back and
> > understand the requirements from an end user's point of view…what would
> > they
> > want to accomplish by using this UI? This might influence the design in
> > certain ways, so I've taken a cut at a set of user stories for the Icehouse
> > timeframe based on these requirements that I hope will be useful during
> > discussions.
> > 
> > Based on the OpenStack Personas[1], I think that Anna would be the main
> > consumer of the TripleO UI, but please let me know if you think otherwise.
> > 
> > - As an infrastructure administrator, Anna needs to deploy or update a set
> > of
> > resources that will run OpenStack (This isn't a very specific use case, but
> > more of the larger end goal of Anna coming into the UI.)
> > - As an infrastructure administrator, Anna expects that the management node
> > for the deployment services is already up and running and the status of
> > this
> > node is shown in the UI.
> > - As an infrastructure administrator, Anna wants to be able to quickly see
> > the set of unallocated nodes that she could use for her deployment of
> > OpenStack. Ideally, she would not have to manually tell the system about
> > these nodes. If she needs to manually register nodes for whatever reason,
> > Anna would only want to have to define the essential data needed to
> > register
> > these nodes.
> > - As an infrastructure administrator, Anna needs to assign a role to each
> > of
> > the necessary nodes in her OpenStack deployment. The nodes could be either
> > controller, compute, networking, or storage resources depending on the
> > needs
> > of this deployment.
> > - As an infrastructure administrator, Anna wants to review the distribution
> > of the nodes that she has assigned before kicking off the "Deploy" task.
> > - As an infrastructure administrator, Anna wants to monitor the deployment
> > process of all of the nodes that she has assigned.
> > - As an infrastructure administrator, Anna needs to be able to troubleshoot
> > any errors that may occur during the deployment of nodes process.
> > - As an infrastructure administrator, Anna wants to monitor the
> > availability
> > and status of each node in her deployment.
> > - As an infrastructure administrator, Anna wants to be able to unallocate a
> > node from a deployment.
> > - As an infrastructure administrator, Anna wants to be able to view the
> > history of nodes that have been in a deployment.
> > - As an infrastructure administrator, Anna needs to be notified of any
> > important changes to nodes that are in the OpenStack deployment. She does
> > not want to be spammed with non-important notifications.
> > 
> > Please feel free to comment, change, or add to this list.
> > 
> > [1]https://docs.google.com/document/d/16rkiXWxxgzGT47_Wc6hzIPzO2-s2JWAPEKD0gP2mt7E/edit?pli=1#
> > 
> > Thanks,
> > Liz
> > 
> > > 
> > > All comments are welcome!
> > > 
> > > Thanks,
> > > Tzu-Mainn Chen
> > > 
> > > 
> > > 
> > > *** Requirements are assumed to be targeted for Icehouse, unless marked
> > > otherwise:
> > >   (M) - Maybe Icehouse, dependency on other in-development features
> > >   (F) - Future requirement, after Icehouse
> > > 
> > > * NODES
> > >   * Creation
> > >  * Manual registration
> > > * hardware specs from Ironic based on mac address (M)
> > >

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
That looks really good, thanks for putting that together!

I'm going to put together a wiki page that consolidates the various Tuskar
planning documents - requirements, user stories, wireframes, etc - so it's
easier to see the whole planning picture.

Mainn

- Original Message -
> 
> On Dec 5, 2013, at 9:31 PM, Tzu-Mainn Chen  wrote:
> 
> > Hey all,
> > 
> > I've attempted to spin out the requirements behind Jarda's excellent
> > wireframes
> > (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
> > Hopefully this can add some perspective on both the wireframes and the
> > needed changes to the tuskar-api.
> 
> This list is great, thanks very much for taking the time to write this up! I
> think a big part of the User Experience design is to take a step back and
> understand the requirements from an end user's point of view…what would they
> want to accomplish by using this UI? This might influence the design in
> certain ways, so I've taken a cut at a set of user stories for the Icehouse
> timeframe based on these requirements that I hope will be useful during
> discussions.
> 
> Based on the OpenStack Personas[1], I think that Anna would be the main
> consumer of the TripleO UI, but please let me know if you think otherwise.
> 
> - As an infrastructure administrator, Anna needs to deploy or update a set of
> resources that will run OpenStack (This isn't a very specific use case, but
> more of the larger end goal of Anna coming into the UI.)
> - As an infrastructure administrator, Anna expects that the management node
> for the deployment services is already up and running and the status of this
> node is shown in the UI.
> - As an infrastructure administrator, Anna wants to be able to quickly see
> the set of unallocated nodes that she could use for her deployment of
> OpenStack. Ideally, she would not have to manually tell the system about
> these nodes. If she needs to manually register nodes for whatever reason,
> Anna would only want to have to define the essential data needed to register
> these nodes.
> - As an infrastructure administrator, Anna needs to assign a role to each of
> the necessary nodes in her OpenStack deployment. The nodes could be either
> controller, compute, networking, or storage resources depending on the needs
> of this deployment.
> - As an infrastructure administrator, Anna wants to review the distribution
> of the nodes that she has assigned before kicking off the "Deploy" task.
> - As an infrastructure administrator, Anna wants to monitor the deployment
> process of all of the nodes that she has assigned.
> - As an infrastructure administrator, Anna needs to be able to troubleshoot
> any errors that may occur during the deployment of nodes process.
> - As an infrastructure administrator, Anna wants to monitor the availability
> and status of each node in her deployment.
> - As an infrastructure administrator, Anna wants to be able to unallocate a
> node from a deployment.
> - As an infrastructure administrator, Anna wants to be able to view the
> history of nodes that have been in a deployment.
> - As an infrastructure administrator, Anna needs to be notified of any
> important changes to nodes that are in the OpenStack deployment. She does
> not want to be spammed with non-important notifications.
> 
> Please feel free to comment, change, or add to this list.
> 
> [1]https://docs.google.com/document/d/16rkiXWxxgzGT47_Wc6hzIPzO2-s2JWAPEKD0gP2mt7E/edit?pli=1#
> 
> Thanks,
> Liz
> 
> > 
> > All comments are welcome!
> > 
> > Thanks,
> > Tzu-Mainn Chen
> > 
> > 
> > 
> > *** Requirements are assumed to be targeted for Icehouse, unless marked
> > otherwise:
> >   (M) - Maybe Icehouse, dependency on other in-development features
> >   (F) - Future requirement, after Icehouse
> > 
> > * NODES
> >   * Creation
> >  * Manual registration
> > * hardware specs from Ironic based on mac address (M)
> > * IP auto populated from Neutron (F)
> >  * Auto-discovery during undercloud install process (M)
> >   * Monitoring
> >   * assignment, availability, status
> >   * capacity, historical statistics (M)
> >   * Management node (where triple-o is installed)
> >   * created as part of undercloud install process
> >   * can create additional management nodes (F)
> >* Resource nodes
> >* searchable by status, name, cpu, memory, and all attributes from
> >ironic
> >* can be allocated as one of four node types
> >* compute
>

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
Thanks for the comments!  Responses inline:

> Disclaimer: I'm very new to the project, so apologies if some of my
> questions have been already answered or flat out don't make sense.
> 
> As I proofread, some of my comments may drift a bit past basic
> requirements, so feel free to tell me to take certain questions out of
> this thread into specific discussion threads if I'm getting too detailed.
> 
> > 
> >
> > *** Requirements are assumed to be targeted for Icehouse, unless marked
> > otherwise:
> > (M) - Maybe Icehouse, dependency on other in-development features
> > (F) - Future requirement, after Icehouse
> >
> > * NODES
> > * Creation
> >* Manual registration
> >   * hardware specs from Ironic based on mac address (M)
> >   * IP auto populated from Neutron (F)
> >* Auto-discovery during undercloud install process (M)
> > * Monitoring
> > * assignment, availability, status
> > * capacity, historical statistics (M)
> > * Management node (where triple-o is installed)
> > * created as part of undercloud install process
> > * can create additional management nodes (F)
> >  * Resource nodes
> >  * searchable by status, name, cpu, memory, and all attributes from
> >  ironic
> >  * can be allocated as one of four node types
> 
> It's pretty clear by the current verbiage but I'm going to ask anyway:
> "one and only one"?

Yep, that's right!

> >  * compute
> >  * controller
> >  * object storage
> >  * block storage
> >  * Resource class - allows for further categorization of a node
> >  type
> >  * each node type specifies a single default resource class
> >  * allow multiple resource classes per node type (M)
> 
> My gut reaction is that we want to bite this off sooner rather than
> later. This will have data model and API implications that, even if we
> don't commit to it for Icehouse, should still be in our minds during it,
> so it might make sense to make it a first class thing to just nail down now.

That is entirely correct, which is one reason it's on the list of requirements.
The forthcoming API design will have to account for it.  Not recreating the
entire data model between releases is a key goal :)


> >  * optional node profile for a resource class (M)
> >  * acts as filter for nodes that can be allocated to that
> >  class (M)
> 
> To my understanding, once this is in Icehouse, we'll have to support
> upgrades. If this filtering is pushed off, could we get into a situation
> where an allocation created in Icehouse would no longer be valid in
> Icehouse+1 once these filters are in place? If so, we might want to make
> it more of a priority to get them in place earlier and not eat the
> headache of addressing these sorts of integrity issues later.

That's true.  The problem is that to my understanding, the filters we'd
need in nova-scheduler are not yet fully in place.

I also think that this is an issue that we'll need to address no matter what.
Even once filters exist, if a user applies a filter *after* nodes are allocated,
we'll need to do something clever if the already-allocated nodes don't meet the
filter criteria.

> >  * nodes can be viewed by node types
> >  * additional group by status, hardware specification
> >  * controller node type
> > * each controller node will run all openstack services
> >* allow each node to run specified service (F)
> > * breakdown by workload (percentage of cpu used per node) (M)
> >  * Unallocated nodes
> 
> Is there more still being fleshed out here? Things like:
>   * Listing unallocated nodes
>   * Unallocating a previously allocated node (does this make it a
> vanilla resource or does it retain the resource type? is this the only
> way to change a node's resource type?)
>   * Unregistering nodes from Tuskar's inventory (I put this under
> unallocated under the assumption that the workflow will be an explicit
> unallocate before unregister; I'm not sure if this is the same as
> "archive" below).

Ah, you're entirely right.  I'll add these to the list.

> >  * Archived nodes (F)
> 
> Can you elaborate a bit more on what this is?

To be honest, I'm a bit fuzzy about this myself; Jarda mentioned that there was
an OpenStack service in the process of being planned that would handle this
requirement.  Jarda, can you detail a bit?

Thanks again for the comments!


Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Tzu-Mainn Chen
> >> b) In the past, we allowed parallel development of the UI and API by
> >> having well-documented expectations of what the API
> 
> Are these expectations documented yet? I'm new to the project and still
> finding my way around. I've seen the wireframes and am going through
> Chen's icehouse requirements, but I haven't stumbled on too much talk
> about the APIs specifically (not suggesting they don't exist, more
> likely that I haven't found them yet).

Not quite yet; we'd like to finalize the requirements somewhat first.
Hopefully something will be available sometime next week.  In the meantime,
targeted UI work is mostly structural (navigation) and making sure that the
right widgets exist for the wireframes.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Tzu-Mainn Chen
Hey all,

We're starting to work on the UI for tuskar based on Jarda's wireframes, and as
we're doing so, we're realizing that we're not quite sure what development
methodology is appropriate.  Some questions:

a) Because we're essentially doing a tear-down and re-build of the whole
architecture (a lot of the concepts in tuskar will simply disappear), it's
difficult to do small incremental patches that support existing functionality.
Is it okay to have patches that break functionality?  Are there good
alternatives?

b) In the past, we allowed parallel development of the UI and API by having
well-documented expectations of what the API would provide.  We would then mock
those calls in the UI, replacing them with real API calls as they became
available.  Is this acceptable?

If there are precedents for this kind of stuff, we'd be more than happy to 
follow them!

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-05 Thread Tzu-Mainn Chen
Hey all,

I've attempted to spin out the requirements behind Jarda's excellent wireframes 
(http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
Hopefully this can add some perspective on both the wireframes and the needed 
changes to the tuskar-api.

All comments are welcome!

Thanks,
Tzu-Mainn Chen



*** Requirements are assumed to be targeted for Icehouse, unless marked otherwise:
   (M) - Maybe Icehouse, dependency on other in-development features
   (F) - Future requirement, after Icehouse

* NODES
   * Creation
      * Manual registration
         * hardware specs from Ironic based on MAC address (M)
         * IP auto populated from Neutron (F)
      * Auto-discovery during undercloud install process (M)
   * Monitoring
      * assignment, availability, status
      * capacity, historical statistics (M)
   * Management node (where triple-o is installed)
      * created as part of undercloud install process
      * can create additional management nodes (F)
   * Resource nodes
      * searchable by status, name, cpu, memory, and all attributes from ironic
      * can be allocated as one of four node types
         * compute
         * controller
         * object storage
         * block storage
      * Resource class - allows for further categorization of a node type
         * each node type specifies a single default resource class
         * allow multiple resource classes per node type (M)
         * optional node profile for a resource class (M)
            * acts as filter for nodes that can be allocated to that class (M)
      * nodes can be viewed by node types
         * additional group by status, hardware specification
      * controller node type
         * each controller node will run all openstack services
            * allow each node to run specified service (F)
         * breakdown by workload (percentage of cpu used per node) (M)
   * Unallocated nodes
   * Archived nodes (F)
      * Will be separate openstack service (F)

* DEPLOYMENT
   * multiple deployments allowed (F)
      * initially just one
   * deployment specifies a node distribution across node types
      * node distribution can be updated after creation
   * deployment configuration, used for initial creation only
      * defaulted, with no option to change
      * allow modification (F)
   * review distribution map (F)
   * notification when a deployment is ready to go or whenever something changes

* DEPLOYMENT ACTION
   * Heat template generated on the fly (see the sketch after this list)
      * hardcoded images
         * allow image selection (F)
      * pre-created template fragments for each node type
      * node type distribution affects generated template
   * nova scheduler allocates nodes
      * filters based on resource class and node profile information (M)
   * Deployment action can create or update
   * status indicator to determine overall state of deployment
      * status indicator for nodes as well
      * status includes 'time left' (F)

* NETWORKS (F)
* IMAGES (F)
* LOGS (F)
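
A minimal sketch of the "generated on the fly" idea above, for concreteness -
illustrative only; the fragment shapes and names are assumptions, not the
actual generator:

    # Merge one pre-created Heat resource fragment per node type, repeated
    # according to the requested node distribution.  Images are hardcoded,
    # matching the requirement above; image selection is a future item.
    import copy
    import yaml

    FRAGMENTS = {
        'compute':    {'type': 'OS::Nova::Server',
                       'properties': {'image': 'overcloud-compute',
                                      'flavor': 'baremetal'}},
        'controller': {'type': 'OS::Nova::Server',
                       'properties': {'image': 'overcloud-control',
                                      'flavor': 'baremetal'}},
    }

    def generate_template(distribution):
        """distribution: node type -> count, e.g. {'controller': 1, 'compute': 3}"""
        resources = {}
        for node_type, count in distribution.items():
            for i in range(count):
                name = '%s%d' % (node_type, i)
                resources[name] = copy.deepcopy(FRAGMENTS[node_type])
        return yaml.safe_dump({'heat_template_version': '2013-05-23',
                               'resources': resources})

    print(generate_template({'controller': 1, 'compute': 3}))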

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Tzu-Mainn Chen
> Hey folks,

> I opened 2 issues on UX discussion forum with TripleO UI topics:

> Resource Management:
> http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
> - this section was already reviewed before, there are not many surprises, just
> smaller updates
> - we are about to implement this area

> Deployment Management:
> http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
> - these are completely new views and they need a lot of attention so that in
> time we don't change direction drastically
> - any feedback here is welcome

> We need to get into implementation ASAP. It doesn't mean that we have
> everything perfect from the very beginning, but that we have direction and
> we move forward by enhancements.

> Therefore, implementation of the above-mentioned areas should start very soon.

> If at all possible, I will try to record a walkthrough with further explanations.
> If you have any questions or feedback, please follow the threads on
> ask-openstackux.

> Thanks
> -- Jarda

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
These wireframes look really good! However, would it be possible to get the 
list of requirements driving them? For example, something on the level of: 

1) removal of resource classes and racks 
2) what happens behind the scenes when deployment occurs 
3) the purpose of "compute class" 
4) etc 

I think it'd be easier to understand the "big picture" that way. Thanks! 

Mainn 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-30 Thread Tzu-Mainn Chen
I think we may all be approaching the planning of this project in the wrong 
way, because of confusions such as: 

> Well, I think there is one small misunderstanding. I've never said that
> manual way should be primary workflow for us. I agree that we should lean
> toward as much automation and smartness as possible. But at the same time, I
> am adding that we need manual fallback for user to change that smart
> decision.

> Primary way would be to let TripleO decide, where the stuff go. I think we
> agree here.
That's a pretty fundamental requirement that both sides seem to agree upon - 
but that agreement got lost in the discussions of what feature should come in 
which release, etc. That seems backwards to me. 

I think it's far more important that we list out requirements and create a 
design document that people agree upon first. Otherwise, we run the risk of 
focusing on feature X for release 1 without ensuring that our architecture 
supports feature Y for release 2. 

To make this example more specific: it seems clear that everyone agrees that 
the current Tuskar design (where nodes must be assigned to racks, which are 
then used as the primary means of manipulation) is not quite correct. Instead, 
we'd like to introduce a philosophy where we assume that users don't want to 
deal with homogeneous nodes individually, instead letting TripleO make 
decisions for them. 

When we have a bunch of heterogeneous nodes, we want to be able to break them 
up into several homogeneous groups, and assign different capabilities to each. 
But again, within each individual homogeneous group, we don't want users 
dealing with each individual node; instead, we want TripleO to take care of 
business. 

The point of disagreement here - which actually seems quite minor to me - is 
how far we want to go in defining heterogeneity. Are existing node attributes 
such as cpu and memory enough? Or do we need to go further? To take examples 
from this thread, some additional possibilities include: rack, network 
connectivity, etc. Presumably, such attributes will be user defined and managed 
within TripleO itself. 

If that understanding is correct, it seems to me that the requirements are 
broadly in agreement, and that "TripleO defined node attributes" is a feature 
that can easily be slotted into this sort of architecture. Whether it needs to 
come first. . . should be a different discussion (my gut feel is that it 
shouldn't come first, as it depends on everything else working, but maybe I'm 
wrong). 

In any case, if we can a) detail requirements without talking about releases 
and b) create a design architecture, I think that it'll be far easier to come 
up with a set of milestones that make developmental sense. 

> > Folk that want to manually install openstack on a couple of machines
> 
> > can already do so : we don't change the game for them by replacing a
> 
> > manual system with a manual system. My vision is that we should
> 
> > deliver something significantly better!
> 

> We should! And we can. But I think we shouldn't deliver something that will
> discourage people from using TripleO. Especially at the beginning - see,
> user, we are taking our first steps here, the distribution is not perfect and
> not what you wanted, but you can make the change you need. You don't have to
> go away and come back in 6 months when we try to be smarter and address your
> case.

Regarding this - I think we may want to clarify what the purpose of our 
releases is at the moment. Personally, I don't think our current planning is 
about several individual product releases that we expect to be production-ready 
and usable by the world; I think it's about milestone releases which build 
towards a more complete product. 

From that perspective, if I were a prospective user, I would be less concerned
with each release containing exactly what I need. Instead, what I would want
most out of the project is:

a) frequent stable releases (so I can be comfortable with the pace of 
development and the quality of code) 
b) design documentation and wireframes (so I can be comfortable that the 
architecture will support features I need) 
c) a roadmap (so I have an idea when my requirements will be met) 

Mainn 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-27 Thread Tzu-Mainn Chen


- Original Message -
> 
> On 2013/27/11 00:00, Robert Collins wrote:
> 
> 
> 
> On 26 November 2013 07:41, Jaromir Coufal  wrote:
> 
> 
> 
> Hey Rob,
> 
> can we add 'Slick Overcloud deployment through the UI' to the list? There
> was no session about that, but we discussed it afterwards and agreed that it
> is high priority for Icehouse as well.
> 
> I just want to keep it on the list, so we are aware of that.
> Certainly. Please add a blueprint for that and I'll mark it up appropriately.
> I will do.
> 
> 
> 
> 
> Related to that we had a long chat in IRC that I was to follow up here, so -
> ...
> 
> Tuskar is refocusing on getting the basics really right - slick basic
> install, and then work up. At the same time, just about every nova
> person I've spoken to (a /huge/ sample of three, but meh :)) has
> expressed horror that Tuskar is doing its own scheduling, and
> confusion about the need to manage flavors in such detail.
> 
> 
> 
> So the discussion on IRC was about getting back to basics - a clean
> core design, so that we aren't left with technical debt that
> we would need to eliminate in order to move forward - which the scheduler
> stuff would be.
> 
> So: my question/proposal was this: let's set a couple of MVPs.
> 
> 0: slick install homogeneous nodes:
>  - ask about nodes and register them with nova baremetal / Ironic (can
> use those APIs directly)
>  - apply some very simple heuristics to turn that into a cloud:
>- 1 machine - all in one
>- 2 machines - separate hypervisor and the rest
>- 3 machines - two hypervisors and the rest
>- 4 machines - two hypervisors, HA the rest
>- 5 + scale out hypervisors
>  - so total forms needed = 1 gather hw details
>  - internals: heat template with one machine flavor used
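
(To spell the heuristic above out, roughly - an illustrative sketch only, not
actual installer code; the role names are assumed:

    # Rob's machine-count heuristic from MVP 0.
    def plan_overcloud(machine_count):
        if machine_count == 1:
            return {'all-in-one': 1}
        if machine_count == 2:
            return {'hypervisor': 1, 'everything-else': 1}
        if machine_count == 3:
            return {'hypervisor': 2, 'everything-else': 1}
        if machine_count == 4:
            return {'hypervisor': 2, 'ha-everything-else': 2}
        # 5+: keep the HA pair and scale out hypervisors.
        return {'hypervisor': machine_count - 2, 'ha-everything-else': 2}

)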
> 
> 1: add support for heterogeneous nodes:
>  - for each service (storage compute etc) supply a list of flavors
> we're willing to have that run on
>  - pass that into the heat template
>  - teach heat to deal with flavor specific resource exhaustion by
> asking for a different flavor (or perhaps have nova accept multiple
> flavors and 'choose one that works'): details to be discussed with
> heat // nova at the right time.
> 
> 2: add support for anti-affinity for HA setups:
>  - here we get into the question about short term deliverables vs long
> term desire, but at least we'll have a polished installer already.
> 
> -Rob
> 
> The important point here is that we agree on starting with the very basics -
> then growing. Which is great.
> 
> The whole deployment workflow (not just the UI) is all about user experience,
> which is built on top of TripleO's approach. Here I see two important
> factors:
> - There are users who have some needs and expectations.
> - There is the underlying concept of TripleO, which we are using for
> implementing features which satisfy those needs.
> 
> We are circling around and trying to approach the problem from the wrong end -
> which is the implementation point of view (how to avoid our own scheduling).
> 
> Let's try to get out of the box and start by thinking about our audience first
> - what they expect, what they need. Then we go back, put our implementation
> thinking hats on and find out how we are going to re-use OpenStack components
> to achieve our goals. In the end we have a detailed plan.
> 
> 
> === Users ===
> 
> I would like to start with our targeted audience first - without milestones,
> without implementation details.
> 
> I think this is the main point where I disagree and which leads to different
> approaches. I don't think that the user of TripleO cares only about deploying
> infrastructure without any knowledge of where things go. That is the overcloud
> user's approach - 'I want a VM and I don't care where it runs'. Those are
> self-service users / cloud users. I know we are OpenStack on OpenStack, but
> we shouldn't go so far as to expect the same behavior from undercloud users.
> I can give you various examples of why the operator will care about where
> the image goes and what runs on a specific node.
> 
> One quick example:
> I have three racks of homogeneous hardware and I want to design it so
> that I have one control node in each, 3 storage nodes and the rest compute.
> With that smart deployment, I'll never know what my rack contains in the
> end. But if I have control over stuff, I can say that this node is a
> controller, those three are storage and those are compute - I am happy from
> the very beginning.
> 
> Our targeted audience are sysadmins and operators. They hate 'magic'. They
> want to have control over the things they are doing. If we put in front of
> them a workflow where they click one button and get a cloud installed, they
> will be horrified.
> 
> That's why I am very sure and convinced that we need the ability for the user
> to have control over things - which node has which role. We can be smart,
> suggest and advise. But not hide this functionality from the user. Otherwise,
> I am afraid that

Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-07 Thread Tzu-Mainn Chen
> Hi, like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
> 
> Please see Russell's excellent stats:
> http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
> 
> For joining and retaining core I look at the 90 day statistics; folk
> who are particularly low in the 30 day stats get a heads up: it's not
> a purely mechanical process :).
> 
> As we've just merged review teams with Tuskar devs, we need to allow
> some time for everyone to get up to speed; so folk who are core as
> a result of the merge will be retained as core, but by November I expect
> the stats will have normalised somewhat and that special handling
> won't be needed.
> 
> IMO these are the reviewers doing enough over 90 days to meet the
> requirements for core:
> 
> |   lifeless **    | 349  8 140   2 199  57.6% |    2 (  1.0%)  |
> | clint-fewbar **  | 329  2  54   1 272  83.0% |    7 (  2.6%)  |
> | cmsj **          | 248  1  25   1 221  89.5% |   13 (  5.9%)  |
> | derekh **        |  88  0  28  23  37  68.2% |    6 ( 10.0%)  |
> 
> Who are already core, so thats easy.
> 
> If you are core, and not on that list, that may be because you're
> coming from tuskar, which doesn't have 90 days of history, or you need
> to get stuck into some more reviews :).
> 
> Now, 30 day history - this is the heads up for folk:
> 
> | clint-fewbar **  | 179  2  27   0 150  83.8% |    6 (  4.0%)  |
> | cmsj **          | 179  1  15   0 163  91.1% |   11 (  6.7%)  |
> |   lifeless **    | 129  3  39   2  85  67.4% |    2 (  2.3%)  |
> | derekh **        |  41  0  11   0  30  73.2% |    0 (  0.0%)  |
> |  slagle          |  37  0  11  26   0  70.3% |    3 ( 11.5%)  |
> | ghe.rivero       |  28  0   4  24   0  85.7% |    2 (  8.3%)  |
> 
> 
> I'm using the fairly simple metric of 'average at least one review a
> day' as a proxy for 'sees enough of the code and enough discussion of
> the code to be an effective reviewer'. James and Ghe, good stuff -
> you're well on your way to core. If you're not in that list, please
> treat this as a heads-up that you need to do more reviews to keep on
> top of what's going on, whether to become core or to keep it.
> 
> In next month's update I'll review whether to remove some folk that
> aren't keeping on top of things, as it won't be a surprise :).
> 
> Cheers,
> Rob
> 
> 
> 
> 
> 
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Hi,

I feel like I should point out that before tuskar merged with tripleo, we had
some distinction between the team working on the tuskar api and the team
working on the UI, with each team focusing reviews on its particular expertise.
The latter team works quite closely with horizon, to the extent of spending a
lot of time involved with horizon development and blueprints.  This is done so
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make sense...?
tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a certain
level of horizon and UI expertise is definitely helpful in reviewing the UI
patches.

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tzu-Mainn Chen
> Hi everyone,
> 
> Some of us Tuskar developers have had the chance to meet the TripleO
> developers face to face and discuss the visions and goals of our projects.
> 
> Tuskar's ultimate goal is to have a full OpenStack management
> solution: letting the cloud operators try OpenStack, install it, keep it
> running throughout the entire lifecycle (including bringing in new
> hardware, burning it in, decommissioning), help to scale it, secure the
> setup, monitor for failures, project the need for growth and so on.
> 
> And to provide a good user interface and API to let the operators
> control and script this easily.
> 
> Now, the scope of the OpenStack Deployment program (TripleO) includes
> not just installation, but the entire lifecycle management (from racking
> it up to decommissioning). Among other things they're thinking of are
> issue tracker integration and inventory management, but these could
> potentially be split into a separate program.
> 
> That means we do have a lot of goals in common and we've just been going
> at them from different angles: TripleO building the fundamental
> infrastructure while Tuskar focusing more on the end user experience.
> 
> We've come to a conclusion that it would be a great opportunity for both
> teams to join forces and build this thing together.
> 
> The benefits for Tuskar would be huge:
> 
> * being a part of an incubated project
> * more eyeballs (see Linus' Law (the ESR one))
> * better information flow between the current Tuskar and TripleO teams
> * better chance at attracting early users and feedback
> * chance to integrate earlier into an OpenStack release (we could make
> it into the *I* one)
> 
> TripleO would get a UI and more developers trying it out and helping
> with setup and integration.
> 
> This shouldn't even need to derail us much from the rough roadmap we
> planned to follow in the upcoming months:
> 
> 1. get things stable and robust enough to demo in Hong Kong on real hardware
> 2. include metrics and monitoring
> 3. security
> 
> What do you think?
> 
> Tomas

I think this is great.  I would like to understand the organization of the
teams and the code, but I assume that is forthcoming?

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Results of voting for Glossary (1st round)

2013-09-19 Thread Tzu-Mainn Chen
Hey all,

To assist with the naming and revoting, Matt and I put together a glossary and 
a diagram with our understanding of the terms:

https://wiki.openstack.org/wiki/Tuskar/Glossary
http://ma.ttwagner.com/tuskar-diagram-draft/

Thanks,
Tzu-Mainn Chen

- Original Message -
> Hey buddies,
> 
> 1st round of voting has happened during the weekly meetings, you can see the
> log here:
> http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-17-19.00.html
> 
> There are a few options which need a revote, so I updated the etherpad with
> suggested names: https://etherpad.openstack.org/tuskar-naming
> 
> Please think it through and throw in other suggestions so that next week we can
> close the naming topic completely.
> 
> Thanks a lot for participation
> -- Jarda
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev