Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-15 Thread Dmitry Tantsur
On Mon, 2014-09-15 at 11:04 -0700, Jim Rollenhagen wrote:
> On Mon, Sep 15, 2014 at 12:44:24PM +0100, Steven Hardy wrote:
> > All,
> > 
> > Starting this thread as a follow-up to a strongly negative reaction by the
> > Ironic PTL to my patches[1] adding initial Heat->Ironic integration, and
> > subsequent very detailed justification and discussion of why they may be
> > useful in this spec[2].
> > 
> > Back in Atlanta, I had some discussions with folks interesting in making
> > "ready state"[3] preparation of bare-metal resources possible when
> > deploying bare-metal nodes via TripleO/Heat/Ironic.
> > 
> > The initial assumption is that there is some discovery step (either
> > automatic or static generation of a manifest of nodes), that can be input
> > to either Ironic or Heat.
> 
> We've discussed this a *lot* within Ironic, and have decided that
> auto-discovery (with registration) is out of scope for Ironic. 
Even if there is such an agreement, this is the first time I've heard about it.
All previous discussions _I'm aware of_ (e.g. at the midcycle) ended up with
"we can discover only things that are required for scheduling". When did
that change?

> In my
> opinion, this is straightforward enough for operators to write small
> scripts to take a CSV/JSON/whatever file and register the nodes in that
> file with Ironic. This is what we've done at Rackspace, and it's really
> not that annoying; the hard part is dealing with incorrect data from
> the (vendor|DC team|whatever).
Provided this CSV contains all the required data, not only IPMI
credentials, which IIRC is often the case.
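For illustration, the kind of small enrollment script described above could look
roughly like the sketch below. It assumes python-ironicclient; the CSV column
names, the credentials and the endpoint are made up for this example.

import csv
import sys

from ironicclient import client

# Placeholder credentials/endpoint; point these at a real Keystone.
ironic = client.get_client(1, os_auth_url='http://keystone:5000/v2.0',
                           os_username='admin', os_password='secret',
                           os_tenant_name='admin')

with open(sys.argv[1]) as f:
    for row in csv.DictReader(f):
        # Register the node with its power credentials and scheduling data
        node = ironic.node.create(
            driver='pxe_ipmitool',
            driver_info={'ipmi_address': row['ipmi_address'],
                         'ipmi_username': row['ipmi_username'],
                         'ipmi_password': row['ipmi_password']},
            properties={'cpus': row['cpus'],
                        'memory_mb': row['memory_mb'],
                        'local_gb': row['local_gb']})
        # One port per MAC so the node can be found during PXE boot
        ironic.port.create(node_uuid=node.uuid, address=row['mac'])
        print('Enrolled %s as node %s' % (row['mac'], node.uuid))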

> 
> That said, I like the thought of Ironic having a bulk-registration
> feature with some sort of specified format (I imagine this would just be
> a simple JSON list of node objects).
> 
> We are likely doing a session on discovery in general in Paris. It seems
> like the main topic will be about how to interface with external
> inventory management systems to coordinate node discovery. Maybe Heat is
> a valid tool to integrate with here, maybe not.
> 
> > Following discovery, but before an undercloud deploying OpenStack onto the
> > nodes, there are a few steps which may be desired, to get the hardware into
> > a state where it's ready and fully optimized for the subsequent deployment:
> 
> These pieces are mostly being done downstream, and (IMO) in scope for
> Ironic in the Kilo cycle. More below.
> 
> > - Updating and aligning firmware to meet requirements of qualification or
> >   site policy
> 
> Rackspace does this today as part of what we call "decommissioning".
> There are patches up for review for both ironic-python-agent (IPA) [1] and
> Ironic [2] itself. We have support for 1) flashing a BIOS on a node, and
> 2) Writing a set of BIOS settings to a node (these are embedded in the agent
> image as a set, not through an Ironic API). These are both implemented as
> a hardware manager plugin, and so can easily be vendor-specific.
> 
> I expect this to land upstream in the Kilo release.
> 
> > - Optimization of BIOS configuration to match workloads the node is
> >   expected to run
> 
> The Ironic team has also discussed this, mostly at the last mid-cycle
> meetup. We'll likely have a session on "capabilities", which we think
> might be the best way to handle this case. Essentially, a node can be
> tagged with arbitrary capabilities, e.g. "hypervisor", which Nova
> (flavors?) could use for scheduling, and Ironic drivers could use to do
> per-provisioning work, like setting BIOS settings. This may even tie in
> with the next point.
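As a concrete sketch of how such tagging could be expressed with the existing
clients (the "hypervisor" key is purely illustrative, and the credentials,
endpoint and node UUID are placeholders):

from ironicclient import client as ir_client
from novaclient import client as nova_client

ironic = ir_client.get_client(1, os_auth_url='http://keystone:5000/v2.0',
                              os_username='admin', os_password='secret',
                              os_tenant_name='admin')
nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')

node_uuid = 'NODE-UUID'  # placeholder

# Capabilities live in Node.properties as a "key1:value1,key2:value2" string
ironic.node.update(node_uuid, [{'op': 'add',
                                'path': '/properties/capabilities',
                                'value': 'hypervisor:true'}])

# The matching flavor carries the same capability as an extra spec,
# so scheduling only picks nodes tagged this way
nova.flavors.find(name='baremetal-hypervisor').set_keys(
    {'capabilities:hypervisor': 'true'})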
> 
> Looks like Jay just ninja'd me a bit on this point. :)
> 
> > - Management of machine-local storage, e.g configuring local RAID for
> >   optimal resilience or performance.
> 
> I don't see why Ironic couldn't do something with this in Kilo. It's
> dangerously close to the "inventory management" line, however I think
> it's reasonable for a user to specify that his or her root partition
> should be on a RAID or a specific disk out of many in the node.
> 
> > Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
> > of these steps possible, but there's no easy way to either encapsulate the
> > (currently mostly vendor specific) data associated with each step, or to
> > coordinate sequencing of the steps.
> 
> It's important to remember that just because a blueprint/spec exists,
> does not mean it will be approved. :) I don't expect the "DRAC
> discovery" blueprint to go through, and the "DRAC RAID" blueprint is
> questionable, with regards to scope.
> 
> > What is required is some tool to take a text definition of the required
> > configuration, turn it into a correctly sequenced series of API calls to
> > Ironic, expose any data associated with those API calls, and declare
> > success or failure on completion.  This is what Heat does.
> 
> This is a fair point, however none of these use cases have code landed
> in mainline Ironic, and certainly don't have APIs exposed, with the
> exc

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-17 Thread Dmitry Tantsur
On Tue, 2014-09-16 at 15:42 -0400, Zane Bitter wrote:
> On 16/09/14 15:24, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter  wrote:
> >> On 16/09/14 13:56, Devananda van der Veen wrote:
> >>>
> >>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy  wrote:
> 
>  For example, today, I've been looking at the steps required for driving
>  autodiscovery:
> 
>  https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> 
>  Driving this process looks a lot like application orchestration:
> 
>  1. Take some input (IPMI credentials and MAC addresses)
>  2. Maybe build an image and ramdisk(could drop credentials in)
>  3. Interact with the Ironic API to register nodes in maintenance mode
>  4. Boot the nodes, monitor state, wait for a signal back containing some
>   data obtained during discovery (same as WaitConditions or
>   SoftwareDeployment resources in Heat..)
>  5. Shutdown the nodes and mark them ready for use by nova
> 
> >>>
> >>> My apologies if the following sounds snarky -- but I think there are a
> >>> few misconceptions that need to be cleared up about how and when one
> >>> might use Ironic. I also disagree that 1..5 looks like application
> >>> orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> >>> this doesn't look at all like describing or launching an application
> >>> to me.
> >>
> >>
> >> +1 (Although step 3 does sound to me like something that matches Heat's
> >> scope.)
> >
> > I think it's a simplistic use case, and Heat supports a lot more
> > complexity than is necessary to enroll nodes with Ironic.
> >
> >>
> >>> Step 1 is just parse a text file.
> >>>
> >>> Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> >>> images need to be built and loaded in Glance, and the image UUID(s)
> >>> need to be set on each Node in Ironic (or on the Nova flavor, if going
> >>> that route) after enrollment. Sure, Heat can express this
> >>> declaratively (ironic.node.driver_info must contain key:deploy_kernel
> >>> with value:), but are you suggesting that Heat build the images,
> >>> or just take the UUIDs as input?
> >>>
> >>> Step 3 is, again, just parse a text file
> >>>
> >>> I'm going to make an assumption here [*], because I think step 4 is
> >>> misleading. You shouldn't "boot a node" using Ironic -- you do that
> >>> through Nova. And you _dont_ get to specify which node you're booting.
> >>> You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> >>> available node from the pool of nodes that match the request.
> >>
> >>
> >> I think your assumption is incorrect. Steve is well aware that provisioning
> >> a bare-metal Ironic server is done through the Nova API. What he's
> >> suggesting here is that the nodes would be booted - not Nova-booted, but
> >> booted in the sense of having power physically applied - while in
> >> maintenance mode in order to do autodiscovery of their capabilities,
> >
> > Except simply applying power doesn't, in itself, accomplish anything
> > besides causing the machine to power on. Ironic will only prepare the
> > PXE boot environment when initiating a _deploy_.
> 
>  From what I gather elsewhere in this thread, the autodiscovery stuff is 
> a proposal for the future, not something that exists in Ironic now, and 
> that may be the source of the confusion.
> 
> In any case, the etherpad linked at the top of this email was written by 
> someone in the Ironic team and _clearly_ describes PXE booting a 
> "discovery image" in maintenance mode in order to obtain hardware 
> information about the box.
If was written by me and it seems to be my fault that I didn't state
there more clear that this work is not and probably will not be merged
into Ironic upstream. Sorry for the confusion.

That said, my experiments proved quite possible (though not without some
network-related hacks as of now) to follow these steps to collect (aka
discover) hardware information required for scheduling from a node,
knowing only it's IPMI credentials.

> 
> cheers,
> Zane.
> 
> >> which
> >> is presumably hard to do automatically when they're turned off.
> >
> > Vendors often have ways to do this while the power is turned off, eg.
> > via the OOB management interface.
> >
> >> He's also
> >> suggesting that Heat could drive this process, which I happen to disagree
> >> with because it is a workflow not an end state.
> >
> > +1
> >
> >> However the main takeaway
> >> here is that you guys are talking completely past one another, and have 
> >> been
> >> for some time.
> >>
> >
> > Perhaps more detail in the expected interactions with Ironic would be
> > helpful and avoid me making (perhaps incorrect) assumptions.
> >
> > -D
> >

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-17 Thread Dmitry Tantsur
On Wed, 2014-09-17 at 10:36 +0100, Steven Hardy wrote:
> On Tue, Sep 16, 2014 at 02:06:59PM -0700, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter  wrote:
> > > On 16/09/14 15:24, Devananda van der Veen wrote:
> > >>
> > >> On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter  wrote:
> > >>>
> > >>> On 16/09/14 13:56, Devananda van der Veen wrote:
> > 
> > 
> >  On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy  
> >  wrote:
> > >
> > >
> > > For example, today, I've been looking at the steps required for 
> > > driving
> > > autodiscovery:
> > >
> > > https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> > >
> > > Driving this process looks a lot like application orchestration:
> > >
> > > 1. Take some input (IPMI credentials and MAC addresses)
> > > 2. Maybe build an image and ramdisk(could drop credentials in)
> > > 3. Interact with the Ironic API to register nodes in maintenance mode
> > > 4. Boot the nodes, monitor state, wait for a signal back containing
> > > some
> > >  data obtained during discovery (same as WaitConditions or
> > >  SoftwareDeployment resources in Heat..)
> > > 5. Shutdown the nodes and mark them ready for use by nova
> > >
> > 
> >  My apologies if the following sounds snarky -- but I think there are a
> >  few misconceptions that need to be cleared up about how and when one
> >  might use Ironic. I also disagree that 1..5 looks like application
> >  orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> >  this doesn't look at all like describing or launching an application
> >  to me.
> > >>>
> > >>>
> > >>>
> > >>> +1 (Although step 3 does sound to me like something that matches Heat's
> > >>> scope.)
> > >>
> > >>
> > >> I think it's a simplistic use case, and Heat supports a lot more
> > >> complexity than is necessary to enroll nodes with Ironic.
> > >>
> > >>>
> >  Step 1 is just parse a text file.
> > 
> >  Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> >  images need to be built and loaded in Glance, and the image UUID(s)
> >  need to be set on each Node in Ironic (or on the Nova flavor, if going
> >  that route) after enrollment. Sure, Heat can express this
> >  declaratively (ironic.node.driver_info must contain key:deploy_kernel
> >  with value:), but are you suggesting that Heat build the images,
> >  or just take the UUIDs as input?
> > 
> >  Step 3 is, again, just parse a text file
> > 
> >  I'm going to make an assumption here [*], because I think step 4 is
> >  misleading. You shouldn't "boot a node" using Ironic -- you do that
> >  through Nova. And you _dont_ get to specify which node you're booting.
> >  You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> >  available node from the pool of nodes that match the request.
> > >>>
> > >>>
> > >>>
> > >>> I think your assumption is incorrect. Steve is well aware that
> > >>> provisioning
> > >>> a bare-metal Ironic server is done through the Nova API. What he's
> > >>> suggesting here is that the nodes would be booted - not Nova-booted, but
> > >>> booted in the sense of having power physically applied - while in
> > >>> maintenance mode in order to do autodiscovery of their capabilities,
> > >>
> > >>
> > >> Except simply applying power doesn't, in itself, accomplish anything
> > >> besides causing the machine to power on. Ironic will only prepare the
> > >> PXE boot environment when initiating a _deploy_.
> > >
> > >
> > > From what I gather elsewhere in this thread, the autodiscovery stuff is a
> > > proposal for the future, not something that exists in Ironic now, and that
> > > may be the source of the confusion.
> > >
> > > In any case, the etherpad linked at the top of this email was written by
> > > someone in the Ironic team and _clearly_ describes PXE booting a 
> > > "discovery
> > > image" in maintenance mode in order to obtain hardware information about 
> > > the
> > > box.
> > >
> > 
> > Huh. I should have looked at that earlier in the discussion. It is
> > referring to out-of-tree code whose spec was not approved during Juno.
> > 
> > Apparently, and unfortunately, throughout much of this discussion,
> > folks have been referring to potential features Ironic might someday
> > have, whereas I have been focused on the features we actually support
> > today. That is probably why it seems we are "talking past each other."
> 
> FWIW I think a big part of the problem has been that you've been focussing
> on the fact that my solution doesn't match your preconceived ideas of how
> Ironic should interface with the world, while completely ignoring the
> use-case, e.g the actual problem I'm trying to solve.
> 
> That is why I'm referring to features Ironic might someday have - because
> Ironic currently does not solve my prob

Re: [openstack-dev] [ironic] Team gathering at the Forum?

2018-11-01 Thread Dmitry Tantsur

Hi,

You mean Lindenbräu, right? I was thinking of changing the venue, but if you're 
there, I think we should go there too! I just hope they can still fit 15 ppl :) 
Will try reserving tomorrow.


Dmitry

On 11/1/18 12:11 AM, Stig Telfer wrote:

Hello Ironicers -

We’ve booked the same venue for the Scientific SIG for Wednesday evening, and 
hopefully we’ll see you there.  There’s plenty of cross-over between our 
groups, particularly at an operator level.

Cheers,
Stig



On 29 Oct 2018, at 14:58, Dmitry Tantsur  wrote:

Hi folks!

This is your friendly reminder to vote on the day. Even if you're fine with all 
days, please leave a vote, so that we know how many people are coming. We will 
need to make a reservation, and we may not be able to accommodate more people 
than voted!

Dmitry

On 10/22/18 6:06 PM, Dmitry Tantsur wrote:

Hi ironicers! :)
We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.
Dmitry





[openstack-dev] [ironic] Team gathering at the Forum

2018-11-03 Thread Dmitry Tantsur

Hi Ironicers!

Good news: I have made the reservation for the gathering! :) It will happen on 
Wednesday, November 14, 2018 at 7 p.m. at the restaurant Lindenbräu am Potsdamer 
Platz (https://goo.gl/maps/DYb5ikGGmdw). I will depart from the summit venue at 
around 6:15. Follow the hangouts chat (to be set up) for any last-minute changes.


If you come along, you will need to get to Potsdamer Platz; there are S-Bahn, 
U-Bahn, train and bus stations there. Google suggests taking S3/S9 (direction 
Erkner/Airport) from Messe Süd first, then switching to S1/S2/S25/S26 at 
Friedrichstraße. Those going from the Crowne Plaza Hotel can take bus 200 directly 
to Potsdamer Platz. You'll need an A-B zone ticket for the trip. The 
restaurant is located in a courtyard behind the tall DB building.


If you want to come but did not sign up in the doodle, please let me know 
off-list.

See you in Berlin,
Dmitry

On 10/29/18 3:58 PM, Dmitry Tantsur wrote:

Hi folks!

This is your friendly reminder to vote on the day. Even if you're fine with all 
days, please leave a vote, so that we know how many people are coming. We will 
need to make a reservation, and we may not be able to accommodate more people 
than voted!


Dmitry

On 10/22/18 6:06 PM, Dmitry Tantsur wrote:

Hi ironicers! :)

We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.


Dmitry







[openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur

Hi all,

Not sure how official the information about the next summit is, but it's on the 
web site [1], so I guess it's worth asking.


Are we planning for the summit to overlap with the May holidays? The 1st of May 
is a holiday in a big part of the world. We would be asking people to skip it, in 
addition to the 3+ weekend days they'll have to spend working and traveling.


To make it worse, 1-3 May are holidays in Russia this time. To make it even 
worse than worse, the week of the 29th is the Golden Week in Japan [2]. Was this 
considered? Is it possible to move the dates to a less conflicting time (mid-May 
maybe)?


Dmitry

[1] https://www.openstack.org/summit/denver-2019/
[2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)



Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur
On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote:
> *removes all of the hats*
>
> *removes years of dust from unrelated event planning hat, and puts it on
> for a moment*
>
> In my experience, events of any nature where convention venue space is
> involved, are essentially set in stone before being publicly advertised as
> contracts are put in place for hotel room booking blocks as well as the
> convention venue space. These spaces are also typically in a relatively
> high demand limiting the access and available times to schedule. Often
> venues also give preference (and sometimes even better group discounts) to
> repeat events as they are typically a known entity and will have somewhat
> known needs so the venue and hotel(s) can staff appropriately.
>
> tl;dr, I personally wouldn't expect any changes to be possible at this
> point.
>
> *removes event planning hat of past life, puts personal scheduling hat on*
>
> I imagine that as a community, it is near impossible to schedule something
> avoiding holidays for everyone in the community.
>

I'm not talking about everyone. And I'm mostly fine as far as my own holiday goes,
but the conflicts with Russia and Japan seem huge. This certainly does not help our
effort to engage people outside of NA/EU.

Quick googling suggests that the week of May 13th would have far fewer
conflicts.


> I personally have lost count of the number of holidays and special days
> that I've spent on business trips over the past four years. While I may be
> an out-lier in my feelings on this subject, I'm not upset, annoyed, or even
> bitter about lost times. This community is part of my family.
>

Sure :)

But outside of our nice small circle there is a huge world of people who
may not share our feelings and our level of commitment to OpenStack. These are
the occasional contributors we talked about when discussing the cycle length. I
don't think asking them to give up 3-5 days of holidays is a productive way
to engage them.

And again, as much as I love meeting you all, I think we're outgrowing the
format of these meetings..

Dmitry


> -Julia
>
> On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:
>
>> Hi all,
>>
>> Not sure how official the information about the next summit is, but it's
>> on the
>> web site [1], so I guess worth asking..
>>
>> Are we planning for the summit to overlap with the May holidays? The 1st
>> of May
>> is a holiday in big part of the world. We ask people to skip it in
>> addition to
>> 3+ weekend days they'll have to spend working and traveling.
>>
>> To make it worse, 1-3 May are holidays in Russia this time. To make it
>> even
>> worse than worse, the week of 29th is the Golden Week in Japan [2]. Was
>> it
>> considered? Is it possible to move the days to less conflicting time
>> (mid-May
>> maybe)?
>>
>> Dmitry
>>
>> [1] https://www.openstack.org/summit/denver-2019/
>> [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
>>


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Dmitry Tantsur
Hi. This seems to be related:
https://bugs.launchpad.net/openstack-ci/+bug/1274135
We also encountered this.

On Tue, 2014-02-11 at 14:56 +0530, Swapnil Kulkarni wrote:
> Hello,
> 
> 
> I created a new devstack environment today and installed tox 1.7.0,
> and getting error "tox.ConfigError: ConfigError: substitution key
> 'posargs' not found".
> 
> 
> Details in [1].
> 
> 
> Anybody encountered similar error before? Any workarounds/updates
> needed?
> 
> 
> [1] http://paste.openstack.org/show/64178/
> 
> 
> 
> 
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap


[openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I encountered editing of flavors,
and I have some doubts about it.

Editing of Nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID.
For us this essentially means that all links to a flavor/profile (e.g. from
an overcloud role) will break. We had the following proposals:
- Update links automatically after editing, e.g. by fetching all
overcloud roles and fixing the flavor ID. This poses a risk of race conditions
with concurrent editing of either node profiles or overcloud roles.
  Even worse, are we sure the user really wants the overcloud roles to be
updated?
- The same as the previous, but with confirmation from the user. Also a risk of
race conditions.
- Do not update links. The user may be confused: an operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to also show deleted flavors/profiles in a separate
table.
- Implement a clone operation instead of editing. This shows the user a creation
form with data prefilled from the original profile. The original profile
stays and has to be deleted manually. All links also have to be updated
manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).
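For illustration, Horizon's flavor "edit" boils down to roughly the following
sketch (python-novaclient; the flavor name and sizes are made up, the credentials
are placeholders), which is why every edit produces a new ID:

from novaclient import client as nova_client

# Placeholders: point these at a real Keystone endpoint and credentials.
nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')

old = nova.flavors.find(name='baremetal-control')
old_id = old.id

# What Horizon's flavor "edit" effectively does:
nova.flavors.delete(old)
new = nova.flavors.create(name='baremetal-control', ram=old.ram,
                          vcpus=old.vcpus, disk=old.disk + 10)

# The ID changes, so anything that stored old_id (e.g. an overcloud role)
# now points at a flavor that no longer exists.
assert new.id != old_id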

Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur




Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
I think we are still going to have multiple flavors for Icehouse, e.g.:
https://review.openstack.org/#/c/74762/
On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> 
> On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > Hi.
> >
> > While implementing CRUD operations for node profiles in Tuskar (which
> > are essentially Nova flavors renamed) I encountered editing of flavors
> > and I have some doubts about it.
> >
> > Editing of nova flavors in Horizon is implemented as
> > deleting-then-creating with a _new_ flavor ID.
> > For us it essentially means that all links to flavor/profile (e.g. from
> > overcloud role) will become broken. We had the following proposals:
> > - Update links automatically after editing by e.g. fetching all
> > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > concurrent editing of either node profiles or overcloud roles.
> >Even worse, are we sure that user really wants overcloud roles to be
> > updated?
> 
> This is a big question. Editing has always been a complicated concept in 
> Tuskar. How soon do you want the effects of the edit to be made live? 
> Should it only apply to future creations or should it be applied to 
> anything running off the old configuration? What's the policy on how to 
> apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
> something else)?
> 
> > - The same as previous but with confirmation from user. Also risk of
> > race conditions.
> > - Do not update links. User may be confused: operation called "edit"
> > should not delete anything, nor is it supposed to invalidate links. One
> > of the ideas was to show also deleted flavors/profiles in a separate
> > table.
> > - Implement clone operation instead of editing. Shows user a creation
> > form with data prefilled from original profile. Original profile will
> > stay and should be deleted manually. All links also have to be updated
> > manually.
> > - Do not implement editing, only creating and deleting (that's what I
> > did for now in https://review.openstack.org/#/c/73576/ ).
> 
> I'm +1 on not implementing editing. It's why we wanted to standardize on 
> a single flavor for Icehouse in the first place, the use cases around 
> editing or multiple flavors are very complicated.
> 
> > Any ideas on what to do?
> >
> > Thanks in advance,
> > Dmitry Tantsur
> >
> >


Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-19 Thread Dmitry Tantsur

On 11/18/2014 06:13 PM, Chris K wrote:

Hi all,

In an effort to keep the Ironic specs review queue as up to date as
possible, I have identified several specs that were proposed in the Juno
cycle and have not been updated to reflect the changes to the current
Kilo cycle.

I would like to set a deadline to either update them to reflect the Kilo
cycle or abandon them if they are no longer relevant.
If there are no objections I will abandon any specs on the list below
that have not been updated to reflect the Kilo cycle after the end of
the next Ironic meeting (Nov. 24th 2014).

Below is the list of specs I have identified that would be affected:
https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery Bits*

Killed it with fire :D


https://review.openstack.org/#/c/102557 - *Driver for NetApp storage arrays*
https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*

Imre, are you going to work on it?


https://review.openstack.org/#/c/103065 - *Design spec for iLO driver
for firmware settings*
https://review.openstack.org/#/c/108646 - *Add HTTP GET support for
vendor_passthru API*

This one is replaced by Lucas' work.


https://review.openstack.org/#/c/94923 - *Make the REST API fully
asynchronous*
https://review.openstack.org/#/c/103760 - *iLO Management Driver for
firmware update*
https://review.openstack.org/#/c/110217 - *Cisco UCS Driver*
https://review.openstack.org/#/c/96538 - *Add console log support*
https://review.openstack.org/#/c/100729 - *Add metric reporting spec.*
https://review.openstack.org/#/c/101122 - *Firmware setting design spec.*
https://review.openstack.org/#/c/96545 - *Reset service processor*
*
*
*This list may also be found on this ether pad:
https://etherpad.openstack.org/p/ironic-juno-specs-to-be-removed*
*
*
If you believe one of the above specs should not be abandoned please
update the spec to reflect the current Kilo cycle, or let us know that a
update is forth coming.

Please feel free to reply to this email, I will also bring this topic up
at the next meeting to ensure we have as much visibility as possible
before abandoning the old specs.

Thank you,
Chris Krelle
IRC: NobodyCam




Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Dmitry Tantsur

On 11/20/2014 04:38 PM, Ruby Loo wrote:

Hi, we had an interesting discussion on IRC about whether or not we
should be maintaining backwards compatibility within a release cycle. In
this particular case, we introduced a new decorator in this kilo cycle,
and were discussing the renaming of it, and whether it needed to be
backwards compatible to not break any out-of-tree driver using master.

Some of us (ok, me or I) think it doesn't make sense to make sure that
everything we do is backwards compatible. Others disagree and think we
should, or at least strive for 'must be' backwards compatible with the
caveat that there will be cases where this isn't
feasible/possible/whatever. (I hope I captured that correctly.)

Although I can see the merit (well, sort of) of trying our best, trying
doesn't mean 'must', and if it is 'must', who decides what can be
exempted from this, and how will we communicate what is exempted, etc?
It makes sense to try to preserve compatibility, especially for things 
that landed some time ago. For newly invented things, like this decorator, 
however, it makes no sense to me.


People consuming master have to be prepared. That does not mean that we 
should break them every week, obviously, but still. That's why we have 
releases: to promise stability to people. By consuming master you accept 
that things might occasionally break.
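To be clear, the shim for keeping a renamed decorator working is trivial
(something like the sketch below, with made-up names); my point is just that for
a brand-new internal decorator it isn't worth even that.

import warnings


def clean_step(func):
    # Hypothetical new name for the decorator.
    func._is_clean_step = True
    return func


def cleaning_step(func):
    # Hypothetical old name, kept as a deprecated alias for one cycle.
    warnings.warn('cleaning_step is deprecated, use clean_step instead',
                  DeprecationWarning)
    return clean_step(func)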


Thoughts?

--ruby




[openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-26 Thread Dmitry Tantsur
Hi all!

As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). The current proposal [1] suggests adding a method for
initiating discovery to the ManagementInterface. IMO it's not 100%
correct, because:
1. It's not management: we're not changing anything.
2. I'm aware that some folks want to use discoverd-based discovery [2] even
for DRAC and iLO (e.g. for vendor-specific additions that can't be
implemented OOB).
Any ideas?

Dmitry.

[1] https://review.openstack.org/#/c/100951/
[2] https://review.openstack.org/#/c/135605/


Re: [openstack-dev] [ironic][nova] ironic driver retries on ironic driver Conflict response

2014-11-28 Thread Dmitry Tantsur

Hi!

On 11/28/2014 11:41 AM, Murray, Paul (HP Cloud) wrote:

Hi All,

Looking at the ironic virt driver code in nova it seems that a Conflict
(409) response from the ironic client results in the driver re-trying
the request. Given the comment below in the ironic code I would imagine
that is not the right behavior – it reads as though this is something
that would fail on the retry as well.

> class Conflict(HTTPClientError):
>     """HTTP 409 - Conflict.
>
>     Indicates that the request could not be processed because of conflict
>     in the request, such as an edit conflict.
>     """
>
>     http_status = 409
>     message = _("Conflict")

> An example of this: if the virt driver attempts to assign an instance
> to a node that is in the power-on state, it will issue this Conflict
> response.
It's possible that a periodic background process is going on; retrying 
makes perfect sense for this case. By the way, we're trying to get away from 
background processes causing Conflict.
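In other words, a retry loop on the caller's side along these lines is
reasonable (just a sketch: the exception import is my assumption based on the
class quoted above, and the real driver's retry count and delay differ):

import time

from ironicclient import exc  # assumption: Conflict is exposed here


def call_with_retries(func, *args, **kwargs):
    """Retry an ironicclient call while the node is locked/busy (HTTP 409)."""
    attempts = 6
    for attempt in range(1, attempts + 1):
        try:
            return func(*args, **kwargs)
        except exc.Conflict:
            # Typically transient, e.g. a periodic task holds the node lock.
            if attempt == attempts:
                raise
            time.sleep(2)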


Have I understood this or is there something about this I am not getting
right?

Paul

Paul Murray

Nova Technical Lead, HP Cloud

+44 117 316 2527






Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

Hi folks,

Thank you for the additional explanation, it does clarify things a bit. I'd 
like to note, however, that you talk a lot about how _different_ Fuel 
Agent is from what Ironic does now. I'd actually like to know how well 
it's going to fit into what Ironic does (in addition to your specific 
use cases). Hence my comments inline:


On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We install OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are bootable (if
any), which options to use for mounting file systems.
Many many various cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases can not be implemented as image
internals, some cases can not be also implemented on
configuration stage (placing root fs on lvm device).

As far as those use cases were rejected to be implemented in term of
IPA, we implemented so called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if your agent is a long-running thing and it gets 
its configuration from e.g. a JSON file, how can Ironic notify it of any 
changes?



* it has executable entry point[s]
* It uses local json file as it's input
* It is planned to implement ability to download input data via HTTP
(kind of metadata service)
* It is designed to be agnostic to input data format, not only Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images, file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol (currently
local file and HTTP link)
Does it support Glance? I understand it's HTTP, but it requires 
authentication.




So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.
My favorite use case is hardware introspection (aka getting data 
required for scheduling from a node automatically). Any ideas on this? 
(It's not a priority for this discussion, just curious).




According Fuel itself, our nearest plan is to get rid of Cobbler because
in the case of image based approach it is huge overhead. The question is
which tool we can use instead of Cobbler. We need power management,
we need TFTP management, we need DHCP management. That is
exactly what Ironic is able to do. Frankly, we can implement power/TFTP/DHCP
management tool independently, but as Devananda said, we're all working
on the same problems,
so let's do it together.  Power/TFTP/DHCP management is where we are
working on the same problems,
but IPA and Fuel Agent are about different use cases. This case is not
just Fuel, any mature
deployment case require advanced partition/fs management.
Taking into consideration that you're building a generic OS installation 
tool... yeah, it starts to make some sense. For a cloud, advanced partitioning 
is definitely a "pet" case.


However, for

me it is OK, if it is easily possible
to use Ironic with external drivers (not merged to Ironic and not tested
on Ironic CI).

AFAIU, this spec https://review.openstack.org/#/c/138115/ does not
assume changing Ironic API and core.
Jim asked about how Fuel Agent will know about advanced disk
partitioning scheme if API is not
changed. The answer is simple: Ironic is supposed to send a link to
metadata service (http or local file)
where Fuel Agent can download input json data.
That's not about not changing Ironic. Changing Ironic is OK for 
reasonable use cases - we are doing a huge change right now to accommodate 
zapping, hardware introspection and RAID configuration.


I actually have problems with this particular statement. It does not 
sound like Fuel Agent will integrate enough with Ironic. This JSON file: 
who is going to generate it? In the most popular use case we're driven 
by Nova. Will Nova generate this file?


If the answer is "generate it manually for ever

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

On 12/09/2014 03:40 PM, Vladimir Kozhukalov wrote:



Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi folks,

Thank you for additional explanation, it does clarify things a bit.
I'd like to note, however, that you talk a lot about how _different_
Fuel Agent is from what Ironic does now. I'd like actually to know
how well it's going to fit into what Ironic does (in additional to
your specific use cases). Hence my comments inline:



On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We
install OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using
Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose
which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV
or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are
bootable (if
any), which options to use for mounting file systems.
Many many various cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases can not be implemented as image
internals, some cases can not be also implemented on
configuration stage (placing root fs on lvm device).

As far as those use cases were rejected to be implemented in term of
IPA, we implemented so called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if you agent is a long-running thing and it gets
it's configuration from e.g. JSON file, how can Ironic notify it of
any changes?

Fuel Agent is not a long-running service. Currently there is no need to
have a REST API. If we deal with some kind of keep-alive stuff for
inventory/discovery then we will probably add an API. Frankly, the IPA REST API
is not REST at all. However, that is not a reason not to call it a
feature and throw it away. It is a reason to work on it and improve it.
That is how I try to look at things (pragmatically).

Fuel Agent has executable entry point[s] like /usr/bin/provision. You
can run this entry point with options (oslo.config) and point out where
to find the input json data. It is supposed that Ironic will use an ssh
connection (currently in Fuel we use mcollective) and run this, waiting for the
exit code. If the exit code is equal to 0, provisioning is done. Extremely simple.
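Conceptually, the Ironic side of that contract is as small as the following
sketch (the host and the --input-data-file flag are made up for illustration):

import subprocess

# Sketch of the "run over ssh and wait for the exit code" contract.
ret = subprocess.call(['ssh', 'root@10.20.0.3', '/usr/bin/provision',
                       '--input-data-file', '/tmp/provision.json'])
if ret != 0:
    raise RuntimeError('provisioning failed with exit code %d' % ret)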

* it has executable entry point[s]
* It uses local json file as it's input
* It is planned to implement ability to download input data via HTTP
(kind of metadata service)
* It is designed to be agnostic to input data format, not only Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images,
file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol
(currently
local file and HTTP link)

Does it support Glance? I understand it's HTTP, but it requires
authentication.


So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.

My favorite use case is hardware introspection (aka getting data
required for scheduling from a node automatically). Any ideas on
this? (It's not a priority for this discussion, just curious).


That is exactly what we do in Fuel. Currently we use a so-called 'Default'
pxelinux config, and all nodes being powered on are supposed to boot with a
so-called 'Bootstrap' ramdisk, where an Ohai-based agent (not Fuel Agent)
runs periodically and sends a hardware report to the Fuel master node.
The user is then able to look at CPU, hard drive and network info and choose
which nodes to use for controllers, wh

[openstack-dev] [Ironic] ironic-discoverd status update

2014-12-11 Thread Dmitry Tantsur

Hi all!

As you know, I actively promote the ironic-discoverd project [1] as one of 
the means to do hardware inspection for Ironic (see e.g. the spec [2]), so I 
decided it's worth giving some updates to the community from time to 
time. This email is purely informative, you may safely skip it if 
you're not interested.


Background
==

The discoverd project (I usually skip the "ironic-" part when talking 
about it) solves the problem of populating information about a node in the 
Ironic database without the help of any vendor-specific tool. This 
information usually includes Nova scheduling properties (CPU, RAM, disk 
size) and MACs for ports.


Introspection is done by booting a ramdisk on a node, collecting data 
there and posting it back to the discoverd HTTP API. Thus discoverd actually 
consists of 2 components: the service [1] and the ramdisk [3]. The 
service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in the Ironic 
database and updating node properties with the new data.
* Managing iptables so that the default PXE environment for 
introspection does not interfere with Neutron
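The ramdisk side of that contract is tiny; conceptually it boils down to
something like the sketch below (the exact endpoint, port and field names
depend on the discoverd version you run, and the values shown are made up).

import requests

# Data collected on the node; values are made up for illustration.
data = {
    'cpus': 8,
    'cpu_arch': 'x86_64',
    'memory_mb': 16384,
    'local_gb': 100,
    # MACs are used to look the node up in Ironic and to create ports
    'interfaces': {'eth0': {'mac': '52:54:00:12:34:56', 'ip': '192.0.2.10'}},
}

# The discoverd address is normally passed to the ramdisk on the kernel
# command line; 5050 is the default port.
requests.post('http://192.0.2.1:5050/v1/continue', json=data).raise_for_status()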


The project was born from a series of patches to Ironic itself, after we 
discovered that this change was going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and its RPM is a part of Juno 
RDO. After the Paris summit, we agreed on bringing it closer to the 
Ironic upstream, and now discoverd is hosted on StackForge and tracks 
bugs on Launchpad.


Future
==

The basic feature of discoverd, supplying Ironic with the properties required 
for scheduling, is pretty much finished as of the latest stable series, 0.2.


However, more features are planned for the 1.0.0 release this January [5]. 
They go beyond the bare minimum of finding out CPU, RAM, disk size and 
NIC MACs.


Pluggability
~~~

An interesting feature of discoverd is support for plugins, which I 
prefer to call hooks. It's possible to hook into the introspection data 
processing chain in 2 places:
* Before any data processing. This opens an opportunity to adapt discoverd 
to ramdisks that have a different data format. The only requirement is 
that the ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for 
MACs, but before any actual data update. This gives an opportunity to 
alter which properties discoverd is going to update.


Actually, even the default logic of updating Node.properties is contained 
in a plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py 
[6]. This pluggability opens wide opportunities for integrating with 3rd-party 
ramdisks and CMDBs (which, as we know, Ironic is not ;).
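As a taste of the hook API, a do-nothing-much hook looks roughly like the
sketch below; the method names and signatures are from memory, so check
ironic_discoverd/plugins/base.py in the version you run for the exact interface.

from ironic_discoverd.plugins import base


class ExampleHook(base.ProcessingHook):
    """Example processing hook; signatures are approximate."""

    def before_processing(self, node_info):
        # Called as soon as the ramdisk data arrives; may fix up its format.
        node_info.setdefault('cpu_arch', 'x86_64')

    def before_update(self, node, ports, node_info):
        # Called after the node is found in Ironic but before any update;
        # returns patches to apply to the node and to its ports.
        node_patches = [{'op': 'add', 'path': '/extra/inspected', 'value': True}]
        port_patches = {}
        return node_patches, port_patches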


Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent 
set of patches [7] introduces a possibility to request manual power-on 
of the machine and to update the IPMI credentials via the ramdisk to the 
expected values. Note that support for this feature in the reference 
ramdisk [3] is not ready yet. Also note that this scenario is only 
possible when using discoverd directly via its API, not via the Ironic API 
like in [2].


Get Involved


Discoverd terribly lacks reviews. Our team is very small and 
self-approving is not a rare case. I'm not even against fast-tracking 
any existing Ironic core to a discoverd core after a couple of 
meaningful reviews :)


And of course patches are welcome, especially plugins for integration 
with existing systems doing similar things, and with CMDBs. Patches are 
accepted via the usual Gerrit workflow. Ideas are accepted as Launchpad 
blueprints (we do not follow the Gerrit spec process right now).


Finally, please comment on the Ironic spec [2], I'd like to know what 
you think.


References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3] 
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk

[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6] 
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7] 
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials




Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-05 Thread Dmitry Tantsur

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for the 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking about it) 
solves the problem of populating information about a node in Ironic database without help 
of any vendor-specific tool. This information usually includes Nova scheduling properties 
(CPU, RAM, disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for introspection does 
not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties required for 
scheduling, is pretty finished as of the latest stable series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens opportunity to adopt discoverd to 
ramdisks that have different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter, which 
properties discoverd is going to update.

Actually, even the default logic of update Node.properties is contained in a 
plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the machine 
and update IPMI credentials via the ramdisk to the expected values. Note that 
support of this feature in the reference ramdisk [3] is not ready yet. Also 
note that this scenario is only possible when using discoverd directly via it's 
API, not via Ironic API like in [2].

Get Involved


Discoverd terribly lacks reviews. Our team is very small and self-approving is 
not a rare case. I'm not even against fast-tracking any existing Ironic core to 
a discoverd core after a couple of meaningful reviews :)

And of course patches are welcome, especially plugins for integration with 
existing systems doing similar things and CMDB's. Patches are accepted via 
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
follow the Gerrit spec process right now).

Finally, please comment on the Ironic spec [2], I'd like to know what you think.

References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3]
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk
[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6]
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7]
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into Ironic? I mean when you 
create an ironic node, it would start discovery in the background. So we don't 
need two services?
Well, the decision at the summit was that it's better to keep it 
separate. Please see https://review.openstack.org/#/c/135605/ for 
details on the future interaction between discoverd and Ironic.



Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking
about it) solves the problem of populating information about a node in
Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties required for 
scheduling, is pretty finished as of the latest stable series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens opportunity to adopt discoverd to 
ramdisks that have different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter, which 
properties discoverd is going to update.

Actually, even the default logic of update Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the machine 
and update IPMI credentials via the ramdisk to the expected values. Note that 
support of this feature in the reference ramdisk [3] is not ready yet. Also 
note that this scenario is only possible when using discoverd directly via it's 
API, not via Ironic API like in [2].

Get Involved


Discoverd terribly lacks reviews. Out team is very small and
self-approving is not a rare case. I'm even not against fast-tracking
any existing Ironic core to a discoverd core after a couple of
meaningful reviews :)

And of course patches are welcome, especially plugins for integration with 
existing systems doing similar things and CMDB's. Patches are accepted via 
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
follow

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:

If it's a separate project, can it be extended to perform out of band
discovery too..? That way there will be a single service to perform
in-band as well as out of band discoveries.. May be it could follow
driver framework for discovering nodes, where one driver could be
native (in-band) and other could be iLO specific etc...



I believe the following spec outlines plans for out-of-band discovery:
   https://review.openstack.org/#/c/100951/
Right, so Ironic will have drivers, one of which (I hope) will be a 
driver for discoverd.




No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discover in the
background. So we don't need two services?

Well, the decision on the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth to give some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking
about it) solves the problem of populating information about a node
in Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after
we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and it's RPM is
a part of Juno RDO. After the Paris summit, we agreed on bringing it
closer to the Ironic upstream, and now discoverd is hosted on
StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties
required for scheduling, is pretty finished as of the latest stable
series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size
and NIC MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I
prefer to call hooks. It's possible to hook into the introspection
data processing chain in 2 places:
* Before any data processing. This opens opportunity to adopt
discoverd to ramdisks that have different data format. The only
requirement is that the ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for
MAC's, but before any actual data update. This gives an opportunity
to alter, which properties discoverd is going to update.

Actually, even the default logic of update Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with
3rd party ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-08 Thread Dmitry Tantsur

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS R&D) wrote:

My understanding of discovery was to get all details for a node and then 
register that node to ironic. i.e. Enrollment of the node to ironic. Pardon me 
if it was out of line with your understanding of discovery.
That's why we agreed to use the terms inspection/introspection :) sorry for 
not being consistent here (the name 'discoverd' is pretty old and hard to 
change).


Discoverd does not enroll nodes. While possible, I'm somewhat resistant 
to making it do enrolling, mostly because I want it to be a user-controlled 
process.




What I understand from the below mentioned spec is that the Node is registered, 
but the spec will help ironic discover other properties of the node.

That's what discoverd does currently.



-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 20:20
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:

If it's a separate project, can it be extended to perform out of band
discovery too..? That way there will be a single service to perform
in-band as well as out of band discoveries.. May be it could follow
driver framework for discovering nodes, where one driver could be
native (in-band) and other could be iLO specific etc...



I believe the following spec outlines plans for out-of-band discovery:
https://review.openstack.org/#/c/100951/

Right, so Ironic will have drivers, one of which (I hope) will be a driver for 
discoverd.



No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-----Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discover in the
background. So we don't need two services?

Well, the decision on the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth to give some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when
talking about it) solves the problem of populating information
about a node in Ironic database without help of any vendor-specific
tool. This information usually includes Nova scheduling properties
(CPU, RAM, disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself
after we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and it's RPM
is a part of Juno RDO. After the Paris summit, we agreed on
bringing it closer to the Ironic upstream, and now discoverd is
hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties
required for scheduling, is pretty finished as of the latest stable
series 0.2.

However, more featu

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 08:43 AM, Jerry Xinyu Zhao wrote:

tuskar-ui is supposed to enroll nodes into ironic.

Right. And it has support for discoverd IIRC.



On Thu, Jan 8, 2015 at 4:36 AM, Zhou, Zhenzan mailto:zhenzan.z...@intel.com>> wrote:

Sounds like we could add something new to automate the enrollment of
new nodes:-)
Collecting IPMI info into a csv file is still a trivial job...

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
Sent: Thursday, January 8, 2015 5:19 PM
To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS R&D) wrote:
 > My understanding of discovery was to get all details for a node
and then register that node to ironic. i.e. Enrollment of the node
to ironic. Pardon me if it was out of line with your understanding
of discovery.
That's why we agreed to use terms inspection/introspection :) sorry
for not being consistent here (name 'discoverd' is pretty old and
hard to change).

discoverd does not enroll nodes. while possible, I'm somewhat
resistant to make it do enrolling, mostly because I want it to be
user-controlled process.

 >
 > What I understand from the below mentioned spec is that the Node
is registered, but the spec will help ironic discover other
properties of the node.
that's what discoverd does currently.

 >
 > -Om
 >
 > -Original Message-
 > From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 > Sent: 07 January 2015 20:20
 > To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 > Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 >
 > On 01/07/2015 03:44 PM, Matt Keenan wrote:
 >> On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:
 >>> If it's a separate project, can it be extended to perform out
of band
 >>> discovery too..? That way there will be a single service to perform
 >>> in-band as well as out of band discoveries.. May be it could follow
 >>> driver framework for discovering nodes, where one driver could be
 >>> native (in-band) and other could be iLO specific etc...
 >>>
 >>
 >> I believe the following spec outlines plans for out-of-band
discovery:
 >> https://review.openstack.org/#/c/100951/
 > Right, so Ironic will have drivers, one of which (I hope) will be
a driver for discoverd.
 >
 >>
 >> No idea what the progress is with regard to implementation
within the
 >> Kilo cycle though.
     > For now we hope to get it merged in K.
 >
 >>
 >> cheers
 >>
 >> Matt
 >>
 >>> Just a thought.
 >>>
 >>> -Om
 >>>
 >>> -Original Message-
 >>> From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 >>> Sent: 07 January 2015 14:34
 >>> To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 >>> Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 >>>
 >>> On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
 >>>> So is it possible to just integrate this project into ironic?
I mean
 >>>> when you create an ironic node, it will start discover in the
 >>>> background. So we don't need two services?
 >>> Well, the decision on the summit was that it's better to keep it
 >>> separate. Please see https://review.openstack.org/#/c/135605/ for
 >>> details on future interaction between discoverd and Ironic.
 >>>
 >>>> Just a thought, thanks.
 >>>>
 >>>> BR
 >>>> Zhou Zhenzan
 >>>>
 >>>> -Original Message-
 >>>> From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 >>>> Sent: Monday, January 5, 2015 4:49 PM
 >>>> To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 >>>> Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 >>>>
 >>>> On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 >>>>> Hi, Dmitry
 >>>>>
 >>>>

Re: [openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 02:37 PM, Ihar Hrachyshka wrote:

On 01/09/2015 02:33 PM, Andreas Jaeger wrote:

On 01/09/2015 02:25 PM, Ihar Hrachyshka wrote:

Hi all,

I assumed that we still support py26 for clients, but then I saw [1]
that removed corresponding tox environment from ironic client.

What's our take on that? Shouldn't clients still support Python 2.6?

[1]:
https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1


Indeed, clients are supposed to continue supporting 2.6 as mentioned
here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049111.html


Andreas


OK, thanks. Reverting: https://review.openstack.org/#/c/146083/
Thank you for your time folks, but this is not a client :) it's an 
alternative ramdisk for Ironic.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-dev topics now work correctly

2015-01-12 Thread Dmitry Tantsur

On 01/09/2015 08:12 PM, Stefano Maffulli wrote:

Dear all,

if you've tried the topics on this mailing list and haven't received
emails, well... we had a problem on our side: the topics were not setup
correctly.

Luigi Toscano helped isolate the problem and point at the solution[1].
He noticed that only the "QA topic" was working and that's the only one
defined with a single regular expression, while all the others use
multiple line regexp.

I corrected the regexp as described in the mailman FAQ and tested that
the delivery works correctly. If you want to subscribe only to some
topics now you can. Thanks again to Luigi for the help.

Cheers,
stef

[1] http://wiki.list.org/pages/viewpage.action?pageId=8683547



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi!

Is it possible to make the topic list more up to date with the topics actually 
in use? I would appreciate at least the topics "oslo" and "all".


Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd status update: 1.0.0 feature freeze and testing

2015-01-19 Thread Dmitry Tantsur

Hi all!

This is a purely informational email about discoverd; feel free to skip it if 
you're not interested.


For those interested, I'm glad to announce that ironic-discoverd 1.0.0 is 
feature complete and is scheduled for release on Feb 5 with the Kilo-2 
milestone. The master branch is under feature freeze now and will only 
receive bug fixes and documentation updates until the release. This is 
the version intended to work with my in-band inspection spec 
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/inband-properties-discovery.html


Preliminary release notes: 
https://github.com/stackforge/ironic-discoverd#10-series
Release tracking page: 
https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
Installation notes: 
https://github.com/stackforge/ironic-discoverd#installation (might be 
slightly outdated, but should be correct)


I'm not providing a release candidate tarball, but you can treat git 
master at https://github.com/stackforge/ironic-discoverd as such. Users 
of RPM-based distros can use my repo: 
https://copr.fedoraproject.org/coprs/divius/ironic-discoverd/ but beware 
that it's kind of experimental, and it will be receiving updates from 
git master after the release is pushed to PyPI.


Lastly, I do not expect this release to be a long-term supported one. 
The next feature release, 1.1.0, is expected to arrive around Kilo RC and will 
be supported for a longer time.


Looking forward to your comments/suggestions/bug reports.
Dmitry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dmitry Tantsur
On Thu, 2014-05-22 at 09:48 +0100, Lucas Alvares Gomes wrote:
> On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
>  wrote:
> > I'd like to bring up the topic of drivers which, for one reason or another,
> > are probably never going to have third party CI testing.
> >
> > Take for example the iBoot driver proposed here:
> >   https://review.openstack.org/50977
> >
> > I would like to encourage this type of driver as it enables individual
> > contributors, who may be using off-the-shelf or home-built systems, to
> > benefit from Ironic's ability to provision hardware, even if that hardware
> > does not have IPMI or another enterprise-grade out-of-band management
> > interface. However, I also don't expect the author to provide a full
> > third-party CI environment, and as such, we should not claim the same level
> > of test coverage and consistency as we would like to have with drivers in
> > the gate.
> 
> +1
But we'll still expect unit tests that work by mocking their 3rd-party
library (for example), right?
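
For the record, here is the kind of test I mean: a tiny, simplified
illustration (the driver and library names are stand-ins, not the real iBoot
code) of mocking out the 3rd-party dependency so the unit tests can run
without it being installed.

# Illustration only: 'FakeIBootPower' and its use of an 'iboot' module are
# simplified stand-ins for a real out-of-tree driver and its 3rd-party library.
import sys
import unittest

import mock


class FakeIBootPower(object):
    def power_on(self, address):
        import iboot  # 3rd-party library, not installed in the gate
        return iboot.Interface(address).switch(True)


class TestFakeIBootPower(unittest.TestCase):
    @mock.patch.dict('sys.modules', {'iboot': mock.Mock()})
    def test_power_on_calls_library(self):
        FakeIBootPower().power_on('10.0.0.5')
        sys.modules['iboot'].Interface.assert_called_once_with('10.0.0.5')


if __name__ == '__main__':
    unittest.main()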

> 
> >
> > As it is, Ironic already supports out-of-tree drivers. A python module that
> > registers itself with the appropriate entrypoint will be made available if
> > the ironic-conductor service is configured to load that driver. For what
> > it's worth, I recall Nova going through a very similar discussion over the
> > last few cycles...
> >
> > So, why not just put the driver in a separate library on github or
> > stackforge?
> 
> I would like to have this drivers within the Ironic tree under a
> separated directory (e.g /drivers/staging/, not exactly same but kinda
> like what linux has in their tree[1]). The advatanges of having it in
> the main ironic tree is because it makes it easier to other people
> access the drivers, easy to detect and fix changes in the Ironic code
> that would affect the driver, share code with the other drivers, add
> unittests and provide a common place for development.
I do agree that having these drivers in-tree would make major changes
much easier for us (see also my note above about unit tests).

> 
> We can create some rules for people who are thinking about submitting
> their driver under the staging directory, it should _not_ be a place
> where you just throw the code and forget it, we would need to agree
> that the person submitting the code will also babysit it, we also
> could use the same process for all the other drivers wich wants to be
> in the Ironic tree to be accepted which is going through ironic-specs.
+1

> 
> Thoughts?
> 
> [1] http://lwn.net/Articles/285599/
> 
> Cheers,
> Lucas
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] virtual-ironic job now voting!

2014-05-25 Thread Dmitry Tantsur
Great news! Even while non-voting, it already helped me 2-3 times to
spot a subtle error in a patch.

On Fri, 2014-05-23 at 18:56 -0700, Devananda van der Veen wrote:
> Just a quick heads up to everyone -- the tempest-dsvm-virtual-ironic
> job is now fully voting in both check and gate queues for Ironic. It's
> also now symmetrically voting on diskimage-builder, since that tool is
> responsible for building the deploy ramdisk used by this test.
> 
> 
> Background: We discussed this prior to the summit, and agreed to
> continue watching the stability of the job through the summit week.
> It's been reliable for over a month now, and I've seen it catch
> several real issues, both in Ironic and in other projects, and all the
> core reviewers I spoke lately have been eager to enable voting on this
> test. So, it's done!
> 
> 
> Cheers,
> Devananda
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
Hi Ironic folks, hi Devananda!

I'd like to share with you my thoughts on the asynchronous API, which is
spec https://review.openstack.org/#/c/94923
At first I planned this as comments on the review, but it proved to be
much larger, so I'm posting it for discussion on the ML.

Here is a list of different considerations I'd like to take into account
when prototyping async support; some are reflected in the spec already, some
are from my and others' comments:

1. "Executability"
We need to make sure that request can be theoretically executed,
which includes:
a) Validating request body
b) For each of entities (e.g. nodes) touched, check that they are
available
   at the moment (at least exist).
   This is arguable, as checking for entity existence requires going to
DB.

2. Appropriate state
For each entity in question, ensure that it's either in a proper state or
moving to a proper state.
This would help avoid users e.g. setting deploy twice on the same node.
It will still require some kind of NodeInAWrongStateError, but we won't
necessarily need a client retry on this one.

Allowing the entity to be _moving_ to an appropriate state gives us a problem:
imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
to the desired state. What if OP1 fails? What if the conductor doing OP1
crashes? That's why we may want to approve only operations on entities that
do not undergo state changes. What do you think?

There is a similar problem with checking node state.
Imagine we schedule OP2 while we had OP1 - a regular check of node state.
OP1 discovers that the node is actually absent and puts it into maintenance
state. What to do with OP2?
a) The obvious answer is to fail it
b) Can we make the client wait for the results of the periodic check?
   That is, wait for OP1 _before scheduling_ OP2?

Anyway, this point requires some state framework that knows about states,
transitions, actions and their compatibility with each other.

3. Status feedback
People would like to know how things are going with their task.
What they know is that their request was scheduled. Options:
a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
   Pros:
   - Should be easy to implement
   Cons:
   - Requires persistent storage for tasks. Does AMQP allow these kinds
     of queries? If not, we'll need to duplicate tasks in the DB.
   - Increased load on API instances and the DB
b) Callback: take an endpoint, call it once the task is done/fails.
   Pros:
   - Less load on both client and server
   - Answer exactly when it's ready
   Cons:
   - Will not work for the CLI and similar
   - If the conductor crashes, there will be no callback.

It seems like we'd want both (a) and (b) to comply with current needs (a
rough client-side sketch of the polling option follows below).

If we have the state framework from (2), we can also add notifications to
it.
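
A rough client-side sketch of option (a), just to make it concrete (the
/v1/tasks endpoint and the JSON fields are hypothetical; nothing like this
exists in Ironic today):

import time
import requests

IRONIC_API = 'http://ironic.example.com:6385'

def wait_for_task(request_id, timeout=600, interval=5):
    # Poll a hypothetical task endpoint until the task finishes or fails.
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = requests.get('%s/v1/tasks/%s' % (IRONIC_API, request_id)).json()
        if task['state'] in ('done', 'failed'):
            return task
        time.sleep(interval)
    raise RuntimeError('Task %s did not finish in time' % request_id)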

4. Debugging considerations
a) This is an open question: how do we debug if we have a lot of requests
   and something went wrong?
b) One more thing to consider: how to make a command like `node-show`
   aware of scheduled transitions, so that people don't try operations that
   are doomed to failure.

5. Performance considerations
a) With the async approach, users will be able to schedule a nearly unlimited
   number of tasks, thus essentially blocking the work of Ironic, without any
   signs of the problem (at least for some time).
   I think there are 2 common answers to this problem:
   - Request throttling: disallow users from making too many requests in some
     amount of time; send them 503 with the Retry-After header set (see the
     client-side sketch after this list).
   - Queue management: watch the queue length, deny new requests if it's too
     large. This means actually getting back error 503 and will require
     retrying again! At least it will be an exceptional case, and won't affect
     Tempest runs...
b) The state framework from (2), if invented, can become a bottleneck as well,
   especially with the polling approach.
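
Here is the client-side sketch promised above for the throttling case in 5a:
retry on 503 while honouring Retry-After (purely illustrative, no such
behaviour exists today):

import time
import requests

def post_with_retries(url, body, max_attempts=5):
    # Retry a POST while the API keeps answering 503 (throttled).
    for _ in range(max_attempts):
        resp = requests.post(url, json=body)
        if resp.status_code != 503:
            return resp
        try:
            delay = int(resp.headers.get('Retry-After', 2))
        except ValueError:
            delay = 2
        time.sleep(delay)
    raise RuntimeError('Still throttled after %d attempts' % max_attempts)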

6. Usability considerations
a) People will be unaware of when and whether their request is going to be
   finished. As they will be tempted to retry, we may get flooded by
   duplicates. I would suggest at least making it possible to request
   canceling any task (which will be possible only if it has not started yet,
   obviously).
b) We should try to avoid scheduling contradictory requests.
c) Can we somehow detect duplicated requests and ignore them?
   E.g. we won't want a user to make 2-3-4 reboots in a row just because the
   user was not patient enough.

--

Possible takeaways from this letter:
- We'll need at least throttling to avoid DoS
- We'll still need handling of the 503 error, though it should not happen
  under normal conditions
- Think about a state framework that unifies all this complex logic with
  features:
  * Track entities, their states and actions on entities
  * Check whether a new action is compatible with the states of the entities
    it touches and with other ongoing and scheduled actions on these entities.
  * Handle notifications for finished and failed actions by providing both
    pull and push approaches.
  * Track whether a started action is still executing,

Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
A task scheduler responsibility: this is basically a
> state check before task is scheduled, and it should be
> done one more time once the task is started, as
> mentioned above.
>  
> c) Can we somehow detect duplicated requests
> and ignore them?
>E.g. we won't want user to make 2-3-4
> reboots in a row just because
> the user
>was not patient enough.
> 
> 
> Queue similar tasks. All the users will be pointed to
> the similar task resource, or maybe to a different
> resources which tied to the same conductor action. 
>  
> Best regards,
> Max Lobur,
> Python Developer, Mirantis, Inc.
> Mobile: +38 (093) 665 14 28
> Skype: max_lobur
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> 
> 
> On Wed, May 28, 2014 at 5:10 PM, Lucas Alvares
> Gomes  wrote:
> On Wed, May 28, 2014 at 2:02 PM, Dmitry
> Tantsur  wrote:
> > Hi Ironic folks, hi Devananda!
> >
> > I'd like to share with you my thoughts on
> asynchronous API, which is
> > spec https://review.openstack.org/#/c/94923
> > First I was planned this as comments to the
> review, but it proved to be
> > much larger, so I post it for discussion on
> ML.
> >
> > Here is list of different consideration, I'd
> like to take into account
> > when prototyping async support, some are
> reflected in spec already, some
> > are from my and other's comments:
> >
> > 1. "Executability"
> > We need to make sure that request can be
> theoretically executed,
> > which includes:
> > a) Validating request body
> > b) For each of entities (e.g. nodes)
> touched, check that they are
> > available
> >at the moment (at least exist).
> >This is arguable, as checking for entity
> existence requires going to
> > DB.
> 
> >
> 
> > 2. Appropriate state
> > For each entity in question, ensure that
> it's either in a proper state
> > or
> > moving to a proper state.
> > It would help avoid users e.g. setting
> deploy twice on the same node
> > It will still require some kind of
> NodeInAWrongStateError, but we won't
> > necessary need a client retry on this one.
> >
> > Allowing the entity to be _moving_ to
> appropriate state gives us a
> > problem:
> > Imagine OP1 was running and OP2 got
> scheduled, hoping that OP1 will come
> > to desired state. What if OP1 fails? What if
> conductor, doing OP1
> > crashes?
> > That's why we may want to approve only
> operations on entities that do
> > not
> > undergo state changes. What do you think?
> >
> > Similar problem with checking node state.
> > Imagine we schedule OP2 while we had OP1 -
>  

[openstack-dev] [Ironic] Proposal for shared review dashboard

2014-06-02 Thread Dmitry Tantsur
Hi folks,

Inspired by great work by Sean Dague [1], I have created a review
dashboard for Ironic projects. Main ideas:

Ordering:
0. Viewer's own patches, that have any kind of negative feedback
1. Specs
2. Changes w/o negative feedback, with +2 already
3. Changes that did not have any feedback for 5 days
4. Changes without negative feedback (no more than 50)
5. Other changes (no more than 20)

Shows only verified patches, except for 0 and 5.
Never shows WIP patches.

I'll be thankful for any tips on how to include prioritization from
Launchpad bugs.

Short link: http://goo.gl/hqRrRw
Long link: [2]

Source code (I will create a PR after discussion at today's meeting): 
https://github.com/Divius/gerrit-dash-creator
To generate a link, use:
$ ./gerrit-dash-creator dashboards/ironic.dash

Dmitry.

[1] https://github.com/Divius/gerrit-dash-creator
[2] https://review.openstack.org/#/dashboard/?foreach=%28project%
3Aopenstack%2Fironic+OR+project%3Aopenstack%2Fpython-ironicclient+OR
+project%3Aopenstack%2Fironic-python-agent+OR+project%3Aopenstack%
2Fironic-specs%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D-1+NOT+label%
3ACode-Review%3C%3D-2+NOT+label%3AWorkflow%3E%3D1&title=Ironic+Inbox&My
+Patches+Requiring+Attention=owner%3Aself+%28label%3AVerified-1%
252cjenkins+OR+label%3ACode-Review-1%29&Ironic+Specs=NOT+owner%3Aself
+project%3Aopenstack%2Fironic-specs&Needs+Approval=label%3AVerified%3E%
3D1%252cjenkins+NOT+owner%3Aself+label%3ACode-Review%3E%3D2+NOT+label%
3ACode-Review-1&5+Days+Without+Feedback=label%3AVerified%3E%3D1%
252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Fironic-specs+NOT
+label%3ACode-Review%3C%3D2+age%3A5d&No+Negative+Feedback=label%
3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%
2Fironic-specs+NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%
3E%3D2+limit%3A50&Other=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%
3Aself+NOT+project%3Aopenstack%2Fironic-specs+label%3ACode-Review-1
+limit%3A20



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Review dashboard update

2014-06-03 Thread Dmitry Tantsur
Hi everyone!

It's hard to stop polishing things, and today I got an updated review
dashboard. Its sources are merged into Sean Dague's repository [1], so I
expect this to be the final version. Thank you everyone for the numerous
comments and suggestions, especially Ruby Loo.

Here is nice link to it: http://perm.ly/ironic-review-dashboard

Major changes since previous edition:
- "My Patches Requiring Attention" section - all your patches that are
either WIP or have any -1.
- "Needs Reverify" - approved changes that failed Jenkins verification
- Added a last section with changes that are either WIP or got -1 from Jenkins
(all other sections do not include these).
- The Specs section also shows WIP specs

I know someone requested a dashboard with the IPA subproject highlighted - I
can do such things on a case-by-case basis - ping me on IRC.

Hope this will be helpful :)

Dmitry.

[1] https://github.com/sdague/gerrit-dash-creator


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
Hi!

The workflow is not entirely documented yet AFAIK. After PXE boots the deploy
kernel and ramdisk, the ramdisk exposes the hard drive via iSCSI and notifies
Ironic. After that Ironic partitions the disk, copies an image and reboots the
node with the final kernel and ramdisk.

On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> Hi, All:
> 
> I searched a lot about how ironic automatically install image
> on bare metal. But there seems to be no clear workflow out there.
> 
> What I know is, in traditional PXE, a bare metal pull image
> from PXE server using tftp. In tftp root, there is a ks.conf which
> tells tftp which image to kick start.
> 
> But in ironic there is no ks.conf pointed in tftp. How do bare
> metal know which image to install ? Is there any clear workflow where
> I can read ?
> 
> 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> Hi,
> 
> Thank you very much for your reply !
> 
> But there are still some questions for me. Now I've come to the step
> where ironic partitions the disk as you replied.
> 
> Then, how does ironic copies an image ? I know the image comes from
> glance. But how to know image is really available when reboot? 
I don't quite understand your question; what do you mean by "available"?
Anyway, before deploying, Ironic downloads the image from Glance, caches it
and just copies it to a mounted iSCSI partition (using dd or so).
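
Very roughly (this is an illustration, not actual Ironic code, and the paths
are made up), the copy step boils down to something like:

# Illustration only: paths are made up, error handling omitted.
import subprocess

image = '/var/lib/ironic/images/<glance-image-id>'   # locally cached image
device = '/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-<iqn>-lun-1-part1'

subprocess.check_call(['dd', 'if=%s' % image, 'of=%s' % device, 'bs=1M'])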

> 
> And, what are the differences between final kernel (ramdisk) and
> original kernel (ramdisk) ? 
We have 2 sets of kernel+ramdisk:
1. Deploy k+r: these are used only for the deploy process itself, to provide
the iSCSI volume and call back to Ironic. There's an ongoing effort to create
a smarter ramdisk, called Ironic Python Agent, but it's WIP.
2. Your k+r as stated in the Glance metadata for an image - they will be
used for booting after deployment.

> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur :
> Hi!
> 
> Workflow is not entirely documented by now AFAIK. After PXE
> boots deploy
> kernel and ramdisk, it exposes hard drive via iSCSI and
> notifies Ironic.
> After that Ironic partitions the disk, copies an image and
> reboots node
> with final kernel and ramdisk.
> 
> On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > Hi, All:
> >
> > I searched a lot about how ironic automatically
> install image
> > on bare metal. But there seems to be no clear workflow out
> there.
> >
> > What I know is, in traditional PXE, a bare metal
> pull image
> > from PXE server using tftp. In tftp root, there is a ks.conf
> which
> > tells tftp which image to kick start.
> >
> > But in ironic there is no ks.conf pointed in tftp.
> How do bare
> > metal know which image to install ? Is there any clear
> workflow where
> > I can read ?
> >
> >
> >
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> Thank you !
> 
> I noticed the two sets of k+r in tftp configuration of ironic.
> 
> Should the two sets be the same k+r ?
Deploy images are created for you by DevStack/whatever. If you do it by
hand, you may use diskimage-builder. Currently they are stored in flavor
metadata and will be stored in node metadata later.

And then you have "production" images, which are whatever you want to
deploy; they are stored in the Glance metadata for the instance image.

The TFTP configuration should be created automatically; I doubt you should
change it anyway.

> 
> The first set is defined in the ironic node definition. 
> 
> How do we define the second set correctly ? 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> ------
> 
> 
> 
> 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > Hi,
> >
> > Thank you very much for your reply !
> >
> > But there are still some questions for me. Now I've come to
> the step
> > where ironic partitions the disk as you replied.
> >
> > Then, how does ironic copies an image ? I know the image
> comes from
> > glance. But how to know image is really available when
> reboot?
> 
> I don't quite understand your question, what do you mean by
> "available"?
> Anyway, before deploying Ironic downloads image from Glance,
> caches it
> and just copies to a mounted iSCSI partition (using dd or so).
> 
> >
> > And, what are the differences between final kernel (ramdisk)
> and
> > original kernel (ramdisk) ?
> 
> We have 2 sets of kernel+ramdisk:
> 1. Deploy k+r: these are used only for deploy process itself
> to provide
> iSCSI volume and call back to Ironic. There's ongoing effort
> to create
> smarted ramdisk, called Ironic Python Agent, but it's WIP.
> 2. Your k+r as stated in Glance metadata for an image - they
> will be
> used for booting after deployment.
> 
> >
> > Best Regards!
>     > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
> :
> > Hi!
> >
> > Workflow is not entirely documented by now AFAIK.
> After PXE
> > boots deploy
> > kernel and ramdisk, it exposes hard drive via iSCSI
> and
> > notifies Ironic.
> > After that Ironic partitions the disk, copies an
> image and
> > reboots node
> > with final kernel and ramdisk.
> >
> > On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > > Hi, All:
> > >
> > > I searched a lot about how ironic
> automatically
> > install image
> > > on bare metal. But there seems to be no clear
> workflow out
> > there.
> > >
> > > What I know is, in traditional PXE, a bare
> metal
> > pull image
> > > from PXE server using tftp. In tftp root, there is
> a ks.conf
> > which
> > > tells tftp which image to kick start.
> > >
> > > But in ironic there is no ks.conf pointed
> in tftp.
> > How do bare
> > > metal know which image to install ? Is there any
> clear
> > workflow where
> > > I can read ?
> > >
> > >
> > >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http://weibo.com/herewearenow
> >   

Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:51 +0800, 严超 wrote:
> Yes, but when you assign a "production" image to an ironic bare metal
> node. You should provide ramdisk_id and kernel_id. 
What do you mean by "assign" here? Could you quote some documentation?
Instance image is "assigned" using --image argument to `nova boot`, k&r
are fetched from it's metadata.

Deploy k&r are currently taken from flavor provided by --flavor argument
(this will change eventually).
If you're using e.g. DevStack, you don't even touch deploy k&r, they're
bound to flavor "baremetal".

Please see quick start guide for hints on this:
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html

> 
> Should the ramdisk_id and kernel_id be the same as deploy images (aka
> the first set of k+r) ?
> 
> You didn't answer me if the two sets of r + k should be the same ? 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 21:27 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> > Thank you !
> >
> > I noticed the two sets of k+r in tftp configuration of
> ironic.
> >
> > Should the two sets be the same k+r ?
> 
> Deploy images are created for you by DevStack/whatever. If you
> do it by
> hand, you may use diskimage-builder. Currently they are stored
> in flavor
> metadata, will be stored in node metadata later.
> 
> And than you have "production" images that are whatever you
> want to
> deploy and they are stored in Glance metadata for the instance
> image.
> 
> TFTP configuration should be created automatically, I doubt
> you should
> change it anyway.
> 
> >
> > The first set is defined in the ironic node definition.
> >
> > How do we define the second set correctly ?
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur
> :
> > On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > > Hi,
> > >
> > > Thank you very much for your reply !
> > >
> > > But there are still some questions for me. Now
> I've come to
> > the step
> > > where ironic partitions the disk as you replied.
> > >
> > > Then, how does ironic copies an image ? I know the
> image
> > comes from
> > > glance. But how to know image is really available
> when
> > reboot?
> >
> > I don't quite understand your question, what do you
> mean by
> > "available"?
> > Anyway, before deploying Ironic downloads image from
> Glance,
> > caches it
> > and just copies to a mounted iSCSI partition (using
> dd or so).
> >
> > >
> > > And, what are the differences between final kernel
> (ramdisk)
> > and
> > > original kernel (ramdisk) ?
> >
> > We have 2 sets of kernel+ramdisk:
> > 1. Deploy k+r: these are used only for deploy
> process itself
> > to provide
> > iSCSI volume and call back to Ironic. There's
> ongoing effort
> > to create
> > smarted ramdisk, called Ironic Python Agent, but
>     it's WIP.
> > 2. Your k+r as stated in Glance metadata for an
> image - they
> > will be
> > used for booting after deployment.
> >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http:

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Dmitry Tantsur

On 09/25/2014 06:23 PM, Lucas Alvares Gomes wrote:

Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this break the Ironic gate without
us doing anything.

So, what you guys think about removing the test that compares the
configuration files and makes it no longer gate[2]?

We already have a tox command to generate the sample configuration
file[3], so folks that needs it can generate it locally.

Does anyone disagree?
It's a pity we won't have a sample config by default, but I guess it can't 
be helped. +1 from me.




[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-01 Thread Dmitry Tantsur

On 09/30/2014 02:03 PM, Soren Hansen wrote:

2014-09-12 1:05 GMT+02:00 Jay Pipes :

If Nova was to take Soren's advice and implement its data-access layer
on top of Cassandra or Riak, we would just end up re-inventing SQL
Joins in Python-land.


I may very well be wrong(!), but this statement makes it sound like you've
never used e.g. Riak. Or, if you have, not done so in the way it's
supposed to be used.

If you embrace an alternative way of storing your data, you wouldn't just
blindly create a container for each table in your RDBMS.

For example: In Nova's SQL-based datastore we have a table for security
groups and another for security group rules. Rows in the security group
rules table have a foreign key referencing the security group to which
they belong. In a datastore like Riak, you could have a security group
container where each value contains not just the security group
information, but also all the security group rules. No joins in
Python-land necessary.


I've said it before, and I'll say it again. In Nova at least, the SQL
schema is complex because the problem domain is complex. That means
lots of relations, lots of JOINs, and that means the best way to query
for that data is via an RDBMS.


I was really hoping you could be more specific than "best"/"most
appropriate" so that we could have a focused discussion.

I don't think relying on a central data store is in any conceivable way
appropriate for a project like OpenStack. Least of all Nova.

I don't see how we can build a highly available, distributed service on
top of a centralized data store like MySQL.
Coming from a Skype background I can assure you that you definitely can, 
depending on your needs (and our experiments with e.g. MongoDB ended 
very badly: it just died under IO loads that our PostgreSQL treated 
as normal). I mean, it's a complex topic, and I see a lot of people 
switching to NoSQL and a lot of people switching away from it. NoSQL is not 
a silver bullet for scalability. Just my 0.5.


/me disappears again



Tens or hundreds of thousands of nodes, spread across many, many racks
and datacentre halls are going to experience connectivity problems[1].

This means that some percentage of your infrastructure (possibly many
thousands of nodes, affecting many, many thousands of customers) will
find certain functionality not working on account of your datastore not
being reachable from the part of the control plane they're attempting to
use (or possibly only being able to read from it).

I say over and over again that people should own their own uptime.
Expect things to fail all the time. Do whatever you need to do to ensure
your service keeps working even when something goes wrong. Of course
this applies to our customers too. Even if we take the greatest care to
avoid downtime, customers should spread their workloads across multiple
availability zones and/or regions and probably even multiple cloud
providers. Their service towards their users is their responsibility.

However, our service towards our users is our responsibility. We should
take the greatest care to avoid having internal problems affect our
users.  Building a massively distributed system like Nova on top of a
centralized data store is practically a guarantee of the opposite.


For complex control plane software like Nova, though, an RDBMS is the
best tool for the job given the current lay of the land in open source
data storage solutions matched with Nova's complex query and
transactional requirements.


What transactional requirements?


Folks in these other programs have actually, you know, thought about
these kinds of things and had serious discussions about alternatives.
It would be nice to have someone acknowledge that instead of snarky
comments implying everyone else "has it wrong".


I'm terribly sorry, but repeating over and over that an RDBMS is "the
best tool" without further qualification than "Nova's data model is
really complex" reads *exactly* like a snarky comment implying everyone
else "has it wrong".

[1]: http://aphyr.com/posts/288-the-network-is-reliable




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Import errors in tests

2014-10-02 Thread Dmitry Tantsur

On 10/02/2014 01:30 PM, Lucas Alvares Gomes wrote:

Hi,

I don't know if it's a known issue, but we have this patch in Ironic
here https://review.openstack.org/#/c/124610/ and the gate jobs for
python26 and python27 are failing because of some import error[1] and
it doesn't show me what is the error exactly, it's important to say
also that the tests run locally without any problem so I can't
reproduce the error locally here.

Did you try with a fresh environment?



Have anyone seem something like that ?
I have to say that our test toolchain is completely inadequate in the case 
of import errors: even locally, spotting an import error involves manually 
importing all suspicious modules, because tox just outputs garbage. 
Something has to be done about it.
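
For reference, this is roughly what "manually importing all suspicious
modules" looks like in practice (adjust the package name to taste):

# Walk all test modules and print which ones fail to import and why.
import importlib
import pkgutil

import ironic.tests as tests   # or whichever package tox complains about

for _, name, _ in pkgutil.walk_packages(tests.__path__,
                                        prefix=tests.__name__ + '.'):
    try:
        importlib.import_module(name)
    except Exception as exc:
        print('%s: %r' % (name, exc))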




I will continue to dig into it and see if I can spot something, but I
thought it would be nice to share it here too cause that's maybe a
potential gate problem.

[1] 
http://logs.openstack.org/10/124610/14/check/gate-ironic-python27/5c21433/console.html

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-17 Thread Dmitry Tantsur

Hi Jim,

On 10/16/2014 07:23 PM, Jim Mankovich wrote:

All,

I would like to get some feedback on a proposal  to change to the
current sensor naming implemented in ironic and ceilometer.

I would like to provide vendor specific sensors within the current
structure for IPMI sensors in ironic and ceilometer, but I have found
that the current  implementation of sensor meters in ironic and
ceilometer is IPMI specific (from a meter naming perspective) . This is
not suitable as it currently stands to support sensor information from a
provider other than IPMI.Also, the current Resource ID naming makes
it difficult for a consumer of sensors to quickly find all the sensors
for a given Ironic Node ID, so I would like to propose changing the
Resource ID naming as well.

Currently, sensors sent by ironic to ceilometer get named by ceilometer
as has "hardware.ipmi.SensorType", and the Resource ID is the Ironic
Node ID with a post-fix containing the Sensor ID.  For Details
pertaining to the issue with the Resource ID naming, see
https://bugs.launchpad.net/ironic/+bug/1377157, "ipmi sensor naming in
ceilometer is not consumer friendly"

Here is an example of what meters look like for sensors in ceilometer
with the current implementation:
| Name| Type  | Unit | Resource ID
| hardware.ipmi.current   | gauge | W|
edafe6f4-5996-4df8-bc84-7d92439e15c0-power_meter_(0x16)
| hardware.ipmi.temperature   | gauge | C|
edafe6f4-5996-4df8-bc84-7d92439e15c0-16-system_board_(0x15)

What I would like to propose is dropping the ipmi string from the name
altogether and appending the Sensor ID to the name  instead of to the
Resource ID.   So, transforming the above to the new naming would result
in the following:
| Name | Type  | Unit | Resource ID
| hardware.current.power_meter_(0x16)  | gauge | W|
edafe6f4-5996-4df8-bc84-7d92439e15c0
| hardware.temperature.system_board_(0x15) | gauge | C|
edafe6f4-5996-4df8-bc84-7d92439e15c0

+1

A very, very small nit, feel free to ignore it if inappropriate: maybe 
hardware.temperature.system_board.0x15? I.e. separate with dots and do not 
use brackets?
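
Just to illustrate the naming I have in mind, assuming the sensor type, the
human-readable sensor name and the hex sensor number arrive as separate
fields (which is an assumption about the payload, not a statement about the
current code):

def meter_name(sensor_type, sensor_desc, sensor_num):
    # ('Temperature', 'System Board', '0x15')
    #   -> 'hardware.temperature.system_board.0x15'
    desc = sensor_desc.lower().replace(' ', '_')
    return 'hardware.%s.%s.%s' % (sensor_type.lower(), desc, sensor_num)

print(meter_name('Temperature', 'System Board', '0x15'))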


This structure would provide the ability for a consumer to do a
ceilometer resource list using the Ironic Node ID as the Resource ID to
get all the sensors in a given platform. The consumer would then
iterate over each of the sensors to get the samples it wanted. In
order to retain the information as to who provided the sensors, I would
like to propose that a standard "sensor_provider" field be added to the
resource_metadata for every sensor, where the "sensor_provider" field
would have a string value indicating the driver that provided the sensor
information. This is where the string "ipmi", or a vendor-specific
string, would be specified.

+1


I understand that this proposed change is not backward compatible with
the existing naming, but I don't really see a good solution that would
retain backward compatibility.
For backward compatibility you could _also_ keep the old ones (with ipmi
in them) for IPMI sensors.
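
To make both of my suggestions above concrete (dot separation and keeping
a legacy alias for backward compatibility), here is a rough sketch; this
is hypothetical helper code, not anything that exists in ironic or
ceilometer today:

    def meter_names(sensor_type, sensor_id, provider='ipmi'):
        """Build the proposed meter name plus an optional legacy alias.

        sensor_type is e.g. 'temperature'; sensor_id is e.g.
        'system_board_(0x15)' as reported by the driver.
        """
        # dot-separated, no brackets, as suggested above
        normalized = sensor_id.replace('(', '').replace(')', '')
        normalized = normalized.replace('_0x', '.0x')
        names = ['hardware.%s.%s' % (sensor_type, normalized)]
        # _also_ emit the old IPMI-specific meter for backward compatibility
        if provider == 'ipmi':
            names.append('hardware.ipmi.%s' % sensor_type)
        return names

    print(meter_names('temperature', 'system_board_(0x15)'))
    # -> ['hardware.temperature.system_board.0x15', 'hardware.ipmi.temperature']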




Any/All Feedback will be appreciated,
In this version it makes a lot of sense to me; +1 if the Ceilometer folks
are not against it.



Jim




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-10-21 Thread Dmitry Tantsur

On 10/21/2014 02:11 AM, Devananda van der Veen wrote:

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building consensus
in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words, and
some other words that I use to mean something else which is what some
people mean when they use those words. I'm not saying my words are the
right words -- they're just the words that make sense to my brain
right now. If someone else has better words, and those words also make
sense (or make more sense) then I'm happy to use those instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which is
addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.
I generally agree with this separation, though it brings some trouble
for me, as I'm used to calling "discovery" what you called
"introspection" (it was not the case this summer, but I have since changed
my mind). And the term "discovery" is baked into the... hmm... introspection
service that I've written [1].


So I would personally prefer to leave "discovery" as in "discovery of 
hardware properties", though I realize that "introspection" may be a 
better name.


[1] https://github.com/Divius/ironic-discoverd



Why is this disambiguation important? At the last midcycle, we agreed
that "hardware discovery" is out of scope for Ironic -- finding new,
unmanaged nodes and enrolling them with Ironic is best left to other
services or processes, at least for the foreseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for many
of our current users, and multiple proof of concept implementations of
this have been done by different parties over the last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that is
find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields several
results related to identifying unknown network-connected devices and
adding them to inventory systems, which is the way that I'm using the
term right now, so I don't feel completely off in continuing to say
"discovery" when I mean "find unknown network devices and add them to
Ironic".





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing DISCOVERING
to INTROSPECTING in the new state machine spec is a good idea?
As before I'm uncertain. Discovery is a troublesome term, but too many 
people use and recognize it, while IMO introspecting is much less 
common. So count me as -0 on this.




On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya <sandhya.ganapa...@hp.com> wrote:

Hi all,

Following the mail thread on disambiguating the term 'discovery' -

Along the lines of what Devananda stated, hardware introspection
also means retrieving and storing hardware details of a node whose
credentials and IP address are known to the system (correct me if I
am wrong).

I am currently in the process of extracting hardware details (CPU,
memory, etc.) of a number of nodes belonging to a chassis whose
credentials are already known to ironic. Does this process fall into
the category of hardware introspection?

Thanks,
Sandhya.

-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Tuesday, October 21, 2014 5:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building
consensus in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words,
and some other words that I use to mean something else which is what
some people mean when they use those words. I'm not saying my words
are the right words -- they're just the words that make sense to my
brain right now. If someone else has better words, and those words
also make sense (or make more sense) then I'm happy to use those
instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which
is addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.


Why is this disambiguation important? At the last midcycle, we
agreed that "hardware discovery" is out of scope for Ironic --
finding new, unmanaged nodes and enrolling them with Ironic is best
left to other services or processes, at least for the forseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for
many of our current users, and multiple proof of concept
implementations of this have been done by different parties over the
last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that
is find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields
several results related to identifying unknown network-connected
devices and adding them to inventory systems, which is the way that
I'm using the term right now, so I don't feel completely off in
continuing to say "discovery" when I mean "find unknown network
devices and add them to Ironic".





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of
modules remaining in the incubator. Our notes are in the etherpad [1], but as
part of the “Write it Down” theme for Oslo this cycle I am also posting a
summary of the outcome here on the mailing list for wider distribution. Let me know
if you remembered the outcome for any of these modules differently than what I have
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
By the way, right now I'm looking into updating this code to be able to
run tasks on a thread pool, not only in one thread (quite a problem for
Ironic). Does it somehow interfere with the graduation? Any deadlines or
something?



request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 12:27 PM, Ganapathy, Sandhya wrote:

Hi All,

Based on the discussions, I have filed a blueprint that initiates discovery of
node hardware details, given its credentials, at the chassis level. I am in the
process of creating a spec for it. Do share your thoughts regarding this -

https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery
Hi and thank you for the suggestion. As already said, this thread is not 
the best place to discuss it, so please file a (short version of) spec, 
so that we can comment on it.


Thanks,
Sandhya.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, November 13, 2014 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing
DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?

As before I'm uncertain. Discovery is a troublesome term, but too many people 
use and recognize it, while IMO introspecting is much less common. So count me 
as -0 on this.



 On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya <sandhya.ganapa...@hp.com> wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 In the lines of what Devananda had stated, Hardware Introspection
 also means retrieving and storing hardware details of the node whose
 credentials and IP Address are known to the system. (Correct me if I
 am wrong).

 I am currently in the process of extracting hardware details (cpu,
 memory etc..) of n no. of nodes belonging to a Chassis whose
 credentials are already known to ironic. Does this process fall in
 the category of hardware introspection?

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

 Hi all,

 I was reminded in the Ironic meeting today that the words "hardware
 discovery" are overloaded and used in different ways by different
 people. Since this is something we are going to talk about at the
 summit (again), I'd like to start the discussion by building
 consensus in the language that we're going to use.

 So, I'm starting this thread to explain how I use those two words,
 and some other words that I use to mean something else which is what
 some people mean when they use those words. I'm not saying my words
 are the right words -- they're just the words that make sense to my
 brain right now. If someone else has better words, and those words
 also make sense (or make more sense) then I'm happy to use those
 instead.

 So, here are rough definitions for the terms I've been using for the
 last six months to disambiguate this:

 "hardware discovery"
 The process or act of identifying hitherto unknown hardware, which
 is addressable by the management system, in order to later make it
 available for provisioning and management.

 "hardware introspection"
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we
 agreed that "hardware discovery" is out of scope for Ironic --
 finding new, unmanaged nodes and enrolling them with Ironic is best
 left to other services or processes, at least for the forseeable future.

 However, "introspection" is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for
 many of our current users, and multiple proof of concept
 implementations of this have been done by different parties over the
 last year.

 It may be entirely possible that no one else in our developer
 community is using the term "introspection" in the way that I've
 defined it above -- if so, that's fine, I can stop calling that
 "introspection", but I don't know a better word for the thing that
 is find-unknown-hardware.

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for "hardware discovery" yields
 several results related to identifying unknown network-connected
 devices and adding them to inventory systems, which is the way that
 I'm using the term right now, so I don't feel completely off in
 continuing to say "discovery" when I mean "find unknown network
 devices and add them to Ironic".

Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:54 PM, Doug Hellmann wrote:


On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur  wrote:


On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining the in the incubator. Our notes are in the etherpad [1], but as 
part of the "Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py

By the way, right now I'm looking into updating this code to be able to run
tasks on a thread pool, not only in one thread (quite a problem for Ironic).
Does it somehow interfere with the graduation? Any deadlines or something?


Feature development on code declared ready for graduation is basically frozen
until the new library is created. You should plan on doing that work in the new
oslo.service repository, which should be showing up soon. And the feature you
describe sounds like something for which we would want a spec written, so please
consider filing one when you have some of the details worked out.
Sure, right now I'm experimenting in the Ironic tree to figure out how it
really works. There's a single oslo-specs repo for the whole of Oslo, right?
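
To give an idea of the shape of what I'm experimenting with, here is a
minimal sketch using plain concurrent.futures (illustration only, not
actual oslo-incubator or Ironic code):

    import time
    from concurrent import futures


    def run_periodic_tasks(tasks, interval, max_workers=4):
        """Run every callable in `tasks` each `interval` seconds on a pool."""
        with futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            while True:
                # dispatch all tasks in parallel instead of serially in a
                # single thread
                running = [pool.submit(task) for task in tasks]
                futures.wait(running)
                for fut in running:
                    if fut.exception() is not None:
                        print('periodic task failed: %s' % fut.exception())
                time.sleep(interval)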







request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:15 PM, Lucas Alvares Gomes wrote:

This was discussed in the Contributor Meetup on Friday at the Summit
but I think it's important to share on the mail list too so we can get
more opnions/suggestions/comments about it.

In the Ironic weekly meeting we dedicate a good portion of the meeting to
announcements: reporting bug status, CI status, oslo status, specific
driver status, etc. It's all good information, but I believe that the
mailing list would be a better place to report it, and then we can free
up some time in our meeting to actually discuss things.

Are you guys in favor of it?

If so, I'd like to propose a new format based on the discussions we had
in Paris. The people doing the status reports in the meeting would start
adding their status to an etherpad, and then we would have a responsible
person gather this information and send it to the mailing list once a
week.

For the meeting itself we have a wiki page with an agenda [1] which
everyone can edit to add the topics they want to discuss in the meeting;
I think that works fine. The only change would be that we may want to
freeze the agenda 2 days before the meeting so people can take a look at
the topics that will be discussed and prepare for them; with that we can
move forward more quickly with the discussions because people will
already be familiar with the topics.

Let me know what you guys think.
I'm not really fond of it (as with every process complication), but it
looks inevitable, so +1.




[1] https://wiki.openstack.org/wiki/Meetings/Ironic

Lucas





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-18 Thread Dmitry Tantsur

On 11/18/2014 02:00 AM, Devananda van der Veen wrote:

Hi all,

As discussed in Paris and at today's IRC meeting [1] we are going to be
alternating the time of the weekly IRC meetings to accommodate our
contributors in EMEA better. No time will be perfect for everyone, but
as it stands, we rarely (if ever) see our Indian, Chinese, and Japanese
contributors -- and it's quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a "-1" vote
to an option if that option would cause you to miss ALL meetings, or a
"+1" vote if you can magically attend ALL the meetings. If you can
attend, without significant disruption, at least one of the time slots
in a proposal, please do not vote either for or against it. This way we
can identify a proposal which allows everyone to attend at a minimum 50%
of the meetings, and preferentially weight towards one that allows more
contributors to attend two meetings.

This link shows the local times in some major coutries / timezones
around the world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I
like this because 1900 UTC spans all of US and western EU, while 0900
combines EU and EMEA. Folks in western EU are "in the middle" and can
attend all meetings.

+1



http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like
this because it shifts the current slot two hours earlier, making it
easier for eastern EU to attend without excluding the western US, and
while 0500 UTC is not so late that US west coast contributors can't
attend (it's 9PM for us), it is harder for western EU folks to attend.
There's really no one in the middle here, but there is at least a chance
for US west coast and EMEA to overlap, which we don't have at any other
time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


I'll collate all the responses to this thread during the week, ahead of
next week's regularly-scheduled meeting.

-Devananda

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1, much awaited!

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> It's time to grow the team :)
> 
> 
> Jim (jroll) started working with Ironic at the last mid-cycle, when
> "teeth" became ironic-python-agent. In the time since then, he's
> jumped into Ironic to help improve the project as a whole. In the last
> few months, in both reviews and discussions on IRC, I have seen him
> consistently demonstrate a solid grasp of Ironic's architecture and
> its role within OpenStack, contribute meaningfully to design
> discussions, and help many other contributors. I think he will be a
> great addition to the core review team.
> 
> 
> Below are his review stats for Ironic, as calculated by the
> openstack-infra/reviewstats project with local modification to remove
> ironic-python-agent, so we can see his activity in the main project.
> 
> 
> Cheers,
> Devananda
> 
> 
> +------------------+------------------------------------------+----------------+
> | Reviewer         | Reviews   -2  -1  +1  +2  +A    +/- %    | Disagreements* |
> +------------------+------------------------------------------+----------------+
> 
> 30
> |  jimrollenhagen  |      29    0   8  21   0   0    72.4%    |   5 ( 17.2%)   |
> 
> 60
> |  jimrollenhagen  |      76    0  16  60   0   0    78.9%    |  13 ( 17.1%)   |
> 
> 90
> |  jimrollenhagen  |     106    0  27  79   0   0    74.5%    |  25 ( 23.6%)   |
> 
> 180
> |  jimrollenhagen  |     157    0  41 116   0   0    73.9%    |  35 ( 22.3%)   |
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1 

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> While David (Shrews) only began working on Ironic in earnest four
> months ago, he has been working on some of the tougher problems with
> our Tempest coverage and the Nova<->Ironic interactions. He's also
> become quite active in reviews and discussions on IRC, and
> demonstrated a good understanding of the challenges facing Ironic
> today. I believe he'll also make a great addition to the core team.
> 
> 
> Below are his stats for the last 90 days.
> 
> 
> Cheers,
> Devananda
> 
> 
> +------------------+------------------------------------------+----------------+
> | Reviewer         | Reviews   -2  -1  +1  +2  +A    +/- %    | Disagreements* |
> +------------------+------------------------------------------+----------------+
> 
> 30
> |     dshrews      |      47    0  11  36   0   0    76.6%    |   7 ( 14.9%)   |
> 
> 60
> |     dshrews      |      91    0  14  77   0   0    84.6%    |  15 ( 16.5%)   |
> 
> 90
> |     dshrews      |     121    0  21 100   0   0    82.6%    |  16 ( 13.2%)   |
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How to get testr to failfast

2014-07-31 Thread Dmitry Tantsur
Hi!

On Thu, 2014-07-31 at 10:45 +0100, Chris Dent wrote:
> One of the things I like to be able to do when in the middle of making
> changes is sometimes run all the tests to make sure I haven't accidentally
> caused some unexpected damage in the neighborhood. If I have I don't
> want the tests to all run, I'd like to exit on first failure.

This makes even more sense if you _know_ that you've broken a lot of
things and want to deal with them case by case. At least for me it's more
convenient; I believe many will prefer getting all the errors at once.

>  This
> is a common feature in lots of testrunners but I can't seem to find
> a way to make it happen when testr is integrated with setuptools.
> 
> Any one know a way?
> 
> There's this:
>https://bugs.launchpad.net/testrepository/+bug/1211926
> 
> But it is not clear how or where to effectively pass the right argument,
> either from the command line or in tox.ini.
> 
> Even if you don't know a way, I'd like to hear from other people who
> would like it to be possible. It's one of several testing habits I
> have from previous worlds that I'm missing and doing a bit of
> commiseration would be a nice load off.

It would be my second most-wanted feature in our test system (after
getting a reasonable error message, at least not binary output, in the
case of import errors :)
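
(For what it's worth, the only stopgap I know of is bypassing testr
locally and running a single test module with the stdlib runner, which
does support failing fast; the module name below is just an example, not
a real one:

    python -m unittest --failfast ironic.tests.test_something

Of course that doesn't run the whole suite through testr, so it's a
workaround rather than an answer to your question.)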

> 
> Thanks.
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] help

2014-07-31 Thread Dmitry Tantsur
Hi!

This list is not for usage questions; it's for OpenStack developers. The
best way to get quick help is to use
https://ask.openstack.org/en/questions/ or to join #openstack on
Freenode and ask there.

Good luck!

On Thu, 2014-07-31 at 15:59 +0530, shailendra acharya wrote:
> Hello folks,
> This is Shailendra Acharya. I'm trying to install OpenStack Icehouse
> on CentOS 6.5, but I got stuck and have tried almost every link that
> Google suggested to me; you are my last hope.
> When I come to create a user using the keystone command as written in
> the OpenStack installation manual:
>
>    keystone user-create --name=admin --pass=ADMIN_PASS
> --email=ADMIN_EMAIL
>
> I replaced the email and password, but when I press enter it shows an
> invalid credential error. Please do something ASAP.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Dmitry Tantsur
Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (and not too detailed). My
vote is for splitting it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seeking exceptions separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on
the others until I see a separate spec for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
> Hi,
> 
> 
> I've submitted the Ironic Cisco driver blueprint after the proposal
> freeze date. This driver is critical for Cisco and a few customers to
> test as part of their private cloud expansion. The driver implementation
> is ready, along with unit tests. I will submit the code for review once
> the blueprint is accepted.
> 
> 
> The Blueprint review link: https://review.openstack.org/#/c/110217/
> 
> 
> Please let me know if it's possible to include this in the Juno release.
> 
> 
> 
> Regards
> GopiKrishna S
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Dmitry Tantsur
Hi!

On Tue, 2014-08-05 at 12:33 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> The following idea came out of last week's midcycle for how to improve
> our spec process and tracking on launchpad. I think most of us liked
> it, but of course, not everyone was there, so I'll attempt to write
> out what I recall.
> 
> 
> This would apply to new specs proposed for Kilo (since the new spec
> proposal deadline has already passed for Juno).
> 
> 
> 
> 
> First, create a blueprint in launchpad and populate it with your
> spec's heading. Then, propose a spec with just the heading (containing
> a link to the BP), Problem Description, and first paragraph outlining
> your Proposed change. 
> 
> 
> This will be given an initial, high-level review to determine whether
> it is in scope and in alignment with project direction, which will be
> reflected on the review comments, and, if affirmed, by setting the
> blueprint's "Direction" field to "Approved".

How will we formally track it in Gerrit? By having several +1's by spec
cores? Or will it be done by you (I guess only you can update
"Direction" in LP)?

> 
> 
> At this point, if affirmed, you should proceed with filling out the
> entire spec, and the remainder of the process will continue as it was
> during Juno. Once the spec is approved, update launchpad to set the
> specification URL to the spec's location on
> https://specs.openstack.org/openstack/ironic-specs/ and a member of
> the team (probably me) will update the release target, priority, and
> status.
> 
> 
> 
> 
> I believe this provides two benefits. First, it should give quicker
> initial feedback to proposer if their change is going to be in/out of
> scope, which can save considerable time if the proposal is out of
> scope. Second, it allows us to track well-aligned specs on Launchpad
> before they are completely approved. We observed that several specs
> were approved at nearly the same time as the code was approved. Due to
> the way we were using LP this cycle, it meant that LP did not reflect
> the project's direction in advance of landing code, which is not what
> we intended. This may have been confusing, and I think this will help
> next cycle. FWIW, several other projects have observed a similar
> problem with spec<->launchpad interaction, and are adopting similar
> practices for Kilo.
> 
> 
> 
> 
> Comments/discussion welcome!

I'm +1 on the idea, just some concerns about the implementation:
1. We don't have any "pre-approved" state in Gerrit; we need agreement on
when to continue (see above).
2. We'll need to speed up spec reviews, because we're adding one more
blocker on the way to the code being merged :) Maybe it's no longer a
problem actually; we're doing it faster now.

> 
> 
> 
> -Deva
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-inspector release 2.2.2 (liberty)

2015-10-21 Thread Dmitry Tantsur

We are gleeful to announce the release of:

ironic-inspector 2.2.2: Hardware introspection for OpenStack Bare Metal

With source available at:

http://git.openstack.org/cgit/openstack/ironic-inspector

The most important change is a fix for CVE-2015-5306; all users
(including users of ironic-discoverd) are strongly advised to update.


Another user-visible change is defaulting MySQL to InnoDB, as MyISAM is 
known not to work.


For more details, please see the git log history below and:

http://launchpad.net/ironic-inspector/+milestone/2.2.2

Please report issues through launchpad:

http://bugs.launchpad.net/ironic-inspector

Changes in ironic-inspector 2.2.1..2.2.2


95db43c Always default to InnoDB for MySQL
2d42cdf Updated from global requirements
2c64da2 Never run Flask application with debug mode
bbf31de Fix gate broken by the devstack trueorfalse change
12eaf81 Use auth_strategy=noauth in functional tests

Diffstat (except docs and test files)
-

devstack/plugin.sh                                 |  2 +-
ironic_inspector/db.py                             |  7 ++-
ironic_inspector/main.py                           |  5 +--
.../versions/578f84f38d_inital_db_schema.py        | 12 +++--
.../migrations/versions/d588418040d_add_rules.py   | 10 -
ironic_inspector/test/functional.py                | 51 +++---
requirements.txt                                   |  2 +-
7 files changed, 52 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e53d673..39b8423 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -21 +21 @@ oslo.rootwrap>=2.0.0 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Next meeting is November 9

2015-10-22 Thread Dmitry Tantsur

On 10/22/2015 12:33 PM, Miles Gould wrote:

I've just joined - what is the usual place and time?


Hi and welcome!

All the information you need you can find here: 
https://wiki.openstack.org/wiki/Meetings/Ironic




Thanks,
Miles

- Original Message -
From: "Beth Elwell" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, 22 October, 2015 8:33:03 AM
Subject: Re: [openstack-dev] [ironic] Next meeting is November 9

Hi Jim,

I will be on holiday the week of 9 November and so will be unable to make
that meeting. Work on the ironic UI will be posted in the sub-team report
section, and if anyone has any questions regarding it, please shoot me an
email or ping me.

Thanks!
Beth


On 22 Oct 2015, at 01:58, Jim Rollenhagen  wrote:

Hi folks,

Since we'll all be at the summit next week, and presumably recovering
the following week, the next Ironic meeting will be on November 9, in
the usual place and time. See you there! :)

// jim





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in the "openstack
baremetal" and "openstack baremetal introspection" namespaces, belonging
to ironic and ironic-inspector respectively. The TL;DR of this email is to
deprecate them and move them to TripleO-specific namespaces. Read on to
find out why.


Problem
===

I realized that we're doing the wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with the TripleO commands is
that they're highly opinionated workflow commands, but there's no way for
a user to distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or they use different defaults from the upstream project
("configure boot"), or do something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

 This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.


2. baremetal import

 This command supports a limited subset of ironic drivers and driver 
properties, only those known to os-cloud-config.


3. baremetal introspection bulk start

 This command does several bad (IMO) things:
 a. Messes with ironic node states
 b. Operates implicitly on all nodes (in a wrong state)
 c. Defaults to polling

4. baremetal show capabilities

 This is the only command that is generic enough and could actually
make it into ironicclient itself.


5. baremetal introspection bulk status

 See "bulk start" above.

6. baremetal configure ready state

 First of all, this and the next command use the "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole of TripleO.


 Second, it's actually Dell-specific.

7. baremetal configure boot

 This one is nearly ok, but it defaults to local boot, which is not an 
upstream default. Default values for images may not work outside of 
TripleO as well.


Proposal


As we already have "openstack undercloud" and "openstack overcloud" 
prefixes for TripleO, I suggest we move these commands under "openstack 
overcloud nodes" namespace. So we end up with:


 overcloud nodes import
 overcloud nodes configure ready state --drac
 overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state" 
command. As to the remaining commands:


1. baremetal introspection status --all

  This is fine to move to inspector-client, as inspector knows which 
nodes are/were on introspection. We'll need a new API though.


2. baremetal show capabilities

  We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

  I believe that we need to make 2 things explicit in this replacement
for "introspection bulk start": polling and operating on "available" nodes.


4. overcloud nodes import --dry-run

  could be a replacement for "baremetal instackenv validate".


Please let me know what you think.

Cheers,
Dmitry.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

On 11/09/2015 03:04 PM, Dougal Matthews wrote:

On 9 November 2015 at 12:44, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces
belonging to ironic and ironic-inspector accordingly. TL;DR of this
email is to deprecate them and move to TripleO-specific namespaces.
Read on to know why.

Problem
===

I realized that we're doing a wrong thing when people started asking
me why "baremetal introspection start" and "baremetal introspection
bulk start" behave so differently (the former is from
ironic-inspector, the latter is from tripleoclient). The problem
with TripleO commands is that they're highly opinionated workflows
commands, but there's no way a user can distinguish them from
general-purpose ironic/ironic-inspector commands. The way some of
them work is not generic enough ("baremetal import"), or uses
different defaults from an upstream project ("configure boot"), or
does something completely unacceptable upstream (e.g. the way
"introspection bulk start" deals with node states).


A big +1 to the idea.

We originally did this because we wanted to make it feel more
"integrated", but it never quite worked. I completely agree with all the
justifications below.


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's an "baremetal instackenv" object,
while instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and
driver properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling

4. baremetal show capabilities

  This is the only commands that is generic enough and could
actually make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Seconds, it's actually DELL-specific.


heh, that I didn't know!


7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not
an upstream default. Default values for images may not work outside
of TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under
"openstack overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot


I think this is probably okay, but I wonder if "nodes" is a bit generic?
Why not "overcloud baremetal" for consistency?


I don't have a strong opinion on it :)




As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows
which nodes are/were on introspection. We'll need a new API though.


A new API endpoint in Ironic Inspector?


Yeah, a new endpoint to report all nodes that are/were on inspection.




2. baremetal show capabilities

   We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this
replacement for "introspection bulk status": polling and operating
on "available" nodes.

4. overcloud nodes import --dry-run

   could be a replacement for "baremetal instackenv validate".


Please let me know what you think.


Thanks for bringing this up, it should make everything much clearer for
everyone.


Great! I've also added this topic to tomorrow's meeting to increase
visibility.





Cheers,
Dmitry.



[openstack-dev] [TripleO] RFC: profile matching

2015-11-09 Thread Dmitry Tantsur

Hi folks!

I spent some time thinking about bringing profile matching back in, so 
I'd like to get your comments on the following near-future plan.


First, the scope of the problem: what we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by them.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if all 4 nodes match "compute" and then only 1
matches "controller", nova can select this one node for the "compute"
flavor and then complain that it does not have enough hosts for
"controller".


We also want to conduct some sanity checks before even calling
heat/nova, to avoid cryptic "no valid host found" errors.


(1) Inspector part

During the Liberty cycle we landed a whole bunch of APIs in
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:


 rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
 rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via the inspector API using a JSON-based
DSL [1].
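
For illustration, rule 1 could look roughly like this when posted to the
rules API. This is only a sketch from memory; please check [1] for the
exact condition and action schema:

    # A sketch of rule 1 as a rules API payload (JSON written as a Python
    # dict); the field, op and action names should be verified against [1].
    compute_rule = {
        "description": "nodes with >= 8 GiB of RAM are compute candidates",
        "conditions": [
            {"field": "memory_mb", "op": "ge", "value": 8192},
        ],
        "actions": [
            {"action": "set-capability", "name": "compute_profile",
             "value": "1"},
        ],
    }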


As you see, one node can receive 0, 1 or many such capabilities. So we 
need the next step to make a final decision, based on how many nodes we 
need of every profile.


(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument, --assign-profiles, will be added. If it's provided,
tripleoclient will fetch all ironic nodes and try to ensure that we
have enough nodes with each profile.


Nodes with an existing "profile:xxx" capability are left as they are. For
nodes without a profile, it will look at the "xxx_profile" capabilities
discovered in the previous step. One of the possible profiles will be
chosen and assigned to the "profile" capability. The assignment stops as
soon as we have as many nodes of a flavor as requested by the user.


(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all the flavors involved and look at their "profile" capabilities. If
they are set for any flavors, it will check whether we have enough ironic
nodes with the given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
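
To make (2) and (3) a bit more concrete, here is a rough sketch of the
assignment and validation logic I have in mind (illustrative Python only,
not actual tripleoclient code; all helper and field names are made up):

    def assign_profiles(nodes, flavor_counts):
        """Assign a 'profile' capability until each flavor has enough nodes.

        nodes: list of dicts with a 'capabilities' dict, e.g.
            {'capabilities': {'compute_profile': '1'}}
        flavor_counts: how many nodes each profile needs, e.g.
            {'compute': 3, 'control': 1}
        """
        assigned = dict.fromkeys(flavor_counts, 0)

        # nodes that already carry profile:xxx are counted and left alone
        for node in nodes:
            profile = node['capabilities'].get('profile')
            if profile in assigned:
                assigned[profile] += 1

        for node in nodes:
            caps = node['capabilities']
            if 'profile' in caps:
                continue
            for profile, wanted in flavor_counts.items():
                if (assigned[profile] < wanted
                        and caps.get('%s_profile' % profile) == '1'):
                    caps['profile'] = profile
                    assigned[profile] += 1
                    break

        # validation: anything still missing means we fail early with a
        # clear message instead of a cryptic 'no valid host found' later
        missing = [p for p, wanted in flavor_counts.items()
                   if assigned[p] < wanted]
        return assigned, missing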


Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

Hi all!

I'd like to seek consensus (or at least some opinions) on patch 
https://review.openstack.org/#/c/206119/

It proposes the following command:

  openstack baremetal provision state --provide UUID

(where --provide can also be --active, --deleted, --inspect, etc).

I have several issues with this proposal:

1. IIUC the structure of an OSC command is "openstack noun verb". 
"provision state" is not a verb.

2. --active is not consistent with other options, which are verbs.

Let's have a quick poll, which would you prefer and why:

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID

I vote for #3. Though it's much more verbose, it reads very easily,
except for "active". For "active" I'm thinking about changing it to
"activate" or "provision".


My next candidate is #6. Though it's also not a verb, it reads pretty 
easily.


Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 10:28 AM, Lucas Alvares Gomes wrote:

Hi,


Let's have a quick poll, which would you prefer and why:

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID


I know very little about OSC and its syntax, but what I would do in
this case is follow the same syntax as the command that changes the
power state of the nodes. Apparently the power state command proposed
[1] follows the syntax:

$ openstack baremetal power --on | --off 

I would expect provision state to follow the same, perhaps

$ openstack baremetal provision --provide | --active | ... 

So my vote goes to making the power and provision state syntax
consistent. (Which is currently option #2, but none of the patches are
merged yet.)


It's still not 100% consistent: "power" is a noun, "provision" is a
verb. Not sure it matters, though; adding OSC folks so that they can
weigh in.




[1] https://review.openstack.org/#/c/172517/28

Cheers,
Lucas





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 12:26 PM, John Trowbridge wrote:



On 11/09/2015 07:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector accordingly. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know
why.

Problem
===

I realized that we're doing a wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's an "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling



I have considered this whole command as a bug for a while now. I
understand what we were trying to do and why, but it is pretty bad to
hijack another project's namespace with a command that would get a firm
-2 there.


4. baremetal show capabilities

  This is the only commands that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Seconds, it's actually DELL-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which
nodes are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or a similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement
for "introspection bulk status": polling and operating on "available"
nodes.


I am not totally convinced that we gain a huge amount by hiding the
state manipulation in this command. We need to move that logic to
tripleo-common anyways, so I think it is worth considering splitting it
from the introspect command.


+1



Dmitry and I discussed briefly at summit having the ability to pass a
list of nodes to the inspector client for introspection as well. So if
we separated out the bulk state manipulation bit, we could just use that.


And here it goes: https://review.openstack.org/#/c/243541/ :)

The only missing bit would be polling; it's tracked as 
https://bugs.launchpad.net/python-ironic-inspector-client/+bug/1480649 
if someone feels like working on it.




I get that this is going in the opposite direction of the original
intention of lowering the number of commands needed to get a functional
deployment. However, I think that goal is better solved elsewhere
(tripleo.sh, some ansible playbooks, etc.). Instead it would be nice if
the tripleoclient was more transparent.


+100



Thanks Dmitry for starting this discussion.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack

Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 02:42 PM, Lennart Regebro wrote:

These changes are fine to me.

I'm not so sure about the idea that we can't "hijack" other projects'
namespaces. If only ironic is allowed to use the prefix "baremetal",
then the prefix should not have been "baremetal" in the first place,
it should have been "ironic". Which of course means it would just be a
replacement for the ironic client, making these whole namespaces
pointless.


That's not true, ironic is officially called the Bare metal service, so 
"baremetal" is its official short name.


That said, I'm not saying we can never use others' namespaces. I only 
state that this should be done in coordination with the projects 
involved and with care to make the new commands generic enough.




I do agree that many of these should not be in baremetal at all as
they are not baremetal specific, but tripleo-things, and hence are a
part of the overcloud/undercloud namespace, and that at the minimum
teaches us to be more careful with the namespaces. We should probably
double-check with others first.

Oh, sorry, I mean "We should probably increase cross-team
communication visibility to synchronize the integrational aspects of
the openstack client project, going forward."


:)




On Mon, Nov 9, 2015 at 1:44 PM, Dmitry Tantsur  wrote:

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in "openstack baremetal"
and "openstack baremetal introspection" namespaces belonging to ironic and
ironic-inspector respectively. TL;DR of this email is to deprecate them and
move to TripleO-specific namespaces. Read on to know why.

Problem
===

I realized that we're doing the wrong thing when people started asking me why
"baremetal introspection start" and "baremetal introspection bulk start"
behave so differently (the former is from ironic-inspector, the latter is
from tripleoclient). The problem with TripleO commands is that they're
highly opinionated workflows commands, but there's no way a user can
distinguish them from general-purpose ironic/ironic-inspector commands. The
way some of them work is not generic enough ("baremetal import"), or uses
different defaults from an upstream project ("configure boot"), or does
something completely unacceptable upstream (e.g. the way "introspection bulk
start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling

4. baremetal show capabilities

  This is the only command that is generic enough and could actually make it
to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure" prefix. I
would not promise we'll never start using it in ironic, breaking the whole
TripleO.

  Second, it's actually Dell-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of TripleO
as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud" prefixes
for TripleO, I suggest we move these commands under "openstack overcloud
nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state" command.
As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which nodes
are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or a similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement for
"introspection bulk status": polling and operating on "available" nodes.

4. overcloud nodes import --dry-run

   could be a replacement for "baremetal instackenv validate".


Please let me know what you think.

Cheers,
Dmitry.


__
OpenStack Development 

Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the proposed
API is a very thin layer that's bound closely to the code in tripleo-
common.  The major objection is that renaming is not trivial; however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.


Renaming is bad when there are strong backward compatibility guarantees. 
I'm not sure if it's the case for tripleo-common.




What do people think?


Thanks,
Tzu-Mainn Chen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 03:32 PM, Steve Martinelli wrote:

So I don't know the intricacies of the baremetal APIs, but hopefully I
can shed some light on best practices.

Do try to reuse the existing actions
(http://docs.openstack.org/developer/python-openstackclient/commands.html#actions)
Do use "create", "delete", "set", "show" and "list" for basic CRUD.
Do try to have natural opposites - like issue/revoke, resume/suspend,
add/remove.


So looking at the list below, I'd say:
Don't use "update" - use "set".

What's the point of "inspect"? Can you use "show"? If it's a HEAD call,
how about "check"?

What's "manage" does it update a resource? Can you use "set" instead?

What are the natural opposites between provide/activate/abort/boot/shutdown?


 inspect, manage, provide, active and abort are all provisioning verbs 
used in the ironic API. They usually represent complex operations on a 
node. Inspection is not related to showing; it's about fetching hardware 
properties from the hardware itself and updating the ironic database. 
"manage" sets a node to a specific ("manageable") state, etc.


boot and shutdown are natural opposites, aka power on and power off.



reboot and rebuild seem good

/rant

Steve


From: "Sam Betts (sambetts)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 2015/11/10 07:20 AM
Subject: Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient
command for provision action





So you would end up with a set of commands that look like this:

Openstack baremetal [node/driver/chassis] list
Openstack baremetal port list [--node uuid] <-- replicate node-port-list

Openstack baremetal [node/port/driver] show UUID
Openstack baremetal chassis show [--nodes] UUID <-- replicate
chassis-node-list

Openstack baremetal [node/chassis/port] create
Openstack baremetal [node/chassis/port] update UUID
Openstack baremetal [node/chassis/port] delete UUID

Openstack baremetal [node/chassis] provide UUID
Openstack baremetal [node/chassis] activate UUID
Openstack baremetal [node/chassis] rebuild UUID
Openstack baremetal [node/chassis] inspect UUID
Openstack baremetal [node/chassis] manage UUID
Openstack baremetal [node/chassis] abort UUID
Openstack baremetal [node/chassis] boot UUID
Openstack baremetal [node/chassis] shutdown UUID
Openstack baremetal [node/chassis] reboot UUID

Openstack baremetal node maintain [--done] UUID
Openstack baremetal node console [--enable, --disable] UUID <-- With no
parameters this acts like node-get-console, otherwise acts like
node-set-console-mode
Openstack baremetal node boot-device [--supported, --PXE, --CDROM, etc]
UUID <-- With no parameters this acts like node-get-boot-device,
--supported makes it act like node-get-supported-boot-devices, and with a
type of boot device passed in it'll act like node-set-boot-device

Openstack baremetal [node/driver] passthru

WDYT? I think I’ve covered most of what exists in the Ironic CLI currently.

Sam

*From: *"Haomeng, Wang" <_wanghaomeng@gmail.com_
>*
Reply-To: *"OpenStack Development Mailing List (not for usage
questions)" <_openstack-dev@lists.openstack.org_
>*
Date: *Tuesday, 10 November 2015 11:41*
To: *"OpenStack Development Mailing List (not for usage questions)"
<_openstack-dev@lists.openstack.org_
>*
Subject: *Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient
command for provision action

Hi Sam,

Yes, I understand your format is:

#openstack baremetal  

These can cover all 'node' operations; however, if we want to also
support port/chassis/driver and more ironic resources, how about the
proposal below?

#openstack baremetal   

The resource/target can be one item in the following list:

node
port
chassis
driver
...

Make sense?




On Tue, Nov 10, 2015 at 7:25 PM, Sam Betts (sambetts)
<sambetts@cisco.com> wrote:

"Openstack baremetal provision provide" or "--provide" just doesn't feel
right to me; it feels like I am typing more than I need to, and it
feels like I'm telling it to do the same action twice.

I would much rather see:

Openstack baremetal provide UUID
Openstack baremetal activate UUID
Openstack baremetal delete UUID
Openstack baremetal rebuild UUID
Openstack baremetal inspect UUID
Openstack baremetal manage UUID
Openstack baremetal abort UUID

And for power:

Openstack baremetal boot UUID
Openstack baremetal shutdown UUID
Openstack baremetal reboot UUID

WDYT?

Sam

*From: *"Haomeng, Wang" <_wanghaomeng@gmail.com_

Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 04:37 PM, Giulio Fidente wrote:

On 11/10/2015 04:16 PM, Dmitry Tantsur wrote:

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


if both the api (coming) and the cli (currently python-tripleoclient)
are meant to consume the shared code (business logic) from
tripleo-common, then I think it makes sense to keep each in its own repo
... so that we avoid renaming tripleo-common as well


tripleoclient should not consume tripleo-common or have any business 
logic. Otherwise it undermines the whole goal of having an API, as we'll 
have to reproduce the same logic in the GUI.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:14 PM, Dean Troyer wrote:

On Tue, Nov 10, 2015 at 9:46 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

  inspect, manage, provide, active and abort are all provisioning
verbs used in the ironic API. They usually represent complex
operations on a node. Inspection is not related to showing; it's
about fetching hardware properties from the hardware itself and
updating the ironic database. "manage" sets a node to a specific
("manageable") state, etc.


inspect seems like a very specific action and is probably OK as-is.  We
should sanity-check other resources in OpenStack that it might also be
used with down the road and how different the action might be.


ironic-inspector uses the term "introspection", but that's another long story.



Those that are states of a resource should be handled with a set command.


Speaking of consistency, all these move a node through the ironic state 
machine, so it might be weird to have "inspect" but "set manageable". 
Maybe it's only me, not sure. The problem is that some state 
manipulations result in simple actions (e.g. the "manage" action either 
does nothing or does power credentials validation, depending on the 
initial state). But most provision state changes involve complex 
long-running operations ("active" to deploy, "deleted" to undeploy and 
clean, "inspect" to conduct inspection). Not sure how to make these 
consistent; any suggestions are very welcome.




boot and shutdown are natural opposites, aka power on and power off.


The analogous server commands (create/delete) may not make sense here
because, unlike with a server (VM), a resource is not being created or
deleted.  But a user might expect to use the same commands in both
places.  We need to consider which of those is more important.  I like
to break ties on the side of user experience consistency.

Honestly, at some point as a user, I'd like to forget whether my server
is a bare metal box or not and just use the same commands to manage it.


Well, it's not possible. Or more precisely, it is possible if you use 
ironic indirectly via the nova API. But power on/off is not very similar 
to instance create/delete. Instance create actually correlates to the 
"active" provision state, instance deletion to "deleted" (yeah, the 
naming is not so good here).




Also, I'd LOVE to avoid using 'boot' at all just to get away from the
nova command's use of it.


+1



dt

--

Dean Troyer
dtro...@gmail.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:21 PM, Brad P. Crochet wrote:

On Tue, Nov 10, 2015 at 4:09 AM, Dmitry Tantsur  wrote:

Hi all!

I'd like to seek consensus (or at least some opinions) on patch
https://review.openstack.org/#/c/206119/
It proposed the following command:



I think it's time to actually just write up a spec on this. I think we
would be better served to spell it out now, and then more people can
contribute to both the spec and to the actual implementation once the
spec is approved.

WDYT?


+1

I'll block the first patch until we get consensus. Thanks for working on it!




   openstack baremetal provision state --provide UUID

(where --provide can also be --active, --deleted, --inspect, etc).

I have several issues with this proposal:

1. IIUC the structure of an OSC command is "openstack noun verb". "provision
state" is not a verb.
2. --active is not consistent with other options, which are verbs.

Let's have a quick poll, which would you prefer and why:

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID

I vote for #3. Though it's much more verbose, it reads very easily, except
for "active". For active I'm thinking about changing it to "activate" or
"provision".

My next candidate is #6. Though it's also not a verb, it reads pretty
easily.

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:18 PM, Ben Nemec wrote:

+1 to moving anything we can into Ironic itself.

I do want to note that if we rename anything, we can't just rip and
replace.  We have users of the current commands, and we need to
deprecate those to give people a chance to move to the new ones.


Definitely. I think I used the word deprecation somewhere in this long letter :)



A few more thoughts inline.

On 11/09/2015 06:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector respectively. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know why.

Problem
===

I realized that we're doing the wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

   This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

   This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.


True, although I feel like an "import from JSON" feature would not be
inappropriate for inclusion in Ironic.  I can't believe that we're the
only ones who would be interested in mass importing nodes from a very
common format like this.


I would warmly welcome such a command, but it should not require patching 
os-cloud-config every time we add a new driver or driver property.






3. baremetal introspection bulk start

   This command does several bad (IMO) things:
   a. Messes with ironic node states
   b. Operates implicitly on all nodes (in a wrong state)


I thought that was fixed?  It used to try to introspect nodes that were
in an invalid state (like active), but it shouldn't anymore.


It introspects nodes in "available" state, which is a rude violation of 
the ironic state machine ;)




Unless your objection is that it introspects things in an available
state, which I think has to do with the state Ironic puts (or used to
put) nodes in after registration.  In any case, this one likely requires
some more discussion over how it should work.


Well, if we upgrade the baremetal API version we use from 1.6 (essentially 
Kilo) to the Liberty one, nodes would appear in the new ENROLL state. I've 
started a patch for it, but I don't have time to finish it. Anyone is free 
to take it over: https://review.openstack.org/#/c/235158/





   c. Defaults to polling

4. baremetal show capabilities

   This is the only command that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

   See "bulk start" above.

6. baremetal configure ready state

   First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

   Second, it's actually Dell-specific.


Well, as I understand it we don't intend for it to be Dell-specific,
it's just that the Dell implementation is the only one that has been
done so far.

That said, since I think this is just layering some TripleO-specific
logic on top of the vendor-specific calls Ironic provides I agree that
it probably doesn't belong in the baremetal namespace.



7. baremetal configure boot

   This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

   overcloud nodes import
   overcloud nodes configure ready state --drac
   overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

This is fine to move to inspector-client, as inspector knows whi

Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:

Hi,

In the last Ironic meeting [1] we started a discussion about whether
we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
ideas about the format of the midcycle were presented in that
conversation and this email is just a follow up on that conversation.

The ideas presented were:

1. Normal mid-cycle

Same format as the previous ones, the meetup will happen in a specific
venue somewhere in the world.


I would really want to see you all as often as possible. However, I 
don't see much value in proper face-to-face mid-cycles as compared to 
improving our day-to-day online communications.




2. Virtual mid-cycle

People doing a virtual hack session on IRC / google hangout /
others... Something like virtual sprints [2].


Actually we could do it more often than mid-cycles and with less 
planning. Say, when we feel a need to. Face-to-face communication is the 
most important part of a mid-cycle, so this choice is not an 
alternative, just one more good thing we could do from time to time.




3. Coordinated regional mid-cycles

Having more than one meetup happening in different parts of the world
with a preferable time overlap between them so we could use video
conference for some hours each day to sync up what was done/discussed
on each of the meetups.


This sounds like a good compromise between not having a midcycle at all 
(I include virtual sprints in this category too) and spending a big 
budget on traveling overseas. I would try something like that.




4. Not having a mid-cycle at all


So, what people think about it? Should we have a mid-cycle for the
Mitaka release or not? If so, what format should we use?

Other ideas are also welcome.

[1] 
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-11-09-17.00.log.html
[2] https://wiki.openstack.org/wiki/VirtualSprints

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 06:08 PM, Ben Nemec wrote:

On 11/10/2015 10:28 AM, John Trowbridge wrote:



On 11/10/2015 10:43 AM, Ben Nemec wrote:

On 11/10/2015 05:26 AM, John Trowbridge wrote:



On 11/09/2015 07:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector respectively. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know
why.

Problem
===

I realized that we're doing the wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling



I have considered this whole command as a bug for a while now. I
understand what we were trying to do and why, but it is pretty bad to
hijack another project's namespace with a command that would get a firm
-2 there.


4. baremetal show capabilities

  This is the only command that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Second, it's actually Dell-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which
nodes are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or a similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement
for "introspection bulk status": polling and operating on "available"
nodes.


I am not totally convinced that we gain a huge amount by hiding the
state manipulation in this command. We need to move that logic to
tripleo-common anyways, so I think it is worth considering splitting it
from the introspect command.

Dmitry and I discussed briefly at summit having the ability to pass a
list of nodes to the inspector client for introspection as well. So if
we separated out the bulk state manipulation bit, we could just use that.

I get that this is going in the opposite direction of the original
intention of lowering the number of commands needed to get a functional
deployment. However, I think that goal is better solved elsewhere
(tripleo.sh, some ansible playbooks, etc.). Instead it would be nice if
the tripleoclient was more transparent.


-2.  This is exactly the thing that got us to a place where our GUI was
unusable.  Business logic (and state management around Ironic node
inspection is just that) has to live in the API so all consumers can
take advantage of it.  Otherwise everyone has to reimplement it
themselves and anything but the developer-used CLI interfaces (like
tripleo.sh) fall behind, and we end up right back where we are 

[openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

Hi all!

I've seen a couple of patches removing "usedevelop = true" from tox.ini. 
This has 2 nasty consequences:
1. It's harder to manually experiment in a tox environment, as you have to 
explicitly run the 'tox' command every time you change code.
2. Most importantly, it breaks tox invocation for all people who don't 
have very recent pbr and setuptools in their distributions (which I 
suspect might be the majority of people):


ERROR: invocation failed (exit code 1), logfile: 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/log/tox-0.log

ERROR: actionid=tox
msg=packaging
cmdargs=['/usr/bin/python', 
local('/home/dtantsur/.gerrty-git/openstack/futurist/setup.py'), 
'sdist', '--formats=zip', '--dist-dir', 
local('/home/dtantsur/.gerrty-git/openstack/futurist/.tox/dist')]

env=None

Installed 
/home/dtantsur/.gerrty-git/openstack/futurist/.eggs/pbr-1.8.1-py2.7.egg
error in setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers; 
Expected ',' or end-of-list in futures>=3.0;python_version=='2.7' or 
python_version=='2.6' at ;python_version=='2.7' or python_version=='2.6'


ERROR: FAIL could not package project - v = 
InvocationError('/usr/bin/python 
/home/dtantsur/.gerrty-git/openstack/futurist/setup.py sdist 
--formats=zip --dist-dir 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/dist (see 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/log/tox-0.log)', 1)



Before you ask: using 'sudo pip install -U setuptools pbr' is out of the 
question in binary distributions :) so please make sure to remove this 
line only when everyone is updated to whatever version is required for 
understanding these ;python_version=='2.7 bits.
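
For reference, keeping the line is a tiny amount of tox.ini; the relevant 
fragment looks roughly like this (a minimal sketch, with the rest of the 
[testenv] section omitted):

[testenv]
# 'usedevelop' makes tox install the project with 'pip install -e .',
# so local code changes are picked up without re-packaging via sdist
usedevelop = true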


Thank you!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

On 11/16/2015 11:35 AM, Julien Danjou wrote:

On Mon, Nov 16 2015, Dmitry Tantsur wrote:


Before you ask: using 'sudo pip install -U setuptools pbr' is out of the question
in binary distributions :) so please make sure to remove this line only when
everyone is updated to whatever version is required for understanding these
;python_version=='2.7 bits.


But:
   pip install --user setuptools pbr
might not be out of the question.



Yeah, this (with added -U) fixes the problem. But then we have to add it 
to *all* contribution documentation. I'm pretty sure a lot of people 
won't realize they need it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

On 11/16/2015 12:28 PM, Davanum Srinivas wrote:

Dmitry,

I was trying to bring sanity to the tox.ini(s). +1 to documenting this
step somewhere prominent.


Please don't get me wrong, I really appreciate it. I'm not sure why 
"usedevelop" is not the default, though. Maybe we at least make sure to 
communicate this to people first? Because the error message is really 
vague to anyone who is not aware of these version tags.




-- Dims

On Mon, Nov 16, 2015 at 5:37 AM, Dmitry Tantsur  wrote:

On 11/16/2015 11:35 AM, Julien Danjou wrote:


On Mon, Nov 16 2015, Dmitry Tantsur wrote:


Before you ask: using 'sudo pip install -U setuptools pbr' is out of the
question
in binary distributions :) so please make sure to remove this line only
when
everyone is updated to whatever version is required for understanding
these
;python_version=='2.7 bits.



But:
pip install --user setuptools pbr
might not be out of the question.



Yeah, this (with added -U) fixes the problem. But then we have to add it to
*all* contribution documentation. I'm pretty sure a lot of people won't
realize they need it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][inspector][documentation] any suggestions

2015-11-18 Thread Dmitry Tantsur

On 11/18/2015 10:10 AM, Serge Kovaleff wrote:

Hi Stackers,

I am going to help my team with the Inspector installation instruction.


Hi, that's great!



Any ideas or suggestions what and how to contribute back to the community?

I see that Ironic Inspector could benefit from documentation efforts.
The repo doesn't have a doc folder and/or auto-generated documentation.


Creating proper documentation would be a great way to contribute :) 
it's tracked as https://bugs.launchpad.net/ironic-inspector/+bug/1514803


Right now all documentation, including the installation guide, is in a 
couple of rst files in root:

https://github.com/openstack/ironic-inspector/blob/master/README.rst
https://github.com/openstack/ironic-inspector/blob/master/HTTP-API.rst



Cheers,
Serge Kovaleff




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

I have to admit I forgot about this thread. Please find comments inline.

On 11/06/2015 05:25 PM, Bruno Cornec wrote:

Hello,

Pavlo Shchelokovskyy said on Tue, Nov 03, 2015 at 09:41:51PM +:

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would
ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.


Using the fake driver means we need a manual step to set it to something 
non-fake :) and the current introspection process already has 1 manual 
step (enrolling nodes), so I'd like autodiscovery to require 0 of them 
(at least for the majority of users).




As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing
the
ipmi password, it should not only notify/update Ironic's knowledge on
that
node, but also notify/update the CMDB on that change - at least there
must
be a possibility (a ready-to-use plug point) to do that before we roll
out
such feature.


Well, if we have a CMDB, we probably don't need to set credentials. Or 
at least we should rely on the CMDB as a primary source. This "setting 
random password" thing is more about people without CMDB (aka using 
ironic as a CMDB ;). I'm not sure it's a compelling enough use case.


Anyway, it could be interesting to talk about some generic 
OpenStack-CMDB interface, which might be something like what is proposed below.




wrt interaction with CMDB, we have been investigating some ideas that
we have gathered at https://github.com/uggla/alexandria/wiki


Oh, that's interesting. I see some potential overlap with ironic and 
ironic-inspector. Would be cool to chat about it at the next summit.




Some code has been written to try to model some of these aspects, but
having more contributors and patches to enhance that integration would
be great ! Similarly available at https://github.com/uggla/alexandria

We had planned to talk about these ideas at the previous OpenStack
summit but didn't get enough votes it seems. So now aiming at presenting
at the next one ;-)


+100, would love to hear.



HTH,
Bruno.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

On 11/02/2015 05:07 PM, Sam Betts (sambetts) wrote:

Auto discovery is a topic which has been discussed a few times in the past
for Ironic, and it's interesting to solve because it's a bit of a chicken
and egg problem. The ironic inspector allows us to inspect nodes that we
don't know the mac addresses for yet; to do this we run a global DHCP PXE
rule that will respond to all mac addresses and PXE boot any machine that
requests it, which means it's possible for machines that we haven't been
asked to inspect to boot into the inspector ramdisk and send their
information to inspector's API. To prevent this data from being processed
further by inspector if it's a machine we shouldn't care about, we do a
node lookup. If the data fails this node lookup we used to drop this data
and continue no further; in release 2.0.0 we added a hook point to
intercept this state, called the Node Not Found hook point, which allows
us to run some python code at this point in processing before failing and
dropping the inspection data. Something we've discussed as a use for this
hook point is enrolling a node that fails the lookup into Ironic, and then
having inspector continue to process the inspection data as we would for
any other node that had inspection requested for it; this allows us to
auto-discover unknown nodes into Ironic.


If this auto discovery hook were enabled, this would be the flow when
inspector receives inspection data from the inspector ramdisk:

- Run pre-process on the inspection data to sanitise the data and ready
  it for the rest of the process

- Node lookup using fields from the inspection data:
   - If in inspector node cache, return node info
   - If not in inspector node cache but is in ironic node database, fail
     inspection because it's a known node and inspection hasn't been
     requested for it.
   - If not in inspector node cache or ironic node database, enroll the
     node in ironic and return node info

- Process inspection data
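
To make the branching above concrete, here is a rough sketch of the
decision logic in Python (illustrative only: the two dicts and the lookup
key below are stand-ins for the real inspector internals, not actual code):

# Illustrative sketch; the dicts stand in for the inspector node cache and
# the ironic node database, keyed by a BMC address or MAC.
inspector_cache = {}   # nodes for which introspection was requested
ironic_db = {}         # all nodes known to ironic

def lookup_or_enroll(introspection_data, default_driver='fake'):
    key = (introspection_data.get('ipmi_address')
           or introspection_data['macs'][0])

    if key in inspector_cache:
        # known node, introspection was requested for it
        return inspector_cache[key]

    if key in ironic_db:
        # known to ironic but introspection was NOT requested: refuse
        raise RuntimeError('node %s is known but not on introspection' % key)

    # unknown everywhere: auto-discover (enroll) it and keep processing
    node = {'uuid': key, 'driver': default_driver}
    ironic_db[key] = node
    inspector_cache[key] = node
    return node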


The remaining question for this idea is how to handle the driver settings
for each node that we discover; we've currently discussed 3 different
options:

1. Enroll the node in ironic using the fake driver, and leave it to the
operator to set the driver type and driver info before they move the node
from enroll to manageable.


I'm -1 to this because it requires a manual step. We already have a 
process requiring 1 manual step - inspection :) I'd like autodiscovery 
to turn it to 0.





2. Allow for the default driver and driver info to be set in the ironic
inspector configuration file; this will be set on every node that is
auto discovered. Possible config file example:

[autodiscovery]
driver = pxe_ipmitool
address_field = 
username_field = 
password_field = 


This is my favorite one. We'll also need to provide the default user 
name/password. We can try to advance a node to the MANAGEABLE state after 
enrolling it. If the default credentials don't work, the node would stay 
in the ENROLL state, and this will be a signal to the operator to check them.
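
For illustration, a node-not-found hook implementing option 2 could look
roughly like the sketch below. The hook signature and the
default_username/default_password options are my assumptions on top of the
proposal above, not existing code; the ironic client calls are the usual
node create and set_provision_state ones:

def discovery_hook(introspection_data, ironic_client, conf):
    # 'conf' is assumed to behave like ConfigParser with the proposed
    # [autodiscovery] section
    address_field = conf.get('autodiscovery', 'address_field')
    driver_info = {
        'ipmi_address': introspection_data.get(address_field),
        'ipmi_username': conf.get('autodiscovery', 'default_username'),
        'ipmi_password': conf.get('autodiscovery', 'default_password'),
    }
    node = ironic_client.node.create(
        driver=conf.get('autodiscovery', 'driver'),  # e.g. pxe_ipmitool
        driver_info=driver_info)
    # try to advance the node to MANAGEABLE; if the default credentials
    # are wrong, it simply stays in ENROLL as a signal to the operator
    ironic_client.node.set_provision_state(node.uuid, 'manage')
    return node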





3. A possibly vendor specific option that was suggested at the summit was
to provide an ability to look up out of band credentials from an
external CMDB.


We already have an extension point for discovery. If we know more about 
CMDB interfaces, we can extend it, but it's already possible to use.





The first option is technically possible using the second option, by
setting the driver to fake and leaving the driver info blank.


+1




With IPMI based drivers most IPMI related information can be retrieved
from the node by the inspector ramdisk; however, for non-IPMI based
drivers such as the cimc/ucs drivers this information isn't accessible
from an in-band OS command.

A problem with option 2 is that it cannot account for a mixed driver
environment.


We have also discussed that for IPMI based drivers inspector could set a
new randomly generated password onto the freshly discovered node, the
idea being that fresh hardware often comes with a default password, and
if you used inspector to discover it then it could set a unique password
on it and automatically make ironic aware of that.


We're throwing this idea out onto the mailer because we'd like to get
feedback from the community to see if this would be useful for people
using inspector, and to see if people have any opinions on what the
right way to handle the node driver settings is.


Yeah, I'm not decided on this one. Sounds cool but dangerous :)




Sam (sambetts)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage q

Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-19 Thread Dmitry Tantsur

On 11/19/2015 11:57 AM, Sam Betts (sambetts) wrote:

What Yuiko has described makes a lot of sense, and from that perspective
perhaps instead of us defining what driver a node should and shouldn’t
be using a config file, we should just provide a guide to using the
inspector rules for this and maybe some prewritten rules that can set
the driver and driver info etc fields for different cases?

Then the work flow would be, default to Fake driver because we don’t
need any special info for that, however if a rule detects that its
IPMIable by making sure that the IPMI address is valid or something,
then it can set the driver to an ipmitool one and then set the password
and username based on either a retrieved field or values defined in the
rule itself.


Using introspection rules, wow! That's a really nice idea, I wonder why I 
didn't think about it.
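
For example, something along these lines might do it (a sketch only; I
haven't checked the exact condition/action names against the current rules
code, so treat the field names and paths below as assumptions):

# A discovery rule sketch: if the ramdisk reported a BMC address, set an
# IPMI driver and a default username on the node.
discovery_rule = {
    'description': 'Set an IPMI driver for newly discovered nodes',
    'conditions': [
        {'op': 'ne', 'field': 'data://inventory.bmc_address', 'value': ''},
    ],
    'actions': [
        {'action': 'set-attribute', 'path': '/driver',
         'value': 'pxe_ipmitool'},
        {'action': 'set-attribute', 'path': '/driver_info/ipmi_username',
         'value': 'admin'},
    ],
}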




WDYT?


That sounds really promising, and simplifies our life a lot. I would 
love to see all these ideas written down. Do you folks think it's time 
for an ironic-inspector-specs repo? :D




Sam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] specs process for ironic-inspector

2015-11-19 Thread Dmitry Tantsur

Hi folks!

I've been dodging the subject for some time (mostly due to my laziness), 
but now it seems like the time has come. We're discussing 2 big features, 
autodiscovery and HA, that I would like us to have a proper consensus on.


I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own 
template

3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the rest of ironic would 
increase visibility of large inspector changes (i.e. those deserving a 
spec). We would probably use an [inspector] tag in the commit summary, so 
that people explicitly NOT wanting to review them can quickly ignore them.


Also note that I still see #1 (use only blueprints) as a way to go for 
simple features.


WDYT?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-11-19 Thread Dmitry Tantsur

On 11/19/2015 02:39 PM, Pavlo Shchelokovskyy wrote:

Hi all,

+1 for specs in general, big features require a proper review and
discussion for which LP is not a good choice.

+1 for not requiring a spec for small features, LP BP is enough for just
time/release tracking, but of course cores can request a proper spec to
be proposed if they feel the feature is worth discussion.

0 for using ironic-specs. It will increase visibility to the wider ironic
community, sure. But it seems ironic-inspector has to decide how
integrated it should be with the other ironic project infra pieces as
well. For example, there is now a patch on review to build a proper
sphinx docs for ironic-inspector. Should those then be published and
where? Should ironic-inspector have own doc site e.g.
http://docs.openstack.org/developer/ironic-inspector/, or somehow be
incorporated in ironic doc site? IMO decision on specs and docs should
be consistent.


This is a good point. It's very likely that we'll post documentation to 
a separate site.




Best regards,

On Thu, Nov 19, 2015 at 3:20 PM Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi folks!

I've been dodging the subject for some time (mostly due to my laziness),
but now it seems like the time has come. We're discussing 2 big features,
autodiscovery and HA, that I would like us to have a proper consensus on.

I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own
template
3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the rest of ironic would
increase visibility of large inspector changes (i.e. those deserving a
spec). We would probably use an [inspector] tag in the commit summary, so
that people explicitly NOT wanting to review them can quickly ignore them.

Also note that I still see #1 (use only blueprints) as a way to go for
simple features.

WDYT?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-19 Thread Dmitry Tantsur

On 11/16/2015 03:05 PM, Jim Rollenhagen wrote:

On Wed, Nov 11, 2015 at 12:16:34PM -0500, Ruby Loo wrote:

On 10 November 2015 at 12:08, Dmitry Tantsur  wrote:


On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:


Hi,

In the last Ironic meeting [1] we started a discussion about whether
we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
ideas about the format of the midcycle were presented in that
conversation and this email is just a follow up on that conversation.

The ideas presented were:

1. Normal mid-cycle

Same format as the previous ones, the meetup will happen in a specific
venue somewhere in the world.



I would really want to see you all as often as possible. However, I don't
see much value in proper face-to-face mid-cycles as compared to improving
our day-to-day online communications.



+2.

My take on mid-cycles is that if folks want to have one, that is fine, I
might not attend :)

My preference is 4) no mid-cycle -- and try to work more effectively with
people in different locations and time zones.


++ that was part of my thought process when I proposed not having an
official midcycle.

Another idea I floated last week was to do a virtual midcycle of sorts.
Treat it like a normal midcycle in that everyone tells their management
"I'm out for 3-4 days for the midcycle", but they don't travel anywhere.
We come up with an agenda, see if there's any planning/syncing work to
do, or if it's all just hacking on code/reviews.

Then we can set up some hangouts (or similar) to get people in the same
"room" working on things. Time zones will get weird, but we tend to
split into smaller groups at the midcycle anyway; this is just more
timezone-aligned. We can also find windows where time zones overlap when
we want to go across those boundaries. Disclaimer: people may need to
work some weird hours to do this well.

I think this might get a little bit bumpy, but if it goes relatively
well we can try to improve on it for the future. Worst case, it's a
total failure and is roughly equivalent to the "no midcycle" option.


I would try it, +1



// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Dmitry Tantsur

On 11/20/2015 12:50 AM, Bruno Cornec wrote:

Hello,

Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:

Hi list and Bruno,

I’m interested in adding virtual media boot interface for redfish (
https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot).

It depends on
https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
and a corresponding spec https://review.openstack.org/184653, that
proposes
adding support for redfish (adding new power and management
interfaces) to
ironic. It also seems to depend on python-redfish client -
https://github.com/devananda/python-redfish.


Very good idea ;-)


I’d like to know what is the current status of it?


We have recently made some successful tests with both a real HP ProLiant
server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.

The version working for these tests is at
https://github.com/bcornec/python-redfish (prototype branch)
I think we should now move that work into master and again make a pull
request to Devananda.


Is there some roadmap of what should be added to
python-redfish (or is the one mentioned in the spec still relevant)?


I think this is still relevant.


Is there a way for others to contribute in it?


Feel free to git clone the repo and propose patches to it! We would be
happy to have contributors :-) I've also copied our mailing list so that
the other contributors are aware of this.


Bruno, do you plan to move it
under ironic umbrella, or into pyghmi as people suggested in spec?


That's a difficult question. On one hand, I don't think python-redfish
should be under the OpenStack umbrella per se. This is a useful python
module to talk to servers providing a Redfish interface and this has
no relationship with OpenStack ... except that it's very useful for
Ironic! But it could also be used by other projects in the future such as
Hadoop for node deployment, or my MondoRescue Disaster Recovery project,
e.g. That's also why we have not used OpenStack modules, in order to
avoid creating an artificial dependency that could prevent that module
from being used by these other projects.


Using the openstack umbrella does not automatically mean the project can't 
be used outside of openstack. It just means you'll be using openstack 
infra for its development, which might be a big plus.




I'm new to the python galaxy myself, but thought that pypy would be the
right place for it, but I really welcome suggestions here.


You mean PyPI? I don't see how these 2 contradict each other, PyPI is 
just a way to distribute releases.



I also need to come back to the Redfish spec itself and update it with
the latest feedback we got, in order to have more up-to-date content for
the Mitaka cycle.

Best regards,
Bruno.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-23 Thread Dmitry Tantsur

On 11/17/2015 04:31 PM, Tzu-Mainn Chen wrote:






On 10 November 2015 at 15:08, Tzu-Mainn Chen <tzuma...@redhat.com> wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 - I think this makes most sense if we are not going to support
the tripleo repo as a library.


Okay, this seems to be the consensus, which is great.

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.
In IRC, bnemec and trown suggested splitting the renamed repo into two
packages - 'python-tripleo' and 'tripleo-api',
which seems sensible to me.


-1, that would be inconsistent with what other projects are doing. I 
guess tripleo-incubator will die soon, and I think only tripleo devs 
have any intuition about it. For me tripleo == instack-undercloud.




What do others think?


Mainn


b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the
proposed
API is a very thin layer that's bound closely to the code in
tripleo-
common.  The major objection is that renaming is not trivial;
however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.

What do people think?


Thanks,
Tzu-Mainn Chen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack]Question about using Devstack

2015-11-23 Thread Dmitry Tantsur

On 11/23/2015 12:07 PM, Oleksii Zamiatin wrote:



On Mon, Nov 23, 2015 at 12:58 PM, Young Yang <afe.yo...@gmail.com> wrote:

Hi,
I'm using devstack to deploy stable/kilo on my XenServer.
I successfully deployed devstack, but I found that every time I
restart it, devstack runs ./stack.sh again, clearing all my data and
reinstalling all the components.
So here come the questions.

1) Can I stop devstack from reinstalling after rebooting and just
use the OpenStack installed successfully last time?
I've tried replacing stack.sh with a blank shell script
to stop it from running. Then it didn't reinstall the services after
rebooting; however, some services didn't start successfully.


try rejoin-stack.sh - it is in the same folder as unstack.sh, stack.sh


Did it ever work for anyone? :)




2) I found that devstack will exit if it is unable to connect to the
Internet when rebooting.
Is there any way I can reboot devstack successfully without a
connection to the Internet, after I've installed it successfully with
a connection to the Internet?

Thanks in advance !  :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-23 Thread Dmitry Tantsur

On 11/23/2015 05:42 PM, Morgan Fainberg wrote:

Hi everyone,

This email is being written in the context of Keystone more than any
other project but I strongly believe that other projects could benefit
from a similar evaluation of the policy.

Most projects have a policy that prevents the following scenario (it is
a social policy not enforced by code):

* Employee from Company A writes code
* Other Employee from Company A reviews code
* Third Employee from Company A reviews and approves code.

This policy has a lot of history as to why it was implemented. I am not
going to dive into the depths of this history as that is the past and we
should be looking forward. This type of policy is an actively
distrustful policy. With the exception of a few potentially bad actors
(again, not going to point anyone out here), most of the folks in the
community who have been given core status on a project are trusted to
make good decisions about code and code quality. I would hope that
any/all of the Cores would also standup to their management chain if
they were asked to "just push code through" if they didn't sincerely
think it was a positive addition to the code base.


Thanks for raising this. I always apply this policy in ironic, not 
because I don't trust my colleagues. The problem I'm 
trying to avoid is members of the same company having the same one-sided 
view of a problem.




Now within Keystone, we have a fair amount of diversity of core
reviewers, but we each have our specialities and in some cases (notably
KeystoneAuth and even KeystoneClient) getting the required diversity of
reviews has significantly slowed/stagnated a number of reviews.


This is probably a fair use case for not applying this rule.



What I would like us to do is to move to a trustful policy. I can
confidently say that company affiliation means very little to me when I
was PTL and nominating someone for core. We should explore making a
change to a trustful model, and allow for cores (regardless of company
affiliation) review/approve code. I say this since we have clear steps
to correct any abuses of this policy change.

With all that said, here is the proposal I would like to set forth:

1. Code reviews still need 2x Core Reviewers (no change)
2. Code can be developed by a member of the same company as both core
reviewers (and approvers).
3. If the trust that is being given via this new policy is violated, the
code can [if needed], be reverted (we are using git here) and the actors
in question can lose core status (PTL discretion) and the policy can be
changed back to the "distrustful" model described above.

I hope that everyone weighs what it means within the community to start
moving to a trusting-of-our-peers model. I think this would be a net win
and I'm willing to bet that it will remove noticeable roadblocks [and
even make it easier to have an organization work towards stability fixes
when they have the resources dedicated to it].

Thanks for your time reading this.

Regards,
--Morgan
PTL Emeritus, Keystone


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-26 Thread Dmitry Tantsur

On 11/25/2015 10:43 PM, Ben Nemec wrote:

On 11/23/2015 06:50 AM, Dmitry Tantsur wrote:

On 11/17/2015 04:31 PM, Tzu-Mainn Chen wrote:






 On 10 November 2015 at 15:08, Tzu-Mainn Chen <tzuma...@redhat.com> wrote:

 Hi all,

 At the last IRC meeting it was agreed that the new TripleO REST API
 should forgo the Tuskar name, and simply be called... the TripleO
 API.  There's one more point of discussion: where should the API
 live?  There are two possibilities:

 a) Put it in tripleo-common, where the business logic lives.  If we
 do this, it would make sense to rename tripleo-common to simply
 tripleo.


 +1 - I think this makes most sense if we are not going to support
 the tripleo repo as a library.


Okay, this seems to be the consensus, which is great.

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.
In IRC, bnemec and trown suggested splitting the renamed repo into two
packages - 'python-tripleo' and 'tripleo-api',
which seems sensible to me.


-1, that would be inconsistent with what other projects are doing. I
guess tripleo-incubator will die soon, and I think only tripleo devs
have any intuition about it. For me tripleo == instack-undercloud.


This was only referring to rpm packaging, and it is how we currently
package most of the other projects.  The repo itself would stay as one
thing, but would be split into python-tripleo and openstack-tripleo-api
rpms.

I don't massively care about package names, but given that there is no
(for example) openstack-nova package and openstack-tripleo is already in
use by a completely different project, I think it's reasonable to move
ahead with the split packages named this way.


Got it, sorry for the confusion







What do others think?


Mainn


 b) Put it in its own repo, tripleo-api


 The first option made a lot of sense to people on IRC, as the
 proposed
 API is a very thin layer that's bound closely to the code in
 tripleo-
 common.  The major objection is that renaming is not trivial;
 however
 it was mentioned that renaming might not be *too* bad... as long as
 it's done sooner rather than later.

 What do people think?


 Thanks,
 Tzu-Mainn Chen

 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-announce] [release][stable][keystone][ironic] keystonemiddleware release 1.5.3 (kilo)

2015-11-26 Thread Dmitry Tantsur
I suspect it could break ironic stable/kilo in the same way as the 2.0.0 
release did. Still investigating; checking whether 
https://review.openstack.org/#/c/250341/ will also fix it. Example of a 
failing patch: https://review.openstack.org/#/c/248365/


On 11/23/2015 08:54 PM, d...@doughellmann.com wrote:

We are pumped to announce the release of:

keystonemiddleware 1.5.3: Middleware for OpenStack Identity

This release is part of the kilo stable release series.

With source available at:

 http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

 https://pypi.python.org/pypi/keystonemiddleware

For more details, please see the git log history below and:

 http://launchpad.net/keystonemiddleware/+milestone/1.5.3

Please report issues through launchpad:

 http://bugs.launchpad.net/keystonemiddleware

Notable changes


will now require python-requests<2.8.0

Changes in keystonemiddleware 1.5.2..1.5.3
--

d56d96c Updated from global requirements
9aafe8d Updated from global requirements
cc746dc Add an explicit test failure condition when auth_token is missing
5b1e18f Fix list_opts test to not check all deps
217cd3d Updated from global requirements
518e9c3 Ensure cache keys are a known/fixed length
033c151 Updated from global requirements

Diffstat (except docs and test files)
-

keystonemiddleware/auth_token/_cache.py   | 19 ++-
requirements.txt  | 19 ++-
setup.py  |  1 -
test-requirements-py3.txt | 18 +-
test-requirements.txt | 18 +-
7 files changed, 69 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e3288a1..23308cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,9 +7,9 @@ iso8601>=0.1.9
-oslo.config>=1.9.3,<1.10.0  # Apache-2.0
-oslo.context>=0.2.0,<0.3.0 # Apache-2.0
-oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
-oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
-oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
-pbr>=0.6,!=0.7,<1.0
-pycadf>=0.8.0,<0.9.0
-python-keystoneclient>=1.1.0,<1.4.0
-requests>=2.2.0,!=2.4.0
+oslo.config<1.10.0,>=1.9.3 # Apache-2.0
+oslo.context<0.3.0,>=0.2.0 # Apache-2.0
+oslo.i18n<1.6.0,>=1.5.0 # Apache-2.0
+oslo.serialization<1.5.0,>=1.4.0 # Apache-2.0
+oslo.utils!=1.4.1,<1.5.0,>=1.4.0 # Apache-2.0
+pbr!=0.7,<1.0,>=0.6
+pycadf<0.9.0,>=0.8.0
+python-keystoneclient<1.4.0,>=1.2.0
+requests!=2.4.0,<2.8.0,>=2.2.0
@@ -16,0 +17 @@ six>=1.9.0
+stevedore<1.4.0,>=1.3.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 11d9e17..5ab5eb0 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-hacking>=0.10.0,<0.11
+hacking<0.11,>=0.10.0
@@ -9,2 +9,2 @@ discover
-fixtures>=0.3.14
-mock>=1.0
+fixtures<1.3.0,>=0.3.14
+mock<1.1.0,>=1.0
@@ -12,5 +12,5 @@ pycrypto>=2.6
-oslosphinx>=2.5.0,<2.6.0 # Apache-2.0
-oslotest>=1.5.1,<1.6.0  # Apache-2.0
-oslo.messaging>=1.8.0,<1.9.0  # Apache-2.0
-requests-mock>=0.6.0  # Apache-2.0
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
+oslosphinx<2.6.0,>=2.5.0 # Apache-2.0
+oslotest<1.6.0,>=1.5.1 # Apache-2.0
+oslo.messaging<1.9.0,>=1.8.0 # Apache-2.0
+requests-mock>=0.6.0 # Apache-2.0
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
@@ -19 +19 @@ testresources>=0.2.4
-testtools>=0.9.36,!=1.2.0
+testtools!=1.2.0,>=0.9.36



___
OpenStack-announce mailing list
openstack-annou...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-11-26 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by it.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for the
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct some sanity check before even calling to
heat/nova to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of API's to
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].
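To make this more concrete, here is a rough sketch of how rule 1 could be
submitted to the inspector rules API over plain HTTP. The endpoint path,
operator and action names are illustrative (double-check them against [1]),
and the endpoint URL and token are placeholders:

    import requests

    INSPECTOR_URL = 'http://127.0.0.1:5050'  # hypothetical inspector endpoint
    TOKEN = 'secret'                         # a valid keystone token

    rule = {
        'description': 'mark nodes with >= 8 GiB of RAM as compute candidates',
        'conditions': [
            {'op': 'ge', 'field': 'memory_mb', 'value': 8192},
        ],
        'actions': [
            {'action': 'set-capability', 'name': 'compute_profile', 'value': '1'},
        ],
    }

    # POST the rule; inspector would then apply it to every node it introspects.
    resp = requests.post('%s/v1/rules' % INSPECTOR_URL, json=rule,
                         headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()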

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with existing "profile:xxx" capability are left as they are. For
nodes without a profile it will look at "xxx_profile" capabilities
discovered on the previous step. One of the possible profiles will be
chosen and assigned to the "profile" capability. The assignment stops as
soon as we have enough nodes of a flavor, as requested by the user.
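To illustrate the intended behaviour (this is only a rough sketch with
made-up input structures, not the final code; see the prototype patch
mentioned below):

    def assign_profiles(nodes, flavor_counts):
        # nodes: list of dicts with a 'capabilities' dict, e.g.
        #   {'capabilities': {'compute_profile': '1', 'controller_profile': '1'}}
        # flavor_counts: how many nodes each profile needs, e.g.
        #   {'controller': 1, 'compute': 3}
        assigned = dict((p, 0) for p in flavor_counts)

        # Nodes that already carry an explicit profile are counted, not touched.
        for node in nodes:
            profile = node['capabilities'].get('profile')
            if profile in assigned:
                assigned[profile] += 1

        for node in nodes:
            caps = node['capabilities']
            if 'profile' in caps:
                continue
            # Discovered candidates look like 'compute_profile': '1'.
            candidates = [c[:-len('_profile')] for c, v in caps.items()
                          if c.endswith('_profile') and v == '1']
            for profile in candidates:
                if assigned.get(profile, 0) < flavor_counts.get(profile, 0):
                    caps['profile'] = profile
                    assigned[profile] += 1
                    break  # stop as soon as this node got a profile
        return assigned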


I've put up a prototype patch for this work item: 
https://review.openstack.org/#/c/250405/




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check if we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
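Again just a sketch of the validation idea, using the same made-up
structures as in the previous snippet:

    def validate_profiles(nodes, flavor_counts):
        # Count ironic nodes per assigned profile and compare with what the
        # flavors request; a non-empty result means we should fail early
        # instead of letting nova report 'no valid host found'.
        available = {}
        for node in nodes:
            profile = node['capabilities'].get('profile')
            if profile:
                available[profile] = available.get(profile, 0) + 1

        errors = []
        for profile, wanted in flavor_counts.items():
            have = available.get(profile, 0)
            if have < wanted:
                errors.append('%s: requested %d node(s), only %d available'
                              % (profile, wanted, have))
        return errors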


Looks like this is already implemented, so the patch above is the only 
thing we actually need.




Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Releases and things

2015-11-26 Thread Dmitry Tantsur
FYI the same thing applies to both inspector and (very soon) 
inspector-client.


On 11/26/2015 04:30 PM, Ruby Loo wrote:

On 25 November 2015 at 18:02, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:

Hi all,

We're approaching OpenStack's M-1 milestone, and as we have lots of good
stuff in the master branch, and no Mitaka release yet, I'd like to make
a release next Thursday, December 3.

First, I've caught us up (best I can tell) on missing release notes
since our last release. Please do review them:
https://review.openstack.org/#/c/250029/

Second, please make sure when writing and reviewing code, that we are
adding release notes for anything significant, including important bug
fixes. See the patch above for examples on things that could be
candidates for the release notes. Basically, if you think it's something
a deployer or operator might care about, we should have a note for it.

How to make a release note:
http://docs.openstack.org/developer/reno/usage.html


Jim, thanks for putting together the release notes! It isn't crystal
clear to me what ought to be mentioned in release notes, but I'll use
your release notes as a guide :)

This is a heads up to folks that if you have submitted a patch that
warrants mention in the release notes, you ought to update the patch to
include a note. Otherwise, (sorry,) it will be -1'd.

Last, I'd love if cores could help test the master branch and try to
dislodge any issues there, and also try to find any existing bug reports
that feel like they should definitely be fixed before the release.


I think this also means that we shouldn't land any patches this coming
week, that might be risky or part of an incomplete feature.

After going through the commit log to build the release notes patch, I
think we've done a lot of great work since the 4.2 release. Thank you
all for that. Let's keep pushing hard on our priority list and have an
amazing rest of the cycle! :D


Hear, hear!

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Dmitry Tantsur

On 11/28/2015 02:48 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:

Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.


I've had a couple of questions about this split for release notes. The
intent is for developer-focused notes to continue to come from commit
messages and in-tree documentation, while using reno for new and
additional deployer-focused communication. Most commits to libraries
won't need reno release notes.


This looks like unnecessary overcomplication. Why not use such a 
convenient tool for both kinds of release notes instead of having us 
invent and maintain one more place to put release notes, now for 
developers? It's already not so easy to explain reno to newcomers, this 
idea makes it even harder...




Doug



After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go ahead and add reno, but it's not a requirement. If you do
set up multiple branches, make sure you have one page that uses the
release-notes directive without specifing a branch, as in the
oslo.config example, to build notes for the "current" branch to get
releases from master and to serve as a test for rendering notes
added to stable branches.

Thanks,
Doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:14 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2015-11-30 10:06:25 +0100:

On 11/28/2015 02:48 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:

Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.


I've had a couple of questions about this split for release notes. The
intent is for developer-focused notes to continue to come from commit
messages and in-tree documentation, while using reno for new and
additional deployer-focused communication. Most commits to libraries
won't need reno release notes.


This looks like unnecessary overcomplication. Why not use such a
convenient tool for both kinds of release notes instead of having us
invent and maintain one more place to put release notes, now for


In the past we have had rudimentary release notes and changelogs
for developers to read based on the git commit messages. Since
deployers and developers care about different things, we don't want
to make either group sift through the notes meant for the other.
So, we publish notes in different ways.


Hmm, so maybe for small libraries with few changes it's still fine to 
publish them together. What do you think?




The thing that is new here is publishing release notes for changes
in libraries that deployers need to know about. While the Oslo code
was in the incubator, and being copied into applications, it was
possible to detect deployer-focused changes like new or deprecated
configuration options in the application and put the notes there.
Using shared libraries means those changes can happen without
application developers being aware of them, so the library maintainers
need to be publishing notes. Using reno for those notes is consistent
with the way they are handled in the applications, so we're extending
one tool to more repositories.


developers? It's already not so easy to explain reno to newcomers, this
idea makes it even harder...


Can you tell me more about the difficulty you've had? I would like to
improve the documentation for reno and for how we use it.


Usually people are stuck at the "how do I do this at all" stage :) We've 
even added it to the ironic developer FAQ. As for me, the official reno 
documentation is nice enough (but see below); maybe people are just not 
aware of it.


Another "issue" (at least for our newcomers) with reno docs is that 
http://docs.openstack.org/developer/reno/usage.html#generating-a-report 
mentions the "reno report" command which is not something we all 
actually use, we use these "tox -ereleasenotes" command. What is worse, 
this command (I guess it's by design) does not catch release note files 
that are just created locally. It took me time to figure out that I have 
to commit release notes before "tox -ereleasenotes" would show them in 
the rendered HTML.


Finally, people are confused by how our release note jobs handle 
branches. E.g. ironic-inspector release notes [1] currently seem to show 
release notes from stable/liberty (judging by the version), so no 
current items [2] are shown.


[1] http://docs.openstack.org/releasenotes/ironic-inspector/unreleased.html
[2] for example 
http://docs-draft.openstack.org/18/250418/2/gate/gate-ironic-inspector-releasenotes/f0b9363//releasenotes/build/html/unreleased.html




Doug





Doug



After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go ahead and add reno, but it's not a requirement.

Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 04:19 PM, Derek Higgins wrote:

Hi All,

 A few months ago tripleo switched from its devtest-based CI to one that
was based on instack. Before doing this we anticipated disruption in the
CI jobs and removed them from non-tripleo projects.

 We'd like to investigate adding it back to heat and ironic as these
are the two projects where we find our CI provides the most value. But
we can only do this if the results from the job are treated as voting.

 In the past most of the non-tripleo projects tended to ignore the
results from the tripleo job, as it wasn't unusual for the job to be broken
for days at a time. The thing is, ignoring the results of the job is the
reason (the majority of the time) it was broken in the first place.
 To decrease the number of breakages we are now no longer running
master code for everything (for the non-tripleo projects we bump the
versions we use periodically if they are working). I believe with this
model the CI jobs we run have become a lot more reliable; there are
still breakages, but far less frequently.

What I'm proposing is that we add at least one of our tripleo jobs back to both
heat and ironic (and other projects associated with them, e.g. clients,
ironic-inspector, etc.), tripleo will switch to running the latest master of
those repositories, and the cores approving on those projects should wait
for a passing CI job before hitting approve. So how do people feel
about doing this? Can we give it a go? A couple of people have already
expressed an interest in doing this, but I'd like to make sure we're all
in agreement before switching it on.


I'm one of these "people", so definitely +1 here.

By the way, is it possible to NOT run tripleo-ci on changes touching 
only tests and docs? We do the same for our devstack jobs; it saves some 
infra resources.




thanks,
Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



  This is to announce that we have set up a Third Party CI environment
for ProLiant iLO drivers. The results will be posted under the "HP Proliant CI
check" section in non-voting mode. We will be running the basic deploy
tests for the iscsi_ilo and agent_ilo drivers for the check queue. We will
first work to make the results consistent, and over a period of time we
will try to promote it to voting mode.



For more information, check the Wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci ;
for any issues, please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.



Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements

Thank you,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 06:24 PM, Anita Kuno wrote:

On 11/30/2015 12:17 PM, Dmitry Tantsur wrote:

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



   This is to announce that  we have  setup  a  Third Party CI
environment
for Proliant iLO Drivers. The results will be posted  under "HP
Proliant CI
check" section in Non-voting mode.   We will be  running the basic
deploy
tests for  iscsi_ilo and agent_ilo drivers  for the check queue.  We
will
first  pursue to make the results consistent and over a period of
time we
will try to promote it to voting mode.



 For more information check the Wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
,
for any issues please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.


If Ironic wants to hear about announced drivers they have agreed to do
so as part of their weekly irc meeting:
2015-11-30T17:19:55   I think it is reasonable for each
driver team, if they want to announce it in the meeting, to do so on the
whiteboard section for their driver. we'll all see that in the weekly
meeting
2015-11-30T17:20:08   but it will avoid spamming the whole
openstack list

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-11-30.log


I was there, and I already said that I'm not buying the "spamming the 
list" argument. There are much less important things that I see here 
right now, even though I do actively use filters to only see potentially 
relevant things. We've been actively (and not very successfully) 
encouraging people to use the ML instead of IRC conversations (or even 
private messages and video chats), and this thread does not seem in line 
with that.




Thank you,
Anita.





Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements


Thank you,
Anita.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-02 Thread Dmitry Tantsur

On 12/01/2015 06:55 PM, Ben Nemec wrote:

Sorry for not getting to this earlier.  Some thoughts inline.

On 11/09/2015 08:51 AM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by it.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct some sanity check before even calling to
heat/nova to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of API's to
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

   rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
   rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.


Is the intent that this will replace the standalone ahc-match call that
currently assigns profiles to nodes?  In general I'm +1 on simplifying
the process (which is why I'm finally revisiting this) so I think I'm
onboard with that idea.


Yes





(2) Modifications of `overcloud deploy` command: assigning profiles

New argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with existing "profile:xxx" capability are left as they are. For
nodes without a profile it will look at "xxx_profile" capabilities
discovered on the previous step. One of the possible profiles will be
chosen and assigned to "profile" capability. The assignment stops as
soon as we have enough nodes of a flavor as requested by a user.


And this assignment would follow the same rules as the existing AHC
version does?  So if I had a rules file that specified 3 controllers, 3
cephs, and an unlimited number of computes, it would first find and
assign 3 controllers, then 3 cephs, and finally assign all the other
matching nodes to compute.


There's no longer a spec file, though we could create something like 
that. The spec file had 2 problems:

1. it was used to maintain state in the local file system
2. it was completely out of sync with what was later passed to the 
deploy command. So you could, for example, request 1 controller and the 
rest to be computes in the spec file, and then request a deploy with 2 
controllers, which was doomed to fail.




I guess there's still a danger if ceph nodes also match the controller
profile definition but not the other way around, because a ceph node
might get chosen as a controller and then there won't be enough matching
ceph nodes when we get to that.  IIRC (it's been a while since I've done
automatic profile matching) that's how it would work today so it's an
existing problem, but it would be nice if we could fix that as part of
this work.  I'm not sure how complex the resolution code for such
conflicts would need to be.


My current patch does not deal with it. The spec file only had ordering, so 
you could process 'ceph' before 'controller'. We can do the same by 
accepting something like --profile-ordering=ceph,controller,compute. WDYT?


I can't think of something smarter for now, any ideas are welcome.
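To make the --profile-ordering idea above a bit more concrete, a purely
hypothetical sketch (nothing here exists yet; the flag name, value format
and data structures are all made up):

    def assign_in_order(nodes, flavor_counts,
                        ordering='ceph,controller,compute'):
        # 'ordering' stands in for the hypothetical --profile-ordering value;
        # scarce profiles listed first grab their nodes before broader ones.
        # Nodes that already have a 'profile' capability are skipped.
        for profile in [p.strip() for p in ordering.split(',') if p.strip()]:
            wanted = flavor_counts.get(profile, 0)
            for node in nodes:
                if wanted <= 0:
                    break
                caps = node['capabilities']
                if 'profile' not in caps and \
                        caps.get(profile + '_profile') == '1':
                    caps['profile'] = profile
                    wanted -= 1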





(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check if we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.


By the way, this is already implemented. I was not aware of it while 
writing my first email.




Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules


Re: [openstack-dev] [nova] [ironic] Hardware composition

2015-12-02 Thread Dmitry Tantsur

On 12/01/2015 02:44 PM, Vladyslav Drok wrote:

Hi list!

There is an idea of making use of hardware composition (e.g.
http://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-architecture/intel-rack-scale-architecture-resources.html)
to create nodes for ironic.

The current proposal is:

1. To create a hardware-compositor service under the ironic umbrella to manage
this composition process. Its initial implementation will support Intel
RSA; other technologies may be added in the future. At the beginning, it
will contain the most basic CRUD logic for composed systems.


My concern with this idea is that it would have to have its own drivers, 
maybe overlapping with ironic drivers. I'm not sure what prevents you 
from bringing it into ironic (e.g. in the case of ironic-inspector it was 
mostly problems with HA; I don't see anything that bad in your proposal).




2. Add logic to nova to compose a node using this new project and
register it in ironic if the scheduler is not able to find any ironic
node matching the flavor. An alternative (as pointed out by Devananda
during yesterday's meeting) could be using it in ironic by claims API
when it's implemented (https://review.openstack.org/204641).

3. If implemented in nova, there will be no changes to ironic right now
(apart from needing the driver to manage these composed nodes, which is
redfish I believe), but there are cases when it may be useful to call
this service from ironic directly, e.g. to free the resources when a
node is deleted.


That's why I suggest just implementing it in ironic.

As a side note, some people (myself included) would really appreciate 
notifications on node deletion, and I think it's being worked on right now.




Thoughts?

Thanks,
Vlad


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][inspector] CMDB integration

2015-12-02 Thread Dmitry Tantsur

On 11/30/2015 03:07 PM, Pavlo Shchelokovskyy wrote:

Hi all,

we are looking at how ironic-inspector could integrate with external
CMDB solutions and be able to fetch a minimal set of data needed for
discovery (e.g. IPMI credentials and IPs) from a CMDB. This could probably
be achieved with the data filters framework that is already in place, but we
have one question:

what are people actually using? There are simple (but not likely to be
used in real life) choices for a first implementation, like fetching
a CSV file from an HTTP link. Thus we want to learn whether there is an
already known and working solution that operators are actually using,
either open source or at least with an open API.


What tripleo currently does is create a JSON file with credentials in 
advance, then enroll nodes from it. There's no CMDB there, but the same 
flow might be preferred in this case as well.
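For reference, a rough illustration of that flow; the file layout below
mimics tripleo's instackenv.json style, but treat the exact keys and the
enrollment step as assumptions rather than a reference:

    import json

    with open('instackenv.json') as f:
        env = json.load(f)

    for node in env['nodes']:
        driver_info = {
            'ipmi_address': node['pm_addr'],
            'ipmi_username': node['pm_user'],
            'ipmi_password': node['pm_password'],
        }
        # Actual enrollment would go through python-ironicclient
        # (node create plus a port create per MAC); omitted here.
        print('would enroll %s (driver_info: %s, MACs: %s)'
              % (node['pm_addr'], driver_info, node['mac']))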




We would really appreciate it if you chime in :) This would help us design
this feature in a way that will benefit the community the most.

Best regards,
--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-12-03 Thread Dmitry Tantsur

FYI: the process is in effect now.

Please submit specs to https://github.com/openstack/ironic-inspector-specs/
Approved specs will appear on 
http://specs.openstack.org/openstack/ironic-inspector-specs/


On 11/19/2015 02:19 PM, Dmitry Tantsur wrote:

Hi folks!

I've been dodging the subject for some time (mostly due to my laziness), but
now it seems like the time has come. We're discussing 2 big features,
autodiscovery and HA, that I would like us to have a proper consensus on.

I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own
template
3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the rest of ironic would
increase the visibility of large inspector changes (i.e. those deserving a
spec). We would probably use an [inspector] tag in the commit summary, so
that people explicitly NOT wanting to review them can quickly ignore them.

Also note that I still see #1 (use only blueprints) as a way to go for
simple features.

WDYT?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-12-04 Thread Dmitry Tantsur

On 12/03/2015 06:13 PM, Pavlo Shchelokovskyy wrote:

Hi Dmitry,

should we also configure Launchpad to have blueprints references there
(for release/milestone targeting etc)? Or is it not needed?


Not sure what you mean; we do have Launchpad configured for blueprints. 
We have used it and will continue to use it for tracking features. Now some 
more complex blueprints will need a spec in addition. Does that answer your 
question?




Cheers,

On Thu, Dec 3, 2015 at 4:00 PM Dmitry Tantsur <dtant...@redhat.com> wrote:

FYI: the process is in effect now.

Please submit specs to
https://github.com/openstack/ironic-inspector-specs/
Approved specs will appear on
http://specs.openstack.org/openstack/ironic-inspector-specs/

--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-04 Thread Dmitry Tantsur

On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:

Hey all,

Over the past few months, there's been a lot of discussion and work around
creating a new REST API-supported TripleO deployment workflow.  However most
of that discussion has been fragmented within spec reviews and weekly IRC
meetings, so I thought it might make sense to provide a high-level overview
of what's been going on.  Hopefully it'll provide some useful perspective for
those that are curious!

Thanks,
Tzu-Mainn Chen

--
1. Explanation for Deployment Workflow Change

TripleO uses Heat to deploy clouds.  Heat allows tremendous flexibility at the
cost of enormous complexity.  Fortunately TripleO has the space to allow
developers to create tools to simplify the process tremendously,  resulting in
a deployment process that is both simple and flexible to user needs.

The current CLI-based TripleO workflow asks the deployer to modify a base set
of Heat environment files directly before calling Heat's stack-create command.
This requires much knowledge and precision, and is a process prone to error.

However this process can be eased by understanding that there is a pattern to
these modifications; for example, if a deployer wishes to enable network
isolation, a specific set of modifications must be made.  These  modification
sets can be encapsulated through pre-created Heat environment files, and TripleO
contains a library of these
(https://github.com/openstack/tripleo-heat-templates/tree/master/environments).

These environments are further categorized through the proposed environment
capabilities map (https://review.openstack.org/#/c/242439).  This mapping file
contains programmatic metadata, adding items such as user-friendly text around
environment files and marking certain environments as mutually exclusive.


2. Summary of Updated Deployment Workflow

Here's a summary of the updated TripleO deployment workflow.

 1. Create a Plan: Upload a base set of heat templates and environment files
into a Swift container.  This Swift container will be versioned to allow
for future work with respect to updates and upgrades.

 2. Environment Selection: Select the appropriate environment files for your
deployment.

 3. Modify Parameters: Modify additional deployment parameters.  These
parameters are influenced by the environment selection in step 2.

 4. Deploy: Send the contents of the plan's Swift container to Heat for
deployment.

Note that the current CLI-based workflow still fits here: a deployer can modify
Heat files directly prior to step 1, follow step 1, and then skip directly to
step 4.  This also allows for trial deployments with test configurations.
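To sketch steps 1 and 4 in code (a rough illustration only, ignoring
authentication and the environment/parameter handling in between; the
object names, container layout and simplified client calls are assumptions):

    from heatclient.client import Client as HeatClient

    def create_plan(swift_conn, name, files):
        # Step 1: upload the base templates/environments into a Swift
        # container. swift_conn is a swiftclient.client.Connection.
        swift_conn.put_container(name)
        for path, contents in files.items():
            swift_conn.put_object(name, path, contents)

    def deploy_plan(swift_conn, heat_endpoint, token, name):
        # Step 4: hand the plan's contents over to Heat.
        _, template = swift_conn.get_object(name, 'overcloud.yaml')
        _, environment = swift_conn.get_object(name, 'overcloud-env.yaml')
        heat = HeatClient('1', endpoint=heat_endpoint, token=token)
        heat.stacks.create(stack_name=name, template=template,
                           environment=environment)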


3. TripleO Python Library, REST API, and GUI

Right now, much of the existing TripleO deployment logic lives within the 
TripleO
CLI code, making it inaccessible to non-Python based UIs.   Putting both old and
new deployment logic into tripleo-common and then creating a REST API on top of
that logic will enable modern Javascript-based GUIs to create cloud deployments
using TripleO.


4. Future Work - Validations

A possible next step is to add validations to the TripleO toolkit: scripts that
can be used to check the validity of your deployment pre-, in-, and  
post-flight.
These validations will be runnable and queryable through a  REST API.  Note that
the above deployment workflow should not be a requirement for validations to be
run.


5. In-Progress Development

The initial spec for the tripleo-common library has already been approved, and
various people have been pushing work forward.  Here's a summary:

* Move shared business logic out of CLI
   * https://review.openstack.org/249134 - simple validations (WIP)


When is this going to be finished? It's going to give me a huge merge 
conflict in https://review.openstack.org/#/c/250405/ (and make it 
impossible to backport to liberty, btw).



   * https://review.openstack.org/228991 - deployment code (ready for review)

* Additional TripleO business logic
   * rename tripleo-common repo to tripleo
 * https://review.openstack.org/#/c/249521/ (ready for review)
 * https://review.openstack.org/#/c/249524/ (ready for review)
 * https://review.openstack.org/#/c/247834/ (ready for review)
   * https://review.openstack.org/#/c/242439 - capabilities map (ready for 
review)
   * https://review.openstack.org/#/c/227297/ - base tripleo library code 
(ready for review)
   * https://review.openstack.org/#/c/232534/ - utility functions to manage 
environments (ready for review)
   * after the above is merged, plan.py will need to be updated to include 
environment methods

* TripleO API
   * https://review.openstack.org/#/c/230432/ - API spec (ready for review)
   * https://review.openstack.org/#/c/243737/  - API (WIP)
   * after the library code is fully merged, API will need to be updated to 
allow access
 to a plan

[openstack-dev] [all] [tc] [ironic] Picking an official name for a subproject (ironic-inspector in this case)

2015-12-04 Thread Dmitry Tantsur

Hi everyone!

I'd like to get guidance on how to pick an official name (e.g. appearing 
in the keystone catalog or used in API versioning headers) for a subproject 
of an official project.


Specifically, I'm talking about ironic-inspector, which is an auxiliary 
service under the bare metal program. My first assumption is to prefix 
with ironic's official name, so it should be something like 
'baremetal-XXX' or 'baremetal XXX'. Is that correct? Which separator is 
preferred?


The next step is choosing the XXX part. The process we implement in 
ironic-inspector is usually referred to as "baremetal introspection" or 
"baremetal inspection". The former is used for our OSC plugin, so I 
think our official name should be one of
1. "baremetalintrospection" - named after the process we 
implement
2. "baremetalinspector" - using our code name after the 
official ironic project name.


WDYT? Any suggestions are welcome.

Dmitry

P.S.
This topic was raised by https://review.openstack.org/#/c/253493/ but 
also appeared in the microversioning discussion.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [ironic] Picking an official name for a subproject (ironic-inspector in this case)

2015-12-04 Thread Dmitry Tantsur

On 12/04/2015 04:30 PM, Thierry Carrez wrote:

Julien Danjou wrote:

On Fri, Dec 04 2015, Dmitry Tantsur wrote:


Specifically, I'm talking about ironic-inspector, which is a auxiliary service
under the bare metal program. My first assumption is to prefix with ironic's
official name, so it should be something like 'baremetal-XXX' or 'baremetal
XXX'. Is it correct? Which separator is preferred?


FWIW we have 3 different projects under the telemetry umbrella and they
do not share any prefix.


My take is to rename ironic-inspector to clouseau, the ironic inspector
from the Pink Panther series.


You should have raised it back at the beginning of liberty, when we did 
the discoverd->inspector renaming :D






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

