Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-22 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we're doing is essentially a kind
of capability discovery. We'll help the nova scheduler do the right
thing by assigning capabilities like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by them.

Special care, however, is needed when some nodes match 2 or more
profiles. E.g. if all 4 nodes match "compute" and only 1 also matches
"controller", nova can select that one node for the "compute" flavor,
and then complain that it does not have enough hosts for "controller".

We also want to conduct a sanity check before even calling heat/nova,
to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we landed a whole bunch of APIs in inspector
that allow us to define rules over introspection data. The plan is to
have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via the inspector API using a
JSON-based DSL [1].
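For illustration, rule 1 could be expressed roughly as the following JSON document before being posted to the inspector rules API. Treat this as a sketch: the condition/action field names (and the "set-capability" action in particular) are my reading of the DSL in [1], not a verified payload.

```python
import json

# Sketch of rule 1: nodes reporting >= 8192 MiB of RAM get the
# "compute_profile=1" capability. Field and action names are
# assumptions based on the introspection-rules DSL [1].
rule = {
    "description": "Mark nodes with enough RAM for the compute profile",
    "conditions": [
        {"op": "ge", "field": "memory_mb", "value": 8192},
    ],
    "actions": [
        {"action": "set-capability", "name": "compute_profile", "value": "1"},
    ],
}

print(json.dumps(rule, indent=2))
```

Rule 2 would be analogous, with a condition on local_gb and a "controller_profile" capability.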

As you can see, one node can receive 0, 1 or many such capabilities. So
we need a next step to make the final decision, based on how many nodes
we need of each profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument, --assign-profiles, will be added. If it's provided,
tripleoclient will fetch all ironic nodes and try to ensure that we
have enough nodes with each profile.

Nodes with an existing "profile:xxx" capability are left as they are.
For nodes without a profile it will look at the "xxx_profile"
capabilities discovered in the previous step. One of the possible
profiles will be chosen and assigned to the "profile" capability. The
assignment stops as soon as we have enough nodes of a flavor, as
requested by the user.
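In pseudocode, the assignment pass described above looks roughly like this (a simplified sketch with illustrative names, not the actual tripleoclient code):

```python
def assign_profiles(nodes, requested):
    """Sketch of the proposed assignment pass.

    nodes:     list of dicts, each with a 'capabilities' dict, e.g.
               {'capabilities': {'compute_profile': '1'}}
    requested: mapping of profile name to the node count the flavors
               ask for, e.g. {'compute': 3, 'control': 1}
    Returns the profiles that could not be fully satisfied.
    """
    remaining = dict(requested)

    # Nodes that already carry an explicit "profile" capability are
    # left alone, but they still count towards the requested totals.
    for node in nodes:
        profile = node['capabilities'].get('profile')
        if profile in remaining:
            remaining[profile] -= 1

    for node in nodes:
        caps = node['capabilities']
        if 'profile' in caps:
            continue
        # Promote one of the discovered "<name>_profile" capabilities
        # to the final "profile" capability, if it is still needed.
        for profile, count in remaining.items():
            if count > 0 and caps.get('%s_profile' % profile) == '1':
                caps['profile'] = profile
                remaining[profile] -= 1
                break

    # Anything still positive means we ran out of matching nodes.
    return {p: c for p, c in remaining.items() if c > 0}
```

Note this greedy version is order-sensitive, which is exactly the compute/controller pitfall described earlier; a real implementation would want to assign the scarcest profiles first.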


New update: after talking to Imre on IRC I realized that there's big 
value in decoupling profile validation/assignment from the deploy 
command. One of the use cases is the future RAID work: we'll need to 
configure RAID based on the profile.


I'm introducing 2 new commands[*]:

1. overcloud profiles assign --XXX-flavor=YYY --XXX-scale=NN ..

   accepts the same arguments as deploy (XXX = compute, control, etc) 
and tries to both validate and assign profiles. In the future we might 
add things like --XXX-raid-configuration to set the RAID config for 
matched nodes, or even something generic like --XXX-set-property KEY=VALUE.


2. overcloud profiles list

  shows nodes and their profiles (including possible ones)

The deployment command will only do validation, following the same 
logic as the 'profiles assign' command.


The patch is not finished yet, but early reviews are welcome: 
https://review.openstack.org/#/c/250405/


[*] note that we more or less agreed on avoiding other projects' OSC 
namespaces, hence the 'overcloud' prefix instead of 'baremetal'




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at their "profile" capabilities. If
they are set for any flavor, it will check whether we have enough ironic
nodes with the corresponding "profile:xxx" capability. This check will
happen after profile assignment, if --assign-profiles is used.
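The validation step itself is cheap to sketch (illustrative code, not the actual implementation; the real version would read the "capabilities:profile" extra spec from each nova flavor):

```python
from collections import Counter

def validate_profiles(node_capabilities, requested):
    """Sketch of the pre-deploy sanity check.

    node_capabilities: list of capability dicts, one per ironic node,
                       e.g. {'profile': 'compute'}
    requested:         mapping of profile name (from the flavor's
                       "capabilities:profile" extra spec) to node count
    Returns a list of human-readable errors; empty means we're fine.
    """
    available = Counter(caps.get('profile') for caps in node_capabilities)
    errors = []
    for profile, wanted in sorted(requested.items()):
        have = available.get(profile, 0)
        if have < wanted:
            errors.append('Profile "%s": flavors request %d node(s), '
                          'but only %d ironic node(s) match'
                          % (profile, wanted, have))
    return errors
```

Reporting all shortfalls at once, before heat/nova is ever called, is what replaces the cryptic "no valid host found" error.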

Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-18 Thread Dmitry Tantsur

On 12/16/2015 10:22 PM, Zaro wrote:

On Tue, Dec 1, 2015 at 6:38 PM, Spencer Krum  wrote:

Hi All,

The infra team will be taking gerrit offline for an upgrade on December
16th. We
will start the operation at 17:00 UTC and will continue until about
21:00 UTC.

This outage is to upgrade Gerrit to version 2.11. The IP address of
Gerrit will not be changing.

There is a thread beginning here:
http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
which covers what to expect from the new software.

If you have questions about the Gerrit outage you are welcome to post a
reply to this thread or find the infra team in the #openstack-infra irc
channel on freenode. If you have questions about the version of Gerrit
we are upgrading to please post a reply to the email thread linked
above, or again you are welcome to ask in the #openstack-infra channel.




Thanks to everyone for their patience while we upgraded to Gerrit
2.11.  I'm happy to announce that we were able to successfully
complete this task at around 21:00 UTC.  You may hack away once more.

If you encounter any problems, please let us know here or in
#openstack-infra on Freenode.


Thanks! One small feature request: is it possible to highlight V-1 from 
Jenkins in red, just like a review -1? It used to be handy for quickly 
checking the status of a patch.




Enjoy,
-Khai







Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-15 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we're doing is essentially a kind
of capability discovery. We'll help the nova scheduler do the right
thing by assigning capabilities like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by them.

Special care, however, is needed when some nodes match 2 or more
profiles. E.g. if all 4 nodes match "compute" and only 1 also matches
"controller", nova can select that one node for the "compute" flavor,
and then complain that it does not have enough hosts for "controller".

We also want to conduct a sanity check before even calling heat/nova,
to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we landed a whole bunch of APIs in inspector
that allow us to define rules over introspection data. The plan is to
have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].

As you can see, one node can receive 0, 1 or many such capabilities. So
we need a next step to make the final decision, based on how many nodes
we need of each profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument, --assign-profiles, will be added. If it's provided,
tripleoclient will fetch all ironic nodes and try to ensure that we
have enough nodes with each profile.

Nodes with an existing "profile:xxx" capability are left as they are.
For nodes without a profile it will look at the "xxx_profile"
capabilities discovered in the previous step. One of the possible
profiles will be chosen and assigned to the "profile" capability. The
assignment stops as soon as we have enough nodes of a flavor, as
requested by the user.


Documentation update with the workflow I have in mind: 
http://docs-draft.openstack.org/67/257867/2/check/gate-tripleo-docs-docs/c938244//doc/build/html/advanced_deployment/profile_matching.html




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at their "profile" capabilities. If
they are set for any flavor, it will check whether we have enough ironic
nodes with the corresponding "profile:xxx" capability. This check will
happen after profile assignment, if --assign-profiles is used.

Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules






Re: [openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Dmitry Tantsur

On 12/14/2015 03:42 PM, Jim Rollenhagen wrote:

Hi all,

In the big tent, project teams are expected to maintain their own
install guides within their projects' source tree. There's a
conversation going on over in the docs list[1] about changing this, but
in the meantime...

Ironic (and presumably other projects) publish versioned documentation,
which includes the install guide. For example, our kilo install guide is
here[2]. However, there's no way to update those, as stable branch
policy[3] only allows for important bug fixes to be backported. For
example, this patch[4] was blocked for this reason (among others).

So, I'd like to propose that in the new world, where projects maintain
their own deployer/operator docs, we allow documentation backports
(or even changes that are not part of a backport, for changes that only
make sense on the stable branch and not master). They're extremely low
risk and can be very useful for operators. The alternative is making
sure people are always reading the most up-to-date docs and, in places
that have changed, writing "in kilo [...], in liberty [...]", etc, which
is a bit of a maintenance burden.


+1. I would prefer us to land important documentation patches on stable 
branches.




What do folks think? I'm happy to write up a patch for the project team
guide if there's support for this.

// jim

[1] 
http://lists.openstack.org/pipermail/openstack-docs/2015-December/008051.html
[2] http://docs.openstack.org/developer/ironic/kilo/deploy/install-guide.html
[3] http://docs.openstack.org/project-team-guide/stable-branches.html
[4] https://review.openstack.org/#/c/219603/







Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-10 Thread Dmitry Tantsur

On 12/09/2015 10:58 PM, Jim Rollenhagen wrote:

On Fri, Dec 04, 2015 at 05:38:43PM +0100, Dmitry Tantsur wrote:

Hi!

As you all probably know, we've switched to reno for managing release notes.
What this also means is that the release team has stopped managing milestones
for us. We now have to manually open/close milestones in launchpad, if we
feel like it. I'm a bit tired of doing this for inspector, so I'd prefer we
stop. If we need to track release-critical patches, we usually do it in an
etherpad anyway. We also have importance fields on bugs, which can be applied
to both important bugs and important features.

During a quick discussion on IRC, Sam mentioned that neutron has also dropped
using blueprints for tracking features. They only use bugs with an RFE tag,
plus specs. It makes a lot of sense to me to do the same, if we stop tracking
milestones.

For both ironic and ironic-inspector I'd like to get your opinion on the
following suggestions:
1. Stop tracking milestones in launchpad
2. Drop existing milestones to avoid confusion
3. Stop using blueprints and move all active blueprints to bugs with RFE
tags; request a bug URL instead of a blueprint URL in specs.

So in the end we'll have bugs for tracking user requests, specs for complex
features, and reno for tracking what went into a particular release.

Important note: if you vote for keeping things for ironic-inspector, I may
ask you to volunteer to help with them ;)


We decided we're going to try this in Monday's meeting, following
roughly the same process as Neutron:
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

Note that as the goal here is to stop managing blueprints and milestones
in launchpad, a couple of things will differ from the neutron process:

1) A matching blueprint will not be created; the tracking will only be
done in the bug.

2) A milestone will not be immediately chosen for the feature
enhancement, as we won't track milestones on launchpad.

Now, some requests for volunteers. We need:

1) Someone to document this process in our developer docs.

2) Someone to update the spec template to request a bug link, instead of
a blueprint link.

3) Someone to help move existing blueprints into RFEs.

4) Someone to point specs for incomplete work at the new RFE bugs,
instead of the existing blueprints.

I can help with some or all of these, but hope to not do all the work
myself. :)


I'll help you with as many of these as my time allows. Documentation is my 
weak point, so I'll start with #2.




Thanks for proposing this, Dmitry!

// jim







Re: [openstack-dev] [all] using reno for release note management

2015-12-09 Thread Dmitry Tantsur

On 12/08/2015 08:33 PM, Emilien Macchi wrote:

This morning the Puppet OpenStack team had its weekly meeting [1], where
we discussed using reno [2] for release note management.

We saw three possibilities:

1/ we use reno and require each contribution (bugfix, feature, etc) to
also edit a release note YAML file
2/ we use reno and the release note YAML file can be updated later (by
the contributor or someone else)


Note that this approach will somewhat complicate backporting to stable 
branches (if that applies to you, ofc), as you'll need the release notes 
there as well.



3/ we don't use reno and continue to manually write release notes

Solution 3/ is definitely not in our scope; we are willing to use reno,
but we are looking for a way to switch to using it.

Some people in our group shared some feedback, and ideas. Feel free to
comment inline:

* Having one YAML file per feature/bugfix/... sounds heavy.
* We have 23 repositories (modules) - we will probably start using reno
for one or two modules, and see how it works.
* We will apply 2/ with the goal of going to 1/ one day. We think directly
doing 1/ risks frustrating our group, since not everyone is familiar with
releases. Giving a -1 to a good patch just because it does not include a
release note update is not something we want now. We need to educate
people on using it; that's why I think we might go with 2/.
* Using reno will spread release note management across our group,
instead of one release manager taking care of it.
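For reference, a reno note is just a small YAML file dropped under releasenotes/notes/ in the repository; a minimal sketch (the section names come from reno's conventions, the text is purely illustrative):

```yaml
---
features:
  - Added support for managing the example resource.
fixes:
  - Fixed idempotency of the example manifest.
```

Each contribution (or a later follow-up, under option 2/) adds one such file, and reno assembles them into the release notes at release time.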

Feel free to give more feedback or comment inline; we are really open
to suggestions.

Thanks!

[1]
https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack#Previous_meetings
[2] http://docs.openstack.org/developer/reno/









Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-08 Thread Dmitry Tantsur

On 12/08/2015 11:32 AM, Pavlo Shchelokovskyy wrote:

Hi Dmitry,

Not true: the gate-ironic-specs-python27 job requires that an LP blueprint
is provided in a spec [0].

If we are to move away from LP blueprints and at least not register new
ones, this must be fixed. I'll test whether the job accepts a link to an
LP RFE bug instead of an LP blueprint, though.


Gotcha, sorry for the confusion. Yes, if we completely drop blueprints, 
we'll have to change it to accept bugs instead.




[0]
http://logs.openstack.org/21/254421/1/check/gate-ironic-specs-python27/ab07a3f/console.html#_2015-12-07_22_52_07_669

On Tue, Dec 8, 2015 at 11:31 AM Dmitry Tantsur <dtant...@redhat.com> wrote:

On 12/08/2015 08:12 AM, Pavlo Shchelokovskyy wrote:
 > Hi all,
 >
 > I have a question regarding #1 (Stop using LP for blueprints):
 >
 > what should we now use instead of "specifies" and "implements" Gerrit
 > tags in commit messages? Simple Depends-On:  should
 > suffice but is not visually specific enough, and only replaces
 > "implements" tag.

Closes-Bug for RFE bug, I guess. As a bonus point we'll distinguish
between Partial-Bug and Closes-Bug.

 >
 > Also as a side note, some gate jobs for spec repos must be
modified to
 > accommodate for the new process - they are still requiring a LP
 > blueprint reference to be specified in the body of a spec
 > (e.g. gate-ironic-specs-python27).

No gate jobs require a blueprint reference anywhere (otherwise we would
not be able to land bug fixes :)

 >
 > Best regards,
 >
 >  > On Mon, Dec 7, 2015 at 3:52 PM Dmitry Tantsur <dtant...@redhat.com> wrote:
 >
 > On 12/07/2015 02:42 PM, Doug Hellmann wrote:
 >  > Excerpts from Dmitry Tantsur's message of 2015-12-07
13:18:22 +0100:
 >  >> On 12/07/2015 10:48 AM, Thierry Carrez wrote:
 >  >>> Dmitry Tantsur wrote:
 >  >>>>
 >  >>>> 2015-12-04 18:26 GMT+01:00 Doug Hellmann <d...@doughellmann.com>:
 >  >>>>
 >  >> 
 >  >>>>
 >  >>>>   Please don't delete anything older than Mitaka.
 >  >>>>
 >  >>>>
 >  >>>> Do you have any hints how to not confuse users in this
case?
 >  >>>
 >  >>> I think what Doug means is you should not delete
existing closed
 >  >>> milestones like:
 >  >>> https://launchpad.net/ironic/kilo/2015.1.0
 >  >>> or:
 >  >>> https://launchpad.net/ironic/liberty/4.2.0
 >  >>> since we use the Launchpad pages there as the list of
features
 > and bugs
 >  >>> fixed for those pre-reno releases.
 >  >>>
 >  >>> Deleting those milestones would lose useful user
information for no
 >  >>> gain: you can't use them anymore (since they are closed) so
 > they are
 >  >>> unlikely to confuse anyone ?
 >  >>>
 >  >>
 >  >> I wonder how to avoid giving impression that development has
 > stopped on
 >  >> 4.2.0. E.g. Launchpad would show 4.2.0 as the last released
 > tarball, as
 >  >> we no longer push tarballs to launchpad.
 >  >>
 >  >
 >  > I think the fact that we'll be announcing new releases by
pointing
 >  > to other URLs (the releases site, for example) will help
avoid that
 >  > sort of confusion. You could also add a note to the top of the
 > project
 >  > page on launchpad.
 >
 > +1
 >
 >  >
 >  > If, over time, we see a lot of folks actually confused
about the
 >  > move we can figure out a way to migrate the old data
elsewhere so
 >  > it can be deleted. But that's not going to happen this
cycle, so
 >  > please leave it intact for now.
 >
 > Understood, thanks for explanation. So I withdraw suggestion #2.
 >
 >  >
 >  > Doug
  

Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-08 Thread Dmitry Tantsur

On 12/08/2015 08:12 AM, Pavlo Shchelokovskyy wrote:

Hi all,

I have a question regarding #1 (Stop using LP for blueprints):

what should we now use instead of "specifies" and "implements" Gerrit
tags in commit messages? Simple Depends-On:  should
suffice but is not visually specific enough, and only replaces
"implements" tag.


Closes-Bug for the RFE bug, I guess. As a bonus, we'll distinguish 
between Partial-Bug and Closes-Bug.
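For example, a commit implementing part of an RFE could end with trailers like these (the bug number is purely illustrative):

```
Add feature X to the foo driver

This implements part of the RFE tracked in the bug below.

Partial-Bug: #1234567
```

The final patch of the series would use Closes-Bug: instead, which also closes the RFE bug on merge.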




Also as a side note, some gate jobs for spec repos must be modified to
accommodate for the new process - they are still requiring a LP
blueprint reference to be specified in the body of a spec
(e.g. gate-ironic-specs-python27).


No gate jobs require a blueprint reference anywhere (otherwise we would 
not be able to land bug fixes :)




Best regards,

On Mon, Dec 7, 2015 at 3:52 PM Dmitry Tantsur <dtant...@redhat.com> wrote:

On 12/07/2015 02:42 PM, Doug Hellmann wrote:
 > Excerpts from Dmitry Tantsur's message of 2015-12-07 13:18:22 +0100:
 >> On 12/07/2015 10:48 AM, Thierry Carrez wrote:
 >>> Dmitry Tantsur wrote:
 >>>>
 >>>> 2015-12-04 18:26 GMT+01:00 Doug Hellmann <d...@doughellmann.com>:
 >>>>
 >> 
 >>>>
 >>>>   Please don't delete anything older than Mitaka.
 >>>>
 >>>>
 >>>> Do you have any hints how to not confuse users in this case?
 >>>
 >>> I think what Doug means is you should not delete existing closed
 >>> milestones like:
 >>> https://launchpad.net/ironic/kilo/2015.1.0
 >>> or:
 >>> https://launchpad.net/ironic/liberty/4.2.0
 >>> since we use the Launchpad pages there as the list of features
and bugs
 >>> fixed for those pre-reno releases.
 >>>
 >>> Deleting those milestones would lose useful user information for no
 >>> gain: you can't use them anymore (since they are closed) so
they are
 >>> unlikely to confuse anyone ?
 >>>
 >>
 >> I wonder how to avoid giving impression that development has
stopped on
 >> 4.2.0. E.g. Launchpad would show 4.2.0 as the last released
tarball, as
 >> we no longer push tarballs to launchpad.
 >>
 >
 > I think the fact that we'll be announcing new releases by pointing
 > to other URLs (the releases site, for example) will help avoid that
 > sort of confusion. You could also add a note to the top of the
project
 > page on launchpad.

+1

 >
 > If, over time, we see a lot of folks actually confused about the
 > move we can figure out a way to migrate the old data elsewhere so
 > it can be deleted. But that's not going to happen this cycle, so
 > please leave it intact for now.

Understood, thanks for explanation. So I withdraw suggestion #2.

 >
 > Doug
 >
 >



--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com








Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-07 Thread Dmitry Tantsur

On 12/07/2015 02:42 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2015-12-07 13:18:22 +0100:

On 12/07/2015 10:48 AM, Thierry Carrez wrote:

Dmitry Tantsur wrote:


2015-12-04 18:26 GMT+01:00 Doug Hellmann <d...@doughellmann.com>:





  Please don't delete anything older than Mitaka.


Do you have any hints how to not confuse users in this case?


I think what Doug means is you should not delete existing closed
milestones like:
https://launchpad.net/ironic/kilo/2015.1.0
or:
https://launchpad.net/ironic/liberty/4.2.0
since we use the Launchpad pages there as the list of features and bugs
fixed for those pre-reno releases.

Deleting those milestones would lose useful user information for no
gain: you can't use them anymore (since they are closed) so they are
unlikely to confuse anyone ?



I wonder how to avoid giving the impression that development has stopped
at 4.2.0. E.g. Launchpad would show 4.2.0 as the last released tarball,
as we no longer push tarballs to launchpad.



I think the fact that we'll be announcing new releases by pointing
to other URLs (the releases site, for example) will help avoid that
sort of confusion. You could also add a note to the top of the project
page on launchpad.


+1



If, over time, we see a lot of folks actually confused about the
move we can figure out a way to migrate the old data elsewhere so
it can be deleted. But that's not going to happen this cycle, so
please leave it intact for now.


Understood, thanks for the explanation. So I withdraw suggestion #2.



Doug







Re: [openstack-dev] [ironic]"No valid host was found" when creating node in Ironic

2015-12-07 Thread Dmitry Tantsur

On 12/07/2015 10:45 AM, Zhi Chang wrote:

Hi, all
 I installed devstack with Ironic on a physical machine, and I want to
deploy another physical machine whose IPMI info is "username: root,
password: 12345, ip_address: 192.168.0.100". I used the command "ironic
node-create -c d5f2dee1-f5bc-409e-a9be-a3ae5c392cfa -d ipmi_tool -p
ipmi_username=root -p ipmi_address=192.168.0.100 -p ipmi_password=12345
-n testing" to create a node in Ironic. There is an error when I run the
command: "No valid host was found. Reason: No conductor service
registered which supports driver ipmi_tool. (HTTP 400)". What should I
do to resolve this problem? Or, what should I do if I want to deploy a
physical machine using Ironic?


Hello,

Please check /etc/ironic/ironic.conf. Probably your enabled_drivers 
configuration does not contain pxe_ipmitool. Please add it there and 
restart the conductor.
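For reference, the option lives in the [DEFAULT] section of ironic.conf and takes a comma-separated list (pxe_ipmitool being the classic PXE + ipmitool driver), roughly:

```ini
# /etc/ironic/ironic.conf
[DEFAULT]
enabled_drivers = pxe_ipmitool
```

After editing the file, restart the ironic-conductor service so it registers support for the driver.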





Thx
Zhi Chang








Re: [openstack-dev] [TripleO] workflow

2015-12-07 Thread Dmitry Tantsur

On 12/06/2015 05:33 PM, John Trowbridge wrote:



On 12/03/2015 03:47 PM, Dan Prince wrote:

On Tue, 2015-11-24 at 15:25 +, Dougal Matthews wrote:

On 23 November 2015 at 14:37, Dan Prince  wrote:

There are lots of references to "workflow" within TripleO conversations
these days. We are at (or near) the limit of what we can do within Heat
with regards to upgrades. We've got a new TripleO API in the works (a
new version of Tuskar, basically) that is specifically meant to
encapsulate the business logic workflow around deployment. And lots of
interest in using Ansible for this and that.

So... Last week I spent a bit of time tinkering with the Mistral
workflow service that already exists in OpenStack and after a few
patches got it integrated into my undercloud:

https://etherpad.openstack.org/p/tripleo-undercloud-workflow

One could imagine us coming up with a set of useful TripleO workflows
(something like this):

  tripleo.deploy 
  tripleo.update 
  tripleo.run_ad_hoc_whatever_on_specific_roles <>

Since Mistral (the OpenStack workflow service) can already interact w/
keystone and has a good many hooks to interact with core OpenStack
services like Swift, Heat, and Nova, we might get some traction very
quickly here. Perhaps we add some new Mistral Ironic actions? Or imagine
smaller, more focused Heat configuration stacks that we drive via
Mistral? Or perhaps we tie in Zaqar (which already has some integration
into os-collect-config) to run ad-hoc deployment snippets on specific
roles in an organized fashion? Or wrapping mistral w/ tripleoclient to
allow users to more easily call TripleO-specific workflows (enhancing
the user feedback like we do with our heatclient wrapping already)?

Where all this might lead... I'm not sure. But I feel like we might
benefit by adding a few extra options to our OpenStack deployment tool
chain.

I think this sounds promising. Lots of the code in the CLI is about
managing workflows. For example, when doing introspection we change
the node state, poll for the result, start introspection, poll for
the result, change the node state back and poll for the result. If
Mistral can help here, I expect it could give us a much more robust
solution.


Hows this look:

https://github.com/dprince/tripleo-mistral-workflows/blob/master/tripleo/baremetal.yaml



This is a really good starter example because the bulk inspection
command is particularly problematic. I like this a lot. One really nice
thing here is that we get a REST API for free by using Mistral.


Yeah, looks really good, except that 
https://github.com/dprince/tripleo-mistral-workflows/blob/master/tripleo/baremetal.yaml#L35-L39 
still talks about introspecting nodes in the AVAILABLE state, which must 
be killed with fire. We should use the ENROLL state when importing nodes 
instead, and require a user to explicitly move nodes out of the AVAILABLE 
state if they want them to be introspected.
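As a purely hypothetical sketch, an introspection flow that respects the state machine could look like this in Mistral's v2 DSL. The workflow name, YAQL expressions and especially the action names are assumptions (loosely modeled on Mistral's auto-generated OpenStack actions, which follow the python client method names) and would need checking against a real Mistral deployment:

```yaml
---
version: '2.0'

tripleo.baremetal.introspect_node:
  description: Move a node to manageable, introspect it, then provide it.
  input:
    - node_uuid
  tasks:
    set_manageable:
      action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=manage
      on-success: introspect
    introspect:
      action: baremetal_introspection.introspect node_id=<% $.node_uuid %>
      on-success: set_available
    set_available:
      action: ironic.node_set_provision_state node_uuid=<% $.node_uuid %> state=provide
```

The key point is that introspection happens from the manageable state, and nodes only become AVAILABLE via an explicit "provide" step at the end.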







  Dan














Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-07 Thread Dmitry Tantsur

On 12/07/2015 04:33 AM, Tzu-Mainn Chen wrote:



On 04/12/15 23:04, Dmitry Tantsur wrote:

On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:






5. In-Progress Development

The initial spec for the tripleo-common library has already
been approved, and
various people have been pushing work forward.  Here's a
summary:

* Move shared business logic out of CLI
* https://review.openstack.org/249134 - simple
validations (WIP)


When is this going to be finished? It's going to get me a huge
merge conflict in https://review.openstack.org/#/c/250405/ (and
make it impossible to backport to liberty btw).

This plan would be fine if Mitaka development was the only
consideration but I hope that it can be adapted a little bit to take
into account the Liberty branches, and the significant backports
that will be happening there. The rdomanager-plugin->tripleoclient
transition made backports painful, and having moved on from that it
would be ideal if we didn't create the same situation again.

What I would propose is the following:
- the tripleo_common repo is renamed to tripleo and consumed by Mitaka
- the tripleo_common repo continues to exist in Liberty
- the change to rename the package tripleo_common to tripleo occurs
on the tripleo repo in the master branch using oslo-style wildcard
imports[1], and initially no deprecation message
- this change is backported to the tripleo_common repo on the
stable/liberty branch
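For reference, the oslo-style wildcard-import shim referred to in [1] is tiny: the old package simply re-exports everything from the renamed one, so existing imports keep working. A sketch of what the compatibility module might look like (module layout assumed here, not the actual patch; it is not runnable without the renamed package):

```python
# tripleo_common/__init__.py -- compatibility shim kept on stable/liberty.
# Re-export the whole public namespace of the renamed package so that
# existing code doing `from tripleo_common import utils` keeps working,
# initially with no deprecation message.
from tripleo import *  # noqa: F401,F403
```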


Heya - apologies for the bit of churn here.  I re-visited the
repo-renaming issue in IRC, and it sounds like
the vast majority of people are actually in favor of putting the
relevant library and API code in the
tripleo-common repository, and revisiting the creation of a tripleo
repository later, once code has had a
chance to settle.  I personally like this idea, as it reduces some
disruptive activity at a time when we're trying
to add quite a bit of new code.

I double-checked with the people who originally raised objections to the
idea of putting API code into
tripleo-common.  One said that his objection had to do with package
naming, and he removed his objection
once it was realized that the package name could be independent of the
repo name.  The other clarified his
objection as simply a consistency issue that he thought was okay to
defer until after the API code settled a
bit.

So: I'm putting the idea of *not* creating a tripleo repo just quite yet
out there on the mailing list, and I'm
hopeful we can come to agreement in Tuesday's tripleo weekly IRC
meeting.  That would resolve a lot of the
concerns mentioned here, correct?


It does not seem to resolve my concern, though. I'm still wondering 
where I should continue the major profiles refactoring. If it moves to 
tripleo-common/tripleo (whatever the name), how do I backport it?





Mainn


Once this is in place, stable/liberty tripleoclient can gradually
move from the tripleo_common to the tripleo package, and parts of
the tripleoclient -> tripleo_common business logic move can also be
backported where appropriate.

I'm planning on adding some liberty backportable commands as part of
blueprint tripleo-manage-software-deployments [2] and this approach
would greatly ease the backport process, and allow the business
logic to start in the tripleo repo.

* https://review.openstack.org/228991 - deployment code
(ready for review)

* Additional TripleO business logic
* rename tripleo-common repo to tripleo
  * https://review.openstack.org/#/c/249521/ (ready for
review)
  * https://review.openstack.org/#/c/249524/ (ready for
review)
  * https://review.openstack.org/#/c/247834/ (ready for
review)

(here is my review comment on this change)

I'd like to propose that we have a period where the tripleo_common
package continues to be usable without a deprecation message.

Rather than using deprecated subclasses, can we just do oslo-style
wildcard imports [1] for this package transition?

If we did that then the test files could just be moved to tripleo,
rather than duplicating them.

What I am hoping is that this change can be backported to
stable/liberty of tripleo_common so that stable/liberty
tripleoclient can gradually transition over, and the work to move
business logic out of tripleoclient can also have targeted backports
to liberty tripleo_common/tripleoclient.


* https://review.openstack.org/#/c/242439 - capabilities
map (ready for review)
* https://review.openstack.org/#/c/22

Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-07 Thread Dmitry Tantsur

On 12/07/2015 10:48 AM, Thierry Carrez wrote:

Dmitry Tantsur wrote:


2015-12-04 18:26 GMT+01:00 Doug Hellmann <d...@doughellmann.com>:





 Please don't delete anything older than Mitaka.


Do you have any hints how to not confuse users in this case?


I think what Doug means is you should not delete existing closed
milestones like:
https://launchpad.net/ironic/kilo/2015.1.0
or:
https://launchpad.net/ironic/liberty/4.2.0
since we use the Launchpad pages there as the list of features and bugs
fixed for those pre-reno releases.

Deleting those milestones would lose useful user information for no
gain: you can't use them anymore (since they are closed) so they are
unlikely to confuse anyone?



I wonder how to avoid giving the impression that development has stopped on 
4.2.0. E.g. Launchpad would show 4.2.0 as the last released tarball, as 
we no longer push tarballs to launchpad.




Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-04 Thread Dmitry Tantsur
2015-12-04 18:26 GMT+01:00 Doug Hellmann :

> Excerpts from Dmitry Tantsur's message of 2015-12-04 17:38:43 +0100:
> > Hi!
> >
> > As you all probably know, we've switched to reno for managing release
> > notes. What it also means is that the release team has stopped managing
> > milestones for us. We have to manually open/close milestones in
> > launchpad, if we feel like. I'm a bit tired of doing it for inspector,
> > so I'd prefer we stop it. If we need to track release-critical patches,
> > we usually do it in etherpad anyway. We also have importance fields for
> > bugs, which can be applied to both important bugs and important features.
> >
> > During a quick discussion on IRC Sam mentioned that neutron also dropped
> > using blueprints for tracking features. They only use bugs with RFE tag
> > and specs. It makes a lot of sense to me to do the same, if we stop
> > tracking milestones.
> >
> > For both ironic and ironic-inspector I'd like to get your opinion on the
> > following suggestions:
> > 1. Stop tracking milestones in launchpad
> > 2. Drop existing milestones to avoid confusion
>
> Please don't delete anything older than Mitaka.
>

Do you have any hints how to not confuse users in this case?


>
> Doug
>
> > 3. Stop using blueprints and move all active blueprints to bugs with RFE
> > tags; request a bug URL instead of a blueprint URL in specs.
> >
> > So in the end we'll end up with bugs for tracking user requests, specs
> > for complex features and reno for tracking what went into a particular
> > release.
> >
> > Important note: if you vote for keeping things for ironic-inspector, I
> > may ask you to volunteer in helping with them ;)
> >
> > Dmitry.
> >
>



-- 
Dmitry Tantsur


[openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-04 Thread Dmitry Tantsur

Hi!

As you all probably know, we've switched to reno for managing release 
notes. What it also means is that the release team has stopped managing 
milestones for us. We have to manually open/close milestones in 
launchpad, if we feel like it. I'm a bit tired of doing it for inspector, 
so I'd prefer we stop it. If we need to track release-critical patches, 
we usually do it in etherpad anyway. We also have importance fields for 
bugs, which can be applied to both important bugs and important features.


During a quick discussion on IRC Sam mentioned that neutron also dropped 
using blueprints for tracking features. They only use bugs with RFE tag 
and specs. It makes a lot of sense to me to do the same, if we stop 
tracking milestones.


For both ironic and ironic-inspector I'd like to get your opinion on the 
following suggestions:

1. Stop tracking milestones in launchpad
2. Drop existing milestones to avoid confusion
3. Stop using blueprints and move all active blueprints to bugs with RFE 
tags; request a bug URL instead of a blueprint URL in specs.


So in the end we'll end up with bugs for tracking user requests, specs 
for complex features, and reno for tracking what went into a particular 
release.


Important note: if you vote for keeping things for ironic-inspector, I 
may ask you to volunteer in helping with them ;)


Dmitry.



Re: [openstack-dev] [all] [tc] [ironic] Picking an official name for a subproject (ironic-inspector in this case)

2015-12-04 Thread Dmitry Tantsur

On 12/04/2015 03:33 PM, Julien Danjou wrote:

On Fri, Dec 04 2015, Dmitry Tantsur wrote:


Specifically, I'm talking about ironic-inspector, which is an auxiliary service
under the bare metal program. My first assumption is to prefix with ironic's
official name, so it should be something like 'baremetal-XXX' or 'baremetal
XXX'. Is it correct? Which separator is preferred?


FWIW we have 3 different projects under the telemetry umbrella and they
do not share any prefix.


Do you register all 3 in keystone? What do you use as service types?




Next step is choosing the XXX part. The process we implement in
ironic-inspector is usually referred to as "baremetal introspection" or
"baremetal inspection". The former is used for our OSC plugin, so I think our
official name should be one of
1. "baremetalintrospection" - named after the process we implement
2. "baremetalinspector" - using our code name after the official
ironic project name.


I think 1. makes more sense.


That's what I'll use if nobody objects. I'm only undecided about the 
separator.




My 2c,

Cheers,






Re: [openstack-dev] [all] [tc] [ironic] Picking an official name for a subproject (ironic-inspector in this case)

2015-12-04 Thread Dmitry Tantsur

On 12/04/2015 04:30 PM, Thierry Carrez wrote:

Julien Danjou wrote:

On Fri, Dec 04 2015, Dmitry Tantsur wrote:


Specifically, I'm talking about ironic-inspector, which is an auxiliary service
under the bare metal program. My first assumption is to prefix with ironic's
official name, so it should be something like 'baremetal-XXX' or 'baremetal
XXX'. Is it correct? Which separator is preferred?


FWIW we have 3 different projects under the telemetry umbrella and they
do not share any prefix.


My take is to rename ironic-inspector to clouseau, the ironic inspector
from the Pink Panther series.


You should have raised it back in the beginning of liberty, when we did 
the discoverd->inspector renaming :D








[openstack-dev] [all] [tc] [ironic] Picking an official name for a subproject (ironic-inspector in this case)

2015-12-04 Thread Dmitry Tantsur

Hi everyone!

I'd like to get guidance on how to pick an official name (e.g. appearing 
in keystone catalog or used in API versioning headers) for a subproject 
of an official project.


Specifically, I'm talking about ironic-inspector, which is an auxiliary 
service under the bare metal program. My first assumption is to prefix 
with ironic's official name, so it should be something like 
'baremetal-XXX' or 'baremetal XXX'. Is it correct? Which separator is 
preferred?


Next step is choosing the XXX part. The process we implement in 
ironic-inspector is usually referred to as "baremetal introspection" or 
"baremetal inspection". The former is used for our OSC plugin, so I 
think our official name should be one of
1. "baremetalintrospection" - named after the process we 
implement
2. "baremetalinspector" - using our code name after the 
official ironic project name.


WDYT? Any suggestions are welcome.

Dmitry

P.S.
This topic was raised by https://review.openstack.org/#/c/253493/ but 
also appeared in the microversioning discussion.




Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-04 Thread Dmitry Tantsur

On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:

Hey all,

Over the past few months, there's been a lot of discussion and work around
creating a new REST API-supported TripleO deployment workflow.  However most
of that discussion has been fragmented within spec reviews and weekly IRC
meetings, so I thought it might make sense to provide a high-level overview
of what's been going on.  Hopefully it'll provide some useful perspective for
those that are curious!

Thanks,
Tzu-Mainn Chen

--
1. Explanation for Deployment Workflow Change

TripleO uses Heat to deploy clouds.  Heat allows tremendous flexibility at the
cost of enormous complexity.  Fortunately TripleO has the space to allow
developers to create tools to simplify the process tremendously,  resulting in
a deployment process that is both simple and flexible to user needs.

The current CLI-based TripleO workflow asks the deployer to modify a base set
of Heat environment files directly before calling Heat's stack-create command.
This requires much knowledge and precision, and is a process prone to error.

However this process can be eased by understanding that there is a pattern to
these modifications; for example, if a deployer wishes to enable network
isolation, a specific set of modifications must be made.  These  modification
sets can be encapsulated through pre-created Heat environment files, and TripleO
contains a library of these
(https://github.com/openstack/tripleo-heat-templates/tree/master/environments).

These environments are further categorized through the proposed environment
capabilities map (https://review.openstack.org/#/c/242439).  This mapping file
contains programmatic metadata, adding items such as user-friendly text around
environment files and marking certain environments as mutually exclusive.
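For illustration, one entry in such a mapping might carry metadata along these lines. The field names below are hypothetical, sketched from the description above, not the exact schema proposed in the review:

```python
# Hypothetical shape of a capabilities-map entry: it attaches
# user-friendly metadata to an environment file and groups mutually
# exclusive choices together. Field names are illustrative only.
capabilities_map = {
    "topics": [{
        "title": "Networking",
        "environment_groups": [{
            "title": "Network Isolation",
            "mutually_exclusive": True,
            "environments": [{
                "file": "environments/network-isolation.yaml",
                "title": "Enable network isolation",
                "description": "Use separate networks for provisioning, "
                               "API traffic, and storage.",
            }],
        }],
    }],
}
```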


2. Summary of Updated Deployment Workflow

Here's a summary of the updated TripleO deployment workflow.

 1. Create a Plan: Upload a base set of heat templates and environment files
into a Swift container.  This Swift container will be versioned to allow
for future work with respect to updates and upgrades.

 2. Environment Selection: Select the appropriate environment files for your
deployment.

 3. Modify Parameters: Modify additional deployment parameters.  These
parameters are influenced by the environment selection in step 2.

 4. Deploy: Send the contents of the plan's Swift container to Heat for
deployment.

Note that the current CLI-based workflow still fits here: a deployer can modify
Heat files directly prior to step 1, follow step 1, and then skip directly to
step 4.  This also allows for trial deployments with test configurations.
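The four steps can be sketched as a thin orchestration function. Here the Swift and Heat interactions are injected as callables (`upload` and `deploy`) because the real client code lives in tripleo-common; this is a shape sketch under those assumptions, not the actual library API:

```python
def deploy_plan(plan_name, templates, environments, parameters,
                upload, deploy):
    """Drive the four-step workflow described above.

    templates: mapping of relative path -> file contents.
    upload(container, path, body) and deploy(stack_name, container)
    are stand-ins for the Swift and Heat client calls."""
    container = "plan-%s" % plan_name
    # Step 1: create a plan by uploading the base templates to Swift.
    for path, body in templates.items():
        upload(container, path, body)
    # Steps 2 and 3: record the selected environments and edited
    # parameters alongside the templates.
    upload(container, "plan-environment.json",
           {"environments": environments, "parameters": parameters})
    # Step 4: hand the container contents to Heat for deployment.
    return deploy(plan_name, container)
```

A deployer following the CLI path simply prepares `templates` by hand before step 1 and skips straight to step 4, as noted above.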


3. TripleO Python Library, REST API, and GUI

Right now, much of the existing TripleO deployment logic lives within the 
TripleO CLI code, making it inaccessible to non-Python based UIs. Putting both old and
new deployment logic into tripleo-common and then creating a REST API on top of
that logic will enable modern Javascript-based GUIs to create cloud deployments
using TripleO.


4. Future Work - Validations

A possible next step is to add validations to the TripleO toolkit: scripts that
can be used to check the validity of your deployment pre-, in-, and post-flight.
These validations will be runnable and queryable through a REST API.  Note that
the above deployment workflow should not be a requirement for validations to be
run.


5. In-Progress Development

The initial spec for the tripleo-common library has already been approved, and
various people have been pushing work forward.  Here's a summary:

* Move shared business logic out of CLI
   * https://review.openstack.org/249134 - simple validations (WIP)


When is this going to be finished? It's going to get me a huge merge 
conflict in https://review.openstack.org/#/c/250405/ (and make it 
impossible to backport to liberty btw).



   * https://review.openstack.org/228991 - deployment code (ready for review)

* Additional TripleO business logic
   * rename tripleo-common repo to tripleo
 * https://review.openstack.org/#/c/249521/ (ready for review)
 * https://review.openstack.org/#/c/249524/ (ready for review)
 * https://review.openstack.org/#/c/247834/ (ready for review)
   * https://review.openstack.org/#/c/242439 - capabilities map (ready for 
review)
   * https://review.openstack.org/#/c/227297/ - base tripleo library code 
(ready for review)
   * https://review.openstack.org/#/c/232534/ - utility functions to manage 
environments (ready for review)
   * after the above is merged, plan.py will need to be updated to include 
environment methods

* TripleO API
   * https://review.openstack.org/#/c/230432/ - API spec (ready for review)
   * https://review.openstack.org/#/c/243737/  - API (WIP)
   * after the library code is fully merged, API will need to be updated to 
allow access
 to a plan

Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-12-04 Thread Dmitry Tantsur

On 12/03/2015 06:13 PM, Pavlo Shchelokovskyy wrote:

Hi Dmitry,

should we also configure Launchpad to have blueprints references there
(for release/milestone targeting etc)? Or is it not needed?


Not sure what you mean; we do have Launchpad configured for blueprints. 
We used and will continue to use it for tracking features. Now some more 
complex blueprints will need a spec in addition. Does that answer your 
question?




Cheers,

On Thu, Dec 3, 2015 at 4:00 PM Dmitry Tantsur <dtant...@redhat.com> wrote:

FYI: the process is in effect now.

Please submit specs to
https://github.com/openstack/ironic-inspector-specs/
Approved specs will appear on
http://specs.openstack.org/openstack/ironic-inspector-specs/

--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com




Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-12-03 Thread Dmitry Tantsur

FYI: the process is in effect now.

Please submit specs to https://github.com/openstack/ironic-inspector-specs/
Approved specs will appear on 
http://specs.openstack.org/openstack/ironic-inspector-specs/


On 11/19/2015 02:19 PM, Dmitry Tantsur wrote:

Hi folks!

I've been dodging subj for some time (mostly due to my laziness), but
now it seems like the time has come. We're discussing 2 big features:
autodiscovery and HA that I would like us to have a proper consensus on.

I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own
template
3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the remaining ironic would
increase visibility of large inspector changes (i.e. those deserving a
spec). We would probably use [inspector] tag in the commit summary, so
that people explicitly NOT wanting to review them can quickly ignore.

Also note that I still see #1 (use only blueprints) as a way to go for
simple features.

WDYT?



Re: [openstack-dev] [ironic][inspector] CMDB integration

2015-12-02 Thread Dmitry Tantsur

On 11/30/2015 03:07 PM, Pavlo Shchelokovskyy wrote:

Hi all,

we are looking at how ironic-inspector could integrate with external
CMDB solutions and be able fetch a minimal set of data needed for
discovery (e.g. IPMI credentials and IPs) from CMDB. This could probably
be achieved with data filters framework that is already in place, but we
have one question:

what are people actually using? There are simple (but not conceivably
used in real life) choices to make a first implementation, like fetching
a csv file from HTTP link. Thus we want to learn if there is an already
known and working solution operators are actually using, either open
source or at least with open API.


What tripleo currently does is create a JSON file with credentials in 
advance, then enroll nodes from it. There's no CMDB there, but the same 
flow might be preferred in this case as well.




We really appreciate if you chime in :) This would help us design this
feature the way that will benefit community the most.

Best regards,
--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com




Re: [openstack-dev] [nova] [ironic] Hardware composition

2015-12-02 Thread Dmitry Tantsur

On 12/01/2015 02:44 PM, Vladyslav Drok wrote:

Hi list!

There is an idea of making use of hardware composition (e.g.
http://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-architecture/intel-rack-scale-architecture-resources.html)
to create nodes for ironic.

The current proposal is:

1. To create hardware-compositor service under ironic umbrella to manage
this composition process. Its initial implementation will support Intel
RSA, other technologies may be added in future. At the beginning, it
will contain the most basic CRUD logic for composed system.


My concern with this idea is that it would have to have its own drivers, 
maybe overlapping with ironic drivers. I'm not sure what prevents you 
from bringing it into ironic (in the case of ironic-inspector it was 
mostly problems with HA; I don't see anything that bad in your proposal).




2. Add logic to nova to compose a node using this new project and
register it in ironic if the scheduler is not able to find any ironic
node matching the flavor. An alternative (as pointed out by Devananda
during yesterday's meeting) could be using it in ironic by claims API
when it's implemented (https://review.openstack.org/204641).

3. If implemented in nova, there will be no changes to ironic right now
(apart from needing the driver to manage these composed nodes, which is
redfish, I believe), but there are cases when it may be useful to call
this service from ironic directly, e.g. to free the resources when a
node is deleted.


That's why I suggest just implementing it in ironic.

As a side note, some people (myself included) would really appreciate 
notifications on node deletion, and I think it's being worked on right now.




Thoughts?

Thanks,
Vlad




Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-02 Thread Dmitry Tantsur

On 12/01/2015 06:55 PM, Ben Nemec wrote:

Sorry for not getting to this earlier.  Some thoughts inline.

On 11/09/2015 08:51 AM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially kind of
capability discovery. We'll help nova scheduler with doing the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by it.

A special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct some sanity check before even calling to
heat/nova to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of API's to
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

   rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
   rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].
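For illustration, rule 1 above might look like the following body POSTed to the inspector's rules endpoint. Treat the exact field and action names as an unverified sketch; the real schema is in the documentation linked from [1]:

```python
import json

# Sketch of rule 1: nodes reporting >= 8 GiB of RAM get the
# "compute_profile=1" capability. Field and action names here are
# illustrative; consult the introspection-rules docs for the schema.
rule = {
    "description": "nodes with >= 8 GiB RAM are compute candidates",
    "conditions": [
        {"op": "ge", "field": "memory_mb", "value": 8192},
    ],
    "actions": [
        {"action": "set-capability",
         "name": "compute_profile", "value": "1"},
    ],
}

body = json.dumps(rule)  # what would be sent to the rules API
```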

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.


Is the intent that this will replace the standalone ahc-match call that
currently assigns profiles to nodes?  In general I'm +1 on simplifying
the process (which is why I'm finally revisiting this) so I think I'm
onboard with that idea.


Yes





(2) Modifications of `overcloud deploy` command: assigning profiles

New argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with existing "profile:xxx" capability are left as they are. For
nodes without a profile it will look at "xxx_profile" capabilities
discovered on the previous step. One of the possible profiles will be
chosen and assigned to "profile" capability. The assignment stops as
soon as we have enough nodes of a flavor as requested by a user.
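A greedy version of that assignment could look like the sketch below (illustrative only, not the actual tripleoclient patch): nodes with a fixed profile count against the quota first, then unassigned nodes consume one of their discovered candidate profiles until each quota is filled.

```python
def assign_profiles(nodes, wanted):
    """nodes: uuid -> {"profile": str (optional),
                       "candidates": [str, ...]}  # from "xxx_profile" caps
    wanted: profile name -> number of nodes the flavors request."""
    remaining = dict(wanted)
    assigned = {}
    # Nodes with an existing "profile:xxx" capability are left as they are,
    # but still count against the requested quota.
    for uuid, caps in nodes.items():
        profile = caps.get("profile")
        if profile is not None:
            assigned[uuid] = profile
            if remaining.get(profile, 0) > 0:
                remaining[profile] -= 1
    # For the rest, pick any discovered candidate whose quota is unmet.
    for uuid, caps in nodes.items():
        if uuid in assigned:
            continue
        for profile in caps.get("candidates", ()):
            if remaining.get(profile, 0) > 0:
                assigned[uuid] = profile
                remaining[profile] -= 1
                break
    return assigned, remaining
```

Note that a first-match greedy pass has exactly the ceph/controller overlap problem discussed further down this thread, which is why an explicit profile ordering may be needed.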


And this assignment would follow the same rules as the existing AHC
version does?  So if I had a rules file that specified 3 controllers, 3
cephs, and an unlimited number of computes, it would first find and
assign 3 controllers, then 3 cephs, and finally assign all the other
matching nodes to compute.


There's no longer a spec file, though we could create something like 
that. The spec file had 2 problems:

1. it was used to maintain state in local file system
2. it was completely out of sync with what was later passed to the 
deploy command. So you could, for example, request 1 controller and the 
remaining to be computes in a spec file, and then request deploy with 2 
controllers, which was doomed to fail.




I guess there's still a danger if ceph nodes also match the controller
profile definition but not the other way around, because a ceph node
might get chosen as a controller and then there won't be enough matching
ceph nodes when we get to that.  IIRC (it's been a while since I've done
automatic profile matching) that's how it would work today so it's an
existing problem, but it would be nice if we could fix that as part of
this work.  I'm not sure how complex the resolution code for such
conflicts would need to be.


My current patch does not deal with it. Spec file only had ordering, so 
you could process 'ceph' before 'controller'. We can do the same by 
accepting something like --profile-ordering=ceph,controller,compute. WDYT?


I can't think of something smarter for now, any ideas are welcome.





(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check if we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
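The check boils down to counting, as in this sketch (the data shapes are illustrative stand-ins for real nova flavor and ironic node objects):

```python
def validate_profiles(flavor_profiles, node_capabilities):
    """flavor_profiles: flavor name -> (profile, requested node count).
    node_capabilities: list of per-node capability dicts from ironic."""
    errors = []
    for flavor, (profile, count) in flavor_profiles.items():
        have = sum(1 for caps in node_capabilities
                   if caps.get("profile") == profile)
        if have < count:
            errors.append(
                "flavor %s wants %d nodes with profile %r, only %d found"
                % (flavor, count, profile, have))
    return errors
```

Failing fast on a non-empty error list is what turns nova's cryptic "no valid host found" into an actionable message.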


By the way, this is already implemented. I was not aware of it while 
writing my first email.




Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 06:24 PM, Anita Kuno wrote:

On 11/30/2015 12:17 PM, Dmitry Tantsur wrote:

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



  This is to announce that we have set up a Third Party CI environment
for Proliant iLO Drivers. The results will be posted under the "HP
Proliant CI check" section in non-voting mode. We will be running the
basic deploy tests for the iscsi_ilo and agent_ilo drivers for the
check queue. We will first work to make the results consistent, and
over a period of time we will try to promote it to voting mode.



 For more information, check the wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
For any issues, please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project






Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.


If Ironic wants to hear about announced drivers they have agreed to do
so as part of their weekly irc meeting:
2015-11-30T17:19:55   I think it is reasonable for each
driver team, if they want to announce it in the meeting, to do so on the
whiteboard section for their driver. we'll all see that in the weekly
meeting
2015-11-30T17:20:08   but it will avoid spamming the whole
openstack list

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-11-30.log


I was there, and I already said that I'm not buying the "spamming the 
list" argument. There are far less important things that I see here 
right now, even though I do actively use filters to only see potentially 
relevant threads. We've been actively (and not very successfully) 
encouraging people to use the ML instead of IRC conversations (or even 
private messages and video chats), and this thread does not seem in line 
with that.




Thank you,
Anita.





Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements


Thank you,
Anita.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



  This is to announce that we have set up a Third Party CI environment
for Proliant iLO Drivers. The results will be posted under the "HP Proliant CI
check" section in non-voting mode. We will be running the basic deploy
tests for the iscsi_ilo and agent_ilo drivers on the check queue. We will
first work to make the results consistent, and over a period of time we
will try to promote the job to voting mode.



For more information, check the wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
For any issues, please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.



Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements

Thank you,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 04:19 PM, Derek Higgins wrote:

Hi All,

 A few months ago TripleO switched from its devtest-based CI to one
that is based on instack. Before doing this we anticipated disruption in the
CI jobs and removed them from non-TripleO projects.

 We'd like to investigate adding it back to heat and ironic, as these
are the two projects where we find our CI provides the most value. But
we can only do this if the results from the job are treated as voting.

 In the past, most of the non-TripleO projects tended to ignore the
results from the TripleO job, as it wasn't unusual for the job to be broken
for days at a time. The thing is, ignoring the results of the job is the
reason (the majority of the time) it was broken in the first place.
 To decrease the number of breakages, we are now no longer running
master code for everything (for the non-TripleO projects we bump the
versions we use periodically, as long as they are working). I believe with this
model the CI jobs we run have become a lot more reliable; there are
still breakages, but far less frequently.

What I'm proposing is that we add at least one of our TripleO jobs back to both
heat and ironic (and other projects associated with them, e.g. clients,
ironic-inspector, etc.); TripleO will switch to running latest master of
those repositories, and the cores approving on those projects should wait
for a passing CI job before hitting approve. So how do people feel
about doing this? Can we give it a go? A couple of people have already
expressed an interest in doing this, but I'd like to make sure we're all
in agreement before switching it on.


I'm one of these "people", so definitely +1 here.

By the way, is it possible to NOT run tripleo-ci on changes touching 
only tests and docs? We do the same for our devstack jobs, it saves some 
infra resources.




thanks,
Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:14 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2015-11-30 10:06:25 +0100:

On 11/28/2015 02:48 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:

Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.


I've had a couple of questions about this split for release notes. The
intent is for developer-focused notes to continue to come from commit
messages and in-tree documentation, while using reno for new and
additional deployer-focused communication. Most commits to libraries
won't need reno release notes.


This looks like unnecessary overcomplication. Why not use such a
convenient tool for both kinds of release notes instead of having us
invent and maintain one more place to put release notes, now for


In the past we have had rudimentary release notes and changelogs
for developers to read based on the git commit messages. Since
deployers and developers care about different things, we don't want
to make either group sift through the notes meant for the other.
So, we publish notes in different ways.


Hmm, so maybe for small libraries with few changes it's still fine to 
publish them together; what do you think?




The thing that is new here is publishing release notes for changes
in libraries that deployers need to know about. While the Oslo code
was in the incubator, and being copied into applications, it was
possible to detect deployer-focused changes like new or deprecated
configuration options in the application and put the notes there.
Using shared libraries means those changes can happen without
application developers being aware of them, so the library maintainers
need to be publishing notes. Using reno for those notes is consistent
with the way they are handled in the applications, so we're extending
one tool to more repositories.


developers? It's already not so easy to explain reno to newcomers, this
idea makes it even harder...


Can you tell me more about the difficulty you've had? I would like to
improve the documentation for reno and for how we use it.


Usually people are stuck at the "how do I do this at all" stage :) We've 
even added it to the ironic developer FAQ. As for me, the official reno 
documentation is nice enough (but see below); maybe people are just not 
aware of it.


Another "issue" (at least for our newcomers) with the reno docs is that 
http://docs.openstack.org/developer/reno/usage.html#generating-a-report 
mentions the "reno report" command, which is not what we actually use; 
we use the "tox -ereleasenotes" command. What is worse, this command (I 
guess it's by design) does not pick up release note files that are only 
created locally. It took me some time to figure out that I have to 
commit release notes before "tox -ereleasenotes" will show them in the 
rendered HTML.


Finally, people are confused by how our release note jobs handle 
branches. E.g. the ironic-inspector release notes [1] currently seem to 
show notes from stable/liberty (judging by the version), so no current 
items [2] are shown.


[1] http://docs.openstack.org/releasenotes/ironic-inspector/unreleased.html
[2] for example 
http://docs-draft.openstack.org/18/250418/2/gate/gate-ironic-inspector-releasenotes/f0b9363//releasenotes/build/html/unreleased.html




Doug





Doug



After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go

Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Dmitry Tantsur

On 11/28/2015 02:48 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:

Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.


I've had a couple of questions about this split for release notes. The
intent is for developer-focused notes to continue to come from commit
messages and in-tree documentation, while using reno for new and
additional deployer-focused communication. Most commits to libraries
won't need reno release notes.


This looks like unnecessary overcomplication. Why not use such a 
convenient tool for both kinds of release notes instead of having us 
invent and maintain one more place to put release notes, now for 
developers? It's already not so easy to explain reno to newcomers, this 
idea makes it even harder...




Doug



After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go ahead and add reno, but it's not a requirement. If you do
set up multiple branches, make sure you have one page that uses the
release-notes directive without specifying a branch, as in the
oslo.config example, to build notes for the "current" branch to get
releases from master and to serve as a test for rendering notes
added to stable branches.

Thanks,
Doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [ironic] [inspector] Releases and things

2015-11-26 Thread Dmitry Tantsur
FYI the same thing applies to both inspector and (very soon) 
inspector-client.


On 11/26/2015 04:30 PM, Ruby Loo wrote:

On 25 November 2015 at 18:02, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:

Hi all,

We're approaching OpenStack's M-1 milestone, and as we have lots of good
stuff in the master branch, and no Mitaka release yet, I'd like to make
a release next Thursday, December 3.

First, I've caught us up (best I can tell) on missing release notes
since our last release. Please do review them:
https://review.openstack.org/#/c/250029/

Second, please make sure when writing and reviewing code, that we are
adding release notes for anything significant, including important bug
fixes. See the patch above for examples on things that could be
candidates for the release notes. Basically, if you think it's something
a deployer or operator might care about, we should have a note for it.

How to make a release note:
http://docs.openstack.org/developer/reno/usage.html


Jim, thanks for putting together the release notes! It isn't crystal
clear to me what ought to be mentioned in release notes, but I'll use
your release notes as a guide :)

This is a heads up to folks that if you have submitted a patch that
warrants mention in the release notes, you ought to update the patch to
include a note. Otherwise, (sorry,) it will be -1'd.

Last, I'd love if cores could help test the master branch and try to
dislodge any issues there, and also try to find any existing bug reports
that feel like they should definitely be fixed before the release.


I think this also means that we shouldn't land any patches this coming
week that might be risky or part of an incomplete feature.

After going through the commit log to build the release notes patch, I
think we've done a lot of great work since the 4.2 release. Thank you
all for that. Let's keep pushing hard on our priority list and have an
amazing rest of the cycle! :D


Hear, hear!

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [TripleO] RFC: profile matching

2015-11-26 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially kind of
capability discovery. We'll help nova scheduler with doing the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by it.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for the
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct a sanity check before even calling
heat/nova, to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of APIs in
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].
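As an illustration only, the two rules above can be modeled with a toy client-side evaluator. Note that this is a sketch: the real DSL (operator names, field paths, available actions) is defined by the ironic-inspector rules API [1], and real rules are created by POSTing JSON to that API, not evaluated locally.

```python
# Toy evaluator mirroring the shape of inspector introspection rules.
# Operator and field names here are illustrative assumptions; the
# authoritative DSL is the ironic-inspector rules API from [1].
OPS = {
    'ge': lambda a, b: a >= b,
    'eq': lambda a, b: a == b,
}

RULES = [
    # rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
    {'conditions': [{'op': 'ge', 'field': 'memory_mb', 'value': 8192}],
     'capability': ('compute_profile', '1')},
    # rule 2: if local_gb >= 100, add capability "controller_profile=1"
    {'conditions': [{'op': 'ge', 'field': 'local_gb', 'value': 100}],
     'capability': ('controller_profile', '1')},
]


def apply_rules(introspection_data, rules=RULES):
    """Return the capabilities a node earns from its introspection data."""
    caps = {}
    for rule in rules:
        if all(OPS[c['op']](introspection_data.get(c['field'], 0), c['value'])
               for c in rule['conditions']):
            name, value = rule['capability']
            caps[name] = value
    return caps
```

A node with 16 GiB of RAM and a 200 GB disk earns both capabilities, which is exactly why the tie-breaking in step (2) is needed.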

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

New argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with an existing "profile:xxx" capability are left as they are. For
nodes without a profile, it will look at the "xxx_profile" capabilities
discovered in the previous step. One of the possible profiles will be
chosen and assigned to the "profile" capability. The assignment stops as
soon as we have as many nodes of a flavor as requested by the user.
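A rough sketch of that assignment logic follows. The data shapes and names are assumptions made for illustration, not the actual tripleoclient code:

```python
# Hypothetical sketch of --assign-profiles: keep explicit "profile"
# capabilities, then greedily pick one discovered "xxx_profile" candidate
# per remaining node until each profile's requested count is met.
def assign_profiles(nodes, wanted):
    """nodes: list of dicts, each with a 'capabilities' dict.
    wanted: mapping of profile name -> number of nodes requested.
    Mutates nodes in place; returns the profiles we could not fill."""
    remaining = dict(wanted)

    # Nodes with an explicit profile are left as-is, but counted.
    for node in nodes:
        profile = node['capabilities'].get('profile')
        if profile in remaining:
            remaining[profile] -= 1

    # Assign unprofiled nodes from their discovered candidates.
    for node in nodes:
        caps = node['capabilities']
        if 'profile' in caps:
            continue
        candidates = [name[:-len('_profile')]
                      for name in caps if name.endswith('_profile')]
        for profile in candidates:
            if remaining.get(profile, 0) > 0:
                caps['profile'] = profile
                remaining[profile] -= 1
                break

    return {p: n for p, n in remaining.items() if n > 0}
```

Note the greedy choice: a node matching several profiles is claimed by the first still-needed one, which is why the "all 4 match compute, only 1 matches controller" scenario above needs care.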


I've put up a prototype patch for this work item: 
https://review.openstack.org/#/c/250405/




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at their "profile" capabilities. If
they are set for any flavors, it will check whether we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
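That sanity check amounts to counting nodes per pinned profile before heat/nova are ever called. A minimal sketch, with flavor and node shapes assumed for illustration (not the real tripleoclient implementation):

```python
# Minimal sketch of the pre-deploy validation: for each flavor that pins
# a "profile:xxx" capability, make sure enough ironic nodes carry the
# matching capability, so nova never reports "no valid host found".
def validate_profiles(flavors, node_capabilities):
    """flavors: mapping of flavor name -> {'profile': str or None, 'count': int}.
    node_capabilities: list of per-node capability dicts.
    Returns a list of human-readable errors (empty means OK)."""
    errors = []
    for flavor, spec in flavors.items():
        profile = spec.get('profile')
        if profile is None:
            continue  # this flavor does not pin a profile
        available = sum(1 for caps in node_capabilities
                        if caps.get('profile') == profile)
        if available < spec['count']:
            errors.append('%s: need %d node(s) with profile "%s", found %d'
                          % (flavor, spec['count'], profile, available))
    return errors
```

Running this client-side turns nova's cryptic scheduling failure into an actionable message before the deploy starts.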


Looks like this is already implemented, so the patch above is the only 
thing we actually need.




Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [openstack-announce] [release][stable][keystone][ironic] keystonemiddleware release 1.5.3 (kilo)

2015-11-26 Thread Dmitry Tantsur
I suspect it could break ironic stable/kilo in the same way as the 2.0.0 
release did. Still investigating; checking whether 
https://review.openstack.org/#/c/250341/ will also fix it. Example of a 
failing patch: https://review.openstack.org/#/c/248365/


On 11/23/2015 08:54 PM, d...@doughellmann.com wrote:

We are pumped to announce the release of:

keystonemiddleware 1.5.3: Middleware for OpenStack Identity

This release is part of the kilo stable release series.

With source available at:

 http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

 https://pypi.python.org/pypi/keystonemiddleware

For more details, please see the git log history below and:

 http://launchpad.net/keystonemiddleware/+milestone/1.5.3

Please report issues through launchpad:

 http://bugs.launchpad.net/keystonemiddleware

Notable changes


will now require python-requests<2.8.0

Changes in keystonemiddleware 1.5.2..1.5.3
--

d56d96c Updated from global requirements
9aafe8d Updated from global requirements
cc746dc Add an explicit test failure condition when auth_token is missing
5b1e18f Fix list_opts test to not check all deps
217cd3d Updated from global requirements
518e9c3 Ensure cache keys are a known/fixed length
033c151 Updated from global requirements

Diffstat (except docs and test files)
-

keystonemiddleware/auth_token/_cache.py   | 19 ++-
requirements.txt  | 19 ++-
setup.py  |  1 -
test-requirements-py3.txt | 18 +-
test-requirements.txt | 18 +-
7 files changed, 69 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e3288a1..23308cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,9 +7,9 @@ iso8601>=0.1.9
-oslo.config>=1.9.3,<1.10.0  # Apache-2.0
-oslo.context>=0.2.0,<0.3.0 # Apache-2.0
-oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
-oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
-oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
-pbr>=0.6,!=0.7,<1.0
-pycadf>=0.8.0,<0.9.0
-python-keystoneclient>=1.1.0,<1.4.0
-requests>=2.2.0,!=2.4.0
+oslo.config<1.10.0,>=1.9.3 # Apache-2.0
+oslo.context<0.3.0,>=0.2.0 # Apache-2.0
+oslo.i18n<1.6.0,>=1.5.0 # Apache-2.0
+oslo.serialization<1.5.0,>=1.4.0 # Apache-2.0
+oslo.utils!=1.4.1,<1.5.0,>=1.4.0 # Apache-2.0
+pbr!=0.7,<1.0,>=0.6
+pycadf<0.9.0,>=0.8.0
+python-keystoneclient<1.4.0,>=1.2.0
+requests!=2.4.0,<2.8.0,>=2.2.0
@@ -16,0 +17 @@ six>=1.9.0
+stevedore<1.4.0,>=1.3.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 11d9e17..5ab5eb0 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-hacking>=0.10.0,<0.11
+hacking<0.11,>=0.10.0
@@ -9,2 +9,2 @@ discover
-fixtures>=0.3.14
-mock>=1.0
+fixtures<1.3.0,>=0.3.14
+mock<1.1.0,>=1.0
@@ -12,5 +12,5 @@ pycrypto>=2.6
-oslosphinx>=2.5.0,<2.6.0 # Apache-2.0
-oslotest>=1.5.1,<1.6.0  # Apache-2.0
-oslo.messaging>=1.8.0,<1.9.0  # Apache-2.0
-requests-mock>=0.6.0  # Apache-2.0
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
+oslosphinx<2.6.0,>=2.5.0 # Apache-2.0
+oslotest<1.6.0,>=1.5.1 # Apache-2.0
+oslo.messaging<1.9.0,>=1.8.0 # Apache-2.0
+requests-mock>=0.6.0 # Apache-2.0
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
@@ -19 +19 @@ testresources>=0.2.4
-testtools>=0.9.36,!=1.2.0
+testtools!=1.2.0,>=0.9.36



___
OpenStack-announce mailing list
openstack-annou...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-26 Thread Dmitry Tantsur

On 11/25/2015 10:43 PM, Ben Nemec wrote:

On 11/23/2015 06:50 AM, Dmitry Tantsur wrote:

On 11/17/2015 04:31 PM, Tzu-Mainn Chen wrote:






 On 10 November 2015 at 15:08, Tzu-Mainn Chen <tzuma...@redhat.com> wrote:

 Hi all,

 At the last IRC meeting it was agreed that the new TripleO REST API
 should forgo the Tuskar name, and simply be called... the TripleO
 API.  There's one more point of discussion: where should the API
 live?  There are two possibilities:

 a) Put it in tripleo-common, where the business logic lives.  If we
 do this, it would make sense to rename tripleo-common to simply
 tripleo.


 +1 - I think this makes most sense if we are not going to support
 the tripleo repo as a library.


Okay, this seems to be the consensus, which is great.

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.
In IRC, bnemec and trown suggested splitting the renamed repo into two
packages - 'python-tripleo' and 'tripleo-api',
which seems sensible to me.


-1, that would be inconsistent with what other projects are doing. I
guess tripleo-incubator will die soon, and I think only tripleo devs
have any intuition about it. For me tripleo == instack-undercloud.


This was only referring to rpm packaging, and it is how we currently
package most of the other projects.  The repo itself would stay as one
thing, but would be split into python-tripleo and openstack-tripleo-api
rpms.

I don't massively care about package names, but given that there is no
(for example) openstack-nova package and openstack-tripleo is already in
use by a completely different project, I think it's reasonable to move
ahead with the split packages named this way.


Got it, sorry for the confusion.







What do others think?


Mainn


 b) Put it in its own repo, tripleo-api


 The first option made a lot of sense to people on IRC, as the
 proposed
 API is a very thin layer that's bound closely to the code in
 tripleo-
 common.  The major objection is that renaming is not trivial;
 however
 it was mentioned that renaming might not be *too* bad... as long as
 it's done sooner rather than later.

 What do people think?


 Thanks,
 Tzu-Mainn Chen

 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-23 Thread Dmitry Tantsur

On 11/23/2015 05:42 PM, Morgan Fainberg wrote:

Hi everyone,

This email is being written in the context of Keystone more than any
other project but I strongly believe that other projects could benefit
from a similar evaluation of the policy.

Most projects have a policy that prevents the following scenario (it is
a social policy not enforced by code):

* Employee from Company A writes code
* Other Employee from Company A reviews code
* Third Employee from Company A reviews and approves code.

This policy has a lot of history as to why it was implemented. I am not
going to dive into the depths of this history as that is the past and we
should be looking forward. This type of policy is an actively
distrustful policy. With exception of a few potentially bad actors
(again, not going to point anyone out here), most of the folks in the
community who have been given core status on a project are trusted to
make good decisions about code and code quality. I would hope that
any/all of the Cores would also standup to their management chain if
they were asked to "just push code through" if they didn't sincerely
think it was a positive addition to the code base.


Thanks for raising this. I always apply this policy in ironic, not 
because I distrust my colleagues. The problem I'm trying to avoid is 
members of the same company sharing the same one-sided view of a 
problem.




Now within Keystone, we have a fair amount of diversity of core
reviewers, but we each have our specialities and in some cases (notably
KeystoneAuth and even KeystoneClient) getting the required diversity of
reviews has significantly slowed/stagnated a number of reviews.


This is probably a fair use case for not applying this rule.



What I would like us to do is to move to a trustful policy. I can
confidently say that company affiliation means very little to me when I
was PTL and nominating someone for core. We should explore making a
change to a trustful model, and allow for cores (regardless of company
affiliation) review/approve code. I say this since we have clear steps
to correct any abuses of this policy change.

With all that said, here is the proposal I would like to set forth:

1. Code reviews still need 2x Core Reviewers (no change)
2. Code can be developed by a member of the same company as both core
reviewers (and approvers).
3. If the trust that is being given via this new policy is violated, the
code can [if needed], be reverted (we are using git here) and the actors
in question can lose core status (PTL discretion) and the policy can be
changed back to the "distrustful" model described above.

I hope that everyone weighs what it means within the community to start
moving to a trusting-of-our-peers model. I think this would be a net win
and I'm willing to bet that it will remove noticeable roadblocks [and
even make it easier to have an organization work towards stability fixes
when they have the resources dedicated to it].

Thanks for your time reading this.

Regards,
--Morgan
PTL Emeritus, Keystone


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [devstack]Question about using Devstack

2015-11-23 Thread Dmitry Tantsur

On 11/23/2015 12:07 PM, Oleksii Zamiatin wrote:



On Mon, Nov 23, 2015 at 12:58 PM, Young Yang <afe.yo...@gmail.com> wrote:

Hi,
I'm using devstack to deploy stable/kilo on my XenServer.
I successfully deployed devstack, but I found that every time I
restart it, devstack always runs ./stack.sh, which clears all my data and
reinstalls all the components.
So here come the questions:

1) Can I stop devstack from reinstalling after rebooting and just
use the OpenStack installed successfully last time?
I've tried replacing stack.sh with a blank shell script
to stop it from running. It then didn't reinstall the services after
rebooting; however, some services didn't start successfully.


try rejoin-stack.sh - it is in the same folder as unstack.sh and stack.sh


Did it ever work for anyone? :)




2) I found that devstack will exit if it is unable to connect to the
Internet when rebooting.
Is there any way I can reboot devstack successfully without a
connection to the Internet, after I've installed it successfully with a
connection to the Internet?

Thanks in advance !  :)











Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-23 Thread Dmitry Tantsur

On 11/17/2015 04:31 PM, Tzu-Mainn Chen wrote:






On 10 November 2015 at 15:08, Tzu-Mainn Chen mailto:tzuma...@redhat.com>> wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 - I think this makes most sense if we are not going to support
the tripleo repo as a library.


Okay, this seems to be the consensus, which is great.

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.
In IRC, bnemec and trown suggested splitting the renamed repo into two
packages - 'python-tripleo' and 'tripleo-api',
which seems sensible to me.


-1, that would be inconsistent with what other projects are doing. I 
guess tripleo-incubator will die soon, and I think only tripleo devs 
have any intuition about it. For me tripleo == instack-undercloud.




What do others think?


Mainn


b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the
proposed
API is a very thin layer that's bound closely to the code in
tripleo-
common.  The major objection is that renaming is not trivial;
however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.

What do people think?


Thanks,
Tzu-Mainn Chen















Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Dmitry Tantsur

On 11/20/2015 12:50 AM, Bruno Cornec wrote:

Hello,

Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:

Hi list and Bruno,

I’m interested in adding virtual media boot interface for redfish (
https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot).

It depends on
https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
and a corresponding spec https://review.openstack.org/184653, that
proposes
adding support for redfish (adding new power and management
interfaces) to
ironic. It also seems to depend on python-redfish client -
https://github.com/devananda/python-redfish.


Very good idea ;-)


I’d like to know what is the current status of it?


We have made recently some successful tests with both a real HP ProLiant
server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.

The version working for these tests is at
https://github.com/bcornec/python-redfish (prototype branch)
I think we should now move that work into master and make again a pull
request to Devananda.


Is there some roadmap of what should be added to
python-redfish (or is the one mentioned in spec is still relevant)?


I think this is still relevant.


Is there a way for others to contribute in it?


Feel free to git clone the repo and propose patches to it! We would be
happy to have contributors :-) I've also copied our mailing list so the
other contributors are aware of this.


Bruno, do you plan to move it
under ironic umbrella, or into pyghmi as people suggested in spec?


That's a difficult question. On the one hand, I don't think python-redfish
should be under the OpenStack umbrella per se. This is a useful Python
module for talking to servers that provide a Redfish interface, and it has
no relationship with OpenStack ... except that it's very useful for
Ironic! But it could also be used by other projects in the future, such as
Hadoop for node deployment, or e.g. my MondoRescue Disaster Recovery
project. That's also why we have not used OpenStack modules: to avoid
creating an artificial dependency that could prevent that module from
being used by these other projects.


Using openstack umbrella does not automatically mean the project can't 
be used outside of openstack. It just means you'll be using openstack 
infra for its development, which might be a big plus.




I'm new to the python galaxy myself, but thought that pypy would be the
right place for it, but I really welcome suggestions here.


You mean PyPI? I don't see how these 2 contradict each other, PyPI is 
just a way to distribute releases.



I also need to come back to the Redfish spec itself and update it with the
latest feedback we got, in order to have more up-to-date content for the
Mitaka cycle.

Best regards,
Bruno.





Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-19 Thread Dmitry Tantsur

On 11/16/2015 03:05 PM, Jim Rollenhagen wrote:

On Wed, Nov 11, 2015 at 12:16:34PM -0500, Ruby Loo wrote:

On 10 November 2015 at 12:08, Dmitry Tantsur  wrote:


On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:


Hi,

In the last Ironic meeting [1] we started a discussion about whether
we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
ideas about the format of the midcycle were presented in that
conversation and this email is just a follow up on that conversation.

The ideas presented were:

1. Normal mid-cycle

Same format as the previous ones, the meetup will happen in a specific
venue somewhere in the world.



I would really like to see you all as often as possible. However, I don't
see much value in proper face-to-face mid-cycles as compared to improving
our day-to-day online communications.



+2.

My take on mid-cycles is that if folks want to have one, that is fine, I
might not attend :)

My preference is 4) no mid-cycle -- and try to work more effectively with
people in different locations and time zones.


++ that was part of my thought process when I proposed not having an
official midcycle.

Another idea I floated last week was to do a virtual midcycle of sorts.
Treat it like a normal midcycle in that everyone tells their management
"I'm out for 3-4 days for the midcycle", but they don't travel anywhere.
We come up with an agenda, see if there's any planning/syncing work to
do, or if it's all just hacking on code/reviews.

Then we can set up some hangouts (or similar) to get people in the same
"room" working on things. Time zones will get weird, but we tend to
split into smaller groups at the midcycle anyway; this is just more
timezone-aligned. We can also find windows where time zones overlap when
we want to go across those boundaries. Disclaimer: people may need to
work some weird hours to do this well.

I think this might get a little bit bumpy, but if it goes relatively
well we can try to improve on it for the future. Worst case, it's a
total failure and is roughly equivalent to the "no midcycle" option.


I would try it, +1



// jim







Re: [openstack-dev] [ironic] specs process for ironic-inspector

2015-11-19 Thread Dmitry Tantsur

On 11/19/2015 02:39 PM, Pavlo Shchelokovskyy wrote:

Hi all,

+1 for specs in general, big features require a proper review and
discussion for which LP is not a good choice.

+1 for not requiring a spec for small features; an LP blueprint is enough for
time/release tracking, but of course cores can request a proper spec if
they feel a feature is worth discussing.

0 for using ironic-specs. It will increase visibility to wider ironic
community, sure. But it seems ironic-inspector has to decide how
integrated it should be with the other ironic project infra pieces as
well. For example, there is now a patch on review to build a proper
sphinx docs for ironic-inspector. Should those then be published and
where? Should ironic-inspector have own doc site e.g.
http://docs.openstack.org/developer/ironic-inspector/, or somehow be
incorporated in ironic doc site? IMO decision on specs and docs should
be consistent.


This is a good point. It's very likely that we'll post documentation to 
a separate site.




Best regards,

On Thu, Nov 19, 2015 at 3:20 PM Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

Hi folks!

I've been dodging subj for some time (mostly due to my laziness), but
now it seems like the time has come. We're discussing 2 big features:
autodiscovery and HA that I would like us to have a proper consensus on.

I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own
template
3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the remaining ironic would
increase visibility of large inspector changes (i.e. those deserving a
spec). We would probably use [inspector] tag in the commit summary, so
that people explicitly NOT wanting to review them can quickly ignore them.

Also note that I still see #1 (use only blueprints) as a way to go for
simple features.

WDYT?


--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com








[openstack-dev] [ironic] specs process for ironic-inspector

2015-11-19 Thread Dmitry Tantsur

Hi folks!

I've been dodging subj for some time (mostly due to my laziness), but 
now it seems like the time has come. We're discussing 2 big features: 
autodiscovery and HA that I would like us to have a proper consensus on.


I'd like to get your opinion on one of the options:
1. Do not have specs, only blueprints are enough for us.
2. Reuse ironic-specs repo, create our own subdirectory with our own 
template

3. Create a new ironic-inspector-specs repo.

I vote for #2, as sharing a repo with the remaining ironic would 
increase visibility of large inspector changes (i.e. those deserving a 
spec). We would probably use [inspector] tag in the commit summary, so 
that people explicitly NOT wanting to review them can quickly ignore them.


Also note that I still see #1 (use only blueprints) as a way to go for 
simple features.


WDYT?



Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-19 Thread Dmitry Tantsur

On 11/19/2015 11:57 AM, Sam Betts (sambetts) wrote:

What Yuiko has described makes a lot of sense, and from that perspective
perhaps instead of us defining what driver a node should and shouldn’t
use via a config file, we should just provide a guide to using the
inspector rules for this, and maybe some prewritten rules that can set
the driver and driver info etc. fields for different cases?

Then the workflow would be: default to the fake driver, because we don’t
need any special info for that; however, if a rule detects that the node
is IPMI-able, by making sure that the IPMI address is valid or something,
then it can set the driver to an ipmitool one and then set the password
and username based on either a retrieved field or values defined in the
rule itself.
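[A prewritten rule along these lines might look roughly like the JSON
below. This is purely illustrative: the exact field names and operators of
the ironic-inspector rules DSL vary between versions, so check the rules
documentation before using it.]

```json
{
    "description": "Switch nodes with a discovered IPMI address to an ipmitool driver",
    "conditions": [
        {"op": "ne", "field": "data://ipmi_address", "value": ""}
    ],
    "actions": [
        {"action": "set-attribute", "path": "driver", "value": "pxe_ipmitool"}
    ]
}
```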


Using introspection rules, wow! That's a really nice idea; I wonder why I 
didn't think of it.




WDYT?


That sounds really promising, and simplifies our life a lot. I would 
love to see all these ideas written. Do you folks think it's time for 
ironic-inspector-specs repo? :D




Sam








Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

On 11/02/2015 05:07 PM, Sam Betts (sambetts) wrote:

Auto discovery is a topic which has been discussed a few times in the
past for

Ironic, and it's interesting to solve because it's a bit of a chicken-and-egg

problem. The ironic inspector allows us to inspect nodes that we don't know

the mac addresses for yet, to do this we run a global DHCP PXE rule that
will

respond to all mac addresses and PXE boot any machine that requests it,

this means its possible for machines that we haven't been asked to

inspect to boot into the inspector ramdisk and send their information to

inspector's API. To prevent this data from being processed further by

inspector if it's a machine we shouldn't care about, we do a node lookup.
If the data

fails this node lookup we used to drop this data and continue no further, in

release 2.0.0 we added a hook point to intercept this state called the
Node Not

Found hook point which allows us to run some python code at this point in

processing before failing and dropping the inspection data. Something we've

discussed as a use for this hook point is, enrolling a node that fails the

lookup into Ironic, and then having inspector continue to process the

inspection data as we would for any other node that had inspection requested

for it, this allows us to auto-discover unknown nodes into Ironic.


If this auto discovery hook was enabled this would be the flow when
inspector

receives inspection data from the inspector ramdisk:


- Run pre-process on the inspection data to sanitise the data and ready
it for

   the rest of the process


- Node lookup using fields from the inspection data:

   - If in inspector node cache return node info


   - If not in inspector node cache, but is in ironic node database, fail

 inspection because it's a known node and inspection hasn't been
requested

 for it.


   - If not in inspector node cache or ironic node database, enroll the
node in

 ironic and return node info


- Process inspection data
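
[The lookup flow above can be sketched in Python. This is a sketch of the
described decision logic only; all names here (handle_inspection_data,
inspector_cache, ironic_nodes, enroll) are hypothetical and are not the
real inspector internals.]

```python
def handle_inspection_data(data, inspector_cache, ironic_nodes, enroll):
    """Decide what to do with incoming inspection data (sketch).

    inspector_cache: node IDs with inspection in progress.
    ironic_nodes: node IDs already enrolled in ironic.
    enroll: callback that enrolls an unknown node in ironic.
    """
    node_id = data["uuid"]
    if node_id in inspector_cache:
        return "process"  # inspection was requested for this node
    if node_id in ironic_nodes:
        return "fail"     # known node, but inspection was not requested
    enroll(data)          # unknown node: auto-enroll, then process as usual
    return "process"
```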


The remaining question for this idea is how to handle the driver
settings for

each node that we discover, we've currently discussed 3 different options:


1. Enroll the node in ironic using the fake driver, and leave it to the
operator

to set the driver type and driver info before they move the node
from enroll

to manageable.


I'm -1 to this because it requires a manual step. We already have a 
process requiring 1 manual step - inspection :) I'd like autodiscovery 
to turn it to 0.





2. Allow for the default driver and driver info information to be set in
the

ironic inspector configuration file, this will be set on every node
that is

auto discovered. Possible config file example:


[autodiscovery]

driver = pxe_ipmitool

address_field = 

username_field = 

password_field = 


This is my favorite one. We'll also need to provide the default user 
name/password. We can try to advance a node to MANAGEABLE state after 
enrolling it. If the default credentials don't work, node would stay in 
ENROLL state, and this will be a signal to an operator to check them.
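
[Option 2 could be implemented along these lines. This is a sketch built
on the example config section above; the option names and the
build_enroll_fields helper are taken from that example and are not part of
any real inspector release.]

```python
import configparser

def build_enroll_fields(cfg_text, data):
    """Build driver settings for a freshly discovered node from an
    [autodiscovery] config section and the introspection data."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    auto = cfg["autodiscovery"]
    fields = {"driver": auto.get("driver", "fake")}
    # Each *_field option names a key in the introspection data whose
    # value becomes the corresponding driver_info entry.
    for opt, di_key in [("address_field", "ipmi_address"),
                        ("username_field", "ipmi_username"),
                        ("password_field", "ipmi_password")]:
        src = auto.get(opt)
        if src and src in data:
            fields.setdefault("driver_info", {})[di_key] = data[src]
    return fields
```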





3. A possibly vendor specific option that was suggested at the summit was to

provide an ability to look up out of band credentials from an
external CMDB.


We already have an extension point for discovery. If we know more about 
CMDB interfaces, we can extend it, but it's already possible to use.





The first option is technically possible using the second option, by setting

the driver to fake and leaving the driver info blank.


+1




With IPMI based drivers most IPMI related information can be retrieved
from the

node by the inspector ramdisk, however for non-ipmi based drivers such
as the

cimc/ucs drivers this information isn't accessible from an in-band OS
command.


A problem with option 2 is that it can not account for a mixed driver

environment.


We have also discussed for IPMI based drivers inspector could set a new

randomly generated password on to the freshly discovered node, with the idea

being fresh hardware often comes with a default password, and if you used

inspector to discover it then it could set a unique password on it and

automatically make ironic aware of that.
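
[Generating such a random password is straightforward; a minimal sketch
using the modern Python secrets module is below. The 16-character
alphanumeric format is an assumption for illustration, not a vetted BMC
limit: real BMCs often restrict password length and charset.]

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def random_ipmi_password(length=16):
    """Generate a random BMC password using a CSPRNG (sketch)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```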


We're throwing this idea out onto the mailer because we'd like to get
feedback

from the community to see if this would be useful for people using
inspector,

and to see if people have any opinions on what the right way to handle
the node

driver settings is.


Yeah, I'm not decided on this one. Sounds cool but dangerous :)




Sam (sambetts)









Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

I have to admit I forgot about this thread. Please find comments inline.

On 11/06/2015 05:25 PM, Bruno Cornec wrote:

Hello,

Pavlo Shchelokovskyy said on Tue, Nov 03, 2015 at 09:41:51PM +:

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would
ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.


Using fake driver means we need a manual step to set it to something 
non-fake :) and the current introspection process already has 1 manual 
step (enrolling nodes), so I'd like autodiscovery to require 0 of them 
(at least for the majority of users).




As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing
the
ipmi password, it should not only notify/update Ironic's knowledge on
that
node, but also notify/update the CMDB on that change - at least there
must
be a possibility (a ready-to-use plug point) to do that before we roll
out
such feature.


Well, if we have a CMDB, we probably don't need to set credentials. Or 
at least we should rely on the CMDB as a primary source. This "setting 
random password" thing is more about people without CMDB (aka using 
ironic as a CMDB ;). I'm not sure it's a compelling enough use case.


Anyway, it could be interesting to talk about some generic 
OpenStack-CMDB interface, which might be something like what is proposed below.




wrt interaction with CMDB, we have been investigating some ideas that
we have gathered at https://github.com/uggla/alexandria/wiki


Oh, that's interesting. I see some potential overlap with ironic and 
ironic-inspector. Would be cool to chat about it at the next summit.




Some code has been written to try to model some of these aspects, but
having more contributors and patches to enhance that integration would
be great ! Similarly available at https://github.com/uggla/alexandria

We had planned to talk about these ideas at the previous OpenStack
summit but didn't get enough votes, it seems. So now we're aiming at
presenting at the next one ;-)


+100, would love to hear.



HTH,
Bruno.





Re: [openstack-dev] [ironic][inspector][documentation] any suggestions

2015-11-18 Thread Dmitry Tantsur

On 11/18/2015 10:10 AM, Serge Kovaleff wrote:

Hi Stackers,

I am going to help my team with the Inspector installation instruction.


Hi, that's great!



Any ideas or suggestions what and how to contribute back to the community?

I see that Ironic Inspector could benefit from documentation efforts.
The repo hasn't got a doc folder and/or auto-generated documentation.


Creating proper documentation would be a great way to contribute :) 
it's tracked as https://bugs.launchpad.net/ironic-inspector/+bug/1514803


Right now all documentation, including the installation guide, is in a 
couple of rst files in root:

https://github.com/openstack/ironic-inspector/blob/master/README.rst
https://github.com/openstack/ironic-inspector/blob/master/HTTP-API.rst



Cheers,
Serge Kovaleff






Re: [openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

On 11/16/2015 12:28 PM, Davanum Srinivas wrote:

Dmitry,

I was trying to bring sanity to the tox.ini(s). +1 to documenting this
step somewhere prominent.


Please don't get me wrong, I really appreciate it. I'm not sure why 
"usedevelop" is not the default, though. Maybe we should at least make sure 
to communicate this to people first? Because the error message is really 
vague to anyone who is not aware of these version tags.




-- Dims

On Mon, Nov 16, 2015 at 5:37 AM, Dmitry Tantsur  wrote:

On 11/16/2015 11:35 AM, Julien Danjou wrote:


On Mon, Nov 16 2015, Dmitry Tantsur wrote:


Before you ask: using 'sudo pip install -U setuptools pbr' is out of
question
in binary distributions :) so please make sure to remove this line only
when
everyone is updated to whatever version is required for understanding
these
;python_version=='2.7 bits.



But:
pip install --user setuptools pbr
might not be out of the question.



Yeah, this (with added -U) fixes the problem. But then we have to add it to
*all* contribution documentation. I'm pretty sure a lot of people won't
realize they need it.











Re: [openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

On 11/16/2015 11:35 AM, Julien Danjou wrote:

On Mon, Nov 16 2015, Dmitry Tantsur wrote:


Before you ask: using 'sudo pip install -U setuptools pbr' is out of question
in binary distributions :) so please make sure to remove this line only when
everyone is updated to whatever version is required for understanding these
;python_version=='2.7 bits.


But:
   pip install --user setuptools pbr
might not be out of the question.



Yeah, this (with added -U) fixes the problem. But then we have to add it 
to *all* contribution documentation. I'm pretty sure a lot of people 
won't realize they need it.




[openstack-dev] [all] [oslo] Please stop removing usedevelop from tox.ini (at least for now)

2015-11-16 Thread Dmitry Tantsur

Hi all!

I've seen a couple of patches removing "usedevelop = true" from tox.ini. 
This has 2 nasty consequences:
1. It's harder to manually experiment in a tox environment, as you have to 
explicitly run the 'tox' command every time you change code
2. The most important, it breaks tox invocation for all people who don't 
have very recent pbr and setuptools in their distributions (which I 
suspect might be the majority of people):


ERROR: invocation failed (exit code 1), logfile: 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/log/tox-0.log

ERROR: actionid=tox
msg=packaging
cmdargs=['/usr/bin/python', 
local('/home/dtantsur/.gerrty-git/openstack/futurist/setup.py'), 
'sdist', '--formats=zip', '--dist-dir', 
local('/home/dtantsur/.gerrty-git/openstack/futurist/.tox/dist')]

env=None

Installed 
/home/dtantsur/.gerrty-git/openstack/futurist/.eggs/pbr-1.8.1-py2.7.egg
error in setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers; 
Expected ',' or end-of-list in futures>=3.0;python_version=='2.7' or 
python_version=='2.6' at ;python_version=='2.7' or python_version=='2.6'


ERROR: FAIL could not package project - v = 
InvocationError('/usr/bin/python 
/home/dtantsur/.gerrty-git/openstack/futurist/setup.py sdist 
--formats=zip --dist-dir 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/dist (see 
/home/dtantsur/.gerrty-git/openstack/futurist/.tox/log/tox-0.log)', 1)



Before you ask: using 'sudo pip install -U setuptools pbr' is out of 
question in binary distributions :) so please make sure to remove this 
line only when everyone is updated to whatever version is required for 
understanding these ;python_version=='2.7 bits.
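
[For reference, the flag in question lives in tox.ini. A minimal
illustration (not any specific project's file) is:]

```ini
[testenv]
# "usedevelop = true" makes tox install the project with
# "pip install -e .", so local code changes are picked up without
# re-running tox, and the failing sdist packaging step shown in the
# log above is skipped entirely.
usedevelop = true
```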


Thank you!



Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 06:08 PM, Ben Nemec wrote:

On 11/10/2015 10:28 AM, John Trowbridge wrote:



On 11/10/2015 10:43 AM, Ben Nemec wrote:

On 11/10/2015 05:26 AM, John Trowbridge wrote:



On 11/09/2015 07:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector accordingly. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know
why.

Problem
===

I realized that we're doing a wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's an "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling



I have considered this whole command as a bug for a while now. I
understand what we were trying to do and why, but it is pretty bad to
hijack another project's namespace with a command that would get a firm
-2 there.


4. baremetal show capabilities

  This is the only commands that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Second, it's actually Dell-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which
nodes are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement
for "introspection bulk status": polling and operating on "available"
nodes.
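
To make those explicit flags concrete, here is a minimal, hypothetical
sketch of the node selection such a command could perform. The function
name and data shapes are illustrative assumptions, not the real
tripleoclient or ironic APIs:

```python
# Hypothetical sketch: only nodes in an explicitly allowed provision
# state are picked for introspection, instead of implicitly sweeping
# every node. Mirrors the proposed --allow-available flag.

def select_nodes_for_introspection(nodes, allow_available=False):
    """Pick nodes eligible for introspection.

    `nodes` is a list of (uuid, provision_state) pairs. With the
    Liberty-era state machine, "manageable" is the proper state for
    introspection; "available" is included only when the caller
    explicitly opts in.
    """
    allowed = {"manageable"}
    if allow_available:
        allowed.add("available")
    return [uuid for uuid, state in nodes if state in allowed]

nodes = [("n1", "manageable"), ("n2", "available"), ("n3", "active")]
print(select_nodes_for_introspection(nodes))                        # ['n1']
print(select_nodes_for_introspection(nodes, allow_available=True))  # ['n1', 'n2']
```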


I am not totally convinced that we gain a huge amount by hiding the
state manipulation in this command. We need to move that logic to
tripleo-common anyways, so I think it is worth considering splitting it
from the introspect command.

Dmitry and I discussed briefly at summit having the ability to pass a
list of nodes to the inspector client for introspection as well. So if
we separated out the bulk state manipulation bit, we could just use that.

I get that this is going in the opposite direction of the original
intention of lowering the amount of commands needed to get a functional
deployment. However, I think that goal is better solved elsewhere
(tripleo.sh, some ansible playbooks, etc.). Instead it would be nice if
the tripleoclient was more transparent.


-2.  This is exactly the thing that got us to a place where our GUI was
unusable.  Business logic (and state management around Ironic node
inspection is just that) has to live in the API so all consumers can
take advantage of it.  Otherwise everyone has to reimplement it
themselves, anything but the developer-used CLI interfaces (like
tripleo.sh) falls behind, and we end up right back where we are.

Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:

Hi,

In the last Ironic meeting [1] we started a discussion about whether
we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
ideas about the format of the midcycle were presented in that
conversation and this email is just a follow up on that conversation.

The ideas presented were:

1. Normal mid-cycle

Same format as the previous ones, the meetup will happen in a specific
venue somewhere in the world.


I would really want to see you all as often as possible. However, I 
don't see much value in proper face-to-face mid-cycles as compared to 
improving our day-to-day online communications.




2. Virtual mid-cycle

People doing a virtual hack session on IRC / google hangout /
others... Something like virtual sprints [2].


Actually we could do it more often than mid-cycles and with less 
planning. Say, when we feel a need to. Face-to-face communication is the 
most important part of a mid-cycle, so this choice is not an 
alternative, just one more good thing we could do from time to time.




3. Coordinated regional mid-cycles

Having more than one meetup happening in different parts of the world
with a preferable time overlap between them so we could use video
conference for some hours each day to sync up what was done/discussed
on each of the meetups.


This sounds like a good compromise between not having a midcycle at all 
(I include virtual sprints in this category too) and spending a big 
budget on traveling overseas. I would try something like that.




4. Not having a mid-cycle at all


So, what do people think about it? Should we have a mid-cycle for the
Mitaka release or not? If so, what format should we use?

Other ideas are also welcome.

[1] 
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-11-09-17.00.log.html
[2] https://wiki.openstack.org/wiki/VirtualSprints

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:18 PM, Ben Nemec wrote:

+1 to moving anything we can into Ironic itself.

I do want to note that if we rename anything, we can't just rip and
replace.  We have users of the current commands, and we need to
deprecate those to give people a chance to move to the new ones.


Definitely. I think I used the word deprecation somewhere in this long letter :)



A few more thoughts inline.

On 11/09/2015 06:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector accordingly. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know why.

Problem
===

I realized that we're doing a wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

   This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

   This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.


True, although I feel like an "import from JSON" feature would not be
inappropriate for inclusion in Ironic.  I can't believe that we're the
only ones who would be interested in mass importing nodes from a very
common format like this.


I would warmly welcome such command, but it should not require patching 
os-cloud-config every time we add a new driver or driver property.






3. baremetal introspection bulk start

   This command does several bad (IMO) things:
   a. Messes with ironic node states
   b. Operates implicitly on all nodes (in a wrong state)


I thought that was fixed?  It used to try to introspect nodes that were
in an invalid state (like active), but it shouldn't anymore.


It introspects nodes in "available" state, which is a rude violation of 
the ironic state machine ;)




Unless your objection is that it introspects things in an available
state, which I think has to do with the state Ironic puts (or used to
put) nodes in after registration.  In any case, this one likely requires
some more discussion over how it should work.


Well, if we upgrade the baremetal API version we use from 1.6 (essentially 
Kilo) to the Liberty one, nodes would appear in a new ENROLL state. I've 
started a patch for it, but I don't have time to finish it. Anyone is free 
to take it over: https://review.openstack.org/#/c/235158/





   c. Defaults to polling

4. baremetal show capabilities

   This is the only command that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

   See "bulk start" above.

6. baremetal configure ready state

   First of all, this and the next command use the "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

   Second, it's actually Dell-specific.


Well, as I understand it we don't intend for it to be Dell-specific,
it's just that the Dell implementation is the only one that has been
done so far.

That said, since I think this is just layering some TripleO-specific
logic on top of the vendor-specific calls Ironic provides I agree that
it probably doesn't belong in the baremetal namespace.



7. baremetal configure boot

   This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

   overcloud nodes import
   overcloud nodes configure ready state --drac
   overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

This is fine to move to inspector-client, as inspector knows which
nodes are/were on introspection. We'll need a new API though.

Re: [openstack-dev] [Ironic] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:21 PM, Brad P. Crochet wrote:

On Tue, Nov 10, 2015 at 4:09 AM, Dmitry Tantsur  wrote:

Hi all!

I'd like to seek consensus (or at least some opinions) on patch
https://review.openstack.org/#/c/206119/
It proposed the following command:



I think it's time to actually just write up a spec on this. I think we
would be better served to spell it out now, and then more people can
contribute to both the spec and to the actual implementation once the
spec is approved.

WDYT?


+1

I'll block the first patch until we get consensus. Thanks for working on it!




   openstack baremetal provision state --provide UUID

(where --provide can also be --active, --deleted, --inspect, etc).

I have several issues with this proposal:

1. IIUC the structure of an OSC command is "openstack noun verb". "provision
state" is not a verb.
2. --active is not consistent with other options, which are verbs.

Let's have a quick poll, which would you prefer and why:

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID

I vote for #3. Though it's much more verbose, it reads very easily, except
for "active". For active I'm thinking about changing it to "activate" or
"provision".

My next candidate is #6. Though it's also not a verb, it reads pretty
easily.

Thanks!










Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 05:14 PM, Dean Troyer wrote:

On Tue, Nov 10, 2015 at 9:46 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

  inspect, manage, provide, active and abort are all provisioning
verbs used in the ironic API. They usually represent some complex
operations on a node. Inspection is not related to showing; it's
about fetching hardware properties from the hardware itself and updating
the ironic database. "manage" sets a node to a specific ("manageable")
state, etc.


inspect seems like a very specific action and is probably OK as-is.  We
should sanity-check other resources in OpenStack that it might also be
used with down the road and how different the action might be.


ironic-inspector uses term "introspection", but that's another long story.



Those that are states of a resource should be handled with a set command.


Speaking of consistency, all of these move a node through the ironic state 
machine, so it might be weird to have "inspect" but "set manageable". 
Maybe it's only me, not sure. The problem is that some state 
manipulations result in simple actions (e.g. the "manage" action either 
does nothing or validates power credentials, depending on the initial 
state). But most provision state changes involve complex, long-running 
operations ("active" to deploy, "deleted" to undeploy and clean, 
"inspect" to conduct inspection). Not sure how to make these consistent; 
any suggestions are very welcome.
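
For reference, the provisioning verbs under discussion can be sketched as
a map from verb to (source state, target state). This is a heavily
simplified illustration of the Liberty-era ironic state machine, not an
authoritative definition; intermediate states (cleaning, deploying, etc.)
are collapsed:

```python
# Simplified sketch of the provisioning verbs: each verb moves a node
# from a typical source state toward a target state.
PROVISION_VERBS = {
    "manage":  ("enroll",     "manageable"),  # may just validate credentials
    "inspect": ("manageable", "manageable"),  # via "inspecting"
    "provide": ("manageable", "available"),   # via cleaning
    "active":  ("available",  "active"),      # long-running deploy
    "deleted": ("active",     "available"),   # undeploy + clean
}

def target_state(verb):
    """Look up the end state a provisioning verb aims for."""
    return PROVISION_VERBS[verb][1]

print(target_state("provide"))  # available
```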




boot and shutdown are natural opposites, aka power on and power off.


The analogous server commands (create/delete) may not make sense here
because, unlike with a server (VM), a resource is not being created or
deleted.  But a user might expect to use the same commands in both
places.  We need to consider which of those is more important.  I like
to break ties on the side of user experience consistency.

Honestly, at some point as a user, I'd like to forget whether my server
is a bare metal box or not and just use the same commands to manage it.


Well, it's not possible. Or more precisely, it is possible if you use 
ironic indirectly via the nova API. But power on/off is not very similar 
to instance create/delete. Instance creation actually correlates to the 
"active" provision state, instance deletion to "deleted" (yeah, the 
naming is not so good here).




Also, I'd LOVE to avoid using 'boot' at all just to get away from the
nova command's use of it.


+1



dt

--

Dean Troyer
dtro...@gmail.com <mailto:dtro...@gmail.com>








Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 04:37 PM, Giulio Fidente wrote:

On 11/10/2015 04:16 PM, Dmitry Tantsur wrote:

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


if both the api (coming) and the cli (currently python-tripleoclient)
are meant to consume the shared code (business logic) from
tripleo-common, then I think it makes sense to keep each in its own repo
... so that we avoid renaming tripleo-common as well


tripleoclient should not consume tripleo-common or have any business 
logic. Otherwise it undermines the whole goal of having an API, as we'll 
have to reproduce the same logic in the GUI.




Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 03:32 PM, Steve Martinelli wrote:

So I don't know the intricacies of the baremetal APIs, but hopefully I
can shed some light on best practices.

Do try to reuse the existing actions
(http://docs.openstack.org/developer/python-openstackclient/commands.html#actions)
Do use "create", "delete", "set", "show" and "list" for basic CRUD.
Do try to have natural opposites - like issue/revoke, resume/suspend,
add/remove.


So looking at the list below, I'd say:
Don't use "update" - use "set".

What's the point of "inspect"? Can you use "show"? If it's a HEAD call,
how about "check"?

What's "manage"? Does it update a resource? Can you use "set" instead?

What are the natural opposites between provide/activate/abort/boot/shutdown?


 inspect, manage, provide, active and abort are all provisioning verbs 
used in the ironic API. They usually represent some complex operations on 
a node. Inspection is not related to showing; it's about fetching hardware 
properties from the hardware itself and updating the ironic database. 
"manage" sets a node to a specific ("manageable") state, etc.


boot and shutdown are natural opposites, aka power on and power off.



reboot and rebuild seem good

/rant

Steve


From: "Sam Betts (sambetts)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 2015/11/10 07:20 AM
Subject: Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient
command for provision action





So you would end up with a set of commands that look like this:

Openstack baremetal [node/driver/chassis] list
Openstack baremetal port list [--node uuid] <-- replicate node-port-list

Openstack baremetal [node/port/driver] show UUID
Openstack baremetal chassis show [--nodes] UUID <-- replicate
chassis-node-list

Openstack baremetal [node/chassis/port] create
Openstack baremetal [node/chassis/port] update UUID
Openstack baremetal [node/chassis/port] delete UUID

Openstack baremetal [node/chassis] provide UUID
Openstack baremetal [node/chassis] activate UUID
Openstack baremetal [node/chassis] rebuild UUID
Openstack baremetal [node/chassis] inspect UUID
Openstack baremetal [node/chassis] manage UUID
Openstack baremetal [node/chassis] abort UUID
Openstack baremetal [node/chassis] boot UUID
Openstack baremetal [node/chassis] shutdown UUID
Openstack baremetal [node/chassis] reboot UUID

Openstack baremetal node maintain [--done] UUID
Openstack baremetal node console [--enable, --disable] UUID <-- With no
parameters this acts like node-get-console, otherwise acts like
node-set-console-mode
Openstack baremetal node boot-device [--supported, --PXE, --CDROM, etc]
UUID <-- With no parameters this acts like node-get-boot-device,
--supported makes it act like node-get-supported-boot-devices, and with a
type of boot device passed in it’ll act like node-set-boot-device

Openstack baremetal [node/driver] passthru

WDYT? I think I’ve covered most of what exists in the Ironic CLI currently.

Sam

From: "Haomeng, Wang" <wanghaomeng@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, 10 November 2015 11:41
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient
command for provision action

Hi Sam,

Yes, I understand your format is:

#openstack baremetal  

so these can cover all 'node' operations however if we want to cover
support port/chassis/driver and more ironic resources, so how about
below proposal?

#openstack baremetal   

The resource/target can be one item in following list:

node
port
chassis
driver
...

Make sense?
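
A toy parser makes the proposed "openstack baremetal <resource> <verb>
<target>" shape concrete. The resource list and error handling below are
illustrative assumptions, not part of any real client:

```python
# Toy parser for the proposed command shape. `argv` is everything
# after "openstack baremetal".
RESOURCES = {"node", "port", "chassis", "driver"}

def parse(argv):
    """Return (resource, verb, target) for a baremetal command line."""
    if len(argv) < 2 or argv[0] not in RESOURCES:
        raise ValueError("expected: <resource> <verb> [target]")
    resource, verb, *rest = argv
    return resource, verb, rest[0] if rest else None

print(parse(["node", "provide", "UUID"]))  # ('node', 'provide', 'UUID')
print(parse(["port", "list"]))             # ('port', 'list', None)
```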




On Tue, Nov 10, 2015 at 7:25 PM, Sam Betts (sambetts)
<sambetts@cisco.com> wrote:

Openstack baremetal provision provide or --provide just doesn’t feel
right to me, it feels like I am typing more than I need to and it
feels like I’m telling it to do the same action twice.

I would much rather see:

Openstack baremetal provide UUID
Openstack baremetal activate UUID
Openstack baremetal delete UUID
Openstack baremetal rebuild UUID
Openstack baremetal inspect UUID
Openstack baremetal manage UUID
Openstack baremetal abort UUID

And for power:

Openstack baremetal boot UUID
Openstack baremetal shutdown UUID
Openstack baremetal reboot UUID

WDYT?

Sam

From: "Haomeng, Wang" <wanghaomeng@gmail.com>

Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the proposed
API is a very thin layer that's bound closely to the code in tripleo-
common.  The major objection is that renaming is not trivial; however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.


Renaming is bad when there are strong backward compatibility guarantees. 
I'm not sure if it's the case for tripleo-common.




What do people think?


Thanks,
Tzu-Mainn Chen







Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 02:42 PM, Lennart Regebro wrote:

These changes are fine to me.

I'm not so sure about the idea that we can't "hijack" other projects
namespaces. If only ironic is allowed to use the prefix "baremetal",
then the prefix should not have been "baremetal" in the first place,
it should have been "ironic". Which of course means it would just be a
replacement for the ironic client, making these whole namespaces
pointless.


That's not true, ironic is officially called the Bare metal service, so 
"baremetal" is its official short name.


That said, I'm not saying we can never use others' namespaces. I only 
state that this should be done in coordination with projects and with 
care to make the new commands generic enough.




I do agree that many of these should not be in baremetal at all as
they are not baremetal specific, but tripleo things, and hence are
part of the overcloud/undercloud namespace, and that at a minimum
teaches us to be more careful with the namespaces. We should probably
double-check with others first.

Oh, sorry, I mean "We should probably increase cross-team
communication visibility to synchronize the integrational aspects of
the openstack client project, going forward."


:)




On Mon, Nov 9, 2015 at 1:44 PM, Dmitry Tantsur  wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack baremetal"
and "openstack baremetal introspection" namespaces belonging to ironic and
ironic-inspector accordingly. TL;DR of this email is to deprecate them and
move to TripleO-specific namespaces. Read on to know why.

Problem
===

I realized that we're doing a wrong thing when people started asking me why
"baremetal introspection start" and "baremetal introspection bulk start"
behave so differently (the former is from ironic-inspector, the latter is
from tripleoclient). The problem with TripleO commands is that they're
highly opinionated workflows commands, but there's no way a user can
distinguish them from general-purpose ironic/ironic-inspector commands. The
way some of them work is not generic enough ("baremetal import"), or uses
different defaults from an upstream project ("configure boot"), or does
something completely unacceptable upstream (e.g. the way "introspection bulk
start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling

4. baremetal show capabilities

  This is the only command that is generic enough and could actually make it
to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use the "baremetal configure" prefix. I
would not promise we'll never start using it in ironic, breaking the whole
TripleO.

  Second, it's actually Dell-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of TripleO
as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud" prefixes
for TripleO, I suggest we move these commands under "openstack overcloud
nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state" command.
As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which nodes
are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or a similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement for
"introspection bulk start": polling and operating on "available" nodes.

4. overcloud nodes import --dry-run

   could be a replacement for "baremetal instackenv validate".
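
A --dry-run validation could look roughly like the following sketch. The
required fields are a guess at a minimal subset of the instackenv.json
format, purely for illustration, not the authoritative schema:

```python
# Rough sketch of what "overcloud nodes import --dry-run" might check
# before touching ironic: parse instackenv.json and report nodes that
# lack the fields needed to register them.
import json

REQUIRED = {"pm_type", "pm_addr"}  # assumed minimal subset

def validate_instackenv(text):
    """Return a list of human-readable errors (empty means valid)."""
    errors = []
    data = json.loads(text)
    for i, node in enumerate(data.get("nodes", [])):
        missing = REQUIRED - node.keys()
        if missing:
            errors.append("node %d: missing %s" % (i, sorted(missing)))
    return errors

good = '{"nodes": [{"pm_type": "pxe_ipmitool", "pm_addr": "10.0.0.2"}]}'
bad = '{"nodes": [{"pm_type": "pxe_ipmitool"}]}'
print(validate_instackenv(good))  # []
print(validate_instackenv(bad))
```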


Please let me know what you think.

Cheers,
Dmitry.



Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 12:26 PM, John Trowbridge wrote:



On 11/09/2015 07:44 AM, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces belonging
to ironic and ironic-inspector accordingly. TL;DR of this email is to
deprecate them and move to TripleO-specific namespaces. Read on to know
why.

Problem
===

I realized that we're doing a wrong thing when people started asking me
why "baremetal introspection start" and "baremetal introspection bulk
start" behave so differently (the former is from ironic-inspector, the
latter is from tripleoclient). The problem with TripleO commands is that
they're highly opinionated workflows commands, but there's no way a user
can distinguish them from general-purpose ironic/ironic-inspector
commands. The way some of them work is not generic enough ("baremetal
import"), or uses different defaults from an upstream project
("configure boot"), or does something completely unacceptable upstream
(e.g. the way "introspection bulk start" deals with node states).

So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object, while
instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and driver
properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling



I have considered this whole command a bug for a while now. I
understand what we were trying to do and why, but it is pretty bad to
hijack another project's namespace with a command that would get a firm
-2 there.


4. baremetal show capabilities

  This is the only command that is generic enough and could actually
make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use the "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Second, it's actually Dell-specific.

7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not an
upstream default. Default values for images may not work outside of
TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under "openstack
overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows which
nodes are/were on introspection. We'll need a new API though.

2. baremetal show capabilities

   We'll have this or a similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this replacement
for "introspection bulk start": polling and operating on "available"
nodes.


I am not totally convinced that we gain a huge amount by hiding the
state manipulation in this command. We need to move that logic to
tripleo-common anyways, so I think it is worth considering splitting it
from the introspect command.


+1



Dmitry and I discussed briefly at summit having the ability to pass a
list of nodes to the inspector client for introspection as well. So if
we separated out the bulk state manipulation bit, we could just use that.


And here it goes: https://review.openstack.org/#/c/243541/ :)

The only missing bit is polling; it's tracked as bug 
https://bugs.launchpad.net/python-ironic-inspector-client/+bug/1480649 
if someone feels like working on it.
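
The missing polling could be a simple wait loop on top of the status
call. `get_status` is stubbed below; the dict keys are assumptions
modeled on the inspector client's status result, not a guaranteed API:

```python
# Sketch of client-side polling: wait until every node's introspection
# is reported finished, or give up after a timeout.
import time

def wait_for_introspection(uuids, get_status, timeout=3600, interval=0.1):
    """Poll get_status(uuid) until all nodes report finished."""
    deadline = time.time() + timeout
    pending = set(uuids)
    results = {}
    while pending and time.time() < deadline:
        for uuid in list(pending):
            status = get_status(uuid)
            if status.get("finished"):
                results[uuid] = status
                pending.discard(uuid)
        if pending:
            time.sleep(interval)
    if pending:
        raise RuntimeError("timed out waiting for: %s" % sorted(pending))
    return results

# Stubbed status source: everything already finished.
statuses = {"n1": {"finished": True, "error": None}}
print(wait_for_introspection(["n1"], statuses.get))
# {'n1': {'finished': True, 'error': None}}
```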




I get that this is going in the opposite direction of the original
intention of lowering the amount of commands needed to get a functional
deployment. However, I think that goal is better solved elsewhere
(tripleo.sh, some ansible playbooks, etc.). Instead it would be nice if
the tripleoclient was more transparent.


+100



Thanks Dmitry for starting this discussion.


Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

On 11/10/2015 10:28 AM, Lucas Alvares Gomes wrote:

Hi,


Let's have a quick poll, which would you prefer and why:

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID


I know very little about OSC and its syntax, but what I would do in
this case is to follow the same syntax as the command that changes the
power state of the nodes. Apparently the power state command proposed
[1] follows the syntax:

$ openstack baremetal power --on | --off 

I would expect provision state to follow the same, perhaps

$ openstack baremetal provision --provide | --active | ... 

So my vote goes to make both power and provision state syntax
consistent. (Which currently is the option # 2, but none patches are
merged yet)


It's still not 100% consistent: "power" is a noun, "provision" is a 
verb. Not sure it matters, though; adding OSC folks so that they can 
weigh in.




[1] https://review.openstack.org/#/c/172517/28

Cheers,
Lucas





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Quick poll: OpenStackClient command for provision action

2015-11-10 Thread Dmitry Tantsur

Hi all!

I'd like to seek consensus (or at least some opinions) on patch 
https://review.openstack.org/#/c/206119/

It proposed the following command:

  openstack baremetal provision state --provide UUID

(where --provide can also be --active, --deleted, --inspect, etc).

I have several issues with this proposal:

1. IIUC the structure of an OSC command is "openstack noun verb". 
"provision state" is not a verb.

2. --active is not consistent with other options, which are verbs.

Let's have a quick poll: which would you prefer, and why?

1. openstack baremetal provision state --provide UUID
2. openstack baremetal provision --provide UUID
3. openstack baremetal provide UUID
4. openstack baremetal set provision state --provide UUID
5. openstack baremetal set state --provide UUID
6. openstack baremetal action --provide UUID

I vote for #3. Though it's much more verbose, it reads very easily, 
except for "active". For active I'm thinking about changing it to 
"activate" or "provision".


My next candidate is #6. Though it's also not a verb, it reads pretty 
easily.


Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] RFC: profile matching

2015-11-09 Thread Dmitry Tantsur

Hi folks!

I spent some time thinking about bringing profile matching back in, so 
I'd like to get your comments on the following near-future plan.


First, the scope of the problem. What we do is essentially a kind of 
capability discovery. We'll help nova scheduler with doing the right 
thing by assigning a capability like "suits for compute", "suits for 
controller", etc. The most obvious path is to use inspector to assign 
capabilities like "profile=1" and then filter nodes by it.


Special care, however, is needed when some of the nodes match 2 or 
more profiles. E.g. if we have all 4 nodes matching "compute" and then 
only 1 matching "controller", nova can select this one node for 
"compute" flavor, and then complain that it does not have enough hosts 
for "controller".


We also want to conduct a sanity check before even calling 
heat/nova to avoid cryptic "no valid host found" errors.


(1) Inspector part

During the Liberty cycle we've landed a whole bunch of APIs in 
inspector that allow us to define rules on introspection data. The plan 
is to have rules saying, for example:


 rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
 rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based 
DSL [1].
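
The two example rules above could be expressed roughly like this (a sketch
only; the exact field paths and the "set-capability" action name are
assumptions based on the inspector documentation of that time frame and
should be checked against your inspector version):

```python
import json

# Hypothetical encoding of the two example rules in the inspector
# rules DSL: each rule has conditions on introspection data and
# actions applied to matching nodes.
rules = [
    {
        "description": "Tag nodes with enough RAM as compute candidates",
        "conditions": [
            {"op": "ge", "field": "memory_mb", "value": 8192},
        ],
        "actions": [
            {"action": "set-capability",
             "name": "compute_profile", "value": "1"},
        ],
    },
    {
        "description": "Tag nodes with enough disk as controller candidates",
        "conditions": [
            {"op": "ge", "field": "local_gb", "value": 100},
        ],
        "actions": [
            {"action": "set-capability",
             "name": "controller_profile", "value": "1"},
        ],
    },
]

# Each rule would be POSTed to the inspector /v1/rules endpoint.
payload = json.dumps(rules, indent=2)
```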


As you see, one node can receive 0, 1 or many such capabilities. So we 
need the next step to make a final decision, based on how many nodes we 
need of every profile.


(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument --assign-profiles will be added. If it's provided, 
tripleoclient will fetch all ironic nodes, and try to ensure that we 
have enough nodes with all profiles.


Nodes with existing "profile:xxx" capability are left as they are. For 
nodes without a profile it will look at "xxx_profile" capabilities 
discovered on the previous step. One of the possible profiles will be 
chosen and assigned to the "profile" capability. The assignment stops as 
soon as we have as many nodes of each flavor as requested by the user.
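
A greedy sketch of this assignment step (illustration only, with made-up
data structures, not the actual tripleoclient code):

```python
def assign_profiles(nodes, wanted):
    """Assign a 'profile' capability to nodes until each requested
    profile count is met.

    nodes: list of dicts with a 'capabilities' dict, e.g.
           {'capabilities': {'compute_profile': '1'}}
    wanted: dict profile -> number of nodes requested.
    """
    counts = {profile: 0 for profile in wanted}
    # Nodes with an existing explicit profile are left as they are.
    for node in nodes:
        profile = node['capabilities'].get('profile')
        if profile in counts:
            counts[profile] += 1
    # For the rest, pick one of the discovered "<profile>_profile" hints.
    for node in nodes:
        caps = node['capabilities']
        if 'profile' in caps:
            continue
        for profile in wanted:
            if counts[profile] >= wanted[profile]:
                continue  # already have enough nodes of this profile
            if caps.get('%s_profile' % profile) == '1':
                caps['profile'] = profile
                counts[profile] += 1
                break
    return counts

nodes = [
    {'capabilities': {'compute_profile': '1', 'controller_profile': '1'}},
    {'capabilities': {'compute_profile': '1'}},
    {'capabilities': {'profile': 'controller'}},  # pre-assigned, untouched
]
result = assign_profiles(nodes, {'compute': 2, 'controller': 1})
```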


(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will 
fetch all flavors involved and look at the "profile" capabilities. If 
they are set for any flavors, it will check if we have enough ironic 
nodes with a given "profile:xxx" capability. This check will happen 
after profile assignment if --assign-profiles is used.
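
The validation itself could be as simple as this sketch (hypothetical
data shapes; the real command would pull flavors from nova and nodes
from ironic):

```python
def validate_profiles(flavors, nodes):
    """Check that enough nodes carry each profile demanded by flavors.

    flavors: dict flavor name -> {'profile': str or None, 'count': int}
    nodes: list of node capability dicts.
    Returns a list of human-readable error strings (empty if OK).
    """
    errors = []
    for name, flavor in flavors.items():
        profile = flavor.get('profile')
        if profile is None:
            continue  # flavor does not restrict scheduling by profile
        available = sum(1 for caps in nodes
                        if caps.get('profile') == profile)
        if available < flavor['count']:
            errors.append(
                '%d node(s) with profile %r found, %d needed by flavor %s'
                % (available, profile, flavor['count'], name))
    return errors

flavors = {
    'control': {'profile': 'controller', 'count': 3},
    'compute': {'profile': 'compute', 'count': 1},
}
nodes = [
    {'profile': 'controller'},
    {'profile': 'compute'},
    {'profile': 'compute'},
]
errors = validate_profiles(flavors, nodes)
```

This would fail fast with a readable message instead of nova's cryptic
"no valid host found".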


Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

On 11/09/2015 03:04 PM, Dougal Matthews wrote:

On 9 November 2015 at 12:44, Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces
belonging to ironic and ironic-inspector accordingly. TL;DR of this
email is to deprecate them and move to TripleO-specific namespaces.
Read on to find out why.

Problem
===

I realized that we're doing a wrong thing when people started asking
me why "baremetal introspection start" and "baremetal introspection
bulk start" behave so differently (the former is from
ironic-inspector, the latter is from tripleoclient). The problem
with TripleO commands is that they're highly opinionated workflow
commands, but there's no way a user can distinguish them from
general-purpose ironic/ironic-inspector commands. The way some of
them work is not generic enough ("baremetal import"), or uses
different defaults from an upstream project ("configure boot"), or
does something completely unacceptable upstream (e.g. the way
"introspection bulk start" deals with node states).


A big +1 to the idea.

We originally did this because we wanted to make it feel more
"integrated", but it never quite worked. I completely agree with all the
justifications below.


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object,
while instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and
driver properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling

4. baremetal show capabilities

  This is the only command that is generic enough and could
actually make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I would not promise we'll never start using it in ironic,
breaking the whole TripleO.

  Second, it's actually Dell-specific.


heh, that I didn't know!


7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not
an upstream default. Default values for images may not work outside
of TripleO either.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under
"openstack overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot


I think this is probably okay, but I wonder if "nodes" is a bit generic?
Why not "overcloud baremetal" for consistency?


I don't have a strong opinion on it :)




As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows
which nodes are/were on introspection. We'll need a new API though.


A new API endpoint in Ironic Inspector?


Yeah, a new endpoint to report all nodes that are/were on inspection.




2. baremetal show capabilities

   We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this
replacement for "introspection bulk start": polling and operating
on "available" nodes.

4. overcloud nodes import --dry-run

   could be a replacement for "baremetal instackenv validate".


Please let me know what you think.


Thanks for bringing this up, it should make everything much clearer for
everyone.


Great! I've also added this topic to tomorrow's meeting to increase 
visibility.





Cheers,
Dmitry.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://l

[openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

Hi OOO'ers, hopefully the subject caught your attentions :)

Currently, tripleoclient exposes several commands in "openstack 
baremetal" and "openstack baremetal introspection" namespaces belonging 
to ironic and ironic-inspector accordingly. TL;DR of this email is to 
deprecate them and move to TripleO-specific namespaces. Read on to find out why.


Problem
===

I realized that we're doing a wrong thing when people started asking me 
why "baremetal introspection start" and "baremetal introspection bulk 
start" behave so differently (the former is from ironic-inspector, the 
latter is from tripleoclient). The problem with TripleO commands is that 
they're highly opinionated workflow commands, but there's no way a user 
can distinguish them from general-purpose ironic/ironic-inspector 
commands. The way some of them work is not generic enough ("baremetal 
import"), or uses different defaults from an upstream project 
("configure boot"), or does something completely unacceptable upstream 
(e.g. the way "introspection bulk start" deals with node states).


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

 This command assumes there's a "baremetal instackenv" object, while 
instackenv is a tripleo-specific file format.


2. baremetal import

 This command supports a limited subset of ironic drivers and driver 
properties, only those known to os-cloud-config.


3. baremetal introspection bulk start

 This command does several bad (IMO) things:
 a. Messes with ironic node states
 b. Operates implicitly on all nodes (in a wrong state)
 c. Defaults to polling

4. baremetal show capabilities

 This is the only command that is generic enough and could actually 
make it to ironicclient itself.


5. baremetal introspection bulk status

 See "bulk start" above.

6. baremetal configure ready state

 First of all, this and the next command use "baremetal configure" 
prefix. I would not promise we'll never start using it in ironic, 
breaking the whole TripleO.


 Second, it's actually Dell-specific.

7. baremetal configure boot

 This one is nearly ok, but it defaults to local boot, which is not an 
upstream default. Default values for images may not work outside of 
TripleO either.


Proposal


As we already have "openstack undercloud" and "openstack overcloud" 
prefixes for TripleO, I suggest we move these commands under "openstack 
overcloud nodes" namespace. So we end up with:


 overcloud nodes import
 overcloud nodes configure ready state --drac
 overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state" 
command. As to the remaining commands:


1. baremetal introspection status --all

  This is fine to move to inspector-client, as inspector knows which 
nodes are/were on introspection. We'll need a new API though.


2. baremetal show capabilities

  We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

  I believe that we need to make 2 things explicit in this replacement 
for "introspection bulk start": polling and operating on "available" nodes.


4. overcloud nodes import --dry-run

  could be a replacement for "baremetal instackenv validate".


Please let me know what you think.

Cheers,
Dmitry.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Next meeting is November 9

2015-10-22 Thread Dmitry Tantsur

On 10/22/2015 12:33 PM, Miles Gould wrote:

I've just joined - what is the usual place and time?


Hi and welcome!

All the information you need you can find here: 
https://wiki.openstack.org/wiki/Meetings/Ironic




Thanks,
Miles

- Original Message -
From: "Beth Elwell" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, 22 October, 2015 8:33:03 AM
Subject: Re: [openstack-dev] [ironic] Next meeting is November 9

Hi Jim,

I will be on holiday the week of the 9th November and so will be unable to make 
that meeting. Work on the ironic UI will be posted in the sub team report 
section and if anyone has any questions regarding it please shoot me an email 
or ping me.

Thanks!
Beth


On 22 Oct 2015, at 01:58, Jim Rollenhagen  wrote:

Hi folks,

Since we'll all be at the summit next week, and presumably recovering
the following week, the next Ironic meeting will be on November 9, in
the usual place and time. See you there! :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-inspector release 2.2.2 (liberty)

2015-10-21 Thread Dmitry Tantsur

We are gleeful to announce the release of:

ironic-inspector 2.2.2: Hardware introspection for OpenStack Bare Metal

With source available at:

http://git.openstack.org/cgit/openstack/ironic-inspector

The most important change is a fix for CVE-2015-5306, all users 
(including users of ironic-discoverd) are highly advised to update.


Another user-visible change is defaulting MySQL to InnoDB, as MyISAM is 
known not to work.


For more details, please see the git log history below and:

http://launchpad.net/ironic-inspector/+milestone/2.2.2

Please report issues through launchpad:

http://bugs.launchpad.net/ironic-inspector

Changes in ironic-inspector 2.2.1..2.2.2


95db43c Always default to InnoDB for MySQL
2d42cdf Updated from global requirements
2c64da2 Never run Flask application with debug mode
bbf31de Fix gate broken by the devstack trueorfalse change
12eaf81 Use auth_strategy=noauth in functional tests

Diffstat (except docs and test files)
-

devstack/plugin.sh |  2 +-
ironic_inspector/db.py |  7 ++-
ironic_inspector/main.py   |  5 +--
.../versions/578f84f38d_inital_db_schema.py| 12 +++--
.../migrations/versions/d588418040d_add_rules.py   | 10 -
ironic_inspector/test/functional.py| 51 +++---

requirements.txt   |  2 +-
7 files changed, 52 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e53d673..39b8423 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -21 +21 @@ oslo.rootwrap>=2.0.0 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Design Summit Schedule

2015-10-16 Thread Dmitry Tantsur

On 10/15/2015 06:42 PM, Matthew Treinish wrote:


Hi Everyone,

I just pushed up the QA schedule for design summit:

https://mitakadesignsummit.sched.org/overview/type/qa

Let me know if there are any big schedule conflicts or other issues, so we can
work through the problem.


Hi!

I wonder if it's possible to move "QA: Tempest Microversion Support and 
Testing" one slot down (to 3:40), so that Ironic people can attend.




Thanks,

Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Dmitry Tantsur

On 10/15/2015 12:18 AM, Robert Collins wrote:

On 15 October 2015 at 11:11, Thomas Goirand  wrote:


One major pain point is unfortunately something ridiculously easy to
fix, but which nobody seems to care about: the long & short descriptions
format. These are usually buried into the setup.py black magic, which by
the way I feel is very unsafe (does PyPi actually execute "python
setup.py" to find out about description texts? I hope they are running
this in a sandbox...).

Since everyone uses the fact that PyPi accepts RST format for the long
description, there's nothing that can really easily fit the
debian/control. Probably a rst2txt tool would help, but still, the long
description would still be polluted with things like changelog, examples
and such (damned, why people think it's the correct place to put that...).

The only way I'd see to fix this situation, would be a PEP. This will
probably take a decade to have everyone switching to a new correct way
to write a long & short description...


Perhaps Debian (1 thing) should change, rather than trying to change
all the upstreams packaged in it (>20K) :)


+1. Both README and PyPI are for users, and I personally find detailed 
descriptions (especially a couple of simple examples) on the PyPI page 
to be of great value.




-Rob





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps

2015-10-14 Thread Dmitry Tantsur

Hi OoO'ers :)

It's going to be a long letter, fasten your seat-belts (and excuse my 
usual bad English)!


In RDO Manager we used to have a feature called advanced profiles 
matching. It's still there in the documentation at 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html 
but the related code needed reworking and didn't quite make it upstream 
yet. This mail is an attempt to restart the discussion on this topic.


Short explanation for those unaware of this feature: we used detailed 
data from introspection (acquired using hardware-detect utility [1]) to 
provide scheduling hints, which we called profiles. A profile is 
essentially a flavor, but calculated using much more data. E.g. you 
could say that a profile "foo" will be assigned to nodes with 1024 <= 
RAM <= 4096 and with GPU devices present (an artificial example). 
The profile was set on the Ironic node as a capability as a result of 
introspection. Please read the documentation linked above for more details.
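
The "artificial example" above boils down to evaluating a set of
predicates against a node's introspection facts. A minimal sketch
(the spec format here is made up for illustration; the real feature
used the hardware library's specs-file syntax):

```python
def matches_profile(node_facts, spec):
    """Return True when every (fact, predicate) pair in the spec
    holds for the node's introspection data."""
    return all(pred(node_facts.get(fact)) for fact, pred in spec.items())

# Profile "foo": 1024 <= RAM <= 4096 and at least one GPU present.
profile_foo = {
    'memory_mb': lambda v: v is not None and 1024 <= v <= 4096,
    'gpu_count': lambda v: v is not None and v >= 1,
}

node = {'memory_mb': 2048, 'gpu_count': 2}
ok = matches_profile(node, profile_foo)
```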


This feature had a bunch of problems with it, to name a few:
1. It didn't have an API
2. It required a user to modify files by hand to use it
3. It was tied to a pretty specific syntax of the hardware [1] library

So we decided to split this thing into 3 parts, each of which is valuable 
on its own:


1. Pluggable introspection ramdisk - so that we don't force dependency 
on hardware-detect on everyone.
2. User-defined introspection rules - some DSL that will allow a user to 
define something like a specs file (see link above) via an API. The 
outcome would be something, probably capabilit(y|ies) set on a node.
3. Scheduler helper - a utility that will take capabilities set by the 
previous step, and turn them into exactly one profile to use.


Long story short, we got 1 and 2 implemented in appropriate projects 
(ironic-python-agent and ironic-inspector) during the Liberty time 
frame. Now it's time to figure out what we do in TripleO about this, namely:


1. Do we need some standard way to define introspection rules for 
TripleO? E.g. a JSON file like we have for ironic nodes?


2. Do we need a scheduler helper at all? We could use only capabilities 
for scheduling, but then we can end up with the following situation: 
node1 has capabilities C1 and C2, node2 has capability C1. First we 
deploy a flavor with capability C1, it goes to node1. Then we deploy a 
flavor with capability C2 and it fails, despite us having 2 correct 
nodes initially. This is what state files were solving in [1] (again, 
please refer to the documentation).
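
The C1/C2 failure above can be avoided by assigning the scarcest
capability first and preferring the least flexible node. A rough sketch
of such a helper (hypothetical names and data shapes, not actual
tripleo-common code):

```python
def assign_constrained_first(nodes, required_caps):
    """Map each required capability to a distinct node, handling the
    scarcest capability first so a flexible node is not consumed by a
    flavor another node could also satisfy.

    nodes: dict node name -> set of capabilities.
    required_caps: list of capabilities, one per flavor instance.
    Returns {capability: node} or None on failure.
    """
    assignment = {}
    free = dict(nodes)
    # Most constrained capability first: fewest matching nodes overall.
    order = sorted(required_caps,
                   key=lambda c: sum(c in caps for caps in nodes.values()))
    for cap in order:
        candidates = [n for n, caps in free.items() if cap in caps]
        if not candidates:
            return None  # would surface as "no valid host found"
        # Prefer the node with the fewest capabilities (least flexible).
        node = min(candidates, key=lambda n: len(free[n]))
        assignment[cap] = node
        del free[node]
    return assignment

nodes = {'node1': {'C1', 'C2'}, 'node2': {'C1'}}
result = assign_constrained_first(nodes, ['C1', 'C2'])
```

Here the naive order (C1 first) could consume node1 and fail on C2,
while the constrained-first order succeeds.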


3. If we need, where does it go? tripleo-common? Do we need an HTTP API 
for it, or do we just do it in place where we need it? After all, it's a 
pretty trivial manipulation of ironic nodes.


4. Finally, we need an option to tell introspection to use 
python-hardware. I don't think it should be on by default, but it will 
require rebuilding of IPA (due to a new dependency).


Looking forward to your opinions.
Dmitry.

[1] https://github.com/redhat-cip/hardware

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Ideas for summit discussions

2015-10-12 Thread Dmitry Tantsur

Hi inspectors! :)

We don't have a proper design session in Tokyo, but I hope it won't 
prevent us from having an informal one, probably on Friday morning 
during the contributor meetup. I'm collecting the ideas of what we could 
discuss, so please feel free to jump in:

https://etherpad.openstack.org/p/mitaka-ironic-inspector

Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-12 Thread Dmitry Tantsur

On 10/09/2015 05:41 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named
33). I also was not able to reproduce it on my regular devstack
environment.

I've posted a temporary patch https://review.openstack.org/#/c/233017/
so that we're able to track where and when these files appear. Right now
I only understood that they really appear during the devstack run, not
earlier.


In the end, no file seems to be created, so it looks like a problem in devstack: 
https://review.openstack.org/#/c/233584/








This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named 
33). I also was not able to reproduce it on my regular devstack environment.


I've posted a temporary patch https://review.openstack.org/#/c/233017/ 
so that we're able to track where and when these files appear. Right now 
I only understood that they really appear during the devstack run, not 
earlier.






This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:06 PM, Tang Chen wrote:


On 10/09/2015 05:48 PM, Jordan Pittier wrote:

Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen <tangc...@cn.fujitsu.com> wrote:

Hi,

CI systems will run tests for each patch once it is submitted or
modified.
But most CI systems occupy a lot of resources, and take a long time to
run tests (1 or 2 hours for one patch).

I think, not all the patches submitted need to be tested. Even
those patches
with an approved BP and spec may be reworked for 20+ versions. So
I think
CI should support an RFC (Request For Comments) mechanism for
developers
to submit and review the code detail and rework. When the patches are
fully ready, I mean all reviewers have agreed on the
implementation detail,
then CI will test the patches.

So have the humans do the hard work to eventually find out that the
patch breaks the world?


No. Developers of course will run some tests themselves before they
submit patches.


Tests, but not all possible CIs. E.g. in ironic we have 6 devstack-based 
jobs, I don't really expect a submitter to go through them manually. 
Actually, it's an awesome feature of our CI system that I would not give 
away :)


Also, as a reviewer, I'm not sure I would like to argue about function 
names, while I'm not even sure that this change does not break the world.



It is just a waste of resources if reviewers are discussing where
this function should be,
or what the function should be named. After all these details are agreed
on, run the CI.


For a 20+ version patch-set, maybe 3 or 4 rounds
of tests are enough. Just test the last 3 or 4 versions.

 How do you know, when a new patchset arrives, that it's part of the last
3 or 4 versions?


I think it could work like this:
1. At first, developer submits v1 patch-set with RFC tag. CIs don't run.
2. After several versions reworked, like v5, v6, most reviewers have
agreed on the implementation
 is OK. Then submit v7 without RFC tag. Then CIs run.
3. After 3, 4 rounds of tests, v10 patch-set could be merged.
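
The gating decision in step 1 could be as simple as checking the commit
subject for the tag, as testers in the kernel/qemu workflow do (a sketch
with a made-up helper name, not a proposal for the actual Zuul/Jenkins
implementation):

```python
def ci_should_run(commit_message):
    """Return False while the patch is still marked as a request for
    comments, so CI jobs are skipped until the tag is dropped."""
    subject = commit_message.splitlines()[0]
    return '[RFC]' not in subject

should_run = ci_should_run("[RFC] Rework the scheduler helper\n\nDetails...")
```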

Thanks.



This can significantly reduce CI overload.

This workflow appears in many other OSS communities, such as Linux
kernel,
qemu and libvirt. Testers won't test patches with a [RFC] tag in
the commit message.
So I want to enable CI to support a similar mechanism.

I'm not sure if it is a good idea. Please help to review the
following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party CI for Cinder. The amount of time to set up,
operate and watch over the CI results costs way more than the 1 or 2
servers it takes to run the jobs. So, I don't want to be a party pooper
here, but in my opinion I am not sure it's worth the effort.

Note: I don't know about nova or neutron.

Jordan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.



This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Dmitry Tantsur

On 10/08/2015 11:47 PM, Jim Rollenhagen wrote:

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.


+2



I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.


+2



Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Dmitry Tantsur

On 09/30/2015 03:15 PM, Ryan Brown wrote:

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public API's for OpenStack
libraries? As I am working on refactoring and updating tripleo-common
I have
to grep through the projects I know that use it to make sure I don't
break
anything.


The API working group exists, but they focus on REST APIs so they don't
have any guidelines on library APIs.


Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public
might be too broad. For example, that would make all of libutils
ostensibly a "stable" interface. I don't think that's what we want,
especially this early in the lifecycle.

In heatclient, we present "heatclient.client" and "heatclient.exc"
modules as the main public API, and put versioned implementations in
modules.


I'd recommend avoiding things like 'heatclient.client', as in a big 
application it would lead to imports like


 from heatclient import client as heatclient

:)

What I did for ironic-inspector-client was to make a couple of the most 
important things available directly on the ironic_inspector_client top-level 
module, and everything else available under ironic_inspector_client.v1 
(modulo some legacy).




heatclient
|- client
|- exc
\- v1
   |- client
   |- resources
   |- events
   |- services

I think versioning the public API is the way to go, since it will make
it easier to maintain backwards compatibility while new needs/uses evolve.
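The top-level dispatch behind such a layout can be sketched in a few lines (class and version names are simplified stand-ins, not heatclient's actual code):

```python
# Sketch of the version-dispatch pattern: a thin top-level factory
# routes callers to versioned implementations, so only the factory
# (and the exceptions module) needs to stay stable. Names here are
# simplified stand-ins, not heatclient's actual code.

class V1Client(object):
    def __init__(self, endpoint, token=None):
        self.endpoint = endpoint
        self.token = token

_VERSION_MAP = {'1': V1Client}

def Client(version, *args, **kwargs):
    """Public factory: the single entry point consumers import."""
    try:
        client_class = _VERSION_MAP[str(version)]
    except KeyError:
        raise ValueError('Unsupported API version: %s' % version)
    return client_class(*args, **kwargs)
```

A caller then writes `Client('1', endpoint=url, token=token)` and never imports the versioned module directly, so the v1 internals stay free to change.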


++






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Dmitry Tantsur

On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently I got pointed out several times [1][2] that I violate our commit message 
requirement [3] for the message lines that says: "Subsequent lines should be 
wrapped at 72 characters.”

I agree that very long commit message lines can be bad, f.e. if they are 200+ 
chars. But <= 79 chars?.. Don’t think so. Especially since we have 79 chars 
limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.


+1, I never understood it actually. I know some folks even question 80 
chars for the code, so having 72 chars for commit messages looks a bit 
weird to me.
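For reference, enforcing either limit mechanically is a few lines of Python; a hypothetical commit-msg hook (not an official hacking check) might look like:

```python
# Sketch of a commit-msg hook flagging body lines over the limit.
# 72 is the wiki's current number; this thread proposes 79/80 instead.
import sys

LIMIT = 72

def long_lines(message, limit=LIMIT):
    """Return (line_number, line) pairs whose line exceeds the limit."""
    return [(num, line)
            for num, line in enumerate(message.splitlines(), start=1)
            if len(line) > limit]

def main(path):
    # git passes the commit message file as the first argument
    with open(path) as f:
        for num, line in long_lines(f.read()):
            print('commit message line %d is %d characters long'
                  % (num, len(line)))

if __name__ == '__main__' and len(sys.argv) > 1:
    main(sys.argv[1])
```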




Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Dmitry Tantsur
2015-09-24 17:38 GMT+02:00 Ionut Balutoiu 
:

> Hello, guys!
>
> I'm starting a new implementation for a dhcp provider,
> mainly to be used for Ironic standalone. I'm planning to
> push it upstream. I'm using isc-dhcp-server service from
> Linux. So, when an Ironic node is started, the ironic-conductor
> writes in the config file the MAC-IP reservation for that node and
> reloads dhcp service. I'm using a SQL database as a backend to store
> the dhcp reservations (I think it is cleaner and it should allow us
> to have more than one DHCP server). What do you think about my
> implementation ?
>

What you describe slightly resembles how ironic-inspector works. It needs
to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
rules giving (or not giving) access to the dnsmasq instance. I wonder if we
may find some common code between these 2, but I definitely don't want to
reinvent Neutron :) I'll think about it after seeing your spec and/or code,
I'm already looking forward to them!
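To make the reservation-file idea concrete, here is a minimal sketch (the file path, entry format and reload command are assumptions for illustration; a real provider would implement ironic's DHCP provider interface and persist reservations in SQL as described):

```python
# Sketch of the reservation-file approach: append a host entry for a
# node's MAC/IP and reload the server. The path, service name and
# entry format here are assumptions for illustration.
import subprocess

HOSTS_FILE = '/etc/dhcp/ironic-hosts.conf'

def format_reservation(node_name, mac, ip):
    """Build an isc-dhcp-server host declaration for one node."""
    return ('host %s { hardware ethernet %s; fixed-address %s; }\n'
            % (node_name, mac, ip))

def add_reservation(node_name, mac, ip, hosts_file=HOSTS_FILE):
    with open(hosts_file, 'a') as f:
        f.write(format_reservation(node_name, mac, ip))
    # make the dhcp service pick up the new reservation
    subprocess.check_call(['systemctl', 'reload', 'isc-dhcp-server'])
```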


> Also, I'm not sure how can I scale this out to provide HA/failover.
> Do you guys have any idea ?
>
> Regards,
> Ionut Balutoiu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Stepping down from IPA core

2015-09-21 Thread Dmitry Tantsur

On 09/21/2015 05:49 PM, Josh Gachnang wrote:

Hey y'all, it's with a heavy heart I have to announce I'll be stepping
down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
healthcare startup (Triggr Health) and won't have the time to dedicate
to being an effective OpenStack reviewer.

Ever since the OnMetal team proposed IPA all the way back in the
Icehouse midcycle, this community has been welcoming, helpful, and all
around great. You've all helped me grow as a developer with your in
depth and patient reviews, for which I am eternally grateful. I'm really
sad I won't get to see everyone in Tokyo.


I'm a bit sad to hear it :) it was a big pleasure to work with you. Have 
the best of luck in your new challenges!




I'll still be on IRC after leaving, so feel free to ping me for any
reason :)

- JoshNang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Liberty soft freeze

2015-09-18 Thread Dmitry Tantsur
Note for inspector folks: this applies to us as well. Lets land whatever 
we have planned for 2.2.0 and fix any issues arising.


Please see milestone page for list of things that we still need to 
review/fix:

https://launchpad.net/ironic-inspector/+milestone/2.2.0

On 09/18/2015 03:50 AM, Jim Rollenhagen wrote:

Hi folks,

It's time for our soft freeze for Liberty, as planned. Core reviewers
should do their best to refrain from landing risky code. We'd like to
ship 4.2.0 as the candidate for stable/liberty next Thursday, September
24.

Here's the things we still want to complete in 4.2.0:
https://launchpad.net/ironic/+milestone/4.2.0

Note that zapping is no longer there; sadly, after lots of writing and
reviewing code, we want to rethink how we implement this. We've talked
about being able to go from MANAGEABLE->CLEANING->MANAGEABLE with a list
of clean steps. Same idea, but without the word zapping, the new DB
fields, etc. At any rate, it's been bumped to Mitaka to give us time to
figure it out.

This may also mean in-band RAID configuration may not land; the
interface in general did land, and drivers may do out-of-band
configuration. We assumed that in-band RAID would be done through
zapping. However, if folks can agree on how to do it during automated
cleaning, I'd be happy to get that in Liberty if the code is not too
risky. If it is risky, we'll need to punt it to Mitaka as well.

I'd like to see the rest of the work on the milestone completed during
Liberty, and I hope everyone can jump in and help us to do that.

Thanks in advance!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur
It's easier
> to
> > > > identify the causes of test failures if we have one patch at a time.
> > >
> > > Hi Doug!
> > >
> > > When is the last and final deadline for doing all this for
> > > not-so-important and non-release:managed projects like
> ironic-inspector?
> > > We still lack some Liberty features covered in
> > > python-ironic-inspector-client. Do we have time until end of week to
> > > finish them?
> >
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the RC1.
> >
> > https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> >
> > Doug
> >
> > >
> > > Sorry if you hear this question too often :)
> > >
> > > Thanks!
> > >
> > > >
> > > > Doug
> > > >
> > > >
> __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty

2015-09-15 Thread Dmitry Tantsur

On 09/15/2015 05:02 PM, Dmitry Tantsur wrote:

Hi folks!

As you can see below, we have to make the final release of
python-ironic-inspector-client really soon. We have 2 big missing parts:

1. Introspection rules support.
I'm working on it: https://review.openstack.org/#/c/223096/
This required a substantial refactoring, so that our client does not
become a complete mess: https://review.openstack.org/#/c/223490/

2. Support for getting introspection data. John (trown) volunteered to
do this work.

I'd like to ask the inspector team to pay close attention to these
patches, as the deadline for them is Friday (preferably European time).

Next, please have a look at the milestone page for ironic-inspector
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an
assignee. If you'd like to volunteer for something there, please assign
it to yourself. Our deadline is next Thursday, but it would be really
good to finish it earlier next week to dedicate some time to testing.


Forgot an important thing: we have 2 outstanding IPA patches as well:
https://review.openstack.org/#/c/222605/
https://review.openstack.org/#/c/223054



Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
client library releases needed
Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions)

To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so
I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the
version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
>
__

> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur

Hi folks!

As you can see below, we have to make the final release of 
python-ironic-inspector-client really soon. We have 2 big missing parts:


1. Introspection rules support.
   I'm working on it: https://review.openstack.org/#/c/223096/
   This required a substantial refactoring, so that our client does not 
become a complete mess: https://review.openstack.org/#/c/223490/


2. Support for getting introspection data. John (trown) volunteered to 
do this work.


I'd like to ask the inspector team to pay close attention to these 
patches, as the deadline for them is Friday (preferably European time).


Next, please have a look at the milestone page for ironic-inspector 
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an 
assignee. If you'd like to volunteer for something there, please assign 
it to yourself. Our deadline is next Thursday, but it would be really 
good to finish it earlier next week to dedicate some time to testing.


Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
client library releases needed

Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions) 


To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Dmitry Tantsur

On 09/14/2015 04:18 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:

PTLs and release liaisons,

In order to keep the rest of our schedule for the end-of-cycle release
tasks, we need to have final releases for all client libraries in the
next day or two.

If you have not already submitted your final release request for this
cycle, please do that as soon as possible.

If you *have* already submitted your final release request for this
cycle, please reply to this email and let me know that you have so I can
create your stable/liberty branch.

Thanks!
Doug


I forgot to mention that we also need the constraints file in
global-requirements updated for all of the releases, so we're actually
testing with them in the gate. Please take a minute to check the version
specified in openstack/requirements/upper-constraints.txt for your
libraries and submit a patch to update it to the latest release if
necessary. I'll do a review later in the week, too, but it's easier to
identify the causes of test failures if we have one patch at a time.


Hi Doug!

When is the last and final deadline for doing all this for 
not-so-important and non-release:managed projects like ironic-inspector? 
We still lack some Liberty features covered in 
python-ironic-inspector-client. Do we have time until end of week to 
finish them?


Sorry if you hear this question too often :)

Thanks!



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Suggestion to split install guide

2015-09-14 Thread Dmitry Tantsur

On 09/14/2015 03:54 PM, Ruby Loo wrote:



On 11 September 2015 at 04:56, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi all!

Our install guide is huge, and I've just approved even more text for
it. WDYT about splitting it into "Basic Install Guide", which will
contain the bare minimum for running ironic and deploying instances, and
"Advanced Install Guide", which will contain the following things:
1. Using Bare Metal service as a standalone service
2. Enabling the configuration drive (configdrive)
3. Inspection
4. Trusted boot
5. UEFI

Opinions?


Thanks for bringing this up Dmitry. Any idea whether there is some sort
of standard format/organization of install guides for the other
OpenStack projects?


Not sure

> And/or maybe we should ask Ops folks (non developers :-))


Fair enough. I've proposed basic vs advanced split based on what we did 
for TripleO downstream, which was somewhat user-driven.




--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Suggestion to split install guide

2015-09-11 Thread Dmitry Tantsur

Hi all!

Our install guide is huge, and I've just approved even more text for it. 
WDYT about splitting it into "Basic Install Guide", which will contain 
the bare minimum for running ironic and deploying instances, and "Advanced 
Install Guide", which will contain the following things:

1. Using Bare Metal service as a standalone service
2. Enabling the configuration drive (configdrive)
3. Inspection
4. Trusted boot
5. UEFI

Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-09-10 Thread Dmitry Tantsur

On 09/09/2015 06:48 PM, Jim Rollenhagen wrote:

On Tue, Sep 01, 2015 at 03:47:03PM -0500, Dean Troyer wrote:

[late catch-up]

On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann 
wrote:


Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:

On 24/08/15 18:19 +, Tim Bell wrote:



From a user perspective, where bare metal and VMs are just different
flavors (with varying capabilities), can we not use the same commands
(server create/rebuild/...)? Containers will create the same conceptual
problems.


OSC can provide a converged interface, but if we just replace '$ ironic '
by '$ openstack baremetal ', this seems to be a missed opportunity to
hide the complexity from the end user.


Can we re-use the existing server structures ?




I've wondered about how users would see doing this, we've done it already
with the quota and limits commands (blurring the distinction between
project APIs).  At some level I am sure users really do not care about some
of our project distinctions.



To my knowledge, overriding or enhancing existing commands like that

is not possible.

You would have to do it in tree, by making the existing commands
smart enough to talk to both nova and ironic, first to find the
server (which service knows about something with UUID XYZ?) and
then to take the appropriate action on that server using the right
client. So it could be done, but it might lose some of the nuance
between the server types by munging them into the same command. I
don't know what sorts of operations are different, but it would be
worth doing the analysis to see.



I do have an experimental plugin that hooks the server create command to
add some options and change its behaviour so it is possible, but right now
I wouldn't call it supported at all.  That might be something that we could
consider doing though for things like this.

The current model for commands calling multiple project APIs is to put them
in openstackclient.common, so yes, in-tree.

Overall, though, to stay consistent with OSC you would map operations into
the current verbs as much as possible.  It is best to think in terms of how
the CLI user is thinking and what she wants to do, and not how the REST or
Python API is written.  In this case, 'baremetal' is a type of server, a
set of attributes of a server, etc.  As mentioned earlier, containers will
also have a similar paradigm to consider.


Disclaimer: I don't know much about OSC or its syntax, command
structure, etc. These may not be well-formed thoughts. :)


With the same disclaimer applied...



While it would be *really* cool to support the same command to do things
to nova servers or do things to ironic servers, I don't know that it's
reasonable to do so.

Ironic is an admin-only API, that supports running standalone or behind
a Nova installation with the Nova virt driver. The API is primarily used
by Nova, or by admins for management. In the case of a standalone
configuration, an admin can use the Ironic API to deploy a server,
though the recommended approach is to use Bifrost[0] to simplify that.
In the case of Ironic behind Nova, users are expected to boot baremetal
servers through Nova, as indicated by a flavor.

So, many of the nova commands (openstack server foo) don't make sense in
an Ironic context, and vice versa. It would also be difficult to
determine if the commands should go through Nova or through Ironic.
The path could be something like: check that Ironic exists, see if user
has access, hence standalone mode (oh wait, operators probably have
access to manage Ironic *and* deploy baremetal through Nova, what do?).


I second this. I'd like also to add that in the case of Ironic, "server 
create" may actually involve several complex actions that do not map to 
'nova boot'. First of all we create a node record in the database, second we 
check its power credentials, third we do properties inspection, and finally 
we do cleaning. None of these make any sense in a virtual environment.




I think we should think of "openstack baremetal foo" as commands to
manage the baremetal service (Ironic), as that is what the API is
primarily intended for. Then "openstack server foo" just does what it
does today, and if the flavor happens to be a baremetal flavor, the user
gets a baremetal server.

// jim

[0] https://github.com/openstack/bifrost

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-09 Thread Dmitry Tantsur

On 09/09/2015 12:15 PM, Dougal Matthews wrote:

Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet had
a release[1]. I am not familiar with the release process - what do we need to
do to make sure it is regularly released with other TripleO packages?


I think this is a good start: 
https://github.com/openstack/releases/blob/master/README.rst




We will also want to do something similar with the new python-tripleoclient
which doesn't seem to be registered on PyPI yet at all.


And instack-undercloud.



Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common







Re: [openstack-dev] [all] Mitaka Design Summit - Proposed slot allocation

2015-09-04 Thread Dmitry Tantsur

On 09/04/2015 12:14 PM, Thierry Carrez wrote:

Hi PTLs,

Here is the proposed slot allocation for every "big tent" project team
at the Mitaka Design Summit in Tokyo. This is based on the requests the
liberty PTLs have made, space availability and project activity &
collaboration needs.

We have a lot less space (and time slots) in Tokyo compared to
Vancouver, so we were unable to give every team what they wanted. In
particular, there were far more workroom requests than we have
available, so we had to cut down on those quite heavily. Please note
that we'll have a large lunch room with roundtables inside the Design
Summit space that can easily be abused (outside of lunch) as space for
extra discussions.

Here is the allocation:

| fb: fishbowl 40-min slots
| wr: workroom 40-min slots
| cm: Friday contributors meetup
| | day: full day, morn: only morning, aft: only afternoon

Neutron: 12fb, cm:day
Nova: 14fb, cm:day
Cinder: 5fb, 4wr, cm:day
Horizon: 2fb, 7wr, cm:day   
Heat: 4fb, 8wr, cm:morn
Keystone: 7fb, 3wr, cm:day
Ironic: 4fb, 4wr, cm:morn
Oslo: 3fb, 5wr
Rally: 1fb, 2wr
Kolla: 3fb, 5wr, cm:aft
Ceilometer: 2fb, 7wr, cm:morn
TripleO: 2fb, 1wr, cm:full
Sahara: 2fb, 5wr, cm:aft
Murano: 2wr, cm:full
Glance: 3fb, 5wr, cm:full   
Manila: 2fb, 4wr, cm:morn
Magnum: 5fb, 5wr, cm:full   
Swift: 2fb, 12wr, cm:full   
Trove: 2fb, 4wr, cm:aft
Barbican: 2fb, 6wr, cm:aft
Designate: 1fb, 4wr, cm:aft
OpenStackClient: 1fb, 1wr, cm:morn
Mistral: 1fb, 3wr   
Zaqar: 1fb, 3wr
Congress: 3wr
Cue: 1fb, 1wr
Solum: 1fb
Searchlight: 1fb, 1wr
MagnetoDB: won't be present

Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA) 
PuppetOpenStack: 2fb, 3wr
Documentation: 2fb, 4wr, cm:morn
Quality Assurance: 4fb, 4wr, cm:full
OpenStackAnsible: 2fb, 1wr, cm:aft
Release management: 1fb, 1wr (shared meetup with QA)
Security: 2fb, 2wr
ChefOpenstack: will camp in the lunch room all week
App catalog: 1fb, 1wr
I18n: cm:morn
OpenStack UX: 2wr
Packaging-deb: 2wr
Refstack: 2wr
RpmPackaging: 1fb, 1wr

We'll start working on laying out those sessions over the available
rooms and time slots. If you have constraints (I already know
searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
Manila with Cinder, Solum with Magnum...) please let me know, we'll do
our best to limit them.



Would be cool to avoid conflicts between Ironic and TripleO.



Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 04:36 PM, Lucas Alvares Gomes wrote:

Hi,


If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please response there.



I have been thinking about doing it for Ironic as well, so I'm looking
for options. IMHO after using WSME I would think that one of the most
important criteria we should start looking at is whether the project has a
healthy, sizable and active community around it. It's crucial to use
libraries that are being maintained.

So at the present moment the [micro]framework that comes to my mind -
without any testing or prototype of any sort - is Flask.


We're using Flask in inspector. We have had a nice experience with it, but 
note that inspector does not have a very complex API :)
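
For readers unfamiliar with the kind of service being discussed: below is a
minimal, hypothetical sketch of a small JSON API written with Flask. The
endpoint paths, payloads, and in-memory store are invented for illustration;
this is not ironic-inspector's actual code or API.

```python
# Hypothetical sketch of a tiny Flask JSON service. Routes and payload
# shapes are invented; only the Flask usage itself is real.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory store standing in for a real database.
STATUSES = {}

@app.route('/v1/introspection/<node_id>', methods=['POST'])
def start_introspection(node_id):
    # Record that processing has started; 202 = accepted for processing.
    STATUSES[node_id] = {'finished': False, 'error': None}
    return jsonify({}), 202

@app.route('/v1/introspection/<node_id>', methods=['GET'])
def get_status(node_id):
    status = STATUSES.get(node_id)
    if status is None:
        return jsonify({'error': 'not found'}), 404
    return jsonify(status)
```

For an API this small, a microframework keeps the routing and serialization
visible in one file, which is part of the appeal being discussed here.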




Cheers,
Lucas







Re: [openstack-dev] [nova][manila] "latest" microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 09:34 AM, Valeriy Ponomaryov wrote:

Dmitriy,

New tests that cover new functionality already know which API version
they require. So, even in testing, it is not needed. All other existing
tests do not require an API update.


Yeah, but you can't be sure that your change does not break the world 
until you merge it and start updating tests. Probably it's not that 
important for projects that have their integration tests in-tree, though.




So, I raise my hand for restricting "latest".

On Fri, Aug 28, 2015 at 10:20 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the
implementation
from Nova. I really like the feature! However I noticed that
it's legal
for clients to transmit "latest" instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for "latest" and forcing clients to
request
a specific version (or accept the default).


I think "latest" is needed for integration testing. Otherwise you
have to update your tests each time new version is introduced.



Allowing clients to request the "latest" microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API
implementation,
not future implementations. Transmitting "latest" implies an
assumption
that the future is not so different from the present. This
assumption
about future behavior is precisely what we don't want clients to
make,
because it prevents forward progress. One of the main reasons
microversions is a valuable feature is because it allows forward
progress by letting us make major changes without breaking old
clients.

If clients are allowed to assume that nothing will change too
much in
the future (which is what asking for "latest" implies) then the
server
will be right back in the situation it was trying to get out of
-- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting "latest" is
better than
transmitting the highest version that existed at the time the
client was
written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself
to never
making any backward-compatibility-breaking change of any kind.










--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com








Re: [openstack-dev] [nova][manila] "latest" microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the implementation
from Nova. I really like the feature! However I noticed that it's legal
for clients to transmit "latest" instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for "latest" and forcing clients to request
a specific version (or accept the default).


I think "latest" is needed for integration testing. Otherwise you have 
to update your tests each time new version is introduced.




Allowing clients to request the "latest" microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API implementation,
not future implementations. Transmitting "latest" implies an assumption
that the future is not so different from the present. This assumption
about future behavior is precisely what we don't want clients to make,
because it prevents forward progress. One of the main reasons
microversions is a valuable feature is because it allows forward
progress by letting us make major changes without breaking old clients.

If clients are allowed to assume that nothing will change too much in
the future (which is what asking for "latest" implies) then the server
will be right back in the situation it was trying to get out of -- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting "latest" is better than
transmitting the highest version that existed at the time the client was
written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to never
making any backward-compatibility-breaking change of any kind.
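
Ben's argument can be made concrete with a small sketch. The version
negotiation below is an invented toy, not Nova's or Manila's actual
implementation, and the version numbers and header semantics are assumptions
chosen purely to illustrate why pinning beats "latest":

```python
# Toy model of microversion negotiation. A pinned client keeps getting
# the semantics it was written against; a "latest" client silently moves
# to whatever the server now supports.
SERVER_MAX = (2, 7)   # the server's newest microversion (invented value)

def negotiate(requested):
    """Return the microversion the server will speak for this request."""
    if requested == 'latest':
        # The client gets behavior it was never written against.
        return SERVER_MAX
    major, minor = (int(part) for part in requested.split('.'))
    if (major, minor) > SERVER_MAX:
        # A pinned client asking for a future version fails loudly,
        # instead of getting undefined behavior.
        raise ValueError('version not supported')
    return (major, minor)

# A client pinned to 2.4 keeps 2.4 semantics even after a server upgrade,
# while a "latest" client is moved to 2.7 without asking.
assert negotiate('2.4') == (2, 4)
assert negotiate('latest') == (2, 7)
```

The failure mode Ben describes is exactly the `'latest'` branch: the contract
the client relies on changes out from under it with no error raised.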







Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur
> > this new work flow. But I'd rather read a technical explanation than an
> > emotional one. What I want to know for example is what it will look
> > like when one registers a node in ACTIVE state directly? What about the
> > internal driver fields? What about the TFTP/HTTP environment that is
> > built as part of the DEPLOY process ? What about the ports in Neutron
> > ? and so on...
>
> Emotions matter to users. You're right that a technical argument helps
> us get our work done efficiently. But don't forget _why Ironic exists_.
> It's not for you to develop on, and it's not just for Nova to talk to.
> It's for your users to handle their datacenter in the wee hours without
> you to hold their hand. Make that hard, get somebody fired or burned
> out, and no technical argument will ever convince them to use Ironic
> again.
>

You care only about users at your technical level in OpenStack. For other
(and the majority of) users the situation is the opposite: they want to be
told that they've screwed up their SSH credentials (and they *constantly* do)
as soon as possible. If they are not, their nodes, for example, will
silently go into maintenance, and then nova will return a cryptic "no valid
host found" error.


>
> I think I see the problem though. Ironic needs a new mission statement:
>
> To produce an OpenStack service and associated libraries capable of
> managing and provisioning physical machines, and to do this in a
> security-aware and fault-tolerant manner.
>
> Mission accomplished. It's been capable of doing that for a long time.
> Perhaps the project should rethink whether _users_ should be considered
> in a new mission statement.
>

>



--
Dmitry Tantsur


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur

On 08/27/2015 11:40 AM, Lucas Alvares Gomes wrote:

On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
 wrote:

My apologies for not expressing my thoughts on this matter
sooner, however I've had to spend some time collecting my
thoughts.

To me, it seems like we do not trust our users.  Granted,
when I say users, I mean administrators who likely know more
about the disposition and capabilities of their fleet than
could ever be discovered or inferred via software.

Sure, we have other users, mainly in the form of consumers,
asking Ironic for hardware to be deployed, but the driver for
adoption is who feels the least amount of pain.

API versioning aside, I have to ask the community, what is
more important?

- An inflexible workflow that forces an administrator to
always have a green field, and to step through a workflow
that we've dictated, which may not apply to their operational
scenario, ultimately driving them to write custom code to
inject "new" nodes into the database directly, which will
surely break from time to time, causing them to hate Ironic
and look for a different solution.

- A happy administrator that has the capabilities to do their
job (and thus manage the baremetal node wherever it is in the
operator's lifecycle) in an efficient fashion, thus causing
them to fall in love with Ironic.



I'm sorry, I find the language used in this reply very offensive.
That's not even a real question, due to the alternatives you're basically
asking the community "What's more important, be happy or be sad ? Be
efficient or not efficient?"

It's not about an "inflexible workflow" which "dictates" what people
do making them "hate" the project. It's about finding a common pattern
for an work flow that will work for all types of machines, it's about
consistency, it's about keeping the history of what happened to that
node. When a node is on a specific state you know what it's been
through so you can easily debug it (i.e an ACTIVE node means that it
passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
Even if some of the states are non-op for a given driver, it's a clear
path).

Think about our API: it's not that we don't allow vendors to add every
new feature they have to the core part of the API because we don't
trust them or we think that their shiny features are not worthy. We
don't do that to make it consistent, to have an abstraction layer that
will work the same for all types of hardware.

I mean it when I said I want to have a fresh mind to read the proposal
of this new work flow. But I'd rather read a technical explanation than an
emotional one. What I want to know, for example, is what it will look
like when one registers a node in ACTIVE state directly? What about the
internal driver fields? What about the TFTP/HTTP environment that is
built as part of the DEPLOY process? What about the ports in Neutron?
And so on...


I agree with everything Lucas said.

I also want to point out that it's completely unrealistic to expect even 
the majority of Ironic users to have at least some idea of how Ironic 
actually works. And definitely not all our users are Ironic developers.


I routinely help people who have never used Ironic before, and they don't 
have problems with running 1, 2, or 10 commands, if they're written in the 
documentation and clearly explained. What they do have problems with is 
several ways of doing the same thing, with different ways being broken 
under different conditions.
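
The value of the single well-known path Lucas describes (MANAGEABLE ->
CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE) can be sketched as a toy linear
state check. This is emphatically not Ironic's real state machine (which is
a graph with many more states and transitions); the state names and ordering
here are a simplified assumption used only to show why a fixed path aids
debugging:

```python
# Toy linear provision-state path, loosely modeled on the path quoted
# above. With one well-known path, a node's current state tells you
# everything it has already been through.
PATH = ['ENROLL', 'MANAGEABLE', 'CLEANING', 'AVAILABLE',
        'DEPLOYING', 'ACTIVE']

def states_passed(current):
    """Every state a node in `current` is known to have passed through."""
    return PATH[:PATH.index(current) + 1]

# An ACTIVE node is guaranteed to have gone through cleaning and deploy,
# which is exactly what makes it easy to reason about when debugging:
assert 'CLEANING' in states_passed('ACTIVE')
assert 'DEPLOYING' in states_passed('ACTIVE')
```

Registering a node directly in ACTIVE would break this invariant: the state
would no longer imply its history, which is the consistency argument above.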




Cheers,
Lucas







Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Dmitry Tantsur

On 08/26/2015 11:05 AM, Lucas Alvares Gomes wrote:

On Wed, Aug 26, 2015 at 9:27 AM, Chris Dent  wrote:

On Wed, 26 Aug 2015, Dmitry Tantsur wrote:


Note that this is an API breaking change, which can potentially break
random users of all projects using wsme. I think we should communicate this
point a bit louder, and I also believe it should have warranted a major
version bump.



Yeah, Lucas and I weren't actually aware of the specific impact of that
change until after it was released; part of the danger of being cores-on-
demand rather than cores-by-desire[1].

I'll speak with him and dhellman later this morning and figure out
the best thing to do.



+1, yeah I kind of agree with the major version bump. But it's also
important to note that Ironic, which was affected by that, was relying
on being able to POST nonexistent fields to create resources, and WSME
would just ignore those on versions < 0.8.0. That's a legitimate bug
that has been fixed in WSME (and projects shouldn't have relied on
that in the first place).


Note that I'm not talking about our projects, I'm talking about our 
users, whose automation might break after the switch.




Cheers,
Lucas







Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Dmitry Tantsur

On 08/25/2015 07:38 PM, Chris Dent wrote:


WSME version 0.8.0 was released today with several fixes to error
handling and error messages. These fixes make WSME behave more in
the way it says it would like to behave (and should behave) with
regard to input validation and HTTP handling. You want these
changes.

Unfortunately we've discovered since the release that it causes test
failures in Ceilometer, Aodh and Ironic so it may also cause some
issues in other services.

The two main issues are:

* More detailed input validation can result in the body of a 4xx
   response having changed to reflect increased detail of the
   problem. If you have tests which check this response body, they
   may now break.

* Formerly, input validation would allow unused fields to pass through
   and be dropped. This is now, as a virtue of more strict processing
   throughout the validation handling, considered a client-side error.


Note that this is an API breaking change, which can potentially break 
random users of all projects using wsme. I think we should communicate 
this point a bit louder, and I also believe it should have warranted a 
major version bump.
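
The second change listed above (strict handling of unknown input fields) can
be illustrated generically. The code below is not WSME's implementation; the
field names and the `ClientError` stand-in for an HTTP 4xx response are
invented to show the lenient-versus-strict behavior difference that broke
callers:

```python
# Generic illustration of the validation behavior change: lenient
# validation silently drops unknown input fields, strict validation
# rejects the request with a client-side error.
KNOWN_FIELDS = {'name', 'driver'}

class ClientError(Exception):
    """Stands in for an HTTP 4xx response."""

def validate(payload, strict):
    unknown = set(payload) - KNOWN_FIELDS
    if unknown and strict:
        raise ClientError('unknown fields: %s' % sorted(unknown))
    # Lenient mode: keep only the fields the API knows about.
    return {k: v for k, v in payload.items() if k in KNOWN_FIELDS}

body = {'name': 'node-1', 'driver': 'ipmi', 'typo_field': 1}
# Old behavior: the stray field is dropped and the request succeeds.
assert validate(body, strict=False) == {'name': 'node-1', 'driver': 'ipmi'}
```

Automation that had been (accidentally) sending extra fields worked under the
lenient behavior and starts failing under the strict one, which is the
user-facing breakage Dmitry is pointing at.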




There may also be situations where a 500 had been returned in the
past but now a more correct status code in the 4xx range is
returned.

Fixes for ceilometer and ironic are pending and may provide some
guidance on fixes other projects might need to do:

* Ironic: https://review.openstack.org/216802
* Ceilometer: https://review.openstack.org/#/c/208467/






[openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts

2015-08-25 Thread Dmitry Tantsur

Hi all!

Please join me in welcoming Sam to our team! He has been doing very 
smart reviews recently, has been contributing core features, and has 
expressed a clear interest in the ironic-inspector project.


Thanks and welcome!



Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-08-24 Thread Dmitry Tantsur

On 08/24/2015 07:25 PM, Brad P. Crochet wrote:

On 24/08/15 17:56 +0200, Dmitry Tantsur wrote:

On 08/24/2015 05:41 PM, Jay Pipes wrote:

On 08/24/2015 08:03 AM, Brad P. Crochet wrote:

I am working on extending the current set of patches that implement
the OSC plugin for Ironic. I would like some discussion/guidance about
a couple of command structures.

Currently provisioning state is set via 'openstack baremetal set
--provision-state [active|deleted|rebuild|inspect|provide|manage]
$NODE'

dtantsur suggests it be a top-level command (with which I concur)
'openstack baremetal [active|delete|rebuild|inspect|provide|manage]
$NODE'

Question there is does that make sense?


I prefer the current CLI command structure.

`openstack baremetal active $NODE` does not make grammatical sense.

`openstack baremetal activate $NODE` would make more sense, but I
actually think the original is easier.


As it is now it's a bit hard to understand what the "openstack baremetal
set" command actually does, as it corresponds to 2 APIs (and I hope
it won't also do node updating, will it?)


I'm not sure what you mean about node updating... Do you mean setting
properties? Because it does that. Can you be more specific about what
you mean?


So is it a replacement for 3 APIs/commands:
ironic node-set-maintenance
ironic node-set-provision-state
ironic node-update
?

If so, that's too much IMO.







Best,
-jay











Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-08-24 Thread Dmitry Tantsur

On 08/24/2015 05:41 PM, Jay Pipes wrote:

On 08/24/2015 08:03 AM, Brad P. Crochet wrote:

I am working on extending the current set of patches that implement
the OSC plugin for Ironic. I would like some discussion/guidance about
a couple of command structures.

Currently provisioning state is set via 'openstack baremetal set
--provision-state [active|deleted|rebuild|inspect|provide|manage]
$NODE'

dtantsur suggests it be a top-level command (with which I concur)
'openstack baremetal [active|delete|rebuild|inspect|provide|manage]
$NODE'

Question there is does that make sense?


I prefer the current CLI command structure.

`openstack baremetal active $NODE` does not make grammatical sense.

`openstack baremetal activate $NODE` would make more sense, but I
actually think the original is easier.


As it is now it's a bit hard to understand what the "openstack baremetal 
set" command actually does, as it corresponds to 2 APIs (and I hope it 
won't also do node updating, will it?)




Best,
-jay






Re: [openstack-dev] [all][release] ACL for library-release team for release:managed projects

2015-08-24 Thread Dmitry Tantsur

On 08/22/2015 11:36 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2015-08-22 16:12:44 +0200:

2015-08-22 0:25 GMT+02:00 Davanum Srinivas :


Folks,

In the governance repo a number of libraries are marked with
release:managed tag:

http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

However, some of these libraries do not have appropriate ACL in the
project:config repo:

http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack

For example a quick scan shows that the following repos are marked
release:managed and do not have the right ACL:
python-kiteclient
python-designateclient
python-ironic-inspector-client



This patch
https://github.com/openstack/governance/commit/ecd007e1a21f00828bde49b7dbd3b9ff41b7b18b
incorrectly added release:managed to python-ironic-inspector-client. It's
not that I'm against switching to managed releases, but it's not the case
now (just like for ironic-inspector itself).

What is a general policy: should we aim for becoming managed or should I
just drop the wrong tag? Is the procedure for the former outlined somewhere?


It's up to you. The neutron team has us managing some of their
repositories, and not others. If you're comfortable creating releases
yourself, the simplest thing to do is remove the tag. If you'd like to
be included in the release summaries at the end of a cycle, we should
get the ACLs set up properly to let the release team take over creating
the tags. The process for requesting tags under that process is
documented in
http://git.openstack.org/cgit/openstack/releases/tree/README.rst if you
want to review that before making a decision.


Thanks! I'm dropping this tag for now [1] with the intention to move to 
managed releases in the next cycle.


[1] https://review.openstack.org/#/c/216251/



Doug



Thanks!


python-manilaclient
os-client-config
automaton
python-zaqarclient

So PTL's, either please fix the governance repo to remove release:managed
or add appropriate ACL in the project-config repo as documented in:
http://docs.openstack.org/infra/manual/creators.html#creation-of-tags

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims













Re: [openstack-dev] [all][release] ACL for library-release team for release:managed projects

2015-08-22 Thread Dmitry Tantsur
2015-08-22 0:25 GMT+02:00 Davanum Srinivas :

> Folks,
>
> In the governance repo a number of libraries are marked with
> release:managed tag:
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
>
> However, some of these libraries do not have appropriate ACL in the
> project:config repo:
>
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack
>
> For example a quick scan shows that the following repos are marked
> release:managed and do not have the right ACL:
> python-kiteclient
> python-designateclient
> python-ironic-inspector-client
>

This patch
https://github.com/openstack/governance/commit/ecd007e1a21f00828bde49b7dbd3b9ff41b7b18b
incorrectly added release:managed to python-ironic-inspector-client. It's
not that I'm against switching to managed releases, but it's not the case
now (just like for ironic-inspector itself).

What is a general policy: should we aim for becoming managed or should I
just drop the wrong tag? Is the procedure for the former outlined somewhere?

Thanks!


> python-manilaclient
> os-client-config
> automaton
> python-zaqarclient
>
> So PTL's, either please fix the governance repo to remove release:managed
> or add appropriate ACL in the project-config repo as documented in:
> http://docs.openstack.org/infra/manual/creators.html#creation-of-tags
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
>
>


--
Dmitry Tantsur


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-19 Thread Dmitry Tantsur

On 08/19/2015 06:42 PM, Derek Higgins wrote:

On 06/08/15 15:01, Dougal Matthews wrote:

- Original Message -

From: "Dan Prince" 
To: "OpenStack Development Mailing List (not for usage questions)"

Sent: Thursday, 6 August, 2015 1:12:42 PM
Subject: Re: [openstack-dev] [TripleO] Moving instack upstream





I would really like to see us rename python-rdomanager-oscplugin. I
think any project having the name "RDO" in it probably doesn't belong
under TripleO proper. Looking at the project there are some distro
specific things... but those are fairly encapsulated (or could be made
so fairly easily).


I agree, it made sense as it was the entrypoint to RDO-Manager. However,
it could easily be called the python-tripleo-oscplugin or similar. The
changes would be really trivial, I can only think of one area that
may be distro specific.


I'm putting the commit together now to pull these repositories into
upstream tripleo are we happy with the name "python-tripleo-oscplugin" ?


Do we really need this "oscplugin" postfix? It may be clear for some of 
us, but I don't that our users know that OSC means OpenStackClient, and 
that oscplugin designates "something that adds features to openstack 
command". Can't we just call it python-tripleo? or maybe even just 
"tripleo"?










