Re: [openstack-dev] [ironic] weekly subteam report

2016-09-13 Thread Dmitry Tantsur

On 09/13/2016 12:52 PM, Pavlo Shchelokovskyy wrote:

Hi all,

On Mon, Sep 12, 2016 at 9:28 PM, Loo, Ruby wrote:

Cross-project:
==
- Infra insists on switching new jobs to Xenial


A small heads up: if we have any gate jobs that use PXE instead of iPXE,
those won't work on Xenial with the current Ironic devstack plugin, due to
some packaging changes made in Ubuntu since about 15.04.

Bug: https://bugs.launchpad.net/ironic/+bug/1611850
Fix on review: https://review.openstack.org/#/c/326024/
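For anyone hitting this before the fix lands, a minimal local.conf sketch
that switches a job to iPXE instead (the plugin variable name is an
assumption from memory; verify it against the Ironic devstack plugin):

    # hedged sketch: enable iPXE in the Ironic devstack plugin so the job
    # avoids the broken PXE packaging on Xenial (variable name assumed)
    [[local|localrc]]
    enable_plugin ironic https://git.openstack.org/openstack/ironic
    IRONIC_IPXE_ENABLED=True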


Thanks for bringing this up! The last time I checked we did use PXE, but 
I'm not sure that's still the case.




Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com 








[openstack-dev] [ironic] Bug smash day: Wed, Sep 14

2016-09-12 Thread Dmitry Tantsur

Hi all!

On Wednesday, Sep 14, 2016 we will try to clean up our bug list: triage 
it, separate RFEs from real bugs, etc. Then we'll find out which bugs must 
be fixed for Newton, and, well, fix them :)


We will start as soon as the first person joins and stop as soon as 
the last person drops off. We will coordinate our efforts in the 
#openstack-ironic channel on Freenode. Please feel free to join at your 
convenience.


See you there!



[openstack-dev] [tripleo] Overriding internal_api network name

2016-09-12 Thread Dmitry Tantsur

Hi folks!

I'm looking into support for multiple overclouds with a shared control 
plane. I'm porting a downstream guide: https://review.openstack.org/368840.


However, this no longer works, probably because the "internal_api" network 
name is hardcoded in ServiceNetMapDefaults: 
https://github.com/openstack/tripleo-heat-templates/blob/dfe74b211267cde7a1da4e1fe9430127eda234c6/network/service_net_map.yaml#L14. 
So the deployment fails with:


CREATE_FAILED resources.RedisVirtualIP: Property error: 
resources.VipPort.properties.network: Error validating value 
'internal_api': Unable to find network with name or id 'internal_api'


Is this a bug, or is there another way to change the network name? I need 
it to avoid overlap between the networks of two overclouds. I'd prefer not 
to override everything from ServiceNetMapDefaults in my network 
environment file.
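For illustration, a hedged sketch of the kind of per-entry override I'd 
like to avoid scaling up to the whole map; the network name 
internal_api_cloud2 is hypothetical, and the exact set of *Network keys 
must be checked against ServiceNetMapDefaults for the release in use:

    # network environment file sketch (keys and new name are illustrative)
    parameter_defaults:
      ServiceNetMap:
        RedisNetwork: internal_api_cloud2
        NovaApiNetwork: internal_api_cloud2
        # ... every other entry that defaults to internal_api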


Thanks!



Re: [openstack-dev] [QA] [Ironic] [Tempest] Problem with paramiko on grenade

2016-09-09 Thread Dmitry Tantsur

On 09/09/2016 02:08 PM, Jim Rollenhagen wrote:

On Fri, Sep 9, 2016 at 7:32 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

On 09/08/2016 11:24 AM, Anton Arefiev wrote:


Hi folks!

We've run into a problem with paramiko in Inspector, similar to [1].
Our grenade job fails on TestNetworkBasicOps.test_network_basic_ops with
[2] or [3] in 90% of cases.



Folks, we're close to the release, and this is becoming critical. We don't
actually care much about this particular test; it's just imposed on us by
the way grenade and tempest are implemented. Is there a way to disable it,
so that we can at least release something?


We've merged a hack into ironic-inspector to prevent the offending test 
from running, so this is no longer critical. Still, I'd love to see it 
fixed properly.
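For reference, such a hack could take roughly this shape, assuming the job 
is driven by devstack-gate and its DEVSTACK_GATE_TEMPEST_REGEX variable; 
this is a hedged sketch, not the actual merged change:

    # narrow tempest's test selection so the flaky scenario test is never
    # picked up; the regex is illustrative
    export DEVSTACK_GATE_TEMPEST_REGEX='^(?!.*test_network_basic_ops).*$'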




Any hints are really appreciated.



It looks like reverting fix [4] solves the problem; at least grenade passed
a few times with the reverting patch. But that will bring back the other
problems which the mentioned patch fixes, so we need help with this.



Yeah, so far it seems like everything we do about this bit breaks somebody
:(


I think we should land the revert - the fix at [4] was fixing out-of-tree code
and problems not seen in upstream CI. With the release so close, and inspector's
gate dead in the water, I think we should revert it and find a better way to fix
the problem folks saw there.


I guess the real fix will be part of https://review.openstack.org/367478



// jim





Any suggestions are welcome.
Thanks in advance.

[1] https://bugs.launchpad.net/tempest/+bug/1615659
[2]

http://logs.openstack.org/79/358779/2/check/gate-grenade-dsvm-ironic-inspector/8b5c901/logs/grenade.sh.txt.gz#_2016-09-08_08_36_02_408
[3]

http://logs.openstack.org/95/352295/9/check/gate-grenade-dsvm-ironic-inspector/b6e3aa7/console.html#_2016-09-08_08_12_20_050120
[4] https://review.openstack.org/#/c/358610/

--
Best regards,
Anton Arefiev
Mirantis, Inc.














Re: [openstack-dev] [QA] [Ironic] [Tempest] Problem with paramiko on grenade

2016-09-09 Thread Dmitry Tantsur

On 09/08/2016 11:24 AM, Anton Arefiev wrote:

Hi folks!

We've run into a problem with paramiko in Inspector, similar to [1].
Our grenade job fails on TestNetworkBasicOps.test_network_basic_ops with
[2] or [3] in 90% of cases.


Folks, we're close to the release, and this is becoming critical. We don't 
actually care much about this particular test; it's just imposed on us 
by the way grenade and tempest are implemented. Is there a way to 
disable it, so that we can at least release something?


Any hints are really appreciated.



It looks like reverting fix [4] solves the problem; at least grenade passed
a few times with the reverting patch. But that will bring back the other
problems which the mentioned patch fixes, so we need help with this.


Yeah, so far it seems like everything we do about this bit breaks 
somebody :(




Any suggestions are welcome.
Thanks in advance.

[1] https://bugs.launchpad.net/tempest/+bug/1615659
[2]
http://logs.openstack.org/79/358779/2/check/gate-grenade-dsvm-ironic-inspector/8b5c901/logs/grenade.sh.txt.gz#_2016-09-08_08_36_02_408
[3]
http://logs.openstack.org/95/352295/9/check/gate-grenade-dsvm-ironic-inspector/b6e3aa7/console.html#_2016-09-08_08_12_20_050120
[4] https://review.openstack.org/#/c/358610/

--
Best regards,
Anton Arefiev
Mirantis, Inc.








Re: [openstack-dev] [release] Release countdown for week R-3, 12 - 16 Sept -- Release Candidate Deadline

2016-09-09 Thread Dmitry Tantsur

On 09/08/2016 08:01 PM, Doug Hellmann wrote:

Focus
-

All teams should be wrapping up their work to prepare release
candidates for the end of Newton.

General Notes
-

The release candidate deadline for Newton releases is 15 Sept. All
projects following cycle-based release models that want to include
a deliverable in Newton should have a release prepared by 15 Sept.

We will create stable/newton branches for milestone-based projects
when the release candidate is tagged.

Release Actions
---

Be prepared to have a release candidate tagged by 15 Sept, but wait
as late as possible to include as many bug fixes and translations
as possible. This also avoids having a second release candidate
follow quickly after the first, which would signal to consumers that
they may want to ignore the initial release candidates.

Projects not following the milestone-based release model that want
stable/newton branches created should include that information in
the commit message for their release request. Remember, we always
create stable branches from tagged commits, so we need the tag to
exist before we branch.
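As an illustration, such a release request is a patch adding a stanza like 
the following to a deliverable file in openstack/releases (the version and 
hash below are made up for the sketch); per the note above, the 
stable/newton branch request goes into the commit message:

    # deliverables/newton/python-ironicclient.yaml (values illustrative)
    launchpad: python-ironicclient
    releases:
      - version: 1.7.0
        projects:
          - repo: openstack/python-ironicclient
            hash: 0000000000000000000000000000000000000000  # placeholder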

Review the members of your $project-release group in gerrit, based
on the instructions Thierry sent on 15 Aug, to ensure there are team
members able to approve patches on the new branch when it is created.


Doug, Thierry,

Could you please link to the email you're talking about? I can't find 
anything like that either in my inbox or at 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/author.html.




Watch for translation patches and merge them quickly to ensure we
have as many user-facing strings translated as possible in the
release candidates.

Liaisons for projects with independent deliverables should import
the release history by preparing patches to openstack/releases.

Important Dates
---

Newton RC-1, 15 Sept.

Newton final release, 6 Oct.

Newton release schedule: http://releases.openstack.org/newton/schedule.html







Re: [openstack-dev] [ironic] Future of cleaning when in maintenance mode

2016-09-08 Thread Dmitry Tantsur

On 09/07/2016 05:25 PM, Dmitry Tantsur wrote:

Hi all!

Today, while playing with my installation, I noticed that we do try to run
cleaning for nodes in maintenance mode. This leads to a somewhat
confusing result: because we no-op heartbeats for such nodes, cleaning
gets stuck in "clean wait" forever [1].

However, it seems like some folks find this a convenient feature: this way
they can ask Ironic to boot the ramdisk and make it wait for an operator's
commands. It's a fair use case, but I still find the current situation
confusing.

We've ended up with these few options:

1. Ensure we don't run cleaning for nodes in maintenance mode. I've
proposed a patch [2] banning most provisioning verbs from working in
maintenance mode. However, we still want to allow deleting an instance,
which still results in cleaning.

2. On receiving a heartbeat in {CLEAN,DEPLOY}WAIT and maintenance on
[3], move the node to {CLEAN,DEPLOY}FAIL (optionally without powering it
off, so that IPA stays running).


It looks like this approach does not cause contention, so I'll look into 
it first. I will keep the patch for option #1 around as well, in case we 
want to explore it.
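A minimal sketch of what option #2 could look like around the code in [3]; 
the state names and the 'fail' event follow my reading of the ironic state 
machine, so treat the details as assumptions rather than the actual patch:

    from ironic.common import states

    def heartbeat(task, callback_url):
        """A hedged sketch of option #2, not the actual ironic code."""
        node = task.node
        if node.maintenance and node.provision_state in (states.CLEANWAIT,
                                                         states.DEPLOYWAIT):
            node.last_error = ('Heartbeat received in %s while the node is '
                               'in maintenance' % node.provision_state)
            # 'fail' moves CLEANWAIT -> CLEANFAIL, DEPLOYWAIT -> DEPLOYFAIL;
            # we could deliberately skip the power-off so IPA keeps running
            task.process_event('fail')
            node.save()
            return
        # ... normal heartbeat processing continues here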




3. Document this as a desired feature and don't change anything.

What do you think?

[1] https://bugs.launchpad.net/ironic/+bug/1621006
[2] https://review.openstack.org/#/c/366793/
[3]
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/agent_base_vendor.py#L474-L478.







[openstack-dev] [ironic] Future of cleaning when in maintenance mode

2016-09-07 Thread Dmitry Tantsur

Hi all!

Today, while playing with my installation, I noticed that we do try to run 
cleaning for nodes in maintenance mode. This leads to a somewhat 
confusing result: because we no-op heartbeats for such nodes, cleaning 
gets stuck in "clean wait" forever [1].


However, it seems like some folks find this a convenient feature: this way 
they can ask Ironic to boot the ramdisk and make it wait for an operator's 
commands. It's a fair use case, but I still find the current situation 
confusing.


We've ended up with these few options:

1. Ensure we don't run cleaning for nodes in maintenance mode. I've 
proposed a patch [2] banning most provisioning verbs from working in 
maintenance mode. However, we still want to allow deleting an instance, 
which still results in cleaning.


2. On receiving a heartbeat in {CLEAN,DEPLOY}WAIT and maintenance on 
[3], move the node to {CLEAN,DEPLOY}FAIL (optionally without powering it 
off, so that IPA stays running).


3. Document this as a desired feature and don't change anything.

What do you think?

[1] https://bugs.launchpad.net/ironic/+bug/1621006
[2] https://review.openstack.org/#/c/366793/
[3] 
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/agent_base_vendor.py#L474-L478.




Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-09-07 Thread Dmitry Tantsur

On 09/01/2016 05:48 PM, Emilien Macchi wrote:

On Thu, Aug 25, 2016 at 9:16 AM, Steven Hardy <sha...@redhat.com> wrote:

On Wed, Aug 17, 2016 at 07:20:59AM -0400, James Slagle wrote:

On Wed, Aug 17, 2016 at 4:04 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

However, the current gate system allows running jobs based on the files affected.
So we can also run a scenario covering ironic in the THT check/gate if
puppet/services/*ironic* is affected, but not in other cases. This won't
cover all potential failures, but it would be of great help anyway. It
should also run in the experimental pipeline, so that it can be triggered on any
patch.

This is in addition to the periodic jobs you're proposing, not replacing them.
WDYT?


Using the files affected to trigger a scenario test that uses the
affected composable service sounds like a really good idea to me.


+1 I think this sounds like a really good idea.

Now that we're doing almost all per-service configuration in the respective
templates and puppet profiles, this should be much easier to implement, I
think, so definitely +1 on giving it a go.



So I would like to give an update on this.
The work has been done to put the structure in place.
I first used the experimental pipeline to test the new jobs, but they are
now in the check pipeline, as non-voting.

I created a multinode job CI matrix here:
https://github.com/openstack-infra/tripleo-ci#service-testing-matrix
I encourage everyone to have a quick look at it.

What is happening now?

- If you submit a patch to THT (in puppet/services/ceilometer*) it
will trigger the scenario001 job (an example with Telemetry). The same
applies to Puppet profiles in puppet-tripleo. Note: with Zuul v3 we'll
be able to make things more granular and specific to projects and
files. We're looking forward to having it!
- Since the multinode-nonha job initially created by slagle is now a bit
redundant with the scenarios, I'll make it lighter, testing basic
compute services with as few services as possible.
- I'll continue to extend the scenarios' complexity and start testing
different backends (i.e. ceph/rbd on scenario001 with Telemetry, etc.),
like we're doing in Puppet CI:
https://github.com/openstack/puppet-openstack-integration#description
- For now, we're using pingtest to check that services actually work. I
guess it's good for now, but we might also want to consider Tempest at
some point. I know Tempest already runs in periodic jobs, but should we
also consider it in those multinode jobs? The feedback would be
valuable, but we would have to make the tempest configuration
composable, depending on the services we actually run (maybe using
discovery, etc.).

Any help and feedback on this topic is highly welcome,



It's pretty cool; unfortunately, putting nova-compute on controllers is 
incompatible with the approach we currently take for ironic... so we 
either need a separate scenario or another approach here.




Re: [openstack-dev] [TripleO] FFE request for Ironic composable services

2016-09-01 Thread Dmitry Tantsur

On 08/30/2016 10:20 AM, Dmitry Tantsur wrote:

Hi all!

Bare metal provisioning is a hot topic right now. These services are
also required for Dan's heat-driven undercloud work. The majority of
the changes have landed already, but a few are still waiting on
puppet-ironic changes.

The feature is low-impact, as it's disabled by default and mostly merged
anyway. The blueprint is
https://blueprints.launchpad.net/tripleo/+spec/ironic-integration (it
is missing a few links; I probably forgot to tag the patches - see below).

The missing bits are:



This is a convenient link to see all open patches for this work:
https://review.openstack.org/#/q/status:open+branch:master+topic:bp/ironic-integration



Thanks for considering it!






[openstack-dev] [TripleO] FFE request for Ironic composable services

2016-08-30 Thread Dmitry Tantsur

Hi all!

Bare metal provisioning is a hot topic right now. These services are 
also required for Dan's heat-driven undercloud work. The majority of 
the changes have landed already, but a few are still waiting on 
puppet-ironic changes.


The feature is low-impact, as it's disabled by default and mostly merged 
anyway. The blueprint is 
https://blueprints.launchpad.net/tripleo/+spec/ironic-integration (it 
is missing a few links; I probably forgot to tag the patches - see below).


The missing bits are:

1. (i)PXE configuration

puppet-tripleo: https://review.openstack.org/#/c/361109/
THT: https://review.openstack.org/362148
blocked by the puppet-ironic patch https://review.openstack.org/354125, 
which passes CI and is currently under review.


2. Potential problems with networking

Dan proposed a fix https://review.openstack.org/#/c/361459/
I'm currently trying to test it locally, then we can merge it.

3. Documentation

The patch is https://review.openstack.org/354016
I'm keeping it WIP to see which of the above changes actually land.

Thanks for considering it!



Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-25 Thread Dmitry Tantsur

Hi!

Looks great! Ironic currently requires a few manual steps; I wonder how 
we will handle them, but I guess we can figure that out later.


On 08/24/2016 08:39 PM, Emilien Macchi wrote:

Ok, I have a PoC ready for review and feedback:

- First iteration of scenario001 job in TripleO CI:
https://review.openstack.org/#/c/360039
I checked, and the job is not triggered if we don't touch Sahara files directly.

- Patch in THT that tries to modify Sahara files:
https://review.openstack.org/#/c/360040
I checked, and when running "check experimental", the job is triggered
because we modify puppet/services/sahara-base.yaml.

So the mechanism is in place (experimental status for now) and ready for review.
Please give any feedback.

Once we have this mechanism in place, we'll be able to add more
service coverage and run the jobs in a smart way thanks to Zuul.

Thanks,

On Wed, Aug 17, 2016 at 3:52 PM, Emilien Macchi <emil...@redhat.com> wrote:

On Wed, Aug 17, 2016 at 7:20 AM, James Slagle <james.sla...@gmail.com> wrote:

On Wed, Aug 17, 2016 at 4:04 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

However, the current gate system allows to run jobs based on files affected.
So we can also run a scenario covering ironic on THT check/gate if
puppet/services/*ironic* is affected, but not in the other cases. This won't
cover all potential failures, but it would be of great help anyway. It
should also run in experimental pipeline, so that it can be triggered on any
patch.

This is in addition to periodic jobs you're proposing, not replacing them.
WDYT?


Using the files affected to trigger a scenario test that uses the
affected composable service sounds like a really good idea to me.



I have a PoC; everything is explained in the commit message:
https://review.openstack.org/#/c/356675/

Please review it and give feedback!



--
-- James Slagle
--





--
Emilien Macchi









Re: [openstack-dev] [TripleO][UI] Port number for frontend app

2016-08-24 Thread Dmitry Tantsur

On 08/22/2016 08:08 AM, Honza Pokorny wrote:

Hello folks,

We've been using port 3000 for the GUI during development and testing.
Now that we're working on packaging and shipping our code, we're
wondering if port 3000 is still the best choice.

Would 3000 conflict with any other services?  Is there a better option?


I think the best option is to run it as WSGI in the same Apache instance 
as e.g. Horizon (and, I guess, Keystone and other services in the future 
as well), just with a different leading path. I'm not sure how doable 
that is, though.
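A rough sketch of what that suggestion could look like in the shared 
Apache config; the leading path, file locations and app name are all 
hypothetical:

    # mount the UI under its own leading path instead of a dedicated port
    WSGIScriptAlias /ui /var/www/tripleo-ui/tripleo-ui.wsgi
    <Directory /var/www/tripleo-ui>
        Require all granted
    </Directory>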




Thanks

Honza Pokorny







[openstack-dev] [all] [infra] check-osc-plugins job breaks clients

2016-08-17 Thread Dmitry Tantsur

Hi all!

This is probably a known problem, I'm just bringing attention to it: the 
check-osc-plugins job is pretty unstable right now. It feels like it times 
out on more than half of its runs. For example, ironicclient: 
http://logs.openstack.org/91/328191/10/check/check-osc-plugins/f85d602/console.html, 
tripleoclient: 
http://logs.openstack.org/94/332694/7/check/check-osc-plugins/e04fdec/console.html, 
openstackclient: 
http://logs.openstack.org/46/355746/1/check/check-osc-plugins/23c05df/console.html.


Are there any ideas on how to fix it? We're approaching the client release 
deadline in two weeks, so the inability to land client changes is pretty 
annoying.


My apologies if it's already getting fixed.
Thanks!



Re: [openstack-dev] [tripleo] launchpad bugs

2016-08-17 Thread Dmitry Tantsur

On 08/17/2016 03:28 AM, Emilien Macchi wrote:

Hi team,

This e-mail is addressed to TripleO developers interested by helping
in bug triage.
If you already subscribed to TripleO bugs notifications, you can skip
and go at the second part of the e-mail.
If not, please:
1) Go on https://bugs.launchpad.net/tripleo/+subscriptions
2) Click on "Add a subscription"
3) Bug mail recipient: yourself / Receive mail for bugs affecting
tripleo that are added or closed
4) Create a mail filter if you like your emails classified.
That way, you'll get an email for every new bug and their updates.


Last but not least, please keep your assigned bugs up-to-date and
close them with "Fix released" when automation didn't do it for you.
Reminder: auto-triage is our model; we trust you to assign the bug
the right priority and milestone.
If in any doubt, please ask on #tripleo.
Note: I spent time today triaging and updating a good number
of bugs. Let me know if I did something wrong with your bugs.

Thanks,



FYI, I have a dashboard to track Ironic bugs that may require triaging or 
are in a wrong state: http://ironic-divius.rhcloud.com/ (it opens very 
slowly; that's normal). Maybe you can fork it and create something similar.




Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-17 Thread Dmitry Tantsur

On 08/17/2016 03:21 AM, Emilien Macchi wrote:

On Tue, Aug 16, 2016 at 4:49 PM, James Slagle <james.sla...@gmail.com> wrote:

On Mon, Aug 15, 2016 at 4:54 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi everyone, happy Monday :)

I'd like to start the discussion about CI-testing the optional composable
services in the CI (I'm primarily interested in Ironic, but I know there are
a lot more).





The reason is that there are already many different services
that can break TripleO, and I'd rather focus on improving the testing
of the actual deployment framework itself, instead of testing the
"whole world" on every patch. We only have so much capacity. For
example, I'd rather see us testing updates or upgrades on each patch
instead of all the services.


I don't suggest we test Ironic; I suggest we test the composable 
services installing Ironic, and then just do a sanity check. Without 
such a check an actual failure can go unnoticed; I've already run into that.




That being said, if you wanted to add a job that covered Ironic, I'd
at least start with adding a job in the experimental queue. If it
proves to be stable, we can always consider moving it to the check
queue.



TripleO CI is having the same problem as Puppet CI had some time ago,
when we wanted to test more and more.

We solved this problem with scenarios.
Everything is documented here:
https://github.com/openstack/puppet-openstack-integration#description

We're testing all scenarios for all commits in
puppet-openstack-integration (the equivalent of tripleo-ci) but only some
scenarios for each Puppet module.
E.g.: OpenStack Sahara is deployed in scenario003, so a patch to
puppet-sahara will only run the scenario003 job. We do that in the Zuul
layout.

Because it would be complicated to do the same with TripleO Heat
Templates, we could run the same kind of job in the periodic pipeline.
For Puppet jobs, though, we could run the tripleo scenarios in the
check/gate, as we do with the current multinode job.

So here's a summary of my proposal (and I volunteer to actually work
on it, like I did for Puppet CI):
* Call current jobs "compute-kit" which are current nonha/ha (ovb and
multinode) jobs deploying the basic services (undercloud +
Nova/Neutron/Glance/Keystone/Heat/Cinder/Swift and common services,
apache, mysql, etc).
* Create TripleO envs deploying different scenarios (we could start with
scenario001 deploying "compute-kit" + ceph (rbd backend for Nova /
Glance / Cinder)). It's an example; feel free to propose something
else. The envs would live in the tripleo-ci repo among the other CI envs.
* Switch puppet-ceph zuul layout to stop running ovb ha job and run
the scenario001 job in check/gate (with the experimental pipeline
transition, as usual).
* Run scenario001 job in check/gate of tripleo-ci among other jobs.
* Run scenario001 job in periodic pipeline for puppet-tripleo and
tripleo-heat-templates.

Any feedback is welcome, but I think this would be a good start of
scaling our CI jobs coverage.



I'm not particularly fond of using *only* periodic jobs. I don't think 
that alone solves the problems I've raised, because:
1. Periodic jobs do not give immediate feedback on whether we broke 
something in a THT patch.
2. Periodic jobs do not help track exactly which patch broke things.
3. Periodic job results are somewhat hidden, so a failure in a 
non-priority service (like Ironic) will probably go unnoticed for 
quite a while.


However, the current gate system allows running jobs based on the files 
affected. So we can also run a scenario covering ironic in the THT 
check/gate if puppet/services/*ironic* is affected, but not in other 
cases. This won't cover all potential failures, but it would be of great 
help anyway. It should also run in the experimental pipeline, so that it 
can be triggered on any patch.
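A hedged sketch of that mechanism in a Zuul (v2) layout, with the job name 
and file regex purely illustrative:

    # only run the ironic scenario job on THT patches touching the
    # ironic composable-service templates
    jobs:
      - name: gate-tripleo-ci-centos-7-scenario00X-multinode-nv
        files:
          - '^puppet/services/.*ironic.*$'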


This is in addition to the periodic jobs you're proposing, not replacing 
them. WDYT?




[openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-15 Thread Dmitry Tantsur

Hi everyone, happy Monday :)

I'd like to start the discussion about CI-testing the optional 
composable services in the CI (I'm primarily interested in Ironic, but I 
know there are a lot more).


Currently, every time we change something in an optional service, we have 
to create a DO-NOT-MERGE patch making the service in question 
non-optional. This approach has several problems:


1. It's not usually done for global refactorings.

2. The current CI does not have any specific tests to check that the 
services in question actually work at all (e.g. in my experience the CI 
was green even though nova-compute could not reach ironic).


3. If something breaks, it's hard to track the problem down to a 
specific patch, as there is no history of gate runs.


4. It does not test the environment files we provide for enabling the 
service.


So, are there any plans to start covering optional services? Maybe at 
least a non-HA job with all the environment files included? It would be 
cool to also somehow provide additional checks, though. Or, in the case of 
ironic, to disable the regular nova compute, so that the ping test runs on 
an ironic instance.
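For point 4, such a job would essentially have to deploy with the shipped 
environment file, along these lines (the path is an assumption about where 
THT installs its environments):

    openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml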


WDYT?



Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur



Well, except that you need some non-OpenStack starting point, because unlike
with e.g. Ansible, installing any OpenStack service(s) does not end at "dnf
install ".


You might like to watch Dan's demo again.

It goes something like:

yum install python-tripleoclient
openstack undercloud deploy

Done!


That's pretty awesome indeed. But it also moves the user further away 
from the actual code being run, so in case of an unobvious failure they'll 
have to inspect more layers. I guess I am getting at the debuggability 
point again... At least it's good to know we're getting all the output 
visible; that's really great.





The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


I'm not against reusing puppet bits, I'm against building the same heavy
abstraction layer with heat around it.


Sure, this is a valid concern to raise, and an alternative to what Dan has
prototyped would be to refactor the undercloud puppet manifest to use the
puppet-tripleo profiles; somebody still has to do this work, and it still
doesn't help at all with either container integration or multi-node
underclouds.


So, about this multi-node thing: will it still be as easy as running one 
command? I guess we assume that the OS is already provisioned on all 
nodes, right?





2. Better modularity, far easier to enable/disable services


Why? Do you expect enabling/disabling Nova, for example? In this regard
the undercloud is fundamentally different from the overcloud: for the former
we have a list of required services and a pretty light list of optional services.


Actually, yes!  I'd love to be able to disable Nova and instead deploy
nodes directly via a mistral workflow that drives Ironic.  That's why I
started this:

https://review.openstack.org/#/c/313048/


++ to this

However, it brings up a big QE concern. If we say we support deployment 
both with and without nova, it doubles the number of things to test wrt 
provisioning. I still suspect we'll end up with one "blessed" way and 
other "probably working" ways, which might not be so good.




There are reasons such as static IPs for everything where you might want to
be able to make Neutron optional, and there are already a bunch of optional
services (such as all the telemetry services).

Ok, every time I want to disable or add a new service I can hack on the
manifest, but it's just extra work compared to reusing the exact same
method we already support for overcloud deployments.


3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud


I would love a defined debugging workflow for the overcloud first..


Sure, and it's something we have to improve regardless.


5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet


If you mean the instack-undercloud elements, we're getting rid of them anyway,
no?


Quite a few still remain, but yeah, there are fewer than there were, which is
good.


I think I've seen the patches up for removing all of them (except for 
puppet-stack-config obviously).





6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.


Again, why? A service won't integrate itself into the deployment. And to be
honest, the amount of options TripleO already has causes real-world problems.
I would rather see a well-defined set of functionality for it.


It means it's easy to enable any service, which is one less barrier to
integration; I'm not really sure how that could be construed as a bad
thing.


7. Potential for much easier implementation of a multi-node undercloud


Ideally, I would love to see:

 for node in nodes:
   ssh $node puppet apply blah-blah


Haha, this is a delightful over-simplification, but it completely ignores
all of the logic to create the per-node manifests and hieradata.  This is
what the Heat templates already do for us, over multiple nodes by default.


A bit unrelated, but while we're here... I wonder if we could stop after 
the instances are deployed, with Heat returning a set of hieradata files 
for the nodes... I haven't thought it through, just a quick idea.





Maybe we're not there, but it only means we have to improve our puppet
modules.


There is a layer of orchestration outside of the per-service modules which
is needed here.  We do that simply in the current undercloud implementation
by having a hard-coded manifest, which works OK.  We do that in the
overcloud by orchestrating puppet via Heat over multiple nodes, which also
works OK.


Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience). And at the very
least it's pretty obviously debuggable in most cases. THT is hard to
understand and often impossible to debug. I'd prefer we move away from THT
completely rather than trying to fix it in one more place where heat does
not fit.

Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/05/2016 01:34 PM, Dan Prince wrote:

On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:


Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own, outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.

I don't want to sound rude, but please no. The fact that you have a
hammer does not mean everything around is nails :( What problem are you
trying to solve by doing it?


Several problems I think.

One is that TripleO has gradually moved away from elements. And while we
still use DIB elements for some things, we no longer favor that tool and
instead rely on Heat and config management tooling to do our stepwise
deployment ordering. This leaves us using instack-undercloud, a tool
built specifically to install elements locally, as the means to create our
undercloud. It works... and I do think we've packaged it nicely, but it
isn't the best architectural fit for where we are going, I think. I
actually think that from an end-user contribution standpoint using t-h-
t could be quite nice for adding features to the Undercloud.


I don't quite get how it is better than finally moving to puppet only 
and stopping the use of elements.




Second would be re-use. We just spent a huge amount of time in Newton
(and some in Mitaka) refactoring t-h-t around composable services. So
say you add a new composable service for Barbican in the Overcloud...
wouldn't it be nice to be able to consume the same thing in your
Undercloud as well? Right now you can't, you have to do some of the
work twice and in quite different formats I think. Sure, there is some
amount of shared puppet work but that is only part of the picture I
think.


I've already responded to Steve's email, so a tl;dr here: I'm not sure 
why you would want to add random services to the undercloud. Have you ever 
seen an installer benefiting from e.g. adding a FileSystem-as-a-Service or 
Database-as-a-Service solution?




There are new features to think about here too. Once upon a time
TripleO supported multi-node underclouds. When we switched to instack-
undercloud we moved away from that. By switching back to tripleo-heat-
templates we could structure our templates around abstractions like
resource groups and the new 'deployed-server' trick that allow you to
create machines either locally or perhaps via Ironic too. We could
avoid Ironic entirely and always install the Undercloud on existing
servers via 'deployed-server' as well.


A side note: if we do use Ironic for this purpose, I would expect some 
help with pushing the Ironic composable service through. And the 
ironic-inspector one, which I haven't even started.


I'm still struggling to understand what entity is going to install this 
bootstrapping Heat instance. Are we bringing back the seed?




Lastly, there is container work ongoing for the Overcloud. Again, I'd
like to see us adopt a format that would allow it to be used in the
Undercloud as well as opposed to having to re-implement features in the
Over and Under clouds all the time.



Undercloud installation is already sometimes fragile, but it's probably
the least fragile part right now (at least from my experience). And at
the very least it's pretty obviously debuggable in most cases. THT is
hard to understand and often impossible to debug. I'd prefer we move
away from THT completely rather than trying to fix it in one more place
where heat does not fit.


What tool did you have in mind? FWIW I started with heat because by
using just Heat I was able to take the initial steps to prototype this.

In my mind Mistral might be next here, and in fact it already supports
the single-process launching idea. Keeping the undercloud
installer as light as possible would be ideal though.


I don't have really extensive experience with either, but to me Mistral 
seems much cleaner and easier to understand. That, of course, won't 
allow you to reuse the existing heat templates (which may be good or 
bad depending on your point of view).




Dan






I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more in
Barcelona?

Dan Prince

Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/05/2016 01:21 PM, Steven Hardy wrote:

On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own, outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a hammer
does not mean everything around is nails :( What problem are you trying to
solve by doing it?


I think Dan explains it pretty well in his video, and your comment
indicates a fundamental misunderstanding around the entire TripleO vision,
which is about symmetry and reuse between deployment tooling and the
deployed cloud.


Well, except that you need some non-OpenStack starting point, because 
unlike with e.g. Ansible, installing any OpenStack service(s) does not 
end at "dnf install ".




The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


I'm not against reusing puppet bits, I'm against building the same heavy 
abstraction layer with heat around it.




2. Better modularity, far easier to enable/disable services


Why? Do you expect enabling/disabling Nova, for example? In this regard 
the undercloud is fundamentally different from the overcloud: for the 
former we have a list of required services and a pretty light list of 
optional services.




3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud


I would love a defined debugging workflow for the overcloud first..



5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet


If you mean the instack-undercloud elements, we're getting rid of them 
anyway, no?




6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.


Again, why? A service won't integrate itself into the deployment. And to 
be honest, the amount of options TripleO already has causes real-world 
problems. I would rather see a well-defined set of functionality for it.




7. Potential for much easier implementation of a multi-node undercloud


Ideally, I would love to see:

 for node in nodes:
   ssh $node puppet apply blah-blah

Maybe we're not there, but it only means we have to improve our puppet 
modules.





Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience) And at the very
least it's pretty obviously debuggable in most cases. THT is hard to
understand and often impossible to debug. I'd prefer we move away from THT
completely rather than trying to fix it in one more place where heat does
not fit..


These are some strong but unqualified assertions, so it's hard to really
respond.


We'll talk about "unqualified" assertions the next time I try to get 
answers on #tripleo after seeing error messages like "controller_step42 
failed with code 1" ;)



Yes, there is complexity, but it's a set of abstractions which
actually work pretty well for us, so there is value in having just one set
of abstractions used everywhere vs special-casing the undercloud.


There should be a point where we stop. What entity is going to install 
heat to install the undercloud (did I just say "seed")? What will provide 
HA for it? Authentication, template storage and versioning? How do you 
reuse the same abstractions (that's the whole point, after all)?




Re moving away from THT completely, this is not a useful statement -
yes, there are alternative tools, but if you were to remove THT and just
use some other tool with Ironic, the result would simply not be TripleO.
There would be zero migration/upgrade path for existing users and all
third-party integrations (and our API/UI) would break.


I don't agree that it would not be TripleO. OpenStack does not end at heat 
templates; some deployments don't even use heat.




FWIW I think this prototyping work is very interesting, and I'm certainly
keen to get wider (more constructive) feedback and see where it leads.

Thanks,

Steve


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own, outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a 
hammer does not mean everything around is nails :( What problem are you 
trying to solve by doing it?


Undercloud installation is already sometimes fragile, but it's probably 
the least fragile part right now (at least from my experience). And at 
the very least it's pretty obviously debuggable in most cases. THT is 
hard to understand and often impossible to debug. I'd prefer we move 
away from THT completely rather than trying to fix it in one more place 
where heat does not fit.




I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more in
Barcelona?

Dan Prince (dprince)







Re: [openstack-dev] [tc][all] Plugins for all

2016-07-18 Thread Dmitry Tantsur

On 07/17/2016 11:04 PM, Jay Pipes wrote:

On 07/14/2016 12:21 PM, Hayes, Graham wrote:

A lot of the effects are hard to see, and are not insurmountable, but
do cause projects to re-invent the wheel.

For example, quotas - there is no way for a project that is not nova,
neutron, or cinder to hook into the standard CLI or UI for setting
quotas.


There *is* no standard CLI or UI for setting quotas.


They can be done as either extra commands
(openstack dns quota set --foo bar) or as custom panels, but not
the way other quotas get set.


This has nothing to do with the big tent and everything to do with the
fact that the community at large has conflated quotas -- i.e. the limit
of a particular class of resource that a user or tenant can consume --
with the usage of a particular class of resource. The two things are not
the same nor do they need to be handled by the same service, IMHO.

I've proposed before that quotas -- i.e. the *limits* for different
resource classes that a consumer of those resources has -- be handled by
the Keystone API. This is the endpoint that I personally think makes the
most sense to house this information.

Usage information is necessarily the domain of the individual service
projects who must control allocation/consumption of resources under
their control. It would be *helpful* to have a set of best-practice
guidelines for projects to follow in safely accounting for consumption
of resources, but "the big tent" has nothing to do with our community
failing to do this. We've failed to do this from the beginning of
OpenStack, well before the big tent was just a spark in the minds of the
TC.


Tempest plugins are another example. Approximately 30 of the 36
current plugins are using resources that are not supposed to be
used, and are an unstable interface.


What "resources" are you referring to above? What is the "unstable
interface" you are referring to? Tempest should only ever be validating
public REST APIs.


Projects in-tree in tempest
are in a much better position, as any change to the internal
API will have to be fixed before the gate merges, but
out-of-tree plugins are in a place where they can be broken at any
point.


An example here would be super-useful, since as mentioned above, Tempest
should only be validating public HTTP/REST APIs.


A not entirely related example, but still in support of the original point 
(IMO): grenade currently does not pick up smoke tests coming from tempest 
plugins when running after an upgrade. It's just one missing call [1], and 
it probably would not go unnoticed if Nova tests did not run ;)


[1] https://review.openstack.org/337372




None of this is meant to single out projects or teams. A lot
of the projects that are in this situation have inordinate amounts of
work placed on them by the big tent, and I can empathize with why things
are this way. These were the examples that currently stick out
in my mind, and I think we have come to a point where we need to make
a change as a community.

By moving to a "plugins for all" model, these issues are reduced.


I guess I don't understand what you are referring to as a "plugin"
above. Are you referring to devstack libXXX things? Or are you referring
to *drivers* for things like the hypervisor in Nova or the ML2 mech
drivers in Neutron? Or something else entirely?

Best,
-jay






Re: [openstack-dev] [ironic] How to handle defaults in driver composition reform?

2016-07-14 Thread Dmitry Tantsur

On 07/14/2016 12:52 PM, Sam Betts (sambetts) wrote:

There have been several discussions brought about by new interface types
and how they fit into the existing driver composition spec. Network and
Volume connector interfaces are examples of two interfaces who’s
implementations can depend highly on the environment they are being used
in, as well as the piece of hardware they are being used with. For
example the neutron network interface requires neutron to exist to operate.





I considered several different ways to handle hardware_type
interfaces, aiming to ensure that deployers maintain fine control
over which interfaces are enabled in their environment, to continue
providing sane defaults for all the different types of interfaces for
the user's convenience when enrolling nodes, and to have all interface
types defined in a consistent way for developers of hardware types. I
would like to propose this solution:

- Remove single hardware_type default for all interface types
- Make hardware_type supported_FOO_interfaces a list of supported
implementations in order of preference by that hardware type’s vendor
- Have Ironic resolve the possible_FOO_interfaces by intersecting
enabled_FOO_interfaces with supported_FOO_interfaces, maintaining the
order of preference
- Use the first possible_FOO_interface as the default for this hardware_type


I haven't thought much about the idea, but at first glance I like it. 
It's somewhat less explicit, but we can still return the calculated 
default interface in the /v1/drivers API, so it might be OK.




An example of this in operation:

In the deployer's environment, implementations RAR, HAH and GAR will
work; BAR will not.
The deployer wants to enable two different hardware_types, X and Y.

hardware_type X:
# BAR is recommended over RAR, and RAR over GAR, if available in
this deployment
supported_FOO_interface: [BAR, RAR, GAR]

hardware_type Y:
supported_FOO_interface: [HAH]


The deployer, knowing that BAR will not work, does not include BAR in the
list of enabled_FOO_interfaces.

Deployers config file:
enabled_FOO_interfaces = RAR, HAH, GAR

The Ironic user creates a node using hardware_type X, but doesn’t
specify any interfaces.

ironic node-create -d X


Ironic calculates a prioritized list of possible interfaces:

possible_FOO_interfaces = [BAR, RAR, GAR] intersect [RAR, HAH, GAR]
  = [RAR, GAR]

Ironic creates the node with the first interface from the ordered list of
possible interfaces.

Node:

hardware_type: X
FOO_interfaces: RAR


The user now has a node with interfaces that are guaranteed by the
deployer to work in this environment.
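The selection rule itself is tiny; a hedged sketch reproducing the numbers 
above (the function name is hypothetical, not proposed API):

    def pick_default_interface(supported, enabled):
        # Ordered intersection: the first supported implementation that
        # the deployer enabled wins.
        possible = [iface for iface in supported if iface in enabled]
        if not possible:
            raise RuntimeError('no enabled FOO interface is supported by '
                               'this hardware type')
        return possible[0]

    # Reproducing the example above:
    # pick_default_interface(['BAR', 'RAR', 'GAR'], ['RAR', 'HAH', 'GAR'])
    # -> possible == ['RAR', 'GAR'], so the default is 'RAR'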

This solution does mean that, based on which environment you enroll a
node in, you might get a different set of interfaces. So, to find
out which interface is going to be the default FOO_interface for this
hardware_type in this environment, you can use the discovery API, which
returns a default_FOO_interface field, and also the
possible_FOO_interfaces, if you want to know which other interfaces
are available to you.

I believe this is a good fit for treating all interfaces in a
standardised way. Please let me know your opinions so we can solve this
issue in a way that is convenient for our users and keeps
things consistent for developing core Ironic code and hardware_types,
both in and out of tree.

Sam








Re: [openstack-dev] [ironic] why do we need setting network driver per node?

2016-07-12 Thread Dmitry Tantsur

Thanks for writing this up, minor comments inline.

On 07/11/2016 10:57 PM, Devananda van der Veen wrote:

We spent the majority of today's weekly IRC meeting [1] discussing the finer
points of this question. I agreed to post a summary of those to the list (it's
below the break).

tldr;

* we don't know if network_interface should behave like other hardware
interfaces, but...
* we need to come to an agreement on it this week, so we can proceed with the
network integration work.







There was a proposal from sambetts towards the end of the meeting, which I've
edited for clarity (please correct if I misunderstood any of your points). This
seems to capture/address most of the points above and proposes a way forward,
while keeping within the intent of our driver composition reform spec. It was
the only clear suggestion during the meeting.

- in-tree hardware_types declare supported_network_interfaces to be empty [4]
and declare no default_network_interface


We need supported_network_interfaces, otherwise you won't be able to 
check compatibility. I think we should introduce a constant to be used 
like this:


 class MyHwType:
   supported_network_interfaces = ironic.network.ALL_INTERFACES


- we add a CONF option for default_network_interface, with a sane default value
- this CONF option is validated on conductor load to be supported by all loaded
hardware_types, and the conductor refuses to start if this CONF option is set to
a value not supported by one or more enabled_hardware_types


How do we distinguish between interfaces which have a default and which 
don't? For example, I can see a use case for having defaults for deploy 
and inspect (the latter would be used by tripleo for sure).



- if a(n out of tree) hardware_type declares a default_network_interface, this
will take precedence over the CONF option
- a node created without specifying a network interface will check the
hardware_type's supported_network_interfaces, and when that is empty, fall back


s/supported/default/ here?


to the CONF.default_network_interface, just as other interfaces fall back to the
hardware_type's relevant default
- operators can override a specific node's network_interface, which follows the
usual rules for setting an interface on a Node (ie, the interface must be in
hardware_type.supported_network_interfaces AND in 
CONF.enabled_network_interfaces)
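
(To make the conductor-load validation above concrete, a rough sketch —
purely illustrative, the names are assumptions:)

  def validate_default_network_interface(conf, enabled_hardware_types):
      # refuse to start if any enabled hardware type does not support
      # the configured default network interface
      for hw_type in enabled_hardware_types:
          if conf.default_network_interface not in \
                  hw_type.supported_network_interfaces:
              raise RuntimeError(
                  '%s does not support default_network_interface=%s'
                  % (hw_type, conf.default_network_interface))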


If anyone else has other clear suggestions that address all the issues here,
please reply with them.

I'm going to make myself available tomorrow at 1700 UTC, in both IRC and by
voice [3] (conference line # TBD) to discuss this with anyone. If we need to
discuss it again on Wednesday, we can.


Thanks much,
--devananda



[1] starting at 17:20

http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-07-11-17.00.log.html

[2]
http://specs.openstack.org/openstack/ironic-specs/specs/approved/driver-composition-reform.html

[3] https://wiki.openstack.org/wiki/Infrastructure/Conferencing

[4] I think we may need to have in-tree drivers declare
supported_network_interfaces to be [noop, flat, neutron], but that is not what
Sam suggested during the meeting



On 06/28/2016 08:32 AM, Dmitry Tantsur wrote:

Hi folks!

I was reviewing https://review.openstack.org/317391 and realized I don't quite
understand why we want to have node.network_interface. What's the real life use
case for it?

Do we expect some nodes to use Neutron, some - not?

Do we expect some nodes to benefit from network separation, some - not? There
may be a use case here, but then we have to expose this field to Nova for
scheduling, so that users can request a "secure" node or a "less secure" one. If
we don't do that, Nova will pick at random, which makes the use case unclear 
again.
If we do that, the whole work goes substantially beyond what we were trying to
do initially: isolate tenants from the provisioning network and from each other.

Flexibility is good, but the patches raise upgrade concerns, because it's
unclear how to provide a good default for the new field. And anyway it makes the
whole thing much more complex than it could be.

Any hints are welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mascot/logo for your project

2016-07-11 Thread Dmitry Tantsur

On 07/11/2016 05:51 PM, Steve Martinelli wrote:

The keystone project was one of the first to have a logo and I'm more
than happy to give it up for the sake of a consistent message across all
OpenStack projects.

I think it's fine if ironic and tripleo want to stick with their current
animals (luckily they are using logos that meet the new criteria
proposed by the foundation), but I don't think it's unreasonable if the
foundation proposes a re-design of existing logos that is more
consistent with the rest of the new ones.


Lets see how it ends up :)



The logos we use today are too different from each other, i don't think
anyone would disagree with that. If each project goes off and does their
own thing it comes off as inconsistent.


I don't think I value consistency in mascots too much, but we'll see.



stevemar

On Mon, Jul 11, 2016 at 11:33 AM, Steven Hardy wrote:

On Mon, Jul 11, 2016 at 08:00:29AM -0700, Heidi Joy Tretheway wrote:
>The Foundation would like to help promote OpenStack projects in the big
>tent with branding and marketing services. The idea is to create a family
>of logos for OpenStack projects that are unique, yet immediately
>identifiable as part of OpenStack. We'll be using these logos to promote
>your project on the OpenStack website, at the Summit and in marketing
>materials.
>We're asking project teams to choose a mascot to represent as their
>logo. Your team can select your own mascot, and then we'll have an
>illustrator create the logo for you (and we also plan to print some
>special collateral for your team in Barcelona).
>If your team already has a logo based on a mascot from nature, you'll
>have first priority to keep that mascot, and the illustrator will restyle
>it to be consistent with the other projects. If you have a logo that
>doesn't have a mascot from nature, we encourage your team to choose a
>mascot.
>Here's an FAQ and examples of what the logos can look like:
>http://www.openstack.org/project-mascots
>We've also recorded a quick video with an overview of the project:
>https://youtu.be/LOdsuNr2T-o
>You can get in touch with your PTL to participate in the logo choice
>discussion. If you have more questions, I'm happy to help. :-)

TripleO has had some discussion already around a project mascot, and
we've
settled on the owl logo displayed on tripleo.org and our launchpad org:

http://tripleo.org/
https://bugs.launchpad.net/tripleo/

(There is also a hi-res version or SVG around, I just can't find it atm)

This was discussed in the community and accepted here:

http://lists.openstack.org/pipermail/openstack-dev/2016-March/089043.html

Which was in turn based on a previous design discussed here:


http://lists.openstack.org/pipermail/openstack-dev/2015-September/075649.html

So, I think it's likely (unless anyone objects) we'll stick with that
current owl theme for our official mascot.

Overall I like the idea of encouraging official mascots/logos for
projects,
quite a few have done so informally and I think it's a fun way to
reinforce
project/team identity within the OpenStack community.

Thanks!

Steve Hardy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mascot/logo for your project

2016-07-11 Thread Dmitry Tantsur

On 07/11/2016 05:00 PM, Heidi Joy Tretheway wrote:

The Foundation would like to help promote OpenStack projects in the big
tent with branding and marketing services. The idea is to create a
family of logos for OpenStack projects that are unique, yet immediately
identifiable as part of OpenStack. We’ll be using these logos to promote
your project on the OpenStack website, at the Summit and in marketing
materials.

We’re asking project teams to choose a mascot to represent as their
logo. Your team can select your own mascot, and then we’ll have an
illustrator create the logo for you (and we also plan to print some
special collateral for your team in Barcelona).


Ironic has had a mascot for quite some time, it can be found on 
https://wiki.openstack.org/wiki/Ironic (see at the bottom).




If your team already has a logo based on a mascot from nature, you’ll
have first priority to keep that mascot, and the illustrator will
restyle it to be consistent with the other projects. If you have a logo
that doesn’t have a mascot from nature, we encourage your team to choose
a mascot.

Here’s an FAQ and examples of what the logos can look like:
http://www.openstack.org/project-mascots
We’ve also recorded a quick video with an overview of the project:
https://youtu.be/LOdsuNr2T-o

You can get in touch with your PTL to participate in the logo choice
discussion. If you have more questions, I’m happy to help. :-)

Cheers,
Heidi Joy

__
Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769  |  skype: heidi.tretheway







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][osc-lib][openstackclient] is it too early for orc-lib?

2016-07-11 Thread Dmitry Tantsur

On 06/30/2016 11:29 PM, Dean Troyer wrote:

On Thu, Jun 30, 2016 at 8:38 AM, Hardik
wrote:

Regarding osc-lib we have mainly two changes.

1) Used "utils" which is moved from openstackclient.common.utils to
osc_lib.utils
2) We used "command", which is wrapped in osc_lib from cliff.

So I think there is no harm in keeping osc_lib.
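
(For reference, the switch boils down to an import change roughly like
this — a sketch, not the exact mistralclient patch:)

  try:
      from osc_lib.command import command  # new home of the Command base class
      from osc_lib import utils
  except ImportError:
      from cliff import command            # fallback for older environments
      from openstackclient.common import utils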


Admittedly the change to include osc-lib is a little early, I would have
preferred until the other parts of it were a bit more stable.


Also, I guess we do not need openstackclient to be installed  with
mistralclient as if mistral is used in standalone mode
there is no need of openstackclient.


The choice to include OSC as a dependency of a plugin/library rests
entirely on the plugin team, and that will usually be determined by the
answer to the question "Do you want all users of your library to have
OSC installed even if they do not use it?"  or alternatively "Do you
want to make your users remember to install OSC after installing the
plugin?"

Note that we do intend to have the capability on osc-lib to build an
OSC-like stand-alone binary for plugins that would theoretically make
installing OSC optional for stand-alone client users.  This is not
complete yet, and as I said above, one reason I wish osc-lib had not
been merged into plugin requirements yet.  That said, as long as you
don't use those bits yet you will be fine, the utils, command, etc bits
are stable, it is the clientmanager and shell parts that are still being
developed.


Dean,

It seems like OSC now issues warnings if we use bits that are moved to 
osc-lib. Does it mean that now osc-lib is ready for all projects to 
switch to? If not, could you please revert the warnings? It's a bit 
confusing if we show our users warnings that we can't fix.


Thanks.



dt

--

Dean Troyer
dtro...@gmail.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Dell recheck command issue (was: UFCG OneView CI comments missing recheck command)

2016-07-08 Thread Dmitry Tantsur
I've noticed that Dell CI has the same problem: it uses "recheck Dell" 
causing the whole check pipeline to rerun.


On 06/29/2016 02:23 AM, Villalovos, John L wrote:




-Original Message-
From: Thiago Paiva [mailto:thia...@lsd.ufcg.edu.br]
Sent: Tuesday, June 28, 2016 17:00
To: OpenStack Development Mailing List (not for usage questions)
Cc: ufcg-oneview...@lsd.ufcg.edu.br
Subject: Re: [openstack-dev] [ironic] UFCG OneView CI comments missing
recheck command

Hi Jay,

Sorry about that. The comment should be "recheck oneview" to test again.
I'll patch the failure message with instructions, thanks for the warning.


I'm not sure "recheck oneview" is a good command because it will kick off the 
master recheck. I would suggest something that will not trigger the normal jobs to 
recheck.

"retest oneview"??  Hopefully there is some standard for 3rd Party CI to use.

And yes, please do put in the message the command to do a job recheck.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-07-05 Thread Dmitry Tantsur

On 07/04/2016 01:42 PM, Steven Hardy wrote:

Hi Dmitry,

I wanted to revisit this thread, as I see some of these interfaces
are now posted for review, and I have a couple of questions around
the naming (specifically for the "provide" action):

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:


The last step before the deployment it to make nodes "available" using the
"provide" provisioning action. Such nodes are exposed to nova, and can be
deployed to at any moment. No long-running configuration actions should be
run in this state. The "manage" action can be used to bring nodes back to
"manageable" state for configuration (e.g. reintrospection).


So, I've been reviewing https://review.openstack.org/#/c/334411/ which
implements support for "openstack overcloud node provide"

I really hate to be the one nitpicking over openstackclient verbiage, but
I'm a little unsure if the literal translation of this results in an
intuitive understanding of what happens to the nodes as a result of this
action. So I wanted to have a broaded discussion before we land the code
and commit to this interface.





Here, I think the problem is that while the dictionary definition of
"provide" is "make available for use, supply" (according to google), it
implies obtaining the node, not just activating it.

So, to me "provide node" implies going and physically getting the node that
does not yet exist, but AFAICT what this action actually does is takes an
existing node, and activates it (sets it to "available" state)

I'm worried this could be a source of operator confusion - has this
discussion already happened in the Ironic community, or is this a TripleO
specific term?


Hi, and thanks for the great question.

As I've already responded on the patch, this term is settled in our OSC 
plugin spec [1], and we feel like it reflects the reality pretty well. 
But I clearly understand that naming things is really hard, and what 
feels obvious to me does not feel obvious to the others. Anyway, I'd 
prefer if we stay consistent with how Ironic names things now.


[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html




To me, something like "openstack overcloud node enable" or maybe "node
activate" would be more intuitive, as it implies taking an existing node
from the inventory and making it active/available in the context of the
overcloud deployment?


The problem here is that "provide" does not just "enable" nodes. It also 
makes nodes pass through cleaning, which may be a pretty complex and 
long process (we have it disabled for TripleO for this reason).
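
(For context, the state flow behind the verb, expressed with the ironic
CLI — cleaning only runs when it is enabled:)

  ironic node-set-provision-state <node-uuid> manage   # enroll -> manageable
  ironic node-set-provision-state <node-uuid> provide  # manageable -> (cleaning ->) available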




Anyway, not a huge issue, but given that this is a new step in our nodes
workflow, I wanted to ensure folks are comfortable with the terminology
before we commit to it in code.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] glance backend: replace swift by file in CI

2016-06-29 Thread Dmitry Tantsur

On 06/28/2016 01:37 PM, Erno Kuvaja wrote:

TL;DR

Makes absolutely sense to run file backend on single node undercloud at CI.

Few more comments inline.

On Mon, Jun 27, 2016 at 8:49 PM, Emilien Macchi  wrote:

On Mon, Jun 27, 2016 at 3:46 PM, Clay Gerrard  wrote:

There's probably some minimal gain in cross compatibility testing in
sticking with the status quo.  The Swift API is old and stable, but I
believe there was some bug in recent history where some return value in
swiftclient changed from an iterable to a generator or something and some
aggressive non-duck type checking broke something somewhere

I find that bug report sorta interesting, the reported memory pressure
there doesn't make sense.  Maybe there's some non-essential middleware
configured on that proxy that's causing the workers to
bloat up like that?


Swift proxy pipeline:
pipeline = catch_errors healthcheck cache ratelimit bulk tempurl
formpost authtoken keystone staticweb proxy-logging proxy-server


Some things I do not think we benefit having there if we want to
experiment still with swift in undercloud:


I hope we're not removing it completely...


staticweb - do we need containers being presented as webpages?
tempurl - I'd assume we can expect the user to have access to the needed
objects with their own credentials.


Please leave it there, we need it to support agent_* family of ironic 
drivers.



formpost - likely we do not need http forms instead of PUT calls either.
ratelimit - here and there; have we had a single time where something
went crazy and ratelimit saved us while the tests still did not fail?
healthcheck - not likely used, but also really lightweight so
shouldn't make any difference

cache - Memcache is likely the thing that kills us.



Thanks for your help,


-clayg

On Mon, Jun 27, 2016 at 12:30 PM, Emilien Macchi  wrote:


Hi,

Today we're re-investigating a CI failure that we had multiple times [1]:
Swift memory usage grows until it is OOM-killed.

The perimeter of this thread is about our CI and not production
environments.
Indeed, our CI is running limited resources while production
environments should not hit this problem.

After some investigation on #tripleo, we found out this scenario was
happening almost every time since recently:

* undercloud is deployed, glance and swift are running. Glance is
configured with Swift backend to store images.
* tripleo CI upload overcloud image into Glance, image is successfully
uploaded.
* when overcloud starts deploying, some nodes randomly fail to deploy
because the undercloud OOM-kills swift-proxy-server that is still
sending the ovecloud image requested by Glance API. Swift fails,
Glance fails, overcloud deployment fails with a "No valid hosts
found".

It's likely due to performances issues in our CI, and there is nothing
we can do but adding more resources or reducing the number of
environments, something we won't do at this time, because our recent
improvements in our CI (more ram, SSD, etc).


So has streamlining and optimizing Swift for a small
environment been tried already?

Another thing that comes to my mind based on the discussions lately.
What is the core count on our CI undercloud node? Are all the services
deployed there with their default worker values? It might be sensible
(even for production use) to limit the number of workers our services
kick up in an all-in-one undercloud, as that tends to have a huge impact
on memory consumption.
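
(As an illustration of that knob for the Swift proxy — the value is just
an example for a small all-in-one undercloud:)

  # /etc/swift/proxy-server.conf
  [DEFAULT]
  # the default is sized from the CPU count; pin it low on the undercloud
  workers = 2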

- Erno "jokke_" Kuvaja


As a first iteration, I propose [2] that we stop using Swift as a
backend for Glance. Indeed, our undercloud is currently single-node, I
see zero value of using Swift to store the overcloud image.
If there is a value, then we can add the option to whether or not
using it (and set it to False in our CI to use file backend, which
won't lead to OOM).

Note: on the overcloud: we currently support file, swift and rbd
backends, that you can easily select during your deployment.

[1] https://bugs.launchpad.net/tripleo/+bug/1595916
[2] https://review.openstack.org/#/c/334555/
--
Emilien Macchi






--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ironic] why do we need setting network driver per node?

2016-06-29 Thread Dmitry Tantsur
and go further
with interfaces. And if it is an interface, it should be considered
as such (the basic spec containing most of the requirements is
merged, and we can use it to make network interface as close to the
spec as possible, while not going too far, as multitenancy slipping
another release would be very bad). There might be some caveats with
backwards compatibility in this particular case, but they're all
solvable.

Thanks,
Vlad

On Tue, Jun 28, 2016 at 7:08 PM, Mathieu Mitchell
<mmitch...@internap.com> wrote:

Following discussion on IRC, here are my thoughts:

The proposed network_interface allows choosing a "driver" for
the network part of a node. The values could be something like
"nobody", "neutron-flat networking" and "neutron-tenant separation".

I think this choice should be left per-node. My reasoning is
that you could have a bunch of nodes for which you have complete
Neutron support, through for example an ML2 plugin. These nodes
would be configured using one of the "neutron-*" modes.

On the other hand, that same Ironic installation could also
manage nodes for which the switches are unmanaged, or manually
configured. In such case, you would probably want to use the
"nobody" mode.

An important point is to expose this "capability" to Nova as you
might want to offer nodes with neutron integration differently
from "any node". I am unsure if the capability should be the
value of the network_interface or a boolean "neutron
integration?". Thoughts?

Mathieu


On 2016-06-28 11:32 AM, Dmitry Tantsur wrote:

Hi folks!

I was reviewing https://review.openstack.org/317391 and
realized I don't
quite understand why we want to have node.network_interface.
What's the
real life use case for it?

Do we expect some nodes to use Neutron, some - not?

Do we expect some nodes to benefit from network separation,
some - not?
There may be a use case here, but then we have to expose
this field to
Nova for scheduling, so that users can request a "secure"
node or a
"less secure" one. If we don't do that, Nova will pick at
random, which
makes the use case unclear again.
If we do that, the whole work goes substantially beyond what
we were
trying to do initially: isolate tenants from the
provisioning network
and from each other.

            Flexibility is good, but the patches raise upgrade concerns,
because
it's unclear how to provide a good default for the new
field. And anyway
it makes the whole thing much more complex than it could be.

Any hints are welcome.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] why do we need setting network driver per node?

2016-06-28 Thread Dmitry Tantsur

Hi folks!

I was reviewing https://review.openstack.org/317391 and realized I don't 
quite understand why we want to have node.network_interface. What's the 
real life use case for it?


Do we expect some nodes to use Neutron, some - not?

Do we expect some nodes to benefit from network separation, some - not? 
There may be a use case here, but then we have to expose this field to 
Nova for scheduling, so that users can request a "secure" node or a 
"less secure" one. If we don't do that, Nova will pick at random, which 
makes the use case unclear again.
If we do that, the whole work goes substantially beyond what we were 
trying to do initially: isolate tenants from the provisioning network 
and from each other.


Flexibility is good, but the patches raise upgrade concerns, because 
it's unclear how to provide a good default for the new field. And anyway 
it makes the whole thing much more complex than it could be.


Any hints are welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-24 Thread Dmitry Tantsur

On 06/23/2016 11:21 PM, Clark Boylan wrote:

On Thu, Jun 23, 2016, at 02:15 PM, Doug Hellmann wrote:

Excerpts from Thomas Goirand's message of 2016-06-23 23:04:28 +0200:

On 06/23/2016 06:11 PM, Doug Hellmann wrote:

I'd like for the community to set a goal for Ocata to have Python
3 functional tests running for all projects.

As Tony points out, it's a bit late to have this as a priority for
Newton, though work can and should continue. But given how close
we are to having the initial phase of the port done (thanks Victor!),
and how far we are from discussions of priorities for Ocata, it
seems very reasonable to set a community-wide goal for our next
release cycle.

Thoughts?

Doug


+1

Just think about it for a while. If we get Nova to work with Py3, and
everything else is working, including all functional tests in Tempest,
then after Ocata, we could even start to *REMOVE* Py2 support after
Ocata+1. That would be really awesome to stop all the compat layer
madness and use the new features available in Py3.


We'll need to get some input from other distros and from deployers
before we decide on a timeline for dropping Python 2. For now, let's
focus on making Python 3 work. Then we can all rejoice while having the
discussion of how much longer to support Python 2. :-)



I really would love to ship a full stack running Py3 for Debian Stretch.
However, for this, it'd be super helful to have as much visibility as
possible. Are we setting a hard deadline for the Ocata release? Or is
this just a goal we only "would like" to reach, but it's not really a
big deal if we don't reach it?


Let's see what PTLs have to say about planning, but I think if not
Ocata then we'd want to set one for the P release. We're running
out of supported lifetime for Python 2.7.


Keep in mind that there is interest in running OpenStack on PyPy which
is python 2.7. We don't have to continue supporting CPython 2.7
necessarily but we may want to support python 2.7 by way of PyPy.


PyPy folks have been working on python 3 support for some time already: 
http://doc.pypy.org/en/latest/release-pypy3.3-v5.2-alpha1.html
It's an alpha, but by the time we consider dropping Python 2 it will 
probably be released :)




Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [ironic-python-agent] Broken functional tests

2016-06-22 Thread Dmitry Tantsur

On 06/22/2016 12:55 PM, Lucas Alvares Gomes wrote:

Hi,

On Wed, Jun 22, 2016 at 10:53 AM, Sam Betts (sambetts)
 wrote:

This patch https://review.openstack.org/#/c/324909/ merged last night and
has broken the IPA functional tests.

To verify, pull master and run "tox -r -e func" and it'll fail to run. If
you git checkout the commit before that one merged, the same thing passes
successfully.

Seeing this error has made me realise that we don't have a CI job to run
these functional tests on IPA, so this isn't caught and highlighted in
gerrit for reviewers. Is this on purpose, or should we add a new one to
prevent this happening again?



I would say we should add a job to the IPA gate to verify the
functional tests, just like swift does [0].

We may also need to revert/fix that patch that broke those tests.


I'm -1 to reverting anything until we have the test in the gate.



[0] 
https://github.com/openstack-infra/project-config/blob/bd54f0127ee1a8da985f7fc6644e91b11f8f5f09/zuul/layout.yaml#L12108

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Dmitry Tantsur

On 06/16/2016 05:12 PM, Jim Rollenhagen wrote:

Hi all,

I'd like to propose Jay Faulkner (JayF) and Sam Betts (sambetts) for the
ironic-core team.

Jay has been in the community as long as I have, has been IPA and
ironic-specs core for quite some time. His background is operations, and
he's getting good with Python. He's given great reviews for quite a
while now, and the number is steadily increasing. I think it's a
no-brainer.


+2



Sam has been in the ironic community for quite some time as well, with
close ties to the neutron community as well. His background seems to be
in networking, he's got great python skills as well. His reviews are
super useful. He doesn't have quite as many as some people, but they are
always thoughtful, and he often catches things others don't. I do hope
to see more of his reviews.


+2



Both Sam and Jay are to the point where I consider their +1 or -1 as
highly as any other core, so I think it's past time to allow them to +2
as well.


+1 :)



Current cores, please reply with your vote.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Dmitry Tantsur

On 06/13/2016 06:28 PM, Ben Nemec wrote:

On 06/13/2016 09:41 AM, Jiri Tomasek wrote:

Hi all,

As we are close to merging the initial Nodes Registration workflows and
action [1, 2] using Mistral which successfully provides the current
registration logic via common API, I'd like to start discussion on how
to improve it so it satisfies GUI and CLI requirements. I'd like to try
to describe the clients goals and define requirements, describe current
workflow problems and propose a solution. I'd like to record the result
of discussion to Blueprint [3] which Ryan already created.


CLI goals and optimal workflow


CLI's main benefit is based on the fact that its commands can simply
become part of a script, so it is important that the operation is
idempotent. The optimal CLI workflow is:

User runs 'openstack baremetal import' and provides instackenv.json file
which includes all nodes information. When the registration fails at
some point, user is notified about the error and re-runs the command
with the same set of nodes. Rinse and repeat until all nodes are
properly registered.


I would actually not describe this as the optimal workflow for CLI
registration either.  It would be much better if the registration
completed for all the nodes that it can in the first place and then any
failed nodes can be cleaned up later.  There's no reason one bad node in
a file containing 100 nodes should block the entire deployment.

On that note, the only part of your proposal that I'm not sure would be
useful for the CLI is the non-blocking part.  I don't know that a CLI
fire-and-forget mode makes a lot of sense, although if there's a way for
the CLI to check the status then that might be fine too.  However,
pretty much all of the other usability stuff you're talking about would
benefit the CLI too.




GUI goals and optimal workflow
=

GUI's main goal is to provide a user friendly way to register nodes,
inform the user on the process, handle the problems and lets user fix
them. GUI strives for being responsive and interactive.

The GUI allows the user to add node specifications manually one by one via
the provided form, or (in the same manner as the CLI) to provide the
instackenv.json file which holds the nodes description. Importing the
file (or adding node manually) will populate an array of nodes the user
wants to register. User is able to browse these nodes and make
corrections to their configuration. GUI provides client side validations
to verify inputs (node name format, required fields, mac address, ip
address format etc.)

Then user triggers the registration. The nodes are moved to nodes table
as they are being registered. If an error occurs during registration of
any of the nodes, user is notified about the issue and can fix it in
registration form and can re-trigger registration for failed nodes.
Rinse and repeat until all nodes are successfully registered and in
proper state (manageable).

Such workflow keeps the GUI interactive, user does not have to look at
the spinner for several minutes (in case of registering hundreds of
nodes), hoping that nothing goes wrong. User is constantly
informed about the progress, user is able to react to the situation as
he wants, User is able to freely interact with the GUI while
registration is happening on the background. User is able to register
nodes in batches.


Current solution
=

Current solution uses register_or_update_nodes workflow [1] which takes
a nodes_json array and runs register_or_update_nodes and
set_nodes_managed tasks. When the whole operation completes it sends
Zaqar message notifying about the result of the registration of the
whole batch of nodes.

register_or_update_nodes runs tripleo.register_or_update_nodes action
[2] which uses business logic in tripleo_common/utils/nodes.py

utils.nodes.py module has been originally extracted from tripleoclient
to get the business logic behind the common API. It does following:

- converts the instackenv.json nodes format to appropriate ironic driver
format (driver-info fields)
- sets kernel and ramdisk ids defaults if they're not provided
- for each node it tests if node already exists (finds nodes by mac
addresses) and updates it or registers it as new based on the result.


Current Problems:
- no zaqar notification is sent for each node
- nodes are registered in batch, registration fails when error happens
on a certain node, leaving already registered nodes in inconsistent state
- workflow does not notify user about what nodes have been registered
and what failed, only thing user gets is relevant error message
- when the workflow succeeds, the registered_nodes list sent by Zaqar
message has outdated information
- when nodes are updated using nodes registration, the workflow ends up
as failed, without any error output, although the nodes are updated
successfully

- utils/nodes.py decides whether the node should be created or updated
based on mac address 

Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Dmitry Tantsur

On 06/13/2016 04:41 PM, Jiri Tomasek wrote:

Hi all,

As we are close to merging the initial Nodes Registration workflows and
action [1, 2] using Mistral which successfully provides the current
registration logic via common API, I'd like to start discussion on how
to improve it so it satisfies GUI and CLI requirements. I'd like to try
to describe the clients goals and define requirements, describe current
workflow problems and propose a solution. I'd like to record the result
of discussion to Blueprint [3] which Ryan already created.


Hi and thanks for writing this up. Just a few clarifying comments inline.




CLI goals and optimal workflow


CLI's main benefit is based on the fact that its commands can simply
become part of a script, so it is important that the operation is
idempotent. The optimal CLI workflow is:

User runs 'openstack baremetal import' and provides instackenv.json file
which includes all nodes information. When the registration fails at
some point, user is notified about the error and re-runs the command
with the same set of nodes. Rinse and repeat until all nodes are
properly registered.


Note that while in your example everything works, the command is not 
idempotent, and e.g. running it in the middle of deployment will 
probably cause funny things to happen.
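
(For readers unfamiliar with the format, an instackenv.json for a single
IPMI-managed node looks roughly like this:)

  {
    "nodes": [
      {
        "pm_type": "pxe_ipmitool",
        "pm_addr": "10.0.0.11",
        "pm_user": "admin",
        "pm_password": "secret",
        "mac": ["52:54:00:aa:bb:cc"],
        "cpu": "4",
        "memory": "8192",
        "disk": "40",
        "arch": "x86_64"
      }
    ]
  }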





GUI goals and optimal workflow
=

GUI's main goal is to provide a user friendly way to register nodes,
inform the user on the process, handle the problems and lets user fix
them. GUI strives for being responsive and interactive.

The GUI allows the user to add node specifications manually one by one via
the provided form, or (in the same manner as the CLI) to provide the
instackenv.json file which holds the nodes description. Importing the
file (or adding node manually) will populate an array of nodes the user
wants to register. User is able to browse these nodes and make
corrections to their configuration. GUI provides client side validations
to verify inputs (node name format, required fields, mac address, ip
address format etc.)


It's worth noting that Ironic has an API to provide required and optional 
properties for all drivers. But of course not in instackenv format ;)
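
(That is, something along these lines — treat the output as illustrative:)

  $ ironic driver-properties pxe_ipmitool
  # lists e.g. ipmi_address, ipmi_username, ipmi_password together with a
  # description of each required/optional property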




Then user triggers the registration. The nodes are moved to nodes table
as they are being registered. If an error occurs during registration of
any of the nodes, user is notified about the issue and can fix it in
registration form and can re-trigger registration for failed nodes.
Rinse and repeat until all nodes are successfully registered and in
proper state (manageable).

Such workflow keeps the GUI interactive, user does not have to look at
the spinner for several minutes (in case of registering hundreds of
nodes), hoping that nothing goes wrong. User is constantly
informed about the progress, user is able to react to the situation as
he wants, User is able to freely interact with the GUI while
registration is happening on the background. User is able to register
nodes in batches.


Current solution
=

Current solution uses register_or_update_nodes workflow [1] which takes
a nodes_json array and runs register_or_update_nodes and
set_nodes_managed tasks. When the whole operation completes it sends
Zaqar message notifying about the result of the registration of the
whole batch of nodes.

register_or_update_nodes runs tripleo.register_or_update_nodes action
[2] which uses business logic in tripleo_common/utils/nodes.py

utils.nodes.py module has been originally extracted from tripleoclient
to get the business logic behind the common API. It does following:

- converts the instackenv.json nodes format to appropriate ironic driver
format (driver-info fields)
- sets kernel and ramdisk ids defaults if they're not provided
- for each node it tests if node already exists (finds nodes by mac
addresses) and updates it or registers it as new based on the result.


Current Problems:
- no zaqar notification is sent for each node
- nodes are registered in batch, registration fails when error happens
on a certain node, leaving already registered nodes in inconsistent state
- workflow does not notify user about what nodes have been registered
and what failed, only thing user gets is relevant error message
- when the workflow succeeds, the registered_nodes list sent by Zaqar
message has outdated information
- when nodes are updated using nodes registration, the workflow ends up
as failed, without any error output, although the nodes are updated
successfully

- utils/nodes.py decides whether the node should be created or updated
based on mac address which is subject to change. It needs to be done by
UUID which is fixed.


Well, only if a user/CLI/GUI generates this UUID.

Also it doesn't use only MACs. We don't require MACs for non-virtual 
nodes, so it also uses some smart matching with BMC credentials in the 
same fashion as Ironic Inspector.



- utils/nodes.py uses instackenv.json nodes list 

Re: [openstack-dev] [ironic][infra][qa] Ironic grenade work nearly complete

2016-06-10 Thread Dmitry Tantsur

On 06/10/2016 02:16 PM, Jim Rollenhagen wrote:

On Thu, Jun 09, 2016 at 03:21:05PM -0700, Jay Faulkner wrote:

A quick update:

The devstack-gate patch is currently merging.

There was some discussion about whether or not the Ironic grenade job should
be in the check pipeline (even as -nv) for grenade, so I split that patch
into two pieces so the less controversial part (adding the grenade-nv job to
Ironic's check pipeline) could merge more easily.

https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for Ironic only)

https://review.openstack.org/#/c/327985/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for grenade)

Getting working upgrade testing will be a huge milestone for Ironic. Thanks
to those who have already helped us make progress and those who will help us
land these and see it at work.


The first of these is merged, which gets us most of what we needed.
Thanks everyone for your hard work on this!


Let me also say thank you to all the folks who spent a huge amount of time 
testing, hacking, fixing and reviewing stuff for us to pass this 
important milestone. Thanks!




/me rechecks some things waiting for this testing :D

// jim



Thanks in advance,
Jay Faulkner
OSIC

On 6/9/16 8:28 AM, Jim Rollenhagen wrote:

Hi friends,

We're two patches away from having grenade passing in our check queue!
This is a huge step forward for us, many thanks go to the numerous folks
that have worked on or helped somehow with this.

I'd love to push this across the line today as it's less than 10 lines
of changes between the two, and we have a bunch of work nearly done that
we'd like upgrade testing running against before merging.

So we need infra cores' help here.

https://review.openstack.org/#/c/316662/ - devstack-gate
Allow to pass OS_TEST_TIMEOUT for grenade job
1 line addition with an sdague +2.

https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue)
+7,-1 with an AJaeger +2.

Thanks in advance. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Dmitry Tantsur

On 06/07/2016 02:01 AM, Devananda van der Veen wrote:


On 06/06/2016 01:44 PM, Kris G. Lindgren wrote:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created the following
in an attempt to document some of my concerns, and I'm wondering if you folks
could help me identify ongoing work to solve these (or alternatives?)
List of concerns with ironic:


Hi Kris,

There is a lot of ongoing work in and around the Ironic project. Thanks for
diving in and for sharing your concerns; you're not alone.

I'll respond to each group of concerns, as some of these appear quite similar to
each other and align with stuff we're already doing. Hopefully I can provide
some helpful background to where the project is at today.



1.) Nova <-> ironic interactions generally seem terrible?


These two projects are coming at the task of managing "compute" with
significantly different situations and we've been working, for the last ~2
years, to build a framework that can provide both virtual and physical resources
through one API. It's not a simple task, and we have a lot more to do.



  -How to accept raid config and partitioning(?) from end users? There seems to
be no agreed-upon method between nova/ironic yet.


Nova expresses partitioning in a very limited way on the flavor. You get root,
swap, and ephemeral partitions -- and that's it. Ironic honors those today, but
they're pinned on the flavor definition, eg. by the cloud admin (or whoever can
define the flavor).

If your users need more complex partitioning, they could create additional
partitions after the instance is created. This limitation within Ironic exists,
in part, because the projects' goal is to provide hardware through the OpenStack
Compute API -- which doesn't express arbitrary partitionability. (If you're
interested, there is a lengthier and more political discussion about whether the
cloud should support "pets" and whether arbitrary partitioning is needed for
"cattle".)


RAID configuration isn't something that Nova allows their users to choose today
- it doesn't fit in the Nova model of "compute", and there is, to my knowledge,
nothing in the Nova API to allow its input. We've discussed this a little bit,
but so far settled on leaving it up to the cloud admin to set this in Ironic.

There has been discussion with the Cinder community over ways to express volume
spanning and mirroring, but apply it to a machines' local disks, but these
discussions didn't result in any traction.

There's also been discussion of ways we could do ad-hoc changes in RAID level,
based on flavor metadata, during the provisioning process (rather than ahead of
time) but no code has been done for this yet, AFAIK.


I'm still pretty interested in it, because I agree with what was said 
above about building RAID ahead-of-time not being convenient. I don't 
quite understand how such a feature would look like, we might add it as 
a topic for midcycle.




So, where does that leave us? With the "explosion of flavors" that you
described. It may not be ideal, but that is the common ground we've reached.


   -How to run multiple conductors/nova-computes?   Right now as far as I can
tell all of ironic front-end by a single nova-compute, which I will have to
manage via a cluster technology between two or mode nodes.  Because of this and
the way host-agregates work I am unable to expose fault domains for ironic
instances (all of ironic can only be under a single AZ (the az that is assigned
to the nova-compute node)). Unless I create multiple nova-compute servers and
manage multiple independent ironic setups.  This makes on-boarding/query of
hardware capacity painful.


Yep. It's not ideal, and the community is very well aware of, and actively
working on, this limitation. It also may not be as bad as you may think. The
nova-compute process doesn't do very much, and tests show it handling some
thousands of ironic nodes fairly well in parallel. Standard active-passive
management of that process should suffice.

A lot of design work has been done to come up with a joint solution by folks on
both the Ironic and Nova teams.
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/ironic-multiple-compute-hosts.html

As a side note, it's possible (though not tested, recommended, or well
documented) to run more than one nova-compute. See
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py


  - Nova appears to be forcing a "we are compute, as long as compute means VMs"
stance, which means that we will have a baremetal flavor explosion (ie the
mismatch between baremetal and VM).
  - This is a feeling I got from the ironic-nova cross project meeting in
Austin.  A general example goes back to the raid config above. I can configure a
single piece of hardware many different ways, but to fit into nova's world view
I need to have many different flavors exposed to the end-user.  In this way many
flavors can map back to a single piece of hardware with just a 

Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?

2016-06-02 Thread Dmitry Tantsur
On Jun 2, 2016 at 10:19 PM, "Loo, Ruby" 
wrote:
>
> Hi,
>
> I recently reviewed a patch [1] that is trying to address an issue with
ironic (master) talking to a ramdisk that has a mitaka IPA lurking around.
>
> It made me think that IPA may no longer be a teenager (yay, boo). IPA now
has a stable branch. I think it is time it grows up and acts responsibly.
Ironic needs to know which era of IPA it is talking to. Or conversely, does
ironic want to specify which microversion of IPA it wants to use? (Sorry,
Dmitry, I realize you are cringing.)

With versioning in place we'll have to fix one IPA version in ironic.
Meaning, as soon as we introduce a new feature, we have to explicitly break
compatibility with old ramdisk by requesting a version it does not support.
Even if the feature itself is optional. Or we have to wait a long time
before using new IPA features in ironic. I hate both options.

Well, or we can use some different versioning procedure :)

>
> Has anyone thought more than I have about this (i.e., more than 2ish
minutes)?
>
> If the solution (whatever it is) is going to take a long time to
implement, is there anything we can do in the short term (ie, in this
cycle)?
>
> --ruby
>
> [1] https://review.openstack.org/#/c/319183/
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-01 Thread Dmitry Tantsur

On 06/01/2016 02:20 PM, Jason Guiditta wrote:

On 01/06/16 18:49 +0800, Xingchao Yu wrote:

  Hi, everyone:

  Do we need to give an abbreviation for the PuppetOpenstack project? B/C
  it's really a long name to say when I introduce this project to people or
  write articles about it.

  How about POM(PuppetOpenstack Modules) or POP(PuppetOpenstack
  Project) ?

  I would like +1 for POM.
  Just an idea, please feel free to give your comment :D
  Xingchao Yu


For rdo and osp, we package it as 'openstack-puppet-modules', or OPM
for short.


I definitely love POM as it reminds me of pomeranians :) but I agree 
that OPM will probably be more easily recognizable.




-j

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-05-31 Thread Dmitry Tantsur

On 05/31/2016 10:25 AM, Tan, Lin wrote:

Hi,

Recently, I have been working on a spec [1] to recover nodes which get stuck 
in deploying state, so I really expect some feedback from you guys.

Ironic nodes can be stuck in 
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is 
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node 
resource is under control of another conductor.

To be more clear, let's narrow the scope and focus on the deploying state 
first. Currently, people do have several choices to clear the reserved lock:
1. restart the dead conductor
2. wait up to 2 or 3 minutes and _check_deploying_states() will clear the lock.
3. The operator touches the DB to manually recover these nodes.

Option two looks very promising but there are some weaknesses:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live 
conductors.
2.3 It won't work if the node is in maintenance. (only a corner case).


We can and should fix all three cases.



Definitely we should improve option 2, but there could be more issues I 
don't know about in a more complicated environment.
So my question is: do we still need a new command to recover these nodes more 
easily without accessing the DB, like this PoC [2]:
  ironic-noderecover --node_uuids=UUID1,UUID2  
--config-file=/etc/ironic/ironic.conf


I'm -1 to anything silently removing the lock until I see a clear use 
case which is impossible to address by improving Ironic itself. Such a 
utility may and will be abused.


I'm fine with anything that does not forcibly remove the lock by default.



Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-05-26 Thread Dmitry Tantsur

On 05/25/2016 08:08 PM, Thomas Herve wrote:

On Fri, May 20, 2016 at 5:52 PM, Jiri Tomasek  wrote:

Hey all,

I've been recently working on getting the TripleO UI integrated with Zaqar,
so it can receive a messages from Mistral workflows and act upon them
without having to do various polling hacks.

Since there are currently quite a large number of new TripleO workflows
coming to tripleo-common, we need to standardize this communication so
clients can consume the messages consistently.

I'll try to outline the requirements as I see them to start the discussion.

Zaqar queues:
To listen to Zaqar messages, the client needs to connect to the Zaqar
WebSocket, send an authentication message and subscribe to the queue(s) it
wants to listen to. The currently pending workflow patches which send Zaqar
messages [1, 2] expect that the queue is created by the client and its name
is passed as an input to the workflow [3].
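
For context, the handshake looks roughly like this (quoting from memory, so
please check the Zaqar websocket API reference for the exact field names):

  {"action": "authenticate",
   "headers": {"X-Auth-Token": "<token>", "X-Project-ID": "<project>"}}

  {"action": "subscription_create",
   "headers": {"Client-ID": "<client uuid>", "X-Project-ID": "<project>"},
   "body": {"queue": "tripleo", "ttl": 3600}}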

From the client perspective, it would IMHO be better if all workflows sent
messages to the same queue and provided means to identify themselves by
carrying the workflow name and execution id. The reason is that if a client
creates a queue, triggers the workflow and then disconnects from the socket
(the user refreshes the browser), it does not know what queues it previously
created and which it should listen to. If there is a single 'tripleo' queue,
then all clients always know that it is where they will get all the messages
from.

Messages identification and content:
The client should be able to identify a message by its name so it can act
upon it. The name should probably be relevant to the action or workflow it
reports on.

{
  body: {
    name: 'tripleo.validations.v1.run_validation',
    execution_id: '123123123',
data: {}
  }
}

Other parts of the message are optional but it would be good to provide
information relevant to the message's purpose, so the client can update
relevant state and does not have to do any additional API calls. So e.g. in
case of running a validation the message includes the validation id.
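
For example (illustrative values only, the exact payload is up for
discussion):

{
  body: {
    name: 'tripleo.validations.v1.run_validation',
    execution_id: '123123123',
    data: {
      validation_id: '512e4143'
    }
  }
}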


Hi,

Sorry for not responding earlier, but I have some inputs. In Heat we
publish events on a Zaqar queue, and we defined this format:

{
'timestamp': $timestamp,
'version': '0.1',
'type': 'os.heat.event',
'id': $uuid,
'payload': {
'XXX
}
}

I don't think we have strong requirements on that, and we can
certainly make some tweaks. If we can converge towards something
similar, that'd be great.


One more datapoint here. Ironic agreed on the following format:
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/notifications.html#proposed-change
but I'm not sure whether we have implemented it already.
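
For comparison, the envelope from that spec looks roughly like this (quoting
from memory; the spec is the authoritative source):

{
    "priority": "info",
    "event_type": "baremetal.node.power_set.end",
    "timestamp": "2016-05-26 15:02:43.429328",
    "publisher_id": "ironic-conductor.hostname01",
    "message_id": "<uuid>",
    "payload": { <versioned object fields> }
}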



Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Dropping legacy bash ramdisk support

2016-05-23 Thread Dmitry Tantsur

On 05/11/2016 02:54 PM, Jim Rollenhagen wrote:

Hi all!

As you probably know, the old bash-based ramdisks for ironic [1] and
ironic-inspector [2] have been deprecated for a long time. The time has
come: we are removing support for them from our code base in the near
future.

Here is the draft plan:

1. Remove the gate-tempest-dsvm-ironic-pxe_ssh-dib job from
   diskimage-builder.

2. Remove the gate-tempest-dsvm-ironic-pxe_ssh job from Ironic master
   only.


FYI: these 2 changes are in effect now. I forgot to drop the job from 
dib-utils; the patch is on its way.




3. Remove support for the old ramdisk from ironic-inspector and its
   devstack plugin. The last versions still supporting the old ramdisk will
   be 3.3.0 (to be released soon), Mitaka and older.

4. Remove support for the old ramdisk from ironic and its devstack
   plugin. The last version still supporting the old ramdisk will be
   Mitaka.

The DIB elements themselves won't be removed per DIB policy. But they
will be completely unsupported for the next releases (starting with N2).

[1] 
https://github.com/openstack/diskimage-builder/tree/master/elements/deploy-ironic
[2] 
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-23 Thread Dmitry Tantsur

On 05/21/2016 08:35 PM, Dan Prince wrote:

On Fri, 2016-05-20 at 14:06 +0200, Dmitry Tantsur wrote:

On 05/20/2016 01:44 PM, Dan Prince wrote:


On Thu, 2016-05-19 at 15:31 +0200, Dmitry Tantsur wrote:


Hi all!

We started some discussions on https://review.openstack.org/#/c/300200/
about the future of node management (registering, configuring and
introspecting) in the new API, but I think it's more fair (and
convenient) to move it here. The goal is to fix several long-
standing
design flaws that affect the logic behind tripleoclient. So
fasten
your
seatbelts, here it goes.

If you already understand why we need to change this logic, just
scroll
down to "what do you propose?" section.

"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command
for
introspection:

  openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from
ironic-inspector project, it's part of TripleO itself. And the
ironic
team has some big problems with it.

The way it works is

1. Take all nodes in "available" state and move them to
"manageable"
state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available"
state.

Step 3 is pretty controversial, step 1 is just horrible. This is not how
the ironic-inspector team designed introspection to work (hence it
refuses to run on nodes in "available" state), and that's not how the
ironic team expects the ironic state machine to be handled. To explain
it I'll provide brief information on the ironic state machine.

ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11),
nodes
begin their life in a state called "enroll". Nodes in this state
are
not
available for deployment, nor for most other actions. Ironic
does
not
touch such nodes in any way.

To make nodes alive an operator uses "manage" provisioning action
to
move nodes to "manageable" state. During this transition the
power
and
management credentials (IPMI, SSH, etc) are validated to ensure
that
nodes in "manageable" state are, well, manageable. This state is
still
not available for deployment. With nodes in this state an
operator
can
execute various pre-deployment actions, such as introspection,
RAID
configuration, etc. So to sum it up, nodes in "manageable" state
are
being configured before exposing them into the cloud.

The last step before the deployment is to make nodes "available"
using
the "provide" provisioning action. Such nodes are exposed to
nova,
and
can be deployed to at any moment. No long-running configuration
actions
should be run in this state. The "manage" action can be used to
bring
nodes back to "manageable" state for configuration (e.g.
reintrospection).

so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by
keeping
all nodes "available" and walking them through provisioning steps
automatically. Just a couple of examples of what gets broken:

(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
deployment (including potential autoscaling) and I want to enroll
10
more nodes.

Both introspection and ready-state operations nowadays will touch
both
10 new nodes AND 10 nodes which are ready for deployment,
potentially
making the latter not ready for deployment any more (and
definitely
moving them out of pool for some time).

Particularly, any manual configuration made by an operator before
making
nodes "available" may get destroyed.

(2) TripleO has to disable automated cleaning. Automated cleaning
is
a
set of steps (currently only wiping the hard drive) that happens
in
ironic 1) before nodes are available, 2) after an instance is
deleted.
As TripleO CLI constantly moves nodes back-and-forth from and to
"available" state, cleaning kicks in every time. Unless it's
disabled.

Disabling cleaning might sound a sufficient work around, until
you
need
it. And you actually do. Here is a real life example of how to
get
yourself broken by not having cleaning:

a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.

This sounds like an Ironic bug to me. Cleaning (wiping a disk) and
removing state that would break subsequent installations on a
different
drive are different things. In TripleO I think the reason we
disable
cleaning is largely because of the extra time it takes and the fact
that our baremetal cloud isn't multi-tenant (currently at least).

We fix this "bug" by introducing cleaning. This is the process to 
guarantee each deployment starts with a clean environment. It's hard to 
know which remaining data can cause which problem (e.g. what about a 
remaining UEFI partition? any remains of Ceph? I don't know).

Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 03:42 PM, John Trowbridge wrote:



On 05/19/2016 09:31 AM, Dmitry Tantsur wrote:

Hi all!

We started some discussions on https://review.openstack.org/#/c/300200/
about the future of node management (registering, configuring and
introspecting) in the new API, but I think it's more fair (and
convenient) to move it here. The goal is to fix several long-standing
design flaws that affect the logic behind tripleoclient. So fasten your
seatbelts, here it goes.

If you already understand why we need to change this logic, just scroll
down to "what do you propose?" section.

"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command for
introspection:

 openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from
ironic-inspector project, it's part of TripleO itself. And the ironic
team has some big problems with it.

The way it works is

1. Take all nodes in "available" state and move them to "manageable" state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available" state.

Step 3 is pretty controversial, step 1 is just horrible. This is not how
the ironic-inspector team designed introspection to work (hence it
refuses to run on nodes in "available" state), and that's not how the
ironic team expects the ironic state machine to be handled. To explain
it I'll provide brief information on the ironic state machine.

ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11), nodes
begin their life in a state called "enroll". Nodes in this state are not
available for deployment, nor for most other actions. Ironic does not
touch such nodes in any way.

To make nodes alive an operator uses "manage" provisioning action to
move nodes to "manageable" state. During this transition the power and
management credentials (IPMI, SSH, etc) are validated to ensure that
nodes in "manageable" state are, well, manageable. This state is still
not available for deployment. With nodes in this state an operator can
execute various pre-deployment actions, such as introspection, RAID
configuration, etc. So to sum it up, nodes in "manageable" state are
being configured before exposing them into the cloud.

The last step before the deployment is to make nodes "available" using
the "provide" provisioning action. Such nodes are exposed to nova, and
can be deployed to at any moment. No long-running configuration actions
should be run in this state. The "manage" action can be used to bring
nodes back to "manageable" state for configuration (e.g. reintrospection).

so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by keeping
all nodes "available" and walking them through provisioning steps
automatically. Just a couple of examples of what gets broken:

(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
deployment (including potential autoscaling) and I want to enroll 10
more nodes.

Both introspection and ready-state operations nowadays will touch both
10 new nodes AND 10 nodes which are ready for deployment, potentially
making the latter not ready for deployment any more (and definitely
moving them out of pool for some time).

Particularly, any manual configuration made by an operator before making
nodes "available" may get destroyed.

(2) TripleO has to disable automated cleaning. Automated cleaning is a
set of steps (currently only wiping the hard drive) that happens in
ironic 1) before nodes are available, 2) after an instance is deleted.
As TripleO CLI constantly moves nodes back-and-forth from and to
"available" state, cleaning kicks in every time. Unless it's disabled.

Disabling cleaning might sound a sufficient work around, until you need
it. And you actually do. Here is a real life example of how to get
yourself broken by not having cleaning:

a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.

As we didn't pass cleaning, there is still a config drive on the disk
used in the first deployment. With 2 config drives present cloud-init
will pick a random one, breaking the deployment.

To top it all, TripleO users tend to not use root device hints, so
switching root disks may happen randomly between deployments. Have fun
debugging.

what do you propose?


I would like the new TripleO mistral workflows to start following the
ironic state machine closer. Imagine the following workflows:

1. register: take JSON, create nodes in "manageable" state. I do believe
we can automate the enroll->manageable transition, as it serves the
purpose of validation (and discovery, but let's put it aside).

Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 02:54 PM, Steven Hardy wrote:

Hi Dmitry,

Thanks for the detailed write-up, some comments below:

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:


what do you propose?


I would like the new TripleO mistral workflows to start following the ironic
state machine closer. Imagine the following workflows:

1. register: take JSON, create nodes in "manageable" state. I do believe we
can automate the enroll->manageable transition, as it serves the purpose of
validation (and discovery, but let's put it aside).

2. provide: take a list of nodes or all "manageable" nodes and move them to
"available". By using this workflow an operator will make a *conscious*
decision to add some nodes to the cloud.

3. introspect: take a list of "manageable" (!!!) nodes or all "manageable"
nodes and move them through introspection. This is an optional step between
"register" and "provide".

4. set_node_state: a helper workflow to move nodes between states. The
"provide" workflow is essentially set_node_state with verb=provide, but is
separate due to its high importance in the node lifecycle.

5. configure: given a couple of parameters (deploy image, local boot flag,
etc), update given or all "manageable" nodes with them.

Essentially the only addition here is the "provide" action which I hope you
already realize should be an explicit step.

what about tripleoclient


Of course we want to keep backward compatibility. The existing commands

 openstack baremetal import
 openstack baremetal configure boot
 openstack baremetal introspection bulk start

will use some combinations of workflows above and will be deprecated.

The new commands (also avoiding hijacking into the bare metal namespaces)
will be provided strictly matching the workflows (especially in terms of the
state machine):

 openstack overcloud node import
 openstack overcloud node configure
 openstack overcloud node introspect
 openstack overcloud node provide


So, provided we maintain backwards compatibility this sounds OK, but one
question - is there any alternative approach that might solve this problem
more generally, e.g not only for TripleO?


I was thinking about that.

We could move the import command to ironicclient, but then it won't support 
the TripleO format and additions. It's still a good thing to have; I'll 
talk about it upstream.


As to introspect and provide, the only thing which is different from 
ironic analogs is that ironic commands don't act on "all nodes in XXX 
state", and I don't think they ever will.




Given that we're likely to implement these workflows in mistral, it
probably does make sense to switch to a TripleO specific namespace, but I
can't help wondering if we're solving a general problem in a TripleO
specific way - e.g isn't this something any user adding nodes from an
inventory, introspecting them and finally making them available for
deployment going to need?

Also, and it may be too late to fix this, "openstack overcloud node" is
kinda strange, because we're importing nodes on the undercloud, which could
in theory be used for any purpose, not only overcloud deployments.


I agree, but keeping our stuff in ironic's namespace leads to even more 
confusion and even potential conflicts (e.g. we can't introduce 
"baremetal import", because tripleo reserved it).




We've already done arguably the wrong thing with e.g openstack overcloud image
upload (which, actually, uploads images to the undercloud), but I wanted to
point out that we're maintaining that inconsistency with your proposed
interface (which may be the least-bad option I suppose).

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 01:44 PM, Dan Prince wrote:

On Thu, 2016-05-19 at 15:31 +0200, Dmitry Tantsur wrote:

Hi all!

We started some discussions on https://review.openstack.org/#/c/300200/
about the future of node management (registering, configuring and
introspecting) in the new API, but I think it's more fair (and
convenient) to move it here. The goal is to fix several long-
standing
design flaws that affect the logic behind tripleoclient. So fasten
your
seatbelts, here it goes.

If you already understand why we need to change this logic, just
scroll
down to "what do you propose?" section.

"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command
for
introspection:

  openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from
ironic-inspector project, it's part of TripleO itself. And the
ironic
team has some big problems with it.

The way it works is

1. Take all nodes in "available" state and move them to "manageable"
state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available" state.

Step 3 is pretty controversial, step 1 is just horrible. This is not how
the ironic-inspector team designed introspection to work (hence it
refuses to run on nodes in "available" state), and that's not how the
ironic team expects the ironic state machine to be handled. To explain
it I'll provide brief information on the ironic state machine.

ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11),
nodes
begin their life in a state called "enroll". Nodes in this state are
not
available for deployment, nor for most other actions. Ironic does
not
touch such nodes in any way.

To make nodes alive an operator uses "manage" provisioning action to
move nodes to "manageable" state. During this transition the power
and
management credentials (IPMI, SSH, etc) are validated to ensure that
nodes in "manageable" state are, well, manageable. This state is
still
not available for deployment. With nodes in this state an operator
can
execute various pre-deployment actions, such as introspection, RAID
configuration, etc. So to sum it up, nodes in "manageable" state are
being configured before exposing them into the cloud.

The last step before the deployment is to make nodes "available"
using
the "provide" provisioning action. Such nodes are exposed to nova,
and
can be deployed to at any moment. No long-running configuration
actions
should be run in this state. The "manage" action can be used to
bring
nodes back to "manageable" state for configuration (e.g.
reintrospection).

so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by
keeping
all nodes "available" and walking them through provisioning steps
automatically. Just a couple of examples of what gets broken:

(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
deployment (including potential autoscaling) and I want to enroll 10
more nodes.

Both introspection and ready-state operations nowadays will touch
both
10 new nodes AND 10 nodes which are ready for deployment,
potentially
making the latter not ready for deployment any more (and definitely
moving them out of pool for some time).

Particularly, any manual configuration made by an operator before
making
nodes "available" may get destroyed.

(2) TripleO has to disable automated cleaning. Automated cleaning is
a
set of steps (currently only wiping the hard drive) that happens in
ironic 1) before nodes are available, 2) after an instance is
deleted.
As TripleO CLI constantly moves nodes back-and-forth from and to
"available" state, cleaning kicks in every time. Unless it's
disabled.

Disabling cleaning might sound a sufficient work around, until you
need
it. And you actually do. Here is a real life example of how to get
yourself broken by not having cleaning:

a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.


This sounds like an Ironic bug to me. Cleaning (wiping a disk) and
removing state that would break subsequent installations on a different
drive are different things. In TripleO I think the reason we disable
cleaning is largely because of the extra time it takes and the fact
that our baremetal cloud isn't multi-tenant (currently at least).


We fix this "bug" by introducing cleaning. This is the process to 
guarantee each deployment starts with a clean environment. It's hard to 
know which remaining data can cause which problem (e.g. what about a 
remaining UEFI partition? any remains of Ceph? I don't know).






As we didn't pass cleaning, there is still a config drive on the disk used in the first deployment.

[openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-19 Thread Dmitry Tantsur

Hi all!

We started some discussions on https://review.openstack.org/#/c/300200/ 
about the future of node management (registering, configuring and 
introspecting) in the new API, but I think it's more fair (and 
convenient) to move it here. The goal is to fix several long-standing 
design flaws that affect the logic behind tripleoclient. So fasten your 
seatbelts, here it goes.


If you already understand why we need to change this logic, just scroll 
down to "what do you propose?" section.


"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command for 
introspection:


 openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from 
the ironic-inspector project; it's part of TripleO itself. And the ironic 
team has some big problems with it.


The way it works is

1. Take all nodes in "available" state and move them to "manageable" state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available" state.

Step 3 is pretty controversial, step 1 is just horrible. This is not how 
the ironic-inspector team designed introspection to work (hence it 
refuses to run on nodes in "available" state), and that's not how the 
ironic team expects the ironic state machine to be handled. To explain 
it I'll provide brief information on the ironic state machine.


ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11), nodes 
begin their life in a state called "enroll". Nodes in this state are not 
available for deployment, nor for most other actions. Ironic does not 
touch such nodes in any way.


To make nodes alive, an operator uses the "manage" provisioning action to 
move nodes to "manageable" state. During this transition the power and 
management credentials (IPMI, SSH, etc) are validated to ensure that 
nodes in "manageable" state are, well, manageable. This state is still 
not available for deployment. With nodes in this state an operator can 
execute various pre-deployment actions, such as introspection, RAID 
configuration, etc. So to sum it up, nodes in "manageable" state are 
being configured before exposing them into the cloud.


The last step before the deployment is to make nodes "available" using 
the "provide" provisioning action. Such nodes are exposed to nova, and 
can be deployed to at any moment. No long-running configuration actions 
should be run in this state. The "manage" action can be used to bring 
nodes back to "manageable" state for configuration (e.g. reintrospection).
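
For illustration, walking a node through these states with
python-ironicclient looks roughly like this (a sketch; the exact flags
depend on your client version):

  ironic --ironic-api-version 1.11 node-create -d pxe_ipmitool ...
  # the new node starts in "enroll"
  ironic node-set-provision-state <uuid> manage
  # enroll -> (credentials are verified) -> manageable
  ironic node-set-provision-state <uuid> provide
  # manageable -> (cleaning, if enabled) -> available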


so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by keeping 
all nodes "available" and walking them through provisioning steps 
automatically. Just a couple of examples of what gets broken:


(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for 
deployment (including potential autoscaling) and I want to enroll 10 
more nodes.


Both introspection and ready-state operations nowadays will touch both 
10 new nodes AND 10 nodes which are ready for deployment, potentially 
making the latter not ready for deployment any more (and definitely 
moving them out of pool for some time).


Particularly, any manual configuration made by an operator before making 
nodes "available" may get destroyed.


(2) TripleO has to disable automated cleaning. Automated cleaning is a 
set of steps (currently only wiping the hard drive) that happens in 
ironic 1) before nodes are available, 2) after an instance is deleted. 
As TripleO CLI constantly moves nodes back-and-forth from and to 
"available" state, cleaning kicks in every time. Unless it's disabled.


Disabling cleaning might sound like a sufficient workaround, until you need 
it. And you actually do. Here is a real life example of how to get 
yourself broken by not having cleaning:


a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.

As we didn't pass cleaning, there is still a config drive on the disk 
used in the first deployment. With 2 config drives present cloud-init 
will pick a random one, breaking the deployment.


To top it all, TripleO users tend to not use root device hints, so 
switching root disks may happen randomly between deployments. Have fun 
debugging.
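
(For the record, setting a root device hint is a one-liner; the serial
number below is obviously made up:

  ironic node-update <uuid> add properties/root_device='{"serial": "0123456789abcdef"}'

so there is little excuse not to use them.)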


what do you propose?


I would like the new TripleO mistral workflows to start following the 
ironic state machine more closely. Imagine the following workflows:


1. register: take JSON, create nodes in "manageable" state. I do believe 
we can automate the enroll->manageable transition, as it serves the 
purpose of validation (and discovery, but let's put it aside).


2. provide: take a list of nodes or all "manageable" nodes and move them 
to "available". By using this workflow an operator will make a 
*conscious* decision to add some nodes to the cloud.

Re: [openstack-dev] [tc] supporting Go

2016-05-19 Thread Dmitry Tantsur

On 05/19/2016 12:58 PM, Robert Collins wrote:

On 19 May 2016 at 22:40, Dmitry Tantsur <dtant...@redhat.com> wrote:



You are correct that my position is subjective, but it is based on my
experiences trying to operate and deploy OpenStack in addition to
writing code. The draw of Go, in my experience, has been easily
deploying a single binary I've been able to build and test consistently.
The target system doesn't require Go installed at all and it works
on old distros. And it has been much faster.



.. this is something distributions would never do or encourage. Ask zigo for
reasons :)


Distros are having a hard time at the moment :) - much of their
/obvious/ value is no longer sought: for instance compile time is
cheap enough that folk are rebuilding distros just to check that the
binaries are actually from the same sources!

Further, the historical squashing of all dependencies into one version
becomes increasing fragile as dependency chains get larger: the
probability of a bug preventing a library being updated (best case -
caught by CI) or breaking something without warning (typical case,
little-to-no-CI of transitive reverse deps) goes up, not down. This is
one of the major reasons folks doing operations often bypass distro
packages (or roll their own isolated set with known-good
dependencies).

The idea of trusted-collections-of-packages made a lot more sense
before this explosion of software we have now, much of which is high
quality, and moving much much faster than distro release
cycles. Canonical/Ubuntu has at least partly got its head around this
with their focus for the last while on a vibrant app store ecosystem -
one where shipping a single binary with vendored, static dependencies
is actually viable.

So yeah, some distros are getting there, bit by bit :)


It's already off-topic, but I personally don't believe in this model, 
especially given how many times such models have appeared already without 
apparent instant success. But let us see :)




-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-19 Thread Dmitry Tantsur

On 05/19/2016 12:42 AM, Eric Larson wrote:


Dmitry Tantsur writes:


This is pretty subjective, I would say. I personally don't find Go
(especially its approach to error handling) natural at all (at least no
more so than Rust or Scala, for example). If familiarity for Python
developers is an argument here, mastering Cython or making OpenStack
run on PyPy must be much easier for a random Python developer out
there to seriously bump the performance. And it would not require
introducing a completely new language to the picture.


In one sense you are correct. It is easier for a Pythonista to pick up
Cython and use that for performance specific areas of code. At the same
time, I'd argue that OpenStack as a community is not the same as Python
at large. There are packaging requirements and cross project standards
that also come into play, not to mention operators that end up bearing
the brunt of those decisions. For example, Debian will likely not
package a PyPy-only version of Designate along with all its
requirements. Similarly, while 50% of operators use packaged versions,
that means 50% work from source control to build, test, and release
OpenStack projects.


Here you speak about distributions and packaging, and I rather agree 
with you, but then...




You are correct that my position is subjective, but it is based on my
experiences trying to operate and deploy OpenStack in addition to
writing code. The draw of Go, in my experience, has been easily
deploying a single binary I've been able to build and test consistently.
The target system doesn't require Go installed at all and it works
on old distros. And it has been much faster.


.. this is something distributions would never do or encourage. Ask zigo 
for reasons :)




Coming from Python, the reason Go has been easy to get started with is
that it offers some useful protections such as memory
management. Features such as slices are extremely similar to Python and
go routines / channels allow supporting more complex patterns such as
generators. Yes, you are correct, error handling is controversial, but
at the same time, it is no better in C.

I'm not an expert in Go, but from what I've seen, Go has been easier to
build and deploy than Python, while being faster. Picking it up has been
trivial and becoming reasonably proficient has been a quick process.
When considered within the scope of OpenStack, it adds a minimal
overhead for testing, packaging and deployment, especially when compared
to C extensions, PyPy or Cython.

I hope that contextualizes my opinion a bit to make clear the subjective
aspects are based on OpenStack specific constraints.

--

Eric Larson | eric.lar...@rackspace.com Software Developer |
Cloud DNS | OpenStack Designate Rackspace Hosting   | Austin, Texas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] oslo.log `verbose` and $your project

2016-05-17 Thread Dmitry Tantsur

On 05/17/2016 02:41 AM, Joshua Harlow wrote:

Hi all,

I just wanted to ensure folks are aware that the oslo group has removed
the 'verbose' option and henceforth the 'debug' option should just
be used (having two options that did very similar things was very
confusing to folks).

This deprecation for this was merged on aug 1, 2015 in review:
https://review.openstack.org/#/c/206437/ and the removal of it
officially merged about a week ago in
https://review.openstack.org/#/c/314573/ (therefore about 10 months of
lead time was given).

Sadly I think that we can't yet release oslo_log due to issues like:

*
http://logs.openstack.org/periodic/periodic-ironic-py27-with-oslo-master/c911e32/console.html#_2016-05-16_06_07_54_907

*
http://logs.openstack.org/periodic/periodic-glance-py27-with-oslo-master/c27a28b/console.html#_2016-05-16_06_11_55_647

* more...

So I've proposed a revert of 314573 @
https://review.openstack.org/#/c/317148/ so that the folks involved in
consuming projects can work through further removal of the verbose
option (it seems some projects are in the process of removing it as we
speak).

I'd like to retry 314573 in a few weeks, so cooperation and help in
getting any leftover cases of 'verbose' out of source trees would be
much appreciated.


Hi!

Let me use this thread as a reminder: the goal of that deprecation was 
to make INFO the default log level. I've fixed it this time, but as it 
gets reverted, please make sure that the next time we don't end up with 
WARNING again.
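
In other words, after the removal the only relevant knob should be the
"debug" option (a sketch of the resulting configuration):

  [DEFAULT]
  # "verbose" is gone; INFO is the default level, DEBUG is opt-in
  debug = false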


Thanks!



Does this sound reasonable to folks?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread Dmitry Tantsur

On 05/16/2016 05:21 PM, John Dickinson wrote:



On 16 May 2016, at 8:14, Dmitry Tantsur wrote:


On 05/16/2016 05:09 PM, Ian Cordasco wrote:



-Original Message-
From: Dmitry Tantsur <dtant...@redhat.com>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: May 16, 2016 at 09:55:27
To: openstack-dev@lists.openstack.org <openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [tc] supporting Go


On 05/16/2016 04:35 PM, Adam Young wrote:

On 05/16/2016 05:23 AM, Dmitry Tantsur wrote:

On 05/14/2016 03:00 AM, Adam Young wrote:

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based
languages.

Nope. Don't get me wrong, I've written more than my fair share of Java
in my career, and I like it, and I miss automated refactoring and real
threads. I have nothing against Java (I know a lot of you do).

Java fills the same niche as Python. We already have one of those, and
it's very nice (according to John Cleese).


A couple of folks in this thread already stated that the primary
reason to switch from Python-based languages is the concurrency story.
JVM solves it and does it in the same manner as Go (at least that's my
assumption).

(not advocating for JVM, just trying to understand the objection)



So, what I think we are really saying here is "what is our Native
extension story going to be? Is it the traditional native languages, or
is it something new that has learned from them?"

Go is a complement to Python to fill in the native stuff. The
alternative is C or C++. Ok Flapper, or Rust.


C, C++, Rust, yes, I'd call them "native".

A language with a GC and green threads does not fall into "native"
category for me, rather the same as JVM.


More complex than just that. But Go does not have a VM; it just put a lot
of effort into co-routines without taking context switches. Different
from green threads.


Ok, I think we have a different notion of "native" here. For me it means
as little magic happening behind the scenes as possible.


Have you written a lot of Rust?


Not a lot, but definitely some.


Rust handles the memory management for you as well. Certainly, you can 
determine the lifetime of something and tell the compiler how the underlying 
memory is shared, but Rust is far better than C in so much as you should never 
be able to write code that double-frees the same memory unless you're 
explicitly using the unsafe features of the language that are infrequently 
needed.


I think we're in agreement here, not sure which bit you're arguing against :)



I'm with Flavio about preferring Rust personally, but I'm not a member of 
either of these teams and I understand the fact that most of the code is 
already written and has been shown to drastically improve performance in 
exactly the places where it's needed. With all of that in mind, I'm in favor of 
just agreeing already that Go is okay. I understand the concern that this will 
increase cognitive load on some developers and *might* have effects on our 
larger community but our community can only grow so long as our software is 
usable (performant) and useful (satisfies needs/requirements).


This Rust discussion is a bit off-topic; I was just stating that my notion of 
"nativeness" of Go is closer to that of the JVM, not that of C/C++/Rust.

I guess my main question is whether folks seriously considered PyPy/Cython/etc.


Yes.

http://lists.openstack.org/pipermail/openstack-dev/2016-May/094960.html (which 
also references 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094549.html and 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094720.html


Thanks! Yeah, I've read the 1st, but somehow missed the other two, which 
actually provide a lot of interesting insights on how stuff works :) In 
this thread I was under the wrong impression that people were hunting to 
run code as "natively" as possible, while actually smart handling of 
IO operations is the critical point. I guess Go is really the best 
solution in this case.




--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread Dmitry Tantsur

On 05/16/2016 05:09 PM, Ian Cordasco wrote:



-Original Message-
From: Dmitry Tantsur <dtant...@redhat.com>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: May 16, 2016 at 09:55:27
To: openstack-dev@lists.openstack.org <openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [tc] supporting Go


On 05/16/2016 04:35 PM, Adam Young wrote:

On 05/16/2016 05:23 AM, Dmitry Tantsur wrote:

On 05/14/2016 03:00 AM, Adam Young wrote:

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based
languages.

Nope. Don't get me wrong, I've written more than my fair share of Java
in my career, and I like it, and I miss automated refactoring and real
threads. I have nothing against Java (I know a lot of you do).

Java fills the same niche as Python. We already have one of those, and
it's very nice (according to John Cleese).


A couple of folks in this thread already stated that the primary
reason to switch from Python-based languages is the concurrency story.
JVM solves it and does it in the same manner as Go (at least that's my
assumption).

(not advocating for JVM, just trying to understand the objection)



So, what I think we are really saying here is "what is our Native
extension story going to be? Is it the traditional native languages, or
is it something new that has learned from them?"

Go is a complement to Python to fill in the native stuff. The
alternative is C or C++. Ok Flapper, or Rust.


C, C++, Rust, yes, I'd call them "native".

A language with a GC and green threads does not fall into "native"
category for me, rather the same as JVM.


More complex than just that. But Go does not have a VM; it just put a lot
of effort into co-routines without taking context switches. Different
from green threads.


Ok, I think we have a different notion of "native" here. For me it means
as little magic happening behind the scenes as possible.


Have you written a lot of Rust?


Not a lot, but definitely some.


Rust handles the memory management for you as well. Certainly, you can 
determine the lifetime of something and tell the compiler how the underlying 
memory is shared, but Rust is far better than C in so much as you should never 
be able to write code that double-frees the same memory unless you're 
explicitly using the unsafe features of the language that are infrequently 
needed.


I think we're in agreement here, not sure which bit you're arguing 
against :)




I'm with Flavio about preferring Rust personally, but I'm not a member of 
either of these teams and I understand the fact that most of the code is 
already written and has been shown to drastically improve performance in 
exactly the places where it's needed. With all of that in mind, I'm in favor of 
just agreeing already that Go is okay. I understand the concern that this will 
increase cognitive load on some developers and *might* have effects on our 
larger community but our community can only grow so long as our software is 
usable (performant) and useful (satisfies needs/requirements).


This Rust discussion is a bit off-topic; I was just stating that my 
notion of "nativeness" of Go is closer to that of the JVM, not that of C/C++/Rust.


I guess my main question is whether folks seriously considered 
PyPy/Cython/etc.




--
Ian Cordasco




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread Dmitry Tantsur

On 05/16/2016 04:35 PM, Adam Young wrote:

On 05/16/2016 05:23 AM, Dmitry Tantsur wrote:

On 05/14/2016 03:00 AM, Adam Young wrote:

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based
languages.

Nope.  Don't get me wrong, I've written more than my fair share of Java
in my career, and I like it, and I miss automated refactoring and real
threads.  I have nothing against Java (I know a lot of you do).

Java fills the same niche as Python.  We already have one of those, and
it's very nice (according to John Cleese).


A couple of folks in this thread already stated that the primary
reason to switch from Python-based languages is the concurrency story.
JVM solves it and does it in the same manner as Go (at least that's my
assumption).

(not advocating for JVM, just trying to understand the objection)



So, what I think we are really saying here is "what is our Native
extension story going to be? Is it the traditional native languages, or
is it something new that has learned from them?"

Go is a complement to Python to fill in the native stuff.  The
alternative is C or C++.  Ok Flapper, or Rust.


C, C++, Rust, yes, I'd call them "native".

A language with a GC and green threads does not fall into "native"
category for me, rather the same as JVM.


More complex than just that.  But Go does not have a VM; it just put a lot
of effort into co-routines without taking context switches. Different
from green threads.


Ok, I think we have a different notion of "native" here. For me it means 
as little magic happening behind the scenes as possible.




http://programmers.stackexchange.com/questions/222642/are-go-langs-goroutine-pools-just-green-threads



You can do userland level co-routines in C, C++ and Erlang, probably
even Rust (even if it is not written yet, no idea).  Which is different
than putting in a code translation layer.

We are not talking about a replacement for Python here.  We are talking
about a language to use for native optimizations when Python's
concurrency model or other overhead gets in the way.

In scientific computing, using a language like R or Python to then call
into a linear algebra or messaging library is the norm for just this
reason.  Since we are going to be writing the native code here, the
question is what language to use for it.
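
A tiny Python illustration of that pattern, where the heavy lifting happens
in a native BLAS library rather than in the interpreter:

  import numpy as np

  a = np.random.rand(1000, 1000)
  b = np.random.rand(1000, 1000)
  c = np.dot(a, b)  # dispatched to native BLAS code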

We are not getting rid of python, and we are not bringing in Java. Those
are different questions.


The question is "How do we take performance sensitive sections in
OpenStack and optimize to native?"

The list of answers that map to the question here as I see it include:

1.  We are not doing native code, stop asking.
2.  Stick with C
3.  C or C++ is Ok
4.  Fortran (OK I just put this in to see if you are paying attention).
5.  Go
6.  Rust (Only put in to keep Flapper off my back)


Count me in here as well, though of course I have no voting rights 
on this issue :)





We have two teams asking for Go, and Flapper asking for Rust.  No one
has suggested new native code in C or C++, instead those types of
projects seem to be kept out of OpenStack proper.






This is coming from someone who has done kernel stuff.  I did C++ in
both the Windows and Linux worlds.  I've written inversion of control
stuff in C++ template metaprogramming.  I am not personally afraid of
writing code in either language. But I don't want to inflict that on
OpenStack.  It's a question of reducing complexity, not increasing it.


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] change in release announcement emails

2016-05-16 Thread Dmitry Tantsur

On 05/13/2016 07:40 PM, Doug Hellmann wrote:

The release team has recently landed a change to the script that
generates the automated release announcement emails sent to
openstack-dev and openstack-announce when a project is released [1].

Based on feedback we received before and during the summit, we have
changed the subject lines to replace the "[release]" tag with "[new]",
so that folks subscribing to release team messages like this one will
have less volume and folks who want to see messages about new releases
will be able to see only those messages.

Please update your email filters accordingly.


Hi!

Will there be a new topic category on 
http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev?




Enjoy,
Doug

[1] https://review.openstack.org/#/c/312762/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread Dmitry Tantsur

On 05/14/2016 03:00 AM, Adam Young wrote:

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based
languages.

Nope.  Don't get me wrong, I've written more than my fair share of Java
in my career, and I like it, and I miss automated refactoring and real
threads.  I have nothing against Java (I know a lot of you do).

Java fills the same niche as Python.  We already have one of those, and
it's very nice (according to John Cleese).


A couple of folks in this thread already stated that the primary reason 
to switch from Python-based languages is the concurrency story. JVM 
solves it and does it in the same manner as Go (at least that's my 
assumption).


(not advocating for JVM, just trying to understand the objection)



So, what I think we are really saying here is "what is our Native
extension story going to be? Is it the traditional native languages, or
is it something new that has learned from them?"

Go is a complement to Python to fill in the native stuff.  The
alternative is C or C++.  Ok Flapper, or Rust.


C, C++, Rust, yes, I'd call them "native".

A language with a GC and green threads does not fall into the "native" 
category for me; it's rather the same as the JVM.




This is coming from someone who has done kernel stuff.  I did C++ in
both the Windows and Linux worlds.  I've written inversion of control
stuff in C++ template metaprogramming.  I am not personally afraid of
writing code in either language. But I don't want to inflict that on
OpenStack.  It's a question of reducing complexity, not increasing it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-13 Thread Dmitry Tantsur

On 05/11/2016 09:50 PM, Eric Larson wrote:


Flavio Percoco writes:


On 11/05/16 09:47 -0500, Dean Troyer wrote:

On Tue, May 10, 2016 at 5:54 PM, Flavio Percoco 
wrote:

[language mixing bits were here]


   The above is my main concern with this proposal. I've mentioned
this in the upstream review and I'm glad to have found it here as
well. The community impact of this change is perhaps not being
discussed enough and I believe, in the long run, it'll bite us.


Agreed, but to do nothing instead is so not what we are about. The
change from integrated/incubated to Big Tent was done to address some
issues knowing we did not have all of the answers up front and would
learn some things along the way. We did learn some things, both good
and bad.

I do believe that we can withstand the impact of a new language,
particularly when we do it intentionally and knowing where some of
the pitfalls are. Also, the specific request is coming from the
oldest of all OpenStack projects, and one that has a history of not
making big changes without _really_ good reasons. Yes it opens a
door, but it will be opened with what I believe to be a really solid
model to build upon in other parts of the OpenStack community.  I
would MUCH rather do it this way than with a new Go-only project that
is joining OpenStack from scratch in more than just the
implementation language.



So, one thing that was mentioned during the last TC meeting is to
decide this on a per-project basis. Don't open the door entirely but let
projects sign up for this.  This will give us a more contained growth
as far as projects with go-code go but it does mean we'll have to do a
technical analysis on every project willing to sign up and it kinda
goes against the principles of the big tent.



   The feedback from the Horizon community has been that it's been
impossible to avoid a community split and that's what I'd like to
avoid.


I do think part of this is also due to the differences in the problem
domain of client/browser-side and server-side. I believe there is a
similar issue with devs writing SQL; the overlap in
expertise between the two is way smaller than we all wish it was.


Exactly! This separation of domains is the reason why opening the door
for JS code was easier. The request was for browser apps that can't be
written in Python.


And for the specific Python-Golang overlap, it feels to me like more
Python devs have worked in (or at least talked about working in) Go than in other
newish languages. There are worse choices to test the waters with.


Just to stress this a bit more, I don't think the problem is the
language per se. There are certainly technical issues related to it
(packaging, CI, etc) but the main discussion currently revolves around
the impact this change will have in the community and other areas. I'm
sure we can figure the technical issues out.



One thing to consider regarding the community's ability to task-switch
is how much easier Go is than other languages and techniques. For
example, one common tactic people suggest when Python becomes too slow
is to rewrite the slow parts in C. In Designate's case, rewriting the
DNS wire protocol aspects in C could be beneficial, but it would be very
difficult as well. We would need to write an implementation that is able
to safely parse the DNS wire format in a reasonably thread-safe fashion that
also will work well when those threads have been patched by eventlet,
all while writing C code that is compatible with Python internals.

To contrast that, the Go POC was able to use a well-tested Go DNS
library and implement the same documented interface that was then
testable via the same functional tests. It also allowed an extremely
simple deployment and had a minimal impact on our CI systems. Finally,
as other Go code has been written on our small team, getting Python
developers up to speed has been trivial. Memory management, built in
concurrency primitives, and similar language constructs have made using
Go feel natural.


This is pretty subjective, I would say. I personally don't find Go 
(especially its approach to error handling) natural at all (at least no 
more so than Rust or Scala, for example). If familiarity for Python 
developers is an argument here, mastering Cython or making OpenStack run 
on PyPy must be much easier for a random Python developer out there to 
seriously bump the performance. And it would not require introducing a 
completely new language to the picture.




This experience is different from JavaScript because there are very
specific silos between the UI and the backend. I'd expect that, even
though JavaScript is an accepted language in OpenStack, writing a
node.js service would present a whole host of new complexity the project
would similarly debate. Fortunately, on a technical level, I believe we
can try Go without its requirements putting a large burden on the CI
team resources.

Eric


Flavio


dt

--

Dean Troyer dtro...@gmail.com



--

Eric 

Re: [openstack-dev] [TripleO] Undercloud Configuration Wizard

2016-05-12 Thread Dmitry Tantsur

On 05/11/2016 06:19 PM, Ben Nemec wrote:

Hi all,

Just wanted to let everyone know that I've ported the undercloud
configuration wizard to be a web app so it can be used by people without
PyQt on their desktop.  I've written a blog post about it here:
http://blog.nemebean.com/content/undercloud-configuration-wizard and the
tool itself is here: http://ucw-bnemec.rhcloud.com/


Nice! I remember people complaining about our use of the 192.0.2.0 network 
by default; maybe you could change it?




It might be good to link it from tripleo.org too, or maybe even move it
to be hosted there entirely.  The latter would require some work as it's
not really designed to play nicely with an existing web server (hey, I'm
a webapp noob, cut me some slack :-), but it could be done.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] it's official: ironic-discoverd is EOL now

2016-05-09 Thread Dmitry Tantsur

Hey, you didn't know it wasn't EOL all this time? ;)

Well, now it is. The last PyPI release was ironic-discoverd 1.1.1 (it's 
not tagged in git due to technical reasons), and with Kilo going EOL we 
will have no more releases. Both stable/1.0 and stable/1.1 branches are 
gone now as well. I highly recommend switching to Liberty and beyond to 
enjoy many new features and fixes that ironic-inspector introduces.


Cheers,
Dmitry

P.S.
In case you don't know: ironic-discoverd was renamed to 
ironic-inspector in Liberty, and it is supported right now.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-09 Thread Dmitry Tantsur

On 05/07/2016 01:00 AM, Joshua Harlow wrote:

Dmitry Tantsur wrote:

On 05/03/2016 11:24 PM, Joshua Harlow wrote:

Howdy folks,

So I met up with *some* of the mistral folks on Friday last week at
the summit, and I was wondering if we as a group can find a path to help
that project move forward in their desire to have some kind of
process-then-ack (vs the existing ack-then-process) in their usage of the
messaging layer.

I got to learn that the following exists in mistral (sad-face):

https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38



And it got me thinking about how/if we can as a group possibly allow a
variant of https://review.openstack.org/#/c/229186/ to get worked on and
merged in and release so that the above 'hack' can be removed.


Hey, lemme weigh in from ironic-inspector PoV.

As you maybe remember, we also need a queue with the possibility of both
ways of ack'ing for our HA work. So something like this patch doesn't
seem to help us at all. We'll probably have to cargo-cult the mistral
approach.


You seem to be thinking about the queue as an implementation vs thinking
about what API you really need, and then, say, backing that API by a queue
(if you want to).

That's where https://review.openstack.org/#/c/260246/ comes into play here
because it thinks about the API first and the implementation second (and
if you really want two implementations, well, they are at
https://github.com/openstack/taskflow/tree/master/taskflow/jobs/backends
but I'm avoiding trying to bring those into the picture, because the
top-level API still seems unclear here).

I guess it goes back to the 'why are people trying to use a message
queue as a work queue' when the semantics of these are different (and
let's not get into why we use a message queue as an RPC layer while we
are at it, ha).


And I have a trivial answer: because that's all we have working right now ;)





Is it possible to have a manual ack feature? I.e. for the handler to
choose when to ack.



I also would like to come to some kind of understanding that we also
(mistral folks would hopefully help here) would remove this kind of
change in the future as the longer term goal (of something like
https://review.openstack.org/#/c/260246/) would progress.

Thoughts from folks (mistral and oslo)?

Anyway we can create a solution that works in the short term (allowing
for that hack to be removed) and working toward the longer term goal?

-Josh

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-05 Thread Dmitry Tantsur

On 05/03/2016 11:24 PM, Joshua Harlow wrote:

Howdy folks,

So I met up with *some* of the mistral folks on Friday last week at
the summit, and I was wondering if we as a group can find a path to help
that project move forward in their desire to have some kind of
process-then-ack (vs the existing ack-then-process) in their usage of the
messaging layer.

I got to learn that the following exists in mistral (sad-face):

https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38

And it got me thinking about how/if we can as a group possibly allow a
variant of https://review.openstack.org/#/c/229186/ to get worked on and
merged in and release so that the above 'hack' can be removed.


Hey, lemme weigh in from ironic-inspector PoV.

As you maybe remember, we also need a queue with the possibility of both 
ways of ack'ing for our HA work. So something like this patch doesn't 
seem to help us at all. We'll probably have to cargo-cult the mistral 
approach.


Is it possible to have a manual ack feature? I.e. for the handler to 
choose when to ack.
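
To make that concrete, here is a minimal sketch of process-then-ack with a
manual ack at the AMQP level, assuming plain pika (1.x) rather than
oslo.messaging; the queue name and handler are made up for the example:

```python
import pika

def handle(ch, method, properties, body):
    try:
        print('processing %r' % body)  # stand-in for the real work
        # ack only after the work succeeded: process-then-ack
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # leave the message queued for another attempt
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='inspector')
# auto_ack=False is what makes the ack manual
channel.basic_consume(queue='inspector', on_message_callback=handle,
                      auto_ack=False)
channel.start_consuming()
```

The obvious caveat is that a redelivered message may be processed twice, so
the handler has to be idempotent.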




I also would like to come to some kind of understanding that we also
(mistral folks would hopefully help here) would remove this kind of
change in the future as the longer term goal (of something like
https://review.openstack.org/#/c/260246/) would progress.

Thoughts from folks (mistral and oslo)?

Anyway we can create a solution that works in the short term (allowing
for that hack to be removed) and working toward the longer term goal?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-05 Thread Dmitry Tantsur

On 05/04/2016 08:21 AM, Mehdi Abaakouk wrote:


Hi,


That said, I agree with Mehdi that *most* RPC calls throughout OpenStack,
not being idempotent, should not use process-then-ack.


That's why I think we must not call this RPC. And the new API should be
clear about the expected idempotency of the application callbacks.


Thoughts from folks (mistral and oslo)?


Also, I was not at the Summit; should I conclude the Tooz+taskflow
approach (which ensures the idempotency of the application within the
library API) has not been accepted by the mistral folks?



Taskflow is pretty opinionated about the whole application design. We 
can't use it in ironic-inspector, but we also need process-then-ack 
semantics for our HA work.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-04 Thread Dmitry Tantsur

On 05/04/2016 02:44 PM, Thierry Carrez wrote:

Dmitry Tantsur wrote:

On 05/02/2016 08:53 PM, Shamail Tahir wrote:

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This
gave us the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata)
releases.

If we were to vote for the name of the P release soon (since the
location is now known) we would be able to have names associated with
the current release cycle (Newton), N+1 (Ocata), and N+2 (P).  This
would also allow us to get back to only voting for one name per release
cycle but consistently have names for N, N+1, and N+2.


Do we have locations for the summit announced? It's hard to name
anything without these.


Yes, was announced at the keynotes in Austin. May 2017 OpenStack Summit
will be held in Boston and the November 2017 OpenStack Summit will be
held in Sydney.

Source: https://twitter.com/OpenStack/status/724628774961074176



Thanks! It would be good to duplicate it to openstack-announce (or was it 
done already?) for folks who were not at the keynotes and don't follow Twitter.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-04 Thread Dmitry Tantsur

On 05/02/2016 08:53 PM, Shamail Tahir wrote:

Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This
gave us the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata)
releases.

If we were to vote for the name of the P release soon (since the
location is now known) we would be able to have names associated with
the current release cycle (Newton), N+1 (Ocata), and N+2 (P).  This
would also allow us to get back to only voting for one name per release
cycle but consistently have names for N, N+1, and N+2.


Do we have locations for the summit announced? It's hard to name 
anything without these.


Also do we base names on the main summit or the design summit locations?



--
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican][designate][murano][fuel][ironic][cue][ceilometer][astara][gce-api][kiloeyes] keystoneclient 3.0.0 release - no more CLI!

2016-04-19 Thread Dmitry Tantsur

On 04/19/2016 04:34 PM, Dmitry Tantsur wrote:

On 04/18/2016 10:05 PM, Steve Martinelli wrote:

Everyone,

I sent out a note about this on Friday [1], but I'll repeat it here and
tag individual projects. The keystone team *will* be releasing a new
version of keystoneclient on *Thursday* that will not include a CLI.

A quick codesearch showed that a few projects are still using the old
`keystone` CLI in either their docs, scripts that create sample data or
in devstack plugins; the latter being the more immediate issue here.
These fixes should be very quick


Well, they are not. The OSC plugin does not work the same way as
the keystone command used to.







Ironic:
http://git.openstack.org/cgit/openstack/ironic-inspector/tree/devstack/exercise.sh#n49



We did that because the OSC command didn't work for us. As the following
review shows, it still does not work:
https://review.openstack.org/#/c/307523/

Can we please make sure all projects can actually use the command they
need before we remove anything? I'll probably work around it in
ironic-inspector for now, but in the future we might need the
endpoint-get command again.


Note: my assumptions here and above are based on the patch (+1'ed by 
Steve); I may not have the whole picture. Based on quick local 
testing, things might Just Work (tm).










Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican][designate][murano][fuel][ironic][cue][ceilometer][astara][gce-api][kiloeyes] keystoneclient 3.0.0 release - no more CLI!

2016-04-19 Thread Dmitry Tantsur

On 04/18/2016 10:05 PM, Steve Martinelli wrote:

Everyone,

I sent out a note about this on Friday [1], but I'll repeat it here and
tag individual projects. The keystone team *will* be releasing a new
version of keystoneclient on *Thursday* that will not include a CLI.

A quick codesearch showed that a few projects are still using the old
`keystone` CLI in either their docs, scripts that create sample data or
in devstack plugins; the latter being the more immediate issue here.
These fixes should be very quick


Well, they are not. The OSC plugin does not work the same way as 
the keystone command used to.



, use `openstackclient` CLI instead.
I've gone ahead and created a listed off some files that include a
keystone CLI command (keystone user-list, keystone tenant-list, keystone
user-create, keystone role-list, etc )

Barbican:
http://git.openstack.org/cgit/openstack/barbican/tree/bin/keystone_data.sh

Designate:
http://git.openstack.org/cgit/openstack/designate/tree/tools/designate-keystone-setup
(already being addressed by: https://review.openstack.org/307433 )

Murano:
http://git.openstack.org/cgit/openstack/murano-deployment/tree/murano-ci/config/devstack/local.sh


Fuel:
http://git.openstack.org/cgit/openstack/fuel-plugin-plumgrid/tree/deployment_scripts/cleanup_os.sh

http://git.openstack.org/cgit/openstack/fuel-octane/tree/octane/tests/create_vms.sh

http://git.openstack.org/cgit/openstack/fuel-plugin-plumgrid/tree/deployment_scripts/cleanup_os.sh


Ironic:
http://git.openstack.org/cgit/openstack/ironic-inspector/tree/devstack/exercise.sh#n49


We did that because the OSC command didn't work for us. As the following 
review shows, it still does not work:

https://review.openstack.org/#/c/307523/

Can we please make sure all projects can actually use the command they 
need before we remove anything? I'll probably work around it in 
ironic-inspector for now, but in the future we might need the 
endpoint-get command again.
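
For what it's worth, the lookup that `keystone endpoint-get` used to do can
be done programmatically; a rough sketch with keystoneauth1 (all option
values below are placeholders):

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://keystone:5000/v3',  # placeholder
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# roughly what `keystone endpoint-get` did for a given service type
url = sess.get_endpoint(service_type='baremetal-introspection',
                        interface='public')
print(url)
```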





Cue:
http://git.openstack.org/cgit/openstack/cue/tree/devstack/plugin.sh

Ceilometer:
http://git.openstack.org/cgit/openstack/ceilometer/tree/tools/make_test_data.sh


Astara:
http://git.openstack.org/cgit/openstack/astara/tree/tools/run_functional.sh

GCE-API:
http://git.openstack.org/cgit/openstack/gce-api/tree/install.sh

Kiloeyes:
http://git.openstack.org/cgit/openstack/kiloeyes/tree/setup_horizon.sh

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092471.html

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][all] Summit session clashes

2016-04-19 Thread Dmitry Tantsur
On Apr 19, 2016 at 6:28 AM, "Steve Baker" wrote:
>
> All of the TripleO design summit sessions are on Thursday afternoon in
slots which clash with Heat sessions. Heat is a core component of TripleO
and as a contributor to both projects I was rather hoping to attend as many
of both sessions as possible - I don't think I'm alone in this desire.
>
> Is it possible that some horse trading could take place to reduce the
clashes? Maybe TripleO sessions could move to Wednesday morning?

Note that then they'll conflict with ironic sessions. I'm not sure if it's
critical or not.

>
> cheers
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-12 Thread Dmitry Tantsur

On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:

Hi!

I'd like to propose Anton to the ironic-inspector core reviewers team.
His stats are pretty nice [1], he's making meaningful reviews and he's
pushing important things (discovery, now tempest).

Members of the current ironic-inspector-team and everyone interested,
please respond with your +1/-1. A lazy consensus will be applied: if
nobody objects by the next Tuesday, the change will be in effect.


The change is in effect, congratulations! :)



Thanks

[1] http://stackalytics.com/report/contribution/ironic-inspector/60

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Dmitry Tantsur

On 04/11/2016 05:33 PM, Ben Nemec wrote:

On 04/11/2016 04:54 AM, John Trowbridge wrote:

Hola OOOers,

It came up in the meeting last week that we could benefit from a CI
subteam with its own meeting, since CI is taking up a lot of the main
meeting time.

I like this idea, and think we should do something similar for the other
informal subteams (tripleoclient, UI), and also add a new subteam for
tripleo-quickstart (and maybe one for releases?).

We should make separate ACLs for these subteams as well. The informal
approach of adding cores who can +2 anything but are told to only +2
what they know doesn't scale very well.


How so?  Are we planning to give people +2 even though we don't trust
them to not +2 things they shouldn't?  I remain of the opinion that if
we need ACL controls to keep someone from doing something then they
shouldn't have +2 in the first place.

Quickstart is a bit of a weird case because the regular contributors to
it have not previously been very involved in TripleO upstream so I don't
think most of us have enough context to know whether they should have
+2.  I guess the UI would fall under the same category, so I'd be in
favor of keeping those two separate, but otherwise I think we're
creating bureaucracy for its own sake.


FWIW it works pretty well for the ironic-inspector-core subteam of the 
big ironic-core.




-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [api] Fixing bugs in old microversions

2016-04-11 Thread Dmitry Tantsur

On 04/11/2016 03:54 PM, Jay Pipes wrote:

On 04/11/2016 09:48 AM, Dmitry Tantsur wrote:

On 04/11/2016 02:00 PM, Jay Pipes wrote:

On 04/11/2016 04:48 AM, Vladyslav Drok wrote:

Hi all!

There is a bug <https://bugs.launchpad.net/ironic/+bug/1565663> in the
ironic API that allows removing the node name using any API version,
while node names were only added in version 1.5. There are concerns that
fixing this might be a breaking change, and I'm not sure how to proceed
with that. Here is a change <https://review.openstack.org/300983> that
fixes the bug by forbidding a PATCH remove request on the /name path if
the requested API version is less than 1.5. Is it enough to just mention
this in a release note, maybe both in the fixes and upgrade sections?
Bumping the API microversion to fix some previous microversion seems
weird to me.

Any suggestions?


I think the approach you've taken -- just fix it and not add a new
microversion -- is the correct approach.


Do we really allow breaking API changes that affect old microversions?


Generally we have said that if a patch is fixing only an error response
code (as would be the case here -- changing from a 202 to a 400 when
name is attempted to be changed) then it doesn't need a microversion.


I remember us avoiding even changing 403 to 400 (or something like 
that). This is not just "changing a return code". This is removing a 
working (unfortunately) feature: removing node names with API versions 
1.1-1.4. I'm not saying it's totally impossible, but I've heard Ironic 
people (notably, Deva) saying "no API breaks ever", thus my concern.
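
To make the mechanics concrete, the essence of the fix is a version guard
along these lines (a sketch with made-up names, not the actual ironic code):

```python
MIN_NAME_VERSION = (1, 5)  # node names appeared in API 1.5

class BadRequest(Exception):
    pass

def check_name_patch(patch_ops, requested_version):
    """Reject any PATCH operation touching /name before API 1.5."""
    for op in patch_ops:
        if op.get('path') == '/name' and requested_version < MIN_NAME_VERSION:
            raise BadRequest('Node names require API version 1.5 or newer')

# a PATCH remove on /name with version 1.4 is now rejected
try:
    check_name_patch([{'op': 'remove', 'path': '/name'}], (1, 4))
except BadRequest as exc:
    print(exc)
```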




Sean, am I remembering that correctly?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [api] Fixing bugs in old microversions

2016-04-11 Thread Dmitry Tantsur

On 04/11/2016 10:48 AM, Vladyslav Drok wrote:

Hi all!

There is a bug <https://bugs.launchpad.net/ironic/+bug/1565663> in the
ironic API that allows removing the node name using any API version,
while node names were only added in version 1.5. There are concerns that
fixing this might be a breaking change, and I'm not sure how to proceed
with that. Here is a change <https://review.openstack.org/300983> that
fixes the bug by forbidding a PATCH remove request on the /name path if
the requested API version is less than 1.5. Is it enough to just mention
this in a release note, maybe both in the fixes and upgrade sections?
Bumping the API microversion to fix some previous microversion seems
weird to me.


My point stays the same: this is a breaking change in API and should be 
avoided, unless absolutely necessary.


This situation is a side effect of the microversioning procedure we 
have, which, as you all know, I personally never liked :) and this case 
was one of the reasons. The only way to avoid it is to have negative 
functional tests for all microversions. We're not even close to that 
yet, unfortunately.
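
Such a negative test is not hard to write, for what it's worth; a sketch
against the raw API, where the base URL and node UUID are test fixtures:

```python
import requests

def test_remove_name_rejected_before_1_5(base_url, node_uuid):
    resp = requests.patch(
        '%s/v1/nodes/%s' % (base_url, node_uuid),
        headers={'X-OpenStack-Ironic-API-Version': '1.4'},
        json=[{'op': 'remove', 'path': '/name'}])
    # names did not exist before 1.5, so touching /name must fail
    assert resp.status_code == 400
```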




Any suggestions?

Thanks,
Vlad


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [api] Fixing bugs in old microversions

2016-04-11 Thread Dmitry Tantsur

On 04/11/2016 02:00 PM, Jay Pipes wrote:

On 04/11/2016 04:48 AM, Vladyslav Drok wrote:

Hi all!

There is a bug <https://bugs.launchpad.net/ironic/+bug/1565663> in the
ironic API that allows removing the node name using any API version,
while node names were only added in version 1.5. There are concerns that
fixing this might be a breaking change, and I'm not sure how to proceed
with that. Here is a change <https://review.openstack.org/300983> that
fixes the bug by forbidding a PATCH remove request on the /name path if
the requested API version is less than 1.5. Is it enough to just mention
this in a release note, maybe both in the fixes and upgrade sections?
Bumping the API microversion to fix some previous microversion seems
weird to me.

Any suggestions?


I think the approach you've taken -- just fix it and not add a new
microversion -- is the correct approach.


Do we really allow breaking API changes that affect old microversions?



Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-08 Thread Dmitry Tantsur
2016-04-08 19:26 GMT+02:00 Davanum Srinivas <dava...@gmail.com>:

> Team,
>
> Steve pointed out to a problem in Stackalytics:
> https://twitter.com/stevebot/status/718185667709267969


There are many ways to game a simple +1 counter, such as +1'ing changes
that already have at least 1x +2, or which are already approved, or which need
rechecking...


>
>
> It's pretty clear what's happening if you look here:
>
> https://review.openstack.org/#/q/owner:openstack-infra%2540lists.openstack.org+status:open
>
> Here's the drastic step (i'd like to avoid):
> https://review.openstack.org/#/c/303545/
>
> What do you think?
>

One more possible (though also imperfect) mitigation is to make an exception
from the usual 2x +2 rule for requirements updates passing the gates and use
only 1x +2. Then requirements reviews will take substantially less time to
land, reducing the need/possibility of having such +1's.


>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Backport exception request // discussion: setting the default root device

2016-04-06 Thread Dmitry Tantsur

Hi OOO'ers!

I'd like to get your permission to backport 
https://review.openstack.org/#/c/288417/ to stable/{liberty,mitaka} or 
seek alternative suggestions on how to make life easier for folks 
upgrading from Kilo.


The context of the problem is the following. In the Liberty release we 
(with the whole Ironic world) have switched from the old bash-based 
Ironic deploy ramdisk to IPA. I can't talk enough about the benefits it 
brought us, but today I want to talk about one drawback.


IPA has a different logic for choosing the root device for deployment, 
when several root devices are present. The old ramdisk always tried to 
find a disk by name present in the Ironic disk_devices configuration 
option, defaulting to something like "sda,hda,vda". IPA takes the 
smallest device which is greater than 4 GiB. Obviously, it's not 
guaranteed to be the same.


What it means is that when people upgrade their undercloud from Kilo and 
Liberty and beyond, and rebuild an overcloud node, this node may end up 
with a different root device picked by default. In the absence of 
cleaning, that will probably result in deployment failure (e.g. due to 
duplicated config drive).


A side note: the same was possible and actually happened back in Kilo, 
because device names are not reliable, and can change between reboots.


Now, the Ironic team has always recommended using root device hints for 
several root devices. However, there are valid complaints from users 
that running node-update on every node is not really convenient. And he 
is the patch in question: it adds a new flag to 'baremetal configure 
boot' to bulk-set root device hints based on a strategy or list of 
device names. The root device information is fetched from the 
introspection data. This allows people upgrading from Kilo to just do:


 openstack baremetal configure boot --root-device=sda,hda,vda

to create root device hints matching the previous behavior. I suggest we 
backport this patch to simplify life for people.
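
For reference, the per-node alternative that people find inconvenient looks
roughly like this with python-ironicclient (credentials and the device name
are placeholders):

```python
from ironicclient import client

ironic = client.get_client('1',                     # API version
                           os_username='admin',     # placeholders
                           os_password='secret',
                           os_tenant_name='admin',
                           os_auth_url='http://keystone:5000/v2.0')

# the equivalent of:
#   ironic node-update <node> add properties/root_device='{"name": "/dev/sda"}'
patch = [{'op': 'add',
          'path': '/properties/root_device',
          'value': {'name': '/dev/sda'}}]
for node in ironic.node.list():
    ironic.node.update(node.uuid, patch)
```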


I'm also open to alternative suggestions on this.

Thanks for reading this long explanation,
Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Dmitry Tantsur

Hi!

I'd like to propose Anton to the ironic-inspector core reviewers team. 
His stats are pretty nice [1], he's making meaningful reviews and he's 
pushing important things (discovery, now tempest).


Members of the current ironic-inspector-team and everyone interested, 
please respond with your +1/-1. A lazy consensus will be applied: if 
nobody objects by the next Tuesday, the change will be in effect.


Thanks

[1] http://stackalytics.com/report/contribution/ironic-inspector/60

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-25 Thread Dmitry Tantsur
On Mar 24, 2016 at 8:12 PM, "Jim Rollenhagen" <j...@jimrollenhagen.com> wrote:
>
> Hey all,
>
> I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
> the Bifrost project, gives super valuable reviews, is beginning to lead
> the boot from volume efforts, and is clearly an expert in this space.
>
> All in favor say +1 :)

+1

>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Backward incompatibility when moving from the old Ironic ramdisk to ironic-python-agent

2016-03-19 Thread Dmitry Tantsur

Hi all!

This is a heads up for you that we've found an issue [1] in IPA that 
changes the behavior for those of you with several hard drives. The 
difference is in the way our ramdisks pick the root device for 
deployment, when no root device hints [2] are provided. Namely:
- The old ramdisk picked the first available device from the list of 
device names in the "disk_devices" configuration option [3]. In practice 
it usually meant the first disk was chosen. Note that this approach was 
error-prone, as disk ordering is, generally speaking, not guaranteed by 
Linux.
- IPA ignores the "disk_devices" option completely and picks the 
smallest device larger than 4 GiB.


It is probably too late to change the IPA behavior to be more 
compatible, as a lot of folks are already relying on it. So we decided 
to raise this issue and get feedback on the preferred path forward.


[1] https://bugs.launchpad.net/ironic-python-agent/+bug/1554492
[2] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment
[3] 
https://github.com/openstack/ironic/blob/master/etc/ironic/ironic.conf.sample#L2017
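
To make the difference concrete, here are both selection strategies
reimplemented in plain Python (illustrative only, not the actual ramdisk
code):

```python
GiB = 1024 ** 3

def pick_old_style(devices, disk_devices=('sda', 'hda', 'vda')):
    """Old bash ramdisk: first present name from the configured list."""
    by_name = {d['name']: d for d in devices}
    for name in disk_devices:
        if name in by_name:
            return by_name[name]

def pick_ipa_style(devices):
    """IPA: the smallest device larger than 4 GiB."""
    candidates = [d for d in devices if d['size'] > 4 * GiB]
    return min(candidates, key=lambda d: d['size']) if candidates else None

disks = [{'name': 'sda', 'size': 500 * GiB}, {'name': 'sdb', 'size': 100 * GiB}]
assert pick_old_style(disks)['name'] == 'sda'  # old ramdisk picks sda
assert pick_ipa_style(disks)['name'] == 'sdb'  # IPA picks the smallest
```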


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-19 Thread Dmitry Tantsur
ai.html
[2]
https://github.com/openstack/fuel-nailgun-agent/blob/master/agent#L46-L51
[3] https://wiki.openstack.org/wiki/Fuel/Plugins


On Wed, Mar 16, 2016 at 1:39 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:

On 03/15/2016 01:53 PM, Serge Kovaleff wrote:

Dear All,

Let's compare functional abilities of both solutions.

Till the recent Mitaka release Ironic-inspector had only
Introspection
ability.

Discovery part is proposed and implemented by Anton Arefiev. We
should
align expectations and current and future functionality.

Adding Tags to attract the Inspector community.


Hi!

It would be great to see what we can do to fit the nailgun use case.
Unfortunately, I don't know much about it right now. What are you
missing?


Cheers,
Serge Kovaleff
http://www.mirantis.com <http://www.mirantis.com/>
cell: +38 (063) 83-155-70

On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin
<asapry...@mirantis.com>
wrote:

 Dear all,

 Thank you for the opinions about this problem.

 I would agree with Roman, that it is always better to reuse
 solutions than re-inventing the wheel. We should investigate
 possibility of using ironic-inspector and integrating it
into fuel.

 Best regards,
 Alexander Saprykin

 2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk
<sgolovat...@mirantis.com>:

 My strong +1 to drop off nailgun-agent completely in
favour of
 ironic-inspector. Even taking into consideration we'lll
need to
 extend  ironic-inspector for fuel needs.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko
 <m...@romcheg.me <mailto:m...@romcheg.me>
<mailto:m...@romcheg.me <mailto:m...@romcheg.me>>> wrote:

 My opition on this is that we have too many re-invented
 wheels in Fuel and it’s better think about
replacing them
 with something we can re-use than re-inventing them
one more
 time.

 Let’s take a look at Ironic and try to figure out
how we can
 use its features for the same purpose.


 - romcheg
 > On Mar 15, 2016, at 10:38, Neil Jerram <neil.jer...@metaswitch.com> wrote:

  >
  > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
  >> Alexander,
  >>
 >> We have many other places where we use Ruby (astute, puppet
(astute, puppet
 custom types,
  >> etc.). I don't think it is a good reason to
re-write
 something just
  >> because it is written in Ruby. You are right about
 tests, about plugins,
  >> but let's look around. Ironic community has already
 invented discovery
  >> component (btw written in python) and I can't
see any
 reason why we
  >> should continue putting efforts in nailgun
agent and not
 try to switch
  >> to ironic-inspector.
  >
  > +1 in general terms.  It's strange to me that
there are
 so many
  > OpenStack deployment systems that each do each
piece of
 the puzzle in
  > their own way (Fuel, Foreman, MAAS/Juju etc.) -
and which
 also means
  > that I need substantial separate learning in
order to use
 all these
  > systems.  It would be great to see some
consolidation.
  >
  > Regards,
  >   Neil
  >
   

Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-16 Thread Dmitry Tantsur

On 03/15/2016 01:53 PM, Serge Kovaleff wrote:

Dear All,

Let's compare functional abilities of both solutions.

Till the recent Mitaka release Ironic-inspector had only Introspection
ability.

The discovery part is proposed and implemented by Anton Arefiev. We should
align expectations with current and future functionality.

Adding Tags to attract the Inspector community.


Hi!

It would be great to see what we can do to fit the nailgun use case. 
Unfortunately, I don't know much about it right now. What are you missing?




Cheers,
Serge Kovaleff
http://www.mirantis.com 
cell: +38 (063) 83-155-70

On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin
> wrote:

Dear all,

Thank you for the opinions about this problem.

I would agree with Roman, that it is always better to reuse
solutions than re-inventing the wheel. We should investigate
possibility of using ironic-inspector and integrating it into fuel.

Best regards,
Alexander Saprykin

2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk
>:

My strong +1 to drop off nailgun-agent completely in favour of
ironic-inspector. Even taking into consideration we'll need to
extend ironic-inspector for fuel needs.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko
> wrote:

My opinion on this is that we have too many re-invented
wheels in Fuel, and it's better to think about replacing them
with something we can re-use than re-inventing them one more
time.

Let’s take a look at Ironic and try to figure out how we can
use its features for the same purpose.


- romcheg
 > On Mar 15, 2016, at 10:38, Neil Jerram wrote:
 >
 > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
 >> Alexander,
 >>
 >> We have many other places where we use Ruby (astute, puppet
custom types,
 >> etc.). I don't think it is a good reason to re-write
something just
 >> because it is written in Ruby. You are right about
tests, about plugins,
 >> but let's look around. Ironic community has already
invented discovery
 >> component (btw written in python) and I can't see any
reason why we
 >> should continue putting efforts in nailgun agent and not
try to switch
 >> to ironic-inspector.
 >
 > +1 in general terms.  It's strange to me that there are
so many
 > OpenStack deployment systems that each do each piece of
the puzzle in
 > their own way (Fuel, Foreman, MAAS/Juju etc.) - and which
also means
 > that I need substantial separate learning in order to use
all these
 > systems.  It would be great to see some consolidation.
 >
 > Regards,
 >   Neil
 >
 >
 >

__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

 >
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


Re: [openstack-dev] [tripleo] Logo for TripleO

2016-03-11 Thread Dmitry Tantsur

On 03/11/2016 06:32 AM, Jason Rist wrote:

Hey everyone -
We've been working on a UI for TripleO for a few months now and we're
just about to beg to be a part of upstream... and we're in need of a
logo for the login page and header.

In my evenings, I've come up with a logo.

It's a take on the work that Dan has already done on the Owl idea:
http://wixagrid.com/tripleo/tripleo_svg.html


This is looking fantastic!! Big +1 to using it everywhere.

We also need to put this Owl and ironic's Bear together somewhere :)



I think it'd be cool if it were used on the CI page and maybe even
tripleo.org - I ran it by the guys on #tripleo and they seem to be
pretty warm on the idea, so I thought I'd run it by here if you missed
the conversation.

It's SVG so we can change the colors pretty easily as I have in the two
attached screenshots.  It also doesn't need to be loaded as a separate
asset.  Additionally, it scales well since it's basically vector instead
of rasterized.

What do you guys think?

Can we use it?

I can do a patch for tripleo.org and the ci and wherever else it's in use.

-J



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all][ptl] preparing to create stable/mitaka branches for libraries

2016-03-09 Thread Dmitry Tantsur
2016-03-09 18:26 GMT+01:00 Doug Hellmann <d...@doughellmann.com>:

> It's time to start opening the stable branches for libraries. I've
> prepared a list of repositories and the proposed versions from which
> we will create stable/mitaka branches, and need each team to sign off on
> the versions. If you know you intend to release a bug fix version in
> the next couple of days, we can wait to avoid having to backport
> patches, but otherwise we should go ahead and create the branches.
>
> I will process each repository as I hear from the owning team.
>
> openstack/ceilometermiddleware 0.4.0
> openstack/django_openstack_auth 2.2.0
> openstack/glance_store 0.13.0
> openstack/ironic-lib 1.1.0
> openstack/keystoneauth 2.3.0
> openstack/keystonemiddleware 4.3.0
> openstack/os-brick 1.1.0
> openstack/os-client-config 1.16.0
> openstack/pycadf 2.1.0
> openstack/python-barbicanclient 4.0.0
> openstack/python-brick-cinderclient-ext 0.1.0
> openstack/python-ceilometerclient 2.3.0
> openstack/python-cinderclient 1.6.0
> openstack/cliff 2.0.0
> openstack/python-designateclient 2.0.0
> openstack/python-glanceclient 2.0.0
> openstack/python-heatclient 1.0.0
> openstack/python-ironic-inspector-client 1.5.0
>

This one is fine.

Thanks,
Dmitry.


> openstack/python-ironicclient 1.2.0
> openstack/python-keystoneclient 2.3.1
> openstack/python-manilaclient 1.8.0
> openstack/python-neutronclient 4.1.1
> openstack/python-novaclient 3.3.0
> openstack/python-saharaclient 0.13.0
> openstack/python-swiftclient 3.0.0
> openstack/python-troveclient 2.1.0
> openstack/python-zaqarclient 1.0.0
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Dmitry Tantsur

On 03/06/2016 05:58 PM, James Slagle wrote:

On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi  wrote:

I'm kind of hijacking Dan's e-mail but I would like to propose some
technical improvements to stop having so much CI failures.


1/ Stop creating swap files. We don't have SSDs; it is IMHO a terrible
mistake to swap on files because we don't have enough RAM. In my
experience, swapping on non-SSD disks is even worse than not having
enough RAM. We should stop doing that, I think.


We have been relying on swap in tripleo-ci for a little while. While
not ideal, it has been an effective way to at least be able to test
what we've been testing given the amount of physical RAM that is
available.

The recent change to add swap to the overcloud nodes has proved to be
unstable. But that has more to do with it being racey with the
validation deployment afaict. There are some patches currently up to
address those issues.




2/ Split CI jobs in scenarios.

Currently we have CI jobs for ceph, HA, non-ha, containers and the
current situation is that jobs fail randomly, due to performance issues.

Puppet OpenStack CI had the same issue where we had one integration job
and we never stopped adding more services until all becomes *very*
unstable. We solved that issue by splitting the jobs and creating scenarios:

https://github.com/openstack/puppet-openstack-integration#description

What I propose is to split TripleO jobs in more jobs, but with less
services.

The benefit of that:

* more services coverage
* jobs will run faster
* less random issues due to bad performances

The cost is of course it will consume more resources.
That's why I suggest 3/.

We could have:

* HA job with ceph and a full compute scenario (glance, nova, cinder,
ceilometer, aodh & gnocchi).
* Same with IPv6 & SSL.
* HA job without ceph and full compute scenario too
* HA job without ceph and basic compute (glance and nova), with extra
services like Trove, Sahara, etc.
* ...
(note: all jobs would have network isolation, which is to me a
requirement when testing an installer like TripleO).


Each of those jobs would at least require as much memory as our
current HA job. I don't see how this gets us to using less memory. The
HA job we have now already deploys the minimal amount of services that
is possible given our current architecture. Without the composable
service roles work, we can't deploy less services than we already are.





3/ Drop non-ha job.
I'm not sure why we have it, or the benefit of testing that compared
to HA.


In my opinion, I actually think that we could drop the ceph and non-ha
job from the check-tripleo queue.

non-ha doesn't test anything realistic, and it doesn't really provide
any faster feedback on patches. It seems at most it might run 15-20
minutes faster than the HA job on average. Sometimes it even runs
slower than the HA job.


The non-HA job is the only job with introspection. So you'll have to 
enable introspection on the HA job, bumping its run time.




The ceph job we could move to the experimental queue to run on demand
on patches that might affect ceph, and it could also be a daily
periodic job.

The same could be done for the containers job, an IPv6 job, and an
upgrades job. Ideally with a way to run an individual job as needed.
Would we need different experimental queues to do that?

That would leave only the HA job in the check queue, which we should
run with SSL and network isolation. We could deploy less testenv's
since we'd have less jobs running, but give the ones we do deploy more
RAM. I think this would really alleviate a lot of the transient
intermittent failures we get in CI currently. It would also likely run
faster.

It's probably worth seeking out some exact evidence from the RDO
centos-ci, because I think they are testing with virtual environments
that have a lot more RAM than tripleo-ci does. It'd be good to
understand if they have some of the transient failures that tripleo-ci
does as well.

We really are deploying on the absolute minimum cpu/ram requirements
that is even possible. I think it's unrealistic to expect a lot of
stability in that scenario. And I think that's a big reason why we get
so many transient failures.

In summary: give the testenv's more ram, have one job in the
check-tripleo queue, as many jobs as needed in the experimental queue,
and as many periodic jobs as necessary.





Any comment / feedback is welcome,
--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [ironic] Remember to follow RFE process

2016-03-03 Thread Dmitry Tantsur
2016-03-03 11:01 GMT+01:00 Lucas Alvares Gomes <lucasago...@gmail.com>:

> Hi,
>
> > Ironic'ers, please remember to follow the RFE process; especially the
> cores.
> >
> > I noticed that a patch [1] got merged yesterday. The patch was associated
> > with an RFE [2] that hadn't been approved yet :-( What caught my eye was
> > that the commit message didn't describe the actual API change so I took a
> > quick look at the (RFE) bug and it wasn't documented there either.
> >
> > As a reminder, the RFE process is documented [3].
> >
> > Spec cores need to try to be more timely wrt specs (I admit, I am
> guilty).
> > And folks, especially cores, ought to take more care when reviewing.
> > Although I do feel like there are too many things that a reviewer needs
> to
> > keep in mind.
> >
> > Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
> the
> > patch itself. But I don't think I should have to, to know what the API
> > change is.)
> >
>
> Thanks for calling it out Ruby, that's unfortunate that the patch was
> merged without the RFE being approved. About reverting the patch I
> think we shouldn't do that now because the patch is touching the API
> and introducing a new microversion to it.
>

Exactly. I've -2'ed the revert, as removing an API version is even worse than
landing a change without an approved RFE. Let us make sure to approve the RFE
asap, and then adjust the code according to it.


>
> And yes, as reviewers let's try to improve our process. We probably
> should talk about how we can do it in the next upstream meeting.
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Dmitry Tantsur

I agree with Daniel + a couple more comments inline.

On 02/22/2016 04:49 PM, Daniel P. Berrange wrote:

On Mon, Feb 22, 2016 at 04:14:06PM +0100, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.


Yes, please. Your proposal addresses the big issue I have with current
summits which is the really poor timing wrt start of each dev cycle.


The idea would be to split the events. The first event would be for upstream
technical contributors to OpenStack. It would be held in a simpler,
scaled-back setting that would let all OpenStack project teams meet in
separate rooms, but in a co-located event that would make it easy to have
ad-hoc cross-project discussions. It would happen closer to the centers of
mass of contributors, in less-expensive locations.


The idea that we can choose less expensive locations is great, but I'm a
little wary of focusing too much on "centers of mass of contributors", as
it can easily become an excuse to have it in roughly the same places each
time. As a non-USA based contributor, I really value the fact that the
summits rotate around different regions instead of spending all the time
in the USA as was the case in earlier OpenStack days. Minimizing travel
costs is no doubt a welcome aim for companies' budgets, but it should not
be allowed to dominate to such a large extent that we miss representation
of different regions. ie if we never went back to Asia because the it is
cheaper for the /current/ majority of contributors to go to the US, we'll
make it harder to attract new contributors from those regions we avoid on
cost ground. The "center of mass of contributors" could become a self-
fullfilling prophecy.

IOW, I'm onboard with choosing less expensive locations, but would like
to see us still make the effort to reach out across different regions
for the events, and not become too US focused once again.


+1 here. I get the impression that midcycles now usually happen in the 
US. Indeed, it's probably much cheaper for the majority of contributors, 
but it would make things worse for non-US folks.





The split should ideally reduce the needs to organize separate in-person
mid-cycle events. If some are still needed, the main conference venue and
time could easily be used to provide space for such midcycle events (given
that it would end up happening in the middle of the cycle).


The obvious risk with suggesting that current mid-cycle events could take
place alongside the business conference, is that the "business conference"
ends up being just as large as our combined conference is today. IOW we
risk actually creating 4 big official developer events a year, instead of
the current 2 events + small unofficial mid-cycles. You'd need to find some
way to limit the scope of any "mid cycle" events that co-located with the
business conference to prevent it growing out of hand.  We really want to
make sure we keep the mid-cycles portrayed as optional small scale
"hackathons", and not something that contributors feel obligated to
attend. IMHO they're already risking getting out of hand - it is hard to
feel well connected to development plans if you miss the mid-cycle events.


This time we (Ironic) tried a virtual midcycle using the asterisk 
infrastructure provided by the infra team, and it worked surprisingly 
well. I'd recommend that more teams try this option instead of trying to 
find a better way of having one more expensive f2f event (even though I 
really like to meet other folks).




Regards,
Daniel




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

On 02/19/2016 01:29 PM, Lucas Alvares Gomes wrote:

Hi,

By removing stable branches, do you mean stable branches for mitaka and
newer releases, or does that include stable/liberty, which already exists
as well?

I think the latter is more complicated, I don't think we should drop
stable/liberty like that because other people (apart from TripleO) may
also depend on that. I mean, it wouldn't be very "stable" if stable
branches were deleted before their supported phases.


Yeah, this is a valid concern. Maybe we should recommend RDO somehow 
ignore stable/liberty, and then no longer have stable branches.




But that said, I'm +1 to not have stable branches for newer releases.

Cheers,
Lucas

On Fri, Feb 19, 2016 at 12:17 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our gate
is using the prebuilt image generated from the master branch even on
Ironic/Inspector stable branches. The branch in question was added by
request of RDO folks, and today I got a request from trown to remove it:

<trown> dtantsur: btw, what do you think the chances are that IPA gets rid
of stable branch?
<dtantsur> I'm +1 on that, because currently only tripleo is using this
stable branch, our own gates are using tarball from master
<dtantsur> s/tarball/prebuilt image/
<trown> cool, from RDO perspective, I would prefer to have master package in
our liberty delorean server, but I can't do that (without major hacks) if
there is a stable/liberty branch
<trown> LIO support being the main reason
<trown> fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the Ironic
gate in any regard, as we don't use stable IPA there anyway, as I mentioned
before. As we do now, we'll keep IPA compatible with all supported
Ironic and Inspector versions.

Opinions?



[openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our 
gate is using the prebuilt image generated from the master branch even 
on Ironic/Inspector stable branches. The branch in question was added by 
request of RDO folks, and today I got a request from trown to remove it:


<trown> dtantsur: btw, what do you think the chances are that IPA gets 
rid of stable branch?
<dtantsur> I'm +1 on that, because currently only tripleo is using this 
stable branch, our own gates are using tarball from master

<dtantsur> s/tarball/prebuilt image/
<trown> cool, from RDO perspective, I would prefer to have master 
package in our liberty delorean server, but I cant do that (without 
major hacks) if there is a stable/liberty branch

<trown> LIO support being the main reason
<trown> fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the 
Ironic gate in any regard, as we don't use stable IPA there anyway, as I 
mentioned before. As we do now, we'll keep IPA compatible with
all supported Ironic and Inspector versions.


Opinions?



Re: [openstack-dev] [stable][i18n] What is the backport policy on i18n changes?

2016-02-18 Thread Dmitry Tantsur

On 02/18/2016 01:16 AM, Matt Riedemann wrote:

I don't think we have an official policy for stable backports with
respect to translatable string changes.

I'm looking at a release request for ironic-inspector on stable/liberty
[1] and one of the changes in that has translatable string changes to
user-facing error messages [2].

mrunge brought up this issue in the stable team meeting this week also
since Horizon has to be extra careful about backporting changes with
translatable string changes.

I think on the server side, if they are changes that just go in the
logs, it's not a huge issue. But for user facing changes, should we
treat those like StringFreeze [3]? Or only if the stable branches for
the given project aren't getting translation updates? I know the server
projects (at least nova) are still getting translation updates on
stable/liberty, so if we do backport changes with translatable string
updates, they aren't getting updated in stable. I don't see anything
like that happening for ironic-inspector on stable/liberty though.


Hi!

I had this concern, but ironic-inspector has never had any actual 
translations, so I don't think it's worth blocking this (pretty 
annoying) bug fix based on that.




Thoughts?

[1] https://review.openstack.org/#/c/279515/
[2] https://review.openstack.org/#/c/279071/1/ironic_inspector/process.py
[3] https://wiki.openstack.org/wiki/StringFreeze






Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

On 02/17/2016 02:22 PM, John Trowbridge wrote:



On 02/17/2016 06:27 AM, Dmitry Tantsur wrote:

Hi everyone!

Yesterday on the Ironic midcycle we agreed that we would like to remove
support for the old bash ramdisk from our code and gate. This, however,
poses a problem, since we still support Kilo and Liberty. Meaning:

1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have
stable branches.
3. Then we can't remove support from Ironic master as well, as it would
break DIB job :(

I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code.
This means that the old ramdisk will essentially be supported in Mitaka,
but we'll remove gating on stable/liberty and stable/mitaka very soon.
Pros: it will happen soon. Cons: in theory we do support the old ramdisk
on Liberty, so removing gates will end this support prematurely.

2. Wait for Liberty end-of-life. This means that the old ramdisk will
essentially be supported in Mitaka and Newton. We should somehow
communicate that it's not official and can be dropped at any moment
during the stable branches' lifetime. Pros: we don't drop support of the
bash ramdisk on any branch where we promised to support it. Cons: people
might assume we still support the old ramdisk on Mitaka/Newton; it will
also take a lot of time.

3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
happens now, no confusing around old ramdisk support in Mitaka and
later. Cons: probably most Kilo users (us included) are using the bash
ramdisk, meaning we can potentially break them when landing changes on
stable/kilo.



I think if we were to do this, then we need to backport LIO support in
IPA to liberty and kilo. While the bash ramdisk is not awesome to
troubleshoot, tgtd is not great either, and the bash ramdisk has
supported LIO since Kilo. However, there is no stable/kilo branch in
IPA, so that backport is impossible. I have not looked at how hard the
stable/liberty backport would be, but I imagine not very.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
remove gates from Ironic master and DIB, leaving them on Kilo and
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
bug fixes won't affect kilo and liberty any more.

5. The same as #4, but only on Kilo.

As the gate on stable/kilo is not working right now, and end-of-life is
quickly approaching, I see number 3 as a pretty viable option anyway. We
probably won't land any more changes on Kilo, so no use in keeping gates
on it. Liberty is still a concern though, as the old ramdisk was only
deprecated in Liberty.

What do you all think? Did I miss any options?


My favorite option would be 5 with backport of LIO support to liberty
(since backport to kilo is not possible). That is the only benefit of
the current bash ramdisk over the liberty/kilo IPA ramdisk. This is not
just for RHEL, but RHEL derivatives like CentOS which the RDO distro is
based on. (technically tgt can still be installed from EPEL, but there
is a reason it is not included in the base repos)


Oh, that's a good catch, IPA is usable on RHEL starting with Mitaka... I 
wonder if having stable branches for IPA was a good idea at all, 
especially given that our gate uses git master on all branches.




Other than that, I think 4 is the next best option.


Cheers,
Dmitry



[openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

Hi everyone!

Yesterday on the Ironic midcycle we agreed that we would like to remove 
support for the old bash ramdisk from our code and gate. This, however, 
poses a problem, since we still support Kilo and Liberty. Meaning:


1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have 
stable branches.
3. Then we can't remove support from Ironic master as well, as it would 
break DIB job :(


I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code. 
This means that the old ramdisk will essentially be supported in Mitaka, 
but we'll remove gating on stable/liberty and stable/mitaka very soon. 
Pros: it will happen soon. Cons: in theory we do support the old ramdisk 
on Liberty, so removing gates will end this support prematurely.


2. Wait for Liberty end-of-life. This means that the old ramdisk will 
essentially be supported in Mitaka and Newton. We should somehow 
communicate that it's not official and can be dropped at any moment 
during the stable branches' lifetime. Pros: we don't drop support of the 
bash ramdisk on any branch where we promised to support it. Cons: people 
might assume we still support the old ramdisk on Mitaka/Newton; it will 
also take a lot of time.


3. Do it now, recommend that Kilo users switch to IPA too. Pros: it 
happens now, no confusion around old ramdisk support in Mitaka and 
later. Cons: probably most Kilo users (us included) are using the bash 
ramdisk, meaning we can potentially break them when landing changes on 
stable/kilo.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then 
remove gates from Ironic master and DIB, leaving them on Kilo and 
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB 
bug fixes won't affect kilo and liberty any more.


5. The same as #4, but only on Kilo.

As the gate on stable/kilo is not working right now, and end-of-life is 
quickly approaching, I see number 3 as a pretty viable option anyway. We 
probably won't land any more changes on Kilo, so no use in keeping gates 
on it. Liberty is still a concern though, as the old ramdisk was only 
deprecated in Liberty.


What do you all think? Did I miss any options?

Cheers,
Dmitry



Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/07/2016 09:07 PM, Jay Pipes wrote:

Hello all,

tl;dr
=

I have long thought that the OpenStack Summits have become too
commercial and provide little value to the software engineers
contributing to OpenStack.

I propose the following:

1) Separate the design summits from the conferences
2) Hold only a single OpenStack conference per year
3) Return the design summit to being a low-key, low-cost working event


It sounds like a great idea, but I have a couple of concerns - see below.



details
===

The design summits originally started out as working events. Developers
got together in smallish rooms, arranged chairs in a fishbowl, and got
to work planning and designing.

With the OpenStack Summit growing more and more marketing- and
sales-focused, the contributors attending the design summit are often
unfocused. The precious little time that developers have to actually
work on the next release planning is often interrupted or cut short by
the large numbers of "suits" and salespeople at the conference event,
many of which are peddling a product or pushing a corporate agenda.

Many contributors submit talks to speak at the conference part of an
OpenStack Summit because their company says it's the only way they will
pay for them to attend the design summit. This is, IMHO, a terrible
thing. The design summit is a *working* event. Companies that contribute
to OpenStack projects should send their engineers to working events
because that is where work is done, not so that their engineer can go
give a talk about some vendor's agenda-item or newfangled product.


I'm afraid that if a company does not value employees' participation in 
the design summit alone, they will continue to send them to the 
conference event, ignoring the design part completely. I.e. we'll get 
even fewer people from these companies. (of course it's only me guessing)


Also it means that people who actually have to be present in both places 
will travel even more, so it has high chances of increasing the budget, 
not decreasing it.




Part of the reason that companies only send engineers who are giving a
talk at the conference side is that the cost of attending the OpenStack
Summit has become ludicrously expensive. Why have the events become so
expensive? I can think of a few reasons:

a) They are held every six months. I know of no other community or open
source project that holds *conference-type* events every six months.

b) They are held in extremely expensive hotels and conference centers
because the number of attendees is so big.


On one hand, big +1 for "extremely expensive" part.

On the other hand, for participants arriving from another continent the 
airfare is roughly half of the whole expense. This probably can't be 
improved (and may actually become worse for some of us, if new events 
become more US-centric).




c) Because the conferences have become sales and marketing-focused
events, companies shell out hundreds of thousands of dollars for schwag,
for rented event people, for food and beverage sponsorships, for keynote
slots, for lavish and often ridiculous parties, and more. This cost
means less money to send engineers to the design summit to do actual work.

I would love to see the OpenStack contributor community take back the
design summit to its original format and purpose and decouple it from
the OpenStack Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing
and event planning staff. This will allow lower-cost venues to be chosen
that meet the needs only of the small group of active contributors, not
of huge masses of conference attendees. This will allow contributor
companies to send *more* engineers to *more* design summits, which is
something that really needs to happen if we are to grow our active
contributor pool.

Once this decoupling occurs, I think that the OpenStack Summit should be
renamed to the OpenStack Conference and Expo to better fit its purpose
and focus. This Conference and Expo event really should be held once a
year, in my opinion, and continue to be run by the OpenStack Foundation.

I, for one, would welcome events that have no conference check-in area,
no evening parties with 2000 people, no keynote and
powerpoint-as-a-service sessions, and no getting pulled into sales
meetings.

OK, there, I said it.

Thoughts? Criticism? Support? Suggestions welcome.

-jay


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/08/2016 06:37 PM, Kevin L. Mitchell wrote:

On Mon, 2016-02-08 at 10:49 -0500, Jay Pipes wrote:

5) Dealing with schwag, giveaways, parties, and other superfluous
stuff


As a confirmed introvert, I have to say that I rarely attend parties,
for a variety of reasons.  However, I don't think our hypothetical
design-only meeting should completely eliminate parties, though we can
back off from some of the more extravagant ones.  If we maintain at
least one party, I think that would satisfy the social needs of the
community without distracting too much from the main purpose of the
event.  Of course, I agree with eliminating the other distracting
elements, such as schwag and giveaways…



+1, I think we can just make the party somewhat less fancy.



Re: [openstack-dev] Exception request : [stable] Ironic doesn't use cacert while talking to Swift ( https://review.openstack.org/#/c/253460/)

2016-01-18 Thread Dmitry Tantsur

On 01/17/2016 09:25 AM, Nisha Agarwal wrote:

Hello Team,

This patch got approval long back (Jan 6), but due to a Jenkins failure in
the merge pipeline of the Kilo branch, it was not merged.

Hence I request an exception for this patch, as it was not merged
due to a Jenkins issue.


Hi.

Our kilo gate is still not feeling well, so I'm not sure there's any 
point in giving an exception for anything not deadly critical. Sorry.




Regards
Nisha

--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.




Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Dmitry Tantsur

On 01/11/2016 11:09 PM, Tzu-Mainn Chen wrote:

- Original Message -

Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I created a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the deploy OpenStack with OpenStack. I sort of feel like if we have to
build our own API in TripleO part of this vision has failed and could
even be seen as a massive technical debt which would likely be hard to
build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API more
easy to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to the TripleO that we
accept into our upstream "stock" workflow sets.



Hiya!  Thanks for putting down your thoughts.

I think I fundamentally disagree with the idea of using Mistral, simply
because many of the actions we'd like to expose through a REST API
(described in the tripleo-common deployment library spec [1]) aren't
workflows; they're straightforward get/set methods.


Right, because this spec describes almost none of what is present 
in tripleoclient now. And what we realistically have now is workflows, 
which we'll have to reimplement in the API somehow. So maybe we need both: 
the get/set TripleO API for deployment plans and Mistral for workflows.


Putting a workflow engine in front of that feels like overkill and an added
complication that simply isn't needed.  And added complications can lead to
unneeded complications: for instance, one open Mistral bug details how it may
not scale well [2].


Let's not talk about scaling in the context of what we have in 
tripleoclient now ;)




The Mistral solution feels like we're trying to force a circular peg
into a round-ish hole.  In a vacuum, if we were to consider the
engineering problem of exposing a code base to outside consumers in a
non-language specific fashion - I'm pretty sure we'd just suggest the
creation of a REST API and be done with it; the thought of using a
workflow engine as the frontend would not cross our minds.

I don't really agree with the 'purist' argument.  We already have custom
business logic written in the TripleO CLI; accepting that within TripleO,
but not a very thin API layer, feels like an arbitrary line to me.  And
if that 

Re: [openstack-dev] [release][ironic] ironic-python-agent release 1.1.0 (mitaka)

2016-01-12 Thread Dmitry Tantsur
The gate is not working right now, as we still use preversioning in 
setup.cfg, and we have a version mismatch, e.g. 
http://logs.openstack.org/74/264274/1/check/gate-ironic-python-agent-pep8/8d6ef18/console.html.


Patch to remove the version from setup.cfg:
https://review.openstack.org/#/c/266267/
Will backport to liberty as soon as it merges.

On 01/11/2016 10:01 PM, d...@doughellmann.com wrote:

We are glad to announce the release of:

ironic-python-agent 1.1.0: Ironic Python Agent Ramdisk

This release is part of the mitaka release series.

With package available at:

 https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.1.0
^^^^^


New Features
************

* The CoreOS image builder now uses the latest CoreOS stable version
   when building images.

* IPA now supports Linux-IO as an alternative to tgtd. The iSCSI
   extension will try to use Linux-IO first, and fall back to tgtd if
   Linux-IO is not found or cannot be used.

* Adds support for setting proxy info for downloading images. This
   is controlled by the *proxies* and *no_proxy* keys in the
   *image_info* dict of the *prepare_image* command.

* Adds support for streaming raw images directly onto the disk. This
   avoids writing the image to a tmpfs partition before writing it to
   disk, which also enables using images larger than the usable amount
   of RAM on the machine IPA runs on. Pass *stream_raw_images=True* to
   the *prepare_image* command to enable this; it is disabled by
   default.

* CoreOS image builder now runs IPA in a chroot, instead of a
   container. systemd-nspawn has been adding more security features
   that break several things IPA needs to do (after all, IPA
   manipulates hardware), such as using sysrq triggers or writing to
   /sys.

* Root device hints now also inspect ID_WWN_WITH_EXTENSION and
   ID_WWN_VENDOR_EXTENSION from udev.


Upgrade Notes
*************

* Now that IPA runs in a chroot, any operator tooling built around
   the container may need to change (for example, methods of getting a
   shell inside the container).


Bug Fixes
*********

* Raw images larger than the available RAM may now be used by passing
   *stream_raw_images=True* to the *prepare_image* command; these will
   be streamed directly to disk.

* Fixes an issue using the "logs" inspection collector when logs
   contain non-ascii characters.

* Makes tgtd ready status detection more robust.

* Fixes configdrive creation for MBR disks greater than 2TB.


Other Notes
***********

* System random is now used where applicable, rather than the
   default python random library.


Changes in ironic-python-agent 1.0.0..1.1.0
-------------------------------------------

43a149d Updated from global requirements
dcdb06d Replace deprecated LOG.warn with LOG.warning
4b561f1 Updated from global requirements
943d2c0 Revert "Use latest CoreOS stable when building"
a39dfbd Updated from global requirements
ffcdcd4 Add mitaka reno page
cfcef97 Replace assertEqual(None, *) with assertIsNone in tests
b9df861 Catch up release notes for Mitaka
e8488c2 Add reno for release notes management
d185927 Fix trivial typo in docs
5bac998 Updated from global requirements
4cd64e2 Delete the Linux-IO target before setting up local boot
056bb42 CoreOS: Ensure /run is mounted before starting
6dc7f34 Deprecated tox -downloadcache option removed
a253e50 Use latest CoreOS stable when building
84fc428 Updated from global requirements
b5b0b63 Run IPA in chroot instead of container in CoreOS
5fa258b Fix "logs" inspection collector when logs contain non-ascii symbols
2fc6ce2 pyudev exception has changed for from_device_file
c474a5a Support Linux-IO in addition to tgtd
f4ad4d7 Updated from global requirements
863b47b Updated from global requirements
e320bb8 Add support for streaming raw images directly onto the disk
65053b7 Refactor the image download and checksum computation bits
c21409e Follow up patch for da9c3b0adc67efa916fc534d975823c0a45948a1
a01c4c9 Create partition at max msdos limit for disks > 2TB
54c901e Support proxies for image download
d97dbf2 Updated from global requirements
da9c3b0 Extend root device hints for different types of WWN
505b345 Fix to preserve double dashes of command line option in HTML.
59630d4 Updated from global requirements
9e75ba5 Use oslo.log instead of original logging
037e391 Updated from global requirements
18d5d6a Replace deprecated LOG.warn with LOG.warning
e51ccbe avoid duplicate text in ISCSIError message
fb920f4 determine tgtd ready status through tgtadm
f042be5 Updated from global requirements
1aeef4d Updated from global requirements
f01 Add param docstring into the normalize func
06d34ae Make calling arguments easier to understand
6131b2e Ensure all methods in utils.py have docstrings
7823240 Updated from global requirements
af20875 Update gitignore
5f7bc48 Reduce size of CoreOS ramdisk
deb50ac Add LOG.debug() if requested device type not found
d538f5e Babel is not a direct dependency

Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-12 Thread Dmitry Tantsur

On 01/11/2016 03:49 PM, Serge Kovaleff wrote:

Hi All,

Last week I had a noble goal to write "one-more" functional test in Ironic.
I did find a folder "func" but it was empty.

Friends helped me to find a WIP patch
https://review.openstack.org/#/c/235612/

and here comes the question of this email: what approach we would like
to implement:
Option 1 - write infrastructure code that starts/configure/stops the
services
Option 2 - rely on installed DevStack and run the tests over it

Both options have their pros and cons. Both options are implemented
across the OpenStack umbrella.
Option 1 - Glance, Nova, the patch above
Option 2 - HEAT and my favorite at the moment.

Any ideas?


I think we should eventually end up supporting both standalone 
functional tests (#1) and tempest-based (#3).


A bit of context on #1. We've been using it in ironic-inspector since 
nearly its inception, when devstack plugins didn't exist and our project 
was on stackforge, so we had no way of implementing our dsvm gate. The 
basic idea is to start a full-featured service with mocked access to 
other services and a simplified environment. We've written a decorator [1] 
that starts the service in __enter__ and stops in __exit__. It uses a 
temporary configuration file [2] with authentication disabled and 
database in temporary SQLite file. The service is started in a new green 
thread and exits when the test exits. We mock ironicclient with the 
usual 'mock' library to avoid depending on a running ironic instance.
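
For illustration, a heavily simplified sketch of this pattern (made-up 
names, not the actual ironic-inspector code):

    # Run the service under test in a green thread with a throw-away
    # config. "service" is a made-up stand-in for whatever exposes
    # run(config_file) and shutdown() callables; assumes eventlet
    # monkey patching is in effect so the service actually yields.
    import contextlib
    import tempfile

    import eventlet


    @contextlib.contextmanager
    def running_service(service):
        with tempfile.NamedTemporaryFile(mode='w', suffix='.conf') as cfg:
            # simplified environment: no auth, database in a temporary file
            cfg.write('[DEFAULT]\nauth_strategy = noauth\n'
                      '[database]\nconnection = sqlite:///%s.db\n' % cfg.name)
            cfg.flush()
            thread = eventlet.spawn(service.run, cfg.name)
            eventlet.sleep(1)  # crude way to wait for the API to come up
            try:
                yield service
            finally:
                service.shutdown()
                thread.wait()

The real code in [1] and [2] does more (readiness checks, cleanup), but 
that's the idea.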


We do 2 kinds of tests: just an API test like [3] or full-flow 
introspection tests like [4]. In the latter case we first start 
introspection via API, verify that status is "in progress", then call 
the ramdisk callback endpoint with a fake data, and verify that 
introspection ends successfully.


Applying the same thing to ironic might be somewhat trickier. We can run 
conductor and API in the same process and use oslo.messaging fake driver 
[5] to avoid AMQP dependency. We'll have to use a fake network 
implementation and either mock glance or make sure we use local file 
system URLs for all images (IIRC we do support it).
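
For reference, selecting the fake driver is just a transport URL away (a 
sketch; the exact config plumbing in ironic may differ):

    # In-process fake RPC transport from oslo.messaging [5]; no AMQP
    # broker needed.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF, url='fake://')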


Going further, if we create a simple fake IPA, we can even do a 
full-flow test with fake_agent driver. We will start deployment, make 
sure fake IPA was called with the right image, make it report success and 
see deployment finish. We might want to modify fake interfaces to record 
all calls, so that we can verify that the boot interface was called properly.


[1] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L358-L393
[2] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L36-L51
[3] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L249-L291
[4] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L177-L196

[5] http://docs.openstack.org/developer/oslo.messaging/drivers.html#fake



Cheers,
Serge Kovaleff
http://www.mirantis.com 
cell: +38 (063) 83-155-70




Re: [openstack-dev] [release][ironic] ironic-python-agent release 1.1.0 (mitaka)

2016-01-12 Thread Dmitry Tantsur

On 01/12/2016 10:56 AM, Dmitry Tantsur wrote:

The gate is not working right now, as we still use preversioning in
setup.cfg, and we have a version mismatch, e.g.
http://logs.openstack.org/74/264274/1/check/gate-ironic-python-agent-pep8/8d6ef18/console.html.


Patch to remove the version from setup.cfg:
https://review.openstack.org/#/c/266267/
Will backport to liberty as soon as it merges.


Master change has merged, liberty was already fine, so the gate should 
be fine now.




On 01/11/2016 10:01 PM, d...@doughellmann.com wrote:

We are glad to announce the release of:

ironic-python-agent 1.1.0: Ironic Python Agent Ramdisk

This release is part of the mitaka release series.

With package available at:

 https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.1.0
^^^^^


New Features
************

* The CoreOS image builder now uses the latest CoreOS stable version
   when building images.

* IPA now supports Linux-IO as an alternative to tgtd. The iSCSI
   extension will try to use Linux-IO first, and fall back to tgtd if
   Linux-IO is not found or cannot be used.

* Adds support for setting proxy info for downloading images. This
   is controlled by the *proxies* and *no_proxy* keys in the
   *image_info* dict of the *prepare_image* command.

* Adds support for streaming raw images directly onto the disk. This
   avoids writing the image to a tmpfs partition before writing it to
   disk, which also enables using images larger than the usable amount
   of RAM on the machine IPA runs on. Pass *stream_raw_images=True* to
   the *prepare_image* command to enable this; it is disabled by
   default.

* CoreOS image builder now runs IPA in a chroot, instead of a
   container. systemd-nspawn has been adding more security features
   that break several things IPA needs to do (after all, IPA
   manipulates hardware), such as using sysrq triggers or writing to
   /sys.

* Root device hints now also inspect ID_WWN_WITH_EXTENSION and
   ID_WWN_VENDOR_EXTENSION from udev.


Upgrade Notes
*************

* Now that IPA runs in a chroot, any operator tooling built around
   the container may need to change (for example, methods of getting a
   shell inside the container).


Bug Fixes
*********

* Raw images larger than the available RAM may now be used by passing
   *stream_raw_images=True* to the *prepare_image* command; these will
   be streamed directly to disk.

* Fixes an issue using the "logs" inspection collector when logs
   contain non-ascii characters.

* Makes tgtd ready status detection more robust.

* Fixes configdrive creation for MBR disks greater than 2TB.


Other Notes
***********

* System random is now used where applicable, rather than the
   default python random library.


Changes in ironic-python-agent 1.0.0..1.1.0
-------------------------------------------

43a149d Updated from global requirements
dcdb06d Replace deprecated LOG.warn with LOG.warning
4b561f1 Updated from global requirements
943d2c0 Revert "Use latest CoreOS stable when building"
a39dfbd Updated from global requirements
ffcdcd4 Add mitaka reno page
cfcef97 Replace assertEqual(None, *) with assertIsNone in tests
b9df861 Catch up release notes for Mitaka
e8488c2 Add reno for release notes management
d185927 Fix trivial typo in docs
5bac998 Updated from global requirements
4cd64e2 Delete the Linux-IO target before setting up local boot
056bb42 CoreOS: Ensure /run is mounted before starting
6dc7f34 Deprecated tox -downloadcache option removed
a253e50 Use latest CoreOS stable when building
84fc428 Updated from global requirements
b5b0b63 Run IPA in chroot instead of container in CoreOS
5fa258b Fix "logs" inspection collector when logs contain non-ascii
symbols
2fc6ce2 pyudev exception has changed for from_device_file
c474a5a Support Linux-IO in addition to tgtd
f4ad4d7 Updated from global requirements
863b47b Updated from global requirements
e320bb8 Add support for streaming raw images directly onto the disk
65053b7 Refactor the image download and checksum computation bits
c21409e Follow up patch for da9c3b0adc67efa916fc534d975823c0a45948a1
a01c4c9 Create partition at max msdos limit for disks > 2TB
54c901e Support proxies for image download
d97dbf2 Updated from global requirements
da9c3b0 Extend root device hints for different types of WWN
505b345 Fix to preserve double dashes of command line option in HTML.
59630d4 Updated from global requirements
9e75ba5 Use oslo.log instead of original logging
037e391 Updated from global requirements
18d5d6a Replace deprecated LOG.warn with LOG.warning
e51ccbe avoid duplicate text in ISCSIError message
fb920f4 determine tgtd ready status through tgtadm
f042be5 Updated from global requirements
1aeef4d Updated from global requirements
f01 Add param docstring into the normalize func
06d34ae Make calling arguments easier to understand
6131b2e Ensure all methods in utils.py have docstrings
7823240 Updated from global requirements

Re: [openstack-dev] [all][openstackclient] check/gate job to check for duplicate openstackclient commands

2016-01-10 Thread Dmitry Tantsur
2016-01-10 8:36 GMT+01:00 Steve Martinelli <steve...@ca.ibm.com>:

> During the Liberty release the OpenStackClient (OSC) team ran into an
> issue that is documented here: [0] and here: [1]. In short, commands were
> clobbering each other because they had the same command name.
>
> A longer example is this, OSC has a command for listing compute flavors
> (os flavor list). zaqarclient, an OSC plugin, also implemented an `os
> flavor list` command. This caused OSC to break (it became unusable because
> it couldn't load entry points), and the user had to upgrade their
> zaqarclient, which included a renamed command (os messaging flavor list).
>
> In an effort to make sure this doesn't happen again, we did two things:
> 1) fixed the exception handling upon load at the cliff level, now OSC won't
> become unusable, it'll just take the last entrypoint it sees, and 2) we
> created a new gate/check job that checks for duplicate commands [2].
> (Thanks to ajaeger and dhellmann for their help in this work!)
>
> I've added this job to the OpenStackClient gate (in a non-voting capacity
> for now), and would like to get it added to the following projects, again
> in a non-voting capacity (since they are all OSC plugins):
>
> - python-barbicanclient
> - python-congressclient
> - python-cueclient
> - python-designateclient
> - python-heatclient
> - python-ironicclient
> - python-ironic-inspector-client
> - python-mistralclient
> - python-saharaclient
> - python-tuskarclient
> - python-zaqarclient
>

Note that python-tripleoclient is also an OSC plugin.
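
By the way, the gist of such a check fits in a few lines of Python. A 
naive illustration (not the actual job from [2], which is smarter, e.g. 
about API version groups):

    # Scan all installed distributions for OSC entry points and report
    # command names claimed by more than one project.
    import collections

    import pkg_resources

    owners = collections.defaultdict(set)
    for dist in pkg_resources.working_set:
        for group, eps in dist.get_entry_map().items():
            if group.startswith('openstack.'):
                for name in eps:
                    owners[name].add(dist.project_name)

    for name, projects in sorted(owners.items()):
        if len(projects) > 1:
            print('duplicate %s: %s' % (name, ', '.join(sorted(projects))))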


>
> If the core team for any of those projects objects to me adding a new
> check job then reply to this thread or this patch [3]
>
> Regarding the eventual question about the value of a non-voting job:
> AFAICT, the new check job is working fine, it's catching valid errors and
> succeeded where it should. I'd like to make this voting eventually, but
> it's only been running in the OSC gate for about a week, and I'd like a few
> non-voting runs in the plugin projects to ensure we don't have any hiccups
> before making this a voting job.
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076272.html
> [1] https://bugs.launchpad.net/python-openstackclient/+bug/1503512
> [2] https://review.openstack.org/#/c/261828/
> [3] https://review.openstack.org/#/c/265608/
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Project Team Lead
>


-- 
--
-- Dmitry Tantsur
--


Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-07 Thread Dmitry Tantsur
2016-01-07 20:09 GMT+01:00 Jim Rollenhagen <j...@jimrollenhagen.com>:

> Hi all,
>
> A change to global-requirements[1] introduces mimic, which is an http
> server that can mock various APIs, including nova and ironic, including
> control of error codes and timeouts. The ironic team plans to use this
> for testing python-ironicclient without standing up a full ironic
> environment.
>
> Here's the catch - mimic is built on twisted. I know twisted was
> previously removed from OpenStack (or at least people said "pls no", I
> don't know the full history). We didn't intend to stealth-introduce
> twisted back into g-r, but it was pointed out to me that it may appear
> this way, so here I am letting everyone know. lifeless pointed out that
> when tests are failing, people may end up digging into mimic or twisted
> code, which most people in this community aren't familiar with AFAIK,
> which is a valid point though I hope it isn't required often.
>

Btw, I've spent a fair amount of time (5 years?) with twisted at my previous
jobs. While my memory of it is no longer fresh, I can definitely be pinged
to help with it if problems appear.


>
> So, the primary question here is: do folks have a problem with adding
> twisted here? We're holding off on Ironic changes that depend on this
> until this discussion has happened, but aren't reverting the g-r change
> until we decide one way or another.
>
> // jim
>
> [1] https://review.openstack.org/#/c/220268/
>



-- 
--
-- Dmitry Tantsur
--


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-22 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by it.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct some sanity checks before even calling
heat/nova to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of APIs to
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.
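
For example, rule 1 might look roughly like this in that DSL (a sketch 
from memory -- see [1] for the authoritative syntax):

    # Rough sketch of rule 1; field paths and action names are from
    # memory, [1] has the real syntax.
    rule = {
        "description": "nodes with enough RAM can serve as compute",
        "conditions": [
            {"op": "ge", "field": "memory_mb", "value": 8192},
        ],
        "actions": [
            {"action": "set-capability",
             "name": "compute_profile", "value": "1"},
        ],
    }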

(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with existing "profile:xxx" capability are left as they are. For
nodes without a profile it will look at "xxx_profile" capabilities
discovered on the previous step. One of the possible profiles will be
chosen and assigned to "profile" capability. The assignment stops as
soon as we have enough nodes of a flavor as requested by a user.


New update: after talking to Imre on IRC I realized that there's a lot of 
value in decoupling profile validation/assignment from the deploy 
command. One of the use cases is future RAID work: we'll need to 
configure RAID based on the profile.


I'm introducing 2 new commands[*]:

1. overcloud profiles assign --XXX-flavor=YYY --XXX-scale=NN ..

   accepts the same arguments as deploy (XXX = compute, control, etc), 
tries to both validate and assign profiles. In the future we might add 
things like --XXX-raid-configuration to set RAID config for matched 
nodes, or even something generic like --XXX-set-property KEY=VALUE.


2. overcloud profiles list

  shows nodes and their profiles (including possible ones)

Deployment command will only do validation, following the same logic as 
'profiles assign' command.


The patch is not finished yet, but early reviews are welcome: 
https://review.openstack.org/#/c/250405/


[*] note that we more or less agreed on avoiding other projects' OSC 
namespaces, hence 'overcloud' prefix instead of 'baremetal'




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check if we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
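
To make the intended logic a bit more concrete, here is a rough sketch 
of the assignment step (illustrative only, not the actual tripleoclient 
code; nodes are plain dicts here):

    # Illustrative sketch only. Capabilities live in a string such as
    # "profile:compute,boot_option:local" under node properties.
    def _capabilities(node):
        caps = node['properties'].get('capabilities') or ''
        return dict(c.split(':', 1) for c in caps.split(',') if ':' in c)

    def assign_profiles(nodes, needed):
        """needed is e.g. {'compute': 4, 'control': 1}; returns
        {node uuid: profile} for the nodes that should be updated.
        """
        assigned = {}
        for profile, count in needed.items():
            # nodes with an explicit profile are left as they are
            have = sum(1 for n in nodes
                       if _capabilities(n).get('profile') == profile)
            for node in nodes:
                if have >= count:
                    break
                caps = _capabilities(node)
                if 'profile' in caps or node['uuid'] in assigned:
                    continue
                # "<profile>_profile" flags come from the inspector rules
                if caps.get('%s_profile' % profile) == '1':
                    assigned[node['uuid']] = profile
                    have += 1
            if have < count:
                raise RuntimeError('%d node(s) with profile %s requested, '
                                   'only %d found' % (count, profile, have))
        return assigned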

Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules


