Re: [openstack-dev] Announcing the OpenStack Health Dashboard

2015-12-06 Thread Fawad Khaliq
Very cool! Thanks.

Fawad Khaliq


On Sat, Dec 5, 2015 at 1:38 AM, Matthew Treinish 
wrote:

> Hi Everyone,
>
> As some people may have seen already, we've been working on getting a test
> results dashboard up and running to visualize the state of the tests
> running in the gate. You can get to the dashboard here:
>
> http://status.openstack.org/openstack-health/#/
>
> It's still early for this project (we only started on it back in Sept.),
> so not
> everything is really polished yet and there are still a couple of issues
> and
> limitations.
>
> The biggest current limitation comes from the data store. We're using
> subunit2sql as the backend for all the data, and right now we are only
> collecting results for tempest and grenade runs in the gate and periodic
> queues. This is configurable, as any job that emits a subunit stream can use
> the same mechanism, and it is something we will likely change in the future.
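
As a rough illustration of "the same mechanism": loading a job's subunit
stream into a subunit2sql database is essentially a one-liner. The database
URL below is made up, and the option name should be checked against the
subunit2sql documentation for your version:

    # hypothetical example: push one job's subunit results into the results DB
    subunit2sql --database-connection \
        mysql://subunit:secret@db.example.org/subunit \
        testrepository.subunit
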
>
> We also don't have any results for runs that fail before tempest starts
> running,
> since we need a subunit stream to populate the DB with results. However,
> we have
> several proposed paths to fix this, so it's just a matter of time. But for
> right
> now if a job fails before tests start running it isn't counted on the
> dashboard.
>
> The code for everything lives here:
>
> http://git.openstack.org/cgit/openstack/openstack-health/
>
> If you find an issue feel free to file a bug here:
>
> https://bugs.launchpad.net/openstack-health
>
> We're eager to see this grow to enable the dashboard to suit the needs of
> anyone
> looking at the gate results.
>
> We're tracking work items that need to be done here:
>
> https://etherpad.openstack.org/p/openstack-health-tracking
>
> Please feel free to contribute if you have an idea on how to improve the
> dashboard, or want to take on one of the existing TODO items. The only way
> we'll
> be able to grow this into something that fits everyone's needs is if more
> people
> get involved in improving it.
>
> Thanks,
>
> Matthew Treinish
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][openstack-governance][governance][vitrage] Adding Vitrage to the Governance repository

2015-12-06 Thread Jeremy Stanley
On 2015-12-06 10:49:49 + (+), GROSZ, Maty (Maty) wrote:
> Vitrage would like to initialise a new puppet-vitrage module (change
> https://review.openstack.org/#/c/252214/). I was instructed to
> first add a patch to the governance repository:
> https://review.openstack.org/253893. How do I proceed? Can someone
> please review and approve?

I think Emilien's suggestion in 252214 was to add a puppet-vitrage
deliverable to the Puppet OpenStack project in that file, in which
case the process is just to wait for him to +1 it and then the TC
generally approves the addition at their next weekly meeting.

Instead, you're proposing a new official OpenStack project for
Vitrage. That's also fine, but that more involved process and its
expectations are described at
http://governance.openstack.org/reference/new-projects-requirements.html
with a lot of background information you should probably read at
http://docs.openstack.org/project-team-guide/ too.

Also, it's unusual for a project's corresponding Puppet modules to
be governed by the same team. You'll notice that for the vast
majority of OpenStack services their Puppet modules are a
collaborative effort of the Puppet OpenStack team instead (per
Emilien's original suggestion).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] workflow

2015-12-06 Thread John Trowbridge


On 12/03/2015 03:47 PM, Dan Prince wrote:
> On Tue, 2015-11-24 at 15:25 +, Dougal Matthews wrote:
>> On 23 November 2015 at 14:37, Dan Prince  wrote:
>>> There are lots of references to "workflow" within TripleO conversations
>>> these days. We are at (or near) the limit of what we can do within Heat
>>> with regards to upgrades. We've got a new TripleO API in the works (a
>>> new version of Tuskar basically) that is specifically meant to
>>> encapsulate the business logic and workflow around deployment. And there's
>>> lots of interest in using Ansible for this and that.
>>>
>>> So... Last week I spent a bit of time tinkering with the Mistral
>>> workflow service that already exists in OpenStack and after a few
>>> patches got it integrated into my undercloud:
>>>
>>> https://etherpad.openstack.org/p/tripleo-undercloud-workflow
>>>
>>> One could imagine us coming up with a set of useful TripleO
>>> workflows
>>> (something like this):
>>>
>>>  tripleo.deploy 
>>>  tripleo.update 
>>>  tripleo.run_ad_hoc_whatever_on_specific_roles <>
>>>
>>> Since Mistral (the OpenStack workflow service) can already interact
>>> w/
>>> keystone and has a good many hooks to interact with core OpenStack
>>> services like Swift, Heat, and Nova we might get some traction very
>>> quickly here. Perhaps we add some new Mistral Ironic actions? Or
>>> imagine smaller more focused Heat configuration stacks that we
>>> drive
>>> via Mistral? Or perhaps we tie in Zaqar (which already has some
>>> integration into os-collect-config) to run ad-hoc deployment
>>> snippets
>>> on specific roles in an organized fashion?  Or wrapping mistral w/
>>> tripleoclient to allow users to more easily call TripleO specific
>>> workflows (enhancing the user feedback like we do with our
>>> heatclient
>>> wrapping already)?
>>>
>>> Where all this might lead... I'm not sure. But I feel like we might
>>> benefit by adding a few extra options to our OpenStack deployment
>>> tool
>>> chain.
>> I think this sounds promising. Lots of the code in the CLI is about
>> managing workflows. For example when doing introspection we change
>> the node state, poll for the result, start introspection, poll for
>> the result, change the node state back and poll for the result. If
>> mistral can help here I expect it could give us a much more robust
>> solution.
> 
> Hows this look:
> 
> https://github.com/dprince/tripleo-mistral-
> workflows/blob/master/tripleo/baremetal.yaml
> 

This is a really good starter example because the bulk inspection
command is particularly problematic. I like this a lot. One really nice
thing here is that we get a REST API for free by using Mistral.
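
For readers who haven't seen Mistral's v2 DSL, a TripleO-flavoured workflow
along these lines might look roughly like the sketch below. This is
illustrative only: the workflow name, inputs, and the Ironic action and
parameter names are assumptions, not taken from Dan's repository.

    ---
    version: '2.0'

    tripleo.baremetal_introspect:
      description: Illustrative bulk-introspection workflow (names are made up)
      type: direct
      input:
        - node_uuids
      tasks:
        set_nodes_manageable:
          # move every node to the 'manageable' state before inspecting it
          with-items: uuid in <% $.node_uuids %>
          action: ironic.node_set_provision_state node_uuid=<% $.uuid %> state='manage'
          on-success: inspect_nodes
        inspect_nodes:
          # kick off hardware inspection on each node
          with-items: uuid in <% $.node_uuids %>
          action: ironic.node_set_provision_state node_uuid=<% $.uuid %> state='inspect'
          on-success: set_nodes_available
        set_nodes_available:
          # make the nodes schedulable again once inspection is done
          with-items: uuid in <% $.node_uuids %>
          action: ironic.node_set_provision_state node_uuid=<% $.uuid %> state='provide'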

>>
>>>  Dan
>>>
>>> ___
>>> ___
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
>>> bscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> _
>> _
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
>> cribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-06 Thread Jay Pipes

On 12/04/2015 11:50 PM, Michał Jastrzębski wrote:

Hey guys,

Orchestrated upgrades are one of our highest priorities for M in Kolla,
so following up on the discussion at the summit I'd like to suggest an approach:

Instead of creating a playbook called "upgrade my openstack", we will
create "upgrade my nova" and approach each service case by
case (since all of our running services are in Docker containers, this is possible).
We will also make use of image tags as version carriers, so Ansible will
deploy a new container only if the version tag differs from what we ask it to
deploy. This will help with the idempotency of the upgrade process.

So let's start with nova. The upgrade-my-nova playbook will do something
like this:

0. We create a snapshot of our mariadb-data container. This will affect
every service, but it's always good to have a backup; rolling back the db
will be a manual action.

1. Nova bootstrap will be called and it will perform the db migration. Since
the current approach to nova code is add-only, we shouldn't need to stop
services, and old services should keep working on the newer database. Also,
for minor version upgrades there will be no action here unless there is a
migration.
2. We upgrade all conductors at the same time. This should take mere
seconds since we'll have prebuilt containers.
3. We upgrade the rest of the controller services using "serial: 1" in
Ansible to ensure rolling upgrades.
4. We upgrade all nova-compute services at their own pace.

This workflow should be pretty robust (I think it is) and it should also
provide idempotency.
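
To make step 3 concrete, a rolling upgrade driven by image tags could look
roughly like the minimal Ansible sketch below. This is not Kolla's actual
playbook; the inventory group, registry/image variables and container name
are made up for illustration.

    ---
    # Illustrative only: upgrade nova-api hosts one at a time, replacing the
    # container only when the requested image tag differs from the running one.
    - hosts: nova-api
      serial: 1
      vars:
        nova_api_image: "{{ docker_registry }}/kolla/nova-api:{{ openstack_release }}"
      tasks:
        - name: Record the image used by the running nova_api container
          command: docker inspect --format={{ '{{.Config.Image}}' }} nova_api
          register: current_image
          failed_when: false

        - name: Pull the target nova-api image
          command: docker pull {{ nova_api_image }}
          when: current_image.stdout != nova_api_image

        - name: Remove the old container
          command: docker rm -f nova_api
          when: current_image.stdout != nova_api_image

        - name: Start the new container
          command: docker run -d --name nova_api {{ nova_api_image }}
          when: current_image.stdout != nova_api_image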


From Nova's perspective, most of the above looks OK to me; however, I
am more in favor of an upgrade strategy that builds a "new" set of
conductor or controller service containers all at once and then flips
the VIP or managed IP from the current controller pod containers to the
newly-updated controller pod containers.


This seems to me to be the most efficient (in terms of minimal overall 
downtime) and most robust (because a "rollback" is simply flipping the 
VIP back to the original controller/conductor service pod containers) 
solution for upgrading Nova.


Dan Smith, can you comment on this since you are the God of Upgrades?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra] Functional tests in the gate

2015-12-06 Thread Gal Sagie
Hello all,

I want to write functional tests that will run in the gate, but only after a
devstack environment has finished setting up.
Basically I want to interact with a working environment, but I don't want
the tests that I write to fail the python27 gate tests.

(I guess like the Neutron full stack tests, but I haven't found how to do
it yet.)

Can anyone please direct me to an example, or help me figure out how to
achieve this?

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] puppet-redis broke redis < 3.0.0

2015-12-06 Thread Emilien Macchi
Hey,

While working on monitoring support in the TripleO undercloud, I found a bug
that I think is not detected on the overcloud:
https://bugs.launchpad.net/tripleo/+bug/1523239

I'm working on it:
https://github.com/arioch/puppet-redis/pull/77

The RDO folks are about to update Redis to 3.x.x, but in the meantime we
might want to pin puppet-redis.
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Functional tests in the gate

2015-12-06 Thread Andrey Pavlov
For example -
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/ec2-api.yaml#L45

this job has the following lines:
  export DEVSTACK_GATE_TEMPEST=1
  export DEVSTACK_GATE_TEMPEST_NOTESTS=1
The first line tells the job to install tempest,
and the second tells it not to run tempest.

Then 'post_test_hook' defines the commands to run.
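
In other words, the functional job is its own Jenkins job definition, entirely
separate from the python27 job. A stripped-down sketch of such a job follows;
the job name, node label and hook body are illustrative rather than copied
from project-config:

    - job-template:
        name: 'gate-{name}-dsvm-functional'
        node: devstack-trusty

        builders:
          - link-logs
          - devstack-checkout
          - shell: |
              #!/bin/bash -xe
              export DEVSTACK_GATE_TEMPEST=1         # install tempest...
              export DEVSTACK_GATE_TEMPEST_NOTESTS=1 # ...but do not run it

              function post_test_hook {
                  # run the project's functional tests against the live devstack
                  cd $BASE/new/myproject
                  tox -e functional
              }
              export -f post_test_hook

              cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
              ./safe-devstack-vm-gate-wrap.sh

        publishers:
          - devstack-logs
          - console-log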

On Sun, Dec 6, 2015 at 7:06 PM, Gal Sagie  wrote:

> Hello all,
>
> I want to write functional tests that will run in the gate but only after
> a devstack
> environment has finished.
> Basically I want to interact with a working environment, but I don't want
> the tests that I write
> to fail the python27 gate tests.
>
> (I guess like the Neutron full stack tests, but I haven't found how to do
> it yet)
>
> Can anyone please direct me to an example / or help me figure out how to
> achieve this?
>
> Thanks
> Gal.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing the OpenStack Health Dashboard

2015-12-06 Thread Ghe Rivero
I love it! Thanks for the great job.

Ghe Rivero

Quoting Matthew Treinish (2015-12-04 21:38:29)
> Hi Everyone,
>
> As some people may have seen already, we've been working on getting a test
> results dashboard up and running to visualize the state of the tests running
> in the gate. You can get to the dashboard here:
>
> http://status.openstack.org/openstack-health/#/
>
> It's still early for this project (we only started on it back in Sept.), so 
> not
> everything is really polished yet and there are still a couple of issues and
> limitations.
>
> The biggest current limitation comes from the data store. We're using
> subunit2sql as the backend for all the data, and right now we are only
> collecting results for tempest and grenade runs in the gate and periodic
> queues. This is configurable, as any job that emits a subunit stream can use
> the same mechanism, and it is something we will likely change in the future.
>
> We also don't have any results for runs that fail before tempest starts 
> running,
> since we need a subunit stream to populate the DB with results. However, we 
> have
> several proposed paths to fix this, so it's just a matter of time. But for 
> right
> now if a job fails before tests start running it isn't counted on the 
> dashboard.
>
> The code for everything lives here:
>
> http://git.openstack.org/cgit/openstack/openstack-health/
>
> If you find an issue feel free to file a bug here:
>
> https://bugs.launchpad.net/openstack-health
>
> We're eager to see this grow to enable the dashboard to suit the needs of 
> anyone
> looking at the gate results.
>
> We're tracking work items that need to be done here:
>
> https://etherpad.openstack.org/p/openstack-health-tracking
>
> Please feel free to contribute if you have an idea on how to improve the
> dashboard, or want to take on one of the existing TODO items. The only way 
> we'll
> be able to grow this into something that fits everyone's needs is if more 
> people
> get involved in improving it.
>
> Thanks,
>
> Matthew Treinish
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra][openstack-governance][governance][vitrage] Adding Vitrage to the Governance repository

2015-12-06 Thread GROSZ, Maty (Maty)
Hello,

Vitrage would like to initialise a new puppet-vitrage module (change
https://review.openstack.org/#/c/252214/).
I was instructed to first add a patch to the governance repository:
https://review.openstack.org/253893
How do I proceed? Can someone please review and approve?

Thanks,

Maty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-06 Thread Dmitry Teselkin
Hello,

Status update for Dec 5-6: we're on track.

The CentOS7-based ISO looks good; we are building it on the product CI using
the same jobs as for CentOS6.

ISO #258 passed the nightly swarm with slightly reduced coverage (69%).
We've got one known issue [0] that affects some jobs when run on a
loaded slave (there is a fix [1]), and some issues on the QA side.

Smoke and BVT tests are also green [2]; however, we've got interference
from Ubuntu upstream here and had to disable the trusty-proposed channel
to get the tests to pass. More details are in the bug [3].

According to our plan [4], we've passed decision point #4 with a
decision to go with CentOS7.

Please note that the merge freeze is still in place; an explicit
notification will be sent when it's lifted.

[0] https://bugs.launchpad.net/mos/+bug/1523117
[1] https://review.openstack.org/#/c/253843/
[2] https://product-ci.infra.mirantis.net/job/8.0.test_all/245/
[3] https://bugs.launchpad.net/fuel/+bug/1523092
[4]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081026.html

On Tue, 1 Dec 2015 13:48:00 -0800
Dmitry Borodaenko  wrote:

> With a bit more detail, I hope this covers all the risks and decision
> points now.
> 
> First of all, current list of outstanding commits:
> https://etherpad.openstack.org/p/fuel_on_centos7
> 
> The above list has two sections: backwards compatible changes that can
> be merged one at a time even if the rest of CentOS7 support isn't
> merged, and backwards incompatible changes that break support for
> CentOS6 and must be merged (and, if needed, reverted) all at once.
> 
> Decision point 1: FFE for CentOS7
> 
> CentOS7 support cannot be fully merged on Dec 2, so it misses FF. Can
> it be allowed a Feature Freeze Exception? So far, the disruption of
> the Fuel development process implied by the proposed merge plan is
> acceptable; if anything goes wrong and we become unable to have a
> stable ISO with merged CentOS7 support on Monday, December 7, the FFE
> will be revoked.
> 
> Wed, Dec 2: Merge party
> 
> Merge party before 8.0 FF, we should do our best to merge all
> remaining feature commits before end of day (including backwards
> compatible CentOS7 support commits), without breaking the build too
> much.
> 
> At the end of the day we'll start a swarm test over the result of the
> merge party, and we expect QA to analyze and summarize the results by
> 17:00 MSK (6:00 PST) on Thu Dec 3.
> 
> Risk 1: Merge party breaks the build
> 
> If there is a large regression in swarm pass percentage, we won't be
> able to afford a merge freeze which is necessary to merge CentOS7
> support, we'll have to be merging bugfixes until swarm test pass rate
> is back around 70%.
> 
> Risk 2: More features get FFE
> 
> If some essential 8.0 features are not completely merged by end of day
> Wed Dec 2 and are granted FFE, merging the remaining commits can
> interfere with merging CentOS7 support, not just from merge conflicts
> perspective, but also invalidating swarm results and making it
> practically impossible to bisect and attribute potential regressions.
> 
> Thu, Dec 3: Start merge freeze for CentOS7
> 
> Decision point 2: Other FFEs
> 
> In the morning MSK time, we will assess Risk 2 and decide what to do
> with the other FFEs. The options are: integrate remaining commits into
> CentOS7 merge plan, block remaining commits until Monday, revoke
> CentOS7 FFE.
> 
> If the decision is to go ahead with CentOS7 merge, we announce merge
> freeze for all git repositories that go into Fuel ISO, and spend the
> rest of the day rebasing and cleaning up the rest of the CentOS7
> commits to make sure they're all in mergeable state by the end of the
> day. The outcome of this work must be a custom ISO image with all
> remaining commits, with additional requirement that it must not use
> Jenkins job parameters (only patches to fuel-main that change default
> repository paths) to specify all required package repositories. This
> will validate the proposed fuel-main patches and ensure that no
> unmerged package changes are used to produce the ISO.
> 
> Decision point 3: Swarm pass rate
> 
> After swarm results from Wed are available, we will assess the Risk 1.
> If the pass rate regression is significant, CentOS7 FFE is revoked and
> merge freeze is lifted. If regression is acceptable, we proceed with
> merging remaining CentOS7 commmits through Thu Dec 3 and Fri Dec 4.
> 
> Fri, Dec 4: Merge and test CentOS7
> 
> The team will have until 17:00 MSK to produce a non-custom ISO that
> passes BVT and can be run through swarm.
> 
> Sat, Dec 5: Assess CentOS7 swarm and bugfix
> 
> First of all, someone from CI and QA teams should commit to monitoring
> the CentOS7 swarm run and report the results as soon as possible.
> Based on the results (which once again must be available by 17:00
> MSK), we can decide on the final step of the plan.
> 
> Decision point 4: Keep or revert
> 
> If CentOS7 based swarm shows significant regression, we have to spend
> the 

Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-06 Thread Michał Jastrzębski
On 6 December 2015 at 09:33, Jay Pipes  wrote:
>
> On 12/04/2015 11:50 PM, Michał Jastrzębski wrote:
>>
>> Hey guys,
>>
>> Orchestrated upgrades are one of our highest priorities for M in Kolla,
>> so following up on the discussion at the summit I'd like to suggest an approach:
>>
>> Instead of creating a playbook called "upgrade my openstack", we will
>> create "upgrade my nova" and approach each service case by
>> case (since all of our running services are in Docker containers, this is possible).
>> We will also make use of image tags as version carriers, so Ansible will
>> deploy a new container only if the version tag differs from what we ask it to
>> deploy. This will help with the idempotency of the upgrade process.
>>
>> So let's start with nova. The upgrade-my-nova playbook will do something
>> like this:
>>
>> 0. We create a snapshot of our mariadb-data container. This will affect
>> every service, but it's always good to have a backup; rolling back the db
>> will be a manual action.
>>
>> 1. Nova bootstrap will be called and it will perform the db migration. Since
>> the current approach to nova code is add-only, we shouldn't need to stop
>> services, and old services should keep working on the newer database. Also,
>> for minor version upgrades there will be no action here unless there is a
>> migration.
>> 2. We upgrade all conductors at the same time. This should take mere
>> seconds since we'll have prebuilt containers.
>> 3. We upgrade the rest of the controller services using "serial: 1" in
>> Ansible to ensure rolling upgrades.
>> 4. We upgrade all nova-compute services at their own pace.
>>
>> This workflow should be pretty robust (I think it is) and it should also
>> provide idempotency.
>
>
> From Nova's perspective, most of the above looks OK to me; however, I am
> more in favor of an upgrade strategy that builds a "new" set of conductor or
> controller service containers all at once and then flips the VIP or managed
> IP from the current controller pod containers to the newly-updated controller
> pod containers.

Well, we can't really do that because it would affect all other
services on the controller host, and we want to minimize the impact on the rest
of the cluster during the upgrade. Containers give us the option to upgrade "just
nova", or even "just nova conductors", without affecting anything else.
On the bright side, redeploying a pre-built container is a matter of
seconds (or even less). The whole process, from turning off the last conductor
to spawning the first new conductor, should take less than 10s, and that's
the only downtime we should see there.

> This seems to me to be the most efficient (in terms of minimal overall 
> downtime) and most robust (because a "rollback" is simply flipping the VIP 
> back to the original controller/conductor service pod containers) solution 
> for upgrading Nova.


>
> Dan Smith, can you comment on this since you are the God of Upgrades?
>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Functional tests in the gate

2015-12-06 Thread Gal Sagie
Thanks for the answer, Andrey.

In the post_test_hook of this job, you run the ec2-api functional tests.
What I am asking is how to prevent these tests from running in
gate-X-python27, for example (which they will
obviously fail, as they need a working devstack to run).

Maybe I am not understanding something fundamental here.

Gal.

On Sun, Dec 6, 2015 at 6:56 PM, Andrey Pavlov  wrote:

> For example -
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/ec2-api.yaml#L45
>
> this job has the following lines:
>   export DEVSTACK_GATE_TEMPEST=1
>   export DEVSTACK_GATE_TEMPEST_NOTESTS=1
> The first line tells the job to install tempest,
> and the second tells it not to run tempest.
>
> Then 'post_test_hook' defines the commands to run.
>
> On Sun, Dec 6, 2015 at 7:06 PM, Gal Sagie  wrote:
>
>> Hello all,
>>
>> I want to write functional tests that will run in the gate but only after
>> a devstack
>> environment has finished.
>> Basically I want to interact with a working environment, but I don't want
>> the tests that I write
>> to fail the python27 gate tests.
>>
>> (I guess like the Neutron full stack tests, but I haven't found how to do
>> it yet)
>>
>> Can anyone please direct me to an example / or help me figure out how to
>> achieve this?
>>
>> Thanks
>> Gal.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kind regards,
> Andrey Pavlov.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstackdocstheme to be considered (very) harmful for your generated sphinx docs

2015-12-06 Thread Qiming Teng
Hi, Anne,

As someone who, unfortunately, was born and is still working behind a national
firewall, having a lot of Google calls in the docs does have an impact on us.

It would be great if we could make the docs self-contained.

Thanks.

Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-06 Thread 王华
Hi all,

If we want to run etcd and flannel in containers, we will need to introduce
docker-bootstrap, which makes the setup more complex, as Egor pointed out.
Should we pay that price?
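
For context, the second daemon described in the kubernetes multinode guide
(linked as [1] further down in this thread) boils down to roughly the sketch
below; the socket path, directories and flags here are illustrative:

    # illustrative sketch of a "bootstrap" docker daemon with its own storage,
    # used only for etcd and flannel, running alongside the main daemon
    docker daemon \
        -H unix:///var/run/docker-bootstrap.sock \
        --graph=/var/lib/docker-bootstrap \
        --exec-root=/var/run/docker-bootstrap \
        --pidfile=/var/run/docker-bootstrap.pid \
        --bridge=none --iptables=false --ip-masq=false &

    # etcd/flannel containers then have to target the bootstrap socket explicitly
    docker -H unix:///var/run/docker-bootstrap.sock run -d <etcd-image>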

On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz  wrote:

> Wanghua,
>
> I don't think moving flannel to the container is a good idea. This setup is
> great for a dev environment, but it becomes too complex from an operator point
> of view (you add an extra Docker daemon and need an extra Cinder volume for
> this daemon; also
> keep in mind it makes sense to keep the etcd data folder on Cinder storage as
> well, because etcd is a database). flannel is just three files without extra
> dependencies and it's much easier to download it during cloud-init ;)
>
> I agree that we have pain with building Fedora Atomic images, but instead
> of simplifying this process we should switch to other, more "friendly" images
> (e.g. Fedora/CentOS/Ubuntu) which we can easily build with diskimage-builder.
> Also we can fix the CoreOS template (I believe people ask about it more
> than about Atomic), but we may face issues similar to Atomic's when we try
> to integrate non-CoreOS products (e.g. Calico or Weave).
>
> —
> Egor
>
> From: 王华
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Thursday, November 26, 2015 at 00:15
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap
>
> Hi Hongbin,
>
> The docker daemon on the master node stores data in /dev/mapper/atomicos-docker--data
> and metadata in /dev/mapper/atomicos-docker--meta.
> /dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are
> logical volumes. The docker daemon on a minion node stores data in the Cinder
> volume, but /dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta
> are not used. If we want to leverage a Cinder volume for docker on the master,
> should we drop /dev/mapper/atomicos-docker--data and
> /dev/mapper/atomicos-docker--meta? I think it is not necessary to allocate
> a Cinder volume. It is enough to allocate two logical volumes for docker,
> because only etcd, flannel, and k8s run in that docker daemon, and they do not
> need a large amount of storage.
>
> Best regards,
> Wanghua
>
> On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu wrote:
> Here is a bit more context.
>
> Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and
> flannel) are built into the image and run on the host. We are exploring the
> possibility of containerizing some of these system components. The rationales
> are (i) it is infeasible to build custom packages into an Atomic image and
> (ii) it is infeasible to upgrade an individual component. For example, if
> there is a bug in the current version of flannel and we know the bug was fixed
> in the next version, we need to upgrade flannel by building a new image,
> which is a tedious process.
>
> To containerize flannel, we need a second docker daemon, called
> docker-bootstrap [1]. In this setup, pods are running on the main docker
> daemon, and flannel and etcd are running on the second docker daemon. The
> reason is that flannel needs to manage the network of the main docker
> daemon, so it needs to run on a separate daemon.
>
> Daneyon, I think it requires separate storage because it needs to run a
> separate docker daemon (unless there is a way to make two docker daemons
> share the same storage).
>
> Wanghua, is it possible to leverage a Cinder volume for that? Leveraging
> external storage is always preferred [2].
>
> [1]
> http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker
> [2] http://www.projectatomic.io/docs/docker-storage-recommendation/
>
> Best regards,
> Hongbin
>
> From: Daneyon Hansen (danehans) [mailto:daneh...@cisco.com]
> Sent: November-25-15 11:10 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap
>
>
>
> From: 王华
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Wednesday, November 25, 2015 at 5:00 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [magnum]storage for docker-bootstrap
>
> Hi all,
>
> I am working on containerizing etcd and flannel, but I've hit a problem. As
> described in [1], we need a docker-bootstrap. Docker and docker-bootstrap
> cannot use the same storage, so we need some disk space for it.

[openstack-dev] What's Up, Doc? 4 December 2015

2015-12-06 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

Slight delay in the newsletter this time around, it's been a busy week! I've 
been working on the DocImpact changes, reviewing the docs and docs-specs core 
teams, and catching up with blueprint management and reviews from while I was 
on leave.

Since we're getting to the pointy end of the year, remember to call out 
successes using #success in our IRC channel. All successes get logged here: 
https://wiki.openstack.org/wiki/Successes and it's a great way to show your 
appreciation for your fellow community members.

== Progress towards Mitaka ==

121 days to go!

139 bugs closed so far for this release.

RST Conversions
* Arch Guide
** RST conversion is complete! Well done Shilla and team for getting this done 
so incredibly quickly :)
* Config Ref
** Contact the Config Ref Speciality team: 
https://wiki.openstack.org/wiki/Documentation/ConfigRef
* Virtual Machine Image Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/image-guide-rst and 
https://review.openstack.org/#/c/244598/

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Doc sprint in planning for December 21-22. Contact the Ops Guide speciality 
team for more info or to get  involved.
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

DocImpact
* This is now well underway, with the first two patches merged. One changes the 
default bug queue for DocImpact back to the projects' own queue, and the other 
changes all current projects (with the exception of the six defcore projects) 
to their own bug queue. There has also been some discussion on the mailing 
lists about this process change. The next step is to merge the new Jenkins job 
(https://review.openstack.org/#/c/251301/) which for now will be against Nova 
only so we can make sure it's working correctly before we roll it out across 
the board. Thank you to Andreas, Josh Hesketh, and everyone else who has helped 
get us this far, and to the Nova team who are allowing us to use them as guinea 
pigs.

Document openstack-doc-tools and reorganise index page
* Thanks to Brian and Christian for volunteering to take this on!

Horizon UX work
* Thanks to the people who offered to help out on this! If you're interested 
but haven't gotten in touch yet, contact Piet Kruithof of the UX team. There is 
now a blueprint + spec in draft: 
https://blueprints.launchpad.net/openstack-manuals/+spec/ui-content-guidelines 
and https://review.openstack.org/#/c/252668/

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
Finished IRC meeting schedule poll, finalised the Corosync/pacemaker patch 
(https://review.openstack.org/235893), participating and tracking progress with 
the HA for OpenStack instances research 
(https://etherpad.openstack.org/p/automatic-evacuation) to make it into the HA 
guide, eventually.

'''Installation Guide - Christian Berendt'''
Debian install guide nearly completed, testers needed; meeting time for the 
EU/US meeting will probably change from 13 UTC to 17 UTC; Aodh install 
instructions nearly finished, ongoing discussion because of missing SUSE 
packages; Japanese translation of the Liberty install guide published. 

'''Networking Guide - Edgar Magana'''
We are having our first official IRC meeting tomorrow at 1600 UTC. The goal is 
to find out what we can complete during the Liberty cycle and have specific 
tasks assigned. Relevant patches: Create tags for the Networking Guide - WIP 
(https://review.openstack.org/#/c/251979/)

'''Security Guide - Nathaniel Dillon'''
No update this week

'''User Guides - Joseph Robinson'''
The User guide meeting on IRC worked, however attendance was down; the 
reorganization spec is updated with information from Big Tent Project navigator 
to define the scope (http://www.openstack.org/software/project-navigator) and 
information from the published nova-network to neutron migration study.

'''Ops and Arch Guides - Darren Chan'''
The arch guide RST conversion is completed. We plan on doing a virtual swarm 
this December. The doodle link was sent out last week and the poll is now 
closed. The final date for the virtual swarm is: 12/21-12/22.

'''API Docs - Anne Gentle'''
Working on usage docs for migration, how to build, what gets migrated, how the 
docs are assembled: 
https://github.com/russell/fairy-slipper/blob/master/doc/source/usage.rst 
Responding to spec reviews: https://review.openstack.org/#/c/246660/ Revising 
patch to bring fairy-slipper into openstack org: 
https://review.openstack.org/#/c/245352/

'''Config Ref - Gauvain Pocentek'''
The RST migration is still in progress. The Manila section has been greatly 
improved in the process, and the Zaqar 

Re: [openstack-dev] [Fuel] Patch size limit

2015-12-06 Thread Andrey Tykhonov
I believe this is against the code review guidelines.

«Comments must be meaningful and should help an author to change the
code the right way.» [1]

If you get a comment that says «split this change into the smaller
commit», I'm sorry, but it doesn't help at all.

«Leave constructive comments

Not everyone in the community is a native English speaker, so make
sure your remarks are meaningful and helpful for the patch author to
change his code, but *also polite and respectful*.

The review is not really about the score. It's all about the
comments. When you are reviewing code, always make sure that your
comments are useful and helpful to the author of the patch. Try to
avoid leaving comments just to show that you reviewed something if
they don't really add anything meaningful» [2]

So, when an author of a patch gets -1 with the statement «split this
code», I believe it is not constructive. At least you should roughly
describe how you see it, how the patch could be split, you should be
helpful to the author of a patch. So, first of all, you need to review
the patch! :)

I want to emphasize this: «*The review is not really about the
score. It's all about the comments.*»

«In almost all cases, a negative review should be accompanied by *clear
instructions* for the submitter how they might fix the patch.» [4]

I believe that the statement "split this change into the smaller
commit" is too generic; it is mostly the same as "this patch needs
further work". It doesn't bring any additional instructions on how
exactly the patch could be fixed.

Please also take a look at the following conversation from the mailing
list: [3].

«It's not so easy to guess what you mean, and in case of a clumsy
piece of code, not even that certain that better code can be used
instead. So always provide an example of what you would rather want to
see. So instead of pointing to indentation rules, just show properly
indented code. Instead of talking about grammar or spelling, just type
the corrected comment or docstring. Finally, instead of saying "use
list comprehension here" or "don't use has_key", just type your
proposal of how the code should look like. Then we have something
concrete to talk about. Of course, you can also say why you think this
is better, but an example is very important. If you are not sure how
the improved code would look like, just let it go, chances are it
would look even worse.» [3]

So, please, bring something concrete to talk about. If you are not
sure how the improved code would look like, just let it go!

«The simplest way to talk about code is to just show the code. When you
want the author to fix something, rewrite it in a different way,
format the code differently, etc. -- it's best to just write in the
comment how you want that code to look like. It's much faster than
having the author guess what you meant in your descriptions, and also
lets us learn much faster by seeing examples.» [2]


[1]
https://docs.google.com/document/d/1tyKhHQRQqTEW6tS7_LCajEpzqn55f-f5nDmtzIeJ2uY/edit
[2] https://wiki.openstack.org/wiki/CodeReviewGuidelines
[3]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07831.html
[4] http://docs.openstack.org/infra/manual/developers.html#peer-review


Best regards,
Andrey Tykhonov (tkhno)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-06 Thread Tzu-Mainn Chen
- Original Message -

> On 04/12/15 23:04, Dmitry Tantsur wrote:
> > On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:
> > > Hey all,
> > >
> > > Over the past few months, there's been a lot of discussion and work around
> > > creating a new REST API-supported TripleO deployment workflow. However most
> > > of that discussion has been fragmented within spec reviews and weekly IRC
> > > meetings, so I thought it might make sense to provide a high-level overview
> > > of what's been going on. Hopefully it'll provide some useful perspective for
> > > those that are curious!
> > >
> > > Thanks,
> > > Tzu-Mainn Chen
> > >
> > > --
> > >
> > > 1. Explanation for Deployment Workflow Change
> > >
> > > TripleO uses Heat to deploy clouds. Heat allows tremendous flexibility at the
> > > cost of enormous complexity. Fortunately TripleO has the space to allow
> > > developers to create tools to simplify the process tremendously, resulting in
> > > a deployment process that is both simple and flexible to user needs.
> > >
> > > The current CLI-based TripleO workflow asks the deployer to modify a base set
> > > of Heat environment files directly before calling Heat's stack-create command.
> > > This requires much knowledge and precision, and is a process prone to error.
> > >
> > > However this process can be eased by understanding that there is a pattern to
> > > these modifications; for example, if a deployer wishes to enable network
> > > isolation, a specific set of modifications must be made. These modification
> > > sets can be encapsulated through pre-created Heat environment files, and TripleO
> > > contains a library of these
> > > (https://github.com/openstack/tripleo-heat-templates/tree/master/environments).
> > >
> > > These environments are further categorized through the proposed environment
> > > capabilities map (https://review.openstack.org/#/c/242439). This mapping file
> > > contains programmatic metadata, adding items such as user-friendly text around
> > > environment files and marking certain environments as mutually exclusive.
> > >
> > > 2. Summary of Updated Deployment Workflow
> > >
> > > Here's a summary of the updated TripleO deployment workflow.
> > >
> > > 1. Create a Plan: Upload a base set of heat templates and environment files
> > >    into a Swift container. This Swift container will be versioned to allow
> > >    for future work with respect to updates and upgrades.
> > >
> > > 2. Environment Selection: Select the appropriate environment files for your
> > >    deployment.
> > >
> > > 3. Modify Parameters: Modify additional deployment parameters. These
> > >    parameters are influenced by the environment selection in step 2.
> > >
> > > 4. Deploy: Send the contents of the plan's Swift container to Heat for
> > >    deployment.
> > >
> > > Note that the current CLI-based workflow still fits here: a deployer can modify
> > > Heat files directly prior to step 1, follow step 1, and then skip directly to
> > > step 4. This also allows for trial deployments with test configurations.
> > >
> > > 3. TripleO Python Library, REST API, and GUI
> > >
> > > Right now, much of the existing TripleO deployment logic lives within the TripleO
> > > CLI code, making it inaccessible to non-Python based UIs. Putting both old and
> > > new deployment logic into tripleo-common and then creating a REST API on top of
> > > that logic will enable modern Javascript-based GUIs to create cloud deployments
> > > using TripleO.
> > >
> > > 4. Future Work - Validations
> > >
> > > A possible next step is to add validations to the TripleO toolkit: scripts that
> > > can be used to check the validity of your deployment pre-, in-, and post-flight.
> > > These validations will be runnable and queryable through a REST API. Note that
> > > the above deployment workflow should not be a requirement for validations to be
> > > run.
> > >
> > > 5. In-Progress Development
> > >
> > > The initial spec for the tripleo-common library has already been approved, and
> > > various people have been pushing work forward. Here's a summary:
> > >
> > > * Move shared business logic out of CLI
> > >   * https://review.openstack.org/249134 - simple validations (WIP)
> >
> > When is this going to be finished? It's going to get me a huge merge conflict
> > in 

Re: [openstack-dev] [openstack-infra] Functional tests in the gate

2015-12-06 Thread Andrey Pavlov
Each gating job has its own config.
This functional test job exports some variables in order to set up devstack.
The python27/34, pep8 and many other jobs have their own configs in the gate;
most of those jobs use tox's 'tox.ini' file, where the command to run is defined.
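
Put differently, which jobs run against a project is just a list in the infra
configuration, so the unit-test jobs and a devstack-based functional job never
share a test command. A hedged sketch of such a zuul layout entry (the project
and job names are made up):

    projects:
      - name: openstack/myproject
        check:
          - gate-myproject-pep8
          - gate-myproject-python27
          - gate-myproject-dsvm-functional
        gate:
          - gate-myproject-pep8
          - gate-myproject-python27
          - gate-myproject-dsvm-functional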

On Sun, Dec 6, 2015 at 9:53 PM, Gal Sagie  wrote:

> Thanks for the answer Andrey,
>
> In the post_test_hook of this, you run the ec2-api functional tests.
> What I am asking is how to prevent these tests from running in
> gate-X-python27, for example (which they will
> obviously fail, as they need a working devstack to run).
>
> Maybe I am not understanding something fundamental here.
>
> Gal.
>
> On Sun, Dec 6, 2015 at 6:56 PM, Andrey Pavlov  wrote:
>
>> For example -
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/ec2-api.yaml#L45
>>
>> this job has the following lines:
>>   export DEVSTACK_GATE_TEMPEST=1
>>   export DEVSTACK_GATE_TEMPEST_NOTESTS=1
>> The first line tells the job to install tempest,
>> and the second tells it not to run tempest.
>>
>> Then 'post_test_hook' defines the commands to run.
>>
>> On Sun, Dec 6, 2015 at 7:06 PM, Gal Sagie  wrote:
>>
>>> Hello all,
>>>
>>> I want to write functional tests that will run in the gate but only
>>> after a devstack
>>> environment has finished.
>>> Basically I want to interact with a working environment, but I don't want
>>> the tests that I write
>>> to fail the python27 gate tests.
>>>
>>> (I guess like the Neutron full stack tests, but I haven't found how to
>>> do it yet)
>>>
>>> Can anyone please direct me to an example / or help me figure out how to
>>> achieve this?
>>>
>>> Thanks
>>> Gal.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kind regards,
>> Andrey Pavlov.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-06 Thread Steve Baker

On 04/12/15 23:04, Dmitry Tantsur wrote:

On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:

Hey all,

Over the past few months, there's been a lot of discussion and work 
around
creating a new REST API-supported TripleO deployment workflow. 
However most
of that discussion has been fragmented within spec reviews and weekly 
IRC
meetings, so I thought it might make sense to provide a high-level 
overview
of what's been going on.  Hopefully it'll provide some useful 
perspective for

those that are curious!

Thanks,
Tzu-Mainn Chen

-- 


1. Explanation for Deployment Workflow Change

TripleO uses Heat to deploy clouds.  Heat allows tremendous 
flexibility at the

cost of enormous complexity.  Fortunately TripleO has the space to allow
developers to create tools to simplify the process tremendously,  
resulting in

a deployment process that is both simple and flexible to user needs.

The current CLI-based TripleO workflow asks the deployer to modify a 
base set
of Heat environment files directly before calling Heat's stack-create 
command.
This requires much knowledge and precision, and is a process prone to 
error.


However this process can be eased by understanding that there is a 
pattern to

these modifications; for example, if a deployer wishes to enable network
isolation, a specific set of modifications must be made.  These 
modification
sets can be encapsulated through pre-created Heat environment files, 
and TripleO

contains a library of these
(https://github.com/openstack/tripleo-heat-templates/tree/master/environments). 



These environments are further categorized through the proposed 
environment
capabilities map (https://review.openstack.org/#/c/242439). This 
mapping file
contains programmatic metadata, adding items such as user-friendly 
text around
environment files and marking certain environments as mutually 
exclusive.
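
As a purely illustrative example of the idea (the field names below are made
up, not the schema proposed in that review), an entry in such a capabilities
map could look like:

    # illustrative sketch only -- see https://review.openstack.org/#/c/242439
    # for the actual proposed format
    environment_groups:
      - title: Networking
        description: Options that control overcloud network configuration
        environments:
          - file: environments/network-isolation.yaml
            title: Network Isolation
            description: Enable isolated networks for each overcloud role
            mutually_exclusive_group: network-config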



2. Summary of Updated Deployment Workflow

Here's a summary of the updated TripleO deployment workflow.

 1. Create a Plan: Upload a base set of heat templates and 
environment files
into a Swift container.  This Swift container will be 
versioned to allow

for future work with respect to updates and upgrades.

 2. Environment Selection: Select the appropriate environment 
files for your

deployment.

 3. Modify Parameters: Modify additional deployment parameters.  
These
parameters are influenced by the environment selection in 
step 2.


 4. Deploy: Send the contents of the plan's Swift container to 
Heat for

deployment.

Note that the current CLI-based workflow still fits here: a deployer 
can modify
Heat files directly prior to step 1, follow step 1, and then skip 
directly to
step 4.  This also allows for trial deployments with test 
configurations.



3. TripleO Python Library, REST API, and GUI

Right now, much of the existing TripleO deployment logic lives within 
the TripleO
CLI code, making it inaccessible to non-Python based UIs. Putting 
both old and
new deployment logic into tripleo-common and then creating a REST API 
on top of
that logic will enable modern Javascript-based GUIs to create cloud 
deployments

using TripleO.


4. Future Work - Validations

A possible next step is to add validations to the TripleO toolkit: 
scripts that
can be used to check the validity of your deployment pre-, in-, and  
post-flight.
These validations will be runnable and queryable through a  REST 
API.  Note that
the above deployment workflow should not be a requirement for 
validations to be

run.


5. In-Progress Development

The initial spec for the tripleo-common library has already been 
approved, and

various people have been pushing work forward.  Here's a summary:

* Move shared business logic out of CLI
   * https://review.openstack.org/249134 - simple validations (WIP)


When is this going to be finished? It's going to get me a huge merge 
conflict in https://review.openstack.org/#/c/250405/ (and make it 
impossible to backport to liberty btw).


This plan would be fine if Mitaka development was the only consideration 
but I hope that it can be adapted a little bit to take into account the 
Liberty branches, and the significant backports that will be happening 
there. The rdomanager-plugin->tripleoclient transition made backports
painful, and having moved on from that, it would be ideal if we didn't
create the same situation again.


What I would propose is the following:
- the tripleo_common repo is renamed to tripleo and consumed by Mitaka
- the tripleo_common repo continues to exist in Liberty
- the change to rename the package tripleo_common to tripleo occurs on 
the tripleo repo in the master branch using oslo-style wildcard 
imports[1], and initially no deprecation message
- this change is backported to the tripleo_common repo on the 
stable/liberty branch


Once this is in place, stable/liberty tripleoclient can gradually move 

Re: [openstack-dev] [tripleo] puppet-redis broke redis < 3.0.0

2015-12-06 Thread Emilien Macchi
Last update:
puppet-redis is now fixed (the PR is merged).
RDO updated Redis to 3.x.x -- it's in the testing repo now and will be stable
next week.

On Sun, Dec 6, 2015 at 9:55 AM, Emilien Macchi 
wrote:

> Hey,
>
> While working on monitoring support in the TripleO undercloud, I found a bug
> that I think is not detected on the overcloud:
> https://bugs.launchpad.net/tripleo/+bug/1523239
>
> I'm working on it:
> https://github.com/arioch/puppet-redis/pull/77
>
> The RDO folks are about to update Redis to 3.x.x, but in the meantime we
> might want to pin puppet-redis.
> --
> Emilien Macchi
>



-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [python-novaclient] microversions support

2015-12-06 Thread Alex Xu
2015-12-05 1:23 GMT+08:00 Kevin L. Mitchell :

> On Fri, 2015-12-04 at 18:58 +0200, Andrey Kurilin wrote:
> > This week I added 5 patches to enable 2.7-2.11 microversions in
> > novaclient [1][2][3][4][5]. I'm not bragging. Just want to ask everyone
> > who is working on new microversions: please do not forget to add
> > support for your microversion to the official Nova client.
>
> Perhaps this is something we should add to the review guidelines—no API
> change can be merged to nova unless there is a pending change to
> novaclient to add support?  We already more or less enforce the criteria
> that no addition to novaclient can be added unless the corresponding
> nova change has been merged…
>

+1, otherwise python-novaclient will always have gaps relative to the nova API.


> --
> Kevin L. Mitchell 
> Rackspace
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Upgrading Icehouse to Juno

2015-12-06 Thread Tony Breeds
On Mon, Dec 07, 2015 at 10:58:18AM +0530, Abhishek Shrivastava wrote:
> Hi Folks,
> 
> I have a stable/icehouse devstack setup and I am planning to upgrade it to
> stable/kilo, but following the guide mentioned in the OpenStack Manuals does
> not seem to work correctly.

Umm you don't really upgrade a devstack install.  Just start from scratch on
the new release (and I'd suggest liberty rather than juno or kilo).

If you've managed to get production data into your icehouse devstack, preserving
that is a different discussion.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 12/8

2015-12-06 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (7:00AM MDT)



(Hopefully the DDoS against freenode has died down and we can actually have the 
meeting)



1) Spec & BPs - 
https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking

2) Bugs - https://bugs.launchpad.net/nova/+bugs?field.tag=scheduler

3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-06 Thread AFEK, Ifat (Ifat)
Hi Julien, 

> -Original Message-
> From: Julien Danjou [mailto:jul...@danjou.info]
> Sent: Thursday, December 03, 2015 4:27 PM
>
> I think that I would be more interested in connecting Nagios to
> Ceilometer/Gnocchi/Aodh with maybe the long-term goal of replacing it
> by that stack, which should be more scalable and dynamic.
> 
> That would make Vitrage only needing to build on top of telemetry
> projects. It would also bring Nagios & co to telemetry not only for
> Vitrage, but for the whole stack.
> 
> Maybe there are some good reasons you're going the way you do; I don't
> have the pretension to have thought about that as much as you probably
> did. :-)

Our goal is to get as much information as we can from various data 
sources. If you connect Nagios to the telemetry projects and we can get 
Nagios alarms directly from Aodh, that would be great. Is that something 
that you planned on doing for Mitaka?

> Do you have something like an MVP based on Telemetry that you target? I saw
> you were already talking about Horizon, which to me is something that
> (sh|c)ould be way further into the pipeline, so I'm worried. ;)

Our current use cases focus on giving value to the cloud admin. These 
are mostly UI use cases; the admin will be able to:

- view the topology of his environment, the relations between the 
physical, virtual and application layers, and the statuses of all resources
- view the alarms history (there is an existing blueprint for it[1])
- view alarms about problems that Vitrage deduced could happen, even
if no other OpenStack component reported these problems (yet)
- view RCA information about the alarms

In order to support these use cases, we will get input from various 
data sources, process and evaluate it based on configurable templates, 
trigger new alarms in Aodh and calculate RCA information. 
On top of it, we will have Vitrage API to query the information and
show it in horizon. 
In case you haven't seen it yet, our high-level architecture is on the 
Vitrage main page[2], and in the coming days we plan to document the 
lower-level design as well.
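
As a rough illustration of the "trigger new alarms in Aodh" step, creating an
event alarm over the Aodh v2 REST API could look like the sketch below
(endpoint, token and rule contents are illustrative only):

    import json
    import requests

    # Illustrative only: real code would obtain the endpoint and token
    # from the Keystone catalog instead of hard-coding them.
    AODH = 'http://aodh-api:8042'
    TOKEN = 'ADMIN_TOKEN'

    alarm = {
        'name': 'vitrage.deduced.instance_at_risk',
        'type': 'event',
        'event_rule': {
            'event_type': 'compute.instance.update',
            'query': [{'field': 'traits.instance_id', 'op': 'eq',
                       'value': 'INSTANCE_UUID'}],
        },
        'severity': 'critical',
        'alarm_actions': ['log://'],
    }

    resp = requests.post(AODH + '/v2/alarms',
                         headers={'X-Auth-Token': TOKEN,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(alarm))
    resp.raise_for_status()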

Best Regards,
Ifat.


[1] 
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page
[2] https://wiki.openstack.org/wiki/Vitrage

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-06 Thread AFEK, Ifat (Ifat)
Hi Ryota,

> -Original Message-
> From: Ryota Mibu [mailto:r-m...@cq.jp.nec.com]
> Sent: Friday, December 04, 2015 9:42 AM
> 
> > The next step can happen if and when Aodh supports alarm templates.
> > If Vitrage can handle about 30 alarm types, and there are 100
> > instances, we don't want to pre-configure 3000 alarms, which most
> likely will never be triggered.
> 
> I understand your concern. Aodh is user facing service, so having lots
> of alarms doesn't make sense.
> 
> Can we clarify use case again in terms of service role definition?

Our use cases focus on giving value to the cloud admin, who will be 
able to:

- view the topology of his environment, the relations between the 
physical, virtual and application layers, and the statuses of all resources
- view the alarms history
- view alarms about problems that Vitrage deduced could happen, even
if no other OpenStack component reported these problems (yet)
- view RCA information about the alarms

> 
> Aodh provides an alarming mechanism to *notify* about events and situations
> calculated from various data sources. But the original/master information
> about a resource, including its latest state, is owned by other services
> such as nova.
> 
> So, a user who wants to know the current resource state to find out dead
> resources (instances) can simply query instances via the nova API. And a
> user who wants to know when/what failure occurred can query events via
> the ceilometer API. Aodh has alarm state and history, though.

I'm not sure I fully understand the difference between Aodh events and 
alarms. If the user wants to know what failure occurred, is it part of 
Aodh events, alarms, or both?

> > > OK. The 'combination' type alarm enables you to aggregate multiple
> > > alarms into one alarm. This can be used when you want to receive an
> > > alarm when both physical NIC ports are down, to recognize logical
> > > connection unavailability if the ports are teamed for redundancy.
> > > Now, the combination alarms are evaluated periodically, which means
> > > you can receive a combination alarm not on-the-fly while you are using
> > > event alarms as the source of the combination alarm, though.
> >
> > I think I understand your point. It means that certain alarms will
> > arrive at Vitrage with a delay, due to your evaluation policy. I think we
> > will have to address this issue at some point, but it won't change our
> > overall design.
> 
> Yes, I'm just curious whether any user would benefit from this
> improvement, to help set the priority.

I don't see a need for that improvement in our current use cases. I'm not so
sure about future use cases, so I will keep this limitation in mind.

Best Regards,
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-06 Thread Jay Pipes

On 12/07/2015 04:28 AM, Michał Jastrzębski wrote:

On 6 December 2015 at 09:33, Jay Pipes  wrote:


On 12/04/2015 11:50 PM, Michał Jastrzębski wrote:


Hey guys,

Orchestrated upgrades are one of our highest priorities for M in
kolla, so following up on the discussion at the summit I'd like to
suggest an approach:

Instead of creating a playbook called "upgrade my openstack" we
will create "upgrade my nova" and approach each
service case by case (since all of our running services are in
Docker containers, this is possible). We will also make use of image tags
as version carriers, so ansible will deploy a new container only if
its version tag differs from what we ask it to deploy. This will help
with the idempotency of the upgrade process.

So let's start with nova. The upgrade-my-nova playbook will do
something like this:

0. We create a snapshot of our mariadb-data container. This will
affect every service, but it's always good to have a backup; a
rollback of the db will be a manual action.

1. Nova bootstrap will be called and it will perform the
db migration. Since the current approach to nova schema changes is add-only, we
shouldn't need to stop services, and old services should keep
working on the newer database. Also, for minor version upgrades there
will be no action here unless there is a migration. 2. We upgrade
all conductors at the same time. This should take mere seconds
since we'll have prebuilt containers. 3. We will upgrade the rest of the
controller services using "serial: 1" in ansible to ensure
rolling upgrades. 4. We will upgrade all of the nova-compute services
at their own pace.

This workflow should be pretty robust (I think it is) and it
should also provide idempotency.
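
The "deploy a new container only if the version tag differs" check could be as
small as the following sketch (container and image names are illustrative,
not actual Kolla code):

    import subprocess

    def running_image(container):
        # Ask Docker which image (including tag) the container was started from.
        out = subprocess.check_output(
            ['docker', 'inspect', '--format', '{{.Config.Image}}', container])
        return out.strip().decode()

    def needs_redeploy(container, desired_image):
        try:
            return running_image(container) != desired_image
        except subprocess.CalledProcessError:
            return True  # container does not exist yet, so deploy it

    if needs_redeploy('nova_conductor',
                      'kollaglue/centos-binary-nova-conductor:2.0.1'):
        # pull the new image and recreate the container here
        pass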



From Nova's perspective, most of the above looks OK to me, however,
I am in more in favor of an upgrade strategy that builds a "new"
set of conductor or controller service containers all at once and
then flips the VIP or managed IP from the current controller pod
containers to the newly-updated controller pod containers.


Well, we can't really do that because it would affect all other
services on the controller host, and we want to minimize the impact on the rest
of the cluster during the upgrade. Containers give us the option to upgrade
"just nova", or even "just nova conductors", without affecting
anything else. On the bright side, redeploying a pre-built container is
a matter of seconds (or even less). The whole process from turning off the
last conductor to spawning the first new conductor should take less than
10s, and that's the only downtime we should get there.


Sorry, I guess I wasn't clear. I was not proposing to put all controller 
services in a single container. I was proposing simply to build the 
various container sets (nova-conductor, nova-api, etc) with the updated 
code for that service and then "flip the target IP or port" for that 
particular service from the existing container set's VIP to the 
new/updated container set's VIP.


In Kubernetes-speak, you would create a new Pod with containers having 
the updated Nova conductor code, and would set the new Pod's selector to 
something different than the existing Pod's selector -- using the Git 
SHA1 hash for the selector would be perfect... You would then update the 
Service object's selector to target the new Pod instead of the old one.


Or, alternately, you could use Kubernetes' support for rolling updates 
[1] of a service using a ReplicationController that essentially does the 
Pod and Service orchestration for you.


Hope that is a little more clear what I was referring to...
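
Concretely, the selector flip is a single merge-patch against the Service
object; a rough sketch (API server URL, namespace and labels are illustrative):

    import json
    import requests

    K8S = 'http://127.0.0.1:8080'   # illustrative API server address
    NEW_SHA = 'abc123d'             # Git SHA1 used as the new Pod set's label

    # Point the nova-conductor Service at the Pods labelled with the new SHA;
    # flipping it back to the old label is the rollback.
    patch = {'spec': {'selector': {'app': 'nova-conductor', 'git_sha': NEW_SHA}}}

    resp = requests.patch(
        K8S + '/api/v1/namespaces/default/services/nova-conductor',
        headers={'Content-Type': 'application/merge-patch+json'},
        data=json.dumps(patch))
    resp.raise_for_status()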

Best,
-jay

[1] 
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/replication-controller.md#rolling-updates



This seems to me to be the most efficient (in terms of minimal
overall downtime) and most robust (because a "rollback" is simply
flipping the VIP back to the original controller/conductor service
pod containers) solution for upgrading Nova.





Dan Smith, can you comment on this since you are the God of
Upgrades?

Best, -jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-06 Thread Alex Xu
2015-12-04 16:48 GMT+08:00 Sylvain Bauza :

>
>
> Le 04/12/2015 04:21, Alex Xu a écrit :
>
>
>
> 2015-12-02 23:12 GMT+08:00 Sylvain Bauza :
>
>>
>>
>> Le 02/12/2015 15:23, Sean Dague a écrit :
>>
>>> We have previously agreed that scheduler hints in Nova are an open ended
>>> thing. It's expected for sites to have additional scheduler filters
>>> which expose new hints. The way we handle that with our strict
>>> jsonschema is that we allow additional properties -
>>>
>>> https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/api/openstack/compute/schemas/scheduler_hints.py#L65
>>>
>>> This means that if you specify some garbage hint, you don't get feedback
>>> that it was garbage in your environment. That lost a couple of days
>>> building multinode tests in the gate. Having gotten used to the hints
>>> that "you've given us bad stuff", this was a stark change back to the
>>> old world.
>>>
>>> Would it be possible to make it so that the schema could be explicitly
>>> extended (instead of implicitly extended)? So that
>>> additionalProperties=False, but with a mechanism for a scheduler filter to
>>> register its jsonschema?
>>>
>>
>> I'm pretty +1 for that because we want to have in-tree filters clear for
>> the UX they provide when asking for scheduler hints.
>>
>
> +1 also, and we should have a capability API for discovering what hints are
> supported by the current deployment.
>
>
>>
>> For the moment, it's possible to have 2 different filters asking for the
>> same hint without providing a way to explain the semantics so I would want
>> to make sure that one in-tree filter could just have the same behaviour for
>> *all the OpenStack deployments.*
>>
>> That said, I remember some discussion we had about that in the past, and
>> the implementation details we discussed about having the Nova API knowing
>> the list of filters and fitering by those.
>> To be clear, I want to make sure that we could not leak the deployment by
>> providing a 401 if a filter is not deployed, but rather just make sure that
>> all our in-tree filters are like checked, even if they aren't deployed.
>>
>
> There isn't any other Nova API that returns 401. So if you return 401, then
> everybody will know that is the only 401 in nova, so I think there
> isn't any difference. As we have the capability API, it's fine to let the user
> know what is supported in the deployment.
>
>
>
> Sorry, I made a mistake by providing a wrong HTTP code for when the
> validation returns a ValidationError (due to the JSON schema not being
> matched by the request).
> Here, my point is that if we check hints on a per-enabled-filter basis,
> it would mean that as a hacker, I could have some way to know what filters
> are enabled, or as a user, I could have different behaviours depending on
> the deployment.
>
> Let me give you an example: say that I'm not enabling the SameHostFilter
> which exposes the 'same_host' hint.
>
> For that specific cloud, if we allow to deny a request which could provide
> the 'same-host' hint (because the filter is not loaded by the
> 'scheduler_default_filters' option), it would make a difference with
> another cloud which enables SameHostFilter (because the request would pass).
>
> So, I'm maybe nitpicking, but I want to make clear that we shouldn't
> introspect the state of the filter, and just consider a static JSON schema
> (as we have today) which would reference all the hints, whether the
> corresponding filter is enabled or not.
>

Yes, I see your concern; that is why I think we should have a capabilities
API. The user should query the capabilities API to find out which filters are
enabled in the current cloud.
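
For illustration, the explicit registration Sean suggests could look roughly
like this (hypothetical hook names, not actual Nova code):

    # Hypothetical sketch: each filter advertises the hints it understands,
    # and the API layer builds a strict schema from those registrations.
    class SameHostFilter(object):
        hint_schema = {
            'same_host': {
                'type': ['string', 'array'],
                'items': {'type': 'string', 'format': 'uuid'},
            },
        }

    def build_scheduler_hints_schema(filter_classes):
        schema = {
            'type': 'object',
            'properties': {},
            'additionalProperties': False,  # unknown hints are now rejected
        }
        for cls in filter_classes:
            schema['properties'].update(getattr(cls, 'hint_schema', {}))
        return schema

Whether filter_classes means "all in-tree filters" or "only the filters
enabled in this deployment" is exactly the interoperability question being
discussed here.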


>
>
>
>
>
>> That leaves the out-of-tree discussion about custom filters and how we
>> could have a consistent behaviour given that. Should we accept something in
>> a specific deployment while another deployment could 401 against it ? Mmm,
>> bad to me IMHO.
>>
>
> We can have code to check that out-of-tree filters don't expose the same
> hints as in-tree filters.
>
>
>
> Sure, and thank you for that, that was missing in the past. That said,
> there are still some interoperability concerns, let me explain : as a cloud
> operator, I'm now providing a custom filter (say MyAwesomeFilter) which
> does the lookup for an hint called 'my_awesome_hint'.
>
> If we enforce a strict validation (and not allow to accept any hint) it
> would mean that this cloud would accept a request with 'my_awesome_hint'
> while another cloud which wouldn't be running MyAwesomeFilter would then
> deny the same request.
>

Yes, same answer as above: the capabilities API would be used to discover the
enabled hints.
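
To make that concrete, a strawman of what such a capabilities response could
contain (purely illustrative; no such Nova API exists today):

    # Purely illustrative strawman of a response for something like
    # GET /v2.1/capabilities/scheduler-hints; Nova has no such API today.
    EXPECTED_RESPONSE = {
        'scheduler_hints': {
            'same_host': {'filter': 'SameHostFilter'},
            'different_host': {'filter': 'DifferentHostFilter'},
            'group': {'filter': 'ServerGroupAffinityFilter'},
        },
    }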


>
> Hope I better explained my concerns,
> -Sylvain
>
>
>>
>> -Sylvain
>>
>> -Sean
>>>
>>>
>>

Re: [openstack-dev] [Neutron] DVR + L3 HA + L2pop - Mapping out the work

2015-12-06 Thread Venkata Anil

Hi Assaf

I will work on this bug (i.e. L3 HA integration with l2pop; I have already 
assigned it to myself).


https://bugs.launchpad.net/neutron/+bug/1522980

Thanks
Anil


On 12/05/2015 04:25 AM, Vasudevan, Swaminathan (PNB Roseville) wrote:

Hi Assaf,
Thanks for putting the list together.
We can help get the pending patches reviewed, if that would help.

Thanks
Swami

-Original Message-
From: Assaf Muller [mailto:amul...@redhat.com]
Sent: Friday, December 04, 2015 2:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] DVR + L3 HA + L2pop - Mapping out the work

There's a patch up for review to integrate DVR and L3 HA:
https://review.openstack.org/#/c/143169/

Let me outline all of the work that has to happen before that patch would be 
useful:

In order for DVR + L3 HA to work in harmony, each feature would have to be 
stable on its own. DVR has its share of problems, and this is being tackled 
full on, with more folks joining the good fight soon. L3 HA also has its own 
problems:

* https://review.openstack.org/#/c/238122/
* https://review.openstack.org/#/c/230481/
* https://review.openstack.org/#/c/250040/

DVR requires l2pop, and l2pop on its own is also problematic (regardless of whether DVR 
or L3 HA is turned on). One notable issue is that it screws up live migration:
https://bugs.launchpad.net/neutron/+bug/1443421.
I'd really like to see more focus on Vivek's patch that attempts to resolve 
this issue:
https://review.openstack.org/#/c/175383/

Finally the way L3 HA integrates with l2pop is not something I would recommend 
in production, as described here:
https://bugs.launchpad.net/neutron/+bug/1522980. If I cannot find an owner for 
this work I will reach out to some folks soon.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2015-12-06 Thread Ren, Qiaowei

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Friday, December 4, 2015 9:47 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Testing concerns around boot from UEFI
> spec
> 
> On 12/04/2015 08:34 AM, Daniel P. Berrange wrote:
> > On Fri, Dec 04, 2015 at 07:43:41AM -0500, Sean Dague wrote:
> >> Can someone explain the licensing issue here? The Fedora comments
> >> make this sound like this is something that's not likely to end up in 
> >> distros.
> >
> > The EDK codebase contains a FAT driver which has a license that
> > forbids reusing the code outside of the EDK project.
> >
> > [quote]
> > Additional terms: In addition to the forgoing, redistribution and use
> > of the code is conditioned upon the FAT 32 File System Driver and all
> > derivative works thereof being used for and designed only to read
> > and/or write to a file system that is directly managed by Intel's
> > Extensible Firmware Initiative (EFI) Specification v. 1.0 and later
> > and/or the Unified Extensible Firmware Interface (UEFI) Forum's UEFI
> > Specifications v.2.0 and later (together the "UEFI Specifications");
> > only as necessary to emulate an implementation of the UEFI
> > Specifications; and to create firmware, applications, utilities and/or 
> > drivers.
> > [/quote]
> >
> > So while the code is open source, it is under a non-free license,
> > hence Fedora will not ship it. For RHEL we're reluctantly choosing to
> > ship it as an exception to our normal policy, since its the only
> > immediate way to make UEFI support available on x86 & aarch64
> >
> > So I don't think the license is a reason to refuse to allow the UEFI
> > feature into Nova though, nor should it prevent us using the current
> > EDK bios in CI for testing purposes. It is really just an issue for
> > distros which only want 100% free software.
> 
> For upstream CI that's also a bar that's set. So for 3rd party, it would 
> probably be
> fine, but upstream won't happen.
> 
> > Unless the license on the existing code gets resolved, some Red Hat
> > maintainers have a plan to replace the existing FAT driver with an
> > alternative impl likely under GPL. At that time, it'll be acceptable
> > for inclusion in Fedora.
> >
> >> That seems weird enough that I'd rather push back on our Platinum
> >> Board member to fix the licensing before we let this in. Especially
> >> as this feature is being drive by Intel.
> >
> > As copyright holder, Intel could choose to change the license of their
> > code to make it free software avoiding all the problems. None the
> > less, as above, I don't think this is a blocker for inclusion of the
> > feature in Nova, nor our testing of it.
> 
> That's fair. However we could also force having this conversation again, and 
> pay
> it forward to the larger open source community by getting this ridiculous
> licensing fixed. We did the same thing with some other libraries in the past.
> 

It should be due to the MIT copyright addition that distributions like Fedora, 
Ubuntu, QEMU... don't include OVMF. But the MS patent looks like it has recently 
expired. So the addition will be removed later, and once it is removed we will work 
to make OVMF a standard part of distributions.

Thanks,
Qiaowei

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Is there blueprint to add argument collection UI for action

2015-12-06 Thread WANG, Ming Hao (Tony T)
Stan,

Thank you very much for the information.
I’m looking forward to this feature! ☺

Thanks,
Tony

From: Stan Lagun [mailto:sla...@mirantis.com]
Sent: Saturday, December 05, 2015 4:52 AM
To: WANG, Ming Hao (Tony T)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [murano] Is there blueprint to add argument collection UI for 
action

Tony,

Here is the blueprint: https://blueprints.launchpad.net/murano/+spec/action-ui

The problem is much bigger than it seems. Action parameters are very similar 
to regular application properties and thus there should be a dedicated dynamic 
UI form (think ui.yaml) for each action describing its inputs.
We discussed it many times and each time the resolution was that it is better 
to do major refactoring of dynamic UI forms before introducing additional 
forms. The intention was to either simplify or completely get rid of ui.yaml 
because it is yet another DSL to learn. Most of the information from ui.yaml 
could be obtained by examining MuranoPL class properties. And what is missing 
could be added as additional metadata right to the classes. However, a lot of 
work is required to do it properly (starting with a new API that would be 
MuranoPL-aware). That's why we still have no proper UI for the actions.

Maybe we should reconsider and have many UI forms in a single package until we 
have a better solution.



Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

On Fri, Dec 4, 2015 at 5:42 AM, WANG, Ming Hao (Tony T) 
> wrote:
Dear Stan and Murano developers,

The current murano-dashboards can add action buttons for the actions defined 
in a Murano class definition automatically, which is great.
Is there any blueprint to add an argument collection UI for the actions?
Currently, murano-dashboards only uses environment_id + action_id to run 
the actions, and the user has no way to provide action arguments from the UI.

Thanks in advance,
Tony


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Rename tenant to project: discussion

2015-12-06 Thread Smigiel, Dariusz
Thanks guys for all the feedback.
I'm happy we worked out a plan for this topic.

-- 
 Dariusz Smigiel
 Intel Technology Poland

> On 12/04/2015 04:26 PM, Armando M. wrote:
> >
> >
> > On 4 December 2015 at 10:02, Kevin Benton  > > wrote:
> >
> > So obviously the stuff in the client can be updated since most of
> > that is user-facing. However, on the server side maybe we can start
> > out by keeping all of the internal code and DB tables the same. Then
> > all we need to worry about is the API translation code to start.
> >
> > Once our public-facing stuff is done, we can just start the
> > transition to project_id internally at our own pace and in much less
> > invasive chunks.
> >
> >
> > This plan is sensible, and kudos to Dariusz for taking it on... this is no
> > small feat of engineering and it won't be the effort of a single
> > person... we're all here to help. Let me state the obvious and remind
> > everyone that this is not a mechanical search-and-replace effort. We gotta be
> > extra careful to support both terms in the process.
> >
> > To sum it up I see the following steps:
> >
> > 1) Make or figure out how the server can talk to the v3 API - which is
> > bug 1503428. If Monty is unable to tackle it soon, I am sure he'll be
> > happy to hand it back and Darius, perhaps, can take over
> 
> I will hack on this next week - sorry for the delay so far. I'd love to do a 
> first
> rough pass and then get Darius to look at it and tell me where I'm insane.
> 
> > This will ensure that if for whatever reason v2 gets pulled out
> > tomorrow (not gonna happen, but still), we're not left high and dry.
> > To achieve this, I think we don't need to invasively change tenant_id to
> > project_id, but only where it's key to get/validate a token.
> 
> ++
> 
> > 2) Start from the client to allow to handle both project_id and tenant_id.
> >
> > The server must be enhanced to be able to convert project_id to
> > tenant_id on the fly. The change should be relatively limited in a few
> > places, like where the requests come in. At this time nothing else needs
> > to change in the server.
> 
>  From an auth perspective, keystoneauth will handle both tenant and project
> as auth parameters (and I've got a patch coming to neutronclient to help get
> that all fleshed out too)
> 
>  From the server/api side and client lib side where people are passing in
> tenant_ids to neutron resources because it's important to associate a
> resource with a tenant/project - I think this is a GREAT plan, and thank you
> for doing it this way. As a consumer of your API, I want neither to have to
> change my code to the new version, nor to write new code using the old version
> (thus perpetuating the move in history)
> 
> I would suggest/request, if there is a way (and this might be _terrible_), to
> mark tenant_id in the _docs_ as either hidden or deprecated, so that new
> users don't write new code using it - but of course we should continue to
> accept tenant_id until the end of time because of how much we love our
> users.
> 
> > 3) Tackle the data model.
> >
> > I wonder if we could leverage some sqlalchemy magic to support both
> > project_id and tenant_id in the db logic, seamlessly... something
> > worth investigating (zzzeek may be of help here). The sooner we start
> > here, the sooner we catch and fix breakages
> >
> > 4) Tackle the codebase sweep.
> >
> > As for projects that use neutron and use the internal APIs, I can't
> > see a clean way of handling the bw compat if not by sprinkling
> > decorators that will take the signature of all the affected methods
> > and convert the tenant_id, but we could definitely explore what this would
> > look like.
> 
> Yah. That one is going to be yuck. I'm happy to hand people beer ... :)
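
For what it's worth, the "sqlalchemy magic" in (3) and the decorator
sprinkling in (4) might be as small as the following untested sketch
(illustrative, not actual Neutron code):

    import functools

    from sqlalchemy import Column, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import synonym

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'networks'
        id = Column(String(36), primary_key=True)
        tenant_id = Column(String(255), index=True)
        # Mapper-level alias: Network.project_id works in queries and in the
        # constructor, but the underlying column stays tenant_id, so no
        # migration is needed yet.
        project_id = synonym('tenant_id')

    def accepts_project_id(func):
        # For (4): let callers pass project_id to methods that still expect
        # tenant_id, so internal code can migrate at its own pace.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if 'project_id' in kwargs and 'tenant_id' not in kwargs:
                kwargs['tenant_id'] = kwargs.pop('project_id')
            return func(*args, **kwargs)
        return wrapper
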
> >
> > On Thu, Dec 3, 2015 at 10:25 AM, Smigiel, Dariusz
> > > wrote:
> >
> > Hey Neutrinos (thanks armax for this word :),
> > Keystone is planning to deprecate V2 API (again :). This time in
> > Mitaka [6], and probably forever. It will stay at least four
> > releases, but we need to decide, how to conquer problem of
> > renaming...
> > And more important: consider if it's a problem for Neutron?
> >
> > I'm looking at blueprint [1] about renaming all occurrences of
> > 'tenant' to 'project', and trying to find out all the details.
> > First attempt to solve this problem was raised in November 2013
> > [4][5] but unfortunately, no one finished it. Although Keystone
> > V3 API is already supported in Neutron client [2], there are
> > still some unknowns about Neutron server side. Monty Taylor is
> > trying to address necessary (if any) changes [3].
> >
> > Findings:
> > I've focused on two projects: python-neutronclient and neutron.
> > grep found 429 

Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-06 Thread Steven Dake (stdake)


On 12/6/15, 9:47 PM, "Jay Pipes"  wrote:

>On 12/07/2015 04:28 AM, Michał Jastrzębski wrote:
>> On 6 December 2015 at 09:33, Jay Pipes  wrote:
>>>
>>> On 12/04/2015 11:50 PM, Michał Jastrzębski wrote:

 Hey guys,

 Orchestrated upgrades are one of our highest priorities for M in
 kolla, so following up after discussion on summit I'd like to
 suggest an approach:

 Instead of creating playbook called "upgrade my openstack" we
 will create "upgrade my nova" instead and approach to each
 service case by case (since all of our running services are in
 dockers, this is possible). We will also make use of image tags
 as version carriers, so ansible will deploy new container only if
 version tag differs from what we ask it to deploy. This will help
 with idempotency of upgrade process.

 So let's start with nova. Upgrade-my-nova playbook will do
 something like this:

 0. We create snapshot of our mariadb-data container. This will
 affect every service, but it's always good to have backup and
 rollback of db will be manual action

 1. Nova bootstrap will be called and it will perform
 db-migration. Since current approach to nova code is add-only we
 shouldn't need to stop services and old services should keep
 working on newer database. Also for minor version upgrades there
 will be no action here unless there is migration. 2. We upgrade
 all conductor at the same time. This should take mere seconds
 since we'll have prebuilt containers 3. We will upgrade rest of
 controller services using "serial: 1" in ansible to ensure
 rolling upgrades. 4. We will upgrade all of nova-compute services
 at its own pace.

 This workflow should be pretty robust (I think it is) and it
 should also provide idempotency.
>>>
>>>
>>> From Nova's perspective, most of the above looks OK to me, however,
>>> I am in more in favor of an upgrade strategy that builds a "new"
>>> set of conductor or controller service containers all at once and
>>> then flips the VIP or managed IP from the current controller pod
>>> containers to the newly-updated controller pod containers.
>>
>> Well, we can't really do that because that would affect all other
>> services on controller host, and we want to minimize impact on rest
>> of cluster during upgrade. Containers gives us option to upgrade
>> "just nova", or even "just nova conductors" without affecting
>> anything else. On the bright side, redeploying pre-built container is
>> a matter of seconds (or even less). Whole process from turning off
>> last conductor to spawn first new conductor should take less than
>> 10s, and that's only downtime we should get there.
>
>Sorry, I guess I wasn't clear. I was not proposing to put all controller
>services in a single container. I was proposing simply to build the
>various container sets (nova-conductor, nova-api, etc) with the updated
>code for that service and then "flip the target IP or port" for that
>particular service from the existing container set's VIP to the
>new/updated container set's VIP.
>
>In Kubernetes-speak, you would create a new Pod with containers having
>the updated Nova conductor code, and would set the new Pod's selector to
>something different than the existing Pod's selector -- using the Git
>SHA1 hash for the selector would be perfect... You would then update the
>Service object's selector to target the new Pod instead of the old one.
>
>Or, alternately, you could use Kubernetes' support for rolling updates
>[1] of a service using a ReplicationController that essentially does the
>Pod and Service orchestration for you.
>
>Hope that is a little more clear what I was referring to...
>
>Best,
>-jay

Jay,

Forgive me if this is a little incoherent - I have a root canal scheduled
for tomorrow, with the fun-filled day that comes with it :(

We don't use docker-proxy in kolla.  Instead we use net=host mode.  I
think what you propose would require an extra global VIP for upgrades (or
using docker-proxy, which we don't want to do). I am not really hot on
this approach for that reason.

Regards
-stee

>
>[1] 
>https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/repli
>cation-controller.md#rolling-updates
>
>>> This seems to me to be the most efficient (in terms of minimal
>>> overall downtime) and most robust (because a "rollback" is simply
>>> flipping the VIP back to the original controller/conductor service
>>> pod containers) solution for upgrading Nova.
>>
>>
>>>
>>> Dan Smith, can you comment on this since you are the God of
>>> Upgrades?
>>>
>>> Best, -jay
>>>

[openstack-dev] [neutron][docs] Third Party Neutron Drivers

2015-12-06 Thread Lana Brindley

Hi everyone,

This is just a note that in accordance with 
http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html
 we will be going ahead and removing the tables for all third party Neutron 
drivers from the Config Ref as part of the conversion to RST. The core Neutron 
components will remain in the doc (that is Neutron itself, and 
neutron-{fw,lb,vpn}aas).

As far as we can tell, all the third-party drivers are already documented 
elsewhere, but please contact me if you feel this isn't the case.

Questions are, as always, very welcome.

Thanks,
Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Upgrading Icehouse to Juno

2015-12-06 Thread Abhishek Shrivastava
Hi Folks,

I have a stable/icehouse devstack setup and I am planning to upgrade it to
stable/kilo, but following the guide mentioned in the OpenStack Manuals does not
seem to work correctly.

So if anyone can help me regarding this issue, please send your suggestions
and guidance.

-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-06 Thread Michal Rostecki

On 12/07/2015 05:47 AM, Jay Pipes wrote:


Sorry, I guess I wasn't clear. I was not proposing to put all controller
services in a single container. I was proposing simply to build the
various container sets (nova-conductor, nova-api, etc) with the updated
code for that service and then "flip the target IP or port" for that
particular service from the existing container set's VIP to the
new/updated container set's VIP.



That idea may work if the new containers are running on other nodes. 
That's because in Kolla we're not using network namespaces; we do net=host 
instead, so all the API services are bound directly to the host's network 
interface.


Of course we might try to use namespaces with default random port 
mapping and autodiscover OpenStack services, and then your idea would be 
perfect. It would be great IMO and I'm +1 for that. But it would require 
a lot of work related to the Marathon API and/or ZooKeeper usage. I'm not 
sure what the other folks would think about that and whether it's 
"doable" in the Mitaka release. I also don't have a concrete idea of how 
the Keystone service catalog would work in this model.



In Kubernetes-speak, you would create a new Pod with containers having
the updated Nova conductor code, and would set the new Pod's selector to
something different than the existing Pod's selector -- using the Git
SHA1 hash for the selector would be perfect... You would then update the
Service object's selector to target the new Pod instead of the old one.

Or, alternately, you could use Kubernetes' support for rolling updates
[1] of a service using a ReplicationController that essentially does the
Pod and Service orchestration for you.



Mesos+Marathon has its own upgrade scenario, and it seems to be almost the 
same as what you're proposing in the first paragraph.


https://mesosphere.github.io/marathon/docs/deployment-design-doc.html

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev