Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-03-23 Thread Takashi Yamamoto
On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
 wrote:
> Migration script has been submitted, v1 is not going anywhere from 
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
>
> I’m thinking in this order:
>
> - remove jenkins jobs
> - wait for heat to remove their jenkins jobs ([heat] added to this thread, so 
> they see this coming before the job breaks)

magnum is relying on lbaasv1.  (with heat)

> - remove q-lbaas from devstack, and any references to lbaas v1 in 
> devstack-gate or infra defaults.
> - remove v1 code from neutron-lbaas
>
> Since newton is now open for commits, this process is going to get started.
>
> Thanks,
> doug
>
>
>
>> On Mar 8, 2016, at 11:36 AM, Eichberger, German  
>> wrote:
>>
>> Yes, it’s Database only — though we changed the agent driver in the DB from 
>> V1 to V2 — so if you bring up a V2 with that database it should reschedule 
>> all your load balancers on the V2 agent driver.
>>
>> German
>>
>>
>>
>>
>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>>
>>> So this looks like only a database migration, right?
>>>
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>>> Sent: Tuesday, March 08, 2016 12:28 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>>>
>>> Ok, for what it’s worth we have contributed our migration script: 
>>> https://review.openstack.org/#/c/289595/ — please look at this as a 
>>> starting point and feel free to fix potential problems…
>>>
>>> Thanks,
>>> German
>>>
>>>
>>>
>>>
>>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
>>>
As far as I recall, you can specify the VIP when creating the LB, so you will 
end up with the same IPs.

 -Original Message-
 From: Eichberger, German [mailto:german.eichber...@hpe.com]
 Sent: Monday, March 07, 2016 8:30 PM
 To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

 Hi Sam,

 So if you have some 3rd party hardware you only need to change the
 database (your steps 1-5) since the 3rd party hardware will just keep
 load balancing…

 Now for Kevin’s case with the namespace driver:
 You would need a 6th step to reschedule the loadbalancers with the V2 
 namespace driver — which can be done.

If we want to migrate to Octavia (or from one LB provider to another) it 
might be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health Monitors, Members) into some JSON format file(s)
2. Delete LBaaS v1
3. Uninstall LBaaS v1
4. Install LBaaS v2
5. Transform the JSON format file into some scripts which recreate the load balancers with your provider of choice
6. Run those scripts

 The problem I see is that we will probably end up with different VIPs
 so the end user would need to change their IPs…
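For illustration, step 1 (dumping the v1 objects to JSON) could look roughly
like the sketch below. It is only a sketch against python-neutronclient's
LBaaS v1 list calls; the credentials, auth URL and output file name are
placeholders, and a real migration script would also need to capture
provider/flavor details:

    import json

    from neutronclient.v2_0 import client

    # placeholder credentials -- adjust for your cloud
    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    dump = {
        'vips': neutron.list_vips()['vips'],
        'pools': neutron.list_pools()['pools'],
        'members': neutron.list_members()['members'],
        'health_monitors':
            neutron.list_health_monitors()['health_monitors'],
    }

    with open('lbaas_v1_dump.json', 'w') as f:
        json.dump(dump, f, indent=2)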

 Thanks,
 German



 On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:

> As for a migration tool: due to model changes and deployment changes 
> between LBaaS v1 and LBaaS v2, I am in favor of the following process:
>
> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health Monitors, Members) into some JSON format file(s)
> 2. Delete LBaaS v1
> 3. Uninstall LBaaS v1
> 4. Install LBaaS v2
> 5. Import the data from 1 back over LBaaS v2 (need to allow moving from flavor1-->flavor2, need to make room for some custom modification for mapping between v1 and v2 models)
>
> What do you think?
>
> -Sam.
>
>
>
>
> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: Friday, March 04, 2016 2:06 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> Ok. Thanks for the info.
>
> Kevin
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Thursday, March 03, 2016 2:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> Just for clarity, V2 did not reuse tables; all the tables it uses are 
> exclusive to it.  The main problem is that v1 and v2 both have a pools 
> resource, but v1's and v2's pool resources have different attributes.  With 
> the way the neutron wsgi layer works, if both v1 and v2 are enabled, it will 
> combine both sets of attributes into the same validation schema.
>
> The other 

[openstack-dev] [Glance] Block subtractive schema changes

2016-03-23 Thread Kekane, Abhishek
Hi Glance Team,

I have registered a blueprint [1] for blocking subtractive schema changes.
Cinder and Nova already support blocking subtractive schema operations, and I 
would like to add similar support here.

Please let me know your opinion on the same.

[1] https://blueprints.launchpad.net/glance/+spec/block-subtractive-operations
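For context, a subtractive change is one that drops or shrinks existing
database objects, as opposed to purely additive changes (new tables, new
nullable columns). A minimal illustration in Alembic style follows; the
table and column names are made up, and Glance's own migrations may use a
different framework, so this is only to show what would be blocked:

    # hypothetical migration containing a subtractive operation
    import sqlalchemy as sa
    from alembic import op


    def upgrade():
        # dropping an existing column destroys data and breaks rolling
        # upgrades -- this is the kind of operation that would be blocked
        op.drop_column('images', 'some_old_column')


    def downgrade():
        # re-adding the column cannot restore the lost data
        op.add_column('images', sa.Column('some_old_column', sa.Text()))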


Thank you,

Abhishek Kekane



Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Rabi Mishra
> On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
> >Hello,
> >It looks similar to the issue which was discussed here [1]
> >I suppose the root cause is incorrect use of get_attr in your
> >case.
> >Probably you got a "list" instead of a "string".
> >F.e. if I do something similar:
> >outputs:
> >  rg_1:
> >    value: {get_attr: [rg_a, rg_a_public_ip]}
> >  rg_2:
> >    value: {get_attr: [rg_a, rg_a_public_ip, 0]}
> >                  
> >  rg_3:
> >    value: {get_attr: [rg_a]}
> >  rg_4:
> >    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
> >where rg_a is also resource group which uses custom template as
> >resource.
> >the custom template has output value rg_a_public_ip.
> >The output for it looks like [2]
> >So as you can see, in the first case (as it is used in your example),
> >get_attr returns a list with one element.
> >rg_2 is also wrong, because it takes the first symbol from the string with
> >the IP address.
> 
> Shouldn't rg_2 and rg_4 be equivalent?

They are the same for template version 2013-05-23. However, they behave 
differently from the next version (2014-10-16) onward and return a list of 
characters. I think this is because the `get_attr` function mapping changed 
in 2014-10-16.


2013-05-23 -  
https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L70
2014-10-16 -  
https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L291

This makes me wonder why a template author would do something like 
{get_attr: [rg_a, rg_a_public_ip, 0]} when they can easily do 
{get_attr: [rg_a, resource.0.rg_a_public_ip]} or 
{get_attr: [rg_a, resource.0, rg_a_public_ip]} 
for specific resource attributes.

I understand that {get_attr: [rg_a, rg_a_public_ip]} can be useful when we 
just want to use the list of attributes.
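To make the difference concrete, here is a minimal sketch of the forms being
discussed (the group and attribute names mirror the example above, and the
nested template is assumed to expose an rg_a_public_ip output):

    heat_template_version: 2014-10-16
    ...
    outputs:
      all_ips:
        # list of rg_a_public_ip from every member of the group
        value: {get_attr: [rg_a, rg_a_public_ip]}
      first_ip:
        # the same output from the first member only
        value: {get_attr: [rg_a, resource.0, rg_a_public_ip]}
      first_ip_dotted:
        # equivalent dotted form
        value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}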


> 
> {get_attr: [rg_a, rg_a_public_ip]} should return a list of all
> rg_a_public_ip attributes (one list item for each resource in the group),
> then the 0 should select the first item from that list?
> 
> If it's returning the first character of the first element, that sounds
> like a bug to me?
> 
> Steve
> 


Re: [openstack-dev] [Fuel][Nailgun] Random failures in unit tests

2016-03-23 Thread Mike Scherbakov
I finally got it passing all the tests, including performance:
https://review.openstack.org/#/c/294976/. I'd appreciate it if you guys could
review/land it sooner rather than later: the patch touches many tests, and it
would be beneficial for everyone to be based on the updated code.

Thanks,

On Mon, Mar 21, 2016 at 12:22 AM Mike Scherbakov 
wrote:

> FakeUI, which is based on fake threads, is obviously needed for
> development purposes.
> Ideally we need to refactor our integration tests, so that we don't run
> whole pipeline in every test. To start, I suggest that we switch from
> threads to synchronous runs of test cases (while keeping threads for
> fakeUI).
> Please take a look & comment in this draft:
> https://review.openstack.org/#/c/294976/
>
> Thanks,
>
> On Wed, Mar 16, 2016 at 7:30 AM Igor Kalnitsky 
> wrote:
>
>> Hey Vitaly,
>>
>> Thanks for your feedback, it's an important notice. However, I think
>> you didn't quite get the problem, so let me explain it again.
>>
>> You see, Nailgun unit tests are failing due to races or deadlocks
>> caused by two transactions: the test transaction and the fake thread
>> transaction, and we must face it and fix it. This problem has nothing
>> to do with the problem you're encountering in UI tests. Besides,
>> removing them from tests doesn't mean removing them from the Nailgun code
>> base.
>>
>> So your problem must be addressed, but it's kinda another story.
>>
>> Thanks,
>> Igor
>>
>> On Wed, Mar 16, 2016 at 4:21 PM, Vitaly Kramskikh
>>  wrote:
>> > Igor,
>> >
>> > We have UI and CLI integration tests which use the fake mode of Nailgun,
>> > and we can't avoid using fake threads for them. So I think we need to
>> > think about how to fix fake threads instead. There is a critical bug which
>> > is the main reason for randomly failing UI tests. To fix it, we need to
>> > fix the fake threads' behaviour.
>> >
>> > 2016-03-16 17:06 GMT+03:00 Igor Kalnitsky :
>> >>
>> >> Hey Fuelers,
>> >>
>> >> As you might know, recently we have encountered a lot of random test
>> >> failures on CI, and they are still there (likely with a bit less probability).
>> >> The nature of these random failures is actually not random: they
>> >> happen because of so-called fake threads.
>> >>
>> >> Fake threads, actually, ain't fake at all. They are native OS threads
>> >> that are designed to emulate Astute behaviour (i.e. catch RPC call and
>> >> respond with appropriate message). Since they are native threads and
>> >> we use SQLAlchemy's scoped_session, fake threads are using a separate
>> >> database session, hence - transaction. That leads to the following
>> >> issues:
>> >>
>> >> * Races. We don't know when threads are switched; therefore, we don't
>> >> know what's committed and what's not. Some Nailgun tests send
>> >> something via RPC (caught by fake threads) and immediately check
>> >> something. The issue is, we can't guarantee the fake thread has already
>> >> committed the produced result. That could be avoided by waiting for
>> >> the 'ready' status of the created nailgun task; however, it's better to
>> >> simply not use fake threads in that case and call the appropriate
>> >> Nailgun receiver's method directly in the test.
>> >>
>> >> * Deadlocks. It's incredibly hard to ensure the same order of database
>> >> locks in test + business code on the one hand and fake thread code on the
>> >> other hand. That's why we can (and do) encounter deadlocks on CI,
>> >> when the test case waits for a lock acquired by the fake thread, and the
>> >> fake thread waits for a lock acquired by the test case.
>> >>
>> >> Fake threads have become a bottleneck for landing patches to master in
>> >> time, and we can't ignore it anymore. We have ~190 tests that use fake
>> >> threads, and fixing them all at once is a boring routine. So I kindly
>> >> ask Nailgun contributors to fix them as soon as we face them. Let's
>> >> file a bug on each failure in CI, and quickly prepare a separate patch
>> >> that removes the fake thread from the failed test.
>> >>
>> >> Thanks in advance,
>> >> Igor
>> >>
>> >>
>> >
>> >
>> >
>> >
>> > --
>> > Vitaly Kramskikh,
>> > Fuel UI Tech Lead,
>> > Mirantis, Inc.
>> >
>> >

Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-23 Thread Craig Vyvial
The trove-dashboard has its own stable/mitaka branch [1] as well. We have
an RC1 release already and we can make sure to land the translations and
cut an RC2 early next week (March 28).

Thanks,
Craig Vyvial

[1] https://github.com/openstack/trove-dashboard/tree/stable/mitaka


On Wed, Mar 23, 2016 at 11:02 PM Akihiro Motoki  wrote:

> Thank you all for your support.
> We can see the progress of translations at [0]
>
> Shu,
> Magnum UI adopts the independent release model. Good to know you have a
> stable/mitaka branch :)
> Once the stable branch is cut, please let not only me but also the i18n team
> know; the openstack-i18n ML is the best place to do it.
> Then the i18n team and the infra team will set up the required actions for
> the Zanata sync.
>
> [0]
> https://translate.openstack.org/version-group/view/mitaka-translation/projects
>
> 2016-03-24 12:33 GMT+09:00 Shuu Mutou :
> > Hi Akihiro,
> >
> > Thank you for your announcement.
> > We will create the stable/mitaka branch for Magnum-UI this week,
> > and that will freeze strings.
> >
> > Thanks,
> >
> > Shu Muto
> >
> >
> >


Re: [openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-23 Thread GHANSHYAM MANN
Hi Kevin,

The idea behind jacket is nice, but I am not sure how feasible and
valuable it would be. There are many projects available for API translation
and gateways; one I remember is Aviator (a Ruby gem) [1], though I am not
sure how active it is now.

As your idea is more about absorbing all the differences between clouds, a
few queries:

 1. Different clouds have very different API models and feature sets. How
worthwhile is it to provide missing/different features at the jacket layer?
You would end up with yet another cloud layer.

 2. To support that idea through the standard OpenStack APIs, you would need
to insert jacket drivers across the components, which means another layer
gets inserted there. Maintainability of that is another issue for each
OpenStack component.

IMO, a layer outside of OpenStack which can do all of this would be nicer:
something which can redirect API calls at the top level and do all the
conversion, absorb the differences, etc.

[1] https://github.com/aviator/aviator

Regards
Ghanshyam Mann


On Wed, Mar 16, 2016 at 9:58 PM, zs  wrote:
> Hi Gordon,
>
> Thank you for your suggestion.
>
> I think jacket is different from tricircle: tricircle focuses on
> OpenStack deployment across multiple sites, while jacket focuses on how to
> manage different clouds just like one cloud.  There are some
> differences:
> 1. Account management and API model: Tricircle faces multiple OpenStack
> instances which can share one Keystone and have the same API model, but
> jacket will face different clouds which have their own services and
> different API models. For example, VMware vCloud Director has no volume
> management like OpenStack and AWS have, so jacket will offer a fake volume
> management for this kind of cloud.
> 2. Image management: One image can only run in one cloud; jacket needs to
> consider how to solve this problem.
> 3. Flavor management: Different clouds have different flavors which cannot
> be operated on by users. Jacket will face this problem, but there is no such
> problem in tricircle.
> 4. Legacy resource adoption: Because of the different API models, it will
> be a huge challenge for jacket.
>
> I think it may be a good solution for jacket to unify the API model
> for different clouds, and then use tricircle to offer the management of a
> large number of VMs.
>
> Best Regards,
> Kevin (Sen Zhang)
>
>
> At 2016-03-16 19:51:33, "gordon chung"  wrote:
>>
>>
>>On 16/03/2016 4:03 AM, zs wrote:
>>> Hi all,
>>>
>>> There is a new project "jacket" to manage multiply clouds. The jacket
>>> wiki is: https://wiki.openstack.org/wiki/Jacket
>>>   Please review it and give your comments. Thanks.
>>>
>>> Best Regards,
>>>
>>> Kevin (Sen Zhang)
>>>
>>>
>>
>>i don't know exact details of either project, but i suggest you
>>collaborate with tricircle project[1] because it seems you are
>>addressing the same user story (and in a very similar fashion). not sure
>>if it's a user story for OpenStack itself, but no point duplicating
>> efforts.
>>
>>[1] https://wiki.openstack.org/wiki/Tricircle
>>
>>cheers,
>>
>>--
>>gord
>>
>
>
>
>
>
>


Re: [openstack-dev] [kolla] [vote] Managing bug backports to Mitaka branch

2016-03-23 Thread Swapnil Kulkarni
On Thu, Mar 24, 2016 at 12:10 AM, Steven Dake (stdake)  wrote:
> We had an emergency voting session on this proposal on IRC in our team
> meeting today and it passed as documented in the meeting minutes[1].  I was
> asked to have a typical vote and discussion on irc by one of the
> participants of the vote, so please feel free to discuss and vote again.  I
> will leave discussion and voting open until March 30th.  If the voting is
> unanimous prior to that time, I will close voting.  The original vote will
> stand unless there is a majority that oppose this process in this formal
> vote.  (formal votes > informal irc meeting votes).
>
> Thanks,
> -steve
>
> [1]
> http://eavesdrop.openstack.org/meetings/kolla/2016/kolla.2016-03-23-16.30.log.html
>
> look for timestamp 16:51:05
>
> From: Steven Dake 
> Reply-To: OpenStack Development Mailing List
> 
> Date: Tuesday, March 22, 2016 at 10:12 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [kolla] Managing bug backports to Mitaka branch
>
> Thierry (ttx in the irc log at [1]) proposed the standard way projects
> typically handle backports of newton fixes that should be fixed in an rc,
> while also maintaining the information in our rc2/rc3 trackers.
>
> Here is an example bug with the process applied:
> https://bugs.launchpad.net/kolla/+bug/1540234
>
> To apply this process, the following happens:
>
> Any individual may propose a newton bug for backport potential by specifying
> the tag 'rc-backport-potential' in the Newton 1 milestone.
> Core reviewers review the rc-backport-potential bugs.
>
> Core reviewers check [3] on a daily basis for new rc backport candidates.
> If a core reviewer thinks the bug should be backported to stable/mitaka
> (or belongs in the rc), they use the "Target to series" button, select mitaka,
> and save.
> They then copy the state of the bug, but set the Mitaka milestone target to
> "mitaka-rc2".
> Finally they remove the rc-backport-potential tag from the bug, so it isn't
> re-reviewed.
>
> The purpose of this proposal is to do the following:
>
> Allow the core reviewer team to keep track of bugs needing attention for the
> release candidates in [2] by looking at [3].
> Allow master development to proceed un-impeded.
> Not single thread on any individual for backporting.
>
> I'd like further discussion on this proposal at our Wednesday meeting, so
> I've blocked off a 20 minute timebox for this topic.  I'd like wide
> agreement from the core reviewers to follow this best practice, or
> alternatively let's come up with a plan B :)
>
> If you're a core reviewer and won't be able to make our next meeting, please
> respond on this thread with your thoughts.  Let's also not apply the process
> until the conclusion of the discussion at Wednesday's meeting.
>
>

I was not able to attend the meeting yesterday. I am +1 on this.



Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-23 Thread Akihiro Motoki
Thank you all for your support.
We can see the progress of translations at [0]

Shu,
Magnum UI adopts the independent release model. Good to know you have a
stable/mitaka branch :)
Once the stable branch is cut, please let not only me but also the i18n team
know; the openstack-i18n ML is the best place to do it.
Then the i18n team and the infra team will set up the required actions for
the Zanata sync.

[0] 
https://translate.openstack.org/version-group/view/mitaka-translation/projects

2016-03-24 12:33 GMT+09:00 Shuu Mutou :
> Hi Akihiro,
>
> Thank you for your announcement.
> We will create the stable/mitaka branch for Magnum-UI this week,
> and that will freeze strings.
>
> Thanks,
>
> Shu Muto
>
>


Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-23 Thread Shuu Mutou
Hi Akihiro,

Thank you for your announcement.
We will create the stable/mitaka branch for Magnum-UI this week,
and that will freeze strings.

Thanks, 

Shu Muto




Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Adam Young

On 03/23/2016 03:11 PM, Fox, Kevin M wrote:

If heat convergence worked (Is that a thing yet?), it could potentially be used 
instead of a COE like kubernetes.

The thing ansible buys us today would be upgradeability. Ansible is config 
management, but it's also a workflow-like tool. Heat's bad at workflow.

I think between Heat with Convergence, Kolla containers, and some kind of 
Mistral workflow for upgrades, you could replace Ansible.

Then there's the nova instance user thing again 
(https://review.openstack.org/93)... How do you get secrets to the 
instances securely... Kubernetes has a secure store we could use... OpenStack 
still hasn't really gotten this one figured out. :/ Barbican is a piece of that 
puzzle, but there's no really good way to hook it and nova together.


Don't really think Kubernetes has this solved, either.  You need to 
build security from the ground up.



I think the best we can do is get a One Time Password into the config 
drive for a new instance, and immediately have that instance use the 
OTP to register with an identity manager.




Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Wednesday, March 23, 2016 8:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

Hello,

So Ryan, I think you can make use of heat all the way. Architecture of
kolla doesn't require you to use ansible at all (in fact, we separate
ansible code to a different repo). Truth is that ansible-kolla is
developed by most people and considered "the way to deploy kolla" by
most of us, but we make sure that we won't cut out other deployment
engines from our potential.

So bottom line, heat may very well replace ansible code if you can
duplicate logic we have in playbooks in heat templates. That may
require docker resource with pretty complete featureset of docker
itself (named volumes being most important). Bootstrap is usually done
inside container, so that would be possible too.

To be honest, as for tripleo, doing just bare metal deployment would
defeat the idea of tripleo. We have bare metal deployment tools already
(cobbler, which is widely used, and bifrost, which uses ansible the same as
kolla, so integration would be easier), and these come with a significantly
smaller footprint than the whole tripleo infrastructure. The strength of
tripleo comes from its rich config of openstack itself, and I think that
should be portable to kolla.



On 23 March 2016 at 06:54, Ryan Hallisey  wrote:

*Snip*


Indeed, this has literally none of the benefits of the ideal Heat
deployment enumerated above save one: it may be entirely the wrong tool
in every way for the job it's being asked to do, but at least it is
still well-integrated with the rest of the infrastructure.
Now, at the Mitaka summit we discussed the idea of a 'split stack',
where we have one stack for the infrastructure and a separate one for
the software deployments, so that there is no longer any tight
integration between infrastructure and software. Although it makes me a
bit sad in some ways, I can certainly appreciate the merits of the idea
as well. However, from the argument above we can deduce that if this is
the *only* thing we do then we will end up in the very worst of all
possible worlds: the wrong tool for the job, poorly integrated. Every
single advantage of using Heat to deploy software will have evaporated,
leaving only disadvantages.

I think Heat is a very powerful tool; having done the container integration
into the tripleo-heat-templates, I can see its appeal.  Something I learned
from that integration was that Heat is not the best tool for container
deployment, at least right now.  We were able to leverage the work in Kolla,
but what it came down to was that we're not using containers or Kolla to
their max potential.

I did an evaluation recently of tripleo and kolla to see what we would gain
if the two were to combine. Let's look at some items on tripleo's roadmap.
Split stack, as mentioned above, would be gained if tripleo were to adopt
Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
and deployment.  Therefore, allowing for the decoupling for each piece of
the stack.  Composable roles, this would be the ability to land services
onto separate hosts on demand.  Kolla also already does this [1]. Finally,
container integration, this is just a given :).

In the near term, if tripleo were to adopt Kolla as its overcloud it would
be provided these features and retire heat to setting up the baremetal nodes
and providing those ips to ansible.  This would be great for kolla too because
it would provide baremetal provisioning.

Ian Main and I are currently working on a POC for this as of last week [2].
It's just a simple heat template :).

I think further down the road we can evaluate using kubernetes [3].
For now though, kolla-ansible is 

Re: [openstack-dev] [trove] OpenStack Trove meeting minutes (2016-03-23)

2016-03-23 Thread Tony Breeds
On Wed, Mar 23, 2016 at 07:52:10PM +, Amrith Kumar wrote:
> The meeting bot died during the meeting and therefore the logs on eavesdrop 
> are useless. So I've had to get "Old-Fashioned-Logs(tm)".
> 
> Action Items:
> 
>  #action [all] If you have a patch set that you intend to resume work 
> on, please put an update in it to that effect so we don't go abandon it under 
> you ...
>  #action [all]  if any of the abandoned patches looks like something 
> you would like to pick up feel free
>  #action cp16net reply to trove-dashboard ML question for RC2
>  #action [all] please review changes [3], [4], and link [5] in agenda 
> and update the reviews
> 
> Agreed:
> 
>  #agreed flaper87 to WF+1 the patches in question [3] and [4]
> 
> Meeting agenda is at 
> https://wiki.openstack.org/wiki/Trove/MeetingAgendaHistory#Trove_Meeting.2C_March_23.2C_2016
> 
> Meeting minutes (complete transcript) is posted at
> 
> https://gist.github.com/amrith/5ce3e4a0311f2cc4044c

I'm still unsure of the value of adding these tests, and would love some
pointers.

The current stable/liberty branch of trove fails
gate-trove-scenario-functional-dsvm-mysql with a summary of "FAILED (SKIP=23,
errors=13)"[1].

The 2 reviews in question fail with
"FAILED (SKIP=5, errors=2)"[2] and
"FAILED (SKIP=18, errors=1)"[3].

Granted, these are heading in the right direction (in terms of fails).  There are
no dependencies between the 2 reviews, so I'll assume that if they're both
merged that number won't regress.

If you look at the logs all 3 runs failed "instance_resize_flavor"
[4][5][6]

So even with the changes merged you still don't end up with a gate job (even on
the experimental queue) that you can "just use".

Yes you have improved testing coverage, but is it meaningful?

I'm not blocking you from merging them; I just don't understand the benefit.

Tony.

[1] 
http://logs.openstack.org/55/295055/1/experimental/gate-trove-scenario-functional-dsvm-mysql/f8d62a3/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-21_01_57_16_012
[2] 
http://logs.openstack.org/89/262289/1/experimental/gate-trove-scenario-functional-dsvm-mysql/1464974/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-19_07_33_32_392
[3] 
http://logs.openstack.org/87/262287/1/experimental/gate-trove-scenario-functional-dsvm-mysql/ad5dbff/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-19_07_39_27_114
[4] 
http://logs.openstack.org/55/295055/1/experimental/gate-trove-scenario-functional-dsvm-mysql/f8d62a3/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-21_01_57_15_242
[5] 
http://logs.openstack.org/87/262287/1/experimental/gate-trove-scenario-functional-dsvm-mysql/ad5dbff/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-19_07_39_26_000
[6] 
http://logs.openstack.org/89/262289/1/experimental/gate-trove-scenario-functional-dsvm-mysql/1464974/logs/devstack-gate-post_test_hook.txt.gz#_2016-03-19_07_33_31_744




Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Robert Collins
On 24 March 2016 at 13:23, Dean Troyer  wrote:
> On Wed, Mar 23, 2016 at 3:03 PM, Doug Hellmann 
> wrote:
>>
>> Are you packaging unreleased things in RDO? Because those are the only
>> things that will have similar version numbers. We ensure that whatever
>> is actually tagged have good, non-overlapping, versions.
>
>
> The packages on https://trunk.rdoproject.org/ are built from master and have
> version numbers in the future derived from the pbr default 'next' version.
> So yes, there are packages distributed in the wild with version numbers that
> have not been tagged.

That's ok, because that's an opt-in thing there - it's not living in
the same namespace as the mitaka builds for RDO, is it?

Trunk will rapidly exceed mitaka's versions, so there will be no confusion there either.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Robert Collins
On 24 March 2016 at 10:36, Ian Cordasco  wrote:
>
>

> The project will build wheels first. The wheels generated tend to look 
> something like 13.0.0.0rc2.dev10 when they're built because of pbr.
>
> If someone is doing CD with the openstack-ansible project and they deploy 
> mitaka once it has a final tag, then they decide to upgrade to run master, 
> they could run into problems upgrading. That said, I think my team is the 
> only team doing this. (Or at least, none of the other active members of the 
> IRC channel talk about doing this.) So it might not be anything more than a 
> "nice to have" especially since no one else from the project has chimed in.

So when we discussed this in Tokyo, we had the view that folk *either*
run master -> master, or they run release->release, or rarely
release->alpha-or-beta.

We didn't think it made sense that folk would start with a stable use
case and evolve that into an unstable one, *particularly* early in the
unstable cycle.

So - if there's one team doing it, I think it's keep-both-pieces time.

If it's going to be more common than that, we can do the legwork to
make tags early on every time. But I don't think we've got supporting
evidence that it's more than you so far :).

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [magnum] Generate atomic images usingdiskimage-builder

2016-03-23 Thread Kai Qiang Wu
1) +1, diskimage-builder may be a better place for external consumption.

2) For the (big) image size difference, I think we need to understand what
causes it.
Maybe the Red Hat folks know something about it.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Ton Ngo" 
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   24/03/2016 01:12 am
Subject:Re: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder



Hi Yolanda,
Thank you for making a huge improvement from the manual process of building
the Fedora Atomic image.
Although Atomic does publish a public OpenStack image that is being
considered in this patch:
https://review.openstack.org/#/c/276232/
in the past we have run into many situations where we need an image with a
specific version of certain software for features or bug fixes (Kubernetes,
Docker, Flannel, ...), so the automated and customizable build process
will be very helpful.

With respect to where to land the patch, I think diskimage-builder is a
reasonable target.
If it does not land there, Magnum does currently have 2 sets of
diskimage-builder elements for Mesos image
and Ironic image, so it is also reasonable to submit the patch to Magnum.
With the new push to reorganize
into drivers for COE and distro, the elements would be a natural fit for
Fedora Atomic.

As for periodic image build, it's a good idea to stay current with the
distro, but we should avoid the situation
where something new in the image breaks a COE and we are stuck for a while
until a fix is made. So instead of
an automated periodic build, we might want to stage the new image to make
sure it's good before switching.

One question: I notice the image built by DIB is 871MB, similar to the
manually built image, while the
public image from Atomic is 486MB. It might be worthwhile to understand the
difference.

Ton Ngo,


From: Yolanda Robla Mota 
To: 
Date: 03/23/2016 04:12 AM
Subject: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder



Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
The image needs to be built manually, uploaded to fedorapeople, and then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using a
diskimage-builder element. This element also has the ability to consume
any tree we need, so images can be customized on demand. I generated one
image using this element and uploaded it to fedorapeople. The image has
passed tests, and has been validated by several people.

So i'm raising this topic to decide what the next steps should be. The
change to generate fedora-atomic images has not yet landed in
diskimage-builder, but we have two options here:
- add this element to the generic diskimage-builder elements, as i'm doing now
- keep this element internally in magnum. So we can have a directory
in the magnum project, called "elements", and have the fedora-atomic element
there. This will give us more control over the element behaviour, and will
allow us to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be to
periodically generate images using a magnum job, and upload these images
to OpenStack Infra mirrors. Currently the image is based on Fedora F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (i know that the bottleneck is on the functional
testing, but if we reduce the download time it can help), and tests can
be more reliable, because we will be removing an external dependency.

So i'd like to get more feedback on this topic, options and next steps
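For reference, once the element lands, building such an image would look
roughly like the command below. The element name and environment variables
are assumptions based on the review, so check the element's README for the
exact knobs:

    # assumed invocation of diskimage-builder with the new element
    export DIB_RELEASE=23
    disk-image-create -o fedora-atomic fedora-atomic
    # produces fedora-atomic.qcow2, ready to upload to glance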

Re: [openstack-dev] [release]how to release an non-official project in Mitaka

2016-03-23 Thread joehuang
Hi, Thierry,

This is quite clear, thanks a lot, would ask for more help if needed.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, March 23, 2016 9:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; Rochelle Grober; huangzhipeng; Gordon Chung
Subject: Re: [openstack-dev][release]how to release an non-official project in 
Mitaka

joehuang wrote:
> Thanks for the help. There is a plan for not only Tricircle but also 
> Kingbird to do a release in Mitaka; neither of them is an official OpenStack 
> project yet. The question is whether these projects can 
> leverage the facility https://github.com/openstack/releases to do a 
> release, or is there any guide on how to do the release work 
> themselves for new projects? Or is just tagging enough?

So... openstack/releases is specifically meant to list official OpenStack 
deliverables. Unofficial projects shall do their releases independently.

You can find information on how to do releases for projects hosted under 
OpenStack infrastructure here:

http://docs.openstack.org/infra/manual/drivers.html#release-management

Generally it implies pushing a tag and having a -tarball job defined (the job 
will pick up the tag and upload a source code tarball versioned after the tag 
name to tarballs.openstack.org).
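As a concrete illustration of "pushing a tag" for an independently released
project (the version number is only an example, and the tag must be signed
by someone with release permissions on the repository):

    # from a checkout of the project, on the commit you want to release
    git tag -s 1.0.0 -m "tricircle 1.0.0"
    git push gerrit 1.0.0
    # the -tarball job then publishes the versioned tarball to
    # tarballs.openstack.org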

Let me know if you have any other question.
Regards,

--
Thierry Carrez (ttx)


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-23 Thread melanie witt
On Mar 23, 2016, at 16:56, melanie witt  wrote:

> I may have found a workaround for the scroll jumping after reading through 
> the upstream issue comments [1]: use the "Slow" setting in the preferences 
> for Render. Click the gear icon in the upper right corner of the diff view 
> and click the Render switch to "Slow" and then Apply. It seems to be working 
> for me so far.

I realized Apply only works for the current screen, you must use Save to make 
the setting stick for future screens [2].

-melanie

[2] 
https://gerrit-review.googlesource.com/Documentation/user-review-ui.html#diff-preferences




Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Dean Troyer
On Wed, Mar 23, 2016 at 3:03 PM, Doug Hellmann 
wrote:

> Are you packaging unreleased things in RDO? Because those are the only
> things that will have similar version numbers. We ensure that whatever
> is actually tagged have good, non-overlapping, versions.
>

The packages on https://trunk.rdoproject.org/ are built from master and
have version numbers in the future derived from the pbr default 'next'
version.  So yes, there are packages distributed in the wild with version
numbers that have not been tagged.
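For what it's worth, you can see the version pbr would derive for the current
git state of any pbr-based project by asking setuptools; the output below is
just an example of an untagged commit after an rc tag:

    $ python setup.py --version
    13.0.0.0rc2.dev10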

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-23 Thread Joshua Harlow

On 03/23/2016 12:49 PM, pnkk wrote:

Joshua,

We are performing a few scaling tests for our solution and are seeing
errors like the one below:

Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'\n  InternalError: 
(pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting 
transaction') [SQL: u'UPDATE logbooks SET created_at=%s, updated_at=%s, meta=%s, 
name=%s, uuid=%s WHERE logbooks.uuid = %s'] [parameters: (datetime.datetime(2016, 3, 
18, 18, 16, 40), datetime.datetime(2016, 3, 23, 3, 3, 44, 95395), u'{}', u'test', 
u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b', 
u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b')]"


We have about 800 flows as of now and each flow is updated in the same logbook 
in a separate eventlet thread.


Every thread calls save_logbook() on the same logbook record. I think this 
function is trying to update the logbook record even though my use case only 
needs the flow details to be inserted, and it doesn't update any information 
related to the logbook.



Right, it's trying to update the 'updated_at' field afaik,



Probably one of the threads was holding the lock while updating, and the 
others tried to acquire the lock and failed after the default timeout elapsed.


I can think of a few alternatives at the moment:


1. Increase the number of logbooks

2. Increase the innodb_lock_wait_timeout

3. There are some suggestions to set the innodb transaction isolation level to "READ 
COMMITTED" instead of "REPEATABLE READ", but I am not very familiar with the side 
effects it can cause


4. Add some basic retries?

5. The following review should also help (and save less) @ 
https://review.openstack.org/#/c/241441/


Afaik we are also using READ COMMITTED already ;)

https://github.com/openstack/taskflow/blob/master/taskflow/persistence/backends/impl_sqlalchemy.py#L105




I would appreciate your thoughts on the given alternatives, or on an even 
better alternative.


Do u want to try using https://pypi.python.org/pypi/retrying in a few 
strategic places so that if the above occurs, it retries?
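A rough sketch of what that could look like around the logbook save, assuming
the retrying library and keying off the error text shown above (the helper
name and the backend variable are illustrative, not an existing taskflow API):

    import contextlib

    from retrying import retry

    def _is_lock_timeout(exc):
        # crude check based on the error text seen in the logs above
        return 'Lock wait timeout exceeded' in str(exc)

    @retry(retry_on_exception=_is_lock_timeout,
           wait_exponential_multiplier=200,  # back off between attempts
           stop_max_attempt_number=5)
    def save_logbook_with_retry(backend, book):
        # taskflow persistence: connections come from the backend
        with contextlib.closing(backend.get_connection()) as conn:
            conn.save_logbook(book)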





Thanks,

Kanthi






On Sun, Mar 20, 2016 at 10:00 PM, Joshua Harlow > wrote:

Lingxian Kong wrote:

Kanthi, sorry for chiming in, I suggest you may have a chance to
take
a look at Mistral[1], which is the workflow as a service in
OpenStack(or without OpenStack).


Out of curiosity, why? Seems the ML post was about 'TaskFlow
persistence' not mistral, just saying (unsure how it is relevant to
mention mistral in this)...

Back to getting more coffee...

-Josh





Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-23 Thread melanie witt
On Mar 18, 2016, at 9:50, Andrew Laski  wrote:

> I've adapted to the new interface and really like some of the new 
> capabilities it provides, but having the page jump around while I'm 
> commenting has been a huge annoyance.

I may have found a workaround for the scroll jumping after reading through the 
upstream issue comments [1]: use the "Slow" setting in the preferences for 
Render. Click the gear icon in the upper right corner of the diff view and 
click the Render switch to "Slow" and then Apply. It seems to be working for me 
so far.

-melanie

[1] https://code.google.com/p/gerrit/issues/detail?id=3252#c8




[openstack-dev] [murano] HowTo: Compose a local bundle file

2016-03-23 Thread Serg Melikyan
Hi wangzhh,

You can use python-muranoclient in order to download bundle from
apps.openstack.org and then use it somewhere else for the import:

murano bundle-save app-servers

you can find more about this option in corresponding spec [0].

Generally a local bundle is no different from the remote one; you can
take a look at the internals of the same bundle [1]. If you download
this file and then try to execute:

murano bundle-import ./app-servers.bundle

murano will try to find all the mentioned packages in the local folder
before going to apps.openstack.org.

References:
[0] 
http://specs.openstack.org/openstack/murano-specs/specs/liberty/bundle-save.html
[1] http://storage.apps.openstack.org/bundles/app-servers.bundle
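If you would rather compose one by hand, a bundle is just a small JSON file
listing package names. A minimal sketch, assuming the usual bundle layout
(the package names here are only examples):

    {
        "Packages": [
            {"Name": "io.murano.apps.apache.Tomcat"},
            {"Name": "io.murano.databases.MySql"}
        ]
    }

Put it next to the package files (or let murano fetch the missing ones from
apps.openstack.org, as described above) and import it with "murano bundle-import".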

On Wed, Mar 23, 2016 at 1:48 AM, 王正浩  wrote:
>
> Hi Serg Melikyan!
> I'm a programmer from China, and I have a question about the Application Servers 
> Bundle (bundle) 
> https://apps.openstack.org/#tab=murano-apps=Application%20Servers%20Bundle
>   I want to import a bundle from a local bundle file. Could you tell me how 
>  to create a bundle file? Is there any doc explaining it?
>   Thanks!
>
>
>
>
> --
> --
> Best Regards,
> wangzhh




-- 
Serg Melikyan



[openstack-dev] Deleting a cluster in Sahara SQL/PyMYSQL Error

2016-03-23 Thread Jerico Revote
Hello,

When trying to delete a cluster in sahara,
I'm getting the following error:

> code 500 and message 'Internal Server Error'
> 2016-03-23 17:25:21.651 18827 ERROR sahara.utils.api 
> [req-d797bbc8-7932-4187-a428-565f9d834f8b ] Traceback (most recent call last):
> OperationalError: (pymysql.err.OperationalError) (2014, 'Command Out of Sync')
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> [req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] DB exception wrapped.
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
> (most recent call last):
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in 
> _execute_context
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> context)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in 
> do_execute
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> cursor.execute(statement, parameters)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 132, in execute
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters result 
> = self._query(query)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 271, in _query
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> conn.query(q)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 726, in query
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> self._affected_rows = self._read_query_result(unbuffered=unbuffered)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 861, in 
> _read_query_result
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> result.read()
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1064, in read
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> first_packet = self.connection._read_packet()
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 825, in 
> _read_packet
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters packet 
> = packet_type(self)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 242, in 
> __init__
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> self._recv_packet(connection)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 248, in 
> _recv_packet
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> packet_header = connection._read_bytes(4)
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters   File 
> "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 839, in 
> _read_bytes
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters if 
> len(data) < num_bytes:
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters TypeError: 
> object of type 'NoneType' has no len()
> 2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters 
> 2016-03-23 17:25:35.808 18823 ERROR sahara.utils.api 
> [req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] Request aborted with status code 
> 500 and message 'Internal Server Error'
> 2016-03-23 17:25:35.809 18823 ERROR sahara.utils.api 
> [req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] Traceback (most recent call last):
> OperationalError: (pymysql.err.OperationalError) (2014, 'Command Out of Sync')

Any idea what this could mean?
As a result, sahara clusters are stuck in the "Deleting" state. Thanks.

> dpkg -l | grep -i sahara
> ii  python-sahara1:3.0.0-0ubuntu1~cloud0  all 
>  OpenStack data processing cluster as a service - library
> ii  sahara-api   1:3.0.0-0ubuntu1~cloud0  all 
>  OpenStack data processing cluster as a service - API
> ii  sahara-common1:3.0.0-0ubuntu1~cloud0  all 
>  OpenStack data processing cluster as a service - common files


Regards,

Jerico




Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-23 Thread Douglas Mendizábal

Comments inline.

- Douglas Mendizábal

On 3/23/16 5:15 PM, Fox, Kevin M wrote:
> So, this is where things start getting a little ugly and undefined... This is 
> what I've been able to gather so far, so please someone correct me if I'm 
> wrong.
> 
> Barbican is the OpenStack secret manager. It provides a standard OpenStack 
> api for users to be able to store/retrieve secrets... It's pluggable and in 
> theory, you could add a vault plugin to it. Barbican is then your abstraction 
> layer.
> 

This is absolutely correct, and it's a point that I'm starting to think
is not very well understood outside of the Barbican and Security teams.
 Barbican is not in-and-of-itself a key management solution.  It
requires some backend to be used where the actual secret storage is
done.  This could be a PKCS#11 Hardware Security Module like we use at
Rackspace, or it could be a KMIP HSM, or DogTag, or Hashicorp Vault.

In a similar way in which Keystone can use a deployer's existing identity
system, Barbican should be able to use an existing key management
system.  So I agree that integrations with other key managers belong in
Barbican as SecretStore plugins.

> Separate from that is Castellan, which is a pluggable abstraction library on 
> the client side. So a Vault plugin could be created for it instead.
> 

I think Castellan is attractive to projects looking for secure
storage/retrieval of secrets because it superficially avoids a hard
dependency on Barbican.  The problem with using Castellan is that
because it's a common-lowest-denominator secret storage it cannot
provide the features that Barbican provides.

One of the features that Barbican provides that Castellan cannot is that
of multi-tenancy.  So projects that choose to use Castellan are limited
to a single account on the key manager backend that is chosen at
deployment time.  This results in all secrets being "owned" by the
service itself instead of the users they're associated with.

Another feature that Barbican provides that Castellan cannot is that of
Scalability.  The PKCS#11 Plugin in Barbican can provide virtually
unlimited storage capacity while still providing the security assurances
of the HSM backend.  But when you choose Castellan, you will be limited
by the capacity of the service used in the backend.

And while there is a Castellan->Barbican implementation, it only
provides scalability, but it loses the multi-tenancy.
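For readers unfamiliar with it, this is roughly how a service consumes
Castellan today; the context handling and oslo.config setup are elided, and
the backend (Barbican or otherwise) is selected purely by configuration:

    from castellan.common.objects import passphrase
    from castellan import key_manager

    manager = key_manager.API()   # backend chosen via oslo.config
    secret_ref = manager.store(context, passphrase.Passphrase('s3cr3t'))
    secret = manager.get(context, secret_ref)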


> My personal preference is to have a standard rest api over having a standard 
> python client api in a cloud. It's more the OpenStack way. I'll leave it up to 
> other sources to get into why a rest api's better.
> 
> That being said, there's still the elephant in the room I think of:
> 
> How do you securely get a secret to the vm, to allow you to get secrets from 
> the secret store? I've been working on that use case for over a year now with 
> little traction. :/ 
> 
> Either Castellan, Barbican, or talking directly to Vault will have that 
> issue. How do you validate your VM with that service?
> 
> The current endeavor to address the situation is located here: 
> https://review.openstack.org/#/c/93/
> 

Thanks for fighting the good fight, Kevin.  I'm still hopeful that we
can solve this problem.

> We really need to get all the OpenStack projects together and address this 
> issue as a whole. Everyone's now trying or has already worked around it in 
> some not so nice ways. Amazon has had a solution for years now. Why can't we 
> address it?
> 
> Thanks,
> Kevin
> 
> 
> 
> 
> From: Ian Cordasco [sigmaviru...@gmail.com]
> Sent: Wednesday, March 23, 2016 2:45 PM
> To: Monty Taylor; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
> 
> -Original Message-
> From: Monty Taylor 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: March 22, 2016 at 18:49:41
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [magnum] Streamline adoption of Magnum
> 
>> On 03/22/2016 06:27 PM, Kevin Carter wrote:
>>> /me wearing my deployer hat now: Many of my customers and product folks
>>> want Magnum but they also want magnum to be as secure and stable as
>>> possible. If Barbican is the best long term solution for the project it
>>> would make sense to me that Magnum remain on course with Barbican as the
>>> defacto way of deploying in production. IMHO building alternative means
>>> for certificate management is a distraction and will only confuse folks
>>> looking to deploy Magnum into production.
>>
>> I'm going to agree. This reminds me of people who didn't want to run
>> keystone back in the day. Those people were a distraction, and placating
>> them hampered OpenStack's progress by probably several years.
> 
> Right. Barbican is a good service that is actually 

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 24/03/16 08:01, Doug Hellmann wrote:
> Excerpts from Lana Brindley's message of 2016-03-24 07:14:35 +1000:
>> Hi Mike, and sorry I missed you on IRC to discuss this there. That said, I 
>> think it's great that you took this to the mailing list, especially seeing 
>> the conversation that has ensued.
>>
>> More inline ...
>>
>> On 24/03/16 01:06, Mike Perez wrote:
>>> Hey all,
>>>
>>> I've been talking to a variety of projects about lack of install guides. 
>>> This
>>> came from me not having a great experience with trying out projects in the 
>>> big
>>> tent.
>>>
>>> Projects like Manila have proposed install docs [1], but they were rejected
>>> by the install docs team because it's not in defcore. One of Manila's goals 
>>> of
>>> getting these docs accepted is to apply for the operators tag
>>> ops:docs:install-guide [2] so that it helps their maturity level in the 
>>> project
>>> navigator [3].
>>>
>>> Adrian Otto expressed to me having the same issue for Magnum. I think it's
>>> funny that a project that gets keynote time at the OpenStack conference 
>>> can't
>>> be in the install docs personally.
>>>
>>> As seen from the Manila review [1], the install docs team is suggesting 
>>> these
>>> to be put in their developer guide.
>>
>> As Steve pointed out, these now have solid plans to go in. That was because 
>> both projects opened a conversation with us and we worked with them over 
>> time to give them the docs they required.
>>
>>>
>>> I don't think this is a great idea. Mainly because they are for developers,
>>> operators aren't going to be looking in there for install information. Also 
>>> the
>>> Developer doc page [4] even states "This page contains documentation for 
>>> Python
>>> developers, who work on OpenStack itself".
>>
>> I agree, but it's a great place to start. In fact, I've just merged a change 
>> to the Docs Contributor Guide (on the back of a previous mailing list 
>> conversation) that explicitly states this:
>>
>> http://docs.openstack.org/contributor-guide/quickstart/new-projects.html
> 
> I think you're missing that most of us are disagreeing that it is
> a good place to start. It's fine to have the docs in a repository
> managed by the project team. It's not good at all to publish them
> under docs.o.o/developer because they are not for developers, and
> so it's confusing. This is why we ended up with a different place
> for release notes to be published, instead of just adding reno to
> the existing developer documentation build, for example.
> 

All docs need to be drafted somewhere. I don't care where that is, but I make 
the suggestion of /developer because at least it's accessible there, and also 
because it's managed in the project's own repo. If you want to create a 
different place, or rename /developer to be more inclusive, I think that's a 
great idea.

>>
>>>
>>> The install docs team doesn't want to be swamped with everyone in big tent
>>> giving them their install docs, to be verified, and eventually likely to be
>>> maintained by the install docs team.
>>
>> Which is exactly why we're very selective. Sadly, documenting every big tent 
>> project's install process is no small task.
> 
> Right. The solution to that isn't to say "we aren't going to document
> it at all" or "publish the documentation somewhere less ideal",
> though, which is what it sounds like we're doing now.  It's to say

Actually, I said that I acknowledge that isn't working, and we need to find a 
different solution.

> "you are going to have to manage that document yourself, with the
> docs team answering some questions to get you started using standard
> templates for the document and build jobs".  We need a way for all
> teams to publish things they write to locations outside of their
> developer docs, without the documentation team feeling like they
> are somehow responsible for the results (or, more importantly, for
> readers of the documents to think that).

Yes, which is exactly what we'll be discussing at Summit.

> 
> I like the prominent "file a bug here" link on the new docs theme,
> so if we could reuse that but point the URL to the project's launchpad
> site instead of the documentation team's site, that would be a
> start. We may be able to do other things with the theme to further
> indicate who created the content and how to get help or report
> issues.
> 

Thanks for mentioning this, we'll take it into account during our discussions.

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJW8x3JAAoJELppzVb4+KUy+UoIALNBcuOjdlwogj64zZ1eqIEO
fKYBOVtmoa2KhyNxDPT+QXFxrqkd0k/mOLR9fbJF6d7qWlb7od1Jix1r+wfYkKZh
Nq0zZ8nG+tPmHR9jtRoY6cZGxXHpJRLT8IBN86rMRdryi+xwtAyzbLz1frJ3QEbb
iGr1tllU+T6vN+QChM5R7fB7MA6U3GIARBxQ1Reye/U74UeLLzZTroN20Py0OMYi

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Steven Dake (stdake)


On 3/23/16, 2:36 PM, "Steven Hardy"  wrote:

>On Wed, Mar 23, 2016 at 01:01:03AM +, Fox, Kevin M wrote:
>> +1 for TripleO taking a look at Kolla.
>> 
>> Some random thoughts:
>> 
>> I'm in the middle of deploying a new cloud and I couldn't use either
>>TripleO or Kolla for various reasons. A few reasons for each:
>>  * TripeO - worries me for ever having to do a major upgrade of the
>>software, or needing to do oddball configs like vxlans over ipoib.
>>  * Kolla - At the time it was still immature. No stable artefacts
>>posted. database container recently broke, little documentation for
>>disaster recovery. No upgrade strategy at the time.
>> 
>> Kolla rearchitected recently to support oddball configs like we've had
>>to do at times. They also recently gained upgrade support. I think they
>>are on the right path. If I had to start fresh, I'd very seriously
>>consider using it.
>> 
>> I think Kolla can provide the missing pieces that TripleO needs.
>>TripleO has bare metal deployment down solid. I really like the idea of
>>using OpenStack to deploy OpenStack. Kolla is now OpenStack so should be
>>considered.
>
>As mentioned in another reply, one of the aims of current refactoring work
>in TripleO is to enable folks to leverage the barematal (and networking)
>aspects of TripleO, then hand off to another tool should they so wish.
>
>This could work really well if you wanted to layer ansible deployed kolla
>containers on top of some TripleO deployed nodes (in fact it's one of the
>use-cases we had in mind when deciding to do it).
>
>I do however have several open questions regarding kolla (and the various
>other ansible based solutions like openstack-ansible):
>
>- What does the HA model look like, is active/active HA fully supported
>  accross multiple controllers?

Kolla is active/active HA with a recommended minimum of 3 nodes.  Kolla
does not do network isolation, nor does it detect failure of components.
Docker does detect failure of containers and restarts them, so we are
covered in the general case of a process stop or crash.  In the case of a
node loss, failure detection is not done, which is a weakness in the
current HA implementation.

>- Is SSL fully supported for the deployed services?

External SSL is implemented in Mitaka.  Internal SSL is not.  What this
means is that, as a developer, you can do something like:

kolla-ansible certificates
kolla-ansible deploy
Copy the haproxy-ca.crt file to your workstation and specify the CA in
your openrc
Use API endpoints and horizon with self-signed, encrypted, and authenticated
communication

For real deployments, we don't recommend using kolla-ansible certificates
but instead obtaining a certificate signed by a legitimate signing
authority.  Then the workflow is:

Place certificates in /etc/kolla
kolla-ansible deploy
Use openstack clients as you please.

>- Is IPv6 fully supported?

Considering IPv4-specific address handling is used throughout the Ansible
code, I don't think it would be possible at this time to deploy OpenStack
with Ansible on an IPv6-based network.  If the IPv6 network nodes also had
an IPv4 address, which is commonly how IPv6 is deployed, everything would
work perfectly.

Note, neutron does obviously work with IPv6 out of the box.

>- What integration exists for isolation of network traffic between
>  services?

Could you go into more detail on what you're looking for here?  If you mean
whether our internal management networks and external API networks are
segregated, the answer is yes.  The storage network, tunnel network, and
neutron networks can also be segregated.

>- What's the update/upgrade model, what downtime is associated with minor
>  version updates and upgrades requiring RPC/DB migration?  What's tested
>  in CI in this regard?

We are just starting to test upgrade in the CI/CD system.  This work is in
progress and should finish by Newton 1.

Upgrades involve minimal downtime.  To upgrade you would do something like:

kolla-ansible upgrade

And all of your cloud would migrate to the new version of OpenStack
without VM restarts and with minimal (order of milliseconds) network
interruption for the virtual machines.  During the upgrade process, which
takes approximately 1-2 minutes on my gear, it is possible some services
such as Nova may return errors - or it is possible our serialized rolling
upgrade has zero impact.  We are uncertain on this point as it requires
more evaluation on our end.  We just finished the job on upgrades in
Mitaka, so we are short on downtime metrics.

>
>Very interested to learn more about these, as they are challenges we've
>been facing within the TripleO community lately in the context of our
>current implementation.

A demo would help you understand the various aspects of how Kolla
operates, including deployment, reconfiguration, and upgrade.  It takes about
15 minutes to do all these things on a single node.  Just ping me on IRC
and we can set up a time.  I can't demo multinode at the moment because my
lab is in shambles.

[openstack-dev] Launch of an instance from a bootable volume fails on Xen env

2016-03-23 Thread Benjamin, Arputham
Launch of an instance from a bootable volume fails on Xen env.
The root cause of this issue is that Nova is mapping the disk_dev/disk_bus to 
vda/virtio instead of xvda/xen. (Below is the session output showing the 
launch error.)

Has this been resolved?  Is anyone working on this issue?

Thanks,
Benjamin

2016-03-08 15:07:51.430 3070 INFO nova.virt.block_device 
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8 976963ca04df48c79f0c87ff7a330d47 
310cb58241964e0a92bc939ec1c6a0ff - - -] [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] Booting with volume 
1d33ba84-9ce2-467d-97c5-973a7ed48456 at /dev/vda
2016-03-08 15:07:55.863 3070 INFO nova.virt.libvirt.driver 
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8 976963ca04df48c79f0c87ff7a330d47 
310cb58241964e0a92bc939ec1c6a0ff - - -] [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] Creating image
2016-03-08 15:07:55.864 3070 WARNING nova.virt.libvirt.driver 
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8 976963ca04df48c79f0c87ff7a330d47 
310cb58241964e0a92bc939ec1c6a0ff - - -] [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] File injection into a boot from volume 
instance is not supported
2016-03-08 15:08:01.430 3070 INFO nova.compute.resource_tracker 
[req-4ac03ab4-f8f6-4141-b96d-2968e9664c35 - - - - -] Auditing locally available 
compute resources for node compute.openstack.com
2016-03-08 15:08:02.164 3070 ERROR nova.virt.libvirt.driver 
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8 976963ca04df48c79f0c87ff7a330d47 
310cb58241964e0a92bc939ec1c6a0ff - - -] Error launching a defined domain with 
XML: 
  instance-006c
  d45a5b7b-314f-4bfa-893d-3498e04f04fa
  
http://openstack.org/xmlns/libvirt/nova/1.0;>
  
  bv-vivid-server
  2016-03-08 23:07:55
  
2048
20
0
0
1
  
  
admin
admin
  

  
  2097152
  2097152
  1
  
hvm
/usr/lib/xen/boot/hvmloader

  
  



  
  
  destroy
  restart
  destroy
  

  
  
  
  1d33ba84-9ce2-467d-97c5-973a7ed48456


  
  
  


  


  





  


  


  

  


2016-03-08 15:08:02.167 3070 ERROR nova.compute.manager 
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8 976963ca04df48c79f0c87ff7a330d47 
310cb58241964e0a92bc939ec1c6a0ff - - -] [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] Instance failed to spawn
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] Traceback (most recent call last):
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in 
_build_resources
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] yield resources
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in 
_build_and_run_instance
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] block_device_info=block_device_info)
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2385, in 
spawn
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] block_device_info=block_device_info)
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4403, in 
_create_domain_and_network
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] power_on=power_on)
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4334, in 
_create_domain
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] LOG.error(err)
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] six.reraise(self.type_, self.value, 
self.tb)
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4324, in 
_create_domain
2016-03-08 15:08:02.167 3070 TRACE nova.compute.manager [instance: 
d45a5b7b-314f-4bfa-893d-3498e04f04fa] domain.createWithFlags(launch_flags)

[openstack-dev] [app-catalog] IRC Meeting Thursday March 24th at 17:00UTC

2016-03-23 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for March 24th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Looking forward to seeing everyone there tomorrow!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Fox, Kevin M
+1. I'm going to keep my eye on the split stack stuff closely now. I think this 
could be very useful to our site.

Thanks,
Kevin

From: Steven Hardy [sha...@redhat.com]
Sent: Wednesday, March 23, 2016 3:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

On Wed, Mar 23, 2016 at 10:42:17AM -0500, Michał Jastrzębski wrote:
> Hello,
>
> So Ryan, I think you can make use of heat all the way. Architecture of
> kolla doesn't require you to use ansible at all (in fact, we separate
> ansible code to a different repo). Truth is that ansible-kolla is
> developed by most people and considered "the way to deploy kolla" by
> most of us, but we make sure that we won't cut out other deployment
> engines from our potential.
>
> So bottom line, heat may very well replace ansible code if you can
> duplicate logic we have in playbooks in heat templates. That may
> require docker resource with pretty complete featureset of docker
> itself (named volumes being most important). Bootstrap is usually done
> inside container, so that would be possible too.
>
> To be honest, as for tripleo doing just bare metal deployment would
> defeat idea of tripleo. We have bare metal deployment tools already
> (cobbler which is used widely, bifrost which use ansible same as kolla
> and integration would be easier), and these comes with significantly
> less footprint than whole tripleo infrastructure. Strength of tripleo
> comes from it's rich config of openstack itself, and I think that
> should be portable to kolla.

Honestly I don't think you can compare TripleO, which offers all the
features of Ironic, Ironic-Inspector, Neutron and Nova with Cobbler, it's
just not an apples-to-apples comparison IMHO.

Even if you used TripleO "just" for the baremetal deployment part, you gain
all of this for free:

- Pluggable node power management (with great vendor support) via Ironic
- Node introspection and benchmarking via ironic-inspector
- Rule based profile matching based on introspection data
- Control of node placement via nova flavors/filters
- Declarative configuration of physical networking
- Very flexible configuration of isolated overlay networks
- Pre-configured Heat, Mistral, Zaqar and Swift (should you choose to use
  them)

Yes, you could "just" provision the nodes via a simpler provisioning
tool, or even via Ironic standalone like in bifrost, but as a deployment
tool the TripleO undercloud is pretty nice when you look at the features we
have integrated, and should enable a clean hand-off to ansible or whatever
when we get the split-stack rearchitecting done during Newton.

As a side-note, it'd be great to get better collaboration around the various
teams using Ironic in this way, vs more tenant facing use-cases, and
personally I see that as completely aligned with the idea of TripleO,
nobody ever said you had to use all the pieces (just like OpenStack! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-23 Thread Fox, Kevin M
So, this is where things start getting a little ugly and undefined... This is 
what I've been able to gather so far, so please someone correct me if I'm wrong.

Barbican is the OpenStack secret manager. It provides a standard OpenStack API 
for users to be able to store/retrieve secrets... It's pluggable and, in theory, 
you could add a Vault plugin to it. Barbican is then your abstraction layer.

Separate from that is Castellan, which is a pluggable abstraction library on 
the client side. So a Vault plugin could be created for it instead.

My personal preference is to have a standard REST API over having a standard 
Python client API in a cloud. It's more the OpenStack way. I'll leave it up to 
other sources to get into why a REST API is better.

That being said, there's still the elephant in the room I think of:

How do you securely get a secret to the vm, to allow you to get secrets from 
the secret store? I've been working on that use case for over a year now with 
little traction. :/ 

Either Castellan, Barbican, or talking directly to Vault will have that issue. 
How do you validate your VM with that service?

The current endeavor to address the situation is located here: 
https://review.openstack.org/#/c/93/

We really need to get all the OpenStack projects together and address this 
issue as a whole. Everyone's now trying or has already worked around it in some 
not so nice ways. Amazon has had a solution for years now. Why can't we address 
it?

Thanks,
Kevin




From: Ian Cordasco [sigmaviru...@gmail.com]
Sent: Wednesday, March 23, 2016 2:45 PM
To: Monty Taylor; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum

-Original Message-
From: Monty Taylor 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 22, 2016 at 18:49:41
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [magnum] Streamline adoption of Magnum

> On 03/22/2016 06:27 PM, Kevin Carter wrote:
> > /me wearing my deployer hat now: Many of my customers and product folks
> > want Magnum but they also want magnum to be as secure and stable as
> > possible. If Barbican is the best long term solution for the project it
> > would make sense to me that Magnum remain on course with Barbican as the
> > defacto way of deploying in production. IMHO building alternative means
> > for certificate management is a distraction and will only confuse folks
> > looking to deploy Magnum into production.
>
> I'm going to agree. This reminds me of people who didn't want to run
> keystone back in the day. Those people were a distraction, and placating
> them hampered OpenStack's progress by probably several years.

Right. Barbican is a good service that is actually necessary for a cloud (see 
also other similar services announced by the likes of Hashicorp). Magnum 
relying on it to securely store information makes perfect sense.

> >> Some ops teams are willing to
> >> adopt a new service, but not two. They only want to add Magnum and not
> >> Barbican.
> >
> > It would seem to me that once the Barbican dependency is well
> > documented, which it should be at this point, Barbican is be easy to
> > accept especially with the understanding of why it is needed. Many of
> > the deployment projects are creating the automation needed to make the
> > adoption of services simpler and I'd imagine deployment automation is
> > the largest hurdle to wide spread adoption for both Barbican and Magnum.
> > If the OPS team you mention does not want both services it would seem
> > they can use "local file" option; this is similar to Cinder+LVM and
> > Glance+file_store both of which have scale operational issues in production.
>
> Agree.

If it wasn't obvious, I also agree. That said, people will still try to use 
these to avoid the illusion of additional costs. They tend to ignore the cost 
of the repercussions of these decisions down the road.

> >> We think that once those operators become familiar with
> >> Magnum, adding Barbican will follow. In the mean time, we’d like to
> >> offer a Barbican alternative that allows Magnum to scale beyond one
> >> conductor, and allows for encrypted storage of TLC credentials needed
> >> for unattended bay operations.
> >
> > If all of this is to simplify or provide for the developer/"someone
> > kicking the tires" use case I'd imagine the "local file" storage would
> > be sufficient. If the acceptance of Barbican is too much to handle or
> > introduce into an active deployment (I'm not sure why that would be
> > especially if they're adding Magnum), the synchronization of locally
> > stored certificates across multiple hosts is manageable and can be
> > handled by a very long list of other pre-existing operational means.

Right, they may be using any of the other recently 

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2016-03-23 14:46:16 -0700:
> On Thu, Mar 24, 2016 at 07:14:35AM +1000, Lana Brindley wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA256
> > 
> > Hi Mike, and sorry I missed you on IRC to discuss this there. That said, I 
> > think it's great that you took this to the mailing list, especially seeing 
> > the conversation that has ensued.
> > 
> > More inline ...
> > 
> > On 24/03/16 01:06, Mike Perez wrote:
> > > Hey all,
> > > 
> > > I've been talking to a variety of projects about lack of install guides. 
> > > This
> > > came from me not having a great experience with trying out projects in 
> > > the big
> > > tent.
> > > 
> > > Projects like Manila have proposed install docs [1], but they were 
> > > rejected
> > > by the install docs team because it's not in defcore. One of Manila's 
> > > goals of
> > > getting these docs accepted is to apply for the operators tag
> > > ops:docs:install-guide [2] so that it helps their maturity level in the 
> > > project
> > > navigator [3].
> > > 
> > > Adrian Otto expressed to me having the same issue for Magnum. I think it's
> > > funny that a project that gets keynote time at the OpenStack conference 
> > > can't
> > > be in the install docs personally.
> > > 
> > > As seen from the Manila review [1], the install docs team is suggesting 
> > > these
> > > to be put in their developer guide.
> > 
> > As Steve pointed out, these now have solid plans to go in. That was because 
> > both projects opened a conversation with us and we worked with them over 
> > time to give them the docs they required.
> > 
> > > 
> > > I don't think this is a great idea. Mainly because they are for 
> > > developers,
> > > operators aren't going to be looking in there for install information. 
> > > Also the
> > > Developer doc page [4] even states "This page contains documentation for 
> > > Python
> > > developers, who work on OpenStack itself".
> > 
> > I agree, but it's a great place to start. In fact, I've just merged a 
> > change to the Docs Contributor Guide (on the back of a previous mailing 
> > list conversation) that explicitly states this:
> > 
> > http://docs.openstack.org/contributor-guide/quickstart/new-projects.html
> > 
> > > 
> > > The install docs team doesn't want to be swamped with everyone in big tent
> > > giving them their install docs, to be verified, and eventually likely to 
> > > be
> > > maintained by the install docs team.
> > 
> > Which is exactly why we're very selective. Sadly, documenting every big 
> > tent project's install process is no small task.
> 
> I'd love to have some sort of plugin system, where teams can be
> responsible for their own install guide repo, with a single line in the
> RST for the install guide to have it include the repo in the build.
> 
> // jim

Why do we need to have one install guide? Why not separate guides for
the peripheral projects?

Doug

> 
> > 
> > > 
> > > However, as an operator when I go docs.openstack.org under install guides,
> > > I should know how to install any of the big tent projects. These are 
> > > accepted
> > > projects by the Technical Committee.
> > > 
> > > Lets consider the bigger picture of things here. If we don't make this
> > > information accessible, projects have poor adoption and get less feedback
> > > because people can't attempt to install them to begin reporting bugs.
> > 
> > I agree. This has been an issue for several cycles now, but with all our 
> > RST conversions now (mostly) behind us, I feel like we can dedicate the 
> > Newton cycle to improving how we do things. Exactly how that happens will 
> > need to be determined by the docs team in the Austin Design Summit, and I 
> > strongly suggest you intend to attend that session once we have it 
> > scheduled, as your voice is important in this conversation.
> > 
> > Lana
> > 
> > - -- 
> > Lana Brindley
> > Technical Writer
> > Rackspace Cloud Builders Australia
> > http://lanabrindley.com
> > -BEGIN PGP SIGNATURE-
> > Version: GnuPG v2
> > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> > 
> > iQEcBAEBCAAGBQJW8wc7AAoJELppzVb4+KUywYMIAMr78Gw+zPp3LXyqxkQFPs9y
> > mo/GJrfQ9OLD6CXpKSxcmvnuaHP1vHRrXPqkE02zb6YTOxV3C3CIW7hf023Dihwa
> > uED5kL7DrkTO+xFrjClkVRpKit/ghWQ3By/V9yaYjgWQvvRy3/Y+dvjZHnrDDHE1
> > rIxbU4PVZ0LPTxI7nNy71ffxFXW2Yn9Pl6EJnVm/iu9R+BNfRHgQ3kdqalG+Ppat
> > 9tZIGpxzi5/dTS9dTf5zN2GqYzYoDR8J6C/O/ojWyOjwcycvqWH0XboV7usLLMR8
> > 77RB/Ob8WszpbHZ6+yJF3P9hJhwhFXs8UJFcapkwaMy7wu8Lt0+etgC8nPDFj9I=
> > =hsaE
> > -END PGP SIGNATURE-
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Steven Hardy
On Wed, Mar 23, 2016 at 10:42:17AM -0500, Michał Jastrzębski wrote:
> Hello,
> 
> So Ryan, I think you can make use of heat all the way. Architecture of
> kolla doesn't require you to use ansible at all (in fact, we separate
> ansible code to a different repo). Truth is that ansible-kolla is
> developed by most people and considered "the way to deploy kolla" by
> most of us, but we make sure that we won't cut out other deployment
> engines from our potential.
> 
> So bottom line, heat may very well replace ansible code if you can
> duplicate logic we have in playbooks in heat templates. That may
> require docker resource with pretty complete featureset of docker
> itself (named volumes being most important). Bootstrap is usually done
> inside container, so that would be possible too.
> 
> To be honest, as for tripleo doing just bare metal deployment would
> defeat idea of tripleo. We have bare metal deployment tools already
> (cobbler which is used widely, bifrost which use ansible same as kolla
> and integration would be easier), and these comes with significantly
> less footprint than whole tripleo infrastructure. Strength of tripleo
> comes from it's rich config of openstack itself, and I think that
> should be portable to kolla.

Honestly I don't think you can compare TripleO, which offers all the
features of Ironic, Ironic-Inspector, Neutron and Nova with Cobbler, it's
just not an apples-to-apples comparison IMHO.

Even if you used TripleO "just" for the baremetal deployment part, you gain
all of this for free:

- Pluggable node power management (with great vendor support) via Ironic
- Node introspection and benchmarking via ironic-inspector
- Rule based profile matching based on introspection data
- Control of node placement via nova flavors/filters
- Declarative configuration of physical networking
- Very flexible configuration of isolated overlay networks
- Pre-configured Heat, Mistral, Zaqar and Swift (should you choose to use
  them)

Yes, you could "just" provision the nodes via a simpler provisioning
tool, or even via Ironic standalone like in bifrost, but as a deployment
tool the TripleO undercloud is pretty nice when you look at the features we
have integrated, and should enable a clean hand-off to ansible or whatever
when we get the split-stack rearchitecting done during Newton.

As a side-note, it'd be great to get better collaboration around the various
teams using Ironic in this way, vs more tenant facing use-cases, and
personally I see that as completely aligned with the idea of TripleO,
nobody ever said you had to use all the pieces (just like OpenStack! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Doug Hellmann
Excerpts from Lana Brindley's message of 2016-03-24 07:14:35 +1000:
> Hi Mike, and sorry I missed you on IRC to discuss this there. That said, I 
> think it's great that you took this to the mailing list, especially seeing 
> the conversation that has ensued.
> 
> More inline ...
> 
> On 24/03/16 01:06, Mike Perez wrote:
> > Hey all,
> > 
> > I've been talking to a variety of projects about lack of install guides. 
> > This
> > came from me not having a great experience with trying out projects in the 
> > big
> > tent.
> > 
> > Projects like Manila have proposed install docs [1], but they were rejected
> > by the install docs team because it's not in defcore. One of Manila's goals 
> > of
> > getting these docs accepted is to apply for the operators tag
> > ops:docs:install-guide [2] so that it helps their maturity level in the 
> > project
> > navigator [3].
> > 
> > Adrian Otto expressed to me having the same issue for Magnum. I think it's
> > funny that a project that gets keynote time at the OpenStack conference 
> > can't
> > be in the install docs personally.
> > 
> > As seen from the Manila review [1], the install docs team is suggesting 
> > these
> > to be put in their developer guide.
> 
> As Steve pointed out, these now have solid plans to go in. That was because 
> both projects opened a conversation with us and we worked with them over time 
> to give them the docs they required.
> 
> > 
> > I don't think this is a great idea. Mainly because they are for developers,
> > operators aren't going to be looking in there for install information. Also 
> > the
> > Developer doc page [4] even states "This page contains documentation for 
> > Python
> > developers, who work on OpenStack itself".
> 
> I agree, but it's a great place to start. In fact, I've just merged a change 
> to the Docs Contributor Guide (on the back of a previous mailing list 
> conversation) that explicitly states this:
> 
> http://docs.openstack.org/contributor-guide/quickstart/new-projects.html

I think you're missing that most of us are disagreeing that it is
a good place to start. It's fine to have the docs in a repository
managed by the project team. It's not good at all to publish them
under docs.o.o/developer because they are not for developers, and
so it's confusing. This is why we ended up with a different place
for release notes to be published, instead of just adding reno to
the existing developer documentation build, for example.

> 
> > 
> > The install docs team doesn't want to be swamped with everyone in big tent
> > giving them their install docs, to be verified, and eventually likely to be
> > maintained by the install docs team.
> 
> Which is exactly why we're very selective. Sadly, documenting every big tent 
> project's install process is no small task.

Right. The solution to that isn't to say "we aren't going to document
it at all" or "publish the documentation somewhere less ideal",
though, which is what it sounds like we're doing now.  It's to say
"you are going to have to manage that document yourself, with the
docs team answering some questions to get you started using standard
templates for the document and build jobs".  We need a way for all
teams to publish things they write to locations outside of their
developer docs, without the documentation team feeling like they
are somehow responsible for the results (or, more importantly, for
readers of the documents to think that).

I like the prominent "file a bug here" link on the new docs theme,
so if we could reuse that but point the URL to the project's launchpad
site instead of the documentation team's site, that would be a
start. We may be able to do other things with the theme to further
indicate who created the content and how to get help or report
issues.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-23 Thread Ian Cordasco
 

-Original Message-
From: Monty Taylor 
Reply: Monty Taylor 
Date: March 23, 2016 at 16:52:52
To: Ian Cordasco , OpenStack Development Mailing List 
(not for usage questions) 
Subject:  Re: [openstack-dev] [magnum] Streamline adoption of Magnum

> On 03/23/2016 04:45 PM, Ian Cordasco wrote:
> > -Original Message-
> > From: Monty Taylor  
> > Reply: OpenStack Development Mailing List (not for usage questions)  
> > Date: March 22, 2016 at 18:49:41
> > To: openstack-dev@lists.openstack.org  
> > Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
> >
> >> On 03/22/2016 06:27 PM, Kevin Carter wrote:
> >>> /me wearing my deployer hat now: Many of my customers and product folks
> >>> want Magnum but they also want magnum to be as secure and stable as
> >>> possible. If Barbican is the best long term solution for the project it
> >>> would make sense to me that Magnum remain on course with Barbican as the
> >>> defacto way of deploying in production. IMHO building alternative means
> >>> for certificate management is a distraction and will only confuse folks
> >>> looking to deploy Magnum into production.
> >>
> >> I'm going to agree. This reminds me of people who didn't want to run
> >> keystone back in the day. Those people were a distraction, and placating
> >> them hampered OpenStack's progress by probably several years.
> >
> > Right. Barbican is a good service that is actually necessary for a cloud 
> > (see also other  
> similar services announced by the likes of Hashicorp). Magnum relying on it 
> to securely  
> store information makes perfect sense.
> >
>  Some ops teams are willing to
>  adopt a new service, but not two. They only want to add Magnum and not
>  Barbican.
> >>>
> >>> It would seem to me that once the Barbican dependency is well
> >>> documented, which it should be at this point, Barbican is be easy to
> >>> accept especially with the understanding of why it is needed. Many of
> >>> the deployment projects are creating the automation needed to make the
> >>> adoption of services simpler and I'd imagine deployment automation is
> >>> the largest hurdle to wide spread adoption for both Barbican and Magnum.
> >>> If the OPS team you mention does not want both services it would seem
> >>> they can use "local file" option; this is similar to Cinder+LVM and
> >>> Glance+file_store both of which have scale operational issues in 
> >>> production.
> >>
> >> Agree.
> >
> > If it wasn't obvious, I also agree. That said, people will still try to use 
> > these to avoid  
> the illusion of additional costs. They tend to ignore the cost of the 
> repercussions of  
> these decisions down the road.
> >
>  We think that once those operators become familiar with
>  Magnum, adding Barbican will follow. In the mean time, we’d like to
>  offer a Barbican alternative that allows Magnum to scale beyond one
>  conductor, and allows for encrypted storage of TLC credentials needed
>  for unattended bay operations.
> >>>
> >>> If all of this is to simplify or provide for the developer/"someone
> >>> kicking the tires" use case I'd imagine the "local file" storage would
> >>> be sufficient. If the acceptance of Barbican is too much to handle or
> >>> introduce into an active deployment (I'm not sure why that would be
> >>> especially if they're adding Magnum), the synchronization of locally
> >>> stored certificates across multiple hosts is manageable and can be
> >>> handled by a very long list of other pre-existing operational means.
> >
> > Right, they may be using any of the other recently announced services 
> > created and provided  
> by other groups.
> >
>  A blueprint [2] was recently proposed to
>  address this. We discussed this in our team meeting today [3], where we
>  used an etherpad [4] to collaborate on options that could be used as
>  alternatives besides the ones offered today. This thread is not intended
>  to answer how to make Barbican easier to adopt, but rather how to make
>  Magnum easier to adopt while keeping Barbican as the default
>  best-practice choice for certificate storage.
> >>>
> >>> I'd like there _NOT_ to be an "easy button" way for operators to hang
> >>> themselves in production by following a set of "quick start
> >>> instructions" under the guise of "easy to adopt". If Barbican is the
> >>> best-practice lets keep it that way. If for some reason Barbican is hard
> >>> to adopt lets identify those difficulties and get them fixed. Going down
> >>> the path of NIH or alternative less secure solutions because someone
> >>> (not identified here or speaking for themselves) has said they don't
> >>> want Barbican or deploying it is hard seems like a recipe for
> >>> fragmentation and disaster.
> >>
> >> Agree.
> >
> > Perhaps, if the problem is less that two new services is 

[openstack-dev] [nova][novaclient] Responses for Deletion Events

2016-03-23 Thread Augustina Ragwitz
There's been some discussion regarding a recent bug [1] reporting that no
confirmation/success message is received from "nova agent-delete". This
behavior is inconsistent with other novaclient delete commands, which do
provide a success message.

There are two issues that need to be addressed before this behavior can
be patched:

1) What would represent sufficient expected behavior in this deletion case?

A few options have been suggested in the bug; we should probably have
consensus. We should keep in mind the novaclient is due to be deprecated in
the near future, to be replaced by the openstack-client.

The options suggested include providing a simple success response or
supporting different levels of response data with options. For instance,
only show a message if the user specifies --verbose explicitly. novaclient
is not consistent in its "delete" behavior: some calls require --verbose
while others are verbose by default.
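
For illustration only, a minimal confirmation in the novaclient v2 shell
could look something like the sketch below; the handler name and manager
call follow the existing shell conventions, but the message text is just an
example, not something that has been agreed on:

def do_agent_delete(cs, args):
    """Delete an existing agent build."""
    cs.agents.delete(args.id)
    # Print a confirmation so the behavior matches other delete commands.
    print("Request to delete agent build %s has been accepted." % args.id)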

2) How does the openstack-client behave for deletions? Should we be
consistent with that in our own client?

I've been digging around in the available documentation for the OpenStack
client and didn't see response types documented. This issue has also not
been addressed in the HIG (Human Interface Guidelines) or other high-level documentation. I
posted a question in the #openstack-sdks channel to see if anyone knows the
answer to this.

This might be a good opportunity to think about a standard for deletion
responses if one hasn't been defined already.


[1] https://bugs.launchpad.net/python-novaclient/+bug/1557888

---
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-23 Thread Monty Taylor

On 03/23/2016 04:45 PM, Ian Cordasco wrote:

-Original Message-
From: Monty Taylor 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 22, 2016 at 18:49:41
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [magnum] Streamline adoption of Magnum


On 03/22/2016 06:27 PM, Kevin Carter wrote:

/me wearing my deployer hat now: Many of my customers and product folks
want Magnum but they also want magnum to be as secure and stable as
possible. If Barbican is the best long term solution for the project it
would make sense to me that Magnum remain on course with Barbican as the
defacto way of deploying in production. IMHO building alternative means
for certificate management is a distraction and will only confuse folks
looking to deploy Magnum into production.


I'm going to agree. This reminds me of people who didn't want to run
keystone back in the day. Those people were a distraction, and placating
them hampered OpenStack's progress by probably several years.


Right. Barbican is a good service that is actually necessary for a cloud (see 
also other similar services announced by the likes of Hashicorp). Magnum 
relying on it to securely store information makes perfect sense.


Some ops teams are willing to
adopt a new service, but not two. They only want to add Magnum and not
Barbican.


It would seem to me that once the Barbican dependency is well
documented, which it should be at this point, Barbican is be easy to
accept especially with the understanding of why it is needed. Many of
the deployment projects are creating the automation needed to make the
adoption of services simpler and I'd imagine deployment automation is
the largest hurdle to wide spread adoption for both Barbican and Magnum.
If the OPS team you mention does not want both services it would seem
they can use "local file" option; this is similar to Cinder+LVM and
Glance+file_store both of which have scale operational issues in production.


Agree.


If it wasn't obvious, I also agree. That said, people will still try to use 
these to avoid the illusion of additional costs. They tend to ignore the cost 
of the repercussions of these decisions down the road.


We think that once those operators become familiar with
Magnum, adding Barbican will follow. In the mean time, we’d like to
offer a Barbican alternative that allows Magnum to scale beyond one
conductor, and allows for encrypted storage of TLC credentials needed
for unattended bay operations.


If all of this is to simplify or provide for the developer/"someone
kicking the tires" use case I'd imagine the "local file" storage would
be sufficient. If the acceptance of Barbican is too much to handle or
introduce into an active deployment (I'm not sure why that would be
especially if they're adding Magnum), the synchronization of locally
stored certificates across multiple hosts is manageable and can be
handled by a very long list of other pre-existing operational means.


Right, they may be using any of the other recently announced services created 
and provided by other groups.


A blueprint [2] was recently proposed to
address this. We discussed this in our team meeting today [3], where we
used an etherpad [4] to collaborate on options that could be used as
alternatives besides the ones offered today. This thread is not intended
to answer how to make Barbican easier to adopt, but rather how to make
Magnum easier to adopt while keeping Barbican as the default
best-practice choice for certificate storage.


I'd like there _NOT_ to be an "easy button" way for operators to hang
themselves in production by following a set of "quick start
instructions" under the guise of "easy to adopt". If Barbican is the
best-practice lets keep it that way. If for some reason Barbican is hard
to adopt lets identify those difficulties and get them fixed. Going down
the path of NIH or alternative less secure solutions because someone
(not identified here or speaking for themselves) has said they don't
want Barbican or deploying it is hard seems like a recipe for
fragmentation and disaster.


Agree.


Perhaps, if the problem is less that two new services is problematic and the 
real problem is one (or some combination) of:

- Magnum and Barbican's dependency is poorly (if at all) documented
- Magnum and Barbican don't have documentation to deploy the services
- Magnum should support a variety of services like Barbican (e.g., adding 
support for Vault)

There are things that can be done. One thing is writing documentation. Another 
would be a driver for Vault or any other service the community might want. 
(Which is not to imply that there is a desire to rely on those, just that those 
are services that might be easier for our hypothetical operators to deploy.)


Yah. I'm not crazy about OpenStack services spending much time worrying 
about integration with other 

Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-23 Thread Nikhil Komawar
Just throwing this out there:

Maybe the sessions are being held open o_O? If you're using sqlalchemy to talk
to the DB, then maybe open and close the sessions per transaction rather than
keeping them open for all threads?
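
A minimal sketch of that session-per-transaction pattern with plain
SQLAlchemy and the pymysql driver (the connection URL and the update
callback are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql+pymysql://user:pass@db-host/taskflow')
Session = sessionmaker(bind=engine)

def save_flow_detail(update_fn):
    # One short-lived session (and transaction) per update, instead of a
    # long-lived session shared across all the eventlet threads.
    session = Session()
    try:
        update_fn(session)
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()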

On 3/23/16 3:49 PM, pnkk wrote:
> Joshua,
>
> We are performing few scaling tests for our solution and see that
> there are errors as below:
>
> Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'\n  
> InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout 
> exceeded; try restarting transaction') [SQL: u'UPDATE logbooks SET 
> created_at=%s, updated_at=%s, meta=%s, name=%s, uuid=%s WHERE logbooks.uuid = 
> %s'] [parameters: (datetime.datetime(2016, 3, 18, 18, 16, 40), 
> datetime.datetime(2016, 3, 23, 3, 3, 44, 95395), u'{}', u'test', 
> u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b', 
> u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b')]"
> We have about 800 flows as of now and each flow is updated in the same 
> logbook in a separate eventlet thread.
> Every thread calls save_logbook() on the same logbook record. I think this 
> function is trying to update logbook record even though my usecase needs only 
> flow details to be inserted and it doesn't update any information related to 
> logbook.
> Probably one of the threads was holding the lock while updating, and others 
> tried for lock and failed after the default interval has elapsed.
> I can think of few alternatives at the moment:
> 1. Increase the number of logbooks
> 2. Increase the innodb_lock_wait_timeout
> 3. There are some suggestions to make the innodb transaction isolation level 
> to "READ COMMITTED" instead of "REPEATABLE READ", but I am not very familiar 
> of the side effects they can cause
> Appreciate your thoughts on given alternatives or probably even better 
> alternative
> Thanks,
> Kanthi
>
> On Sun, Mar 20, 2016 at 10:00 PM, Joshua Harlow  > wrote:
>
> Lingxian Kong wrote:
>
> Kanthi, sorry for chiming in, I suggest you may have a chance
> to take
> a look at Mistral[1], which is the workflow as a service in
> OpenStack(or without OpenStack).
>
>
> Out of curiosity, why? Seems the ML post was about 'TaskFlow
> persistence' not mistral, just saying (unsure how it is relevant
> to mention mistral in this)...
>
> Back to getting more coffee...
>
> -Josh
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-23 Thread Fox, Kevin M
Can someone verify the code in the review is complete enough to do a full 
migration? Any steps missing or not documented?

Is heat going to want to support v1 resources longer than neutron does? It's 
kind of nice to be able to run a newer dashboard/heat against an older 
something else.

Thanks,
Kevin

From: Doug Wiegley [doug...@parksidesoftware.com]
Sent: Wednesday, March 23, 2016 2:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are 
weready?

Migration script has been submitted, v1 is not going anywhere from 
stable/liberty or stable/mitaka, so it’s about to disappear from master.

I’m thinking in this order:

- remove jenkins jobs
- wait for heat to remove their jenkins jobs ([heat] added to this thread, so 
they see this coming before the job breaks)
- remove q-lbaas from devstack, and any references to lbaas v1 in devstack-gate 
or infra defaults.
- remove v1 code from neutron-lbaas

Since newton is now open for commits, this process is going to get started.

Thanks,
doug



> On Mar 8, 2016, at 11:36 AM, Eichberger, German  
> wrote:
>
> Yes, it’s Database only — though we changed the agent driver in the DB from 
> V1 to V2 — so if you bring up a V2 with that database it should reschedule 
> all your load balancers on the V2 agent driver.
>
> German
>
>
>
>
> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>
>> So this looks like only a database migration, right?
>>
>> -Original Message-
>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>> Sent: Tuesday, March 08, 2016 12:28 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Ok, for what it’s worth we have contributed our migration script: 
>> https://review.openstack.org/#/c/289595/ — please look at this as a starting 
>> point and feel free to fix potential problems…
>>
>> Thanks,
>> German
>>
>>
>>
>>
>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
>>
>>> As far as I recall, you can specify the VIP in creating the LB so you will 
>>> end up with same IPs.
>>>
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>>> Sent: Monday, March 07, 2016 8:30 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>>> weready?
>>>
>>> Hi Sam,
>>>
>>> So if you have some 3rd party hardware you only need to change the
>>> database (your steps 1-5) since the 3rd party hardware will just keep
>>> load balancing…
>>>
>>> Now for Kevin’s case with the namespace driver:
>>> You would need a 6th step to reschedule the loadbalancers with the V2 
>>> namespace driver — which can be done.
>>>
>>> If we want to migrate to Octavia or (from one LB provider to another) it 
>>> might be better to use the following steps:
>>>
>>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>>> Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>>> Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format
>>> file into some scripts which recreate the load balancers with your
>>> provider of choice —
>>>
>>> 6. Run those scripts
>>>
>>> The problem I see is that we will probably end up with different VIPs
>>> so the end user would need to change their IPs…
>>>
>>> Thanks,
>>> German
>>>
>>>
>>>
>>> On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>>>
 As for a migration tool.
 Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, 
 I am in favor for the following process:

 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
    Health Monitors, Members) into some JSON format file(s)
 2. Delete LBaaS v1
 3. Uninstall LBaaS v1
 4. Install LBaaS v2
 5. Import the data from 1 back over LBaaS v2 (need to allow moving from
    flavor1-->flavor2, need to make room for some custom modification for
    mapping between v1 and v2 models)

 What do you think?

 -Sam.




 -Original Message-
 From: Fox, Kevin M [mailto:kevin@pnnl.gov]
 Sent: Friday, March 04, 2016 2:06 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?

 Ok. Thanks for the info.

 Kevin
 
 From: Brandon Logan [brandon.lo...@rackspace.com]
 Sent: Thursday, March 03, 2016 2:42 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?

 Just for clarity, V2 did not reuse tables, all the tables it uses are only 
 

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Jim Rollenhagen
On Thu, Mar 24, 2016 at 07:14:35AM +1000, Lana Brindley wrote:
> 
> Hi Mike, and sorry I missed you on IRC to discuss this there. That said, I 
> think it's great that you took this to the mailing list, especially seeing 
> the conversation that has ensued.
> 
> More inline ...
> 
> On 24/03/16 01:06, Mike Perez wrote:
> > Hey all,
> > 
> > I've been talking to a variety of projects about lack of install guides. 
> > This
> > came from me not having a great experience with trying out projects in the 
> > big
> > tent.
> > 
> > Projects like Manila have proposed install docs [1], but they were rejected
> > by the install docs team because it's not in defcore. One of Manila's goals 
> > of
> > getting these docs accepted is to apply for the operators tag
> > ops:docs:install-guide [2] so that it helps their maturity level in the 
> > project
> > navigator [3].
> > 
> > Adrian Otto expressed to me having the same issue for Magnum. I think it's
> > funny that a project that gets keynote time at the OpenStack conference 
> > can't
> > be in the install docs personally.
> > 
> > As seen from the Manila review [1], the install docs team is suggesting 
> > these
> > to be put in their developer guide.
> 
> As Steve pointed out, these now have solid plans to go in. That was because 
> both projects opened a conversation with us and we worked with them over time 
> to give them the docs they required.
> 
> > 
> > I don't think this is a great idea. Mainly because they are for developers,
> > operators aren't going to be looking in there for install information. Also 
> > the
> > Developer doc page [4] even states "This page contains documentation for 
> > Python
> > developers, who work on OpenStack itself".
> 
> I agree, but it's a great place to start. In fact, I've just merged a change 
> to the Docs Contributor Guide (on the back of a previous mailing list 
> conversation) that explicitly states this:
> 
> http://docs.openstack.org/contributor-guide/quickstart/new-projects.html
> 
> > 
> > The install docs team doesn't want to be swamped with everyone in big tent
> > giving them their install docs, to be verified, and eventually likely to be
> > maintained by the install docs team.
> 
> Which is exactly why we're very selective. Sadly, documenting every big tent 
> project's install process is no small task.

I'd love to have some sort of plugin system, where teams can be
responsible for their own install guide repo, with a single line in the
RST for the install guide to have it include the repo in the build.

// jim

> 
> > 
> > However, as an operator when I go docs.openstack.org under install guides,
> > I should know how to install any of the big tent projects. These are 
> > accepted
> > projects by the Technical Committee.
> > 
> > Lets consider the bigger picture of things here. If we don't make this
> > information accessible, projects have poor adoption and get less feedback
> > because people can't attempt to install them to begin reporting bugs.
> 
> I agree. This has been an issue for several cycles now, but with all our RST 
> conversions now (mostly) behind us, I feel like we can dedicate the Newton 
> cycle to improving how we do things. Exactly how that happens will need to be 
> determined by the docs team in the Austin Design Summit, and I strongly 
> suggest you intend to attend that session once we have it scheduled, as your 
> voice is important in this conversation.
> 
> Lana
> 
> -- 
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-23 Thread Ian Cordasco
-Original Message-
From: Monty Taylor 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 22, 2016 at 18:49:41
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [magnum] Streamline adoption of Magnum

> On 03/22/2016 06:27 PM, Kevin Carter wrote:
> > /me wearing my deployer hat now: Many of my customers and product folks
> > want Magnum but they also want magnum to be as secure and stable as
> > possible. If Barbican is the best long term solution for the project it
> > would make sense to me that Magnum remain on course with Barbican as the
> > defacto way of deploying in production. IMHO building alternative means
> > for certificate management is a distraction and will only confuse folks
> > looking to deploy Magnum into production.
>  
> I'm going to agree. This reminds me of people who didn't want to run
> keystone back in the day. Those people were a distraction, and placating
> them hampered OpenStack's progress by probably several years.

Right. Barbican is a good service that is actually necessary for a cloud (see 
also other similar services announced by the likes of Hashicorp). Magnum 
relying on it to securely store information makes perfect sense.

> >> Some ops teams are willing to
> >> adopt a new service, but not two. They only want to add Magnum and not
> >> Barbican.
> >
> > It would seem to me that once the Barbican dependency is well
> > documented, which it should be at this point, Barbican should be easy to
> > accept, especially with the understanding of why it is needed. Many of
> > the deployment projects are creating the automation needed to make the
> > adoption of services simpler and I'd imagine deployment automation is
> > the largest hurdle to wide spread adoption for both Barbican and Magnum.
> > If the OPS team you mention does not want both services it would seem
> > they can use "local file" option; this is similar to Cinder+LVM and
> > Glance+file_store both of which have scale operational issues in production.
>  
> Agree.

If it wasn't obvious, I also agree. That said, people will still try to use 
these to avoid the illusion of additional costs. They tend to ignore the cost 
of the repercussions of these decisions down the road.

> >> We think that once those operators become familiar with
> >> Magnum, adding Barbican will follow. In the mean time, we’d like to
> >> offer a Barbican alternative that allows Magnum to scale beyond one
> >> conductor, and allows for encrypted storage of TLS credentials needed
> >> for unattended bay operations.
> >
> > If all of this is to simplify or provide for the developer/"someone
> > kicking the tires" use case I'd imagine the "local file" storage would
> > be sufficient. If the acceptance of Barbican is too much to handle or
> > introduce into an active deployment (I'm not sure why that would be
> > especially if they're adding Magnum), the synchronization of locally
> > stored certificates across multiple hosts is manageable and can be
> > handled by a very long list of other pre-existing operational means.

Right, they may be using any of the other recently announced services created 
and provided by other groups.

> >> A blueprint [2] was recently proposed to
> >> address this. We discussed this in our team meeting today [3], where we
> >> used an etherpad [4] to collaborate on options that could be used as
> >> alternatives besides the ones offered today. This thread is not intended
> >> to answer how to make Barbican easier to adopt, but rather how to make
> >> Magnum easier to adopt while keeping Barbican as the default
> >> best-practice choice for certificate storage.
> >
> > I'd like there _NOT_ to be an "easy button" way for operators to hang
> > themselves in production by following a set of "quick start
> > instructions" under the guise of "easy to adopt". If Barbican is the
> > best-practice lets keep it that way. If for some reason Barbican is hard
> > to adopt lets identify those difficulties and get them fixed. Going down
> > the path of NIH or alternative less secure solutions because someone
> > (not identified here or speaking for themselves) has said they don't
> > want Barbican or deploying it is hard seems like a recipe for
> > fragmentation and disaster.
>  
> Agree.

Perhaps the problem is not really that two new services are too many; perhaps 
the real problem is one (or some combination) of:

- Magnum and Barbican's dependency is poorly (if at all) documented
- Magnum and Barbican don't have documentation to deploy the services
- Magnum should support a variety of services like Barbican (e.g., adding 
support for Vault)

There are things that can be done. One thing is writing documentation. Another 
would be a driver for Vault or any other service the community might want. 
(Which is not to imply that there is a desire to rely on those, just that those 
are services 

Re: [openstack-dev] [os-brick][nova][cinder] os-brick/privsep change is done and awaiting your review

2016-03-23 Thread Matt Riedemann



On 3/22/2016 5:37 PM, Angus Lees wrote:

On Sat, 19 Mar 2016 at 06:27 Matt Riedemann > wrote:

I stared pretty hard at the nova rootwrap filter change today [1] and
tried to keep that in my head along with the devstack change and the
changes to os-brick (which depend on the devstack/cinder/nova changes).
And with reading the privsep-helper command code in privsep itself.

I realize this is a bridge to fix the tightly coupled lockstep upgrade
issue between cinder and nova, but it would be super helpful, at least
for me, to chart out how that nova rootwrap filter change fits into the
bigger picture, like what calls what and how, where are things used,
etc.

I see devstack passing on the os-brick change so I'm inclined to almost
blindly approve to just keep moving, but I'd feel bad about that. Would
it be possible to flow chart this out somewhere?

Sorry for all the confusion Matt.  I obviously explained it poorly in my
gerrit reply to you and I presume also in the parts of the oslo spec
that you've read, so I'll try another explanation here:

privsep fundamentally involves two processes - the regular (nova,
whatever) unprivileged code, and a companion Unix process running with
some sort of elevated privileges (different uid/gid, extra Linux
capabilities, whatever).  These two processes talk to each other over a
Unix socket in the obvious way.

*Conceptually* the companion privileged process is a fork from the
unprivileged process - in that the python environment (oslo.config, etc)
tries to be as similar as possible and writing code that runs in the
privileged process looks just like python defined in the original
process but with a particular decorator.

privsep has two modes of setting up this split-process-with-ipc-channel
arrangement:
- One is to use a true fork(), which follows the traditional Unix daemon
model of starting with full privileges (from init or whatever) and then
dropping privileges later - this avoids sudo, is more secure (imo), and
is a whole lot simpler in the privsep code, but requires a change to the
way OpenStack services are deployed, and a function call at the top of
main() before dropping privileges.
- The second is to invoke sudo or sudo+rootwrap from the unprivileged
process to run the "privsep-helper" command that you see in this
change.  This requires no changes to the way OpenStack services are
deployed, so is the method I'm recommending atm.  (We may never actually
use the fork() method tbh given how slowly things change in OpenStack.)
  It is completely inconsequential whether this uses sudo or
sudo+rootwrap - it just affects whether you need to add a line to
sudoers or rootwrap filters.  I chose rootwrap filter here because I
thought we had greater precedent for that type of change.

So hopefully that makes the overall picture clear:  We need this nova
rootwrap filter addition so privsep-helper can use sudo+rootwrap to
become root, so it can switch to the right set of elevated privileges,
so we can run the relevant privsep-decorated privileged functions in
that privileged environment.

I also have a concern in there about how the privsep-helper rootwrap
command in nova is only using the os-brick context. What happens if
os-vif and nova need to share common rootwrap commands? At the midcycle
Jay and Daniel said there weren't any coming up soon, but if that
happens, how do we handle it?


privsep is able to have different "privileged contexts", which can each
run as different uids and with different Linux capabilities.  In
practice each context has its own privileged process, and if we're using
the sudo(+rootwrap) and privsep-helper method, then each context will
want its own line in sudoers or rootwrap filters.
It is expected that most OpenStack services would only have one or maybe
two different contexts, but nova may end up with a few more because it
has its fingers in so many different pies.  So yes, we'll want another
entry similar to this one for os-vif - presumably os-vif will want
CAP_NET_ADMIN, whereas os-brick wants various storage device-related
capabilities.
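
In case a concrete shape helps more than prose: below is a minimal sketch of
how a privsep context and a privileged function are typically declared with
oslo.privsep. The context name, config section and capability list here are
invented for illustration; this is not the actual os-brick or os-vif code.

    # Illustrative sketch only; names and capabilities are made up.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section='example_privsep',          # hypothetical config section
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN],      # a storage-ish capability set
    )

    @default.entrypoint
    def read_first_block(path):
        # Runs in the privileged companion process; the unprivileged process
        # just calls read_first_block() and the call is proxied over the
        # privsep Unix socket.
        with open(path, 'rb') as f:
            return f.read(512)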


Again, I'm disappointed the relevant section of the privsep spec failed
to explain the above sufficiently - if this conversation helps clarify
it for you, *please* suggest some better wording for the spec.  It seems
(understandably!) no-one wants to approve even the smallest
self-contained privsep-related change without understanding the entire
overall process, so I feel like I've had the above conversation about 10
times now.  It would greatly improve everyone's productivity if we can
get the spec (or some new doc) to a place where it can become the place
where people learn about privsep, and they don't have to wait for me to
reply with poorly summarised versions.

  - Gus


__
OpenStack Development 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Jeff Peeler
On Wed, Mar 23, 2016 at 1:34 PM, Zane Bitter  wrote:
> On 23/03/16 07:54, Ryan Hallisey wrote:
>>
>> *Snip*
>>
>>> Indeed, this has literally none of the benefits of the ideal Heat
>>> deployment enumerated above save one: it may be entirely the wrong tool
>>> in every way for the job it's being asked to do, but at least it is
>>> still well-integrated with the rest of the infrastructure.
>>
>>
>>> Now, at the Mitaka summit we discussed the idea of a 'split stack',
>>> where we have one stack for the infrastructure and a separate one for
>>> the software deployments, so that there is no longer any tight
>>> integration between infrastructure and software. Although it makes me a
>>> bit sad in some ways, I can certainly appreciate the merits of the idea
>>> as well. However, from the argument above we can deduce that if this is
>>> the *only* thing we do then we will end up in the very worst of all
>>> possible worlds: the wrong tool for the job, poorly integrated. Every
>>> single advantage of using Heat to deploy software will have evaporated,
>>> leaving only disadvantages.
>>
>>
>> I think Heat is a very powerful tool; having done the container integration
>> into the tripleo-heat-templates, I can see its appeal.  Something I learned
>> from that integration was that Heat is not the best tool for container
>> deployment, at least right now.  We were able to leverage the work in Kolla,
>> but what it came down to was that we're not using containers or Kolla to
>> their max potential.
>>
>> I did an evaluation recently of tripleo and kolla to see what we would
>> gain
>> if the two were to combine. Let's look at some items on tripleo's roadmap.
>> Split stack, as mentioned above, would be gained if tripleo were to adopt
>> Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
>> and deployment, thereby allowing each piece of the stack to be decoupled.
>> Composable roles would mean the ability to land services onto separate hosts
>> on demand; Kolla already does this [1].  Finally, container integration is
>> just a given :).
>>
>> In the near term, if tripleo were to adopt Kolla as its overcloud it would
>> be provided these features and retire heat to setting up the baremetal
>> nodes
>> and providing those ips to ansible.  This would be great for kolla too
>> because
>> it would provide baremetal provisioning.
>>
>> Ian Main and I are currently working on a POC for this as of last week
>> [2].
>> It's just a simple heat template :).
>>
>> I think further down the road we can evaluate using kubernetes [3].
>> For now though,  kolla-anisble is rock solid and is worth using for the
>> overcloud.
>
>
> My concern about kolla-ansible is that the requirements might start getting
> away from what the original design was intended to cope with, and that it
> may prove difficult to extend. For example, I wrote about the idea of doing
> the container deployments with pure Heat:
>
>>> What's more, we are going to need some way of redistributing services
>>> when a machine in the cluster fails, and ultimately we would like that
>>> process to be automated, which would *require* a template generation
>>> service.
>>>
>>> We certainly *could* build all of that. But we definitely shouldn't
>
>
> and to my mind kolla-ansible is in a similar category in that respect (it
> does, of course, have an existing community and in that sense is still
> strictly superior to the pure-Heat approach). There's lots of stuff in e.g.
> Kubernetes that it seems likely we'll want and, while there's no
> _theoretical_ obstacle to implementing them in Ansible, these are hard,
> subtle problems which are presumably better left to a specialist project.

Fully agree with Zane here. I'm not really excited by using anything
other than Kubernetes for container orchestration (unless it's mesos,
but my understanding is it's a bit more heavyweight). Though I am
certainly okay with using kolla-ansible to generate config for all the
OpenStack services, but if it's faster to use puppet then that's fine
too.

The only drawback that I know of is that the Kolla containers would
need modifying since Kubernetes has no notion of dependencies. But
perhaps I'm getting ahead of myself...

> I'd be happy to hear other opinions on that though. Maybe we don't care
> about any of that container cluster management stuff, and if something fails
> we just let everything run degraded until we can pull in a replacement? I
> don't know.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Ian Cordasco
 

-Original Message-
From: Doug Hellmann 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 23, 2016 at 15:04:15
To: openstack-dev 
Subject:  Re: [openstack-dev] [release] [pbr] semver on master branches after 
RC WAS Re: How do I calculate the semantic version prior to a release?

> Excerpts from Ian Cordasco's message of 2016-03-22 16:39:02 -0500:
> >
> >
> > -Original Message-
> > From: Alan Pevec  
> > Reply: OpenStack Development Mailing List (not for usage questions)  
> > Date: March 22, 2016 at 14:21:47
> > To: OpenStack Development Mailing List (not for usage questions)  
> > Subject: Re: [openstack-dev] [release] [pbr] semver on master branches 
> > after RC WAS  
> Re: How do I calculate the semantic version prior to a release?
> >
> > > The release team discussed this at the summit and agreed that it didn't
> > > really matter. The only folks seeing the auto-generated versions are
> > > those doing CD from git, and they should not be mixing different branches
> > > of a project in a given environment. So I don't think it is strictly
> > > necessary to raise the major version, or give pbr the hint to do so.
> > >
> > > ok, I'll send confused RDO trunk users here :)
> > > That means until first Newton milestone tag is pushed, master will
> > > have misleading version. Newton schedule is not defined yet but 1st
> > > milestone is normally 1 month after Summit, and 2 months from now is
> > > rather large window.
> >
> > This affects other OpenStack projects like the OpenStack Ansible project
> > which builds from trunk and does periodic upgrades from the latest stable
> > branch to whatever is running on master. Further they're using pip and this
> > will absolutely cause headaches upgrading that.
>  
> Are you saying the Ansible playbooks install server projects using pip?
> For that to be a problem they would have to be installing from git URLs
> or directly from tarballs. Is that the case?

The project will build wheels first. The wheels generated tend to look 
something like 13.0.0.0rc2.dev10 when they're built because of pbr.

If someone is doing CD with the openstack-ansible project and they deploy 
mitaka once it has a final tag, then they decide to upgrade to run master, they 
could run into problems upgrading. That said, I think my team is the only team 
doing this. (Or at least, none of the other active members of the IRC channel 
talk about doing this.) So it might not be anything more than a "nice to have" 
especially since no one else from the project has chimed in.
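
To make the pip side of this concrete, here is a small illustration using the
packaging library (the PEP 440 implementation pip relies on). The version
strings are examples only, not real release numbers.

    # Illustration only: how a pbr-generated dev version compares to a final tag.
    from packaging.version import Version

    deployed = Version("13.0.0")              # hypothetical final tag already installed
    master = Version("13.0.0.0rc2.dev10")     # the kind of version pbr generates on master

    # The rc/dev version sorts as a pre-release of the same release number,
    # so pip will not treat the master build as an upgrade of the final tag.
    print(master < deployed)  # True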

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Steven Hardy
On Wed, Mar 23, 2016 at 01:01:03AM +, Fox, Kevin M wrote:
> +1 for TripleO taking a look at Kolla.
> 
> Some random thoughts:
> 
> I'm in the middle of deploying a new cloud and I couldn't use either TripleO 
> or Kolla for various reasons. A few reasons for each:
>  * TripleO - worries me about ever having to do a major upgrade of the software, 
> or needing to do oddball configs like vxlans over ipoib.
>  * Kolla - At the time it was still immature. No stable artefacts posted. 
> database container recently broke, little documentation for disaster 
> recovery. No upgrade strategy at the time.
> 
> Kolla rearchitected recently to support oddball configs like we've had to do 
> at times. They also recently gained upgrade support. I think they are on the 
> right path. If I had to start fresh, I'd very seriously consider using it.
> 
> I think Kolla can provide the missing pieces that TripleO needs. TripleO has 
> bare metal deployment down solid. I really like the idea of using OpenStack 
> to deploy OpenStack. Kolla is now OpenStack so should be considered.

As mentioned in another reply, one of the aims of current refactoring work
in TripleO is to enable folks to leverage the baremetal (and networking)
aspects of TripleO, then hand off to another tool should they so wish.

This could work really well if you wanted to layer ansible deployed kolla
containers on top of some TripleO deployed nodes (in fact it's one of the
use-cases we had in mind when deciding to do it).

I do however have several open questions regarding kolla (and the various
other ansible based solutions like openstack-ansible):

- What does the HA model look like, is active/active HA fully supported
  across multiple controllers?
- Is SSL fully supported for the deployed services?
- Is IPv6 fully supported?
- What integration exists for isolation of network traffic between
  services?
- What's the update/upgrade model, what downtime is associated with minor
  version updates and upgrades requiring RPC/DB migration?  What's tested
  in CI in this regard?

Very interested to learn more about these, as they are challenges we've
been facing within the TripleO community lately in the context of our
current implementation.

Regardless of the answers I think moving towards a model where we enable
more choice and easier integration between the various efforts (such as the
split-stack model referred to above) is a good thing and I definitely
welcome building on the existing collaboration we have between the TripleO,
Kolla and other deployment focussed communities.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-23 Thread Doug Wiegley
Migration script has been submitted, v1 is not going anywhere from 
stable/liberty or stable/mitaka, so it’s about to disappear from master.

I’m thinking in this order:

- remove jenkins jobs
- wait for heat to remove their jenkins jobs ([heat] added to this thread, so 
they see this coming before the job breaks)
- remove q-lbaas from devstack, and any references to lbaas v1 in devstack-gate 
or infra defaults.
- remove v1 code from neutron-lbaas

Since newton is now open for commits, this process is going to get started.

Thanks,
doug



> On Mar 8, 2016, at 11:36 AM, Eichberger, German  
> wrote:
> 
> Yes, it’s Database only — though we changed the agent driver in the DB from 
> V1 to V2 — so if you bring up a V2 with that database it should reschedule 
> all your load balancers on the V2 agent driver.
> 
> German
> 
> 
> 
> 
> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> 
>> So this looks like only a database migration, right?
>> 
>> -Original Message-
>> From: Eichberger, German [mailto:german.eichber...@hpe.com] 
>> Sent: Tuesday, March 08, 2016 12:28 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>> 
>> Ok, for what it’s worth we have contributed our migration script: 
>> https://review.openstack.org/#/c/289595/ — please look at this as a starting 
>> point and feel free to fix potential problems…
>> 
>> Thanks,
>> German
>> 
>> 
>> 
>> 
>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
>> 
>>> As far as I recall, you can specify the VIP in creating the LB so you will 
>>> end up with same IPs.
>>> 
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>>> Sent: Monday, March 07, 2016 8:30 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>>> weready?
>>> 
>>> Hi Sam,
>>> 
>>> So if you have some 3rd party hardware you only need to change the 
>>> database (your steps 1-5) since the 3rd party hardware will just keep 
>>> load balancing…
>>> 
>>> Now for Kevin’s case with the namespace driver:
>>> You would need a 6th step to reschedule the loadbalancers with the V2 
>>> namespace driver — which can be done.
>>> 
>>> If we want to migrate to Octavia or (from one LB provider to another) it 
>>> might be better to use the following steps:
>>> 
>>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>>>    Monitors, Members) into some JSON format file(s)
>>> 2. Delete LBaaS v1
>>> 3. Uninstall LBaaS v1
>>> 4. Install LBaaS v2
>>> 5. Transform the JSON format file into some scripts which recreate the
>>>    load balancers with your provider of choice
>>> 6. Run those scripts
>>> 
>>> The problem I see is that we will probably end up with different VIPs 
>>> so the end user would need to change their IPs…
>>> 
>>> Thanks,
>>> German
>>> 
>>> 
>>> 
>>> On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>>> 
 As for a migration tool.
 Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, 
 I am in favor for the following process:
 
 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
    Health Monitors, Members) into some JSON format file(s)
 2. Delete LBaaS v1
 3. Uninstall LBaaS v1
 4. Install LBaaS v2
 5. Import the data from 1 back over LBaaS v2 (need to allow moving from
    flavor1-->flavor2, need to make room for some custom modification for
    mapping between v1 and v2 models)
 
 What do you think?
 
 -Sam.
 
 
 
 
 -Original Message-
 From: Fox, Kevin M [mailto:kevin@pnnl.gov]
 Sent: Friday, March 04, 2016 2:06 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?
 
 Ok. Thanks for the info.
 
 Kevin
 
 From: Brandon Logan [brandon.lo...@rackspace.com]
 Sent: Thursday, March 03, 2016 2:42 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?
 
 Just for clarity, V2 did not reuse tables; all the tables it uses are only 
 for it.  The main problem is that v1 and v2 both have a pools resource, 
 but v1 and v2's pool resource have different attributes.  With the way 
 neutron wsgi works, if both v1 and v2 are enabled, it will combine both 
 sets of attributes into the same validation schema.
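
(A purely illustrative aside, not the real neutron code: the collision is
roughly what happens when two attribute maps for the same 'pools' collection
get merged into a single validation schema. The attribute names below are
invented.)

    # Toy illustration of the attribute-map collision.
    v1_attrs = {'pools': {'lb_method': {'allow_post': True},
                          'health_monitors': {'allow_post': True}}}
    v2_attrs = {'pools': {'lb_algorithm': {'allow_post': True},
                          'healthmonitor_id': {'allow_post': True}}}

    combined = {}
    for attr_map in (v1_attrs, v2_attrs):
        for collection, attrs in attr_map.items():
            combined.setdefault(collection, {}).update(attrs)

    # Requests for either API version now get validated against one 'pools'
    # schema that mixes v1-only and v2-only attributes.
    print(sorted(combined['pools']))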
 
 The other problem with v1 and v2 running together was only occurring when 
 the v1 agent driver and v2 agent driver were both in use at the same time. 
  This may actually have been fixed with some agent updates in neutron, 
 since that 

[openstack-dev] [nova] nova-specs review tracking etherpad

2016-03-23 Thread Matt Riedemann
I've started an etherpad [1] similar to what we had in mitaka. There are 
some useful review links in the top for open reviews and fast-approve 
re-proposals.


I'm also trying to keep a list of how many things we're re-approving 
since that's our backlog from mitaka (and some further back). I'd like 
to have that context so we can prioritize the specs for newton given 
what we haven't yet landed from previous releases. One of the themes I'd 
like to work on in newton is flushing the backlog before taking on new 
work, at least for non-priority blueprints.


The etherpad is also trying to categorize things by sub-team (virt 
drivers, other projects, etc). So as you come across things when 
reviewing specs feel free to update that etherpad so we get an idea of 
what we're doing for newton.


[1] https://etherpad.openstack.org/p/newton-nova-spec-review-tracking

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Lana Brindley

Hi Mike, and sorry I missed you on IRC to discuss this there. That said, I 
think it's great that you took this to the mailing list, especially seeing the 
conversation that has ensued.

More inline ...

On 24/03/16 01:06, Mike Perez wrote:
> Hey all,
> 
> I've been talking to a variety of projects about lack of install guides. This
> came from me not having a great experience with trying out projects in the big
> tent.
> 
> Projects like Manila have proposed install docs [1], but they were rejected
> by the install docs team because it's not in defcore. One of Manila's goals of
> getting these docs accepted is to apply for the operators tag
> ops:docs:install-guide [2] so that it helps their maturity level in the 
> project
> navigator [3].
> 
> Adrian Otto expressed to me having the same issue for Magnum. I think it's
> funny that a project that gets keynote time at the OpenStack conference can't
> be in the install docs personally.
> 
> As seen from the Manila review [1], the install docs team is suggesting these
> to be put in their developer guide.

As Steve pointed out, these now have solid plans to go in. That was because 
both projects opened a conversation with us and we worked with them over time 
to give them the docs they required.

> 
> I don't think this is a great idea. Mainly because they are for developers,
> operators aren't going to be looking in there for install information. Also 
> the
> Developer doc page [4] even states "This page contains documentation for 
> Python
> developers, who work on OpenStack itself".

I agree, but it's a great place to start. In fact, I've just merged a change to 
the Docs Contributor Guide (on the back of a previous mailing list 
conversation) that explicitly states this:

http://docs.openstack.org/contributor-guide/quickstart/new-projects.html

> 
> The install docs team doesn't want to be swamped with everyone in big tent
> giving them their install docs, to be verified, and eventually likely to be
> maintained by the install docs team.

Which is exactly why we're very selective. Sadly, documenting every big tent 
project's install process is no small task.

> 
> However, as an operator when I go docs.openstack.org under install guides,
> I should know how to install any of the big tent projects. These are accepted
> projects by the Technical Committee.
> 
> Lets consider the bigger picture of things here. If we don't make this
> information accessible, projects have poor adoption and get less feedback
> because people can't attempt to install them to begin reporting bugs.

I agree. This has been an issue for several cycles now, but with all our RST 
conversions now (mostly) behind us, I feel like we can dedicate the Newton 
cycle to improving how we do things. Exactly how that happens will need to be 
determined by the docs team in the Austin Design Summit, and I strongly suggest 
you intend to attend that session once we have it scheduled, as your voice is 
important in this conversation.

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Steven Hardy
On Wed, Mar 23, 2016 at 03:58:02PM -0400, Ryan Hallisey wrote:
> >>> Hello,
> >>>
> >>> So Ryan, I think you can make use of heat all the way. Architecture of
> >>> kolla doesn't require you to use ansible at all (in fact, we separate
> >>> ansible code to a different repo). Truth is that ansible-kolla is
> >>> developed by most people and considered "the way to deploy kolla" by
> >>> most of us, but we make sure that we won't cut out other deployment
> >>> engines from our potential.
> >>
> >>> So bottom line, heat may very well replace ansible code if you can
> >>> duplicate logic we have in playbooks in heat templates. That may
> >>> require docker resource with pretty complete featureset of docker
> >>> itself (named volumes being most important). Bootstrap is usually done
> >>> inside container, so that would be possible too.
> 
> >> Heat can call Anisble.
> 
> >> Why would it not be Heats responsibility for creating the stack, and
> >> then Kolla-ansible for setting everything up?
> 
> >> Heat is more esoteric than Ansible.  I expect the number of people that
> >> know and use Ansible to far outweigh the number of people that know
> >> Heat.  Let's make it easy for them to get involved.  Use each as
> >> appropriate, but let the config with Heat clearly map to a config
> >> without it for a Kolla based deploy.
> 
> I didn't know heat can call Ansible.  Now that I know that let me refine.
> I think it would be nice to have heat use kolla-ansible.

Heat can only call ansible where it applies a given config to a specific
node, or group of nodes (similar to how we currently drive puppet, and
docker-compose in TripleO).

That doesn't actually help if what folks actually want is to leverage the
workflow and multi-node orchestration aspects of ansible, those can't be
driven via heat (and I'm not sure they should).  Possibly such a workflow
could be abstracted behind a mistral action if we needed an API that
triggers ansible configuration of some nodes after heat deploys them.

> With split-stack/composable-roles, the tripleo-heat-templates are going
> to undergo major reconstruction.  So then the questions are, do we
> construct the templates to 1) use kolla-ansible or 2) rewrite them with
> kolla-ansible logic in heat or 3) go with kolla-kubernetes.
> 
> 1) This solution involves merging the kolla and tripleo communities.
> kolla-tripleo maybe?  This path will come to a solution significantly faster
> since it will be completely leveraging the work kolla has done.  I think
> ansible is a good tool, but I don't know if it's the best for container
> deployment/management.
> 
> 2) This solution is right along the lines of dprince's patch series [1],
> but with focus on deploying kolla's containers.  This option has a longer
> road than 1.  I think it could work and I think it would be a good
> solution.

These are good questions, but note the aim of "split stack" is partly to
enable folks to make their own decision on this, e.g it should be
completely possible to leverage the node deployment aspects of TripleO,
then hand over to another tool for service configuration (or container
deployment) if that is desired.

Sounds like you and Ian have made some progress towards this model already,
so it will be good to discuss further as we refactor the templates to more
readily support it.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Zane Bitter

On 23/03/16 13:35, Steven Hardy wrote:

On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:

Hello,
It looks similar to an issue which was discussed here [1].
I suppose the root cause is incorrect use of get_attr in your case.
You probably got a "list" instead of a "string".
For example, if I do something similar:
outputs:
  rg_1:
value: {get_attr: [rg_a, rg_a_public_ip]}
  rg_2:
value: {get_attr: [rg_a, rg_a_public_ip, 0]}

  rg_3:
value: {get_attr: [rg_a]}
  rg_4:
value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}


There's actually another option here too that I personally prefer:

  rg_5:
value: {get_attr: [rg_a, resource.0, rg_a_public_ip]}


where rg_a is also a resource group which uses a custom template as its
resource. The custom template has an output value rg_a_public_ip.
The output for it looks like [2].
So, as you can see, in the first case (as it is used in your example),
get_attr returns a list with one element.
rg_2 is also wrong, because it takes the first symbol from the string with
the IP address.


Shouldn't rg_2 and rg_4 be equivalent?


Nope, rg_2 returns:

  [<first member's IP>[0], <second member's IP>[0], ...]

If this makes no sense, imagine that rg_a_public_ip is actually a map 
rather than a string. If you want to pick one key out of the map on each 
member and return the list of all of them, then you just have to add the 
key as the next argument to get_attr. This makes get_attr on a resource 
group work somewhat differently to other resources, but it's the only 
sensible way to express this in a template:


https://bugs.launchpad.net/heat/+bug/1341048

Whereas rg_4 and rg_5 just return:

  <the rg_a_public_ip of the first member>  (a single string)
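
A toy model of this behaviour, assuming a group of two members whose
rg_a_public_ip attributes are plain strings (illustrative only, not the actual
Heat implementation):

    member_attrs = [{'rg_a_public_ip': '10.0.0.5'},
                    {'rg_a_public_ip': '10.0.0.6'}]

    def group_get_attr(attr, *path):
        # Collect the attribute from every member, then apply each extra path
        # component to every member's value.
        values = [m[attr] for m in member_attrs]
        for key in path:
            values = [v[key] for v in values]
        return values

    print(group_get_attr('rg_a_public_ip'))     # ['10.0.0.5', '10.0.0.6']
    print(group_get_attr('rg_a_public_ip', 0))  # ['1', '1'] - rg_2's surprise
    print(group_get_attr('rg_a_public_ip')[0])  # '10.0.0.5' - what rg_4/rg_5 give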


{get_attr: [rg_a, rg_a_public_ip]} should return a list of all
rg_a_public_ip attributes (one list item for each resource in the group),
then the 0 should select the first item from that list?

If it's returning the first character of the first element, that sounds
like a bug to me?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-23 Thread Anita Kuno
Bots are very handy for doing repetitive tasks, we agree on that.

Bots also require permissions to execute certain actions, require
maintenance to ensure they operate as expected and do create output
which is music to some and noise to others. Said output is often
archived somewhere, which requires additional decisions.

This thread is intended to initiate a conversation about bots. So far we
have seen developers want to use bots in Gerrit[0] and in IRC[1]. The
conversation starts there but isn't limited to these tools if folks have
usecases for other bots.

I included an item on the infra meeting agenda for yesterday's meeting
(March 22, 2016) and discovered there was enough interest[2] in a
discussion to take it to the list, so here it is.

So some items that have been raised thus far:
- permissions: having a bot on gerrit with +2 +A is something we would
like to avoid
- "unsanctioned" bots (bots not in infra config files) in channels
shared by multiple teams (meeting channels, the -dev channel)
- forming a dependence on bots and expecting infra to maintain them ex
post facto (example: bot soren maintained until soren didn't)
- causing irritation for others due to the presence of an echoing bot
which eventually infra will be asked or expected to mediate
- duplication of features, both meetbot and purplebot log channels and
host the archives in different locations
- canonical bot doesn't get maintained

It is possible that the bots that infra currently maintains have
features of which folks are unaware, so if someone was willing to spend
some time communicating those features to folks who like bots we might
be able to satisfy their needs with what infra currently operates.

Please include your own thoughts on this topic, hopefully after some
discussion we can aggregate on some policy/steps forward.

Thank you,
Anita.


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-09.log.html#t2016-03-09T15:21:01
[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.html
[2]
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.02.log.html
timestamp 19:53

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proactive backporting

2016-03-23 Thread Assaf Muller
On Wed, Mar 23, 2016 at 12:52 PM, Ihar Hrachyshka  wrote:
> Hey folks,
>
> some update on proactive backporting for neutron, and a call for action from
> subteam leaders.
>
> As you probably know, lately we started to backport a lot of bug fixes in
> latest stable branch (liberty atm) + became more systematic in getting High+
> bug fixes into older stable branch (kilo atm).
>
> I work on some tooling lately to get the process a bit more straight:
>
> https://review.openstack.org/#/q/project:openstack-infra/release-tools+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22
>
> I am at the point where I can issue a single command and get the list of
> bugs fixed in master since previous check, with Wishlist bugs filtered out
> [since those are not applicable for backporting]. The pipeline looks like:
>
> ./bugs-fixed-since.py neutron  | ./lp-filter-bugs-by-importance.py
> --importance=Wishlist neutron | ./get-lp-links.py

Kudos on the new tooling, this will make at least part of the process easier.

>
> For Kilo, we probably also need to add another filter for Low impact bugs:
>
> ./lp-filter-bugs-by-importance.py --importance=Low neutron
>
> There are more ideas on how to automate the process (specifically, kilo
> backports should probably be postponed till Liberty patches land and be
> handled in a separate workflow pipeline since old-stable criteria are
> different; also, the pipeline should fully automate ‘easy' backport
> proposals, doing cherry-pick and PS upload for the caller).
>
> However we generate the list of backport candidates, in the end the bug list
> is briefly triaged and categorized and put into the etherpad:
>
> https://etherpad.openstack.org/p/stable-bug-candidates-from-master
>
> I backport some fixes that are easy to cherry-pick myself. (easy == with a
> press of a button in gerrit UI)
>
> Still, we have a lot of backport candidates that require special attention
> in the etherpad.
>
> I ask folks that cover specific topics in our community (f.e. Assaf for
> testing; Carl and Oleg for DVR/L3; John for IPAM; etc.) to look at the
> current list, book some patches for your subteams to backport, and make sure
> the fixes land in stable.
>
> Note that the process generates a lot of traffic on stable branches, and
> that’s why we want more frequent releases. We can’t achieve that on kilo
> since kilo stable is still in the integrated release mode, but starting from
> Liberty we should release more often. It’s on my todo to document release
> process in neutron devref.
>
> For your reference, it’s just a matter of calling inside openstack/releases
> repo:
>
> ./tools/new_release.sh liberty neutron bugfix
>
> FYI I just posted a new Liberty release patch at:
> https://review.openstack.org/296608
>
> Thanks for attention,

Ideally, proactive backporting will continue for a long time by being
self sufficient, and that means we get buy in from a sufficiently
large group of people in the Neutron community and obtain critical
mass. I think the incentive is there - Assuming you take part in
delivering OpenStack based on a stable branch, you want that branch as
bug-free as possible so that you don't have to put out fires as people
report them, rather you prevent issues before they happen. This is
much cheaper in the long run for everyone involved.

>
>
> Ihar Hrachyshka  wrote:
>
>> Ihar Hrachyshka  wrote:
>>
>>> Rossella Sblendido  wrote:
>>>
 Hi,

 thanks Ihar for the etherpad and for raising this point.
 .


 On 12/18/2015 06:18 PM, Ihar Hrachyshka wrote:
>
> Hi all,
>
> just wanted to note that the etherpad page [1] with backport candidates
> has a lot of work for those who have cycles for backporting relevant
> pieces to Liberty (and Kilo for High+ bugs), so please take some on
> your
> plate and propose backports, then clean up from the page. And please
> don’t hesitate to check the page for more worthy patches in the future.
>
> It can’t be a one man army if we want to run the initiative in long
> term.


 I completely agree, it can't be one man army.
 I was thinking that maybe we can be even more proactive.
 How about adding as requirement for a bug fix to be merged to have the
 backport to relevant branches? I think that could help
>>>
>>>
>>> I don’t think it will work. First, not everyone should be required to
>>> care about stable branches. It’s my belief that we should avoid formal
>>> requirements that mechanically offload burden from stable team to those who
>>> can’t possibly care less about master.
>>
>>
>> Of course I meant ‘about stable branches’.
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] [heat] Issue with validation and preview due to get_attr==None

2016-03-23 Thread Zane Bitter

On 23/03/16 13:14, Steven Hardy wrote:

Hi all,

I'm looking for some help and additional input on this bug:

https://bugs.launchpad.net/heat/+bug/1559807


Hmm, I was wondering how this ever worked, but it appears you're making 
particularly aggressive use of the list_join and map_merge Functions 
there - where you're not only getting the elements in the list of things 
to merge (as presumably originally envisioned) but actually getting the 
list itself from an intrinsic function. If we're going to support that 
then those functions need to handle the fact that the input argument may 
be None, just as they do for the list members (see the ensure_string() 
and ensure_map() functions inside the result() methods of those two 
Functions).



Basically, we have multiple issues due to the fact that we consider
get_attr to resolve to None at any point before a resource is actually
instantiated.

It's due to this:

https://github.com/openstack/heat/blob/master/heat/engine/hot/functions.py#L163

This then causes problems during validation of several intrinsic functions,
because if they reference get_attr, they have to contain hacks and
special-cases to work around the validate-time None value (or, as reported
in the bug, fail to validate when all would be fine at runtime).

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1333

I started digging into fixes, and there are probably a few possible
approaches, e.g setting stack.Stack.strict_validate always to False, or
reworking the intrinsic function validation to always work with the
temporary None value.

However, it's a more widespread issue than just validation - this affects
any action which happens before the actual stack gets created, so things
like preview updates are also broken, e.g consider this:

resources:
   random:
 type: OS::Heat::RandomString

   config:
 type: OS::Heat::StructuredConfig
 properties:
   group: script
   config:
 foo: {get_attr: [random, value]}

   deployment:
 type: OS::Heat::StructuredDeployment
 properties:
   config:
 get_resource: config
   server: "dummy"

On update, nothing is replaced, but if you do e.g:

   heat stack-update -x --dry-run

You see this:

| replaced  | config| OS::Heat::StructuredConfig |

Which occurs due to the false comparison between the current value of
"random" and the None value we get from get_attr in the temporary stack
used for preview comparison:

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L528

after_props.get(key) returns None, which makes us falsely declare the
"config" resource gets replaced :(

I'm looking for ideas on how we solve this - it's clearly a major issue
which completely invalidates the results of validate and preview operations
in many cases.


I've been thinking about this (for about 2 years).

My first thought (it seemed like a good idea at the time, 2 years ago, 
for some reason) was for Function objects themselves to take on the 
types of their return values, so e.g. a Function returning a list would 
have a __getitem__ method and generally act like a list. Don't try this 
at home, BTW, it doesn't work.


I now think the right answer is to return some placeholder object (but 
not None). Then the validating code can detect the placeholder and do 
some checks. e.g. we would be able to say that the placeholder for 
get_resource on a Cinder volume would have type 'cinder.volume' and any 
property with a custom constraint would check that type to see if it 
matches (and fall back to accepting any text type if the placeholder 
doesn't have a type associated). get_param would get its type from the 
parameter schema (including any custom constraints). For get_attr we 
could make it part of the attribute schema.
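
To make that idea a bit more tangible, here is a very rough sketch of a typed
placeholder and a constraint check that understands it. The names are invented
and this is not a proposed patch.

    class AttrPlaceholder(object):
        """Stands in for an unresolved get_attr/get_resource result."""

        def __init__(self, entity_type=None):
            self.entity_type = entity_type  # e.g. 'cinder.volume', or None

    def constraint_allows(value, expected_type):
        if isinstance(value, AttrPlaceholder):
            # Untyped placeholder: fall back to accepting it, as today.
            if value.entity_type is None:
                return True
            return value.entity_type == expected_type
        # A real value would go through the normal custom-constraint check
        # here; accepting any string stands in for that in this sketch.
        return isinstance(value, str)

    print(constraint_allows(AttrPlaceholder('cinder.volume'), 'cinder.volume'))  # True
    print(constraint_allows(AttrPlaceholder('nova.server'), 'cinder.volume'))    # False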


The hard part obviously would be getting this to work with deeply-nested 
trees of data and across nested stacks. We could probably get the easy 
parts going and incrementally improve from there though. Worst case we 
just return None and get the same behaviour as now.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of physical_device_mappings

2016-03-23 Thread Jay Pipes

+tags for stable and nova

Hi Vladimir, comments inline. :)

On 03/21/2016 05:16 AM, Vladimir Eremin wrote:

Hey OpenStackers,

I’ve recently found out that the change of the neutron sriov-agent in Mitaka 
from optional to required [1] introduces a kind of regression.


While I understand that it is important for you to be able to associate 
more than one NIC to a physical network, I see no evidence that there 
was a *regression* in Mitaka. I don't see any ability to specify more 
than one NIC for a physical network in the Liberty Neutron SR-IOV ML2 agent:


https://github.com/openstack/neutron/blob/stable/liberty/neutron/common/utils.py#L223-L225


Before Mitaka, it was possible to use any number of NICs with one Neutron 
physnet just by specifying pci_passthrough_whitelist in nova:

 [default]
 pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2”},{ "devname": 
"eth4", "physical_network": "physnet2”},

which means that eth3 and eth4 will be used for physnet2 in some manner.


Yes, *in Nova*, however from what I can tell, this functionality never 
existed in the parse_mappings() function in neutron.common.utils module.



In Mitaka, it is also required to set up the neutron sriov-agent:

 [sriov_nic]
 physical_device_mappings = physnet2:eth3

The actual problem is that it is not possible to specify this mapping as 
"physnet2:eth3,physnet2:eth4” due to implementation details, so it is clearly a 
regression.


A regression means that a change broke some previously-working 
functionality. This is not a regression, since there apparently was 
never such functionality in Neutron.



I’ve filed bug[2] for it and proposed a patch[3]. Originally 
physical_device_mappings is converted to a dict, where the physnet name becomes 
the key and the interface name the value:

 >>> parse_mappings('physnet2:eth3’)
 {‘physnet2’: 'eth3’}
 >>> parse_mappings('physnet2:eth3,physnet2:eth4’)
 ValueError: Key physnet2 in mapping: 'physnet2:eth4' not unique

I’ve changed it a bit so that the interface names are stored in a list; now this 
case works:

 >>> parse_mappings_multi('physnet2:eth3,physnet2:eth4’)
 {‘physnet2’: [‘eth3’, 'eth4’]}

I’d like to see this fix[3] in master and Mitaka branch.
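
For reference, a minimal sketch of a helper with the behaviour shown above;
this is illustrative only, not the exact code proposed in [3]:

    def parse_mappings_multi(mapping_str):
        # Build a dict of physnet -> list of device names, allowing the same
        # physnet key to appear more than once.
        mappings = {}
        for entry in mapping_str.split(','):
            entry = entry.strip()
            if not entry:
                continue
            physnet, _, device = entry.partition(':')
            mappings.setdefault(physnet, []).append(device)
        return mappings

    print(parse_mappings_multi('physnet2:eth3,physnet2:eth4'))
    # {'physnet2': ['eth3', 'eth4']}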


I understand you really want this functionality in Mitaka. And I will 
leave it up to the stable team to determine whether this code should be 
backported to stable/mitaka. However, I will point out that this is a 
new feature, not a bug fix for a regression. There is no regression 
because the ability for Neutron to use more than one NIC with a physnet 
was never supported as far as I can tell.


Best,
-jay


Moshe Levi also proposed to refactor this part of the code to remove
physical_device_mappings and reuse the data that nova provides somehow. I'll file
the RFE as soon as I figure out how it should work.

[1]: http://docs.openstack.org/liberty/networking-guide/adv_config_sriov.html
[2]: https://bugs.launchpad.net/neutron/+bug/1558626
[3]: https://review.openstack.org/294188

--
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Doug Hellmann
Excerpts from Alan Pevec's message of 2016-03-22 20:19:44 +0100:
> > The release team discussed this at the summit and agreed that it didn't 
> > really matter. The only folks seeing the auto-generated versions are those 
> > doing CD from git, and they should not be mixing different branches of a 
> > project in a given environment. So I don't think it is strictly necessary 
> > to raise the major version, or give pbr the hint to do so.
> 
> ok, I'll send confused RDO trunk users here :)
> That means until first Newton milestone tag is pushed, master will
> have misleading version. Newton schedule is not defined yet but 1st
> milestone is normally 1 month after Summit, and 2 months from now is
> rather large window.
> 
> Cheers,
> Alan
> 

Are you packaging unreleased things in RDO? Because those are the only
things that will have similar version numbers. We ensure that whatever
is actually tagged has good, non-overlapping versions.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-23 Thread Doug Hellmann
Excerpts from Ian Cordasco's message of 2016-03-22 16:39:02 -0500:
>  
> 
> -Original Message-
> From: Alan Pevec 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: March 22, 2016 at 14:21:47
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject:  Re: [openstack-dev] [release] [pbr] semver on master branches after 
> RC WAS Re: How do I calculate the semantic version prior to a release?
> 
> > > The release team discussed this at the summit and agreed that it didn't 
> > > really matter.  
> > The only folks seeing the auto-generated versions are those doing CD from 
> > git, and they  
> > should not be mixing different branches of a project in a given 
> > environment. So I don't  
> > think it is strictly necessary to raise the major version, or give pbr the 
> > hint to do so.  
> >  
> > ok, I'll send confused RDO trunk users here :)
> > That means until first Newton milestone tag is pushed, master will
> > have misleading version. Newton schedule is not defined yet but 1st
> > milestone is normally 1 month after Summit, and 2 months from now is
> > rather large window.
> 
> This affects other OpenStack projects like the OpenStack Ansible project 
> which builds from trunk and does periodic upgrades from the latest stable 
> branch to whatever is running on master. Further they're using pip and this 
> will absolutely cause headaches upgrading that.

Are you saying the Ansible playbooks install server projects using pip?
For that to be a problem they would have to be installing from git URLs
or directly from tarballs. Is that the case?

Doug

> --  
> Ian Cordasco
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Ryan Hallisey
>>> Hello,
>>>
>>> So Ryan, I think you can make use of heat all the way. Architecture of
>>> kolla doesn't require you to use ansible at all (in fact, we separate
>>> ansible code to a different repo). Truth is that ansible-kolla is
>>> developed by most people and considered "the way to deploy kolla" by
>>> most of us, but we make sure that we won't cut out other deployment
>>> engines from our potential.
>>
>>> So bottom line, heat may very well replace ansible code if you can
>>> duplicate logic we have in playbooks in heat templates. That may
>>> require docker resource with pretty complete featureset of docker
>>> itself (named volumes being most important). Bootstrap is usually done
>>> inside container, so that would be possible too.

>> Heat can call Ansible.

>> Why would it not be Heats responsibility for creating the stack, and
>> then Kolla-ansible for setting everything up?

>> Heat is more esoteric than Ansible.  I expect the number of people that
>> know and use Ansible to far outweigh the number of people that know
>> Heat.  Let's make it easy for them to get involved.  Use each as
>> appropriate, but let the config with Heat clearly map to a config
>> without it for a Kolla based deploy.

I didn't know heat could call Ansible.  Now that I know that, let me refine:
I think it would be nice to have heat use kolla-ansible.

With split-stack/composable-roles, the tripleo-heat-templates are going
to undergo major reconstruction.  So then the questions are, do we
construct the templates to 1) use kolla-ansible or 2) rewrite them with
kolla-ansible logic in heat or 3) go with kolla-kubernetes.

1) This solution involves merging the kolla and tripleo communities.
kolla-tripleo maybe?  This path will come to a solution significantly faster
since it will be completely leveraging the work kolla has done.  I think
ansible is a good tool, but I don't know if it's the best for container
deployment/management.

2) This solution is right along the lines of dprince's patch series [1],
but with focus on deploying kolla's containers.  This option has a longer
road than 1.  I think it could work and I think it would be a good
solution.

> I'd be happy to hear other opinions on that though. Maybe we don't care
> about any of that container cluster management stuff, and if something
> fails we just let everything run degraded until we can pull in a
> replacement? I don't know.

3) Kolla-kubernetes is only a spec at this point, but with kubernetes the
undercloud would use magnum.  This option to me, has the most upside, but
the longest road.  I think the cluster management tools: replication
controllers, health checks, deployments, etc., would be great additions.

My excitement around kolla-ansible stems from the fact that it is significantly
farther along than kolla-kubernetes.  I haven't done a deployment of
kolla-kubernetes since we dropped it a year ago.  But having done an evaluation
of it recently, I think it's the best option long term.  Until I use it with
kolla + magnum, I won't know for certain.

Thanks,
-Ryan

[1] - https://review.openstack.org/#/c/295588/5

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] OpenStack Trove meeting minutes (2016-03-23)

2016-03-23 Thread Amrith Kumar
The meeting bot died during the meeting and therefore the logs on eavesdrop are 
useless. So I've had to get "Old-Fashioned-Logs(tm)".

Action Items:

 #action [all] If you have a patch set that you intend to resume work 
on, please put an update in it to that effect so we don't go abandon it under 
you ...
 #action [all]  if any of the abandoned patches looks like something 
you would like to pick up feel free
 #action cp16net reply to trove-dashboard ML question for RC2
 #action [all] please review changes [3], [4], and link [5] in agenda 
and update the reviews

Agreed:

 #agreed flaper87 to WF+1 the patches in question [3] and [4]

Meeting agenda is at 
https://wiki.openstack.org/wiki/Trove/MeetingAgendaHistory#Trove_Meeting.2C_March_23.2C_2016

Meeting minutes (complete transcript) is posted at

https://gist.github.com/amrith/5ce3e4a0311f2cc4044c

-amrith
[14:02:22]  #startmeeting trove
[14:02:24]  Meeting started Wed Mar 23 18:02:06 2016 UTC and is due 
to finish in 60 minutes.  The chair is cp16net. Information about MeetBot at 
http://wiki.debian.org/MeetBot.
[14:02:24]  Useful Commands: #action #agreed #help #info #idea #link 
#topic #startvote.
[14:02:26] *** openstack changes topic to ' (Meeting topic: trove)'
[14:02:27]  The meeting name has been set to 'trove'
[14:02:33]  o/
[14:02:35]  hello cp16net
[14:02:35]  hi
[14:02:38]  hello flaper87
[14:02:43]  howdy
[14:02:45]  o/
[14:02:46]  amrith: hey there :D
[14:02:47]  hi hi
[14:02:51]  i almost missed it
[14:02:54] *** tellesnobrega_af is now known as tellesnobrega
[14:02:54]  i got distracted
[14:03:01]  o/
[14:03:23]  doing a rebase
[14:03:37]  anyways hope everyone is have a good day
[14:03:38] * twm2016 is listening
[14:03:43]  lets get this party started
[14:03:55]  #topic last week action items
[14:03:56] *** openstack changes topic to 'last week action items (Meeting 
topic: trove)'
[14:04:05]  #info [amrith] get more information about creating and 
distributing a trove wide dashboard
[14:04:15]  so I sent an email to the ML
[14:04:19]  and I have a bunch of responses.
[14:04:29]  there are a couple of projects ongoing in this area.
[14:04:40]  this does not relate to the item you put on later; 292451
[14:04:45]  k
[14:04:49]  these are in effect semi-personal dashboards
[14:04:56]  I'm looking for a formal dashboard for the project
[14:05:11]  and one that will help track us to a/the release 
milestone(s)
[14:05:22]  I have some information
[14:05:28]  I will mail to the ML [trove]
[14:05:29]  alright sounds like there is progress being made on that 
action tiem
[14:05:36]  or you can follow the earlier discussions
[14:05:43]  should we follow up next week on more info?
[14:06:00]  see thread 
http://openstack.markmail.org/thread/qh7u3sxmtpwkdzas
[14:06:11]  I tend to disagree on gerrit dashboards being 
smi-personal as there are ways to make them very useful for a project but I do 
agree they are not the prettiest ones and they lack of some useful 
functionality (like labels/hashtags)
[14:06:14]  at this point, I'd say no.
[14:06:14]  I'll bring it back when there is more information
[14:06:24] *** sigmavirus24_awa is now known as sigmavirus24
[14:06:24] * flaper87 is all for something that represents the needs of the 
project better
[14:06:54]  amrith: ok
[14:07:05]  amrith: good stuff, btw! I had no idea that Swift had 
that dashboard and it does sound like something we could use elsewhere
[14:07:08]  flaper87: yeah its usful when everyone can have a simliar 
view
[14:07:20]  so everyone knows what others are looking at
[14:07:40]  ok so next item
[14:07:45]  #info [amrith] contact Victor Stinner re: python3 session 
at summit [DONE]
[14:07:49]  I started using it heavily in Glance in Mitaka and it 
helped. I can't tell everyone was using it but a good number of people were
[14:07:56]  looks like that was addressed
[14:07:57] * flaper87 stfu
[14:07:59]  cp16net, with the dashboard as proposed, it is useful for 
individuals to see what they have to do ... aka, semi-personal. Not something 
that is useful to administer deliverables for a project, which is what I'm 
after.
[14:08:07]  o/
[14:08:11]  o/
[14:08:26]  the other action items were done
[14:08:35]  including pmackinn (who's missing today)
[14:08:38]  #info [amrith] Add new projects discussion topic for 
summit agenda. [DONE]
[14:08:46]  #info [pmackinn] I've tagged you on some topics based on 
mid-cycle, please confirm [DONE, during the meeting]
[14:08:54]  #info [amrith] get leaders for all sessions that we want 
to actually conduct ;) [ONGOING]
[14:09:24]  On the py3 thing
[14:09:28]  I have an update
[14:09:32]  Victor won't be at summit
[14:09:38]  he'll chat with us before summit
[14:09:48]  I've removed -1s and -2s on all py3 reviews
[14:10:08]  sounds good we should move forward on those now
[14:10:17]  In email, "I propose to discuss Python 3 before the summit. 
For example, prepare a concrete plan to port Trove to Python 3, list technical 
issues like MySQL-Python, 

Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-23 Thread pnkk
Joshua,

We are performing a few scaling tests for our solution and see errors like the
one below:

Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'\n
InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout
exceeded; try restarting transaction') [SQL: u'UPDATE logbooks SET
created_at=%s, updated_at=%s, meta=%s, name=%s, uuid=%s WHERE
logbooks.uuid = %s'] [parameters: (datetime.datetime(2016, 3, 18, 18,
16, 40), datetime.datetime(2016, 3, 23, 3, 3, 44, 95395), u'{}',
u'test', u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b',
u'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b')]"


We have about 800 flows as of now and each flow is updated in the same
logbook in a separate eventlet thread.


Every thread calls save_logbook() on the same logbook record. I think
this function tries to update the logbook record even though my
use case only needs the flow details to be inserted and doesn't update
any information related to the logbook.


Probably one of the threads was holding the lock while updating, and the
others waited for the lock and failed after the default timeout had
elapsed.


I can think of few alternatives at the moment:


1. Increase the number of logbooks

2. Increase the innodb_lock_wait_timeout

3. There are some suggestions to change the InnoDB transaction isolation
level to "READ COMMITTED" instead of "REPEATABLE READ", but I am not
very familiar with the side effects this can cause (a sketch of this is below)
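
A minimal sketch of what option 3 could look like at the SQLAlchemy engine
level (the connection URL is hypothetical, and how exactly this would be wired
into the taskflow persistence backend is an assumption on my part):

# Sketch only: run the persistence engine with READ COMMITTED isolation.
from sqlalchemy import create_engine

engine = create_engine(
    'mysql+pymysql://user:password@127.0.0.1/taskflow_db',  # hypothetical URL
    isolation_level='READ COMMITTED',
    pool_recycle=3600,
)

# Under READ COMMITTED, InnoDB takes fewer gap/next-key locks than under the
# default REPEATABLE READ, which can reduce 'Lock wait timeout exceeded'
# errors when many threads update rows in the same table concurrently.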


I would appreciate your thoughts on the given alternatives, or perhaps an even
better alternative.


Thanks,

Kanthi






On Sun, Mar 20, 2016 at 10:00 PM, Joshua Harlow 
wrote:

> Lingxian Kong wrote:
>
>> Kanthi, sorry for chiming in, I suggest you may have a chance to take
>> a look at Mistral[1], which is the workflow as a service in
>> OpenStack(or without OpenStack).
>>
>
> Out of curiosity, why? Seems the ML post was about 'TaskFlow persistence'
> not mistral, just saying (unsure how it is relevant to mention mistral in
> this)...
>
> Back to getting more coffee...
>
> -Josh
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Flavio Percoco

On 23/03/16 15:00 -0400, Doug Hellmann wrote:

Excerpts from Mike Perez's message of 2016-03-23 08:06:28 -0700:

Hey all,

I've been talking to a variety of projects about lack of install guides. This
came from me not having a great experience with trying out projects in the big
tent.

Projects like Manila have proposed install docs [1], but they were rejected
by the install docs team because it's not in defcore. One of Manila's goals of
getting these docs accepted is to apply for the operators tag
ops:docs:install-guide [2] so that it helps their maturity level in the project
navigator [3].

Adrian Otto expressed to me having the same issue for Magnum. I think it's
funny that a project that gets keynote time at the OpenStack conference can't
be in the install docs personally.

As seen from the Manila review [1], the install docs team is suggesting these
to be put in their developer guide.

I don't think this is a great idea. Mainly because they are for developers,
operators aren't going to be looking in there for install information. Also the
Developer doc page [4] even states "This page contains documentation for Python
developers, who work on OpenStack itself".

The install docs team doesn't want to be swamped with everyone in big tent
giving them their install docs, to be verified, and eventually likely to be
maintained by the install docs team.

However, as an operator when I go docs.openstack.org under install guides,
I should know how to install any of the big tent projects. These are accepted
projects by the Technical Committee.

Lets consider the bigger picture of things here. If we don't make this
information accessible, projects have poor adoption and get less feedback
because people can't attempt to install them to begin reporting bugs.

Proposal: if the install docs team doesn't want them in the install docs repo
and instead to live in tree of the project itself before it's in defcore, can
we at least make the install guides for all big tent projects accessible
at docs.openstack.org under install guides?


This seems like a reasonable compromise. We can either handle them using
separate manual repos, or as Julien points out we could include them in
the tree with the code and publish them separately like we're doing with
release notes.


I think merging them in tree and publishing them separately (or collecting them
under the same link) would be better.

FWIW, Zaqar had the same issue as other projects and the team ended up merging
the guide in the tree.

Flavio


Doug




[1] - https://review.openstack.org/#/c/213756/
[2] - 
http://git.openstack.org/cgit/openstack/ops-tags-team/tree/descriptions/ops-docs-install-guide.rst
[3] - http://www.openstack.org/software/releases/liberty/components/manila
[4] - http://docs.openstack.org/developer/openstack-projects.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Russell Bryant
On Wed, Mar 23, 2016 at 11:06 AM, Mike Perez  wrote:

> Hey all,
>
> I've been talking to a variety of projects about lack of install guides.
> This
> came from me not having a great experience with trying out projects in the
> big
> tent.
>
> Projects like Manila have proposed install docs [1], but they were rejected
> by the install docs team because it's not in defcore. One of Manila's
> goals of
> getting these docs accepted is to apply for the operators tag
> ops:docs:install-guide [2] so that it helps their maturity level in the
> project
> navigator [3].
>
> Adrian Otto expressed to me having the same issue for Magnum. I think it's
> funny that a project that gets keynote time at the OpenStack conference
> can't
> be in the install docs personally.
>
> As seen from the Manila review [1], the install docs team is suggesting
> these
> to be put in their developer guide.
>
> I don't think this is a great idea. Mainly because they are for developers,
> operators aren't going to be looking in there for install information.
> Also the
> Developer doc page [4] even states "This page contains documentation for
> Python
> developers, who work on OpenStack itself".
>
> The install docs team doesn't want to be swamped with everyone in big tent
> giving them their install docs, to be verified, and eventually likely to be
> maintained by the install docs team.
>
> However, as an operator when I go docs.openstack.org under install guides,
> I should know how to install any of the big tent projects. These are
> accepted
> projects by the Technical Committee.
>
> Lets consider the bigger picture of things here. If we don't make this
> information accessible, projects have poor adoption and get less feedback
> because people can't attempt to install them to begin reporting bugs.
>
> Proposal: if the install docs team doesn't want them in the install docs
> repo
> and instead to live in tree of the project itself before it's in defcore,
> can
> we at least make the install guides for all big tent projects accessible
> at docs.openstack.org under install guides?
>
>
> [1] - https://review.openstack.org/#/c/213756/
> [2] -
> http://git.openstack.org/cgit/openstack/ops-tags-team/tree/descriptions/ops-docs-install-guide.rst
> [3] - http://www.openstack.org/software/releases/liberty/components/manila
> [4] - http://docs.openstack.org/developer/openstack-projects.html
>

FWIW, the same issue applies to other official docs.  In particular, I'm
thinking of the networking guide.

http://docs.openstack.org/liberty/networking-guide/

The networking guide is *fantastic*, but it's limited to covering only
ML2+OVS and ML2+LB.  Coverage for other backends is currently considered
out of scope, leaving no official place to put equivalent documentation
except in dev docs.

We got pushback on documenting OVN there, so we've been putting everything
in our dev docs, instead.  For example:

http://docs.openstack.org/developer/networking-ovn/install.html
http://docs.openstack.org/developer/networking-ovn/refarch.html

It'd be nice to have somewhere else to publish these operator-oriented
docs.

-- 
Russell Bryant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Sergey Kraynev
Steven,

Honestly, I thought about it, but I am not sure which behavior existed
before for both. I don't mind if we create such a bug and investigate
the root cause (unfortunately I have not had time to do it yet).
My main point was to provide a workable way around the mentioned issue.

p.s. the corresponding bug has been created:
https://bugs.launchpad.net/heat/+bug/1561157

On 23 March 2016 at 20:35, Steven Hardy  wrote:
> On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
>>Hello,
>>It looks similar to the issue which was discussed here [1]
>>I suppose that the root cause is incorrect use of get_attr in your case.
>>Probably you got a "list" instead of a "string".
>>F.e. if I do something similar:
>>outputs:
>>  rg_1:
>>    value: {get_attr: [rg_a, rg_a_public_ip]}
>>  rg_2:
>>    value: {get_attr: [rg_a, rg_a_public_ip, 0]}
>>
>>  rg_3:
>>    value: {get_attr: [rg_a]}
>>  rg_4:
>>    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
>>where rg_a is also resource group which uses custom template as resource.
>>the custom template has output value rg_a_public_ip.
>>The output for it looks like [2]
>>So as you can see, in the first case (as it is used in your example),
>>get_attr returns a list with one element.
>>rg_2 is also wrong, because it takes the first character of the string with the IP
>>address.
>
> Shouldn't rg_2 and rg_4 be equivalent?
>
> {get_attr: [rg_a, rg_a_public_ip]} should return a list of all
> rg_a_public_ip attributes (one list item for each resource in the group),
> then the 0 should select the first item from that list?
>
> If it's returning the first character of the first element, that sounds
> like a bug to me?
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Fox, Kevin M
If heat convergence worked (Is that a thing yet?), it could potentially be used 
instead of a COE like kubernetes.

The thing ansible buys us today would be upgradeability. Ansible is config
management, but it's also a workflow-like tool. Heat's bad at workflow.

I think between Heat with Convergence, Kolla containers, and some kind of 
Mistral workflow for upgrades, you could replace Ansible.

Then there's the nova instance user thing again 
(https://review.openstack.org/93)... How do you get secrets to the 
instances securely... Kubernetes has a secure store we could use... OpenStack 
still hasn't really gotten this one figured out. :/ Barbican is a piece of that 
puzzle, but there's no really good way to hook it and nova together.

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Wednesday, March 23, 2016 8:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

Hello,

So Ryan, I think you can make use of heat all the way. Architecture of
kolla doesn't require you to use ansible at all (in fact, we separate
ansible code to a different repo). Truth is that ansible-kolla is
developed by most people and considered "the way to deploy kolla" by
most of us, but we make sure that we won't cut out other deployment
engines from our potential.

So bottom line, heat may very well replace ansible code if you can
duplicate logic we have in playbooks in heat templates. That may
require docker resource with pretty complete featureset of docker
itself (named volumes being most important). Bootstrap is usually done
inside container, so that would be possible too.

To be honest, having tripleo do just bare metal deployment would
defeat the idea of tripleo. We have bare metal deployment tools already
(cobbler, which is used widely, and bifrost, which uses ansible same as kolla,
so integration would be easier), and these come with a significantly
smaller footprint than the whole tripleo infrastructure. The strength of tripleo
comes from its rich config of openstack itself, and I think that
should be portable to kolla.



On 23 March 2016 at 06:54, Ryan Hallisey  wrote:
> *Snip*
>
>> Indeed, this has literally none of the benefits of the ideal Heat
>> deployment enumerated above save one: it may be entirely the wrong tool
>> in every way for the job it's being asked to do, but at least it is
>> still well-integrated with the rest of the infrastructure.
>
>> Now, at the Mitaka summit we discussed the idea of a 'split stack',
>> where we have one stack for the infrastructure and a separate one for
>> the software deployments, so that there is no longer any tight
>> integration between infrastructure and software. Although it makes me a
>> bit sad in some ways, I can certainly appreciate the merits of the idea
>> as well. However, from the argument above we can deduce that if this is
>> the *only* thing we do then we will end up in the very worst of all
>> possible worlds: the wrong tool for the job, poorly integrated. Every
>> single advantage of using Heat to deploy software will have evaporated,
>> leaving only disadvantages.
>
> I think Heat is a very powerful tool having done the container integration
> into the tripleo-heat-templates I can see its appeal.  Something I learned
> from integration, was that Heat is not the best tool for container deployment,
> at least right now.  We were able to leverage the work in Kolla, but what it
> came down to was that we're not using containers or Kolla to its max 
> potential.
>
> I did an evaluation recently of tripleo and kolla to see what we would gain
> if the two were to combine. Let's look at some items on tripleo's roadmap.
> Split stack, as mentioned above, would be gained if tripleo were to adopt
> Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
> and deployment.  Therefore, allowing for the decoupling for each piece of
> the stack.  Composable roles, this would be the ability to land services
> onto separate hosts on demand.  Kolla also already does this [1]. Finally,
> container integration, this is just a given :).
>
> In the near term, if tripleo were to adopt Kolla as its overcloud it would
> be provided these features and retire heat to setting up the baremetal nodes
> and providing those ips to ansible.  This would be great for kolla too because
> it would provide baremetal provisioning.
>
> Ian Main and I are currently working on a POC for this as of last week [2].
> It's just a simple heat template :).
>
> I think further down the road we can evaluate using kubernetes [3].
> For now though,  kolla-ansible is rock solid and is worth using for the
> overcloud.
>
> Thanks!
> -Ryan
>
> [1] - 
> https://github.com/openstack/kolla/blob/master/ansible/inventory/multinode
> [2] - https://github.com/rthallisey/kolla-heat-templates
> [3] - 

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Doug Hellmann
Excerpts from Mike Perez's message of 2016-03-23 08:06:28 -0700:
> Hey all,
> 
> I've been talking to a variety of projects about lack of install guides. This
> came from me not having a great experience with trying out projects in the big
> tent.
> 
> Projects like Manila have proposed install docs [1], but they were rejected
> by the install docs team because it's not in defcore. One of Manila's goals of
> getting these docs accepted is to apply for the operators tag
> ops:docs:install-guide [2] so that it helps their maturity level in the 
> project
> navigator [3].
> 
> Adrian Otto expressed to me having the same issue for Magnum. I think it's
> funny that a project that gets keynote time at the OpenStack conference can't
> be in the install docs personally.
> 
> As seen from the Manila review [1], the install docs team is suggesting these
> to be put in their developer guide.
> 
> I don't think this is a great idea. Mainly because they are for developers,
> operators aren't going to be looking in there for install information. Also 
> the
> Developer doc page [4] even states "This page contains documentation for 
> Python
> developers, who work on OpenStack itself".
> 
> The install docs team doesn't want to be swamped with everyone in big tent
> giving them their install docs, to be verified, and eventually likely to be
> maintained by the install docs team.
> 
> However, as an operator when I go docs.openstack.org under install guides,
> I should know how to install any of the big tent projects. These are accepted
> projects by the Technical Committee.
> 
> Lets consider the bigger picture of things here. If we don't make this
> information accessible, projects have poor adoption and get less feedback
> because people can't attempt to install them to begin reporting bugs.
> 
> Proposal: if the install docs team doesn't want them in the install docs repo
> and instead to live in tree of the project itself before it's in defcore, can
> we at least make the install guides for all big tent projects accessible
> at docs.openstack.org under install guides?

This seems like a reasonable compromise. We can either handle them using
separate manual repos, or as Julien points out we could include them in
the tree with the code and publish them separately like we're doing with
release notes.

Doug

> 
> 
> [1] - https://review.openstack.org/#/c/213756/
> [2] - 
> http://git.openstack.org/cgit/openstack/ops-tags-team/tree/descriptions/ops-docs-install-guide.rst
> [3] - http://www.openstack.org/software/releases/liberty/components/manila
> [4] - http://docs.openstack.org/developer/openstack-projects.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][horizon][i18n] can we release django-openstack-auth of stable/mitaka for translations

2016-03-23 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2016-03-24 02:06:50 +0900:
> 2016-03-22 20:08 GMT+09:00 Doug Hellmann :
> >
> >> On Mar 22, 2016, at 6:18 AM, Akihiro Motoki  wrote:
> >>
> >> Hi release management team,
> >>
> >> Can we have a new release of django-openstack-auth from stable/mitaka
> >> branch for translations?
> >>
> >> What is happening?
> >> django-openstack-auth is a library project consumed by Horizon.
> >> The (soft) string freeze happens when the milestone-3 is cut.
> >> The milestone-3 is also the dependency freeze.
> >> This is a dilemma between dependency freeze and translation start,
> >> and there is no chance to import translations of django-openstack-auth
> >> for Mitaka.
> >> There are several updates of translations after 2.2.0 (mitaka) release [1].
> >> As the i18n team, we would like to have a released version of
> >> django-openstack-auth
> >> with up-to-date translations.
> >>
> >> Which version?
> >> The current version of django-openstack-auth for Mitaka is 2.2.0.
> >> What version number is recommended, 2.2.1 or 2.3.0?
> >
> > Stable branches for libraries should only ever increment the patch level, 
> > so 2.2.1.
> >
> >>
> >> When?
> >> Hopefully a new version is released soon around Mitaka is shipped.
> >> The current translation deadline is set to Mar 28 (the beginning of
> >> the release week).
> >> In my understanding we avoid releasing a new version of library before
> >> the Mitaka release.
> >> Distributors can choose which version is included in their distribution.
> >
> > Even if we don't do the release before the end of this cycle, we can 
> > release it as a stable update. Either way, when you are ready for a new 
> > release submit the patch to openstack/releases and include in the commit 
> > message the note that the update includes translations.
> 
> Thanks Doug,
> I am relieved to hear that we can update translations of
> django-openstack-auth for Mitaka.
> 
> > Do you think it would be possible for Newton to start translations for 
> > libraries sooner, before their freeze date?
> 
> I think we can. I will coordinate so that the string freeze happens a bit earlier.
> The amount of strings are relatively small and there is no problem
> from translation side.
> 
> Akihiro

Good, thank you!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] [vote] Managing bug backports to Mitaka branch

2016-03-23 Thread Steven Dake (stdake)
We had an emergency voting session on this proposal on IRC in our team meeting 
today and it passed as documented in the meeting minutes[1].  I was asked to 
have a typical vote and discussion on irc by one of the participants of the 
vote, so please feel free to discuss and vote again.  I will leave discussion 
and voting open until March 30th.  If the voting is unanimous prior to that 
time, I will close voting.  The original vote will stand unless there is a 
majority that oppose this process in this formal vote.  (formal votes > 
informal irc meeting votes).

Thanks,
-steve

[1] 
http://eavesdrop.openstack.org/meetings/kolla/2016/kolla.2016-03-23-16.30.log.html

look for timestamp 16:51:05

From: Steven Dake >
Reply-To: OpenStack Development Mailing List 
>
Date: Tuesday, March 22, 2016 at 10:12 AM
To: OpenStack Development Mailing List 
>
Subject: [openstack-dev] [kolla] Managing bug backports to Mitaka branch

Thierry (ttx in the irc log at [1]) proposed the standard way projects 
typically handle backports of newton fixes that should be fixed in an rc, while 
also maintaining the information in our rc2/rc3 trackers.

Here is an example bug with the process applied:
https://bugs.launchpad.net/kolla/+bug/1540234

To apply this process, the following happens:

  1.  Any individual may propose a newton bug for backport potential by 
specifying the tag 'rc-backport-potential' in the Newton 1 milestone.
  2.  Core reviewers review the rc-backport-potential bugs.
 *   CRs review [3] on a daily basis for new rc backport candidates.
 *   If the core reviewer thinks the bug should be backported to 
stable/mitaka (or belongs in the rc), they use the Target to series button, 
select mitaka, and save.
 *   Copy the state of the bug, but set the Mitaka milestone target to 
"mitaka-rc2".
 *   Finally they remove the rc-backport-potential tag from the bug, so it 
isn't re-reviewed.

The purpose of this proposal is to do the following:

  1.  Allow the core reviewer team to keep track of bugs needing attention for 
the release candidates in [2] by looking at [3].
  2.  Allow master development to proceed un-impeded.
  3.  Not single thread on any individual for backporting.

I'd like further discussion on this proposal at our Wednesday meeting, so I've 
blocked off a 20 minute timebox for this topic.  I'd like wide agreement from 
the core reviewers to follow this best practice, or alternately lets come up 
with a plan b :)

If your a core reviewer and won't be able to make our next meeting, please 
respond on this thread with your  thoughts.  Lets also not apply the process 
until the conclusion of the discussion at Wednesday's meeting.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Issue with validation and preview due to get_attr==None

2016-03-23 Thread Jay Dobies
This is the same issue I ran into a few months ago regarding the nested 
parameter validation. Since it resolves to None at that time, there's no 
hook in our current nested parameters implementation to show that it 
will have a value passed in from the parent template.


Unfortunately, I don't have much to offer in terms of a solution, but 
I'm very interested in where this conversation goes :)


On 3/23/16 1:14 PM, Steven Hardy wrote:

Hi all,

I'm looking for some help and additional input on this bug:

https://bugs.launchpad.net/heat/+bug/1559807

Basically, we have multiple issues due to the fact that we consider
get_attr to resolve to None at any point before a resource is actually
instantiated.

It's due to this:

https://github.com/openstack/heat/blob/master/heat/engine/hot/functions.py#L163

This then causes problems during validation of several intrinsic functions,
because if they reference get_attr, they have to contain hacks and
special-cases to work around the validate-time None value (or, as reported
in the bug, fail to validate when all would be fine at runtime).

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1333

I started digging into fixes, and there are probably a few possible
approaches, e.g setting stack.Stack.strict_validate always to False, or
reworking the intrinsic function validation to always work with the
temporary None value.

However, it's a more widespread issue than just validation - this affects
any action which happens before the actual stack gets created, so things
like preview updates are also broken, e.g consider this:

resources:
  random:
    type: OS::Heat::RandomString

  config:
    type: OS::Heat::StructuredConfig
    properties:
      group: script
      config:
        foo: {get_attr: [random, value]}

  deployment:
    type: OS::Heat::StructuredDeployment
    properties:
      config:
        get_resource: config
      server: "dummy"

On update, nothing is replaced, but if you do e.g:

  heat stack-update -x --dry-run

You see this:

| replaced  | config| OS::Heat::StructuredConfig |

Which occurs due to the false comparison between the current value of
"random" and the None value we get from get_attr in the temporary stack
used for preview comparison:

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L528

after_props.get(key) returns None, which makes us falsely declare the
"config" resource gets replaced :(

I'm looking for ideas on how we solve this - it's clearly a major issue
which completely invalidates the results of validate and preview operations
in many cases.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Steven Hardy
On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
>Hello,
>It looks similar to the issue which was discussed here [1]
>I suppose that the root cause is incorrect use of get_attr in your case.
>Probably you got a "list" instead of a "string".
>F.e. if I do something similar:
>outputs:
>  rg_1:
>    value: {get_attr: [rg_a, rg_a_public_ip]}
>  rg_2:
>    value: {get_attr: [rg_a, rg_a_public_ip, 0]}
>
>  rg_3:
>    value: {get_attr: [rg_a]}
>  rg_4:
>    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
>where rg_a is also resource group which uses custom template as resource.
>the custom template has output value rg_a_public_ip.
>The output for it looks like [2]
>So as you can see, in the first case (as it is used in your example),
>get_attr returns a list with one element.
>rg_2 is also wrong, because it takes the first character of the string with the IP
>address.

Shouldn't rg_2 and rg_4 be equivalent?

{get_attr: [rg_a, rg_a_public_ip]} should return a list of all
rg_a_public_ip attributes (one list item for each resource in the group),
then the 0 should select the first item from that list?

If it's returning the first character of the first element, that sounds
like a bug to me?
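
FWIW, the reported behaviour is what you would get if the attribute resolved to
a plain string rather than a list of per-member strings; a plain-Python
illustration (not Heat code, the IP address is made up):

# If the group attribute is a list of per-member IPs, index 0 is the first IP;
# if it has already been flattened to a single string, index 0 is a character.
attr_as_list = ['10.0.0.5']
attr_as_string = '10.0.0.5'

print(attr_as_list[0])    # '10.0.0.5' -- what rg_2 was expected to return
print(attr_as_string[0])  # '1'        -- what was actually observed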

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Zane Bitter

On 23/03/16 07:54, Ryan Hallisey wrote:

*Snip*


Indeed, this has literally none of the benefits of the ideal Heat
deployment enumerated above save one: it may be entirely the wrong tool
in every way for the job it's being asked to do, but at least it is
still well-integrated with the rest of the infrastructure.



Now, at the Mitaka summit we discussed the idea of a 'split stack',
where we have one stack for the infrastructure and a separate one for
the software deployments, so that there is no longer any tight
integration between infrastructure and software. Although it makes me a
bit sad in some ways, I can certainly appreciate the merits of the idea
as well. However, from the argument above we can deduce that if this is
the *only* thing we do then we will end up in the very worst of all
possible worlds: the wrong tool for the job, poorly integrated. Every
single advantage of using Heat to deploy software will have evaporated,
leaving only disadvantages.


I think Heat is a very powerful tool having done the container integration
into the tripleo-heat-templates I can see its appeal.  Something I learned
from integration, was that Heat is not the best tool for container deployment,
at least right now.  We were able to leverage the work in Kolla, but what it
came down to was that we're not using containers or Kolla to its max potential.

I did an evaluation recently of tripleo and kolla to see what we would gain
if the two were to combine. Let's look at some items on tripleo's roadmap.
Split stack, as mentioned above, would be gained if tripleo were to adopt
Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
and deployment.  Therefore, allowing for the decoupling for each piece of
the stack.  Composable roles, this would be the ability to land services
onto separate hosts on demand.  Kolla also already does this [1]. Finally,
container integration, this is just a given :).

In the near term, if tripleo were to adopt Kolla as its overcloud it would
be provided these features and retire heat to setting up the baremetal nodes
and providing those ips to ansible.  This would be great for kolla too because
it would provide baremetal provisioning.

Ian Main and I are currently working on a POC for this as of last week [2].
It's just a simple heat template :).

I think further down the road we can evaluate using kubernetes [3].
For now though,  kolla-ansible is rock solid and is worth using for the
overcloud.


My concern about kolla-ansible is that the requirements might start 
getting away from what the original design was intended to cope with, 
and that it may prove difficult to extend. For example, I wrote about 
the idea of doing the container deployments with pure Heat:



What's more, we are going to need some way of redistributing services when a 
machine in the cluster fails, and ultimately we would like that process to be 
automated, which would *require* a template generation service.

We certainly *could* build all of that. But we definitely shouldn't


and to my mind kolla-ansible is in a similar category in that respect 
(it does, of course, have an existing community and in that sense is 
still strictly superior to the pure-Heat approach). There's lots of 
stuff in e.g. Kubernetes that it seems likely we'll want and, while 
there's no _theoretical_ obstacle to implementing them in Ansible, these 
are hard, subtle problems which are presumably better left to a 
specialist project.


I'd be happy to hear other opinions on that though. Maybe we don't care 
about any of that container cluster management stuff, and if something 
fails we just let everything run degraded until we can pull in a 
replacement? I don't know.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Contributor Awards

2016-03-23 Thread Matt Riedemann



On 3/22/2016 2:18 AM, Tom Fifield wrote:

Reminder :)

We'll probably stop taking entries at the end of next week.

On 16/02/16 18:43, Tom Fifield wrote:

Hi all,

I'd like to introduce a new round of community awards handed out by the
Foundation, to be presented at the feedback session of the summit.

Nothing flashy or starchy - the idea is that these are to be a little
informal, quirky ... but still recognising the extremely valuable work
that we all do to make OpenStack excel.

There's so many different areas worthy of celebration, but we think that
there's a few main chunks of the community that need a little love,

* Those who might not be aware that they are valued, particularly new
contributors
* Those who are the active glue that binds the community together
* Those who share their hard-earned knowledge with others and mentor
* Those who challenge assumptions, and make us think

Since it's first time (recently, at least), rather than starting with a
defined set of awards, we'd like to have submissions of names in those
broad categories. Then we'll have a little bit of fun on the back-end
and try to come up with something that isn't just your standard set of
award titles, and iterate to success ;)

The submission form is here, so please submit anyone who you think is
deserving of an award!



https://docs.google.com/forms/d/1HP1jAobT-s4hlqZpmxoGIGTxZmY6lCWolS3zOq8miDk/viewform





in the meantime, let's use this thread to discuss the fun part: goodies.
What do you think we should lavish award winners with? Soft toys?
Perpetual trophies? baseball caps ?


Regards,


Tom, on behalf of the Foundation team



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is it possible to see who's already been nominated?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Issue with validation and preview due to get_attr==None

2016-03-23 Thread Steven Hardy
Hi all,

I'm looking for some help and additional input on this bug:

https://bugs.launchpad.net/heat/+bug/1559807

Basically, we have multiple issues due to the fact that we consider
get_attr to resolve to None at any point before a resource is actually
instantiated.

It's due to this:

https://github.com/openstack/heat/blob/master/heat/engine/hot/functions.py#L163

This then causes problems during validation of several intrinsic functions,
because if they reference get_attr, they have to contain hacks and
special-cases to work around the validate-time None value (or, as reported
in the bug, fail to validate when all would be fine at runtime).

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1333

I started digging into fixes, and there are probably a few possible
approaches, e.g setting stack.Stack.strict_validate always to False, or
reworking the intrinsic function validation to always work with the
temporary None value.

However, it's a more widespread issue than just validation - this affects
any action which happens before the actual stack gets created, so things
like preview updates are also broken, e.g consider this:

resources:
  random:
    type: OS::Heat::RandomString

  config:
    type: OS::Heat::StructuredConfig
    properties:
      group: script
      config:
        foo: {get_attr: [random, value]}

  deployment:
    type: OS::Heat::StructuredDeployment
    properties:
      config:
        get_resource: config
      server: "dummy"

On update, nothing is replaced, but if you do e.g:

  heat stack-update -x --dry-run

You see this:

| replaced  | config| OS::Heat::StructuredConfig |

Which occurs due to the false comparison between the current value of
"random" and the None value we get from get_attr in the temporary stack
used for preview comparison:

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L528

after_props.get(key) returns None, which makes us falsely declare the
"config" resource gets replaced :(

I'm looking for ideas on how we solve this - it's clearly a major issue
which completely invalidates the results of validate and preview operations
in many cases.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-23 Thread Ton Ngo

Hi Yolanda,
 Thank you for making a huge improvement from the manual process of
building the Fedora Atomic image.
Although Atomic does publish a public OpenStack image that is being
considered in this patch:
https://review.openstack.org/#/c/276232/
in the past we have run into many situations where we need an image with a
specific version of certain software
for features or bug fixes (Kubernetes, Docker, Flannel, ...).  So the
automated and customizable build process
will be very helpful.

With respect to where to land the patch, I think diskimage-builder is a
reasonable target.
If it does not land there, Magnum does currently have 2 sets of
diskimage-builder elements for Mesos image
and Ironic image, so it is also reasonable to submit the patch to Magnum.
With the new push to reorganize
into drivers for COE and distro, the elements would be a natural fit for
Fedora Atomic.

   As for periodic image build, it's a good idea to stay current with the
distro, but we should avoid the situation
where something new in the image breaks a COE and we are stuck for awhile
until a fix is made.  So instead of
an automated periodic build, we might want to stage the new image to make
sure it's good before switching.

One question:  I notice the image built by DIB is 871MB, similar to the
manually built image, while the
public image from Atomic is 486MB.  It might be worthwhile to understand
the difference.

Ton Ngo,



From:   Yolanda Robla Mota 
To: 
Date:   03/23/2016 04:12 AM
Subject:[openstack-dev] [magnum] Generate atomic images using
diskimage-builder



Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
The image needs to be built manually, uploaded to fedorapeople, and then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using the
diskimage-builder element. This element also has the ability to consume
any tree we need, so images can be customized on demand. I generated one
image using this element and uploaded it to fedorapeople. The image has
passed tests, and has been validated by several people.

So i'm raising this topic to decide what the next steps should be. This
change to generate fedora-atomic images has not yet landed in
diskimage-builder. But we have two options here:
- add this element to the generic diskimage-builder elements, as i'm doing now
- generate this element internally in magnum. We could have a directory
in the magnum project, called "elements", and keep the fedora-atomic element
there. This would give us more control over the element behaviour, and would
allow us to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be to
periodically generate images using a magnum job, and upload these images
to OpenStack Infra mirrors. Currently the image is based on Fedora F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (i know that the bottleneck is in the functional
testing, but if we reduce the download time it can help), and the tests can
be more reliable, because we will be removing an external dependency.

So i'd like to get more feedback on this topic, options and next steps
to achieve the goals. Best

--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Ian Main
Fox, Kevin M wrote:
> +1 for TripleO taking a look at Kolla.
> 
> Some random thoughts:
> 
> I'm in the middle of deploying a new cloud and I couldn't use either TripleO 
> or Kolla for various reasons. A few reasons for each:
>  * TripleO - worries me about ever having to do a major upgrade of the software, 
> or needing to do oddball configs like vxlans over ipoib.
>  * Kolla - At the time it was still immature. No stable artefacts posted. 
> database container recently broke, little documentation for disaster 
> recovery. No upgrade strategy at the time.
> 
> Kolla rearchitected recently to support oddball configs like we've had to do 
> at times. They also recently gained upgrade support. I think they are on the 
> right path. If I had to start fresh, I'd very seriously consider using it.
> 
> I think Kolla can provide the missing pieces that TripleO needs. TripleO has 
> bare metal deployment down solid. I really like the idea of using OpenStack 
> to deploy OpenStack. Kolla is now OpenStack so should be considered.
> 
> I'm also in favor of using Magnum to deploy a COE to manage Kolla. I'm much 
> less thrilled about Mesos though. It feels heavyweight enough that it's like 
> you're deploying an OpenStack-like system just to deploy OpenStack. So, 
> OpenStack On NotOpenStack On OpenStack. :/ I've had good luck with Kubernetes 
> (much simpler) recently and am disappointed that it was too immature at the 
> time Kolla originally considered it. It seems much more feasible to use now. 
> I use net=host like features all the time which was a major sticking point 
> before.
> 
> I'd be interested in seeing TripleO use the Ansible version for now since 
> that's working, stable, and supports upgrades/oddball configs. Then in the 
> future as Kubernetes support or maybe Mesos support matures, consider that. 
> Kolla's going to have to have a migration path from one to the other 
> eventually... I think this would allow TripleO to really come into its own as 
> an end to end, production ready system sooner.
> 
> Thanks,
> Kevin

This is very much my thinking as well.  I like your pragmatic take on it.
The community is building solutions to these problems and we should join
them.

  Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][horizon][i18n] can we release django-openstack-auth of stable/mitaka for translations

2016-03-23 Thread Akihiro Motoki
2016-03-22 20:08 GMT+09:00 Doug Hellmann :
>
>> On Mar 22, 2016, at 6:18 AM, Akihiro Motoki  wrote:
>>
>> Hi release management team,
>>
>> Can we have a new release of django-openstack-auth from stable/mitaka
>> branch for translations?
>>
>> What is happening?
>> django-openstack-auth is a library project consumed by Horizon.
>> The (soft) string freeze happens when the milestone-3 is cut.
>> The milestone-3 is also the dependency freeze.
>> This is a dilemma between dependency freeze and translation start,
>> and there is no chance to import translations of django-openstack-auth
>> for Mitaka.
>> There are several updates of translations after 2.2.0 (mitaka) release [1].
>> As the i18n team, we would like to have a released version of
>> django-openstack-auth
>> with up-to-date translations.
>>
>> Which version?
>> The current version of django-openstack-auth for Mitaka is 2.2.0.
>> What version number is recommended, 2.2.1 or 2.3.0?
>
> Stable branches for libraries should only ever increment the patch level, so 
> 2.2.1.
>
>>
>> When?
>> Hopefully a new version is released around the time Mitaka is shipped.
>> The current translation deadline is set to Mar 28 (the beginning of
>> the release week).
>> In my understanding we avoid releasing a new version of library before
>> the Mitaka release.
>> Distributors can choose which version is included in their distribution.
>
> Even if we don't do the release before the end of this cycle, we can release 
> it as a stable update. Either way, when you are ready for a new release 
> submit the patch to openstack/releases and include in the commit message the 
> note that the update includes translations.

Thanks Doug,
I am relieved to hear that we can update translations of
django-openstack-auth for Mitaka.

> Do you think it would be possible for Newton to start translations for 
> libraries sooner, before their freeze date?

I think we can. I will coordinate so that the string freeze happens a bit earlier.
The number of strings is relatively small and there is no problem
from the translation side.

Akihiro

>
> Doug
>
>>
>> Any suggestions would be appreciated.
>>
>> Thanks,
>> Akihiro
>>
>> [1] 
>> https://review.openstack.org/#/q/topic:zanata/translations+project:openstack/django_openstack_auth
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Steve Gordon
- Original Message -
> From: "Mike Perez" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, March 23, 2016 12:24:55 PM
> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - 
> What  about big tent?
> 
> On 12:05 Mar 23, Steve Gordon wrote:
> 
> > Did you look at the link I provided above?:
> > 
> > http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> > 
> > This content is merged.
> 
> Thanks Steve. So is this no longer a requirement that a project has to be
> considered by the Defcore group in order to be in the install docs as Lana
> has
> required previously?

I'll not put words in Lana's mouth, but my take based on what happens in 
practice is that the docs team - and more specifically the install guide 
subteam - remain focused on defcore projects only but are willing to be 
flexible where someone from a non-defcore big tent project is willing to stand 
up and own the work of creating and maintaining the docs in the guide - though 
they do expect a blueprint/spec to propose adding a new project to the install 
guide and that the project be packaged for the distros the guide covers. The 
bottom line is someone involved with the project being added needs to own the 
work as the regular docs folks are not in a position to cover every project in 
the big tent.

In the case of Manila this has happened and the results will appear in the 
Mitaka guide, while in the case of Magnum this didn't happen - at least until 
very recently, hence it being pushed to Newton. The spec review for that is 
here: 

https://review.openstack.org/#/c/289994/

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proactive backporting

2016-03-23 Thread Ihar Hrachyshka

Hey folks,

some update on proactive backporting for neutron, and a call for action  
from subteam leaders.


As you probably know, lately we started to backport a lot of bug fixes into
the latest stable branch (liberty atm) and became more systematic about getting
High+ bug fixes into the older stable branch (kilo atm).


I have been working on some tooling lately to make the process a bit more streamlined:

https://review.openstack.org/#/q/project:openstack-infra/release-tools+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22

I am at the point where I can issue a single command and get the list of  
bugs fixed in master since previous check, with Wishlist bugs filtered out  
[since those are not applicable for backporting]. The pipeline looks like:


./bugs-fixed-since.py neutron  |  
./lp-filter-bugs-by-importance.py --importance=Wishlist neutron |  
./get-lp-links.py


For Kilo, we probably also need to add another filter for Low impact bugs:

./lp-filter-bugs-by-importance.py --importance=Low neutron

There are more ideas on how to automate the process (specifically, kilo  
backports should probably be postponed till Liberty patches land and be  
handled in a separate workflow pipeline since old-stable criteria are  
different; also, the pipeline should fully automate ‘easy' backport  
proposals, doing cherry-pick and PS upload for the caller).


However we generate the list of backport candidates, in the end the bug  
list is briefly triaged and categorized and put into the etherpad:


https://etherpad.openstack.org/p/stable-bug-candidates-from-master

I backport some fixes that are easy to cherry-pick myself. (easy == with a  
press of a button in gerrit UI)


Still, we have a lot of backport candidates that require special attention  
in the etherpad.


I ask folks that cover specific topics in our community (e.g. Assaf for
testing; Carl and Oleg for DVR/L3; John for IPAM; etc.) to look at the
current list, book some patches for your subteams to backport, and make
sure the fixes land in stable.


Note that the process generates a lot of traffic on stable branches, and  
that’s why we want more frequent releases. We can’t achieve that on kilo  
since kilo stable is still in the integrated release mode, but starting  
from Liberty we should release more often. It’s on my todo list to document
the release process in the neutron devref.


For your reference, it’s just a matter of running the following inside the
openstack/releases repo:


./tools/new_release.sh liberty neutron bugfix

FYI I just posted a new Liberty release patch at:  
https://review.openstack.org/296608


Thanks for attention,

Ihar Hrachyshka  wrote:


Ihar Hrachyshka  wrote:


Rossella Sblendido  wrote:


Hi,

thanks Ihar for the etherpad and for raising this point.
.


On 12/18/2015 06:18 PM, Ihar Hrachyshka wrote:

Hi all,

just wanted to note that the etherpad page [1] with backport candidates
has a lot of work for those who have cycles for backporting relevant
pieces to Liberty (and Kilo for High+ bugs), so please take some on your
plate and propose backports, then clean up from the page. And please
don’t hesitate to check the page for more worthy patches in the future.

It can’t be a one man army if we want to run the initiative in long  
term.


I completely agree, it can't be one man army.
I was thinking that maybe we can be even more proactive.
How about adding, as a requirement for a bug fix to be merged, that it also be
backported to the relevant branches? I think that could help.


I don’t think it will work. First, not everyone should be required to
care about stable branches. It’s my belief that we should avoid formal
requirements that mechanically offload burden from the stable team to those
who couldn’t possibly care less about master.


Of course I meant ‘about stable branches’.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Upgrade][FFE] Reassigning Nodes without Re-Installation

2016-03-23 Thread Ilya Kharin
Hi guys,

According to the last discussion in the Fuel Team Meeting [1], this feature was
given a second chance because it is directly related to the cluster_upgrade
extension and has a minimal impact on the core part. A separate bug report [2]
was created to track the progress of this feature.

The high review activity provided an opportunity to finish this feature and
it's ready to be merged.

[1]
http://eavesdrop.openstack.org/meetings/fuel/2016/fuel.2016-03-17-16.00.html
[2] https://bugs.launchpad.net/fuel/+bug/1558655

Best regards,
Ilya Kharin.

On Thu, Mar 3, 2016 at 6:00 PM, Dmitry Borodaenko 
wrote:

> Denied.
>
> This came in very late (patch remained in WIP until 1 day before FF),
> covers a corner case, there was not enough risk analysis, it wasn't
> represented in the IRC meeting earlier today, and the spec for the
> high-level feature is sitting with a -1 from fuel-python component lead
> since 1.5 weeks ago.
>
> --
> Dmitry Borodaenko
>
>
> On Wed, Mar 02, 2016 at 12:02:17AM -0600, Ilya Kharin wrote:
> > I'd like to request a feature freeze exception for Reassigning Nodes
> > without Re-Installation [1].
> >
> > This feature is very important to several upgrade strategies that re-deploy
> > control plane nodes, while re-using some already deployed nodes,
> > such as compute nodes or storage nodes. These changes affect only the
> > upgrade part of Nailgun, which is mostly implemented in the cluster_upgrade
> > extension, and do not affect either the provisioning or the deployment.
> >
> > I need one week to finish implementation and testing.
> >
> > [1] https://review.openstack.org/#/c/280067/ (review in progress)
> >
> > Best regards,
> > Ilya Kharin.
> > Mirantis, Inc.
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-23 Thread Dmitry Guryanov
On Wed, 2016-03-23 at 18:22 +0300, Alexander Gordeev wrote:
> Hello Dmitry,
> 
> .
>  
> Yep, astute needs to be fixed, as the way it wipes the disks is
> way too fragile, dangerous and not always reliable due to what you
> mentioned above.
> 
> Nope, I think that zeroing 446 bytes is not enough. Why don't we
> want to wipe the bios_boot partition too? Let's wipe all grub leftovers
> such as bios_boot partitions as well. They don't contain any FS, so it's
> unlikely that the kernel or any other process will prevent us from wiping
> them. No errors or kernel panics are expected.
> 
> 
> On Tue, Mar 22, 2016 at 5:06 PM, Dmitry Guryanov  com> wrote:
> > For GPT disks and non-UEFI boot this method will work, since MBR
> > will still contain first stage of a bootloader code.
> > 
> Agreed, it will work. But how about bios_boot partition? What do you
> think?
> 

I have no objections to clearing the bios_boot partition, but could
you describe the scenario: how will a non-EFI system boot with a valid
bios_grub partition and wiped boot code in the MBR?
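
For reference, the 446 bytes discussed here are only the boot code portion of
the MBR; the partition table occupies bytes 446-509 and the 0x55AA signature
bytes 510-511. A minimal sketch of zeroing just that boot code area, with a
purely hypothetical device path:

    # Minimal sketch; the device path is hypothetical.  Zero only the 446-byte
    # boot code area of the MBR, leaving the partition table (bytes 446-509)
    # and the 0x55AA signature (bytes 510-511) untouched.
    DEVICE = '/dev/sdX'

    with open(DEVICE, 'r+b') as disk:
        disk.write(b'\x00' * 446)    # overwrite the first-stage boot code only

Zeroing only this region leaves the partition table intact, but on a non-EFI
system it removes the first-stage loader, which is exactly the scenario being
questioned above.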

> 
> Thanks,  Alex.
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron][nova] publish and update Gerrit dashboard link automatically

2016-03-23 Thread Rossella Sblendido


On 03/22/2016 02:28 PM, Markus Zoeller wrote:
>> From: Jeremy Stanley 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: 02/18/2016 02:05 AM
>> Subject: Re: [openstack-dev] [infra][neutron] publish and update 
>> Gerrit dashboard link automatically
>>
>> On 2016-02-16 14:52:04 -0700 (-0700), Carl Baldwin wrote:
>> [...]
>>> No matter how it is done, there is the problem of where to host such a
>>> page which can be automatically updated daily (or more often) by this
>>> script.
>>>
>>> Any thoughts from infra on this?
>>
>> A neat idea, and sounds like an evolution of/replacement for
>> reviewday[1][2]. Our community already has all the tools it needs
>> for running scripts and publishing the results in an automated
>> fashion (based on a timer, triggered by merged commits in a Git
>> repo, et cetera), as well as running Web servers... you could just
>> add a vhost to the openstack_project::static class[3] and then a job
>> in our project configuration[4] to update it.
>>
>> [1] http://status.openstack.org/reviews/
>> [2] http://git.openstack.org/cgit/openstack-infra/reviewday/
>> [3] http://git.openstack.org/cgit/openstack-infra/system-config/tree/
>> modules/openstack_project/manifests/static.pp
>> [4] http://git.openstack.org/cgit/openstack-infra/project-config/tree/
>> jenkins/jobs/
>> -- 
>> Jeremy Stanley
> 
> I didn't see this thread back then when it started. I think Nova would
> benefit from that too. I didn't find a Neutron related change in [1]
> as Jeremy suggested. I'm mainly interested in bug fix changes, ordered
> by bug report importance.
> 
> @Rossella: 
> Are you still working on this or is this solved in another way?

Hi Markus,

yes, I am still working on this. The idea is to use a Gerrit project
dashboard, as suggested by jeblair [1]. I pushed a patch for Neutron [2];
it's still a work in progress, waiting for feedback from infra.

cheers,

Rossella

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-11.log.html#t2016-03-11T14:47:06
[2] https://review.openstack.org/#/c/284284/

> 
> References:
> [1] 
> http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp
> 
> Regards, Markus Zoeller (markus_z)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-23 Thread Mike Bayer



On 03/23/2016 01:33 AM, Vega Cai wrote:



On 22 March 2016 at 12:09, Shinobu Kinjo > wrote:

Thank you for your comment (inline for my message).

On Tue, Mar 22, 2016 at 11:53 AM, Vega Cai > wrote:
> Let me try to explain some.
>
> On 22 March 2016 at 10:09, Shinobu Kinjo > wrote:
>>
>> On Tue, Mar 22, 2016 at 10:22 AM, joehuang > wrote:
>> > Hello, Shinobu,
>> >
>> > Yes, as what you described here, the "initialize" in "core.py" is used
>> > for unit/function test only. For system integration test( for example,
>> > tempest ), it would be better to use mysql like DB, this is done by the
>> > configuration in DB part.
>>
>> Thank you for your thought.
>>
>> >
>> > From my point of view, the tricircle DB part could be enhanced in the 
DB
>> > model and migration scripts. Currently unit test use DB model to 
initialize
>> > the data base, but not using the migration scripts,
>>
>> I'm assuming the migration scripts are in "tricircle/db". Is it right?
>
>
> migration scripts are in tricircle/db/migrate_repo
>>
>>
>> What is the DB model?
>> Why do we need 2-way-methods at the moment?
>
>
> DB models are defined in tricircle/db/models.py. models.py defines tables at
> the object level, so other modules can import models.py and then operate on
> the tables by operating on the objects. Migration scripts define tables at
> the table level: you define table fields and constraints in the scripts, then
> the migration tool will read the scripts and build the tables.

Does "models.py" manage the database schema (e.g., create / delete columns,
tables, etc.)?


In "models.py" we only define the database schema. SQLAlchemy provides
functionality to create tables based on the schema definition, which is
"ModelBase.metadata.create_all". This is currently used to initialize the
in-memory database for tests.


FTR this is the best way to do this.   SQLite's migration patterns are 
entirely different than for any other database, so while Alembic has a 
"batch" mode that can provide some level of code-compatibility (with 
many caveats, difficulties, and dead-end cases) between a SQLite 
migration and a migration for all the other databases, it is far 
preferable to not use any migration pattern at all for the SQLite 
database and just do a create_all().  It's also much faster, especially 
in the SQLite case where migrations require that the whole table is 
dropped and re-created for most changes.








> Migration tool has a feature to
> generate migration scripts from DB models automatically but it may make
> mistakes sometimes, so currently we manually maintain the table structure 
in
> both DB model and migration scripts.

Is the *migration tool* different from both DB models and migration scripts?


The migration tool is Alembic, a lightweight database migration tool for
use with SQLAlchemy:

https://alembic.readthedocs.org/en/latest/

It runs migration scripts to update the database schema. Each database
version has one migration script. After defining the "upgrade" and "downgrade"
methods in the script, you can update your database from one version to
another. Alembic isn't aware of the DB models defined in
"models.py"; users need to guarantee that the version of the database and the
version of "models.py" match.

If you create a new database, both "ModelBase.metadata.create_all" and
Alembic can be used. But Alembic can also be used to update an existing
database to a specific version of schema.
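
To make the upgrade/downgrade idea above concrete, here is a stripped-down
sketch of what a single Alembic migration script looks like (the revision
identifiers, table and column names are made up for illustration):

    from alembic import op
    import sqlalchemy as sa

    # revision identifiers used by Alembic (values made up for illustration)
    revision = '0002_add_description'
    down_revision = '0001_initial'


    def upgrade():
        # move the schema forward one version
        op.add_column('pods', sa.Column('description', sa.String(255), nullable=True))


    def downgrade():
        # roll the same change back
        op.drop_column('pods', 'description')

Running "alembic upgrade head" applies such scripts in order, which is what
lets an existing database be moved to a specific schema version, unlike
create_all(), which only builds a fresh schema.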


 >>
 >>
 >> > so the migration scripts can only be tested when using
devstack for
 >> > integration test. It would better to using migration script to
instantiate
 >> > the DB, and tested in the unit test too.
 >>
 >> If I understand you correctly, we are moving forward to using the
 >> migration scripts for both unit and integration tests.
 >>
 >> Cheers,
 >> Shinobu
 >>
 >> >
 >> > (Also move the discussion to the openstack-dev mail-list)
 >> >
 >> > Best Regards
 >> > Chaoyi Huang ( joehuang )
 >> >
 >> > -Original Message-
 >> > From: Shinobu Kinjo [mailto:ski...@redhat.com
]
 >> > Sent: Tuesday, March 22, 2016 7:43 AM
 >> > To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei;
 >> > Liuhaixia; caizhiyuan (A); huangzhipeng
 >> > Subject: Using in-memory database for unit tests
 >> >
 >> > Hello,
 >> >
 >> > In "initialize" method defined in "core.py", we're using
*in-memory*
 >> > strategy making use of sqlite. AFAIK we are using this
solution for only
 >> > testing purpose. Unit tests using this solution should be 

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Julien Danjou
On Wed, Mar 23 2016, Mike Perez wrote:

> As seen from the Manila review [1], the install docs team is suggesting these
> to be put in their developer guide.
>
> I don't think this is a great idea. Mainly because they are for developers,
> operators aren't going to be looking in there for install information. Also 
> the
> Developer doc page [4] even states "This page contains documentation for 
> Python
> developers, who work on OpenStack itself".
>
> The install docs team doesn't want to be swamped with everyone in big tent
> giving them their install docs, to be verified, and eventually likely to be
> maintained by the install docs team.

So what I've been pushing for a few years now (and I keep trying) is to
have the user documentation be required as part of the code that is
contributed by the project – just like we did for unit tests, and just
like we finally do with functional tests (e.g. tempest-lib).

We did that from day 1 with Gnocchi, and so far it works pretty well:

  http://gnocchi.xyz/install.html

The project is probably (made) less complex than e.g. Neutron or Manila
to deploy, but it should be possible to achieve some of that for most
projects. Actually, believe it or not, many open-source projects out
there also do that with great success.

That doc-is-mandatory-with-your-code policy does not prevent the doc
team from doing its job. But it makes sure that the people writing the doc
are the same ones who wrote the code, which brings a few interesting side
effects:

- you can spot mistakes in the code or the doc just by seeing the
  disparity between the two
- you are sure that the feature is well understood by the doc writer and
  that there is no misinterpretation of what $option does
- the documentation is always up-to-date
- it forces the developer to think twice before implementing things that
  are too complicated to deploy since they'll have to write and explain
  how to do it

Then, the doc team is free to jump in at any time and review the doc.
Though I never saw anyone from the doc team review the doc on Gnocchi – but
I guess they're too busy with other projects, writing the doc for
them. :-)

Now, it's very likely that we'll move that policy to Aodh and Ceilometer
during the next cycle.

Continuing to address OpenStack and its documentation as a single and
unified project, while we are in a Big Tent mode, hoping it's gonna
scale, seems to me unrealistic.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Mike Perez
On 12:05 Mar 23, Steve Gordon wrote:
 
> Did you look at the link I provided above?:
> 
> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> 
> This content is merged.

Thanks Steve. So is this no longer a requirement that a project has to be
considered by the Defcore group in order to be in the install docs as Lana has
required previously?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Hayes, Graham
On 23/03/2016 16:12, Steve Gordon wrote:
> - Original Message -
>> From: "Steve Gordon" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>>
>> - Original Message -
>>> From: "Graham Hayes" ha...@hpe.com>
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>>
>>> On 23/03/2016 15:37, Steve Gordon wrote:
 - Original Message -
> From: "Mike Perez" 
> To: "OpenStack Development Mailing List"
> 
>
> Hey all,
>
> I've been talking to a variety of projects about lack of install guides.
> This
> came from me not having a great experience with trying out projects in
> the
> big
> tent.
>
> Projects like Manila have proposed install docs [1], but they were
> rejected
> by the install docs team because it's not in defcore. One of Manila's
> goals
> of
> getting these docs accepted is to apply for the operators tag
> ops:docs:install-guide [2] so that it helps their maturity level in the
> project
> navigator [3].
>
> Adrian Otto expressed to me having the same issue for Magnum. I think
> it's
> funny that a project that gets keynote time at the OpenStack conference
> can't
> be in the install docs personally.

 Just two minor clarifications here:

 * Manila install docs are actively being worked on for inclusion in the
 Mitaka version of the guide:
 http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst

 * Magnum install docs were only very recently proposed here -
 https://review.openstack.org/#/c/288580/ - nobody is saying they can't be
 in the install guide assuming someone is willing to write/maintain them,
 but until now it wasn't clear anyone was.

 I certainly think a better system for linking out-of-tree install docs
 for
 big tent projects would be worth pursuing, but regardless of where it
 lives someone still has to write/maintain that user-orientated content.
 For those that have someone actively doing this on an ongoing basis they
 already have a path to inclusion in the guide (or at least, it seems that
 way based on the cases I am familiar with like those above).

 Are there examples of projects that have this user orientated install
 documentation written but are actively being rejected from including it
 in
 the install guide (in the Magnum case it has been pushed out to Newton as
 it was a late submission, not rejected permanently)?

>>>
>>> the linked review - https://review.openstack.org/#/c/213756/
>>
>> Did you look at the link I provided above?:
>>
>> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
>>
>> This content is merged.
>>
>> -Steve
>
> Here is the Mitaka spec review for the proposal to add Magnum to the guide: 
> https://review.openstack.org/#/c/275200/
> Here is the Mitaka review to add the Magnum content to the guide: 
> https://review.openstack.org/#/c/273724/

And as I said, if this has changed, that is great.

The last time I checked, the docs were defcore-only.

> -Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> - Original Message -
> > From: "Steve Gordon" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > 
> > - Original Message -
> > > From: "Graham Hayes" ha...@hpe.com>
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > 
> > > On 23/03/2016 15:37, Steve Gordon wrote:
> > > > - Original Message -
> > > >> From: "Mike Perez" 
> > > >> To: "OpenStack Development Mailing List"
> > > >> 
> > > >>
> > > >> Hey all,
> > > >>
> > > >> I've been talking to a variety of projects about lack of install
> > > >> guides.
> > > >> This
> > > >> came from me not having a great experience with trying out projects in
> > > >> the
> > > >> big
> > > >> tent.
> > > >>
> > > >> Projects like Manila have proposed install docs [1], but they were
> > > >> rejected
> > > >> by the install docs team because it's not in defcore. One of Manila's
> > > >> goals
> > > >> of
> > > >> getting these docs accepted is to apply for the operators tag
> > > >> ops:docs:install-guide [2] so that it helps their maturity level in
> > > >> the
> > > >> project
> > > >> navigator [3].
> > > >>
> > > >> Adrian Otto expressed to me having the same issue for Magnum. I think
> > > >> it's
> > > >> funny that a project that gets keynote time at the OpenStack
> > > >> conference
> > > >> can't
> > > >> be in the install docs personally.
> > > >
> > > > Just two minor clarifications here:
> > > >
> > > > * Manila install docs are actively being worked on for inclusion in the
> > > > Mitaka version of the guide:
> > > > http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> > > >
> > > > * Magnum install docs were only very recently proposed here -
> > > > https://review.openstack.org/#/c/288580/ - nobody is saying they can't
> > > > be
> > > > in the install guide assuming someone is willing to write/maintain
> > > > them,
> > > > but until now it wasn't clear anyone was.
> > > >
> > > > I certainly think a better system for linking out-of-tree install docs
> > > > for
> > > > big tent projects would be worth pursuing, but regardless of where it
> > > > lives someone still has to write/maintain that user-orientated content.
> > > > For those that have someone actively doing this on an ongoing basis
> > > > they
> > > > already have a path to inclusion in the guide (or at least, it seems
> > > > that
> > > > way based on the cases I am familiar with like those above).
> > > >
> > > > Are there examples of projects that have this user orientated install
> > > > documentation written but are actively being rejected from including it
> > > > in
> > > > the install guide (in the Magnum case it has been pushed out to Newton
> > > > as
> > > > it was a late submission, not rejected permanently)?
> > > >
> > > 
> > > the linked review - https://review.openstack.org/#/c/213756/
> > 
> > Did you look at the link I provided above?:
> > 
> > http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> > 
> > This content is merged.
> > 
> > -Steve
> 
> Here is the Mitaka spec review for the proposal to add Magnum to the guide:
> https://review.openstack.org/#/c/275200/
> Here is the Mitaka review to add the Magnum content to the guide:
> https://review.openstack.org/#/c/273724/
> 
> -Steve

Sorry, I meant Manila above, obviously (the same project as the 213756 review was 
for) - the Magnum proposal will likely be looked at for Newton (assuming the 
owner keeps working on it).

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] What are specifications?

2016-03-23 Thread Mike Perez
On 12:12 Mar 23, Thierry Carrez wrote:
 
> Keeping both using the same template, in the same directories and the same
> repositories is what created this grey area that paved the way for specs
> without assignees and best practices asking for cross-project consensus that
> they will never fully obtain.
> 
> I think it's time to recognize those are different things and separate them.

https://review.openstack.org/#/c/296571/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> - Original Message -
> > From: "Graham Hayes" ha...@hpe.com>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > 
> > On 23/03/2016 15:37, Steve Gordon wrote:
> > > - Original Message -
> > >> From: "Mike Perez" 
> > >> To: "OpenStack Development Mailing List"
> > >> 
> > >>
> > >> Hey all,
> > >>
> > >> I've been talking to a variety of projects about lack of install guides.
> > >> This
> > >> came from me not having a great experience with trying out projects in
> > >> the
> > >> big
> > >> tent.
> > >>
> > >> Projects like Manila have proposed install docs [1], but they were
> > >> rejected
> > >> by the install docs team because it's not in defcore. One of Manila's
> > >> goals
> > >> of
> > >> getting these docs accepted is to apply for the operators tag
> > >> ops:docs:install-guide [2] so that it helps their maturity level in the
> > >> project
> > >> navigator [3].
> > >>
> > >> Adrian Otto expressed to me having the same issue for Magnum. I think
> > >> it's
> > >> funny that a project that gets keynote time at the OpenStack conference
> > >> can't
> > >> be in the install docs personally.
> > >
> > > Just two minor clarifications here:
> > >
> > > * Manila install docs are actively being worked on for inclusion in the
> > > Mitaka version of the guide:
> > > http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> > >
> > > * Magnum install docs were only very recently proposed here -
> > > https://review.openstack.org/#/c/288580/ - nobody is saying they can't be
> > > in the install guide assuming someone is willing to write/maintain them,
> > > but until now it wasn't clear anyone was.
> > >
> > > I certainly think a better system for linking out-of-tree install docs
> > > for
> > > big tent projects would be worth pursuing, but regardless of where it
> > > lives someone still has to write/maintain that user-orientated content.
> > > For those that have someone actively doing this on an ongoing basis they
> > > already have a path to inclusion in the guide (or at least, it seems that
> > > way based on the cases I am familiar with like those above).
> > >
> > > Are there examples of projects that have this user orientated install
> > > documentation written but are actively being rejected from including it
> > > in
> > > the install guide (in the Magnum case it has been pushed out to Newton as
> > > it was a late submission, not rejected permanently)?
> > >
> > 
> > the linked review - https://review.openstack.org/#/c/213756/
> 
> Did you look at the link I provided above?:
> 
> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> 
> This content is merged.
> 
> -Steve

Here is the Mitaka spec review for the proposal to add Magnum to the guide: 
https://review.openstack.org/#/c/275200/
Here is the Mitaka review to add the Magnum content to the guide: 
https://review.openstack.org/#/c/273724/

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Steve Gordon
- Original Message -
> From: "Graham Hayes" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> On 23/03/2016 15:37, Steve Gordon wrote:
> > - Original Message -
> >> From: "Mike Perez" 
> >> To: "OpenStack Development Mailing List"
> >> 
> >>
> >> Hey all,
> >>
> >> I've been talking to a variety of projects about lack of install guides.
> >> This
> >> came from me not having a great experience with trying out projects in the
> >> big
> >> tent.
> >>
> >> Projects like Manila have proposed install docs [1], but they were
> >> rejected
> >> by the install docs team because it's not in defcore. One of Manila's
> >> goals
> >> of
> >> getting these docs accepted is to apply for the operators tag
> >> ops:docs:install-guide [2] so that it helps their maturity level in the
> >> project
> >> navigator [3].
> >>
> >> Adrian Otto expressed to me having the same issue for Magnum. I think it's
> >> funny that a project that gets keynote time at the OpenStack conference
> >> can't
> >> be in the install docs personally.
> >
> > Just two minor clarifications here:
> >
> > * Manila install docs are actively being worked on for inclusion in the
> > Mitaka version of the guide:
> > http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
> >
> > * Magnum install docs were only very recently proposed here -
> > https://review.openstack.org/#/c/288580/ - nobody is saying they can't be
> > in the install guide assuming someone is willing to write/maintain them,
> > but until now it wasn't clear anyone was.
> >
> > I certainly think a better system for linking out-of-tree install docs for
> > big tent projects would be worth pursuing, but regardless of where it
> > lives someone still has to write/maintain that user-orientated content.
> > For those that have someone actively doing this on an ongoing basis they
> > already have a path to inclusion in the guide (or at least, it seems that
> > way based on the cases I am familiar with like those above).
> >
> > Are there examples of projects that have this user orientated install
> > documentation written but are actively being rejected from including it in
> > the install guide (in the Magnum case it has been pushed out to Newton as
> > it was a late submission, not rejected permanently)?
> >
> 
> the linked review - https://review.openstack.org/#/c/213756/

Did you look at the link I provided above?:

http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst

This content is merged.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Bug day

2016-03-23 Thread Rob Cresswell
Bug day returns! Everyone welcome!

We've had some success, but there's still a long way to go. The next bug day will be on 
the 5th of April. The focus of the bug days is to triage our existing bugs, not 
to fix specific bugs or find new ones (for now). In the future, as the list 
progresses to a more organised state, we'll use this time for fixing all the things.

See the etherpad for more information! 
https://etherpad.openstack.org/p/horizon-bug-day

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Hayes, Graham
On 23/03/2016 15:37, Steve Gordon wrote:
> - Original Message -
>> From: "Mike Perez" 
>> To: "OpenStack Development Mailing List" 
>>
>> Hey all,
>>
>> I've been talking to a variety of projects about lack of install guides. This
>> came from me not having a great experience with trying out projects in the
>> big
>> tent.
>>
>> Projects like Manila have proposed install docs [1], but they were rejected
>> by the install docs team because it's not in defcore. One of Manila's goals
>> of
>> getting these docs accepted is to apply for the operators tag
>> ops:docs:install-guide [2] so that it helps their maturity level in the
>> project
>> navigator [3].
>>
>> Adrian Otto expressed to me having the same issue for Magnum. I think it's
>> funny that a project that gets keynote time at the OpenStack conference can't
>> be in the install docs personally.
>
> Just two minor clarifications here:
>
> * Manila install docs are actively being worked on for inclusion in the 
> Mitaka version of the guide: 
> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst
>
> * Magnum install docs were only very recently proposed here - 
> https://review.openstack.org/#/c/288580/ - nobody is saying they can't be in 
> the install guide assuming someone is willing to write/maintain them, but 
> until now it wasn't clear anyone was.
>
> I certainly think a better system for linking out-of-tree install docs for 
> big tent projects would be worth pursuing, but regardless of where it lives 
> someone still has to write/maintain that user-orientated content. For those 
> that have someone actively doing this on an ongoing basis they already have a 
> path to inclusion in the guide (or at least, it seems that way based on the 
> cases I am familiar with like those above).
>
> Are there examples of projects that have this user orientated install 
> documentation written but are actively being rejected from including it in 
> the install guide (in the Magnum case it has been pushed out to Newton as it 
> was a late submission, not rejected permanently)?
>

the linked review - https://review.openstack.org/#/c/213756/

Lana Brindley
Aug 18, 2015
Patch Set 1: Code-Review-2
The Install Guide covers defcore projects only, sorry.

Lana Brindley
Aug 27, 2015
Abandoned
This content belongs in the Manila /developer docs.

If this has changed, that is great - I will start getting our docs
together for the guide. But seeing patches like this is why we didn't
try before.

> -Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Mike Perez
On 11:34 Mar 23, Steve Gordon wrote:
 
> Are there examples of projects that have this user orientated install
> documentation written but are actively being rejected from including it in
> the install guide (in the Magnum case it has been pushed out to Newton as it
> was a late submission, not rejected permanently)?

The Manila case was my example. It was abandoned by Lana because:

"The Install Guide covers defcore projects only, sorry." [1] and see a more
detailed explanation [2].

If this is no longer the case, I might've missed something and would appreciate
a pointer to where it was mentioned that the direction is changing.

[1] - https://review.openstack.org/#/c/213756/
[2] - 
http://lists.openstack.org/pipermail/openstack-docs/2015-December/008062.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Horizon Drivers meeting

2016-03-23 Thread Rob Cresswell
Hi folks,

With master now open for Newton development, I'm going to reopen the Horizon 
Drivers meeting. See https://wiki.openstack.org/wiki/Meetings/HorizonDrivers

I intend to run the meeting every week for now (previously we had only held it 
every other week). The next meeting will be at 1200 UTC on March 30th. Calendar 
entry can be found here: http://eavesdrop.openstack.org/#Horizon_Drivers_Meeting

The purpose of this meeting is to review blueprints on Launchpad, *not* to 
review implementation details and patches. Anyone is welcome to attend and 
propose a blueprint for review; please add it to the agenda! Otherwise we'll 
just work our way through the list.

Cheers,
Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Adam Young

On 03/23/2016 11:42 AM, Michał Jastrzębski wrote:

Hello,

So Ryan, I think you can make use of heat all the way. Architecture of
kolla doesn't require you to use ansible at all (in fact, we separate
ansible code to a different repo). Truth is that ansible-kolla is
developed by most people and considered "the way to deploy kolla" by
most of us, but we make sure that we won't cut out other deployment
engines from our potential.

So bottom line, heat may very well replace ansible code if you can
duplicate logic we have in playbooks in heat templates. That may
require docker resource with pretty complete featureset of docker
itself (named volumes being most important). Bootstrap is usually done
inside container, so that would be possible too.


Heat can call Anisble.

Why would it not be Heat's responsibility to create the stack, and
then Kolla-ansible's to set everything up?


Heat is more esoteric than Ansible.  I expect the number of people that 
know and use Ansible to far outweigh the number of people that know 
Heat.  Let's make it easy for them to get involved.  Use each as 
appropriate, but let the config with Heat clearly map to a config 
without it for a Kolla-based deploy.





To be honest, having tripleo do just bare metal deployment would
defeat the idea of tripleo. We already have bare metal deployment tools
(cobbler, which is widely used, and bifrost, which uses ansible the same as
kolla, so integration would be easier), and these come with a significantly
smaller footprint than the whole tripleo infrastructure. The strength of tripleo
comes from its rich config of openstack itself, and I think that
should be portable to kolla.



On 23 March 2016 at 06:54, Ryan Hallisey  wrote:

*Snip*


Indeed, this has literally none of the benefits of the ideal Heat
deployment enumerated above save one: it may be entirely the wrong tool
in every way for the job it's being asked to do, but at least it is
still well-integrated with the rest of the infrastructure.
Now, at the Mitaka summit we discussed the idea of a 'split stack',
where we have one stack for the infrastructure and a separate one for
the software deployments, so that there is no longer any tight
integration between infrastructure and software. Although it makes me a
bit sad in some ways, I can certainly appreciate the merits of the idea
as well. However, from the argument above we can deduce that if this is
the *only* thing we do then we will end up in the very worst of all
possible worlds: the wrong tool for the job, poorly integrated. Every
single advantage of using Heat to deploy software will have evaporated,
leaving only disadvantages.

I think Heat is a very powerful tool; having done the container integration
into the tripleo-heat-templates, I can see its appeal.  Something I learned
from the integration was that Heat is not the best tool for container deployment,
at least right now.  We were able to leverage the work in Kolla, but what it
came down to was that we're not using containers or Kolla to their max potential.

I did an evaluation recently of tripleo and kolla to see what we would gain
if the two were to combine. Let's look at some items on tripleo's roadmap.
Split stack, as mentioned above, would be gained if tripleo were to adopt
Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
and deployment.  Therefore, allowing for the decoupling for each piece of
the stack.  Composable roles, this would be the ability to land services
onto separate hosts on demand.  Kolla also already does this [1]. Finally,
container integration, this is just a given :).

In the near term, if tripleo were to adopt Kolla as its overcloud it would
be provided these features and retire heat to setting up the baremetal nodes
and providing those ips to ansible.  This would be great for kolla too because
it would provide baremetal provisioning.

Ian Main and I are currently working on a POC for this as of last week [2].
It's just a simple heat template :).

I think further down the road we can evaluate using kubernetes [3].
For now though,  kolla-anisble is rock solid and is worth using for the
overcloud.

Thanks!
-Ryan

[1] - https://github.com/openstack/kolla/blob/master/ansible/inventory/multinode
[2] - https://github.com/rthallisey/kolla-heat-templates
[3] - https://review.openstack.org/#/c/255450/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Michał Jastrzębski
Hello,

So Ryan, I think you can make use of heat all the way. Architecture of
kolla doesn't require you to use ansible at all (in fact, we separate
ansible code to a different repo). Truth is that ansible-kolla is
developed by most people and considered "the way to deploy kolla" by
most of us, but we make sure that we won't cut out other deployment
engines from our potential.

So bottom line, heat may very well replace ansible code if you can
duplicate logic we have in playbooks in heat templates. That may
require docker resource with pretty complete featureset of docker
itself (named volumes being most important). Bootstrap is usually done
inside container, so that would be possible too.

To be honest, having tripleo do just bare metal deployment would
defeat the idea of tripleo. We already have bare metal deployment tools
(cobbler, which is widely used, and bifrost, which uses ansible the same as
kolla, so integration would be easier), and these come with a significantly
smaller footprint than the whole tripleo infrastructure. The strength of tripleo
comes from its rich config of openstack itself, and I think that
should be portable to kolla.



On 23 March 2016 at 06:54, Ryan Hallisey  wrote:
> *Snip*
>
>> Indeed, this has literally none of the benefits of the ideal Heat
>> deployment enumerated above save one: it may be entirely the wrong tool
>> in every way for the job it's being asked to do, but at least it is
>> still well-integrated with the rest of the infrastructure.
>
>> Now, at the Mitaka summit we discussed the idea of a 'split stack',
>> where we have one stack for the infrastructure and a separate one for
>> the software deployments, so that there is no longer any tight
>> integration between infrastructure and software. Although it makes me a
>> bit sad in some ways, I can certainly appreciate the merits of the idea
>> as well. However, from the argument above we can deduce that if this is
>> the *only* thing we do then we will end up in the very worst of all
>> possible worlds: the wrong tool for the job, poorly integrated. Every
>> single advantage of using Heat to deploy software will have evaporated,
>> leaving only disadvantages.
>
> I think Heat is a very powerful tool; having done the container integration
> into the tripleo-heat-templates, I can see its appeal.  Something I learned
> from the integration was that Heat is not the best tool for container deployment,
> at least right now.  We were able to leverage the work in Kolla, but what it
> came down to was that we're not using containers or Kolla to their max
> potential.
>
> I did an evaluation recently of tripleo and kolla to see what we would gain
> if the two were to combine. Let's look at some items on tripleo's roadmap.
> Split stack, as mentioned above, would be gained if tripleo were to adopt
> Kolla.  Tripleo holds the undercloud and ironic.  Kolla separates config
> and deployment.  Therefore, allowing for the decoupling for each piece of
> the stack.  Composable roles, this would be the ability to land services
> onto separate hosts on demand.  Kolla also already does this [1]. Finally,
> container integration, this is just a given :).
>
> In the near term, if tripleo were to adopt Kolla as its overcloud it would
> be provided these features and retire heat to setting up the baremetal nodes
> and providing those ips to ansible.  This would be great for kolla too because
> it would provide baremetal provisioning.
>
> Ian Main and I are currently working on a POC for this as of last week [2].
> It's just a simple heat template :).
>
> I think further down the road we can evaluate using kubernetes [3].
> For now though,  kolla-anisble is rock solid and is worth using for the
> overcloud.
>
> Thanks!
> -Ryan
>
> [1] - 
> https://github.com/openstack/kolla/blob/master/ansible/inventory/multinode
> [2] - https://github.com/rthallisey/kolla-heat-templates
> [3] - https://review.openstack.org/#/c/255450/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Keeping up with RFEs

2016-03-23 Thread Jim Rollenhagen
Hey all,

I just burned through every RFE we have filed. I've approved the ones
that are trivial or have a spec approved already (by changing the tag to
rfe-approved), asked for a spec on many others, and left the ones with
an unmerged spec alone.

Going forward, I plan to take a look at incoming RFEs weekly and triage
them. I'll bring some of them up in our weekly meeting if I think they
need more discussion but might not need a spec.

Where I need help:

* when a specs core approves a spec, they need to mark the RFE as
  approved.
* when folks are reviewing code patches, please do check the referenced
  bug. If it is an RFE and is not yet approved, please -2 the patch (or
  ask a core to do so, if you are not a core reviewer). If the RFE
  should have been approved, please ask a spec core to do so.

Some useful links here:

Unapproved RFEs: https://bugs.launchpad.net/ironic/+bugs?field.tag=rfe
Approved RFEs: https://bugs.launchpad.net/ironic/+bugs?field.tag=rfe-approved
Ironic cores: https://review.openstack.org/#/admin/groups/165,members
Ironic-specs cores: https://review.openstack.org/#/admin/groups/352,members

Thanks in advance for your help :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kloudbuster] authorization failed problem

2016-03-23 Thread Alec Hothan (ahothan)
Hi Akshay

The URL you are using is a private address (http://192.168.138.51:5000/v2.0) 
and that is likely the reason it does not work.
If you run the KloudBuster app in the cloud, the app needs to have access to 
the cloud under test.
So even if you can access 192.168.138.51 from your local browser (which runs on 
your workstation or laptop), it may not be accessible from a VM that runs in 
your cloud.
For that to work you need a URL that is reachable from the VM.
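
For a quick check of that reachability, a minimal sketch like the following
(not part of KloudBuster; the URL is the one from your report and the
credentials are placeholders) can be run from the same place KloudBuster runs,
e.g. from inside the VM:

import requests

AUTH_URL = 'http://192.168.138.51:5000/v2.0/tokens'
body = {'auth': {'tenantName': 'admin',
                 'passwordCredentials': {'username': 'admin',
                                         'password': 'secret'}}}
try:
    # Keystone v2.0 issues tokens on POST /tokens; a connection error here
    # reproduces the AuthorizationFailure shown in the traceback.
    resp = requests.post(AUTH_URL, json=body, timeout=5)
    print('Keystone reachable, HTTP %d' % resp.status_code)
except requests.exceptions.ConnectionError as exc:
    print('Cannot reach %s from here: %s' % (AUTH_URL, exc))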

In some cases where the cloud under test is local, it is easier to just run 
kloudbuster locally as well (from the same place where you can ping 
192.168.138.51).
You can either use a local VM to run the kloudbuster image (vagrant, 
virtualbox...) or, simpler still, install kloudbuster locally using git clone 
or pip install (see the installation instructions in the doc at 
http://kloudbuster.readthedocs.org/en/latest/).

Regards,

   Alec




From: Akshay Kumar Sanghai
Reply-To: "openstack-dev@lists.openstack.org"
Date: Wednesday, March 23, 2016 at 6:59 AM
To: "Yichen Wang (yicwang)", "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [kloudbuster] authorization failed problem


Hi,

I am trying to use kloudbuster for scale testing of an OpenStack setup.

I have an OpenStack setup with 1 controller, 1 network and 2 compute nodes, and 
I am trying to use kloudbuster for scale testing of it. I created one VM with 
the kloudbuster image, accessed the web UI and clicked on "stage". This is the 
log:
:23,206 WARNING No public key is found or specified to instantiate VMs. You 
will not be able to access the VMs spawned by KloudBuster.
2016-03-22 14:01:30,464 WARNING Traceback (most recent call last):
  File \"/kb_test/kloudbuster/kb_server/kb_server/controllers/api_kb.py\", line 
58, in kb_stage_thread_handler
if kb_session.kloudbuster.check_and_upload_images():
  File 
\"/kb_test/kloudbuster/kb_server/kb_server/controllers/../../../kloudbuster/kloudbuster.py\",
 line 283, in check_and_upload_images
keystone_list = [create_keystone_client(self.server_cred)[0],
  File 
\"/kb_test/kloudbuster/kb_server/kb_server/controllers/../../../kloudbuster/kloudbuster.py\",
 line 54, in create_keystone_client
return (keystoneclient.Client(endpoint_type='publicURL', **creds), 
creds['auth_url'])
  File 
\"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py\", line 
166, in __init__
self.authenticate()
  File \"/usr/local/lib/python2.7/dist-packages/keystoneclient/utils.py\", line 
337, in inner
return func(*args, **kwargs)
  File \"/usr/local/lib/python2.7/dist-packages/keystoneclient/httpclient.py\", 
line 589, in authenticate
resp = self.get_raw_token_from_identity_service(**kwargs)
  File 
\"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py\", line 
210, in get_raw_token_from_identity_service
_(\"Authorization Failed: %s\") % e)
AuthorizationFailure: Authorization Failed: Unable to establish connection to 
http://192.168.138.51:5000/v2.0/tokens

I used a REST client to check whether v2.0/tokens is working, and it was; I got 
the token. This is the openrc file I used:
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
#export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://192.168.138.51:5000/v2.0
export OS_TENANT_NAME=admin
#export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=sanghai
export OS_REGION_NAME=RegionOne

Please suggest a solution and let me know if I missed some details.

Thanks,
Akshay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Steve Gordon
- Original Message -
> From: "Mike Perez" 
> To: "OpenStack Development Mailing List" 
> 
> Hey all,
> 
> I've been talking to a variety of projects about lack of install guides. This
> came from me not having a great experience with trying out projects in the
> big
> tent.
> 
> Projects like Manila have proposed install docs [1], but they were rejected
> by the install docs team because Manila is not in defcore. One of Manila's
> goals in getting these docs accepted is to apply for the operators tag
> ops:docs:install-guide [2] so that it helps their maturity level in the
> project navigator [3].
> 
> Adrian Otto expressed to me that he has the same issue for Magnum. Personally,
> I think it's funny that a project that gets keynote time at the OpenStack
> conference can't be in the install docs.

Just two minor clarifications here:

* Manila install docs are actively being worked on for inclusion in the Mitaka 
version of the guide: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/manila.rst

* Magnum install docs were only very recently proposed here - 
https://review.openstack.org/#/c/288580/ - nobody is saying they can't be in 
the install guide assuming someone is willing to write/maintain them, but until 
now it wasn't clear anyone was.

I certainly think a better system for linking out-of-tree install docs for big 
tent projects would be worth pursuing, but regardless of where it lives, someone 
still has to write/maintain that user-oriented content. Projects that have 
someone actively doing this on an ongoing basis already have a path to 
inclusion in the guide (or at least, it seems that way based on the cases I am 
familiar with, like those above).

Are there examples of projects that have this user-oriented install 
documentation written but are actively being blocked from including it in the 
install guide (in the Magnum case it has been pushed out to Newton because it 
was a late submission, not rejected permanently)?

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-23 Thread Alexander Gordeev
Hello Dmitry,

First of all, thanks for recovering the thread.

Please read my comments inline.

On Tue, Mar 22, 2016 at 1:07 PM, Dmitry Guryanov 
wrote:

>
> The first problem could be solved by zeroing the first 512 bytes of each
> disk (not partition), or even just 446 to be precise, because the last 66
> bytes are the partition scheme; see
> https://wiki.archlinux.org/index.php/Master_Boot_Record .
>
>
Apparently, fuel has been using GPT since the very beginning [1].

fuel-agent creates only GPT [2] (in fact it has some sort of rudimentary MBR
support inside [3], but I really doubt the corresponding code path has ever
been executed for real use cases; it looks like only the unit tests actually
exercise it).

Currently, due to the lack of UEFI support in fuel, fuel-agent has to use a
special dedicated partition to allow booting in CSM (BIOS/GPT) mode [4][5].

And it turns out that you're right about the fact that the first stage of grub
resides in the MBR [6].


[1]
https://github.com/openstack/fuel-library/commit/a2a37e4de2a92171d12f0fbc98a684149ca8b124

[2]
https://github.com/openstack/fuel-agent/blob/dcdd64a95245cdde57f1bd1e0a83720e6bf1f56a/fuel_agent/drivers/nailgun.py#L335-L338

[3]
https://github.com/openstack/fuel-agent/blob/dcdd64a95245cdde57f1bd1e0a83720e6bf1f56a/fuel_agent/objects/partition/parted.py#L56-L97

[4] https://help.ubuntu.com/community/Grub2/Installing#BIOS.2FGPT_Notes

[5]
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/drivers/nailgun.py#L345-L347

[6] https://en.wikipedia.org/wiki/BIOS_boot_partition

> The second problem should be solved only after the reboot into bootstrap,
> because if we bring a new node to the cluster from some other place and
> boot it with the bootstrap image, it will possibly have disks with some
> partitions, md devices and lvm volumes. So all these entities should be
> correctly cleared before provisioning, not before the reboot. And fuel-agent
> does it in [1].
>
>
However, the code at
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L221
does not allow us to mix LVM and MD, so fuel-agent can only use/wipe/create
them separately. The cases where MD is built on top of LVM volumes, or vice
versa, are not supported, and fuel-agent will fail on them. I suspect this
issue should go to another thread entirely; I just want to keep you aware that
the way fuel-agent does it is not perfectly correct. At least it works for
fuel's case.




> I propose to remove erasing the first 1M of each partition, because it can
> lead to errors in FS kernel drivers and kernel panic. The existing workaround,
> that in case of kernel panic we do a reboot, is bad because the panic may
> occur just after clearing the first partition of the first disk, and after
> the reboot the BIOS will read the MBR of the second disk and boot from it
> instead of from the network. Let's just clear the first 446 bytes of each disk.
>
>
> [0]
> https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb#L162-L174
>

Yep, astute needs to be fixed, as the way it wipes the disks is way too
fragile, dangerous and not always reliable, for the reasons you mentioned above.

Nope, I think that zeroing 446 bytes is not enough. Why don't we wipe the
bios_boot partition too? Let's wipe all grub leftovers, such as bios_boot
partitions, as well. They don't contain any FS, so it's unlikely that the
kernel or any other process will prevent us from wiping them. No errors and no
kernel panics are expected.
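
To make that concrete, here is a minimal sketch (assuming Linux block device
paths; this is not the actual astute or fuel-agent code) of the wipe being
discussed: zero only the 446-byte boot code area of each disk so the partition
table survives, and additionally zero the bios_boot partition if one exists:

import os

BOOT_CODE_SIZE = 446               # MBR boot code only, bytes 0..445
BIOS_BOOT_SIZE = 1 * 1024 * 1024   # bios_boot partitions are typically ~1 MiB

def zero_range(path, length, offset=0):
    # Overwrite `length` bytes starting at `offset` with zeros and sync.
    with open(path, 'r+b') as dev:
        dev.seek(offset)
        dev.write(b'\x00' * length)
        dev.flush()
        os.fsync(dev.fileno())

# e.g. zero_range('/dev/sda', BOOT_CODE_SIZE) for every disk, and
# zero_range('/dev/sda1', BIOS_BOOT_SIZE) if /dev/sda1 is the bios_boot partition.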


On Tue, Mar 22, 2016 at 5:06 PM, Dmitry Guryanov 
wrote:

> For GPT disks and non-UEFI boot this method will work, since the MBR will
> still contain the first stage of the bootloader code.
>

Agreed, it will work. But what about the bios_boot partition? What do you think?


Thanks,  Alex.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-23 Thread Mike Perez
Hey all,

I've been talking to a variety of projects about lack of install guides. This
came from me not having a great experience with trying out projects in the big
tent.

Projects like Manila have proposed install docs [1], but they were rejected
by the install docs team because Manila is not in defcore. One of Manila's goals
in getting these docs accepted is to apply for the operators tag
ops:docs:install-guide [2] so that it helps their maturity level in the project
navigator [3].

Adrian Otto expressed to me that he has the same issue for Magnum. Personally,
I think it's funny that a project that gets keynote time at the OpenStack
conference can't be in the install docs.

As seen from the Manila review [1], the install docs team is suggesting these
to be put in their developer guide.

I don't think this is a great idea, mainly because those docs are for developers;
operators aren't going to look there for install information. Also, the
Developer doc page [4] even states "This page contains documentation for Python
developers, who work on OpenStack itself".

The install docs team doesn't want to be swamped with everyone in the big tent
handing them install docs to be verified, and eventually likely to be
maintained, by the install docs team.

However, as an operator, when I go to docs.openstack.org under install guides,
I should be able to find out how to install any of the big tent projects. These
are projects accepted by the Technical Committee.

Let's consider the bigger picture here. If we don't make this
information accessible, projects have poor adoption and get less feedback,
because people can't even attempt to install them to begin reporting bugs.

Proposal: if the install docs team doesn't want these guides in the install docs
repo and instead wants them to live in the tree of the project itself until it's
in defcore, can we at least make the install guides for all big tent projects
accessible at docs.openstack.org under install guides?


[1] - https://review.openstack.org/#/c/213756/
[2] - 
http://git.openstack.org/cgit/openstack/ops-tags-team/tree/descriptions/ops-docs-install-guide.rst
[3] - http://www.openstack.org/software/releases/liberty/components/manila
[4] - http://docs.openstack.org/developer/openstack-projects.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Sergey Kraynev
Hello,

It looks similar to an issue which was discussed here [1].
I suppose the root cause is incorrect use of get_attr in your case:
you probably got a "list" instead of a "string".
F.e. if I do something similar:


outputs:

  rg_1:

value: {get_attr: [rg_a, rg_a_public_ip]}

  rg_2:

value: {get_attr: [rg_a, rg_a_public_ip, 0]}

  rg_3:

value: {get_attr: [rg_a]}

  rg_4:

value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}

where rg_a is also resource group which uses custom template as resource.
the custom template has output value rg_a_public_ip.

The output for it looks like [2]

So as you can see, in the first case (as used in your example),
get_attr returns a list with one element.
rg_2 is also wrong, because it takes the first symbol from the string with the
IP address.
rg_3 does not work at all (because it's a custom template resource).
The right way is rg_4, which returns the IP address string; in your templates
that would be {get_attr: [kube_master, resource.0.kube_master_ip]}.

[1]
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg77526.html

[2] http://paste.openstack.org/show/491587/

On 23 March 2016 at 14:15, Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) <
wentao...@hpe.com> wrote:

>
>
> Hi Sergey,
>
> Here are our tracked logs. We can notice that the kube_master resource can
> return the output value "kube_master_ip": "10.101.58.117", but it
> can't get the kube_master_ip value in kube_minions of
> kubecluster-fedora-ironic.yaml.
>
> I found this heat template composition configuration described at
> https://ask.openstack.org/en/question/56988/get-outputs-from-nested-stack/
> and it is the same as ours.
>
> #heat resource-list --nested-depth 5 cf0e4e53-e703-4d78-b2e3-90c7081c39fe
>
> +---------------+--------------------------------------+-------------------------+-----------------+---------------------+---------------------+
> | resource_name | physical_resource_id                 | resource_type           | resource_status | updated_time        | stack_name          |
> +---------------+--------------------------------------+-------------------------+-----------------+---------------------+---------------------+
> | kube_master   | 65d68ca7-6629-4203-b40b-359f53be8c79 | OS::Heat::ResourceGroup | CREATE_COMPLETE | 2016-03-23T18:12:44 | k8sbay-rzqvufyi24q5 |
> | kube_minions  | 9a3d3d0c-104e-4887-9961-f4d6b6dc392f | OS::Heat::ResourceGroup | CREATE_FAILED   | 2016-03-23T18:12:44 | k8sbay-rzqvufyi24q5 |
> +---------------+--------------------------------------+-------------------------+-----------------+---------------------+---------------------+
>
> #heat resource-show 65d68ca7-6629-4203-b40b-359f53be8c79 0
>
> +------------------+-----------------------------------------------+
> | Property         | Value                                         |
> +------------------+-----------------------------------------------+
> | attributes       | {                                             |
> |                  |   "kube_master_external_ip": "10.101.58.117", |
> |                  |   "kube_master_ip": "10.101.58.117"           |
> |                  | }                                             |
> | ...              | ...                                           |
> | resource_status  | CREATE_COMPLETE                               |
> +------------------+-----------------------------------------------+
>
>
>
>
>
> Here are the three k8s heat yaml files.
>
> kubecluster-fedora-ironic.yaml:
>
> kube_master:
>
> type: OS::Heat::ResourceGroup
>
> properties:
>
>   count: 1
>
>   resource_def:
>
> type: kubemaster-fedora-ironic.yaml
>
> properties:
>
>   ssh_key_name: {get_param: ssh_key_name}
>
>   server_image: {get_param: server_image}
>
>   …
>
>
>
> kube_minions:
>
> type: OS::Heat::ResourceGroup
>
> depends_on:
>
>   - kube_master
>
> properties:
>
>   count: {get_param: number_of_minions}
>
>   removal_policies: [{resource_list: {get_param: minions_to_remove}}]
>
>   resource_def:
>
> type: 

[openstack-dev] [kloudbuster] authorization failed problem

2016-03-23 Thread Akshay Kumar Sanghai
Hi,

I am trying to use kloudbuster for scale testing of an OpenStack setup.

I have an OpenStack setup with 1 controller, 1 network and 2 compute nodes, and
I am trying to use kloudbuster for scale testing of it. I created one VM with
the kloudbuster image, accessed the web UI and clicked on "stage". This is the
log:
:23,206 WARNING No public key is found or specified to instantiate VMs. You
will not be able to access the VMs spawned by KloudBuster.
2016-03-22 14:01:30,464 WARNING Traceback (most recent call last):
  File \"/kb_test/kloudbuster/kb_server/kb_server/controllers/api_kb.py\",
line 58, in kb_stage_thread_handler
if kb_session.kloudbuster.check_and_upload_images():
  File
\"/kb_test/kloudbuster/kb_server/kb_server/controllers/../../../kloudbuster/kloudbuster.py\",
line 283, in check_and_upload_images
keystone_list = [create_keystone_client(self.server_cred)[0],
  File
\"/kb_test/kloudbuster/kb_server/kb_server/controllers/../../../kloudbuster/kloudbuster.py\",
line 54, in create_keystone_client
return (keystoneclient.Client(endpoint_type='publicURL', **creds),
creds['auth_url'])
  File
\"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py\",
line 166, in __init__
self.authenticate()
  File \"/usr/local/lib/python2.7/dist-packages/keystoneclient/utils.py\",
line 337, in inner
return func(*args, **kwargs)
  File
\"/usr/local/lib/python2.7/dist-packages/keystoneclient/httpclient.py\",
line 589, in authenticate
resp = self.get_raw_token_from_identity_service(**kwargs)
  File
\"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py\",
line 210, in get_raw_token_from_identity_service
_(\"Authorization Failed: %s\") % e)
AuthorizationFailure: Authorization Failed: Unable to establish connection
to http://192.168.138.51:5000/v2.0/tokens

I used a REST client to check whether v2.0/tokens is working, and it was; I got
the token. This is the openrc file I used:
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
#export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://192.168.138.51:5000/v2.0
export OS_TENANT_NAME=admin
#export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=sanghai
export OS_REGION_NAME=RegionOne

Please suggest a solution and let me know if I missed some details.

Thanks,
Akshay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release]how to release an non-official project in Mitaka

2016-03-23 Thread Thierry Carrez

joehuang wrote:

Thanks for the help. There is a plan for not only Tricircle but also
Kingbird to do a release in Mitaka; neither of them is an official
OpenStack project yet. The question is whether these projects can
leverage the facility https://github.com/openstack/releases to do a
release, or is there any guide on how new projects should do the release
work by themselves? Or is just tagging enough?


So... openstack/releases is specifically meant to list official 
OpenStack deliverables. Unofficial projects shall do their releases 
independently.


You can find information on how to do releases for projects hosted under 
OpenStack infrastructure here:


http://docs.openstack.org/infra/manual/drivers.html#release-management

Generally it implies pushing a tag and having a -tarball job defined 
(the job will pick up the tag and upload a source code tarball versioned 
after the tag name to tarballs.openstack.org).


Let me know if you have any other question.
Regards,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] API changes for Nailgun cluster upgrade extension

2016-03-23 Thread Artem Roma
Hi, fuelers!

In accordance with the policy on API changes in Fuel components established in
this mail thread [1], the purpose of this notice is to inform everyone
concerned about such modifications pertaining to the Nailgun cluster upgrade
extension that are going to land upstream.

A new handler of HTTP requests for copying VIPs from the cluster under
upgrade to the seed cluster is going to be added [2]. This handler is
accessible at the URL (omitting here the root part of the Nailgun API)
'/clusters//upgrade/vips/', where '' identifies the
seed cluster. The reason for introducing a separate handler (previously the
operation was done as part of copying the network settings, which in turn is
triggered by the cluster clone action) is that, as described here [3], copying
of VIPs must be done for a cluster with assigned nodes, and thus, in the
context of an upgrade, after the reassign-nodes handler has been called. The
handler supports the POST operation and accepts no data payload, as all needed
information is retrieved from the cluster link object created when the original
cluster is cloned. On success, a response with an empty body and a 200 HTTP
status code is returned. If a cluster object with the given '' is
not found in the database, a 404 code is returned, and if validation of the
inbound request fails, the response status code is 400.
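
For illustration, a request against the new handler could look like the
following sketch (the API root, port and auth header are assumptions for the
example and not part of the extension itself):

import requests

NAILGUN_API = 'http://10.20.0.2:8000/api/v1'   # assumed Nailgun API root
SEED_CLUSTER_ID = 2                            # assumed id of the seed cluster

resp = requests.post(
    '%s/clusters/%d/upgrade/vips/' % (NAILGUN_API, SEED_CLUSTER_ID),
    headers={'X-Auth-Token': 'admin_token'})   # assumed auth mechanism

if resp.status_code == 200:
    print('VIPs copied to the seed cluster')   # success: empty body
elif resp.status_code == 404:
    print('seed cluster not found')
elif resp.status_code == 400:
    print('validation of the request failed')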

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-October/077624.html
[2]: https://review.openstack.org/#/c/286621/
[3]: https://bugs.launchpad.net/fuel/+bug/1552744


-- 
Regards!)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-23 Thread Ryan Hallisey
*Snip*

> Indeed, this has literally none of the benefits of the ideal Heat 
> deployment enumerated above save one: it may be entirely the wrong tool 
> in every way for the job it's being asked to do, but at least it is 
> still well-integrated with the rest of the infrastructure.

> Now, at the Mitaka summit we discussed the idea of a 'split stack', 
> where we have one stack for the infrastructure and a separate one for 
> the software deployments, so that there is no longer any tight 
> integration between infrastructure and software. Although it makes me a 
> bit sad in some ways, I can certainly appreciate the merits of the idea 
> as well. However, from the argument above we can deduce that if this is 
> the *only* thing we do then we will end up in the very worst of all 
> possible worlds: the wrong tool for the job, poorly integrated. Every 
> single advantage of using Heat to deploy software will have evaporated, 
> leaving only disadvantages.

I think Heat is a very powerful tool; having done the container integration
into the tripleo-heat-templates, I can see its appeal.  Something I learned
from that integration was that Heat is not the best tool for container
deployment, at least right now.  We were able to leverage the work in Kolla,
but what it came down to was that we're not using containers or Kolla to their
full potential.

I recently did an evaluation of tripleo and kolla to see what we would gain
if the two were to combine. Let's look at some items on tripleo's roadmap.
Split stack, as mentioned above, would be gained if tripleo were to adopt
Kolla: tripleo holds the undercloud and ironic, while Kolla separates config
from deployment, allowing each piece of the stack to be decoupled.
Composable roles, that is, the ability to land services onto separate hosts
on demand, is something Kolla already does [1]. Finally, container
integration is just a given :).

In the near term, if tripleo were to adopt Kolla as its overcloud it would
gain these features, with heat relegated to setting up the baremetal nodes
and handing their IPs to ansible.  This would be great for kolla too, because
it would gain baremetal provisioning.
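
To illustrate the hand-off (this is only a hypothetical sketch, not code from
the POC repository linked as [2] below): a baremetal-only Heat stack could
expose its node IPs as an output, and a small piece of glue would turn that
output into a kolla-ansible multinode-style inventory. The output shape and
group names here are assumptions:

def write_kolla_inventory(node_ips_by_role, path='multinode'):
    # Render a minimal kolla-ansible multinode-style inventory file from a
    # mapping like {'control': ['192.0.2.10'], 'compute': ['192.0.2.20']}.
    with open(path, 'w') as inventory:
        for role, ips in sorted(node_ips_by_role.items()):
            inventory.write('[%s]\n' % role)
            for ip in ips:
                inventory.write('%s ansible_user=root\n' % ip)
            inventory.write('\n')

if __name__ == '__main__':
    # In the real flow these IPs would come from the stack outputs after the
    # baremetal stack finishes (e.g. via python-heatclient).
    write_kolla_inventory({'control': ['192.0.2.10'],
                           'compute': ['192.0.2.20', '192.0.2.21']})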

Ian Main and I are currently working on a POC for this as of last week [2].
It's just a simple heat template :).

I think further down the road we can evaluate using kubernetes [3].
For now though, kolla-ansible is rock solid and is worth using for the
overcloud.

Thanks!
-Ryan

[1] - https://github.com/openstack/kolla/blob/master/ansible/inventory/multinode
[2] - https://github.com/rthallisey/kolla-heat-templates
[3] - https://review.openstack.org/#/c/255450/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >