[openstack-dev] [tc][fuel][kolla][osa][tripleo] proposing type:deployment

2016-03-21 Thread Steven Dake (stdake)
Technical Committee,

Please accept my proposal of a new type of project called a deployment [1].  If 
people don't like the type name, we can change it.  The basic idea is that there 
is a class of projects unrepresented by type:service and type:library: 
deployment projects, including but not limited to Fuel, Kolla, OSA, and TripleO. 
 The main motivations behind this addition are:

  1.  Make it known to all which projects are deployment projects in the 
governance repository.
  2.  Provide that information via the governance website under release 
management tags.
  3.  Permit deployment projects to take part in the assert tags relating to 
upgrades [2].

Currently Fuel is listed as type:service in the governance repository, which is 
only partially accurate.  It may provide a REST API, but during the Kolla 
big tent application process we were told we couldn't use type:service as it 
only applies to daemon services and not deployment projects.

Regards
-steve

[1] https://review.openstack.org/295528
[2] https://review.openstack.org/295529


Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-21 Thread Sergey Lukjanov
It sounds good for Sahara as well.

We already have RC1 in Sahara, so if there are any translations after that,
we'll include them in RC2.

Thanks for your efforts.

On Mon, Mar 21, 2016 at 9:05 AM, Hayes, Graham  wrote:

> On 19/03/2016 17:47, Akihiro Motoki wrote:
> > Hi dashboard plugins team
> > (sahara-dashboard, trove-dashboard, magnum-ui, murano-dashboard)
> >
>
> There is also a designate-dashboard plugin - we have translation set up
> for Mitaka.
>
> We have strings frozen as of RC1 - so if there are translations before
> RC2, we can release them as part of RC2.
>
> - Graham
>
> > As Horizon i18n liaison, I would like to have a consensus on a rough
> > schedule of translation import for Horizon plugins.
> > Several plugins and Horizon itself have already released RC1.
> >
> > For Horizon translation, we use the following milestones:
> >Mitaka-3 : Soft string freeze
> >Mitaka-RC1: Hard string freeze
> >Mitaka-RC2: Final translation import
> >
> > Do these milestones sound good for sahara/trove/magnum/murano dashboard
> > plugins?
> > This means that each dashboard plugin project needs to release RC2 (or
> > some RC) even if only for translation import. Otherwise, translator
> > efforts after the hard string freeze will not be included in the Mitaka
> > release.
> >
> > If the above idea sounds good, I hope RC2 (or a later RC) will be released
> > early in the week of Mar 28 for translation import.
> > This schedule allows translators to work on translations after "Hard
> > string freeze".
> >
> > Mitaka is the first release for Horizon plugins with translations,
> > so I hope this mail helps everyone around translations.
> >
> > Best Regards,
> > Akihiro
> >
> >



-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [all][oslo] Call for CPL(s) - cross project liaisons

2016-03-21 Thread Mike Perez
On 20:45 Mar 21, Joshua Harlow wrote:
> Hi all,
> 
> During today's oslo meeting[1] there were not (have not been) as many projects
> in attendance as I (and I'm sure others) would like.
> 
> So I just wanted to send this out to try to gather more oslo liaisons, since I
> know there exist more projects with more folks than attended that meeting.
> 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons
> 
> It'd be great to have more engagement (on both sides) and to get more folks
> to show up to the weekly oslo meeting (even showing up every other week
> would be great too), as it helps the oslo folks plan features, find out what's
> broken (and ...) and be proactive (instead of being reactive).
> 
> http://eavesdrop.openstack.org/#Oslo_Team_Meeting
> 
> If timing is an issue, please do let me know and we can try to work through
> a different arrangement.
> 
> Shout-out for this completed ;)

I'd also recommend making announcements in the cross-project meeting [1] during
the horizontal/vertical announcements when needed.

[1] - https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Mike Perez



Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-21 Thread Shinobu Kinjo
Thank you for your comment (my replies are inline).

On Tue, Mar 22, 2016 at 11:53 AM, Vega Cai  wrote:
> Let me try to explain some.
>
> On 22 March 2016 at 10:09, Shinobu Kinjo  wrote:
>>
>> On Tue, Mar 22, 2016 at 10:22 AM, joehuang  wrote:
>> > Hello, Shinobu,
>> >
>> > Yes, as you described, the "initialize" in "core.py" is used
>> > for unit/functional tests only. For system integration tests (for example,
>> > tempest), it would be better to use a MySQL-like DB; this is done by the
>> > configuration in the DB part.
>>
>> Thank you for your thought.
>>
>> >
>> > From my point of view, the tricircle DB part could be enhanced in the DB
>> > model and migration scripts. Currently unit tests use the DB model to
>> > initialize the database, but not the migration scripts,
>>
>> I'm assuming the migration scripts are in "tricircle/db". Is it right?
>
>
> migration scripts are in tricircle/db/migrate_repo
>>
>>
>> What is the DB model?
>> Why do we need 2-way-methods at the moment?
>
>
> DB models are defined in tricircle/db/models.py. models.py defines tables at
> the object level, so other modules can import models.py and then operate on
> the tables by operating on the objects. Migration scripts define tables at the
> table level: you define table fields and constraints in the scripts, and the
> migration tool will read the scripts and build the tables.

Dose "models.py" manage database schema(e.g., create / delete columns,
tables, etc)?

> The migration tool has a feature to
> generate migration scripts from the DB models automatically, but it sometimes
> makes mistakes, so currently we manually maintain the table structure in
> both the DB models and the migration scripts.

Is the *migration tool* different from both the DB models and the migration scripts?

>>
>>
>> > so the migration scripts can only be tested when using devstack for
>> > integration tests. It would be better to use the migration scripts to
>> > instantiate the DB, so they are tested in the unit tests too.
>>
>> If I understand you correctly, we are moving forward to using the
>> migration scripts for both unit and integration tests.
>>
>> Cheers,
>> Shinobu
>>
>> >
>> > (Also move the discussion to the openstack-dev mail-list)
>> >
>> > Best Regards
>> > Chaoyi Huang ( joehuang )
>> >
>> > -Original Message-
>> > From: Shinobu Kinjo [mailto:ski...@redhat.com]
>> > Sent: Tuesday, March 22, 2016 7:43 AM
>> > To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei;
>> > Liuhaixia; caizhiyuan (A); huangzhipeng
>> > Subject: Using in-memory database for unit tests
>> >
>> > Hello,
>> >
>> > In "initialize" method defined in "core.py", we're using *in-memory*
>> > strategy making use of sqlite. AFAIK we are using this solution for only
>> > testing purpose. Unit tests using this solution should be fine for small
>> > scale environment. But it's not good enough even it's for testing.
>> >
>> > What do you think?
>> > Any thought, suggestion would be appreciated.
>> >
>> > [1]
>> > https://github.com/openstack/tricircle/blob/master/tricircle/db/core.py#L124-L127
>> >
>> > Cheers,
>> > Shinobu
>> >
>>
>>
>>
>> --
>> Email:
>> shin...@linux.com
>> GitHub:
>> shinobu-x
>> Blog:
>> Life with Distributed Computational System based on OpenSource
>>
>
>
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource



[openstack-dev] [all][oslo] Call for CPL(s) - cross project liaisons

2016-03-21 Thread Joshua Harlow

Hi all,

During today's oslo meeting[1] there were not (have not been) as many 
projects in attendance as I (and I'm sure others) would like.


So I just wanted to send this out to try to gather more oslo liaisons, 
since I know there exist more projects with more folks than attended 
that meeting.


https://wiki.openstack.org/wiki/CrossProjectLiaisons

It'd be great to have more engagement (on both sides) and to get more 
folks to show up to the weekly oslo meeting (even showing up every other 
week would be great too), as it helps the oslo folks plan features, find 
out what's broken (and ...) and be proactive (instead of being reactive).


http://eavesdrop.openstack.org/#Oslo_Team_Meeting

If timing is an issue, please do let me know and we can try to work 
through a different arrangement.


Shout-out for this completed ;)

Thanks folks,

Josh



Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Mike Perez
On 12:33 Mar 21, Doug Hellmann wrote:
> 
> > On Mar 21, 2016, at 12:03 PM, Alexandre Levine  
> > wrote:
> > 
> > Doug,
> > 
> > Let me clarify the situation a bit.
> > Before this February there wasn't such a project at all. The EC2 API was
> > a built-in part of nova, so no dedicated PTL was required. The built-in part
> > got removed and our project got promoted. We're a team of 3 developers
> > who nevertheless have been committed to this support for a year and a half
> > already. The reason I didn't nominate myself is solely because I'm new to
> > the process and I thought that the first cycle would actually start from
> > Mitaka, so I didn't have to bother. I hope it's forgivable, and our ongoing
> > support of the code to make sure it works with both OpenStack and Amazon
> > will make up for it, if only a little.
> 
> Yes, please don't take my original proposal as anything other than me
> suggesting some "clean up" based on me not having all the info about the
> status of EC2. If we need to clarify that all projects are expected to
> participate in elections, that's something we can address. I'll look at
> wording of the existing requirements in the next week or so. If the team has
> a leader, you're all set and I'm happy to support keeping EC2 an official
> team. 

Hope this covers things:

https://review.openstack.org/#/c/295581
https://review.openstack.org/#/c/295609
https://review.openstack.org/#/c/295611

-- 
Mike Perez



Re: [openstack-dev] [magnum] High Availability

2016-03-21 Thread Hongbin Lu
Tim,

Thanks for your advice.  I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit.  However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage.  The 
coupling of Magnum and Barbican will increase the size of the system by a 
factor of two (1 project -> 2 projects), which will significantly increase the 
overall complexity.

- For developers, it incurs significant overhead on development, quality 
assurance, and maintenance.

- For operators, it doubles the effort of deploying and monitoring the system.

- For users, a large system is likely to be unstable and fragile, which affects 
the user experience.

From my point of view, I would like to minimize the system we are going to 
ship, so that we can reduce the maintenance overhead and provide a stable 
system to our users.

I noticed that there are several suggestions to "force" our users to install 
Barbican, with which I respectfully disagree.  Magnum is a young project and we 
are struggling to increase the adoption rate.  I think we need to be nice to 
our users; otherwise, they will choose our competitors (there are container 
services everywhere).  Please understand that we are not a mature project like 
Nova, which has thousands of users.  We really don't have the power to force 
our users to do what they don't want to do.

I also recognize there are several disagreements from the Barbican team.  Per 
my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum.  To address that, I am going to 
propose an idea to achieve the goal without duplicating Barbican.  In 
particular, I suggest adding support for an additional authentication system 
(Keystone in particular) for our Kubernetes bay (and potentially for 
swarm/mesos).  As a result, users can specify how to secure their bay's API 
endpoint:

- TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

- Keystone: This option doesn't require Barbican.  Users will use their 
OpenStack credentials to log into Kubernetes (a rough sketch follows).
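
As a hedged sketch of the flow the Keystone option implies (nothing here is 
implemented yet; the endpoint and credentials are placeholders), the client 
would obtain an ordinary Keystone token and present it to the bay's API 
endpoint, which would validate it against Keystone:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Authenticate with ordinary OpenStack credentials (placeholder values).
    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # The resulting token would be sent to the bay's API endpoint instead of
    # a TLS client certificate stored in Barbican.
    headers = {'X-Auth-Token': sess.get_token()}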

I am going to send another email to the ML describing the details.  You are 
welcome to provide your input.  Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum.  Also, I wonder if Heat has a plan to set a hard 
dependency on Barbican just for protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or a high risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules often 
need more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.

Tim

If you don't like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY.  Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake.  As mentioned in the 
blueprint, Keystone is not encrypting the data, so magnum would be on the hook 
to do it.  That means that if security is a requirement, you'd have to 
duplicate more than just code; magnum would start having a larger security 
burden.  Since we have a system designed to securely store 

Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-21 Thread Vega Cai
Let me try to explain some.

On 22 March 2016 at 10:09, Shinobu Kinjo  wrote:

> On Tue, Mar 22, 2016 at 10:22 AM, joehuang  wrote:
> > Hello, Shinobu,
> >
> > Yes, as you described, the "initialize" in "core.py" is used
> > for unit/functional tests only. For system integration tests (for example,
> > tempest), it would be better to use a MySQL-like DB; this is done by the
> > configuration in the DB part.
>
> Thank you for your thought.
>
> >
> > From my point of view, the tricircle DB part could be enhanced in the DB
> > model and migration scripts. Currently unit tests use the DB model to
> > initialize the database, but not the migration scripts,
>
> I'm assuming the migration scripts are in "tricircle/db". Is it right?
>

migration scripts are in tricircle/db/migrate_repo

>
> What is the DB model?
> Why do we need 2-way-methods at the moment?
>

DB models are defined in tricircle/db/models.py. models.py defines tables
at the object level, so other modules can import models.py and then operate
on the tables by operating on the objects. Migration scripts define tables at
the table level: you define table fields and constraints in the scripts, and
the migration tool will read the scripts and build the tables. The migration
tool has a feature to generate migration scripts from the DB models
automatically, but it sometimes makes mistakes, so currently we manually
maintain the table structure in both the DB models and the migration scripts.
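
As a rough illustration (hypothetical code, not the actual tricircle 
definitions; the "pods" table and its columns are made up for the example), 
the same table described both ways:

    # Object level: a SQLAlchemy declarative model that other modules can
    # import and use to operate on the table through Python objects.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Pod(Base):
        __tablename__ = 'pods'
        pod_id = sa.Column(sa.String(36), primary_key=True)
        pod_name = sa.Column(sa.String(255), unique=True)

    # Table level: a migrate_repo/versions style script that the migration
    # tool runs, spelling out the same fields and constraints explicitly.
    def upgrade(migrate_engine):
        meta = sa.MetaData(bind=migrate_engine)
        pods = sa.Table(
            'pods', meta,
            sa.Column('pod_id', sa.String(36), primary_key=True),
            sa.Column('pod_name', sa.String(255), unique=True))
        pods.create()

Keeping those two definitions in sync is the manual maintenance mentioned 
above.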

>
> > so the migration scripts can only be tested when using devstack for
> > integration tests. It would be better to use the migration scripts to
> > instantiate the DB, so they are tested in the unit tests too.
>
> If I understand you correctly, we are moving forward to using the
> migration scripts for both unit and integration tests.
>
> Cheers,
> Shinobu
>
> >
> > (Also move the discussion to the openstack-dev mail-list)
> >
> > Best Regards
> > Chaoyi Huang ( joehuang )
> >
> > -Original Message-
> > From: Shinobu Kinjo [mailto:ski...@redhat.com]
> > Sent: Tuesday, March 22, 2016 7:43 AM
> > To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei;
> Liuhaixia; caizhiyuan (A); huangzhipeng
> > Subject: Using in-memory database for unit tests
> >
> > Hello,
> >
> > In "initialize" method defined in "core.py", we're using *in-memory*
> strategy making use of sqlite. AFAIK we are using this solution for only
> testing purpose. Unit tests using this solution should be fine for small
> scale environment. But it's not good enough even it's for testing.
> >
> > What do you think?
> > Any thought, suggestion would be appreciated.
> >
> > [1]
> https://github.com/openstack/tricircle/blob/master/tricircle/db/core.py#L124-L127
> >
> > Cheers,
> > Shinobu
> >
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed Computational System based on OpenSource
>


[openstack-dev] [neutron][taas] Asynchronous TaaS APIs

2016-03-21 Thread Anil Rao
The current tap-service-* and tap-flow-* APIs are synchronous. When the call 
completes, a status of success or failure is returned. The problem, though, is 
that this status is returned by the TaaS plugin after it goes through some 
checks and successfully updates the configuration state. One of the operations 
performed by the plugin is to issue tasks to the TaaS agent/driver; however, 
failures encountered by the latter don't get returned to the user. This can 
lead to situations where the configuration state reflects that tap-services 
and/or tap-flows have been successfully created, when in actuality that may 
not be the case.
I think we should adopt an asynchronous model, where we maintain state for 
tap-service and tap-flow objects. Valid states could be "created", 
"create-pending", and "failed." In addition, we will need a suitable mechanism 
for the plugin to extract the current state from the agent/driver and provide 
it to the end user.
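
To make that concrete, here is a minimal sketch of the status values and the 
transition I have in mind (names are illustrative, not an implemented API):

    # Illustrative status values for tap-service/tap-flow objects.
    CREATE_PENDING = 'create-pending'
    CREATED = 'created'
    FAILED = 'failed'

    def on_agent_result(tap_object, success):
        # The plugin returns 'create-pending' to the caller immediately;
        # the agent/driver outcome later moves the object to a final state,
        # which the user discovers by querying the object again.
        if tap_object['status'] == CREATE_PENDING:
            tap_object['status'] = CREATED if success else FAILED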
Another scenario where the asynchronous model with states (as described above) 
will be useful is for TaaS backend implementations that may take a while to 
complete certain operations. In this situation, the front end doesn't need to 
block completely; it can return as soon as the request is successfully handed 
to the agent, or as soon as the config task itself fails. In the former case, 
subsequent queries of the object's state will indicate whether the operation 
has completed, is still pending, or has failed.
Thoughts...?
Anil



Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-21 Thread joehuang
Hi, Shinobu,

See inline comments

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com] 
Sent: Tuesday, March 22, 2016 10:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Liuhaixia; zhangbinsjtu; huangzhipeng; newypei; caizhiyuan (A)
Subject: Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

On Tue, Mar 22, 2016 at 10:22 AM, joehuang  wrote:
> Hello, Shinobu,
>
> Yes, as you described, the "initialize" in "core.py" is used for 
> unit/functional tests only. For system integration tests (for example, 
> tempest), it would be better to use a MySQL-like DB; this is done by the 
> configuration in the DB part.

Thank you for your thought.

>
> From my point of view, the tricircle DB part could be enhanced in the 
> DB model and migration scripts. Currently unit tests use the DB model to 
> initialize the database, but not the migration scripts,

I'm assuming the migration scripts are in "tricircle/db". Is it right?

What is the DB model?
Why do we need 2-way-methods at the moment?

[joehuang]: The migration scripts are in 
https://github.com/openstack/tricircle/tree/master/tricircle/db/migrate_repo/versions;
 these are scripts for table definitions. The DB models are in 
https://github.com/openstack/tricircle/blob/master/tricircle/db/models.py; 
these are classes used to access the tables.
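
For illustration only (assuming a hypothetical Pod model class and an already 
configured SQLAlchemy session), the object-level access looks like:

    # Modules operate on tables through the model classes instead of raw SQL.
    from tricircle.db import models

    pod = session.query(models.Pod).filter_by(pod_name='pod1').first()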

> so the migration scripts can only be tested when using devstack for 
> integration tests. It would be better to use the migration scripts to 
> instantiate the DB, so they are tested in the unit tests too.

If I understand you correctly, we are moving forward to using the migration 
scripts for both unit and integration tests.

[joehuang] Correct. I would quite appreciate it if you can contribute.
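
A rough sketch of that idea, assuming a sqlalchemy-migrate style repository 
(the paths and names are illustrative, not working tricircle code):

    import unittest

    import sqlalchemy
    from migrate.versioning import api as versioning_api

    REPO = 'tricircle/db/migrate_repo'  # illustrative repository path

    class MigrationTestBase(unittest.TestCase):
        def setUp(self):
            super(MigrationTestBase, self).setUp()
            # In-memory sqlite engine; the schema is built by running the
            # migration scripts rather than from the DB models, so the
            # scripts themselves are exercised on every unit test run.
            self.engine = sqlalchemy.create_engine('sqlite://')
            versioning_api.version_control(self.engine, REPO, version=0)
            versioning_api.upgrade(self.engine, REPO)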

Cheers,
Shinobu

>
> (Also move the discussion to the openstack-dev mail-list)
>
> Best Regards
> Chaoyi Huang ( joehuang )
>
> -Original Message-
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent: Tuesday, March 22, 2016 7:43 AM
> To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei; 
> Liuhaixia; caizhiyuan (A); huangzhipeng
> Subject: Using in-memory database for unit tests
>
> Hello,
>
> In "initialize" method defined in "core.py", we're using *in-memory* strategy 
> making use of sqlite. AFAIK we are using this solution for only testing 
> purpose. Unit tests using this solution should be fine for small scale 
> environment. But it's not good enough even it's for testing.
>
> What do you think?
> Any thought, suggestion would be appreciated.
>
> [1] 
> https://github.com/openstack/tricircle/blob/master/tricircle/db/core.p
> y#L124-L127
>
> Cheers,
> Shinobu



--
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource



Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Anita Kuno
On 03/21/2016 09:48 PM, Clark Boylan wrote:
> On Mon, Mar 21, 2016, at 06:37 PM, Assaf Muller wrote:
>> On Mon, Mar 21, 2016 at 9:26 PM, Clark Boylan 
>> wrote:
>>> On Mon, Mar 21, 2016, at 06:15 PM, Assaf Muller wrote:
 On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan 
 wrote:

 If what we want is to cut down execution time I'd suggest to stop
 running Cinder tests on Neutron patches (call it an experiment) and
 see how long it takes for a regression to slip in. Being an
 optimist, I would guess: never!
>>>
>>> Experience has shown about a week, and that it's not an if but a when.
>>
>> I'm really curious how a Neutron patch can screw up Cinder (and how the
>> regression could be missed by the Neutron and Nova tests that interact with
>> Neutron). I guess I wasn't around when this was happening. If anyone
>> could shed historic light on this I'd appreciate it.
> 
> It's not Neutron screwing up Cinder, just the general time to regression when
> the gate stops testing something. We saw it when we stopped testing postgres,
> for example.
> 
 If we're running these tests on Neutron patches solely as a data point
 for performance testing, Tempest is obviously not the tool for the job
 and doesn't provide any added value we can't get from Rally and
 profilers for example. If there's otherwise value for running Cinder
 (And other tests that don't exercise the Neutron API), I'd love to
 know what it is :) I cannot remember any legit Cinder failure on
 Neutron patches.
>>>
>>> I think that is completely the wrong approach to take here. We have caught
>>> a problem in neutron; your goal should be to fix it, not to stop testing
>>> it.
>>
>> You misunderstood my intentions. I'm not saying we should plant our
>> head in the sand and sing until the problem goes away, but I am saying
>> that if we're interested in uncovering performance issues with
>> Neutron's control plane, then there's more effective ways to do so. If
>> you're interested and have the energy, profiling the neutron-server
>> process while running Rally tests is a much better usage of time.
>> Comparing nova-network and Neutron is just not a useful data point.
> 
> The question was why is Neutron CI so slow. Upon investigation I found
> that jobs using nova-net are ~20 minutes faster in one cloud than those
> using neutron. I am not attempting to do performance testing on Neutron
> I am attempting to narrow down where this lost 20 minutes can be found.
> In this case it is a very useful data point. We know we can run these
> tests faster because we have that data. Therefore the assumption is that
> neutron can (and honestly it should) run just as quickly.
> 
> We need these tests for integration testing (at least that's the
> assertion implied by them living in tempest). We also want the jobs to run
> faster (the topic of this thread). Using the data available to us we
> find that the biggest cost in these jobs is the tempest testing itself.
> The best way to make the jobs run quicker is to address the tests
> themselves. Looking at the relative performance of the two solutions
> available to us we find that there is room for improvement in the
> Neutron testing. That's all I am trying to point out. This has nothing to
> do with proper performance testing or running rally and everything to do
> with making the integration tests quicker.
> 
>>> The fact that neutron is much slower in these test cases is an
>>> indication that these tests DO exercise the neutron api and that you do
>>> want to cover these code paths and that you need to address them, not
>>> that you should stop testing them.
>>>
>>> We are not running these tests on neutron solely for performance
>>> testing. In fact to get reasonable performance testing out of it I had
>>> to jump through a few hoops: make tempest run serially then recheck
>>> until the jobs ran in the same cloud more than once. Performance testing
>>> has never been the goal of these tests. These tests exist to make sure
>>> that OpenStack works. Boot from volume is an important piece of this and
>>> we are making sure that OpenStack (this means glance, nova, neutron,
>>> cinder) continue to work for this use case.
> 

I would like to thank Clark, who could have chosen many different tasks
fighting for his attention today, yet chose to focus on getting data for
neutron tests in order to help Rosella and Ihar in their stated goal.

Thank you, Clark,
Anita.


Re: [openstack-dev] [jacket][tricircle] Introduction to jacket, a new project

2016-03-21 Thread joehuang
Thanks for Kevin's explanation. Some more comments on Tricircle.

Hybrid cloud is only one of the use cases; there are two other use cases for 
Tricircle:


1.  Massive distributed edge clouds

The current Internet is good at processing downlink services. All content is 
stored in centralized data centers, and to some extent access is accelerated 
with CDNs.

As more and more user-generated content is uploaded to the cloud and to web 
sites, this content still has to be uploaded to a few big data centers; the 
path is long and the bandwidth is limited and slow. For example, it's very slow 
to upload or stream HD/2K/4K video for every user concurrently: both pictures 
and videos have to be uploaded slowly and with quality loss, so using the cloud 
as the primary storage for user data is not an option yet; currently it's 
mainly used for backup and for non-time-sensitive data. Video captured and 
stored with quality loss can even make it difficult to provide crime evidence 
or serve other purposes. The last mile of network access (fixed or mobile) is 
wide enough; the main hindrance is that bandwidth in the MAN (Metropolitan Area 
Network), the backbone, and the WAN is limited and expensive.

Now, building massive distributed edge clouds in edge data centers (as opposed 
to centralized clouds), with computing and storage close to the end user, is 
emerging. NFV, with its more flexible networking capabilities, will provide 
better personalized networking functionality and also help move computation and 
storage close to the end user. With the shortest path from the end user to the 
storage and computation, the uplink speed can be higher; terminating the 
bandwidth consumption as early as possible will definitely bring a better user 
experience and change the way content is generated and stored: real time, all 
data in the cloud.

VNFs/apps/storage in the edge cloud can provide a better experience for the 
end user, and the movement or distribution of VNFs/apps/storage from one edge 
data center to another is also needed. For example, all my video will be stored 
and processed locally in Hawaii while I am filming during a trip, but I want 
the processed video to be moved to Shenzhen, China when I come back. In 
Shenzhen, I want to share the video via a streaming service not only in 
Shenzhen but also with friends in Shanghai and Beijing, so the data and the 
streaming service can be placed in Shenzhen/Shanghai/Beijing too. For VNFs, a 
VNF designed to be distributed will be placed in multiple edge data centers for 
higher reliability/availability, and multiple VNFs may even be chained across 
edge data centers for better customized networking capabilities.

The emerging massive distributed edge clouds will not just be a set of 
independent clouds; some new requirements arise:

- Tenant-level L2/L3 networking across data centers

- Volume/VM/object storage migration and distribution

- Distributed image management

- Distributed quota management

- ...
This is the job of Tricircle: to work as an OpenStack API gateway to the edge 
clouds and to address the functionality that spans edge cloud sites.

2.  Large scale cloud

Compared to Amazon, the scalability of OpenStack is still not good enough. One 
Amazon AZ can support >5 
servers (http://www.slideshare.net/AmazonWebServices/spot301-aws-innovation-at-scale-aws-reinvent-2014).

Cells is a good enhancement, but its shortcomings are: 1) only Nova supports 
cells; 2) using RPC for inter-datacenter communication makes inter-DC 
troubleshooting difficult; 3) upgrades have to deal with DB and RPC changes; 
4) multi-vendor integration is difficult across different cells.

From the experience of running a large scale public cloud in production, a 
large scale cloud can only be built by expanding capacity step by step 
(intra-AZ and inter-AZ). The challenge in capacity expansion is how to do the 
sizing:

- Number of nova-api servers...

- Number of cinder-api servers...

- Number of neutron-api servers...

- Number of schedulers...

- Number of conductors...

- Specification of physical servers...

- Specification of physical switches...

- Size of storage for images...

- Size of management plane bandwidth...

- Size of data plane bandwidth...

- Reservation of rack space...

- Reservation of networking slots...

- ...
You have to estimate, calculate, monitor, simulate, test, and do online grey 
expansion for controller nodes and network nodes... whenever you add new 
machines to the cloud. The difficulty is that you can't test and verify at 
every size, not to mention >5 servers.

The feasible way to expand a large scale cloud is to add already-tested 
building blocks. That means we would prefer to build a large scale public cloud 
by adding tested OpenStack instances (each including controller and compute 
nodes) one by one, rather than enlarging one OpenStack instance without 
constraint. This keeps the cloud construction under control.

Building a large scale cloud by adding tested OpenStack instances one by one 
will lead 

Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-21 Thread Shinobu Kinjo
On Tue, Mar 22, 2016 at 10:22 AM, joehuang  wrote:
> Hello, Shinobu,
>
> Yes, as you described, the "initialize" in "core.py" is used for 
> unit/functional tests only. For system integration tests (for example, 
> tempest), it would be better to use a MySQL-like DB; this is done by the 
> configuration in the DB part.

Thank you for your thought.

>
> From my point of view, the tricircle DB part could be enhanced in the DB 
> model and migration scripts. Currently unit tests use the DB model to 
> initialize the database, but not the migration scripts,

I'm assuming the migration scripts are in "tricircle/db". Is it right?

What is the DB model?
Why do we need 2-way-methods at the moment?

> so the migration scripts can only be tested when using devstack for 
> integration tests. It would be better to use the migration scripts to 
> instantiate the DB, so they are tested in the unit tests too.

If I understand you correctly, we are moving forward to using the
migration scripts for both unit and integration tests.

Cheers,
Shinobu

>
> (Also move the discussion to the openstack-dev mail-list)
>
> Best Regards
> Chaoyi Huang ( joehuang )
>
> -Original Message-
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent: Tuesday, March 22, 2016 7:43 AM
> To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei; Liuhaixia; 
> caizhiyuan (A); huangzhipeng
> Subject: Using in-memory database for unit tests
>
> Hello,
>
> In "initialize" method defined in "core.py", we're using *in-memory* strategy 
> making use of sqlite. AFAIK we are using this solution for only testing 
> purpose. Unit tests using this solution should be fine for small scale 
> environment. But it's not good enough even it's for testing.
>
> What do you think?
> Any thought, suggestion would be appreciated.
>
> [1] 
> https://github.com/openstack/tricircle/blob/master/tricircle/db/core.py#L124-L127
>
> Cheers,
> Shinobu



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource



Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 06:37 PM, Assaf Muller wrote:
> On Mon, Mar 21, 2016 at 9:26 PM, Clark Boylan 
> wrote:
> > On Mon, Mar 21, 2016, at 06:15 PM, Assaf Muller wrote:
> >> On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan 
> >> wrote:
> >>
> >> If what we want is to cut down execution time I'd suggest we stop
> >> running Cinder tests on Neutron patches (call it an experiment) and
> >> see how long it takes for a regression to slip in. Being an
> >> optimist, I would guess: never!
> >
> > Experience has shown about a week, and that it's not an if but a when.
> 
> I'm really curious how a Neutron patch can screw up Cinder (and how the
> regression could be missed by the Neutron and Nova tests that interact with
> Neutron). I guess I wasn't around when this was happening. If anyone
> could shed historic light on this I'd appreciate it.

It's not Neutron screwing up Cinder, just the general time to regression when
the gate stops testing something. We saw it when we stopped testing postgres,
for example.

> >> If we're running these tests on Neutron patches solely as a data point
> >> for performance testing, Tempest is obviously not the tool for the job
> >> and doesn't provide any added value we can't get from Rally and
> >> profilers for example. If there's otherwise value for running Cinder
> >> (And other tests that don't exercise the Neutron API), I'd love to
> >> know what it is :) I cannot remember any legit Cinder failure on
> >> Neutron patches.
> >
> > I think that is completely the wrong approach to take here. We have caught
> > a problem in neutron; your goal should be to fix it, not to stop testing
> > it.
> 
> You misunderstood my intentions. I'm not saying we should plant our
> head in the sand and sing until the problem goes away, but I am saying
> that if we're interested in uncovering performance issues with
> Neutron's control plane, then there's more effective ways to do so. If
> you're interested and have the energy, profiling the neutron-server
> process while running Rally tests is a much better usage of time.
> Comparing nova-network and Neutron is just not a useful data point.

The question was why is Neutron CI so slow. Upon investigation I found
that jobs using nova-net are ~20 minutes faster in one cloud than those
using neutron. I am not attempting to do performance testing on Neutron
I am attempting to narrow down where this lost 20 minutes can be found.
In this case it is a very useful data point. We know we can run these
tests faster because we have that data. Therefore the assumption is that
neutron can (and honestly it should) run just as quickly.

We need these tests for integration testing (at least that's the
assertion implied by them living in tempest). We also want the jobs to run
faster (the topic of this thread). Using the data available to us we
find that the biggest cost in these jobs is the tempest testing itself.
The best way to make the jobs run quicker is to address the tests
themselves. Looking at the relative performance of the two solutions
available to us we find that there is room for improvement in the
Neutron testing. That's all I am trying to point out. This has nothing to
do with proper performance testing or running rally and everything to do
with making the integration tests quicker.

> > The fact that neutron is much slower in these test cases is an
> > indication that these tests DO exercise the neutron api and that you do
> > want to cover these code paths and that you need to address them, not
> > that you should stop testing them.
> >
> > We are not running these tests on neutron solely for performance
> > testing. In fact to get reasonable performance testing out of it I had
> > to jump through a few hoops: make tempest run serially then recheck
> > until the jobs ran in the same cloud more than once. Performance testing
> > has never been the goal of these tests. These tests exist to make sure
> > that OpenStack works. Boot from volume is an important piece of this and
> > we are making sure that OpenStack (this means glance, nova, neutron,
> > cinder) continue to work for this use case.



[openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-21 Thread Adam Young

I had a good discussion with the Nova folks in IRC today.

My goal was to understand what could talk to what, and the short answer, 
according to dansmith, is:


" any node in nova land has to be able to talk to the queue for any 
other one for the most part: compute->compute, compute->conductor, 
conductor->compute, api->everything. There might be a few exceptions, 
but not worth it, IMHO, in the current architecture."


Longer conversation is here:
 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27

Right now, the message queue is a nightmare.  All sorts of sensitive 
information flows over the message queue: tokens (including admin tokens) are 
the most obvious, plus every piece of audit data, all notifications, and all 
control messages.


Before we continue down the path of "anything can talk to anything", can 
we please map out what needs to talk to what, and why?  Many of the use 
cases seem to be based on something that should be kicked off by the 
conductor, such as migrate, resize, and live-migrate, and it sounds like 
there are plans to make that happen.


So, let's assume we can get to the point where, if node 1 needs to talk 
to node 2, it will do so only via the conductor.  With that in place, we 
can put access control rules in place:


1.  Compute nodes can only read from the queue 
compute.-novacompute-.localdomain.

2.  Compute nodes can only write to response queues in the RPC vhost.
3.  Compute nodes can only write to notification queues in the 
notification vhost.


I know that with AMQP we should be able to identify the writer of a 
message.  This means that each compute node should have its own user.  I 
have identified how to do that for Rabbit and Qpid.  I assume for 0mq it 
would make sense to use ZAP (http://rfc.zeromq.org/spec:27), but I'd 
rather the 0mq maintainers chime in here.
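
For illustration, a minimal pika sketch (the user name, password, host, and 
vhost names are all hypothetical) of a compute node connecting to the RPC 
vhost as its own broker user:

    import pika

    # Each compute node gets its own broker credentials, so the broker can
    # identify, and restrict, the writer of every message.
    credentials = pika.PlainCredentials('compute-node1', 'node1-secret')
    params = pika.ConnectionParameters(
        host='broker.example.com',
        virtual_host='/rpc',  # kept separate from the notification vhost
        credentials=credentials)
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    # Broker-side permissions would then limit this user to reading its own
    # compute queue and writing only to reply queues.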


I think it is safe (and sane) to have the same user on the compute node 
communicate with Neutron, Nova, and Ceilometer.  This will avoid a 
false sense of security: if one is compromised, they are all going to be 
compromised.  Plan accordingly.


Beyond that, we should have message broker users for each of the 
components that are clients of the broker.


Applications that run on top of the cloud, and that do not get presence 
on the compute nodes, should have their own vhost.  I see Sahara on my 
TripleO deploy, but I assume there are others.  Either they each get 
their own vhost completely, or the apps should share one separate from 
the RPC/notification vhosts we currently have.  Even Heat might fall into 
this category.


Note that those application users can be allowed to read from the 
notification queues if necessary.  They just should not be using the 
same vhost for their own traffic.


Please tell me if/where I am blindingly wrong in my analysis.







Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Assaf Muller
On Mon, Mar 21, 2016 at 9:26 PM, Clark Boylan  wrote:
> On Mon, Mar 21, 2016, at 06:15 PM, Assaf Muller wrote:
>> On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan 
>> wrote:
>> > On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> >> On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> >> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> >> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> >> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>> >>  Do you have a better insight into job runtimes vs. jobs in other
>> >>  projects?
>> >>  Most of the time in the job runtime is actually spent setting the
>> >>  infrastructure up, and I am not sure we can do anything about it,
>> >>  unless we take this up with Infra.
>> >> >>>
>> >> >>> I haven't done a comparison yet, but let's break down the runtime of a
>> >> >>> recent successful neutron full run against neutron master [0].
>> >> >>
>> >> >> And now for some comparative data from the gate-tempest-dsvm-full job
>> >> >> [0]. This job also ran against a master change that merged and ran in
>> >> >> the same cloud and region as the neutron job.
>> >> >>
>> >> > snip
>> >> >> Generally each step of this job was quicker. There were big differences
>> >> >> in devstack and tempest run time though. Is devstack much slower to
>> >> >> setup neutron when compared to nova net? For tempest it looks like we
>> >> >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> >> >> may account for the large difference there. I also recall that we run
>> >> >> ipv6 tempest tests against neutron deployments that were inefficient 
>> >> >> and
>> >> >> booted 2 qemu VMs per test (not sure if that is still the case but
>> >> >> illustrates that the tests themselves may not be very quick in the
>> >> >> neutron case).
>> >> >
>> >> > Looking at the tempest slowest tests output for each of these jobs
>> >> > (neutron and nova net) some tests line up really well across jobs and
>> >> > others do not. In order to get a better handle on the runtime for
>> >> > individual tests I have pushed https://review.openstack.org/295487 which
>> >> > will run tempest serially reducing the competition for resources between
>> >> > tests.
>> >> >
>> >> > Hopefully the subunit logs generated by this change can provide more
>> >> > insight into where we are losing time during the tempest test runs.
>> >
>> > The results are in, we have gate-tempest-dsvm-full [0] and
>> > gate-tempest-dsvm-neutron-full [1] job results where tempest ran
>> > serially to reduce resource contention and provide accurateish per test
>> > timing data. Both of these jobs ran on the same cloud so should have
>> > comparable performance from the underlying VMs.
>> >
>> > gate-tempest-dsvm-full
>> > Time spent in job before tempest: 700 seconds
>> > Time spent running tempest: 2428 seconds
>> > Tempest tests run: 1269 (113 skipped)
>> >
>> > gate-tempest-dsvm-neutron-full
>> > Time spent in job before tempest: 789 seconds
>> > Time spent running tempest: 4407 seconds
>> > Tempest tests run: 1510 (76 skipped)
>> >
>> > All times above are wall time as recorded by Jenkins.
>> >
>> > We can also compare the 10 slowest tests in the non neutron job against
>> > their runtimes in the neutron job. (note this isn't a list of the top 10
>> > slowest tests in the neutron job because that job runs extra tests).
>> >
>> > nova net job
>> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
>> >   85.232
>> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
>> > 83.319
>> > tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
>> >  50.338
>> > tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
>> > 43.494
>> > tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
>> > 40.225
>> > tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
>> >39.653
>> > tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete
>> > 37.720
>> > tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete
>> > 36.355
>> > tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped
>> >27.375
>> > tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks
>> > 27.025
>> >
>> > neutron job
>> > 

Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 06:15 PM, Assaf Muller wrote:
> On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan 
> wrote:
> > On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
> >> On 03/21/2016 04:09 PM, Clark Boylan wrote:
> >> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
> >> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
> >> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
> >>  Do you have a better insight into job runtimes vs. jobs in other
> >>  projects?
> >>  Most of the time in the job runtime is actually spent setting the
> >>  infrastructure up, and I am not sure we can do anything about it,
> >>  unless we take this up with Infra.
> >> >>>
> >> >>> I haven't done a comparison yet, but let's break down the runtime of a
> >> >>> recent successful neutron full run against neutron master [0].
> >> >>
> >> >> And now for some comparative data from the gate-tempest-dsvm-full job
> >> >> [0]. This job also ran against a master change that merged and ran in
> >> >> the same cloud and region as the neutron job.
> >> >>
> >> > snip
> >> >> Generally each step of this job was quicker. There were big differences
> >> >> in devstack and tempest run time though. Is devstack much slower to
> >> >> setup neutron when compared to nova net? For tempest it looks like we
> >> >> run ~1510 tests against neutron and only ~1269 against nova net. This
> >> >> may account for the large difference there. I also recall that we run
> >> >> ipv6 tempest tests against neutron deployments that were inefficient and
> >> >> booted 2 qemu VMs per test (not sure if that is still the case but
> >> >> illustrates that the tests themselves may not be very quick in the
> >> >> neutron case).
> >> >
> >> > Looking at the tempest slowest tests output for each of these jobs
> >> > (neutron and nova net) some tests line up really well across jobs and
> >> > others do not. In order to get a better handle on the runtime for
> >> > individual tests I have pushed https://review.openstack.org/295487 which
> >> > will run tempest serially reducing the competition for resources between
> >> > tests.
> >> >
> >> > Hopefully the subunit logs generated by this change can provide more
> >> > insight into where we are losing time during the tempest test runs.
> >
> > The results are in, we have gate-tempest-dsvm-full [0] and
> > gate-tempest-dsvm-neutron-full [1] job results where tempest ran
> > serially to reduce resource contention and provide accurateish per test
> > timing data. Both of these jobs ran on the same cloud so should have
> > comparable performance from the underlying VMs.
> >
> > gate-tempest-dsvm-full
> > Time spent in job before tempest: 700 seconds
> > Time spent running tempest: 2428 seconds
> > Tempest tests run: 1269 (113 skipped)
> >
> > gate-tempest-dsvm-neutron-full
> > Time spent in job before tempest: 789 seconds
> > Time spent running tempest: 4407 seconds
> > Tempest tests run: 1510 (76 skipped)
> >
> > All times above are wall time as recorded by Jenkins.
> >
> > We can also compare the 10 slowest tests in the non neutron job against
> > their runtimes in the neutron job. (note this isn't a list of the top 10
> > slowest tests in the neutron job because that job runs extra tests).
> >
> > nova net job
> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
> >   85.232
> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
> > 83.319
> > tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
> >  50.338
> > tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
> > 43.494
> > tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
> > 40.225
> > tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
> >39.653
> > tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete
> > 37.720
> > tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete
> > 36.355
> > tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped
> >27.375
> > tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks
> > 27.025
> >
> > neutron job
> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
> >  110.345
> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
> >   

Re: [openstack-dev] [neutron] BGP Dynamic Routing Development Going Forward

2016-03-21 Thread Mickey Spiegel
We are very interested in spinning out BGP Dynamic Routing into a separate 
stadium project. We are ready and willing to help make this happen soon.

Mickey


-Vikram Choudhary  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 

From: Vikram Choudhary 
Date: 03/21/2016 02:36AM
Subject: Re: [openstack-dev] [neutron] BGP Dynamic Routing Development Going
Forward

Hi All,

I would like to reopen this mail thread for the 'N' release. It's really great 
that we were able to ship the 'BGP Dynamic Routing' functionality as part of 
Mitaka. Thanks to everyone who contributed and helped make this a reality ;)

For the 'N' cycle, a couple of RFEs [1] are already being lined up. In this 
regard, we would like to hear from the community on the best way to move 
forward:

1. Spun out 'BGP Dynamic Routing functionality' as a separate stadium project?
(FYI: We already initiated this during Mitaka via [2] & [3])

2. Keep developing in the neutron repo until we reach a decision?
(As we have done in Mitaka so far)

[1] https://bugs.launchpad.net/neutron/+bugs?field.tag=l3-bgp
[2] https://review.openstack.org/#/c/268726/
[3] https://review.openstack.org/#/c/268727/

Thanks
Vikram


On Tue, Jan 26, 2016 at 3:33 AM, Armando M.  wrote:


On 25 January 2016 at 08:23, Tidwell, Ryan  wrote:
  
 
 Responses inline
  
 From: Gal Sagie [mailto:gal.sa...@gmail.com] 
 Sent: Friday, January 22, 2016 9:49 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] BGP Dynamic Routing Development Going 
Forward
  
 
The real question that needs to be asked (at least for me) is how this feature 
can work with other plugins/ML2 drivers that are not the reference 
implementation.
  
  
 -  Regardless of the ML2 drivers you use, ML2 is 
supported with the reference implementation.  The code we have only works with 
ML2 though, which is a concern for putting this in the main repo.
  
  
How hard is it (if at all possible) to take the API part (or maybe even the agent) 
and use that in another Neutron implementation?
 
Then focus on whichever option works best to achieve this.
  
 -  The agent is actually very portable in my opinion.  The server-side 
code is not so portable; as mentioned above, only ML2 is supported.  Identifying 
next-hops is done by querying the DB, and it's hard to make that portable between 
plugins.
  
  
  
 I personally think that if the long term goal is to have this in a separate 
repo then this should happen right now.
  
 "We will do this later" just won't work, it will be harder and it will just 
not happen (or it will cause a lot of pain to people
  
 that started deploying this)
  
 At least thats my opinion, of course it depends a lot on the people who 
actually work on this...
  
 -  I completely agree, which is why I'm not too excited 
about deferring a split.  It doesn't really set us back in our development 
efforts to move out to a separate repo.  We're quickly closing in on being 
functionally complete, and this code peels out of the main repo rather cleanly, 
so I feel we really lose nothing by moving out of the main repo 
immediately if that's the direction we go for the long haul.  As you point out, 
it saves users some pain during a future upgrade.

In my humble opinion, you should let yourselves be guided by the ones who have 
the most hands-on experience with the Neutron codebase. By all means, we do 
make mistakes, but we're the ones who have been dealing with the hurdles caused 
by those mistakes. If we advised you on a strategy, then this strategy is most 
likely the direct consequence of a past/ongoing experience; if you continue 
ignoring this simple fact in your judgement, then this discussion is pointless.

  
  
  
 Gal.
  
 
  
 
 On Sat, Jan 23, 2016 at 2:15 AM, Vikram Choudhary  wrote:
 I agree with Armando and feel option 2 would be viable if we really want to 
deliver this feature in the Mitaka time frame. Adding a new stadium project invites 
more work and can be done in the N release.
 Thanks 
 Vikram 
 
 
 
 On Jan 22, 2016 11:47 PM, "Armando M."  wrote:
  
  
 
  
 
 On 22 January 2016 at 08:57, Tidwell, Ryan  wrote:
  
 
I wanted to raise the question of whether to develop BGP dynamic routing in 
the Neutron repo or spin it out as a stadium project.  This question has 
been raised recently on reviews and in offline discussions.  For those 
unfamiliar with this work, BGP efforts in Neutron entail admin-only APIs for 
configuring and propagating BGP announcements of next-hops for floating IPs, 
tenant networks, and host routes for each compute port when using DVR.  As we 
are getting late in the Mitaka cycle, I would like to be sure there is 
consensus on 

Re: [openstack-dev] [tricircle] Using in-memory database for unit tests

2016-03-21 Thread joehuang
Hello, Shinobu,

Yes, as you described, the "initialize" in "core.py" is used for 
unit/functional tests only. For system integration tests (for example, tempest), 
it is better to use a MySQL-like DB; this is done by the configuration in 
the DB part.

From my point of view, the tricircle DB part could be enhanced in the DB model 
and migration scripts. Currently the unit tests use the DB model to initialize 
the database rather than the migration scripts, so the migration scripts can only 
be tested when using devstack for integration testing. It would be better to use 
the migration scripts to instantiate the DB, so that they are exercised by the 
unit tests too.

(Also move the discussion to the openstack-dev mail-list)

Best Regards
Chaoyi Huang ( joehuang )

-Original Message-
From: Shinobu Kinjo [mailto:ski...@redhat.com] 
Sent: Tuesday, March 22, 2016 7:43 AM
To: joehuang; khayam.gondal; zhangbinsjtu; shipengfei92; newypei; Liuhaixia; 
caizhiyuan (A); huangzhipeng
Subject: Using in-memory database for unit tests

Hello,

In "initialize" method defined in "core.py", we're using *in-memory* strategy 
making use of sqlite. AFAIK we are using this solution for only testing 
purpose. Unit tests using this solution should be fine for small scale 
environment. But it's not good enough even it's for testing.

What do you think?
Any thought, suggestion would be appreciated.

[1] 
https://github.com/openstack/tricircle/blob/master/tricircle/db/core.py#L124-L127

Cheers,
Shinobu


Re: [openstack-dev] [puppet] Prefetching user and user_roles resources with domain-specific conf is failing.

2016-03-21 Thread Adam Young

On 03/21/2016 09:34 AM, Sofer Athlan-Guyot wrote:

Hi,

we have a big problem when using domain-specific configuration.  The
listing of all users is not supported by keystone when it's used[1][2].

What this means is that the prefetch method in keystone_user won't work; 
more specifically, the instances method will fail.

This poses a problem for the keystone_user_role, as the user instances
method is called there too.

The missing bit when domain-specific configuration is used is that the
operator must specify the domain in the command line options.

As I see it there are three ways to approach this:

  - iterate over all domains and keep the same behavior as now (see the
sketch at the end of this message);
  - detect somehow that domain-specific configuration is used and hack
both instances methods to add domain options;
  - remove prefetch from keystone_user and keystone_user_role (kinda gets
my preference, see below)


So, listing all users is, in general, a bad idea.  I assume the reason 
is to find a user ID from a user name?


Would better support from Keystone server make a difference here?


The problem I see with the first two methods depends on the usual use
case of the domain specific configuration.

From what I understand, this would mainly be used to connect to existing
LDAP servers, most likely large AD ones.  If that's the case then we will have
the same problem that the keystone people have seen, i.e. a very big list of
people, most of them unrelated to what is happening.  We would then run the
risk that:
  - keystone fails;
  - the puppet process would be slowed down significantly.

So listing all users in this case seems like a very bad idea.  As I
don't see a way to disable prefetching dynamically when domain-specific
configuration is used (maybe this has to be dug into?), I tend to favor
removing this from keystone_user and keystone_user_role.
Keystone_user_role is the main problem here, as it requires a lot of calls
to be built, and prefetching helps there.

I don't see an obviously best solution to this problem, so I'd like more
input about the right course of action.

Note: it was first noticed by Matthew J Black, who opened this bug
report [3] and started working on a fix here [4].

[1] 
https://github.com/openstack/keystone/blob/master/doc/source/configuration.rst 
(look for domain-specific)
[2] https://bugs.launchpad.net/keystone/+bug/1555629
[3] https://bugs.launchpad.net/puppet-keystone/+bug/1554555
[4] https://review.openstack.org/#/c/289995/
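
For context, option 1 above (iterate over all domains) would look roughly like
the following against the keystone v3 API with python-keystoneclient. This is
only a sketch of the idea, not the puppet provider code, and the endpoint and
credentials are placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Listing users without a domain fails when domain-specific driver
    # configuration is in use, so scope every list call to one domain.
    for domain in keystone.domains.list():
        for user in keystone.users.list(domain=domain):
            print('%s %s %s' % (domain.name, user.name, user.id))

The downside is exactly the one described above: on a deployment backed by a
large LDAP/AD tree this enumerates every user in every domain.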





Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Assaf Muller
On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan  wrote:
> On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>>  Do you have an a better insight of job runtimes vs jobs in other
>>  projects?
>>  Most of the time in the job runtime is actually spent setting the
>>  infrastructure up, and I am not sure we can do anything about it, unless
>>  we
>>  take this with Infra.
>> >>>
>> >>> I haven't done a comparison yet buts lets break down the runtime of a
>> >>> recent successful neutron full run against neutron master [0].
>> >>
>> >> And now for some comparative data from the gate-tempest-dsvm-full job
>> >> [0]. This job also ran against a master change that merged and ran in
>> >> the same cloud and region as the neutron job.
>> >>
>> > snip
>> >> Generally each step of this job was quicker. There were big differences
>> >> in devstack and tempest run time though. Is devstack much slower to
>> >> setup neutron when compared to nova net? For tempest it looks like we
>> >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> >> may account for the large difference there. I also recall that we run
>> >> ipv6 tempest tests against neutron deployments that were inefficient and
>> >> booted 2 qemu VMs per test (not sure if that is still the case but
>> >> illustrates that the tests themselves may not be very quick in the
>> >> neutron case).
>> >
>> > Looking at the tempest slowest tests output for each of these jobs
>> > (neutron and nova net) some tests line up really well across jobs and
>> > others do not. In order to get a better handle on the runtime for
>> > individual tests I have pushed https://review.openstack.org/295487 which
>> > will run tempest serially reducing the competition for resources between
>> > tests.
>> >
>> > Hopefully the subunit logs generated by this change can provide more
>> > insight into where we are losing time during the tempest test runs.
>
> The results are in, we have gate-tempest-dsvm-full [0] and
> gate-tempest-dsvm-neutron-full [1] job results where tempest ran
> serially to reduce resource contention and provide accurateish per test
> timing data. Both of these jobs ran on the same cloud so should have
> comparable performance from the underlying VMs.
>
> gate-tempest-dsvm-full
> Time spent in job before tempest: 700 seconds
> Time spent running tempest: 2428
> Tempest tests run: 1269 (113 skipped)
>
> gate-tempest-dsvm-neutron-full
> Time spent in job before tempest: 789 seconds
> Time spent running tempest: 4407 seconds
> Tempest tests run: 1510 (76 skipped)
>
> All times above are wall time as recorded by Jenkins.
>
> We can also compare the 10 slowest tests in the non neutron job against
> their runtimes in the neutron job. (note this isn't a list of the top 10
> slowest tests in the neutron job because that job runs extra tests).
>
> nova net job
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
>   85.232
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
> 83.319
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
>  50.338
> tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
> 43.494
> tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
> 40.225
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
>39.653
> tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete
> 37.720
> tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete
> 36.355
> tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped
>27.375
> tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks
> 27.025
>
> neutron job
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
>  110.345
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
>108.170
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
>  63.852
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
>  

Re: [openstack-dev] [Congress] HOL test completed

2016-03-21 Thread Tim Hinrichs
Fantastic! We have our rc1 request in for the same SHA. Hopefully that will
end up being Mitaka.

Tim
On Mon, Mar 21, 2016 at 4:48 PM Eric K  wrote:

> Happy to report that everything in Congress HOL* works as expected after
> all the merges today**.
>
> *
> https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
> **commit 5e2c7cde598fcb4ed781a211fb421a5e94afb406


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
> On 03/21/2016 04:09 PM, Clark Boylan wrote:
> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote: 
>  Do you have an a better insight of job runtimes vs jobs in other
>  projects?
>  Most of the time in the job runtime is actually spent setting the
>  infrastructure up, and I am not sure we can do anything about it, unless
>  we
>  take this with Infra.
> >>>
> >>> I haven't done a comparison yet buts lets break down the runtime of a
> >>> recent successful neutron full run against neutron master [0].
> >>
> >> And now for some comparative data from the gate-tempest-dsvm-full job
> >> [0]. This job also ran against a master change that merged and ran in
> >> the same cloud and region as the neutron job.
> >>
> > snip
> >> Generally each step of this job was quicker. There were big differences
> >> in devstack and tempest run time though. Is devstack much slower to
> >> setup neutron when compared to nova net? For tempest it looks like we
> >> run ~1510 tests against neutron and only ~1269 against nova net. This
> >> may account for the large difference there. I also recall that we run
> >> ipv6 tempest tests against neutron deployments that were inefficient and
> >> booted 2 qemu VMs per test (not sure if that is still the case but
> >> illustrates that the tests themselves may not be very quick in the
> >> neutron case).
> > 
> > Looking at the tempest slowest tests output for each of these jobs
> > (neutron and nova net) some tests line up really well across jobs and
> > others do not. In order to get a better handle on the runtime for
> > individual tests I have pushed https://review.openstack.org/295487 which
> > will run tempest serially reducing the competition for resources between
> > tests.
> > 
> > Hopefully the subunit logs generated by this change can provide more
> > insight into where we are losing time during the tempest test runs.

The results are in, we have gate-tempest-dsvm-full [0] and
gate-tempest-dsvm-neutron-full [1] job results where tempest ran
serially to reduce resource contention and provide accurateish per test
timing data. Both of these jobs ran on the same cloud so should have
comparable performance from the underlying VMs.

gate-tempest-dsvm-full
Time spent in job before tempest: 700 seconds
Time spent running tempest: 2428 seconds
Tempest tests run: 1269 (113 skipped)

gate-tempest-dsvm-neutron-full
Time spent in job before tempest: 789 seconds
Time spent running tempest: 4407 seconds
Tempest tests run: 1510 (76 skipped)

All times above are wall time as recorded by Jenkins.
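
Some back-of-the-envelope arithmetic on the numbers above (a quick sketch
using only the figures quoted in this thread) suggests the neutron job is
slower per test, not just running more tests:

    # Figures from the serialized runs above.
    jobs = {
        'gate-tempest-dsvm-full':         {'setup': 700.0, 'tempest': 2428.0, 'tests': 1269},
        'gate-tempest-dsvm-neutron-full': {'setup': 789.0, 'tempest': 4407.0, 'tests': 1510},
    }
    for name, job in jobs.items():
        print('%s: %.2f s/test, tempest is %.0f%% of wall time'
              % (name, job['tempest'] / job['tests'],
                 100.0 * job['tempest'] / (job['setup'] + job['tempest'])))

That works out to roughly 1.9 s/test for nova net versus 2.9 s/test for
neutron, so the extra tests alone do not explain the gap.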

We can also compare the 10 slowest tests in the non neutron job against
their runtimes in the neutron job. (note this isn't a list of the top 10
slowest tests in the neutron job because that job runs extra tests).

nova net job
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  85.232
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
83.319
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
 50.338
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
43.494
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
40.225
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
   39.653
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete
37.720
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete
36.355
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped
   27.375
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks
27.025

neutron job
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
 110.345
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
   108.170
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance
 63.852
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
   59.931
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
57.835

Re: [openstack-dev] [networking-ovn][ovn4nfv]

2016-03-21 Thread Russell Bryant
On Mon, Mar 21, 2016 at 5:18 PM, John McDowall <
jmcdow...@paloaltonetworks.com> wrote:

> All,
>
> As a VNF vendor we have been looking at ways to enable customers to simply
> scale up (and down) VNF’s in complex virtual networks at scale. Our goal
> is to help accelerate the deployment of SDN and VNF’s and more
> specifically enable zero-trust security at scale for applications.  This
> requires the easy and fast deployment of Next Generation Firewalls (and
> other VNFs) into the traffic path of any application.
>

Thanks a lot for your work on this!  I see you posted the same message
over on the OVS discuss mailing list.  I think the current work is probably
more relevant to that list, so I'm going to respond over there.

-- 
Russell Bryant


Re: [openstack-dev] [fuel] FFE for fuel-openstack-tasks and fuel-remove-conflict-openstack

2016-03-21 Thread Andrew Woodward
I've mocked up the change to the implementation, using the already landed
changes to ceph as an example:

https://review.openstack.org/295571

On Mon, Mar 21, 2016 at 3:44 PM Andrew Woodward  wrote:

> We had originally planned for the FFEs for both fuel-openstack-tasks [1]
> and fuel-remove-conflict-openstack [2] to close on 3/20. This would have
> placed them before changes that conflict with
> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3].
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089028.html
>
> However, we found this morning that the changes from [2], and more of issue
> [1], will result in further issues such as [4]: as the task files
> move, any task that explicitly relied on them is no longer at the same
> path.
>
> [4] https://review.openstack.org/#/c/295170/
>
> Due to this newly identified issue with backwards compatibility, it
> appears that [4] shows that we have plugins using interfaces that we don't
> have formal coverage for, so if we introduce this set of changes, we will
> cause breakage for plugins that use fuel's current tasks.
>
> From a deprecation standpoint we don't have a way to deal with this,
> unless  fuel-openstack-tasks [1] lands after
> fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3]. In this
> case we can take advantage of the class include stubs, leaving a copy in
> the old location (osnailyfacter/modular/roles/compute.pp) pointing to the
> new include location (include openstack_tasks::roles::compute) and adding a
> warning for deprecation. The tasks includes in the new location
> openstack_tasks/examples/roles/compute.pp would simply include the updated
> class location w/o the warning.
>
> This would take care of [1] and its review [5]
>
> [5] https://review.openstack.org/283332
>
> This still leaves [2] un-addressed, we still have 3 open CR for it:
>
> [6] Compute https://review.openstack.org/285567
> [7] Cinder https://review.openstack.org/294736
> [8] Swift https://review.openstack.org/294979
>
> Compute [6] is in good shape, while Cinder [7] and Swift [8] are not. For
> these do we want to continue to land them, if so what do we want to do
> about the now deprecated openstack:: tasks? We could leave them in place
> with a warning since we would not be using them
>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


[openstack-dev] [Congress] HOL test completed

2016-03-21 Thread Eric K
Happy to report that everything in Congress HOL* works as expected after all
the merges today**.

*https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
**commit 5e2c7cde598fcb4ed781a211fb421a5e94afb406




Re: [openstack-dev] [kolla] branching Mitaka March 21, 2016 (important note for CRs)

2016-03-21 Thread Steven Dake (stdake)


From: Steven Dake
Reply-To: "openstack-dev@lists.openstack.org"
Date: Monday, March 21, 2016 at 9:02 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [kolla] branching Mitaka March 21, 2016 (important 
note for CRs)

Hey folks,

Typically all projects branch at rc1, but originally because we had a mountain 
of blueprints and bugs after mitaka-3 was released, I sent an email indicating 
we would branch at rc2.

Unfortunately branching at rc2 is not a viable plan, because we are gating the 
master code base of OpenStack with Mitaka deployment tooling.  Because this 
could result in us finding master bugs in the mitaka codebase, we are branching 
now, as soon as we believe the branch is fairly stable, which I hope is today :).

IMPORTANT for Core reviewers,
If any patch until the release of Mitaka has a TrivialFix flag, please -1 the 
patch.  All patches need a bug id until Mitaka is released to properly handle 
backports into the mitaka branch.  If it's not in the tracker and targeted for 
rc2, it won't be backported.  If it is in the tracker and targeted for rc2, it 
will be backported.  This includes tardy blueprint work.

Since sharing the load on backports hasn't worked in the past, I will take full 
responsibility for cherry-picking and backporting all bug fixes until Mitaka 
releases.  Any patch without a bug ID WILL NOT BE BACKPORTED - so please don't 
approve TrivialFix changes.  Documentation changes are ok without any special 
tagging.

Regards,
-steve


Master has been branched into stable/mitaka.  Master is open for business for 
new work.

I'd like to remind everyone that people count on us to deliver a working, bug-free 
Mitaka milestone.  As such, I'd super appreciate the team staying focused on 
fixing the current 58 bugs in Mitaka rc2.  I am considering tagging an rc2 this 
Friday, March 25th, to lessen the number of bugs we have to specifically test in 
rc3, which would still tag on March 31-April 2nd.

If anyone is opposed to this tagging plan of releasing an rc2 and rc3, speak up 
now, otherwise that is how I will proceed.

We will be dependent on other projects releasing their final Mitaka milestone, 
because other projects we import may not have released their milestone at the 
exact April 8th deadline.  For this reason I'd ask folks to be aware we may 
have a bit more time than other projects to release Mitaka.  The latest I'd 
like to tag Mitaka is April 15th, but we will tag as soon as every dependent 
project we ship has a release milestone that follows the release cycle.  This 
will allow us to release with a functional build-from-source and build-from-binary 
model for CentOS, Ubuntu, and Oracle Linux.

I'd like to take a moment to thank the tremendous work of everyone in the 
community thus far for cranking out so much work in this 6 month cycle.  Over 
1000 commits - completely fantastic!  Let us work together to finish the job on 
Mitaka so Operators can feel comfortable deploying Mitaka with Kolla 2.0.0 as 
soon as we tag.

Regards
-steve



Re: [openstack-dev] [stable] Proposing Tony Breeds for stable-maint-core

2016-03-21 Thread Tony Breeds
On Mon, Mar 21, 2016 at 03:47:38PM -0500, Matt Riedemann wrote:

> Given the responses, Tony is now part of the stable-maint-core group.
> Congratulations and thanks again for the hard work Tony!

Thanks Matt.

And thanks to everyone that supported me.

The support and positive attitude of this community is fantastic.  It's one of
the things that allows OpenStack to be great.

Yours Tony.




[openstack-dev] [fuel] FFE for fuel-openstack-tasks and fuel-remove-conflict-openstack

2016-03-21 Thread Andrew Woodward
We had originally planned for the FFEs for both fuel-openstack-tasks [1] and
fuel-remove-conflict-openstack [2] to close on 3/20. This would have
placed them before changes that conflict with
fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3].

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089028.html

However, we found this morning that the changes from [2], and more of issue
[1], will result in further issues such as [4]: as the task files
move, any task that explicitly relied on them is no longer at the same
path.

[4] https://review.openstack.org/#/c/295170/

Due to this newly identified issue with backwards compatibility, it appears
that [4] shows we have plugins using interfaces that we don't have
formal coverage for, so if we introduce this set of changes, we will cause
breakage for plugins that use fuel's current tasks.

From a deprecation standpoint we don't have a way to deal with this unless
fuel-openstack-tasks [1] lands after
fuel-refactor-osnailyfacter-for-puppet-master-compatibility [3]. In that
case we can take advantage of the class include stubs, leaving a copy in
the old location (osnailyfacter/modular/roles/compute.pp) pointing to the
new include location (include openstack_tasks::roles::compute) and adding a
deprecation warning. The task includes in the new location
(openstack_tasks/examples/roles/compute.pp) would simply include the updated
class location without the warning.
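
The stub-and-warn pattern itself is generic; transposed to Python purely for
illustration (the real change is a Puppet manifest, and the file and module
names here just mirror the paths above):

    # old_location/compute.py -- deprecation stub left at the old path.
    import warnings

    warnings.warn(
        'osnailyfacter/modular/roles/compute.pp is deprecated; '
        'use openstack_tasks::roles::compute instead',
        DeprecationWarning)

    # The stub carries no logic of its own: it only pulls in the task from
    # its new home, the way the old .pp file would do
    # "include openstack_tasks::roles::compute".
    from openstack_tasks.roles.compute import *  # noqa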

This would take care of [1] and its review [5].

[5] https://review.openstack.org/283332

This still leaves [2] un-addressed, we still have 3 open CR for it:

[6] Compute https://review.openstack.org/285567
[7] Cinder https://review.openstack.org/294736
[8] Swift https://review.openstack.org/294979

Compute [6] is in good shape, while Cinder [7] and Swift [8] are not. For
these, do we want to continue to land them? If so, what do we want to do
about the now-deprecated openstack:: tasks? We could leave them in place
with a warning, since we would not be using them.

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


Re: [openstack-dev] [Fuel] [shotgun] New shotgun2 command: short-report

2016-03-21 Thread Alex Schultz
On Mon, Mar 21, 2016 at 3:15 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Alex,
>
> >This should have been created before removing the thing providing this
> information previously.
>
> Once again, using version.yaml you could NOT reproduce the env, because
> what was really installed on the Fuel node had nothing in common with what
> it was written in version.yaml. The information you think was actual was,
> in fact, not actual.
>


So it could be; that's the difference. Yes, it could be out of date for long-running
environments where people have updated/upgraded. But in the case of
BVT failures or testing, it was more likely than not to be correct.  From
my standpoint as a developer working on non-released versions where the
packages were not likely to be updated, this information was used to work
on pre-release issues. Look, once again, the choice to remove this
information had impacts far beyond what was understood when the choice was
made. Fine, we're dealing with it now and hopefully we can
improve things going forward.  I'm just asking for more information about
what people intend with these types of tools so that others can
understand how to use them.



>
> Having 'shotgun report' output you can see real SHA sums, not ephemeral
> (like in version.yaml).
>
> And without removal version.yaml we could not implement some features in
> make system, so, this new build/test package based approach could not have
> been implemented before version.yaml removal. We are moving towards better
> developer experience NOT making things worse. In fact version.yaml removal
> and substitution it with 'shotgun report' was a fix for broken version.yaml
> content.
>

I've voiced my issues with shotgun2 report and the quality of the
information provided.


>
> As for attaching short report to bugs instead of long report, I don't know
> what motivation was for this change except their convenience. Probably
> short report is to be used for internal QA team purposes only.
>
>
>
Once again, I'm just asking for more information as to the intended usage
of this new command and asking that this information be made public.

Thanks,
-Alex


>
>
>
> Vladimir Kozhukalov
>
> On Mon, Mar 21, 2016 at 11:20 PM, Alex Schultz 
> wrote:
>
>>
>> On Mon, Mar 21, 2016 at 1:59 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Alex,
>>>
>>> That is just a short report that QA team needs for their convenience.
>>> Please consider this letter as just FYI, nothing more.
>>>
>>>
>> If a bug only contains the short report, how do we work on fixing the
>> bug?  My question is valid and should be answered.  What good is this
>> information if I cannot reproduce an environment with this information? How
>> does one translate or locate specific package versions for reproducing
>> issues?  Is this documented?  If not, will it be?  I'd like to know what
>> the point of this short report is since I'm not sure how it could be
>> consumed by QA/Devs?
>>
>>
>>
>>> 'shotgun report' allows you to see commit SHA from which a package was
>>> built. It is even more information than it was available in version.yaml
>>> and this information is actual unlike the content of version.yaml.
>>>
>>> I know you were opposing getting rid of version.yaml but the thing is
>>> Fuel now can be installed on any CentOS 7.2 node directly from RPM
>>> repository. You don't even need the Fuel ISO, and thus version.yaml could
>>> not be an artifact that we could rely on (no ISO build id any more, no sha
>>> sums). Instead, now we rely on packages that are currently installed on the
>>> master node. The only issue with this approach is the ability to easily
>>> reproduce the env having just this list of packages attached to a bug. But
>>> it is not worse that it was with version.yaml.
>>>
>>>
>> I was only opposed to getting rid of it without a proper replacement
>> which is why I keep asking the same questions as an equivalent replacement
>> does not seem to exist.  Also it is much worse now that we don't have
>> version.yaml because I may have no way to locate these mystical package
>> versions that keep getting reported.  At least with git hashes I could at
>> least look at the same code base across everything. Now without this
>> information I have no way of building a complete picture of the code being
>> utilized in the environment.
>>
>>
>>> I'm currently working on design draft about modular data driven
>>> functional testing. This could also help for troubleshooting. In a nutshell
>>> the developer experience will be like:
>>>  1) you look at log files (`shotgun dump`) and roughly locate the issue
>>> (that allows you to choose respective test case)
>>>  2) you run script passing some data to it (data are to come from
>>> 'shotgun report --machinereadable' or smth like this)
>>>  3) this script builds testing/experimental env for you (env is to
>>> include only those components that are respective 

[openstack-dev] [networking-ovn][ovn4nfv]

2016-03-21 Thread John McDowall
All,

As a VNF vendor we have been looking at ways to enable customers to simply
scale up (and down) VNF’s in complex virtual networks at scale. Our goal
is to help accelerate the deployment of SDN and VNF’s and more
specifically enable zero-trust security at scale for applications.  This
requires the easy and fast deployment of Next Generation Firewalls (and
other VNFs) into the traffic path of any application.

Over the last several weeks we have created a prototype that implements
a simple VNF insertion approach. Before we do additional work we have a
couple of questions for the community:

Questions
---------

1. This approach has the advantage of being very simple and works with
existing VNFs; does it make sense to the community?
2. If it is of interest, how could it be improved and/or enhanced to make
it more useful and consumable?

Design Guidelines
-----------------

At the start of the effort we created a set of design guidelines to
constrain the problem space.


* The goal is a Service Function Insertion (SFI) approach that is simpler
and easier to deploy than Service Function Chaining and is more applicable
to single-function insertion or very short chains.
* The initial design target is DC/enterprises where the requirements are
typically for insertion of a limited set of VNFs in specific network
locations to act on specific applications.
* Minimal changes to existing VNFs, ours and others.
* Make the solution open to all VNFs.
* Leverage bump-in-the-wire connectivity, as this does not require L2 or L3
knowledge/configuration in the VNF.
* Firewalls want to inspect/classify all traffic on a link, so
pre-classifying traffic beyond ACLs is not necessary.
* Deploy on standard infrastructure: Openstack and Open vSwitch with
minimal changes.
* Work with virtualization, containers, and physical devices seamlessly.
* Insert and remove security in seconds; one of the drivers of the
requirement for speed is container deployment.
* Simple to deploy and easy to debug is important - atomic insertion and
removal of a VNF is an important aspect of this.


Approach
--------

We have developed a prototype, roughly following the ovn4nfv model proposed by
Vikram Dham and others in OPNFV.  The ovn4nfv prototype is implemented
on Open vSwitch 2.5 and Openstack Mitaka (development branch). I would like
to stress this is a prototype and not production code. My objective was to
prove to myself (and others) that the concept would work and then ask for
feedback from the community on the level of interest and how best to design a
production implementation.

I have called this effort service function insertion (SFI) to
differentiate it from service function chaining (SFC). This approach is
simpler than SFC and requires minimal or no changes to existing VNFs that
act as a bump in the wire, but it will probably not handle long complex
chains or graphs. It can possibly handle chaining one or two VNFs in a
static manner, but I am not sure it could go beyond that. I am open to
suggestions on how to extend/improve it.

The traffic steering is implemented by inserting 2 ingress and 2 egress
rules in the ovn-nb pipeline at ingress stage 3. These rules have a higher
priority than the default rules. The changes to OVN and rules are listed
in the implementation section.

The control plane is implemented in both Open vSwitch and in Openstack. In
Openstack there is a set of extension interfaces added to the
networking-ovn plugin. There are both CLI and REST APIs provided for
Openstack and a CLI for Open vSwitch.

The OVN model enables logical changes to the flow rules, and the Openstack
neutron plugin model allows changes to be separated into extensions to the
networking-ovn plugin. I have, however, violated a few boundaries for
expediency that would need to be fixed before this could be easily deployed.

We are happy to contribute the code back to the community, but would like
to gauge the level on interest and solicit feedback on the approach. We
are open to any and all suggestions for improvements in both
implementation and approach.

Below I have given a rough overview of the implementation and the changes
I have made to the various code bases. Just to reiterate, this was done as
a quick prototype and was a learning experience in Openstack and Open
vSwitch along the way, so the quality of the code and architecture are not
production ready. Links are provided to my github repositories with the
changes.

Implementation
--------------

The approach atomically inserts rules into ovn-nb to intercept any traffic
going to or coming from a VM or a container, which requires the insertion of
four new rules. There are no other changes in Open vSwitch. While the
prototype uses a firewall as the VNF, there is no requirement for that to be
the case; any VNF that supports "bump in the wire" mode should work.

App-1 is the application that needs to be protected
FW-2 is the input port of the firewall
FW-1 is the egress port of the firewall
App-2 is an application talking to App-1


Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 05:33 AM, Derek Higgins wrote:
> Doing this in 3rd party CI I think simplifies things because we'll no
> longer need a public cloud, and as a result the security measures
> required to avoid putting cloud credentials on the jenkins slaves
> won't be needed.

Just a word of warning that if you run arbitrary code posted to Gerrit
and post log results that are publicly available then you probably want
to avoid putting any credentials or other secret data on the Jenkins
slaves as that can be trivially exposed (assuming the jobs allow root to
do tripleo things like build dib images).

Clark



Re: [openstack-dev] [Fuel] [shotgun] New shotgun2 command: short-report

2016-03-21 Thread Vladimir Kozhukalov
Alex,

>This should have been created before removing the thing providing this
information previously.

Once again, using version.yaml you could NOT reproduce the env, because
what was really installed on the Fuel node had nothing in common with what
was written in version.yaml. The information you think was actual was,
in fact, not actual.

With the 'shotgun report' output you can see the real SHA sums, not ephemeral
ones (like in version.yaml).
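
As a rough illustration, that kind of report can be assembled straight from
the rpm metadata on the master node. This is only a sketch: whether Fuel
packages actually populate the %{VCS} tag, and what shotgun's real output
format is, are assumptions here:

    # List installed packages with the VCS reference they were built from.
    # Requires an rpm new enough to know the %{VCS} tag; packages whose spec
    # does not set it will print '(none)'.
    import subprocess

    query = subprocess.check_output(
        ['rpm', '-qa', '--qf', '%{NAME}|%{VERSION}-%{RELEASE}|%{VCS}\n'])
    for line in sorted(query.decode('utf-8').splitlines()):
        name, version, vcs = line.split('|')
        print('%-40s %-25s %s' % (name, version, vcs))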

And without removing version.yaml we could not implement some features in
the make system, so this new build/test package based approach could not have
been implemented before the version.yaml removal. We are moving towards a better
developer experience, NOT making things worse. In fact, the version.yaml removal
and its substitution with 'shotgun report' was a fix for broken version.yaml
content.

As for attaching the short report to bugs instead of the long report, I don't know
what the motivation was for this change except their convenience. Probably the
short report is to be used for internal QA team purposes only.





Vladimir Kozhukalov

On Mon, Mar 21, 2016 at 11:20 PM, Alex Schultz 
wrote:

>
> On Mon, Mar 21, 2016 at 1:59 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Alex,
>>
>> That is just a short report that QA team needs for their convenience.
>> Please consider this letter as just FYI, nothing more.
>>
>>
> If a bug only contains the short report, how do we work on fixing the
> bug?  My question is valid and should be answered.  What good is this
> information if I cannot reproduce an environment with this information? How
> does one translate or locate specific package versions for reproducing
> issues?  Is this documented?  If not, will it be?  I'd like to know what
> the point of this short report is since I'm not sure how it could be
> consumed by QA/Devs?
>
>
>
>> 'shotgun report' allows you to see commit SHA from which a package was
>> built. It is even more information than it was available in version.yaml
>> and this information is actual unlike the content of version.yaml.
>>
>> I know you were opposing getting rid of version.yaml but the thing is
>> Fuel now can be installed on any CentOS 7.2 node directly from RPM
>> repository. You don't even need the Fuel ISO, and thus version.yaml could
>> not be an artifact that we could rely on (no ISO build id any more, no sha
>> sums). Instead, now we rely on packages that are currently installed on the
>> master node. The only issue with this approach is the ability to easily
>> reproduce the env having just this list of packages attached to a bug. But
>> it is not worse that it was with version.yaml.
>>
>>
> I was only opposed to getting rid of it without a proper replacement which
> is why I keep asking the same questions as an equivalent replacement does
> not seem to exist.  Also it is much worse now that we don't have
> version.yaml because I may have no way to locate these mystical package
> versions that keep getting reported.  At least with git hashes I could at
> least look at the same code base across everything. Now without this
> information I have no way of building a complete picture of the code being
> utilized in the environment.
>
>
>> I'm currently working on design draft about modular data driven
>> functional testing. This could also help for troubleshooting. In a nutshell
>> the developer experience will be like:
>>  1) you look at log files (`shotgun dump`) and roughly locate the issue
>> (that allows you to choose respective test case)
>>  2) you run script passing some data to it (data are to come from
>> 'shotgun report --machinereadable' or smth like this)
>>  3) this script builds testing/experimental env for you (env is to
>> include only those components that are respective to chosen test case)
>>  5) you run some tests against this lab and manually do some experiments
>> to kill the bug
>>
>>
>>
>
> This should have been created before removing the thing providing this
> information previously. Yes I know I sound like a broken record on this,
> but it's very hard to address issues if you cannot reproduce the
> environment they occur on.  I'm trying to make sure we are providing all
> the information to aid in reproducing issues to get them fixed and not just
> providing more information that is ultimately ignored because it's useless.
>
> Thanks,
> -Alex
>
>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Mon, Mar 21, 2016 at 5:28 PM, Alex Schultz 
>> wrote:
>>
>>>
>>>
>>> On Mon, Mar 21, 2016 at 7:21 AM, Volodymyr Shypyguzov <
>>> vshypygu...@mirantis.com> wrote:
>>>
 Hi, all

 Just wanted to inform you, that shotgun2 now has new command
 short-report, which allows you to receive shorter and cleaner output for
 attaching to bug description, sharing, etc.

 Usage: shotgun2 short-report
 Example output: http://paste.openstack.org/show/491256/


>>> How will we be able to find those specific packages and how will 

Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-21 Thread Ben Nemec
On 03/21/2016 03:52 PM, Paul Belanger wrote:
> On Mon, Mar 21, 2016 at 10:57:53AM -0500, Ben Nemec wrote:
>> On 03/21/2016 07:33 AM, Derek Higgins wrote:
>>> On 17 March 2016 at 16:59, Ben Nemec  wrote:
 On 03/10/2016 05:24 PM, Jeremy Stanley wrote:
> On 2016-03-10 16:09:44 -0500 (-0500), Dan Prince wrote:
>> This seems to be the week people want to pile it on TripleO. Talking
>> about upstream is great but I suppose I'd rather debate major changes
>> after we branch Mitaka. :/
> [...]
>
> I didn't mean to pile on TripleO, nor did I intend to imply this was
> something which should happen ASAP (or even necessarily at all), but
> I do want to better understand what actual benefit is currently
> derived from this implementation vs. a more typical third-party CI
> (which lots of projects are doing when they find their testing needs
> are not met by the constraints of our generic test infrastructure).
>
>> With regards to Jenkins restarts I think it is understood that our job
>> times are long. How often do you find infra needs to restart Jenkins?
>
> We're restarting all 8 of our production Jenkins masters weekly at a
> minimum, but generally more often when things are busy (2-3 times a
> week). For many months we've been struggling with a thread leak for
> which their development team has not seen as a priority to even
> triage our bug report effectively. At this point I think we've
> mostly given up on expecting it to be solved by anything other than
> our upcoming migration off of Jenkins, but that's another topic
> altogether.
>
>> And regardless of that what if we just said we didn't mind the
>> destructiveness of losing a few jobs now and then (until our job
>> times are under the line... say 1.5 hours or so). To be clear I'd
>> be fine with infra pulling the rug on running jobs if this is the
>> root cause of the long running jobs in TripleO.
>
> For manual Jenkins restarts this is probably doable (if additional
> hassle), but I don't know whether that's something we can easily
> shoehorn into our orchestrated/automated restarts.
>
>> I think the "benefits are minimal" is bit of an overstatement. The
>> initial vision for TripleO CI stands and I would still like to see
>> individual projects entertain the option to use us in their gates.
> [...]
>
> This is what I'd like to delve deeper into. The current
> implementation isn't providing you with any mechanism to prevent
> changes which fail jobs running in the tripleo-test cloud from
> merging to your repos, is it? You're still having to manually
> inspect the job results posted by it? How is that particularly
> different from relying on third-party CI integration?
>
> As for other projects making use of the same jobs, right now the
> only convenience I'm aware of is that they can add check-tripleo
> pipeline jobs in our Zuul layout file instead of having you add it
> to yours (which could itself reside in a Git repo under your
> control, giving you even more flexibility over those choices). In
> fact, with a third-party CI using its own separate Gerrit account,
> you would be able to leave clear -1/+1 votes on check results which
> is not possible with the present solution.
>
> So anyway, I'm not saying that I definitely believe the third-party
> CI route will be better for TripleO, but I'm not (yet) clear on what
> tangible benefit you're receiving now that you lose by switching to
> that model.
>

 FWIW, I think third-party CI probably makes sense for TripleO.
 Practically speaking we are third-party CI right now - we run our own
 independent hardware infrastructure, we aren't multi-region, and we
 can't leave a vote on changes.  Since the first two aren't likely to
 change any time soon (although I believe it's still a long-term goal to
 get to a place where we can run in regular infra and just contribute our
 existing CI hardware to the general infra pool, but that's still a long
 way off), and moving to actual third-party CI would get us the ability
 to vote, I think it's worth pursuing.

 As an added bit of fun, we have a forced move of our CI hardware coming
 up in the relatively near future, and if we don't want to have multiple
 days (and possibly more, depending on how the move goes) of TripleO CI
 outage we're probably going to need to stand up a new environment in
 parallel anyway.  If we're doing that it might make sense to try hooking
 it in through the third-party infra instead of the way we do it today.
 Hopefully that would allow us to work out the kinks before the old
 environment goes away.

 Anyway, I'm sure we'll need a bunch more discussion about this, but I
 wanted to chime 

[openstack-dev] [ironic] weekly subteam status report

2016-03-21 Thread Ruby Loo
Hi,

We are keen to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 07.03.2016):
- Ironic: 167 bugs (+10) + 174 wishlist items (-1). 19 new (+3), 121 in
progress (-2), 0 critical, 24 high (+2) and 13 incomplete (+1)
- Inspector: 9 bugs + 16 wishlist items (+1). 1 new (+1), 6 in progress, 0
critical, 2 high and 0 incomplete
- Nova bugs with Ironic tag: 15 (-1). 0 new, 0 critical, 0 high
- dtantsur on PTO Mar 21-25

Network isolation (Neutron/Ironic work) (jroll)
===
- bumping to newton :(

RAID (lucasagomes)
==
- done \o/

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- no update; deprioritized in favor of neutron work, manual cleaning

Multiple compute hosts (jroll, devananda)
=
- newton

Nova Liaisons (jlvillal & mrda)
===
- Bug scrub performed. No updates besides that.

Testing/Quality (jlvillal/krtaylor)
===
- Grenade: Narrowed the problem down to what appears to be an issue with DHCP
not being configured correctly, as we can see the DHCP address not being given
to the Ironic bare-metal node.
- Grenade: Troubleshooting/debugging continues

Inspector (dtantsur)
===
- Released ironic-inspector 3.2.0 - the final release for Mitaka
- stable/mitaka was created from this release
- Released ironic-inspector 2.2.5 for Liberty with 2 bug fixes

Bifrost (TheJulia)
==
- Expecting to release 1.0 Monday or Tuesday.

Drivers:

OneView (gabriel-bezerra/thiagop/sinval/liliars)

- Dynamic allocation
- spec: https://review.openstack.org/#/c/275726/
- implementation: https://review.openstack.org/#/c/286192/
- docs: [wip]

CIMC (sambetts)
---
- Third Party CI for this driver is very close to complete; had to add a
monkey patch for the Ironic flavor in tempest.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard


Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-21 Thread Paul Belanger
On Mon, Mar 21, 2016 at 10:57:53AM -0500, Ben Nemec wrote:
> On 03/21/2016 07:33 AM, Derek Higgins wrote:
> > On 17 March 2016 at 16:59, Ben Nemec  wrote:
> >> On 03/10/2016 05:24 PM, Jeremy Stanley wrote:
> >>> On 2016-03-10 16:09:44 -0500 (-0500), Dan Prince wrote:
>  This seems to be the week people want to pile it on TripleO. Talking
>  about upstream is great but I suppose I'd rather debate major changes
>  after we branch Mitaka. :/
> >>> [...]
> >>>
> >>> I didn't mean to pile on TripleO, nor did I intend to imply this was
> >>> something which should happen ASAP (or even necessarily at all), but
> >>> I do want to better understand what actual benefit is currently
> >>> derived from this implementation vs. a more typical third-party CI
> >>> (which lots of projects are doing when they find their testing needs
> >>> are not met by the constraints of our generic test infrastructure).
> >>>
>  With regards to Jenkins restarts I think it is understood that our job
>  times are long. How often do you find infra needs to restart Jenkins?
> >>>
> >>> We're restarting all 8 of our production Jenkins masters weekly at a
> >>> minimum, but generally more often when things are busy (2-3 times a
> >>> week). For many months we've been struggling with a thread leak for
> >>> which their development team has not seen as a priority to even
> >>> triage our bug report effectively. At this point I think we've
> >>> mostly given up on expecting it to be solved by anything other than
> >>> our upcoming migration off of Jenkins, but that's another topic
> >>> altogether.
> >>>
>  And regardless of that what if we just said we didn't mind the
>  destructiveness of losing a few jobs now and then (until our job
>  times are under the line... say 1.5 hours or so). To be clear I'd
>  be fine with infra pulling the rug on running jobs if this is the
>  root cause of the long running jobs in TripleO.
> >>>
> >>> For manual Jenkins restarts this is probably doable (if additional
> >>> hassle), but I don't know whether that's something we can easily
> >>> shoehorn into our orchestrated/automated restarts.
> >>>
>  I think the "benefits are minimal" is bit of an overstatement. The
>  initial vision for TripleO CI stands and I would still like to see
>  individual projects entertain the option to use us in their gates.
> >>> [...]
> >>>
> >>> This is what I'd like to delve deeper into. The current
> >>> implementation isn't providing you with any mechanism to prevent
> >>> changes which fail jobs running in the tripleo-test cloud from
> >>> merging to your repos, is it? You're still having to manually
> >>> inspect the job results posted by it? How is that particularly
> >>> different from relying on third-party CI integration?
> >>>
> >>> As for other projects making use of the same jobs, right now the
> >>> only convenience I'm aware of is that they can add check-tripleo
> >>> pipeline jobs in our Zuul layout file instead of having you add it
> >>> to yours (which could itself reside in a Git repo under your
> >>> control, giving you even more flexibility over those choices). In
> >>> fact, with a third-party CI using its own separate Gerrit account,
> >>> you would be able to leave clear -1/+1 votes on check results which
> >>> is not possible with the present solution.
> >>>
> >>> So anyway, I'm not saying that I definitely believe the third-party
> >>> CI route will be better for TripleO, but I'm not (yet) clear on what
> >>> tangible benefit you're receiving now that you lose by switching to
> >>> that model.
> >>>
> >>
> >> FWIW, I think third-party CI probably makes sense for TripleO.
> >> Practically speaking we are third-party CI right now - we run our own
> >> independent hardware infrastructure, we aren't multi-region, and we
> >> can't leave a vote on changes.  Since the first two aren't likely to
> >> change any time soon (although I believe it's still a long-term goal to
> >> get to a place where we can run in regular infra and just contribute our
> >> existing CI hardware to the general infra pool, that's still a long
> >> way off), and moving to actual third-party CI would get us the ability
> >> to vote, I think it's worth pursuing.
> >>
> >> As an added bit of fun, we have a forced move of our CI hardware coming
> >> up in the relatively near future, and if we don't want to have multiple
> >> days (and possibly more, depending on how the move goes) of TripleO CI
> >> outage we're probably going to need to stand up a new environment in
> >> parallel anyway.  If we're doing that it might make sense to try hooking
> >> it in through the third-party infra instead of the way we do it today.
> >> Hopefully that would allow us to work out the kinks before the old
> >> environment goes away.
> >>
> >> Anyway, I'm sure we'll need a bunch more discussion about this, but I
> >> wanted to chime in with my two cents.
> > 
> > We need to answer 

Re: [openstack-dev] [stable] Proposing Tony Breeds for stable-maint-core

2016-03-21 Thread Matt Riedemann



On 3/18/2016 3:11 PM, Matt Riedemann wrote:

I'd like to propose tonyb for stable-maint-core. Tony is pretty much my
day to day guy on stable, he's generally in every stable team meeting
(which is not well attended, so I appreciate it), and he's as proactive
as ever on staying on top of gate issues when they come up, so he's well
deserving of it in my mind.

Here are review stats for stable for the last 90 days (as defined in the
reviewstats repo):

http://paste.openstack.org/show/491155/

Tony is also the latest nova-stable-maint core and he's done a great job
there (as expected) and is very active, which is again much appreciated.

Please respond with ack/nack.



Given the responses, Tony is now part of the stable-maint-core group. 
Congratulations and thanks again for the hard work, Tony!


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Manila] BP https://blueprints.launchpad.net/manila/+spec/access-groups

2016-03-21 Thread Ben Swartzlander

On 03/09/2016 03:51 AM, nidhi.h...@wipro.com wrote:

Hi All,

This is just a gentle reminder about the previous mail.

PFA the revised doc.

The same is pasted here also.

https://etherpad.openstack.org/p/access_group_nidhimittalhada

Kindly share your thoughts on this.


Now that we've finally wrapped up RC1, I hope people will take a look at this 
proposal. Nidhi, you should propose this topic for the design summit, 
especially if you will be able to participate in person. If not, we can 
still discuss it in Austin.


-Ben



Thanks

Nidhi

*From:* Nidhi Mittal Hada (Product Engineering Service)
*Sent:* Friday, February 26, 2016 3:22 PM
*To:* 'OpenStack Development Mailing List (not for usage questions)'

*Cc:* 'bswa...@netapp.com' ; 'Ben Swartzlander'

*Subject:* [OpenStack-Dev][Manila] BP
https://blueprints.launchpad.net/manila/+spec/access-groups

Hi Manila Team,

I am working on

https://blueprints.launchpad.net/manila/+spec/access-groups

For this I have created an initial document, attached to this mail.

It contains DB, CLI, and REST API related changes.

Could you please have a look and share your opinion.

Kindly let me know if there is some understanding gap,

or something I have missed to document, or

share your comments in general to make it better.

*Thank you.*

*Nidhi Mittal Hada*

*Architect | PES / COE*– *Kolkata India*

*Wipro Limited*

*M*+91 74 3910 9883 | *O* +91 33 3095 4767 | *VOIP* +91 33 3095 4767




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Newton design summit topics

2016-03-21 Thread Ben Swartzlander

I've started an etherpad to collect ideas for summit topics:

https://etherpad.openstack.org/p/manila-newton-summit-topics

Please add your suggestions to the top section and we'll get them 
categorized and scheduled in the bottom section in time for Austin.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Sean Dague
On 03/21/2016 04:09 PM, Clark Boylan wrote:
> On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote: 
 Do you have a better insight of job runtimes vs jobs in other
 projects?
 Most of the time in the job runtime is actually spent setting the
 infrastructure up, and I am not sure we can do anything about it, unless
 we
 take this up with Infra.
>>>
> >>> I haven't done a comparison yet, but let's break down the runtime of a
>>> recent successful neutron full run against neutron master [0].
>>
>> And now for some comparative data from the gate-tempest-dsvm-full job
>> [0]. This job also ran against a master change that merged and ran in
>> the same cloud and region as the neutron job.
>>
> snip
>> Generally each step of this job was quicker. There were big differences
>> in devstack and tempest run time though. Is devstack much slower to
> >> set up neutron when compared to nova net? For tempest it looks like we
>> run ~1510 tests against neutron and only ~1269 against nova net. This
>> may account for the large difference there. I also recall that we run
>> ipv6 tempest tests against neutron deployments that were inefficient and
>> booted 2 qemu VMs per test (not sure if that is still the case but
>> illustrates that the tests themselves may not be very quick in the
>> neutron case).
> 
> Looking at the tempest slowest tests output for each of these jobs
> (neutron and nova net), some tests line up really well across jobs and
> others do not. In order to get a better handle on the runtime for
> individual tests I have pushed https://review.openstack.org/295487 which
> will run tempest serially, reducing the competition for resources between
> tests.
> 
> Hopefully the subunit logs generated by this change can provide more
> insight into where we are losing time during the tempest test runs.

Subunit logs aren't the full story here. Activity in addCleanup doesn't
get added to the subunit time accounting for the test, which causes some
interesting issues when waiting for resources to delete. I would be
especially cautious of that on some of these.
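
To illustrate with a minimal sketch (testtools here; the server handling
is hypothetical -- only the addCleanup placement matters):

    import time

    import testtools


    class ServerTest(testtools.TestCase):

        def _wait_for_delete(self, server):
            time.sleep(30)  # stand-in for polling until the server is gone

        def test_boot_server(self):
            server = object()  # stand-in for an API call that boots a server
            # The cleanup runs outside the window subunit records for the
            # test, so this slow delete never shows up in the test's timing.
            self.addCleanup(self._wait_for_delete, server)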

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [shotgun] New shotgun2 command: short-report

2016-03-21 Thread Alex Schultz
On Mon, Mar 21, 2016 at 1:59 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Alex,
>
> That is just a short report that QA team needs for their convenience.
> Please consider this letter as just FYI, nothing more.
>
>
If a bug only contains the short report, how do we work on fixing the bug?
My question is valid and should be answered.  What good is this information
if I cannot reproduce an environment from it? How does one
translate or locate specific package versions for reproducing issues?  Is
this documented?  If not, will it be?  I'd like to know what the point of
this short report is since I'm not sure how it could be consumed by
QA/Devs?



> 'shotgun report' allows you to see the commit SHA from which a package was
> built. It is even more information than was available in version.yaml,
> and this information is current, unlike the content of version.yaml.
>
> I know you were opposing getting rid of version.yaml, but the thing is that
> Fuel can now be installed on any CentOS 7.2 node directly from an RPM
> repository. You don't even need the Fuel ISO, and thus version.yaml can no
> longer be an artifact that we rely on (no ISO build id any more, no sha
> sums). Instead, we now rely on the packages that are currently installed
> on the master node. The only issue with this approach is the ability to
> easily reproduce the env having just this list of packages attached to a
> bug. But it is no worse than it was with version.yaml.
>
>
I was only opposed to getting rid of it without a proper replacement, which
is why I keep asking the same questions: an equivalent replacement does
not seem to exist.  Also, it is much worse now that we don't have
version.yaml because I may have no way to locate these mystical package
versions that keep getting reported.  With git hashes I could at
least look at the same code base across everything. Now, without this
information, I have no way of building a complete picture of the code being
utilized in the environment.


> I'm currently working on a design draft about modular, data-driven
> functional testing. This could also help for troubleshooting. In a
> nutshell, the developer experience will be like:
>  1) you look at log files (`shotgun dump`) and roughly locate the issue
> (that allows you to choose the respective test case)
>  2) you run a script, passing some data to it (data are to come from
> 'shotgun report --machinereadable' or smth like this)
>  3) this script builds a testing/experimental env for you (the env is to
> include only those components that are relevant to the chosen test case)
>  4) you run some tests against this lab and manually do some experiments
> to kill the bug
>
>
>

This should have been created before removing the thing that previously
provided this information. Yes, I know I sound like a broken record on this,
but it's very hard to address issues if you cannot reproduce the
environment they occur on.  I'm trying to make sure we are providing all
the information to aid in reproducing issues to get them fixed and not just
providing more information that is ultimately ignored because it's useless.

Thanks,
-Alex


>
>
> Vladimir Kozhukalov
>
> On Mon, Mar 21, 2016 at 5:28 PM, Alex Schultz 
> wrote:
>
>>
>>
>> On Mon, Mar 21, 2016 at 7:21 AM, Volodymyr Shypyguzov <
>> vshypygu...@mirantis.com> wrote:
>>
>>> Hi, all
>>>
>>> Just wanted to inform you that shotgun2 now has a new command,
>>> short-report, which allows you to receive shorter and cleaner output for
>>> attaching to bug description, sharing, etc.
>>>
>>> Usage: shotgun2 short-report
>>> Example output: http://paste.openstack.org/show/491256/
>>>
>>>
>> How will we be able to find those specific packages and how will we be
>> able to correlate them with the equivalent commit in the git repository?
>>
>> Thanks,
>> -Alex
>>
>>
>>> Regards,
>>> Volodymyr
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-21 Thread Zane Bitter
tl;dr Containers represent a massive, and also mandatory, opportunity 
for TripleO. Let's start thinking about ways that we can take maximum 
advantage to achieve the goals of the project.


Now that you have the tl;dr I'm going to start from the beginning, so 
settle in and grab yourself a cup of coffee or other poison of your choice.


After working on developing Heat from the very beginning of the project 
in early 2012 and debugging a bunch of TripleO deployments in the field, 
it is my considered opinion that Heat is a poor fit for the workloads 
that TripleO is currently asking of it. To illustrate why, I need to 
explain what it is that Heat is really designed to do.


Here's a theoretical example of how I've always imagined Heat software 
deployments would make Heat users' lives better. For simplicity, I'm 
just going to model two software components, a user-facing service that 
connects to some back-end service:


resources:
  backend_component:
    type: OS::Heat::SoftwareComponent
    properties:
      configs:
        - tool: script
          actions:
            - CREATE
            - UPDATE
          config: |
            PORT=$(get_backend_port || random_port)
            stop_backend
            start_backend $DEPLOY_VERSION $PORT $CONFIG
            addr="$(hostname):$(get_backend_port)"
            printf '%s' "$addr" >${heat_outputs_path}.host_and_port
        - tool: script
          actions:
            - DELETE
          config: |
            stop_backend
      inputs:
        - name: DEPLOY_VERSION
        - name: CONFIG
      outputs:
        - name: host_and_port

  frontend_component:
    type: OS::Heat::SoftwareComponent
    properties:
      configs:
        - tool: script
          actions:
            - CREATE
            - UPDATE
          config: |
            stop_frontend
            start_frontend $DEPLOY_VERSION $BACKEND_ADDR $CONFIG
        - tool: script
          actions:
            - DELETE
          config: |
            stop_frontend
      inputs:
        - name: DEPLOY_VERSION
        - name: BACKEND_ADDR
        - name: CONFIG

  backend:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_resource: backend_server}
      name: {get_param: backend_version} # Forces upgrade replacement
      actions: [CREATE, UPDATE, DELETE]
      config: {get_resource: backend_component}
      input_values:
        DEPLOY_VERSION: {get_param: backend_version}
        CONFIG: {get_param: backend_config}

  frontend:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_resource: frontend_server}
      name: {get_param: frontend_version} # Forces upgrade replacement
      actions: [CREATE, UPDATE, DELETE]
      config: {get_resource: frontend_component}
      input_values:
        DEPLOY_VERSION: {get_param: frontend_version}
        BACKEND_ADDR: {get_attr: [backend, host_and_port]}
        CONFIG: {get_param: frontend_config}


This is actually quite a beautiful system, if I may say so:

- Whenever a version changes, Heat knows to update that component, and 
the components can be updated independently.
- If the backend in this example restarts on a different port, the 
frontend is updated to point to the new port.
- Everything is completely agnostic as to which server it is running on. 
They could be running on the same server or different servers.
- Everything is integrated with the infrastructure (not only the servers 
you're deploying on and the networks and volumes connected to them, but 
also things like load balancers), so everything is created at the right 
time, in parallel where possible, and any errors are reported all in one 
place.
- If something requires e.g. a restart after changing another component, 
we can encode that. And if it doesn't, we can encode that too.
- There's next to no downtime required: if e.g. we upgrade the backend, 
we first deploy a new one listening on a new port, then update the 
frontend to listen on the new port, then finally shut down the old 
backend. Again, we can choose when we want this and when we just want to 
update in place and reload.
- The application doesn't even need to worry about versioning the 
protocol that its two constituent parts communicate over: as long as the 
backend_version and frontend_version that we pass are always compatible, 
only compatible versions of the two services ever talk to each other.
- If anything at all fails at any point before, during or after this 
part of the template, Heat can automatically roll everything back into 
the exact same state as it was in before, without any outside 
intervention. You can insert test deployments that check everything is 
working and have them automatically roll back if it's not, all with no 
downtime for users.
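
As a concrete usage sketch, updating only the backend from
python-heatclient would look something like the following -- the stack
name, file name, and auth handling are all assumptions:

    from heatclient.client import Client

    # Auth details elided; any valid endpoint/token pair would do here.
    heat = Client('1', HEAT_ENDPOINT, token=TOKEN)

    # Bump only the backend; Heat re-runs just the deployments whose
    # inputs changed and leaves the frontend alone until its turn comes.
    heat.stacks.update('app-stack',
                       template=open('app.yaml').read(),
                       parameters={'backend_version': '1.2.0'})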


So you can use this to do something like a fancier version of blue-green 

Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
> > On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote: 
> > > Do you have a better insight of job runtimes vs jobs in other
> > > projects?
> > > Most of the time in the job runtime is actually spent setting the
> > > infrastructure up, and I am not sure we can do anything about it, unless
> > > we
> > > take this up with Infra.
> > 
> > I haven't done a comparison yet, but let's break down the runtime of a
> > recent successful neutron full run against neutron master [0].
> 
> And now for some comparative data from the gate-tempest-dsvm-full job
> [0]. This job also ran against a master change that merged and ran in
> the same cloud and region as the neutron job.
> 
snip
> Generally each step of this job was quicker. There were big differences
> in devstack and tempest run time though. Is devstack much slower to
> set up neutron when compared to nova net? For tempest it looks like we
> run ~1510 tests against neutron and only ~1269 against nova net. This
> may account for the large difference there. I also recall that we run
> ipv6 tempest tests against neutron deployments that were inefficient and
> booted 2 qemu VMs per test (not sure if that is still the case but
> illustrates that the tests themselves may not be very quick in the
> neutron case).

Looking at the tempest slowest tests output for each of these jobs
(neutron and nova net), some tests line up really well across jobs and
others do not. In order to get a better handle on the runtime for
individual tests I have pushed https://review.openstack.org/295487 which
will run tempest serially, reducing the competition for resources between
tests.

Hopefully the subunit logs generated by this change can provide more
insight into where we are losing time during the tempest test runs.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [shotgun] New shotgun2 command: short-report

2016-03-21 Thread Vladimir Kozhukalov
Alex,

That is just a short report that QA team needs for their convenience.
Please consider this letter as just FYI, nothing more.

'shotgun report' allows you to see the commit SHA from which a package was
built. It is even more information than was available in version.yaml,
and this information is current, unlike the content of version.yaml.

I know you were opposing getting rid of version.yaml, but the thing is that
Fuel can now be installed on any CentOS 7.2 node directly from an RPM
repository. You don't even need the Fuel ISO, and thus version.yaml can no
longer be an artifact that we rely on (no ISO build id any more, no sha
sums). Instead, we now rely on the packages that are currently installed on
the master node. The only issue with this approach is the ability to easily
reproduce the env having just this list of packages attached to a bug. But
it is no worse than it was with version.yaml.

I'm currently working on a design draft about modular, data-driven functional
testing. This could also help for troubleshooting. In a nutshell, the
developer experience will be like:
 1) you look at log files (`shotgun dump`) and roughly locate the issue
(that allows you to choose the respective test case)
 2) you run a script, passing some data to it (data are to come from 'shotgun
report --machinereadable' or smth like this)
 3) this script builds a testing/experimental env for you (the env is to
include only those components that are relevant to the chosen test case)
 4) you run some tests against this lab and manually do some experiments to
kill the bug
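
As a sketch of step 3, something along these lines could turn a
(package, commit SHA) pair from 'shotgun report' into a checked-out tree;
the package-to-repo mapping here is purely an assumption:

    import subprocess

    def checkout_for(package, sha,
                     base='https://git.openstack.org/openstack'):
        """Clone the repo assumed to back `package` and pin it to `sha`."""
        subprocess.check_call(['git', 'clone', '%s/%s' % (base, package)])
        subprocess.check_call(['git', '-C', package, 'checkout', sha])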




Vladimir Kozhukalov

On Mon, Mar 21, 2016 at 5:28 PM, Alex Schultz  wrote:

>
>
> On Mon, Mar 21, 2016 at 7:21 AM, Volodymyr Shypyguzov <
> vshypygu...@mirantis.com> wrote:
>
>> Hi, all
>>
>> Just wanted to inform you that shotgun2 now has a new command,
>> short-report, which allows you to receive shorter and cleaner output for
>> attaching to bug description, sharing, etc.
>>
>> Usage: shotgun2 short-report
>> Example output: http://paste.openstack.org/show/491256/
>>
>>
> How will we be able to find those specific packages and how will we be
> able to correlate them with the equivalent commit in the git repository?
>
> Thanks,
> -Alex
>
>
>> Regards,
>> Volodymyr
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] unit tests and DB Access

2016-03-21 Thread Andreas Jaeger
The OpenStack Infra team will soon remove access to the openstack_citest
MySQL/PostgreSQL database for unit tests by default. This covers
these tests:
* python27
* python34
* coverage
* pypy

So far, openstack_citest was set up for both MySQL and PostgreSQL by the
infra scripts. Removing this default allows us to install only those
packages that are really needed by projects.

We know that some projects need access to the openstack_citest database
during unit tests - and we want those projects to continue testing with
database access. Therefore, new jobs have been created, called
"python27-db", "python34-db", "coverage-db", and "pypy-db".
I did an analysis of database access in unit tests based on a few
heuristics, followed by manual review, and created changes for those
projects whose unit tests seem to need database access. I have sent these
changes for review [2]. The reviews change these repositories:
* blazar
* cerberus
* cinder
* glance
* heat
* ironic
* ironic-inspector
* keystone
* kite
* manila
* murano
* nodepool
* oslo.db
* rack
* sahara
* storyboard
* subunit2sql
* sqlalchemy-migrate
* tacker
* taskflow
* trove

Additionally, I found neutron and rally, but already received feedback
that these do not need database access for unit tests.

If your project needs MySQL or PostgreSQL database access, a patch like
one of the proposed changes needs to be done. Feel free to do it yourself
or tell me.

I plan to merge the above changes once I get some +1s from cores or PTLs
on the repositories - or on Thursday this week. Afterwards, we will
remove database access from the default unit test jobs.

So, my call for action: If your repository is in the list above, please
review and comment on the open changes. If it's not in the list and
needs database access, speak up or patch.
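
For reference, this is the kind of test that needs the new -db job
variants. The openstack_citest user, password, and database names are the
convention set up by the infra scripts; the test itself is illustrative:

    import sqlalchemy
    import testtools


    class TestDatabaseAccess(testtools.TestCase):
        URL = ('mysql://openstack_citest:openstack_citest'
               '@localhost/openstack_citest')

        def test_connect(self):
            # Fails in the plain python27/python34 jobs once the
            # database is no longer provisioned by default.
            engine = sqlalchemy.create_engine(self.URL)
            with engine.connect() as conn:
                result = conn.execute(sqlalchemy.text('SELECT 1')).scalar()
            self.assertEqual(1, result)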

This is part of our consolidation on a single trusty image, called
ubuntu-trusty (see [1]).

Andreas

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088832.html
[2]
https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:bindep+status:open


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-21 Thread Jeremy Stanley
On 2016-03-21 10:57:53 -0500 (-0500), Ben Nemec wrote:
[...]
> So I'm not sure this is actually a step away from gating all the
> projects.  In fact, since we can't vote today as part of the integrated
> gate, and I believe that would continue to be the case until we could
> run entirely in regular infra instead of as a separate thing, I feel
> like this is probably a requirement to be voting on other projects
> anytime in the near future.
[...]

One other point which was touched on in the thread about using
Tempest is that you're currently limited by job duration constraints
in our upstream CI. If you controlled the full CI stack you could in
theory support jobs which run as long as you need.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ec2-api] EC2 API Future

2016-03-21 Thread Doug Hellmann


> On Mar 21, 2016, at 2:33 PM, Tim Bell  wrote:
> 
> 
>> On 21/03/16 17:23, "Doug Hellmann"  wrote:
>> 
>> 
>> 
>>> On Mar 20, 2016, at 3:26 PM, Tim Bell  wrote:
>>> 
>>> 
>>> Doug,
>>> 
>>> Given that the EC2 functionality is currently in use by at least 1/6th of 
>>> production clouds 
>>> (https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf page 
>>> 34), this is a worrying situation.
>> 
>> I completely agree. 
>> 
>>> 
>>> The EC2 functionality was recently deprecated from Nova on the grounds that 
>>> the EC2 API project was the correct way to proceed. With the proposal now 
>>> to not have an EC2 API project at all, this will leave many in the 
>>> community confused.
>> 
>> That wasn't the proposal. We have lots of unofficial projects. My suggestion 
>> was that if the EC2 team wasn't participating in the community governance 
>> process, we should not list them as official. That doesn't mean disbanding 
>> the project, just updating our reference materials to reflect reality and 
>> clearly communicate expectations. It sounds like that was a misunderstanding 
>> which has been cleared up, though, so I think we're all set to continue 
>> considering it an official project.
> 
> There is actually quite a lot of activity going on to get the EC2 API into an 
> easily deployable state. CERN has been involved in the puppet-ec2api and RDO 
> packaging which currently does not count as participation in the EC2 API 
> project given the split of repositories. However, it is critical for 
> deployment of a project that it can be installed and configured.

That's a great point, and in the future I'll look at packaging and other 
related repos when trying to gauge activity. 

Doug

> 
> Tim
> 
>> 
>> Doug
>> 
>>> 
>>> Tim
>>> 
>>> 
>>> 
>>> 
 On 20/03/16 17:48, "Doug Hellmann"  wrote:
 
 ...
 
 The EC2-API project doesn't appear to be very actively worked on.
 There is one very recent commit from an Oslo team member, another
 couple from a few days before, and then the next one is almost a
 month old. Given the lack of activity, if no team member has
 volunteered to be PTL I think we should remove the project from the
 official list for lack of interest.
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-21 Thread John Belamaric
Hi Carl,

Sorry for the slow reply.

I think that both of these can be solved with the existing interface, by 
expanding the different types of "request" objects. Right now, we have very 
basic and limited requests: SpecificSubnet, AnySubnet. There is no reason we 
can't create a subnet request that includes tag (or host or segment) 
information that can be used by the existing IPAM driver methods. If the tag is 
an optional refinement, then the request type can subclass the existing 
AnySubnet request; otherwise, the request would need to be rejected by the 
driver.

Each driver can deliver its own factory for converting create_subnet (etc.) API 
requests into IPAM request objects. Thus, each driver can decide whether it 
supports some incoming request type or not. This same mechanism that applies to 
subnet also applies to IPs.
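
To sketch what I mean (assuming the request classes in
neutron.ipam.requests; the subclass and its attribute are hypothetical):

    from neutron.ipam import requests as ipam_req


    class SegmentAwareSubnetRequest(ipam_req.AnySubnetRequest):
        """AnySubnet refined with a segment hint.

        Drivers that understand segments can use the hint; drivers
        that don't can simply reject requests of this type.
        """
        def __init__(self, *args, **kwargs):
            self.segment_id = kwargs.pop('segment_id', None)
            super(SegmentAwareSubnetRequest, self).__init__(*args, **kwargs)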

One thing we may want to consider, though, is discoverability of these 
capabilities. We have this now for extensions in general, of course. But in 
this case, we would want to be able to discover whether or not the IPAM driver 
supports this functionality, before enabling the use of the routed segments 
feature. As far as I know, we don't today provide a way to discover whether a 
particular feature works with a particular plugin or other extension. That is, 
I don't think we allow extensions to specify that they are dependent on other 
specific extensions, do we?

John

> On Mar 11, 2016, at 6:15 PM, Carl Baldwin  wrote:
> 
> Hi,
> 
> I have started to get into coding [1] for the Neutron routed networks
> specification [2].
> 
> This spec proposes a new association between network segments and
> subnets.  This affects how IPAM needs to work because until we know
> where the port is going to land, we cannot allocate an IP address for
> it.  Also, IPAM will need to somehow be aware of segments.  We have
> proposed a host / segment mapping which could be transformed to a host
> / subnet mapping for IPAM purposes.
> 
> I wanted to get the opinion of folks like Salvatore, John Belamaric,
> and you (if you interested) on this.  How will this affect the
> interface to pluggable IPAM and how can pluggable implementations can
> accommodate this change.  Obviously, we wouldn't require
> implementations to support it but routed networks wouldn't be very
> useful without it.  So, those implementations would not be compatible
> when routed networks are deployed.
> 
> Another related topic was brought up in the recent Neutron mid-cycle.
> We talked about adding a service type attribute to to subnets.  The
> reason for this change is to allow operators to create special subnets
> on a network to be used only by certain kinds of ports.  For example,
> DVR fip namespace gateway ports burn a public IP for no good reason.
> This new feature would allow operators to create a special subnet in
> the network with private addressing only to be used by these ports.
> 
> Another example would give operators the ability to use private
> subnets for router external gateway ports if shared SNAT is not needed
> or doesn't need to use public IPs.
> 
> These are two ways in which subnets are taking on extra
> characteristics which distinguish them from other subnets on the same
> network.  That is why I lumped them together in to one thread.
> 
> Carl
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-03-21 Thread Zane Bitter

Late to the party, but this comparison seems misleading to me...

On 26/01/16 04:46, Steven Hardy wrote:

It's one more thing, which is already maintained and has an active
community, vs yet-another-bespoke-special-to-tripleo-thing.  IMHO we have
*way* too many tripleo-specific things already.

However, lets look at the "python knowledge" thing in a bit more detail.

Let's say, as an operator I want to wire in a HTTP call to an internal asset
management system.  The requirement is to log an HTTP call with some
content every time an overcloud is deployed or updated.  (This sort of
requirement is*very*  common in enterprise environments IME)

In the mistral case[1], you'd simply add two lines to your TripleO
deployment workflow yaml[2]; the modification would look something like:

http_task:
   action: std.http url='assets.foo.com' 

Now, consider the bespoke API case.  You have to do some or all of the
following:

- Find the python code which handles deployment and implements the workflow
- Pull and fork the code base, resolve any differences between the upstream
   version and whatever packaged version you're running
- Figure out how to either hack in your HTTP calls via a python library, or
   build a new plugin mechanism to enable out-of-tree deployment hooks
- Figure out a bunch of complex stuff to write unit tests, battle for
   weeks/months to get your code accepted upstream (or, maintain the fork
   forever and deal with rebasing, packaging, and the fact that your entire
   API is no longer supported by your vendor because you hacked on it)


If I were doing it I would write a piece of WSGI middleware - a highly 
standardised thing, not specific to TripleO or even OpenStack, that a 
non-python-ninja could easily figure out from StackOverflow - then 
deploy it on the undercloud machine and add it into the paste pipeline.


  class AssetControl(wsgi.Middleware):
      def process_request(self, req):
          requests.get('assets.foo.com', data={'some': 'arguments'})

It's true that the 'deploy it on the machine' step is probably more 
complicated than the 'upload a new workflow' one. OTOH most sysadmins 
are *really* good at installing stuff on a machine, and there is a HUGE 
advantage in not ever having to merge your forked workflow definitions.



Which of these is most accessible to a traditional non-python-ninja
sysadmin?


Given the above, I would genuinely have to say the second. WSGI and 
Requests are *very* well documented *everywhere*.


Though the biggest difference, I suspect, comes when you have to 
incorporate some logic in there. Say you want to log the request to a 
different server when the user's manager's oldest pet's middle name 
begins with 'Q' or something. (I would venture to speculate that this 
kind of requirement is, ahem, not all that uncommon in enterprise 
environments either ;) In Python this is pretty trivial and you always 
have StackOverflow to help when you get stuck; if you're having to 
implement it in some obscure DSL that knows nothing about your 
application then you could be in for a world of hurt.
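
For instance, a sketch of that rule in middleware form -- the lookup
helper and both URLs are obviously made up:

  class AssetControl(wsgi.Middleware):
      def process_request(self, req):
          user = req.headers.get('X-User-Id', '')
          # Arbitrary enterprise logic is just an if statement here.
          if manager_oldest_pet_middle_name(user).startswith('Q'):
              target = 'https://assets-q.foo.com'
          else:
              target = 'https://assets.foo.com'
          requests.get(target, data={'user': user})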


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote: 
> > Do you have a better insight of job runtimes vs jobs in other
> > projects?
> > Most of the time in the job runtime is actually spent setting the
> > infrastructure up, and I am not sure we can do anything about it, unless
> > we
> > take this up with Infra.
> 
> I haven't done a comparison yet, but let's break down the runtime of a
> recent successful neutron full run against neutron master [0].

And now for some comparative data from the gate-tempest-dsvm-full job
[0]. This job also ran against a master change that merged and ran in
the same cloud and region as the neutron job.

Basic host setup takes 63 seconds. Start of job to 2016-03-21
16:46:41.058 [1]
Workspace setup takes 380 seconds. 2016-03-21 16:46:41.058 [1] to
2016-03-21 16:53:01.754 [2]
Devstack takes 890 seconds. 2016-03-21 16:53:19.235 [3] to 2016-03-21
17:08:10.082 [4]
Loading old tempest subunit streams takes 63 seconds. 2016-03-21
17:08:10.111 [5] to 2016-03-21 17:09:13.454 [6]
Tempest takes 1347 seconds. 2016-03-21 17:09:13.587 [7] to 2016-03-21
17:31:40.885 [8]
Then we spend the rest of the test time (52 seconds) cleaning up.
2016-03-21 17:31:40.885 [8] to end of job.

[0]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/
[1]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_16_46_41_058
[2]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_16_53_01_754
[3]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_16_53_19_235
[4]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_17_08_10_082
[5]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_17_08_10_111
[6]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_17_09_13_454
[7]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_17_09_13_587
[8]
http://logs.openstack.org/48/294548/1/gate/gate-tempest-dsvm-full/d94901e/console.html#_2016-03-21_17_31_40_885

Generally each step of this job was quicker. There were big differences
in devstack and tempest run time though. Is devstack much slower to
set up neutron when compared to nova net? For tempest it looks like we
run ~1510 tests against neutron and only ~1269 against nova net. This
may account for the large difference there. I also recall that we run
ipv6 tempest tests against neutron deployments that were inefficient and
booted 2 qemu VMs per test (not sure if that is still the case but
illustrates that the tests themselves may not be very quick in the
neutron case).

Of course we may also be seeing differences in cloud VMs (though tried
to control for by looking at tests that ran in the same region). Hard to
say without more data. In any case this hopefully serves as a good
starting point for others to dig into the ~20 minute discrepancy between
nova net + tempest and neutron + tempest.
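
For anyone who wants to repeat this breakdown on other jobs, a small
script along these lines is enough -- the marker phrases are assumptions
to be adjusted to whatever the console log actually prints:

    import datetime
    import re
    import sys

    TS = re.compile(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}')
    MARKERS = ('Running devstack', 'Running tempest')  # adjust per job

    def main(path):
        label, start = 'job start', None
        for line in open(path):
            m = TS.search(line)
            if not m:
                continue
            ts = datetime.datetime.strptime(m.group(0),
                                            '%Y-%m-%d %H:%M:%S.%f')
            start = start or ts
            for marker in MARKERS:
                if marker in line:
                    # Elapsed time from the previous marker to this one.
                    print('%-15s -> %-15s: %s' % (label, marker, ts - start))
                    label, start = marker, ts

    if __name__ == '__main__':
        main(sys.argv[1])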

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance_store][VMware] Different glance store for Nova snapshot in VMware

2016-03-21 Thread Sabari Murugesan
Hi Dongcan

It looks like you may have multiple glance-api services configured with
different glance_store backends, i.e. one with filesystem and the other
with vsphere.

Or it could be that you changed the glance_store backend configuration
between the two snapshot operations. Can you check again?

If you try to upload a new image, it should use the vsphere backend
according to your configuration. Please ping me on IRC (sabari) on
#openstack-glance,
and I can help you more quickly.
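
If it helps, the same check can be scripted -- a sketch with
python-glanceclient, assuming show_multiple_locations is enabled and with
the endpoint/token handling elided:

    from glanceclient import Client

    glance = Client('2', GLANCE_ENDPOINT, token=TOKEN)  # auth assumed
    image = glance.images.get(IMAGE_ID)
    # The URL scheme (file:// vs vsphere://) tells you which glance_store
    # backend actually holds the image bits.
    for location in image['locations']:
        print(location['url'])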

Thanks
Sabari


On Mon, Mar 14, 2016 at 8:32 PM, dongcan ye  wrote:

> Hi, Sabari
>
> Thanks for your reply.
>
> Yes, I had changed the store from FileSystem to vsphere. Then I created an
> image; the image info shows it is stored in the VMware
> datastore.
>
> I had followed your suggestion, add show_multiple_locations to glance api
> conf.
> Then I repeat the operations.
>
> Results from two snapshotted images:
> locations [{"url":
> "file:///var/lib/glance/images/0cdd8188-537e-49e2-b173-7de122070574",
> "metadata": {}}]
>
> locations [{"url": "vsphere://
> 172.20.2.38/folder/openstack_glance/f462c06a-f202-4b1f-a89a-17f72264b502?dcPath=IDC_Test=LUN03-00",
> "metadata": {}}]
>
On Mon, Mar 14, 2016 at 2:41 AM, Sabari Murugesan 
wrote:

> Hi Dongcan
>
> Regardless of when you snapshot, the image should be uploaded to
> the default glance store. Is it possible that you had enabled the
> FileSystem
> store earlier and recently changed to the vsphere store?
>
> To further debug, can
> you add the following to the default section of glance-api.conf and provide
> us the value of the 'locations' attribute of the snapshotted image. You can do
> "glance --os-image-api-version 2 image-show <image-id>" to
> know the location.
>
> [DEFAULT]
> show_multiple_locations = True
>
> Thanks
> Sabari
>
>
> On Sun, Mar 13, 2016 at 7:54 PM, dongcan ye  wrote:
>
>> Hi all,
>>
>> In our production environment, we enables glance_store for VMware
>> datastore.
>> Configuration in glance-api.conf:
>>
>> [DEFAULT]
>> show_image_direct_url = True
>> [glance_store]
>> stores = glance.store.vmware_datastore.Store
>> default_store = vsphere
>> vmware_server_host = 172.18.6.22
>> vmware_server_username = administrator@vsphere.local
>> vmware_server_password = 1qaz!QAZ
>> vmware_datastores = ICT Test:F7-HPP9500-SAS-ICTHPCLUSTER03-LUN06
>>
>>
>> Firstly we boot an instance and make an online snapshot of the VM; we see
>> the image is stored on the local file system:
>> direct_url
>> file:///var/lib/glance/images/8cf7ba51-31d8-4282-89db-06957d609691
>>
>> Then we power off the VM and make an offline snapshot; the image is stored
>> on the VMware datastore:
>> direct_urlvsphere://
>> 172.20.2.38/folder/openstack_glance/52825a70-f645-46b5-80ec-7a430dcd13cf?dcPath=IDC_Test=LUN03-00
>>
>> In the Nova VCDriver, making a snapshot uploads the VM disk file to the
>> Glance image server. But why is the behaviour different when the VM is
>> powered on versus powered off?
>>
>> Hopes for your reply.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ec2-api] EC2 API Future

2016-03-21 Thread Tim Bell

On 21/03/16 17:23, "Doug Hellmann"  wrote:

>
>
>> On Mar 20, 2016, at 3:26 PM, Tim Bell  wrote:
>> 
>> 
>> Doug,
>> 
>> Given that the EC2 functionality is currently in use by at least 1/6th of 
>> production clouds 
>> (https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf page 
>> 34), this is a worrying situation.
>
>I completely agree. 
>
>> 
>> The EC2 functionality was recently deprecated from Nova on the grounds that 
>> the EC2 API project was the correct way to proceed. With the proposal now to 
>> not have an EC2 API project at all, this will leave many in the community 
>> confused.
>
>That wasn't the proposal. We have lots of unofficial projects. My suggestion 
>was that if the EC2 team wasn't participating in the community governance 
>process, we should not list them as official. That doesn't mean disbanding the 
>project, just updating our reference materials to reflect reality and clearly 
>communicate expectations. It sounds like that was a misunderstanding which has 
>been cleared up, though, so I think we're all set to continue considering it 
>an official project. 

There is actually quite a lot of activity going on to get the EC2 API into an 
easily deployable state. CERN has been involved in the puppet-ec2api and RDO 
packaging which currently does not count as participation in the EC2 API 
project given the split of repositories. However, it is critical for deployment 
of a project that it can be installed and configured.

Tim

>
>Doug
>
>> 
>> Tim
>> 
>> 
>> 
>> 
>>> On 20/03/16 17:48, "Doug Hellmann"  wrote:
>>> 
>>> ...
>>> 
>>> The EC2-API project doesn't appear to be very actively worked on.
>>> There is one very recent commit from an Oslo team member, another
>>> couple from a few days before, and then the next one is almost a
>>> month old. Given the lack of activity, if no team member has
>>> volunteered to be PTL I think we should remove the project from the
>>> official list for lack of interest.
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Armando M.
On 21 March 2016 at 11:08, Clark Boylan  wrote:

> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
> > Do you have a better insight of job runtimes vs jobs in other
> > projects?
> > Most of the time in the job runtime is actually spent setting the
> > infrastructure up, and I am not sure we can do anything about it, unless
> > we
> > take this up with Infra.
>
> I haven't done a comparison yet buts lets break down the runtime of a
> recent successful neutron full run against neutron master [0].
>
> Basic host setup takes 65 seconds. Start of job to 2016-03-17
> 22:14:27.397 [1]
> Workspace setup takes 520 seconds. 2016-03-17 22:14:27.397 [1] to
> 2016-03-17 22:23:07.429 [2]
> Devstack takes 1205 seconds. 2016-03-17 22:23:18.760 [3] to 2016-03-17
> 22:43:23.339 [4]
> Loading old tempest subunit streams takes 155 seconds. 2016-03-17
> 22:43:23.340 [5] to 2016-03-17 22:45:58.061 [6]
> Tempest takes 1982 seconds. 2016-03-17 22:45:58.201 [7] to 2016-03-17
> 23:19:00.117 [8]
> Then we spend the rest of the test time (76 seconds) cleaning up.
> 2016-03-17 23:19:00.117 [8] to end of job.
>
> Note that I haven't accounted for all of the time used and instead have
> focused on the major steps that use the most time. Also it is Monday
> morning and some of my math may be off.


> [0]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/
> [1]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_14_27_397
> [2]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_23_07_429
> [3]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_23_18_760
> [4]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_43_23_339
> [5]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_43_23_340
> [6]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_45_58_061
> [7]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_45_58_201
> [8]
>
> http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_23_19_00_117
>
> One big takeaway from this is that the vast majority of the time is
> spent in devstack and tempest not in the infrastructure setup. You
> should be able to dig into both the devstack setup and tempest test
> runtimes and hopefully speed things up.
>
> Hopefully this gives you enough information to get started into digging
> on this.
>

Clark: thanks for this insightful response.

I should clarify my comment about infrastructure setup (it is Monday for me
too :)): what I meant was that there is a good portion of time spent to get
to a point where tests can be run. That includes node setup as well as
stacking. That is obviously less than 50%, but even >30% feels like a
substantial overhead. I am not sure what we can do about it, but looping
you in this discussion seemed like the least this thread should do.

That said, there are many tempest tests that take over 30 seconds to
complete and those do not even touch Neutron. For those that do, we
should clearly identify where the slowness comes from, and I think that's
where, as a Neutron team, our focus should be.

IMO, before we go on and talk about evicting jobs, I think we should take a
closer look (i.e. profile) at where time is spent so that we can make each
test run leaner.

[1]
http://status.openstack.org//openstack-health/#/job/gate-tempest-dsvm-neutron-full?groupKey=project=hour=2016-03-21T18:14:19.534Z

>
>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] High Availability

2016-03-21 Thread Daneyon Hansen (danehans)
All,

Does anyone have experience deploying Barbican in a highly-available fashion? 
If so, I'm interested in learning from your experience. Any insight you can 
provide is greatly appreciated.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Packaging CI for Fuel

2016-03-21 Thread Thomas Goirand
On 03/19/2016 11:10 AM, Monty Taylor wrote:
> The patch looks good, but it conflicts with the
> move-nova-jobs-to-db-macro and the "add debian jessie support for bindep
> fallback" change. Rather than fighting the rebase fight, let's put a
> brief hold on this (sorry, I know) and land it as soon as those land.

Ok.

It'd be nice if someone could ping me when I can resume my work on it,
if you guys are closely following the other 2 patches (if not, I'll try
to remember to check for them).

> We'll need to work to make sure that we're using zuul-cloner to get the
> right things checked out

Is this a macro the package build needs to use?

> Luckily, we've got and apt-repository infrastructure already set up and
> ready in infra thanks to the mirror work we did this last cycle. It's
> using reprepro fwiw.

Outch ! I had multiple very bad experiences with reprepro. For example,
if we rebuild the same package with the same version, reprepro will
*not* pick up the package. And we do need to do this, because the
debian/changelog needs to match what is uploaded to the Debian archive,
and it cannot increment.

The thing is, maintaining a Debian repository can be done with a very
small shell script like this one:

http://anonscm.debian.org/cgit/openstack/openstack-pkg-tools.git/tree/build-tools/pkgos-scan-repo

Hopefully, we can switch to something like this that allows more control
than reprepro allows.

> I believe we should add jessie to the list of things we mirror in it,
> and then also add a volume to hold things we publish ourselves.

These are typically "one off" backports which we don't need to care care
much of, but which are needed for other OpenStack to build or run.
Probably something based on a yaml file listing all the packages could
be enough.

> We'll also need to move from having an unsigned reprepro to a signed
> reprepro if we're going to publish our own packages. We've not been
> signing the repo so far because we've sort of wanted to discourage use
> of our mirror outside of the gate - but it turns  out our mirror is
> AMAZING - so I think it's time we change that.

Ok. The script I listed above signs the repo, it's not hard to do, as
you probably know already.

>> Finally, we'll need a way to build backports from Sid and also publish
>> them.
> 
> Hrm. We might want to mirror sid then too. I'd like to talk about the
> backport building process - hopefully a process that does not
> require us to make a repo in gerrit for each package we want to backport
> and include in our repo.

Exactly!

You may find the list of packages here:
http://mitaka-jessie.pkgs.mirantis.com/debian/pool/jessie-mitaka-backports-nochange/

Or, more easily, in the Sources file here:
ftp://mitaka-jessie.pkgs.mirantis.com/debian/dists/jessie-mitaka-backports-nochange/main/source/

The list of packages could be maintained in a .yaml file, for example,
then parsed to regularly maintain the backports, sending an alert if
one package fails to build.
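
For what it's worth, a sketch of that idea -- the file layout and the
build command below are both assumptions:

    import subprocess

    import yaml

    # backports.yaml (hypothetical layout):
    #   jessie-mitaka-backports-nochange:
    #     - python-statsd
    #     - alembic
    with open('backports.yaml') as f:
        backports = yaml.safe_load(f)

    for suite, packages in backports.items():
        for pkg in packages:
            # 'build-backport' stands in for whatever actually builds the
            # package (sbuild, pkgos tooling, ...); non-zero exit alerts.
            rc = subprocess.call(['build-backport', suite, pkg])
            if rc != 0:
                print('ALERT: %s failed to build for %s' % (pkg, suite))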

> It would also be good to tie off with the security team about this. One
> of the reasons we stopped publishing debs years ago is that it made us a
> de-facto derivative distro. People were using our packages in
> production, including backports we'd built in support of those packages,
> but our backports were not receiving security/CVE attention, so we were
> concerned that we were causing people to be exposed to issues.

I'd like to also take care of these packages, and upload them to Sid. I
already packaged some of them, though I was stopped because of the lack
of support for statsd >= 3.x (which is in Sid), which has since been
added. I could resume that work once Mitaka is done (in the meanwhile,
I'm *very* busy with it).

> We'll also want to make sure we're building packages for trusty and xenial.

All I write works for both. It is fully my intention to support Trusty
and Xenial as well as Jessie and hopefully Sid (with probably Sid as
non-voting, as it will break too often).

> Yay for movement!

+1 !!!
If you're really going to be the PTL *and* help this to happen, that's
fantastic. Thanks for your comments already.

Cheers,

Thomas Goirand (zigo)

P.S: ACK for Fungi's reply about security.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Clark Boylan
On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote: 
> Do you have a better insight of job runtimes vs jobs in other
> projects?
> Most of the time in the job runtime is actually spent setting the
> infrastructure up, and I am not sure we can do anything about it, unless
> we
> take this up with Infra.

I haven't done a comparison yet, but let's break down the runtime of a
recent successful neutron full run against neutron master [0].

Basic host setup takes 65 seconds. Start of job to 2016-03-17
22:14:27.397 [1]
Workspace setup takes 520 seconds. 2016-03-17 22:14:27.397 [1] to
2016-03-17 22:23:07.429 [2]
Devstack takes 1205 seconds. 2016-03-17 22:23:18.760 [3] to 2016-03-17
22:43:23.339 [4]
Loading old tempest subunit streams takes 155 seconds. 2016-03-17
22:43:23.340 [5] to 2016-03-17 22:45:58.061 [6]
Tempest takes 1982 seconds. 2016-03-17 22:45:58.201 [7] to 2016-03-17
23:19:00.117 [8]
Then we spend the rest of the test time (76 seconds) cleaning up.
2016-03-17 23:19:00.117 [8] to end of job.

Note that I haven't accounted for all of the time used and instead have
focused on the major steps that use the most time. Also it is Monday
morning and some of my math may be off.

[0]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/
[1]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_14_27_397
[2]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_23_07_429
[3]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_23_18_760
[4]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_43_23_339
[5]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_43_23_340
[6]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_45_58_061
[7]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_22_45_58_201
[8]
http://logs.openstack.org/18/294018/3/check/gate-tempest-dsvm-neutron-full/1cce1e8/console.html.gz#_2016-03-17_23_19_00_117
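
For anyone who wants to double-check the arithmetic above, a quick
sketch that recomputes the durations from the console-log timestamps:

    # recompute the step durations quoted above
    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"

    def secs(start, end):
        return round((datetime.strptime(end, FMT) -
                      datetime.strptime(start, FMT)).total_seconds())

    print(secs("2016-03-17 22:14:27.397", "2016-03-17 22:23:07.429"))  # 520
    print(secs("2016-03-17 22:23:18.760", "2016-03-17 22:43:23.339"))  # 1205
    print(secs("2016-03-17 22:45:58.201", "2016-03-17 23:19:00.117"))  # 1982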

One big takeaway from this is that the vast majority of the time is
spent in devstack and tempest not in the infrastructure setup. You
should be able to dig into both the devstack setup and tempest test
runtimes and hopefully speed things up.

Hopefully this gives you enough information to get started into digging
on this.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Anita Kuno
On 03/21/2016 01:06 PM, Jeremy Stanley wrote:
> On 2016-03-21 06:14:25 -0400 (-0400), Sean Dague wrote:
> [...]
>> I'm +1 on Tony for Stable Maint. Though I also think that we
> >> should roll back any misgivings about election officials running
>> for PTL slots. Of our 40 races only 7 (?) were actual elections.
>> Election officials are chosen well before nominations open up, so
>> when a presumed status quo is not, I don't know why we'd want
>> responsible members of the community sitting on the sidelines.
> [...]
> 
> This is where having two or more election officials, an open
> nomination process and an impartial third-party polling system
> already does a good job of protecting the community from malfeasance
> on the part of an election official abusing their trust to sweep a
> PTL seat.
> 
> That said, while not an election official I did recuse myself from
> generating the electorate rolls this time once it was clear I was in
> a contested race for PTL. Election officials should similarly feel
> free to find other volunteers to take over for them if they realize
> they're about to be officiating over a poll in which they're one of
> the candidates, though I don't know that we need to consider that a
> hard requirement if suitable volunteers are unavailable to take
> over.
> 
Tony and I did discuss me taking over for him as election official so
that he could run. We did agree that a change in the midst of an
election was not ideal but I do feel strongly that anyone who is a valid
candidate for a role should be allowed to run if that is their wish. I
also feel that running in an election whilst participating in the
administration of the process is not a great idea.

The decision was Tony's to make and in the end, he did not inform me
that he wished for me to take over for him, which I respect.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Fuel] Increasing deadlock_timeout for PostgreSQL

2016-03-21 Thread Vladimir Kozhukalov
Roman,

According to the documentation [1], this setting controls the amount of
time to wait on a lock before checking whether the wait is, in fact, a
deadlock. This delay simply avoids spending resources on unnecessary checks.
If the wait is not a deadlock, the check will simply pass and the request
will continue to wait for the lock to be released until lock_timeout is
exceeded.

[1] http://www.postgresql.org/docs/9.3/static/runtime-config-locks.html
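
For instance (a sketch only; deadlock_timeout can only be changed by a
superuser, for CI it would normally be set in postgresql.conf, and the
connection details here are examples):

    # inspect and raise deadlock_timeout for the current session
    import psycopg2

    conn = psycopg2.connect(dbname="nailgun", user="postgres")
    cur = conn.cursor()
    cur.execute("SHOW deadlock_timeout")
    print(cur.fetchone()[0])  # default: 1s
    # this only delays the deadlock *check*; real deadlocks still error out
    cur.execute("SET deadlock_timeout = '5s'")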

Vladimir Kozhukalov

On Mon, Mar 21, 2016 at 7:39 PM, Roman Prykhodchenko  wrote:

> Folks,
>
> We have been analyzing a bunch of random failures in Fuel tests and
> encountered several caused by the deadlock detector raising errors
> occasionally [1]. After attempts to reproduce the same behavior failed, we
> decided to run the same test suite on overloaded nodes. Those test-runs
> allowed us to catch the same behavior we’ve seen on CI slaves. After
> analyzing both PostgreSQL logs and Nailgun’s code we’ve found no reasons
> for those deadlocks to occur.
>
> Thinking about the facts mentioned we came up with the idea that those
> random deadlocks occur in cases when CI slaves are overloaded by other jobs
> and transactions start hitting deadlock timeout. Thus I propose to change
> PostgreSQL’s deadlock_timeout value from the default one to 3-5 seconds.
> That will slow down tests if they run on an overloaded CI slave, but will
> help to avoid random and false-positive deadlock warnings.
>
>
> References:
>
> 1. https://bugs.launchpad.net/fuel/+bug/1556070
>
>
> - romcheg
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Fuel] Increasing deadlock_timeout for PostgreSQL

2016-03-21 Thread Igor Kalnitsky
Hey Roman,

Thank you for investigation. However, I think that changing
'deadlock_timeout' won't help us. According to PostgreSQL
documentation [1], this option sets how frequently to check if there
is a deadlock condition. So it won't fix deadlocks themselves.

Thus I see no reason why we should change that option and wait more
time before raising the deadlock exception.

- Igor

[1]: http://www.postgresql.org/docs/9.4/static/runtime-config-locks.html

On Mon, Mar 21, 2016 at 6:39 PM, Roman Prykhodchenko  wrote:
> Folks,
>
> We have been analyzing a bunch of random failures in Fuel tests and 
> encountered several caused by the deadlock detector raising errors 
> occasionally [1]. After attempts to reproduce the same behavior failed, we 
> decided to run the same test suite on overloaded nodes. Those test-runs allowed us to 
> catch the same behavior we’ve seen on CI slaves. After analyzing both 
> PostgreSQL logs and Nailgun’s code we’ve found no reasons for those deadlocks 
> to occur.
>
> Thinking about the facts mentioned we came up with the idea that those random 
> deadlocks occur in cases when CI slaves are overloaded by other jobs and 
> transactions start hitting deadlock timeout. Thus I propose to change 
> PostgreSQL’s deadlock_timeout value from the default one to 3-5 seconds. That 
> will slow down tests if they run on an overloaded CI slave, but will help to 
> avoid random and false-positive deadlock warnings.
>
>
> References:
>
> 1. https://bugs.launchpad.net/fuel/+bug/1556070
>
>
> - romcheg
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Jeremy Stanley
On 2016-03-21 06:14:25 -0400 (-0400), Sean Dague wrote:
[...]
> I'm +1 on Tony for Stable Maint. Though I also think that we
> should roll back any misgivings about election officials running
> for PTL slots. Of our 40 races only 7 (?) were actual elections.
> Election officials are chosen well before nominations open up, so
> when a presumed status quo is not, I don't know why we'd want
> responsible members of the community sitting on the sidelines.
[...]

This is where having two or more election officials, an open
nomination process and an impartial third-party polling system
already does a good job of protecting the community from malfeasance
on the part of an election official abusing their trust to sweep a
PTL seat.

That said, while not an election official I did recuse myself from
generating the electorate rolls this time once it was clear I was in
a contested race for PTL. Election officials should similarly feel
free to find other volunteers to take over for them if they realize
they're about to be officiating over a poll in which they're one of
the candidates, though I don't know that we need to consider that a
hard requirement if suitable volunteers are unavailable to take
over.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Hummingbird Roadmap

2016-03-21 Thread Avishay Traeger
Hi all,
I was wondering what the roadmap for Hummingbird is.
Will development continue?  Will support continue?  Is it expected to reach
feature parity or even replace the Python code?

Thank you,
Avishay


-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-21 Thread Jim Rollenhagen
On Sat, Mar 19, 2016 at 04:51:14PM +, Jeremy Stanley wrote:
> On 2016-03-19 02:28:48 + (+), Amrith Kumar wrote:
> > Maybe we could collaborate and build something that'd work for
> > multiple projects? Happy to help with this. It is clearly a
> > problem that some projects seem to be facing and there aren't any
> > good solutions there.
> 
> Rossella Sblendido was working on a similar solution[1] for Neutron
> reviews, so this seems to be a popular desire across many projects.
> The rough consensus[2] in Infra was in favor of a CI job to perform
> the appropriate analysis to generate the desired dashboards and then
> push those into Gerrit so that it could host them directly rather
> than people having to maintain and update separate websites hosting
> query-based Gerrit dashboards.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086392.html
> [2] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-11.log.html#t2016-03-11T14:46:05

This does sound good, and I'm all for some collaboration here.

As a first stab, I was planning on building a web page with
ironic-specific priorities highlighted, probably backed by some YAML
file that cores could update. Gerrit solves this problem if we use
topics well, but we're pretty bad at that.
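
As a strawman (the file name and YAML layout are invented), such a file
could even be turned straight into a Gerrit dashboard URL:

    # build a Gerrit dashboard URL from a hand-curated priorities.yaml:
    #   title: Ironic priorities
    #   sections:
    #     - name: Network isolation
    #       query: "project:openstack/ironic topic:bug/1234567"
    import urllib.parse

    import yaml

    data = yaml.safe_load(open("priorities.yaml"))
    params = [("title", data["title"]), ("foreach", "status:open")]
    params += [(s["name"], s["query"]) for s in data["sections"]]
    print("https://review.openstack.org/#/dashboard/?" +
          urllib.parse.urlencode(params))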

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] What would you like changed/fixed/new in oslo??

2016-03-21 Thread Markus Zoeller
> From: Joshua Harlow 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 03/20/2016 04:33 AM
> Subject: [openstack-dev] [oslo][all] What would you like changed/
> fixed/new in oslo??
> 
> Howday all,
> 
> Just to start some conversation for the next cycle,
> 
> I wanted to start thinking about what folks may like to see in oslo (or 
> yes, even what u dislike in any of the oslo libraries).
> 
> For those who don't know, oslo[1] is a lot of libraries (27+) so one of 
> my complaints (and one I will try to help make better) is that most 
> people probably don't know what the different 'offerings' of these 
> libraries are or how to use them (docs, tutorials, docs, and more docs).
> 
> I'll pick another pet-peeve of mine as a second one to get people
> thinking.
> 
> 2) The lack of oslo.messaging having a good security scheme (even 
> something basic as a hmac or signature that can be verified, this scares 
> the heck out of me what is possible over RPC) turned on by default so 
> I'd like to start figuring out how to get *something* (basic == HMAC 
> signature, or maybe advanced == barbican or ???)
> 
> What other thoughts do people have?
> 
> Good, bad, crazy (just keep it PG-13) thoughts are ok ;)
> 
> -Josh
> 
> [1] https://wiki.openstack.org/wiki/Oslo
> 
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

[oslo.config]
I'd like to have the possibility to treat generator warnings as errors.
I looked at [1] and [2] and didn't see an option for that. We use the
generator in Nova and got bug [3], which I'd like to avoid in the future.

References:
[1] 
https://github.com/openstack/oslo.config/blob/master/doc/source/generator.rst
[2] 
https://github.com/openstack/oslo.config/blob/master/oslo_config/generator.py
[3] https://bugs.launchpad.net/nova/+bug/1553231
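
Until the generator grows such an option, a workaround sketch (assuming,
and it is an assumption, that the warnings end up on stderr; the config
file path is just an example):

    # fail the build if oslo-config-generator emits warnings
    import subprocess
    import sys

    proc = subprocess.run(
        ["oslo-config-generator", "--config-file",
         "etc/nova/nova-config-generator.conf"],
        stderr=subprocess.PIPE, universal_newlines=True)
    if proc.returncode != 0 or "WARNING" in proc.stderr:
        sys.stderr.write(proc.stderr)
        sys.exit("generator produced warnings, treating them as errors")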

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Anita Kuno
On 03/21/2016 12:33 PM, Doug Hellmann wrote:
> 
>> On Mar 21, 2016, at 12:03 PM, Alexandre Levine  
>> wrote:
>>
>> Doug,
>>
>> Let me clarify a bit the situation.
>> Before this February there wasn't such a project at all. EC2 API was a 
>> built-in part of nova so no dedicated PTL was required. The built-in part 
>> got removed and our project got promoted. We're a team of 3 developers which 
>> nevertheless are committed to this support for year and a half already. The 
>> reason I didn't nominate myself is solely because I'm new to the process and 
>> I thought that first cycle will actually start from Mitaka so I didn't have 
>> to bother. I hope it's forgivable and our ongoing support of the code to 
>> make sure it works with both OpenStack and Amazon will make up for it if a 
>> little.
> 
> Yes, please don't take my original proposal as anything other than me 
> suggesting some "clean up" based on me not having all the info about the 
> status of EC2. If we need to clarify that all projects are expected to 
> participate in elections, that's something we can address. I'll look at 
> wording of the existing requirements in the next week or so. If the team has 
> a leader, you're all set and I'm happy to support keeping EC2 an official 
> team. 
> 
> Doug
> 
>>
>> Best regards,
>>  Alex Levine

Yes, Alex, as Doug said, his comment was about policy, not about your
situation personally.

In order to manage such a large and diverse group, folks need clear
reliable policy they can trust. Policy needs to be examined and
addressed in terms of applying to the needs of the whole group, not
driven out of shape by corner cases as they appear.

Policy is used as a set of standards (when it works well) to unite those
that would apply said policy. This, I believe, was the motivation behind
Doug's original statement about election policy and group expectations.

Thank you,
Anita.


>>
>>
>>> On 3/21/16 3:09 PM, Doug Hellmann wrote:
>>>
 On Mar 21, 2016, at 5:33 AM, Thierry Carrez  wrote:

 Doug Hellmann wrote:
> I won't be able to make the TC meeting this week because of travel,
> so I wanted to lay out my thoughts on the three PTL-less projects
> based on the outcome of the recent election (EC2-API, Winstackers,
> and Stable Maintenance).
> [...]
 First of all, I think we need to recognize that with more than 50 project 
 teams, it's pretty likely there will always be people missing the 
 nomination boat for one reason or another. Small and understaffed projects 
 just make it even more likely, as the pool of candidates there is so 
 small. I actually find it easier to excuse EC2API and Winstackers' handful 
 of contributors for missing it in Mitaka than to excuse Magnum and its 
 hundreds of contributors for missing it in Liberty.
>>> Forgiveness is fine, but our governance process is one of the few explicit 
>>> things we list as required of official teams, and I think we should 
>>> consider it a strong requirement for remaining actively listed no matter 
>>> the size or age of the team.
>>>
> The EC2-API project doesn't appear to be very actively worked on.
> There is one very recent commit from an Oslo team member, another
> couple from a few days before, and then the next one is almost a
> month old. Given the lack of activity, if no team member has
> volunteered to be PTL I think we should remove the project from the
> official list for lack of interest.
 The EC2API project is a bit of a corner case: something we want to exist 
 as an official project but which is critically understaffed. Missing the 
 PTL nomination boat is more a sign of this understaffing than anything 
 else. I suspect we still very much want this to exist, so I'm not 
 convinced we should take this opportunity to remove the project.

 If anything, I hope this situation may remind the various stakeholders 
 who depend on that functionality being present and maintained 
 that it doesn't exist in a vacuum. Open source software is not magic 
 ponies giving you free-as-in-beer software. You need the people who depend 
 on the feature to support (directly or indirectly) its maintenance.
>>> No matter how much we need the project, failing to demonstrate that the 
>>> team is actually involved is a bad sign. It sounds like there was a recent 
>>> change in leadership that made the need to formally declare a candidacy 
>>> unclear, so maybe we just need to work on making that more clear.
>>>
> The Winstackers project is much more active in the repository, but
> there doesn't seem to be much traffic on the mailing list. It's not
> clear why no one signed up to be PTL, and I couldn't find a notice
> that the current PTL is not running. I'm tempted to suggest removing
> Winstackers from the official project list for lack of participation

Re: [openstack-dev] [oslo][all] What would you like changed/fixed/new in oslo??

2016-03-21 Thread Michael Johnson
Does Oslo provide a consistent hashing library?

I think a number of projects (swift [1] and ironic [2] for example)
are using various implementations and Octavia may need to start using
consistent hashing soon.

[1] http://docs.openstack.org/developer/swift/ring.html
[2] 
https://blueprints.launchpad.net/ironic/+spec/instance-mapping-by-consistent-hash
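
For readers unfamiliar with the technique, the core idea those
implementations share is small enough to sketch (illustration only, not
the swift or ironic code):

    # minimal consistent hash ring
    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, nodes, replicas=100):
            # each node gets `replicas` points on the ring to smooth out
            # the key distribution
            self._ring = sorted(
                (self._hash("%s-%d" % (node, i)), node)
                for node in nodes for i in range(replicas))
            self._keys = [h for h, _ in self._ring]

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

        def get_node(self, key):
            # first point clockwise of the key's hash, wrapping around
            idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
            return self._ring[idx][1]

    ring = HashRing(["node1", "node2", "node3"])
    print(ring.get_node("amphora-42"))  # stable while membership is stable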

Michael

On Sat, Mar 19, 2016 at 8:33 PM, Joshua Harlow  wrote:
> Howday all,
>
> Just to start some conversation for the next cycle,
>
> I wanted to start thinking about what folks may like to see in oslo (or yes,
> even what u dislike in any of the oslo libraries).
>
> For those who don't know, oslo[1] is a lot of libraries (27+) so one of my
> complaints (and one I will try to help make better) is that most people
> probably don't know what the different 'offerings' of these libraries are or
> how to use them (docs, tutorials, docs, and more docs).
>
> I'll pick another pet-peeve of mine as a second one to get people thinking.
>
> 2) The lack of oslo.messaging having a good security scheme (even something
> basic as a hmac or signature that can be verified, this scares the heck out
> of me what is possible over RPC) turned on by default so I'd like to start
> figuring out how to get *something* (basic == HMAC signature, or maybe
> advanced == barbican or ???)
>
> What other thoughts do people have?
>
> Good, bad, crazy (just keep it PG-13) thoughts are ok ;)
>
> -Josh
>
> [1] https://wiki.openstack.org/wiki/Oslo
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Fuel] Increasing deadlock_timeout for PostgreSQL

2016-03-21 Thread Roman Prykhodchenko
Folks,

We have been analyzing a bunch of random failures in Fuel tests and encountered 
several caused by the deadlock detector raising errors occasionally [1]. After 
attempts to reproduce the same behavior failed, we decided to run the same test 
suite on overloaded nodes. Those test-runs allowed us to catch the same behavior 
we’ve seen on CI slaves. After analyzing both PostgreSQL logs and Nailgun’s 
code we’ve found no reasons for those deadlocks to occur.

Thinking about the facts mentioned we came up with the idea that those random 
deadlocks occur in cases when CI slaves are overloaded by other jobs and 
transactions start hitting deadlock timeout. Thus I propose to change 
PostgreSQL’s deadlock_timeout value from the default one to 3-5 seconds. That 
will slow down tests if they run on an overloaded CI slave, but will help to 
avoid random and false-positive deadlock warnings.


References:

1. https://bugs.launchpad.net/fuel/+bug/1556070


- romcheg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Doug Hellmann

> On Mar 21, 2016, at 12:03 PM, Alexandre Levine  
> wrote:
> 
> Doug,
> 
> Let me clarify a bit the situation.
> Before this February there wasn't such a project at all. EC2 API was a 
> built-in part of nova so no dedicated PTL was required. The built-in part got 
> removed and our project got promoted. We're a team of 3 developers who are 
> nevertheless committed to this support for a year and a half already. The 
> reason I didn't nominate myself is solely because I'm new to the process and 
> I thought that the first cycle would actually start from Mitaka, so I didn't 
> have to bother. I hope it's forgivable and our ongoing support of the code to 
> make sure it works with both OpenStack and Amazon will make up for it, if only a little.

Yes, please don't take my original proposal as anything other than me 
suggesting some "clean up" based on me not having all the info about the status 
of EC2. If we need to clarify that all projects are expected to participate in 
elections, that's something we can address. I'll look at wording of the 
existing requirements in the next week or so. If the team has a leader, you're 
all set and I'm happy to support keeping EC2 an official team. 

Doug

> 
> Best regards,
>  Alex Levine
> 
> 
>> On 3/21/16 3:09 PM, Doug Hellmann wrote:
>> 
>>> On Mar 21, 2016, at 5:33 AM, Thierry Carrez  wrote:
>>> 
>>> Doug Hellmann wrote:
 I won't be able to make the TC meeting this week because of travel,
 so I wanted to lay out my thoughts on the three PTL-less projects
 based on the outcome of the recent election (EC2-API, Winstackers,
 and Stable Maintenance).
 [...]
>>> First of all, I think we need to recognize that with more than 50 project 
>>> teams, it's pretty likely there will always be people missing the 
>>> nomination boat for one reason or another. Small and understaffed projects 
>>> just make it even more likely, as the pool of candidates there is so small. 
>>> I actually find it easier to excuse EC2API and Winstackers' handful of 
>>> contributors for missing it in Mitaka than to excuse Magnum and its 
>>> hundreds of contributors for missing it in Liberty.
>> Forgiveness is fine, but our governance process is one of the few explicit 
>> things we list as required of official teams, and I think we should consider 
>> it a strong requirement for remaining actively listed no matter the size or 
>> age of the team.
>> 
 The EC2-API project doesn't appear to be very actively worked on.
 There is one very recent commit from an Oslo team member, another
 couple from a few days before, and then the next one is almost a
 month old. Given the lack of activity, if no team member has
 volunteered to be PTL I think we should remove the project from the
 official list for lack of interest.
>>> The EC2API project is a bit of a corner case: something we want to exist as 
>>> an official project but which is critically understaffed. Missing the PTL 
>>> nomination boat is more a sign of this understaffing than anything else. I 
>>> suspect we still very much want this to exist, so I'm not convinced we 
>>> should take this opportunity to remove the project.
>>> 
>>> If anything, I hope this situation may remind the various stakeholders 
>>> who depend on that functionality being present and maintained that it 
>>> doesn't exist in a vacuum. Open source software is not magic ponies giving 
>>> you free-as-in-beer software. You need the people who depend on the feature 
>>> to support (directly or indirectly) its maintenance.
>> No matter how much we need the project, failing to demonstrate that the team 
>> is actually involved is a bad sign. It sounds like there was a recent change 
>> in leadership that made the need to formally declare a candidacy unclear, 
>> so maybe we just need to work on making that more clear.
>> 
 The Winstackers project is much more active in the repository, but
 there doesn't seem to be much traffic on the mailing list. It's not
 clear why no one signed up to be PTL, and I couldn't find a notice
 that the current PTL is not running. I'm tempted to suggest removing
 Winstackers from the official project list for lack of participation
 in project governance, but perhaps a probation period is in order
 since it's a relatively new team. Probation would depend on having
 the team find a PTL volunteer, of course.
>>> I suspect this one is more of a classic "didn't pay attention" case. Since 
>>> I don't think we need Winstackers as an official project as much as we need 
>>> EC2API, we should definitely have a discussion about whether it's still 
>>> worth keeping as an official project or if it would be as good to just make 
>>> it an unofficial project.
>>> 
 The situation with the Stable Maintenance team is ironically shaky.
 The outgoing PTL has entered the Nova PTL election, though he has
 said he would take up the 

Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Armando M.
On 21 March 2016 at 04:15, Rossella Sblendido  wrote:

> Hello all,
>
> the tests that we run on the gate for Neutron take pretty long (longer
> than one hour). I think we can improve that and make better use of the
> resources.

Here are some ideas that came up when Ihar and I discussed this topic
> during the sprint in Brno:
>
> 1) We have few jobs that are non-voting. I think it's OK to have
> non-voting jobs for a limited amount of time, while we try to make them
> stable but this shouldn't be too long, otherwise we waste time running
> those tests without even using the results. If a job is still not-voting
> after 3 months (or 4 or 6, we can find a good time interval) the job
> should be removed. My hope is that this threat will make us find some
> time to actually fix the job and make it vote :)
>
> 2) multi-node jobs run for every patch set. Is that really what we want?
> They take pretty long. We could move them to a periodic job. I know we
> can easily forget about periodic jobs, to avoid that we could run them
> in the gate queue too. If a patch can't merge because of a failure we
> will fix the issue. To trigger them for a specific patch that might
> affect multi-node we can run the experimental jobs.
>
> Thoughts?
>

Thanks for raising the topic. That said, I am not sure I see how what you
propose is going to make things better. Jobs, either non-voting or multinode,
run in parallel, so reducing the number of jobs won't reduce the time to
feedback, though it would improve resource usage. We are already pretty
conscious of that, and compared to other projects we already run a limited
number of jobs, but we can do better, of course.

Do you have a better insight into job runtimes vs jobs in other projects?
Most of the time in the job runtime is actually spent setting the
infrastructure up, and I am not sure we can do anything about it, unless we
take this up with Infra.


>
> Rossella
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Armando M.
On 21 March 2016 at 04:32, Sean M. Collins  wrote:

> Rossella Sblendido wrote:
> > 2) multi-node jobs run for every patch set. Is that really what we want?
> > They take pretty long. We could move them to a periodic job.
>
> I would rather remove all the single-node jobs. Nova has been moving to
> multinode jobs for their gate (if I recall correctly my
> conversation with Dan Smith) and we should be moving in this direction
> too. We should test Neutron the way it is deployed in production.
>
>
This was not true last time I checked. Switching to multinode jobs for the
gate means that all projects in the integrated gate will have to use the
multinode configuration.


> Also, who is really monitoring the periodic jobs? Truthfully? I know
> there are some IPv6 jobs that are periodic and I'll be the first to
> admit that I am not following them *at all*.
>
> So, my thinking is, unless it's running at the gate and inflicting pain
> on people, it's not going to be a treated as a priority. Look at Linux
> Bridge - serious race conditions that existed for years only
> got fixed once I inflicted pain on all the Neutron devs by making it
> voting and running on every patchset (sorry, not sorry).
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Somebody posted this on #openstack-meetng-4

2016-03-21 Thread Anita Kuno
On 03/21/2016 11:59 AM, Sukhdev Kapur wrote:
> I just noticed that somebody posted this on the above meeting channel.


Yes, that took place over the weekend and pleia2 removed that user.

Should you see spam occur in irc channels please notify someone in
#openstack-infra so they can address the issue. Posting to the mailing
list will likely make the problem worse rather than better, as you would be
spreading whatever spam they are trying to disseminate.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ec2-api] EC2 API Future

2016-03-21 Thread Doug Hellmann


> On Mar 20, 2016, at 3:26 PM, Tim Bell  wrote:
> 
> 
> Doug,
> 
> Given that the EC2 functionality is currently in use by at least 1/6th of 
> production clouds 
> (https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf page 
> 34), this is a worrying situation.

I completely agree. 

> 
> The EC2 functionality was recently deprecated from Nova on the grounds that 
> the EC2 API project was the correct way to proceed. With the proposal now to 
> not have an EC2 API project at all, this will leave many in the community 
> confused.

That wasn't the proposal. We have lots of unofficial projects. My suggestion 
was that if the EC2 team wasn't participating in the community governance 
process, we should not list them as official. That doesn't mean disbanding the 
project, just updating our reference materials to reflect reality and clearly 
communicat expectations. It sounds like that was a misunderstanding which has 
been cleared up, though, so I think we're all set to continue considering it an 
official project. 

Doug

> 
> Tim
> 
> 
> 
> 
>> On 20/03/16 17:48, "Doug Hellmann"  wrote:
>> 
>> ...
>> 
>> The EC2-API project doesn't appear to be very actively worked on.
>> There is one very recent commit from an Oslo team member, another
>> couple from a few days before, and then the next one is almost a
>> month old. Given the lack of activity, if no team member has
>> volunteered to be PTL I think we should remove the project from the
>> official list for lack of interest.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn][Neutron] OVN support for routed networks(plugin interface for host mapping)

2016-03-21 Thread Russell Bryant
On Thu, Mar 17, 2016 at 1:45 AM, Hong Hui Xiao  wrote:

> Hi Russell.
>
> Since the "ovn-bridge-mapping" will become accessible in OVN Southbound
> DB, do you mean that the neutron plugin can read those bridge mappings from
> the OVN Southbound DB? I didn't think in that way because I thought
> networking-ovn will only transact data with OVN Northbound DB.
>

You're right that networking-ovn currently only uses the OVN northbound
DB.  This requirement crosses the line into physical space and needing to
know about some physical environment details, so reading from the
southbound DB for this info is acceptable.

> Also, do you have any link to describe the ongoing work in OVN to sync the
> "ovn-bridge-mapping" from hypervisor?


This patch introduces some new tables to the southbound DB:

http://openvswitch.org/pipermail/dev/2016-March/068112.html

I was thinking that we would be able to read the physical endpoints table
to get what we need, but now I'm thinking it may not fit our use case.

The alternative would be to just store the bridge mappings as an
external_id on the Chassis record in the southbound database.  How quickly
is this needed?
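
For context, each hypervisor already declares its mappings in its local
ovsdb, which is what ovn-controller reads; roughly (values here are
examples):

    # advertise this hypervisor's bridge mappings to ovn-controller
    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "set", "open", ".",
        "external-ids:ovn-bridge-mappings=physnet1:br-eth1"])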

-- 
Russell Bryant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposing Tony Breeds for stable-maint-core

2016-03-21 Thread Doug Hellmann


> On Mar 18, 2016, at 4:11 PM, Matt Riedemann  
> wrote:
> 
> I'd like to propose tonyb for stable-maint-core. Tony is pretty much my day 
> to day guy on stable, he's generally in every stable team meeting (which is 
> not well attended, so I appreciate it), and he's as proactive as ever on 
> staying on top of gate issues when they come up, so he's well deserving of it 
> in my mind.
> 
> Here are review stats for stable for the last 90 days (as defined in the 
> reviewstats repo):
> 
> http://paste.openstack.org/show/491155/
> 
> Tony is also the latest nova-stable-maint core and he's done a great job 
> there (as expected) and is very active, which is again much appreciated.
> 
> Please respond with ack/nack.

+1 ack

Doug

> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-21 Thread Markus Zoeller
The Neutron RFE process [1] demands time-commitment from various groups 
which I cannot give. A face-to-face discussion could be beneficial to 
come to a conclusion, so I proposed a session for the Newton summit 
at [2] (see section "RFEs: communication channel and process"). 
There won't be a big difference on the bug list in 4 weeks when the 
summit starts, which means (with my "bug czar" hat on) I can handle 
this.

I'm going to clean up the existing wishlist bugs which are older
than 12 months, which is a common cleanup task as described in [3]
and should surprise no one. Help is appreciated [4]. It makes the list
significantly less cluttered and helps the nova bugs team immediately. 
For the next 4 weeks until the summit, new bug reports which are RFEs will 
be set to "wishlist" again, to keep a consistent state which is then
the starting point for whatever actions we decide on at the summit.

I'm going to forward this thread to the ops ML, as it also affects them
a lot and they should have the chance to voice their opinions. I still 
prefer the "discuss RFEs on the ML (-dev/-ops)" idea, Matt already 
pointed out the potential benefits.

With a "requirements engineering hat" on, without any Nova "flavor":
Maybe an extra list "https://bugs.launchpad.net/openstack-rfe; would
make sense, as, I assume, the other projects face the same challenges.
But I'd rather not discuss this in this thread.

Thanks for your input so far. Let's see what the ops say on their ML
and what the Newton summit will bring.

References:
[1] 
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
[2] https://etherpad.openstack.org/p/newton-nova-summit-ideas
[3] 
https://wiki.openstack.org/wiki/BugTriage#Task_9:_Deprecate_old_wishlist_bugs_.28bug_supervisors.29
[4] http://45.55.105.55:8082/bugs-dashboard.html#tabWishlist

Regards, Markus Zoeller (markus_z)

Rochelle Grober  wrote on 03/18/2016 12:45:38 
AM:

> From: Rochelle Grober 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 03/18/2016 12:46 AM
> Subject: Re: [openstack-dev] [nova] Wishlist bugs == (trivial) 
blueprint?
> 
> (Inline because the mail formatted friendly this time)
> 
> From: Tim Bell March 17, 2016 11:26 AM:
> On 17/03/16 18:29, "Sean Dague"  wrote:
> 
> >On 03/17/2016 11:57 AM, Markus Zoeller wrote:
> >
> >> Suggested action items:
> >> 
> >> 1. I close the open wish list items older than 6 months (=138 
reports)
> >>and explain in the closing comment that they are outdated and the 
> >>ML should be used for future RFEs (as described above).
> >> 2. I post on the openstack-ops ML to explain why we do this
> >> 3. I change the Nova bug report template to explain this to avoid 
more
> >>RFEs in the bug report list in the future.
> 
> Please take a look at how Neutron is doing this.  [1] is their list of
> RFEs. [2] is the ML post Kyle provided to document how Ops and other 
> users can submit RFEs without needing to know how to submit specs or 
> code OpenStack Neutron. I'll let Kyle post on how successful the 
> process is, if he wants to.
> 
> The point here is that Neutron uses wishlist combined with [RFE] in 
> the title to identify Ops and user requests.  This identifies items as
> Ops/user asks that these communities consider important.  Also, the 
> point is that Yes, post the RFE on the ops list, but open the RFE bug 
> and allow comments, voting there.  The bug system does much better 
> keeping track of the request and Ops votes once it exists.  Plus, once
> Ops and others know about the lightweight process, they'll know where 
> to go looking so they can vote/add comments.  Please don't restrict 
> RFEs to mailing list.  It's a great way to lose them.  So my suggestion 
here is:
> 
> 1.  Close the wishlist (all of it???) and post in each that if it's a 
> new feature the submitter thinks is useful to himself and others, 
> resubmit with [RFE] in title, priority wishlist, pointer to the Neutron 
docs.
> 2.  Post to openstack-ops and usercommittee why, and ask them to 
> discuss on the ML and review all [RFE]s that they submit (before or 
> after, but if the bug number is on ML, they can vote on it and add 
> comments, etc.)
> 3. Change the template to highlight/require the information needed to 
> move forward with *any* submitted bug by dev.
> 
> >> 4. In 6 months I double-check the rest of the open wishlist bugs
> >>if they found developers, if not I'll close them too.
> >> 5. Continously double-check if wishlist bug reports get created
> >>
> >> Doubts? Thoughts? Concerns? Agreements?
> >
> >This sounds like a very reasonable plan to me. Thanks for summarizing
> >all the concerns and coming up with a pretty balanced plan here. +1.
> >
> >   -Sean
> 
> I'd recommend running it by the -ops* list along with the RFE 
> proposal. I think many of the cases

Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-21 Thread Hayes, Graham
On 19/03/2016 17:47, Akihiro Motoki wrote:
> Hi dashboard plugins team
> (sahara-dashboard, trove-dashboard, magnum-ui, murano-dashboard)
>

There is also a designate-dashboard plugin - we have translation set up
for Mitaka.

We have strings frozen as of RC1 - so if there are translations before
RC2 we can release them as part of RC2

- Graham

> As Horizon i18n liaison, I would like to have a consensus on a rough schedule
> of translation import for Horizon plugins.
> Several plugins and horizon itself already released RC1.
>
> For Horizon translation, we use the following milestone:
>Mitaka-3 : Soft string freeze
>Mitaka-RC1: Hard string freeze
>Mitaka-RC2: Final translation import
>
> Does this milestone sound good for sahara/trove/magnum/murano dashboard 
> plugins?
> This means that each dashboard plugin project needs to release RC2 (or some 
> RC)
> even only for translation import. Otherwise, translator efforts after
> hard string freeze
> will not be included in Mitaka release.
>
> If the above idea sounds good, I hope RC2 (or later RC) will be released
> early in the week of Mar 28 for translation import.
> This schedule allows translators to work on translations after "Hard
> string freeze".
>
> Mitaka is the first release for Horizon plugins with translations,
> so I hope this mail helps everyone around translations.
>
> Best Regards,
> Akihiro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election][ec2-api][winstackers][stable] status of teams without PTL candidates

2016-03-21 Thread Alexandre Levine

Doug,

Let me clarify a bit the situation.
Before this February there wasn't such a project at all. EC2 API was a 
built-in part of nova so no dedicated PTL was required. The built-in 
part got removed and our project got promoted. We're a team of 3 
developers who are nevertheless committed to this support for a year and 
a half already. The reason I didn't nominate myself is solely because 
I'm new to the process and I thought that the first cycle would actually 
start from Mitaka, so I didn't have to bother. I hope it's forgivable and 
our ongoing support of the code to make sure it works with both 
OpenStack and Amazon will make up for it, if only a little.


Best regards,
  Alex Levine


On 3/21/16 3:09 PM, Doug Hellmann wrote:



On Mar 21, 2016, at 5:33 AM, Thierry Carrez  wrote:

Doug Hellmann wrote:

I won't be able to make the TC meeting this week because of travel,
so I wanted to lay out my thoughts on the three PTL-less projects
based on the outcome of the recent election (EC2-API, Winstackers,
and Stable Maintenance).
[...]

First of all, I think we need to recognize that with more than 50 project 
teams, it's pretty likely there will always be people missing the nomination 
boat for one reason or another. Small and understaffed projects just make it 
even more likely, as the pool of candidates there is so small. I actually find 
it easier to excuse EC2API and Winstackers' handful of contributors for missing 
it in Mitaka than to excuse Magnum and its hundreds of contributors for missing 
it in Liberty.

Forgiveness is fine, but our governance process is one of the few explicit 
things we list as required of official teams, and I think we should consider it 
a strong requirement for remaining actively listed no matter the size or age of 
the team.


The EC2-API project doesn't appear to be very actively worked on.
There is one very recent commit from an Oslo team member, another
couple from a few days before, and then the next one is almost a
month old. Given the lack of activity, if no team member has
volunteered to be PTL I think we should remove the project from the
official list for lack of interest.

The EC2API project is a bit of a corner case: something we want to exist as an 
official project but which is critically understaffed. Missing the PTL 
nomination boat is more a sign of this understaffing than anything else. I 
suspect we still very much want this to exist, so I'm not convinced we should 
take this opportunity to remove the project.

If anything, I hope this situation may remind the various stakeholders who 
depend on that functionality being present and maintained that it doesn't 
exist in a vacuum. Open source software is not magic ponies giving you 
free-as-in-beer software. You need the people who depend on the feature to 
support (directly or indirectly) its maintenance.

No matter how much we need the project, failing to demonstrate that the team is 
actually involved is a bad sign. It sounds like there was a recent change in 
leadership that made the need to formally declare a candidacy unclear, so 
maybe we just need to work on making that more clear.


The Winstackers project is much more active in the repository, but
there doesn't seem to be much traffic on the mailing list. It's not
clear why no one signed up to be PTL, and I couldn't find a notice
that the current PTL is not running. I'm tempted to suggest removing
Winstackers from the official project list for lack of participation
in project governance, but perhaps a probation period is in order
since it's a relatively new team. Probation would depend on having
the team find a PTL volunteer, of course.

I suspect this one is more of a classic "didn't pay attention" case. Since I 
don't think we need Winstackers as an official project as much as we need EC2API, we 
should definitely have a discussion about whether it's still worth keeping as an official 
project or if it would be as good to just make it an unofficial project.


The situation with the Stable Maintenance team is ironically shaky.
The outgoing PTL has entered the Nova PTL election, though he has
said he would take up the Stable team work again if he does not
become Nova PTL. That election will not be over until 24 March, so
I think we should wait before taking any action. If Matt becomes
Nova PTL, and no other volunteer steps forward, I will take on the
responsibilities, though I do want to keep the Stable team separate
from the Release team. That said, I would very much prefer to have
someone else be the Stable team PTL so I hope we can find a volunteer.

The situation with stable maintenance is a bit of a corner case too. Matt would 
have stayed PTL for stable if he wasn't elected for Nova, and Tony, being an 
election official, couldn't really throw his name in the PTL election hat at 
the last minute. I think this is a case where the TC looking at the potential 
names and making their choice will be working as intended

I 

Re: [openstack-dev] [oslo][all] What would you like changed/fixed/new in oslo??

2016-03-21 Thread Ian Cordasco
-Original Message-
From: Adam Young 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 20, 2016 at 12:03:01
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [oslo][all] What would you like changed/fixed/new 
in oslo??

> On 03/19/2016 11:33 PM, Joshua Harlow wrote:
> > Howday all,
> >
> > Just to start some conversation for the next cycle,
> >
> > I wanted to start thinking about what folks may like to see in oslo
> > (or yes, even what u dislike in any of the oslo libraries).
> >
> > For those who don't know, oslo[1] is a lot of libraries (27+) so one
> > of my complaints (and one I will try to help make better) is that most
> > people probably don't know what the different 'offerings' of these
> > libraries are or how to use them (docs, tutorials, docs, and more docs).
> >
> > I'll pick another pet-peeve of mine as a second one to get people
> > thinking.
> >
> > 2) The lack of oslo.messaging having a good security scheme (even
> > something basic as a hmac or signature that can be verified, this
> > scares the heck out of me what is possible over RPC) turned on by
> > default so I'd like to start figuring out how to get *something*
> > (basic == HMAC signature, or maybe advanced == barbican or ???)
>  
> Red Herring. We don't need HMAC. We need to make better use of the
> tools in Rabbit.
>  
> 1. Split the vhosts between notifications and control plane. The code
> is in place to do this already, but we need to update the configuration
> tools to make use of that.

I'd agree that this definitely makes sense.
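
For anyone following along, the vhost split itself is plain RabbitMQ
administration; roughly (all names and the password are placeholders):

    # separate vhosts, plus a per-node user scoped to the RPC vhost
    import subprocess

    def rabbitmqctl(*args):
        subprocess.check_call(("rabbitmqctl",) + args)

    for vhost in ("/rpc", "/notifications"):
        rabbitmqctl("add_vhost", vhost)

    rabbitmqctl("add_user", "nova-compute-01", "autogenerated-secret")
    rabbitmqctl("set_permissions", "-p", "/rpc", "nova-compute-01",
                ".*", ".*", ".*")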

> 2. Drop the default login and password. All services, and all compute
> nodes should get their own Rabbit user and an autogenerated password.
> Even better would be to use Client Certificate validation, but that
> requires a CA.

The OpenStack Ansible project already does this. I'd be surprised if the other 
deployment projects aren't already doing this. Besides I'm not certain this is 
something that oslo/oslo.messaging can enforce.

> 3. We desperately need a CA story.

Like Anchor (https://wiki.openstack.org/wiki/Security/Projects/Anchor, 
https://git.openstack.org/openstack/anchor)?

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Somebody posted this on #openstack-meetng-4

2016-03-21 Thread Sukhdev Kapur
I just noticed that somebody posted this on the above meeting channel.


[23:22:31]   Allah is doing

[23:22:42]   sun is not doing Allah is doing

[23:22:42]   *yamamoto* (~yamam...@i118-21-144-234.s30.a048.ap.plala.or.jp)
left IRC (Remote host closed the connection)

[23:22:54]   moon is not doing Allah is doing

[23:23:10]   stars are not doing Allah is doing

[23:23:22]   planets are not doing Allah is doing

[23:23:35]   galaxies are not doing Allah is doing

[23:23:47]   oceans are not doing Allah is doing

[23:23:51]   *Sukhdev* (~textual@162.210.130.3) left IRC (Ping timeout: 246
seconds)

[23:24:04]   mountains are not doing Allah is doing

[23:24:16]   trees are not doing Allah is doing

[23:24:29]   mom is not doing Allah is doing

[23:24:38]   dad is not doing Allah is doing

[23:24:50]   boss is not doing Allah is doing

[23:24:59]   job is not doing Allah is doing

[23:25:11]   dollar is not doing Allah is doing

[23:25:23]   degree is not doing Allah is doing

[23:25:41]   medicine is not doing Allah is doing

[23:25:59]   customers are not doing Allah is doing

[23:26:20]   you can not get a job without the permission of
allah

[23:26:43]   you can not get married without the permission of
allah

[23:27:05]   nobody can get angry at you without the permission
of allah

[23:27:18]   light is not doing Allah is doing

[23:27:29]   fan is not doing Allah is doing

[23:27:30]   *amotoki* (~amot...@fl1-119-242-22-153.tky.mesh.ad.jp) joined
the channel

[23:27:40]   businessess are not doing Allah is doing

[23:27:52]   america is not doing Allah is doing

[23:28:12]   fire can not burn without the permission of allah

[23:28:28]   knife can not cut without the permission of allah

[23:28:45]   rulers are not doing Allah is doing

[23:29:01]   governments are not doing Allah is doing

[23:29:13]   sleep is not doing Allah is doing

[23:29:26]   hunger is not doing Allah is doing

[23:31:37]   food does not take away the hunger Allah takes away
the hunger

[23:31:54]   *amotoki* (~amot...@fl1-119-242-22-153.tky.mesh.ad.jp) left
IRC (Ping timeout: 250 seconds)

[23:32:03]   water does not take away the thirst Allah takes
away the thirst

[23:32:18]   seeing is not doing Allah is doing

[23:32:33]   hearing is not doing Allah is doing

[23:32:47]   seasons are not doing Allah is doing

[23:33:01]   weather is not doing Allah is doing

[23:33:10]   humans are not doing Allah is doing

[23:33:20]   animals are not doing Allah is doing

[23:33:42]   the best amongst you are those who learn and teach
quran

[23:34:39]   *spzala* (~spzala@107.15.105.223) joined the channel

[23:35:17]   one letter read from book of Allah amounts to one
good deed ten times

[23:35:55]   one letter read from book of Allah amounts to one
good deed and Allah multiplies one good deed ten times

[23:36:34]   hearts get rusted as does iron with water to remove
rust from heart recitation of Quran and rememberance of death

[23:36:37]   *Jeffrey4l__* (~Jeffrey@119.251.238.37) joined the channel

[23:36:44]   heart is likened to a mirror

[23:37:09]   when a person commits one sin a black dot sustains
the heart

[23:38:38]   Allah is doing

[23:38:55]   sun is not doing Allah is doing

[23:39:31]   *spzala* (~spzala@107.15.105.223) left IRC (Ping timeout: 248
seconds)

[23:39:34]   *user_9876* (779dcd38@gateway/web/cgi-irc/
kiwiirc.com/ip.119.157.205.56) left IRC (Quit: http://www.kiwiirc.com/ - A
hand crafted IRC client)

[23:41:51]   *yamamoto* (~yamam...@i118-21-144-234.s30.a048.ap.plala.or.jp)
joined the channel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-21 Thread Ben Nemec
On 03/21/2016 07:33 AM, Derek Higgins wrote:
> On 17 March 2016 at 16:59, Ben Nemec  wrote:
>> On 03/10/2016 05:24 PM, Jeremy Stanley wrote:
>>> On 2016-03-10 16:09:44 -0500 (-0500), Dan Prince wrote:
 This seems to be the week people want to pile it on TripleO. Talking
 about upstream is great but I suppose I'd rather debate major changes
 after we branch Mitaka. :/
>>> [...]
>>>
>>> I didn't mean to pile on TripleO, nor did I intend to imply this was
>>> something which should happen ASAP (or even necessarily at all), but
>>> I do want to better understand what actual benefit is currently
>>> derived from this implementation vs. a more typical third-party CI
>>> (which lots of projects are doing when they find their testing needs
>>> are not met by the constraints of our generic test infrastructure).
>>>
 With regards to Jenkins restarts I think it is understood that our job
 times are long. How often do you find infra needs to restart Jenkins?
>>>
>>> We're restarting all 8 of our production Jenkins masters weekly at a
>>> minimum, but generally more often when things are busy (2-3 times a
>>> week). For many months we've been struggling with a thread leak for
>>> which their development team has not seen as a priority to even
>>> triage our bug report effectively. At this point I think we've
>>> mostly given up on expecting it to be solved by anything other than
>>> our upcoming migration off of Jenkins, but that's another topic
>>> altogether.
>>>
 And regardless of that what if we just said we didn't mind the
 destructiveness of losing a few jobs now and then (until our job
 times are under the line... say 1.5 hours or so). To be clear I'd
 be fine with infra pulling the rug on running jobs if this is the
 root cause of the long running jobs in TripleO.
>>>
>>> For manual Jenkins restarts this is probably doable (if additional
>>> hassle), but I don't know whether that's something we can easily
>>> shoehorn into our orchestrated/automated restarts.
>>>
 I think the "benefits are minimal" is a bit of an overstatement. The
 initial vision for TripleO CI stands and I would still like to see
 individual projects entertain the option to use us in their gates.
>>> [...]
>>>
>>> This is what I'd like to delve deeper into. The current
>>> implementation isn't providing you with any mechanism to prevent
>>> changes which fail jobs running in the tripleo-test cloud from
>>> merging to your repos, is it? You're still having to manually
>>> inspect the job results posted by it? How is that particularly
>>> different from relying on third-party CI integration?
>>>
>>> As for other projects making use of the same jobs, right now the
>>> only convenience I'm aware of is that they can add check-tripleo
>>> pipeline jobs in our Zuul layout file instead of having you add it
>>> to yours (which could itself reside in a Git repo under your
>>> control, giving you even more flexibility over those choices). In
>>> fact, with a third-party CI using its own separate Gerrit account,
>>> you would be able to leave clear -1/+1 votes on check results which
>>> is not possible with the present solution.
>>>
>>> So anyway, I'm not saying that I definitely believe the third-party
>>> CI route will be better for TripleO, but I'm not (yet) clear on what
>>> tangible benefit you're receiving now that you lose by switching to
>>> that model.
>>>
>>
>> FWIW, I think third-party CI probably makes sense for TripleO.
>> Practically speaking we are third-party CI right now - we run our own
>> independent hardware infrastructure, we aren't multi-region, and we
>> can't leave a vote on changes.  Since the first two aren't likely to
>> change any time soon (although I believe it's still a long-term goal to
>> get to a place where we can run in regular infra and just contribute our
>> existing CI hardware to the general infra pool, but that's still a long
>> way off), and moving to actual third-party CI would get us the ability
>> to vote, I think it's worth pursuing.
>>
>> As an added bit of fun, we have a forced move of our CI hardware coming
>> up in the relatively near future, and if we don't want to have multiple
>> days (and possibly more, depending on how the move goes) of TripleO CI
>> outage we're probably going to need to stand up a new environment in
>> parallel anyway.  If we're doing that it might make sense to try hooking
>> it in through the third-party infra instead of the way we do it today.
>> Hopefully that would allow us to work out the kinks before the old
>> environment goes away.
>>
>> Anyway, I'm sure we'll need a bunch more discussion about this, but I
>> wanted to chime in with my two cents.
> 
> We need to answer this question soon, I'm currently working on the CI
> parts that we need in order to move to OVB [1] and was assuming we
> would be maintaining the status quo. What we end up doing would look
> very different if we move to 3rd party 

Re: [openstack-dev] [Openstack] Boot instance from volume via Horizon dashboard fails

2016-03-21 Thread Matt Riedemann



On 3/21/2016 10:31 AM, Mike Perez wrote:

On 16:01 Mar 21, Eugen Block wrote:

Hi all,

I'm just a (new) OpenStack user, not a developer, but I have a question
regarding the Horizon dashboard, specifically launching instances via
the dashboard.


Hi Eugen!

Welcome to the community! This mailing list is development focused and not our
support channel. You can request help at our general mailing list [1], or Ask
OpenStack [2].

[1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[2] - https://ask.openstack.org/en/questions/



Having said that, there is probably a bug in Horizon since it's 
defaulting to vda for the device name when booting from volume.


The libvirt driver in nova has ignored the requested device name in boot 
from volume / volume attach requests since Liberty [1]. It's best to let 
the virt driver in nova pick the device name; you can get the mountpoint 
via the volume attachment later, after the volume's status is 'in-use'.


[1] https://review.openstack.org/#/c/189632/
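
For illustration, here is a minimal sketch (not from the review above; an
authenticated python-cinderclient Client and error handling are assumed)
of reading the mountpoint back once the volume is 'in-use':

---cut here---
# Hedged sketch: poll a cinder volume after an attach request and return
# the device name the virt driver actually chose. Assumes an already
# authenticated python-cinderclient Client passed in as `cinder`.
import time

def wait_for_mountpoint(cinder, volume_id, timeout=60):
    """Return e.g. '/dev/xvda' (Xen) or '/dev/vda' (KVM) once attached."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        volume = cinder.volumes.get(volume_id)
        if volume.status == 'in-use' and volume.attachments:
            return volume.attachments[0]['device']
        time.sleep(2)
    raise RuntimeError('volume %s never reached in-use' % volume_id)
---cut here---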

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Prefetching user and user_roles resources with domain-specific conf is failing.

2016-03-21 Thread Denis Egorenko
>
> The domain, at this stage won't be a problem to be concerned with, we
> will have its value.  The only thing to add, I think, would be some kind
> of caching for keystone_user_role call to avoid repetition.  This
> shouldn't be hard to implement though.


Well yes, we can create a method in the parent keystone.rb module and a
global variable for keeping (caching) the current roles, like it is
currently done for domains:

https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone.rb#L106-L130

Are our solutions the same?
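
For illustration only, the caching pattern under discussion boils down to
something like this (Python rather than the provider's Ruby; names are
hypothetical):

---cut here---
# Illustrative sketch of class-level caching, mirroring what the domain
# cache linked above does: fetch the expensive role list once and reuse
# it across every exists? check.
class RoleCache(object):
    _roles = None  # shared across all instances, like a provider @@var

    @classmethod
    def all_roles(cls, fetch):
        """Call fetch() (the expensive OpenStack query) only once."""
        if cls._roles is None:
            cls._roles = fetch()
        return cls._roles

# usage (hypothetical): RoleCache.all_roles(lambda: client.list_user_roles())
---cut here---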

I've added this as a meeting point for tomorrow, so that we can take a
> final decision on this and start coding:
>  -
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160322


Yep, that has to be discussed.

2016-03-21 18:09 GMT+03:00 Sofer Athlan-Guyot :

> Denis Egorenko  writes:
>
> > Hi Athlan,
> >
> > thanks for the attention to this problem. We have one more related change
> > [1] and bug [2]
> > for this problem, when the option 'domain_specific_drivers' is used.
> >
> > I would like to vote for 3) case.
> >
> > As I see it there are three ways to approach this:
> > - iterate over all domains and keep the same behavior as now;
> > - detect somehow that the domain-specific configuration is used
> > and
> > hack, both instances methods to add domain options
> > - remove prefetch from keystone_user and keystone_user_role (kinda
> > get
> > my preference, see below)
>
> We agree then :)
>
> > Let me explain why.
> > Using the prefetch and instances methods has a couple of problems: we
> > can't pass values to them and can't set proper options (dynamically, of
> > course).
> > Coming back to this problem, it means that we can't specify a domain and
> > hence can only iterate over all domains or check users for the default
> > domain. Both ways are not acceptable to me.
> >
> > As a solution to this problem, I see calling a kind of instances method
> > from the exists? method.
> > At this stage we can use all the parameters which are passed to the
> > keystone_user{_role} providers, and we can choose the proper domain if
> > one is specified. If not, the default domain will be used.
>
> The domain, at this stage won't be a problem to be concerned with, we
> will have its value.  The only thing to add, I think, would be some kind
> of caching for keystone_user_role call to avoid repetition.  This
> shouldn't be hard to implement though.
>
> I've added this as a meeting point for tomorrow, so that we can take a
> final decision on this and start coding:
>  -
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160322
>
> >
> > [1] https://review.openstack.org/213906
> > [2] https://bugs.launchpad.net/puppet-keystone/+bug/1485508
> >
> > 2016-03-21 16:34 GMT+03:00 Sofer Athlan-Guyot :
> >
> > Hi,
> >
> > we have a big problem when using domain-specific configuration.
> > The
> > listing of all users is not supported by keystone when it's used
> > [1][2].
> >
> > What this means is that prefetch method in keystone_user won't
> > work, or
> > more specifically, instances method will fail.
> >
> > This poses a problem for the keystone_user_role, as the user
> > instances
> > method is called there too.
> >
> > The missing bit when domain-specific configuration is used is that the
> > operator must specify the domain on the command line.
> >
> > As I see it there are three ways to approach this:
> >
> > - iterate over all domains and keep the same behavior as now;
> > - detect somehow that the domain-specific configuration is used
> > and
> > hack, both instances methods to add domain options
> > - remove prefetch from keystone_user and keystone_user_role (kinda
> > get
> > my preference, see below)
> >
> > The problem I see with the first two methods depends on the usual
> > use
> > case of the domain specific configuration.
> >
> > From what I understand, this would mainly be used to connect to existing
> > LDAP servers, certainly large AD. If that's the case then we will have
> > the same problem that the keystone people have seen, i.e. a very big
> > list of people, most of them unrelated to what is happening. We will
> > then have the risk that:
> > - keystone fails;
> > - the puppet process would be slowed down significantly;
> >
> > So listing all users in this case seems like a very bad idea. As I
> > don't see a way to disable prefetching dynamically when domain-specific
> > configuration is used (maybe this has to be dug into?), I tend to favor
> > removing this from keystone_user and keystone_user_role.
> > Keystone_user_role is the main problem here, as it requires a lot of
> > calls to be built, and prefetching helps here.
> >
> > So I don't see a 

Re: [openstack-dev] Visa Invitation Letter doesn't contain attachment

2016-03-21 Thread Anita Kuno
On 03/20/2016 10:46 PM, Zhi Chang wrote:
> hi, guys.
>  
> I received the Visa Invitation Letter a few days ago, but the letter doesn't
> contain the attachment. What should I do?
> 
> 
> 
> 
> Thanks
> Zhi Chang
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Please contact the foundation directly; posting to the openstack-dev
mailing list won't help you get your visa sorted out.

Please email whoever you contacted to request the visa invitation letter.

I'm going to cc eve...@openstack.org on this reply in the chance that
whoever reads that email inbox can help you.

Please follow up with the foundation, not the -dev mailing list.

Thank you, hope to see you at summit,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Boot instance from volume via Horizon dashboard fails

2016-03-21 Thread Mike Perez
On 16:01 Mar 21, Eugen Block wrote:
> Hi all,
> 
> I'm just a (new) OpenStack user, not a developer, but I have a question
> regarding the Horizon dashboard, specifically launching instances via
> the dashboard.

Hi Eugen!

Welcome to the community! This mailing list is development focused and not our
support channel. You can request help at our general mailing list [1], or Ask
OpenStack [2].

[1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[2] - https://ask.openstack.org/en/questions/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Doug Wiegley

> On Mar 21, 2016, at 5:40 AM, Ihar Hrachyshka  wrote:
> 
> Sean M. Collins  wrote:
> 
>> Rossella Sblendido wrote:
>>> 2) multi-node jobs run for every patch set. Is that really what we want?
>>> They take pretty long. We could move them to a periodic job.
>> 
>> I would rather remove all the single-node jobs. Nova has been moving to
>> multinode jobs for their gate (if I recall correctly my
>> conversation with Dan Smith) and we should be moving in this direction
>> too. We should test Neutron the way it is deployed in production.
>> 
>> Also, who is really monitoring the periodic jobs? Truthfully? I know
>> there are some IPv6 jobs that are periodic and I'll be the first to
>> admit that I am not following them *at all*.
> 
> Well, stable maintainers track their periodic job failures. :) Email 
> notifications when something starts failing help.
> 
>> 
>> So, my thinking is, unless it's running at the gate and inflicting pain
>> on people, it's not going to be a treated as a priority. Look at Linux
>> Bridge - serious race conditions that existed for years only
>> got fixed once I inflicted pain on all the Neutron devs by making it
>> voting and running on every patchset (sorry, not sorry).
> 
> I think there is still common ground between you and Rossella’s stances: the 
> fact that we want to inflict gating pain does not mean that we want to 
> execute every single job on each PS uploaded to gerrit. For some advanced and 
> non-obvious checks [like partial grenade] the validation could be probably 
> postponed till the patch hits the gate.
> 
> Yes, sometimes it will mean gate being reset due to the bad patch. This can 
> be avoided in most of cases if reviewers and the author for a patch that 
> potentially touches a specific scenario execute the jobs before hitting the 
> gate with the patch [for example, if the job is in experimental set, it’s a 
> matter of ‘check experimental’ before pressing W+1].

We have been pretty consciously moving neutron jobs to cause pain to *neutron* 
and not everyone else, which is the opposite of a “gate only” plan. Aside from 
that being against infra policy, I think I’m reading between the lines that 
folks are wanting faster iterations between patchsets. I note that the standard 
-full job is up to 55-65 minutes, from its old time of 40-45. Have we 
characterized why that’s so much slower now? Perhaps addressing that will bring 
down the turn-around for all.

Thanks,
doug


> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-21 Thread Serg Melikyan
Akihiro,

thank you for your effort with translation, this milestone sounds fine for
murano-dashboard.

On Sat, Mar 19, 2016 at 10:43 AM, Akihiro Motoki  wrote:

> Hi dashboard plugins team
> (sahara-dashboard, trove-dashboard, magnum-ui, murano-dashboard)
>
> As Horizon i18n liaison, I would like to have a consensus on a rough
> schedule
> of translation import for Horizon plugins.
> Several plugins and horizon itself already released RC1.
>
> For Horizon translation, we use the following milestone:
>   Mitaka-3 : Soft string freeze
>   Mitaka-RC1: Hard string freeze
>   Mitaka-RC2: Final translation import
>
> Does this milestone sound good for sahara/trove/magnum/murano dashboard
> plugins?
> This means that each dashboard plugin project needs to release RC2 (or
> some RC)
> even only for translation import. Otherwise, translator efforts after
> hard string freeze
> will not be included in Mitaka release.
>
> If the above idea sounds good, I hope RC2 (or later RC) will be released
> on early of the week Mar 28 for translation import.
> This schedule allows translators to work on translations after "Hard
> string freeze".
>
> Mitaka is the first release for Horizon plugins with translations,
> so I hope this mail helps everyone around translations.
>
> Best Regards,
> Akihiro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Prefetching user and user_roles resources with domain-specific conf is failing.

2016-03-21 Thread Sofer Athlan-Guyot
Denis Egorenko  writes:

> Hi Athlan,
>
> thanks for the attention to this problem. We have one more related change
> [1] and bug [2]
> for this problem, when the option 'domain_specific_drivers' is used.
>
> I would like to vote for 3) case.
>
> As I see it there are three ways to approach this:
> - iterate over all domains and keep the same behavior as now;
> - detect somehow that the domain-specific configuration is used
> and
> hack, both instances methods to add domain options
> - remove prefetch from keystone_user and keystone_user_role (kinda
> get
> my preference, see below)

We agree then :)

> Let me explain why.
> Using the prefetch and instances methods has a couple of problems: we
> can't pass values to them and can't set proper options (dynamically, of
> course).
> Coming back to this problem, it means that we can't specify a domain and
> hence can only iterate over all domains or check users for the default
> domain. Both ways are not acceptable to me.
>
> As a solution to this problem, I see calling a kind of instances method
> from the exists? method.
> At this stage we can use all the parameters which are passed to the
> keystone_user{_role} providers, and we can choose the proper domain if
> one is specified. If not, the default domain will be used.

The domain, at this stage won't be a problem to be concerned with, we
will have its value.  The only thing to add, I think, would be some kind
of caching for keystone_user_role call to avoid repetition.  This
shouldn't be hard to implement though.

I've added this as a meeting point for tomorrow, so that we can take a
final decision on this and start coding:
 - https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160322

>
> [1] https://review.openstack.org/213906
> [2] https://bugs.launchpad.net/puppet-keystone/+bug/1485508
>
> 2016-03-21 16:34 GMT+03:00 Sofer Athlan-Guyot :
>
> Hi,
> 
> we have a big problem when using domain-specific configuration.
> The
> listing of all users is not supported by keystone when it's used
> [1][2].
> 
> What this means is that prefetch method in keystone_user won't
> work, or
> more specifically, instances method will fail.
> 
> This poses a problem for the keystone_user_role, as the user
> instances
> method is called there too.
> 
> The missing bit when domain-specific configuration is used is that the
> operator must specify the domain on the command line.
> 
> As I see it there are three ways to approach this:
> 
> - iterate over all domains and keep the same behavior as now;
> - detect somehow that the domain-specific configuration is used
> and
> hack, both instances methods to add domain options
> - remove prefetch from keystone_user and keystone_user_role (kinda
> get
> my preference, see below)
> 
> The problem I see with the first two methods depends on the usual
> use
> case of the domain specific configuration.
> 
> From what I understand, this would mainly be used to connect to existing
> LDAP servers, certainly large AD. If that's the case then we will have
> the same problem that the keystone people have seen, i.e. a very big
> list of people, most of them unrelated to what is happening. We will
> then have the risk that:
> - keystone fails;
> - the puppet process would be slowed down significantly;
> 
> So listing all users in this case seems like a very bad idea. As I
> don't see a way to disable prefetching dynamically when domain-specific
> configuration is used (maybe this has to be dug into?), I tend to favor
> removing this from keystone_user and keystone_user_role.
> Keystone_user_role is the main problem here, as it requires a lot of
> calls to be built, and prefetching helps here.
> 
> I don't see an obviously best solution to this problem, so I'd like to
> have more input on the right course of action.
> 
> Note: It was first noticed by Matthew J Black, who opened this bug
> report [3] and started to work on a fix here [4].
> 
> [1]
> 
> https://github.com/openstack/keystone/blob/master/doc/source/configuration.rst
>(look for domain-specific)
> [2] https://bugs.launchpad.net/keystone/+bug/1555629
> [3] https://bugs.launchpad.net/puppet-keystone/+bug/1554555
> [4] https://review.openstack.org/#/c/289995/
> --
> Sofer Athlan-Guyot
> 
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Sofer Athlan-Guyot


[openstack-dev] [Openstack] Boot instance from volume via Horizon dashboard fails

2016-03-21 Thread Eugen Block

Hi all,

I'm just a (new) OpenStack user, not a developer, but I have a
question regarding the Horizon dashboard, specifically launching
instances via the dashboard.


I have a Liberty deployment running on 1 controller and 2 compute nodes.
I also deployed an external cinder-volume as a storage backend for my  
instances. It works fine if you use the nova boot command (nova boot  
--block-device[...]) to launch instances or if you use kvm as  
hypervisor. But if you use xen (as I do) and you want to launch the  
instance via Horizon dashboard, you get an invalid block device  
mapping, because nova tries to attach /dev/vda as root device instead  
of /dev/xvda:


nova-compute.log:

[instance: 09f96335-4f3b-4b3e-a7b3-ef384144b00b] Booting with blank  
volume at /dev/vda


libxl-driver.log:

2016-03-21 14:01:07 CET libxl: error: libxl.c:2733:device_disk_add:  
Invalid or unsupported virtual disk identifier vda
2016-03-21 14:01:07 CET libxl: error:  
libxl_create.c:1167:domcreate_launch_dm: unable to add disk devices


As I already said, this error doesn't occur if you use the CLI.
So I tried to figure out what the difference between nova boot and
Horizon is.
On the controller node there is the file
/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
which contains an assignment for the (by default) invisible field
"can_set_mount_point" to the variable "device_name"; here is a snippet
from the class SetInstanceDetailsAction():


---cut here---
    device_name = forms.CharField(label=_("Device Name"),
                                  required=False,
                                  initial="vda",
                                  help_text=_("Volume mount point "
                                              "(e.g. 'vda' mounts at "
                                              "'/dev/vda'). Leave "
                                              "this field blank to let the "
                                              "system choose a device name "
                                              "for you."))
---cut here---

Now if I make that field visible and change its value to xvda,
everything works fine and the VM is created successfully. So I made a
minor change to see if it's more suitable for my environment: I
changed the initial value of "device_name":


---cut here---
    device_name = forms.CharField(label=_("Device Name"),
                                  required=False,
                                  initial="",
---cut here---

and removed the assignment "{'device_name': device_name}" from the  
array "dev_mapping_2". Instead, I appended an if-statement to add the  
device_name only if it's not empty:


---cut here---
        elif source_type == 'volume_image_id':
            device_name = context.get('device_name', '').strip() or None
            dev_mapping_2 = [
                {'source_type': 'image',
                 'destination_type': 'volume',
                 'delete_on_termination':
                     bool(context['delete_on_terminate']),
                 'uuid': context['source_id'],
                 'boot_index': '0',
                 'volume_size': context['volume_size']
                 }
            ]
            if device_name:
                dev_mapping_2.append({'device_name': device_name})
---cut here---

This seems to work (for me), but I'm quite new to OpenStack, so what
would your professional opinion be on this subject?
Of course, that if-statement would also be necessary if the
"source_type" is "volume_snapshot_id", at least if this is the
way to go.
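
For illustration, a tiny helper could avoid duplicating that guard in
both branches (hypothetical, not existing Horizon code):

---cut here---
# Hypothetical helper mirroring the change above, reusable for both the
# 'volume_image_id' and 'volume_snapshot_id' branches:
def append_device_name(dev_mapping_2, device_name):
    """Add the device name to the mapping only if the user supplied one."""
    if device_name:
        dev_mapping_2.append({'device_name': device_name})
    return dev_mapping_2
---cut here---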


I'm looking forward to your answer!

Regards,
Eugen




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] A big thanks to the Horizon team for their support (Django 1.9 support needed for Debian)

2016-03-21 Thread Thomas Goirand
Hi there!

The Horizon team really went above and beyond to help add
support for Django 1.9, which was required for the upload to Debian.
Apart from Debian, no other distro needed the support just yet, but they
made it happen on time, just after the RC1 (which is fine, I'm adding
the patch as distro-specific until it's merged).

So, thanks a lot to robcresswell, itxaka and tsufiev who all worked on
this tricky issue!

Cheers,

Thomas Goirand (zigo)

P.S.: Over the next few days, I'll work on the Horizon plugins.
Hopefully, there won't be too many Django 1.9 issues there.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-21 Thread Steven Hardy
On Mon, Mar 21, 2016 at 10:19:47AM -0400, Emilien Macchi wrote:
> On Mon, Mar 21, 2016 at 9:59 AM, Steven Hardy  wrote:
> > On Mon, Mar 21, 2016 at 09:41:42AM -0400, Emilien Macchi wrote:
> >> On Mon, Mar 21, 2016 at 6:57 AM, Steven Hardy  wrote:
> >> > On Fri, Mar 18, 2016 at 01:27:33PM +, arkady_kanev...@dell.com wrote:
> >> >>Emilien,
> >> >>
> >> >>Agree on the rant. But not clear on concrete proposal to fix it.
> >> >>
>> >>Spend more time “fixing” CI and use Tempest as a gate is a bit vague.
> >> >>
> >> >>Unless we test known working version of each project in TripleO CI 
> >> >> you are
> >> >>dependent on health of other components.
> >> >
> >> > I've so far resisted replying to this thread, because while valid, many 
> >> > of
> >> > the concerns expressed by Emilien are quite general complaints, and it's
> >> > hard to reply with specific solutions.
> >> >
> >> > However work *is* going on to improve many of these problems, let's see 
> >> > if
> >> > I can provide a summary, to clarify the various "concrete proposals" 
> >> > which
> >> > do exist.
> >> >
> >> > 1. Core team & review velocity
> >> >
> >> > We've had a small and very overloaded core team for a while now, and this
> >> > will be helped by expanding our community to include those who've been
> >> > regularly contributing excellent work and reviews as core reviewers:
> >> >
> >> > http://lists.openstack.org/pipermail/openstack-dev/2016-February/087774.html
> >> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089235.html
> >> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089912.html
> >> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089913.html
> >> >
> >> > Note that I personally think it's absolutely fine for folks to be more
> >> > expert in some subsystem and to focus review extra attention on e.g API,
> >> > UI, Puppet or whatever.  This subsystem-core model has been well proven 
> >> > in
> >> > other projects, and folks will naturally broaden their areas of deeper
> >> > knowledge over time.
> >> >
> >> > Related to this is movement of code, such as the puppet-tripleo 
> >> > refactoring
> >> > mentioned by Michael - this has already started, and will help with
> >> > providing a cleaner interface between the puppet and heat pieces (which
> >> > will also help focus reviewer attention appropriately).
> >>
> >> Indeed, Michael, Dan & I are working on moving out the Puppet code
> >> from THT to puppet-tripleo.
> >> That's a nice move, and I appreciate TripleO team support on it.
> >>
> >> > 2. Day 1 developer experience
> >> >
> >> > This is closely related to the CI failure rate - there are efforts to
> >> > integrate with the RDO tripleo-quickstart tooling, which simplifies the
> >> > initial undercloud setup, and potentially makes consuming pre-built,
> >> > validated undercloud images (probably output artefacts from our new
> >> > periodic CI job) much easier.
> >> >
> >> > So, this will mean that both developers and CI can potentially be less
> >> > regularly impacted by trunk regressions which often cause CI to fail, and
> >> > break developer environments.
> >> >
> >> > https://review.openstack.org/#/c/276810/5
> >> >
> >> > 3. CI coverage and trunk failure rate
> >> >
> >> > We've been working really hard to improve things here, which are really
> >> > several inter-related issues:
> >> >
> >> > - Lack of Hardware capacity in the tripleo CI cloud
> >> > - Frequent trunk regressions breaking our CI
> >> > - Lack of coverage of some key features (network isolation, SSL, IPv6, 
> >> > upgrades)
> >> > - Lack of coverage for vendor plugin templates/puppet code
> >> >
> >> > There's work ongoing to improve this from multiple perspectives:
> >> >
> >> > New periodic CI job (to be used for automated promotion of the
> >> > current-tripleo repo, and for pre-built undercloud images):
> >> > https://review.openstack.org/#/c/271370/
> >> >
> >> > Add network isolation support to CI:
> >> > https://review.openstack.org/#/c/288163/
> >> >
> >> > Test SSL enabled in overcloud:
> >> > https://review.openstack.org/#/c/281988/
> >> >
> >> > CI coverage of IPv6:
> >> > https://review.openstack.org/#/c/289445/
> >> >
> >> > Discussion around better documented integration for third-party CI:
> >> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/088972.html
> >>
> >> Do we have plans to execute Tempest?
> >
> > This is something which has been discussed several times, but right now we
> > don't have the time available to run it per-commit because we'll hit the
> > job timeout.
> >
> > This situation will improve as we gain time e.g through use of cached
> > pre-built images, but right now I think we could look at enabling it only
> > on the periodic job when that is fully proven.
> >
> > Having said that, I should point out that tempest doesn't get us great
> > coverage of some newer projects - e.g all Heat 

Re: [openstack-dev] [oslo][all] What would you like changed/fixed/new in oslo??

2016-03-21 Thread gordon chung


On 20/03/2016 5:58 PM, Joshua Harlow wrote:
> On 03/20/2016 10:00 AM, Adam Young wrote:
>> I started with a blog post here:
>>
>> http://adam.younglogic.com/2016/03/what-can-talk-to-what-on-the-openstack-message-broker/
>>
>>
>>
>> and did a brief spike here:
>>
>> http://adam.younglogic.com/2016/03/tie-your-rabbit-down/
>>
>> We made the mistake of pursuing HMAC back several releases ago.  It lead
>> to Kite.  We don't need that yet.
>
> Nice I like the big table @
> http://adam.younglogic.com/2016/03/what-can-talk-to-what-on-the-openstack-message-broker/
>
>
> As for HMAC several years/releases ago, what was the issue (just
> wondering)? Just to much load on controller nodes to do verification?
> Not enough adoption, something else...?
>

we have HMAC signing in Ceilometer [1] when we pass messages between 
the different services. i added support a long while back for 
disabling signing because it does add quite a bit of overhead to the 
whole process. unfortunately the bug description i wrote was terrible [2] 
so i don't have any numbers (though it should be easy enough to figure 
out). i don't believe it adds a lot of CPU load (not that i recall) but 
it does add quite a bit of latency (10s of ms) to the whole process, so 
it will affect scenarios where you are dealing with large amounts of 
messages or 'real-time' stories.
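
for reference, the signing itself is just a standard HMAC over the
serialized message, roughly like this minimal sketch (illustrative only;
see [1] for ceilometer's real code):

---cut here---
# Minimal HMAC signing sketch using only the stdlib; the shared secret
# is assumed to come from configuration.
import hashlib
import hmac

SECRET = b'metering_secret'

def sign(payload_bytes):
    """Hex HMAC-SHA256 signature of the serialized payload."""
    return hmac.new(SECRET, payload_bytes, hashlib.sha256).hexdigest()

def verify(payload_bytes, signature):
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign(payload_bytes), signature)
---cut here---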

i tend to agree with ayoung that ideally we should leverage 
authentication capabilities before considering the crypto scenario. 
Kafka itself started to implement security in the latest release and 
from what i can tell, there's a lot of disclaimers that you will 
experience serious performance degradation if you enable it[3].


[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/publisher/utils.py#L43
[2] https://bugs.launchpad.net/ceilometer/+bug/1436077
[3] 
https://blog.cloudera.com/blog/2016/02/whats-new-in-clouderas-distribution-of-apache-kafka/

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Miguel Angel Ajo Pelayo
On Mon, Mar 21, 2016 at 3:17 PM, Jay Pipes  wrote:
> On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi,
>>
>> I was doing another pass on this spec, to see if we could leverage
>> it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
>> have a question [1]
>>
>> I guess I'm just missing some detail, but looking at the 2nd scenario,
>> why wouldn't availability zones allow the same exactly if we used one
>> availability zone per subnet?
>>
>>What's the advantage of modelling it via a generic resource pool?
>
>
> Hi Miguel,
>
> On the (Nova) scheduler side, we don't actually care whether Neutron uses
> availability zone or subnet pool to model the boundaries of a pool of some
> resource. The generic-resource-pools functionality being added to Nova (as
> the new placement API meant to become the split-out scheduler RESTful API)
> just sees a resource provider UUID and an inventory of some type of
> resource.

That means that we could also match a pool by the requirements of the resources
bound to the instance we're trying to deploy (e.g. disk space (GB), bandwidth
(NIC_KB)).

>
> In the case of Neutron QoS, the first thing to determine would be what is
> the resource type exactly? The resource type must be able to be represented
> with an integer amount of something. For QoS, I *think* the resource type
> would be "NIC_BANDWIDTH_KB" or something like that. Is that correct?

The resource could be NIC_BANDWIDTH_KB, yes. In a simplified case
we could care about just tenant network connectivity, but we can also
have provider networks bound to this, and they would be separate counts.

>This
> would represent the amount of total network bandwidth that a workload can
> consume on a particular compute node. Is that statement correct?

This would represent the amount of total network bandwidth a port could
consume (and by consume I mean: asking for a "min" bandwidth guarantee).

>
> Now, the second thing that would need to be determined is what resource
> boundary this resource type would have. I *think* it is the amount of
> bandwidth consumed on a set of compute nodes? Like, amount of bandwidth
> consumed within a rack?

No, what we're trying to model first is the maximum bandwidth available
on a compute node [+physnet combination].

(Please note this is coming from NFV / telco requirements.)
When they schedule VNFs, they want to be 100% sure the throughput a VNF
can provide is exactly what they asked for, and not less (because, for example,
you had 10Gb throughput on a NIC, but you scheduled 3 VNFs supposed
to push 5Gb each).
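
To make that concrete, the check we need per compute node is a plain
capacity test against its NIC inventory, roughly (hypothetical sketch,
not the actual placement API):

---cut here---
# Hypothetical sketch: a compute node [+physnet] advertises a
# NIC_BANDWIDTH_KB inventory; a port asking for a "min" guarantee only
# fits if enough bandwidth remains unreserved.
inventory = {('node-1', 'physnet0'): 10 * 1000 * 1000}      # 10 Gb NIC, in Kb
allocations = {('node-1', 'physnet0'): [5 * 1000 * 1000]}   # one 5 Gb VNF

def fits(node_physnet, requested_kb):
    """True if the min-bandwidth guarantee can still be honored."""
    used = sum(allocations.get(node_physnet, []))
    return used + requested_kb <= inventory[node_physnet]

print(fits(('node-1', 'physnet0'), 5 * 1000 * 1000))  # True: exactly fits
print(fits(('node-1', 'physnet0'), 6 * 1000 * 1000))  # False: oversubscribed
---cut here---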



> Or some similar segmentation of a network, like an
> aggregate, which is a generic grouping of compute nodes. If so, then the
> bandwidth resource would be considered a *shared* resource, shared among the
> compute nodes in the aggregate. And if this is the case, then
> generic-resource-pools are intended for *exactly* this type of scenario.

We could certainly use generic resource pools to model rack switches and their
bandwidth capabilities, but that would not address the paragraph above; they are
two independent levels of verification.



__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [shotgun] New shotgun2 command: short-report

2016-03-21 Thread Alex Schultz
On Mon, Mar 21, 2016 at 7:21 AM, Volodymyr Shypyguzov <
vshypygu...@mirantis.com> wrote:

> Hi, all
>
> Just wanted to inform you, that shotgun2 now has new command short-report,
> which allows you to receive shorter and cleaner output for attaching to bug
> description, sharing, etc.
>
> Usage: shotgun2 short-report
> Example output: http://paste.openstack.org/show/491256/
>
>
How will we be able to find those specific packages and how will we be able
to correlate them with the equivalent commit in the git repository?

Thanks,
-Alex


> Regards,
> Volodymyr
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose trown for core

2016-03-21 Thread Juan Antonio Osorio
On Sun, Mar 20, 2016 at 8:32 PM, Dan Prince  wrote:

> I'd like to propose that we add John Trowbridge to the TripleO core
> review team. John has become one of the goto guys in helping to chase
> down upstream trunk-chasing issues. He has contributed a lot to helping
> keep general CI running and has been involved with several new
> features over the past year around node introspection, etc. His
> involvement with the RDO team also gives him a healthy perspective
> about sane releasing practices, etc.
>
> John doesn't have the highest TripleO review stats ATM but I expect his
> stats to continue to climb. Especially with his work on upcoming
> improvements like tripleo-quickstart, etc. Having John on board the
> core team would help drive these projects and it would also be great to
> have him able to land fixes related to trunk chasing, etc. I expect
> he'll gradually jump into helping with other TripleO projects as well.
>
> If you agree please +1. If there is no negative feedback I'll add him
> next Monday.
>
> Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+1

-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose trown for core

2016-03-21 Thread Jay Dobies



On 03/20/2016 02:32 PM, Dan Prince wrote:

I'd like to propose that we add John Trowbridge to the TripleO core
review team. John has become one of the goto guys in helping to chase
down upstream trunk-chasing issues. He has contributed a lot to helping
keep general CI running and has been involved with several new
features over the past year around node introspection, etc. His
involvement with the RDO team also gives him a healthy perspective
about sane releasing practices, etc.

John doesn't have the highest TripleO review stats ATM but I expect his
stats to continue to climb. Especially with his work on upcoming
improvements like tripleo-quickstart, etc. Having John on board the
core team would help drive these projects and it would also be great to
have him able to land fixes related to trunk chasing, etc. I expect
he'll gradually jump into helping with other TripleO projects as well.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.

Dan


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose EmilienM for core

2016-03-21 Thread Jay Dobies



On 03/20/2016 02:22 PM, Dan Prince wrote:

I'd like to propose that we add Emilien Macchi to the TripleO core
review team. Emilien has been getting more involved with TripleO during
this last release. In addition to helping with various Puppet things, he
also has experience in building OpenStack installation tooling and
upgrades, and would bring a valuable perspective to the core team. He has
also added several new features around monitoring into instack-
undercloud.

Emilien is currently acting as the Puppet PTL. Adding him to the
TripleO core review team could help us move faster towards some of the
upcoming features like composable services, etc.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.

Dan


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #75

2016-03-21 Thread Emilien Macchi
Hi,

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics to this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160322

We'll start discussing the Summit, so feel free to join if you
have topic proposals.

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-21 Thread Emilien Macchi
On Mon, Mar 21, 2016 at 9:59 AM, Steven Hardy  wrote:
> On Mon, Mar 21, 2016 at 09:41:42AM -0400, Emilien Macchi wrote:
>> On Mon, Mar 21, 2016 at 6:57 AM, Steven Hardy  wrote:
>> > On Fri, Mar 18, 2016 at 01:27:33PM +, arkady_kanev...@dell.com wrote:
>> >>Emilien,
>> >>
>> >>Agree on the rant. But not clear on concrete proposal to fix it.
>> >>
>> >>Spend more time “fixing” CI and use Tempest as a gate is a bit vague.
>> >>
>> >>Unless we test known working version of each project in TripleO CI you 
>> >> are
>> >>dependent on health of other components.
>> >
>> > I've so far resisted replying to this thread, because while valid, many of
>> > the concerns expressed by Emilien are quite general complaints, and it's
>> > hard to reply with specific solutions.
>> >
>> > However work *is* going on to improve many of these problems, let's see if
>> > I can provide a summary, to clarify the various "concrete proposals" which
>> > do exist.
>> >
>> > 1. Core team & review velocity
>> >
>> > We've had a small and very overloaded core team for a while now, and this
>> > will be helped by expanding our community to include those who've been
>> > regularly contributing excellent work and reviews as core reviewers:
>> >
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-February/087774.html
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089235.html
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089912.html
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/089913.html
>> >
>> > Note that I personally think it's absolutely fine for folks to be more
>> > expert in some subsystem and to focus review extra attention on e.g API,
>> > UI, Puppet or whatever.  This subsystem-core model has been well proven in
>> > other projects, and folks will naturally broaden their areas of deeper
>> > knowledge over time.
>> >
>> > Related to this is movement of code, such as the puppet-tripleo refactoring
>> > mentioned by Michael - this has already started, and will help with
>> > providing a cleaner interface between the puppet and heat pieces (which
>> > will also help focus reviewer attention appropriately).
>>
>> Indeed, Michael, Dan & I are working on moving out the Puppet code
>> from THT to puppet-tripleo.
>> That's a nice move, and I appreciate TripleO team support on it.
>>
>> > 2. Day 1 developer experience
>> >
>> > This is closely related to the CI failure rate - there are efforts to
>> > integrate with the RDO tripleo-quickstart tooling, which simplifies the
>> > initial undercloud setup, and potentially makes consuming pre-built,
>> > validated undercloud images (probably output artefacts from our new
>> > periodic CI job) much easier.
>> >
>> > So, this will mean that both developers and CI can potentially be less
>> > regularly impacted by trunk regressions which often cause CI to fail, and
>> > break developer environments.
>> >
>> > https://review.openstack.org/#/c/276810/5
>> >
>> > 3. CI coverage and trunk failure rate
>> >
>> > We've been working really hard to improve things here, which are really
>> > several inter-related issues:
>> >
>> > - Lack of Hardware capacity in the tripleo CI cloud
>> > - Frequent trunk regressions breaking our CI
>> > - Lack of coverage of some key features (network isolation, SSL, IPv6, 
>> > upgrades)
>> > - Lack of coverage for vendor plugin templates/puppet code
>> >
>> > There's work ongoing to improve this from multiple perspectives:
>> >
>> > New periodic CI job (to be used for automated promotion of the
>> > current-tripleo repo, and for pre-built undercloud images):
>> > https://review.openstack.org/#/c/271370/
>> >
>> > Add network isolation support to CI:
>> > https://review.openstack.org/#/c/288163/
>> >
>> > Test SSL enabled in overcloud:
>> > https://review.openstack.org/#/c/281988/
>> >
>> > CI coverage of IPv6:
>> > https://review.openstack.org/#/c/289445/
>> >
>> > Discussion around better documented integration for third-party CI:
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-March/088972.html
>>
>> Do we have plans to execute Tempest?
>
> This is something which has been discussed several times, but right now we
> don't have the time available to run it per-commit because we'll hit the
> job timeout.
>
> This situation will improve as we gain time e.g through use of cached
> pre-built images, but right now I think we could look at enabling it only
> on the periodic job when that is fully proven.
>
> Having said that, I should point out that tempest doesn't get us great
> coverage of some newer projects - e.g all Heat scenario coverage was moved
> out of the tempest tree, and other projects have done similar AFAIK, so we
> may end up with very sparse API surface tests (or nothing at all) in these 
> cases.

In Puppet OpenStack CI, we execute smoke tests (a few tests of each
service, and some scenarios), and some tests not in 

Re: [openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Jay Pipes

On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:

Hi,

I was doing another pass on this spec, to see if we could leverage
it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
have a question [1]

I guess I'm just missing some detail, but looking at the 2nd scenario,
why wouldn't availability zones allow the same exactly if we used one
availability zone per subnet?

   What's the advantage of modelling it via a generic resource pool?


Hi Miguel,

On the (Nova) scheduler side, we don't actually care whether Neutron 
uses availability zone or subnet pool to model the boundaries of a pool 
of some resource. The generic-resource-pools functionality being added 
to Nova (as the new placement API meant to become the split-out 
scheduler RESTful API) just sees a resource provider UUID and an 
inventory of some type of resource.


In the case of Neutron QoS, the first thing to determine would be what 
is the resource type exactly? The resource type must be able to be 
represented with an integer amount of something. For QoS, I *think* the 
resource type would be "NIC_BANDWIDTH_KB" or something like that. Is 
that correct? This would represent the amount of total network bandwidth 
that a workload can consume on a particular compute node. Is that 
statement correct?


Now, the second thing that would need to be determined is what resource 
boundary this resource type would have. I *think* it is the amount of 
bandwidth consumed on a set of compute nodes? Like, amount of bandwidth 
consumed within a rack? Or some similar segmentation of a network, like 
an aggregate, which is a generic grouping of compute nodes. If so, then 
the bandwidth resource would be considered a *shared* resource, shared 
among the compute nodes in the aggregate. And if this is the case, then 
generic-resource-pools are intended for *exactly* this type of scenario.
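
For illustration, the shared case might be modeled along these lines
(hypothetical sketch, not the actual placement data model):

---cut here---
# Hypothetical sketch of a *shared* resource pool: one inventory (e.g. a
# rack uplink) consumed by workloads on any compute node in the aggregate.
import uuid

pool = {
    'uuid': str(uuid.uuid4()),           # resource provider UUID
    'resource_class': 'NIC_BANDWIDTH_KB',
    'total': 40 * 1000 * 1000,           # 40 Gb uplink, in Kb
    'aggregate': ['compute-1', 'compute-2', 'compute-3'],
}
allocations = []  # (instance_uuid, amount_kb) pairs; the node doesn't matter

def can_allocate(amount_kb):
    """Every node in the aggregate draws from the same shared inventory."""
    used = sum(amount for _, amount in allocations)
    return used + amount_kb <= pool['total']
---cut here---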


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Infra] Nailgun extensions testing

2016-03-21 Thread Evgeniy L
Hi Roman,

>> reasonable to just install it from PyPi (first we need to release
Nailgun to PyPi)

Yes, there will be dependencies, but there should be a way to test core
extensions (those which go into the standard Fuel build) from master (or any
other branch), so installing from PyPI is not always an option.

>> Master of Nailgun can run its own non-voting job to check compatibility
with the existing extensions ...

Yes, when a patch is submitted to Nailgun, it should run the extensions'
gates to test compatibility, and when a patch to an extension is submitted,
it should be tested with a specific version of Nailgun.
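
As a strawman, the compatibility job could then be little more than this
(hypothetical sketch; repo layout, refs, and test command are all
assumptions):

---cut here---
# Hypothetical strawman of the proposed script: clone fuel-web and the
# extension, check out the requested refs, install both, run the tests.
import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

def test_extension(ext_repo, nailgun_ref='master', ext_ref='master'):
    run('git', 'clone', 'https://git.openstack.org/openstack/fuel-web')
    run('git', '-C', 'fuel-web', 'checkout', nailgun_ref)
    run('git', 'clone', ext_repo, 'extension')
    run('git', '-C', 'extension', 'checkout', ext_ref)
    run('pip', 'install', '-e', 'fuel-web/nailgun')  # assumed layout
    run('pip', 'install', '-e', 'extension')
    run('py.test', 'extension')  # assumed test runner

# test_extension('https://example.org/my-nailgun-extension.git', 'stable/8.0')
---cut here---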

Thanks,

On Mon, Mar 21, 2016 at 4:12 PM, Roman Prykhodchenko  wrote:

> The idea is to write python (or shell) script which will:
>
> - clone all required repos (like fuel-web, extensions repos) using
>   probably zuul-cloner
>
>
> Doesn’t nodepool automatically do that?
>
> - checkout to appropriate stable branches / will cherry-pick some
>   commit / stay on master
>
>
> As far as I understand extensions have Nailgun as their Python requirement
> so it would be reasonable to just install it from PyPi (first we need to
> release Nailgun to PyPi). Master of Nailgun can run its own non-voting job
> to check compatibility with the existing extensions and notify authors
> about any compatibility issues.
>
> 17 бер. 2016 р. о 14:42 Sylwester Brzeczkowski 
> написав(ла):
>
> Hi everyone!
>
> I’m looking for boilerplates/good practices regarding testing
> extensions with core code.
>
> Since we unlocked the Nailgun extensions system [0] and there is now a
> possibility to install extensions from external sources, we also want to
> provide a way to test your own extensions against Nailgun and some other
> extensions. Here is the spec for this activity [1]
>
> The idea is to write python (or shell) script which will:
> - clone all required repos (like fuel-web, extensions repos) using
>   probably zuul-cloner
> - checkout to appropriate stable branches / will cherry-pick some
>   commit / stay on master
> - run tests
>
> This script will be used to:
> - test extension with different Nailgun versions (to check if it’s
> compatible)
>   locally and on extension’s jenkins gate jobs
> - test extension with different Nailgun versions and with other extensions
>   enabled (depending on needs)
> - test Nailgun with some core extensions locally and on fuel-web
>   jenkins gate jobs
>
> The script will be placed in fuel-web repo as extensions will need
> to have Nailgun in its requirements anyway.
>
> There will be new jenkins job which will consume names of
> extensions to test and the branches/commits/versions which
> the tests should be run against. The job will basically fetch fuel-web
> repo, and run the script mentioned above.
>
> What do you think about the idea? Is it a good approach?
> Am I missing some already existing solutions for this problem?
>
> Regards
>
> [0]
> https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery
> [1] https://review.openstack.org/#/c/281749/
>
>
> --
> *Sylwester Brzeczkowski*
> Python Software Engineer
> Product Development-Core : Product Engineering
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

