Re: [openstack-dev] [all] using reno for release note management

2015-12-09 Thread Dmitry Tantsur

On 12/08/2015 08:33 PM, Emilien Macchi wrote:

This morning the Puppet OpenStack group held its weekly meeting [1], and we
discussed using reno [2] for release note management.

We saw three possibilities:

1/ we use reno and require each contribution (bugfix, feature, etc.) to
also edit a release note YAML file
2/ we use reno but the release note YAML file can be updated later (by
the contributor or someone else)


Note that this approach will somewhat complicate backporting to stable 
branches (if it applies to you ofc), as you'll need release notes there 
as well.



3/ we don't use reno and continue to manually write release notes

Solution 3/ is definitely out of scope, and we are willing to use
reno, though we are still looking for the best way to switch to it.

Some people in our group shared some feedback and ideas. Feel free to
comment inline:

* Having a YAML file for one feature/bugfix/... sounds heavy.
* We have 23 repositories (modules) - we will probably start using reno
for one or two modules, and see how it works.
* We will apply 2/ with the goal of moving to 1/ one day. We think that
going directly to 1/ risks frustrating our group, since not everyone is
familiar with releases. Giving a -1 to a good patch just because it does
not include a release note update is not something we want right now. We
need to educate people in using it; that's why I think we might go with 2/.
* Using reno will spread the release note management to our group,
instead of one release manager taking care of that.
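For context, a reno note is a small YAML file dropped into the repository (conventionally under releasenotes/notes/); a minimal sketch, where the filename slug and note text are invented and the section keys follow reno's conventions:

```yaml
# releasenotes/notes/fix-keystone-endpoint-abc123def456.yaml (hypothetical)
---
features:
  - Added a parameter to configure the keystone endpoint.
fixes:
  - Fixed a bug where the service was not restarted on a config change.
```

Each patch carries its own file, so notes merge without conflicts and follow cherry-picks to stable branches.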

Feel free to give more feedback or comment inline; we are really open
to suggestions.

Thanks!

[1]
https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack#Previous_meetings
[2] http://docs.openstack.org/developer/reno/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Thierry Carrez
Thierry Carrez wrote:
> Thierry Carrez wrote:
>> The nomination deadline is passed, we have two candidates!
>>
>> I'll be setting up the election shortly (with Jeremy's help to generate
>> election rolls).
> 
> OK, the election just started. Recent contributors to a stable branch
> (over the past year) should have received an email with a link to vote.
> If you haven't and think you should have, please contact me privately.
> 
> The poll closes on Tuesday, December 8th at 23:59 UTC.
> Happy voting!

The election is over [1]; let me congratulate Matt Riedemann on his
election! Thanks to everyone who participated in the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Keystone][Tempest] OS-INHERIT APIs were skipped by Jenkins because "os_inherit" in keystone.conf was disabled.

2015-12-09 Thread Henry Nash
Hi Maho,

So in the keystone unit tests, we flip the os_inherit flag back and forth 
during tests to make sure it is honored correctly.  For the tempest case, I 
don’t think you need to do that level of testing. Setting the os_inherit flag 
to true will have no effect if you have not created any role assignments that 
are inherited - you’ll just get the regular assignments back as normal. So 
provided there is no test data leakage between tests (i.e. old data lying 
around from a previous test), I think it should be safe to run tempest with 
os_inherit switched on.
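The flag Henry refers to lives in keystone.conf; around that era it looked roughly like this (section and option names as I recall them from keystone's config.py, so treat this as a hedged sketch and verify against the linked source):

```ini
# keystone.conf -- enable inherited role assignments so the OS-INHERIT
# extension APIs are active (the default at the time was disabled)
[os_inherit]
enabled = true
```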

Henry
> On 9 Dec 2015, at 08:45, koshiya maho  wrote:
> 
> Hi all,
> 
> I pushed a patch set of OS-INHERIT API tempest tests (keystone v3).
> https://review.openstack.org/#/c/250795/
> 
> But all the API tests in the patch set were skipped, because "os_inherit" in
> the keystone.conf of the Jenkins jobs was disabled. So they couldn't be
> confirmed.
> 
> Reference information : 
> http://logs.openstack.org/95/250795/5/check/gate-tempest-dsvm-full/fbde6d2/logs/etc/keystone/keystone.conf.txt.gz
> #L1422
> https://github.com/openstack/keystone/blob/master/keystone/common/config.py#L224
> 
> The default "os_inherit" setting is disabled. The OS-INHERIT APIs need the
> "os_inherit" setting enabled.
> 
> For keystone v3 tempest tests using OS-INHERIT, we should enable "os_inherit"
> in the existing keystone.conf used by Jenkins.
> Even if "os_inherit" is enabled, I think there would be no effect on other
> tempest tests.
> 
> Do you have any other ideas?
> 
> Thank you and best regards,
> 
> --
> Maho Koshiya
> NTT Software Corporation
> E-Mail : koshiya.m...@po.ntts.co.jp
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-09 Thread Denis Egorenko
+1

2015-12-09 0:25 GMT+03:00 Clayton O'Neill :

> +1
>
> On Tue, Dec 8, 2015 at 4:15 PM, Matt Fischer  wrote:
>
>> +1
>>
>> On Tue, Dec 8, 2015 at 2:07 PM, Rich Megginson 
>> wrote:
>>
>>> On 12/08/2015 09:49 AM, Emilien Macchi wrote:
>>>
>>> Hi,
>>>
>>> Back in the "old days", Cody was already core on the modules, when they
>>> were hosted in the Puppetlabs namespace.
>>> His contributions [1] are very valuable to the group:
>>> * strong knowledge of Puppet and all its dependencies in general
>>> * very helpful in debugging issues related to Puppet core or dependencies
>>> (beaker, etc.)
>>> * regular attendance at our weekly meetings
>>> * pertinent reviews
>>> * a very good understanding of our coding style
>>>
>>> I would like to propose having him back as part of our core team.
>>> As usual, we need to vote.
>>>
>>>
>>> +1
>>>
>>> Thanks,
>>>
>>> [1]http://stackalytics.openstack.org/?metric=commits=all_type=all_id=ody-cat
>>>


-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis


Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-09 Thread Yanis Guenane
+1

On 12/08/2015 05:49 PM, Emilien Macchi wrote:
> Hi,
>
> Back in the "old days", Cody was already core on the modules, when they
> were hosted in the Puppetlabs namespace.
> His contributions [1] are very valuable to the group:
> * strong knowledge of Puppet and all its dependencies in general
> * very helpful in debugging issues related to Puppet core or dependencies
> (beaker, etc.)
> * regular attendance at our weekly meetings
> * pertinent reviews
> * a very good understanding of our coding style
>
> I would like to propose having him back as part of our core team.
> As usual, we need to vote.
>
> Thanks,
>
> [1]
> http://stackalytics.openstack.org/?metric=commits=all_type=all_id=ody-cat
>
>
--
Yanis Guenane



Re: [openstack-dev] [puppet] proposing Sofer Athlan Guyot part of puppet-keystone core team

2015-12-09 Thread Yanis Guenane
Sorry for the delay, def. big +1 for me.
Congrats Sofer !

On 12/07/2015 09:32 PM, Emilien Macchi wrote:
>
> On 12/03/2015 03:05 PM, Emilien Macchi wrote:
>> Hi,
>>
>> For some months, the Puppet OpenStack group has been very lucky to have
>> Sofer working with us.
>> He became a huge contributor to puppet-keystone; he knows the module
>> perfectly and has written an insane amount of code recently to bring new
>> features that our community requested (some stats: [1]).
>> He's always here to help on IRC and present during our weekly meetings.
>>
>> Core contributors, please vote if you agree to add him to the
>> puppet-keystone core team.
> So far we have got 2 positive reviews from core people, and no negative ones.
> Welcome Sofer and thanks again for your work!
>
>
--
Yanis Guenane



Re: [openstack-dev] [TripleO][DIB] diskimage-builder and python 2/3 compatibility

2015-12-09 Thread Ian Wienand
On 12/09/2015 07:15 AM, Gregory Haynes wrote:
> We ran in to a couple issues adding Fedora 23 support to
> diskimage-builder caused by python2 not being installed by default.
> This can be solved pretty easily by installing python2, but given that
> this is eventually where all our supported distros will end up I would
> like to find a better long term solution (one that allows us to make
> images which have the same python installed that the distro ships by
> default).

So I wonder if we're maybe hitting premature optimisation with this.

> We use +x and a #! to specify a python
> interpreter, but this needs to be python3 on distros which do not ship a
> python2, and python elsewhere.

> Create a symlink in the chroot from /usr/local/bin/dib-python to
> whatever the apropriate python executable is for that distro.

This is a problem for anyone wanting to ship a script that "just
works" across platforms.  I found a similar discussion about a python
launcher at [1] which covers most points and is more or less what
is described above.

I feel like contributing to some sort of global effort in this regard
might be the best way forward, and then ensuring dib uses it.
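The dib-python symlink idea described above could look something like this inside the chroot (a hedged sketch, not the actual diskimage-builder element code; a real element would link into /usr/local/bin rather than the working directory):

```shell
# Expose whichever interpreter the distro actually ships under one stable
# name, so in-chroot scripts can all use "#!/usr/local/bin/dib-python".
target="./dib-python"   # a real element would use /usr/local/bin/dib-python
for py in python2 python3; do
    if command -v "$py" >/dev/null 2>&1; then
        ln -sf "$(command -v "$py")" "$target"
        break
    fi
done
"$target" --version
```

Scripts then never hard-code python2 vs. python3, and the image keeps only the interpreter its distro ships.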

-i

[1] https://mail.python.org/pipermail/linux-sig/2015-October/00.html



Re: [openstack-dev] [nova] stable/liberty 13.1.0 release planning

2015-12-09 Thread Thierry Carrez
Matt Riedemann wrote:
> We've had a few high priority regression fixes in stable/liberty [1][2]
> so I think it's time to do a release.
> [...]

You probably mean 12.0.1 ?

-- 
Thierry Carrez (ttx)



[openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Julien Danjou
Hi fellow developers,

Some oslo_utils.timeutils functions have been deprecated for months, across
several major versions of oslo.utils. We're going to remove these
functions as part of:

https://review.openstack.org/#/c/252898/

Some projects, Glance in particular, are still using these functions.
FWIW, I've started to cook a patch for Glance at:

https://review.openstack.org/#/c/253517/

Please make sure you don't use any of these functions, or upgrading to a
new oslo.utils release will very likely break your project.
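For instance, isotime() and strtime() were, if I recall correctly, among the long-deprecated helpers (check the review above for the authoritative list); plain standard-library datetime formatting covers the common case:

```python
import datetime

# Before (deprecated helper, being removed):
#   from oslo_utils import timeutils
#   stamp = timeutils.isotime(at)
# After, using only the standard library:
at = datetime.datetime(2015, 12, 9, 12, 0, 0)
stamp = at.isoformat()
print(stamp)  # 2015-12-09T12:00:00
```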

Happy hacking!

Cheers,
-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




[openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-09 Thread Sergey Kraynev
Hi Heaters,

At the last IRC meeting we had a question about the Support Conditionals
spec [1].
A previous attempt at this stuff is here [2].
An example of the first POC in Heat can be reviewed here [3].

As I understand it, we have not made any final decision about this work.
So I'd like to clarify the community's feelings about it. This clarification
may be done by answering two simple questions:
 - Why do we want to implement it?
 - Why do NOT we want to implement it?

My personal feeling is:
- Why do we want to implement it?
* A lot of users want to have similar stuff.
* It's already present in AWS, so it would be good to have this feature
in Heat too.
 - Why do NOT we want to implement it?
* it can be solved with Jinja [4]. However, I don't think that's a
really important reason for blocking this work.
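Concretely, the Jinja workaround in [4] means rendering the conditional away before the template ever reaches Heat, e.g. (resource and variable names invented for illustration):

```jinja
{# rendered by Jinja first; Heat only ever sees the expanded YAML #}
resources:
{% if enable_lb %}
  lb:
    type: OS::Neutron::LoadBalancer
{% endif %}
```

The cost is an extra pre-processing step outside Heat, which is exactly what native conditionals would remove.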

Please share your ideas on the two questions above.
That should allow us to eventually decide whether we implement it or not.

[1] https://review.openstack.org/#/c/245042/
[2] https://review.openstack.org/#/c/153771/
[3] https://review.openstack.org/#/c/221648/1
[4] http://jinja.pocoo.org/
-- 
Regards,
Sergey.


[openstack-dev] [mistral] Improving Mistral pep8 rules files to match Mistral guidelines

2015-12-09 Thread ELISHA, Moshe (Moshe)
Hi all,

Is it possible to add all / some of the special Mistral guidelines (like a
blank line before return, a period at the end of a comment, ...) to our pep8
rules file?

This can save a lot of time for both committers and reviewers.
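Such project-specific rules are typically implemented as custom hacking/flake8 checks. A standalone sketch of the "blank line before return" rule (the function name and heuristic are mine, not Mistral's actual code; wired into flake8 it would yield (offset, message) tuples instead of line numbers):

```python
def find_missing_blank_before_return(lines):
    """Return 1-based line numbers where `return` lacks a preceding blank line."""
    offenders = []
    for i, line in enumerate(lines):
        if line.strip().startswith("return") and i > 0:
            prev = lines[i - 1].strip()
            # tolerate `return` directly after a block opener like `def f():`
            if prev and not prev.endswith(":"):
                offenders.append(i + 1)
    return offenders

print(find_missing_blank_before_return(
    ["def f():", "    x = 1", "    return x"]))  # [3]
```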

Thanks!


[openstack-dev] [keystone] RBAC usage at production

2015-12-09 Thread Oguz Yarimtepe

Hi,

I am wondering whether there are people using RBAC in production. The
policy.json file has a structure that requires a restart of the service
each time you edit the file. Is there an on-the-fly solution, or any tips
about it?
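For what it's worth, many services pick up policy.json edits without a restart, because oslo.policy's enforcer rechecks the file's modification time before enforcing (worth verifying for your exact release). The idea, reduced to a stdlib-only sketch (not keystone's actual code):

```python
import json
import os

class PolicyCache:
    """Re-read policy.json only when its modification time changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._rules = {}

    def get(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:      # first read, or the file was edited
            with open(self.path) as f:
                self._rules = json.load(f)
            self._mtime = mtime
        return self._rules
```

An enforcer built this way sees edits on the next request; the trade-off is one stat() call per policy check.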






Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 09 December 2015 08:57
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are open
> 
> Thierry Carrez wrote:
> > Thierry Carrez wrote:
> >> The nomination deadline is passed, we have two candidates!
> >>
> >> I'll be setting up the election shortly (with Jeremy's help to
> >> generate election rolls).
> >
> > OK, the election just started. Recent contributors to a stable branch
> > (over the past year) should have received an email with a link to vote.
> > If you haven't and think you should have, please contact me privately.
> >
> > The poll closes on Tuesday, December 8th at 23:59 UTC.
> > Happy voting!
> 
> The election is over [1]; let me congratulate Matt Riedemann on his election!
> Thanks to everyone who participated in the vote.
> 
> Now I'll submit the request for spinning off as a separate project team to the
> governance ASAP, and we should be up and running very soon.
> 
> Cheers,
> 
> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
> 
> --
> Thierry Carrez (ttx)
> 

Congratulations Matt,

Almost 200 voters, which sounds like a great start for the new team.

- Erno



Re: [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
On 12/08/2015 04:09 AM, Dolph Mathews wrote:
> In Debian, many services/daemons are run, then their API is used by the
> package. In the case of Keystone, for example, it is possible to ask,
> via Debconf, that Keystone registers itself in the service catalog. If
> we get Keystone within Apache, it becomes at least harder to do so.
> 
> 
> You start the next paragraph with "the other issue," but I'm not clear
> on what the issue is here? This sounds like a bunch of complexity that
> is working as you expect it to.

Ok, let me try again. If the only way to get Keystone up and running is
through a web server like Apache, then starting Keystone and then using
its API is not as easy as if it were a daemon on its own. For example,
there may be other types of configuration in use in the web server
which the package doesn't control.

> The other issue is that if all services are sharing the same web server,
> restarting the web server restarts all services. Or, said otherwise: if
> I need to change a configuration value of any of the services served by
> Apache, I will need to restart them all, which is very annoying: I very
> much prefer to just restart *ONE* service if I need.
> 
> As a deployer, I'd solve this by running one API service per server. As
> a packager, I don't expect you to solve this outside of AIO
> architectures, in which case, uptime is obviously not a concern.

Let's say there's a security issue in Keystone. One would expect that a
simple "apt-get dist-upgrade" will do it all. If Keystone is installed in a
web server, should the package aggressively attempt to restart it? If
not, what is the proposed solution to have Keystone restarted in this case?

> Also, something which we learned the hard way at Mirantis: it is *very*
> annoying that Apache restarts every Sunday morning by default in
> distributions like Ubuntu and Debian (I'm not sure for the other
> distros). No, the default config of logrotate and Apache can't be
> changed in distros just to satisfy OpenStack users: there's other users
> of Apache in these distros.
> 
> Yikes! Is the debian Apache package intended to be useful in production?
> That doesn't sound like an OpenStack-specific problem at all. How is
> logrotate involved? Link?

It is logrotate which restarts apache. From /etc/logrotate.d/apache2:

/var/log/apache2/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if /etc/init.d/apache2 status > /dev/null ; then \
/etc/init.d/apache2 reload > /dev/null; \
fi;
endscript
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi; \
endscript
}

This is to be considered quite harmful. Yes, I am fully aware that this
file can be tweaked. Though this is the default, and it is always best
to provide a default which works for our users. And in this case, no
OpenStack package maintainer controls it.
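One common local mitigation, shown here only as an illustrative override (not a distro default, and note copytruncate can drop a few log lines written during the copy), is to rotate without touching the daemon at all:

```
/var/log/apache2/*.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}
```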

> Then, yes, uWSGI becomes a nice option. I used it for the Barbican
> package, and it worked well. Though the uwsgi package in Debian isn't
> very well maintained, and multiple times, Barbican could have been
> removed from Debian testing because of RC bugs against uWSGI.
> 
> uWSGI is a nice option, but no one should be tied to that either-- in
> the debian world or otherwise.

For Debian users, I intend to provide useful configuration so that
everything works by default. OpenStack is complex enough. It is my role,
as a package maintainer, to make it easier to use.

One of the options I have is to create new packages like this:

python-keystone -> Already exists, holds the Python code
keystone -> Currently contains the Eventlet daemon

I could transform the latter into a meta package depending on any of the
options below:

keystone-apache -> Default configuration for Apache
keystone-uwsgi -> An already configured startup script using uwsgi

Though I'm not sure the FTP masters will love me if I create so many new
packages just to create automated configurations... Also, maybe it's
just best to provide *one* implementation which we all consider the
best. I'm just not sure yet which one it is. Right now, I'm leaning
toward uwsgi.
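A keystone-uwsgi package could ship something like the following (all paths and values here are assumptions for illustration, not the actual Debian packaging):

```ini
[uwsgi]
; hypothetical location of the WSGI entry point installed by the package
wsgi-file = /usr/share/keystone/keystone-public.wsgi
http-socket = :5000
master = true
processes = 4
die-on-term = true
```

An init script or systemd unit would then simply wrap `uwsgi --ini` on that file, giving Keystone its own restartable service again.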

> So, all together, I'm a bit reluctant to see the Eventlet based servers
> going away. If it's done, then yes, I'll work around it. Though I'd
> prefer if it didn't.
> 
> Think of it this way: keystone is moving towards using Python
> where Python excels, and is punting up the stack where Python is
> handicapped. Don't think of it as a work around, think of it as having
> the freedom to architect your own deployment.

I'm ok with that, but as per the above, I'd like to provide something
which just works for 

[openstack-dev] [Keystone][Tempest] OS-INHERIT APIs were skipped by Jenkins because "os_inherit" in keystone.conf was disabled.

2015-12-09 Thread koshiya maho
Hi all,

I pushed a patch set of OS-INHERIT API tempest tests (keystone v3).
https://review.openstack.org/#/c/250795/

But all the API tests in the patch set were skipped, because "os_inherit" in
the keystone.conf of the Jenkins jobs was disabled. So they couldn't be
confirmed.

Reference information : 
http://logs.openstack.org/95/250795/5/check/gate-tempest-dsvm-full/fbde6d2/logs/etc/keystone/keystone.conf.txt.gz
#L1422
https://github.com/openstack/keystone/blob/master/keystone/common/config.py#L224

The default "os_inherit" setting is disabled. The OS-INHERIT APIs need the
"os_inherit" setting enabled.

For keystone v3 tempest tests using OS-INHERIT, we should enable "os_inherit"
in the existing keystone.conf used by Jenkins.
Even if "os_inherit" is enabled, I think there would be no effect on other
tempest tests.

Do you have any other ideas?

Thank you and best regards,

--
Maho Koshiya
NTT Software Corporation
E-Mail : koshiya.m...@po.ntts.co.jp





Re: [openstack-dev] [all] using reno for release note management

2015-12-09 Thread Julien Danjou
On Tue, Dec 08 2015, Doug Hellmann wrote:

> I suggest that as part of option 2, reviewers who might have voted
> -1 on a patch under option 1 could submit a follow-up patch to add
> the note. That will help educate contributors about the need to do
> it, and make the note patch easy to find for them so they can review
> it.

This is what I was going to suggest too. You should apply 1/, but be
ready to hijack the patch and update it with the release note as
needed. Don't push people away with a -1 they don't know how to resolve:
show and educate them!

As Dmitry pointed out, 2/ is a bad habit, as it complicates the workflow
for backports, reverts, etc…

My 2c,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info





Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Thierry Carrez
Armando M. wrote:
> For those of you interested in the conversation, the topic was brought up
> for discussion at the latest TC meeting [1]. Unfortunately I was unable
> to join, however I would like to try and respond to some of the comments
> made to clarify my position on the matter:
> 
>> ttx: the neutron PTL say he can't vouch for anything in the neutron
> "stadium"
> 
> To be honest that's not entirely my position.
> [...]

I think I should have said "for everything" rather than "for anything" :)

>> flaper87: I agree a driver should not be independent
> 
> Why, what's your rationale? If we dig deeper, some drivers are small
> code drops with no or untraceable maintainers. Some are actively
> developed and can be fairly complex. The spectrum is pretty wide. Either
> way, I think that preventing them from being independent in principle
> may hurt the ones that can be pretty elaborate, and the ones that are
> stale may hurt Neutron's reputation, because we're the ones who are
> supposed to look after them (after all, didn't we vouch for them?)
> [...]

Yes, I agree with you that the line in the sand (between what should be
independent and what should stay in neutron) should not be based on a
technical classification, but on a community definition. The "big tent"
is all about project teams - we judge if that team follows the OpenStack
way, more than we judge what the team technically produces. As far as
neutron goes, the question is not whether what the team produces is a
plugin or a driver: the question is whether all the things are actually
produced by the same team and the same leadership.

If the teams producing those things overlap so significantly the Neutron
leadership can vouch for them being done by "the neutron project team",
they should stay in. If the subteams do not overlap, or follow different
development practices, or have independent leadership, they are not
produced by "the neutron project team" and should have their own
independent project team.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [puppet] Config support for oslo.config.cfg.MultiStrOpt

2015-12-09 Thread Martin Mágr



On 12/04/2015 11:21 PM, Emilien Macchi wrote:


On 12/02/2015 10:32 PM, Cody Herriges wrote:

Martin,

I see no reason this shouldn't just be pushed into puppetlabs-inifile.
I can't actually find a real "spec" for the INI format, and even the Wiki
link [3] calls out that there is no actual spec.

I suggest:

1/ we land https://review.openstack.org/#/c/234727/
2/ in the meantime, we work on a puppetlabs-inifile patch
3/ once it's done, we switch puppet-openstacklib to use it.

What do you think?
Martin, are you willing to work on it?


Sure, no problem.




On Fri, Nov 27, 2015 at 5:04 AM, Martin Mágr wrote:

 Greetings,

   I've submitted a patch to puppet-openstacklib [1] which adds a
 provider for parsing INI files containing duplicated variables
 (a.k.a. MultiStrOpt [2]). Such parameters are used, for example, to set
 service_providers/service_provider for Neutron LBaaSv2. There has
 been a thought raised that the patch should rather be submitted to the
 puppetlabs-inifile module instead. The reason I did not submit the
 patch to the inifile module is that IMHO duplicate variables are not
 in the INI file spec [3]. Thoughts?

 Regards,
 Martin


 [1] https://review.openstack.org/#/c/234727/
 [2]
 
https://docs.openstack.org/developer/oslo.config/api/oslo.config.cfg.html#oslo.config.cfg.MultiStrOpt
 [3] https://en.wikipedia.org/wiki/INI_file#Duplicate_names
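For illustration, the duplicated-variable style that oslo.config's MultiStrOpt produces looks like this (the values are made up, loosely modeled on Neutron's service_providers section):

```ini
[service_providers]
service_provider = LOADBALANCERV2:Haproxy:some.driver.path:default
service_provider = VPN:openswan:another.driver.path
```

A strict INI parser keeps only the last assignment, which is exactly why a dedicated provider (or an inifile extension) is needed to read and write every occurrence.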













[openstack-dev] [midonet] #midonet now our official channel; OpenStack service bots added

2015-12-09 Thread Sandro Mathys
As discussed [1] and announced [2], #midonet is now the official
MidoNet development channel.

We've now also got the three OpenStack service bots approved:

1) meetbot (nickname: openstack), in order to have properly logged
meetings in #midonet. Note that our regular meetings are scheduled to
happen in #openstack-meeting - but in case an irregular / impromptu
meeting is required, this should prove useful.

When starting a meeting, make sure to use either "midonet" or
"networking_midonet" (mind the underscore) as the meeting name. The
logs can then be found at [3] or [4] respectively.
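For anyone who hasn't driven meetbot before, a minimal session in-channel looks like this (the topic/info text is invented; the #startmeeting argument is what determines where the logs land):

```
#startmeeting networking_midonet
#topic status updates
#info gerritbot now reports networking-midonet changes
#endmeeting
```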

Furthermore, the channel is also constantly being logged to [5].

2) gerritbot (nickname: openstackgerrit), in order to get gerrit
notifications. Unfortunately, only OpenStack gerrit is supported,
meaning that we currently get notifications for networking-midonet
only.

3) statusbot (nickname: openstackstatus), in order to receive
notification from OpenStack infra. They send out status notifications,
e.g. Gerrit is restarted or there's a problem with the gate. See [6]
for past notifications to understand better how they use it.

Furthermore, the "#success <message>" command has recently been added
and can be used by everyone. It's meant to collect small successes in
OpenStack development, and they're collected on a wiki page [7]. I
think "highlights" are also cherry-picked for the OpenStack
newsletter. I figure it would be great to use when reaching development
milestones or publishing new releases, as well as for any other success.

Let me know if you have any questions regarding our IRC channel or any
of these bots.

Cheers,
Sandro

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-December/080914.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-December/081376.html
[3] http://eavesdrop.openstack.org/meetings/midonet/
[4] http://eavesdrop.openstack.org/meetings/networking_midonet/
[5] http://eavesdrop.openstack.org/irclogs/#midonet/
[6] https://wiki.openstack.org/wiki/Infrastructure_Status
[7] https://wiki.openstack.org/wiki/Successes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Sean Dague
On 12/09/2015 01:46 AM, Armando M. wrote:
> 
> 
> On 3 December 2015 at 02:21, Thierry Carrez wrote:
> 
> Armando M. wrote:
> > On 2 December 2015 at 01:16, Thierry Carrez wrote:
> >> Armando M. wrote:
> >> >> One solution is, like you mentioned, to make some (or all) of them
> >> >> full-fledged project teams. Be aware that this means the TC would judge
> >> >> those new project teams individually and might reject them if we feel
> >> >> the requirements are not met. We might want to clarify what happens
> >> >> then.
> >> >
> >> > That's a good point. Do we have existing examples of this or would we be
> >> > sailing in uncharted waters?
> >>
> >> It's been pretty common that we rejected/delayed applications for
> >> projects where we felt they needed more alignment. In such cases, the
> >> immediate result for those projects if they are out of the Neutron
> >> "stadium" is that they would fall from the list of official projects.
> >> Again, I'm fine with that outcome, but I want to set expectations
> >> clearly :)
> >
> > Understood. It sounds to me that the outcome would be that those
> > projects (that may end up being rejected) would show nowhere on [1], but
> > would still be hosted and can rely on the support and services of the
> > OpenStack community, right?
> >
> > [1] http://governance.openstack.org/reference/projects/
> 
> Yes they would still be hosted on OpenStack development infrastructure.
> Contributions would no longer count toward ATC status, so people who
> only contribute to those projects would no longer be able to vote in the
> Technical Committee election. They would not have "official" design
> summit space either -- they can still camp in the hallway though :)
> 
> 
> Hi folks,
> 
> For those of you interested in the conversation, the topic was brought
> up for discussion at the latest TC meeting [1]. Unfortunately I was unable
> to join; however, I would like to try and respond to some of the comments
> made, to clarify my position on the matter:
> 
>> ttx: the neutron PTL says he can't vouch for anything in the neutron "stadium"
> 
> To be honest that's not entirely my position.
> 
> The problem stems from the fact that, if I am asked what the stadium
> means, as a PTL I can't give a straight answer; ttx put it relatively
> well (and I quote him): by adding all those projects under your own
> project team, you bypass the Technical Committee approval that they
> behave like OpenStack projects and are produced by the OpenStack
> community. The Neutron team basically vouches for all of them to be on
> par. As far as the Technical Committee goes, they are all being produced
> by the same team we originally blessed (the Neutron project team).
> 
> The reality is: some of these projects are not produced by the same
> team, they do not behave the same way, and they do not follow the same
> practices and guidelines. For the stadium to make sense, in my humble
> opinion, a definition of these practices should happen and enforcement
> should follow, but who's got the time for policing and enforcing
> eviction, especially on a large scale? So we either reduce the scale
> (which might not be feasible because in OpenStack we're all about
> scaling and adding more and more and more), or we address the problem
> more radically by evolving the relationship from tight aggregation to
> loose association; this way who needs to vouch for the Neutron
> relationship is not the Neutron PTL, but the person sponsoring the
> project that wants to be associated to Neutron. On the other end, the
> vouching may still be pursued, but for a much more focused set of
> initiatives that are led by the same team.
> 
>> russellb: I attempted to start breaking down the different types of
>> repos that are part of the stadium (consumer, api, implementation of
>> technology, plugins/drivers).
> 
> The distinction between implementation of technology, plugins/drivers
> and api is not justified IMO because from a neutron standpoint they all
> look the same: they leverage the pluggable extensions to the
> Neutron core framework. As I attempted to say: we have existing plugins
> and drivers that implement APIs, and we have plugins that implement
> technology, so the extra classification seems like overspecification.
> 
>> flaper87: I agree a driver should not be independent
> 
> Why, what's your rationale? If we dig deeper, some drivers are small
> code drops with no or untraceable maintainers. Some are actively
> developed and can be fairly complex. The spectrum is pretty wide. Either
> way, I think that preventing them from being independent 

[openstack-dev] [openstack-ansible] Mid Cycle Sprint

2015-12-09 Thread Jesse Pretorius
Hi everyone,

At the Mitaka design summit in Tokyo we had some corridor discussions about
doing a mid-cycle meetup for the purpose of continuing some design
discussions and doing some specific sprint work.

***
I'd like indications of who would like to attend and what
locations/dates/topics/sprints would be of interest to you.
***

For guidance/background I've put some notes together below:

Location

We have contributors, deployers and downstream consumers across the globe
so picking a venue is difficult. Rackspace have facilities in the UK
(Hayes, West London) and in the US (San Antonio) and are happy for us to
make use of them.

Dates
-
Most of the mid-cycles for upstream OpenStack projects are being held in
January. The Operators mid-cycle is on February 15-16.

As I feel that it's important that we're all as involved as possible in
these events, I would suggest that we schedule ours after the Operators
mid-cycle.

It strikes me that it may be useful to do our mid-cycle immediately after
the Ops mid-cycle, and do it in the UK. This may help to optimise travel
for many of us.

Format
--
The format of the summit is really for us to choose, but typically they're
formatted along the lines of something like this:

Day 1: Big group discussions similar in format to sessions at the design
summit.

Day 2: Collaborative code reviews, usually performed on a projector, where
the goal is to merge things that day (if a review needs more than a single
iteration, we skip it. If a review needs small revisions, we do them on the
spot).

Day 3: Small group / pair programming.

Topics
--
Some topics/sprints that come to mind that we could explore/do are:
 - Install Guide Documentation Improvement [1]
 - Development Documentation Improvement (best practices, testing, how to
develop a new role, etc)
 - Upgrade Framework [2]
 - Multi-OS Support [3]

[1] https://etherpad.openstack.org/p/oa-install-docs
[2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
[3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

-- 
Jesse Pretorius
IRC: odyssey4me


Re: [openstack-dev] [mistral] Improving Mistral pep8 rules files to match Mistral guidelines

2015-12-09 Thread Anastasia Kuznetsova
Hi Moshe,

Great idea!

It is possible to prepare some additional code checks; for example, you can
take a look at how it was done in the Rally project [1].
Before starting such work in Mistral, I guess that we can describe our
additional code style rules in our official docs (somewhere in the "Developer
Guide" section [2]).

[1] https://github.com/openstack/rally/tree/master/tests/hacking
[2] http://docs.openstack.org/developer/mistral/#developer-guide
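
For illustration, a hacking-style check (in the spirit of the Rally checks
linked above) is just a function that the style checker calls for each line
and that reports (offset, message) violations. The rule code M310 and the
exact message below are hypothetical — a sketch of what a "comment ends
with a period" check could look like, not an actual Mistral rule:

```python
import re

# Matches a standalone comment line and captures its text.
COMMENT_RE = re.compile(r"#\s*(?P<text>.*\S)\s*$")


def check_comment_ends_with_period(physical_line):
    """M310 - a standalone comment should end with a period (hypothetical).

    The checker calls this for every physical line; returning a
    (column_offset, message) tuple flags a violation, returning None
    means the line is fine.
    """
    match = COMMENT_RE.match(physical_line.strip())
    if match and not match.group("text").endswith((".", "!", "?")):
        return 0, "M310: comment should end with a period"
```

Such checks would then be wired up through hacking's local-check mechanism
so they run as part of the normal pep8 job.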

On Wed, Dec 9, 2015 at 11:21 AM, ELISHA, Moshe (Moshe) <
moshe.eli...@alcatel-lucent.com> wrote:

> Hi all,
>
>
>
> Is it possible to add all / some of the special guidelines of Mistral
> (like blank line before return, period at end of comment, …) to our pep8
> rules file?
>
>
>
> This can save a lot of time for both committers and reviewers.
>
>
>
> Thanks!
>
>
>


-- 
Best regards,
Anastasia Kuznetsova


Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-09 Thread Emilien Macchi
\o/ I think that's a big +1 from the team :-)

Welcome back Cody and thanks for your work!

On 12/09/2015 04:06 AM, Denis Egorenko wrote:
> +1
> 
> 2015-12-09 0:25 GMT+03:00 Clayton O'Neill:
> 
> +1
> 
> On Tue, Dec 8, 2015 at 4:15 PM, Matt Fischer wrote:
> 
> +1
> 
> On Tue, Dec 8, 2015 at 2:07 PM, Rich Megginson wrote:
> 
> On 12/08/2015 09:49 AM, Emilien Macchi wrote:
>> Hi,
>>
>> Back in "old days", Cody was already core on the modules, when they were
>> hosted by the Puppetlabs namespace.
>> His contributions [1] are very valuable to the group:
>> * strong knowledge on Puppet and all dependencies in general.
>> * very helpful to debug issues related to Puppet core or dependencies
>> (beaker, etc).
>> * regular attendance to our weekly meeting
>> * pertinent reviews
>> * very understanding of our coding style
>>
>> I would like to propose having him back part of our core team.
>> As usual, we need to vote.
> 
> +1
> 
>> Thanks,
>>
>> [1]
>> 
>> http://stackalytics.openstack.org/?metric=commits=all_type=all_id=ody-cat
> 
> 
> 
> 
> -- 
> Best Regards,
> Egorenko Denis,
> Deployment Engineer
> Mirantis
> 
> 

-- 
Emilien Macchi





Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-09 Thread Sean Dague
On 12/08/2015 11:47 PM, Ken'ichi Ohmichi wrote:
> Hi Sylvain,
> 
> 2015-12-04 17:48 GMT+09:00 Sylvain Bauza :
>>>
>>> That leaves the out-of-tree discussion about custom filters and how we
>>> could have a consistent behaviour given that. Should we accept something in
>>> a specific deployment while another deployment could 401 against it ? Mmm,
>>> bad to me IMHO.
>>
>> We can have code to check the out-of-tree filters didn't expose any same
>> hints with in-tree filter.
>>
>> Sure, and thank you for that, that was missing in the past. That said, there
>> are still some interoperability concerns, let me explain : as a cloud
>> operator, I'm now providing a custom filter (say MyAwesomeFilter) which does
>> the lookup for an hint called 'my_awesome_hint'.
>>
>> If we enforce a strict validation (and not allow to accept any hint) it
>> would mean that this cloud would accept a request with 'my_awesome_hint'
>> while another cloud which wouldn't be running MyAwesomeFilter would then
>> deny the same request.
> 
> I am thinking the operator/vendor own filter should have some
> implementation code for registering its original hint to jsonschema to
> expose/validate available hints in the future.
> The way should be easy as possible so that they can implement the code easily.
> After that, we will be able to make the validation strict again.

Yeh, that was my thinking. As someone that did a lot of the jsonschema
work, is that something you could prototype?
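
A minimal sketch of that registration idea in plain Python (all names below
are hypothetical; the real version would plug into Nova's jsonschema
validation): each filter, in-tree or out-of-tree, registers the schema for
its hint at load time, and strict validation then only rejects hints that no
loaded filter registered:

```python
# Registry of known scheduler hints; in-tree filters pre-populate it.
HINT_SCHEMAS = {
    "group": {"type": "string"},
}


def register_hint(name, schema):
    """Called by a filter (e.g. an out-of-tree MyAwesomeFilter) at load time."""
    HINT_SCHEMAS[name] = schema


def validate_hints(hints):
    """Strict validation: reject any hint that no loaded filter registered."""
    unknown = set(hints) - set(HINT_SCHEMAS)
    if unknown:
        raise ValueError("unknown scheduler hints: %s" % sorted(unknown))


# The out-of-tree filter registers its hint, so validation can stay strict
# without breaking deployments that actually run the custom filter.
register_hint("my_awesome_hint", {"type": "string"})
validate_hints({"group": "anti-affinity", "my_awesome_hint": "on"})
```

A cloud without MyAwesomeFilter never registers "my_awesome_hint", so the
same request is rejected there — which is exactly the interoperability
trade-off being discussed.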

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [openstack-ansible] Mid Cycle Sprint

2015-12-09 Thread Major Hayden
On Wed, 2015-12-09 at 12:45 +, Jesse Pretorius wrote:
> At the Mitaka design summit in Tokyo we had some corridor discussions
> about doing a mid-cycle meetup for the purpose of continuing some
> design discussions and doing some specific sprint work.
> 
> ***
> I'd like indications of who would like to attend and what
> locations/dates/topics/sprints would be of interest to you.
> ***

I'm glad to see this brought up on the list.  As a fairly new
contributor, I'd really like some more face time with folks who work on
openstack-ansible.

As far as topics go, I'm very interested in:

  * Documentation cleanup (writing docs for personas, friendlier
install guide, troubleshooting docs)

  * Multi-OS support (specifically Fedora + CentOS, possibly Debian)

I'm located in San Antonio, TX (USA), so I'd prefer to have it
somewhere around here.  I certainly wouldn't pass up a trip to London
either (if it's in the cards). ;)

-- 
Major Hayden





Re: [openstack-dev] [neutron] Linux Bridge CI is now a voting gate job

2015-12-09 Thread Jay Pipes

On 12/08/2015 07:54 PM, Andreas Scheuring wrote:

Great work!


Indeed, Sean, fantastic effort getting this done. :)

-jay

On Mo, 2015-12-07 at 21:05 +, Sean M. Collins wrote:

>It's been a couple months - the last time I posted on this subject we
>were still working on getting Linux Bridge to become an experimental[1]
>job. During the Liberty cycle, the Linux Bridge CI was promoted from
>experimental status, to being run on all Neutron changes, but
>non-voting.
>
>Well, we're finally at the point where the Linux Bridge job is
>gating[2]. I am sure there are still bugs that will need to be addressed
>- I will be watching the gate very carefully the next couple of hours
>and throughout this week.
>
>Feel free to leave bags of flaming :poo: at my doorstep
>
>On a serious note, thank you to everyone who over this year has
>committed patches and fixes to make this happen, it's been an amazing
>example of open source and community involvement. I'll be happy to buy
>drinks if you helped with LB in San Antonio if there is a neutron social
>event (in addition to paying back amotoki for the Tokyo social).
>
>[1]:http://lists.openstack.org/pipermail/openstack-dev/2015-July/068859.html
>[2]:https://review.openstack.org/205674
>




Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-12-09 Thread Jiri Tomasek

On 11/25/2015 03:17 PM, Jay Dobies wrote:

I think at the same time we add a mechanism to distinguish between
internal and external parameters, we need to add something to indicate
required v. optional.

With a nested stack, anything that's not part of the top-level parameter
contract is defaulted. The problem is that it loses information on what
is a valid default v. what's simply defaulted to pass validation.


I thought the nested validation spec was supposed to handle that though?
To me, required vs. optional should be as simple as "Does the parameter
definition have a 'default' key? If yes, then it's optional; if no,
then it's required for the user to pass a value via a parameter or
parameter_default". I realize we may not have been following that up to
now for various reasons, but it seems like Heat is already providing a
pretty explicit mechanism for marking params as required, so we ought to
use it.


Ya, I was mistaken here. Taking a look at the cinder-netapp.yaml, it 
looks like we're using this correctly:


...
  CinderNetappBackendName:
type: string
default: 'tripleo_netapp'
  CinderNetappLogin:
type: string
  CinderNetappPassword:
type: string
hidden: true
...




I need to read the thread once again, but I'd like to add a few 
observations from the GUI implementation:


The nested validation as it works right now, requires that all root 
template parameters need to have 'default' or 'value' set, otherwise the 
heat validation fails and no parameters are returned. This is a sort of 
a blocker because we need to use this to retrieve the parameters and let 
user set the values for them. This means, that to be able to list the 
parameters, we need to make sure that all root template parameters have 
'default' set, which is not optimal.
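
For instance, with the cinder-netapp fragment quoted earlier in the thread,
validation can only be made to pass by supplying values for the default-less
parameters, e.g. through an environment file along these lines (the values
here are invented):

```yaml
parameter_defaults:
  CinderNetappLogin: netapp_admin
  CinderNetappPassword: not_a_real_password
```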


Another observation (maybe a bit outside of the topic) is that the list of
parameters defined in the root template is huge. It would be nice if the root
template, and more probably the root environment, included a resource registry
only for the roles/templates that are explicitly required for the
minimal deployment (controller, compute), and split the other roles into
separate optional environments.
In the current situation the user is required to set flavors, node counts,
etc. for all roles defined in the root template even though he is not going
to use them (sets the node_count to 0).



Jirka



Re: [openstack-dev] [neutron] Linux Bridge CI is now a voting gate job

2015-12-09 Thread Sean Dague
On 12/07/2015 04:05 PM, Sean M. Collins wrote:
> It's been a couple months - the last time I posted on this subject we
> were still working on getting Linux Bridge to become an experimental[1]
> job. During the Liberty cycle, the Linux Bridge CI was promoted from
> experimental status, to being run on all Neutron changes, but
> non-voting.
> 
> Well, we're finally at the point where the Linux Bridge job is
> gating[2]. I am sure there are still bugs that will need to be addressed
> - I will be watching the gate very carefully the next couple of hours
> and throughout this week.
> 
> Feel free to leave bags of flaming :poo: at my doorstep
> 
> On a serious note, thank you to everyone who over this year has
> committed patches and fixes to make this happen, it's been an amazing
> example of open source and community involvement. I'll be happy to buy
> drinks if you helped with LB in San Antonio if there is a neutron social
> event (in addition to paying back amotoki for the Tokyo social).
> 
> [1]: http://lists.openstack.org/pipermail/openstack-dev/2015-July/068859.html
> [2]: https://review.openstack.org/205674
> 

Nicely done! Thanks much!

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread Kekane, Abhishek
Hi Devs,

We are adding support for returning 'x-openstack-request-id'  to the caller as 
per the design proposed in cross-project specs:
http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

Problem Description:
Cannot add a new property of list type to the warlock.model object.

How is a model object created:
Let's take an example of glanceclient.api.v2.images.get() call [1]:

Here, after getting the response, we call the model() method. This model() does the
job of creating a warlock.model object (essentially a dict) based on the schema
given as argument (the image schema retrieved from glance in this case). Inside
model(), the raw() method simply returns the image schema as a JSON object. The
advantage of this warlock.model object over a simple dict is that it validates
any changes to the object based on the rules specified in the reference schema.
The keys of this model object are available as object properties to the caller.

Underlying reason:
The schema for different sub-APIs is returned a bit differently. For the images and
metadef APIs, glance.schema.Schema.raw() is used, which returns a schema
containing "additionalProperties": {"type": "string"}. Whereas for the members and
tasks APIs, glance.schema.Schema.minimal() is used to return a schema object which
does not contain "additionalProperties".

So we can add extra properties of any type to the model object returned from 
members or tasks API but for images and metadef APIs we can only add properties 
which can be of type string. Also for the latter case we depend on the glance 
configuration to allow additional properties.

As per our analysis we have come up with two approaches for resolving this 
issue:

Approach #1:  Inject request_ids property in the warlock model object in glance 
client
Here we do the following:
1. Inject the 'request_ids' as additional property into the model 
object(returned from model())
2. Return the model object which now contains request_ids property

Limitations:
1. Because the glance schemas for images and metadef only allow additional
properties of type string, even though the natural type of request_ids should be
a list, we have to make it a comma-separated 'string' of request ids as a
compromise.
2. A lot of extra code is needed to wrap objects returned from the client API so
that the caller can get request ids. For example, we need to write wrapper
classes for dict, list, str, tuple and generator.
3. Not a good design, as we are adding a property which should actually be a
base property but is added as an additional property as a compromise.
4. There is a dependency on the glance configuration as to whether custom/additional
properties are allowed or not. [2]
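
To make limitation 2 concrete, the wrappers would look something like the
following sketch (the class names are hypothetical, not taken from an actual
patch):

```python
class ListWithMeta(list):
    """A list that also carries the x-openstack-request-id values."""

    def __init__(self, values, request_ids):
        super(ListWithMeta, self).__init__(values)
        self.request_ids = request_ids


class StrWithMeta(str):
    """A str that carries request ids; str is immutable, so the extra
    attribute has to be attached in __new__ rather than __init__."""

    def __new__(cls, value, request_ids):
        obj = super(StrWithMeta, cls).__new__(cls, value)
        obj.request_ids = request_ids
        return obj


# The client would wrap each response before returning it to the caller:
images = ListWithMeta(["image-1", "image-2"], ["req-0594182b"])
```

Multiplied across dict, tuple and generator return types, this is the "lot
of extra code" the limitation refers to.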

Approach #2:  Add 'request_ids' property to all schema definitions in glance

Here we add  'request_ids' property as follows to the various APIs (schema):

"request_ids": {
  "type": "array",
  "items": {
"type": "string"
  }
}

Doing this will make the changes in glanceclient very simple compared to
approach #1.
This also looks like a better design, as it will be consistent.
We simply need to modify the request_ids property in the various API calls, for
example glanceclient.v2.images.get().

Please let us know which approach is better or any suggestions for the same.

[1] 
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L179
[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L944



[openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-09 Thread Kai Qiang Wu


Hi All,

I am not sure what changed these days, but we now quite often find that the
Jenkins jobs fail with:


http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/libvirt/libvirtd.txt.gz

2015-12-09 08:52:27.892+: 22957: debug :
qemuMonitorJSONCommandWithFd:264 : Send command
'{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
2015-12-09 08:52:27.892+: 22957: debug : qemuMonitorSend:959 :
QEMU_MONITOR_SEND_MSG: mon=0x7fa66400c6f0 msg=
{"execute":"qmp_capabilities","id":"libvirt-1"}
 fd=-1
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:28.070+: 22951: error : qemuMonitorIORead:554 : Unable
to read from monitor: Connection reset by peer
2015-12-09 08:52:28.070+: 22951: error : qemuMonitorIO:690 : internal
error: early end of file from monitor: possible problem:
Cannot set up guest memory 'pc.ram': Cannot allocate memory




We did not hit such resource issues before. I am not sure if the Infra or nova
folks know about it?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Antoni Segura Puimedon
On Tue, Dec 8, 2015 at 1:58 PM, Galo Navarro  wrote:

> Hi Sandro,
>
> >> 1) (Downstream) packaging: midonet and python-midonetclient are two
> >> distinct packages, and therefore should have distinct upstream
> >> tarballs - which are compiled on a per repo basis.
>
> This is actually not accurate: there is no such thing as a midonet
> package. The midonet repo produces 4 or 5 separate packages: agent,
> cluster, tools, py client.
>
> I'd like to understand a bit better what exactly you're trying to
> achieve. Is it to produce tarballs? All of them? Just
> python-midonetclient?
>
> Let's examine the concrete requirements before rushing headlong into
> highly disruptive changes like splitting repos. For example, a
> py-midonetclient tarball can be built already without having a
> separate repo.
>
> > 3) In order to put python-midonetclient on PyPI, it's probably
> > required to be in its own repo as well, isn't it? Because that's
> > another requirement [3]
>
> Ditto. We already have a mirror repo of pyc for this purpose
> https://github.com/midonet/python-midonetclient, synced daily.
>

One of the problems with that is that it does not have any git log history,
nor does it feel like a coding project at all.

Allow me to put forward a solution that will let you keep the development
in the midonet tree while, at the same time, having a proper repository
with identifiable patches in github.com/midonet/python-midonetclient

Look at the repo I created [1] and especially at its commit history [2].

As you can see, all the commit history relevant to
midonet/python-midonetclient (and only that) is present.

This is generated in the following way. There should be a job that runs once
in the midonet repo:

git format-patch -o "${HOME}/patches_current" \
    --relative=python-midonetclient \
    7aef7ea7845a2125696303a277d40bd45c9240e2..master

Then, each day it should do:

cd ${JOB_HOME}
git clone https://github.com/celebdor/python-midonetclient.git
git clone https://github.com/midonet/midonet.git

pushd midonet
git format-patch -o "${JOB_HOME}/patches_new" \
    --relative=python-midonetclient \
    7aef7ea7845a2125696303a277d40bd45c9240e2..master
popd

pushd python-midonetclient
for file in `diff <(ls -1a "${HOME}/patches_current") <(ls -1a "${JOB_HOME}/patches_new") | cut -f2 -d' '`
do
    git am < "${JOB_HOME}/patches_new/$file"
done
popd

mv "${JOB_HOME}/patches_new" "${HOME}/patches_current"

It should be quite straightforward to change whichever job you currently
use to
this.

The last remaining issue will be that of tags. github.com/midonet/midonet is
not tagging all the releases. However, python-midonetclient should, so I
would just ask that when you make a release you push the tag to
python-midonetclient as well.

[1] https://github.com/celebdor/python-midonetclient/
[2] https://github.com/celebdor/python-midonetclient/commits/master


Regards,

Toni


> g
>


[openstack-dev] [ironic]Boot physical machine fails, says "PXE-E11 ARP Timeout"

2015-12-09 Thread Zhi Chang
hi, all
I treat a normal physical machine as a bare metal machine. The physical
machine booted when I ran "nova boot xxx" on the command line, but then an
error happens. I uploaded a video to YouTube, link:
https://www.youtube.com/watch?v=XZQCNsrkyMI. Could someone
give me some advice?


Thx
Zhi Chang


Re: [openstack-dev] [glance][tempest][defcore] Process to improve test coverage in tempest

2015-12-09 Thread Flavio Percoco

On 08/12/15 22:31 +0100, Jordan Pittier wrote:

Hi Flavio,

On Tue, Dec 8, 2015 at 9:52 PM, Flavio Percoco  wrote:

  
   Oh, I meant occasionally. Whenever a missing test for an API is found,
   it'd be easy enough for the implementer to show up at the meeting and
   bring it up.

From my experience as a Tempest reviewer, I'd say that most newly added tests
are *not* submitted by regular Tempest contributors. I assume (wrongly?) that
it's mostly people from the actual projects (e.g. glance) who are interested in
adding new Tempest tests to test a recently implemented feature. Put
differently, I don't think it's the Tempest core team/community's job to add new
tests. We mostly provide a framework and guidance these days.


I agree that the tempest team should focus on providing the framework
rather than the tests themselves. However, these tests are often
contributed by people who are not part of the project's team.


But, reading this thread, I don't know what to suggest. As a Tempest reviewer I
won't start a new ML thread or send a message to a PTL each time I see a new
test being added... I assume the patch author knows what they are doing; I
can't keep up with what's going on in each and every project.


This is what I'd like to avoid. This assumption is exactly what almost
got the tasks API test merged, and the same will likely happen for
other things.

I don't think it's wrong to ping someone from the community when new
tests are added, especially because these tests are used by defcore
as well. Adding the PTL to the review (or some liaison) is simple
enough. We do this for many things in OpenStack. That is, we wait for
PTLs/liaisons approval before going forward with some decisions.


Also, a test can be quickly removed if it is later on deemed not so useful.


Sure, but this is wasting people's time: the contributor's, the reviewer's
and the community's, as the test will have to be added, reviewed and then
deleted.

I agree this doesn't happen too often but the fact that it happened is
enough of a reason for me to work on improving the process. Again,
especially because these tests are not meant to be used just by our
CI.

Cheers,
Flavio



Jordan
 


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Doug Hellmann
Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:
> On 3 December 2015 at 02:21, Thierry Carrez  wrote:
> 
> > Armando M. wrote:
> > > On 2 December 2015 at 01:16, Thierry Carrez wrote:
> > >> Armando M. wrote:
> > >> >> One solution is, like you mentioned, to make some (or all) of
> > them
> > >> >> full-fledged project teams. Be aware that this means the TC
> > would judge
> > >> >> those new project teams individually and might reject them if we
> > feel
> > >> >> the requirements are not met. We might want to clarify what
> > happens
> > >> >> then.
> > >> >
> > >> > That's a good point. Do we have existing examples of this or
> > would we be
> > >> > sailing in uncharted waters?
> > >>
> > >> It's been pretty common that we rejected/delayed applications for
> > >> projects where we felt they needed more alignment. In such cases,
> > the
> > >> immediate result for those projects if they are out of the Neutron
> > >> "stadium" is that they would fall from the list of official
> > projects.
> > >> Again, I'm fine with that outcome, but I want to set expectations
> > >> clearly :)
> > >
> > > Understood. It sounds to me that the outcome would be that those
> > > projects (that may end up being rejected) would show nowhere on [1], but
> > > would still be hosted and can rely on the support and services of the
> > > OpenStack community, right?
> > >
> > > [1] http://governance.openstack.org/reference/projects/
> >
> > Yes they would still be hosted on OpenStack development infrastructure.
> > Contributions would no longer count toward ATC status, so people who
> > only contribute to those projects would no longer be able to vote in the
> > Technical Committee election. They would not have "official" design
> > summit space either -- they can still camp in the hallway though :)
> >
> 
> Hi folks,
> 
> For whom of you is interested in the conversation, the topic was brought
> for discussion at the latest TC meeting [1]. Unfortunately I was unable to
> join, however I would like to try and respond to some of the comments made
> to clarify my position on the matter:
> 
> > ttx: the neutron PTL say he can't vouch for anything in the neutron
> "stadium"
> 
> To be honest that's not entirely my position.
> 
> The problem stems from the fact that, if I am asked what the stadium means,
> as a PTL I can't give a straight answer; ttx put it relatively well (and I
> quote him): by adding all those projects under your own project team, you
> bypass the Technical Committee approval that they behave like OpenStack
> projects and are produced by the OpenStack community. The Neutron team
> basically vouches for all of them to be on par. As far as the Technical
> Committee goes, they are all being produced by the same team we originally
> blessed (the Neutron project team).
> 
> The reality is: some of these projects are not produced by the same team,
> they do not behave the same way, and they do not follow the same practices
> and guidelines. For the stadium to make sense, in my humble opinion, a

This is the thing that's key, for me. As Anita points out elsewhere in
this thread, we want to structure our project teams so that decision
making and responsibility are placed in the same set of hands. It sounds
like the Stadium concept has made it easy to let those diverge.

> definition of these practices should happen and enforcement should follow,
> but who's got the time for policing and enforcing eviction, especially on a
> large scale? So we either reduce the scale (which might not be feasible
> because in OpenStack we're all about scaling and adding more and more and
> more), or we address the problem more radically by evolving the
> relationship from tight aggregation to loose association; this way who
> needs to vouch for the Neutron relationship is not the Neutron PTL, but the
> person sponsoring the project that wants to be associated to Neutron. On
> the other hand, the vouching may still be pursued, but for a much more
> focused set of initiatives that are led by the same team.
> 
> > russellb: I attempted to start breaking down the different types of repos
> that are part of the stadium (consumer, api, implementation of technology,
> plugins/drivers).
> 
> The distinction between implementation of technology, plugins/drivers and
> api is not justified IMO because from a neutron standpoint they all look
> the same: they leverage the pluggable extensions to the Neutron core
> framework. As I attempted to say: we have existing plugins and drivers that
> implement APIs, and we have plugins that implement technology, so the extra
> classification seems overspecification.
> 
> > flaper87: I agree a driver should not be independent
> 
> Why, what's your rationale? If we dig deeper, some drivers are small code
> drops with no or untraceable maintainers. Some 

[openstack-dev] [Fuel] Private links

2015-12-09 Thread Roman Prykhodchenko
Folks,

over the last two days I have marked several bugs as Incomplete because they 
were referring to one or more private resources that are not accessible to 
anyone who does not have a @mirantis.com account.

Please keep in mind that Fuel is an open source project and the bug tracker we 
use is absolutely public. There should not be any private links in public bugs 
on Launchpad. Please don't attach links to files on a corporate Google Drive or 
to Jira tickets. The same rule applies to code reviews.

That said, I’d like to confirm that we can submit world-accessible links to BVT 
results. If not — that should be fixed ASAP.


- romcheg






Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:
> On 09/12/15 11:33 +, Kekane, Abhishek wrote:
> >Hi Devs,
> >
> > 
> >
> >We are adding support for returning ‘x-openstack-request-id’  to the caller 
> >as
> >per the design proposed in cross-project specs:
> >
> >http://specs.openstack.org/openstack/openstack-specs/specs/
> >return-request-id.html
> >
> > 
> >
> >Problem Description:
> >
> >Cannot add a new property of list type to the warlock.model object.
> >
> > 
> >
> >How is a model object created:
> >
> >Let’s take an example of glanceclient.api.v2.images.get() call [1]:
> >
> > 
> >
> >Here, after getting the response, we call the model() method. This model()
> >does the job of creating a warlock.model object (essentially a dict) based
> >on the schema given as an argument (the image schema retrieved from glance
> >in this case). Inside model(), the raw() method simply returns the image
> >schema as a JSON object. The advantage of this warlock.model object over a
> >simple dict is that it validates any changes to the object based on the
> >rules specified in the reference schema. The keys of this model object are
> >available as object properties to the caller.
> >
> > 
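The validation behaviour described above can be sketched in plain Python. This is an illustrative stand-in for the warlock model, not glance's or warlock's actual code; the class name and internals are invented:

```python
# Illustrative stand-in for a warlock-style model: a dict-like object that
# validates every change against a reference schema. All names here are
# hypothetical; warlock's real implementation differs.
class SchemaValidatedModel:
    _TYPES = {"string": str, "array": list}

    def __init__(self, schema, **kwargs):
        # bypass our own __setattr__ for internal bookkeeping
        object.__setattr__(self, "_schema", schema)
        object.__setattr__(self, "_data", {})
        for key, value in kwargs.items():
            setattr(self, key, value)

    def __setattr__(self, key, value):
        props = self._schema.get("properties", {})
        if key in props:
            rule = props[key]
        else:
            # fall back to additionalProperties, as the image schema does
            rule = self._schema.get("additionalProperties")
            if not rule:
                raise AttributeError("additional properties not allowed")
        expected = self._TYPES[rule["type"]]
        if not isinstance(value, expected):
            raise ValueError(f"{key} must be of type {rule['type']}")
        self._data[key] = value

    def __getattr__(self, key):
        # keys of the model are exposed as object properties to the caller
        try:
            return self._data[key]
        except KeyError:
            raise AttributeError(key)
```

With "additionalProperties": {"type": "string"} in the schema, assigning a list-valued property is rejected, which is the limitation the thread runs into.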
> >
> >Underlying reason:
> >
> >The schema for the different sub-APIs is returned a bit differently. For the
> >images and metadef APIs, glance.schema.Schema.raw() is used, which returns a
> >schema containing "additionalProperties": {"type": "string"}. Whereas for the
> >members and tasks APIs, glance.schema.Schema.minimal() is used, returning a
> >schema object which does not contain "additionalProperties".
> >
> > 
> >
> >So we can add extra properties of any type to the model object returned from
> >members or tasks API but for images and metadef APIs we can only add 
> >properties
> >which can be of type string. Also for the latter case we depend on the glance
> >configuration to allow additional properties.
> >
> > 
> >
> >As per our analysis we have come up with two approaches for resolving this
> >issue:
> >
> > 
> >
> >Approach #1:  Inject request_ids property in the warlock model object in 
> >glance
> >client
> >
> >Here we do the following:
> >
> >1. Inject the ‘request_ids’ as additional property into the model object
> >(returned from model())
> >
> >2. Return the model object which now contains request_ids property
> >
> > 
> >
> >Limitations:
> >
> >1. Because the glance schemas for images and metadef only allow additional
> >properties of type string, even though the natural type of request_ids should
> >be a list, we have to make it a comma-separated 'string' of request ids as a
> >compromise.
> >
> >2. A lot of extra code is needed to wrap objects returned from the client API
> >so that the caller can get request ids. For example, we need to write wrapper
> >classes for dict, list, str, tuple, and generator.
> >
> >3. Not a good design, as we are adding a property which should actually be a
> >base property but is added as an additional property as a compromise.
> >
> >4. There is a dependency on glance configuration as to whether
> >custom/additional properties are allowed or not. [2]
> >
> > 
> >
> >Approach #2:  Add ‘request_ids’ property to all schema definitions in glance
> >
> > 
> >
> >Here we add  ‘request_ids’ property as follows to the various APIs (schema):
> >
> > 
> >
> >"request_ids": {
> >  "type": "array",
> >  "items": {
> >    "type": "string"
> >  }
> >}
> >
> > 
> >
> >Doing this will make the changes in glance client very simple compared to
> >approach #1.
> >
> >This also looks like a better design, as it will be consistent.
> >
> >We simply need to populate the request_ids property in the various API calls,
> >for example glanceclient.v2.images.get().
> >
> 
> Hey Abhishek,
> 
> thanks for working on this.
> 
> To be honest, I'm a bit confused on why the request_id needs to be an
> attribute of the image. Isn't it passed as a header? Does it have to
> be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be
part of the payload returned from each method. In other clients
that work with simple data types, they've subclassed dict, list,
etc. to add the extra property. This adds the request id to the
return value without making a breaking change to the API of the
client library.
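A minimal sketch of that wrapping approach — subclass the native return types so existing callers keep the old API while new callers gain a request_ids attribute. The class names here are made up for illustration, not the actual client code:

```python
# Hedged sketch of the wrapper approach described above: subclass list/dict
# so the return value behaves exactly like before but also carries the
# x-openstack-request-id values. Class names are invented.
class ListWithMeta(list):
    """A list that also carries x-openstack-request-id values."""
    def __init__(self, values=(), request_ids=None):
        super().__init__(values)
        self.request_ids = list(request_ids or [])

class DictWithMeta(dict):
    """A dict that also carries x-openstack-request-id values."""
    def __init__(self, mapping=(), request_ids=None):
        super().__init__(mapping)
        self.request_ids = list(request_ids or [])

# Existing callers still see a plain list; new callers can read request_ids.
images = ListWithMeta([{"id": "img-1"}], request_ids=["req-123"])
```

Since the subclasses compare equal to their plain counterparts, this is not a breaking change for callers that only use the old interface.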

Abhishek, would it be possible to add the request id information
to the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that
data takes (dictionary, JSON blob, etc.). If it's a dictionary
visible to the client code it would be straightforward to add data
to it.
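If the schema is visible to the client as a plain dictionary, that client-side patch could look roughly like this. The helper name and the schema shape are assumptions, not glance client code:

```python
# Hypothetical sketch: inject a request_ids property into the schema dict
# on the client side, before it is handed to warlock, so the resulting
# model accepts a list of request ids as a first-class property.
def add_request_ids_property(schema):
    schema.setdefault("properties", {})["request_ids"] = {
        "type": "array",
        "items": {"type": "string"},
    }
    return schema

# Assumed minimal image schema, for illustration only.
image_schema = {"name": "image", "properties": {"name": {"type": "string"}}}
patched = add_request_ids_property(image_schema)
```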

Failing that, is it possible to change warlock to allow extra
properties with arbitrary types to be added to objects? Because
validating inputs to the constructor is all well and good, but
breaking the ability to add data to an object 

Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Anita Kuno
On 12/09/2015 09:02 AM, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
>> Thierry Carrez wrote:
>>> Thierry Carrez wrote:
>>>> The nomination deadline has passed, we have two candidates!
>>>>
>>>> I'll be setting up the election shortly (with Jeremy's help to generate
>>>> election rolls).
>>>
>>> OK, the election just started. Recent contributors to a stable branch
>>> (over the past year) should have received an email with a link to vote.
>>> If you haven't and think you should have, please contact me privately.
>>>
>>> The poll closes on Tuesday, December 8th at 23:59 UTC.
>>> Happy voting!
>>
>> Election is over[1], let me congratulate Matt Riedemann on his election!
>> Thanks to everyone who participated in the vote.
>>
>> Now I'll submit the request for spinning off as a separate project team
>> to the governance ASAP, and we should be up and running very soon.
>>
>> Cheers,
>>
>> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
>>
> 
> Congratulations, Matt!
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks to both candidates for putting their name forward, it is nice to
have an election.

Congratulations Matt,
Anita.



Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Jaume Devesa
Hi Galo,

I think the goal of this split is well explained by Sandro in the first
mails of the chain:

1. Downstream packaging
2. Tagging the delivery properly as a library
3. Adding as a project on pypi

OpenStack provides us a tarballs web page [1] for each branch of each
project in the infrastructure. Then, projects like Delorean can download
these tarballs of the master branches, create the packages and host them in
a target repository for each of the rpm-like distributions [2]. I am pretty
sure there is something similar for Ubuntu.

Everything is done in a very straightforward and standardized way, because
every repo has its own deliverable. You can look at how they are packaged
and you won't see many differences between them. Packaging
python-midonetclient will be trivial if it is separated into its own repo.
It will be complicated, and we will have to do tricky things, if it is a
directory inside the midonet repo. And I am not sure the Ubuntu and RDO
communities will allow us to have weird packaging metadata in repos.

So to me the main reason is

4. Leverage all the infrastructure and procedures that OpenStack offers to
integrate MidoNet
as best as possible with the release process and delivery.


Regards,

[1]: http://tarballs.openstack.org/
[2]: http://trunk.rdoproject.org

On 9 December 2015 at 15:52, Antoni Segura Puimedon 
wrote:

>
> -- Forwarded message --
> From: Galo Navarro 
> Date: Wed, Dec 9, 2015 at 2:48 PM
> Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Cc: Jaume Devesa 
>
>
> >> Ditto. We already have a mirror repo of pyc for this purpose
> >> https://github.com/midonet/python-midonetclient, synced daily.
> >
> > One of the problems with that is that it does not have any git log
> > history, nor does it feel like a coding project at all.
>
> Of course, because the goal of this repo is not to provide a
> changelog. It's to provide an independent repo. If you want git log,
> you should do a git log python-midonetclient in the source repo
> (/midonet/midonet).
>
> > Allow me to put forward a solution that will allow you keep the
> development
> > in the midonet tree while, at the same time, having a proper repository
> > with identifiable patches in github.com/midonet/python-midonetclient
>
> Thanks, but I insist: can we please clarify *what* are we trying to
> achieve, before we jump into solutions?
>
> g
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jaume Devesa
Software Engineer at Midokura


[openstack-dev] [all] Service Group TNG meeting - Thursday 15:00 UTC

2015-12-09 Thread Sean Dague
The Service Group TNG meeting is officially kicking off, and will be
Thursdays at 15:00 UTC -
https://wiki.openstack.org/wiki/Meetings/ServiceCatalogTNG

It will be in the newly formed #openstack-meeting-cp channel.

The Agenda for the next meeting is:

* Realistic Goals for Mitaka
* Service Catalog Spec
* Project Id Optionality in projects
* What URLs are needed

Anyone interested in joining is welcome to attend. We realize this is
not a TZ that is going to work for everyone, so expect to carry forward
as much of the conversation as possible back into the mailing list. This
is mostly a weekly drum beat to ensure we don't fall off the wagon and
have stuff forgotten.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Dolph Mathews
On Wed, Dec 9, 2015 at 2:25 AM, Thomas Goirand  wrote:

> On 12/08/2015 04:09 AM, Dolph Mathews wrote:
> > In Debian, many services/daemons are run, then their API is used by
> the
> > package. In the case of Keystone, for example, it is possible to ask,
> > via Debconf, that Keystone registers itself in the service catalog.
> If
> > we get Keystone within Apache, it becomes at least harder to do so.
> >
> >
> > You start the next paragraph with "the other issue," but I'm not clear
> > on what the issue is here? This sounds like a bunch of complexity that
> > is working as you expect it to.
>
> Ok, let me try again. If the only way to get Keystone up and running is
> through a web server like Apache, then starting Keystone and then using
> its API is not as easy as if it was a daemon on its own. For example,
> there may be some other types of configuration in use in the web server
> which the package doesn't control.
>

I would make the assumption that the node is solely intended for use by
keystone. Either it is solely for use by keystone and I care about uptime,
or the node is not solely for use by keystone and thus I don't care about
uptime.


>
> > The other issue is that if all services are sharing the same web
> server,
> > restarting the web server restarts all services. Or, said otherwise:
> if
> > I need to change a configuration value of any of the services served
> by
> > Apache, I will need to restart them all, which is very annoying: I
> very
> > much prefer to just restart *ONE* service if I need.
> >
> > As a deployer, I'd solve this by running one API service per server. As
> > a packager, I don't expect you to solve this outside of AIO
> > architectures, in which case, uptime is obviously not a concern.
>
> Let's say there's a security issue in Keystone. One would expect that a
> simple "apt-get dist-upgrade" will do all. If Keystone is installed in a
> web server, should the package aggressively attempts to restart it? If
> not, what is the proposed solution to have Keystone restarted in this case?
>

Yes, restart the web server aggressively, because I'm likely doing the
upgrade with the intent of receiving the immediate benefit of updates.


>
> > Also, something which we learned the hard way at Mirantis: it is
> *very*
> > annoying that Apache restarts every Sunday morning by default in
> > distributions like Ubuntu and Debian (I'm not sure for the other
> > distros). No, the default config of logrotate and Apache can't be
> > changed in distros just to satisfy OpenStack users: there's other
> users
> > of Apache in these distros.
> >
> > Yikes! Is the debian Apache package intended to be useful in production?
> > That doesn't sound like an OpenStack-specific problem at all. How is
> > logrotate involved? Link?
>
> It is logrotate which restarts apache. From /etc/logrotate.d/apache2:
>
> /var/log/apache2/*.log {
>         daily
>         missingok
>         rotate 14
>         compress
>         delaycompress
>         notifempty
>         create 640 root adm
>         sharedscripts
>         postrotate
>                 if /etc/init.d/apache2 status > /dev/null ; then \
>                         /etc/init.d/apache2 reload > /dev/null; \
>                 fi;
>         endscript
>         prerotate
>                 if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
>                         run-parts /etc/logrotate.d/httpd-prerotate; \
>                 fi; \
>         endscript
> }
>
> This is to be considered quite harmful. Yes, I am fully aware that this
> file can be tweaked. Though this is the default, and it is always best
> to provide a default which works for our users. And in this case, no
> OpenStack package maintainer controls it.
>
> > Then, yes, uWSGI becomes a nice option. I used it for the Barbican
> > package, and it worked well. Though the uwsgi package in Debian isn't
> > very well maintained, and multiple times, Barbican could have been
> > removed from Debian testing because of RC bugs against uWSGI.
> >
> > uWSGI is a nice option, but no one should be tied to that either-- in
> > the debian world or otherwise.
>
> For Debian users, I intend to provide useful configuration so that
> everything works by default. OpenStack is complex enough. It is my role,
> as a package maintainer, to make it easier to use.
>
> One of the options I have is to create new packages like this:
>
> python-keystone -> Already exists, holds the Python code
> keystone -> Currently contains the Eventlet daemon
>
> I could transform the latter into a meta package depending on any of the
> below options:
>
> keystone-apache -> Default configuration for Apache
> keystone-uwsgi -> An already configured startup script using uwsgi
>
> Though I'm not sure the FTP masters will love me if I create so many new
> packages just to create automated configurations... Also, maybe it's
> just best to provide *one* 

Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-09 Thread Sebastien Badia
On Tue, Dec 08, 2015 at 11:49:08AM (-0500), Emilien Macchi wrote:
> Hi,
> 
> Back in "old days", Cody was already core on the modules, when they were
> hosted by Puppetlabs namespace.
> His contributions [1] are very valuable to the group:
> * strong knowledge on Puppet and all dependencies in general.
> * very helpful to debug issues related to Puppet core or dependencies
> (beaker, etc).
> * regular attendance to our weekly meeting
> * pertinent reviews
> * very understanding of our coding style
> 
> I would like to propose having him back part of our core team.
> As usual, we need to vote.

Of course, a big +1!

Thanks Cody!

Seb




Re: [openstack-dev] [nova] stable/liberty 13.1.0 release planning

2015-12-09 Thread Matt Riedemann



On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]


You probably mean 12.0.1 ?



Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I 
thought that made it a minor version bump to 12.1.0.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [all] making project_id optional in API URLs

2015-12-09 Thread michael mccune

On 12/08/2015 05:59 PM, Adam Young wrote:

I think it is kind of irrelevant. It can be there or not be there in the
URL itself, so long as it does not show up in the service catalog. From
a policy standpoint, having the project in the URL means that you can
do an access control check without fetching the object from the
database; you should, however, confirm that the object returned belongs to
the project at a later point.


From the policy standpoint, does it matter whether the project id appears
in the URL or in the headers?
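The two-step check Adam describes can be sketched as follows. The function names are invented for illustration; this is not actual OpenStack policy code:

```python
# Invented sketch of the two-step access control Adam describes: a cheap
# scope check using the project id from the URL (no database access),
# followed by an ownership confirmation once the object has been fetched.
def url_scope_check(url_project_id, token_project_id):
    # can run before touching the database
    return url_project_id == token_project_id

def confirm_ownership(fetched_object, token_project_id):
    # run after the object is loaded, as a second line of defence
    return fetched_object.get("project_id") == token_project_id
```

If the project id came from a header instead of the URL, the first check would work the same way; only the place the id is read from changes.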


mike




Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Anita Kuno
On 12/09/2015 07:06 AM, Sean Dague wrote:
> On 12/09/2015 01:46 AM, Armando M. wrote:
>>
>>
>> On 3 December 2015 at 02:21, Thierry Carrez wrote:
>>
>> Armando M. wrote:
>> > On 2 December 2015 at 01:16, Thierry Carrez wrote:
>> >> Armando M. wrote:
>> >> >> One solution is, like you mentioned, to make some (or all) of 
>> them
>> >> >> full-fledged project teams. Be aware that this means the TC 
>> would judge
>> >> >> those new project teams individually and might reject them if 
>> we feel
>> >> >> the requirements are not met. We might want to clarify what 
>> happens
>> >> >> then.
>> >> >
>> >> > That's a good point. Do we have existing examples of this or 
>> would we be
>> >> > sailing in uncharted waters?
>> >>
>> >> It's been pretty common that we rejected/delayed applications for
>> >> projects where we felt they needed more alignment. In such cases, 
>> the
>> >> immediate result for those projects if they are out of the Neutron
>> >> "stadium" is that they would fall from the list of official 
>> projects.
>> >> Again, I'm fine with that outcome, but I want to set expectations
>> >> clearly :)
>> >
>> > Understood. It sounds to me that the outcome would be that those
>> > projects (that may end up being rejected) would show nowhere on [1], 
>> but
>> > would still be hosted and can rely on the support and services of the
>> > OpenStack community, right?
>> >
>> > [1] http://governance.openstack.org/reference/projects/
>>
>> Yes they would still be hosted on OpenStack development infrastructure.
>> Contributions would no longer count toward ATC status, so people who
>> only contribute to those projects would no longer be able to vote in the
>> Technical Committee election. They would not have "official" design
>> summit space either -- they can still camp in the hallway though :)
>>
>>
>> Hi folks,
>>
>> For whom of you is interested in the conversation, the topic was brought
>> for discussion at the latest TC meeting [1]. Unfortunately I was unable
>> to join, however I would like to try and respond to some of the comments
>> made to clarify my position on the matter:
>>
>>> ttx: the neutron PTL say he can't vouch for anything in the neutron
>> "stadium"
>>
>> To be honest that's not entirely my position.
>>
>> The problem stems from the fact that, if I am asked what the stadium
>> means, as a PTL I can't give a straight answer; ttx put it relatively
>> well (and I quote him): by adding all those projects under your own
>> project team, you bypass the Technical Committee approval that they
>> behave like OpenStack projects and are produced by the OpenStack
>> community. The Neutron team basically vouches for all of them to be on
>> par. As far as the Technical Committee goes, they are all being produced
>> by the same team we originally blessed (the Neutron project team).
>>
>> The reality is: some of these projects are not produced by the same
>> team, they do not behave the same way, and they do not follow the same
>> practices and guidelines. For the stadium to make sense, in my humble
>> opinion, a definition of these practices should happen and enforcement
>> should follow, but who's got the time for policing and enforcing
>> eviction, especially on a large scale? So we either reduce the scale
>> (which might not be feasible because in OpenStack we're all about
>> scaling and adding more and more and more), or we address the problem
>> more radically by evolving the relationship from tight aggregation to
>> loose association; this way who needs to vouch for the Neutron
>> relationship is not the Neutron PTL, but the person sponsoring the
>> project that wants to be associated to Neutron. On the other hand, the
>> vouching may still be pursued, but for a much more focused set of
>> initiatives that are led by the same team.
>>
>>> russellb: I attempted to start breaking down the different types of
>> repos that are part of the stadium (consumer, api, implementation of
>> technology, plugins/drivers).
>>
>> The distinction between implementation of technology, plugins/drivers
>> and api is not justified IMO because from a neutron standpoint they all
>> look the same: they leverage the pluggable extensions to the
>> Neutron core framework. As I attempted to say: we have existing plugins
>> and drivers that implement APIs, and we have plugins that implement
>> technology, so the extra classification seems overspecification.
>>
>>> flaper87: I agree a driver should not be independent
>>
>> Why, what's your rationale? If we dig deeper, some drivers are small
>> code drops with no or untraceable maintainers. Some are actively
>> 

Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Antoni Segura Puimedon
On Wed, Dec 9, 2015 at 2:41 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Tue, Dec 8, 2015 at 1:58 PM, Galo Navarro  wrote:
>
>> Hi Sandro,
>>
>> >> 1) (Downstream) packaging: midonet and python-midonetclient are two
>> >> distinct packages, and therefore should have distinct upstream
>> >> tarballs - which are compiled on a per repo basis.
>>
>> This is actually not accurate: there is no such thing as a midonet
>> package. The midonet repo produces 4 or 5 separate packages: agent,
>> cluster, tools, py client.
>>
>> I'd like to understand a bit better what exactly you're trying to
>> achieve. Is it to produce tarballs? All of them? Just
>> python-midonetclient?
>>
>> Let's examine the concrete requirements before rushing headlong into
>> highly disruptive changes like splitting repos. For example, a
>> py-midonetclient tarball can be built already without having a
>> separate repo.
>>
>> > 3) In order to put python-midonetclient on PyPI, it's probably
>> > required to be in its own repo as well, isn't it? Because that's
>> > another requirement [3]
>>
>> Ditto. We already have a mirror repo of pyc for this purpose
>> https://github.com/midonet/python-midonetclient, synced daily.
>>
>
> One problem with that is that it does not have any git log history,
> nor does it feel like a coding project at all.
>
> Allow me to put forward a solution that will allow you to keep the development
> in the midonet tree while, at the same time, having a proper repository
> with identifiable patches in github.com/midonet/python-midonetclient
>
> Look at the repo I created [1] and especially at its commit history [2]
>
> As you can see, all the commit history relevant to
> midonet/python-midonetclient (and only that) is present.
>
> This is generated in the following way. There should be a job that runs
> once in the
> midonet repo:
>
> git format-patch -o "${HOME}/patches_current" \
> --relative=python-midonetclient \
> 7aef7ea7845a2125696303a277d40bd45c9240e2..master
>
> Then, each day it should do:
>
> cd ${JOB_HOME}
> git clone https://github.com/celebdor/python-midonetclient.git
> git clone https://github.com/midonet/midonet.git
>
> pushd midonet
> git format-patch -o "${JOB_HOME}/patches_new" \
> --relative=python-midonetclient \
> 7aef7ea7845a2125696303a277d40bd45c9240e2..master
> popd
> pushd python-midonetclient
>
> for file in `diff <(ls -1a "${HOME}/patches_current") <(ls -1a
> "${JOB_HOME}/patches_new") | cut -f2 -d' '`
> do
> git am < "${JOB_HOME}/patches_new/$file"
> done
>

Obviously at this point it should do a "git push" :P


> popd
> mv patches_new "${HOME}/patches_current"
>
> It should be quite straightforward to change whichever job you currently
> use to
> this.
>
> The last remaining issue will be that of tags. github.com/midonet/midonet
> is not
> tagging all the releases. However, python-midonetclient should, so I would
> just
> ask that when you make a release you push the tag to python-midonetclient
> as well.
>
> [1] https://github.com/celebdor/python-midonetclient/
> [2] https://github.com/celebdor/python-midonetclient/commits/master
>
>
> Regards,
>
> Toni
>
>
>> g
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread Flavio Percoco

On 09/12/15 11:33 +, Kekane, Abhishek wrote:

Hi Devs,



We are adding support for returning ‘x-openstack-request-id’  to the caller as
per the design proposed in cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/specs/
return-request-id.html



Problem Description:

Cannot add a new property of list type to the warlock.model object.



How is a model object created:

Let’s take an example of glanceclient.api.v2.images.get() call [1]:



Here, after getting the response, we call the model() method. This model() does
the job of creating a warlock.model object (essentially a dict) based on the
schema given as argument (the image schema retrieved from glance in this case).
Inside model(), the raw() method simply returns the image schema as a JSON
object. The advantage of this warlock.model object over a simple dict is that
it validates any changes to the object based on the rules specified in the
reference schema. The keys of this model object are available as object
properties to the

caller.
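As a rough illustration of that validation behaviour, here is a stdlib-only toy
stand-in for warlock's model objects. This is a deliberate simplification
(hypothetical names; the real warlock library enforces full jsonschema rules),
but it shows why only string-typed extras get through for images:

```python
class InvalidOperation(ValueError):
    """Raised when a change violates the reference schema (as warlock does)."""


def model_factory(schema):
    """Return a dict subclass that validates new keys against `schema`.

    Toy stand-in for warlock.model_factory: it only enforces the
    'additionalProperties' rule that this thread is concerned with.
    """
    type_map = {"string": str, "array": list}

    class Model(dict):
        def __setitem__(self, key, value):
            if key not in schema.get("properties", {}):
                extra = schema.get("additionalProperties")
                if isinstance(extra, dict):
                    if not isinstance(value, type_map[extra["type"]]):
                        raise InvalidOperation(
                            "additional property %r must be of type %s"
                            % (key, extra["type"]))
            dict.__setitem__(self, key, value)

    return Model


# Image-style schema: additional properties may only be strings.
Image = model_factory({"properties": {"name": {"type": "string"}},
                       "additionalProperties": {"type": "string"}})
img = Image(name="cirros")
img["request_ids"] = "req-1,req-2"          # accepted: a comma-separated string
try:
    img["request_ids"] = ["req-1", "req-2"]  # rejected: a list is not a string
except InvalidOperation as exc:
    print(exc)
```

This is exactly the corner approach #1 runs into: the images/metadef schema
admits only string-typed extras, so a list of request ids has to be flattened
into one string.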



Underlying reason:

The schema for different sub-APIs is returned a bit differently. For the images
and metadef APIs, glance.schema.Schema.raw() is used, which returns a schema
containing “additionalProperties”: {“type”: “string”}. Whereas for the members
and tasks APIs, glance.schema.Schema.minimal() is used, which returns a schema
object that does not contain “additionalProperties”.



So we can add extra properties of any type to the model object returned from
the members or tasks APIs, but for the images and metadef APIs we can only add
properties of type string. Also, in the latter case we depend on the glance
configuration to allow additional properties.



As per our analysis we have come up with two approaches for resolving this
issue:



Approach #1:  Inject request_ids property in the warlock model object in glance
client

Here we do the following:

1. Inject the ‘request_ids’ as additional property into the model object
(returned from model())

2. Return the model object which now contains request_ids property



Limitations:

1. Because the glance schemas for images and metadef only allow additional
properties of type string, even though the natural type of request_ids is a
list, we have to make it a comma-separated ‘string’ of request ids as a
compromise.

2. A lot of extra code is needed to wrap objects returned from the client API
so that the caller can get request ids. For example, we need to write wrapper
classes for dict, list, str, tuple and generator.

3. Not a good design, as we are adding as an additional property something
that should really be a base property.

4. There is a dependency on glance configuration for whether custom/additional
properties are allowed or not. [2]



Approach #2:  Add ‘request_ids’ property to all schema definitions in glance



Here we add  ‘request_ids’ property as follows to the various APIs (schema):



"request_ids": {
  "type": "array",
  "items": {
    "type": "string"
  }
}



Doing this will make the changes in glance client very simple compared to
approach #1.

This also looks a better design as it will be consistent.

We simply need to set the request_ids property in the various API calls, for
example glanceclient.v2.images.get().



Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

Cheers,
Flavio




Please let us know which approach is better or any suggestions for the same.



[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/
v2/images.py#L179

[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#
L944





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
> Thierry Carrez wrote:
> > Thierry Carrez wrote:
> >> The nomination deadline is passed, we have two candidates!
> >>
> >> I'll be setting up the election shortly (with Jeremy's help to generate
> >> election rolls).
> > 
> > OK, the election just started. Recent contributors to a stable branch
> > (over the past year) should have received an email with a link to vote.
> > If you haven't and think you should have, please contact me privately.
> > 
> > The poll closes on Tuesday, December 8th at 23:59 UTC.
> > Happy voting!
> 
> Election is over[1], let me congratulate Matt Riedemann on his election!
> Thanks to everyone who participated in the vote.
> 
> Now I'll submit the request for spinning off as a separate project team
> to the governance ASAP, and we should be up and running very soon.
> 
> Cheers,
> 
> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
> 

Congratulations, Matt!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-ansible] Mid Cycle Sprint

2015-12-09 Thread Curtis
On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
 wrote:
> Hi everyone,
>
> At the Mitaka design summit in Tokyo we had some corridor discussions about
> doing a mid-cycle meetup for the purpose of continuing some design
> discussions and doing some specific sprint work.
>
> ***
> I'd like indications of who would like to attend and what
> locations/dates/topics/sprints would be of interest to you.
> ***
>

I'd like to get more involved in openstack-ansible. I'll be going to
the operators mid-cycle in Feb, so could stay later and attend in West
London. However, I could likely make it to San Antonio as well. Not
sure if that helps but I will definitely try to attend wherever it
occurs.

Thanks.

> For guidance/background I've put some notes together below:
>
> Location
> 
> We have contributors, deployers and downstream consumers across the globe so
> picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
> West London) and in the US (San Antonio) and are happy for us to make use of
> them.
>
> Dates
> -
> Most of the mid-cycles for upstream OpenStack projects are being held in
> January. The Operators mid-cycle is on February 15-16.
>
> As I feel that it's important that we're all as involved as possible in
> these events, I would suggest that we schedule ours after the Operators
> mid-cycle.
>
> It strikes me that it may be useful to do our mid-cycle immediately after
> the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
> many of us.
>
> Format
> --
> The format of the summit is really for us to choose, but typically they're
> formatted along the lines of something like this:
>
> Day 1: Big group discussions similar in format to sessions at the design
> summit.
>
> Day 2: Collaborative code reviews, usually performed on a projector, where
> the goal is to merge things that day (if a review needs more than a single
> iteration, we skip it. If a review needs small revisions, we do them on the
> spot).
>
> Day 3: Small group / pair programming.
>
> Topics
> --
> Some topics/sprints that come to mind that we could explore/do are:
>  - Install Guide Documentation Improvement [1]
>  - Development Documentation Improvement (best practises, testing, how to
> develop a new role, etc)
>  - Upgrade Framework [2]
>  - Multi-OS Support [3]
>
> [1] https://etherpad.openstack.org/p/oa-install-docs
> [2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
> [3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>



-- 
Blog: serverascode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Dolph Mathews
Benchmarks always appreciated!

But, these types of benchmarks are *entirely* useless unless you can
provide the exact configuration you used for each scenario so that others
can scrutinize the test method and reproduce your results. So, off the top
of my head, I'm looking for:

* keystone.conf
* distro used for testing, in case they override the project's defaults
* all nginx config files
* all uwsgi config files
* apache config, including virtual hosts and mods
* your test client and its configuration
* server & client architecture, and at least some idea of what lies in
between (networking, etc)
* whatever else I'm forgetting

A mailing list is probably not the best method to provide anything other
than a summary; so I'd suggest publishing the details in a gist, blog post,
or both.

And to comment on the results themselves: you shouldn't be seeing that big
of a performance gap between httpd and everything else; something is
fundamentally different about that configuration. These are just web
servers, after all. Choosing between them should not be a matter of
performance, but it should be a choice of documentation, licensing,
community, operability, supportability, reliability, etc. Performance
should be relatively similar, and thus a much lower priority in making your
selection.

On Tue, Dec 8, 2015 at 10:09 PM, Ginwala, Aliasgar 
wrote:

>
>
> Hi All:
>
> Just to inform Steve and all the folks who brought up this talk ;We did
> some benchmarking using wsgi, apache and nginx for keystone with mysql as
> token backend, and we got the following results on the Juno version. Here is
> a brief highlight of the results we got.
>
> spawning 100 users per sec for create token, below are the results:
>
> *Using nginx with uwsgi: *
> rps *32*—(reqests/sec)
> median time ~ 3.3 sec
> no of processes 20
>
> *using apache *
> rps *75*
> median time ~ 1.3 sec
> avg time - 1.5 sec
> no of processes 20
>
> *using wsgi *
> rps *28*
> median time ~ 3.4
> avg 3.5
> no of processes 20
>
>
> We are planning to switch to apache since we are not seeing good results
> using nginx with uwsgi. Maybe some more added support is required for
> nginx performance. We did not encounter the auto-restart issue in testing
> yet, and hence are open to more inputs.
>
> Any other suggestions are welcome too. Let us know in case of queries.
>
> Regards,
> Aliasgar
>
> On 8 December 2015 at 07:53, Thomas Goirand  wrote:
>
>> On 12/01/2015 07:57 AM, Steve Martinelli wrote:
>> > Trying to summarize here...
>> >
>> > - There isn't much interest in keeping eventlet around.
>> > - Folks are OK with running keystone in a WSGI server, but feel they are
>> > constrained by Apache.
>> > - uWSGI could help to support multiple web servers.
>> >
>> > My opinion:
>> >
>> > - Adding support for uWSGI definitely sounds like it's worth
>> > investigating, but not achievable in this release (unless someone
>> > already has something cooked up).
>> > - I'm tempted to let eventlet stick around another release, since it's
>> > causing pain on some of our operators.
>> > - Other folks have managed to run keystone in a web server (and
>> > hopefully not feel pain when doing so!), so it might be worth getting
>> > technical details on just how it was accomplished. If we get an OK from
>> > the operator community later on in mitaka, I'd still be OK with removing
>> > eventlet, but I don't want to break folks.
>> >
>> > stevemar
>> >
>> > From: John Dewey 
>> > 100% agree.
>> >
>> > We should look at uwsgi as the reference architecture. Nginx/Apache/etc
>> > should be interchangeable, and up to the operator which they choose to
>> > use. Hell, with tcp load balancing now in opensource Nginx, I could get
>> > rid of Apache and HAProxy by utilizing uwsgi.
>> >
>> > John
>>
>> The main problem I see with running Keystone (or any other service) in a
>> web server, is that *I* (as a package maintainer) will lose control
>> over when the service is started. Let me explain why that is important
>> for me.
>>
>> In Debian, many services/daemons are run, then their API is used by the
>> package. In the case of Keystone, for example, it is possible to ask,
>> via Debconf, that Keystone registers itself in the service catalog. If
>> we get Keystone within Apache, it becomes at least harder to do so.
>>
>
> I was going to leave this up to others to comment on here, but IMO -
> excellent. Anyone that is doing an even semi-serious deployment of
> OpenStack is going to require puppet/chef/ansible or some form of
> orchestration layer for deployment. Even for test deployments it seems to
> me that it's crazy for this sort of functionality to be handled by debconf.
> The deployers of the system are going to understand if they want to use
> eventlet or apache and should therefore understand what restarting apache
> on a system implies.
>
>
>>
>> The other issue is that if all services are sharing the same web server,

Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Galo Navarro
>> Ditto. We already have a mirror repo of pyc for this purpose
>> https://github.com/midonet/python-midonetclient, synced daily.
>
> One problem with that is that it does not have any git log history,
> nor does it feel like a coding project at all.

Of course, because the goal of this repo is not to provide a
changelog. It's to provide an independent repo. If you want git log,
you should run "git log python-midonetclient" in the source repo
(midonet/midonet).

> Allow me to put forward a solution that will allow you to keep the development
> in the midonet tree while, at the same time, having a proper repository
> with identifiable patches in github.com/midonet/python-midonetclient

Thanks, but I insist: can we please clarify *what* we are trying to
achieve, before we jump into solutions?

g

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Davanum Srinivas
Congrats, Matt!

-- Dims

On Wed, Dec 9, 2015 at 5:02 PM, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
>> Thierry Carrez wrote:
>> > Thierry Carrez wrote:
>> >> The nomination deadline is passed, we have two candidates!
>> >>
>> >> I'll be setting up the election shortly (with Jeremy's help to generate
>> >> election rolls).
>> >
>> > OK, the election just started. Recent contributors to a stable branch
>> > (over the past year) should have received an email with a link to vote.
>> > If you haven't and think you should have, please contact me privately.
>> >
>> > The poll closes on Tuesday, December 8th at 23:59 UTC.
>> > Happy voting!
>>
>> Election is over[1], let me congratulate Matt Riedemann on his election!
>> Thanks to everyone who participated in the vote.
>>
>> Now I'll submit the request for spinning off as a separate project team
>> to the governance ASAP, and we should be up and running very soon.
>>
>> Cheers,
>>
>> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
>>
>
> Congratulations, Matt!
>
> Doug
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-09 Thread Ken'ichi Ohmichi
2015-12-09 21:20 GMT+09:00 Sean Dague :
> On 12/08/2015 11:47 PM, Ken'ichi Ohmichi wrote:
>> Hi Sylvain,
>>
>> 2015-12-04 17:48 GMT+09:00 Sylvain Bauza :

 That leaves the out-of-tree discussion about custom filters and how we
 could have a consistent behaviour given that. Should we accept something in
 a specific deployment while another deployment could 401 against it ? Mmm,
 bad to me IMHO.
>>>
>>> We can have code to check that the out-of-tree filters don't expose the
>>> same hints as any in-tree filter.
>>>
>>> Sure, and thank you for that, that was missing in the past. That said, there
>>> are still some interoperability concerns, let me explain : as a cloud
>>> operator, I'm now providing a custom filter (say MyAwesomeFilter) which does
>>> the lookup for an hint called 'my_awesome_hint'.
>>>
>>> If we enforce a strict validation (and not allow to accept any hint) it
>>> would mean that this cloud would accept a request with 'my_awesome_hint'
>>> while another cloud which wouldn't be running MyAwesomeFilter would then
>>> deny the same request.
>>
>> I am thinking the operator/vendor's own filter should have some
>> implementation code for registering its original hint with jsonschema to
>> expose/validate available hints in the future.
>> The way should be as easy as possible so that they can implement the code
>> easily.
>> After that, we will be able to make the validation strict again.
>> After that, we will be able to make the validation strict again.
>
> Yeh, that was my thinking. As someone that did a lot of the jsonschema
> work, is that something you could prototype?

Yes.
In the prototype at https://review.openstack.org/#/c/220440/ , each filter
needs to contain get_scheduler_hint_api_schema(), which returns the
available scheduler_hints parameters. Then stevedore detects these
parameters from each filter and extends the jsonschema with them.
In the current prototype, the detection and extension are implemented in
nova-api, but we need to change the prototype like this:

  1. nova-sched detects available scheduler-hints from filters.
  2. nova-sched passes these scheduler-hints to nova-api via RPC.
  3. nova-api extends the jsonschema with the received scheduler-hints.

After implementing this mechanism, the operator/vendor's own filters just
need to implement get_scheduler_hint_api_schema(). That is not so
hard, I feel.
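The extension step in 3. above could look roughly like this (a sketch with
hypothetical filter classes and schema shapes; the real prototype wires the
collection through stevedore and the hand-off through RPC):

```python
# Base schema for scheduler_hints; strict once every filter has registered
# its hints, so unknown hints can be rejected consistently.
BASE_HINTS_SCHEMA = {
    "type": "object",
    "properties": {},
    "additionalProperties": False,
}


class MyAwesomeFilter(object):
    """A (hypothetical) out-of-tree filter exposing its own hint."""
    @staticmethod
    def get_scheduler_hint_api_schema():
        return {"my_awesome_hint": {"type": "string"}}


class GroupAffinityFilter(object):
    """A (hypothetical) in-tree filter exposing its hint the same way."""
    @staticmethod
    def get_scheduler_hint_api_schema():
        return {"group": {"type": "string"}}


def build_hints_schema(filters):
    """Collect each filter's hint fragment into one strict hints schema."""
    schema = dict(BASE_HINTS_SCHEMA, properties={})
    for f in filters:
        schema["properties"].update(f.get_scheduler_hint_api_schema())
    return schema


schema = build_hints_schema([MyAwesomeFilter, GroupAffinityFilter])
print(sorted(schema["properties"]))   # ['group', 'my_awesome_hint']
```

A deployment running MyAwesomeFilter would thus accept 'my_awesome_hint',
while one without it would reject the request at validation time, which is
exactly the interoperability trade-off debated above.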

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Galo Navarro
Hi,

> I think the goal of this split is well explained by Sandro in the first
> mails of the chain:
>
> 1. Downstream packaging
> 2. Tagging the delivery properly as a library
> 3. Adding as a project on pypi

Not really, because (1) and (2) are *a consequence* of the repo split, not
a cause. Please correct me if I'm reading it wrong, but he's saying:

- I want tarballs
- To produce tarballs, I want a separate repo, and separate repos have (1),
(2) as requirements.

So this is where I'm going: producing a tarball of pyc does *not* require a
separate repo. If we don't need a new repo, we don't need to do all the
things that a separate repo requires.

Now:

> OpenStack provides us a tarballs web page[1] for each branch of each
> project of the infrastructure.
> Then, projects like Delorean can allow us to download these tarballs of
> master branches, create the
> packages and host them in a target repository for each one of the rpm-like
> distributions[2]. I am pretty sure
> that there is something similar for Ubuntu.

This looks more accurate: you're actually not asking for a tarball. You're
asking to be compatible with a system that produces tarballs off a
repo. This is very different :)

So questions: we have a standalone mirror of the repo, that could be used
for this purpose. Say we move the mirror to OSt infra, would things work?

> Everything is done in a very straightforward and standardized way, because
> every repo has its own
> deliverable. You can look at how they are packaged and you won't see too many
> differences between
> them. Packaging python-midonetclient will be trivial if it is separated
> into a single repo. It will be

But it creates a lot of other problems in development. With a very important
difference: the pain created by the mirror solution is solved cheaply with
software (e.g., as you know, with a script). OTOH, the pain created by
splitting the repo is paid in very costly human resources.

> complicated and we'll have to do tricky things if it is a directory inside
> the midonet repo. And I am not
> sure if the Ubuntu and RDO communities will allow us to have weird packaging
> metadata repos.

I do get this point and it's a major concern. IMO we should split it into a
different conversation, as it's not related to where PYC lives, but to a
more general question: do we really need a repo per package?

Like Guillermo and I said before, the midonet repo generates 4
packages, and this will grow. If having a package per repo is really a
strong requirement, there is *a lot* of work ahead, so we need to start
talking about this now. But like I said, it's orthogonal to the PYC points
above.

g
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable/liberty 12.0.1 release planning

2015-12-09 Thread Matt Riedemann



On 12/9/2015 8:44 AM, Matt Riedemann wrote:



On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]


You probably mean 12.0.1 ?



Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I
thought that made it a minor version bump to 12.1.0.



Talked about this in the release channel this morning [1]. The summary: as
long as we aren't raising the minimum required version of a dependency
in stable/liberty, the nova server release should be 12.0.1. We'd
only bump to 12.1.0 if we needed a newer minimum dependency, and I don't
think we have one of those (but will double check).


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2015-12-09.log.html#t2015-12-09T15:07:12


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca]: Mid Cycle Doodle

2015-12-09 Thread Fabio Giannetti (fgiannet)
Guys,
   Please find the doodle for the mid-cycle here:

http://doodle.com/poll/yy4unhffy7hi3x67

If we run the meeting Thu/Fri 28/29, we can have a joint session with
Congress on the 28th.
The first week of Feb is all open, and I guess we need to decide whether to
do 2 or 3 days.
Thanks,
Fabio


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread John Griffith
On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan  wrote:

> Hi all,
>
> Currently when deleting a volume, it checks whether there are snapshots
> created from it; if yes, deletion is prohibited. But it allows extending
> the volume, with no check whether there are snapshots from it.
>
Correct


>
> The two behaviors in Cinder are not consistent from my viewpoint.
>
Well, your snapshot was taken at a point in time; and if you do a create
from snapshot, the whole point is that you want what you HAD when the snapshot
command was issued and NOT what happened afterwards. So in my opinion this
is not inconsistent at all.


>
> In backend storage, their behaviors are same.
>
Which backend storage are you referring to in this case?


> For a full snapshot, while the copy is still in progress, neither extend nor
> deletion is allowed. Once the snapshot copy finishes, both extend and
> deletion are allowed.
> For an incremental snapshot, neither extend nor deletion is allowed.
>

So your particular backend has "different/specific" rules/requirements
around snapshots. That's pretty common; I don't suppose there's any way to
hack around this internally? In other words, do things on your backend like
clones as snaps etc. to make up for the differences in behavior?


>
> As a result, this raises two concerns here:
> 1. Make such operations behave the same way in Cinder.
> 2. I prefer to let the storage driver decide the dependencies, rather than
> the general core code.
>

I have and always will strongly disagree with this approach and your
proposal. Sadly, we've already started to allow more and more vendor
drivers to just "do their own thing" and implement their own special API
methods. This is in my opinion a horrible path and defeats the entire
purpose of having a Cinder abstraction layer.

This will make it impossible to have compatibility between clouds for those
that care about it, and it will make it impossible for operators/deployers to
understand exactly what they can and should expect in terms of the usage of
their cloud. Finally, it will also mean that OpenStack API
functionality is COMPLETELY dependent on the backend device. I know people are
sick of hearing me say this, so I'll keep it short and say it one more time:
"Compatibility in the API matters and should always be our priority"


> Meanwhile, if we let the driver decide the dependencies, the following
> changes need to be made in Cinder:
> 1. When creating a snapshot from a volume, it needs to copy all metadata of
> the volume to the snapshot. Currently it doesn't.
> Any other potential issues please let me know.
>
> Any input will be appreciated.
>
> Best wishes
> Lisa
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Kris G. Lindgren
In other projects the policy.json file is read on each API request, so
changes to the file take effect immediately. I was 90% sure keystone was the
same way?
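For reference, that reread-on-change behaviour can be sketched with a plain
mtime check (a stdlib-only toy, not the actual keystone code; real deployments
go through oslo.policy's Enforcer, which caches rules and, as far as I know,
reloads them when the file changes):

```python
import json
import os
import tempfile


class PolicyEnforcer(object):
    """Reload policy rules whenever the policy file's mtime changes.

    Toy illustration of picking up policy.json edits without a service
    restart; the rule evaluation here is deliberately trivial.
    """

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self.rules = {}

    def _maybe_reload(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            with open(self.path) as f:
                self.rules = json.load(f)
            self._mtime = mtime

    def enforce(self, action):
        self._maybe_reload()
        # Toy rule check: an empty rule string means "allow everyone".
        return self.rules.get(action, "deny") == ""


# Demo: edit the file and the new rule is picked up without any restart.
path = os.path.join(tempfile.mkdtemp(), "policy.json")
with open(path, "w") as f:
    json.dump({"identity:get_user": ""}, f)
enforcer = PolicyEnforcer(path)
print(enforcer.enforce("identity:get_user"))   # True: empty rule allows all

with open(path, "w") as f:
    json.dump({"identity:get_user": "role:admin"}, f)
os.utime(path, (0, 0))                         # force a visible mtime change
print(enforcer.enforce("identity:get_user"))   # False under the toy check
```

The mtime check keeps the common case cheap (a stat per request) while still
making edits to the file visible on the next request.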

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy







On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:

>Hi,
>
>I am wondering whether there are people using RBAC in production. The
>policy.json file has a structure that requires a restart of the service
>each time you edit the file. Is there an on-the-fly solution, or any tips
>about it?
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Ubuntu bootstrap] Ubuntu bootstrap becomes default in the Fuel

2015-12-09 Thread Dmitry Klenov
Hello folks,

I would like to announce that we have completed all items for the 'Ubuntu
bootstrap' feature. Thanks to the team for their hard work and dedication!

Starting from today, Ubuntu bootstrap is enabled in Fuel by default.

Also, it is worth mentioning that Ubuntu bootstrap is integrated with the
'Biosdevnames' feature implemented by the MOS-Linux team, so the new bootstrap
will also benefit from persistent interface naming.

Thanks,
Dmitry.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Doug Wiegley

> On Dec 9, 2015, at 7:25 AM, Doug Hellmann  wrote:
> 
> Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:
>> On 3 December 2015 at 02:21, Thierry Carrez  wrote:
>> 
>>> Armando M. wrote:
 On 2 December 2015 at 01:16, Thierry Carrez wrote:
>Armando M. wrote:
>>> One solution is, like you mentioned, to make some (or all) of
>>> them
>>> full-fledged project teams. Be aware that this means the TC
>>> would judge
>>> those new project teams individually and might reject them if we
>>> feel
>>> the requirements are not met. We might want to clarify what
>>> happens
>>> then.
>> 
>> That's a good point. Do we have existing examples of this or
>>> would we be
>> sailing in uncharted waters?
> 
>It's been pretty common that we rejected/delayed applications for
>projects where we felt they needed more alignment. In such cases,
>>> the
>immediate result for those projects if they are out of the Neutron
>"stadium" is that they would fall from the list of official
>>> projects.
>Again, I'm fine with that outcome, but I want to set expectations
>clearly :)
 
 Understood. It sounds to me that the outcome would be that those
 projects (that may end up being rejected) would show nowhere on [1], but
 would still be hosted and can rely on the support and services of the
 OpenStack community, right?
 
 [1] http://governance.openstack.org/reference/projects/
>>> 
>>> Yes they would still be hosted on OpenStack development infrastructure.
>>> Contributions would no longer count toward ATC status, so people who
>>> only contribute to those projects would no longer be able to vote in the
>>> Technical Committee election. They would not have "official" design
>>> summit space either -- they can still camp in the hallway though :)
>>> 
>> 
>> Hi folks,
>> 
>> For whom of you is interested in the conversation, the topic was brought
>> for discussion at the latest TC meeting [1]. Unfortunately I was unable to
>> join, however I would like to try and respond to some of the comments made
>> to clarify my position on the matter:
>> 
>>> ttx: the neutron PTL say he can't vouch for anything in the neutron
>> "stadium"
>> 
>> To be honest that's not entirely my position.
>> 
>> The problem stems from the fact that, if I am asked what the stadium means,
>> as a PTL I can't give a straight answer; ttx put it relatively well (and I
>> quote him): by adding all those projects under your own project team, you
>> bypass the Technical Committee approval that they behave like OpenStack
>> projects and are produced by the OpenStack community. The Neutron team
>> basically vouches for all of them to be on par. As far as the Technical
>> Committee goes, they are all being produced by the same team we originally
>> blessed (the Neutron project team).
>> 
>> The reality is: some of these projects are not produced by the same team,
>> they do not behave the same way, and they do not follow the same practices
>> and guidelines. For the stadium to make sense, in my humble opinion, a
> 
> This is the thing that's key, for me. As Anita points out elsewhere in
> this thread, we want to structure our project teams so that decision
> making and responsibility are placed in the same set of hands. It sounds
> like the Stadium concept has made it easy to let those diverge.
> 
>> definition of these practices should happen and enforcement should follow,
>> but who's got the time for policing and enforcing eviction, especially on a
>> large scale? So we either reduce the scale (which might not be feasible
>> because in OpenStack we're all about scaling and adding more and more and
>> more), or we address the problem more radically by evolving the
>> relationship from tight aggregation to loose association; this way the one
>> who vouches for the Neutron relationship is not the Neutron PTL, but the
>> person sponsoring the project that wants to be associated with Neutron. On
>> the other hand, the vouching may still be pursued, but for a much more
>> focused set of initiatives that are led by the same team.
>> 
>>> russellb: I attempted to start breaking down the different types of repos
>> that are part of the stadium (consumer, api, implementation of technology,
>> plugins/drivers).
>> 
>> The distinction between implementation of technology, plugins/drivers and
>> api is not justified IMO, because from a neutron standpoint they all look
>> the same: they leverage the pluggable extensions to the Neutron core
>> framework. As I attempted to say: we have existing plugins and drivers that
>> implement APIs, and we have plugins that implement technology, so the extra
>> classification seems like overspecification.
>> 
>>> flaper87: I agree a driver should not be independent
>> 
>> Why, what's your rationale? If we dig deeper, 

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Sean M. Collins
On Wed, Dec 09, 2015 at 07:06:40AM EST, Sean Dague wrote:
> Changing the REST API isn't innovation, it's incompatibility for end
> users. If we're ever going to have compatible clouds and a real interop
> effort, the APIs for all our services need to be very firmly controlled.
> Extending the API arbitrarily should be a deprecated concept across
> OpenStack.
> 
> Otherwise, I have no idea what the neutron (or any other project) API is.
> 

+1 - when I was at Comcast we had some sites that were nova-network and
some that were Neutron, and there were plenty of differences that people
had to bake into their tooling. I really don't want to see it now become
a case where some Neutron deployments have vastly different behaviors. I
think we've got a lot of API extensions currently that are "must have"
(Security Group API, L3 API are probably the biggest) and at some point
we're going to need to grasp the nettle of making more of the API
extensions that we have floating around part of a core network API.

So, when it comes to the REST API, I think the Neutron project needs to
have opinions on things and standardize behaviors, and the
implementation behind the API is where products and projects can
differentiate.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 TypeManager question

2015-12-09 Thread Kevin Benton
It does not appear to be used. I suspect the original intention was that
it could be passed to the type driver, which might do something special
depending on the tenant_id. However, the type driver interface does not
include the tenant_id in 'allocate_tenant_segment', so we should either
update the type driver interface to include it or just remove it from
create_network_segments.
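To make the first option concrete, here is a rough sketch of what threading tenant_id through the type driver interface might look like. The class and method names are modeled on ML2 but simplified; this is a hypothetical illustration, not the actual neutron interface.

```python
import abc

class TypeDriver(abc.ABC):
    """Sketch of the suggested interface change: pass tenant_id through
    so a driver *could* make tenant-specific allocation decisions.
    Simplified; not the real neutron.plugins.ml2 driver API."""

    @abc.abstractmethod
    def allocate_tenant_segment(self, session, tenant_id=None):
        """Allocate a segment, optionally considering the tenant."""

class VlanTypeDriver(TypeDriver):
    def __init__(self):
        self._next_vlan = 100
        # Hypothetical special-casing a driver might do with tenant_id.
        self._reserved = {"gold-tenant": 42}

    def allocate_tenant_segment(self, session, tenant_id=None):
        # A driver that ignores tenant_id keeps today's behaviour,
        # so the change is backward compatible for existing drivers.
        vlan = self._reserved.get(tenant_id)
        if vlan is None:
            vlan = self._next_vlan
            self._next_vlan += 1
        return {"network_type": "vlan", "segmentation_id": vlan}

driver = VlanTypeDriver()
print(driver.allocate_tenant_segment(None))
# {'network_type': 'vlan', 'segmentation_id': 100}
print(driver.allocate_tenant_segment(None, tenant_id="gold-tenant"))
# {'network_type': 'vlan', 'segmentation_id': 42}
```

Because tenant_id defaults to None, create_network_segments could pass it along without breaking out-of-tree drivers that don't accept it yet.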

On Mon, Dec 7, 2015 at 9:28 AM, Sławek Kapłoński 
wrote:

> Hello,
>
> Recently I was checking something in the code of the ML2 TypeManager
> (neutron.plugins.ml2.managers) and I found that "tenant_id" is passed as
> an argument to the TypeManager.create_network_segments method, but I
> can't find where it is used.
> Can someone more experienced explain to me why this tenant_id is passed
> there and where it is used? Thx in advance :)
>
> --
> Pozdrawiam / Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Edgar Magana
We use RBAC in production, but basically to modify networking operations and 
some compute ones. In our case we don’t need to restart the services if we 
modify the policy.json file. I am surprised that keystone does not follow the 
same process.

Edgar




On 12/9/15, 9:06 AM, "Kris G. Lindgren"  wrote:

>In other projects the policy.json file is read each time of api request.  So 
>changes to the file take place immediately.  I was 90% sure keystone was the 
>same way?
>
>___
>Kris Lindgren
>Senior Linux Systems Engineer
>GoDaddy
>
>
>
>
>
>
>
>On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:
>
>>Hi,
>>
>>I am wondering whether there are people using RBAC at production. The 
>>policy.json file has a structure that requires restart of the service 
>>each time you edit the file. Is there and on the fly solution or tips 
>>about it?
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>openstack-operat...@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>openstack-operat...@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Arkady_Kanevsky
You can do a lazy copy that happens only when the volume or snapshot is deleted.
You will need a refcount on the metadata.
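The lazy-copy idea can be sketched as copy-on-write metadata shared between a volume and its snapshots, with a refcount deciding when a private copy must be made. All names here are hypothetical illustrations of the suggestion, not Cinder code.

```python
class SharedMetadata:
    """One metadata dict shared by a volume and its snapshots,
    with a refcount so we know when a lazy copy is needed."""
    def __init__(self, data):
        self.data = data
        self.refcount = 1

class Volume:
    def __init__(self, metadata):
        self._shared = SharedMetadata(dict(metadata))

    def create_snapshot(self):
        # No copy at snapshot time: just bump the refcount.
        self._shared.refcount += 1
        return Snapshot(self._shared)

    def set_meta(self, key, value):
        # The deferred copy happens only when a sharer mutates
        # (the same check would run on volume delete).
        if self._shared.refcount > 1:
            self._shared.refcount -= 1
            self._shared = SharedMetadata(dict(self._shared.data))
        self._shared.data[key] = value

class Snapshot:
    def __init__(self, shared):
        self._shared = shared

    @property
    def metadata(self):
        return self._shared.data

vol = Volume({"owner": "lisa"})
snap = vol.create_snapshot()
vol.set_meta("owner", "bob")   # triggers the lazy copy
print(snap.metadata["owner"])  # lisa
```

The snapshot keeps the metadata as it was at snapshot time, yet nothing is copied unless a write (or delete) actually forces it.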

-Original Message-
From: Li, Xiaoyan [mailto:xiaoyan...@intel.com]
Sent: Tuesday, December 08, 2015 10:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Dependencies of snapshots on volumes

Hi all,

Currently, when deleting a volume, Cinder checks whether there are snapshots 
created from it; if so, deletion is prohibited. But it allows extending the 
volume without checking whether there are snapshots from it.

The two behaviors in Cinder are not consistent from my viewpoint.

In the backend storage, their behaviors are the same.
For a full snapshot still in copying progress, neither extend nor deletion is 
allowed; once the snapshot copy finishes, both are allowed.
For an incremental snapshot, neither extend nor deletion is allowed.

As a result, this raises two concerns:
1. Make such operations behave consistently in Cinder.
2. I prefer to let the storage driver decide the dependencies, rather than the 
general core code.

Meanwhile, if we let the driver decide the dependencies, the following change 
needs to be made in Cinder:
1. When creating a snapshot from a volume, all metadata of the volume needs to 
be copied to the snapshot. Currently it isn't.
Any other potential issues please let me know.

Any input will be appreciated.

Best wishes
Lisa


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]Boot physical machine fails, says "PXE-E11 ARP Timeout"

2015-12-09 Thread Vasyl Saienko
Hello Zhi,

It seems that there is no connectivity between the HW server and the
Gateway/TFTP server.
You can boot a live CD on it, assign the same IP manually, and check whether
you are able to ping 10.0.0.1.

Sincerely,
Vasyl Saienko

On Wed, Dec 9, 2015 at 3:59 PM, Zhi Chang  wrote:

> hi, all
> I treat a normal physical machine as a bare metal machine. The
> physical machine booted when I ran "nova boot xxx" on the command line, but
> an error happens. I uploaded a video to YouTube, link:
> https://www.youtube.com/watch?v=XZQCNsrkyMI&feature=youtu.be. Could
> someone give me some advice?
>
> Thx
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Is there blueprint to add argument collection UI for action

2015-12-09 Thread Stan Lagun
Tony,

I forgot to mention that it is possible to call actions with parameters using
the API. It is just the UI that is missing.
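As a rough illustration of calling an action with parameters directly, the sketch below builds the request an API client would send. The endpoint path, IDs, and parameter names are assumptions for illustration (my understanding is that actions are triggered by POSTing to the environment's action URL with the arguments as a JSON body) — check the Murano API reference before relying on them.

```python
import json

# Hypothetical IDs; in a real call these come from the environment's
# object model returned by the environment API.
base_url = "http://murano-api:8082/v1"
environment_id = "1234abcd"
action_id = "deadbeef_restartApp"

# Assumed endpoint shape: POST /v1/environments/{env_id}/actions/{action_id}
url = "{}/environments/{}/actions/{}".format(
    base_url, environment_id, action_id)
body = json.dumps({"appName": "my-app"})  # action arguments as JSON

# With python-requests the call would be roughly:
#   requests.post(url, data=body,
#                 headers={"X-Auth-Token": token,
#                          "Content-Type": "application/json"})
print(url)
print(body)
```

A UI form would only need to collect the same key/value arguments and submit this request, which is why the missing piece is purely dynamic form generation.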

Also I'd like to mention that any involvement of yours will be appreciated
- ideas, suggestions, commits, or even taking the lead on this feature -
everything will help make it available sooner rather than later.
Please let me know if I can help.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Mon, Dec 7, 2015 at 10:25 AM, WANG, Ming Hao (Tony T) <
tony.a.w...@alcatel-lucent.com> wrote:

> Stan,
>
>
>
> Thanks for your information very much.
>
> I’m looking forward to this feature! :)
>
>
>
> Thanks,
>
> Tony
>
>
>
> *From:* Stan Lagun [mailto:sla...@mirantis.com]
> *Sent:* Saturday, December 05, 2015 4:52 AM
> *To:* WANG, Ming Hao (Tony T)
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [murano] Is there blueprint to add argument collection UI
> for action
>
>
>
> Tony,
>
>
>
> Here is the blueprint:
> https://blueprints.launchpad.net/murano/+spec/action-ui
>
>
>
> The problem is much bigger than it seems. Action parameters are very
> similar to regular application properties and thus there should be a
> dedicated dynamic UI form (think ui.yaml) for each action describing its
> inputs.
>
> We discussed it many times and each time the resolution was that it is
> better to do major refactoring of dynamic UI forms before introducing
> additional forms. The intention was to either simplify or completely get
> rid of ui.yaml because it is yet another DSL to learn. Most of the
> information from ui.yaml could be obtained by examining MuranoPL class
> properties. And what is missing could be added as additional metadata right
> to the classes. However, a lot of work is required to do it properly (starting
> from new API that would be MuranoPL-aware). That's why we still have no
> proper UI for the actions.
>
>
>
> Maybe we should reconsider and have multiple UI forms in a single package
> until we have a better solution.
>
>
>
>
>
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
>
>
> On Fri, Dec 4, 2015 at 5:42 AM, WANG, Ming Hao (Tony T) <
> tony.a.w...@alcatel-lucent.com> wrote:
>
> Dear Stan and Murano developers,
>
>
>
> The current murano-dashboards can add the action button for the actions
> defined in Murano class definition automatically, which is great.
>
> Is there any blueprint to add argument collection UI for the action?
>
> Currently, the murano-dashboards only uses environment_id + action_id to
> run the actions, and user has no way to provide action arguments from
> UI.
>
>
>
> Thanks in advance,
>
> Tony
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Ginwala, Aliasgar
Sounds good, Dolph. Will try to post the details as requested on a gist/blog 
shortly.

Regards,
Ali

From: Dolph Mathews >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, December 9, 2015 at 5:42 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing 
functionality that was deprecated in Kilo and upcoming deprecated functionality 
in Mitaka

Benchmarks always appreciated!

But, these types of benchmarks are *entirely* useless unless you can provide 
the exact configuration you used for each scenario so that others can 
scrutinize the test method and reproduce your results. So, off the top of my 
head, I'm looking for:

* keystone.conf
* distro used for testing, in case they override the project's defaults
* all nginx config files
* all uwsgi config files
* apache config, including virtual hosts and mods
* your test client and its configuration
* server & client architecture, and at least some idea of what lies in between 
(networking, etc)
* whatever else I'm forgetting

A mailing list is probably not the best method to provide anything other than a 
summary; so I'd suggest publishing the details in a gist, blog post, or both.

And to comment on the results themselves: you shouldn't be seeing that big of a 
performance gap between httpd and everything else; something is fundamentally 
different about that configuration. These are just web servers, after all. 
Choosing between them should not be a matter of performance, but it should be a 
choice of documentation, licensing, community, operability, supportability, 
reliability, etc. Performance should be relatively similar, and thus a much 
lower priority in making your selection.

On Tue, Dec 8, 2015 at 10:09 PM, Ginwala, Aliasgar 
> wrote:


Hi All:

Just to inform Steve and all the folks who brought up this topic: we did some 
benchmarking using wsgi, apache, and nginx for keystone with mysql as the token 
backend, and we got the following results on the Juno version. Here is a brief 
highlight of the results.

Spawning 100 users per second for token creation, the results are below:

Using nginx with uwsgi:
rps 32 (requests/sec)
median time ~ 3.3 sec
no of processes 20

using apache
rps 75
median time ~ 1.3 sec
avg time - 1.5 sec
no of processes 20

using wsgi
rps 28
median time ~ 3.4
avg 3.5
no of processes 20


We are planning to switch to apache since we are not seeing good results using 
nginx with uwsgi. Maybe more tuning is required for nginx performance. We did 
not encounter the auto-restart issue in testing yet, and hence are open to more 
input.

Any other suggestions are welcome too. Let us know in case of queries.

Regards,
Aliasgar

On 8 December 2015 at 07:53, Thomas Goirand 
> wrote:
On 12/01/2015 07:57 AM, Steve Martinelli wrote:
> Trying to summarize here...
>
> - There isn't much interest in keeping eventlet around.
> - Folks are OK with running keystone in a WSGI server, but feel they are
> constrained by Apache.
> - uWSGI could help to support multiple web servers.
>
> My opinion:
>
> - Adding support for uWSGI definitely sounds like it's worth
> investigating, but not achievable in this release (unless someone
> already has something cooked up).
> - I'm tempted to let eventlet stick around another release, since it's
> causing pain on some of our operators.
> - Other folks have managed to run keystone in a web server (and
> hopefully not feel pain when doing so!), so it might be worth getting
> technical details on just how it was accomplished. If we get an OK from
> the operator community later on in mitaka, I'd still be OK with removing
> eventlet, but I don't want to break folks.
>
> stevemar
>
> From: John Dewey >
> 100% agree.
>
> We should look at uwsgi as the reference architecture. Nginx/Apache/etc
> should be interchangeable, and up to the operator which they choose to
> use. Hell, with tcp load balancing now in opensource Nginx, I could get
> rid of Apache and HAProxy by utilizing uwsgi.
>
> John

The main problem I see with running Keystone (or any other service) in a
web server is that *I* (as a package maintainer) will lose control
over when the service is started. Let me explain why that is important
to me.

In Debian, many services/daemons are run, then their API is used by the
package. In the case of Keystone, for example, it is possible to ask,
via Debconf, that Keystone registers itself in the service catalog. If
we get Keystone within Apache, it becomes at least harder to do so.

I was going to leave this up to others to comment on 

Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-09 Thread Cody Herriges
Thank you everyone for the support!  I am quite excited to be working in
this community again.  It has been a long road for me to get back here
from the day Dan Bode passed things on to me and I did the 2.0.0 release
to the Forge, and then a quick hand-off again to Chris.

In retrospect I regret bowing out for as long as I did.

-- 
Cody



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread stuart . mclaren
mail, I'd probably go with option #2 but
I'm curious to know the answer to my question.

Cheers,
Flavio




Please let us know which approach is better or any suggestions for the same.



[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/
v2/images.py#L179

[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#
L944


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--

Message: 15
Date: Wed, 9 Dec 2015 21:59:50 +0800
From: "Zhi Chang" <chang...@unitedstack.com>
To: "openstack-dev"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [ironic]Boot physical machine fails,   says
"PXE-E11 ARP Timeout"
Message-ID: <tencent_50bbe4336f52f9e54b571...@qq.com>
Content-Type: text/plain; charset="utf-8"

hi, all
   I treat a normal physical machine as a bare metal machine. The physical machine booted 
when I ran "nova boot xxx" on the command line, but an error happens. I uploaded 
a video to YouTube, link: https://www.youtube.com/watch?v=XZQCNsrkyMI&feature=youtu.be. 
Could someone give me some advice?


Thx
Zhi Chang

--

Message: 16
Date: Wed, 09 Dec 2015 09:02:38 -0500
From: Doug Hellmann <d...@doughellmann.com>
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID: <1449669713-sup-8899@lrrr.local>
Content-Type: text/plain; charset=UTF-8

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).


OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!


Election is over[1]; let me congratulate Matt Riedemann on his election!
Thanks to everyone who participated in the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a



Congratulations, Matt!

Doug



--

Message: 17
Date: Wed, 9 Dec 2015 09:32:53 -0430
From: Flavio Percoco <fla...@redhat.com>
To: Jordan Pittier <jordan.pitt...@scality.com>
Cc: "OpenStack Development Mailing List \(not for usage questions\)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [glance][tempest][defcore] Process to
imrpove tests coverge in temepest
Message-ID: <20151209140253.gb10...@redhat.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 08/12/15 22:31 +0100, Jordan Pittier wrote:

Hi Flavio,

On Tue, Dec 8, 2015 at 9:52 PM, Flavio Percoco <fla...@redhat.com> wrote:


   Oh, I meant occasionally. Whenever a missing test for an API is found,
   it'd be easy enough for the implementer to show up at the meeting and
   bring it up.


From my experience as a Tempest reviewer, I'd say that most newly added tests
are *not* submitted by regular Tempest contributors. I assume (wrongly?) that
it's mostly people from the actual projects (e.g. glance) who are interested in
adding new Tempest tests to test a recently implemented feature. Put
differently, I don't think it's the Tempest core team/community's job to add
new tests. We mostly provide a framework and guidance these days.


I agree that the tempest team should focus on providing the framework
rather than the tests themselves. However, these tests are often
contributed by people who are not part of the project's team.


But, reading this thread, I don't know what to suggest. As a Tempest revi
Re: [openstack-dev] [mistral] bugfix for "Fix concurrency issues by using READ_COMMITTED" unveils / creates a different bug

2015-12-09 Thread KOFFMAN, Noa (Noa)
Hi,

I have reproduced this issue multiple times before your fix was merged.

In order to reproduce I used a workflow with multiple async actions, and 
resumed all of them at the same time.

I just created a ticket in launchpad [1], with the workflow used and the 
mistral engine logs.

[1] - https://bugs.launchpad.net/mistral/+bug/1524477

If anyone could take a look and confirm the bug it would be great.

Thanks
Noa Koffman


From: ELISHA, Moshe (Moshe) [moshe.eli...@alcatel-lucent.com]
Sent: Monday, December 07, 2015 6:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [mistral] bugfix for "Fix concurrency issues by using 
READ_COMMITTED" unveils / creates a different bug

Hi all,

The current bugfix I am working on[1] have unveiled / created a bug.
Test “WorkflowResumeTest.test_resume_different_task_states” sometimes fails 
because “task4” is executed twice instead of once (See unit test output and 
workflow below).

This happens because task2 on-complete is running task4 as expected but also 
task3 executes task4 by mistake.

It is not consistent, but it happens quite often – it happens if the unit test 
resumes the WF, updates the action execution of task2, and finishes task2 before 
task3 is finished.
Scenario:


1.   Task2 in method on_action_complete – changes task2 state to RUNNING.

2.   Task3 in method on_action_complete – changes task2 state to RUNNING 
(before task2 calls _on_task_state_change).

3.   Task3 in “_on_task_state_change” > “continue_workflow” > 
“DirectWorkflowController ._find_next_commands” – it finds task2 because task2 
is in SUCCESS and processed = False and “_find_next_commands_for_task(task2)” 
returns task4.

4.   Task3 executes command to RunTask task4.

5.   Task2 in “_on_task_state_change” > “continue_workflow” > 
“DirectWorkflowController ._find_next_commands” – it finds task2 because task2 
is in SUCCESS and processed = False and “_find_next_commands_for_task(task2)” 
returns task4.

6.   Task2 executes command to RunTask task4.


[1] - https://review.openstack.org/#/c/253819/


If I am not mistaken, this issue also exists in the current code and my bugfix 
only makes it happen much more often. Can you confirm?
I don’t have enough knowledge on how to fix this issue…
For now – I have modified the test_resume_different_task_states unit test to 
wait for task3 to be processed before updating the action execution of task2.
If you agree this bug exist today as well – we can proceed with my bugfix and 
open a different bug for that issue.
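The race in steps 1–6 is a classic check-then-act problem: both task2 and task3 observe task2 as SUCCESS with processed = False and both schedule task4. One way to illustrate a fix is an atomic "claim" on the processed flag, so only one completing task wins. This is a simplified threading sketch of the idea, not Mistral code (Mistral would need the equivalent at the DB level, e.g. a compare-and-swap UPDATE).

```python
import threading

class Task:
    """Sketch: guard the 'find next commands' step with an atomic
    test-and-set on the processed flag."""
    def __init__(self, name):
        self.name = name
        self.state = "SUCCESS"
        self.processed = False
        self._lock = threading.Lock()

    def try_claim(self):
        # Atomic test-and-set: only the first caller wins.
        with self._lock:
            if self.processed:
                return False
            self.processed = True
            return True

scheduled = []

def continue_workflow(completing_task_name, task2):
    # Both task2's and task3's completion paths race here;
    # the claim guarantees task4 is scheduled exactly once.
    if task2.state == "SUCCESS" and task2.try_claim():
        scheduled.append("task4")

task2 = Task("task2")
threads = [threading.Thread(target=continue_workflow, args=(n, task2))
           for n in ("task2", "task3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(scheduled)  # ['task4'] -- exactly one RunTask command
```

Without the claim, the two _find_next_commands calls can both see processed = False and emit the duplicate RunTask(task4) command described above.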

Thanks.



[stack@melisha-devstack mistral(keystone_admin)]$ tox -e py27 -- 
WorkflowResumeTest.test_resume_different_task_states
...
==
FAIL: 
mistral.tests.unit.engine.test_workflow_resume.WorkflowResumeTest.test_resume_different_task_states
tags: worker-0
--
pythonlogging:'': {{{WARNING [oslo_db.sqlalchemy.utils] Id not in sort_keys; is 
sort_keys unique?}}}
stderr: {{{
/opt/stack/mistral/.tox/py27/local/lib/python2.7/site-packages/novaclient/v2/client.py:109:
 UserWarning: 'novaclient.v2.client.Client' is not designed to be initialized 
directly. It is inner class of novaclient. Please, use 
'novaclient.client.Client' instead. Related lp bug-report: 1493576
  _LW("'novaclient.v2.client.Client' is not designed to be "
}}}

stdout: {{{
Engine test case exception occurred: 4 != 5
Exception type: 

Printing workflow executions...

wb.wf1 [state=SUCCESS, output={u'__execution': {u'params': {}, u'id': 
u'2807dd99-ca6f-49d7-886d-7d3b79e1c49e', u'spec': {u'type': u'direct', u'name': 
u'wf1', u'tasks': {u'task4': {u'type': u'direct', u'name': u'task4', 
u'version': u'2.0', u'action': u'std.echo output="Task 4"'}, u'task2': 
{u'type': u'direct', u'name': u'task2', u'on-complete': [u'task4'], u'version': 
u'2.0', u'action': u'std.mistral_http url="http://google.com;'}, u'task3': 
{u'type': u'direct', u'name': u'task3', u'version': u'2.0', u'action': 
u'std.echo output="Task 3"'}, u'task1': {u'type': u'direct', u'name': u'task1', 
u'on-complete': [u'task3', u'pause'], u'version': u'2.0', u'action': u'std.echo 
output="Hi!"'}}, u'version': u'2.0'}, u'input': {}}, u'task4': u'Task 4', 
u'task3': u'Task 3', u'__tasks': {u'848c6e92-b1b1-4d54-b11d-c93cfb4fc88f': 
u'task2', u'00a546e7-8da9-4603-b6be-54d58b14c625': u'task1'}}]
 task2 [id=848c6e92-b1b1-4d54-b11d-c93cfb4fc88f, state=SUCCESS, 
published={}]
 task1 [id=00a546e7-8da9-4603-b6be-54d58b14c625, state=SUCCESS, 
published={}]
 task3 [id=8ce20324-7fba-4424-bcd2-1e0c9b27fd4a, state=SUCCESS, 
published={}]
 task4 [id=3758de43-9bc3-4ac9-b3f3-29eb543b16ef, state=SUCCESS, 
published={}]
 task4 [id=f12ee464-0ba5-48c7-8423-9f425a00e675, state=SUCCESS, 
published={}]
}}}

Traceback (most recent call last):
  File 

Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Timothy Symanczyk
We are running keystone kilo in production, and I'm actively implementing
RBAC right now. I'm certain that, at least with the version of keystone
we're running, a restart is NOT required when the policy file is modified.
Tim




On 12/9/15, 9:18 AM, "Edgar Magana"  wrote:

>We use RBAC in production but basically modify networking operations and
>some compute ones. In our case we don't need to restart the services if
>we modify the policy.json file. I am surprised that keystone is not
>following the same process.
>
>Edgar
>
>
>
>
>On 12/9/15, 9:06 AM, "Kris G. Lindgren"  wrote:
>
>>In other projects the policy.json file is read each time of api request.
>> So changes to the file take place immediately.  I was 90% sure keystone
>>was the same way?
>>
>>___
>>Kris Lindgren
>>Senior Linux Systems Engineer
>>GoDaddy
>>
>>
>>
>>
>>
>>
>>
>>On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:
>>
>>>Hi,
>>>
>>>I am wondering whether there are people using RBAC at production. The
>>>policy.json file has a structure that requires restart of the service
>>>each time you edit the file. Is there and on the fly solution or tips
>>>about it?
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>openstack-operat...@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>openstack-operat...@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] New release of Gabbi HTTP API Tester

2015-12-09 Thread Chris Dent


In my continuing effort to make gabbi useful to the OpenStack
community I've released a new version (1.11.0) with a new feature.

Gabbi is a testing library that uses a YAML format to declare HTTP
API requests and their expected responses.

https://pypi.python.org/pypi/gabbi

The new release adds a feature that's been desired for a while but
wasn't implemented until today when krotscheck needed it to add some
tests of CORS support in Ceilometer. The feature confirms that a
forbidden header is not present in results. See
`response_forbidden_headers` in the docs:

http://gabbi.readthedocs.org/en/latest/format.html#response-expectations

Thanks to him for providing the feedback to push things along and
inspiring the discussion that led to a reasonable solution.
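For anyone who hasn't seen the format, a test using the new feature might
look something like this (a sketch: the URL, origin, and status values are
invented; only the `response_forbidden_headers` key comes from the docs
linked above):

```yaml
tests:
  - name: no CORS header leaks to an untrusted origin
    GET: /v2/meters
    request_headers:
      origin: http://untrusted.example.com
    status: 200
    response_forbidden_headers:
      - access-control-allow-origin
```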

In the OpenStack world gabbi is currently being used in ceilometer,
gnocchi and aodh for some functional tests of their APIs as well as an
integration test of heat, ceilometer, aodh and gnocchi. It makes such
things very readable and maintainable and most people rather like it
after giving it a shot.

If anyone is curious about using it in another project the code in
the projects listed above[1] can provide some guidance or look for me
(as cdent) in the #gabbi IRC channel or any of several openstack-*
channels.

Thanks!

[1] Gnocchi has quite a few good examples starting from the loader:
https://github.com/openstack/gnocchi/blob/master/gnocchi/tests/gabbi/test_gabbi.py
the required configuration fixture (establishes backend stuff):
https://github.com/openstack/gnocchi/blob/master/gnocchi/tests/gabbi/fixtures.py
and the yaml test files themselves:
https://github.com/openstack/gnocchi/tree/master/gnocchi/tests/gabbi/gabbits

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-16, Dec 14-18

2015-12-09 Thread Doug Hellmann
Focus
-

With the Mitaka 1 milestone behind us, teams should have retrospectives
to consider how the cycle is going so far and where things stand
as we enter the major feature development period leading up to the
second milestone. In particular, make note of the cross-project
themes identified at the summit [1] and whether any work needs to
be done to address them in your project.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-November/078756.html

General Notes
-

I will be offline until Dec 21. If you need help with releases, the
rest of the release management team will be available to assist
you. Use #openstack-release on IRC or "[release]" on the mailing
list as usual.

Release Actions
---

If http://docs.openstack.org/releases/releases/liberty.html does
not include a link to the release notes for your Liberty releases,
please submit a patch to your deliverable file in openstack/releases
adding a link. We will add links for the Mitaka release later, when
we start publishing release notes under that series name, so please
only update the Liberty file for now.

Important Dates
---

Mitaka 2: Jan 19-21

Mitaka release schedule (note this has moved from the wiki): 
http://docs.openstack.org/releases/schedules/mitaka.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][DIB] diskimage-builder and python 2/3 compatibility

2015-12-09 Thread Gregory Haynes
Excerpts from Ian Wienand's message of 2015-12-09 09:35:15 +:
> On 12/09/2015 07:15 AM, Gregory Haynes wrote:
> > We ran in to a couple issues adding Fedora 23 support to
> > diskimage-builder caused by python2 not being installed by default.
> > This can be solved pretty easily by installing python2, but given that
> > this is eventually where all our supported distros will end up I would
> > like to find a better long term solution (one that allows us to make
> > images which have the same python installed that the distro ships by
> > default).
> 
> So I wonder if we're maybe hitting premature optimisation with this

That's a fair point. My thinking is that this is a thing we are hitting
now, and if we do not fix this then we are going to end up adding a
python2 dependency everywhere. This isn't the worst thing, but if we end
up wanting to remove that later it will be a backwards incompat issue.
So IMO if it's easy enough to get correct now it would be awesome to do
rather than ripping python2 out from underneath users at a later date.

> 
> > We use +x and a #! to specify a python
> > interpreter, but this needs to be python3 on distros which do not ship a
> > python2, and python elsewhere.
> 
> > Create a symlink in the chroot from /usr/local/bin/dib-python to
> > whatever the appropriate python executable is for that distro.
> 
> This is a problem for anyone wanting to ship a script that "just
> works" across platforms.  I found a similar discussion about a python
> launcher at [1] which covers most points and is more or less what
> is described above.
> 
> I feel like contribution to some sort of global effort in this regard
> might be the best way forward, and then ensure dib uses it.
> 
> -i
> 
> [1] https://mail.python.org/pipermail/linux-sig/2015-October/00.html
> 

My experience has been that this is something the python community
doesn't necessarily want (it would be pretty trivial to fix with a
python2or3 runner). I half expected some feedback of "please don't do
that, treat python2 and 3 as separate languages", which was a big reason
for this post. This is even more complicated by it being a
distro-specific issue (some distros do ship a /usr/bin/python which
points to either 2 or 3, depending on what is available). Basically,
yes, it would be great for the larger python community to solve this but
I am not hopeful of that actually happening.

We do need to come up with some kind of fix in the meantime. As I
mentioned above, the 'just install python2' fix has some annoyances
later down the road, and my thinking is that the symlink approach is not
much more work without the same problems for us...

Cheers,
Greg
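The symlink approach Greg describes might look roughly like the following
sketch (illustrative only: the fallback ordering is an assumption, and the
real element would run inside the image chroot and link into
/usr/local/bin; here the link goes into a temp dir so the sketch runs
anywhere):

```shell
#!/bin/sh
# Sketch of the proposed dib-python symlink (not the actual dib element).
set -e
DEST=$(mktemp -d)

# Prefer python2 where the distro still ships it, fall back to python3
# (e.g. Fedora 23), as described in the thread.
if command -v python2 >/dev/null 2>&1; then
    target=$(command -v python2)
elif command -v python3 >/dev/null 2>&1; then
    target=$(command -v python3)
else
    target=$(command -v python)
fi

ln -sf "$target" "$DEST/dib-python"
echo "dib-python -> $target"
```

In-image scripts could then use `#!/usr/local/bin/dib-python` regardless of
which interpreter the distro ships.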

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-09 Thread Jim Rollenhagen
On Fri, Dec 04, 2015 at 05:38:43PM +0100, Dmitry Tantsur wrote:
> Hi!
> 
> As you all probably know, we've switched to reno for managing release notes.
> What it also means is that the release team has stopped managing milestones
> for us. We have to manually open/close milestones in launchpad, if we feel
> like. I'm a bit tired of doing it for inspector, so I'd prefer we stop it.
> If we need to track release-critical patches, we usually do it in etherpad
> anyway. We also have importance fields for bugs, which can be applied to
> both important bugs and important features.
> 
> During a quick discussion on IRC Sam mentioned that neutron also dropped
> using blueprints for tracking features. They only use bugs with RFE tag and
> specs. It makes a lot of sense to me to do the same, if we stop tracking
> milestones.
> 
> For both ironic and ironic-inspector I'd like to get your opinion on the
> following suggestions:
> 1. Stop tracking milestones in launchpad
> 2. Drop existing milestones to avoid confusion
> 3. Stop using blueprints and move all active blueprints to bugs with RFE
> tags; request a bug URL instead of a blueprint URL in specs.
> 
> So in the end we'll end up with bugs for tracking user requests, specs for
> complex features, and reno for tracking what went into a particular release.
> 
> Important note: if you vote for keeping things for ironic-inspector, I may
> ask you to volunteer in helping with them ;)

We decided we're going to try this in Monday's meeting, following
roughly the same process as Neutron:
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

Note that as the goal here is to stop managing blueprints and milestones
in launchpad, a couple of things will differ from the neutron process:

1) A matching blueprint will not be created; the tracking will only be
done in the bug.

2) A milestone will not be immediately chosen for the feature
enhancement, as we won't track milestones on launchpad.

Now, some requests for volunteers. We need:

1) Someone to document this process in our developer docs.

2) Someone to update the spec template to request a bug link, instead of
a blueprint link.

3) Someone to help move existing blueprints into RFEs.

4) Someone to point specs for incomplete work at the new RFE bugs,
instead of the existing blueprints.

I can help with some or all of these, but hope to not do all the work
myself. :)

Thanks for proposing this, Dmitry!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] last-call backwards compat for libraries and clients spec

2015-12-09 Thread Robert Collins
https://review.openstack.org/#/c/226157/ is very close to consensus
now, at least as I read it. Please do weigh in this week, as I expect
the cross project meeting next week to be the last discussion of it
before it goes to the TC.

Cheers,
Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Steve Martinelli

Whether or not a restart is required is actually handled by oslo.policy,
which is only included in Kilo and newer versions of Keystone. The work to
avoid restarting the service went in with commit [0] and was further worked
on in [1].

Juno and older versions are using the oslo-incubator code to handle policy
(before it was turned into its own library), and AFAICT don't have the
check to see if policy.json has been modified.

[0]
https://github.com/openstack/oslo.policy/commit/63d699aff89969fdfc584ce875a23ba0a90e5b51
[1]
https://github.com/openstack/oslo.policy/commit/b5f07dfe4cd4a5d12c7fecbc3954694d934de642
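That check is essentially an mtime comparison before each enforcement; a
minimal standalone sketch of the pattern (not the actual oslo.policy code,
just an illustration of why no restart is needed):

```python
import os


class PolicyFileCache(object):
    """Minimal sketch of the mtime-based reload the commits above
    introduce (illustrative; not the actual library code)."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._content = None

    def load(self):
        mtime = os.path.getmtime(self.path)
        if self._content is None or mtime != self._mtime:
            # File is new or changed on disk: re-read it, so edits to
            # policy.json take effect without a service restart.
            with open(self.path) as f:
                self._content = f.read()
            self._mtime = mtime
        return self._content
```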

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   Timothy Symanczyk 
To: "OpenStack Development Mailing List (not for usage questions)"
, "Kris G. Lindgren"
, Oguz Yarimtepe
,
"openstack-operat...@lists.openstack.org"

Date:   2015/12/09 04:40 PM
Subject:Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage
at production



We are running keystone kilo in production, and I'm actively implementing
RBAC right now. I'm certain that, at least with the version of keystone
we're running, a restart is NOT required when the policy file is modified.

Tim




On 12/9/15, 9:18 AM, "Edgar Magana"  wrote:

>We use RBAC in production, but basically modify networking operations and
>some compute ones. In our case we don't need to restart the services if
>we modify the policy.json file. I am surprised that keystone is not
>following the same process.
>
>Edgar
>
>
>
>
>On 12/9/15, 9:06 AM, "Kris G. Lindgren"  wrote:
>
>>In other projects the policy.json file is read on each API request,
>> so changes to the file take effect immediately.  I was 90% sure keystone
>>was the same way?
>>
>>___
>>Kris Lindgren
>>Senior Linux Systems Engineer
>>GoDaddy
>>
>>
>>
>>
>>
>>
>>
>>On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:
>>
>>>Hi,
>>>
>>>I am wondering whether there are people using RBAC in production. The
>>>policy.json file has a structure that requires a restart of the service
>>>each time you edit the file. Is there an on-the-fly solution or tips
>>>about it?
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>openstack-operat...@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>openstack-operat...@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
On 12/08/2015 06:39 AM, Jamie Lennox wrote:
> The main problem I see with running Keystone (or any other service) in a
> web server, is that *I* (as a package maintainer) will lose control
> over when the service is started. Let me explain why that is important
> for me.
> 
> In Debian, many services/daemons are run, then their API is used by the
> package. In the case of Keystone, for example, it is possible to ask,
> via Debconf, that Keystone registers itself in the service catalog. If
> we get Keystone within Apache, it becomes at least harder to do so.
> 
> I was going to leave this up to others to comment on here, but IMO -
> excellent. Anyone that is doing an even semi serious deployment of
> OpenStack is going to require puppet/chef/ansible or some form of
> orchestration layer for deployment. Even for test deployments it seems
> to me that it's crazy for this sort of functionality to be handled from
> debconf. The deployers of the system are going to understand if they
> want to use eventlet or apache and should therefore understand what
> restarting apache on a system implies.

That is what everyone from within the community often says. However,
there are lots of users who hardly ever do more than a single deployment,
maybe two. I don't agree that they should all have to invest a huge amount
of time in automation tools; for them, packages should be enough.

Anyway, the debconf handling is completely optional, and most of the
helpers are completely disabled by default. So it is *not* in the way of
using any deployment tool like puppet.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Roman Prykhodchenko to python-fuelclient cores

2015-12-09 Thread Dmitry Pyzhov
Thank you guys. As there have been no objections for a week, I'm adding Roman
to the python-fuelclient-core group. Roman, congratulations! Use your power
wisely =)

On Wed, Dec 2, 2015 at 12:43 PM, Roman Vyalov  wrote:

> +1
>
> On Wed, Dec 2, 2015 at 12:35 PM, Aleksandr Didenko 
> wrote:
>
>> +1
>>
>> On Wed, Dec 2, 2015 at 9:13 AM, Julia Aranovich 
>> wrote:
>>
>>> +1
>>>
>>> On Tue, Dec 1, 2015 at 10:29 PM Sergii Golovatiuk <
>>> sgolovat...@mirantis.com> wrote:
>>>
 +1

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Dec 1, 2015 at 6:15 PM, Aleksey Kasatkin <
 akasat...@mirantis.com> wrote:

> +1.
> No doubts. )
>
>
> Aleksey Kasatkin
>
>
> On Tue, Dec 1, 2015 at 5:49 PM, Dmitry Pyzhov 
> wrote:
>
>> Guys,
>>
>> I propose to promote Roman Prykhodchenko to python-fuelclient cores.
>> He is the main contributor and maintainer of this repo. And he did a 
>> great
>> job making changes toward OpenStack recommendations. Cores, please reply
>> with your +1/-1.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 TypeManager question

2015-12-09 Thread Kevin Benton
So in that case don't we need to update the type driver API to pass that
through to the allocation call?
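For context, the interface change under discussion would look roughly like
this (a sketch modeled on the type driver interface in
neutron.plugins.ml2.driver_api; not the actual neutron code):

```python
class TypeDriver(object):
    """Sketch of the relevant slice of the ML2 type driver interface."""

    def allocate_tenant_segment(self, session):
        # Today's signature: the driver never sees the tenant_id that
        # TypeManager.create_network_segments receives.
        raise NotImplementedError


class TenantAwareTypeDriver(TypeDriver):
    """The option discussed above: extend the interface so tenant_id is
    passed through to the allocation call."""

    def allocate_tenant_segment(self, session, tenant_id=None):
        # A custom driver (like the one mentioned in this thread) could
        # now allocate per-tenant segments.
        return {'network_type': 'example', 'tenant_id': tenant_id}
```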

On Wed, Dec 9, 2015 at 12:45 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> In fact we were creating our own type driver and we want to use this
> tenant_id
> so it was good that it is here :) But I was just curious why it is like
> that.
> Thanks.
>
> --
> Pozdrawiam / Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
>
> Dnia Wednesday 09 of December 2015 09:55:36 Kevin Benton pisze:
> > It does not appear to be used. I suspect maybe the original intention is
> > that this could be passed to the type driver which might do something
> > special depending on the tenant_id. However, the type driver interface
> does
> > not include the tenant_id in 'allocate_tenant_segment' so we should
> either
> > update the type driver interface to include that or just remove it from
> > create_network_segments.
> >
> > On Mon, Dec 7, 2015 at 9:28 AM, Sławek Kapłoński 
> >
> > wrote:
> > > Hello,
> > >
> > > Recently I was checking something in code of ML2 Type manager
> > > (neutron.plugins.ml2.managers) and I found in TypeManager class in
> method
> > > create_network_segments that to this method as argument is passed
> > > "tenant_id"
> > > but I can't find where it is used.
> > > Can someone more experienced explain to me why this tenant_id is passed
> there
> > > and
> > > where it is used? Thx in advance :)
> > >
> > > --
> > > Pozdrawiam / Best regards
> > > Sławek Kapłoński
> > > sla...@kaplonski.pl
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Dolph Mathews
On Wednesday, December 9, 2015, Ginwala, Aliasgar  wrote:

> Hi Dolph/team:
>
> As requested I have outlined most of the files and configs to give more
> clear picture @ https://gist.github.com/noah8713/7d5554d78b60cd9a4999:
>

The number of threads per API your config files show ranges from 20 to
10,000 (processes * threads). On one end, your server might be idling
during the benchmark. On the other end, you're probably exhausting the
server's memory during the benchmark run. So, absolutely nothing about
this benchmark appears to be fair.

What is the maximum number of concurrent users you're spinning up to in
locust? (not the spawn rate)
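For what it's worth, an apples-to-apples run would pin the same worker
counts in all three servers; illustrative fragments (the directive names
are standard uwsgi/mod_wsgi options, but the values and file names are
examples, not taken from the gist):

```ini
; uwsgi side (e.g. keystone-main.ini): 20 processes x 1 thread
[uwsgi]
processes = 20
threads = 1

; apache mod_wsgi equivalent, inside the keystone vhost:
;   WSGIDaemonProcess keystone-public processes=20 threads=1
```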


>
> * keystone.conf   —uploaded
> * distro used for testing, in case they override the project's defaults
>  —mentioned
> * all nginx config files —uploaded nginx-keystone.conf
> * all uwsgi config files —keystone-main.ini, keystone-admin.ini and
> upstart file
> * apache config, including virtual hosts and mods —apache-keystone.conf
> and python files (common for nginx)
> * your test client and its configuration —mentioned
> * server & client architecture, and at least some idea of what lies in
> between (networking, etc)  —briefly outlined
> * whatever else I'm forgetting —feel free to add in the comments
>
>
>
> Regards,
> Ali
>
> From: Dolph Mathews  >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org
> >
> Date: Wednesday, December 9, 2015 at 5:42 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org
> >
> Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing
> functionality that was deprecated in Kilo and upcoming deprecated
> functionality in Mitaka
>
> Benchmarks always appreciated!
>
> But, these types of benchmarks are *entirely* useless unless you can
> provide the exact configuration you used for each scenario so that others
> can scrutinize the test method and reproduce your results. So, off the top
> of my head, I'm looking for:
>
> * keystone.conf
> * distro used for testing, in case they override the project's defaults
> * all nginx config files
> * all uwsgi config files
> * apache config, including virtual hosts and mods
> * your test client and its configuration
> * server & client architecture, and at least some idea of what lies in
> between (networking, etc)
> * whatever else I'm forgetting
>
> A mailing list is probably not the best method to provide anything other
> than a summary; so I'd suggest publishing the details in a gist, blog post,
> or both.
>
> And to comment on the results themselves: you shouldn't be seeing that big
> of a performance gap between httpd and everything else; something is
> fundamentally different about that configuration. These are just web
> servers, after all. Choosing between them should not be a matter of
> performance, but it should be a choice of documentation, licensing,
> community, operability, supportability, reliability, etc. Performance
> should be relatively similar, and thus a much lower priority in making your
> selection.
>
> On Tue, Dec 8, 2015 at 10:09 PM, Ginwala, Aliasgar  > wrote:
>
>>
>>
>> Hi All:
>>
>> Just to inform Steve and all the folks who brought up this talk: we did
>> some benchmarking using wsgi, apache and nginx for keystone with mysql as
>> token backend, and we got the following results on the Juno version. Hence
>> I am just giving you a brief highlight of the results we got.
>>
>> spawning 100 users per sec for create token, below are the results:
>>
>> *Using nginx with uwsgi: *
>> rps *32* (requests/sec)
>> median time ~ 3.3 sec
>> no of processes 20
>>
>> *using apache *
>> rps *75*
>> median time ~ 1.3 sec
>> avg time - 1.5 sec
>> no of processes 20
>>
>> *using wsgi *
>> rps *28*
>> median time ~ 3.4
>> avg 3.5
>> no of processes 20
>>
>>
>> We are planning to switch to apache since we are not seeing good results
>> using nginx with uwsgi. Maybe some more added support is required for
>> nginx performance. We did not encounter this auto-restart issue in testing
>> yet, and hence are open to more inputs.
>>
>> Any other suggestions are welcome too. Let us know in case of queries.
>>
>> Regards,
>> Aliasgar
>>
>> On 8 December 2015 at 07:53, Thomas Goirand > > wrote:
>>
>>> On 12/01/2015 07:57 AM, Steve Martinelli wrote:
>>> > Trying to summarize here...
>>> >
>>> > - There isn't much interest in keeping eventlet around.
>>> > - Folks are OK with running keystone in a WSGI server, but feel they
>>> are
>>> > constrained by Apache.
>>> > - uWSGI could help to support multiple web 

Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Aliasgar Ginwala
So in any case I am using 20 processes and 20 threads to have optimal
utilization. I did try with 50/50 values too for apache, and the utilization
spikes a bit higher, but it's fine for a load test. Utilization does reach
around 40% for the 20/20 values of processes and threads. I am running it on
bare metal with 128 GB of memory, and I am spinning up 100 concurrent users.
RPS takes a bit of a hit because create token performs a token-signing
operation too, which consumes some time.

Regards,
Ali

On Wednesday, December 9, 2015, Dolph Mathews 
wrote:

>
>
> On Wednesday, December 9, 2015, Ginwala, Aliasgar  > wrote:
>
>> Hi Dolph/team:
>>
>> As requested I have outlined most of the files and configs to give more
>> clear picture @ https://gist.github.com/noah8713/7d5554d78b60cd9a4999:
>>
>
> The number of threads per API your config files show ranges from 20 to
> 10,000 (processes * threads). On one end, your server might be idling
> during the benchmark. On the other end, you're probably exhausting the
> server's memory during the benchmark run. So, absolutely nothing about this
> benchmark appears to be fair.
>
> What is the maximum number of concurrent users you're spinning up to in
> locust? (not the spawn rate)
>
>
>>
>> * keystone.conf   —uploaded
>> * distro used for testing, in case they override the project's defaults
>>  —mentioned
>> * all nginx config files —uploaded nginx-keystone.conf
>> * all uwsgi config files —keystone-main.ini, keystone-admin.ini and
>> upstart file
>> * apache config, including virtual hosts and mods —apache-keystone.conf
>> and python files (common for nginx)
>> * your test client and its configuration —mentioned
>> * server & client architecture, and at least some idea of what lies in
>> between (networking, etc)  —briefly outlined
>> * whatever else I'm forgetting —feel free to add in the comments
>>
>>
>>
>> Regards,
>> Ali
>>
>> From: Dolph Mathews 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Wednesday, December 9, 2015 at 5:42 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing
>> functionality that was deprecated in Kilo and upcoming deprecated
>> functionality in Mitaka
>>
>> Benchmarks always appreciated!
>>
>> But, these types of benchmarks are *entirely* useless unless you can
>> provide the exact configuration you used for each scenario so that others
>> can scrutinize the test method and reproduce your results. So, off the top
>> of my head, I'm looking for:
>>
>> * keystone.conf
>> * distro used for testing, in case they override the project's defaults
>> * all nginx config files
>> * all uwsgi config files
>> * apache config, including virtual hosts and mods
>> * your test client and its configuration
>> * server & client architecture, and at least some idea of what lies in
>> between (networking, etc)
>> * whatever else I'm forgetting
>>
>> A mailing list is probably not the best method to provide anything other
>> than a summary; so I'd suggest publishing the details in a gist, blog post,
>> or both.
>>
>> And to comment on the results themselves: you shouldn't be seeing that
>> big of a performance gap between httpd and everything else; something is
>> fundamentally different about that configuration. These are just web
>> servers, after all. Choosing between them should not be a matter of
>> performance, but it should be a choice of documentation, licensing,
>> community, operability, supportability, reliability, etc. Performance
>> should be relatively similar, and thus a much lower priority in making your
>> selection.
>>
>> On Tue, Dec 8, 2015 at 10:09 PM, Ginwala, Aliasgar 
>> wrote:
>>
>>>
>>>
>>> Hi All:
>>>
>>> Just to inform Steve and all the folks who brought up this talk: we did
>>> some benchmarking using wsgi, apache and nginx for keystone with mysql as
>>> token backend, and we got the following results on the Juno version. Hence
>>> I am just giving you a brief highlight of the results we got.
>>>
>>> spawning 100 users per sec for create token, below are the results:
>>>
>>> *Using nginx with uwsgi: *
>>> rps *32* (requests/sec)
>>> median time ~ 3.3 sec
>>> no of processes 20
>>>
>>> *using apache *
>>> rps *75*
>>> median time ~ 1.3 sec
>>> avg time - 1.5 sec
>>> no of processes 20
>>>
>>> *using wsgi *
>>> rps *28*
>>> median time ~ 3.4
>>> avg 3.5
>>> no of processes 20
>>>
>>>
>>> We are planning to switch to apache since we are not seeing good results
>>> using nginx with uwsgi. Maybe some more added support is required for
>>> nginx performance. We did not encounter this auto-restart issue in testing
>>> yet, and hence are open to more inputs.
>>>
>>> Any other suggestions are welcome too. Let us know in case of queries.
>>>
>>> Regards,
>>> 

Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Ginwala, Aliasgar
Hi Dolph/team:

As requested I have outlined most of the files and configs to give more clear 
picture @ https://gist.github.com/noah8713/7d5554d78b60cd9a4999:

* keystone.conf   —uploaded
* distro used for testing, in case they override the project's defaults  
—mentioned
* all nginx config files —uploaded nginx-keystone.conf
* all uwsgi config files —keystone-main.ini, keystone-admin.ini and upstart file
* apache config, including virtual hosts and mods —apache-keystone.conf and 
python files (common for nginx)
* your test client and its configuration —mentioned
* server & client architecture, and at least some idea of what lies in between 
(networking, etc)  —briefly outlined
* whatever else I'm forgetting —feel free to add in the comments



Regards,
Ali

From: Dolph Mathews >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, December 9, 2015 at 5:42 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing 
functionality that was deprecated in Kilo and upcoming deprecated functionality 
in Mitaka

Benchmarks always appreciated!

But, these types of benchmarks are *entirely* useless unless you can provide 
the exact configuration you used for each scenario so that others can 
scrutinize the test method and reproduce your results. So, off the top of my 
head, I'm looking for:

* keystone.conf
* distro used for testing, in case they override the project's defaults
* all nginx config files
* all uwsgi config files
* apache config, including virtual hosts and mods
* your test client and its configuration
* server & client architecture, and at least some idea of what lies in between 
(networking, etc)
* whatever else I'm forgetting

A mailing list is probably not the best method to provide anything other than a 
summary; so I'd suggest publishing the details in a gist, blog post, or both.

And to comment on the results themselves: you shouldn't be seeing that big of a 
performance gap between httpd and everything else; something is fundamentally 
different about that configuration. These are just web servers, after all. 
Choosing between them should not be a matter of performance, but it should be a 
choice of documentation, licensing, community, operability, supportability, 
reliability, etc. Performance should be relatively similar, and thus a much 
lower priority in making your selection.

On Tue, Dec 8, 2015 at 10:09 PM, Ginwala, Aliasgar wrote:


Hi All:

Just to inform Steve and all the folks who brought up this topic: we did some 
benchmarking using wsgi, apache, and nginx for keystone with mysql as the token 
backend, and we got the following results on the Juno version. Here is a brief 
highlight of the results we got.

Spawning 100 users per second for token creation, the results are below:

Using nginx with uwsgi:
rps: 32 (requests/sec)
median time: ~3.3 sec
no. of processes: 20

Using apache:
rps: 75
median time: ~1.3 sec
avg time: ~1.5 sec
no. of processes: 20

Using wsgi:
rps: 28
median time: ~3.4 sec
avg time: ~3.5 sec
no. of processes: 20
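For anyone trying to reproduce numbers like these, the reporting side is easy to pin down. Below is a minimal sketch (assuming per-request latencies are collected by whatever load generator you use — locust, ab, etc.) of computing the same rps/median/avg figures; it is illustrative, not the tool the benchmark above actually used:

```python
import statistics

def summarize(latencies, wall_seconds):
    """Report a load-test run in the same terms as the numbers above."""
    return {
        "rps": len(latencies) / wall_seconds,    # requests per second
        "median": statistics.median(latencies),  # median response time (sec)
        "avg": statistics.mean(latencies),       # average response time (sec)
    }

# Made-up latencies (seconds) from a hypothetical 10-second run:
sample = [1.2, 1.3, 1.4, 1.5, 1.3, 1.6, 1.2, 1.3, 1.7, 1.5]
print(summarize(sample, wall_seconds=10))
```

Publishing the raw latency list alongside a summary like this makes the comparison between setups scrutinizable, which is exactly what Dolph asks for below.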


We are planning to switch to apache since we are not seeing good results using 
nginx with uwsgi. Maybe some additional tuning is required for nginx 
performance. We have not encountered the auto-restart issue in testing yet and 
are open to more input.

Any other suggestions are welcome too. Let us know in case of queries.

Regards,
Aliasgar

On 8 December 2015 at 07:53, Thomas Goirand wrote:
On 12/01/2015 07:57 AM, Steve Martinelli wrote:
> Trying to summarize here...
>
> - There isn't much interest in keeping eventlet around.
> - Folks are OK with running keystone in a WSGI server, but feel they are
> constrained by Apache.
> - uWSGI could help to support multiple web servers.
>
> My opinion:
>
> - Adding support for uWSGI definitely sounds like it's worth
> investigating, but not achievable in this release (unless someone
> already has something cooked up).
> - I'm tempted to let eventlet stick around another release, since it's
> causing pain on some of our operators.
> - Other folks have managed to run keystone in a web server (and
> hopefully not feel pain when doing so!), so it might be worth getting
> technical details on just how it was accomplished. If we get an OK from
> the operator community later on in mitaka, I'd still be OK with removing
> eventlet, but I don't want to break folks.
>
> stevemar
>
> From: John Dewey
> 100% agree.
>
> We should look at uwsgi as the reference architecture. Nginx/Apache/etc
> should be interchangeable, and up to the operator which they choose to
> use. Hell, with tcp 

Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-12-09 Thread Jim Rollenhagen
Sorry I dropped the ball on this thread. :(

On Tue, Nov 24, 2015 at 11:51:52AM -0800, Shraddha Pandhe wrote:
> On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen wrote:
> 
> > On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > > Hi,
> > >
> > > I would like to know how everyone is using maintenance mode and what is
> > > expected from admins about nodes in maintenance. The reason I am bringing
> > > up this topic is because, most of the ironic operations, including manual
> > > cleaning are not allowed for nodes in maintenance. That's a problem for
> > > us.
> > >
> > > The way we use it is as follows:
> > >
> > > We allow users to put nodes in maintenance mode (indirectly) if they find
> > > something wrong with the node. They also provide a maintenance reason
> > along
> > > with it, which gets stored as "user_reason" under maintenance_reason. So
> > > basically we tag it as user specified reason.
> > >
> > > To debug what happened to the node our operators use manual cleaning to
> > > re-image the node. By doing this, they can find out all the issues
> > related
> > > to re-imaging (dhcp, ipmi, image transfer, etc). This debugging process
> > > applies to all the nodes that were put in maintenance either by user, or
> > by
> > > system (due to power cycle failure or due to cleaning failure).
> >
> > Interesting; do you let the node go through cleaning between the user
> > nuking the instance and doing this manual cleaning stuff?
> >
> 
> Do you mean automated cleaning? If so, yes, we let that go through since
> that's allowed in maintenance mode.

It isn't upstream; all heartbeats are recorded with no action taken for
a long time now.
> 
> >
> > At Rackspace, we leverage the fact that maintenance mode will not allow
> > the node to proceed through the state machine. If a user reports a
> > hardware issue, we maintenance the node on their behalf, and when they
> > delete it, it boots the agent for cleaning and begins heartbeating.
> > Heartbeats are ignored in maintenance mode, which gives us time to
> > investigate the hardware, fix things, etc. When the issue is resolved,
> > we remove maintenance mode, it goes through cleaning, then back in the
> > pool.
> 
> 
> What is the provision state when maintenance mode is removed? Does it
> automatically go back into the available pool? How does a user report a
> hardware issue?

The node remains in cleaning, with the agent heartbeating, until
maintenance mode is removed. Then it goes back through cleaning to
available.
> 
> Due to large scale, we can't always assure that someone will take care of
> the node right away. So we have some automation to make sure that user's
> quota is freed.
> 
> 1. If a user finds some problem with the node, the user calls our break-fix
> extension (with reason for break-fix) which deletes the instance for the
> user and frees the quota.
> 2. Internally nova deletes the instance and calls destroy on virt driver.
> This follows the normal delete flow with automated cleaning.
> 3. We have an automated tool called Reparo which constantly monitors the
> node list for nodes in maintenance mode.
> 4. If it finds any nodes in maintenance, it runs one round of manual
> cleaning on it to check if the issue was transient.
> 5. If cleaning fails, we need someone to take a look at it.
> 6. If cleaning succeeds, we put the node back in available pool.
> 
> This is the only way we can scale to hundreds of thousands of nodes. If manual
> cleaning were not allowed in maintenance mode, our operators would hate us :)
> 
> If the provision state of the node is such a way that the node cannot be
> picked up by the scheduler, we can remove maintenance mode and run manual
> cleaning.

Hm, I'm trying to think of a way to make that work without cleaning
allowed in maintenance mode... I haven't got much. We've always
preferred for us (or our automation) to take a look at the node *before*
we do any cleaning on it, as cleaning may mask some of that. The
manageable state is intended to be the provision state you mentioned.
You can move from "clean failed" to manageable, if you could make
something fail the cleaning when the node is in maintenance mode. Might
be the best route here.

// jim
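The break-fix automation described in steps 3-6 above (the "Reparo" loop) could be sketched roughly as follows. The client calls are hypothetical stand-ins for this illustration, not the real python-ironicclient API:

```python
class CleaningFailed(Exception):
    """Raised by the hypothetical client when manual cleaning fails."""

def remediate(client):
    """One pass of the maintenance-node monitor: retry cleaning once,
    then either return the node to the pool or flag it for a human."""
    for node in client.list_nodes():
        if not node["maintenance"]:
            continue
        try:
            client.manual_clean(node["uuid"])            # step 4: was the failure transient?
        except CleaningFailed:
            client.flag_for_operator(node["uuid"])       # step 5: needs a human
        else:
            client.set_maintenance(node["uuid"], False)  # step 6: back to available
```

Note that a loop like this only works if manual cleaning is permitted while the node is in maintenance — which is exactly the constraint under discussion in this thread.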

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][DIB] diskimage-builder and python 2/3 compatibility

2015-12-09 Thread Clint Byrum
Excerpts from Gregory Haynes's message of 2015-12-09 12:49:07 -0800:
> Excerpts from Ian Wienand's message of 2015-12-09 09:35:15 +:
> > On 12/09/2015 07:15 AM, Gregory Haynes wrote:
> > > We ran in to a couple issues adding Fedora 23 support to
> > > diskimage-builder caused by python2 not being installed by default.
> > > This can be solved pretty easily by installing python2, but given that
> > > this is eventually where all our supported distros will end up I would
> > > like to find a better long term solution (one that allows us to make
> > > images which have the same python installed that the distro ships by
> > > default).
> > 
> > So I wonder if we're maybe hitting premature optimisation with this
> 
> That's a fair point. My thinking is that this is a thing we are hitting
> now, and if we do not fix this then we are going to end up adding a
> python2 dependency everywhere. This isn't the worst thing, but if we end
> up wanting to remove that later it will be a backwards incompat issue.
> So IMO if it's easy enough to get correct now it would be awesome to do
> rather than ripping python2 out from underneath users at a later date.
> 

+1 to spending a few brain cycles trying to decide if we can avoid churn
for users later. What we do in the guest should be carefully considered.

> > 
> > > We use +x and a #! to specify a python
> > > interpreter, but this needs to be python3 on distros which do not ship a
> > > python2, and python elsewhere.
> > 
> > > Create a symlink in the chroot from /usr/local/bin/dib-python to
> > > whatever the apropriate python executable is for that distro.
> > 
> > This is a problem for anyone wanting to ship a script that "just
> > works" across platforms.  I found a similar discussion about a python
> > launcher at [1] which covers most points and is more or less what
> > is described above.
> > 
> > I feel like contribution to some sort of global effort in this regard
> > might be the best way forward, and then ensure dib uses it.
> > 
> > -i
> > 
> > [1] https://mail.python.org/pipermail/linux-sig/2015-October/00.html
> > 
> 
> My experience has been that this is something the python community
> doesn't necessarily want (it would be pretty trivial to fix with a
> python2or3 runner). I half expected some feedback of "please don't do
> that, treat python2 and 3 as separate languages", which was a big reason
> for this post. This is even more complicated by it being a
> distro-specific issue (some distros do ship a /usr/bin/python which
> points to either 2 or 3, depending on what is available). Basically,
> yes, it would be great for the larger python community to solve this but
> I am not hopeful of that actually happening.
>

Really there are 3 languages:

python2 only
python3 only
python2or3

And when you've been good and used python2or3 ... it would be nice to be
able to just express that without the machinery of setuptools and entry
points (which is the other option here btw)
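The dib-python symlink proposal above could be sketched like this — a selection routine run inside the chroot to decide what `/usr/local/bin/dib-python` should point at. The preference order here is an assumption for illustration, not dib's actual policy:

```python
import shutil

def dib_python_target():
    """Pick the interpreter /usr/local/bin/dib-python would point at.

    Prefers the distro's unversioned python if one exists, then
    python2, then python3.
    """
    for candidate in ("python", "python2", "python3"):
        path = shutil.which(candidate)
        if path:
            return path
    raise RuntimeError("no python interpreter found on PATH")

print(dib_python_target())
```

Element scripts would then use `#!/usr/local/bin/dib-python` in their shebang and stay agnostic about which interpreter the distro ships.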

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Mike Perez
On 09:27 Dec 09, John Griffith wrote:
> On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan wrote:



> > As a result, this raises two concerns here:
> > 1. Let such operations behavior same in Cinder.
> > 2. I prefer to let storage driver decide the dependencies, not in the
> > general core codes.
> >
> 
> I have and always will strongly disagree with this approach and your
> proposal.  Sadly we've already started to allow more and more vendor
> drivers to just "do their own thing" and implement their own special API
> methods.  This is in my opinion a horrible path and defeats the entire
> purpose of having a Cinder abstraction layer.
> 
> This will make it impossible to have compatibility between clouds for those
> that care about it, it will make it impossible for operators/deployers to
> understand exactly what they can and should expect in terms of the usage of
> their cloud.  Finally, it will also mean that OpenStack API
> functionality is COMPLETELY dependent on the backend device.  I know people are
> sick of hearing me say this, so I'll keep it short and say it one more time:
> "Compatibility in the API matters and should always be our priority"

+1

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 TypeManager question

2015-12-09 Thread Sławek Kapłoński
Hello,

In fact, we were creating our own type driver and we wanted to use this tenant_id, 
so it was good that it is there :) But I was just curious why it is like that. 
Thanks.

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia Wednesday 09 of December 2015 09:55:36 Kevin Benton pisze:
> It does not appear to be used. I suspect maybe the original intention is
> that this could be passed to the type driver which might do something
> special depending on the tenant_id. However, the type driver interface does
> not include the tenant_id in 'allocate_tenant_segment' so we should either
> update the type driver interface to include that or just remove it from
> create_network_segments.
> 
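The first option Kevin mentions — extending the type driver interface so tenant_id is actually forwarded — could be sketched like this. Names are illustrative only, not neutron's actual ML2 classes:

```python
class TypeDriver:
    """Option: extend the driver interface to accept tenant_id."""
    def allocate_tenant_segment(self, session, tenant_id=None):
        raise NotImplementedError

class VlanishDriver(TypeDriver):
    def allocate_tenant_segment(self, session, tenant_id=None):
        # a driver could key per-tenant allocation policy off tenant_id
        return {"network_type": "vlanish", "tenant_id": tenant_id}

class TypeManager:
    def __init__(self, driver):
        self.driver = driver

    def create_network_segments(self, session, tenant_id):
        # tenant_id is now forwarded instead of silently dropped
        return [self.driver.allocate_tenant_segment(session, tenant_id)]

segments = TypeManager(VlanishDriver()).create_network_segments(None, "t1")
print(segments)  # -> [{'network_type': 'vlanish', 'tenant_id': 't1'}]
```

The alternative is simply to drop the parameter from create_network_segments, which keeps the driver interface unchanged.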
> On Mon, Dec 7, 2015 at 9:28 AM, Sławek Kapłoński wrote:
> > Hello,
> > 
> > Recently I was checking something in code of ML2 Type manager
> > (neutron.plugins.ml2.managers) and I found in TypeManager class in method
> > create_network_segments that to this method as argument is passed
> > "tenant_id"
> > but I can't find where it is used.
> > Can someone more experienced explain to me why this tenant_id is passed there
> > and
> > where it is used? Thx in advance :)
> > 
> > --
> > Pozdrawiam / Best regards
> > Sławek Kapłoński
> > sla...@kaplonski.pl
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [infra] IRC broken

2015-12-09 Thread Tim Hinrichs
The issue with #openstack-meeting seems to have resolved itself about 35
minutes into our scheduled time.  The only thing that might be missing from
the logs are messages where people were trying to figure out what was going
on.  So if anyone missed the meeting, start at the 35 minute mark.

I included [infra] and a few more details below.

- It seemed there was a partitioning in #openstack-meeting for a while.  I
could #start the meeting, but except for 1 other person (masahito), no one
could see my messages.  I could exchange messages with masahito, but
neither of us could send messages to ekcs or ramineni.  I could, however,
see ekcs and ramineni's messages to each other.

- We had the same problem with #congress.  I eventually called ekcs, and we
tried to debug.

- I tried logging back in a couple of times, which eventually worked.  My
nick changed.  At that point, I could exchange messages with ekcs and
ramineni, and I could send messages to masahito, but I couldn't see
messages from masahito.

- Just as we were about to cancel the meeting, #congress started working
again.  Then #openstack-meeting started working, so we resumed the meeting.
 (I'm not sure whether #congress started working first, or if we all just
happened to notice it first.)

If anyone has any other pertinent details, please include them.

Tim

On Wed, Dec 9, 2015 at 5:07 PM Anita Kuno  wrote:

> On 12/09/2015 07:33 PM, Tim Hinrichs wrote:
> > It seems IRC is broken for the meeting we're supposed to be having right
> > now.  The symptom is that you can logon but may only see a fragment of
> the
> > users/messages that are being sent.  We even had a partitioning where 2
> > people could exchange messages, and a different 2 people could exchange
> > messages, but not all 4 people could exchange messages.
> >
> > Unless it fixes itself in the next few minutes, we'll cancel and have the
> > discussion over email.  And given how late we're starting, we'll likely
> > have an email discussion anyway.
> >
> > Tim
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> It seems you were able to hold your meeting?
>
> http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-12-10-00.00.log.html
>
> I don't see anything wrong with the #openstack-meeting channel.
>
> Do let infra know if you have additional details.
>
> Thank you,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Roman Prykhodchenko to python-fuelclient cores

2015-12-09 Thread Roman Prykhodchenko
Guys,

thank you for your trust, I will keep working on improving this small but 
significant part of Fuel’s ecosystem.


- romcheg

> On 9 Dec 2015 at 21:09, Dmitry Pyzhov wrote:
> 
> Thank you, guys. As there have been no objections for a week, I'm adding Roman to 
> the python-fuelclient-core group. Roman, congratulations! Use your power wisely =)
> 
> On Wed, Dec 2, 2015 at 12:43 PM, Roman Vyalov wrote:
> +1
> 
> On Wed, Dec 2, 2015 at 12:35 PM, Aleksandr Didenko wrote:
> +1
> 
> On Wed, Dec 2, 2015 at 9:13 AM, Julia Aranovich wrote:
> +1
> 
> On Tue, Dec 1, 2015 at 10:29 PM, Sergii Golovatiuk wrote:
> +1
> 
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
> 
> On Tue, Dec 1, 2015 at 6:15 PM, Aleksey Kasatkin wrote:
> +1.
> No doubts. )
> 
> 
> Aleksey Kasatkin
> 
> 
> On Tue, Dec 1, 2015 at 5:49 PM, Dmitry Pyzhov wrote:
> Guys,
> 
> I propose to promote Roman Prykhodchenko to python-fuelclient cores. He is 
> the main contributor and maintainer of this repo. And he did a great job 
> making changes toward OpenStack recommendations. Cores, please reply with 
> your +1/-1.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [lbaas] [octavia] Michael Johnson new Subteam PTL

2015-12-09 Thread Eichberger, German
All,

We congratulate Michael Johnson (johnsom) as the new Subteam PTL [1]. We are 
excited to work with him over the next cycle. We would also like to thank Stephen 
Balukoff for his candidacy. Lastly, we would like to thank dougwig for 
conducting/overseeing the election.

Happy Holidays,
German + Brandon



[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_cf118bc0186899ff
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][drivers] Spec freeze approaching: Review priorities

2015-12-09 Thread Flavio Percoco

Greetings,

To all Glance drivers and people interested in following up on Glance
specs. I've added to our meeting agenda etherpad[0] the list of review
priorities for specs.

Please bear in mind that our spec freeze is approaching and we need
to provide as much feedback as possible on the proposed specs so that
spec writers will have enough time to address our comments.

As a reminder, the spec freeze for Glance will start on Mon 28th and
it'll end on Jan 1st.

Thanks everyone for your efforts,
Flavio

[0] https://etherpad.openstack.org/p/glance-drivers-meeting-agenda

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] IRC broken

2015-12-09 Thread Anita Kuno
On 12/09/2015 07:33 PM, Tim Hinrichs wrote:
> It seems IRC is broken for the meeting we're supposed to be having right
> now.  The symptom is that you can logon but may only see a fragment of the
> users/messages that are being sent.  We even had a partitioning where 2
> people could exchange messages, and a different 2 people could exchange
> messages, but not all 4 people could exchange messages.
> 
> Unless it fixes itself in the next few minutes, we'll cancel and have the
> discussion over email.  And given how late we're starting, we'll likely
> have an email discussion anyway.
> 
> Tim
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

It seems you were able to hold your meeting?
http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-12-10-00.00.log.html

I don't see anything wrong with the #openstack-meeting channel.

Do let infra know if you have additional details.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Midcycle meetup

2015-12-09 Thread Ben Swartzlander

On 12/04/2015 04:42 PM, Ben Swartzlander wrote:

On 11/19/2015 01:00 PM, Ben Swartzlander wrote:

If you planning to attend the midcycle in any capacity, please vote your
preferences here:

https://www.surveymonkey.com/r/BXPLDXT


The results of the survey were clear. Most people prefer the week of Jan
12-14.

There was an offer to host in Roseville, CA by HP (thanks HP) but at the
meeting yesterday most people still preferred the RTP site, so we will
be planning on hosting the meeting in RTP that week, unless someone
absolutely can't make that week.

What remains to be decided is whether we do Tuesday+Wednesday or
Wednesday+Thursday. We've tried both, and the 2 day length has worked
out very well. I personally lean towards Wednesday+Thursday, but please
reply back to me or the list if you have a different preference.

We need to finalize the dates so people can make travel arrangements.
I'll set the deadline to decide by Tuesday Dec 8 so people will have 5
weeks to make travel plans.


Okay it's final -- we will hold the midcycle meetup on Jan 13-14 at 
NetApp's RTP office.


-Ben



-Ben

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [infra] IRC broken

2015-12-09 Thread Anita Kuno
On 12/09/2015 08:29 PM, Tim Hinrichs wrote:
> The issue with #openstack-meeting seems to have resolved itself about 35
> minutes into our scheduled time.  The only thing that might be missing from
> the logs are messages where people were trying to figure out what was going
> on.  So if anyone missed the meeting, start at the 35 minute mark.
> 
> I included [infra] and a few more details below.

Infra has their own mailing list. I have included the infra mailing list
on this post.

It sounds like what you are describing is a netsplit:
https://en.wikipedia.org/wiki/Netsplit

This is common irc behaviour and has nothing to do with Infra or OpenStack.

Sorry your meeting was affected, glad you were able to carry on.

Thanks,
Anita.

> 
> - It seemed there was a partitioning in #openstack-meeting for a while.  I
> could #start the meeting, but except for 1 other person (masahito), no one
> could see my messages.  I could exchange messages with masahito, but
> neither of us could send messages to ekcs or ramineni.  I could, however,
> see ekcs and ramineni's messages to each other.
> 
> - We had the same problem with #congress.  I eventually called ekcs, and we
> tried to debug.
> 
> - I tried logging back in a couple of times, which eventually worked.  My
> nick changed.  At that point, I could exchange messages with ekcs and
> ramineni, and I could send messages to masahito, but I couldn't see
> messages from masahito.
> 
> - Just as we were about to cancel the meeting, #congress started working
> again.  Then #openstack-meeting started working, so we resumed the meeting.
>  (I'm not sure #congress started working first, or if we just all happened
> to notice #congress first.)
> 
> If anyone has any other pertinent details, please include them.
> 
> Tim
> 
> On Wed, Dec 9, 2015 at 5:07 PM Anita Kuno  wrote:
> 
>> On 12/09/2015 07:33 PM, Tim Hinrichs wrote:
>>> It seems IRC is broken for the meeting we're supposed to be having right
>>> now.  The symptom is that you can logon but may only see a fragment of
>> the
>>> users/messages that are being sent.  We even had a partitioning where 2
>>> people could exchange messages, and a different 2 people could exchange
>>> messages, but not all 4 people could exchange messages.
>>>
>>> Unless it fixes itself in the next few minutes, we'll cancel and have the
>>> discussion over email.  And given how late we're starting, we'll likely
>>> have an email discussion anyway.
>>>
>>> Tim
>>>
>>>
>>>
>>>
>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> It seems you were able to hold your meeting?
>>
>> http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-12-10-00.00.log.html
>>
>> I don't see anything wrong with the #openstack-meeting channel.
>>
>> Do let infra know if you have additional details.
>>
>> Thank you,
>> Anita.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mesos Conductor using container-create operations

2015-12-09 Thread Hongbin Lu
As Bharath mentioned, I am +1 to extend the "container" object to Mesos bay. In 
addition, I propose to extend "container" to k8s as well (the details are 
described in this BP [1]). The goal is to promote this API resource to be 
technology-agnostic and make it portable across all COEs. I am going to justify 
this proposal with a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I want 
to deploy my app to a container. I have basic knowledge of container but not 
familiar with specific container tech. I want a simple and intuitive API to 
operate a container (i.e. CRUD), like how I operated a VM before. I find it 
hard to learn the DSL introduced by a specific COE (k8s/marathon). Most 
importantly, I want my deployment to be portable regardless of the choice of 
cluster management system and/or container runtime. I want OpenStack to be the 
only integration point, because I don't want to be locked in to a specific 
container tech. I want to avoid the risk of a specific container tech being 
replaced by another in the future. Ideally, I want Keystone to be the only 
authentication system that I need to deal with. I don't want the extra 
complexity to deal with additional authentication system introduced by specific 
COE.

Solution:
Implement the "container" object for k8s and mesos bays (and all the COEs 
introduced in the future).

That's it. I would appreciate if you can share your thoughts on this proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
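One way the technology-agnostic "container" resource could dispatch to per-COE backends is sketched below. All names here are illustrative, not Magnum's actual conductor interface:

```python
class SwarmBackend:
    def create(self, spec):
        # would call the Docker API against the swarm bay
        return {"coe": "swarm", "name": spec["name"]}

class MarathonBackend:
    def create(self, spec):
        # would POST a marathon app definition to the mesos bay
        return {"coe": "mesos", "name": spec["name"]}

class K8sBackend:
    def create(self, spec):
        # would create a pod/deployment in the k8s bay
        return {"coe": "k8s", "name": spec["name"]}

BACKENDS = {"swarm": SwarmBackend(), "mesos": MarathonBackend(), "k8s": K8sBackend()}

def container_create(bay_type, spec):
    """One CRUD-style entry point regardless of the bay's COE."""
    return BACKENDS[bay_type].create(spec)

print(container_create("mesos", {"name": "web"}))  # -> {'coe': 'mesos', 'name': 'web'}
```

The point of the indirection is exactly the use case above: the user's deployment spec stays the same even if the backend (and thus the COE) is swapped out later.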

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we discussed in the last meeting, we cannot continue with changes in 
container-create [1] unless we have a suitable use case. But I honestly feel we 
should have some kind of support for mesos + marathon apps, because magnum supports 
COE-related functionality for docker swarm (container-create) and k8s 
(pod-create, rc-create, ...) but not for mesos bays.

As hongbin suggested, we can use the existing functionality of container-create 
and support it in the mesos conductor. Currently we have container-create only for 
docker swarm bays. Let's support the same command for mesos bays without any 
changes on the client side.

Let me know your suggestions.

Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Linux Bridge CI is now a voting gate job

2015-12-09 Thread Armando M.
On 9 December 2015 at 03:53, Sean Dague  wrote:

> On 12/07/2015 04:05 PM, Sean M. Collins wrote:
> > It's been a couple months - the last time I posted on this subject we
> > were still working on getting Linux Bridge to become an experimental[1]
> > job. During the Liberty cycle, the Linux Bridge CI was promoted from
> > experimental status, to being run on all Neutron changes, but
> > non-voting.
> >
> > Well, we're finally at the point where the Linux Bridge job is
> > gating[2]. I am sure there are still bugs that will need to be addressed
> > - I will be watching the gate very carefully the next couple of hours
> > and throughout this week.
> >
> > Feel free to leave bags of flaming :poo: at my doorstep
> >
> > On a serious note, thank you to everyone who over this year has
> > committed patches and fixes to make this happen, it's been an amazing
> > example of open source and community involvement. I'll be happy to buy
> > drinks if you helped with LB in San Antonio if there is a neutron social
> > event (in addition to paying back amotoki for the Tokyo social).
> >
> > [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/068859.html
> > [2]: https://review.openstack.org/205674
> >
>
> Nicely done! Thanks much!
>

Don't want to be a party pooper, but I added the job failure rate to the
dashboard [1], and the result so far is not pleasing. Hopefully this is due
to a failure tail fixed in [2].

Cheers,
Armando

[1] https://review.openstack.org/#/c/255588/
[2] https://review.openstack.org/#/c/252493/


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-09 Thread Joshua Harlow

So,

To try to reach some kind of conclusion here, I am wondering if it would
be acceptable to folks (would people even adopt such a change?) if we
(oslo folks/others) provided a new function in, say, lockutils.py (in
oslo.concurrency) that would let users of oslo.concurrency pick which
kind of lock they want to use...


The two types would be:

1. A pid-based lock, which would *not* be resistant to crashing
processes; it would perhaps use
https://github.com/openstack/pylockfile/blob/master/lockfile/pidlockfile.py
internally. It would be more easily breakable and more easily
introspectable (by either deleting the file, or `cat`ing the file to see
the pid inside it).
2. The existing lock, which is resistant to crashing processes (it
automatically releases when the owner process crashes) but is not easily
introspectable (to know who is holding the lock) and is not easily
breakable (i.e. forcefully breaking the lock to release waiters and the
current lock holder).
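For the sake of discussion, a stripped-down sketch of the two flavors, using
plain os/fcntl calls to mirror what pidlockfile and the current fcntl-based
lock do (this is not the proposed oslo.concurrency code):

```python
import fcntl
import os

def pid_lock(path):
    # Flavor 1 (pid-based): the file content is the owner's pid, so the
    # lock is introspectable (`cat` the file) and breakable (delete the
    # file) -- but a crashed owner leaves a stale lock behind.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.write(fd, ("%d\n" % os.getpid()).encode())
    os.close(fd)

def fcntl_lock(path):
    # Flavor 2 (the existing behavior): the kernel drops the lock
    # automatically when the owner dies, but the file tells you nothing
    # about who holds it, and there is no way to break it from outside.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY)
    fcntl.lockf(fd, fcntl.LOCK_EX)
    return fd  # held for as long as fd stays open
```

The trade-off in a nutshell: flavor 1 is transparent but needs stale-lock
cleanup; flavor 2 is self-cleaning but opaque.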


Would people use these two variants if oslo provided them, or would the
status quo remain and nothing much change?


A third possibility is to spend energy using/integrating tooz
distributed locks, treating different processes on the same system as
distributed instances (even though they really are not distributed in
the classical sense). The locks that tooz supports are already
introspectable (via various means) and can be broken if needed (work is
in progress to make this breaking process more usable via an API).


Thoughts?

-Josh

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2015-12-01 09:28:18 -0800:

Sean Dague wrote:

On 12/01/2015 08:08 AM, Duncan Thomas wrote:

On 1 December 2015 at 13:40, Sean Dague wrote:


  The current approach means locks block on their own, are processed in
  the order they come in, but deletes aren't possible. The busy lock would
  mean deletes were normal. Some extra CPU is spent on waiting, and lock
  order processing would be non-deterministic. It's a trade-off, but I
  don't know of anywhere that we are using locks as queues, so order
  shouldn't matter. The CPU cost of the busy wait versus the lock-file
  cleanliness might be worth it. It would also let you actually see
  what's locked from the outside pretty easily.


The cinder locks are very much used as queues in places, e.g. making
delete wait until after an image operation finishes. Given that cinder
can already run a node into resource issues while doing lots of image
operations concurrently (such as creating lots of bootable volumes at
once), I'd be resistant to anything that makes that worse in order to
solve a cosmetic issue.

Is that really a queue? "Don't do X while Y" is a lock. "Do X, Y, Z in
order after W is done" is a queue. And what you've explained above about
"don't DELETE while DOING OTHER ACTION" is really just the queue model.

What I meant by treating locks as queues was depending on X, Y, Z
happening in that order after W. With a busy-wait approach they might
happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done,
but relative to each other, or to new ops coming in, no real order is
enforced.


So ummm, just so people know, the fasteners lock code (and the file-lock
code that has existed in oslo.concurrency, and prior to that in
oslo-incubator...) has never guaranteed the above sequencing.

How it works (and has always worked) is the following:

1. A lock object is created
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
2. That lock object acquire is performed
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
3. At that point do_open is called to ensure the file exists (if it
already exists it is opened in append mode, so no overwrite happens), and
the lock object keeps a reference to that file's descriptor
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
4. A retry loop starts that repeats until either a provided timeout
elapses or the lock is acquired; you can skip over the retry logic, but
the code that the retry loop calls is
https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L92

The retry loop (really this loop @
https://github.com/harlowja/fasteners/blob/master/fasteners/_utils.py#L87)
will idle for a given delay between attempts to lock the file, which
means there is no queue-like sequencing. For example, if entity A (who
created its lock object at t0) sleeps for 50 seconds between attempts
and entity B (who created its lock object at t1) sleeps for 5 seconds
between attempts, entity B is favored to acquire the lock (since entity
B has a smaller retry delay).

So just fyi, I wouldn't depend on these for queuing/ordering as-is...
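The effect is easy to see with a toy simulation of fixed-delay polling
(illustrative only, not the fasteners code):

```python
# Toy model: each waiter polls at arrival, arrival+delay, arrival+2*delay,
# ... and can first win the lock at its first poll at/after the holder's
# release time. The earliest winning poll gets the lock.
def acquisition_order(waiters, holder_releases_at):
    first_win = []
    for name, arrival, delay in waiters:
        t = arrival
        while t < holder_releases_at:
            t += delay
        first_win.append((t, name))
    return [name for _, name in sorted(first_win)]

# A arrives first (t=0) but polls every 50s; B arrives later (t=1) but
# polls every 5s. The holder releases at t=10: B wins despite arriving
# later, so arrival order is not acquisition order.
order = acquisition_order([("A", 0, 50), ("B", 1, 5)], holder_releases_at=10)
# order == ["B", "A"]
```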



Agreed, this form of fcntl locking is basically equivalent to
O_CREAT|O_EXCL locks as Sean described, since we never use the blocking
form. I'm 

Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Rochelle Grober
Congratulations, Matt!

Condolences to Erno, but he's already said he's still part of the team.

I'm looking forward to the IRC meetings and also to what the team becomes.

--Rocky

> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Wednesday, December 09, 2015 6:11 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
> open
> 
> On 12/09/2015 09:02 AM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
> >> Thierry Carrez wrote:
> >>> Thierry Carrez wrote:
>  The nomination deadline is passed, we have two candidates!
> 
>  I'll be setting up the election shortly (with Jeremy's help to
> generate
>  election rolls).
> >>>
> >>> OK, the election just started. Recent contributors to a stable
> branch
> >>> (over the past year) should have received an email with a link to
> vote.
> >>> If you haven't and think you should have, please contact me
> privately.
> >>>
> >>> The poll closes on Tuesday, December 8th at 23:59 UTC.
> >>> Happy voting!
> >>
> >> Election is over[1], let me congratulate Matt Riedemann for his
> election
> >> ! Thanks to everyone who participated to the vote.
> >>
> >> Now I'll submit the request for spinning off as a separate project
> team
> >> to the governance ASAP, and we should be up and running very soon.
> >>
> >> Cheers,
> >>
> >> [1] http://civs.cs.cornell.edu/cgi-
> bin/results.pl?id=E_2f5fd6c3837eae2a
> >>
> >
> > Congratulations, Matt!
> >
> > Doug
> >
> >
> >
> 
> Thanks to both candidates for putting their name forward, it is nice to
> have an election.
> 
> Congratulations Matt,
> Anita.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] IRC broken

2015-12-09 Thread Tim Hinrichs
It seems IRC is broken for the meeting we're supposed to be having right
now.  The symptom is that you can log on but may only see a fragment of the
users/messages being sent.  We even had a partition where two people could
exchange messages, and a different two people could exchange messages, but
not all four people could exchange messages.

Unless it fixes itself in the next few minutes, we'll cancel and have the
discussion over email.  And given how late we're starting, we'll likely
have an email discussion anyway.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QA] New testing guidelines

2015-12-09 Thread Assaf Muller
Today we merged [1], which adds content to the Neutron testing guidelines:
http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron

The document details Neutron's different testing infrastructures:
* Unit
* Functional
* Fullstack (Integration testing with services deployed by the testing
infra itself)
* In-tree Tempest

The new documentation provides:
* Examples
* Do's and don'ts
* Good and bad usage of mock
* The anatomy of a good unit test

And primarily the advantages and use cases for each testing framework.

It's short - I encourage developers to go through it. Reviewers may
use it as a reference / link when testing anti-patterns pop up.
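To give a flavor of the mock guidance, here is an illustrative example (not
one taken from the devref itself) of the kind of anti-pattern versus good
usage such guidelines contrast:

```python
from unittest import mock

def get_port_status(client, port_id):
    # Trivial unit under test: fetch a port and report its status.
    return client.show_port(port_id)["status"]

def test_port_status_overmocked():
    # Anti-pattern: asserting on internal calls says nothing about the
    # result the caller actually gets.
    client = mock.MagicMock()
    get_port_status(client, "p1")
    client.show_port.assert_called_once_with("p1")

def test_port_status():
    # Better: stub the collaborator's return value and assert on the
    # observable behavior.
    client = mock.MagicMock()
    client.show_port.return_value = {"status": "ACTIVE"}
    assert get_port_status(client, "p1") == "ACTIVE"
```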

Please send feedback on this thread, or better yet in the form of a
devref patch. Thank you!


[1] https://review.openstack.org/#/c/245984/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

