Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Sam Yaple
Here is why I am on board with this. As we have discovered, the logging
with the syslog plugin leaves a lot to be desired. It (to my understanding)
still can't save tracebacks/stacktraces to the log files, for whatever
reason. stdout/stderr, however, works perfectly fine. That said, the Docker
log handling has been a source of pain in the past, but it has gotten better.
It does have the limitation of only being able to log one output stream at a
time. This means, as an example, the neutron-dhcp-agent could send its logs to
stdout/err, but the dnsmasq process that it launches (which also has logs)
would have to mix its logs in with the neutron logs in stdout/err. Can Heka
handle this and separate them efficiently? Otherwise I see no choice but to
stick with something that can handle multiple logs from a single container.
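
To make the single-stream limitation concrete, here is a minimal sketch (not
Kolla code) of reading a container's logs straight off the Docker socket. It
assumes the Docker remote API's multiplexed log framing, where each frame
carries an 8-byte header: the header says stdout vs stderr, but nothing about
which process inside the container wrote the line, which is exactly why
dnsmasq output ends up interleaved with the neutron-dhcp-agent's.

    import socket
    import struct

    def docker_log_frames(container_id):
        """Yield (stream_name, payload) log frames for one container."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect("/var/run/docker.sock")
        sock.sendall(("GET /containers/%s/logs?stdout=1&stderr=1 HTTP/1.0\r\n\r\n"
                      % container_id).encode())
        buf = sock.makefile("rb")
        while buf.readline().strip():      # skip the HTTP response headers
            pass
        while True:
            header = buf.read(8)
            if len(header) < 8:
                break
            # Frame header: 1-byte stream id, 3 pad bytes, 4-byte length.
            stream_id, size = struct.unpack(">BxxxI", header)
            payload = buf.read(size)
            # stream_id is 1 (stdout) or 2 (stderr) -- there is no
            # per-process origin, so a supervised child shares its
            # parent's streams.
            yield ("stdout" if stream_id == 1 else "stderr", payload)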

Sam Yaple

On Mon, Jan 11, 2016 at 10:16 PM, Eric LEMOINE 
wrote:

>
> On 11 Jan 2016 at 18:45, "Michał Jastrzębski"  wrote:
> >
> > On 11 January 2016 at 10:55, Eric LEMOINE  wrote:
> > > Currently the services running in containers send their logs to
> > > rsyslog. And rsyslog stores the logs in local files, located in the
> > > host's /var/log directory.
> >
> > Yeah, however plan was to teach rsyslog to forward logs to central
> > logging stack once this thing is implemented.
>
> Yes. With the current ELK Change Request, Rsyslog sends logs to the
> central Logstash instance. If you read my design doc, you'll see that this
> is precisely what we're proposing to change.
>
> > > I know. Our plan is to rely on Docker. Basically: containers write
> > > their logs to stdout. The logs are collected by Docker Engine, which
> > > makes them available through the unix:///var/run/docker.sock socket.
> > > The socket is mounted into the Heka container, which uses the Docker
> > > Log Input plugin [*] to read the logs from that socket.
> > >
> > > [*] <
> http://hekad.readthedocs.org/en/latest/config/inputs/docker_log.html>
> >
> > So docker logs isn't the best thing there is; however, I'd suspect that's
> > mostly the console output's fault. If you can tap into stdout efficiently,
> > I'd say that's a pretty good option.
>
> I'm not following you. Could you please be more specific?
>
> > >> Seems to me we need an additional comparison of heka vs rsyslog ;) Also
> > >> this would have to be hands down better, because rsyslog is already
> > >> implemented, working, and most operators know how to use it.
> > >
> > >
> > > We don't need to remove Rsyslog. Services running in containers can
> > > write their logs to both Rsyslog and stdout, which is even what they
> > > do today (at least for the OpenStack services).
> > >
> >
> > There is no point in that, IMHO. I don't want to have several systems
> > doing the same thing. Let's decide on one, optimal toolset.
> > Could you please describe, bottom-up, what your logging stack would
> > look like? Heka listening on stdout, transferring stuff to
> > Elasticsearch, with Kibana on top of it?
>
> My plan is to provide details in the blueprint document, which I'll
> continue working on if the core developers agree with the principles of the
> proposed architecture and change.
>
> But here's our plan—as already described in my previous email: the Kolla
> services, which run in containers, write their logs to stdout. Logs are
> collected by the Docker engine. Heka's Docker Log Input plugin is used to
> read the container logs from the Docker endpoint (Unix socket). Since Heka
> will run in a container, a volume is necessary for accessing the Docker
> endpoint. The Docker Log Input plugin inserts the logs into the Heka
> pipeline, at the end of which an Elasticsearch Output plugin will send the
> log messages to Elasticsearch. Here's a blog post reporting on that
> approach: <
> http://www.ianneubert.com/wp/2015/03/03/how-to-use-heka-docker-and-tutum/>.
> We haven't tested that approach yet, but we plan to experiment with it as
> we work on the specs.
>


[openstack-dev] Fwd: [fuel] github can't sync with git because of big uploading

2016-01-11 Thread wuwenbin
Hi:
   I have forwarded the email below; can you help me see why there have been no answers?
   Thanks.
Bests
Cathy

From: wuwenbin
Sent: 2015-12-30 10:35
To: 'openstack-dev@lists.openstack.org'
Cc: Jiangrui (Henry, R); Zhaokexue
Subject: [fuel] github can't sync with git because of big uploading

Hi all:
 The fuel-plugin-onos repo has a problem caused by the upload of a big
file. Though the code has been reverted, the history still contains the pack,
which results in big downloads and keeps the repo out of sync with GitHub.
 I really want to solve this problem, and please forgive my own decision
to make a new commit of the new onosfw, because I don't want this to impact the
project. I have to admit that I am really bad at managing commits and merges,
so I would invite the Fuel PTL to manage the new repo to avoid such things.
 Can anyone help me solve this as soon as possible?
   Thanks
Bests
Cathy



Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Lana Brindley

On 11/01/16 23:42, Sean Dague wrote:



> 
> This conversation has gone on long enough that I've completely lost the
> problem we're trying to solve and the constraints around it.

Thank you :)

> 
> I'd like to reset the conversation a little.
> 
> Goal: to not flood the Docs team with vague bugs that are hard to decipher
> 
> Current Approach: machine-enforce extra words after DocImpact (not
> reviewed by the doc team)
> 
> Downsides with current approach:
> * it's a machine, not people, so clarity isn't guaranteed.
> * the reviewers of the commit message aren't the people that will have
> to deal with it, leading to bad quality control on the reviews.
> * extra jobs which cause load and inhibit our ability to stop resetting
> jenkins votes on commit message changes
> 
> My Alternative Approach:
> 
> File doc bugs against the project team instead of the doc team. Make passing
> a bug to the Docs team a project-team responsibility, to ensure context is
> provided when it's needed.

I'm happy to try this, as long as the PTLs of the defcore projects agree.

> 
> This also means there is a feedback loop between the reviewers and the
> folks having to deal with the artifacts (on first pass).

From the docs perspective, it takes the triaging burden off us. If teams are
happy to take that on, I certainly won't stand in their way.

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Sean M. Collins
On Mon, Jan 11, 2016 at 12:57:05PM PST, Clark Boylan wrote:
> On Mon, Jan 11, 2016, at 12:35 PM, Sean M. Collins wrote:
> > Nice find. I actually pushed a patch recently proposing that we advertise
> > the MTU by default. I think this really shows that it should
> > be enabled by default.
> > 
> > https://review.openstack.org/263486
> >
> ++ Neutron should be able to determine what the outer MTU is and adjust
> the advertised inner MTU automatically based on the overhead required
> for whatever tunnel protocol is in use all without the deployer or cloud
> user needing to know anything special.

Right - and Neutron does, when an operator explicitly enables it. I
think it's one of those things where we exercised abundant caution when
merging the feature: we didn't enable it by default, and then it
slipped through the cracks.

So - I think maybe we need to be a little more aggressive in enabling
things by default that really have no reason to not be enabled. 

Neutron should Just Work™
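
For concreteness, the arithmetic Clark describes is small. A minimal sketch,
assuming the commonly cited IPv4 encapsulation overheads (illustrative
figures, not constants read from Neutron's code):

    # Advertised (inner) MTU = physical (outer) MTU minus tunnel overhead.
    TUNNEL_OVERHEAD = {
        'vlan': 0,    # no IP encapsulation; the tag rides in the L2 header
        'gre': 42,    # 14 (outer ethernet) + 20 (outer IPv4) + 8 (GRE w/ key)
        'vxlan': 50,  # 14 (outer ethernet) + 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN)
    }

    def advertised_mtu(outer_mtu, network_type):
        """What DHCP should hand to instances on an overlay network."""
        return outer_mtu - TUNNEL_OVERHEAD[network_type]

    assert advertised_mtu(1500, 'vxlan') == 1450
    assert advertised_mtu(1500, 'gre') == 1458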

-- 
Sean M. Collins



Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Eric LEMOINE
On 11 Jan 2016 at 18:45, "Michał Jastrzębski"  wrote:
>
> On 11 January 2016 at 10:55, Eric LEMOINE  wrote:
> > Currently the services running in containers send their logs to
> > rsyslog. And rsyslog stores the logs in local files, located in the
> > host's /var/log directory.
>
> Yeah, however plan was to teach rsyslog to forward logs to central
> logging stack once this thing is implemented.

Yes. With the current ELK Change Request, Rsyslog sends logs to the central
Logstash instance. If you read my design doc, you'll see that this is
precisely what we're proposing to change.

> > I know. Our plan is to rely on Docker. Basically: containers write
> > their logs to stdout. The logs are collected by Docker Engine, which
> > makes them available through the unix:///var/run/docker.sock socket.
> > The socket is mounted into the Heka container, which uses the Docker
> > Log Input plugin [*] to read the logs from that socket.
> >
> > [*] <
http://hekad.readthedocs.org/en/latest/config/inputs/docker_log.html>
>
> So docker logs isn't the best thing there is; however, I'd suspect that's
> mostly the console output's fault. If you can tap into stdout efficiently,
> I'd say that's a pretty good option.

I'm not following you. Could you please be more specific?

> >> Seems to me we need an additional comparison of heka vs rsyslog ;) Also
> >> this would have to be hands down better, because rsyslog is already
> >> implemented, working, and most operators know how to use it.
> >
> >
> > We don't need to remove Rsyslog. Services running in containers can
> > write their logs to both Rsyslog and stdout, which is even what they
> > do today (at least for the OpenStack services).
> >
>
> There is no point in that, IMHO. I don't want to have several systems
> doing the same thing. Let's decide on one, optimal toolset.
> Could you please describe, bottom-up, what your logging stack would
> look like? Heka listening on stdout, transferring stuff to
> Elasticsearch, with Kibana on top of it?

My plan is to provide details in the blueprint document, which I'll continue
working on if the core developers agree with the principles of the proposed
architecture and change.

But here's our plan—as already described in my previous email: the Kolla
services, which run in containers, write their logs to stdout. Logs are
collected by the Docker engine. Heka's Docker Log Input plugin is used to
read the container logs from the Docker endpoint (Unix socket). Since Heka
will run in a container, a volume is necessary for accessing the Docker
endpoint. The Docker Log Input plugin inserts the logs into the Heka
pipeline, at the end of which an Elasticsearch Output plugin will send the
log messages to Elasticsearch. Here's a blog post reporting on that
approach: <
http://www.ianneubert.com/wp/2015/03/03/how-to-use-heka-docker-and-tutum/>.
We haven't tested that approach yet, but we plan to experiment with it as
we work on the specs.


[openstack-dev] [Infra] Meeting Tuesday January 12th at 19:00 UTC

2016-01-11 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday January 12th, at 19:00 UTC in #openstack-meeting.

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items, and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-01-05-19.05.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-01-05-19.05.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-01-05-19.05.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



[openstack-dev] Cross-Project Meeting, Tue Jan 12th, 21:00 UTC

2016-01-11 Thread Mike Perez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting January 12th at 21:00 UTC in the NEW
#openstack-meeting-cp IRC channel, with the following agenda:

* Team announcements (horizontal, vertical, diagonal)
* API guides vision - developer.openstack.org and REST API docs
* Cross-Project Spec Liaisons [1]
* Open discussion

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

[1] - https://review.openstack.org/#/c/266072/

-- 
Mike Perez



Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-11 Thread Tzu-Mainn Chen
- Original Message -
> Background info:
> 
> We've got a problem in TripleO at the moment where many of our
> workflows can be driven by the command line only. This causes some
> problems for those trying to build a UI around the workflows in that
> they have to duplicate deployment logic in potentially multiple places.
> There are specs up for review which outline how we might solve this
> problem by building what is called TripleO API [1].
> 
> Late last year I began experimenting with an OpenStack service called
> Mistral which contains a generic workflow API. Mistral supports
> defining workflows in YAML and then creating, managing, and executing
> them via an OpenStack API. Initially the effort was focused around the
> idea of creating a workflow in Mistral which could supplant our
> "baremetal introspection" workflow which currently lives in python-
> tripleoclient. I created a video presentation which outlines this effort
> [2]. This particular workflow seemed to fit nicely within the Mistral
> tooling.
> 
> 
> 
> More recently I've turned my attention to what it might look like if we
> were to use Mistral as a replacement for the TripleO API entirely. This
> brings forth the question of would TripleO be better off building out
> its own API... or would relying on existing OpenStack APIs be a better
> solution?
> 
> Some things I like about the Mistral solution:
> 
> - The API already exists and is generic.
> 
> - Mistral already supports interacting with many of the OpenStack API's
> we require [3]. Integration with keystone is baked in. Adding support
> for new clients seems straightforward (I've had no issues in adding
> support for ironic, inspector, and swift actions).
> 
> - Mistral actions are pluggable. We could fairly easily wrap some of
> our more complex workflows (perhaps those that aren't easy to replicate
> with pure YAML workflows) by creating our own TripleO Mistral actions.
> This approach would be similar to creating a custom Heat resource...
> something we have avoided with Heat in TripleO but I think it is
> perhaps more reasonable with Mistral and would allow us to again build
> out our YAML workflows to drive things. This might allow us to build
> off some of the tripleo-common consolidation that is already underway
> ...
> 
> - We could achieve a "stable API" by simply maintaining input
> parameters for workflows in a stable manner. Or perhaps workflows get
> versioned like a normal API would be as well.
> 
> - The purist part of me likes Mistral quite a bit. It fits nicely with
> the "deploy OpenStack with OpenStack" vision. I sort of feel like if we
> have to build our own API in TripleO, part of this vision has failed; it
> could even be seen as massive technical debt which would likely be hard
> to build a community around outside of TripleO.
> 
> - Some of the proposed validations could perhaps be implemented as new
> Mistral actions as well. I'm not convinced we require TripleO API just
> to support a validations mechanism yet. Perhaps validations seem hard
> because we are simply trying to do them in the wrong places anyway?
> (like for example perhaps we should validate network connectivity at
> inspection time rather than during provisioning).
> 
> - Power users might find a workflow built around a Mistral API easier
> to interact with and expand upon. Perhaps this ends up being
> something that gets submitted as a patchset back to TripleO that we
> accept into our upstream "stock" workflow sets.
> 
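
As an aside on the pluggable-actions point above: a custom Mistral action is
just a Python class. A hedged sketch, assuming Mistral's mitaka-era base-class
location (mistral.actions.base) and an entirely hypothetical action body:

    from mistral.actions import base

    class RegisterNodesAction(base.Action):
        """Hypothetical TripleO action wrapping a multi-step operation."""

        def __init__(self, nodes_json):
            self.nodes_json = nodes_json

        def run(self):
            # A real action would drive the ironic/inspector clients here;
            # the return value becomes the task result in the YAML workflow
            # that invokes the action.
            return {'registered': len(self.nodes_json)}

        def test(self):
            # Dry-run result, used when a workflow executes in test mode.
            return {'registered': 0}

Actions like this are registered through a setuptools entry point so that YAML
workflows can call them by name, which is what would let a "stock" TripleO
workflow set grow by patchset.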

Hiya!  Thanks for putting down your thoughts.

I think I fundamentally disagree with the idea of using Mistral, simply
because many of the actions we'd like to expose through a REST API
(described in the tripleo-common deployment library spec [1]) aren't
workflows; they're straightforward get/set methods.  Putting a workflow
engine in front of that feels like overkill and an added complication
that simply isn't needed.  And added complications can lead to unneeded
complications: for instance, one open Mistral bug details how it may
not scale well [2].

The Mistral solution feels like we're trying to force a circular peg
into a round-ish hole.  In a vacuum, if we were to consider the
engineering problem of exposing a code base to outside consumers in a
non-language specific fashion - I'm pretty sure we'd just suggest the
creation of a REST API and be done with it; the thought of using a
workflow engine as the frontend would not cross our minds.

I don't really agree with the 'purist' argument.  We already have custom
business logic written in the TripleO CLI; accepting that within TripleO,
but not a very thin API layer, feels like an arbitrary line to me.  And
if that line exists, I'd argue that if it forces overcomplicated
solutions to straightforward engineering problems, then that line needs
to be moved.


Mainn


[1] 
https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/tripleo-overcloud-deployment-library.rst
[2] 

Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Joshua Harlow
Also, given that get_string() and __init__() are the most complicated parts
of prettytable (both take kwargs that can tweak the behavior of the
generated results), and that most projects (see below) don't seem to be
customizing the behavior (that much), I think it's safer to provide a basic
compat/option translation layer and move on.


Here is the analysis of the following projects usage of it:

automaton
cliff
cloudv-ostf-adapter
distil
faafo
fairy-slipper
gnocchi
ironic-lib
kolla-mesos
magnum
nova
openstack-ansible
rally
requirements
sahara
scalpels
stacktach-klugman
vmtp
vmware-nsx

Grep/tiny-crappy-script result:

http://paste.openstack.org/show/483512/

The case that sticks out is:

./nova/openstack/common/cliutils.py (or other usages of cliutils).

print(encodeutils.safe_encode(pt.get_string(**kwargs)))

That allows arbitrary kwargs, so it's unknown how that will work (it
requires more analysis to figure out what those kwargs really are).


The rest seem like they could be accommodated by something like the
following (with some minor additions to do sorting, for example):


https://bitbucket.org/astanin/python-tabulate/pull-requests/25/
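
For concreteness, a minimal sketch of such a compat layer: a
PrettyTable-shaped facade over tabulate. The handful of options mapped here
(field_names, add_row, get_string's sortby) mirror prettytable's public API;
the mapping is illustrative, not exhaustive:

    from tabulate import tabulate

    class CompatTable(object):
        """PrettyTable-style interface backed by tabulate (sketch only)."""

        def __init__(self, field_names=None):
            self.field_names = list(field_names or [])
            self._rows = []

        def add_row(self, row):
            self._rows.append(list(row))

        def get_string(self, sortby=None):
            rows = self._rows
            if sortby is not None:
                # prettytable sorts by column name; translate to an index.
                idx = self.field_names.index(sortby)
                rows = sorted(rows, key=lambda r: r[idx])
            # 'psql' is tabulate's closest match to prettytable's frame.
            return tabulate(rows, headers=self.field_names, tablefmt='psql')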

-Josh

Doug Hellmann wrote:

Excerpts from Victor Stinner's message of 2016-01-11 11:15:56 +0100:

On 11/01/2016 10:37, Thierry Carrez wrote:

Joshua Harlow wrote:

[...]
So I'd def help keep prettytable going, of course another option is to
move to https://pypi.python.org/pypi/tabulate (which does seem active
and/or maintained); tabulate provides pretty much the same thing
(actually more table formats @
https://pypi.python.org/pypi/tabulate#table-format ) as prettytable,
and the API is pretty much the same (or nearly).

So that's another way to handle this (just to move off prettytable
entirely).

This sounds like a reasonable alternative...


IMHO, contributing to an actively developed library (tabulate) seems
more productive than starting to maintain a second library which is
currently no longer maintained.

Does anyone know how much code should be modified to replace prettytable
with tabulate on the whole OpenStack project?


We seem to have a rather long list of things consuming PrettyTable:

$ grep -i prettytable */*requirement*.txt
automaton/requirements.txt:PrettyTable<0.8,>=0.7
cliff/requirements.txt:PrettyTable<0.8,>=0.7
cloudv-ostf-adapter/requirements.txt:PrettyTable>=0.7,<0.8
distil/requirements.txt:prettytable==0.7.2
faafo/requirements.txt:PrettyTable>=0.7,<0.8
fairy-slipper/requirements.txt:prettytable
gnocchi/requirements.txt:prettytable
ironic-lib/requirements.txt:PrettyTable<0.8,>=0.7
kolla-mesos/requirements.txt:PrettyTable<0.8,>=0.7
magnum/requirements.txt:PrettyTable<0.8,>=0.7
nova/requirements.txt:PrettyTable<0.8,>=0.7
openstack-ansible/requirements.txt:PrettyTable>=0.7,<0.8   # scripts/inventory-manage.py
python-blazarclient/requirements.txt:PrettyTable>=0.7,<0.8
python-ceilometerclient/requirements.txt:PrettyTable<0.8,>=0.7
python-cinderclient/requirements.txt:PrettyTable<0.8,>=0.7
python-evoqueclient/requirements.txt:PrettyTable<0.8,>=0.7
python-glanceclient/requirements.txt:PrettyTable<0.8,>=0.7
python-heatclient/requirements.txt:PrettyTable<0.8,>=0.7
python-ironicclient/requirements.txt:PrettyTable<0.8,>=0.7
python-keystoneclient/requirements.txt:PrettyTable<0.8,>=0.7
python-magnumclient/requirements.txt:PrettyTable<0.8,>=0.7
python-manilaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-monascaclient/requirements.txt:PrettyTable>=0.7,<0.8
python-muranoclient/requirements.txt:PrettyTable<0.8,>=0.7
python-novaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-rackclient/requirements.txt:PrettyTable>=0.7,<0.8
python-saharaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-searchlightclient/requirements.txt:PrettyTable<0.8,>=0.7
python-senlinclient/requirements.txt:PrettyTable<0.8,>=0.7
python-surveilclient/requirements.txt:prettytable
python-troveclient/requirements.txt:PrettyTable<0.8,>=0.7
python-tuskarclient/requirements.txt:PrettyTable<0.8,>=0.7
rally/requirements.txt:PrettyTable<0.8,>=0.7
requirements/global-requirements.txt:PrettyTable>=0.7,<0.8
sahara/test-requirements.txt:PrettyTable<0.8,>=0.7
scalpels/requirements.txt:PrettyTable>=0.7,<0.8
stacktach-klugman/requirements.txt:prettytable
vmtp/requirements.txt:prettytable>=0.7.2
vmware-nsx/requirements.txt:PrettyTable<0.8,>=0.7

I suspect we could skip porting a lot of those, since they look like
clients and we're working to move all command line programs into the
unified client.

That leaves 18:

$ grep -i prettytable */*requirement*.txt | grep -v client
automaton/requirements.txt:PrettyTable<0.8,>=0.7
cliff/requirements.txt:PrettyTable<0.8,>=0.7
cloudv-ostf-adapter/requirements.txt:PrettyTable>=0.7,<0.8
distil/requirements.txt:prettytable==0.7.2
faafo/requirements.txt:PrettyTable>=0.7,<0.8
fairy-slipper/requirements.txt:prettytable
gnocchi/requirements.txt:prettytable
ironic-lib/requirements.txt:PrettyTable<0.8,>=0.7
kolla-mesos/requirements.txt:PrettyTable<0.8,>=0.7

Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-11 Thread Hirofumi Ichihara



On 2016/01/12 5:14, Armando M. wrote:



On 11 January 2016 at 12:04, Carl Baldwin > wrote:


What do we do?  My calendar was set up with the sane bi-weekly thing
and it shows the meeting for tomorrow.  The last word from our
fearless leader is that we'll have it today.  So, I'll be there today
unless instructed otherwise.

The ics file now seems to reset the cadence beginning today at 2100
and next Tuesday, the 19th, at 1400.  I guess we should either hold
the meeting today and reset the cadence or fix the ics file.


This is what I would like to do now:

https://review.openstack.org/#/c/266019
I personally haven't seen that much of an attendance difference 
anyway, and at this point, it'll simplify our lives and avoid grief 
going forward.

I like it.

However, we have gathered contributors from all over the world because neutron
is a big project. Should we keep the choice of alternating times so that more
people get the opportunity to attend?





Carl

On Mon, Jan 11, 2016 at 12:09 PM, Kevin Benton > wrote:
> The issue is simply that you have a sane bi-weekly thing setup
in your
> calendar. What we have for Neutron is apparently defined as “odd
and even
> weeks when weeks are represented as a short integer counting
from the first
> of the year”, a.k.a. “bi-weekly” as a robot might define it. :)
>
>
> On Jan 11, 2016, at 11:00 AM, Kyle Mestery > wrote:
>
> On Mon, Jan 11, 2016 at 12:57 PM, Kyle Mestery
> wrote:
>>
>> On Mon, Jan 11, 2016 at 12:45 PM, Armando M. > wrote:
>>>
>>> Disregard the email subject.
>>>
>>> I stand corrected. Let's meet today.
>>>
>>
>> Something is wrong, I have the meeting on my google calendar,
and it shows
>> up as tomorrow for this week. I've had these setup as rotating
for a while
>> now, so something is fishy with the .ics files.
>
>
> If you look here [1], the meeting cadence was:
>
> 12-15-2015: Tuesday
> 12-21-2015: Monday
> 12-29-2015: Tuesday (skipped)
> 01-04-2016: Monday (skipped)
> 01-12-2016 Tuesday
>
> The meeting is tomorrow.
>
> [1] http://eavesdrop.openstack.org/meetings/networking/2015/
>>
>>
>>>
>>> On 11 January 2016 at 10:24, Ihar Hrachyshka
> wrote:

 Armando M. > wrote:

> Hi neutrinos,
>
> A kind reminder for tomorrow's meeting at 1400UTC.
>
> Cheers,
> Armando
>
> [1] https://wiki.openstack.org/wiki/Network/Meetings
>
>

 Is it just me, or when you use .ics file from eavesdrop, it
says the
 meeting is today?

 http://eavesdrop.openstack.org/calendars/neutron-team-meeting.ics

 Is it the same issue as described in:



http://lists.openstack.org/pipermail/openstack-dev/2015-December/082902.html

 and that is suggested to be fixed by re-adding your events from
updated .ics
 file:



http://lists.openstack.org/pipermail/openstack-dev/2016-January/083216.html

 Ihar




Re: [openstack-dev] [Horizon] Routing in Horizon

2016-01-11 Thread Tripp, Travis S
Rajat,

Thanks for starting some discussion here.  I think your target=“_self” is taken 
care of now, right?

Re: ng-route vs ui-router - everything I have ever seen or been told supports 
using ui-router instead of ng-route.  I know a quick google search supports 
that, but let me google that for all of us and give several references:

http://www.funnyant.com/angularjs-ui-router/
http://www.amasik.com/angularjs-ngroute-vs-ui-router/
http://stackoverflow.com/questions/21023763/angularjs-difference-between-angular-route-and-angular-ui-router
http://www.pearpages.com/javascript/angularjs/routing/2015/10/13/ngroute-vs-ui-router.html
http://stackoverflow.com/questions/32523512/ui-router-vs-ngroute-for-sinlge-page-app

So, I’m wondering if there’s been any discussion I missed on why we shouldn't
bring in ui-router.

Of course there is the question of using the new router in Angular vs
ui-router, but finding many pros and cons on that seems to be a bit more
difficult. Since it is 1.5 / 2.0 and neither is past RC / beta, it doesn’t
seem like something we can debate too well.

https://angular.github.io/router/getting-started
http://www.angulardaily.com/2015/12/angularintroduction-to-ngnewrouter-vs.html

Thanks,
Travis

From: Rajat Vig >
Reply-To: OpenStack List 
>
Date: Thursday, January 7, 2016 at 1:53 AM
To: OpenStack List 
>
Subject: [openstack-dev] [Horizon] Routing in Horizon

Hi Everyone

One of my recent patches, which enabled HTML5-based routing via Angular,
merged, and some interesting things spun out of it.
I had to scramble a few patches to get things back the same way
for all new Angular panels.

I also realized that getting Horizon to an SPA with Angular requires more
thought than merely fixing the current burning issue.
This mail's intent is to spur a direction on how we do routing in Angular, and
how Angular panels go back and forth between themselves and older Django
panels.

The patch https://review.openstack.org/#/c/173885/ is possibly the first of
many to use Angular-based routing.
It currently uses ngRoute, as that library was included in the xstatic-angular
package.

Here is what I'm roughly thinking to solve some of the immediate issues
(there's possibly much more that I'm not seeing):

1. If we are going to go the SPA route, then all panels need to indicate that
they are Angular-based.
Panels that are Angular need to declare the routes they'd like to manage.

2. All link tags on Angular panels (including header, sidebar, and footer)
which don't route to Angular panels will need the attribute target="_self", or
else Angular will not allow navigation to those links.

All sidebar links currently have the attribute set, but headers and footers
don't.
Sidebar links to Angular panels shouldn't have the attribute, or else SPA-like
behavior will not happen.
This will need to be documented.

3. That implies yet another problem with the spinner modal, which gets
activated on all sidebar clicks.
It'll need to be handled differently for Angular routing vs hrefs still served
by Django.
The current spinner relies on a browser page refresh to disappear.

Then there's ngRoute.
Routing in Angular may need to handle route conflicts and maybe nested routes.
There are other, possibly better options that we could consider
1. https://github.com/angular-ui/ui-router
2. https://angular.github.io/router/

I haven't been part of the community for long enough yet, and I'm not yet
completely aware of what exists outside the Horizon codebase, so I might be
missing some things as well.

Regards
Rajat


Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-11 Thread Christopher_Dearborn
I like #1 as well since it would enable quick coding iteration, but I would 
also like to be able to run the tests against a running OpenStack cluster that 
includes Ironic, so put me down for #3 (Tempest) as well.


Chris Dearborn
Dell Inc

From: John Villalovos [mailto:openstack@sodarock.com]
Sent: Monday, January 11, 2016 10:42 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic][tests] approach to functional/integration 
tests

I personally like Option #1, as an additional goal is to make it easy for
developers to also run the functional tests with 'tox -efunctional', and to
run the functional tests as part of a normal tox run.

On Mon, Jan 11, 2016 at 6:54 AM, Serge Kovaleff 
> wrote:
Option 3 - to concentrate on Tempest tests 
https://review.openstack.org/#/c/253982/

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70

On Mon, Jan 11, 2016 at 4:49 PM, Serge Kovaleff 
> wrote:
Hi All,

Last week I had the noble goal of writing "one more" functional test in Ironic.
I did find a folder "func", but it was empty.

Friends helped me to find a WIP patch https://review.openstack.org/#/c/235612/

and here comes the question of this email: which approach would we like to
implement?
Option 1 - write infrastructure code that starts/configures/stops the services
Option 2 - rely on an installed DevStack and run the tests over it

Both options have their pros and cons, and both are implemented across the
OpenStack umbrella:
Option 1 - Glance, Nova, the patch above
Option 2 - Heat, and my favorite at the moment

Any ideas?
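
For a sense of what Option 1 entails, here is a minimal, hedged sketch of test
infrastructure that starts the service itself. The binary name, config path,
and port are illustrative assumptions; real code would also generate the
config and isolate the database:

    import socket
    import subprocess
    import time
    import unittest

    class FunctionalTestBase(unittest.TestCase):
        """Option 1 style: the test harness owns the service lifecycle."""

        def setUp(self):
            super(FunctionalTestBase, self).setUp()
            # Hypothetical invocation; a real harness templates a config.
            self.proc = subprocess.Popen(
                ['ironic-api', '--config-file', '/tmp/ironic-func.conf'])
            self.addCleanup(self.proc.terminate)
            self._wait_for_port(6385)   # ironic-api's default port

        def _wait_for_port(self, port, timeout=30):
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    socket.create_connection(('127.0.0.1', port), 1).close()
                    return
                except socket.error:
                    time.sleep(0.5)
            self.fail('service did not start listening on %d' % port)

Option 2 skips all of this bootstrap code and assumes DevStack has already
wired the services up, which is the trade-off being weighed here.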

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70




Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-11 Thread Armando M.
On 11 January 2016 at 13:54, Hirofumi Ichihara <
ichihara.hirof...@lab.ntt.co.jp> wrote:

>
>
> On 2016/01/12 5:14, Armando M. wrote:
>
>
>
> On 11 January 2016 at 12:04, Carl Baldwin  wrote:
>
>> What do we do?  My calendar was set up with the sane bi-weekly thing
>> and it shows the meeting for tomorrow.  The last word from our
>> fearless leader is that we'll have it today.  So, I'll be there today
>> unless instructed otherwise.
>>
>> The ics file now seems to reset the cadence beginning today at 2100
>> and next Tuesday, the 19th, at 1400.  I guess we should either hold
>> the meeting today and reset the cadence or fix the ics file.
>>
>>
> This is what I would like to do now:
>
> https://review.openstack.org/#/c/266019
>
> I personally haven't seen that much of an attendance difference anyway,
> and at this point, it'll simplify our lives and avoid grief going forward.
>
> I like it.
>
> However, we have gathered contributors from all over the world because
> neutron is a big project. Should we keep the choice of alternating times so
> that more people get the opportunity to attend?
>

This time, the return to the normal schedule was a disaster; plus, every
time we switch to daylight savings, or every time there's a holiday
break/summit, we have twice the chances to screw up if we keep the bi-weekly
schedule.

If I go and look at the logs [1], I don't have hard evidence that the
bi-weekly schedule does indeed help the attendance of friendlier timezones,
so I wonder... what's the point?

A.

[1] http://eavesdrop.openstack.org/meetings/networking/

>
>
>
>
> Carl
>>
>> On Mon, Jan 11, 2016 at 12:09 PM, Kevin Benton < 
>> blak...@gmail.com> wrote:
>> > The issue is simply that you have a sane bi-weekly thing setup in your
>> > calendar. What we have for Neutron is apparently defined as “odd and
>> even
>> > weeks when weeks are represented as a short integer counting from the
>> first
>> > of the year”, a.k.a. “bi-weekly” as a robot might define it. :)
>> >
>> >
>> > On Jan 11, 2016, at 11:00 AM, Kyle Mestery < 
>> mest...@mestery.com> wrote:
>> >
>> > On Mon, Jan 11, 2016 at 12:57 PM, Kyle Mestery 
>> wrote:
>> >>
>> >> On Mon, Jan 11, 2016 at 12:45 PM, Armando M. 
>> wrote:
>> >>>
>> >>> Disregard the email subject.
>> >>>
>> >>> I stand corrected. Let's meet today.
>> >>>
>> >>
>> >> Something is wrong, I have the meeting on my google calendar, and it
>> shows
>> >> up as tomorrow for this week. I've had these setup as rotating for a
>> while
>> >> now, so something is fishy with the .ics files.
>> >
>> >
>> > If you look here [1], the meeting cadence was:
>> >
>> > 12-15-2015: Tuesday
>> > 12-21-2015: Monday
>> > 12-29-2015: Tuesday (skipped)
>> > 01-04-2016: Monday (skipped)
>> > 01-12-2016 Tuesday
>> >
>> > The meeting is tomorrow.
>> >
>> > [1] http://eavesdrop.openstack.org/meetings/networking/2015/
>> >>
>> >>
>> >>>
>> >>> On 11 January 2016 at 10:24, Ihar Hrachyshka 
>> wrote:
>> 
>>  Armando M. < arma...@gmail.com> wrote:
>> 
>> > Hi neutrinos,
>> >
>> > A kind reminder for tomorrow's meeting at 1400UTC.
>> >
>> > Cheers,
>> > Armando
>> >
>> > [1] https://wiki.openstack.org/wiki/Network/Meetings
>> >
>> >
>> 
>> 
>>  Is it just me, or when you use .ics file from eavesdrop, it says the
>>  meeting is today?
>> 
>>  http://eavesdrop.openstack.org/calendars/neutron-team-meeting.ics
>> 
>>  Is it the same issue as described in:
>> 
>> 
>> 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082902.html
>> 
>>  and that is suggested to be fixed by re-adding your events from updated
>> .ics
>>  file:
>> 
>> 
>> 
>> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083216.html
>> 
>>  Ihar
>> 
>> 
>> 

[openstack-dev] [release] reno release 1.3.0 (_independent)

2016-01-11 Thread doug
We are eager to announce the release of:

reno 1.3.0: RElease NOtes manager

This release is part of the _independent release series.

With source available at:

http://git.openstack.org/cgit/openstack/reno

With package available at:

https://pypi.python.org/pypi/reno

For more details, please see below.

Please report issues through launchpad:

http://bugs.launchpad.net/reno

1.3.0
^^^^^

Bug Fixes

* Fixes bug 1522153 so that notes added in commits that are merged
  after tags are associated with the correct version.


Changes in reno 1.2.0..1.3.0


aeb9c81 Fix reference to old subcommand in usage
4b45743 fix notes appearing in releases older than they should

Diffstat (except docs and test files)
-

.../fix-git-log-ordering-0e52f95f66c8db5b.yaml |   6 +
reno/scanner.py| 119 ++--
test-requirements.txt  |   2 +
5 files changed, 415 insertions(+), 17 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 5956652..c352436 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,0 +7,2 @@ hacking<0.11,>=0.10.0
+mock>=1.2
+





Re: [openstack-dev] [fuel] github can't sync with git because of big uploading

2016-01-11 Thread Asselin, Ramy
I understand this is an issue, but I don’t see why syncing to GitHub is needed
so urgently. Developers and tools can still keep working with, and accessing,
the repo using openstack’s git farm [1].

[1] http://git.openstack.org/cgit/openstack/fuel-plugin-onos/



From: wuwenbin [mailto:wuwenb...@huawei.com]
Sent: Monday, January 11, 2016 5:14 PM
To: Mateusz Matuszkowiak
Cc: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Fwd: [fuel] github can't sync with git because of big
uploading

Hi:
   I have forwarded the email below; can you help me see why there have been no answers?
   Thanks.
Bests
Cathy

From: wuwenbin
Sent: 2015-12-30 10:35
To: 'openstack-dev@lists.openstack.org'
Cc: Jiangrui (Henry, R); Zhaokexue
Subject: [fuel] github can't sync with git because of big uploading

Hi all:
 The fuel-plugin-onos repo has a problem caused by the upload of a big file.
Though the code has been reverted, the history still contains the pack, which
results in big downloads and keeps the repo out of sync with GitHub.
 I really want to solve this problem, and please forgive my own decision
to make a new commit of the new onosfw, because I don't want this to impact the
project. I have to admit that I am really bad at managing commits and merges,
so I would invite the Fuel PTL to manage the new repo to avoid such things.
 Can anyone help me solve this as soon as possible?
   Thanks
Bests
Cathy



Re: [openstack-dev] [infra][stackalytics] some projects not visible in stackalytics now

2016-01-11 Thread Ilya Shakhat
2016-01-11 12:48 GMT+03:00 Thierry Carrez :

>
> I totally agree. Stackalytics contains data for non-OpenStack projects
> anyway (see under "complementary"), so I think it's fine to list unofficial
> projects under "Others" or "Unofficial".


Will be renamed soon: https://review.openstack.org/#/c/265741/

Thanks,
Ilya Shakhat


> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Koteswar
This is the vendor-specific mech driver code, where I am doing some reads and writes to SQL:

def _create_port(self, port):
    switchports = port['port']['switchports']
    LOG.debug(_LE("_create_port switch: %s"), port)
    network_id = port['port']['network_id']
    subnets = db.get_subnets_by_network(self.context, network_id)
    if not subnets:
        LOG.error("Subnet not found for the network")
        self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
    for switchport in switchports:
        switch_mac_id = switchport['switch_id']
        port_id = switchport['port_id']
        bnp_switch = db.get_bnp_phys_switch_by_mac(self.context,
                                                   switch_mac_id)
        # check for port and switch level existence
        if not bnp_switch:
            LOG.error(_LE("No physical switch found '%s' "), switch_mac_id)
            self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
        phys_port = db.get_bnp_phys_port(self.context,
                                         bnp_switch.id,
                                         port_id)
        if not phys_port:
            LOG.error(_LE("No physical port found for '%s' "), phys_port)
            self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
        if bnp_switch.status != constants.SWITCH_STATUS['enable']:
            LOG.error(_LE("Physical switch is not Enabled '%s' "),
                      bnp_switch.status)
            self._raise_ml2_error(wexc.HTTPBadRequest, 'create_port')
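
One pattern that commonly resolves this symptom (offered as a hedged sketch,
not a confirmed diagnosis of the code above): don't reuse the request-scoped
context.session inside the periodic looping call; give each tick its own
session and transaction. neutron.db.api.get_session and oslo_service's
FixedIntervalLoopingCall are real APIs of this era, but the class below is
purely illustrative:

    from neutron.db import api as db_api
    from oslo_service import loopingcall

    class PeriodicSyncMixin(object):
        """Sketch: periodic DB work on its own session, not the
        request-scoped self.context.session owned by the ML2 plugin."""

        def start_sync(self, interval=30):
            timer = loopingcall.FixedIntervalLoopingCall(self._sync_tick)
            timer.start(interval=interval)

        def _sync_tick(self):
            session = db_api.get_session()       # fresh session per tick
            with session.begin(subtransactions=True):
                # All periodic reads/writes go through *this* session
                # rather than self.context.session, which another request
                # may still be using (and which ends up 'prepared'
                # mid-commit when the server is stopped/started).
                pass  # e.g. session.query(...), session.add(...), etc.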

On Mon, Jan 11, 2016 at 2:47 PM, Anna Kamyshnikova <
akamyshnik...@mirantis.com> wrote:

> Hi!
>
> Can you point out which mechanism driver this is, or the piece of code that
> gives this error?
>
> On Mon, Jan 11, 2016 at 11:58 AM, Koteswar  wrote:
>
>> Hi All,
>>
>>
>>
>> In my mechanism driver, I am reading from and writing to the SQL DB in a
>> fixed-interval looping call. Sometimes I get the following error when I
>> stop and start the neutron server:
>>
>> InvalidRequestError: This session is in 'prepared' state; no further SQL
>> can be emitted within this transaction.
>>
>>
>>
>> I am using context.session.query() for add, delete, update and get.
>> Please help me if anyone has resolved an issue like this.
>>
>>
>>
>> Full trace is as follows:
>>
>> 2016-01-06 15:33:21.799 ERROR neutron.plugins.ml2.managers [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin 83b5358da62a407f88155f447966356f] Mechanism driver 'hp' failed in create_port_precommit
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers Traceback (most recent call last):
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 394, in _call_on_drivers
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers   File "/usr/local/lib/python2.7/dist-packages/baremetal_network_provisioning/ml2/mechanism_hp.py", line 67, in create_port_precommit
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers     raise e
>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.
>> 2016-01-06 15:33:21.901 ERROR neutron.api.v2.resource [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin 83b5358da62a407f88155f447966356f] create failed
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource Traceback (most recent call last):
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     result = method(request=request, **args)
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 516, in create
>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     obj = do_create(body)
>> 2016-01-06

Re: [openstack-dev] [Glance] [Keystone][CI]

2016-01-11 Thread Kairat Kushaev
Hi bharath,
I think it is better to file a bug about this and discuss the issue there;
I am not sure that the OpenStack dev mailing list is the appropriate place to
discuss bugs.

BTW, your trouble looks similar to
https://bugs.launchpad.net/devstack/+bug/1515352. Could you please try the
workaround described in that bug?

Best regards,
Kairat Kushaev

On Mon, Jan 11, 2016 at 9:50 AM, bharath  wrote:

> Hi,
>
>
> Facing "Could not determine a suitable URL for the plugin " Error during
> stacking.
> issue is seen only after repeated stack , unstack in a setup.
> It will be fine 10 to 15 CI runs later it is starting throwing below error.
> I am able to consistently reproduce it
>
> 2016-01-11 06:28:06.724 | + '[' bare = bare ']'
> 2016-01-11 06:28:06.724 | + '[' '' = zcat ']'
> 2016-01-11 06:28:06.724 | + openstack --os-cloud=devstack-admin image
> create cirros-0.3.2-x86_64-disk --public --container-format=bare
> --disk-format qcow2
> 2016-01-11 06:28:07.308 | Could not determine a suitable URL for the plugin
> 2016-01-11 06:28:07.330 | + exit_trap
> 2016-01-11 06:28:07.330 | + local r=1
> 2016-01-11 06:28:07.331 | ++ jobs -p
> 2016-01-11 06:28:07.331 | + jobs=
>
>
> Thanks,
> bharath
>


Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Victor Stinner

On 11/01/2016 10:37, Thierry Carrez wrote:

Joshua Harlow wrote:

[...]
So I'd def help keep prettytable going, of course another option is to
move to https://pypi.python.org/pypi/tabulate (which does seem active
and/or maintained); tabulate provides pretty much the same thing
(actually more table formats @
https://pypi.python.org/pypi/tabulate#table-format ) as prettytable,
and the API is pretty much the same (or nearly).

So that's another way to handle this (just to move off prettytable
entirely).


This sounds like a reasonable alternative...



IMHO, contributing to an actively developed library (tabulate) seems
more productive than starting to maintain a second library which is
currently no longer maintained.


Does anyone know how much code should be modified to replace prettytable 
with tabulate on the whole OpenStack project?


--

I don't like the global trend in OpenStack of creating a new community
separated from the Python community. In general, OpenStack libraries
have too many dependencies, and it's harder to contribute to other
projects. Gerrit is less popular than GitHub, and OpenStack requires
signing a contributor agreement. I would prefer to stop moving things into
OpenStack and continue to contribute to existing projects, as we already do.


(I also know why projects are moved into the OpenStack "big tent"; there are
some good arguments.)


Well, it's my feeling that the OpenStack and Python communities are
split; maybe I'm wrong ;-) I just want to avoid what happened in
Zope: a lot of great code and great libraries, but too many dependencies,
and in the end a different community.


Victor



Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Ihar Hrachyshka

Ihar Hrachyshka  wrote:


Sean M. Collins  wrote:


Just a quick update where we are:

Increasing the verbosity of the SSH session into the instance that is
created during the cinder portion is showing that we are actually
connecting to the instance successfully. We get the dropbear SSH banner,
but then the instance hangs. Eventually SSH terminates the connection, 5
minutes later.

http://logs.openstack.org/35/187235/12/experimental/gate-grenade-dsvm-neutron-multinode/984e651/logs/grenade.sh.txt.gz#_2016-01-08_20_13_40_040


As per [1], this could be related to the MTU on the interface. Do we configure
the MTU on external devices to accommodate tunnelling headers?


As per [2] neutron server logs, the network in question is vxlan.

If that’s indeed the MTU issue, then since Cirros does not support the DHCP
MTU option documented for ml2 at [3], I don’t know how to validate
whether it’s really the issue.


UPD: ^ that’s actually not true; CirrOS has supported the option since 0.3.3 [1]
(and we use 0.3.4 [2]), so let’s try to enforce it inside neutron and see
whether it helps: https://review.openstack.org/265759


[1] https://bugs.launchpad.net/cirros/+bug/1301958
[2]  
http://logs.openstack.org/35/187235/11/experimental/gate-grenade-dsvm-neutron-multinode/a5af283/logs/grenade.sh.txt.gz#_2015-11-30_18_57_46_067






Also, what’s the underlying infrastructure that is available in gate?  
Does it allow vlan for tenant networks? (We could enforce vlan for the  
network and see whether it fixes the issue.)


[1] https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1254085
[2]  
http://logs.openstack.org/35/187235/11/experimental/gate-grenade-dsvm-neutron-multinode/a5af283/logs/old/screen-q-svc.txt.gz#_2015-11-30_19_28_44_685
[3]  
https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/7/networking-guide/chapter-16-configure-mtu-settings


Ihar



[openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Koteswar
Hi All,



In my mechanism driver, I am reading from and writing to the SQL DB in a
fixed-interval looping call. Sometimes I get the following error when I stop
and start the neutron server:

InvalidRequestError: This session is in 'prepared' state; no further SQL
can be emitted within this transaction.



I am using context.session.query() for add, delete, update and get. Please
help me if anyone has resolved an issue like this.



Full trace is as follows:

2016-01-06 15:33:21.799 ERROR neutron.plugins.ml2.managers [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin 83b5358da62a407f88155f447966356f] Mechanism driver 'hp' failed in create_port_precommit
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers Traceback (most recent call last):
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 394, in _call_on_drivers
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers   File "/usr/local/lib/python2.7/dist-packages/baremetal_network_provisioning/ml2/mechanism_hp.py", line 67, in create_port_precommit
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers     raise e
2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.
2016-01-06 15:33:21.901 ERROR neutron.api.v2.resource [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin 83b5358da62a407f88155f447966356f] create failed
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource Traceback (most recent call last):
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 516, in create
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     obj = do_create(body)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 498, in do_create
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     request.context, reservation.reservation_id)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 491, in do_create
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     return obj_creator(request.context, **kwargs)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource   File

Re: [openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Anna Kamyshnikova
Hi!

Can you point what mechanism driver is this or the piece of code that give
this error?
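
In case it helps while you gather that: this error usually appears when one
Session object is shared between the looping call and the API workers, so SQL
gets emitted while the session is in the middle of committing. A minimal
sketch of the usual fix, with a hypothetical HpPort model standing in for
whatever table the driver polls:

    from neutron import context as n_context

    def _periodic_sync(self):
        # Build a fresh admin context (and hence a fresh session) on every
        # iteration of the looping call, instead of reusing one session that
        # the API workers may also be touching; sharing a session across
        # threads/greenlets is a classic way to hit the 'prepared' state error.
        ctx = n_context.get_admin_context()
        with ctx.session.begin(subtransactions=True):
            for port in ctx.session.query(HpPort).all():  # HpPort: hypothetical model
                pass  # read/update rows inside this scoped transaction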

On Mon, Jan 11, 2016 at 11:58 AM, Koteswar  wrote:

> Hi All,
>
>
>
> In my mechanism driver, I am reading/writing into sql db in a fixed
> interval looping call. Sometimes I get the following error when I stop and
> start neutron server
>
> InvalidRequestError: This session is in 'prepared' state; no further SQL
> can be emitted within this transaction.
>
>
>
> I am using context.session.query() for add, delete, update and get. Please
> help me if any one resolved an issue like this.
>
>
>
> Full trace is as follows:
>
> [full traceback snipped -- identical to the trace in the original message above]

Re: [openstack-dev] [stable] [horizon] Horizon stable and fast increasing Django releases

2016-01-11 Thread Thierry Carrez

Thomas Goirand wrote:

[...]
Best would be if this kind of cap was only a temporary solution until we
really fix the issues. If possible (I perfectly know that in some case
this may be difficult), I'd like to have Horizon stable to contain
backports for Django last release (currently 1.9), instead of just
capping the requirements. Otherwise, Horizon becomes the single piece in
all of OpenStack where I spend all of my maintainer's time, which really
isn't good (there's lots of work to be done in other fields).

Your thoughts? Am I the only one interested in this? Let's say I'm the
only one interested in it (I hope that's not the case): if I do the work,
will the patches be accepted in the stable branch?


If the backports are done in a way that fixes issues for Django 1.9 
users without changing code paths for <1.9 users, I guess that those 
backports could be accepted. Sounds like a good topic to add to the 
stable team meeting agenda if you want a more definitive answer to that 
question.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][stackalytics] some projects not visible in stackalytics now

2016-01-11 Thread joehuang
Hi, Thierry,

Thanks for the explanation. One question: will these projects (like Kingbird,
Tacker, Tricircle) still be listed on stackalytics.com, perhaps not under the
"OpenStack others" category but under "Others"?

It's also important for the Technical Committee and others to know the
contributions of a project; that is especially useful when the project asks
for TC approval.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Saturday, January 09, 2016 12:37 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra][stackalytics] some projects not visible in 
stackalytics now

Zhipeng Huang wrote:
> Does this mean big tent projects = TC approved? My understanding was 
> that projects migrated from stackforge to OpenStack infra are now 
> OpenStack big tent projects, but not TC-approved OpenStack projects.

No. As was made pretty clear when Stackforge was decommissioned (but apparently 
not clear enough), being hosted under the OpenStack infrastructure does not 
equate to being an official OpenStack project (under the "big tent"). To become an 
OpenStack project, you need to help with the OpenStack Mission and follow the 
OpenStack way, and then apply to the Technical Committee for recognition.

Only TC-recognized projects are allowed to use the OpenStack name in 
conjunction with their project name, so calling them "openstack others" 
projects is misleading.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Thierry Carrez

Joshua Harlow wrote:

[...]
So I'd def help keep prettytable going, of course another option is to
move to https://pypi.python.org/pypi/tabulate (which does seem active
and/or maintained); tabulate provides pretty much the same thing
(actually more table formats @
https://pypi.python.org/pypi/tabulate#table-format ) than prettytable
and the api is pretty much the same (or nearly).

So that's another way to handle this (just to move off prettytable
entirely).


This sounds like a reasonable alternative...
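
For anyone who hasn't used tabulate, the overlap Joshua mentions is easy to
see (a minimal sketch with made-up table data):

    from prettytable import PrettyTable
    from tabulate import tabulate

    rows = [["nova", 120], ["glance", 45]]

    # prettytable: build a table object, add rows, print it
    pt = PrettyTable(["project", "reviews"])
    for row in rows:
        pt.add_row(row)
    print(pt)

    # tabulate: one function call, pick a format by name
    print(tabulate(rows, headers=["project", "reviews"], tablefmt="psql"))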

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][stackalytics] some projects not visible in stackalytics now

2016-01-11 Thread Ilya Shakhat
Hi everyone!

It appears that the Stackalytics host is banned by review.openstack.org. It
gets "Received disconnect from 2001:4800:7818:102:be76:4eff:fe05:9b12: 7:
Too many concurrent connections" while attempting to execute the "gerrit"
command. As a consequence, Stackalytics fails to retrieve the list of
projects (so only those from the governance list are visible), and review
stats are not updated.

Infra folks, could you please check on the Gerrit side whether my hypothesis
is valid? Is there anything to be done in Stackalytics to reduce the load on
review.o.o?

Thanks,
Ilya

PS.
The project list will be fixed soon by https://review.openstack.org/#/c/265730/

2016-01-07 18:39 GMT+03:00 Jay Pipes :

> Hi everyone, and happy New Year!
>
> Ilya and a number of other Mirantis folks are currently on holiday until
> the 10th. I'm sure he will address the issues promptly upon his return.
>
> All the best,
> -jay
>
> On 01/07/2016 09:34 AM, Michał Dulko wrote:
>
>> On 01/07/2016 02:59 PM, Shake Chen wrote:
>>
>>> It seems some projects have not been updated for many days, like
>>>
>>> http://stackalytics.com/?project_type=openstack=kolla-group
>>>
>>> the data is still stuck at 02 Jan 2016
>>>
>>>
>> And I can report that for some users (like me [1]) actions have not been
>> tracked since the beginning of 2016. I can assure you that I've done quite
>> a few reviews and got some patches merged.
>>
>> [1]
>>
>> http://stackalytics.com/?metric=marks=mitaka_id=michal-dulko-f
>>
>> ___
>> OpenStack-Infra mailing list
>> openstack-in...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-01-11 Thread Radomir Dopieralski

On 01/08/2016 09:51 PM, Mike Bayer wrote:



On 01/08/2016 04:44 AM, Radomir Dopieralski wrote:

On 01/07/2016 05:55 PM, Mike Bayer wrote:


but also even if you're under something like
mod_wsgi, you can spawn a child process or worker thread regardless.
You always have a Python interpreter running and all the things it can
do.


Actually you can't, reliably. Or, more precisely, you really shouldn't.
Most web servers out there expect to do their own process/thread
management and get really embarrassed if you do something like this,
resulting in weird stuff happening.


I have to disagree with this as an across-the-board rule, partially
because my own work in building an enhanced database connection
management system is probably going to require that a background thread
be running in order to reap stale database connections.   Web servers
certainly do their own process/thread management, but a thoughtfully
organized background thread in conjunction with a supporting HTTP
service allows this to be feasible.   In the case of mod_wsgi,
particularly when using mod_wsgi in daemon mode, spawning of threads,
processes and in some scenarios even wholly separate applications are
supported use cases.


[...]


It is certainly reasonable that not all web application containers would
be effective with apps that include custom background threads or
processes (even though IMO any system that's running a Python
interpreter shouldn't have any issues with a limited number of
well-behaved daemon-mode threads), but at least in the case of mod_wsgi,
this is supported; that gives Openstack's HTTP-related applications with
carefully/thoughtfully organized background threads at least one
industry-standard alternative besides being forever welded to its
current homegrown WSGI server implementation.
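
Concretely, the kind of "well-behaved daemon-mode thread" being described is
something like this (a sketch; the pool object and its reaping method are
hypothetical, not actual oslo.db code):

    import threading
    import time

    def start_reaper(pool, interval=30):
        def _reap():
            while True:
                time.sleep(interval)
                pool.dispose_stale()  # hypothetical: close idle/stale connections

        t = threading.Thread(target=_reap)
        t.daemon = True  # daemon thread: never blocks interpreter shutdown
        t.start()
        return t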


This is still writing your application for a specific configuration of a 
specific version of a specific implementation of the protocol on a 
specific web server. While this may work as a stopgap solution, I think
it's a really bad long-term strategy. We should be programming for a 
protocol specification (WSGI in this case), not for a particular 
implementation (unless we need to throw in workarounds for 
implementation bugs). This way of thinking led to the trouble we have 
right now, and the fix is not to change the code to exploit another 
specific implementation, but to rewrite it so that it works on any 
compatible web server. If possible.


At least it seems so to my naive programmer mind. Sorry for ranting,
I'm sure that you are aware of the trade-off here.

--
Radomir Dopieralski

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Service password storage

2016-01-11 Thread Levin
Dear openstack developers,
I installed openstack via devstack recently, and I found out that the
admin passwords for services like cinder and nova are stored in plain
text in their /etc/*/*.conf files. These files are -rw-r--r-- by
default, which I believe to be a pretty serious security risk. Is this
intended, and/or configurable pre-install?
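
For reference, a quick way to confirm what I'm seeing (a sketch; the paths
assume a default devstack layout):

    import os
    import stat

    # flag any service config file that is readable by "other"
    for path in ("/etc/nova/nova.conf", "/etc/cinder/cinder.conf"):
        mode = os.stat(path).st_mode
        if mode & stat.S_IROTH:
            print("%s is readable by any local user" % path)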

Greetings,
ih3



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Heads up for decomposed plugin break

2016-01-11 Thread Ihar Hrachyshka

Sean M. Collins  wrote:


On Fri, Jan 08, 2016 at 07:50:47AM PST, Chris Dent wrote:

On Fri, 8 Jan 2016, Gary Kotton wrote:


The commit
https://github.com/openstack/neutron/commit/5d53dfb8d64186b5b1d2f356fbff8f222e15d1b2
may break the decomposed plugins that make use of the method
_get_tenant_id_for_create


Just out of curiosity, is it not standard practice that a plugin
shouldn't use a private method?


+1 - hopefully decomposed plugins will audit their code and look for
other calls to private methods.


The fact that it broke the *aas repos too suggests that we were not setting a  
proper example for those decomposed. I think it is reasonable to restore  
the method until N, with a deprecation message, as Gary suggested in his  
patch. Especially since there is no actual burden to keeping the method for  
another cycle.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Ihar Hrachyshka

Sean M. Collins  wrote:


Just a quick update where we are:

Increasing the verbosity of the SSH session into the instance that is
created during the cinder portion is showing that we are actually
connecting to the instance successfully. We get the dropbear SSH banner,
but then the instance hangs. Eventually SSH terminates the connection, 5
minutes later.

http://logs.openstack.org/35/187235/12/experimental/gate-grenade-dsvm-neutron-multinode/984e651/logs/grenade.sh.txt.gz#_2016-01-08_20_13_40_040


As per [1], this could be related to the MTU on the interface. Do we configure  
the MTU on external devices to accommodate tunnelling headers?


As per [2] neutron server logs, the network in question is vxlan.

If that’s indeed the MTU issue: since Cirros does not support the DHCP MTU  
option documented for ml2 at [3], I don’t know how to validate this directly.
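
One crude check that doesn’t depend on the guest honouring DHCP options (a
sketch; the instance IP is a placeholder): ping across the tunnel with the
don’t-fragment bit set and see whether full-size frames make it.

    import subprocess

    def path_mtu_ok(ip, payload=1472):
        # 1472 = 1500 - 20 (IP header) - 8 (ICMP header); assumes Linux
        # iputils ping, whose -M do forbids fragmentation
        return subprocess.call(
            ["ping", "-c", "3", "-M", "do", "-s", str(payload), ip]) == 0

    print(path_mtu_ok("10.0.0.5"))  # placeholder instance IP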


Also, what’s the underlying infrastructure that is available in gate? Does  
it allow vlan for tenant networks? (We could enforce vlan for the network  
and see whether it fixes the issue.)


[1] https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1254085
[2]  
http://logs.openstack.org/35/187235/11/experimental/gate-grenade-dsvm-neutron-multinode/a5af283/logs/old/screen-q-svc.txt.gz#_2015-11-30_19_28_44_685
[3]  
https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/7/networking-guide/chapter-16-configure-mtu-settings


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh][vitrage] The purpose of notification about alarm updating

2016-01-11 Thread Julien Danjou
On Sun, Jan 10 2016, AFEK, Ifat (Ifat) wrote:

> I'm a bit confused. This thread started by liusheng asking to remove the
> notifications to the bus[1], and I replied saying that we do want to use this
> capability in Vitrage. What is the notifier plugin that you suggested? Is it
> the same notifier that liusheng referred to, or something else?

The thread is about removing oslo.messaging notifications on alarm
update (creation, deletion, etc).

What you want is to receive an oslo.messaging notification on alarm
triggering. This does not exist yet, but it should be an aodh-notifier
plugin to be written – which should be pretty straightforward.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Seeking FFE for "Prevention of Unauthorized errors in Swift driver"

2016-01-11 Thread Kairat Kushaev
Hello Glance Team!
I would like to request FFE for the following spec:
https://review.openstack.org/#/c/248681/.

The change allows users not to worry about token expiration when
uploading/downloading big images to/from the swift backend, which is pretty
useful IMO. The change is not visible to end users, and it affects only the
case when the token expires during image upload/download. This makes the
change less risky.

Best regards,
Kairat Kushaev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] help needed: volunteers for bug skimming (1 week)

2016-01-11 Thread Markus Zoeller
Augustina Ragwitz  wrote on 01/08/2016 07:50:23 PM:

> From: Augustina Ragwitz 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/08/2016 08:00 PM
> Subject: Re: [openstack-dev] [nova][bugs] help needed: volunteers for 
> bug skimming (1 week)
> 
> I signed up for the week before the Nova Midcycle meeting. Even though 
> I'm still new, I feel capable of at least getting the tags mostly right. 

> When I'm not quite sure which tag it would fall under, I search for 
> similar bugs and see how they were tagged.

Thanks for signing up! 
That's a valid approach.

> I did have one clarifying question, what exactly are the expectations of 

> the bug skimming duty? I assumed it was just tagging bugs to get them 
> into the appropriate queues for triaging. Do we also need to confirm the 

> bugs once they've been tagged or does that fall under "triage" and not 
> "skimming"?
> 
> Thanks!
> Augustina

In short, do as much as possible before the expertise of the subteams
is needed. Also, if you spot potential critical bugs, shout out in the 
#openstack-nova channel (for me or one of the core reviewers [1]).

As a longer clarification, the duty includes:
* Switch the bug to "incomplete" when crucial information is missing
  and ask the reporter to provide more information. This includes:
* steps to reproduce
* the version of Nova and the novaclient (or os-client)
* logs (on debug level)
* environment details depending on the bug
* libvirt/kvm versions, VMWare version, ...
* storage type (ceph, LVM, GPFS, ...) 
* network type (nova-network or neutron)
  I subscribe myself to bugs when I switch them to "incomplete" to see
  when responses come in. See "You are not directly subscribed to this 
  bug's notifications." on the right hand side of a Launchpad bug report.
* Close as "invalid" if it is a support request or feature request.
* Switch to "confirm" if you could reproduce the described issue. This
  is not always possible for you because of missing resources like a 
  ceph storage or something like that. 
* Add a tag (or more tags) if the report allows you to narrow down the
  area which potentially contains the issue. This should be the entry
  point for subteams to dig deeper to find the root cause and potential
  solutions.
* Bring critical bugs to the attention of the other contributors. The
  #openstack-nova channel and/or a ML post is useful.

I usually explained in a comment *why* I changed a status and which
next steps I expect from whom. For example, if I switch to "incomplete",
tell the reporter to add this and that piece of information and to switch
the bug back to "new" when the reporter has done this. Another example, 
if it was a feature request, I switched to "invalid" and added the links
to the blueprint process.

I used the term "skimming" because each day there are new bugs that
nobody has looked at yet. They "float" on top of all the older bug reports 
which should have been looked at before. 

I see the whole process in 3 levels:
* level 1: bug skimming duty => keep the input sane, prepares the
   report for level 2
* level 2: subteam digs deeper, finds the issue, proposes solution 
   ideas for level 3
* level 3: contributor provides a change in gerrit based on level 2

Does this clarify some of the open questions?

References:
[1] Nova core reviewers list:
https://review.openstack.org/#/admin/groups/25,members

Regards, Markus Zoeller (markus_z)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][cfg] Benefit of MultiStrOpt over ListOpt

2016-01-11 Thread Markus Zoeller
John Stanford  wrote on 01/10/2016 10:49:08 PM:

> From: John Stanford 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/10/2016 10:55 PM
> Subject: Re: [openstack-dev] [oslo][cfg] Benefit of MultiStrOpt over 
ListOpt
> 
> Hi Dims,
> 
> Thanks for the ref.  I filed a couple tickets for nova and cinder. 
> Looks like neutron has mostly done away with them.
> 
> Best regards,
> 
> John

Would you add the ticket numbers here please? I work on the Nova config
options and would like to be aware of those bug reports.
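
(For anyone skimming the thread, the practical difference between the two
option types is roughly this; a minimal sketch with made-up option names:)

    from oslo_config import cfg

    opts = [
        # ListOpt: a single line, comma-separated -> enabled_backends = lvm,ceph
        cfg.ListOpt('enabled_backends', default=[]),
        # MultiStrOpt: the key may be repeated, one value per line ->
        #   extension = foo
        #   extension = bar
        cfg.MultiStrOpt('extension'),
    ]
    cfg.CONF.register_opts(opts)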

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][stackalytics] some projects not visible in stackalytics now

2016-01-11 Thread Thierry Carrez

joehuang wrote:

Thanks for the explanation. One question: will these projects (like Kingbird, Tacker, Tricircle) 
still be listed on stackalytics.com, perhaps not under the "OpenStack others" category but under 
"Others"?

It's also important for the Technical Committee and others to know the contributions of 
a project; that is especially useful when the project asks for TC approval.


I totally agree. Stackalytics contains data for non-OpenStack projects 
anyway (see under "complementary"), so I think it's fine to list 
unofficial projects under "Others" or "Unofficial".


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Updating XStatic packages

2016-01-11 Thread Richard Jones
In terms of process, obviously updating the data in an xstatic package is
relatively easy (for most packages), but I wonder from a review standpoint
what sort of process we should have here. To approve a merge of an xstatic
package update, I feel like I should have, in order of preference:

1. a link to the change notes for the project in question so I can see what
the impact of the update is, or
2. a list of those changes in the bug or change message for the xstatic
update (i.e. the relevant information from #1), and
3. an explanation of what existing problems are being solved in Horizon
through the update (if any) or what impact there will be on Horizon with
the upgrade (i.e. an examination of #2 in the context of Horizon).

I know that we often need to justify package updates to downstream
packagers, and having this information will certainly help our case. Or is
this all a bit too much burden?


 Richard

On 7 January 2016 at 04:36, Rajat Vig  wrote:

> I did update some of the low hanging packages where the upgrades seemed
> safe to do and noted the patch numbers in the same EtherPad.
>
> I wasn't sure what to mark the effort against. So I created some bugs.
> Should it be a blueprint?
>
> -Rajat
>
> On Wed, Jan 6, 2016 at 2:18 AM, Rob Cresswell (rcresswe) <
> rcres...@cisco.com> wrote:
>
>> Hi all,
>>
>> While the automated system is broken, I’d like to work on manually
>> releasing a few of the XStatic packages. This will *only* be the release
>> stage; we will still use gerrit to review the package content as usual.
>>
>> List of packages current versions, and their upstreams, can be found
>> here: https://etherpad.openstack.org/p/horizon-libs
>>
>> If anyone has spare time, please consider investigating some of the
>> dependencies listed above. To update an XStatic package, propose a change
>> to its repo as you would with Horizon; they are all in the OpenStack
>> namespace. For example, Xstatic-Angular can be found at
>> http://git.openstack.org/cgit/openstack/xstatic-angular/
>>
>> Thanks,
>> Rob
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Routing in Horizon

2016-01-11 Thread Richard Jones
The  tag addition has had to be reverted as it broke other parts of
the application (notably lazy loaded tabs like Instance Details), sadly.

Regarding which router to use - I've used the built-in router in the past
quite successfully. I think I'd want to see a solid reason for using a 3rd
party one. It could be that nested routes are part of that, but I kinda see
that as a convenience thing, unless I'm missing something core to how
routing is planned to be done. Then again we really haven't had any
discussion about how we see routing work. The one patch that's up that uses
routing seems perfectly able to do so without needing extended capabilities
of 3rd party routers.


 Richard


On 12 January 2016 at 09:44, Tripp, Travis S  wrote:

> Rajat,
>
> Thanks for starting some discussion here.  I think your target=“_self” is
> taken care of now, right?
>
> Re: ng-route vs ui-router - everything I have ever seen or been told
> supports using ui-router instead of ng-route.  I know a quick google search
> supports that, but let me google that for all of us and give several
> references:
>
> http://www.funnyant.com/angularjs-ui-router/
> http://www.amasik.com/angularjs-ngroute-vs-ui-router/
>
> http://stackoverflow.com/questions/21023763/angularjs-difference-between-angular-route-and-angular-ui-router
>
> http://www.pearpages.com/javascript/angularjs/routing/2015/10/13/ngroute-vs-ui-router.html
>
> http://stackoverflow.com/questions/32523512/ui-router-vs-ngroute-for-sinlge-page-app
>
> So, I’m wondering if there’s been any discussion I missed on why not to
> bring in ui-router?
>
> Of course there is the question of using the new router in Angular vs
> ui-router, but finding many pros and cons on that seems to be a bit more
> difficult. Since it is 1.5 / 2.0 and neither is past rc / beta, it doesn’t
> seem like something we can debate too well.
>
> https://angular.github.io/router/getting-started
>
> http://www.angulardaily.com/2015/12/angularintroduction-to-ngnewrouter-vs.html
>
> Thanks,
> Travis
>
> From: Rajat Vig >
> Reply-To: OpenStack List >
> Date: Thursday, January 7, 2016 at 1:53 AM
> To: OpenStack List >
> Subject: [openstack-dev] [Horizon] Routing in Horizon
>
> Hi Everyone
>
> One of my recent patches, which enabled HTML5 based routing via Angular,
> merged, and some interesting things spun out of it.
> I had to scramble a few patches to get things back the same way
> for all new Angular Panels.
>
> I also realized that getting Horizon to an SPA with Angular requires more
> thought than merely fixing the current burning issue.
> This mail's intent is to spur a direction on how we do routing in Angular
> and how Angular Panels go back and forth between themselves and older
> Django panels.
>
> The patch https://review.openstack.org/#/c/173885/ is possibly the first
> of many to use Angular based routing.
> It currently uses ngRoute as the library was included in the
> xstatic-angular package.
>
> What I'm roughly thinking to solve some of the immediate issues (there's
> possbily much more that I'm not)
>
> 1. If we are going to go with the SPA route, then all Panels need to
> indicate that they are Angular based.
> Panels that are Angular need to declare the routes they'd like to
> manage.
>
> 2. All tags on Angular Panels (including header, sidebar, footer) which
> don't route to Angular Panels will need the attribute target="_self", else
> Angular will not allow navigation to those links.
>
> All sidebar links currently have the attribute set, but headers and footers
> don't.
> Sidebar links for Angular shouldn't have the attribute, else SPA-like
> behavior will not happen.
> This will need to be documented.
>
> 3. That implies yet another problem with the Spinner Modal, which gets
> activated on all sidebar clicks.
> It'll need to be done differently for Angular routing vs hrefs still
> handled by Django.
> The current spinner relies on a browser page refresh to disappear.
>
> Then there's ngRoute.
> Routing in Angular may need to handle route conflicts and maybe nested
> routes.
> There are other, possibly better options that we could consider:
> 1. https://github.com/angular-ui/ui-router
> 2. https://angular.github.io/router/
>
> I've been part of the community for not long enough yet and I'm not yet
> completely aware of what exists outside the Horizon codebase so I might be
> missing somethings as well.
>
> Regards
> Rajat
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__

[openstack-dev] How to get the current stable release

2016-01-11 Thread Christian Berendt
Sorry if this is a stupid question and I am wasting your time. I am looking 
for a file that specifies the name and version number of the latest 
stable release.


I have not found this information in the governance repository (I only 
found a hard-coded variable called latest_stable_branch in the 
tools/stable.py script, using kilo as the current value 
(https://github.com/openstack/governance/blob/master/tools/stable.py#L21-L24)).


In the releases repository is a note in the doc/source/index.rst file (I 
think this is the source of http://docs.openstack.org/releases/) that 
releases/liberty is the current stable release.


Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][security] New BP for anti brute force in keystone

2016-01-11 Thread Youwenwei
I have registered a new blueprint for keystone to add protection against
brute-force attacks.


Problem Description:
Attacks on accounts are increasing in the cloud. An attacker steals account
credentials by guessing the password through brute force, so the ability to
protect accounts against brute force is necessary.

Proposed Change:
1. Add two configuration properties to keystone: a threshold for the number
of consecutive password failures, and the length of time an account stays
locked once that threshold is reached.
2. Add two properties to the user information: the count of consecutive
password failures and the time of the last failure. When the consecutive
failures for an account reach the threshold, the account is locked for the
configured time.
3. A locked account unlocks automatically when the lock period times out.
4. For the keystone APIs that authenticate with user_name and password, the
response will include an error description when the account is locked.

https://blueprints.launchpad.net/keystone/+spec/anti-brute-force
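
To make item 1 concrete, the two properties could look roughly like this (a
sketch; the option and group names are illustrative, not part of the
blueprint):

    from oslo_config import cfg

    opts = [
        cfg.IntOpt('max_password_failures',
                   help='Consecutive password failures allowed before an '
                        'account is locked.'),
        cfg.IntOpt('account_lock_duration',
                   help='Seconds an account stays locked after reaching '
                        'the failure threshold.'),
    ]
    cfg.CONF.register_opts(opts, group='security')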


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] list of blockers to deploy mitaka

2016-01-11 Thread Emilien Macchi
Hi,

I've listed the blockers I've found out when trying to deploy our Puppet
modules with Mitaka:
https://etherpad.openstack.org/p/puppet-openstack-ci-mitaka

Until we don't fix all those issues, we can't bump our CI to Mitaka, so
we really need to make progress if we want to synchronize our release
with other OpenStack projects.

Any contribution is welcome,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova quota-show questions

2016-01-11 Thread neetu jain
https://bugs.launchpad.net/python-novaclient/+bug/1337990

nova quota-show seems to have a couple of issues

1) showing admin quotas (even if the current tenant is demo)
   - this can be solved by the fix proposed in
https://review.openstack.org/#/c/265971/, I think

2) But the problem is how to display tenant-level quotas, as pointed out in
the review, if the user should default to the current user. Should that be a
separate flag, e.g. "--tenant-quota"? What's the best way to maintain
backward compatibility of showing tenant-level quotas while still
addressing the vulnerability that demo can see admin's quotas?

3) There are also bugs around incorrect tenant names. How do I look up a
tenant name to map it to its ID in shell.py without making a keystone call?
Any pointers?


Neetu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-01-11 Thread Alex Xu
Sorry for sending this mail late. We have our weekly Nova API meeting today.
The meeting is held Tuesdays at 1200 UTC.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] virtual midcycle dates

2016-01-11 Thread Jim Rollenhagen
Hi all,

Here's a list of potential dates for our midcycle; please note which you
would be able to attend.

http://doodle.com/poll/2gnvq6eee3a6dfbk

Note that since we're including people from lots of time zones, this
will be basically running for much of the day. We'll have to coordinate
sessions with who is interested, and maybe do dual sessions for big
items, and consolidate thoughts into something coherent.

People may need to get up really early or stay online really late to
attend some of these sessions. Sorry for that; that's the nature of a
virtual thing like this. Note that you aren't required to do so (since
I'm not your boss), but the more the merrier. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options: IRC meeting at Jan. 11th

2016-01-11 Thread Esra Celik
Hi Markus 

I am sorry that I missed the meeting. But it seems sfinucan already asked 
what was on my mind, like getting the patches merged faster. 
From the logs I finally understood what the "single point of entry for sample 
config generation" is for. So I will wait for that to be merged and then 
rebase my patches [1],[2],[3], is that ok? 
Now should I work on some other patches? The nova/compute/manager.py options 
seem nice :) 
Or should I focus on later patches that will improve the help texts? 

[1] https://review.openstack.org/#/c/254092/ 
[2] https://review.openstack.org/#/c/255124/ 
[3] https://review.openstack.org/#/c/260181/ 

Best Regards, 
esracelik 

- Orijinal Mesaj -

> Kimden: "Markus Zoeller" 
> Kime: "OpenStack Development Mailing List (not for usage questions)"
> 
> Gönderilenler: 11 Ocak Pazartesi 2016 17:47:23
> Konu: Re: [openstack-dev] [nova] config options: IRC meeting at Jan. 11th

> Markus Zoeller/Germany/IBM@IBMDE wrote on 01/08/2016 03:21:28 PM:

> > From: Markus Zoeller/Germany/IBM@IBMDE
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 01/08/2016 03:22 PM
> > Subject: [openstack-dev] [nova] config options: IRC meeting at Jan. 11th
> >
> > First things first, I'd like to thank those people who are contributing
> > to this task very much! It's good to see the progress we already made
> > and that the knowledge of this area is now spread amongst more people.
> >
> > The second thing I'd like to talk about is the next steps.
> > It would be great if we could have a short realtime conversation in IRC
> > to:
> > * make clear that the help text changes are necessary for a patch series
> > * clarify open questions from your side
> > * discuss the impact of [1] which should be the last piece in place
> > which reduces the merge conflicts we faced to a very minimum.
> >
> > Let's use the channel #openstack-meeting-3 at coming Monday,
> > January the 11th, at 15:00 UTC for that.
> >
> > References:
> > [1] "single point of entry for sample config generation"
> > https://review.openstack.org/#/c/260015
> >
> > Regards, Markus Zoeller (markus_z)

> In case you missed it (it was on short notice), here are the logs:
> http://eavesdrop.openstack.org/meetings/nova_config_options/2016/nova_config_options.2016-01-11-15.01.log.html

> Let me know if you have questions.

> Regards, Markus Zoeller (markus_z)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable] pbr release 0.11.1 (kilo)

2016-01-11 Thread doug
We are jubilant to announce the release of:

pbr 0.11.1: Python Build Reasonableness

This release is part of the kilo stable release series.

With source available at:

http://git.openstack.org/cgit/openstack-dev/pbr

With package available at:

https://pypi.python.org/pypi/pbr

For more details, please see below and:

http://launchpad.net/pbr/+milestone/0.11.1

Please report issues through launchpad:

http://bugs.launchpad.net/pbr


Changes in pbr 0.11.0..0.11.1
-----------------------------

6e472b4 Make setup.py --help-commands work without testrepository
474019d Add dependencies for building wheels

Diffstat (except docs and test files)
-------------------------------------

pbr/packaging.py | 24 +---
pbr/testr_command.py | 27 ---
tools/integration.sh | 18 --
3 files changed, 49 insertions(+), 20 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][i18n] Proposed change for consistent setup of translation - up for review

2016-01-11 Thread Andreas Jaeger
Projects that want to have translations need to set up their repositories 
in very specific ways, which has led to quite a few questions from some of 
you and also quite different solutions. Let's make this easier - and 
consistent.


I've written a spec that aims to provide a consistent setup for the 
repositories, and I ask for your review and comments on it:


https://review.openstack.org/#/c/262545/

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][all] glance_store drivers deprecation/stabilization: Volunteers needed

2016-01-11 Thread Flavio Percoco
Greetings,

Gentle reminder that this is happening next week.

Cheers,
Flavio

- Original Message -
> From: "Flavio Percoco" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, December 10, 2015 9:16:09 AM
> Subject: [glance][all] glance_store drivers deprecation/stabilization: 
> Volunteers needed
> 
> Greetings,
> 
> As some of you know, there's a proposal (still a rough draft) for
> refactoring the glance_store API. This library is the home for the
> store drivers used by Glance to save or access the image data.
> 
> As other drivers in OpenStack, this library is facing the issue of
> having unmaintained, untested and incomplete implementations of stores
> that are, hypothetically, being used in production environments.
> 
> In order to guarantee some level of stability and, more important,
> maintenance, the Glance team is looking for volunteers to sign up as
> maintainers/keepers of the existing drivers.
> 
> Unfortunately, given the fact that our team is not as big as we would
> like and that we don't have the knowledge to provide support for every
> single driver, the Glance team will have to deprecate, and later
> remove, the drivers that will remain without a maintainer.
> 
> Each driver will have to have a voting CI job running (maintained by
> the driver maintainer) that will have to run Glance's functional tests
> to ensure the API features are also supported by the driver.
> 
> There are 2 drivers I believe shouldn't fall into this category and
> that should be maintained by the Glance community itself. These
> drivers are:
> 
> - Filesystem
> - Http
> 
> Please, find the full list of drivers here[0] and feel free to sign up
> as volunteer in as many drivers as your time permits to maintain.
> Please, provide all the information required as the lack of it will
> result in the candidacy not being valid. As some sharp eyes will
> notice, the Swift driver is not in the list above. The reason for that
> is that, although it's a key piece of OpenStack, not everyone in the
> Glance community knows the code of that driver well-enough and there
> are enough folks that know it that could perhaps volunteer as
> maintainers/reviewers for it. Furthermore, adding the swift driver
> there would mean we should probably add the Cinder one as well as it's
> part of OpenStack just like Swift. We can extend that list later. For
> now, I'd like to focus on bringing some stability to the library.
> 
> The above information, as soon as it's complete or the due date is
> reached, will be added to glance_store's docs so that folks know where
> to find the drivers maintainers and who to talk to when things go
> south.
> 
> Here's an attempt to schedule some of this work (please refer to
> this tag[0.1] and this soon-to-be-approved review[0.2] to have more
> info w.r.t the deprecation times and backwards compatibility
> guarantees):
> 
> - By mitaka 2 (Jan 16-22), all drivers should have a maintainer.
>   Drivers without one, will be marked as deprecated in Mitaka.
> 
> - By N-2 (schedule still not available), all drivers that were marked
>   as deprecated in Mitaka will be removed.
> 
> - By N-1 (schedule still not available), all drivers should have
>   support for the main storage capabilities[1], which are READ_ACCESS,
>   WRITE_ACCESS, and DRIVER_REUSABLE. Drivers that won't have support
>   for the main set of capabilities will be marked as deprecated and
>   then removed in O-1 (except for the HTTP one, which the team has
>   agreed on keeping as a read-only driver).
> 
> - By N-2 (schedule still not available), all drivers need to have a
>   voting gate. Drivers that won't have voting gates will be marked as
>   deprecated and then removed in O-1.
> 
> Although glance_store has intermediate releases, the above is being
> planned based on the integrated release to avoid sudden "surprises"
> on already released OpenStack versions.
> 
> Note that the above plan requires that the ongoing effort for setting
> up a gate based on functional tests for glance_store will be
> completed. There's enough time to get all this done for every driver.
> 
> In addition to the above, I'd like to note that we need to do this
> *before* the refactor[2] so that we can provide a minimum guarantee
> that it won't break the existing contract. Furthermore, maintainers of
> this drivers will be asked to help migrating their drivers to the new
> API but that will follow a different schedule that needs to be
> discussed in the spec itself.
> 
> This is, obviously, a multi-release effort that will require syncing
> with future PTLs of the project.
> 
> One more thing. Note that the above work shouldn't distract the team
> from the priorities we've scheduled for Mitaka. The requested
> work/info should be simple enough to provide and work on without
> distracting us. I'll take care of following-up and pinging some folks
> as needed.
> 
> Please, provide your feedback and/or concerns on the above plan,
> 

[openstack-dev] [release][ironic] ironic-python-agent release 1.1.0 (mitaka)

2016-01-11 Thread doug
We are glad to announce the release of:

ironic-python-agent 1.1.0: Ironic Python Agent Ramdisk

This release is part of the mitaka release series.

With package available at:

https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.1.0
^^^^^


New Features
************

* The CoreOS image builder now uses the latest CoreOS stable version
  when building images.

* IPA now supports Linux-IO as an alternative to tgtd. The iSCSI
  extension will try to use Linux-IO first, and fall back to tgtd if
  Linux-IO is not found or cannot be used.

* Adds support for setting proxy info for downloading images. This
  is controlled by the *proxies* and *no_proxy* keys in the
  *image_info* dict of the *prepare_image* command.

* Adds support for streaming raw images directly onto the disk. This
  avoids writing the image to a tmpfs partition before writing it to
  disk, which also enables using images larger than the usable amount
  of RAM on the machine IPA runs on. Pass *stream_raw_images=True* to
  the *prepare_image* command to enable this; it is disabled by
  default.

* CoreOS image builder now runs IPA in a chroot, instead of a
  container. systemd-nspawn has been adding more security features
  that break several things IPA needs to do (after all, IPA
  manipulates hardware), such as using sysrq triggers or writing to
  /sys.

* Root device hints now also inspect ID_WWN_WITH_EXTENSION and
  ID_WWN_VENDOR_EXTENSION from udev.


Upgrade Notes
*************

* Now that IPA runs in a chroot, any operator tooling built around
  the container may need to change (for example, methods of getting a
  shell inside the container).


Bug Fixes
*********

* Raw images larger than the available RAM may now be used by passing
  *stream_raw_images=True* to the *prepare_image* command; these will
  be streamed directly to disk.

* Fixes an issue using the "logs" inspection collector when logs
  contain non-ascii characters.

* Makes tgtd ready status detection more robust.

* Fixes configdrive creation for MBR disks greater than 2TB.


Other Notes
***********

* System random is now used where applicable, rather than the
  default python random library.


Changes in ironic-python-agent 1.0.0..1.1.0
-------------------------------------------

43a149d Updated from global requirements
dcdb06d Replace deprecated LOG.warn with LOG.warning
4b561f1 Updated from global requirements
943d2c0 Revert "Use latest CoreOS stable when building"
a39dfbd Updated from global requirements
ffcdcd4 Add mitaka reno page
cfcef97 Replace assertEqual(None, *) with assertIsNone in tests
b9df861 Catch up release notes for Mitaka
e8488c2 Add reno for release notes management
d185927 Fix trivial typo in docs
5bac998 Updated from global requirements
4cd64e2 Delete the Linux-IO target before setting up local boot
056bb42 CoreOS: Ensure /run is mounted before starting
6dc7f34 Deprecated tox -downloadcache option removed
a253e50 Use latest CoreOS stable when building
84fc428 Updated from global requirements
b5b0b63 Run IPA in chroot instead of container in CoreOS
5fa258b Fix "logs" inspection collector when logs contain non-ascii symbols
2fc6ce2 pyudev exception has changed for from_device_file
c474a5a Support Linux-IO in addition to tgtd
f4ad4d7 Updated from global requirements
863b47b Updated from global requirements
e320bb8 Add support for streaming raw images directly onto the disk
65053b7 Refactor the image download and checksum computation bits
c21409e Follow up patch for da9c3b0adc67efa916fc534d975823c0a45948a1
a01c4c9 Create partition at max msdos limit for disks > 2TB
54c901e Support proxies for image download
d97dbf2 Updated from global requirements
da9c3b0 Extend root device hints for different types of WWN
505b345 Fix to preserve double dashes of command line option in HTML.
59630d4 Updated from global requirements
9e75ba5 Use oslo.log instead of original logging
037e391 Updated from global requirements
18d5d6a Replace deprecated LOG.warn with LOG.warning
e51ccbe avoid duplicate text in ISCSIError message
fb920f4 determine tgtd ready status through tgtadm
f042be5 Updated from global requirements
1aeef4d Updated from global requirements
f01 Add param docstring into the normalize func
06d34ae Make calling arguments easier to understand
6131b2e Ensure all methods in utils.py have docstrings
7823240 Updated from global requirements
af20875 Update gitignore
5f7bc48 Reduce size of CoreOS ramdisk
deb50ac Add LOG.debug() if requested device type not found
d538f5e Babel is not a direct dependency
27048ef Move oslotest to test-requirements
ebd7b07 Use mount -t sysfs to avoid host /sys dependencies
9eb329c Update the launchpad link for IPA
9efc1c1 Updated from global requirements
5bbb9de Fix log formatting error in iscsi.py
e7de4bb Open mitaka development
4a7b954 Adds more functional tests for commands
dcbba2b Enforce all flake8 rules except E129
3af9ab3 Refactor list_all_block_devices & add block_type param
6e2b0f7 

Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-11 Thread Armando M.
On 11 January 2016 at 12:04, Carl Baldwin  wrote:

> What do we do?  My calendar was set up with the sane bi-weekly thing
> and it shows the meeting for tomorrow.  The last word from our
> fearless leader is that we'll have it today.  So, I'll be there today
> unless instructed otherwise.
>
> The ics file now seems to reset the cadence beginning today at 2100
> and next Tuesday, the 19th, at 1400.  I guess we should either hold
> the meeting today and reset the cadence or fix the ics file.
>
>
This is what I would like to do now:

https://review.openstack.org/#/c/266019

I personally haven't seen that much of an attendance difference anyway, and
at this point, it'll simplify our lives and avoid grief going forward.

Carl
>
> On Mon, Jan 11, 2016 at 12:09 PM, Kevin Benton  wrote:
> > The issue is simply that you have a sane bi-weekly thing setup in your
> > calendar. What we have for Neutron is apparently defined as “odd and even
> > weeks when weeks are represented as a short integer counting from the
> first
> > of the year”, a.k.a. “bi-weekly” as a robot might define it. :)
> >
> >
> > On Jan 11, 2016, at 11:00 AM, Kyle Mestery  wrote:
> >
> > On Mon, Jan 11, 2016 at 12:57 PM, Kyle Mestery 
> wrote:
> >>
> >> On Mon, Jan 11, 2016 at 12:45 PM, Armando M.  wrote:
> >>>
> >>> Disregard the email subject.
> >>>
> >>> I stand corrected. Let's meet today.
> >>>
> >>
> >> Something is wrong, I have the meeting on my google calendar, and it
> shows
> >> up as tomorrow for this week. I've had these set up as rotating for a
> while
> >> now, so something is fishy with the .ics files.
> >
> >
> > If you look here [1], the meeting cadence was:
> >
> > 12-15-2015: Tuesday
> > 12-21-2015: Monday
> > 12-29-2015: Tuesday (skipped)
> > 01-04-2016: Monday (skipped)
> > 01-12-2016 Tuesday
> >
> > The meeting is tomorrow.
> >
> > [1] http://eavesdrop.openstack.org/meetings/networking/2015/
> >>
> >>
> >>>
> >>> On 11 January 2016 at 10:24, Ihar Hrachyshka 
> wrote:
> 
>  Armando M.  wrote:
> 
> > Hi neutrinos,
> >
> > A kind reminder for tomorrow's meeting at 1400UTC.
> >
> > Cheers,
> > Armando
> >
> > [1] https://wiki.openstack.org/wiki/Network/Meetings
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
>  Is it just me, or when you use the .ics file from eavesdrop, it says the
>  meeting is today?
> 
>  http://eavesdrop.openstack.org/calendars/neutron-team-meeting.ics
> 
>  Is it the same issue as described in:
> 
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082902.html
> 
>  and that it is suggested to fix by re-adding your events from the updated
>  .ics file:
> 
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083216.html
> 
>  Ihar
> 
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Sean M. Collins
Nice find. I actually pushed a patch recently proposing that we advertise the
MTU by default. I think this really shows that it should be enabled by default.

https://review.openstack.org/263486
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Bug Day 2

2016-01-11 Thread Rob Cresswell (rcresswe)
Reminder about bug day! Just starting if you’re over in Australia ;)

Rob


On 4 Jan 2016, at 10:54, Rob Cresswell (rcresswe) 
> wrote:

Hi folks,

I think we should have another bug day to continue the good work started last 
time. I’d suggest Tuesday the 12th of January, as most people should be back at 
work by then. We can use the same etherpad too: 
https://etherpad.openstack.org/p/horizon-bug-day

For those not around for the previous one, the bug day is used to review our 
bug reports on Launchpad and discuss them in IRC. This may mean asking for help 
recreating an issue, checking whether a bug has been fixed, etc. The goal is to 
have an organised, prioritised list of valid bug reports. Open new bugs if you 
stumble across them, but bug discovery is not the focus here.

Regards,
Rob




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Clark Boylan
On Mon, Jan 11, 2016, at 12:35 PM, Sean M. Collins wrote:
> Nice find. I actually pushed a patch recently proposing that we
> advertise the MTU by default. I think this really shows that it should
> be enabled by default.
> 
> https://review.openstack.org/263486
>
++ Neutron should be able to determine what the outer MTU is and adjust
the advertised inner MTU automatically, based on the overhead required
for whatever tunnel protocol is in use, all without the deployer or cloud
user needing to know anything special.
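
As a rough sketch of that calculation (the overhead figures are the
commonly cited ones for each encapsulation, e.g. VXLAN = outer IP 20 +
UDP 8 + VXLAN header 8 + inner Ethernet 14 = 50 bytes; they are not
pulled from Neutron itself):

    TUNNEL_OVERHEAD = {
        'flat': 0,
        'vlan': 0,    # the VLAN tag lives in the outer Ethernet header
        'gre': 42,
        'vxlan': 50,
    }

    def advertised_mtu(physical_mtu, network_type):
        """The inner MTU an instance should be told to use."""
        return physical_mtu - TUNNEL_OVERHEAD.get(network_type, 0)

    assert advertised_mtu(1500, 'vxlan') == 1450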

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-python-agent release 1.0.1 (liberty)

2016-01-11 Thread doug
We are chuffed to announce the release of:

ironic-python-agent 1.0.1: Ironic Python Agent Ramdisk

This release is part of the liberty stable release series.

With package available at:

https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.0.1
^^^^^

Bug Fixes
*********

* Fixes an issue using the "logs" inspection collector when logs
  contain non-ascii characters.


Changes in ironic-python-agent 1.0.0..1.0.1
-------------------------------------------

a8e1125 Add release note for 9353adda59d7fe9d85465ef195b82a2437814e39
20ded56 Add reno for release notes management
ef06c18 Updated from global requirements
9353add Fix "logs" inspection collector when logs contain non-ascii symbols
636c848 pyudev exception has changed for from_device_file
d1df25b Updated from global requirements
2fa4828 Set up stable/liberty branch

Diffstat (except docs and test files)
-------------------------------------

.gitignore |   3 +
.gitreview |   1 +
ironic_python_agent/hardware.py|   3 +-
ironic_python_agent/inspector.py   |   4 +-
releasenotes/notes/.placeholder|   0
.../logs-collector-non-ascii-010339bf256443c8.yaml |   4 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 276 +
releasenotes/source/current-series.rst |   5 +
releasenotes/source/index.rst  |  10 +
releasenotes/source/liberty.rst|   6 +
releasenotes/source/mitaka.rst |   6 +
requirements.txt   |   4 +-
setup.cfg  |   1 -
test-requirements.txt  |   1 +
tox.ini|   4 +
19 files changed, 349 insertions(+), 10 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 3d41b4a..1873b3a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -16 +16 @@ oslo.service>=0.7.0 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0
@@ -21 +21 @@ pyudev
-requests>=2.5.2
+requests!=2.8.0,!=2.9.0,>=2.5.2
diff --git a/test-requirements.txt b/test-requirements.txt
index 2296e51..994947e 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -16,0 +17 @@ oslosphinx>=2.5.0 # Apache-2.0
+reno>=0.1.1  # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wipe of the nodes' disks

2016-01-11 Thread Artur Svechnikov
> As Andrew said already, in such a case LVM metadata will remain on the hard
> drive. So if you remove the partition table, reboot the node (env reset), then
> configure exactly the same partition table (like when you use the same
> default disk allocation in Fuel), then Linux will find the LVM info on the
> same sectors of the HDD on the re-created partitions and re-assemble the old
> LVM devices automatically.

> Alex is right, wiping the partition table is not enough. The user can create a
> partition table with exactly the same partition sizes as before. In this case
> LVM may detect metadata on the untouched partitions and assemble a logical
> volume. We should remove the LVM metadata from every partition (or wipe the
> 1st megabyte of every partition).

Fuel-agent can deal with it [0].

[0]
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L205-L221
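
For reference, the heart of that approach is small enough to sketch here
(device paths are illustrative; the real fuel-agent code linked above
also removes the LVM PVs/VGs explicitly):

    import os

    ONE_MIB = 1024 * 1024

    def wipe_partition_start(device):
        # Zero the first MiB so stale LVM/filesystem metadata cannot be
        # re-detected if an identical partition table is created later.
        with open(device, 'rb+') as f:  # 'rb+' so a block device is not truncated
            f.write(b'\x00' * ONE_MIB)
            f.flush()
            os.fsync(f.fileno())

    for part in ('/dev/sda1', '/dev/sda2'):   # illustrative partition list
        wipe_partition_start(part)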

Best regards,
Svechnikov Artur

On Thu, Jan 7, 2016 at 7:07 PM, Andrew Woodward  wrote:

>
>
> On Tue, Dec 29, 2015 at 5:35 AM Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> Hi,
>>
>> Let me comment inline.
>>
>>
>> On Mon, Dec 28, 2015 at 7:06 PM, Andrew Woodward 
>> wrote:
>>
>>> In order to ensure that LVM can be configured as desired, it's necessary
>>> to purge them and then reboot the node, otherwise the partitioning commands
>>> will most likely fail on the next attempt as they will be initialized
>>> before we can start partitioning the node. Hence, when a node is removed
>>> from the environment, it is supposed to have this data destroyed. Since
>>> it's a running system, the most effective way was to blast the first 1Mb of
>>> each partition. (without many more reboots)
>>>
>>> As to the fallback to SSH, there are two times we use this process, with
>>> the node reboot (after cobbler/IBP finishes), and with the wipe as we are
>>> discussing here. These are for the odd occurrences of the nodes failing to
>>> restart after the MCO command. I don't think anyone has had much success
>>> trying to figure out why this occurs, but I've seen nodes get stuck in
>>> provisioning and removal in multiple environments using 6.1 where they
>>> managed to break the SSH Fallback. It would occur around 1/20 nodes
>>> seemingly randomly. So with the SSH fallback I nearly never see the failure
>>> in node reboot.
>>>
>>
>> If we talk about the 6.1-7.0 releases, there shouldn't be any problems with
>> mco reboot. The SSH fallback should be deprecated entirely.
>>
>
> As I noted, I've seen several 6.1 deployments where it was needed; I'd
> consider it still very much in use. In other cases it might be necessary to
> attempt to deal with a node whose MCO agent is dead, so IMO they should be
> kept.
>
>
>>>
>>
>>>
>>
>>>
>>
>>> On Thu, Dec 24, 2015 at 6:28 AM Alex Schultz 
>>> wrote:
>>>
 On Thu, Dec 24, 2015 at 1:29 AM, Artur Svechnikov
  wrote:
 > Hi,
 > We have faced an issue where nodes' disks are wiped after a stopped
 > deployment. It occurs due to the node-removal logic (this logic is old
 > and, as I understand it, no longer relevant). This logic contains a step
 > which calls erase_node [0]; there is also another method that wipes the
 > disks [1]. AFAIK it was needed for smooth cobbler provisioning and to
 > ensure that nodes would not be booted from disk when they shouldn't be.
 > Instead of cobbler we now use IBP from fuel-agent, where the current
 > partition table is wiped before the provisioning stage. Using disk
 > wiping as insurance that nodes will not be booted from disk doesn't
 > seem like a good solution. I want to propose not wiping the disks and
 > simply unsetting the bootable flag on the node disks.

>>>
>> Disks must be wiped, as the boot flag doesn't guarantee anything. If the boot
>> flag is not set, the BIOS will ignore the device in the boot order. Moreover,
>> 2 partitions may have the boot flag set, or the operator may configure the
>> BIOS to skip the boot order.
>>
>> >
 > Please share your thoughts. Perhaps some other components rely on the
 > fact that disks are wiped after node removal or a stopped deployment.
 > If so, please tell us about it.
 >
 > [0]
 >
 https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
 > [1]
 >
 https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb
 >

 I thought the erase_node[0] mcollective action was the process that
 cleared a node's disks after their removal from an environment. When
 do we use the ssh_erase_nodes?  Is it a fallback mechanism for when the
 mcollective fails?  My understanding of the history is based around
 needing to have the partitions and data wiped so that the LVM groups
 and other partition information does not interfere with the
 installation process the next time the node is provisioned.  That
 might have been a side effect of cobbler and we should test if it's
 still 

Re: [openstack-dev] [doc] [api] Vision and changes for OpenStack API Guides

2016-01-11 Thread Jay Pipes

On 01/08/2016 04:39 PM, Anne Gentle wrote:

Hi all,

With milestone 2 coming next week I want to have a chat about API
guides, API reference information, and the
http://developer.openstack.org site. We have a new tool, fairy-slipper
[1], yep you read that right, that can migrate files from WADL to
Swagger as well as serve up reference info. We also have new build jobs
that can build API guides right from your projects repo to
developer.openstack.org .

There's a lot going on, so I've got an agenda item for next week to hold
a cross-project meeting so that you can send a team representative to
get the scoop on API guides and bring it back to your teams. I've
fielded great questions from a few teams, and want to ensure everyone's
in the loop.

A pair of specs set the stage for this work, feel free to review these
in advance of the meeting and come with questions. [2] [3]


Awesome specs and I love everything in both of those.

The one question I have is specifically around API microversions. Has 
the doc team had any ideas on how to handle the publication of 
microversion improvements to the API docs? I'm thinking something like 
"New in 2.18!"-like stuff would be really useful for API users.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Tom Fifield

On 11/01/16 20:08, Sean Dague wrote:

On 01/10/2016 11:31 PM, Lana Brindley wrote:


Wow. That'll make the release notes process painful this round ... o.O


Hmmm. In my mind it will make it a lot easier. In the past we end up
getting to the release and sit around and go "hmmm, what did we change
in the last 6 months that people care about?" And forget 90% of it. This
does the work up front. We can then just provide a final edit and
summary of highlights, and we're done.

Having spoken with ops over the years, no one is going to be upset if we
tell them all the changes that might impact them.





Would love it to be the case, but I don't think that's correct. Or if it's 
supposed to be correct, it hasn't been well communicated :)



Few random reviews from the DocImpact queue that didn't have relnotes:



https://review.openstack.org/#/c/180202/


I can only speak on the Nova change (as that's a team I review for).
You'll see this comment in there -
https://review.openstack.org/#/c/180202/31//COMMIT_MSG - a relnote was
expected for the patch series. Whether or not it managed to slip
through, I don't know.


Confirmed - no relnotes for this.




https://review.openstack.org/#/c/249814/
https://review.openstack.org/#/c/250818/
https://review.openstack.org/#/c/230983/



Didn't really look closely into these - would encourage someone with a bit more time to 
do so, but the fact that these were so trivial to eke out means that "nearly 
all" is almost certainly a bad assumption.



My experience would indicate that many, many DocImpact bugs are really not 
worthy of relnotes.


Can you provide some references? Again, my imagination doesn't really
come up with a lot of Nova changes that would be valid DocImpact but
wouldn't need a reno. I can see bugs filed against Docs explicitly
because there is a mismatch.


Since you wanted to focus only on nova, here's some more DocImpact 
reviews that did not have relnotes. Again, I basically haven't read 
these -  if someone wanted to do this properly, much appreciated.



https://review.openstack.org/#/c/165750/
https://review.openstack.org/#/c/184153/
https://review.openstack.org/#/c/237643/
https://review.openstack.org/#/c/180202/
https://review.openstack.org/#/c/242213/
https://review.openstack.org/#/c/224500/
https://review.openstack.org/#/c/147516/





-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder as generic volume manager

2016-01-11 Thread Ivan Kolodyazhny
Hi Team,

Let me introduce updates related to the use-cinder-without-nova blueprint [1].
According to the spec [2], we've introduced the new python-brick-cinderclient-ext
project [3]. It's an official Cinder project, implemented as an
extension to python-cinderclient.

For now, it supports only 'get-connector' CLI and Python API. The next
steps are:

   - setup functional tests job for our CI to test it over each patch
   - implement attach/detach API
   - implement 'force detach' from the Cinder side.
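
As a rough sketch of what 'get-connector' is about: it reports the
host's connector properties, which os-brick can compute directly. The
call below reflects the os-brick API as I understand it; the argument
values are illustrative:

    from os_brick.initiator import connector

    # Initiator name, IP, hostname, etc.: what Cinder needs to know in
    # order to export a volume to this host.
    props = connector.get_connector_properties(
        root_helper='sudo',
        my_ip='192.0.2.10',
        multipath=False,
        enforce_multipath=False)
    print(props)   # e.g. {'ip': ..., 'host': ..., 'initiator': ...}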

Any feedback, comments and contribution are welcome!

[1] https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova
[2] https://review.openstack.org/#/c/224124/
[3] https://github.com/openstack/python-brick-cinderclient-ext

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Sep 16, 2015 at 5:33 PM, Ivan Kolodyazhny  wrote:

> Jan, all,
>
> I've started a work on this task to complete it in Mitaka. Here is a very
> draft spec [1] and PoC [2].
>
> [1] https://review.openstack.org/224124
> [2] https://review.openstack.org/223851
>
> Regards,
> Ivan Kolodyazhny,
> Web Developer,
> http://blog.e0ne.info/,
> http://notacash.com/,
> http://kharkivpy.org.ua/
>
> On Tue, Jul 14, 2015 at 4:59 PM, Jan Safranek  wrote:
>
>> On 07/10/2015 12:19 AM, Walter A. Boring IV wrote:
>> > On 07/09/2015 12:21 PM, Tomoki Sekiyama wrote:
>> >> Hi all,
>> >>
>> >> Just FYI, here is a sample script I'm using for testing os-brick which
>> >> attaches/detaches the cinder volume to the host using cinderclient and
>> >> os-brick:
>> >>
>> >> https://gist.github.com/tsekiyama/ee56cc0a953368a179f9
>> >>
>> >> "python attach.py " will attach the volume to the executed
>> >> host and shows a volume path. When you hit the enter key, the volume is
>> >> detached.
>> >>
>> >> Note this is skipping "reserve" or "start_detaching" APIs so the volume
>> >> state is not changed to "Attaching" or "Detaching".
>> >>
>> >> Regards,
>> >> Tomoki
>> >
>> > Very cool Tomoki.  After chatting with folks in the Cinder IRC channel
>> > it looks like we are going to look at going with something more like
>> what
>> > your script is doing.   We are most likely going to create a separate
>> > command line tool that does this same orchestration, using cinder
>> client, a new
>> > Cinder API that John Griffith is working on, and os-brick.
>>
>> Very cool indeed, it looks exactly like as what I need.
>>
>>
>> Jan
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release models to be frozen around mitaka-2

2016-01-11 Thread Thierry Carrez

Hi everyone,

This is a small warning that we'll be freezing the release model for 
deliverables in the Mitaka cycle at mitaka-2 (January 21).


So for example if you wanted to switch from release:independent to one 
of the cycle-oriented models (release:cycle-with-intermediary or 
release:cycle-with-milestones) to be included in the final Mitaka 
release, please propose an openstack/governance change to do that
(change in the reference/projects.yaml file) ASAP!


I know Magnum plans to do so but hasn't yet.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 01/11/2016

2016-01-11 Thread Renat Akhmerov
Hi,

We’ll have a team meeting today at our regular time 16.00 UTC at 
#openstack-meeting. Please join to discuss the project status.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Open discussion


Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Kevin Benton
Is 'self.context' used to handle other requests as well? I would suggest
generating a new context (neutron.context.get_admin_context()) for each
fixed interval looping call to ensure you aren't sharing a DB session with
another thread.
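
Something along these lines, where the periodic helper is a made-up
stand-in for the driver's queries but get_admin_context() and the
oslo.service looping call are the real APIs:

    from neutron import context as n_context
    from oslo_service import loopingcall

    def do_periodic_db_work(ctx):
        pass   # hypothetical stand-in for the driver-specific queries

    def _sync_tick():
        # A fresh context means a fresh DB session for this iteration,
        # so the looping call never shares a session with an API worker.
        ctx = n_context.get_admin_context()
        do_periodic_db_work(ctx)

    loop = loopingcall.FixedIntervalLoopingCall(_sync_tick)
    loop.start(interval=30)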

On Mon, Jan 11, 2016 at 2:26 AM, Koteswar  wrote:

> Vendor-specific mech driver code, where I am doing some reads/writes to SQL:
>
> def _create_port(self, port):
>     switchports = port['port']['switchports']
>     LOG.debug(_LE("_create_port switch: %s"), port)
>     network_id = port['port']['network_id']
>     subnets = db.get_subnets_by_network(self.context, network_id)
>     if not subnets:
>         LOG.error("Subnet not found for the network")
>         self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
>     for switchport in switchports:
>         switch_mac_id = switchport['switch_id']
>         port_id = switchport['port_id']
>         bnp_switch = db.get_bnp_phys_switch_by_mac(self.context,
>                                                    switch_mac_id)
>         # check for port and switch level existence
>         if not bnp_switch:
>             LOG.error(_LE("No physical switch found '%s' "), switch_mac_id)
>             self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
>         phys_port = db.get_bnp_phys_port(self.context,
>                                          bnp_switch.id, port_id)
>         if not phys_port:
>             LOG.error(_LE("No physical port found for '%s' "), phys_port)
>             self._raise_ml2_error(wexc.HTTPNotFound, 'create_port')
>         if bnp_switch.status != constants.SWITCH_STATUS['enable']:
>             LOG.error(_LE("Physical switch is not Enabled '%s' "),
>                       bnp_switch.status)
>             self._raise_ml2_error(wexc.HTTPBadRequest, 'create_port')
>
> On Mon, Jan 11, 2016 at 2:47 PM, Anna Kamyshnikova <
> akamyshnik...@mirantis.com> wrote:
>
>> Hi!
>>
>> Can you point what mechanism driver is this or the piece of code that
>> give this error?
>>
>> On Mon, Jan 11, 2016 at 11:58 AM, Koteswar  wrote:
>>
>>> Hi All,
>>>
>>>
>>>
>>> In my mechanism driver, I am reading/writing into sql db in a fixed
>>> interval looping call. Sometimes I get the following error when I stop and
>>> start neutron server
>>>
>>> InvalidRequestError: This session is in 'prepared' state; no further SQL
>>> can be emitted within this transaction.
>>>
>>>
>>>
>>> I am using context.session.query() for add, delete, update and get.
>>> Please help me if anyone has resolved an issue like this.
>>>
>>>
>>>
>>> Full trace is as follows:
>>>
>>> 2016-01-06 15:33:21.799 ERROR neutron.plugins.ml2.managers
>>> [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
>>> 83b5358da62a407f88155f447966356f] Mechanism driver
>>> 'hp' failed in create_port_precommit
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>> Traceback (most recent call last):
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>>   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py",
>>> line 394, in _call_on_drivers
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>> getattr(driver.obj, method_name)(context)
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>>   File
>>> "/usr/local/lib/python2.7/dist-packages/baremetal_network_provisioning/ml2/mechanism_hp.py",
>>> line 67, in create_port_precommit
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>> raise e
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>> InvalidRequestError: This session is in 'prepared' state; no
>>> further SQL can be emitted within this transaction.
>>>
>>> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>>>
>>> 2016-01-06 15:33:21.901 ERROR neutron.api.v2.resource
>>> [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
>>> 83b5358da62a407f88155f447966356f] create failed
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>> Traceback (most recent call last):
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>>   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83,
>>> in resource
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>> result = method(request=request, **args)
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py",
>>> line 146, in wrapper
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>> ectxt.value = e.inner_exc
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>>   File
>>> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195,
>>> in __exit__
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>> six.reraise(self.type_, self.value, self.tb)
>>>
>>> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>>>   File 

Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-01-11 Thread Ihar Hrachyshka

Ihar Hrachyshka  wrote:


Ihar Hrachyshka  wrote:


Sean M. Collins  wrote:


Just a quick update where we are:

Increasing the verbosity of the SSH session into the instance that is
created during the cinder portion is showing that we are actually
connecting to the instance successfully. We get the dropbear SSH banner,
but then the instance hangs. Eventually SSH terminates the connection, 5
minutes later.

http://logs.openstack.org/35/187235/12/experimental/gate-grenade-dsvm-neutron-multinode/984e651/logs/grenade.sh.txt.gz#_2016-01-08_20_13_40_040


As per [1], it could be related to the MTU on the interface. Do we configure
the MTU on external devices to accommodate tunnelling headers?


As per [2] neutron server logs, the network in question is vxlan.

If that’s indeed the mtu issue, and since Cirros does not support DHCP  
MTU option documented for ml2 at [3], I don’t know how to validate  
whether it’s indeed the issue.


UPD: ^ that’s actually not true, cirros supports the option since 0.3.3  
[1] (and we use 0.3.4 [2]), so let’s try to enforce it inside neutron and  
see whether it helps: https://review.openstack.org/265759


UPD: it seems that enforcing an instance MTU of 1400 indeed lets us get
further, into tempest:


http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html

And there are only three failures there:

http://logs.openstack.org/59/265759/3/experimental/gate-grenade-dsvm-neutron-multinode/a167a59/console.html#_2016-01-11_11_58_47_945

I also don’t see any RPC versioning related traces in service logs, which  
is a good sign.





[1] https://bugs.launchpad.net/cirros/+bug/1301958
[2]  
http://logs.openstack.org/35/187235/11/experimental/gate-grenade-dsvm-neutron-multinode/a5af283/logs/grenade.sh.txt.gz#_2015-11-30_18_57_46_067




Also, what’s the underlying infrastructure that is available in gate?  
Does it allow vlan for tenant networks? (We could enforce vlan for the  
network and see whether it fixes the issue.)


[1] https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1254085
[2]  
http://logs.openstack.org/35/187235/11/experimental/gate-grenade-dsvm-neutron-multinode/a5af283/logs/old/screen-q-svc.txt.gz#_2015-11-30_19_28_44_685
[3]  
https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/7/networking-guide/chapter-16-configure-mtu-settings


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Sean Dague
On 01/11/2016 07:55 AM, Tom Fifield wrote:
> On 11/01/16 20:08, Sean Dague wrote:
>> On 01/10/2016 11:31 PM, Lana Brindley wrote:
>> 
>>> Wow. That'll make the release notes process painful this round ... o.O
>>
>> Hmmm. In my mind it will make it a lot easier. In the past we end up
>> getting to the release and sit around and go "hmmm, what did we change
>> in the last 6 months that people care about?" And forget 90% of it. This
>> does the work up front. We can then just provide a final edit and
>> summary of highlights, and we're done.
>>
>> Having spoken with ops over the years, no one is going to be upset if we
>> tell them all the changes that might impact them.
>>
>>>
>>>
 Would love it to be the case, but I don't think that's correct. Or
 if it's supposed to be correct, it hasn't been well communicated :)
>>>
 Few random reviews from the DocImpact queue that didn't have relnotes:
>>>
 https://review.openstack.org/#/c/180202/
>>
>> I can only speak on the Nova change (as that's a team I review for).
>> You'll see this comment in there -
>> https://review.openstack.org/#/c/180202/31//COMMIT_MSG - a relnote was
>> expected for the patch series. Whether or not it managed to slip
>> through, I don't know.
> 
> Confirmed - no relnotes for this.
> 
>>
 https://review.openstack.org/#/c/249814/
 https://review.openstack.org/#/c/250818/
 https://review.openstack.org/#/c/230983/
>>>
 Didn't really look closely into these - would encourage someone with
 a bit more time to do so, but the fact that these were so trivial to
 eke out means that "nearly all" is almost certainly a bad assumption.
>>>
>>>
>>> My experience would indicate that many, many DocImpact bugs are
>>> really not worthy of relnotes.
>>
>> Can you provide some references? Again, my imagination doesn't really
>> come up with a lot of Nova changes that would be valid DocImpact but
>> wouldn't need a reno. I can see bugs filed against Docs explicitly
>> because there is a mismatch.
> 
> Since you wanted to focus only on nova, here's some more DocImpact
> reviews that did not have relnotes. Again, I basically haven't read
> these -  if someone wanted to do this properly, much appreciated.
> 
> 
> https://review.openstack.org/#/c/165750/
> https://review.openstack.org/#/c/184153/
> https://review.openstack.org/#/c/237643/
> https://review.openstack.org/#/c/180202/
> https://review.openstack.org/#/c/242213/
> https://review.openstack.org/#/c/224500/
> https://review.openstack.org/#/c/147516/

Looking through the list, it looks like there are a few that merged
before reno; a bunch where DocImpact is being used by the author as "I
changed docs"; and a couple where I have no idea why the flag was stuck
in there at all. A number of the DocImpact tags even had the extra
context line, which seemed to be a description of what docs the author
changed in the patch.

This conversation has gone on long enough I've completely lost the
problem we're trying to solve and the constraints around it.

I'd like to reset the conversation a little.

Goal: to not flood the Docs team with vague bugs that are hard to decipher

Current Approach: machine-enforce extra words after DocImpact (not
reviewed by the doc team)

Downsides with the current approach:
* it's a machine, not people, so clarity isn't guaranteed.
* the reviewers of the commit message aren't the people that will have
to deal with it, leading to bad quality control on the reviews.
* extra jobs which cause load and inhibit our ability to stop resetting
jenkins votes on commit message changes

My Alternative Approach:

File doc bugs against the project team instead of the doc team. Make
passing a bug to the Doc team a project-team responsibility, to ensure
context is provided when it's needed.

This also means there is a feedback loop between the reviewers and the
folks having to deal with the artifacts (on first pass).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Mike Bayer


On 01/11/2016 03:58 AM, Koteswar wrote:
> Hi All,
> 
>  
> 
> In my mechanism driver, I am reading/writing into sql db in a fixed
> interval looping call. Sometimes I get the following error when I stop
> and start neutron server
> 
> InvalidRequestError: This session is in 'prepared' state; no further SQL
> can be emitted within this transaction.
> 
>  
> 
> I am using context.session.query() for add, delete, update and get.
> Please help me if anyone has resolved an issue like this.

the stack trace is unfortunately re-thrown from the ml2.managers code
without retaining the original traceback; use this form to reraise with
original tb:

exc_info = sys.exc_info()
raise type(e), e, exc_info[2]

There's likely helpers somewhere in oslo that do this.
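
For instance, six.reraise() (already visible in the oslo_utils frames of
the trace below) or the oslo.utils context manager; the wrapper functions
here are only illustrative:

    import sys

    import six
    from oslo_utils import excutils

    def call_driver(method):
        try:
            method()
        except Exception:
            # re-raise keeping the original traceback...
            six.reraise(*sys.exc_info())

    def call_driver_logged(method):
        try:
            method()
        except Exception:
            # ...or log/clean up first, then re-raise with the
            # traceback intact.
            with excutils.save_and_reraise_exception():
                pass  # e.g. LOG.exception("mechanism driver failed")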

The cause of this error is that a transaction commit is failing, the
error is being caught and this same Session is being used again without
rollback called first.   The code below illustrates the problem and how
to solve it.

from sqlalchemy import create_engine
from sqlalchemy.orm import Session


e = create_engine("sqlite://")

s = Session(e)


conn = s.connection()


def boom():
raise Exception("sqlite commit failed")

# "break" connection.commit(),
# so that the commit fails
conn.connection.commit = boom
try:
# fail
s.commit()
except Exception, e:
# uncomment this to fix the error
# s.rollback()
pass
finally:
boom = False


# prepared state error
s.connection()

> 
>  
> 
> Full trace is as follows:
> 
> 2016-01-06 15:33:21.799 ERROR neutron.plugins.ml2.managers
> [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
> 83b5358da62a407f88155f447966356f] Mechanism driver
> 'hp' failed in create_port_precommit
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> Traceback (most recent call last):
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py",
> line 394, in _call_on_drivers
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> getattr(driver.obj, method_name)(context)
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
>   File
> "/usr/local/lib/python2.7/dist-packages/baremetal_network_provisioning/ml2/mechanism_hp.py",
> line 67, in create_port_precommit
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> raise e
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> InvalidRequestError: This session is in 'prepared' state; no
> further SQL can be emitted within this transaction.
> 
> 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> 
> 2016-01-06 15:33:21.901 ERROR neutron.api.v2.resource
> [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
> 83b5358da62a407f88155f447966356f] create failed
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> Traceback (most recent call last):
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File "/opt/stack/neutron/neutron/api/v2/resource.py", line
> 83, in resource
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> result = method(request=request, **args)
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File
> "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in
> wrapper
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> ectxt.value = e.inner_exc
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File
> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
> 195, in __exit__
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> six.reraise(self.type_, self.value, self.tb)
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File
> "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in
> wrapper
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> return f(*args, **kwargs)
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File "/opt/stack/neutron/neutron/api/v2/base.py", line 516,
> in create
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> obj = do_create(body)
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File "/opt/stack/neutron/neutron/api/v2/base.py", line 498,
> in do_create
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> request.context, reservation.reservation_id)
> 
> 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
>   File
> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
> 195, in __exit__
> 
> 2016-01-06 

Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-01-11 Thread Mike Bayer


On 01/11/2016 05:39 AM, Radomir Dopieralski wrote:
> On 01/08/2016 09:51 PM, Mike Bayer wrote:
>>
>>
>> On 01/08/2016 04:44 AM, Radomir Dopieralski wrote:
>>> On 01/07/2016 05:55 PM, Mike Bayer wrote:
>>>
 but also even if you're under something like
 mod_wsgi, you can spawn a child process or worker thread regardless.
 You always have a Python interpreter running and all the things it can
 do.
>>>
>>> Actually you can't, reliably. Or, more precisely, you really shouldn't.
>>> Most web servers out there expect to do their own process/thread
>>> management and get really embarrassed if you do something like this,
>>> resulting in weird stuff happening.
>>
>> I have to disagree with this as an across-the-board rule, partially
>> because my own work in building an enhanced database connection
>> management system is probably going to require that a background thread
>> be running in order to reap stale database connections.   Web servers
>> certainly do their own process/thread management, but a thoughtfully
>> organized background thread in conjunction with a supporting HTTP
>> service allows this to be feasible.   In the case of mod_wsgi,
>> particularly when using mod_wsgi in daemon mode, spawning of threads,
>> processes and in some scenarios even wholly separate applications are
>> supported use cases.
> 
> [...]
> 
>> It is certainly reasonable that not all web application containers would
>> be effective with apps that include custom background threads or
>> processes (even though IMO any system that's running a Python
>> interpreter shouldn't have any issues with a limited number of
>> well-behaved daemon-mode threads), but at least in the case of mod_wsgi,
>> this is supported; that gives Openstack's HTTP-related applications with
>> carefully/thoughtfully organized background threads at least one
>> industry-standard alternative besides being forever welded to its
>> current homegrown WSGI server implementation.
> 
> This is still writing your application for a specific configuration of a
> specific version of a specific implementation of the protocol on a
> specific web server. While this may work as a stopgap solution, I think
> it's a really bad long-term strategy. We should be programming for a
> protocol specification (WSGI in this case), not for a particular
> implementation (unless we need to throw in workarounds for
> implementation bugs). 

That is fine, but then you are saying that all of those aforementioned
Nova services which do in fact use WSGI with its own homegrown eventlet
server should nevertheless be rewritten to not use any background
threads, which I also presented as the ideal choice.   Right now, the
fact that these Nova services use background threads is being used as a
justification for why these services can never move to use a proper web
server, even though they are still WSGI apps running inside of a WSGI
container, so they are already doing the thing that claims to prevent
this move from being possible.

Also, mod_wsgi's compatibility with background threads is not linked to
a "specific version", it's intrinsic in the organization of the product.
  I would wager that most other WSGI containers can probably handle this
use case as well but this would need to be confirmed.
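
To make that concrete, a minimal sketch of such a background reaper
(the pool object and its reap() method are placeholders for the
connection management described above):

    import threading
    import time

    def _reap_stale_connections(pool, interval=30):
        while True:
            time.sleep(interval)
            pool.reap()   # hypothetical: close connections idle too long

    def start_reaper(pool):
        t = threading.Thread(target=_reap_stale_connections, args=(pool,))
        t.daemon = True   # a daemon thread never blocks interpreter shutdown
        t.start()
        return t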





> 
> At least it seems so to my naive programmer mind. Sorry for ranting,
> I'm sure that you are aware of the trade-off here.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Sean Dague
On 01/10/2016 11:31 PM, Lana Brindley wrote:

> Wow. That'll make the release notes process painful this round ... o.O

Hmmm. In my mind it will make it a lot easier. In the past we end up
getting to the release and sit around and go "hmmm, what did we change
in the last 6 months that people care about?" And forget 90% of it. This
does the work up front. We can then just provide a final edit and
summary of highlights, and we're done.

Having spoken with ops over the years, no one is going to be upset if we
tell them all the changes that might impact them.

> 
> 
>> Would love it to be the case, but I don't think that's correct. Or if it's 
>> supposed to be correct, it hasn't been well communicated :)
> 
>> Few random reviews from the DocImpact queue that didn't have relnotes:
> 
>> https://review.openstack.org/#/c/180202/

I can only speak on the Nova change (as that's a team I review for).
You'll see this comment in there -
https://review.openstack.org/#/c/180202/31//COMMIT_MSG - a relnote was
expected for the patch series. Whether or not it managed to slip
through, I don't know.

>> https://review.openstack.org/#/c/249814/
>> https://review.openstack.org/#/c/250818/
>> https://review.openstack.org/#/c/230983/
> 
>> Didn't really look closely into these - would encourage someone with a bit 
>> more time to do so, but the fact that these were so trivial to eke out means 
>> that "nearly all" is almost certainly a bad assumption.
> 
> 
> My experience would indicate that many, many DocImpact bugs are really not 
> worthy of relnotes.

Can you provide some references? Again, my imagination doesn't really
come up with a lot of Nova changes that would be valid DocImpact but
wouldn't need a reno. I can see bugs filed against Docs explicitly
because there is a mismatch.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-11 Thread Sean Dague
On 01/08/2016 03:52 PM, Jim Rollenhagen wrote:
> On Fri, Jan 08, 2016 at 03:39:51PM -0500, Jay Pipes wrote:
>>
>> No, nothing nefarious there. Sorry for letting my personal frustrations
>> bubble over into this.
>>
>> I am not blocking anything from going forward and I definitely am not asking
>> for a revert of any g-r patch. Nor am I trying to obstruct you in your
>> governance of Ironic.
>>
>> I was just raising my concerns as an OpenStack citizen and getting my
>> opinion out on paper.
> 
> As you should; thanks for doing that. I'm eager to see how this goes. :)

I think there is also a huge difference between how an effort like this
goes in the first 6 months, and what it looks like in 2 or 3 years once
none of the people originally working on it still are. Especially when
you start talking about more complex API flows that require there to be
state behind the scenes. The internal state associated with flows is
where this tends to get into the weeds.

An alternative approach would be making a fake / simulator driver like
Jay suggested, and spawning all the ironic services in a single process
using the oslo.messaging in memory driver and in mem sqlite db. This
kind of approach is what we use in Nova functional tests to set up a
whole-stack fake cloud inside a single test process.
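
A rough sketch of that single-process wiring, assuming oslo.messaging's
fake in-memory driver and SQLite's in-memory database (the option
registration and topic name are illustrative):

    from oslo_config import cfg
    import oslo_messaging

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('connection')], group='database')
    CONF.set_override('connection', 'sqlite://', group='database')

    # RPC between the co-located "services" stays entirely in-process.
    transport = oslo_messaging.get_transport(CONF, url='fake://')
    target = oslo_messaging.Target(topic='test-conductor', server='host1')
    # A test would start the RPC server on this transport and call it
    # through a client sharing the same transport object.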

At the end of the day it's the choice of the Ironic team. Just something
to think really hard about.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Infra] [fuel-plugin-mistral] Request to add group member into the new Openstack project

2016-01-11 Thread Nikolay Makhotkin
Hi openstack-infra team,

Kindly add nmakhot...@mirantis.com to the following Gerrit ACL groups:

https://review.openstack.org/#/admin/groups/1156,members

https://review.openstack.org/#/admin/groups/1155,members

(ref. https://review.openstack.org/#/c/238890/)

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #66

2016-01-11 Thread Emilien Macchi
Hi folks!

Tomorrow we will have our weekly meeting at UTC 1500.
Here is our agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160112

Feel free to add more topics, reviews, bugs, as usual.

See you there,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.

2016-01-11 Thread Koteswar
thanks a lot. I will try this.

On Mon, Jan 11, 2016 at 7:26 PM, Mike Bayer  wrote:

>
>
> On 01/11/2016 03:58 AM, Koteswar wrote:
> > Hi All,
> >
> >
> >
> > In my mechanism driver, I am reading/writing into sql db in a fixed
> > interval looping call. Sometimes I get the following error when I stop
> > and start neutron server
> >
> > InvalidRequestError: This session is in 'prepared' state; no further SQL
> > can be emitted within this transaction.
> >
> >
> >
> > I am using context.session.query() for add, delete, update and get.
> > Please help me if anyone has resolved an issue like this.
>
> the stack trace is unfortunately re-thrown from the ml2.managers code
> without retaining the original traceback; use this form to reraise with
> original tb:
>
> exc_info = sys.exc_info()
> raise type(e), e, exc_info[2]
>
> There's likely helpers somewhere in oslo that do this.
>
> The cause of this error is that a transaction commit is failing, the
> error is being caught and this same Session is being used again without
> rollback called first.   The code below illustrates the problem and how
> to solve it.
>
> from sqlalchemy import create_engine
> from sqlalchemy.orm import Session
>
>
> e = create_engine("sqlite://")
>
> s = Session(e)
>
>
> conn = s.connection()
>
>
> def boom():
> raise Exception("sqlite commit failed")
>
> # "break" connection.commit(),
> # so that the commit fails
> conn.connection.commit = boom
> try:
> # fail
> s.commit()
> except Exception, e:
> # uncomment this to fix the error
> # s.rollback()
> pass
> finally:
> boom = False
>
>
> # prepared state error
> s.connection()
>
> >
> >
> >
> > Full trace is as follows:
> >
> > 2016-01-06 15:33:21.799 ERROR neutron.plugins.ml2.managers
> > [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
> > 83b5358da62a407f88155f447966356f] Mechanism driver
> > 'hp' failed in create_port_precommit
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> > Traceback (most recent call last):
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> >   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py",
> > line 394, in _call_on_drivers
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> > getattr(driver.obj, method_name)(context)
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> >   File
> > "/usr/local/lib/python2.7/dist-packages/baremetal_network_provisioning/ml2/mechanism_hp.py",
> > line 67, in create_port_precommit
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> > raise e
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> > InvalidRequestError: This session is in 'prepared' state; no
> > further SQL can be emitted within this transaction.
> >
> > 2016-01-06 15:33:21.799 TRACE neutron.plugins.ml2.managers
> >
> > 2016-01-06 15:33:21.901 ERROR neutron.api.v2.resource
> > [req-d940a1b6-253a-43d2-b5ff-6c784c8a520f admin
> > 83b5358da62a407f88155f447966356f] create failed
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > Traceback (most recent call last):
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File "/opt/stack/neutron/neutron/api/v2/resource.py", line
> > 83, in resource
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > result = method(request=request, **args)
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File
> > "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in
> > wrapper
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > ectxt.value = e.inner_exc
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File
> > "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
> > 195, in __exit__
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > six.reraise(self.type_, self.value, self.tb)
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File
> > "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in
> > wrapper
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > return f(*args, **kwargs)
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File "/opt/stack/neutron/neutron/api/v2/base.py", line 516,
> > in create
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> > obj = do_create(body)
> >
> > 2016-01-06 15:33:21.901 TRACE neutron.api.v2.resource
> >   File "/opt/stack/neutron/neutron/api/v2/base.py", line 498,
> > 

Re: [openstack-dev] [telemetry][aodh][vitrage] The purpose of notification about alarm updating

2016-01-11 Thread gord chung
One point I should mention is that the notification we propose removing
was originally intended for Ceilometer events. Much in the same way as
disabling the aodh-notifier to allow Vitrage to consume alarm triggers,
it isn't exactly safe for Vitrage to consume the existing messages, as
Ceilometer may also consume them.


The idea with moving it to a discrete notifier (whether zaqar or plain
oslo.messaging) is to make the consumer of the message explicit.
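
A rough sketch of the plain-oslo.messaging variant (the topic,
publisher id and payload here are all made up):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='aodh.evaluator',
        driver='messaging',
        topics=['vitrage_notifications'])  # dedicated topic: explicit consumer

    notifier.info({}, 'alarm.state_transition',
                  {'alarm_id': 'a-b-c', 'state': 'alarm'})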


On 10/01/2016 1:30 AM, AFEK, Ifat (Ifat) wrote:

Hi Julien,

I'm a bit confused. This thread started by liusheng asking to remove the 
notifications to the bus[1], and I replied saying that we do want to use this 
capability in Vitrage. What is the notifier plugin that you suggested? Is it 
the same notifier that liusheng referred to, or something else?


[1] https://review.openstack.org/#/c/246727/

Thanks,
Ifat.


-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info]
Sent: Friday, January 08, 2016 11:42 AM

On Thu, Jan 07 2016, AFEK, Ifat (Ifat) wrote:


We have two motivations: one is that we want to get notifications on
every change, but we don't want to register our webhook to each and
every alarm; and the other is that we already listen to the message
bus for other openstack components, so we would like to handle aodh

the same way.

If we disable aodh-notifier, won't it have other impacts on the
system? What if someone else used a webhook, and won't be notified
because we disabled the notifier?

We could have a notifier plugin that would send a notification using
oslo.messaging I guess. That shouldn't be a big deal.

--
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Markus Zoeller
Tom Fifield  wrote on 01/11/2016 01:55:21 PM:

> From: Tom Fifield 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/11/2016 01:55 PM
> Subject: Re: [openstack-dev] [doc] DocImpact vs. reno
> 
> On 11/01/16 20:08, Sean Dague wrote:
> > On 01/10/2016 11:31 PM, Lana Brindley wrote:
> > 
> >> Wow. That'll make the release notes process painful this round ... 
o.O
> >
> > Hmmm. In my mind it will make it a lot easier. In the past we end up
> > getting to the release and sit around and go "hmmm, what did we change
> > in the last 6 months that people care about?" And forget 90% of it. This
> > does the work up front. We can then just provide a final edit and
> > summary of highlights, and we're done.
> >
> > Having spoken with ops over the years, no one is going to be upset if we
> > tell them all the changes that might impact them.
> >
> >>
> >>
> >>> Would love it to be the case, but I don't think that's correct. Or
> >>> if it's supposed to be correct, it hasn't been well communicated :)
> >>
> >>> Few random reviews from the DocImpact queue that didn't have relnotes:
> >>
> >>> https://review.openstack.org/#/c/180202/
> >
> > I can only speak on the Nova change (as that's a team I review for).
> > You'll see this comment in there -
> > https://review.openstack.org/#/c/180202/31//COMMIT_MSG - a relnote was
> > expected for the patch series. Whether or not it managed to slip
> > through, I don't know.
> 
> Confirmed - no relnotes for this.
> 
> >
> >>> https://review.openstack.org/#/c/249814/
> >>> https://review.openstack.org/#/c/250818/
> >>> https://review.openstack.org/#/c/230983/
> >>
> >>> Didn't really look closely into these - would encourage someone
> >>> with a bit more time to do so, but the fact that these were so trivial
> >>> to eke out means that "nearly all" is almost certainly a bad assumption.
> >>
> >>
> >> My experience would indicate that many, many DocImpact bugs are
> >> really not worthy of relnotes.
> >
> > Can you provide some references? Again, my imagination doesn't really
> > come up with a lot of Nova changes that would be valid DocImpact but
> > wouldn't need a reno. I can see bugs filed against Docs explicitly
> > because there is a mismatch.
> 
> Since you wanted to focus only on nova, here are some more DocImpact 
> reviews that did not have relnotes. Again, I basically haven't read 
> these - if someone wanted to do this properly, much appreciated.
> 
> 
> https://review.openstack.org/#/c/165750/
> https://review.openstack.org/#/c/184153/
> https://review.openstack.org/#/c/237643/
> https://review.openstack.org/#/c/180202/
> https://review.openstack.org/#/c/242213/
> https://review.openstack.org/#/c/224500/
> https://review.openstack.org/#/c/147516/

At a short glance I would say:

https://review.openstack.org/#/c/165750/ 
config option needs to be set for backwards compatible change
=> should have reno file

https://review.openstack.org/#/c/184153/ 
enables snapshot for Parallels. HypervisorSupportMatrix.ini is already
altered within the change => no further action necessary

https://review.openstack.org/#/c/237643/ 
Removes a deprecated config option => should have reno file

https://review.openstack.org/#/c/180202/ 
Enhances flavor extra specs => to this day I don't know how they get 
documented, so I'm unsure what further action is needed

https://review.openstack.org/#/c/242213/ 
changes default values of the policy.json => should have reno file

https://review.openstack.org/#/c/224500/ 
Makes the doc change within the change itself (config option help)
=> no further action necessary

https://review.openstack.org/#/c/147516/
introduces new config options => should have reno file
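
For reference, a reno note is just a small YAML file dropped into
releasenotes/notes/ alongside the code change. A sketch for the
"removes a deprecated config option" case above, with a made-up option
name and file hash:

  # releasenotes/notes/remove-foo-bar-option-0123456789abcdef.yaml
  upgrade:
    - The deprecated ``foo_bar`` configuration option has been removed.
      Deployments that still set it should drop it from nova.conf.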

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Eric LEMOINE
On Mon, Jan 11, 2016 at 5:01 PM, Michał Jastrzębski  wrote:
> On 11 January 2016 at 09:16, Eric LEMOINE  wrote:

>> * Logstash was designed for logs processing.  Heka is a "unified data
>> processing" software, designed to process streams of any type of data.
>> So Heka is about running one service on each box instead of many.
>> Using a single service for processing different types of data also
>> makes it possible to do correlations, and derive metrics from logs and
>> events.  See Rob Miller's presentation [4] for more details.
>
> Right now we use rsyslog for that.


Currently the services running in containers send their logs to
rsyslog. And rsyslog stores the logs in local files, located in the
host's /var/log directory.


> As I understand Heka right now
> would be actually alternative to rsyslog, and that is already
> implemented. Also with Heka case, we might run into same problem we've
> seen with rsyslog - transporting logs from service to heka. Keep in
> mind we're in dockers and heka will be in different container than
> service it's supposed to listen to. We do that by sharing faked
> /dev/log across containers and rsyslog can handle that.


I know. Our plan is to rely on Docker. Basically: containers write
their logs to stdout. The logs are collected by Docker Engine, which
makes them available through the unix:///var/run/docker.sock socket.
The socket is mounted into the Heka container, which uses the Docker
Log Input plugin [*] to read the logs from that socket.

[*] <http://hekad.readthedocs.org/en/latest/config/inputs/docker_log.html>
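
The Heka side of this is a few lines of TOML. A minimal sketch, assuming
the Docker socket is mounted into the Heka container; the Elasticsearch
wiring below is illustrative:

  [DockerLogInput]
  endpoint = "unix:///var/run/docker.sock"

  [ESJsonEncoder]
  es_index_from_timestamp = true

  [ElasticSearchOutput]
  message_matcher = "Type == 'DockerLog'"
  encoder = "ESJsonEncoder"
  server = "http://elasticsearch:9200"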


> Also Heka seems to be just processing mechanism while logstash is
> well...stash, so it saves and persists logs, so it seems to me they're
> different layers of log processing.

No. Logstash typically stores the logs in Elasticsearch. And we'd do
the same with Heka.


>
> Seems to me we need an additional comparison of heka vs rsyslog ;) Also
> this would have to be hands down better, because rsyslog is already
> implemented, working, and most operators know how to use it.


We don't need to remove Rsyslog. Services running in containers can
write their logs to both Rsyslog and stdout, which is even what they
do today (at least for the OpenStack services).


Hope that makes sense!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Seeking FFE for "Support single disk image OVA/OVF package upload"

2016-01-11 Thread stuart . mclaren

Hello Glance Team!
  Hope you had a wonderful vacation and wishing you health 
and happiness for 2016.


Would very much appreciate your considering 
https://review.openstack.org/194868 for a feature freeze exception.


I believe the spec is pretty solid, and we can deliver on the 
implementation by M-2. But we were unable to get enough core
votes during the holiday season.



  Regards



  Malini







I downloaded the patch which implements the spec 
(https://review.openstack.org/#/c/214810):


I can make this REST API call to perform OVA import:

http://paste.openstack.org/show/483332/

(it exercises the new OVA extract code path).
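
For context, that call is a POST to the v2 /tasks endpoint.
Schematically, via python-glanceclient - the endpoint and input values
here are hypothetical:

  import glanceclient

  glance = glanceclient.Client('2',
                               endpoint='http://glance.example.com:9292',
                               token='...')
  # re-uses the generic 'import' task type
  task = glance.tasks.create(
      type='import',
      input={'import_from': 'swift://container/image.ova',
             'import_from_format': 'ova',
             'image_properties': {'name': 'imported-ova'}})
  print(task.status)  # 'pending', then 'processing'/'success'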

There's a parallel effort in the project to provide 'official' (Defcore)
APIs for image upload/conversion. What will be the advantage of having
two different REST API calls (a 'tasks' based one and a DefCore one)
for importing OVAs?


As you mentioned above, the team is working on refactoring the image
import process for Glance. The solution has different requirements and
dependencies. One of those dependencies is making the existing task
API admin only to then be able to deprecate it in the future.

This has been discussed several times at the summit, in meetings and,
to make sure it's clear to everyone (apparently it isn't), it's also
been made part of the spec of the refactor you mentioned[0] (see work
items).

[0] 
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html#work-items



 I understand we're aiming to retire the tasks API and that this call is
 intended to be admin only.

 I guess what I'd like to get a handle on is this:

 In the future, if a user asks "Why did my OVA API call fail?", we'll
 need to ask "Well, which OVA API did you use?"

 At that level there is a duplication -- something we'd generally try to
 avoid. If we're going to have that duplication I'd just like to understand
 why. For example, is it because we want the functionality exposed
 sooner? Or is it that the ability to trigger the new functionality via
 a /tasks call is a side-effect of the task implementation (one we don't
 really want in this case) and it would be too much effort to change it.



It seems to be possible to successfully make the above API call whether
you're admin or not and whether the server has the patch applied or
not. Is that expected?


You'll be able to run that request until this[0] is done.


 The current implementation hard codes that the OVA functionality is admin
 only.  But the call still "succeeds" as a non-admin (creates an active
 image) because it re-uses the 'import' task type (ie we've an overloaded
 API). That means that if you're admin the API is ambiguous (knowing
 the request and response is not enough to know what actually happened).
 That may only be the current implementation though, and may not be what
 the spec intended.

 (Let's follow this bit up on the spec.)


In addition
to this work, there's also the requirement to make the task executable
only by admins.
This has been explicitly mentioned in the OVF spec and
we need to test/enforce that the code respects this.

Note that we're evaluating an exception for the *spec* and not the
code. Therefore, using the current version of the code as a reference
rather than what's in the spec is probably not ideal.

One final note that I'd like to make. The *task class/implementation*
has nothing to do with the API. It can be functionally tested without
API calls and it can be disabled. The fact that you can run it using
the old task API doesn't mean that you should or that it's what we're
recommending. The old task API is taking its first step down the
deprecation path and it'll take some time for that to happen. This, I
insist, does not mean the team is encouraging such API.

The OVF work was delayed in Kilo. We also blocked it in Liberty because
we knew the upload path needed to be refactored. In Mitaka we blocked
it until the very end of the spec review process because we wanted to
make sure it wouldn't interfere with the priorities of the cycle. Now
that we know that, I can hardly think of a reason not to let this
through. One motivation is that I don't think it'll confuse folks as
we're clearly saying (in code, communications and whatnot) that the
old task API should go away.

Sure, some people don't listen and the world isn't perfect, but there are 
trade-offs and those are the ones we're evaluating.

[0] https://bugs.launchpad.net/glance/+bug/1527716

Cheers,
Flavio



-Stuart



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [doc] DocImpact vs. reno

2016-01-11 Thread Doug Hellmann
Excerpts from Lana Brindley's message of 2016-01-11 14:31:17 +1000:
> On 09/01/16 14:07, Tom Fifield wrote:
> > On 08/01/16 21:15, Sean Dague wrote:
> >> On 01/07/2016 06:21 PM, Lana Brindley wrote:
> >>>
>  On 7 Jan 2016, at 2:09 AM, Sean Dague  wrote:
> 
>  On 01/06/2016 09:02 AM, Jeremy Stanley wrote:
> > On 2016-01-06 07:52:48 -0500 (-0500), Sean Dague wrote:
> > [...]
> >> I think auto opening against a project, and shuffling it to
> >> manuals manually (with details added by humans) would be fine.
> >>
> >> It's not clear to me why a new job was required for that.
> >
> > The new check job was simply a requirement of the Docs team, since
> > they were having trouble triaging auto-generated bugs they were
> > receiving and wanted to enforce the inclusion of some expository
> > metadata.
> 
>  Sure, but if that triage is put back on the Nova team, that seems fine.
> >>>
> >>> So you’re thinking we should make all docimpact bugs go to the project 
> >>> bug queue? Even for defcore?
> >>
> >> Yes, because then it would be the responsibility of the project team to
> >> ensure there is enough info before passing it onto the docs team.
> 
> I'm willing to try this, if the defcore teams approve.
> 
> >>>
> 
>  It also doesn't make sense to me there would be a DocImpact change that
>  wouldn't also have a reno section. The reason someone thinks that a
>  change needs reflection in the manual is that it adds/removes/changes a
>  feature that would also show up in release notes. Perhaps my imagination
>  isn't sufficient to come up with a scenario where DocImpact is valid,
>  but we wouldn't have content in one of those sections.
> >>>
> >>> I can think of plenty. What about where a command is changed slightly? Or 
> >>> an output is formatted differently? Or where flags have been removed, or 
> >>> default values changed, or ….
> >>
> >> Nearly all of those changes have been triggering release notes at this
> >> point. They are all changes the user needs to adapt to because they
> >> potentially impact compatibility.
> 
> Wow. That'll make the release notes process painful this round ... o.O

Can you clarify what you're worried about here? The point of the
new tool is that once the note is added to the patch with the code,
there is no more manual work to do to publish it.

> 
> > 
> > Would love it to be the case, but I don't think that's correct. Or if it's 
> > supposed to be correct, it hasn't been well communicated :)
> > 
> > Few random reviews from the DocImpact queue that didn't have relnotes:
> > 
> > https://review.openstack.org/#/c/180202/
> > https://review.openstack.org/#/c/249814/
> > https://review.openstack.org/#/c/250818/
> > https://review.openstack.org/#/c/230983/
> > 
> > Didn't really look closely into these - would encourage someone with a bit 
> > more time to do so, but the fact that these were so trivial to eke out 
> > means that "nearly all" is almost certainly a bad assumption.
> > 

There was a period of time early on, while we were still rolling out the
tool, when notes may not have been written. Each project team was
supposed to go back and review commits made prior to turning reno on to
see if they needed notes, and to commit them separately. Has that been
done here?

Perhaps one way to take advantage of both tools is to have the DocImpact
validation look inside the commit and require a release notes file? That
way the reviewable portion of the commit is not in the message, but we
can still require some description of why DocImpact was added.

Doug

> 
> My experience would indicate that many, many DocImpact bugs are really not 
> worthy of relnotes.
> 
> > 
> >>>
> >>> Bugs and relnotes are two very different things.
> 
> 
> L
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-11 Thread Dan Prince
Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I created a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.
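
As a point of reference, a Mistral v2 workflow is just YAML. A trivial,
hypothetical example (std.echo is a built-in action; the OpenStack
actions from [3] are invoked the same way):

  ---
  version: '2.0'

  introspect_node:            # hypothetical workflow name
    type: direct
    input:
      - node_uuid
    tasks:
      announce:
        action: std.echo output=<% $.node_uuid %>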



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the "deploy OpenStack with OpenStack" vision. I sort of feel like if we
have to build our own API in TripleO, part of this vision has failed; it
could even be seen as massive technical debt which would likely be hard
to build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API easier
to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to TripleO that we
accept into our upstream "stock" workflow sets.



Last week we landed the last patches [4] to our undercloud to enable
installing Mistral by simply setting: enable_mistral = true in
undercloud.conf. NOTE: you'll need to be using a recent trunk repo from
Delorean so that you have the recently added Mistral packages for this
to work. Although the feature is disabled by default, this should give
those wishing to tinker with Mistral as a new TripleO undercloud
service an easy path forward.

[1] https://review.openstack.org/#/c/230432
[2] https://www.youtube.com/watch?v=bnAT37O-sdw
[3] http://git.openstack.org/cgit/openstack/mistral/tree/mistral/actions/openstack/mapping.json
[4] https://etherpad.openstack.org/p/tripleo-undercloud-workflow


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] FYI: Updated wiki on creating functional jobs

2016-01-11 Thread Paul Michali
I revised the wiki page that I created previously on how to add functional
jobs to the gate. I added more details, noted the need for liaison
review, and added some miscellaneous information on templates, skipping
tests, and making tests release based.

Ref: https://wiki.openstack.org/wiki/Neutron/FunctionalGateSetup
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [eslint-config-openstack] Nominating Beth Elwell to eslint-config-openstack core

2016-01-11 Thread Michael Krotscheck
Hello everyone!

This email is to nominate Elizabeth "Beth" Elwell (aka betherly) to core on
eslint-config-openstack. She provides valuable EU timezone coverage, as
well as a friendly and inviting demeanor when discussing language rules.
Also, she's been keeping up-to-date with the extant reviews, and - most
importantly - said yes when I asked her.

In my book, that's a little crazypants, because she's willing to jump into
the linting fight. But hey, everyone's weird in their own special way :)

Michael Krotscheck
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] [api] Vision and changes for OpenStack API Guides

2016-01-11 Thread Anne Gentle
On Fri, Jan 8, 2016 at 7:35 PM, Anne Gentle 
wrote:

>
>
> On Fri, Jan 8, 2016 at 3:39 PM, Anne Gentle wrote:
>
>> Hi all,
>>
>> With milestone 2 coming next week I want to have a chat about API guides,
>> API reference information, and the http://developer.openstack.org site.
>> We have a new tool, fairy-slipper [1], yep you read that right, that can
>> migrate files from WADL to Swagger as well as serve up reference info. We
>> also have new build jobs that can build API guides right from your projects
>> repo to developer.openstack.org.
>>
>> There's a lot going on, so I've got an agenda item for next week to hold
>> a cross-project meeting so that you can send a team representative to get
>> the scoop on API guides and bring it back to your teams. I've fielded great
>> questions from a few teams, and want to ensure everyone's in the loop.
>>
>> A pair of specs set the stage for this work, feel free to review these in
>> advance of the meeting and come with questions. [2] [3]
>>
>> Join me next week at the first pop-up Cross Project meeting, Tuesday at
>> 1700, in #openstack-meeting-cp. Feel free to add to the agenda at
>>
>
>
Double whoops, I mean Tuesday at 2100.
Anne


> Woops, agenda here:
>
>
> https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
>
>
>
>>
>> Thanks,
>> Anne
>>
>>
>> 1. https://github.com/openstack/fairy-slipper
>> 2. mitaka spec:
>> http://specs.openstack.org/openstack/docs-specs/specs/mitaka/app-guides-mitaka-vision.html
>> 3. liberty spec:
>> http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html
>>
>> --
>> Anne Gentle
>> Rackspace
>> Principal Engineer
>> www.justwriteclick.com
>>
>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 1/11/2016

2016-01-11 Thread Renat Akhmerov
Hi,

Thank you for joining the meeting today. In case you'd like to see the 
meeting minutes and log, here they are:
http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-01-11-16.00.html
 

http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-01-11-16.00.log.html
 


Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-11 Thread Armando M.
Hi neutrinos,

A kind reminder for tomorrow's meeting at 1400UTC.

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

2016-01-11 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Nova/Neutron Folks,
Are we in agreement on the Nova/Neutron related changes for addressing the 
live migration issues?
We started this thread a couple of weeks back and it went silent.
The discussion is still active as part of the bug:
https://bugs.launchpad.net/neutron/+bug/1456073

Please add in your comments if you have any issues or concerns.

Thanks
Swami

From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Friday, December 18, 2015 10:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

Hi Oleg/Sean M Collins/Carl_baldwin/Kevin Benton/GaryK,

Here is my thought. I have updated the bug report with the details. But this is 
for your information.


Oleg, in my opinion the right time to create the floatingip infrastructure 
would be before the VM actually migrates, while the migration is being planned.

1. If we get the "future_host" for migration information from the nova, we can 
prepare the host for the fip migration - like
   Create Router namespace
   Create FIP namespace
   Associate the Router and FIP Namespace.
  I have made some headway with this on this patch.
 https://review.openstack.org/#/c/259171/

2. In order for this to work, we have to track the port with respect to the 
"old_host", "cur_host" and "new_host" or "future_host".
   For this I would suggest that we make changes to the port-binding table to 
handle all "host" changes.
  In this case the old_host and the cur_host can be the same. The new_host 
denotes where the port is intended to move. Once we get this information, the 
server can pre-populate the details and send it to the agent to create the fip 
namespace.
  In order to address this I have already created a patch.
  https://review.openstack.org/#/c/259299/

3. The thing we still need to decide is whether we need a different type of 
"event_notifier", such as "MIGRATE_START" or "MIGRATE_END" for the port, or 
whether we are going to make use of the same "UPDATE_PORT", "BEFORE_UPDATE" 
for this. -- This should be considered.

4. With all this infrastructure in place, when NOVA provides us a notification 
before "pre-migration" to set up the L3, we can go ahead and create it.

5. If there are any other issues on the neutron side, we can notify NOVA that 
the network is not ready for migration, and NOVA should take necessary action.

6. If everything is fine, we send an "OK" message, and NOVA will proceed with 
the migration.

7. If NOVA errors out, it should send a reply back to us about its state, 
and we should revert the state on our side.

Please let me know if you have any other questions.


Thanks
Swami

From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: Friday, December 18, 2015 2:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

I think it might be a bit early to start a cross-project discussion on this.
I'd suggest we first figure out what questions we have and what we would like 
to get from nova.
So I think it'd be more constructive if we first think on it within the 
neutron team.
I left some questions on the bug [1], please see comment #8

[1] https://bugs.launchpad.net/neutron/+bug/1456073

Thanks,
Oleg

On Fri, Dec 18, 2015 at 12:14 AM, Vasudevan, Swaminathan (PNB Roseville) 
> wrote:
Hi Sean M. Collins,
Thanks for the information.
It would be great if we could bring in the right people from both sides to 
discuss and solve this problem.
Please let me know if you can pull in the right people from the nova side and I 
can get the people from the neutron side.

Thanks
Swami

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com]
Sent: Thursday, December 17, 2015 1:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

On Thu, Dec 17, 2015 at 02:08:42PM EST, Vasudevan, Swaminathan (PNB Roseville) 
wrote:
> Hi Folks,
> I would like to organize a meeting between the Nova and Neutron teams to work 
> on refining the Nova/Neutron notifications for the Live Migration.
>
> Today we only have notifications from Neutron to Nova on any port status 
> update.
>
> But we don't have any similar notification from Nova on any Migration state 
> change.
> Neutron L3 will be interested in knowing the state change for vm migration 
> and can take necessary action pro-actively to create the necessary L3 related 
> plumbing that is required.
>
> Here are some of the bugs that are currently filed with respect to nova live 
> migration and neutron.
> https://bugs.launchpad.net/neutron/+bug/1456073
> https://bugs.launchpad.net/neutron/+bug/1414559
>
> Please let me know who will be interested in participating in the discussion.
> It would be great if we get some 

Re: [openstack-dev] [Neutron] Heads up for decomposed plugin break

2016-01-11 Thread Doug Wiegley


> On Jan 11, 2016, at 2:42 AM, Ihar Hrachyshka  wrote:
> 
> Sean M. Collins  wrote:
> 
>>> On Fri, Jan 08, 2016 at 07:50:47AM PST, Chris Dent wrote:
 On Fri, 8 Jan 2016, Gary Kotton wrote:
 
 The commit https://github.com/openstack/neutron/commit/5d53dfb8d64186-
 b5b1d2f356fbff8f222e15d1b2 may break the decomposed plugins that make
 use of the method _get_tenant_id_for_create
>>> 
>>> Just out of curiosity, is it not standard practice that a plugin
>>> shouldn't use a private method?
>> 
>> +1 - hopefully decomposed plugins will audit their code and look for
>> other calls to private methods.
> 
> The fact that it broke *aas repos too suggests that we were not setting a 
> proper example for those decomposed. I think it can be reasonable to restore 
> the method until N, with a deprecation message, as Gary suggested in his 
> patch. Especially since there is no actual burden to keep the method for 
> another cycle.

The neutron community has been really lax about enforcing private methods. And 
while we should absolutely reverse that trend, we should likely give some 
warning. I agree with not going whole hog on that until N. 

I'd suggest putting in a debtcollector reference when putting the method back. 
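
Something like this, as a sketch using debtcollector's removals helper
(the message and version values are illustrative):

  from debtcollector import removals

  @removals.remove(
      message="_get_tenant_id_for_create is private; out-of-tree "
              "plugins should stop calling it",
      version="mitaka", removal_version="N")
  def _get_tenant_id_for_create(self, context, resource):
      # original body unchanged; callers now get a DeprecationWarning
      ...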

Doug

> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-11 Thread Serge Kovaleff
Option 3 - to concentrate on Tempest tests
https://review.openstack.org/#/c/253982/

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70

On Mon, Jan 11, 2016 at 4:49 PM, Serge Kovaleff 
wrote:

> Hi All,
>
> Last week I had a noble goal to write "one-more" functional test in Ironic.
> I did find a folder "func" but it was empty.
>
> Friends helped me to find a WIP patch
> https://review.openstack.org/#/c/235612/
>
> and here comes the question of this email: what approach we would like to
> implement:
> Option 1 - write infrastructure code that starts/configure/stops the
> services
> Option 2 - rely on installed DevStack and run the tests over it
>
> Both options have their Cons and Pros. Both options are implemented across
> the OpenStack umbrella.
> Option 1 - Glance, Nova, the patch above
> Option 2 - HEAT and my favorite at the moment.
>
> Any ideas?
>
> Cheers,
> Serge Kovaleff
> http://www.mirantis.com
> cell: +38 (063) 83-155-70
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [API] [Glance] New v2 Image Import Specification Discussion

2016-01-11 Thread Flavio Percoco

On 08/01/16 17:14 +, Ian Cordasco wrote:

Hey all,


Yo!

Thanks for taking the time to read the spec and writing this email.



I'm well aware that this isn't exactly timely because we've partially agreed on 
the import spec in broad strokes. [1]_ I haven't had a great deal of time 
before now to really dig into the spec, though, so I'm bringing my feedback. 
I've spoken to some API-WG members and other Glance team members and I thought 
it would be best to discuss some of the concerns here.

An outline (or tl;dr) is roughly

- We're exposing way too much complexity to the user


I agree with this to some extent and I think we could work out a way
to reduce some of these calls or make them optional - as in, they would
always be exposed, but the user wouldn't need to call them to complete
an upload.


- The user has to be very aware of discovering all of the data from the cloud
  - The user has to store a lot of that data to make subsequent requests
  - Exposing information to the user is not bad, making the user resubmit it to 
the cloud is bad


This is the part I'd like to simplify a bit in the current spec.


- We're introducing two new image states. Do we need two?


In some of the patch sets we discussed ways to avoid adding new statuses,
but the agreement seemed to have been to add these two to communicate
things properly to users and to ease the way we reason about the
process within Glance. I'll quote Stuart: "When you need a new status,
you need a new status"


- Quotas - Are these even still a thing?


Yes they are and they were discussed in the spec as well.


So the problem description explains why uploading an image can be very complex 
and why some clouds need to not allow uploads directly to Glance. I understand 
that very well. I also recognize that I am the person who advocated so strongly 
for having information endpoints that help the user discover information about 
the cloud. I think this allows for us to make smarter clients that can help the 
user do the right thing (DTRT) instead of shooting themselves in the foot 
accidentally.

All of that said, I have some very severe reservations about the extra surface 
area we're adding to the images API and in particular how a user's view of the 
image upload flow would work. We've shipped v2 of Glance as the default version 
now in python-glanceclient (as of Liberty) so users are already going to be 
familiar with a two-step dance of creating an image:

1. Create the image record (image-create)
2. Upload the image data (image-upload)
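
Roughly, with python-glanceclient - a sketch; the endpoint, token and
file name are hypothetical:

  import glanceclient

  glance = glanceclient.Client('2',
                               endpoint='http://glance.example.com:9292',
                               token='...')
  # 1. create the image record
  image = glance.images.create(name='cirros', disk_format='qcow2',
                               container_format='bare')
  # 2. upload the image data
  with open('cirros.img', 'rb') as f:
      glance.images.upload(image.id, f)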

In this new workflow, the dance is extended

1. Get information about how the cloud is configured and prefers images


Technically, this call is optional. It is a requirement for scripts
that want to work across multiple clouds, as you need this info for
#4.


2. Create the image record
3. Upload the image data somewhere (directly to Glance, or to some other 
service in that cloud perhaps [e.g., swift]) that is specified in the data 
retrieved from step 1
4. Tell Glance to import/process the image


This one I think we could simplify a bit by not making all these
fields required and allowing the cloud to have its own defaults.



Here are my issues with this new upload workflow:

1. The user must pre-fetch the information to have any chance of their upload 
being successful


Not really (read above).


2. They have to use that information to find out where to upload the image 
bytes (data) to


Not really. The upload info is returned in the `image-create` call
which is a requirement in any of the evaluated workflow options. The
new header tells the user where the image data should go.


3. In step 4, they have to tell Glance if they're using the direct/indirect 
(currently named glance-direct and swift-local respectively) and then give it 
information (retrieved from step 1) about what the target formats are for that 
cloud



Again, I believe we could make the target_* fields optional in the
*import* call, as I (the user) most of the time don't care about the
final format of my image as long as it boots. Perhaps the most
critical information in call #1 is the one related to the source_*
fields, as that's something the user must respect. Not passing the
target_* fields in the import call will let the cloud use whatever it
has as the default target format, which can be communicated through call
#1.

[snip just want to focus on the proposal]


So what if we modeled this a little differently:

1. The user creates an image record (image-create)
2. The user uses the link in the response to upload the image data (we don't 
care whether it is direct or not because the user can follow it without 
thinking about it)
3. The user sends a request to process the image without having to specify what 
to translate the image to because the operator has chosen a preferred image 
type (and maybe has other acceptable types that mean the image will not need to 
be converted if it's already of that type).


If you read my comments above, you'll notice that 

Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Michał Jastrzębski
On 11 January 2016 at 09:16, Eric LEMOINE  wrote:
> Hi
>
> As discussed on IRC the other day [1] we want to propose a distributed
> logs processing architecture based on Heka [2], built on Alicja
> Kwasniewska's ELK work with
> .  Please take a look at the
> design document I've started working on [3].  The document is still
> work-in-progress, but the "Problem statement" and "Proposed change"
> sections should provide you with a good overview of the architecture
> we have in mind.
>
> In the proposed architecture each cluster node runs an instance of
> Heka for collecting and processing logs.  And instead of sending the
> processed logs to a centralized Logstash instance, logs are directly
> sent to Elasticsearch, which itself can be distributed across multiple
> nodes for high-availability and scaling.  The proposed architecture is
> based on Heka, and it doesn't use Logstash.
>
> That being said, it is important to note that the intent of this
> proposal is not strictly directed at replacing Logstash by Heka.  The
> intent is to propose a distributed architecture with Heka running on
> each cluster node rather than having Logstash run as a centralized
> logs processing component.  For such a distributed architecture we
> think that Heka is more appropriate, with a smaller memory footprint
> and better performances in general.  In addition, Heka is also more
> than a logs processing tool, as it's designed to process streams of
> any type of data, including events, logs and metrics.
>
> Some elements of comparison between Heka and Logstash:
>
> * Logstash was designed for logs processing.  Heka is a "unified data
> processing" software, designed to process streams of any type of data.
> So Heka is about running one service on each box instead of many.
> Using a single service for processing different types of data also
> makes it possible to do correlations, and derive metrics from logs and
> events.  See Rob Miller's presentation [4] for more details.

Right now we use rsyslog for that. As I understand Heka right now
would be actually alternative to rsyslog, and that is already
implemented. Also with Heka case, we might run into same problem we've
seen with rsyslog - transporting logs from service to heka. Keep in
mind we're in dockers and heka will be in different container than
service it's supposed to listen to. We do that by sharing faked
/dev/log across containers and rsyslog can handle that.
Also Heka seems to be just a processing mechanism while logstash is
well...a stash, so it saves and persists logs; it seems to me they're
different layers of log processing.

Seems to me we need an additional comparison of heka vs rsyslog ;) Also
this would have to be hands down better, because rsyslog is already
implemented, working, and most operators know how to use it.

> * The virtual size of the Logstash Docker image is 447 MB, while the
> virtual size of an Heka image built from the same base image
> (debian:jessie) is 177 MB.  For comparison the virtual size of the
> Elasticsearch image is 345 MB.
>
> * Heka is written in Go and has no dependencies.  Go programs are
> compiled to native code.  This in contrast to Logstash which uses
> JRuby and as such requires running a Java Virtual Machine.  Besides
> this native versus interpreted code aspect, this also can raise the
> question of which JVM to use (Oracle, OpenJDK?) and which version
> (6,7,8?).
>
> * There are six types of Heka plugins: Inputs, Splitters, Decoders,
> Filters, Encoders, and Outputs.  Heka plugins are written in Go or
> Lua.  When written in Lua their executions are sandbox'ed, where
> misbehaving plugins may be shut down by Heka.  Lua plugins may also be
> dynamically added to Heka with no config changes or Heka restart. This
> is an important property in container environments such as Mesos,
> where workloads are changed dynamically.
>
> * To avoid losing logs under high load it is often recommended to use
> Logstash together with Redis [5].  Redis plays the role of a buffer,
> where logs are queued when Logstash or Elasticsearch cannot keep up
> with the load.  Heka, as a "unified data processing" software,
> includes its own resilient message queue, making it unnecessary to use
> an external queue (Redis for example).
>
> * Heka is faster than Logstash for processing logs, and its memory
> footprint is smaller.  I ran tests, where 3,400,000 log messages were
> read from 500 input files and then written to a single output file.
> Heka processed the 3,400,000 log messages in 12 seconds, consuming
> 500M of RAM.  Logstash processed the 3,400,000 log messages in 1mn
> 35s, consuming 1.1G of RAM.  Adding a grok filter to parse and
> structure logs, Logstash processed the 3,400,000 log messages in 2mn
> 15s, consuming 1.5G of RAM. Using an equivalent filtering plugin, Heka
> processed the 3,400,000 log messages in 27s, consuming 730M of RAM.
> See my GitHub repo [6] for more 

Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Doug Hellmann
Excerpts from Victor Stinner's message of 2016-01-11 11:15:56 +0100:
> Le 11/01/2016 10:37, Thierry Carrez a écrit :
> > Joshua Harlow wrote:
> >> [...]
> >> So I'd def help keep prettytable going, of course another option is to
> >> move to https://pypi.python.org/pypi/tabulate (which does seem active
> >> and/or maintained); tabulate provides pretty much the same thing
> >> (actually more table formats @
> >> https://pypi.python.org/pypi/tabulate#table-format ) than prettytable
> >> and the api is pretty much the same (or nearly).
> >>
> >> So that's another way to handle this (just to move off prettytable
> >> entirely).
> >
> > This sounds like a reasonable alternative...
> 
> 
> IMHO contributing to an actively developed library (tabulate) seems 
> more productive than starting to maintain a second library which is 
> currently no longer maintained.
> 
> Does anyone know how much code would have to be modified to replace 
> prettytable with tabulate across the whole OpenStack project?

We seem to have a rather long list of things consuming PrettyTable:

$ grep -i prettytable */*requirement*.txt
automaton/requirements.txt:PrettyTable<0.8,>=0.7
cliff/requirements.txt:PrettyTable<0.8,>=0.7
cloudv-ostf-adapter/requirements.txt:PrettyTable>=0.7,<0.8
distil/requirements.txt:prettytable==0.7.2
faafo/requirements.txt:PrettyTable>=0.7,<0.8
fairy-slipper/requirements.txt:prettytable
gnocchi/requirements.txt:prettytable
ironic-lib/requirements.txt:PrettyTable<0.8,>=0.7
kolla-mesos/requirements.txt:PrettyTable<0.8,>=0.7
magnum/requirements.txt:PrettyTable<0.8,>=0.7
nova/requirements.txt:PrettyTable<0.8,>=0.7
openstack-ansible/requirements.txt:PrettyTable>=0.7,<0.8   # scripts/inventory-manage.py
python-blazarclient/requirements.txt:PrettyTable>=0.7,<0.8
python-ceilometerclient/requirements.txt:PrettyTable<0.8,>=0.7
python-cinderclient/requirements.txt:PrettyTable<0.8,>=0.7
python-evoqueclient/requirements.txt:PrettyTable<0.8,>=0.7
python-glanceclient/requirements.txt:PrettyTable<0.8,>=0.7
python-heatclient/requirements.txt:PrettyTable<0.8,>=0.7
python-ironicclient/requirements.txt:PrettyTable<0.8,>=0.7
python-keystoneclient/requirements.txt:PrettyTable<0.8,>=0.7
python-magnumclient/requirements.txt:PrettyTable<0.8,>=0.7
python-manilaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-monascaclient/requirements.txt:PrettyTable>=0.7,<0.8
python-muranoclient/requirements.txt:PrettyTable<0.8,>=0.7
python-novaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-rackclient/requirements.txt:PrettyTable>=0.7,<0.8
python-saharaclient/requirements.txt:PrettyTable<0.8,>=0.7
python-searchlightclient/requirements.txt:PrettyTable<0.8,>=0.7
python-senlinclient/requirements.txt:PrettyTable<0.8,>=0.7
python-surveilclient/requirements.txt:prettytable
python-troveclient/requirements.txt:PrettyTable<0.8,>=0.7
python-tuskarclient/requirements.txt:PrettyTable<0.8,>=0.7
rally/requirements.txt:PrettyTable<0.8,>=0.7
requirements/global-requirements.txt:PrettyTable>=0.7,<0.8
sahara/test-requirements.txt:PrettyTable<0.8,>=0.7
scalpels/requirements.txt:PrettyTable>=0.7,<0.8
stacktach-klugman/requirements.txt:prettytable
vmtp/requirements.txt:prettytable>=0.7.2
vmware-nsx/requirements.txt:PrettyTable<0.8,>=0.7

I suspect we could skip porting a lot of those, since they look like
clients and we're working to move all command line programs into the
unified client.

That leaves 18:

$ grep -i prettytable */*requirement*.txt | grep -v client
automaton/requirements.txt:PrettyTable<0.8,>=0.7
cliff/requirements.txt:PrettyTable<0.8,>=0.7
cloudv-ostf-adapter/requirements.txt:PrettyTable>=0.7,<0.8
distil/requirements.txt:prettytable==0.7.2
faafo/requirements.txt:PrettyTable>=0.7,<0.8
fairy-slipper/requirements.txt:prettytable
gnocchi/requirements.txt:prettytable
ironic-lib/requirements.txt:PrettyTable<0.8,>=0.7
kolla-mesos/requirements.txt:PrettyTable<0.8,>=0.7
magnum/requirements.txt:PrettyTable<0.8,>=0.7
nova/requirements.txt:PrettyTable<0.8,>=0.7
openstack-ansible/requirements.txt:PrettyTable>=0.7,<0.8   # scripts/inventory-manage.py
rally/requirements.txt:PrettyTable<0.8,>=0.7
requirements/global-requirements.txt:PrettyTable>=0.7,<0.8
sahara/test-requirements.txt:PrettyTable<0.8,>=0.7
scalpels/requirements.txt:PrettyTable>=0.7,<0.8
stacktach-klugman/requirements.txt:prettytable
vmtp/requirements.txt:prettytable>=0.7.2
vmware-nsx/requirements.txt:PrettyTable<0.8,>=0.7

There are a few server projects, and I assume they are using the
lib in their management commands.  I'm sure there are shell scripts
out there parsing the rendered tables.  Does tabulate produce tables
using the exact same format as PrettyTable?
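
A quick sanity check suggests "close, but not byte-identical" - e.g.
tabulate's "psql" format uses the same +---+ borders but a different
header separator and default alignment, which is exactly the kind of
difference that breaks scripts parsing the rendered tables. A sketch:

  from prettytable import PrettyTable
  from tabulate import tabulate

  rows = [["cpu", 4]]

  pt = PrettyTable(["Name", "Value"])
  for row in rows:
      pt.add_row(row)
  print(pt)  # +---+ border lines above and below the header row

  print(tabulate(rows, headers=["Name", "Value"],
                 tablefmt="psql"))  # |---| separator under the header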

We could justify moving cliff, since it makes it easy to produce
more parsable output using selectable formatters, but some of those
other consuming projects may be a bit harder to change if we want
to maintain our backwards compatibility requirements.

There are a few libraries on the list, too (automaton, ironic-lib), and
that's confusing. It 

Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-11 Thread Serge Kovaleff
We might want to implement all three options. The more tests the better :)

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70

On Mon, Jan 11, 2016 at 5:41 PM, John Villalovos  wrote:

> I personally like Option #1 as an additional goal is to make it easy for
> developers to also run the functional tests with 'tox -efunctional'.  And
> also run the functional tests as part of a normal tox run.
>
> On Mon, Jan 11, 2016 at 6:54 AM, Serge Kovaleff 
> wrote:
>
>> Option 3 - to concentrate on Tempest tests
>> https://review.openstack.org/#/c/253982/
>>
>> Cheers,
>> Serge Kovaleff
>> http://www.mirantis.com
>> cell: +38 (063) 83-155-70
>>
>> On Mon, Jan 11, 2016 at 4:49 PM, Serge Kovaleff 
>> wrote:
>>
>>> Hi All,
>>>
>>> Last week I had a noble goal to write "one-more" functional test in
>>> Ironic.
>>> I did find a folder "func" but it was empty.
>>>
>>> Friends helped me to find a WIP patch
>>> https://review.openstack.org/#/c/235612/
>>>
>>> and here comes the question of this email: what approach we would like
>>> to implement:
>>> Option 1 - write infrastructure code that starts/configure/stops the
>>> services
>>> Option 2 - rely on installed DevStack and run the tests over it
>>>
>>> Both options have their Cons and Pros. Both options are implemented
>>> across the OpenStack umbrella.
>>> Option 1 - Glance, Nova, the patch above
>>> Option 2 - HEAT and my favorite at the moment.
>>>
>>> Any ideas?
>>>
>>> Cheers,
>>> Serge Kovaleff
>>> http://www.mirantis.com
>>> cell: +38 (063) 83-155-70
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-11 Thread John Villalovos
I personally like Option #1 as an additional goal is to make it easy for
developers to also run the functional tests with 'tox -efunctional'.  And
also run the functional tests as part of a normal tox run.

On Mon, Jan 11, 2016 at 6:54 AM, Serge Kovaleff 
wrote:

> Option 3 - to concentrate on Tempest tests
> https://review.openstack.org/#/c/253982/
>
> Cheers,
> Serge Kovaleff
> http://www.mirantis.com
> cell: +38 (063) 83-155-70
>
> On Mon, Jan 11, 2016 at 4:49 PM, Serge Kovaleff 
> wrote:
>
>> Hi All,
>>
>> Last week I had a noble goal to write "one-more" functional test in
>> Ironic.
>> I did find a folder "func" but it was empty.
>>
>> Friends helped me to find a WIP patch
>> https://review.openstack.org/#/c/235612/
>>
>> and here comes the question of this email: what approach we would like to
>> implement:
>> Option 1 - write infrastructure code that starts/configure/stops the
>> services
>> Option 2 - rely on installed DevStack and run the tests over it
>>
>> Both options have their Cons and Pros. Both options are implemented
>> across the OpenStack umbrella.
>> Option 1 - Glance, Nova, the patch above
>> Option 2 - HEAT and my favorite at the moment.
>>
>> Any ideas?
>>
>> Cheers,
>> Serge Kovaleff
>> http://www.mirantis.com
>> cell: +38 (063) 83-155-70
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Flavio Percoco

On 11/01/16 11:15 +0100, Victor Stinner wrote:

Le 11/01/2016 10:37, Thierry Carrez a écrit :

Joshua Harlow wrote:

[...]
So I'd def help keep prettytable going, of course another option is to
move to https://pypi.python.org/pypi/tabulate (which does seem active
and/or maintained); tabulate provides pretty much the same thing
(actually more table formats @
https://pypi.python.org/pypi/tabulate#table-format ) than prettytable
and the api is pretty much the same (or nearly).

So that's another way to handle this (just to move off prettytable
entirely).


This sounds like a reasonable alternative...



IMHO contributing to an actively developed library (tabulate) seems 
more productive than starting to maintain a second library which is 
currently no longer maintained.


Does anyone know how much code would have to be modified to replace 
prettytable with tabulate across the whole OpenStack project?


--

I don't like the global trend in OpenStack of creating a new community 
separated from the Python community. In general, OpenStack libraries 
have too many dependencies and it's harder to contribute to other 
projects. Gerrit is less popular than GitHub, and OpenStack requires 
signing a contributor agreement. I would prefer to stop moving things 
into OpenStack and to continue contributing to existing projects, as we 
already do.


I wouldn't see this as "OpenStack just likes to create new communities
because why not?". IMHO, the fact that the OpenStack community is willing
to take on libraries and help maintain them is a good example of
good open source behavior. We use the library, the original author
doesn't have time, and we have resources to help.

We are now the ones using this library, and this is the way we have to
contribute to it. We're giving the library a home. If someone is
willing to take the library on outside of OpenStack, then fine. I'm
all for that. I just don't have the time myself.

That said, I also think consuming tabulate would be a good solution
but again, I don't have time to help with the migration. Therefore,
until that happens, I'm concerned that we don't have a way to fix bugs
and contribute to the current library we're using, which is
prettytable.

If the switch happens and we all move to tabulate, then we can
re-evaluate whether keeping prettytable under the OpenStack tent makes
sense.

Flavio

(I also know why projects are moved into the OpenStack "big tent"; there 
are some good arguments.)


Well, my feeling is that the OpenStack and Python communities are 
split; maybe I'm wrong ;-) I just want to avoid what happened in 
Zope: a lot of great code and great libraries, but too many 
dependencies and at the end a different community.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options: IRC meeting at Jan. 11th

2016-01-11 Thread Markus Zoeller
Markus Zoeller/Germany/IBM@IBMDE wrote on 01/08/2016 03:21:28 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/08/2016 03:22 PM
> Subject: [openstack-dev] [nova] config options: IRC meeting at Jan. 11th
> 
> First things first, I'd like to thank those people who are contributing
> to this task very much! It's good to see the progress we already made
> and that the knowledge of this area is now spread amongst more people.
> 
> The second thing I'd like to talk about is the next steps.
> It would be great if we could have a short realtime conversation in IRC
> to:
> * make clear that the help text changes are necessary for a patch series
> * clarify open questions from your side
> * discuss the impact of [1], which should be the last piece in place 
>   to reduce the merge conflicts we faced to a minimum.
> 
> Let's use the channel #openstack-meeting-3 at coming Monday, 
> January the 11th, at 15:00 UTC for that.
> 
> References:
> [1] "single point of entry for sample config generation"
> https://review.openstack.org/#/c/260015
> 
> Regards, Markus Zoeller (markus_z)

In case you missed it (it was on short notice), here are the logs:
http://eavesdrop.openstack.org/meetings/nova_config_options/2016/nova_config_options.2016-01-11-15.01.log.html

Let me know if you have questions.

Regards, Markus Zoeller (markus_z)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Seeking FFE for "Support single disk image OVA/OVF package upload"

2016-01-11 Thread Flavio Percoco

On 11/01/16 15:01 +, stuart.mcla...@hp.com wrote:




I downloaded the patch which implements the spec 
(https://review.openstack.org/#/c/214810):


I can make this REST API call to perform OVA import:

http://paste.openstack.org/show/483332/

(it exercises the new OVA extract code path).

There's a parallel effort in the project to provide 'official' (Defcore)
APIs for image upload/conversion. What will be the advantage of having
two different REST API calls (a 'tasks' based one and a DefCore one)
for importing OVAs?


As you mentioned above, the team is working on refactoring the image
import process for Glance. The solution has different requirements and
dependencies. One of those dependencies is making the existing task
API admin only to then be able to deprecate it in the future.

This has been discussed several times at the summit and in meetings and,
to make sure it's clear to everyone (apparently it isn't), it's also
been made part of the spec of the refactor you mentioned[0] (see work
items).

[0] 
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html#work-items



I understand we're aiming to retire the tasks API and that this call is
intended to be admin only.

I guess what I'd like to get a handle on is this:

In the future, if a user asks "Why did my OVA API call fail?", we'll
need to ask "Well, which OVA API did you use?"

At that level there is a duplication -- something we'd generally try to
avoid. If we're going to have that duplication I'd just like to understand
why. For example, is it because we want the functionality exposed
sooner? Or is it that the ability to trigger the new functionality via
a /tasks call is a side-effect of the task implementation (one we don't
really want in this case) and it would be too much effort to change it?


Definitely not the former. The latter seems to be more accurate,
although I would add to it the fact that we're deactivating this API.


It seems to be possible to successfully make the above API call whether
you're admin or not and whether the server has the patch applied or
not. Is that expected?


You'll be able to run that request until this[0] is done.


The current implementation hard codes that the OVA functionality is admin
only.  But the call still "succeeds" as a non-admin (creates an active
image) because it re-uses the 'import' task type (i.e. we have an
overloaded API). That means that if you're an admin the API is ambiguous
(knowing the request and response is not enough to know what actually
happened).
That may only be the current implementation though, and may not be what
the spec intended.

(Let's follow this bit up on the spec.)


The call that succeeds is the *image* import, which is not really the
same as the *OVA* import task. This is exactly why I've been trying so
hard to separate the current task HTTP API from the task engine. They
are unrelated, and discussing this now is not really useful because
that task HTTP API is going to go away and people shouldn't be using it
to begin with.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Nova + Glance_v2 = Love

2016-01-11 Thread Mikhail Fedosin
Sam, Flavio, Zhenyu, Daniel thank you for your responses!

Indeed, we're in a difficult position with activating zero-size images...
To fix it we have to solve two problems: one with the disk and container
format parameters - glance v2 doesn't allow activating images if these
two are unset; and the second with image data - glance v2 requires some
data to be passed before image activation.

About formats: I don't see any better solution than to hardcode them
for Glance and then ignore them in Nova. If you can figure out a better
one, please share :) But changing the image schema - no way!

About data: afaiu, the block_device_mapping property contains the
volume_id. That means we could take it out and activate an image by
providing a new location, i.e. cinder://volume_id. The problem is that
setting custom locations for images is disabled by default, and enabling
it may cause serious mishaps.
My solution was to provide empty data (not 1 byte, as Sam said) with
image-upload, which leads to image activation, and it works. Unfortunately,
it's just another workaround, because we create an absolutely unnecessary
empty file in the store, and after that users can download it, which is
wrong. But we can improve this on the glance side: if we provide empty
data in image-upload, glance won't create a file in the store or set a
location on the image (in other words - it avoids the Location layer in
the Glance domain model). In that case the two operations would be
identical (v1:image-create(size=0) and
v2:image-create()+image-upload(data='')).
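
To make that concrete, here's a rough sketch of the v2 workaround with
python-glanceclient (illustrative only; the endpoint and token are made
up, so treat the exact calls as approximate):

    # Sketch of the "empty upload" activation workaround described above.
    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='<token>')

    # v1 could activate a metadata-only image with image-create(size=0).
    # v2 insists on the formats being set, so hardcode them...
    image = glance.images.create(name='meta-only',
                                 disk_format='qcow2',
                                 container_format='bare')

    # ...then upload empty data to push the image to the 'active' state.
    glance.images.upload(image.id, '')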

Frankly speaking, I'd like to see more feedback on this matter, because
we will have to make difficult decisions, or Nova will never adopt the
Glance v2 API.

On Fri, Jan 8, 2016 at 10:14 PM, Sam Matzek  wrote:

> On Fri, Jan 8, 2016 at 11:54 AM, Sam Matzek  wrote:
> > On Fri, Jan 8, 2016 at 8:31 AM, Flavio Percoco 
> wrote:
> >> On 29/12/15 07:41 -0600, Sam Matzek wrote:
> >>>
> >>> On Thu, Dec 24, 2015 at 7:49 AM, Mikhail Fedosin <
> mfedo...@mirantis.com>
> >>> wrote:
> 
>  Hello, it's another topic about glance v2 adoption in Nova, but it's
>  different from the others. I want to declare that there is a set of
>  commits that makes Nova version-agnostic and allows it to work with
>  both glance APIs. The idea of the solution is to determine the current
>  API version at the beginning and make appropriate requests after that.
>  (https://review.openstack.org/#/c/228578/,
>  https://review.openstack.org/#/c/238309/,
>  https://review.openstack.org/#/c/259097/)
> 
>  Indeed, it requires some additional (and painful) work, but now all
>  tempest tests pass in Jenkins.
> 
>  Note: this thread is not about xenplugin; there is another topic,
>  called 'Xenplugin + Glance_v2 = Hate'.
> 
>  Here are the main issues we faced and how we've solved them:
> 
>  1. The "changes-since" filter for image-list is not supported in the
>  v2 API. Steve Lewis did a great job and implemented a set of filters
>  with comparison operators (https://review.openstack.org/#/c/197388/).
>  Filtering by 'changes-since' is equivalent to 'gte:updated_at'.
> 
>  2. Filtering by custom properties must use the 'property-' prefix.
>  In v2 it's not required.
> 
>  3. V2 states that all custom properties must be image attributes, but
>  Nova assumes that they are included in the 'properties' dictionary.
>  It's fixed with glanceclient's method 'is_base_property(prop_name)',
>  which returns False for custom properties.
> 
>  4. is_public=True/False is visibility="public"/"private" respectively.
> 
>  5. Deleting custom image properties in Nova is performed with the
>  'purge_props' flag. If it's set to True, then all prop names that are
>  not included in the updated data will be removed. In the case of v2 we
>  have to explicitly specify the prop names in the list param
>  'remove_props'. To implement this behaviour, if 'purge_props' is set,
>  we make an additional 'get' request to determine the existing
>  properties and put in the 'remove_props' list only those that are not
>  in the updated data.
> 
>  6. My favourite:
>  There is an ability to activate an empty image by just providing
>  'size = 0' (https://review.openstack.org/#/c/9715/); in this case the
>  image will be a collection of metadata. Glance v2 doesn't support this
>  "feature" and that's why we have to implement a very dirty workaround:
>  * v2 requires that disk_format and container_format be set before
>  the activation. If these params are not provided to the 'create' method
>  then we hardcode them to 'qcow2' and 'bare'.
>  * we call the 'upload' method with empty data (data = '') to activate
>  the image.
>  I (and the rest of the glance team) think that this image activation 

[openstack-dev] [release] change to deliverable file schema: send-announcements-to

2016-01-11 Thread Doug Hellmann
Liaisons,

With the increase in release announcements on the openstack-announce
list, we've had requests to move some of that traffic that isn't
of immediate interest to most users to openstack-dev.  Last week
we merged changes to the schema of the deliverables files in the
releases repository to address this, adding a new required field.

The 'send-announcements-to' value should be an email address where
announcements for the release go. As part of adding the field, I
updated the mitaka and independent deliverable files, but if you
need to release stable versions of projects you may have to go back
and add values to those files.

All server projects should use openstack-annou...@lists.openstack.org.
Most other projects should use openstack-dev@lists.openstack.org,
unless they are deployer-facing tools.
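
For example, a deliverable file would gain a line like this (an
illustrative snippet; see the schema docs below for the authoritative
format):

    send-announcements-to: openstack-dev@lists.openstack.org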

Refer to the schema docs [1] for more details, and follow up here
if you have any questions.

Thanks,
Doug

[1] 
http://docs.openstack.org/releases/instructions.html#deliverables-file-schema

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Eric LEMOINE
Hi

As discussed on IRC the other day [1] we want to propose a distributed
logs processing architecture based on Heka [2], built on Alicja
Kwasniewska's ELK work with
.  Please take a look at the
design document I've started working on [3].  The document is still
work-in-progress, but the "Problem statement" and "Proposed change"
sections should provide you with a good overview of the architecture
we have in mind.

In the proposed architecture each cluster node runs an instance of
Heka for collecting and processing logs.  And instead of sending the
processed logs to a centralized Logstash instance, logs are directly
sent to Elasticsearch, which itself can be distributed across multiple
nodes for high-availability and scaling.  The proposed architecture is
based on Heka, and it doesn't use Logstash.

That being said, it is important to note that the intent of this
proposal is not strictly directed at replacing Logstash by Heka.  The
intent is to propose a distributed architecture with Heka running on
each cluster node rather than having Logstash run as a centralized
logs processing component.  For such a distributed architecture we
think that Heka is more appropriate, with a smaller memory footprint
and better performance in general.  In addition, Heka is also more
than a logs processing tool, as it's designed to process streams of
any type of data, including events, logs and metrics.

Some elements of comparison between Heka and Logstash:

* Logstash was designed for logs processing.  Heka is a "unified data
processing" software, designed to process streams of any type of data.
So Heka is about running one service on each box instead of many.
Using a single service for processing different types of data also
makes it possible to do correlations, and derive metrics from logs and
events.  See Rob Miller's presentation [4] for more details.

* The virtual size of the Logstash Docker image is 447 MB, while the
virtual size of a Heka image built from the same base image
(debian:jessie) is 177 MB.  For comparison the virtual size of the
Elasticsearch image is 345 MB.

* Heka is written in Go and has no dependencies.  Go programs are
compiled to native code.  This in contrast to Logstash which uses
JRuby and as such requires running a Java Virtual Machine.  Besides
this native versus interpreted code aspect, this also can raise the
question of which JVM to use (Oracle, OpenJDK?) and which version
(6,7,8?).

* There are six types of Heka plugins: Inputs, Splitters, Decoders,
Filters, Encoders, and Outputs.  Heka plugins are written in Go or
Lua.  When written in Lua their execution is sandboxed, and
misbehaving plugins may be shut down by Heka.  Lua plugins may also be
dynamically added to Heka with no config changes or Heka restart.  This
is an important property in container environments such as Mesos,
where workloads change dynamically.

* To avoid losing logs under high load it is often recommended to use
Logstash together with Redis [5].  Redis plays the role of a buffer,
where logs are queued when Logstash or Elasticsearch cannot keep up
with the load.  Heka, as a "unified data processing" software,
includes its own resilient message queue, making it unnecessary to use
an external queue (Redis for example).

* Heka is faster than Logstash for processing logs, and its memory
footprint is smaller.  I ran tests, where 3,400,000 log messages were
read from 500 input files and then written to a single output file.
Heka processed the 3,400,000 log messages in 12 seconds, consuming
500M of RAM.  Logstash processed the 3,400,000 log messages in 1m
35s, consuming 1.1G of RAM.  Adding a grok filter to parse and
structure logs, Logstash processed the 3,400,000 log messages in 2m
15s, consuming 1.5G of RAM.  Using an equivalent filtering plugin, Heka
processed the 3,400,000 log messages in 27s, consuming 730M of RAM.
See my GitHub repo [6] for more information about the test
environment.

Also, I want to say that our team has been using Heka in production
for about a year, in clusters of up to 200 nodes.  Heka has proven to
be very robust, efficient and flexible enough to address our logs
processing and monitoring use-cases.  We've also acquired solid
experience with it.

Any comments are welcome!

Thanks.


[1] 

[2] 
[3] 

[4] 
[5] 
[6] 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-11 Thread Serge Kovaleff
Hi All,

Last week I had the noble goal of writing one more functional test in Ironic.
I did find a folder "func" but it was empty.

Friends helped me to find a WIP patch
https://review.openstack.org/#/c/235612/

and here comes the question of this email: which approach would we like
to implement?
Option 1 - write infrastructure code that starts/configures/stops the
services
Option 2 - rely on an installed DevStack and run the tests over it

Both options have their pros and cons. Both options are implemented across
the OpenStack umbrella.
Option 1 - Glance, Nova, the patch above
Option 2 - Heat, and my favorite at the moment.
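
To make Option 1 concrete, here is a rough, hypothetical sketch (the
config file, wait logic and test body are made up, not Ironic's actual
code):

    # Hypothetical Option 1: the test itself starts and stops the service.
    import subprocess
    import time
    import unittest

    import requests


    class IronicAPIFunctionalTest(unittest.TestCase):
        def setUp(self):
            super(IronicAPIFunctionalTest, self).setUp()
            self.proc = subprocess.Popen(
                ['ironic-api', '--config-file', 'functional-test.conf'])
            self.addCleanup(self.proc.terminate)
            time.sleep(2)  # naive; real code would poll until the API is up

        def test_api_root_responds(self):
            resp = requests.get('http://127.0.0.1:6385/')
            self.assertEqual(200, resp.status_code)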

Any ideas?

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Sam Yaple
I like the idea of using Heka. You and I have discussed this on IRC before.
So my vote for this is +1. I can't think of any downside. I would like to
hear Alicja Kwasniewska's view on this as she has done the majority of work
with Logstash up until this point.

Sam Yaple

On Mon, Jan 11, 2016 at 3:16 PM, Eric LEMOINE  wrote:

> Hi
>
> As discussed on IRC the other day [1] we want to propose a distributed
> logs processing architecture based on Heka [2], built on Alicja
> Kwasniewska's ELK work with
> .  Please take a look at the
> design document I've started working on [3].  The document is still
> work-in-progress, but the "Problem statement" and "Proposed change"
> sections should provide you with a good overview of the architecture
> we have in mind.
>
> In the proposed architecture each cluster node runs an instance of
> Heka for collecting and processing logs.  And instead of sending the
> processed logs to a centralized Logstash instance, logs are directly
> sent to Elasticsearch, which itself can be distributed across multiple
> nodes for high-availability and scaling.  The proposed architecture is
> based on Heka, and it doesn't use Logstash.
>
> That being said, it is important to note that the intent of this
> proposal is not strictly directed at replacing Logstash by Heka.  The
> intent is to propose a distributed architecture with Heka running on
> each cluster node rather than having Logstash run as a centralized
> logs processing component.  For such a distributed architecture we
> think that Heka is more appropriate, with a smaller memory footprint
> and better performance in general.  In addition, Heka is also more
> than a logs processing tool, as it's designed to process streams of
> any type of data, including events, logs and metrics.
>
> Some elements of comparison between Heka and Logstash:
>
> * Logstash was designed for logs processing.  Heka is a "unified data
> processing" software, designed to process streams of any type of data.
> So Heka is about running one service on each box instead of many.
> Using a single service for processing different types of data also
> makes it possible to do correlations, and derive metrics from logs and
> events.  See Rob Miller's presentation [4] for more details.
>
> * The virtual size of the Logstash Docker image is 447 MB, while the
> virtual size of a Heka image built from the same base image
> (debian:jessie) is 177 MB.  For comparison the virtual size of the
> Elasticsearch image is 345 MB.
>
> * Heka is written in Go and has no dependencies.  Go programs are
> compiled to native code.  This in contrast to Logstash which uses
> JRuby and as such requires running a Java Virtual Machine.  Besides
> this native versus interpreted code aspect, this also can raise the
> question of which JVM to use (Oracle, OpenJDK?) and which version
> (6,7,8?).
>
> * There are six types of Heka plugins: Inputs, Splitters, Decoders,
> Filters, Encoders, and Outputs.  Heka plugins are written in Go or
> Lua.  When written in Lua their execution is sandboxed, and
> misbehaving plugins may be shut down by Heka.  Lua plugins may also be
> dynamically added to Heka with no config changes or Heka restart.  This
> is an important property in container environments such as Mesos,
> where workloads change dynamically.
>
> * To avoid losing logs under high load it is often recommended to use
> Logstash together with Redis [5].  Redis plays the role of a buffer,
> where logs are queued when Logstash or Elasticsearch cannot keep up
> with the load.  Heka, as a "unified data processing" software,
> includes its own resilient message queue, making it unnecessary to use
> an external queue (Redis for example).
>
> * Heka is faster than Logstash for processing logs, and its memory
> footprint is smaller.  I ran tests, where 3,400,000 log messages were
> read from 500 input files and then written to a single output file.
> Heka processed the 3,400,000 log messages in 12 seconds, consuming
> 500M of RAM.  Logstash processed the 3,400,000 log messages in 1m
> 35s, consuming 1.1G of RAM.  Adding a grok filter to parse and
> structure logs, Logstash processed the 3,400,000 log messages in 2m
> 15s, consuming 1.5G of RAM.  Using an equivalent filtering plugin, Heka
> processed the 3,400,000 log messages in 27s, consuming 730M of RAM.
> See my GitHub repo [6] for more information about the test
> environment.
>
> Also, I want to say that our team has been using Heka in production
> for about a year, in clusters of up to 200 nodes.  Heka has proven to
> be very robust, efficient and flexible enough to address our logs
> processing and monitoring use-cases.  We've also acquired solid
> experience with it.
>
> Any comments are welcome!
>
> Thanks.
>
>
> [1] <
> http://eavesdrop.openstack.org/meetings/kolla/2016/kolla.2016-01-06-16.32.html
> >
> [2] 
> [3] <
> 

Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2016-01-11 Thread Jim Rollenhagen
FYI, this work was completed. Docs are here:
http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html#adding-new-features

// jim

On Wed, Dec 09, 2015 at 01:58:34PM -0800, Jim Rollenhagen wrote:
> On Fri, Dec 04, 2015 at 05:38:43PM +0100, Dmitry Tantsur wrote:
> > Hi!
> > 
> > As you all probably know, we've switched to reno for managing release notes.
> > What it also means is that the release team has stopped managing milestones
> > for us. We have to manually open/close milestones in launchpad, if we feel
> > like it. I'm a bit tired of doing it for inspector, so I'd prefer we stop it.
> > If we need to track release-critical patches, we usually do it in etherpad
> > anyway. We also have importance fields for bugs, which can be applied to
> > both important bugs and important features.
> > 
> > During a quick discussion on IRC Sam mentioned that neutron also dropped
> > using blueprints for tracking features. They only use bugs with RFE tag and
> > specs. It makes a lot of sense to me to do the same, if we stop tracking
> > milestones.
> > 
> > For both ironic and ironic-inspector I'd like to get your opinion on the
> > following suggestions:
> > 1. Stop tracking milestones in launchpad
> > 2. Drop existing milestones to avoid confusion
> > 3. Stop using blueprints and move all active blueprints to bugs with RFE
> > tags; request a bug URL instead of a blueprint URL in specs.
> > 
> > So in the end we'll end up with bugs for tracking user requests, specs for
> > complex features, and reno for tracking what went into a particular release.
> > 
> > Important note: if you vote for keeping things for ironic-inspector, I may
> > ask you to volunteer in helping with them ;)
> 
> We decided we're going to try this in Monday's meeting, following
> roughly the same process as Neutron:
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
> 
> Note that as the goal here is to stop managing blueprints and milestones
> in launchpad, a couple of things will differ from the neutron process:
> 
> 1) A matching blueprint will not be created; the tracking will only be
> done in the bug.
> 
> 2) A milestone will not be immediately chosen for the feature
> enhancement, as we won't track milestones on launchpad.
> 
> Now, some requests for volunteers. We need:
> 
> 1) Someone to document this process in our developer docs.
> 
> 2) Someone to update the spec template to request a bug link, instead of
> a blueprint link.
> 
> 3) Someone to help move existing blueprints into RFEs.
> 
> 4) Someone to point specs for incomplete work at the new RFE bugs,
> instead of the existing blueprints.
> 
> I can help with some or all of these, but hope to not do all the work
> myself. :)
> 
> Thanks for proposing this, Dmitry!
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Michał Jastrzębski
On 11 January 2016 at 10:55, Eric LEMOINE  wrote:
> On Mon, Jan 11, 2016 at 5:01 PM, Michał Jastrzębski  wrote:
>> On 11 January 2016 at 09:16, Eric LEMOINE  wrote:
>
>>> * Logstash was designed for logs processing.  Heka is a "unified data
>>> processing" software, designed to process streams of any type of data.
>>> So Heka is about running one service on each box instead of many.
>>> Using a single service for processing different types of data also
>>> makes it possible to do correlations, and derive metrics from logs and
>>> events.  See Rob Miller's presentation [4] for more details.
>>
>> Right now we use rsyslog for that.
>
>
> Currently the services running in containers send their logs to
> rsyslog. And rsyslog stores the logs in local files, located in the
> host's /var/log directory.

Yeah; however, the plan was to teach rsyslog to forward logs to the
central logging stack once this thing is implemented.

>> As I understand it, Heka right now
>> would actually be an alternative to rsyslog, and that is already
>> implemented. Also in the Heka case, we might run into the same problem
>> we've seen with rsyslog - transporting logs from the service to Heka.
>> Keep in mind we're in Docker, and Heka will be in a different container
>> than the service it's supposed to listen to. We do that by sharing a
>> faked /dev/log across containers, and rsyslog can handle that.
>
>
> I know. Our plan is to rely on Docker. Basically: containers write
> their logs to stdout. The logs are collected by Docker Engine, which
> makes them available through the unix:///var/run/docker.sock socket.
> The socket is mounted into the Heka container, which uses the Docker
> Log Input plugin [*] to read the logs from that socket.
>
> [*] 

So docker logs isn't the best thing there is; however, I'd suspect
that's mostly the console output's fault. If you can tap into stdout
efficiently, I'd say that's a pretty good option.
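
(As an illustrative aside, tapping the socket from Python with docker-py
looks roughly like this; the actual Heka plugin is written in Go, so
this is a sketch of the idea, not its real code:)

    # Stream each container's stdout/stderr from the Docker socket.
    from docker import Client

    cli = Client(base_url='unix://var/run/docker.sock')
    for container in cli.containers():
        # stream=True yields log lines as the container emits them; a real
        # collector would read each container in its own thread.
        for line in cli.logs(container['Id'], stdout=True, stderr=True,
                             stream=True):
            print(container['Id'][:12], line.rstrip())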

>
>> Also, Heka seems to be just a processing mechanism, while logstash is
>> well... a stash, so it saves and persists logs; so it seems to me they're
>> different layers of log processing.
>
> No. Logstash typically stores the logs in Elasticsearch. And we'd do
> the same with Heka.
>
>
>>
>> Seems to me we need an additional comparison of Heka vs rsyslog ;) Also
>> this would have to be hands down better, because rsyslog is already
>> implemented, working, and most operators know how to use it.
>
>
> We don't need to remove Rsyslog. Services running in containers can
> write their logs to both Rsyslog and stdout, which is even what they
> do today (at least for the OpenStack services).
>

There is no point in that imho. I don't want to have several systems
doing the same thing. Let's decide on one, optimal toolset.
Could you please describe, bottom up, what your logging stack would
look like? Heka listening on stdout, transferring stuff to
Elasticsearch, with Kibana on top of it?

> Hope that makes sense!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Julien Danjou
On Mon, Jan 11 2016, Doug Hellmann wrote:

> I suspect we could skip porting a lot of those, since they look like
> clients and we're working to move all command line programs into the
> unified client.

Not really in the telemetry roadmap honestly. But we could move away
from prettytable without much burden anyway.

> There are a few server projects, and I assume they are using the
> lib in their management commands.  I'm sure there are shell scripts
> out there parsing the rendered tables.  Does tabulate produce tables
> using the exact same format as PrettyTable?

We barely use it in Gnocchi, it could be easy to switch to something
fancier. Back then, we just picked PrettyTable because every other
project was using it and it was good enough for a quick CLI.

> We could justify moving cliff, since it makes it easy to produce
> more parsable output using selectable formatters, but some of those
> other consuming projects may be a bit harder to change if we want
> to maintain our backwards compatibility requirements.

I think that's a good idea, gnocchiclient and aodhclient already rely on
cliff for example, and ceilometerclient should switch at some point.
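
As a rough illustration of why cliff helps here (a sketch, not any
project's actual command; cliff picks the output formatter -- table,
CSV, JSON, etc. -- from the user's -f option):

    # A cliff Lister returns (columns, data); the formatter renders them.
    from cliff.lister import Lister


    class ListServices(Lister):
        """List known services."""

        def take_action(self, parsed_args):
            columns = ('Name', 'Status')
            data = (('glance-api', 'running'), ('nova-api', 'running'))
            return columns, data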

> There are a few libraries on the list, too (automaton, ironic-lib), and
> that's confusing. It would be interesting to know how they're using
> table output.

automaton uses it for its verbose mode, it's just 2 lines to change… :)

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][osdk] PrettyTable needs a home in OpenStack

2016-01-11 Thread Joshua Harlow
And here is a simple converter that uses tabulate but offers something 
like the PrettyTable object format that people are used to (it could be 
a way to get 90% of the common usage over to tabulate, minus the special 
prettytable users who are doing advanced things).


https://gist.github.com/harlowja/d5f4824b4ce0ea95530c

The grid output format of tabulate seems to be pretty close to 
prettytable's (but my guess is there is some variation, just because of 
the different authors/codebases...)
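
For a quick side-by-side (an illustrative snippet, not the gist itself):

    # Render the same rows with both libraries to eyeball the difference.
    from prettytable import PrettyTable
    from tabulate import tabulate

    rows = [('nova', 42), ('glance', 7)]

    pt = PrettyTable(field_names=['Project', 'Reviews'])
    for row in rows:
        pt.add_row(row)
    print(pt)  # prettytable's default bordered format

    # tabulate's 'grid' format is close to, but not identical with, this
    print(tabulate(rows, headers=['Project', 'Reviews'], tablefmt='grid'))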


-Josh

Julien Danjou wrote:

On Mon, Jan 11 2016, Doug Hellmann wrote:


I suspect we could skip porting a lot of those, since they look like
clients and we're working to move all command line programs into the
unified client.


Not really in the telemetry roadmap honestly. But we could move away
from prettytable without much burden anyway.


There are a few server projects, and I assume they are using the
lib in their management commands.  I'm sure there are shell scripts
out there parsing the rendered tables.  Does tabulate produce tables
using the exact same format as PrettyTable?


We barely use it in Gnocchi, it could be easy to switch to something
fancier. Back then, we just picked PrettyTable because every other
project was using it and it was good enough for a quick CLI.


We could justify moving cliff, since it makes it easy to produce
more parsable output using selectable formatters, but some of those
other consuming projects may be a bit harder to change if we want
to maintain our backwards compatibility requirements.


I think that's a good idea, gnocchiclient and aodhclient already rely on
cliff for example, and ceilometerclient should switch at some point.


There are a few libraries on the list, too (automaton, ironic-lib), and
that's confusing. It would be interesting to know how they're using
table output.


automaton uses it for its verbose mode, it's just 2 lines to change… :)

Cheers,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

