[openstack-dev] [AODH] event-alarm timeout discussion

2016-09-20 Thread Zhai, Edwin

All,

I'd like to clarify the event-alarm timeout design, as many of you have some 
misunderstanding here. Please correct me if I make any mistakes.


I realized that there are two different things here, but we sometimes mix them up:
1. event-timeout-alarm
This is a new type of alarm that brackets *.start and *.end events and fires 
when a *.start is received but no *.end arrives within the timeout. This alarm 
handles one class of events/actions; e.g., create one alarm for instance 
creation, and all instances created in the future will be handled by it. This is 
not for real time, so it's acceptable for the user to learn of an 
instance-creation failure within, say, 5 minutes.


This new type of alarm can be implemented by a worker that checks the DB 
periodically and does the statistics. That is, the new evaluator works in 
'polling' mode, much like the threshold alarm evaluator.


One BP is at:
https://review.openstack.org/#/c/199005/

2. event-alarm timeout
This is a new feature for the _existing_ event-alarm evaluator. An alarm becomes 
'UNALARM' when the desired event is not received within the timeout. This 
feature handles one specific event; e.g., create an alarm for instance ABC's XYZ 
operation with a 5 s timeout, and the user is notified after 5 s if no XYZ.done 
event arrives. To check another instance, we need to create another alarm.


This is used in telco scenarios, where the operator wants to know about an 
operation failure in real time.


My patch (https://review.openstack.org/#/c/272028/) is for this purpose only, 
but I feel many people (sometimes even me) confuse the two, as they look 
similar. So my question is: do you think this telco usage model of event-alarm 
timeout is valid? If not, we can avoid discussing its implementation and ignore 
the following.



=== event-alarm timeout implementation ===
As it's an event-alarm feature, we need to keep it event-driven. Furthermore, 
for a quick response we need to use an event for timeout handling; a periodic 
worker can't meet the real-time requirement.


A separate queue for 'alarm.timeout.end' (which signals timeout expiry) leads to 
a tricky race condition. E.g., 'XYZ.done' arrives on queue1 and 
'alarm.timeout.end' arrives on queue2, so they are handled in parallel:


1. In queue1, 'XYZ.done' is checked against the alarm (currently UNKNOWN), and 
it will be set to ALARM in the next step.
2. In queue2, 'alarm.timeout.end' is checked against the same alarm (currently 
UNKNOWN), and it will be set to OK (UNALARM) in the next step.

3. In queue1, the alarm transition happens: UNKNOWN => ALARM.
4. In queue2, another alarm transition happens: ALARM => OK (UNALARM).

So this alarm goes through a bogus transition, UNKNOWN => ALARM => UNALARM, and 
tells the user: the required event came, then the required event did not come.


If we put all events in one queue, the evaluator handles them one by one (even 
though the low-level oslo messaging layer may be multi-threaded), so the second 
event sees that the alarm state is no longer UNKNOWN and gives up its 
transition. As Gordc said, this is slow, but only a very small fraction of 
event-alarms need timeout handling, as it's only for the telco usage model.


One possible improvement, as JD pointed out, is to avoid spawning so many 
threads: create just one thread inside the evaluator and have that thread handle 
all timeout requests. Is that acceptable as the event-alarm timeout solution?



Best Rgds,
Edwin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Tony Breeds
On Tue, Sep 20, 2016 at 11:57:26AM +0100, Daniel P. Berrange wrote:
> On Tue, Sep 20, 2016 at 12:48:49PM +0200, Kashyap Chamarthy wrote:
> > The said patch in question fixes a CVE[x] in stable/liberty.
> > 
> > We currently have two options, both of them have caused an impasse with
> > the Nova upstream / stable maintainers.  We've had two-ish months to
> > mull over this.  I'd prefer to get this out of a limbo, & bring this to
> > a logical conclusion.
> > 
> > The two options at hand:
> > 
> > (1) Nova backport from master (that also adds a check for the presence
> > of 'ProcessLimits' attribute which is only present in
> > oslo.concurrency>=2.6.1; and a conditional check for 'prlimit'
> > parameter in qemu_img_info() method.)
> > 
> > https://review.openstack.org/#/c/327624/ -- "virt: set address space
> > & CPU time limits when running qemu-img"
> > 
> > (2) Or bump global-requirements for 'oslo.concurrency'
> > 
> > https://review.openstack.org/#/c/337277/5 -- Bump
> > 'global-requirements' for 'oslo.concurrency' to 2.6.1
> 
> Actually we have 3 options
> 
>   (3) Do nothing, leave the bug unfixed in stable/liberty
> 
> While this is a security bug, it is one that has existed in every single
> openstack release ever, and it is not a particularly severe bug. Even if
> we fixed in liberty, it would still remain unfixed in every release before
> liberty. We're on the verge of releasing Newton, at which point liberty
> becomes less relevant. So I question whether it is worth spending more
> effort on dealing with this in liberty upstream.  Downstream vendors
> still have the option to do either (1) or (2) in their own private
> branches if they so desire, regardless of whether we fix it upstream.

I think 3 is the least worst option.  If we're going to do something else then
it'd need to be (1).  I feel like we need to rule out (2).

I'll hack something up in the requirements repo to show that the try/except
does what is needed when oslo.concurrency is < 2.6.1.
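A sketch of the guard in question (an assumed shape based on the review description, not the actual Nova code): degrade gracefully when the installed oslo.concurrency predates 2.6.1 and does not expose ProcessLimits.

```python
def build_prlimit(processutils, address_space, cpu_time):
    """Return a limits object for running qemu-img if the installed
    oslo.concurrency exposes ProcessLimits (added in 2.6.1), else
    None so the caller can simply omit the prlimit parameter."""
    if hasattr(processutils, "ProcessLimits"):
        return processutils.ProcessLimits(address_space=address_space,
                                          cpu_time=cpu_time)
    return None
```

The caller (e.g. qemu_img_info()) would then only pass prlimit when this returns a non-None value, so the backport works against both old and new library versions.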

Yours Tony.




Re: [openstack-dev] [Neutron] Security groups - API signature breakage

2016-09-20 Thread Kevin Benton
I didn't realize how many plugins subclassed the mixin and called super
with the default_sg kwarg. We need a different fix.

On Tue, Sep 20, 2016 at 10:00 PM, Gary Kotton  wrote:

> Hi,
>
> The patch https://review.openstack.org/373108 has broken the API
> signature. I have posted a revert for this.
>
> Can we do something like this? I do not think so. This breaks all plugins.
>
> Thanks
>
> Gary
>


[openstack-dev] [osc][keystone] User Project List

2016-09-20 Thread Adrian Turjak
I'm running into what doesn't really seem like a sensible design choice
around how 'openstack project list' handles the user filter.

Because of this here:
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/identity/v3/project.py#L199
and because of the find_resource call, even if you are already passing in a
user_id, the client MUST go to Keystone to fetch your user before it can
filter your project list.

The default keystone policy up until Newton doesn't let a user get their own
user object, so that client call falls over even though this endpoint works
just fine:
http://developer.openstack.org/api-ref/identity/v3/index.html?expanded=list-projects-for-user-detail#list-projects-for-user

Plus, doing "keystoneclient.projects.list(user='<user_id>')" works just fine.

By forcing that find_resource call, you make it impossible for anyone to use
the client to list their own projects if their Keystone policy is still the
old one.


To fix this, how do people feel about adding a "user project list" command, or
even a "--auth-user" option to the "project list" command? In both cases we can
get the id from the token and bypass the need for find_resource. That way there
are fewer API calls, and it's also compatible with older policies.
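The token-based bypass could look roughly like this (a hypothetical helper; the real change would live in the project list command in python-openstackclient):

```python
def list_own_projects(identity_client, token_user_id):
    """List the authenticated user's projects without a prior
    GET /users/{id} lookup, so it also works under the old
    pre-Newton default keystone policy.

    token_user_id is taken straight from the auth token, so no
    find_resource() round trip against Keystone is needed.
    """
    # keystoneclient accepts a bare user id for this filter.
    return identity_client.projects.list(user=token_user_id)
```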

Any preferences as to which solution? I don't mind writing the blueprint/patch
myself; I'd just like to know the preferred option.





[openstack-dev] [Neutron] Security groups - API signature breakage

2016-09-20 Thread Gary Kotton
Hi,
The patch https://review.openstack.org/373108 has broken the API signature. I 
have posted a revert for this.
Can we do something like this? I do not think so. This breaks all plugins.
Thanks
Gary


Re: [openstack-dev] [tricircle] Should we discuss the use cases of force volume detach in the Tricircle

2016-09-20 Thread joehuang
Hello,

I think Duncan's comments in the other thread make sense to me. What do you 
think?

Best Regards
Chaoyi Huang (joehuang)

From: D'Angelo, Scott [scott.dang...@hpe.com]
Sent: 19 September 2016 20:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Should we discuss the use cases of 
force volume detach in the Tricircle


Please keep in mind that Nova keeps some volume state in the BlockDeviceMapping 
table. Without changes to Nova, a force_detach function is not complete.

I am interested in this use case, as are other Cinder developers. Please feel 
free to contact me in IRC with questions as "scottda".


Scott D'Angelo


From: joehuang 
Sent: Sunday, September 18, 2016 3:29:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Should we discuss the use cases of 
force volume detach in the Tricircle

This is a good question. I also sent a mail in cinder thread that wants to know 
why the tempest test cases missing for the "force volume detach".
The spec for the "force volume detach" could be found here: 
https://github.com/openstack/cinder-specs/blob/master/specs/liberty/implement-force-detach-for-safe-cleanup.rst

From: cr_...@126.com [cr_...@126.com]
Sent: 18 September 2016 16:53
To: openstack-dev
Subject: [openstack-dev] [tricircle] Should we discuss the use cases of force 
volume detach in the Tricircle

Hello,
When the patch "force volume detach" was submitted, some concerns came back.
The important point is whether this function is needed or safe.
Should we discuss some use cases of this function, such as the definition of the 
function and when it would be triggered?



Best regards,
Ronghui Cao, Ph.D. Candidate
College of Information Science and Engineering
Hunan University, Changsha 410082, Hunan, China


Re: [openstack-dev] [cinder]tempest test case for force detach volume

2016-09-20 Thread joehuang
Understood. I think the current situation may be reasonable: if a Cinder API 
call can lead to an unexpected status, and force-detach is needed to restore the 
volume status back to normal, that means there is an issue we need to fix in 
Cinder.

Best Regards
Chaoyi Huang (joehuang)

From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 19 September 2016 21:28
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [cinder]tempest test case for force detach volume


Writing a sensible test for this API is rather tricky, since it is intended to 
clean up one very specific error condition and makes few guarantees about the 
state it leaves the system in. It is provided as a tool to allow the system 
administrator to clean up certain faults and situations without needing to 
manually edit the database; however, the conditions under which it is safe to 
use, and the cleanup actions required after calling it, vary between backends.

The only test I can think of that is probably safe across all backends is to 
call reserve, create_export, reset and then delete (all directly against the 
Cinder endpoint, with no Nova involvement).
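The sequence above can be sketched as a driver function over a duck-typed client. The method names (reserve, create_export, reset_status, delete) mirror the actions listed and are assumptions for illustration, not an exact tempest or cinderclient API:

```python
def exercise_reset(client, volume_id, connector):
    """Drive the only backend-agnostic sequence suggested above:
    reserve -> create_export -> reset -> delete, all directly
    against the Cinder endpoint, with no Nova involvement."""
    client.reserve(volume_id)                    # mark volume as attaching
    client.create_export(volume_id, connector)   # export it on the backend
    client.reset_status(volume_id, "available")  # admin reset of the state
    client.delete(volume_id)                     # clean up the test volume
```

A real tempest test would additionally wait for each status transition and assert on the volume's state between steps.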

There is a substantial danger in thinking of this call as any sort of generic 
fixup - it will happily leave volumes attached behind the scenes, open to data 
corruption.

On 18 Sep 2016 05:48, "joehuang" wrote:
Hello, Ken,

Thank you for the information. For the APIs without tempest test cases, is that
because the test environment is hard to build, or because the APIs are not
mature enough? I want to know why the tempest test cases were not added at the
same time the features were implemented.

Best Regards
Chaoyi Huang(joehuang)

From: Ken'ichi Ohmichi [ken1ohmi...@gmail.com]
Sent: 15 September 2016 2:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder]tempest test case for force detach volume

Hi Chaoyi,

That is a nice point.
Tempest currently has tests for some volume v2 action APIs, but os-force_detach
is not among them. Only two action APIs are covered: os-set_image_metadata and
os-unset_image_metadata; see
https://github.com/openstack/tempest/blob/master/tempest/services/volume/v2/json/volumes_client.py#L27
That is fewer than I expected from comparing against the API reference.

The corresponding API test patches are welcome if anyone is interested :-)

Thanks
Ken Ohmichi

---


2016-09-13 17:58 GMT-07:00 joehuang:
> Hello,
>
> Is there any tempest test case for the "os-force_detach" action to force
> detach a volume? I didn't find such a test case in either the repository
> https://github.com/openstack/cinder/tree/master/cinder/tests/tempest
> and https://github.com/openstack/tempest
>
> The API link is:
> http://developer.openstack.org/api-ref-blockstorage-v2.html#forcedetachVolume
>
> Best Regards
> Chaoyi Huang(joehuang)
>
>



Re: [openstack-dev] [tricircle]freeze date and tricircle splitting

2016-09-20 Thread joehuang
Very good. Let's look over each patch in review at the weekly meeting to decide 
whether it should be merged before the freeze.

Best Regards
Chaoyi Huang (joehuang)


From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 20 September 2016 15:45
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]freeze date and tricircle splitting

Good point.

On Tue, Sep 20, 2016 at 3:00 PM, Vega Cai  wrote:
> Since the patches for layer 3 networking automation have been merged, I think
> we'd better include the patches for resource cleaning in the tagged branch.
> Here is the list:
>
> Floating ip deletion: https://review.openstack.org/354604
> Subnet cleaning: https://review.openstack.org/355847
> Router cleaning: https://review.openstack.org/360848
>
> Shinobu Kinjo 于2016年9月20日周二 下午12:04写道:
>>
>> This announcement should have come first, to avoid any unnecessary work -;
>>
>> On Tue, Sep 20, 2016 at 12:51 PM, joehuang  wrote:
>> > Hello, as the Trio2o repository has been established, it's time for us
>> > to
>> > discuss the freeze date and tagging(newton branch) for the last release
>> > of
>> > Tricircle with gateway function.
>> >
>> > The freeze date is the gate for patches to be merged before
>> > tagging(newton
>> > branch). If a patch can't finish review process before the freeze date,
>> > and
>> > not able to be merged in Tricircle, then it's suggested to be handled
>> > like
>> > this:
>> >
>> > 1. If it's networking automation related patch, continue the review
>> > process
>> > in Tricircle after tagging(newton branch), will be merged in Tricircle
>> > trunk
>> > in the future .
>> >
>> > 2. If it's gateway related patch, abandon the patch, re-submit the patch
>> > in
>> > Trio2o.
>> >
>> > 3. If it's patch about pod management, for it's common feature, so
>> > continue
>> > the review process in tricircle  after tagging(newton branch) , and
>> > submit a
>> > new patch for this feature in Trio2o separately.
>> >
>> > Exception request after freeze date, before tagging(newton branch): If
>> > there
>> > is some patch must be merged before the tagging(newton branch), then
>> > need to
>> > send the exception request in the mail-list for the patch, and approved
>> > by
>> > PTL.
>> >
>> > That means we need to define a deadline for patches to be merged in
>> > Tricircle before tagging(newton branch), and define the scope of patches
>> > wish to be merged in Trcircle before splitting.
>> >
>> > Your thoughts, proposal for the freeze date and patches to be merged?
>> >
>> > (As the old thread containing both Trio2o and Tricircle in the subject,
>> > not
>> > good to follow, so a new thread is started)
>> >
>> > Best Regards
>> > Chaoyi Huang (joehuang)
>> >
>> > 
>> > From: joehuang
>> > Sent: 18 September 2016 16:34
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [tricircle][trio2o] trio2o git repo ready
>> > and
>> > tricircle splitting
>> >
>> > Thank you for your comment, Zhiyuan.
>> >
>> > For pod management, because these two projects need to run
>> > independently, I
>> > think submit patches separately as needed may be a better choice.
>> >
>> > Best Regards
>> > Chaoyi Huang(joehuang)
>> > 
>> > From: Vega Cai [luckyveg...@gmail.com]
>> > Sent: 18 September 2016 16:27
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: Re: [openstack-dev] [tricircle][trio2o] trio2o git repo ready
>> > and
>> > tricircle splitting
>> >
>> > +1 for the proposal. What about the codes for Pod operation? It seems
>> > that
>> > both Tricircle and Trio2o need these codes. We submit patches to these
>> > two
>> > projects separately?
>> >
>> > Zhiyuan
>> >
>> > joehuang 于2016年9月18日周日 下午4:17写道:
>> >>
>> >> hello, team,
>> >>
>> >> Trio2o git repository is ready now: https://github.com/openstack/trio2o
>> >>
>> >> The repository inherited all files and commit messages from Tricircle.
>> >>
>> >> It's now time to start the Tricircle splitting: a blueprint is
>> >> registered for Tricircle cleaning:
>> >>
>> >> https://blueprints.launchpad.net/tricircle/+spec/make-tricircle-dedicated-for-networking-automation-across-neutron
>> >> There is a lot of documentation work to do. Please review the docs in the
>> >> BP, thanks.
>> >>
>> >> There are some proposal for patches during the splitting:
>> >>
>> >> 1. For patch which is already in review status, let's review it in
>> >> Tricircle (no matter it's for Trio2o or Tricircle), after it's get
>> >> merged,
>> >> then port it to Trio2o. After all patches get merged, let's have a last
>> >> tag
>> >> for Tricircle to cover both gateway and networking automation function.
>> >> Then
>> >> the cleaning will be done in Tricircle to make 

[openstack-dev] [nova] Nova API sub-team meeting

2016-09-20 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] [tc]a chance to meet all TCs for Tricircle big-tent application in Barcelona summit?

2016-09-20 Thread joehuang
Thank you for the message. In your experience, will the weekly IRC meeting also 
be cancelled during the summit week?

Best Regards
Chaoyi Huang (joehuang)


From: Thierry Carrez [thie...@openstack.org]
Sent: 20 September 2016 18:03
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc]a chance to meet all TCs for Tricircle 
big-tent application in Barcelona summit?

joehuang wrote:
> Hello, all TCs,
>
> Understand that you will be quite busy in Barcelona summit, all TCs will
> be shown in the Barcelona summit, and meet together(usually, I assumed).
> So is it possible to have a chance to meet all TCs there for Tricircle
> big-tent application https://review.openstack.org/#/c/338796/ ? F2f talk
> to answer the concerns may help the understanding and decision.

Some TC members might not be present in Barcelona (we don't even know
who will be on the TC then), and it's generally difficult to corner us
all at the same time... But yes, you should try to grab us if you can.
We won't have a formal meeting in Barcelona (beyond the Board+TC+UC
meeting on the Monday).

--
Thierry Carrez (ttx)



Re: [openstack-dev] [infra]project description not updated in github repository

2016-09-20 Thread joehuang
Thank you for the explanation. Can the description be updated manually in 
https://github.com/openstack/tricircle/ ?

Best Regards
Chaoyi Huang (joehuang)


From: Andreas Jaeger [a...@suse.com]
Sent: 20 September 2016 17:48
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra]project description not updated in github 
repository

On 09/20/2016 09:39 AM, joehuang wrote:
> Hello,
>
> The project description of has not been updated in in github repository
> even if the description was updated in gerrit/projects.yaml
>
> In following patch, the Tricircle project description has been updated
> from "Tricircle is a project for OpenStack Multiple Site Deployment
> solution" to "Tricircle is to provide networking automation across
> Neutron.",
> https://review.openstack.org/#/c/367114/6/gerrit/projects.yaml
>
> But the description in the github repository below the code tab is still
> "Tricircle is a project for OpenStack cascading solution. ", it's old
> description.
>
> In the same patch, Trio2o project description is "Trio2o is to provide
> APIs gateway for multiple OpenStack clouds to act as a single OpenStack
> cloud."
>
> And this information was presented in the github repository below the
> code tab: https://github.com/openstack/trio2o
>
> Is there any other way to update the project description in github
> repository?

GitHub is a mirror of our git.openstack.org repositories, and there are some
API limitations; we currently cannot update the description on GitHub when it
changes.

http://git.openstack.org/cgit/openstack/tricircle is updated,

Andreas
--
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.serivces.list()

2016-09-20 Thread Konstanski, Carlos P
Am Dienstag, den 20.09.2016, 15:31 -0600 schrieb Konstanski, Carlos P:
> I am currently using python-cinderclient version 1.5.0, though the code in
> question is still in master.
> 
> When calling client.services.list() I get this result: "AttributeError:
> service"
> 
> The execution path of client.services.list() eventually leads to this method
> in
> cinderclient/v2/services.py:24:
> 
> def __repr__(self):
>     return "<Service: %s>" % self.service
> 
> which in turn triggers a call to Resouce.__getattr__() in
> cinderclient/openstack/common/apiclient/base.py:456.
> 
> This custom getter will never find an attribute called service because a
> Service
> instance looks something like the following:
> 
> {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
> u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
> u'up', u'disabled_reason': None}
> 
> So it returns the string "AttributeError: service".
> 
> One way or another a fix is warranted, and I am ready, willing and able to
> provide the fix. But first I want to find out more about the bigger picture.
> could  it be that this __repr__() method actually correct, but the code that
> populates my service instance is faulty? This could easily be the case if the
> dict that feeds the Service class were to look like the following (for
> example):
> 
> {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone':
> u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
> u'state': u'up', u'disabled_reason': None}}
> 
> Somehow I doubt it; why hide all the useful attributes in a dict under a
> single
> parent attribute? But I'm new to cinder and I don't know the rules. I'm not
> here
> to question your methods.
> 
> Or am I just using it wrong? This code has survived for a long time, and
> certainly someone would have noticed a problem by now. But it seems pretty
> straightforward. How many ways are there to prepare a call to
> client.services.list()? I get a Client instance, call authenticate() for fun,
> and then call client.services.list(). Not a lot going on here.
> 
> I'll get to work on a patch when I figure out what it is supposed to do, if it
> is not already doing it.
> 
> Sincerely,
> Carlos Konstanski

I guess the question I should be asking is this: Manager._list() (in
cinderclient/base.py) returns a list of printable representations of objects,
not a list of the objects themselves. Hopefully there's a more useful method
that returns a list of actual objects, or at least a JSON representation. If I
can't find such a method then I'll be back, or I'll put up a review to add one.
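To illustrate the failure mode described above, here is a self-contained reproduction (a sketch, not the actual cinderclient classes): the flat info dict carries no 'service' key, so the original __repr__ triggers the custom __getattr__ and the AttributeError text leaks into the output. A defensive fallback avoids that:

```python
class Resource:
    """Minimal stand-in for the apiclient Resource: attributes come
    straight from the info dict returned by the API."""
    def __init__(self, info):
        self._info = info
        self.__dict__.update(info)

class Service(Resource):
    def __repr__(self):
        # Buggy form: "<Service: %s>" % self.service -- there is no
        # 'service' attribute in the flat dict, so attribute lookup
        # fails. Falling back to an attribute that actually exists
        # in the payload keeps repr() usable:
        return "<Service: %s>" % getattr(self, "binary", "unknown")

svc = Service({"status": "enabled", "binary": "cinder-scheduler",
               "host": "dev01", "state": "up"})
print(repr(svc))  # <Service: cinder-scheduler>
```

Whether the real fix belongs in __repr__ or in the code that builds the Service dict is exactly the question raised above.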

Carlos


Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Doug Hellmann
Excerpts from Morgan Fainberg's message of 2016-09-20 14:38:27 -0700:
> On Tue, Sep 20, 2016 at 9:18 AM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Thierry Carrez's message of 2016-09-20 10:19:04 +0200:
> > > Steve Martinelli wrote:
> > > > I think bundling the puppet, ansible and oslo releases together would
> > > > cut down on a considerable amount of traffic. Bundling or grouping new
> > > > releases may not be the most accurate, but if it encourages the right
> > > > folks to read the content instead of brushing it off, I think thats
> > > > worth while.
> > >
> > > Yeah, I agree that the current "style" of announcing actively trains
> > > people to ignore announces. The trick is that it's non-trivial to
> > > regroup announces (as they are automatically sent as a post-job for each
> > > tag).
> > >
> > > Solutions include:
> > >
> > > * A daily job that catches releases of the day and batches them into a
> > > single announce (issue being you don't get notified as soon as the
> > > release is available, and the announce email ends up being extremely
> > long)
> > >
> > > * A specific -release ML where all announces are posted, with a daily
> > > job to generate an email (one to -announce for services, one to -dev for
> > > libraries) that links to them, without expanding (issue being you don't
> > > have the natural thread in -dev to react to a broken oslo release)
> > >
> > > * Somehow generate the email from the openstack/release request rather
> > > than from the tags
> >
> > One email, with less detail, generated when a file merges into
> > openstack/release is my preference because it's easier to implement.
> >
> > Alternately we could move all of the announcements we have now to
> > a new -release list and folks that only want one email a day can
> > subscribe using digest delivery. Of course they could do that with
> > the list we have now, too.
> >
> > Doug
> >
> 
> A release list makes a lot of sense. If you also include clear metadata in
> the subject, such as the owning project, e.g. keystone (for keystoneauth,
> keystonemiddleware, keystoneclient), people can filter directly for what
> they care about (as well as use digest mode).
> 
> --/morgan

We do that now. All of the announcements include the tag [new], the
tag for the project team that owns the deliverable, the name of the
deliverable, and the series for which it is being released.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Anita Kuno

On 16-09-20 05:38 PM, Morgan Fainberg wrote:

On Tue, Sep 20, 2016 at 9:18 AM, Doug Hellmann 
wrote:


Excerpts from Thierry Carrez's message of 2016-09-20 10:19:04 +0200:

Steve Martinelli wrote:

I think bundling the puppet, ansible and oslo releases together would
cut down on a considerable amount of traffic. Bundling or grouping new
releases may not be the most accurate, but if it encourages the right
folks to read the content instead of brushing it off, I think thats
worth while.

Yeah, I agree that the current "style" of announcing actively trains
people to ignore announces. The trick is that it's non-trivial to
regroup announces (as they are automatically sent as a post-job for each
tag).

Solutions include:

* A daily job that catches releases of the day and batches them into a
single announce (issue being you don't get notified as soon as the
release is available, and the announce email ends up being extremely

long)

* A specific -release ML where all announces are posted, with a daily
job to generate an email (one to -announce for services, one to -dev for
libraries) that links to them, without expanding (issue being you don't
have the natural thread in -dev to react to a broken oslo release)

* Somehow generate the email from the openstack/release request rather
than from the tags

One email, with less detail, generated when a file merges into
openstack/release is my preference because it's easier to implement.

Alternately we could move all of the announcements we have now to
a new -release list and folks that only want one email a day can
subscribe using digest delivery. Of course they could do that with
the list we have now, too.

Doug



A release list makes a lot of sense. If you also include clear metadata in
the subject such as including the owning project aka: keystone (for
keystone auth, keystonemiddleware, keystoneclient), people can do direct
filtering for what they care about ( as well digest mode).

--/morgan





Also, you can set the list's description to state that the list is meant 
for releases; that will curtail complaints from subscribers saying they 
don't like what they are getting.


Thanks,
Anita.


[openstack-dev] [architecture] Architecture WG Process

2016-09-20 Thread Dean Troyer
The Architecture Working Group discussed a process for managing group
topics at the last meeting[0] to guide us down the path to many fruitful
discussions.  In summary, we propose the following:

* We will use a (yet to be created) Git repository and the usual Gerrit
review workflow for document management.
  * The core team for this was seeded by volunteers from those attending the
first couple of meetings.  Further changes to the core team are intended to
follow the usual OpenStack pattern based on participation and activity in
the WG.
* The original spec proposed by Clint[1] was converted into an introductory
document, now found in [2], that contains an overview of the WG processes.
* A new "How To Contribute" section was added that contains the topic
proposal/selection workflow we agreed to start with:
  * Someone proposes a background document to the 'backlog' directory of
the WG repo describing the potential topic in enough detail for the WG to
evaluate if the topic is in scope for the WG.
  * The proposal review will be held for a minimum of one week to allow
time for discussion regarding scope.
  * The proposal should also be added to an upcoming WG meeting agenda for
discussion in a meeting.
  * Proposals that do not receive objections after the review period will
be accepted as in-scope and merged into the repo backlog.
  * The WG will move topics from the backlog to the 'active' directory as
the topics are selected for work.

At this time the meetings are still scheduled as noted in the wiki[3];
Clint has an action to investigate an alternate time for odd weeks that is
more friendly to non-US participants.  Please join us!

dt

[0] log:
http://eavesdrop.openstack.org/meetings/arch_wg/2016/arch_wg.2016-09-15-19.02.log.html
[1] https://review.openstack.org/#/c/335141/
[2] https://etherpad.openstack.org/p/arch-wg-draft
[3] https://wiki.openstack.org/wiki/Meetings/Arch-WG

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Morgan Fainberg
On Tue, Sep 20, 2016 at 9:18 AM, Doug Hellmann 
wrote:

> Excerpts from Thierry Carrez's message of 2016-09-20 10:19:04 +0200:
> > Steve Martinelli wrote:
> > > I think bundling the puppet, ansible and oslo releases together would
> > > cut down on a considerable amount of traffic. Bundling or grouping new
> > > releases may not be the most accurate, but if it encourages the right
> > > folks to read the content instead of brushing it off, I think that's
> > > worthwhile.
> >
> > Yeah, I agree that the current "style" of announcing actively trains
> > people to ignore announces. The trick is that it's non-trivial to
> > regroup announces (as they are automatically sent as a post-job for each
> > tag).
> >
> > Solutions include:
> >
> > * A daily job that catches releases of the day and batches them into a
> > single announce (issue being you don't get notified as soon as the
> > release is available, and the announce email ends up being extremely
> long)
> >
> > * A specific -release ML where all announces are posted, with a daily
> > job to generate an email (one to -announce for services, one to -dev for
> > libraries) that links to them, without expanding (issue being you don't
> > have the natural thread in -dev to react to a broken oslo release)
> >
> > * Somehow generate the email from the openstack/release request rather
> > than from the tags
>
> One email, with less detail, generated when a file merges into
> openstack/release is my preference because it's easier to implement.
>
> Alternately we could move all of the announcements we have now to
> a new -release list and folks that only want one email a day can
> subscribe using digest delivery. Of course they could do that with
> the list we have now, too.
>
> Doug
>
>

A release list makes a lot of sense. If you also include clear metadata in
the subject such as including the owning project aka: keystone (for
keystone auth, keystonemiddleware, keystoneclient), people can do direct
filtering for what they care about ( as well digest mode).

--/morgan


[openstack-dev] [cinder] running afoul of Service.__repr__() during client.services.list()

2016-09-20 Thread Konstanski, Carlos P
I am currently using python-cinderclient version 1.5.0, though the code in
question is still in master.

When calling client.services.list() I get this result: "AttributeError: service"

The execution path of client.services.list() eventually leads to this method in
cinderclient/v2/services.py:24:

def __repr__(self):
    return "<Service: %s>" % self.service

which in turn triggers a call to Resource.__getattr__() in
cinderclient/openstack/common/apiclient/base.py:456.

This custom getter will never find an attribute called service because a Service
instance looks something like the following:

{u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
u'up', u'disabled_reason': None}

So it returns the string "AttributeError: service".
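The failure mode can be reproduced with a stripped-down sketch of the pattern. The classes below are simplified stand-ins for illustration, not the actual cinderclient code:

```python
# Simplified stand-ins for cinderclient's Resource/Service classes,
# showing why __repr__ blows up when the info dict has no 'service' key.

class Resource(object):
    def __init__(self, info):
        self._info = info
        self.__dict__.update(info)   # real attrs: status, binary, host, ...

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, e.g. 'service'.
        try:
            return self._info[name]
        except KeyError:
            raise AttributeError(name)

class Service(Resource):
    def __repr__(self):
        # The API response never contains a 'service' key, so this
        # always raises AttributeError('service').
        return "<Service: %s>" % self.service

svc = Service({u'status': u'enabled', u'binary': u'cinder-scheduler'})
try:
    repr(svc)
except AttributeError as exc:
    print("AttributeError: %s" % exc)   # -> AttributeError: service
```

Whatever catches that exception upstream ends up rendering the "AttributeError: service" string seen in the report.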

One way or another a fix is warranted, and I am ready, willing and able to
provide the fix. But first I want to understand the bigger picture:
could it be that this __repr__() method is actually correct, but the code that
populates my Service instance is faulty? This could easily be the case if the
dict that feeds the Service class were to look like the following (for example):

{u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone':
u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
u'state': u'up', u'disabled_reason': None}}

Somehow I doubt it; why hide all the useful attributes in a dict under a single
parent attribute? But I'm new to cinder and I don't know the rules. I'm not here
to question your methods.

Or am I just using it wrong? This code has survived for a long time, and
certainly someone would have noticed a problem by now. But it seems pretty
straightforward. How many ways are there to prepare a call to
client.services.list()? I get a Client instance, call authenticate() for fun,
and then call client.services.list(). Not a lot going on here.

I'll get to work on a patch when I figure out what it is supposed to do, if it
is not already doing it.

Sincerely,
Carlos Konstanski


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Matt Riedemann

On 9/20/2016 4:17 PM, Matt Riedemann wrote:

On 9/20/2016 7:38 AM, Alan Pevec wrote:

2016-09-20 13:27 GMT+02:00 Kashyap Chamarthy :

  (3) Do nothing, leave the bug unfixed in stable/liberty


That was the unspoken third option, thanks for spelling it out. :-)


Yes, let's abandon both reviews.

Cheers,
Alan




I'd rather not bump the minimum on oslo.concurrency, given the transitive
dependencies that would be pulled in, which require higher minimums.

So if I had to choose I'd go with the nova backport with the workaround:

https://review.openstack.org/#/c/327624/

Which logs an error if you don't have a new enough oslo.concurrency.

But I'm also fine with just considering this a latent bug, as danpb
pointed out, and letting downstream packagers/vendors handle it as they see fit.



BTW, with my downstream hat on, if I were backporting this and packaging 
it, I'd personally cherry pick the oslo.concurrency fix that is required 
to whatever version of oslo.concurrency we shipped in each release and 
bump the patch version on our rpm. Then patch the nova fix in and make 
the nova rpm dependent on that patched oslo.concurrency rpm version. But 
that's just me. We were constrained by legal approval on which versions 
of something we could ship, and after a release those were basically 
frozen, so we couldn't just pick up new minimums and would workaround 
that by patching in the fixes we actually needed and tied the dependent 
versions between the rpms.
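That downstream scheme can be sketched with a couple of illustrative RPM spec fragments. All package names, versions, and patch names below are hypothetical, purely to show the shape of the approach:

```spec
# Patched oslo.concurrency: apply the needed fix as a patch and bump only
# the Release field, keeping the legally approved upstream Version.
Name:    python-oslo-concurrency
Version: 2.6.0
Release: 2%{?dist}
Patch0:  oslo-concurrency-needed-fix.patch

# In the nova spec, pin the dependency to that patched build:
Requires: python-oslo-concurrency >= 2.6.0-2
```

The effect is that the fixed nova rpm cannot be installed against an unpatched oslo.concurrency build.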


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Matt Riedemann

On 9/20/2016 7:38 AM, Alan Pevec wrote:

2016-09-20 13:27 GMT+02:00 Kashyap Chamarthy :

  (3) Do nothing, leave the bug unfixed in stable/liberty


That was the unspoken third option, thanks for spelling it out. :-)


Yes, let's abandon both reviews.

Cheers,
Alan




I'd rather not bump the minimum on oslo.concurrency, given the transitive 
dependencies that would be pulled in, which require higher minimums.


So if I had to choose I'd go with the nova backport with the workaround:

https://review.openstack.org/#/c/327624/

Which logs an error if you don't have a new enough oslo.concurrency.

But I'm also fine with just considering this a latent bug, as danpb 
pointed out, and letting downstream packagers/vendors handle it as they see fit.


--

Thanks,

Matt Riedemann




[openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-20 Thread Ilya Shakhat
Hi,

tldr; Commits stats are significantly skewed by deb-* projects (
http://stackalytics.com/?metric=commits=packaging-deb-group)

By default Stackalytics processes commits from a project's master branch. For
some "old core" projects there is configuration to process stable branches
as well. If a commit is cherry-picked from master to stable, it is
counted twice, once per branch / release. The configuration for a stable
branch is simple: the branch starts at a branching point (e.g. stable/newton,
which starts at rc1).

In deb-* projects the master branch corresponds to the upstream Debian
community, and all OpenStack-related contributions go into the debian/ branch.
But unlike in the rest of OpenStack, the git workflow differs: the branch
contains merge commits from master. This makes filtering "pure" branch
commits from those that came from master quite tricky (it is not possible to
specify the branch point), and supporting this will require changes to the
Stackalytics code.
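The distinction can be seen in a throwaway repo that mimics the deb-* workflow. Branch and file names below are hypothetical, for demonstration only:

```shell
# Demo of why per-branch commit counting double-counts deb-* work: the
# packaging branch carries merge commits from master. Excluding master
# with "^master" isolates the packaging branch's own commits.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/master        # pin the branch name
git config user.email demo@example.com
git config user.name demo
echo a > upstream.txt
git add upstream.txt && git commit -qm "upstream work"
git checkout -qb debian/newton                 # the packaging branch
echo b > rules && git add rules && git commit -qm "packaging work"
git checkout -q master
echo c >> upstream.txt && git commit -qam "more upstream work"
git checkout -q debian/newton
git merge -q -m "merge master" master          # the merge deb-* branches carry
# Only the packaging commit is unique to the branch:
git log --oneline --no-merges debian/newton ^master
```

A naive per-branch count over debian/newton would also attribute both upstream commits to it.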

Since we are currently at a time when people may get nervous about
numbers, I'd suggest temporarily hiding all commits from deb-* projects and
revisiting stats processing in a month.

Thanks,
Ilya


Re: [openstack-dev] [nova] Ocata development is open

2016-09-20 Thread Matt Riedemann

On 9/20/2016 2:44 PM, Matt Riedemann wrote:

Since we cut newton-rc1 last Thursday the master branch is now
officially open for Ocata development. We're still working on newton-rc2
and may be backporting regression bug fixes. I expect to tag newton-rc2
sometime next week after the final translations are in.

I've started approving some specless blueprints for Ocata already since
they are carry overs from newton:

https://blueprints.launchpad.net/nova/ocata

If you had a spec proposed or approved for Newton but the blueprint
wasn't completed in Newton, please re-propose the spec for Ocata:

https://specs.openstack.org/openstack/nova-specs/readme.html#previously-approved-specifications


Given the short runway with the Ocata schedule:

https://releases.openstack.org/ocata/schedule.html

I don't think we're going to limit spec approvals to only re-proposals
of previously approved specs. We have 4 weeks until the summit and only
a little over 2 weeks after the summit before the o-1 milestone. So
everything is fair game right now for specs, but I'd personally like to
prioritize re-approvals.

I think we'll be looking at a spec freeze at o-1 and then standard
feature freeze at o-3, for both priority and non-priority items.
However, I know that when push comes to shove, if we're rushing at the
last two weeks before feature freeze to get things done, the priority
blueprints are going to get priority. I'm hoping I can do a better job
of checkpointing our progress during Ocata so we're not rushing so much
at the end, but will need help from the subteams to stay on task and on
track for our goals for the release.

To simplify, I have the proposed Ocata draft schedule for Nova on the
wiki here:

https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule#Dates_Overview

As a reminder for summit planning, we have an etherpad here for proposed
topics:

https://etherpad.openstack.org/p/ocata-nova-summit-ideas

I'm going to filter that list this week and we'll meet next week to
discuss session topics and the design summit schedule. The currently
proposed track layout is in Thierry's email here:

http://lists.openstack.org/pipermail/openstack-dev/2016-September/103851.html


Let me know if you have any questions either by pinging me in
#openstack-nova in freenode IRC or just reply to this email.



I forgot to mention that I've also created an Ocata review priorities 
etherpad:


https://etherpad.openstack.org/p/ocata-nova-priorities-tracking

That's primed from the newton-nova-priorities-tracking etherpad so may 
need some scrubbing.


--

Thanks,

Matt Riedemann




[openstack-dev] [nova] Ocata development is open

2016-09-20 Thread Matt Riedemann
Since we cut newton-rc1 last Thursday the master branch is now 
officially open for Ocata development. We're still working on newton-rc2 
and may be backporting regression bug fixes. I expect to tag newton-rc2 
sometime next week after the final translations are in.


I've started approving some specless blueprints for Ocata already since 
they are carry overs from newton:


https://blueprints.launchpad.net/nova/ocata

If you had a spec proposed or approved for Newton but the blueprint 
wasn't completed in Newton, please re-propose the spec for Ocata:


https://specs.openstack.org/openstack/nova-specs/readme.html#previously-approved-specifications

Given the short runway with the Ocata schedule:

https://releases.openstack.org/ocata/schedule.html

I don't think we're going to limit spec approvals to only re-proposals 
of previously approved specs. We have 4 weeks until the summit and only 
a little over 2 weeks after the summit before the o-1 milestone. So 
everything is fair game right now for specs, but I'd personally like to 
prioritize re-approvals.


I think we'll be looking at a spec freeze at o-1 and then standard 
feature freeze at o-3, for both priority and non-priority items. 
However, I know that when push comes to shove, if we're rushing at the 
last two weeks before feature freeze to get things done, the priority 
blueprints are going to get priority. I'm hoping I can do a better job 
of checkpointing our progress during Ocata so we're not rushing so much 
at the end, but will need help from the subteams to stay on task and on 
track for our goals for the release.


To simplify, I have the proposed Ocata draft schedule for Nova on the 
wiki here:


https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule#Dates_Overview

As a reminder for summit planning, we have an etherpad here for proposed 
topics:


https://etherpad.openstack.org/p/ocata-nova-summit-ideas

I'm going to filter that list this week and we'll meet next week to 
discuss session topics and the design summit schedule. The currently 
proposed track layout is in Thierry's email here:


http://lists.openstack.org/pipermail/openstack-dev/2016-September/103851.html

Let me know if you have any questions either by pinging me in 
#openstack-nova in freenode IRC or just reply to this email.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [networking-sfc] OpenFlow version to use in the OVS agent

2016-09-20 Thread Cathy Zhang
Hi Bernard,

Networking-sfc currently uses OF1.3. Although OF1.3 dumps all groups, 
networking-sfc has follow-on filter code to select the info associated with the 
specific group ID from the dump. So we are fine; let's keep it at OF1.3. 

We can upgrade to OF1.5 when Neutron uses OF1.5. 

Thanks,
Cathy


-Original Message-
From: Bernard Cafarelli [mailto:bcafa...@redhat.com] 
Sent: Tuesday, September 20, 2016 7:16 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [networking-sfc] OpenFlow version to use in the OVS 
agent

In the OVSSfcAgent migration to an L2 agent extension review[1], Igor Duarte 
Cardoso noticed a difference in the OpenFlow versions between a comment and the 
actual code.
In current code [2], we have:
# We need to dump-groups according to group Id,
# which is a feature of OpenFlow1.5
full_args = ["ovs-ofctl", "-O openflow13", cmd, self.br_name

Indeed, only OpenFlow 1.5 and later support dumping a specific group [3]. 
Earlier versions of OpenFlow always dump all groups.
So current code will return all groups:
$ sudo ovs-ofctl -O OpenFlow13 dump-groups br-int 1
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
 
group_id=1,type=select,bucket=actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)
 
group_id=2,type=select,bucket=actions=set_field:fa:16:3e:2d:f3:28->eth_dst,resubmit(,5)
$ sudo ovs-ofctl -O OpenFlow15 dump-groups br-int 1
OFPST_GROUP_DESC reply (OF1.5) (xid=0x2):
 
group_id=1,type=select,bucket=bucket_id:0,actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=bucket_id:1,actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)

This code behavior will not change in my extension rewrite, so this will still 
have to be fixed, though I am not sure of the solution:
* We can use OpenFlow 1.5, but its support looks experimental? And Neutron 
apparently only uses up to 1.4 (for the OVS firewall extension)
* The method to dump a group can "grep" the group ID in the complete dump.
Not as efficient, but works with OpenFlow 1.1+
* Use another system to load balance across the port pairs?
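A sketch of the second option, filtering a full dump by group ID. This is a hypothetical helper written for illustration, not the actual networking-sfc code:

```python
# "Grep the group ID" fallback: dump everything with OF1.3, then keep
# only the lines belonging to the requested group. Works with any version
# that can dump groups, at the cost of transferring the whole group table.

def filter_group_dump(dump_output, group_id):
    """Return the lines of an ovs-ofctl dump-groups reply that match
    the given group_id."""
    wanted = "group_id=%d," % group_id
    return [line.strip() for line in dump_output.splitlines()
            if line.strip().startswith(wanted)]

sample = (
    "OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):\n"
    " group_id=1,type=select,bucket=actions=resubmit(,5)\n"
    " group_id=2,type=select,bucket=actions=resubmit(,5)\n"
)
print(filter_group_dump(sample, 1))
# -> ['group_id=1,type=select,bucket=actions=resubmit(,5)']
```

Matching on the "group_id=N," prefix (comma included) avoids confusing group 1 with group 10.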

Thoughts?
In gerrit, I kept it set to 1.5 (no impact for now as this is still marked as 
WIP)

[1]: https://review.openstack.org/#/c/351789
[2]: 
https://github.com/openstack/networking-sfc/blob/master/networking_sfc/services/sfc/common/ovs_ext_lib.py
[3]: http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt

--
Bernard Cafarelli



[openstack-dev] Community Contributor Awards

2016-09-20 Thread Kendall Nelson
Hello all,

I’m pleased to announce the next round of community contributor awards!
Similar to the Austin Summit, the awards will be presented by the
Foundation at the feedback session of the upcoming Summit in Barcelona.

Now accepting nominations! Please submit anyone you think is deserving of
an award!

https://openstackfoundation.formstack.com/forms/community_contributor_award_nomination_form

Please submit all nominations by the end of day on October 7th.

There are so many people out there who do invaluable work that should be
recognized. People that hold the community together, people that make
working on OpenStack fun, people that do a lot but aren’t called out for
their work, people that speak their mind and aren’t afraid to challenge the
norm.

Like last time, we won’t have a defined set of awards so we take extra note
of what you say about the nominee in your submission to pick the winners.

We’re excited to hear who you want to celebrate and why you think they are
awesome!

All the Best,

Kendall Nelson (diablo_rojo)


Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-20 Thread Haïkel
2016-09-19 19:40 GMT+02:00 Jeffrey Zhang :
> Kolla core reviewer team,
>
> Kolla supports multiple Linux distros now, including
>
> * Ubuntu
> * CentOS
> * RHEL
> * Fedora
> * Debian
> * OracleLinux
>
> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> robust gate to ensure the quality.
>
> For Fedora, Kolla doesn't have any tests and nobody reports any bugs
> about it (i.e. nobody uses Fedora as a base distro image). We (kolla
> team) also do not have enough resources to support so many Linux
> distros. I prefer to deprecate Fedora support now.  This was discussed in
> the past but was inconclusive[0].
>
> Please vote:
>
> 1. Kolla needs to support Fedora (if so, we need some people to set up the
> gate and fix all the issues ASAP in the O cycle)
> 2. Kolla should deprecate fedora support
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
>


/me has no voting rights

As an RDO maintainer and Fedora developer, I support option 2, as it'd be
very time-consuming to maintain Fedora support.


>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>



Re: [openstack-dev] [horizon] Browser Support

2016-09-20 Thread Shinobu Kinjo
Thanks, guys.

 - Shinobu

On Tue, Sep 20, 2016 at 5:46 PM, Rob Cresswell
 wrote:
> Agreed. I've created a bug here:
> https://bugs.launchpad.net/horizon/+bug/1625514
>
> Rob
>
> On 20 September 2016 at 09:40, amot...@gmail.com  wrote:
>>
>> I think It is time to merge the wiki into horizon devref.
>> It is not a good situation that we have two contents.
>>
>> 2016-09-20 17:01 GMT+09:00 Rob Cresswell :
>> > As you've noticed, this doc isn't updated currently. The current
>> > browsers
>> > supported are listed here:
>> > http://docs.openstack.org/developer/horizon/faq.html (Stable Firefox and
>> > Chrome, and IE 11+)
>> >
>> > Rob
>> >
>> > On 20 September 2016 at 08:23, Shinobu Kinjo  wrote:
>> >>
>> >> There are ambiguous definitions like:
>> >>
>> >> IE 11 Good?
>> >>
>> >> Can we define this description as specific as possible?
>> >>
>> >>
>> >> On Tue, Sep 20, 2016 at 4:15 PM, Radomir Dopieralski
>> >>  wrote:
>> >> > On Tue, Sep 20, 2016 at 3:53 AM, Jason Rist  wrote:
>> >> >>
>> >> >> This page hasn't been updated for a while - does anyone know the
>> >> >> latest?
>> >> >>
>> >> >> https://wiki.openstack.org/wiki/Horizon/BrowserSupport
>> >> >>
>> >> >
>> >> > As far as I know, there were no changes to the officially supported
>> >> > browser
>> >> > versions.
>> >> >
>> >> > The support for very old browsers (like msie 6) is going to
>> >> > deteriorate
>> >> > slowly, as the libraries that we use for styles and js drop support
>> >> > for
>> >> > them.
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
>



Re: [openstack-dev] [networking-cisco] Poll to decide networking-cisco weekly meeting times

2016-09-20 Thread Jeremy Stanley
On 2016-09-20 16:23:29 + (+), Sam Betts (sambetts) wrote:
> My interpretation of the guidelines was that those channels are
> reserved for official OpenStack projects (projects included in
> governance projects.yaml) only, so until we regain that status I
> was planning on holding it in the project channel.
[...]

There are plenty of unofficial teams who reserve times in our
official meeting channels already, so this shouldn't be an issue.
We've said in the past that if there is a pressing need to reassign
meeting slots then official teams would get priority, but I don't
believe that has ever actually happened and so isn't something I
would worry about.
-- 
Jeremy Stanley



Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Steven Dake (stdake)
Matthias,

I was asked by a different person why this is so.  The reason is that determining 
a majority is impossible if the electorate isn't well defined in advance of the 
vote.  In this case, the electorate is the core team, who were selected by their 
peers to serve as the leadership of the project.  We could correct this 
deficiency by holding votes across all contributors for the last year.  The 
authorized votes for the PTL election [1] for Kolla Newton were 114 people, and 
69 voted.

The PTL election was a major vote – not something simple like a deprecation 
vote.  Yet only 60% of eligible voters voted.  Using this mechanism would, in my 
opinion, result in an inability to obtain a majority on any issue; it would also 
be more heavyweight, require a whole lot more work, and slow down decision 
making.

We vote on proposals such as this to remove logjams (if there are any to be 
removed) and we want it to be lightweight.

Regards
-steve



From: Steven Dake 
Date: Tuesday, September 20, 2016 at 8:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

Mathias,

Thank you for voicing your opinion (and anyone is welcome to do that in Kolla), 
however, core reviewer votes are the only binding votes in the decision making 
process.

Regards
-steve


From: Mathias Ewald 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 20, 2016 at 7:25 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

Option 2

2016-09-20 16:07 GMT+02:00 Steven Dake (stdake) 
>:
Consider this a reversal of my vote for Debian deprecation.

Swapnil, thanks for bringing this fact to our attention.  It was missing from 
the original vote.  I don’t know why I didn’t bring up Benedikt’s contributions 
(which were substantial) just as Paul’s were substantial for Oracle Linux.  I 
guess the project is too busy for me to keep all the context in my brain.  The 
fact that there is no debian gate really is orthogonal to the discussion in my 
mind.  With this action we would be turning away a contributor who has made 
real contributions, sending the signal that his work doesn't matter or fit in 
with the project plans.

This is totally the wrong message to send.  Further, others might interpret such 
an act as "we don't care about anyone's contributions", which is not the 
culture we have cultivated since origination of the project.  We have built a 
culture of “you build it, we will accept it once it passes review”.  We want to 
hold on to that – it’s a really good thing for Kolla.  There have been 
rumblings in this thread and on irc of the expanding support matrix and our 
(lack) of dealing with it appropriately.  I think there are other ways to solve 
that problem without a policy hammer.

I added the fedora work originally along with others who have since moved on to 
other projects.  I personally have been unsuccessful at maintaining it, because 
of the change to DNF (and PTL is a 100% time commitment without a whole lot of 
time for implementation work).  That said, Fedora moves too fast for me 
personally to commit to maintenance there, so my vote there remains unchanged.

Regards
-steve





On 9/20/16, 2:34 AM, "Paul Bourke" 
> wrote:

If it's the case that neither Benedikt nor anyone else is interested in continuing
Debian, I can reverse my vote. Though it seems I'll be outvoted anyway ;)

On 20/09/16 10:21, Swapnil Kulkarni wrote:
> On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke 
> wrote:
>> -1 for deprecating Debian.
>>
>> As I mentioned in https://review.openstack.org/#/c/369183/, Debian support
>> was added incrementally by Benedikt Trefzer as recently as June. So it's
>> reasonable to believe there is at least one active user of Debian.
>>
>> I would like to try to get some input from him on whether he's still using it
>> and would be interested in helping maintain it by adding gates etc.
>>
>> On 19/09/16 18:44, Jeffrey Zhang wrote:
>>>
>>> Kolla core reviewer team,
>>>
>>> Kolla supports multiple Linux distros now, including
>>>
>>> * Ubuntu
>>> * CentOS
>>> * RHEL
>>> * Fedora
>>> * Debian
>>> * OracleLinux
>>>
>>> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
>>> robust gate to ensure the quality.
>>>
>>> For Debian, Kolla doesn't have any tests for it and nobody reports any bugs
>>> about it (i.e. nobody uses Debian as a base distro image). We (kolla
>>> team) also do not have enough resources to support so many Linux
>>> distros. I prefer to deprecate Debian support now.

Re: [openstack-dev] [networking-cisco] Poll to decide networking-cisco weekly meeting times

2016-09-20 Thread Sam Betts (sambetts)
My interpretation of the guidelines was that those channels are reserved for 
official OpenStack projects (projects included in governance projects.yaml) 
only, so until we regain that status I was planning on holding it in the 
project channel. If this isn't the case and we can use the openstack-meeting 
channels, then that would be ideal for the reasons you've stated; plus we have to 
move to them anyway once we are accepted into the governance project list, so 
it would be one less thing to redo later. I will look into this further, and if 
we can use the openstack-meeting channels then I'll aim to make that the home 
for our meetings.

Sam

On 20/09/2016 16:46, "Steven Dake (stdake)" 
> wrote:

Sam,

Can this meeting instead be held in the normal openstack-meeting-1 -> 4 
channels?  Having one-off meetings in #openstack-networking-cisco is totally 
fine.  Having standing team meetings there is atypical of OpenStack projects.  
The main value of using the openstack-meeting-1-4 channels is that a lot of 
people are in those channels, and it is easy to ping an individual in a broadly 
represented channel for cross-project issues rather than a project-specific 
channel.

Regards
-steve


From: "Sam Betts (sambetts)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, September 20, 2016 at 5:05 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [networking-cisco] Poll to decide networking-cisco 
weekly meeting times

For those interested in participating, we are setting up a weekly meeting to 
discuss the development and design of networking-cisco. This meeting will be 
held on the #openstack-networking-cisco channel. If you are interested please 
use the link below to indicate which of the dates and times are best for 
you. We will use this poll to ensure we organise a meeting that aligns best 
for the majority of attendees.

http://doodle.com/poll/huqedr8hac679c9y
NOTE: All the times are in UTC.

Sam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] let's talk (development) environment deployment tooling and workflows

2016-09-20 Thread Alex Schultz
On Mon, Sep 19, 2016 at 11:21 AM, Steven Hardy  wrote:

> Hi Alex,
>
> Firstly, thanks for this detailed feedback - it's very helpful to have
> someone with a fresh perspective look at the day-1 experience for TripleO,
> and while some of what follows are "known issues", it's great to get some
> perspective on them, as well as ideas re how we might improve things.
>
> On Thu, Sep 15, 2016 at 09:09:24AM -0600, Alex Schultz wrote:
> > Hi all,
> >
> > I've recently started looking at the various methods for deploying and
> > developing tripleo.  What I would like to bring up is the current
> > combination of the tooling for managing the VM instances and the
> > actual deployment method to launch the undercloud/overcloud
> > installation.  While running through the various methods and reading
> > up on the documentation, I'm concerned that they are not currently
> > flexible enough for a developer (or operator for that matter) to be
> > able to setup the various environment configurations for testing
> > deployments and doing development.  Additionally I ran into issues
> > just trying get them working at all so this probably doesn't help when
> > trying to attract new contributors as well.  The focus of this email
> > and of my experience seems to relate with workflow-simplification
> > spec[0].  I would like to share my experiences with the various
> > tooling available and raise some ideas.
> >
> > Example Situation:
> >
> > For example, I have a laptop with 16G of RAM and an SSD and I'd like
> > to get started with tripleo.  How can I deploy tripleo?
>
> So, this is probably problem #1, because while I have managed to deploy a
> minimal TripleO environment on a laptop with 16G of RAM, I think it's
> pretty widely known that it's not really enough (certainly with our default
> configuration, which has unfortunately grown over time as more and more
> things got integrated).
>
> I see two options here:
>
> 1. Document the reality (which is really you need a physical machine with
> at least 32G RAM unless you're prepared to deal with swapping).
>
> 2. Look at providing a "TripleO lite" install option, which disables some
> services (both on the undercloud and default overcloud install).
>
> Either of these are definitely possible, but (2) seems like the best
> long-term solution (although it probably means another CI job).
>
>
Yeah, I think 1 is an ok short-term fix and 2 is the ideal solution.  I think we
really need to do a full evaluation of what services run and where to make
sure they are also properly tunable. I think it might help with some of
this memory utilization as well.


> > Tools:
> >
> > instack:
> >
> > I started with the tripleo docs[1] that reference using the instack
> > tools for virtual environment creation while deploying tripleo.   The
> > docs say you need at least 12G of RAM[2].  The docs lie (step 7[3]).
> > So after basically shutting everything down and letting it deploy with
> > all my RAM, the deployment fails because the undercloud runs out of
> > RAM and OOM killer kills off heat.  This was not because I had reduced
> > the amount of ram for the undercloud node or anything.  It was because
> > by default, 6GB of RAM with no swap is configured for the undercloud
> > (not sure if this is a bug?).  So I added a swap file to the
> > undercloud and continued. My next adventure was having the overcloud
> > deployment fail because lack of memory as puppet fails trying to spawn
> > a process and gets denied.  The instack method does not configure swap
> > for the VMs that are deployed and the deployment did not work with 5GB
> > RAM for each node.  So for a full 16GB I was unable to follow the
> > documentation and use instack to successfully deploy.  At this point I
> > switched over to trying to use tripleo-quickstart.  Eventually I was
> > able to figure out a configuration with instack to get it to deploy
> > when I figured out how to enable swap for the overcloud deployment.
>
> Yeah, so this definitely exposes that we need to update the docs, and also
> provide an easy install-time option to enable swap on all-the-things for
> memory contrained environments.
>

Even for non-memory-constrained environments, having some swap would help
alleviate some of these random errors due to memory issues.
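
For reference, the swap file workaround described above boils down to something 
like the following on the undercloud node (the size and path here are 
illustrative, not values the docs prescribe; tune for your environment):

```shell
# Illustrative: create and enable a 4G swap file on the undercloud
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```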


>
> > tripleo-quickstart:
> >
> > The next thing I attempted to use was the tripleo-quickstart[4].
> > Following the directions I attempted to deploy against my localhost.
> > It turns out that doesn't work as expected since ansible likes to do
> > magic when dealing with localhost[5].  Ultimately I was unable to get
> > it working against my laptop locally because I ran into some libvirt
> > issues.  But I was able to get it to work when I pointed it at a
> > separate machine.  It should be noted that tripleo-quickstart creates
> > an undercloud with swap which was nice because then it actually works,
> > but is an inconsistent experience depending on which tool you used for

Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 11:36:29AM -0400, Sean Dague wrote:
> On 09/20/2016 11:20 AM, Daniel P. Berrange wrote:
> > On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:
> >> On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
> >>> On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
>  This is a bit delayed due to the release rush, finally getting back to
>  writing up my experiences at the Ops Meetup.
> 
>  Nova Feedback Session
>  =
> 
>  We had a double session for Feedback for Nova from Operators, raw
>  etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> 
>  The median release people were on in the room was Kilo. Some were
>  upgrading to Liberty, many had older than Kilo clouds. Remembering
>  these are the larger ops environments that are engaged enough with the
>  community to send people to the Ops Meetup.
> 
> 
>  Performance Bottlenecks
>  ---
> 
>  * scheduling issues with Ironic - (this is a bug we got through during
>    the week after the session)
>  * live snapshots actually end up performance issue for people
> 
>  The workarounds config group was not well known, and everyone in the
>  room wished we advertised that a bit more. The solution for snapshot
>  performance is in there
> 
>  There were also general questions about what scale cells should be
>  considered at.
> 
>  ACTION: we should make sure workarounds are advertised better
> >>>
> >>> Workarounds ought to be something that admins are rarely, if
> >>> ever, having to deal with.
> >>>
> >>> If the lack of live snapshot is such a major performance problem
> >>> for ops, this tends to suggest that our default behaviour is wrong,
> >>> rather than a need to publicise that operators should set this
> >>> workaround.
> >>>
> >>> eg, instead of optimizing for the case of a broken live snapshot
> >>> support by default, we should optimize for the case of working
> >>> live snapshot by default. The broken live snapshot stuff was so
> >>> rare that no one has ever reproduced it outside of the gate
> >>> AFAIK.
> >>>
> >>> IOW, rather than hardcoding disable_live_snapshot=True in nova,
> >>> we should just set it in the gate CI configs, and leave it set
> >>> to False in Nova, so operators get good performance out of the
> >>> box.
> >>>
> >>> Also it has been a while since we added the workaround, and IIRC,
> >>> we've got newer Ubuntu available on at least some of the gate
> >>> hosts now, so we have the ability to test to see if it still
> >>> hits newer Ubuntu. 
> >>
> >> Here is my reconstruction of the snapshot issue from what I can remember
> >> of the conversation.
> >>
> >> Nova defaults to live snapshots. This uses the libvirt facility which
> >> dumps both memory and disk. And then we throw away the memory. For large
> >> memory guests (especially volume backed ones that might have a fast path
> >> for the disk) this leads to a lot of overhead for no gain. The
> >> workaround got them past it.
> > 
> > I think you've got it backwards there.
> > 
> > Nova defaults to *not* using live snapshots:
> > 
> > cfg.BoolOpt(
> > 'disable_libvirt_livesnapshot',
> > default=True,
> > help="""
> > Disable live snapshots when using the libvirt driver.
> > ...""")
> > 
> > 
> > When live snapshot is disabled like this, the snapshot code is unable
> > to guarantee a consistent disk state. So the libvirt nova driver will
> > stop the guest by doing a managed save (this saves all memory to
> > disk), then does the disk snapshot, then restores the managed saved
> > (which loads all memory from disk).
> > 
> > This is terrible for multiple reasons
> > 
> >   1. the guest workload stops running while snapshot is taken
> >   2. we churn disk I/O saving & loading VM memory
> >   3. you can't do it at all if host PCI devices are attached to
> >  the VM
> > 
> > Enabling live snapshots by default fixes all these problems, at the
> > risk of hitting the live snapshot bug we saw in the gate CI but never
> > anywhere else.
> 
> Ah, right. I'll propose inverting the default and we'll see if we can
> get past the testing in the gate - https://review.openstack.org/#/c/373430/

NB the bug was non-deterministic and rare, even in the gate, so the
real test is whether it gets past the gate 20 times in a row :-)
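
For operators who want the live-snapshot behaviour today, without waiting for 
the default to flip, the toggle lives in the workarounds group of nova.conf; a 
sketch, assuming the option name shown in the cfg.BoolOpt quoted above:

```ini
# nova.conf on compute nodes - opt back in to live snapshots
[workarounds]
disable_libvirt_livesnapshot = False
```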

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-09-20 10:19:04 +0200:
> Steve Martinelli wrote:
> > I think bundling the puppet, ansible and oslo releases together would
> > cut down on a considerable amount of traffic. Bundling or grouping new
> > releases may not be the most accurate, but if it encourages the right
> > folks to read the content instead of brushing it off, I think that's
> > worthwhile.
> 
> Yeah, I agree that the current "style" of announcing actively trains
> people to ignore announces. The trick is that it's non-trivial to
> regroup announces (as they are automatically sent as a post-job for each
> tag).
> 
> Solutions include:
> 
> * A daily job that catches releases of the day and batches them into a
> single announce (issue being you don't get notified as soon as the
> release is available, and the announce email ends up being extremely long)
> 
> * A specific -release ML where all announces are posted, with a daily
> job to generate an email (one to -announce for services, one to -dev for
> libraries) that links to them, without expanding (issue being you don't
> have the natural thread in -dev to react to a broken oslo release)
> 
> * Somehow generate the email from the openstack/release request rather
> than from the tags

One email, with less detail, generated when a file merges into
openstack/release is my preference because it's easier to implement.

Alternately we could move all of the announcements we have now to
a new -release list and folks that only want one email a day can
subscribe using digest delivery. Of course they could do that with
the list we have now, too.

Doug



Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Kashyap Chamarthy
On Tue, Sep 20, 2016 at 04:20:49PM +0100, Daniel P. Berrange wrote:
> On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:

[...]

> > Here is my reconstruction of the snapshot issue from what I can remember
> > of the conversation.
> > 
> > Nova defaults to live snapshots. This uses the libvirt facility which
> > dumps both memory and disk. And then we throw away the memory. For large
> > memory guests (especially volume backed ones that might have a fast path
> > for the disk) this leads to a lot of overhead for no gain. The
> > workaround got them past it.
> 
> I think you've got it backwards there.
> 
> Nova defaults to *not* using live snapshots:
> 
> cfg.BoolOpt(
> 'disable_libvirt_livesnapshot',
> default=True,
> help="""
> Disable live snapshots when using the libvirt driver.
> ...""")
>
>
> When live snapshot is disabled like this, the snapshot code is unable
> to guarantee a consistent disk state. So the libvirt nova driver will
> stop the guest by doing a managed save (this saves all memory to
> disk), then does the disk snapshot, then restores the managed saved
> (which loads all memory from disk).
> 
> This is terrible for multiple reasons
> 
>   1. the guest workload stops running while snapshot is taken
>   2. we churn disk I/O saving & loading VM memory
>   3. you can't do it at all if host PCI devices are attached to
>  the VM
> 
> Enabling live snapshots by default fixes all these problems, at the
> risk of hitting the live snapshot bug we saw in the gate CI but never
> anywhere else.

Yes, FWIW, I agree.  In addition to the nice details above, enabling the
live snapshots also allows you to quiesce file systems for consistent
disk state, via the Glance image metadata properties
'hw_qemu_guest_agent' and 'os_require_quiesce'.  (Current cold snapshot
mechanism doesn't allow this.)
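
For completeness, those image properties are typically set on the Glance image 
along these lines (the client invocation and image name here are illustrative):

```shell
# Illustrative: flag an image as running the qemu guest agent and
# requiring filesystem quiesce during snapshot
openstack image set \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes \
  my-guest-image
```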


Anyhow, Sean seems to have submitted the change to toggle the config for
enabling live_snapshots:

https://review.openstack.org/#/c/373430/ -- "Change
disable_live_snapshot workaround"


-- 
/kashyap



Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-20 Thread John Griffith
On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas 
wrote:

> On 20 September 2016 at 16:24, Nikita Konovalov 
> wrote:
>
>> Hi,
>>
>> From Sahara (and Hadoop workload in general) use-case the reason we used
>> BDD was a complete absence of any overhead on compute resources
>> utilization.
>>
>> The results show that the LVM+Local target perform pretty close to BDD in
>> synthetic tests. It's a good sign for LVM. It actually shows that most of
>> the storage virtualization overhead is not caused by LVM partitions and
>> drivers themselves but rather by the iSCSI daemons.
>>
>> So I would still like to have the ability to attach partitions locally
>> bypassing the iSCSI to guarantee 2 things:
>> * Make sure that lio processes do not compete for CPU and RAM with VMs
>> running on the same host.
>> * Make sure that CPU intensive VMs (or whatever else is running nearby)
>> are not blocking the storage.
>>
>
> So these are, unless we see the effects via benchmarks, completely
> meaningless requirements. Ivan's initial benchmarks suggest that LVM+LIO is
> pretty much close enough to BDD even with iSCSI involved. If you're aware
> of a case where it isn't, the first thing to do is to provide proof via a
> reproducible benchmark. Otherwise we are likely to proceed, as John
> suggests, with the assumption that local target does not provide much
> benefit.
>
> I've a few benchmarks myself that I suspect will find areas where getting
> rid of iSCSI is a benefit, however if you have any then you really need to
> step up and provide the evidence. Relying on vague claims of overhead is
> now proven to not be a good idea.
>
Honestly we can have both, I'll work up a bp to resurrect the idea of a
"smart" scheduling feature that lets you request the volume be on the same
node as the compute node and use it directly, and then if it's NOT it will
attach a target and use it that way (in other words you run a stripped down
c-vol service on each compute node).

Sahara keeps insisting on being a snowflake with Cinder volumes and the
block driver, it's really not necessary.  I think we can compromise just a
little both ways, give you standard Cinder semantics for volumes, but allow
you direct access to them if/when requested, but have those be flexible
enough that targets *can* be attached so they meet all of the required
functionality and API implementations.  This also means that we don't have
to continue having a *special* driver in Cinder that frankly only works for
one specific use case and deployment.

I've pointed to this a number of times but it never seems to resonate...
but I never learn so I'll try it once again [1].  Note that was before the
name "brick" was hijacked and now means something completely different.

[1]: https://wiki.openstack.org/wiki/CinderBrick

Thanks,
John


Re: [openstack-dev] [networking-cisco] Poll to decide networking-cisco weekly meeting times

2016-09-20 Thread Steven Dake (stdake)
Sam,

Can this meeting instead be held in the normal openstack-meeting-1 -> 4 
channels?  Having one-off meetings in #openstack-networking-cisco is totally 
fine.  Having standing team meetings there is atypical of OpenStack projects.  
The main value of using the openstack-meeting-1-4 channels is that a lot of 
people are in those channels, and it is easy to ping an individual in a broadly 
represented channel for cross-project issues rather than a project-specific 
channel.

Regards
-steve


From: "Sam Betts (sambetts)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 20, 2016 at 5:05 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [networking-cisco] Poll to decide networking-cisco 
weekly meeting times

For those interested in participating, we are setting up a weekly meeting to 
discuss the development and design of networking-cisco. This meeting will be 
held on the #openstack-networking-cisco channel. If you are interested please 
use the link below to indicate which of the dates and times are best for 
you. We will use this poll to ensure we organise a meeting that aligns best 
for the majority of attendees.

http://doodle.com/poll/huqedr8hac679c9y
NOTE: All the times are in UTC.

Sam


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Sean Dague
On 09/20/2016 11:20 AM, Daniel P. Berrange wrote:
> On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:
>> On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
>>> On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
 This is a bit delayed due to the release rush, finally getting back to
 writing up my experiences at the Ops Meetup.

 Nova Feedback Session
 =

 We had a double session for Feedback for Nova from Operators, raw
 etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.

 The median release people were on in the room was Kilo. Some were
 upgrading to Liberty, many had older than Kilo clouds. Remembering
 these are the larger ops environments that are engaged enough with the
 community to send people to the Ops Meetup.


 Performance Bottlenecks
 ---

 * scheduling issues with Ironic - (this is a bug we got through during
   the week after the session)
 * live snapshots actually end up performance issue for people

 The workarounds config group was not well known, and everyone in the
 room wished we advertised that a bit more. The solution for snapshot
 performance is in there

 There were also general questions about what scale cells should be
 considered at.

 ACTION: we should make sure workarounds are advertised better
>>>
>>> Workarounds ought to be something that admins are rarely, if
>>> ever, having to deal with.
>>>
>>> If the lack of live snapshot is such a major performance problem
>>> for ops, this tends to suggest that our default behaviour is wrong,
>>> rather than a need to publicise that operators should set this
>>> workaround.
>>>
>>> eg, instead of optimizing for the case of a broken live snapshot
>>> support by default, we should optimize for the case of working
>>> live snapshot by default. The broken live snapshot stuff was so
>>> rare that no one has ever reproduced it outside of the gate
>>> AFAIK.
>>>
>>> IOW, rather than hardcoding disable_live_snapshot=True in nova,
>>> we should just set it in the gate CI configs, and leave it set
>>> to False in Nova, so operators get good performance out of the
>>> box.
>>>
>>> Also it has been a while since we added the workaround, and IIRC,
>>> we've got newer Ubuntu available on at least some of the gate
>>> hosts now, so we have the ability to test to see if it still
>>> hits newer Ubuntu. 
>>
>> Here is my reconstruction of the snapshot issue from what I can remember
>> of the conversation.
>>
>> Nova defaults to live snapshots. This uses the libvirt facility which
>> dumps both memory and disk. And then we throw away the memory. For large
>> memory guests (especially volume backed ones that might have a fast path
>> for the disk) this leads to a lot of overhead for no gain. The
>> workaround got them past it.
> 
> I think you've got it backwards there.
> 
> Nova defaults to *not* using live snapshots:
> 
> cfg.BoolOpt(
> 'disable_libvirt_livesnapshot',
> default=True,
> help="""
> Disable live snapshots when using the libvirt driver.
> ...""")
> 
> 
> When live snapshot is disabled like this, the snapshot code is unable
> to guarantee a consistent disk state. So the libvirt nova driver will
> stop the guest by doing a managed save (this saves all memory to
> disk), then does the disk snapshot, then restores the managed saved
> (which loads all memory from disk).
> 
> This is terrible for multiple reasons
> 
>   1. the guest workload stops running while snapshot is taken
>   2. we churn disk I/O saving & loading VM memory
>   3. you can't do it at all if host PCI devices are attached to
>  the VM
> 
> Enabling live snapshots by default fixes all these problems, at the
> risk of hitting the live snapshot bug we saw in the gate CI but never
> anywhere else.

Ah, right. I'll propose inverting the default and we'll see if we can
get past the testing in the gate - https://review.openstack.org/#/c/373430/

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Steven Dake (stdake)
Mathias,

Thank you for voicing your opinion (and anyone is welcome to do that in Kolla), 
however, core reviewer votes are the only binding votes in the decision making 
process.

Regards
-steve


From: Mathias Ewald 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 20, 2016 at 7:25 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

Option 2

2016-09-20 16:07 GMT+02:00 Steven Dake (stdake) 
>:
Consider this a reversal of my vote for Debian deprecation.

Swapnil, thanks for bringing this fact to our attention.  It was missing from 
the original vote.  I don't know why I didn't bring up Benedikt's contributions 
(which were substantial) just as Paul’s were substantial for Oracle Linux.  I 
guess the project is too busy for me to keep all the context in my brain.  The 
fact that there is no debian gate really is orthogonal to the discussion in my 
mind.  With this action we would be turning away a contributor who has made 
real contributions, and sending the signal that his work doesn't matter or fit in 
with the project plans.

This is totally the wrong message to send.  Further, others might interpret such 
an act as a "we don't care about anyone's contributions", which is not the 
culture we have cultivated since the origination of the project.  We have built a 
culture of "you build it, we will accept it once it passes review".  We want to 
hold on to that; it's a really good thing for Kolla.  There have been 
rumblings in this thread and on IRC about the expanding support matrix and our 
(lack of) dealing with it appropriately.  I think there are other ways to solve 
that problem without a policy hammer.

I added the fedora work originally along with others who have since moved on to 
other projects.  I personally have been unsuccessful at maintaining it, because 
of the change to DNF (and PTL is a 100% time commitment without a whole lot of 
time for implementation work).  That said, Fedora moves too fast for me 
personally to commit to maintenance there, so my vote there remains unchanged.

Regards
-steve





On 9/20/16, 2:34 AM, "Paul Bourke" 
> wrote:

If it's the case that neither Benedikt nor anyone else is interested in continuing
Debian, I can reverse my vote. Though it seems I'll be outvoted anyway ;)

On 20/09/16 10:21, Swapnil Kulkarni wrote:
> On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke 
> wrote:
>> -1 for deprecating Debian.
>>
>> As I mentioned in https://review.openstack.org/#/c/369183/, Debian support
>> was added incrementally by Benedikt Trefzer as recently as June. So it's
>> reasonable to believe there is at least one active user of Debian.
>>
>> I would like to try to get some input from him on whether he's still using it
>> and would be interested in helping maintain it by adding gates etc.
>>
>> On 19/09/16 18:44, Jeffrey Zhang wrote:
>>>
>>> Kolla core reviewer team,
>>>
>>> Kolla supports multiple Linux distros now, including
>>>
>>> * Ubuntu
>>> * CentOS
>>> * RHEL
>>> * Fedora
>>> * Debian
>>> * OracleLinux
>>>
>>> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
>>> robust gate to ensure the quality.
>>>
>>> For Debian, Kolla doesn't have any tests for it and nobody reports any bugs
>>> about it (i.e. nobody uses Debian as a base distro image). We (kolla
>>> team) also do not have enough resources to support so many Linux
>>> distros. I prefer to deprecate Debian support now.
>>>
>>> Please vote:
>>>
>>> 1. Kolla needs to support Debian (if so, we need some guys to set up the
>>> gate and fix all the issues ASAP in the O cycle)
>>> 2. Kolla should deprecate Debian support
>>>
>>> Voting is open for 7 days until September 27th, 2016.
>>>
>>
>
>
> +1 for #2
>
> I agree with the reasoning from Paul; though Debian support was being
> added incrementally by Benedikt Trefzer, it has been stopped in the
> middle and all patches in the queue were abandoned [1]
>
> [1] https://review.openstack.org/#/q/topic:bp/build-debian,n,z
>

Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:
> On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
> > On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
> >> This is a bit delayed due to the release rush, finally getting back to
> >> writing up my experiences at the Ops Meetup.
> >>
> >> Nova Feedback Session
> >> =
> >>
> >> We had a double session for Feedback for Nova from Operators, raw
> >> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> >>
> >> The median release people were on in the room was Kilo. Some were
> >> upgrading to Liberty, many had older than Kilo clouds. Remembering
> >> these are the larger ops environments that are engaged enough with the
> >> community to send people to the Ops Meetup.
> >>
> >>
> >> Performance Bottlenecks
> >> ---
> >>
> >> * scheduling issues with Ironic - (this is a bug we got through during
> >>   the week after the session)
> >> * live snapshots actually end up performance issue for people
> >>
> >> The workarounds config group was not well known, and everyone in the
> >> room wished we advertised that a bit more. The solution for snapshot
> >> performance is in there
> >>
> >> There were also general questions about what scale cells should be
> >> considered at.
> >>
> >> ACTION: we should make sure workarounds are advertised better
> > 
> > Workarounds ought to be something that admins are rarely, if
> > ever, having to deal with.
> > 
> > If the lack of live snapshot is such a major performance problem
> > for ops, this tends to suggest that our default behaviour is wrong,
> > rather than a need to publicise that operators should set this
> > workaround.
> > 
> > eg, instead of optimizing for the case of a broken live snapshot
> > support by default, we should optimize for the case of working
> > live snapshot by default. The broken live snapshot stuff was so
> > rare that no one has ever reproduced it outside of the gate
> > AFAIK.
> > 
> > IOW, rather than hardcoding disable_live_snapshot=True in nova,
> > we should just set it in the gate CI configs, and leave it set
> > to False in Nova, so operators get good performance out of the
> > box.
> > 
> > Also it has been a while since we added the workaround, and IIRC,
> > we've got newer Ubuntu available on at least some of the gate
> > hosts now, so we have the ability to test to see if it still
> > hits newer Ubuntu. 
> 
> Here is my reconstruction of the snapshot issue from what I can remember
> of the conversation.
> 
> Nova defaults to live snapshots. This uses the libvirt facility which
> dumps both memory and disk. And then we throw away the memory. For large
> memory guests (especially volume backed ones that might have a fast path
> for the disk) this leads to a lot of overhead for no gain. The
> workaround got them past it.

I think you've got it backwards there.

Nova defaults to *not* using live snapshots:

cfg.BoolOpt(
    'disable_libvirt_livesnapshot',
    default=True,
    help="""
Disable live snapshots when using the libvirt driver.
...""")


When live snapshot is disabled like this, the snapshot code is unable
to guarantee a consistent disk state. So the libvirt nova driver will
stop the guest by doing a managed save (this saves all memory to
disk), then does the disk snapshot, then restores the managed save
(which loads all memory from disk).

This is terrible for multiple reasons

  1. the guest workload stops running while snapshot is taken
  2. we churn disk I/O saving & loading VM memory
  3. you can't do it at all if host PCI devices are attached to
 the VM

Enabling live snapshots by default fixes all these problems, at the
risk of hitting the live snapshot bug we saw in the gate CI but never
anywhere else.
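For reference, the option being discussed lives in the [workarounds] group
of nova.conf; an operator who wants live snapshots today would set something
like the following (a sketch of just the relevant option, not a complete
config):

```ini
[workarounds]
# Re-enable libvirt live snapshots. The current default (True) forces the
# slow stop/managed-save/snapshot/restore path described above.
disable_libvirt_livesnapshot = False
```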

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Matt Borland Searchlight UI Core Nomination

2016-09-20 Thread Brian Rosmaita
+1 on both issues.  Great work, Matt!

On 9/19/16, 1:13 PM, "Tripp, Travis S"  wrote:

>Hello!
> 
>I am nominating Matt Borland for Searchlight UI core. Matt has been
>working on searchlight UI since we started it and he is the primary
>author of the angular registration framework in Horizon which Searchlight
>depends upon. Matt has the second most searchlight UI commits in Newton
>[0], but more impressively, Matt has more commits on Horizon than ANY
>other person in Newton [1]. In addition, Matt is a top 5 reviewer on
>Searchlight UI [2] and a top-12 reviewer on Horizon [3]. He has provided
>thoughtful feedback and valuable insights throughout his time on
>Searchlight.
>
>Searchlight UI core team is currently the combination of Searchlight
>cores and Horizon Cores.  With this change we would make a searchlight UI
>core group that has core privileges in addition to those two groups.
> 
>[0] http://stackalytics.com/?metric=commits&module=searchlight-ui
>[1] http://stackalytics.com/?metric=commits&module=horizon
>[2] http://stackalytics.com/report/contribution/searchlight-ui/180
>[3] http://stackalytics.com/report/contribution/horizon/180
>
>Thanks,
>Travis
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Tim Bell

On 20 Sep 2016, at 16:38, Sean Dague wrote:
...
There were also general questions about what scale cells should be
considered at.

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.

I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically, the things to look out for that indicate you are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).

-Sean


We had an 'interesting' experience splitting a cell which I would not recommend
for others.

We started off letting our cells grow to about 1000 hypervisors but following 
discussions in the
large deployment team, ended up aiming for 200 or so per cell. This also 
allowed us to make the
hardware homogeneous in a cell.

We then split the original 1000 hypervisor cell into smaller ones which was 
hard work to plan.

Thus, I think people who suspect they may need cells are better off adding new
cells than letting their first one grow until they are forced into cells at a
later stage and then have to do a split.

Tim

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #91

2016-09-20 Thread Iury Gregory
No topic/discussion in our agenda. Meeting cancelled, see you next week!


2016-09-19 18:00 GMT-03:00 Iury Gregory <iurygreg...@gmail.com>:

> Hi Puppeteers,
>
> If you have any topic to add for this week, please use the etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160920
>
> See you tomorrow =)
>
> 2016-09-12 13:23 GMT-03:00 Emilien Macchi <emil...@redhat.com>:
>
>> Hi,
>>
>> Tomorrow we'll have our weekly meeting:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160913
>> Feel free to add topics as usual,
>>
>> Thanks!
>>
>> On Thu, Sep 8, 2016 at 9:57 AM, Iury Gregory <iurygreg...@gmail.com>
>> wrote:
>> > No topic/discussion in our agenda, we cancelled the meeting, see you
>> next
>> > week!
>> >
>> > 2016-09-05 15:39 GMT-03:00 Iury Gregory <iurygreg...@gmail.com>:
>> >>
>> >> Hi Puppeteers,
>> >>
>> >> If you have any topic to add for this week, please use the etherpad:
>> >> https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>> ting-20160906
>> >>
>> >> See you tomorrow =)
>> >>
>> >> 2016-08-30 12:02 GMT-03:00 Emilien Macchi <emil...@redhat.com>:
>> >>>
>> >>> No topic this week, meeting cancelled!
>> >>>
>> >>> See you next week :)
>> >>>
>> >>> On Mon, Aug 29, 2016 at 1:45 PM, Emilien Macchi <emil...@redhat.com>
>> >>> wrote:
>> >>> > Hi,
>> >>> >
>> >>> > If you have any topic to add for this week, please use the etherpad:
>> >>> >
>> >>> > https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>> ting-20160830
>> >>> >
>> >>> > See you tomorrow,
>> >>> >
>> >>> > On Tue, Aug 23, 2016 at 1:08 PM, Iury Gregory <
>> iurygreg...@gmail.com>
>> >>> > wrote:
>> >>> >> No topic/discussion in our agenda, we cancelled the meeting, see
>> you
>> >>> >> next
>> >>> >> week!
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> 2016-08-22 16:19 GMT-03:00 Iury Gregory <iurygreg...@gmail.com>:
>> >>> >>>
>> >>> >>> Hi Puppeteers!
>> >>> >>>
>> >>> >>> We'll have our weekly meeting tomorrow at 3pm UTC on
>> >>> >>> #openstack-meeting-4
>> >>> >>>
>> >>> >>> Here's a first agenda:
>> >>> >>>
>> >>> >>> https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>> ting-20160823
>> >>> >>>
>> >>> >>> Feel free to add topics, and any outstanding bug and patch.
>> >>> >>>
>> >>> >>> See you tomorrow!
>> >>> >>> Thanks,
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >>
>> >>> >> ~
>> >>> >> Att[]'s
>> >>> >> Iury Gregory Melo Ferreira
>> >>> >> Master student in Computer Science at UFCG
>> >>> >> E-mail:  iurygreg...@gmail.com
>> >>> >> ~
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Emilien Macchi
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Emilien Macchi
>> >>>
>> >>>

Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-20 Thread Duncan Thomas
On 20 September 2016 at 16:24, Nikita Konovalov wrote:

> Hi,
>
> From Sahara (and Hadoop workload in general) use-case the reason we used
> BDD was a complete absence of any overhead on compute resources
> utilization.
>
> The results show that the LVM+Local target perform pretty close to BDD in
> synthetic tests. It's a good sign for LVM. It actually shows that most of
> the storage virtualization overhead is not caused by LVM partitions and
> drivers themselves but rather by the iSCSI daemons.
>
> So I would still like to have the ability to attach partitions locally
> bypassing the iSCSI to guarantee 2 things:
> * Make sure that lio processes do not compete for CPU and RAM with VMs
> running on the same host.
> * Make sure that CPU intensive VMs (or whatever else is running nearby)
> are not blocking the storage.
>

So these are, unless we see the effects via benchmarks, completely
meaningless requirements. Ivan's initial benchmarks suggest that LVM+LIO is
pretty much close enough to BDD even with iSCSI involved. If you're aware
of a case where it isn't, the first thing to do is to provide proof via a
reproducible benchmark. Otherwise we are likely to proceed, as John
suggests, with the assumption that local target does not provide much
benefit.

I have a few benchmarks myself that I suspect will find areas where getting
rid of iSCSI is a benefit; however, if you have any then you really need to
step up and provide the evidence. Relying on vague claims of overhead has
proven not to be a good idea.
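For what it's worth, a reproducible comparison could be as simple as running
the same fio job file against a volume attached via each path (iSCSI target
vs. local attach) and comparing the numbers. A minimal sketch of such a job
file follows; the device path (/dev/vdb) and runtime are assumptions to tune
for your environment, not part of any benchmark from this thread:

```ini
; Hypothetical fio job for comparing volume attach paths.
[global]
ioengine=libaio
direct=1
runtime=60
time_based=1
group_reporting=1

[randread-4k]
; assumes the volume under test appears as /dev/vdb in the guest
filename=/dev/vdb
bs=4k
rw=randread
iodepth=32
numjobs=4
```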
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Sean Dague
On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
> On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
>> This is a bit delayed due to the release rush, finally getting back to
>> writing up my experiences at the Ops Meetup.
>>
>> Nova Feedback Session
>> =
>>
>> We had a double session for Feedback for Nova from Operators, raw
>> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
>>
>> The median release people were on in the room was Kilo. Some were
>> upgrading to Liberty, many had older than Kilo clouds. Remembering
>> these are the larger ops environments that are engaged enough with the
>> community to send people to the Ops Meetup.
>>
>>
>> Performance Bottlenecks
>> ---
>>
>> * scheduling issues with Ironic - (this is a bug we got through during
>>   the week after the session)
>> * live snapshots actually end up performance issue for people
>>
>> The workarounds config group was not well known, and everyone in the
>> room wished we advertised that a bit more. The solution for snapshot
>> performance is in there
>>
>> There were also general questions about what scale cells should be
>> considered at.
>>
>> ACTION: we should make sure workarounds are advertised better
> 
> Workarounds ought to be something that admins are rarely, if
> ever, having to deal with.
> 
> If the lack of live snapshot is such a major performance problem
> for ops, this tends to suggest that our default behaviour is wrong,
> rather than a need to publicise that operators should set this
> workaround.
> 
> eg, instead of optimizing for the case of a broken live snapshot
> support by default, we should optimize for the case of working
> live snapshot by default. The broken live snapshot stuff was so
> rare that no one has ever reproduced it outside of the gate
> AFAIK.
> 
> IOW, rather than hardcoding disable_live_snapshot=True in nova,
> we should just set it in the gate CI configs, and leave it set
> to False in Nova, so operators get good performance out of the
> box.
> 
> Also it has been a while since we added the workaround, and IIRC,
> we've got newer Ubuntu available on at least some of the gate
> hosts now, so we have the ability to test to see if it still
> hits newer Ubuntu. 

Here is my reconstruction of the snapshot issue from what I can remember
of the conversation.

Nova defaults to live snapshots. This uses the libvirt facility which
dumps both memory and disk. And then we throw away the memory. For large
memory guests (especially volume backed ones that might have a fast path
for the disk) this leads to a lot of overhead for no gain. The
workaround got them past it.

Maybe there is another bug we should be addressing here, but it was an
issue out there people were seeing on the performance side.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Usability study at Barcelona Summit

2016-09-20 Thread Kruithof Jr, Pieter
Hi Chris,

Should we use the API WG meeting slot on Thursday to discuss the study?  If so, 
I will send out details for a virtual room.

Piet




On 9/20/16, 3:39 AM, "Chris Dent"  wrote:

>On Mon, 19 Sep 2016, Kruithof Jr, Pieter wrote:
>
>> I was planning to run a usability study on behalf of the API working
>> group at the Barcelona summit. The plan is to have operators
>> complete a set of common tasks, such as adjusting quotas, using the
>> projects APIs.
>
>Since you're intending to use python-openstackclient as the client
>in this tests, it sounds like you'll be doing a usability study of
>that client. This is a great thing to do, but you should be aware
>that you'll probably want a contact from the developers of that client
>as well as or instead of someone from the API-WG.
>
>The API-WG, of late, has mostly been concerned with the correctness
>of the HTTP interactions that the OpenStack APIs allow. Since the
>openstackclient abstracts these interactions the kinds of data that
>we'll find most use from tests using the client include:
>
>* are the resources made available by the APIs (and thus the actions
>   that can be performed by the client) the right ones
>* are the modes of filtering, sorting, limiting adequate
>* are there inconsistencies in the service APIs that cause there to
>   be inconsistencies in the behaviour of the client
>
>I'm sure there are plenty more, that's just off the top of my head.
>
>> Chris, 
>> 
>> Were you still able to act as my contact?
>
>Yes. Unfortunately, of the three api-wg cores, I'm the only one who
>has been able to make plans to go to summit.
>
>
>> I’ve created an etherpad to begin planning the study:
>> 
>> https://etherpad.openstack.org/p/osux-api-oct2016
>> 
>> Thanks,
>
>Thank you.
>
>
>-- 
>Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
>freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
> This is a bit delayed due to the release rush, finally getting back to
> writing up my experiences at the Ops Meetup.
> 
> Nova Feedback Session
> =
> 
> We had a double session for Feedback for Nova from Operators, raw
> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> 
> The median release people were on in the room was Kilo. Some were
> upgrading to Liberty, many had older than Kilo clouds. Remembering
> these are the larger ops environments that are engaged enough with the
> community to send people to the Ops Meetup.
> 
> 
> Performance Bottlenecks
> ---
> 
> * scheduling issues with Ironic - (this is a bug we got through during
>   the week after the session)
> * live snapshots actually end up performance issue for people
> 
> The workarounds config group was not well known, and everyone in the
> room wished we advertised that a bit more. The solution for snapshot
> performance is in there
> 
> There were also general questions about what scale cells should be
> considered at.
> 
> ACTION: we should make sure workarounds are advertised better

Workarounds ought to be something that admins are rarely, if
ever, having to deal with.

If the lack of live snapshot is such a major performance problem
for ops, this tends to suggest that our default behaviour is wrong,
rather than a need to publicise that operators should set this
workaround.

eg, instead of optimizing for the case of a broken live snapshot
support by default, we should optimize for the case of working
live snapshot by default. The broken live snapshot stuff was so
rare that no one has ever reproduced it outside of the gate
AFAIK.

IOW, rather than hardcoding disable_live_snapshot=True in nova,
we should just set it in the gate CI configs, and leave it set
to False in Nova, so operators get good performance out of the
box.

Also it has been a while since we added the workaround, and IIRC,
we've got newer Ubuntu available on at least some of the gate
hosts now, so we have the ability to test to see if it still
hits newer Ubuntu. 


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]What is definition of critical bugfixes?

2016-09-20 Thread Matt Riedemann

On 9/20/2016 4:25 AM, Rikimaru Honjo wrote:

Hi All,

I requested a review of my patch in the last weekly Nova team meeting.[1]
In this meeting, Dan Smith said the following about my patch:

* This patch is too large to merge in rc2.[2]
* Fix after Newton and backport to newton and mitaka.[3]

In my understanding, we can only backport critical bugfixes and security
patches in Phase II.[4]
And stable/mitaka moves to Phase II after Newton.

What is the definition of a critical bugfix?
And can I backport my patch to mitaka after Newton?

[1]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-178

[2]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-194

[3]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-185

[4]http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases


Best regards,


Critical generally means data loss, security issues, or upgrade impacts, 
i.e. does a bug cause data loss or prevent upgrades to a given release?


Latent known issues are generally not considered critical bug fixes, 
especially if they are large and complicated which means they are prone 
to introduce regressions.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Sean Dague
On 09/20/2016 10:22 AM, Andrew Laski wrote:
> Excellent writeup, thanks. Some comments inline.
> 
> 
> On Tue, Sep 20, 2016, at 09:20 AM, Sean Dague wrote:
>> 
>>
>> Performance Bottlenecks
>> ---
>>
>> * scheduling issues with Ironic - (this is a bug we got through during
>>   the week after the session)
>> * live snapshots actually end up performance issue for people
>>
>> The workarounds config group was not well known, and everyone in the
>> room wished we advertised that a bit more. The solution for snapshot
>> performance is in there.
>>
>> There were also general questions about what scale cells should be
>> considered at.
>>
>> ACTION: we should make sure workarounds are advertised better
>> ACTION: we should have some document about "when cells"?
> 
> This is a difficult question to answer because "it depends." It's akin
> to asking "how many nova-api/nova-conductor processes should I run?"
> Well, what hardware is being used, how much traffic do you get, is it
> bursty or sustained, are instances created and left alone or are they
> torn down regularly, do you prune your database, what version of rabbit
> are you using, etc...
> 
> I would expect the best answer(s) to this question are going to come
> from the operators themselves. What I've seen with cellsv1 is that
> someone will decide for themselves that they should put no more than X
> computes in a cell and that information filters out to other operators.
> That provides a starting point for a new deployment to tune from.

I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically, the things to look out for that indicate you are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).
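As a concrete (entirely illustrative) example of the kind of hot-spot check
described above, here is a rough sketch that parses the output of
`rabbitmqctl list_queues name messages` and flags queues with a deep backlog.
The threshold and the queue names in the sample are assumptions for
illustration, not recommendations:

```python
THRESHOLD = 1000  # messages; purely illustrative, not a tuning guideline

def hot_queues(listing, threshold=THRESHOLD):
    """Return (name, depth) pairs for queues deeper than threshold."""
    hot = []
    for line in listing.strip().splitlines():
        parts = line.split()
        # skip rabbitmqctl banner lines and anything malformed
        if len(parts) != 2 or not parts[1].isdigit():
            continue
        name, depth = parts[0], int(parts[1])
        if depth > threshold:
            hot.append((name, depth))
    # deepest backlog first
    return sorted(hot, key=lambda q: q[1], reverse=True)

sample = """Listing queues ...
conductor 15230
scheduler 12
compute.node1 4
"""
print(hot_queues(sample))  # -> [('conductor', 15230)]
```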

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Mathias Ewald
Option 2

2016-09-20 16:07 GMT+02:00 Steven Dake (stdake) :

> Consider this a reversal of my vote for Debian deprecation.
>
> Swapnil, thanks for bringing this fact to our attention.  It was missing
> from the original vote.  I don’t know why I didn’t bring up Benedikt’s
> contributions (which were substantial) just as Paul’s were substantial for
> Oracle Linux.  I guess the project is too busy for me to keep all the
> context in my brain.  The fact that there is no debian gate really is
> orthogonal to the discussion in my mind.  With this action we would be
> turning away a contributor who has made real contributions, and send the
> signal his work doesn’t matter or fit in with the project plans.
>
> This is totally the wrong message to send.  Further others might interpret
> such an act as a “we don’t care about anyone’s contributions” which is not
> the culture we have cultivated since origination of the project.  We have
> built a culture of “you build it, we will accept it once it passes
> review”.  We want to hold on to that – it’s a really good thing for Kolla.
> There have been rumblings in this thread and on irc of the expanding
> support matrix and our (lack) of dealing with it appropriately.  I think
> there are other ways to solve that problem without a policy hammer.
>
> I added the fedora work originally along with others who have since moved
> on to other projects.  I personally have been unsuccessful at maintaining
> it, because of the change to DNF (And PTL is a 100% time commitment without
> a whole lot of time for implementation work).  That said Fedora moves too
> fast for me personally to commit to maintenance there – so my vote there
> remains unchanged.
>
> Regards
> -steve
>
>
>
>
>
> On 9/20/16, 2:34 AM, "Paul Bourke"  wrote:
>
If it's the case that neither Benedikt nor anyone else is interested in
continuing Debian, I can reverse my vote. Though it seems I'll be outvoted anyway
> ;)
>
> On 20/09/16 10:21, Swapnil Kulkarni wrote:
> > On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke wrote:
> >> -1 for deprecating Debian.
> >>
> >> As I mentioned in https://review.openstack.org/#/c/369183/, Debian
> support
> >> was added incrementally by Benedikt Trefzer as recently as June. So
> it's
> >> reasonable to believe there is at least one active user of Debian.
> >>
> >> I would like to try get some input from him on whether he's still
> using it
> >> and would be interested in helping maintain by adding gates etc.
> >>
> >> On 19/09/16 18:44, Jeffrey Zhang wrote:
> >>>
> >>> Kolla core reviewer team,
> >>>
> >>> Kolla supports multiple Linux distros now, including
> >>>
> >>> * Ubuntu
> >>> * CentOS
> >>> * RHEL
> >>> * Fedora
> >>> * Debian
> >>> * OracleLinux
> >>>
> >>> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> >>> robust gates to ensure quality.
> >>>
> >>> For Debian, Kolla doesn't have any tests for it and nobody reports any
> >>> bugs about it (i.e. nobody uses Debian as a base distro image). We (the
> >>> kolla team) also do not have enough resources to support so many Linux
> >>> distros. I prefer to deprecate Debian support now.
> >>>
> >>> Please vote:
> >>>
> >>> 1. Kolla needs to support Debian (if so, we need some people to set up
> >>> the gate and fix all the issues ASAP in the O cycle)
> >>> 2. Kolla should deprecate Debian support
> >>>
> >>> Voting is open for 7 days until September 27th, 2016.
> >>>
> >>
> >
> >
> > +1 for #2
> >
> > I agree with the reasoning from Paul; though Debian support was being
> > added incrementally by Benedikt Trefzer, it stopped in the middle and
> > all patches in the queue were abandoned [1].
> >
> > [1] https://review.openstack.org/#/q/topic:bp/build-debian,n,z
> >
> >
>
>
>
>
> 

Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Andrew Laski
Excellent writeup, thanks. Some comments inline.


On Tue, Sep 20, 2016, at 09:20 AM, Sean Dague wrote:
> 
> 
> Performance Bottlenecks
> ---
> 
> * scheduling issues with Ironic - (this is a bug we got through during
>   the week after the session)
> * live snapshots actually end up performance issue for people
> 
> The workarounds config group was not well known, and everyone in the
> room wished we advertised that a bit more. The solution for snapshot
> performance is in there.
> 
> There were also general questions about what scale cells should be
> considered at.
> 
> ACTION: we should make sure workarounds are advertised better
> ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.

>  
> 
> Policy
> --
> 
> How are you customizing policy? People were largely making policy
> changes to protect their users that didn't really understand cloud
> semantics. Turning off features that they thought would confuse them
> (like pause). The large number of VM states is confusing, and not
> clearly useful for end users, and they would like simplification.
> 
> Ideally policy could be set on a project by project admin, because
> they would like to delegate that responsibility down.
> 
> No one was using the user_id based custom policy (yay!).
> 
> There was desire that flavors could be RBAC locked down, which was
> actually being done via policy hacks right now. Providers want to
> expose some flavors (especially those with aggregate affinity) to only
> some projects.
> 
> People were excited about the policy-in-code effort; the only concern was
> that the de facto documentation of what you could change wouldn't be in
> the sample config.
> 
> ACTION: ensure there is policy config reference now that the sample
> file is empty

We have the "genpolicy" tox target which mimics the "genconfig" target.
It's similar to the old sample except guaranteed to be up to date, and
can include comments. Is that sufficient?
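For readers unfamiliar with the policy-in-code model: the defaults live as Python objects and the commented sample file is generated from them, rather than being hand-maintained. A toy illustration of that generation step (this is not oslo.policy's actual API, just a sketch of the idea):

```python
def gen_policy_sample(defaults):
    """Render registered policy defaults as a commented sample file.

    'defaults' maps rule name -> (check string, human description).
    Every rule is emitted commented out, so the generated file
    documents the defaults without overriding any of them.
    """
    lines = []
    for name, (check_str, description) in sorted(defaults.items()):
        lines.append("# %s" % description)
        lines.append('# "%s": "%s"' % (name, check_str))
        lines.append("")
    return "\n".join(lines)
```

Because every line is commented out, the generated sample is purely documentation: deployers uncomment and edit only the rules they want to change.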


> ACTION: flavor RBAC is a thing most of the room wanted, is there a
> taker on spec / implementation?

This isn't the clearest spec ever, but we have an approved backlog spec
for flavor-classes which was looking to get into this area
http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/backlog/approved/flavor-class.rst
. That would add a grouping mechanism for flavors which could then be
used for policy checks. I think having that in place would minimize the
policy explosion that might happen if policy could be attached to
individual flavors.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Dan Smith
> The current DB online data upgrade model feels *very opaque* to
> ops. They didn't realize the current model Nova was using, and didn't
> feel like it was documented anywhere.

> ACTION: document the DB data lifecycle better for operators

This is on me, so I'll take it. I've just thrown together something that
I think will help a little bit:

  https://review.openstack.org/373361

Which, instead of a blank screen and a return code, gives you something
like this:

+---+--+---+
| Migration | Total Needed | Completed |
+---+--+---+
| migrate_aggregates|  5   | 4 |
| migrate_instance_keypairs |  6   | 6 |
+---+--+---+
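The model behind that output is "run each migration in small batches until it reports nothing left to do". A toy sketch of that loop (names and signatures are illustrative, not Nova's actual code):

```python
def run_online_migrations(migrations, batch_size=50):
    """Run each online data migration in batches until complete.

    'migrations' maps a name to a callable taking a batch size and
    returning (rows still needing migration, rows migrated this call).
    Returns a name -> total-rows-migrated summary.
    """
    summary = {}
    for name, migrate in migrations.items():
        total = 0
        while True:
            found, done = migrate(batch_size)
            total += done
            if found == 0:  # nothing left for this migration
                break
        summary[name] = total
    return summary
```

Batching is the point of the design: each call touches only a bounded number of rows, so the migration can run repeatedly against a live database without long locks.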

I'll also see about writing up some docs about the expected workflow
here. Presumably that needs to go in some fancy docs and not into the
devref, right? Can anyone point me to where that should go?

--Dan



[openstack-dev] [networking-sfc] OpenFlow version to use in the OVS agent

2016-09-20 Thread Bernard Cafarelli
In the OVSSfcAgent migration to an L2 agent extension review[1], Igor
Duarte Cardoso noticed a discrepancy in the OpenFlow versions between a
comment and the actual code.
In current code [2], we have:
# We need to dump-groups according to group Id,
# which is a feature of OpenFlow1.5
full_args = ["ovs-ofctl", "-O openflow13", cmd, self.br_name

Indeed, only OpenFlow 1.5 and later support dumping a specific group
[3]. Earlier versions of OpenFlow always dump all groups.
So current code will return all groups:
$ sudo ovs-ofctl -O OpenFlow13 dump-groups br-int 1
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
 
group_id=1,type=select,bucket=actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)
 
group_id=2,type=select,bucket=actions=set_field:fa:16:3e:2d:f3:28->eth_dst,resubmit(,5)
$ sudo ovs-ofctl -O OpenFlow15 dump-groups br-int 1
OFPST_GROUP_DESC reply (OF1.5) (xid=0x2):
 
group_id=1,type=select,bucket=bucket_id:0,actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=bucket_id:1,actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)

This code behavior will not change in my extension rewrite, so this
will still have to be fixed, though I am not sure about the solution:
* We could use OpenFlow 1.5, but its support looks experimental, and
Neutron apparently only uses up to 1.4 (for the OVS firewall extension)
* The method that dumps a group could "grep" the group ID out of the
complete dump. Not as efficient, but works with OpenFlow 1.1+
* Use another mechanism to load balance across the port pairs?
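For the second option, the client-side filtering is simple enough; a rough sketch (the helper name is made up, and a real version would parse the reply rather than prefix-match):

```python
def filter_group_desc(dump_output, group_id):
    """Pick one group's description out of a full dump-groups reply.

    With OpenFlow < 1.5 the switch always returns every group, so the
    agent has to select the wanted group_id itself.
    """
    wanted = "group_id=%d," % group_id
    return [line.strip() for line in dump_output.splitlines()
            if line.strip().startswith(wanted)]
```

This keeps the agent on OpenFlow 1.3 at the cost of transferring the full group table on every lookup, which only matters if the table grows large.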

Thoughts?
In Gerrit, I kept it set to 1.5 (no impact for now, as the patch is still
marked as WIP)

[1]: https://review.openstack.org/#/c/351789
[2]: 
https://github.com/openstack/networking-sfc/blob/master/networking_sfc/services/sfc/common/ovs_ext_lib.py
[3]: http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt

-- 
Bernard Cafarelli



Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Steven Dake (stdake)
Consider this a reversal of my vote for Debian deprecation.

Swapnil, thanks for bringing this fact to our attention.  It was missing from 
the original vote.  I don’t know why I didn’t bring up Benedikt’s contributions 
(which were substantial), just as Paul’s were substantial for Oracle Linux.  I 
guess the project is too busy for me to keep all the context in my brain.  The 
fact that there is no Debian gate really is orthogonal to the discussion in my 
mind.  With this action we would be turning away a contributor who has made 
real contributions, sending the signal that his work doesn’t matter or fit in 
with the project plans.

This is totally the wrong message to send.  Further, others might interpret 
such an act as “we don’t care about anyone’s contributions”, which is not the 
culture we have cultivated since the origination of the project.  We have built 
a culture of “you build it, we will accept it once it passes review”.  We want 
to hold on to that – it’s a really good thing for Kolla.  There have been 
rumblings in this thread and on IRC about the expanding support matrix and our 
lack of dealing with it appropriately.  I think there are other ways to solve 
that problem without a policy hammer.

I added the Fedora work originally along with others who have since moved on to 
other projects.  I personally have been unsuccessful at maintaining it because 
of the change to DNF (and PTL is a 100% time commitment without a whole lot of 
time for implementation work).  That said, Fedora moves too fast for me 
personally to commit to maintenance there – so my vote there remains unchanged.

Regards
-steve
  




On 9/20/16, 2:34 AM, "Paul Bourke"  wrote:

If it's the case that neither Benedikt nor anyone else is interested in 
continuing Debian, I can reverse my vote. Though it seems I'll be outvoted anyway ;)

On 20/09/16 10:21, Swapnil Kulkarni wrote:
> On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke  
wrote:
>> -1 for deprecating Debian.
>>
>> As I mentioned in https://review.openstack.org/#/c/369183/, Debian 
support
>> was added incrementally by Benedikt Trefzer as recently as June. So it's
>> reasonable to believe there is at least one active user of Debian.
>>
>> I would like to try get some input from him on whether he's still using 
it
>> and would be interested in helping maintain by adding gates etc.
>>
>> On 19/09/16 18:44, Jeffrey Zhang wrote:
>>>
>>> Kolla core reviewer team,
>>>
>>> Kolla supports multiple Linux distros now, including
>>>
>>> * Ubuntu
>>> * CentOS
>>> * RHEL
>>> * Fedora
>>> * Debian
>>> * OracleLinux
>>>
>>> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
>>> robust gate to ensure the quality.
>>>
>>> For Debian, Kolla hasn't any test for it and nobody reports any bug
>>> about it( i.e. nobody use Debian as base distro image). We (kolla
>>> team) also do not have enough resources to support so many Linux
>>> distros. I prefer to deprecate Debian support now.
>>>
>>> Please vote:
>>>
>>> 1. Kolla needs support Debian( if so, we need some guys to set up the
>>> gate and fix all the issues ASAP in O cycle)
>>> 2. Kolla should deprecate Debian support
>>>
>>> Voting is open for 7 days until September 27th, 2016.
>>>
>>
>
>
> +1 for #2
>
> I agree with the reasoning from Paul. Though Debian support was being
> added incrementally by Benedikt Trefzer, it has stopped midway and all
> patches in the queue were abandoned [1]
>
> [1] https://review.openstack.org/#/q/topic:bp/build-debian,n,z
>
>






[openstack-dev] [keystone] gate-keystoneclient-dsvm-functional-ubuntu-xenial is broken

2016-09-20 Thread Steve Martinelli
Since September 14th the keystoneclient functional test job has been
broken. Let's be mindful of infra resources and stop rechecking the patches
there. Anyone have time to investigate this?

See patches https://review.openstack.org/#/c/369469/ or
https://review.openstack.org/#/c/371324/


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Anita Kuno

On 16-09-20 09:20 AM, Sean Dague wrote:


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-20 Thread Nikita Konovalov
Hi,

From the Sahara (and Hadoop workloads in general) use-case, the reason we used
BDD was the complete absence of any overhead on compute resource
utilization.

The results show that LVM+Local Target performs pretty close to BDD in
synthetic tests. That's a good sign for LVM: it shows that most of
the storage virtualization overhead is caused not by the LVM partitions and
drivers themselves but rather by the iSCSI daemons.

So I would still like to have the ability to attach partitions locally,
bypassing iSCSI, to guarantee 2 things:
* Make sure that LIO processes do not compete for CPU and RAM with VMs
running on the same host.
* Make sure that CPU-intensive VMs (or whatever else is running nearby) are
not blocking the storage.

I understand that BDD is not really following the trends in Cinder and
OpenStack's general approach to virtualization, so deprecating it in favor of
an LVM-based solution makes total sense to me. However, there may be other
consumers besides Sahara that rely on BDD, so it would be great to hear
their opinion as well.
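For reference, switching between the tested configurations comes down to the enabled backend sections in cinder.conf, roughly along these lines (option names as of the Mitaka era; treat this as an illustrative sketch, not exact config):

```ini
# LVM backend with the LIO iSCSI target (one of the tested combinations)
[lvm-lio]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_helper = lioadm

# Block Device Driver: no target daemon, local disks handed to VMs directly
[bdd]
volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
available_devices = /dev/sdb,/dev/sdc
```

Swapping `iscsi_helper` between `tgtadm` and `lioadm` is what distinguishes the LVM+TGT and LVM+LIO runs in the spreadsheet.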

On Mon, Sep 19, 2016 at 9:01 PM, Ivan Kolodyazhny  wrote:

> + [sahara] because they are primary consumer of the BDD.
>
> John,
> Thanks for the answer. My comments are inline.
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Mon, Sep 19, 2016 at 4:41 PM, John Griffith 
> wrote:
>
>>
>>
>> On Mon, Sep 19, 2016 at 4:43 AM, Ivan Kolodyazhny  wrote:
>>
>>> Hi team,
>>>
>>> We did some performance tests [1] for LVM and BDD drivers. All tests
>>> were executed on real hardware with OpenStack Mitaka release.
>>> Unfortunately, we didn't have enough time to execute all tests and compare
>>> results. We used Sahara/Hadoop cluster with TestDFSIO and others tests.
>>>
>>> All tests were executed on the same hardware and OpenStack release. Only
>>> difference were in cinder.conf to enable needed backend and/or target
>>> driver.
>>>
>>> Tests were executed on following configurations:
>>>
>>>- LVM +TGT target
>>>- LVM+LocalTarget: PoC based on [2] spec
>>>- LVM+LIO
>>>- Block Device Driver.
>>>
>>>
>>> Feel free to ask question if any about our testing infrastructure,
>>> environment, etc.
>>>
>>>
>>> [1] https://docs.google.com/spreadsheets/d/1qS_ClylqdbtbrVSvwbbD
>>> pdWNf2lZPR_ndtW6n54GJX0/edit?usp=sharing
>>> [2] https://review.openstack.org/#/c/247880/
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>> Thanks Ivan, so I'd like to propose we (the Cinder team) discuss a few
>> things (again):
>>
>> 1. Deprecate the BDD driver
>>  Based on the data here LVM+LIO the delta in performance ​(with the
>> exception of the Terravalidate run against 3TB) doesn't seem significant
>> enough to warrant maintaining an additional driver that has only a subset
>> of features implemented.  It would be good to understand why that
>> particular test has such a significant performance gap.
>>
> What about Local Target? Does it make sense to implement it instead of BDD?
>
>>
>> 2. Consider getting buy off to move the default implementation to use the
>> LIO driver and consider deprecating the TGT driver
>>
> +1. Let's bring this topic for the next weekly meeting.
>
>
>
>>
>> I realize this probably isn't a sufficient enough data set to make those
>> two decisions but I think it's at least enough to have a more informed
>> discussion this time.
>>
>> Thanks,
>> John​
>>
>>
>> 
>>
>>
>
>
>


-- 
Best Regards,
Nikita Konovalov
Mirantis, Inc


[openstack-dev] [Nova] Live migration IRC meeting

2016-09-20 Thread Murray, Paul (HP Cloud)
There will be a live migration meeting today – agenda and time here:
https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Paul


[openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Sean Dague
This is a bit delayed due to the release rush, finally getting back to
writing up my experiences at the Ops Meetup.

Nova Feedback Session
=

We had a double session for Feedback for Nova from Operators, raw
etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.

The median release people were on in the room was Kilo. Some were
upgrading to Liberty, many had older than Kilo clouds. Remembering
these are the larger ops environments that are engaged enough with the
community to send people to the Ops Meetup.


Performance Bottlenecks
---

* scheduling issues with Ironic - (this is a bug we got through during
  the week after the session)
* live snapshots actually end up being a performance issue for people

The workarounds config group was not well known, and everyone in the
room wished we advertised that a bit more. The solution for snapshot
performance is in there.

There were also general questions about what scale cells should be
considered at.

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

Networking
--

A number of folks in the room were still on Nova Net, and were a bit
nervous about it going away. As they are Kilo / Liberty it's still a
few upgrades before they get there, but that nervousness and concern
was definitely there.

Policy
--

How are you customizing policy? People were largely making policy
changes to protect their users who didn't really understand cloud
semantics, turning off features that they thought would confuse them
(like pause). The large number of VM states is confusing, and not
clearly useful for end users, and they would like simplification.
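To make turning off a feature concrete: in this era that usually meant overriding the relevant rules in policy.json with an admin-only check, along these lines (the exact rule names vary by release, so treat this as a sketch):

```json
{
    "os_compute_api:os-pause-server:pause": "rule:admin_api",
    "os_compute_api:os-pause-server:unpause": "rule:admin_api"
}
```

With such an override, regular users simply get a 403 when they try to pause, which is how operators were hiding the feature.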

Ideally policy could be set on a project by project admin, because
they would like to delegate that responsibility down.

No one was using the user_id based custom policy (yay!).

There was desire that flavors could be RBAC locked down, which was
actually being done via policy hacks right now. Providers want to
expose some flavors (especially those with aggregate affinity) to only
some projects.

People were excited about the policy-in-code effort; the only concern was
that the de facto documentation of what you could change wouldn't be in
the sample config.

ACTION: ensure there is policy config reference now that the sample
file is empty
ACTION: flavor RBAC is a thing most of the room wanted, is there a
taker on spec / implementation?

Upgrade
---

Everyone waits to do any optional thing until they absolutely have
to.

The Cells API db caught a bunch of people off guard because it was
optional in Kilo (with a release note), status quo in Liberty with no
release note about it existing, then forced in Mitaka. When an
optional component is out there, make sure it continues to be talked
about in release notes even when its status did not change, or people
forget.

People were on Kilo, so EC2 out of tree didn't really have any
data. About 25% of folks' users have some existing AWS tooling, and
it's good to be able to just let them use it to onboard them.

The current DB online data upgrade model feels *very opaque* to
ops. They didn't realize the current model Nova was using, and didn't
feel like it was documented anywhere.

ACTION: document the DB data lifecycle better for operators
ACTION: make sure we are cautious in rewarning people about changes
they have to make (like Cells API db)

API
---

API upgrade seemed fine for folks. The only question was the new
policy names, which was taking folks a bit of time to adjust to.

No one in the room was using custom API extensions (or at least
admitted to it when I asked).

Tracking Feedback
-

We talked a bit about tracking feedback. The silence on the ops list
mostly comes from people not using a particular feature, so they don't
really have an opinion.

Most ops do not have time to look at our specs. That is an unlikely
place to get feedback.

Additional Questions


There was an ask about VM HA. I stated that was beyond scope for Nova;
plus, Nova's view of the world is non-authoritative enough that you
wouldn't want it to do that anyway. I told folks that the NFV efforts
were working on this kind of thing beyond Nova, and people should team
up there.

There was an ask on status of Cinder Multi Attach. We gave them a bit
of status on where things were at.

ACTION: Cinder Multi Attach should maybe be a priority effort in the
next cycle.


Upgrade Pain Points
===

Raw etherpad -
https://etherpad.openstack.org/p/NYC-ops-Upgrades-Pain-points

Most people are a couple of releases back (Kilo / Liberty or even
older). The only team CDing in the room was RAX; they are now 2 to 3
months behind master.

Everyone agrees upgrades are getting better with every release.

Most are taking change windows and downtime for upgrades.

Why are upgrades taking so long?


About half way through this session I threw this powder 

[openstack-dev] [release][requirements] requirements repo stable/newton branch created

2016-09-20 Thread Doug Hellmann
I have created the stable/newton branch in the openstack/requirements
repository. It should now be possible to recheck and successfully merge
updates to the constraints settings in the tox.ini files for project
stable/newton branches.

Doug



Re: [openstack-dev] [cinder] moving driver to open source

2016-09-20 Thread Alon Marx
Erlon,

There are exceptions to this rule (e.g. libraries used for testing), but 
the general result of the thread is that all python libraries imported by 
the drivers must conform to OpenStack licensing. That includes Apache and 
several other licenses (e.g. MIT, LGPL).

Alon





On Tue, Sep 20, 2016 at 5:23 AM, Thierry Carrez  
wrote:
Alon Marx wrote:
> Thank you ALL for clearing up this issue.
>
> To sum up the discussion (not going into too many details):
> From legal stand point, if one uses python libraries they should be part
> of the community or confirming with the relevant licenses
> (http://governance.openstack.org/reference/licensing.html).

Alon,

It's not clear to me what you mean by this sentence. Do you mean 'all 
python libraries imported by the drivers must conform to OpenStack 
licensing (i.e. the Apache License)'?

Erlon



If one is
> not using python libraries (e.g. rest, command line, etc.) the
> non-python executable is considered legitimate wherever it is running.
> From deployment stand point the desire is to have any piece of code that
> is required on an openstack installation would be easily downloadable.
>
> We understand the requirements now and are working on a plan taking both
> considerations into account.

I wouldn't say that the situation is cleared up -- we still need to get
our act together and present a more uniform response to that question
across multiple projects. But AFAICT your summary accurately represents
the current Cinder team position on that matter.

I hope we'll be able to hold a cross-project workshop on that question
in Barcelona, so that we further clear up what is appropriate in-tree,
as a separate project team and as an external project, across all of
OpenStack.

--
Thierry Carrez (ttx)


 
  


Alon Marx 
Infrastructure and OpenStack Team Leader 
Cloud Storage Solutions 
IBM Systems 
Phone: (+972) 3 689 7824 | E-mail: alo...@il.ibm.com 





Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Alan Pevec
2016-09-20 13:27 GMT+02:00 Kashyap Chamarthy :
>>   (3) Do nothing, leave the bug unfixed in stable/liberty
>
> That was the unspoken third option, thanks for spelling it out. :-)

Yes, let's abandon both reviews.

Cheers,
Alan



Re: [openstack-dev] [nova] TL; DR A proposed subsystem maintainer model

2016-09-20 Thread Sylvain Bauza



Le 20/09/2016 13:07, Matthew Booth a écrit :

* +A is reserved for cores.

* A subsystem maintainer's domain need not be defined by the directory 
tree: subteams are a good initial model.


* A maintainer should only +2 code in their own domain in good faith, 
enforced socially, not technically.


* Subsystem maintainer is expected to request specific additional +2s 
for patches touching hot areas, eg rpc, db, api.

  * Hot areas are also good candidate domains for subsystem maintainers.
  * Hot area review need not cover the whole patch if that's not 
required, e.g. "I am +2 on the DB change in this patch."


This model means that code with +2 from a maintainer only requires a 
single +2 from a core.


We could implement this incrementally by defining a couple of pilot 
subsystem maintainer domains.




tl;dr: The retrospective effort should cover that concern and discuss 
about it, but I also want to share a few thoughts meanwhile.



So, I was proposing something like that formerly (2 years ago). In case you 
find some of my old thoughts in the ML archives, please know that I hold a 
different opinion now.


What changed for me during the last 2 years? I worked on some important 
blueprints for my domain, and then I had to implement some changes that 
were not only for my domain. For example, for one blueprint I needed to 
add a new RPC version, modify a Nova object, and add a DB migration.


When I implemented the above, I realized that I didn't exactly know how 
things worked. I needed to go outside my domain, look at other changes and 
review them, ping other people on IRC who were experts in their own 
domains, and iterate through a lot of new patch sets.


Then, I thought about my previous opinion. What if I had been reviewing my 
own changes? I mean, the changes were about my domain, but I wasn't able 
to correctly make sure the patches were okay for Nova as a whole. For 
example, I could have made a terrible modification when adding a new RPC 
version, one that would have hurt my domain's service if it had been 
merged at that time. I didn't really understand why Nova objects were 
useful, or why it was important to use them not only for the compute 
service but for other APIs.


Then, I understood how I was IMHO wrong. Instead of trying to get my 
changes merged, I should rather have tried to understand why I was failing 
to implement them correctly in the first patch set.
That's why I'm far more in favor of the subteam model. Instead of trying 
to reduce the number of core people approving changes, we should rather 
create an ecosystem where people with a mutual interest can help each 
other by reviewing their respective changes, and then tell the world that 
they think a patch is ready.  That doesn't mean they have thought about 
all the possible problems, so we still need 2 formal +2s, but at least 
the knowledge is shared between all subteam members (ideally if they 
review among themselves), so that the expertise grows far beyond the 
domain boundaries, and for a group of people rather than a single person.


Anyway, like Sean said, that concern is totally worth discussing as part 
of the retrospective effort, and I'm sure it will be.


-Sylvain



Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)







[openstack-dev] [networking-cisco] Poll to decide networking-cisco weekly meeting times

2016-09-20 Thread Sam Betts (sambetts)
For those interested in participating, we are setting up a weekly meeting to 
discuss the development and design of networking-cisco. This meeting will be 
held on the #openstack-networking-cisco channel. If you are interested, please 
use the link below to indicate which of the dates and times are best for 
you. We will use this poll to ensure we organise a meeting which aligns best 
for the majority of attendees.

http://doodle.com/poll/huqedr8hac679c9y
NOTE: All the times are in UTC.

Sam


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-20 Thread Avishay Traeger
On Tue, Sep 20, 2016 at 8:50 AM, Alon Marx  wrote:

> 
> From a deployment standpoint, the desire is that any piece of code
> required in an openstack installation be easily downloadable.
>

And redistributable please.

-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Web | Blog | Twitter | Google+ | Linkedin


Re: [openstack-dev] [nova] TL; DR A proposed subsystem maintainer model

2016-09-20 Thread Sean Dague
On 09/20/2016 07:07 AM, Matthew Booth wrote:
> * +A is reserved for cores.
> 
> * A subsystem maintainer's domain need not be defined by the directory
> tree: subteams are a good initial model.
> 
> * A maintainer should only +2 code in their own domain in good faith,
> enforced socially, not technically.
> 
> * Subsystem maintainer is expected to request specific additional +2s
> for patches touching hot areas, eg rpc, db, api.
>   * Hot areas are also good candidate domains for subsystem maintainers.
>   * Hot area review need not cover the whole patch if it's not required:
> I am +2 on the DB change in this patch.
> 
> This model means that code with +2 from a maintainer only requires a
> single +2 from a core.
> 
> We could implement this incrementally by defining a couple of pilot
> subsystem maintainer domains.

Before jumping to solutions, it's probably worth letting the
retrospective process continue and identify the thing folks think
solving would be the best benefit for the project for the next cycle.

I feel like in the past we often jumped to solutions before the problem
space was fully laid out, which led to long email threads and little
action, as we hadn't come together on the problem we were trying to solve.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Kashyap Chamarthy
On Tue, Sep 20, 2016 at 11:57:26AM +0100, Daniel P. Berrange wrote:
> On Tue, Sep 20, 2016 at 12:48:49PM +0200, Kashyap Chamarthy wrote:

[...]

> > The two options at hand:
> > 
> > (1) Nova backport from master (that also adds a check for the presence
> > of 'ProcessLimits' attribute which is only present in
> > oslo.concurrency>=2.6.1; and a conditional check for 'prlimit'
> > parameter in qemu_img_info() method.)
> > 
> > https://review.openstack.org/#/c/327624/ -- "virt: set address space
> > & CPU time limits when running qemu-img"
> > 
> > (2) Or bump global-requirements for 'oslo.concurrency'
> > 
> > https://review.openstack.org/#/c/337277/5 -- Bump
> > 'global-requirements' for 'oslo.concurrency' to 2.6.1
> 
> Actually we have 3 options
> 
>   (3) Do nothing, leave the bug unfixed in stable/liberty

That was the unspoken third option, thanks for spelling it out. :-)

> While this is a security bug, it is one that has existed in every
> single openstack release ever, and it is not a particularly severe
> bug. Even if we fixed it in liberty, it would still remain unfixed in
> every release before liberty. We're on the verge of releasing Newton,
> at which point liberty becomes less relevant. So I question whether it
> is worth spending more effort on dealing with this in liberty
> upstream.  Downstream vendors still have the option to do either (1)
> or (2) in their own private branches if they so desire, regardless of
> whether we fix it upstream.

Sure, I agree with what you said.  This patch started off 2-ish months
ago, at which time we weren't on the "verge of releasing Newton".  That said,
if upstream feels it's not really necessary to get this into Liberty,
then I'm fine with abandoning it and closing it.  That at least brings
this to a conclusion.

-- 
/kashyap



[openstack-dev] [nova] TL; DR A proposed subsystem maintainer model

2016-09-20 Thread Matthew Booth
* +A is reserved for cores.

* A subsystem maintainer's domain need not be defined by the directory
tree: subteams are a good initial model.

* A maintainer should only +2 code in their own domain in good faith,
enforced socially, not technically.

* Subsystem maintainer is expected to request specific additional +2s for
patches touching hot areas, eg rpc, db, api.
  * Hot areas are also good candidate domains for subsystem maintainers.
  * Hot area review need not cover the whole patch if it's not required: I
am +2 on the DB change in this patch.

This model means that code with +2 from a maintainer only requires a single
+2 from a core.

We could implement this incrementally by defining a couple of pilot
subsystem maintainer domains.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 12:48:49PM +0200, Kashyap Chamarthy wrote:
> The said patch in question fixes a CVE[x] in stable/liberty.
> 
> We currently have two options, both of them have caused an impasse with
> the Nova upstream / stable maintainers.  We've had two-ish months to
> mull over this.  I'd prefer to get this out of a limbo, & bring this to
> a logical conclusion.
> 
> The two options at hand:
> 
> (1) Nova backport from master (that also adds a check for the presence
> of 'ProcessLimits' attribute which is only present in
> oslo.concurrency>=2.6.1; and a conditional check for 'prlimit'
> parameter in qemu_img_info() method.)
> 
> https://review.openstack.org/#/c/327624/ -- "virt: set address space
> & CPU time limits when running qemu-img"
> 
> (2) Or bump global-requirements for 'oslo.concurrency'
> 
> https://review.openstack.org/#/c/337277/5 -- Bump
> 'global-requirements' for 'oslo.concurrency' to 2.6.1

Actually we have 3 options

  (3) Do nothing, leave the bug unfixed in stable/liberty

While this is a security bug, it is one that has existed in every single
openstack release ever, and it is not a particularly severe bug. Even if
we fixed it in liberty, it would still remain unfixed in every release before
liberty. We're on the verge of releasing Newton, at which point liberty
becomes less relevant. So I question whether it is worth spending more
effort on dealing with this in liberty upstream.  Downstream vendors
still have the option to do either (1) or (2) in their own private
branches if they so desire, regardless of whether we fix it upstream.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Kashyap Chamarthy
The said patch in question fixes a CVE[x] in stable/liberty.

We currently have two options, both of which have caused an impasse with
the Nova upstream / stable maintainers.  We've had two-ish months to
mull over this.  I'd prefer to get this out of limbo and bring it to
a logical conclusion.

The two options at hand:

(1) Nova backport from master (that also adds a check for the presence
of 'ProcessLimits' attribute which is only present in
oslo.concurrency>=2.6.1; and a conditional check for 'prlimit'
parameter in qemu_img_info() method.)

https://review.openstack.org/#/c/327624/ -- "virt: set address space
& CPU time limits when running qemu-img"

(2) Or bump global-requirements for 'oslo.concurrency'

https://review.openstack.org/#/c/337277/5 -- Bump
'global-requirements' for 'oslo.concurrency' to 2.6.1

Both patches have had long (and useful) discussion about their merits and
demerits in the review comments, in the context of stable backports.  If you
have some time, I'd recommend going through the comments in both reviews;
they provide all the context and the current disagreements.



[x] https://bugs.launchpad.net/nova/+bug/1449062 -- 
qemu-img calls need to be restricted by ulimit (CVE-2015-5162)
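For readers unfamiliar with the fix itself: the idea is simply to run the
qemu-img child process under address-space and CPU-time rlimits, so a crafted
image cannot make it consume unbounded resources. This is a minimal,
hypothetical sketch of that technique using Python's resource module — not the
actual Nova/oslo.concurrency code, and the limit values are illustrative:

```python
import resource
import subprocess
import sys

def run_with_limits(cmd, address_space=2 * 1024**3, cpu_time=30):
    """Run cmd with RLIMIT_AS and RLIMIT_CPU applied in the child.

    Mirrors the idea behind the prlimit support referenced above: a
    runaway process parsing a malicious image can't consume unbounded
    memory or CPU.  Limit values here are illustrative, not Nova's.
    """
    def set_limits():
        # Applied in the child between fork() and exec().
        resource.setrlimit(resource.RLIMIT_AS, (address_space, address_space))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_time, cpu_time))

    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

# Stand-in for a qemu-img invocation:
result = run_with_limits([sys.executable, "-c", "print('image info')"])
print(result.stdout.strip())
```

oslo.concurrency's processutils wraps the same idea behind its ProcessLimits
parameter, which is why the backport depends on oslo.concurrency>=2.6.1.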

-- 
/kashyap



Re: [openstack-dev] [cinder] moving driver to open source

2016-09-20 Thread Erlon Cruz
On Tue, Sep 20, 2016 at 5:23 AM, Thierry Carrez 
wrote:

> Alon Marx wrote:
> > Thank you ALL for clearing up this issue.
> >
> > To sum up the discussion (not going into too many details):
> > From a legal standpoint, if one uses python libraries they should be part
> > of the community or conform with the relevant licenses
> > (http://governance.openstack.org/reference/licensing.html).


Alon,

It's not clear to me what you mean by this sentence. Do you mean, 'every
python library included by the drivers must conform to OpenStack
licensing (i.e. the Apache License)'?

Erlon



If one is
> > not using python libraries (e.g. rest, command line, etc.) the
> > non-python executable is considered legitimate wherever it is running.
> > From a deployment standpoint, the desire is that any piece of code
> > required in an openstack installation be easily downloadable.
> >
> > We understand the requirements now and are working on a plan taking both
> > considerations into account.
>
> I wouldn't say that the situation is cleared up -- we still need to get
> our act together and present a more uniform response to that question
> across multiple projects. But AFAICT your summary accurately represents
> the current Cinder team position on that matter.
>
> I hope we'll be able to hold a cross-project workshop on that question
> in Barcelona, so that we further clear up what is appropriate in-tree,
> as a separate project team and as an external project, across all of
> OpenStack.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tc]a chance to meet all TCs for Tricircle big-tent application in Barcelona summit?

2016-09-20 Thread Thierry Carrez
joehuang wrote:
> Hello, all TCs,
> 
> Understand that you will be quite busy at the Barcelona summit; all TCs will
> be present at the summit and meet together (usually, I assume).
> So is it possible to have a chance to meet all TCs there for the Tricircle
> big-tent application https://review.openstack.org/#/c/338796/ ? A f2f talk
> to answer the concerns may help the understanding and decision.

Some TC members might not be present in Barcelona (we don't even know
who will be on the TC then), and it's generally difficult to corner us
all at the same time... But yes, you should try to grab us if you can.
We won't have a formal meeting in Barcelona (beyond the Board+TC+UC
meeting on the Monday).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [infra]project description not updated in github repository

2016-09-20 Thread Andreas Jaeger

On 09/20/2016 09:39 AM, joehuang wrote:

Hello,

The project description has not been updated in the github repository
even though the description was updated in gerrit/projects.yaml

In following patch, the Tricircle project description has been updated
from "Tricircle is a project for OpenStack Multiple Site Deployment
solution" to "Tricircle is to provide networking automation across
Neutron.",
https://review.openstack.org/#/c/367114/6/gerrit/projects.yaml

But the description in the github repository below the code tab is still
"Tricircle is a project for OpenStack cascading solution. ", the old
description.

In the same patch, Trio2o project description is "Trio2o is to provide
APIs gateway for multiple OpenStack clouds to act as a single OpenStack
cloud."

And this information was presented in the github repository below the
code tab: https://github.com/openstack/trio2o

Is there any other way to update the project description in github
repository?


github is a mirror of our git repositories and there are some API 
limitations; we currently cannot update the description on github when it changes.


http://git.openstack.org/cgit/openstack/tricircle is updated,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [api] API Usability study at Barcelona Summit

2016-09-20 Thread Chris Dent

On Mon, 19 Sep 2016, Kruithof Jr, Pieter wrote:


I was planning to run a usability study on behalf of the API working
group at the Barcelona summit.     The plan is to have operators
complete a set of common tasks, such as adjusting quotas, using the
projects APIs.


Since you're intending to use python-openstackclient as the client
in these tests, it sounds like you'll be doing a usability study of
that client. This is a great thing to do, but you should be aware
that you'll probably want a contact from the developers of that client
as well as, or instead of, someone from the API-WG.

The API-WG, of late, has mostly been concerned with the correctness
of the HTTP interactions that the OpenStack APIs allow. Since the
openstackclient abstracts these interactions the kinds of data that
we'll find most use from tests using the client include:

* are the resources made available by the APIs (and thus the actions
  that can be performed by the client) the right ones
* are the modes of filtering, sorting, and limiting adequate
* are there inconsistencies in the service APIs that cause there to
  be inconsistencies in the behaviour of the client

I'm sure there are plenty more, that's just off the top of my head.

Chris, 


Were you still able to act as my contact?


Yes. Unfortunately, of the three api-wg cores, I'm the only one who
has been able to make plans to go to summit.



I’ve created an etherpad to begin planning the study:

https://etherpad.openstack.org/p/osux-api-oct2016

Thanks,


Thank you.


--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Paul Bourke
If it's the case that Benedikt or no one else is interested in continuing 
Debian, I can reverse my vote. Though it seems I'll be outvoted anyway ;)


On 20/09/16 10:21, Swapnil Kulkarni wrote:

On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke  wrote:

-1 for deprecating Debian.

As I mentioned in https://review.openstack.org/#/c/369183/, Debian support
was added incrementally by Benedikt Trefzer as recently as June. So it's
reasonable to believe there is at least one active user of Debian.

I would like to try get some input from him on whether he's still using it
and would be interested in helping maintain by adding gates etc.

On 19/09/16 18:44, Jeffrey Zhang wrote:


Kolla core reviewer team,

Kolla supports multiple Linux distros now, including

* Ubuntu
* CentOS
* RHEL
* Fedora
* Debian
* OracleLinux

But only Ubuntu, CentOS, and OracleLinux are widely used and we have
robust gate to ensure the quality.

For Debian, Kolla doesn't have any tests for it and nobody reports any bugs
about it (i.e. nobody uses Debian as a base distro image). We (the kolla
team) also do not have enough resources to support so many Linux
distros. I prefer to deprecate Debian support now.

Please vote:

1. Kolla needs to support Debian (if so, we need some people to set up the
gate and fix all the issues ASAP in the O cycle)
2. Kolla should deprecate Debian support

Voting is open for 7 days, until September 27th, 2016.






+1 for #2

I agree with the reasoning from Paul; though Debian support was being
added incrementally by Benedikt Trefzer, it stopped midway and all
patches in the queue were abandoned [1]

[1] https://review.openstack.org/#/q/topic:bp/build-debian,n,z






[openstack-dev] [nova]What is definition of critical bugfixes?

2016-09-20 Thread Rikimaru Honjo

Hi All,

I requested a review of my patch in the last weekly Nova team meeting.[1]
In this meeting, Mr. Dan Smith said the following about my patch:

* This patch is too large to merge in rc2.[2]
* Fix after Newton and backport to newton and mitaka.[3]

In my understanding, we can backport only critical bugfixes and security patches
in Phase II.[4]
And stable/mitaka moves to Phase II after Newton.

What is the definition of a critical bugfix?
And can I backport my patch to mitaka after Newton?

[1]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-178
[2]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-194
[3]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-185
[4]http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases

Best regards,
--
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntts.co.jp





Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Swapnil Kulkarni
On Tue, Sep 20, 2016 at 2:38 PM, Paul Bourke  wrote:
> -1 for deprecating Debian.
>
> As I mentioned in https://review.openstack.org/#/c/369183/, Debian support
> was added incrementally by Benedikt Trefzer as recently as June. So it's
> reasonable to believe there is at least one active user of Debian.
>
> I would like to try get some input from him on whether he's still using it
> and would be interested in helping maintain by adding gates etc.
>
> On 19/09/16 18:44, Jeffrey Zhang wrote:
>>
>> Kolla core reviewer team,
>>
>> Kolla supports multiple Linux distros now, including
>>
>> * Ubuntu
>> * CentOS
>> * RHEL
>> * Fedora
>> * Debian
>> * OracleLinux
>>
>> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
>> robust gate to ensure the quality.
>>
>> For Debian, Kolla doesn't have any tests for it and nobody reports any bugs
>> about it (i.e. nobody uses Debian as a base distro image). We (the kolla
>> team) also do not have enough resources to support so many Linux
>> distros. I prefer to deprecate Debian support now.
>>
>> Please vote:
>>
>> 1. Kolla needs to support Debian (if so, we need some people to set up the
>> gate and fix all the issues ASAP in the O cycle)
>> 2. Kolla should deprecate Debian support
>>
>> Voting is open for 7 days, until September 27th, 2016.
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1 for #2

I agree with the reasoning from Paul; though Debian support was being
added incrementally by Benedikt Trefzer, it stopped midway and all
patches in the queue were abandoned [1]

[1] https://review.openstack.org/#/q/topic:bp/build-debian,n,z



Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-20 Thread Paul Bourke

-1 for deprecating Debian.

As I mentioned in https://review.openstack.org/#/c/369183/, Debian 
support was added incrementally by Benedikt Trefzer as recently as June. 
So it's reasonable to believe there is at least one active user of Debian.


I would like to try get some input from him on whether he's still using 
it and would be interested in helping maintain by adding gates etc.


On 19/09/16 18:44, Jeffrey Zhang wrote:

Kolla core reviewer team,

Kolla supports multiple Linux distros now, including

* Ubuntu
* CentOS
* RHEL
* Fedora
* Debian
* OracleLinux

But only Ubuntu, CentOS, and OracleLinux are widely used and we have
robust gate to ensure the quality.

For Debian, Kolla doesn't have any tests for it and nobody reports any bugs
about it (i.e. nobody uses Debian as a base distro image). We (the kolla
team) also do not have enough resources to support so many Linux
distros. I prefer to deprecate Debian support now.

Please vote:

1. Kolla needs to support Debian (if so, we need some people to set up the
gate and fix all the issues ASAP in the O cycle)
2. Kolla should deprecate Debian support

Voting is open for 7 days, until September 27th, 2016.





Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-20 Thread Paul Bourke

+1

On 19/09/16 18:40, Jeffrey Zhang wrote:

Kolla core reviewer team,

Kolla supports multiple Linux distros now, including

* Ubuntu
* CentOS
* RHEL
* Fedora
* Debian
* OracleLinux

But only Ubuntu, CentOS, and OracleLinux are widely used and we have
robust gate to ensure the quality.

For Fedora, Kolla doesn't have any tests for it and nobody reports any bugs
about it (i.e. nobody uses Fedora as a base distro image). We (the kolla
team) also do not have enough resources to support so many Linux
distros. I prefer to deprecate Fedora support now.  This was discussed in
the past but was inconclusive[0].

Please vote:

1. Kolla needs to support Fedora (if so, we need some people to set up the
gate and fix all the issues ASAP in the O cycle)
2. Kolla should deprecate Fedora support

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html






Re: [openstack-dev] [Neutron] Adding ihrachys to the neutron-drivers team

2016-09-20 Thread Miguel Angel Ajo Pelayo
Congratulations Ihar! Well deserved through hard work! :)

On Mon, Sep 19, 2016 at 8:03 PM, Brian Haley  wrote:
> Congrats Ihar!
>
> -Brian
>
>
> On 09/17/2016 12:40 PM, Armando M. wrote:
>>
>> Hi folks,
>>
>> I would like to propose Ihar to become a member of the Neutron drivers
>> team [1].
>>
>> Ihar's wide knowledge of the Neutron codebase, and his longstanding duties
>> as
>> stable core, downstream package whisperer, release and oslo liaison (I am
>> sure I
>> am forgetting some other capacity he is in) is going to put him at great
>> comfort
>> in the newly appointed role, and help him grow and become wise even
>> further.
>>
>> Even though we have not been meeting regularly lately we will resume our
>> Thursday meetings soon [2], and having Ihar onboard by then will be highly
>> beneficial.
>>
>> Please, join me in welcoming Ihar to the team.
>>
>> Cheers,
>> Armando
>>
>> [1]
>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team
>>
>> 
>> [2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
>> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [horizon] Browser Support

2016-09-20 Thread Rob Cresswell
Agreed. I've created a bug here: https://bugs.launchpad.net/horizon/+bug/1625514

Rob

On 20 September 2016 at 09:40, amot...@gmail.com wrote:
I think it is time to merge the wiki into the horizon devref.
It is not a good situation that we have two contents.

2016-09-20 17:01 GMT+09:00 Rob Cresswell :
> As you've noticed, this doc isn't updated currently. The current browsers
> supported are listed here:
> http://docs.openstack.org/developer/horizon/faq.html (Stable Firefox and
> Chrome, and IE 11+)
>
> Rob
>
> On 20 September 2016 at 08:23, Shinobu Kinjo  wrote:
>>
>> There are ambiguous definitions like:
>>
>> IE 11 Good?
>>
>> Can we make this description as specific as possible?
>>
>>
>> > On Tue, Sep 20, 2016 at 4:15 PM, Radomir Dopieralski  wrote:
>> > > On Tue, Sep 20, 2016 at 3:53 AM, Jason Rist  wrote:
>> >>
>> >> This page hasn't been updated for a while - does anyone know the
>> >> latest?
>> >>
>> >> https://wiki.openstack.org/wiki/Horizon/BrowserSupport
>> >>
>> >
>> > As far as I know, there were no changes to the officially supported
>> > browser
>> > versions.
>> >
>> > The support for very old browsers (like msie 6) is going to deteriorate
>> > slowly, as the libraries that we use for styles and js drop support for
>> > them.
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




Re: [openstack-dev] [Keystone] Listing Domain roles (or retrieving them by name)

2016-09-20 Thread Johannes Grassler

Hello,

On 09/20/2016 10:15 AM, Johannes Grassler wrote:

is there a canonical way to either

* list roles in a given domain
* or retrieve a role from a given domain by name (preferred)


Looks like there is a way:

  osc_lib.utils.find_resource(admin_client.roles, role_name, domain_id=domain_id)

returns a role object for the role I'm looking for.

(admin_client is the Keystone client's RoleManager, role_name contains the 
role's name).
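For anyone curious what that call is doing under the hood, the name-plus-domain
disambiguation can be sketched in plain Python. This is a simplified stand-in
using hypothetical role records, not the real keystoneclient manager objects:

```python
from types import SimpleNamespace

def find_role(roles, name, domain_id):
    """Return the single role matching name within a domain.

    Approximates what osc_lib.utils.find_resource(manager, name,
    domain_id=...) does in the common case: filter candidates on
    name, then disambiguate identically-named roles by domain.
    """
    matches = [r for r in roles
               if r.name == name and r.domain_id == domain_id]
    if len(matches) != 1:
        raise LookupError("expected exactly one role named %r in domain %r, "
                          "found %d" % (name, domain_id, len(matches)))
    return matches[0]

# Hypothetical role records: the same role name in two domains.
roles = [
    SimpleNamespace(id="r1", name="member", domain_id="default"),
    SimpleNamespace(id="r2", name="member", domain_id="d-finance"),
]
print(find_role(roles, "member", "d-finance").id)  # -> r2
```

The domain_id filter is what makes this work when the same role name exists in
several domains, which is exactly the domain-roles case asked about above.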

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany



[openstack-dev] [ceilometer] Release critical fixes for Newton

2016-09-20 Thread Julien Danjou
Hi,

We have two critical bugs that need to be reviewed and fixed for
Ceilometer:

  Move oslo.db to hard requirements list
  https://review.openstack.org/#/c/372148/

  Remove left over from old ceilometer-api binary
  https://review.openstack.org/#/c/372146/

Can we have reviews, and then I'll cherry-pick to stable/newton? We'll
have to do an RC2 then, I guess.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [cinder] moving driver to open source

2016-09-20 Thread Thierry Carrez
Alon Marx wrote:
> Thank you ALL for clearing up this issue.
> 
> To sum up the discussion (not going into too many details):
> From a legal standpoint, if one uses python libraries they should be part
> of the community or conform with the relevant licenses
> (http://governance.openstack.org/reference/licensing.html). If one is
> not using python libraries (e.g. rest, command line, etc.) the
> non-python executable is considered legitimate wherever it is running.
> From a deployment standpoint, the desire is that any piece of code
> required in an openstack installation be easily downloadable.
>  
> We understand the requirements now and are working on a plan taking both
> considerations into account.

I wouldn't say that the situation is cleared up -- we still need to get
our act together and present a more uniform response to that question
across multiple projects. But AFAICT your summary accurately represents
the current Cinder team position on that matter.

I hope we'll be able to hold a cross-project workshop on that question
in Barcelona, so that we further clear up what is appropriate in-tree,
as a separate project team and as an external project, across all of
OpenStack.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Thierry Carrez
Steve Martinelli wrote:
> I think bundling the puppet, ansible and oslo releases together would
> cut down on a considerable amount of traffic. Bundling or grouping new
> releases may not be the most accurate, but if it encourages the right
> folks to read the content instead of brushing it off, I think thats
> worth while.

Yeah, I agree that the current "style" of announcing actively trains
people to ignore announcements. The trick is that it's non-trivial to
group announcements (as they are automatically sent as a post-job for
each tag).

Solutions include:

* A daily job that collects the day's releases and batches them into a
single announcement (the issue being that you aren't notified as soon as
a release is available, and the email ends up being extremely long)

* A specific -release ML where all announcements are posted, with a daily
job to generate an email (one to -announce for services, one to -dev for
libraries) that links to them without expanding them (the issue being
that you lose the natural thread in -dev to react to a broken oslo release)

* Somehow generate the email from the openstack/release request rather
than from the tags

...
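The first option (a daily batching job) could look roughly like this. This is
only a sketch under made-up assumptions: the tag list and the grouping-by-name
heuristic are invented here, and a real job would read the day's tag events
from the release pipeline rather than a hard-coded list:

```python
from collections import defaultdict

# Hypothetical tag events collected over one day: (repo, version) pairs.
# In reality these would come from the tag post-jobs, not a literal list.
tags = [
    ("openstack/oslo.db", "4.13.1"),
    ("openstack/puppet-nova", "9.2.0"),
    ("openstack/oslo.config", "3.17.0"),
]

def build_digest(tags):
    """Group the day's releases by project family into one digest body."""
    groups = defaultdict(list)
    for repo, version in tags:
        name = repo.split("/", 1)[1]
        # Crude family heuristic: "oslo.db" -> "oslo", "puppet-nova" -> "puppet".
        family = name.split(".")[0].split("-")[0]
        groups[family].append("%s %s" % (name, version))
    lines = []
    for family in sorted(groups):
        lines.append("[%s]" % family)
        lines.extend("  " + entry for entry in sorted(groups[family]))
    return "\n".join(lines)

print(build_digest(tags))
# -> [oslo]
#      oslo.config 3.17.0
#      oslo.db 4.13.1
#    [puppet]
#      puppet-nova 9.2.0
```

This shows both trade-offs mentioned above: one email per day instead of one
per tag, at the cost of delayed notification and a potentially long body.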

-- 
Thierry Carrez (ttx)



[openstack-dev] [Keystone] Listing Domain roles (or retrieving them by name)

2016-09-20 Thread Johannes Grassler

Hello,

is there a canonical way to either

* list roles in a given domain
* or retrieve a role from a given domain by name (preferred)

keystoneclient.v3.roles.RoleManager.list() does not appear to do the trick. 
While it takes a
`domain` argument, it only returns roles with a domain_id=None attribute but 
none of the roles
in the domain I specified. Also, it appears to be deprecated if this comment[0] 
in
python-openstackclient is anything to go by.

As for why I want to do this: I attempt to create the role in question and
catch the Conflict exception that is raised if a role with that name already
exists. To use that existing role I need its UUID, though (or a role object,
as keystoneclient.v3.roles.RoleManager.create() would have returned had it
succeeded). The name alone does not help, since I cannot pass it to
keystoneclient.v3.roles.RoleManager.grant(). Come to think of it, a way to
grant roles on a domain by name would also solve the problem...
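The create-then-catch-Conflict pattern described above can be sketched like
this. Everything here is a made-up in-memory stand-in for illustration only
(the Conflict class stands in for keystoneauth1's HTTP 409 exception, and
StubRoleManager for keystoneclient.v3.roles.RoleManager); it shows the
fallback lookup by name within a domain that the real client code needs:

```python
import uuid

class Conflict(Exception):
    """Stand-in for the HTTP 409 Conflict raised on duplicate role names."""

class Role:
    def __init__(self, name, domain_id=None):
        self.id = uuid.uuid4().hex
        self.name = name
        self.domain_id = domain_id

class StubRoleManager:
    """Stand-in for keystoneclient.v3.roles.RoleManager (illustration only)."""
    def __init__(self):
        self._roles = []

    def create(self, name, domain=None):
        # Role names are unique per domain; duplicates raise Conflict.
        if any(r.name == name and r.domain_id == domain for r in self._roles):
            raise Conflict("role %r already exists" % name)
        role = Role(name, domain_id=domain)
        self._roles.append(role)
        return role

    def list(self, domain=None):
        return [r for r in self._roles if r.domain_id == domain]

def create_or_get_role(roles, name, domain=None):
    """Create a role; on Conflict, look the existing one up by name."""
    try:
        return roles.create(name, domain=domain)
    except Conflict:
        matches = [r for r in roles.list(domain=domain) if r.name == name]
        return matches[0]

roles = StubRoleManager()
first = create_or_get_role(roles, "auditor", domain="dom-a")
second = create_or_get_role(roles, "auditor", domain="dom-a")
print(first.id == second.id)  # -> True: both calls yield the same role
```

Either way, what the caller ends up with is a role object carrying the UUID
that grant() needs, which is exactly the gap described above.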

Cheers,

Johannes

[0] 
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/identity/v3/role.py#L241

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany



Re: [openstack-dev] [horizon] Browser Support

2016-09-20 Thread Rob Cresswell
As you've noticed, this doc isn't kept up to date. The currently supported
browsers are listed here: http://docs.openstack.org/developer/horizon/faq.html
(stable Firefox and Chrome, and IE 11+)

Rob

On 20 September 2016 at 08:23, Shinobu Kinjo wrote:
There are ambiguous definitions like:

IE 11 Good?

Can we make this description as specific as possible?


On Tue, Sep 20, 2016 at 4:15 PM, Radomir Dopieralski wrote:
> On Tue, Sep 20, 2016 at 3:53 AM, Jason Rist wrote:
>>
>> This page hasn't been updated for a while - does anyone know the latest?
>>
>> https://wiki.openstack.org/wiki/Horizon/BrowserSupport
>>
>
> As far as I know, there were no changes to the officially supported browser
> versions.
>
> The support for very old browsers (like msie 6) is going to deteriorate
> slowly, as the libraries that we use for styles and js drop support for
> them.
>
>



Re: [openstack-dev] [horizon] Browser Support

2016-09-20 Thread Shinobu Kinjo
There are ambiguous definitions like:

IE 11 Good?

Can we make this description as specific as possible?


On Tue, Sep 20, 2016 at 4:15 PM, Radomir Dopieralski wrote:
> On Tue, Sep 20, 2016 at 3:53 AM, Jason Rist  wrote:
>>
>> This page hasn't been updated for a while - does anyone know the latest?
>>
>> https://wiki.openstack.org/wiki/Horizon/BrowserSupport
>>
>
> As far as I know, there were no changes to the officially supported browser
> versions.
>
> The support for very old browsers (like msie 6) is going to deteriorate
> slowly, as the libraries that we use for styles and js drop support for
> them.
>
>


Re: [openstack-dev] [horizon] Browser Support

2016-09-20 Thread Radomir Dopieralski
On Tue, Sep 20, 2016 at 3:53 AM, Jason Rist  wrote:

> This page hasn't been updated for a while - does anyone know the latest?
>
> https://wiki.openstack.org/wiki/Horizon/BrowserSupport
>
>
As far as I know, there were no changes to the officially supported browser
versions.

The support for very old browsers (like msie 6) is going to deteriorate
slowly, as the libraries that we use for styles and js drop support for
them.


Re: [openstack-dev] [release][mistral] mistral Newton RC1 available

2016-09-20 Thread Renat Akhmerov
Yes, we came up with a solution for this issue. Should be solved shortly.

Renat Akhmerov
@Nokia

> On 20 Sep 2016, at 00:28, Lingxian Kong  wrote:
> 
> Thanks Dougal for reporting that, we are working on the issue.
> 
> Cheers,
> Lingxian Kong (Larry)
> 
> 
> On Fri, Sep 16, 2016 at 11:27 PM, Dougal Matthews  wrote:
>> 
>> 
>> On 16 September 2016 at 00:06, Doug Hellmann  wrote:
>>> 
>>> Hello everyone,
>>> 
>>> The release candidate for mistral for the end of the Newton cycle
>>> is available!  You can find the RC1 source code tarballs at:
>>> 
>>> https://tarballs.openstack.org/mistral/mistral-3.0.0.0rc1.tar.gz
>>> 
>>> https://tarballs.openstack.org/mistral-dashboard/mistral-dashboard-3.0.0.0rc1.tar.gz
>>> 
>>> Unless release-critical issues are found that warrant a release
>>> candidate respin, this RC1 will be formally released as the final
>>> Newton release on 6 October. You are therefore strongly
>>> encouraged to test and validate this tarball!
>> 
>> 
>> I believe we (TripleO) are hitting a release critical issue here:
>> https://bugs.launchpad.net/mistral/+bug/1624284
>> 
>> I have tagged the issue.
>> 
>> 
>>> Alternatively, you can directly test the stable/newton release
>>> branch at:
>>> 
>>> http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/newton
>>> 
>>> If you find an issue that could be considered release-critical,
>>> please file it at:
>>> 
>>> https://bugs.launchpad.net/mistral/+filebug
>>> 
>>> and tag it *newton-rc-potential* to bring it to the mistral release
>>> crew's attention.
>>> 
>>> Note that the "master" branch of mistral is now open for Ocata
>>> development, and feature freeze restrictions no longer apply there!
>>> 
>>> Thanks,
>>> Doug
>>> 


Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-20 Thread Steven Dake (stdake)
FWIW Swapnil, I think having a solid Fedora implementation would be fantastic
to help manage the transition to CentOS 8 whenever that happens. At this point
nobody has stepped up to do the work. We can always revisit any policy or vote
in the future if the environment changes (i.e. you are freed up to work on
making Fedora work well). However, no majority has been reached yet, so it's
premature to call this vote closed.

Regards
-steve


From: Swapnil Kulkarni 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, September 19, 2016 at 10:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support


On Sep 19, 2016 11:11 PM, "Jeffrey Zhang" wrote:
>
> Kolla core reviewer team,
>
> Kolla supports multiple Linux distros now, including
>
> * Ubuntu
> * CentOS
> * RHEL
> * Fedora
> * Debian
> * OracleLinux
>
> But only Ubuntu, CentOS, and OracleLinux are widely used, and we have
> robust gates to ensure their quality.
>
> For Fedora, Kolla has no tests and nobody reports any bugs about it
> (i.e. nobody uses Fedora as a base distro image). We (the Kolla team)
> also do not have enough resources to support so many Linux distros. I
> prefer to deprecate Fedora support now. This was discussed in the past
> but was inconclusive[0].
>
> Please vote:
>
> 1. Kolla needs to support Fedora (if so, we need volunteers to set up
> the gate and fix all the issues ASAP in the O cycle)
> 2. Kolla should deprecate fedora support
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>

Unless there are specific objections beyond the issues in #1, I wish to work
on Fedora support in Kolla and request that we revisit the deprecation vote
after the ocata-2 milestone.

Best Regards,
Swapnil (coolsvap)


[openstack-dev] [nova][bugs] Nova Bugs Team Meeting this Tuesday at 1800 UTC

2016-09-20 Thread Augustina Ragwitz
The next Nova Bugs Team meeting will be Tuesday, September 20 at 1800
UTC in #openstack-meeting-4

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160920T18

Feel free to add to the meeting agenda: 
https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam

-- 
Augustina Ragwitz
Señora Software Engineer
---
Ask me about contributing to OpenStack Nova!
https://wiki.openstack.org/wiki/Nova/Mentoring
---
email: aragwitz+n...@pobox.com
irc: auggy
