Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-05 Thread Oleg Bondarev
Big +1

On Thu, Mar 5, 2015 at 10:47 AM, Gary Kotton  wrote:

> +100.000
>
> Very long overdue!
>
> On 3/4/15, 10:23 PM, "Maru Newby"  wrote:
>
> >+1 from me, Ihar has been doing great work and it will be great to have
> >him finally able to merge!
> >
> >> On Mar 4, 2015, at 11:42 AM, Kyle Mestery  wrote:
> >>
> >> I'd like to propose that we add Ihar Hrachyshka to the Neutron core
> >>reviewer team. Ihar has been doing a great job reviewing in Neutron as
> >>evidence by his stats [1]. Ihar is the Oslo liaison for Neutron, he's
> >>been doing a great job keeping Neutron current there. He's already a
> >>critical reviewer for all the Neutron repositories. In addition, he's a
> >>stable maintainer. Ihar makes himself available in IRC, and has done a
> >>great job working with the entire Neutron team. His reviews are
> >>thoughtful and he really takes time to work with code submitters to
> >>ensure his feedback is addressed.
> >>
> >> I'd also like to again remind everyone that reviewing code is a
> >>responsibility, in Neutron the same as other projects. And core
> >>reviewers are especially beholden to this responsibility. I'd also like
> >>to point out and reinforce that +1/-1 reviews are super useful, and I
> >>encourage everyone to continue reviewing code across Neutron as well as
> >>the other OpenStack projects, regardless of your status as a core
> >>reviewer on these projects.
> >>
> >> Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar
> >>to the core reviewer team.
> >>
> >> Thanks!
> >> Kyle
> >>
> >> [1]
> >>
> >>http://stackalytics.com/report/contribution/neutron-group/90
> >>
> >>__________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Sylvain Bauza


On 05/03/2015 08:54, Rui Chen wrote:
We will face the same issue in the multiple nova-scheduler process case,
like Sylvain says, right?


Two processes/workers can actually consume two distinct resources on 
the same HostState.




No. The problem I mentioned was related to having multiple threads 
accessing the same object in memory.
By running multiple schedulers on different hosts and listening to the 
same RPC topic, it would work - with some caveats about race conditions 
too, but that's unrelated to your proposal -


If you want to run multiple nova-scheduler services, then just fire them 
up on separate machines (that's HA, eh) and that will work.


-Sylvain





2015-03-05 13:26 GMT+08:00 Alex Xu :


Rui, you still can run multiple nova-scheduler process now.


2015-03-05 10:55 GMT+08:00 Rui Chen :

Looks like it's a complicated problem, and nova-scheduler
can't scale-out horizontally in active/active mode.

Maybe we should illustrate the problem in the HA docs.


http://docs.openstack.org/high-availability-guide/content/_schedulers.html

Thanks for everybody's attention.

2015-03-05 5:38 GMT+08:00 Mike Bayer :



Attila Fazekas  wrote:

> Hi,
>
> I wonder what is the planned future of the scheduling.
>
> The scheduler does a lot of queries touching a high number of fields,
> which is CPU expensive when you are using sqlalchemy-orm.
> Has anyone tried to switch those operations to sqlalchemy-core?
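
For readers less familiar with the two layers being discussed, here is a
minimal, self-contained sketch contrasting the ORM and Core styles of the
same query. The ComputeNode model and data are hypothetical, for
illustration only, not Nova's schema:

    from sqlalchemy import Column, Integer, String, create_engine, select
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class ComputeNode(Base):  # hypothetical model, for illustration only
        __tablename__ = 'compute_nodes'
        id = Column(Integer, primary_key=True)
        host = Column(String)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = Session(engine)
    session.add(ComputeNode(id=1, host='node-1'))
    session.commit()

    # ORM style: a Query object is constructed on every call; that
    # per-call construction is the CPU overhead the question refers to.
    nodes = session.query(ComputeNode).filter(
        ComputeNode.host == 'node-1').all()

    # Core style: a select() against the table directly, skipping the
    # ORM's query-construction and object-hydration machinery.
    rows = session.execute(
        select([ComputeNode.__table__])
        .where(ComputeNode.__table__.c.host == 'node-1')).fetchall()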

An upcoming feature in SQLAlchemy 1.0 will remove the vast
majority of CPU
overhead from the query side of SQLAlchemy ORM by caching
all the work done
up until the SQL is emitted, including all the function
overhead of building
up the Query object, producing a core select() object
internally from the
Query, working out a large part of the object fetch
strategies, and finally
the string compilation of the select() into a string as
well as organizing
the typing information for result columns. With a query
that is constructed
using the “Baked” feature, all of these steps are cached
in memory and held
persistently; the same query can then be re-used at which
point all of these
steps are skipped. The system produces the cache key based
on the in-place
construction of the Query using lambdas so no major
changes to code
structure are needed; just the way the Query modifications
are performed
needs to be preceded with “lambda q:”, essentially.
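
As an illustration of the pattern just described, a short sketch against
SQLAlchemy 1.0's sqlalchemy.ext.baked extension, reusing the hypothetical
ComputeNode model and session from the sketch above:

    from sqlalchemy import bindparam
    from sqlalchemy.ext import baked

    bakery = baked.bakery()

    def get_node(session, node_id):
        # The construction steps are expressed as lambdas; their work is
        # cached after the first call, so later calls skip straight to
        # the cached, already-compiled SQL.
        bq = bakery(lambda s: s.query(ComputeNode))
        bq += lambda q: q.filter(ComputeNode.id == bindparam('node_id'))
        return bq(session).params(node_id=node_id).first()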

With this approach, the traditional session.query(Model)
approach can go
from start to SQL being emitted with an order of magnitude
less function
calls. On the fetch side, fetching individual columns
instead of full
entities has always been an option with ORM and is about
the same speed as a
Core fetch of rows. So using ORM with minimal changes to
existing ORM code
you can get performance even better than you’d get using
Core directly,
since caching of the string compilation is also added.

On the persist side, the new bulk insert / update features
provide a bridge
from ORM-mapped objects to bulk inserts/updates without
any unit of work
sorting going on. ORM mapped objects are still more
expensive to use in that
instantiation and state change is still more expensive,
but bulk
insert/update accepts dictionaries as well, which again is
competitive with
a straight Core insert.
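
Again for illustration, a sketch of the bulk calls described above,
assuming SQLAlchemy 1.0's Session.bulk_insert_mappings /
bulk_update_mappings API and the hypothetical ComputeNode model from the
earlier sketch:

    # Plain dictionaries go straight to an INSERT, with no unit-of-work
    # sorting and no ORM object instantiation.
    rows = [{'id': i, 'host': 'node-%d' % i} for i in range(2, 10)]
    session.bulk_insert_mappings(ComputeNode, rows)

    # Updates are matched on the primary key present in each dictionary.
    session.bulk_update_mappings(
        ComputeNode, [{'id': 2, 'host': 'node-2-renamed'}])
    session.commit()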

Both of these features are completed in the master branch,
the “baked query”
feature just needs documentation, and I’m basically two or
three tickets
away from beta releases of 1.0. The “Baked” feature itself
lives as an
extension and if we really wanted, I could backport it
into oslo.db as well
so that it works against 0.9.

So I’d ask that folks please hold off on any kind of
migration from ORM to
Core for performance reasons. I’ve spent the past several
months adding
features directly to SQLAlchemy that allow an ORM-based
app to have routes
to operations that perform just as fast as that of Core
without a rewrite of
code.

> The scheduler doe

[openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Salvatore Orlando
I was hoping to push code today for [1], but unfortunately I got caught by
something else during this week and I'll miss the code proposal deadline.

I would like to apply for a FFE; however, since I've experienced FFEs in
previous release cycles, please treat this request as best-effort if there
are other work items with higher priority applying for a FFE.

Salvatore

[1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-05 Thread Thierry Carrez
Flavio Percoco wrote:
> [...]
> I personally don't think adding new cores without cleaning up that
> list is something healthy for our community, which is what we're
> trying to improve here. Therefore I'm still -2-W on adding new folks
> without removing non-active core members.

It's also *extremely* easy to add back long-inactive members if they
happen to come back.

Again, core is not a badge, it corresponds to a set of duties. If some
people don't fulfill those duties anymore, it's better to remove them than
to keep them around: it maintains the standards of expected review
activity for core reviewers at a high level.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-05 Thread Mathieu Rohon
Hi,

I'm fine with C), and 1600 UTC would be better suited to the EU time zone :)

However, I agree that the neutron-vpnaas meetings were mainly focused on
maintaining the current IPSec implementation, by managing the split out,
adding StrongSwan support and adding functional tests.
Maybe we will get a broader audience once we speak about adding new
use cases such as edge-vpn.
Edge-vpn use cases overlap with the Telco WG VPN use case [1]. Maybe those
edge-vpn discussions should occur during the Telco WG meeting?

[1]
https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#VPN_Instantiation

On Thu, Mar 5, 2015 at 3:02 AM, Sridhar Ramaswamy  wrote:

> Hi Paul.
>
> I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630 UTC (or
> later).
>
> The meetings so far were indeed quite useful. I guess the current busy Kilo
> cycle is also contributing to the low turnout. As we pick up things going
> forward this forum will be quite useful to discuss edge-vpn and, perhaps,
> other vpn variants.
>
> - Sridhar
>
> On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali  wrote:
>
>> Hi all! The email, that I sent on 2/24 didn't make it to the mailing list
>> (no wonder I didn't get responses!). I think I had an issue with my email
>> address used - sorry for the confusion!
>>
>> So, I'll hold the meeting today (1500 UTC meeting-4, if it is still
>> available), and we can discuss this...
>>
>>
>> We've been having very low turnout for meetings for the past several
>> weeks, so I'd like to ask those in the community interested in VPNaaS, what
>> the preference would be regarding meetings...
>>
>> A) hold at the same day/time, but only on-demand.
>> B) hold at a different day/time.
>> C) hold at a different day/time, but only on-demand.
>> D) hold as a on-demand topic in main Neutron meeting.
>>
>> Please vote with your preference, and provide a desired day/time if you pick B or
>> C. The fallback will be (D), if there's not much interest anymore for
>> meeting, or we can't seem to come to a consensus (or super-majority :)
>>
>> Regards,
>>
>> PCM
>>
>> Twitter: @pmichali
>> TEXT: 6032894458
>> PCM (Paul Michali)
>>
>> IRC pc_m (irc.freenode.com)
>> Twitter... @pmichali
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-05 Thread John Bresnahan

On 3/4/15 11:31 PM, Thierry Carrez wrote:

Flavio Percoco wrote:

[...]
I personally don't think adding new cores without cleaning up that
list is something healthy for our community, which is what we're
trying to improve here. Therefore I'm still -2-W on adding new folks
without removing non-active core members.


It's also *extremely* easy to add back long-inactive members if they
happen to come back.

Again, core is not a badge, it corresponds to a set of duties. If some
people don't fulfill those duties anymore, it's better to remove them than
to keep them around: it maintains the standards of expected review
activity for core reviewers at a high level.


While I long for more time to properly contribute to glance, and I 
miss the time when I did, as an inactive core member I also agree that 
rotation makes sense.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Rui Chen
My BP aims at launching multiple nova-scheduler processes on a host, like
nova-conductor.

If running multiple nova-scheduler services on separate hosts works, will
forking multiple nova-scheduler
child processes on a single host work too? Different child processes hold
different HostState objects in their own memory;
the only difference from HA is that all the scheduler processes are launched
on one host.

Sorry to take up your time, I just want to clarify this.


2015-03-05 17:12 GMT+08:00 Sylvain Bauza :

>
> On 05/03/2015 08:54, Rui Chen wrote:
>
> We will face the same issue in the multiple nova-scheduler process case,
> like Sylvain says, right?
>
>  Two processes/workers can actually consume two distinct resources on the
> same HostState.
>
>
> No. The problem I mentioned was related to having multiple threads
> accessing the same object in memory.
> By running multiple schedulers on different hosts and listening to the
> same RPC topic, it would work - with some caveats about race conditions
> too, but that's unrelated to your proposal -
>
> If you want to run multiple nova-scheduler services, then just fire them
> up on separate machines (that's HA, eh) and that will work.
>
> -Sylvain
>
>
>
>
>
> 2015-03-05 13:26 GMT+08:00 Alex Xu :
>
>> Rui, you still can run multiple nova-scheduler process now.
>>
>>
>> 2015-03-05 10:55 GMT+08:00 Rui Chen :
>>
>>> Looks like it's a complicated problem, and nova-scheduler can't
>>> scale-out horizontally in active/active mode.
>>>
>>>  Maybe we should illustrate the problem in the HA docs.
>>>
>>>
>>> http://docs.openstack.org/high-availability-guide/content/_schedulers.html
>>>
>>> Thanks for everybody's attention.
>>>
>>> 2015-03-05 5:38 GMT+08:00 Mike Bayer :
>>>


 Attila Fazekas  wrote:

 > Hi,
 >
 > I wonder what is the planned future of the scheduling.
 >
 > The scheduler does a lot of queries touching a high number of fields,
 > which is CPU expensive when you are using sqlalchemy-orm.
 > Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of
 CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work
 done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from
 the
 Query, working out a large part of the object fetch strategies, and
 finally
 the string compilation of the select() into a string as well as
 organizing
 the typing information for result columns. With a query that is
 constructed
 using the “Baked” feature, all of these steps are cached in memory and
 held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the
 in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed
 as a
 Core fetch of rows. So using ORM with minimal changes to existing ORM
 code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a
 bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive
 with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as
 well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM
 to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have
 routes
 to operations that perform just as fast as that of Core without a
 rewrite of
 code.

 > The scheduler does a lot of things in the application, like filtering,
 > which could be done at the DB level more efficiently. Why is it not done

Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-03-05 Thread Deepak Shetty
Update:

   Cinder - GlusterFS CI job (ubuntu based) was added as experimental (non-
voting) to the cinder project [1].
It's running successfully without any issues so far [2], [3].

We will monitor it for a few days and, if it continues to run fine, we will
propose a patch to make it check (voting)

[1]: https://review.openstack.org/160664
[2]: https://jenkins07.openstack.org/job/gate-tempest-dsvm-full-glusterfs/
[3]: https://jenkins02.openstack.org/job/gate-tempest-dsvm-full-glusterfs/

thanx,
deepak

On Fri, Feb 27, 2015 at 10:47 PM, Deepak Shetty  wrote:

>
>
> On Fri, Feb 27, 2015 at 4:02 PM, Deepak Shetty 
> wrote:
>
>>
>>
>> On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty 
>> wrote:
>>
>>>
>>>
>>> On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty 
>>> wrote:
>>>


 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley 
 wrote:

> On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
> [...]
> > Run 2) We removed glusterfs backend, so Cinder was configured with
> > the default storage backend i.e. LVM. We re-created the OOM here
> > too
> >
> > So that proves that glusterfs doesn't cause it, as it's happening
> > without glusterfs too.
>
> Well, if you re-ran the job on the same VM then the second result is
> potentially contaminated. Luckily this hypothesis can be confirmed
> by running the second test on a fresh VM in Rackspace.
>

 Maybe true, but we did the same on the hpcloud provider VM too and both times
 it ran successfully with glusterfs as the cinder backend. Also, before
 starting
 the 2nd run, we did unstack and saw that free memory did go back to 5G+
 and then re-invoked your script. I believe the contamination could
 result in some
 additional testcase failures (which we did see) but shouldn't be
 related to
 whether the system can OOM or not, since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now
 and update
 here.

>>>
>>> Ran tempest configured with the default backend, i.e. LVM, and was able
>>> to recreate
>>> the OOM issue, so running tempest without gluster against a fresh VM
>>> reliably
>>> recreates the OOM issue, snip below from syslog.
>>>
>>> Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api
>>> invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
>>>
>>> Had a discussion with clarkb on IRC and given that F20 is discontinued,
>>> F21 has issues with tempest (under debug by ianw)
>>> and centos7 also has issues on rax (as evident from this thread), the
>>> only option left is to go with ubuntu based CI job, which
>>> BharatK is working on now.
>>>
>>
>> Quick Update:
>>
>> Cinder-GlusterFS CI job on ubuntu was added (
>> https://review.openstack.org/159217)
>>
>> We ran it 3 times against our stackforge repo patch @
>> https://review.openstack.org/159711
>> and it works fine (2 testcase failures, which are expected and we're
>> working towards fixing them)
>>
>> For the logs of the 3 experimental runs, look @
>>
>> http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/
>>
>> Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
>> nicely across
>> the different cloud providers.
>>
>
> Clarkb, Fungi,
>   Given that the ubuntu job is stable, I would like to propose adding it
> as experimental to the
> openstack cinder project while we work on fixing the 2 failing test cases in
> parallel
>
> thanx,
> deepak
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-05 Thread Mikhail Fedosin
Yes, it's absolutely right. For example, Nova and Neutron have official
rules for that:
https://wiki.openstack.org/wiki/Nova/CoreTeam
where it says: "A member of the team may be removed at any time by the PTL.
This is typically due to a drop off of involvement by the member such that
they are no longer meeting expectations to maintain team membership".
https://wiki.openstack.org/wiki/NeutronCore
"The PTL may remove a member from neutron-core at any time. Typically when
a member has decreased their involvement with the project through a drop in
reviews and participation in general project development, the PTL will
propose their removal and remove them. Members who have previously been
core may be fast-tracked back into core if their involvement picks back up"
So, as Louis has mentioned, it's routine work, and why should we be any
different?
Also, I suggest writing the same kind of wiki document for Glance to prevent
these issues in the future.

Best, Mike

2015-03-05 12:31 GMT+03:00 Thierry Carrez :

> Flavio Percoco wrote:
> > [...]
> > I personally don't think adding new cores without cleaning up that
> > list is something healthy for our community, which is what we're
> > trying to improve here. Therefore I'm still -2-W on adding new folks
> > without removing non-active core members.
>
> It's also *extremely* easy to add back long-inactive members if they
> happen to come back.
>
> Again, core is not a badge, it corresponds to a set of duties. If some
> people don't fulfill those duties anymore, it's better to remove them than
> to keep them around: it maintains the standards of expected review
> activity for core reviewers at a high level.
>
> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-05 Thread Edgar Magana
No doubt about it!

+1  Cheers for a new extremely good core member!

Thanks,

Edgar

From: Kyle Mestery <mest...@mestery.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, March 4, 2015 at 11:42 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron 
Core Reviewer

I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer 
team. Ihar has been doing a great job reviewing in Neutron as evidence by his 
stats [1]. Ihar is the Oslo liaison for Neutron, he's been doing a great job 
keeping Neutron current there. He's already a critical reviewer for all the 
Neutron repositories. In addition, he's a stable maintainer. Ihar makes himself 
available in IRC, and has done a great job working with the entire Neutron 
team. His reviews are thoughtful and he really takes time to work with code 
submitters to ensure his feedback is addressed.

I'd also like to again remind everyone that reviewing code is a responsibility, 
in Neutron the same as other projects. And core reviewers are especially 
beholden to this responsibility. I'd also like to point out and reinforce that 
+1/-1 reviews are super useful, and I encourage everyone to continue reviewing 
code across Neutron as well as the other OpenStack projects, regardless of your 
status as a core reviewer on these projects.

Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the 
core reviewer team.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/90
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removal of v3 in tree testing

2015-03-05 Thread Sean Dague
On 03/04/2015 07:48 PM, GHANSHYAM MANN wrote:
> Hi Sean,
> 
> Yes having V3 directory/file names is very confusing now.
> 
> But current v3 sample tests cases tests v2.1 plugins. As /v3 url is
> redirected to v21 plugins, v3 sample tests make call through v3 url and
> test v2.1 plugins. 
> 
> I think we can start cleaning up the *v3* from everywhere and change it
> to v2.1 or much appropriate name.
> 
> To cleanup the same from sample files, I was planning to rearrange
> sample files structure. Please check if that direction looks good (still
> need to release patch for directory restructure)
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:sample_files_structure,n,z
>  

Ok, so I'm confused, because v2 and v2.1 are supposed to be the same. So
all the v2 sample tests should be the ones that cover the v2.1 case. I
don't see why we'd have 2 versions in the stack.

Also these tests use the v3 pipeline, which means they actually use
different urls than v2 (that don't include the tenant). This discovery
all came with trying to remove the NoAuthMiddlewareV3 class.

So, we *definitely* should not have 2 sets of samples in tree. That's
just confusing and duplicative.

If the answer is to retarget all of these to the v2 endpoint, and delete
test_api_samples, that's also a good answer. But keeping both through
the release seems bad.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-05 Thread Alexis Lee
Doug Hellmann said on Wed, Mar 04, 2015 at 11:10:31AM -0500:
> I used to use email to track such things, but I have reached the point
> where keeping up with the push notifications from gerrit would consume
> all of my waking time.

Jim said if his patch was auto-abandoned, he would not find out. But there
are mail filtering tools, custom dashboards, and the REST API - a myriad of
ways to find out. It seems disingenuous to complain about a lack of
notification if you turned notifications off yourself and choose not to use
alternative tooling.

I'm not saying it's perfect. Gerrit could offer granular control of push
notifications. It's also awkward to filter auto-abandoned patches from
manually abandoned ones, which is why I think a new flag or two with bespoke
semantics is the best solution.

> As Jim and others have pointed out, we can identify those changesets
> using existing attributes rather than having to add a new flag.

This doesn't help new contributors who aren't using your custom
dashboard. The defaults have to be sensible. The default dashboard must
identify patches which will never be reviewed and a push notification
should be available for when a patch enters that state.

It also doesn't help composability. What if I want to find active
patches with less than 100 lines of code change? I have to copy the
whole query string to append my "delta:<100". Copying the query string
everywhere makes it easy for inconsistency to slip in. If the guideline
changes, will every reviewer update their dashboards and bookmarks?

Projects have demonstrated a desire to set policy on patch activity
centrally, individual custom dashboards don't allow this. Official
custom dashboards (that live on the Gerrit server) are a pain to change
AFAIK and we can't count on new contributors knowing how important they
are. Allowing projects to automatically flag patches helps both those
who read their Gerrit email and those who rely on custom dashboards.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Nikola Đipanov
On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
> 
> On 04/03/2015 04:51, Rui Chen wrote:
>> Hi all,
>>
>> I want to make it easy to launch a bunch of scheduler processes on a
>> host; multiple scheduler workers will make use of multiple processors
>> of the host and enhance the performance of nova-scheduler.
>>
>> I had registered a blueprint and committed a patch to implement it.
>> https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
>>
>> This patch has been applied in our performance environment and passed some
>> test cases, like concurrently booting multiple instances; so far we
>> haven't found any inconsistency issues.
>>
>> IMO, nova-scheduler should be scaled horizontally in an easy way; the
>> multiple workers should be supported as an out-of-the-box feature.
>>
>> Please feel free to discuss this feature, thanks.
> 
> 
> As I said when reviewing your patch, I think the problem is not just
> making sure that the scheduler is thread-safe, it's more about how the
> Scheduler is accounting resources and providing a retry if those
> consumed resources are higher than what's available.
> 
> Here, the main problem is that two workers can actually consume two
> distinct resources on the same HostState object. In that case, the
> HostState object is decremented by the number of taken resources (modulo
> what consuming means for a resource which is not an Integer...) for both,
> but nowhere in that section does it check whether the resource usage is
> exceeded. As I said, it's not just about decorating a semaphore, it's more
> about rethinking how the Scheduler is managing its resources.
> 
> 
> That's why I'm -1 on your patch until [1] gets merged. Once this BP will
> be implemented, we will have a set of classes for managing heterogeneous
> types of resources and consuming them, so it would be quite easy to provide
> a check against them in the consume_from_instance() method.
> 

I feel that the above explanation does not give the full picture in
addition to being factually incorrect in several places. I have come to
realize that the current behaviour of the scheduler is subtle enough
that just reading the code is not enough to understand all the edge
cases that can come up. The evidence being that it trips up even people
that have spent significant time working on the code.

It is also important to consider the design choices in terms of
tradeoffs that they were trying to make.

So here are some facts about the way Nova does scheduling of instances
to compute hosts, considering the amount of resources requested by the
flavor (we will try to put the facts into a bigger picture later):

* Scheduler receives a request to choose hosts for one or more instances.
* Upon every request (_not_ for every instance as there may be several
instances in a request) the scheduler learns the state of the resources
on all compute nodes from the central DB. This state may be inaccurate
(meaning out of date).
* Compute resources are updated by each compute host periodically. This
is done by updating the row in the DB.
* The wall-clock time difference between the scheduler deciding to
schedule an instance, and the resource consumption being reflected in
the data the scheduler learns from the DB can be arbitrarily long (due
to load on the compute nodes and latency of message arrival).
* To cope with the above, there is a concept of retrying the request
that fails on a certain compute node due to the scheduling decision
being made with data stale at the moment of build, by default we will
retry 3 times before giving up.
* When running multiple instances, decisions are made in a loop, and
an internal in-memory view of the resources gets updated (the widely
misunderstood consume_from_instance method is used for this), so as to
keep subsequent decisions as accurate as possible. As was described
above, this is all thrown away once the request is finished.

Now that we understand the above, we can start to consider what changes
when we introduce several concurrent scheduler processes.

Several cases come to mind:
* Concurrent requests will no longer be serialized on reading the state
of all hosts (due to how eventlet interacts with mysql driver).
* In the presence of a single request for a large number of instances
there is going to be a drift in accuracy of the decisions made by other
schedulers, as they will not have accounted for any of the instances
until they actually get claimed on their respective hosts.
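
To make the staleness issue concrete, here is a minimal, self-contained
sketch (plain Python 3 with threads, not Nova code) of two schedulers
deciding from the same stale snapshot and jointly overcommitting a host -
exactly the situation the retry mechanism then has to catch at build time:

    import threading

    db = {'host1_free_ram_mb': 2048}   # stand-in for the central DB record
    barrier = threading.Barrier(2)     # force both reads before any claim
    claim_lock = threading.Lock()      # the claim itself is atomic...

    def scheduler(flavor_ram_mb):
        snapshot = db['host1_free_ram_mb']   # read state...
        barrier.wait()                       # ...which is stale by now
        if snapshot >= flavor_ram_mb:        # decision on stale data
            with claim_lock:
                db['host1_free_ram_mb'] -= flavor_ram_mb

    threads = [threading.Thread(target=scheduler, args=(2048,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(db['host1_free_ram_mb'])  # -2048: the host is overcommitted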

All of the above limitations will likely not pose a problem under normal
load and usage, but can cause issues to start appearing when nodes are
close to full or when there is heavy load. Also, this changes drastically
based on how we actually choose to utilize hosts (see a very interesting
Ironic bug [1])

Whether any of the above matters to users is dependent heavily on their
use-case though. This is why I feel we should be providing more information.

Finally - I think it is important to accept that the scheduler service
will always have

[openstack-dev] [all][log] Log Working Group priorities

2015-03-05 Thread Kuvaja, Erno
Hi all,

We had our first logging workgroup meeting [1] yesterday where we agreed on 3 main 
priorities for the group to focus on. Please review and provide your feedback:


1)  Educating the community about the Logging Guidelines spec

a.   
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html

b.  Please familiarize yourself with it and try to follow the pointers; fix 
things where you see your project deviating from these.

c.   If this is the first time you see this spec and you think there is 
something awfully off, please let us know.

2)  Cross project specs for Request IDs and Error codes

a.   There are spec proposals in the Cinder tree [2] for Request IDs and in 
the Glance tree [3] for Error codes

b.  The cross project specs are being written on the basis of these specs, 
adjusted with the feedback and ideas collected from a wider audience at and since 
the Paris Summit

c.   Links for the specs will be provided as soon as they are available for 
review

3)  Project Liaisons for Log Working Group [4]

a.   A person helping us implement the work items in the project

b.  No need to be core

c.   Please, no fighting for the slots. We'll happily take all available hands 
on board for this. :)

[1] http://eavesdrop.openstack.org/meetings/log_wg/2015/
[2] https://review.openstack.org/#/c/156508/
[3] https://review.openstack.org/#/c/127482
[4] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Logging_Working_Group



Thanks,

Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Sylvain Bauza


On 05/03/2015 13:00, Nikola Đipanov wrote:

On 03/04/2015 09:23 AM, Sylvain Bauza wrote:

On 04/03/2015 04:51, Rui Chen wrote:

Hi all,

I want to make it easy to launch a bunch of scheduler processes on a
host; multiple scheduler workers will make use of multiple processors
of the host and enhance the performance of nova-scheduler.

I had registered a blueprint and committed a patch to implement it.
https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support

This patch has been applied in our performance environment and passed some
test cases, like concurrently booting multiple instances; so far we
haven't found any inconsistency issues.

IMO, nova-scheduler should be scaled horizontally in an easy way; the
multiple workers should be supported as an out-of-the-box feature.

Please feel free to discuss this feature, thanks.


As I said when reviewing your patch, I think the problem is not just
making sure that the scheduler is thread-safe, it's more about how the
Scheduler is accounting resources and providing a retry if those
consumed resources are higher than what's available.

Here, the main problem is that two workers can actually consume two
distinct resources on the same HostState object. In that case, the
HostState object is decremented by the number of taken resources (modulo
what consuming means for a resource which is not an Integer...) for both,
but nowhere in that section does it check whether the resource usage is
exceeded. As I said, it's not just about decorating a semaphore, it's more
about rethinking how the Scheduler is managing its resources.


That's why I'm -1 on your patch until [1] gets merged. Once this BP will
be implemented, we will have a set of classes for managing heterogeneous
types of resources and consuming them, so it would be quite easy to provide
a check against them in the consume_from_instance() method.


I feel that the above explanation does not give the full picture in
addition to being factually incorrect in several places. I have come to
realize that the current behaviour of the scheduler is subtle enough
that just reading the code is not enough to understand all the edge
cases that can come up. The evidence being that it trips up even people
that have spent significant time working on the code.

It is also important to consider the design choices in terms of
tradeoffs that they were trying to make.

So here are some facts about the way Nova does scheduling of instances
to compute hosts, considering the amount of resources requested by the
flavor (we will try to put the facts into a bigger picture later):

* Scheduler receives a request to choose hosts for one or more instances.
* Upon every request (_not_ for every instance as there may be several
instances in a request) the scheduler learns the state of the resources
on all compute nodes from the central DB. This state may be inaccurate
(meaning out of date).
* Compute resources are updated by each compute host periodically. This
is done by updating the row in the DB.
* The wall-clock time difference between the scheduler deciding to
schedule an instance, and the resource consumption being reflected in
the data the scheduler learns from the DB can be arbitrarily long (due
to load on the compute nodes and latency of message arrival).
* To cope with the above, there is a concept of retrying the request
that fails on a certain compute node due to the scheduling decision
being made with data stale at the moment of build, by default we will
retry 3 times before giving up.
* When running multiple instances, decisions are made in a loop, and
an internal in-memory view of the resources gets updated (the widely
misunderstood consume_from_instance method is used for this), so as to
keep subsequent decisions as accurate as possible. As was described
above, this is all thrown away once the request is finished.

Now that we understand the above, we can start to consider what changes
when we introduce several concurrent scheduler processes.

Several cases come to mind:
* Concurrent requests will no longer be serialized on reading the state
of all hosts (due to how eventlet interacts with mysql driver).
* In the presence of a single request for a large number of instances
there is going to be a drift in accuracy of the decisions made by other
schedulers, as they will not have accounted for any of the instances
until they actually get claimed on their respective hosts.

All of the above limitations will likely not pose a problem under normal
load and usage, but can cause issues to start appearing when nodes are
close to full or when there is heavy load. Also, this changes drastically
based on how we actually choose to utilize hosts (see a very interesting
Ironic bug [1])

Whether any of the above matters to users is dependent heavily on their
use-case though. This is why I feel we should be providing more information.

Finally - I think it is important to accept that the scheduler service
will always have to operate under the assumptions of stale

Re: [openstack-dev] [Murano] Should we run tests on all supported database engines?

2015-03-05 Thread Andrew Pashkin
Why do we need that, by the way?

The REST API tests will cover database access, and the database is not used
for anything other than the API anyway. An additional DB layer will bring
additional complexity that will consume time to maintain. So if we do not
know why exactly we need such a layer, I think we should not introduce it.
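
For what it's worth, option (1) doesn't have to mean a separate DB layer. A
minimal sketch (assuming pytest and SQLAlchemy; the connection URLs and the
test body are placeholders, not Murano's real config) of parametrizing the
same tests over every supported backend:

    import pytest
    import sqlalchemy
    from sqlalchemy import exc

    DB_URLS = [
        'sqlite://',                                              # in-memory
        'mysql+pymysql://user:pass@localhost/murano_test',        # placeholder
        'postgresql+psycopg2://user:pass@localhost/murano_test',  # placeholder
    ]

    @pytest.fixture(params=DB_URLS)
    def engine(request):
        # Skip backends that aren't reachable in this environment.
        try:
            eng = sqlalchemy.create_engine(request.param)
            eng.connect().close()
        except exc.SQLAlchemyError:
            pytest.skip('backend not available: %s' % request.param)
        return eng

    def test_connectivity(engine):
        # A real test would exercise the package-search query from
        # bug 1419743, which only misbehaves on Postgres.
        assert engine.execute(sqlalchemy.text('SELECT 1')).scalar() == 1

Backends that are not reachable locally are simply skipped, so the suite
still runs everywhere while CI can cover MySQL and Postgres.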

On 02.03.2015 19:05, Serg Melikyan wrote:
> Hi Andrew,
> 
> I think we should isolate DB access layer and test this layer
> extensively on MySQL & PostgreSQL. 
> 
> On Mon, Mar 2, 2015 at 6:57 PM, Andrew Pashkin  wrote:
> 
> There is a bug:
> https://bugs.launchpad.net/murano/+bug/1419743
> 
> It's about search on packages failing if you use Postgres as the DB
> backend. The fix is pretty easy there, but how should this bug be
> tested?
> 
> I see such options:
> 1) Run all tests for all DBs that we support.
> 2) Run some tests for some DBs.
> 3) Do not test such issues at all.
> 
> Does anyone have any thoughts on this?
> 
> --
> With kind regards, Andrew Pashkin.
> cell phone - +7 (985) 898 57 59
> Skype - waves_in_fluids
> e-mail - apash...@mirantis.com 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com  | smelik...@mirantis.com
> 
> 
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] request to backport the fix for bug 1333852 to juno

2015-03-05 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Not being involved in trove, but some general comments on backports.

On 03/04/2015 08:33 PM, Amrith Kumar wrote:
> There has been a request to backport the fix for bug 1333852 
> (https://bugs.launchpad.net/trove/+bug/1333852) which was fixed in
> Kilo into the Juno release.
> 

It would be easier if you directly link to patches in question.

> 
> 
> The change includes a database change and a small change to the
> Trove API. The change also requires a change to the trove client
> and the trove controller code (trove-api). It is arguable whether
> this is a backport or a new feature; I’m inclined to think it is
> more of an extension of an existing feature than a new feature.
> 

It depends on what the 'database change' above is. If it's a schema
change, then it's a complete no-go for backports. A change to the API is
also suspicious, but without details it's hard to say. Finally, the
need to patch a client to utilize the change probably means that it's
not a bug fix (or at least, not an easy one).

Where do those flavor UUIDs come from? Were they present/supported in
nova/juno?

> 
> 
> As such, I *don’t* believe that this change should be considered a
> good candidate for backport to Juno but I’m going to see whether
> there is sufficient interest in this, to consider this change to be
> an exception.
> 

Without details, it's hard to say for sure, but on first look, the
change you describe is too far-reaching and has lots of issues that
would make a backport hard if not impossible.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU+GtwAAoJEC5aWaUY1u57+M4IAMjuF/f7OTMkaT1dxmy8GpV4
/RoF06pPR5hU1oIjbjyvhRaqzTcKJBNqhuLzV7WhbynkyEuctg+QSqM/d2VQZwpp
Gt59XEiIuLUYn46oC4J/S0DZBYHjRiZqcEXrJRozfzIvMQzqkCH+TeBxo9J5E/U4
/I2rkGkDUm+XJa88M5PsTJP6Vp0nAvKQLa/Vjpe4/Ute2YMGlvFeH4NAsBy8XVWe
BSJAIds0Abe1+uNwvaDeRbKaHwcgdAG/ia9WUO+8QHx1oXpLH/190o2P+xfZ8cno
guPR2kSrzC0JLO5lfvRkjnDJd53kj/0tMf12xjzHHBC++grLUEs9i2AsvV/Dtyk=
=s/sF
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-05 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
> Yes, it's absolutely right. For example, Nova and Neutron have
> official rules for that: 
> https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: "A
> member of the team may be removed at any time by the PTL. This is
> typically due to a drop off of involvement by the member such that
> they are no longer meeting expectations to maintain team 
> membership". https://wiki.openstack.org/wiki/NeutronCore "The PTL
> may remove a member from neutron-core at any time. Typically when a
> member has decreased their involvement with the project through a 
> drop in reviews and participation in general project development,
> the PTL will propose their removal and remove them. Members who
> have previously been core may be fast-tracked back into core if
> their involvement picks back up". So, as Louis has mentioned, it's
> routine work, and why should we be any different? Also, I suggest
> writing the same kind of wiki document for Glance to prevent these issues
> in the future.
> 

Does the rule belong in e.g. the governance repo? It seems like a sane
requirement for core *reviewers* to actually review code, no? Or are
there any projects that would not like to adopt such a rule formally?

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU+GxdAAoJEC5aWaUY1u579mEIAMN/wucsahaZ0yMT2/eo8t05
rIWI+lBLjNueWJgB+zNbVlVcsKBZ/hl4J0O3eE65RtlTS5Rta5hv0ymyRL1nnUZH
g/tL3ogEF0SsSBkiavVh3klGmUwsvQ+ygPN5rVgnbiJ+uO555EPlbiHwZHbcjBoI
lyUjIhWzUCX26wq7mgiTsY858UgdEt3urVHD9jTE2WNszMRLXQ7vsoAik9xDfISz
E0eZ8WVQKlNHNox0UoKbobdb3YDhmY3Ahp9Fj2cT1IScyQGORnm0hXV3+pRdWNhD
1M/gDwAf97F1lfNxPpy4JCGutbe5zoPQYLpJExzaXkqqARxUnwhB1gZ9lEG8l2o=
=lcLY
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Kyle Mestery
On Thu, Mar 5, 2015 at 3:29 AM, Salvatore Orlando 
wrote:

> I was hoping to push code today for [1], but unfortunately I got caught by
> something else during this week and I'll miss the code proposal deadline.
>
> I would like to apply for a FFE; however, since I've experienced FFEs in
> previous release cycles, please treat this request as best-effort if there
> are other work items with higher priority applying for a FFE.
>
> Thanks for the note Salvatore. I'm fine with granting this an FFE given
it's a good community feature and it's High priority. Do you think you'll
have code pushed out by early next week? Also, let's try to find two core
reviewers who can iterate with you quickly once you do push it so we can
land it fairly fast.

Thanks,
Kyle


> Salvatore
>
> [1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-05 Thread Doug Hellmann


On Thu, Mar 5, 2015, at 06:59 AM, Alexis Lee wrote:
> Doug Hellmann said on Wed, Mar 04, 2015 at 11:10:31AM -0500:
> > I used to use email to track such things, but I have reached the point
> > where keeping up with the push notifications from gerrit would consume
> > all of my waking time.
> 
> Jim said if his patch was auto-abandoned, he would not find out. But there
> are mail filtering tools, custom dashboards, and the REST API - a myriad of
> ways to find out. It seems disingenuous to complain about a lack of
> notification if you turned notifications off yourself and choose not to use
> alternative tooling.

I can certainly do all of those things, and do, but only because I have
become a more skilled gerrit user. My concern is about new contributors, who have not
learned gerrit or who have not learned the way we have customized our
use of gerrit. Throwing away their work because they've been away for
some period of time (think extended vacation, illness, family emergency)
is hostile. The patch is already in a state that would prevent it from
merging, no? So if we can hide it from reviewers that don't want to
see those sorts of patches, without being hostile and without hiding it
from *all* reviewers, that seems better than automatically discarding
it.

> 
> I'm not saying it's perfect. Gerrit could offer granular control of push
> notifications. It's also awkward to filter auto-abandoned patches from
> manually abandoned, which is why I think a new flag or two with bespoke
> semantics is the best solution.
> 
> > As Jim and others have pointed out, we can identify those changesets
> > using existing attributes rather than having to add a new flag.
> 
> This doesn't help new contributors who aren't using your custom
> dashboard. The defaults have to be sensible. The default dashboard must
> identify patches which will never be reviewed and a push notification
> should be available for when a patch enters that state.

Does the default dashboard we have not seem sensible for a new
contributor? When I'm logged in it shows me all of my open changes (this
is where abandoning a change makes it "disappear") and reviews where
I've commented. A slightly more advanced user can star changes and see
all of them in a view, and watch projects and see their open changes.
More advanced users can query by project or status, and even more
advanced users can bookmark those queries or even install dashboards
into gerrit.

We have different types of users, who will want different things from
gerrit. I don't think we want the default view to match some sort of
expert view, but we should make it easier for users to do more with the
tool as they gain more experience.

> 
> It also doesn't help composability. What if I want to find active
> patches with less than 100 lines of code change? I have to copy the
> whole query string to append my "delta:<100". Copying the query string
> everywhere makes it easy for inconsistency to slip in. If the guideline
> changes, will every reviewer update their dashboards and bookmarks?

I don't think every reviewer needs to be looking at the same view, so I
don't think maintaining consistency is a problem. To share that Oslo
dashboard, we have a link in the wiki and we keep it up to date by
managing the dashboard creator input file. It doesn't change often
(usually only when we start graduating a new library) so it's not a lot
of work.

> 
> Projects have demonstrated a desire to set policy on patch activity
> centrally, individual custom dashboards don't allow this. Official

We should be focusing on maximizing the ability of reviewers to find the
patches they do want to review, rather than ensuring that everyone is
reviewing the same thing. So far I have only seen discussion of
abandoning patches that wouldn't merge anyway. Reviewers who come across
those patches will see the -1 from jenkins or the -2 from a core
reviewer and either understand that means they should skip reviewing it
or waste a small amount of their time until they do learn that. Either
way reviewers who want to ignore the patch by using a filter query will
be able to continue to do that.

> custom dashboards (that live on the Gerrit server) are a pain to change
> AFAIK and we can't count on new contributors knowing how important they
> are. Allowing projects to automatically flag patches helps both those
> who read their Gerrit email and those who rely on custom dashboards.
> 
> 
> Alexis
> -- 
> Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Salvatore Orlando
Considering my current load, I expect code to be ready by Wednesday, March
11th.
Maybe we can say that the FFE is granted as long as the code makes it - in
full - by that date?

Salvatore

On 5 March 2015 at 16:03, Kyle Mestery  wrote:

> On Thu, Mar 5, 2015 at 3:29 AM, Salvatore Orlando 
> wrote:
>
>> I was hoping to push code today for [1], but unfortunately I got caught
>> by something else during this week and I'll miss the code proposal deadline.
>>
>> I would like to apply for a FFE; however, since I've experienced FFEs in
>> previous release cycles, please treat this request as best-effort if there
>> are other work items with higher priority applying for a FFE.
>>
>> Thanks for the note Salvatore. I'm fine with granting this an FFE given
> it's a good community feature and it's High priority. Do you think you'll
> have code pushed out by early next week? Also, let's try to find two core
> reviewers who can iterate with you quickly once you do push it so we can
> land it fairly fast.
>
> Thanks,
> Kyle
>
>
>> Salvatore
>>
>> [1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Oleg Bondarev
On Thu, Mar 5, 2015 at 6:03 PM, Kyle Mestery  wrote:

> On Thu, Mar 5, 2015 at 3:29 AM, Salvatore Orlando 
> wrote:
>
>> I was hoping to push code today for [1], but unfortunately I got caught
>> by something else during this week and I'll miss the code proposal deadline.
>>
>> I would like to apply for a FFE; however, since I've experienced FFEs in
>> previous release cycles, please treat this request as best-effort if there
>> are other work items with higher priority applying for a FFE.
>>
>> Thanks for the note Salvatore. I'm fine with granting this an FFE given
> it's a good community feature and it's High priority. Do you think you'll
> have code pushed out by early next week? Also, let's try to find two core
> reviewers who can iterate with you quickly once you do push it so we can
> land it fairly fast.
>

I can be one of those.

Thanks,
Oleg


>
> Thanks,
> Kyle
>
>
>> Salvatore
>>
>> [1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Missing button to send Ctrl+Alt+Del for SPICE Console

2015-03-05 Thread Andy McCrae
Sorry to resurrect this almost a year later! I've recently run into this
issue, and it does make logging into Windows instances, via the console,
quite challenging.

On Fri, Jun 6, 2014 at 3:46 PM, Ben Nemec  wrote:

> This sounds like a reasonable thing to open a bug against Horizon on
> Launchpad.


I'm wondering whether this is something that should go into Horizon or into
spice-html5 itself. I noticed this was fixed in oVirt (
https://bugzilla.redhat.com/show_bug.cgi?id=1014069), similar to how I
imagine it would be fixed in Horizon. However, this would mean that if, for
whatever reason, you aren't using Horizon and get the link directly via
nova (using get-spice-console), you wouldn't get the ctrl-alt-del button.

Is there a reason it wouldn't go into spice-html5, or would making this
change in Horizon be the better option?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Kyle Mestery
On Thu, Mar 5, 2015 at 9:31 AM, Oleg Bondarev 
wrote:

>
>
> On Thu, Mar 5, 2015 at 6:03 PM, Kyle Mestery  wrote:
>
>> On Thu, Mar 5, 2015 at 3:29 AM, Salvatore Orlando 
>> wrote:
>>
>>> I was hoping to push code today for [1], but unfortunately I got caught
>>> by something else during this week and I'll miss the code proposal deadline.
>>>
>>> I would like to apply for a FFE; however, since I've experienced FFEs in
>>> previous release cycles, please treat this request as best-effort if there
>>> are other work items with higher priority applying for a FFE.
>>>
>>> Thanks for the note Salvatore. I'm fine with granting this an FFE given
>> it's a good community feature and it's High priority. Do you think you'll
>> have code pushed out by early next week? Also, let's try to find two core
>> reviewers who can iterate with you quickly once you do push it so we can
>> land it fairly fast.
>>
>
> I can be one of those.
>
> Thanks Oleg! Please work closely with Salvatore on this once the code
lands. Much appreciated!


> Thanks,
> Oleg
>
>
>>
>> Thanks,
>> Kyle
>>
>>
>>> Salvatore
>>>
>>> [1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Better-quotas blueprint

2015-03-05 Thread Kyle Mestery
On Thu, Mar 5, 2015 at 9:29 AM, Salvatore Orlando 
wrote:

> Considering my current load, I expect the code to be ready by Wednesday, March
> 11th.
> Maybe we can say that the FFE is granted as long as the code makes it - in
> full - by that date?
>
> That's reasonable to me. Consider it done!

Kyle


> Salvatore
>
> On 5 March 2015 at 16:03, Kyle Mestery  wrote:
>
>> On Thu, Mar 5, 2015 at 3:29 AM, Salvatore Orlando 
>> wrote:
>>
>>> I was hoping to push code today for [1], but unfortunately I got caught
>>> by something else during this week and I'll miss the code proposal deadline.
>>>
>>> I would like to apply for a FFE; however, since I've experienced FFEs in
>>> previous release cycles, please treat this request as best-effort if there
>>> are other work items with higher priority applying for a FFE.
>>>
>>> Thanks for the note Salvatore. I'm fine with granting this an FFE given
>> it's a good community feature and it's High priority. Do you think you'll
>> have code pushed out by early next week? Also, let's try to find two core
>> reviewers who can iterate with you quickly once you do push it so we can
>> land it fairly fast.
>>
>> Thanks,
>> Kyle
>>
>>
>>> Salvatore
>>>
>>> [1] https://blueprints.launchpad.net/neutron/+spec/better-quotas
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd 1.0.2 released

2015-03-05 Thread Dmitry Tantsur

Hi all!

I've made a bug fix release for ironic-discoverd today, and it's 
available on PyPI! Two issues fixed:
* Authentication was too strict, and it wasn't possible to talk to the 
API using the Ironic service user. Unfortunately, fixing it led to a new 
dependency on keystonemiddleware and a new configuration option, 
`identity_uri` (the Keystone admin endpoint).
* Dropped the check on power state. Ironic is going to stop doing it for 
deploy, and anyway this check prevented moving out of INSPECTFAIL when 
there was a problem with the ramdisk. Thanks to Yuiko Takada for fixing it.


Note also that Ironic support for discoverd is hopefully close to 
landing: https://review.openstack.org/#/c/156562/ (and 
https://review.openstack.org/#/c/161132/).


Cheers,
Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] What does default [] for flat_networks means?

2015-03-05 Thread Anna Kamyshnikova
I worked on bug [1] and pushed change request [2], and I got the Ryu CI check
failing [5] because flat_networks is not set in the config. For vlan networks,
an empty list in 'network_vlan_ranges' means that no physical networks are
allowed [3]; for flat networks there is only the help string for the config
option [4], and it seems that similar behaviour is expected there too. So my
question is: what should be done in this case? Should the default be changed to
'*', or should some changes be done on the devstack side?

[1] - https://bugs.launchpad.net/neutron/+bug/1424548
[2] - https://review.openstack.org/160842
[3] -
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vlan.py#L175
[4] -
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_flat.py#L32
[5] -
http://180.37.183.32/ryuci/42/160842/5/check/check-tempest-dsvm-ofagent/005e5e2/logs/devstacklog.txt.gz
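
For reference, a minimal ml2_conf.ini illustration of the two behaviours
discussed above (the values shown are examples, not proposed defaults):

    [ml2_type_flat]
    # '*' allows flat networks with arbitrary physical network names
    flat_networks = *

    [ml2_type_vlan]
    # an empty value means no physical networks are allowed for vlan
    network_vlan_ranges =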
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Dr. Jens Rosenboom

On 05/03/15 at 06:02, Nikhil Komawar wrote:

The python-glanceclient release management team is pleased to announce:
 python-glanceclient version 0.16.1 has been released on Thursday, Mar 5th 
around 04:56 UTC.


The release includes a bugfix for [1], which is affecting us in 
Icehouse; most likely Juno is also affected. However, due to the 
requirements caps recently introduced, the bugfix will not be picked up 
by the stable branches (caps are <=0.14.2 for Icehouse and <=0.15.0 for 
Juno).


The patch itself [2] applies cleanly to the older code, so in theory it 
should be possible to build some 0.14.3 release with that and update the 
stable requirements accordingly. But I guess that this would require 
setting up stable branches for the client git repo, which currently 
don't exist.


Are there plans to do this or is there some other way to backport the 
fix? I assume that the same issue may happen with other client releases 
in the future.


[1] https://bugs.launchpad.net/python-glanceclient/+bug/1423165
[2] https://review.openstack.org/156975
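
(Illustratively, the caps mentioned above take this shape in the stable
requirements lists; the lower bounds here are assumptions:)

    python-glanceclient>=0.9.0,<=0.14.2   # stable/icehouse
    python-glanceclient>=0.10.0,<=0.15.0  # stable/juno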


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-05 Thread Feodor Tersin
Since the first microversion is merged (
https://review.openstack.org/#/c/140313/),
please pay attention to the novaclient upgrade to work with microversions (
https://review.openstack.org/#/c/152569/).
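
(For context, a client opts in to a microversion by sending the version
header on each request; a minimal illustration, with the endpoint URL
assumed:)

    curl -H "X-Auth-Token: $TOKEN" \
         -H "X-OpenStack-Nova-API-Version: 2.2" \
         http://controller:8774/v2.1/servers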


On Mon, Mar 2, 2015 at 2:30 PM, Sean Dague  wrote:

> This change for the additional attributes for ec2 looks like it's
> basically ready to go, except it has the wrong microversion on it (as
> they anticipated the other changes landing ahead of them) -
> https://review.openstack.org/#/c/155853
>
> What's the plan for merging the outstanding microversions? I believe
> we're all conceptually approved on all them, and it's an important part
> of actually moving forward on the new API. It seems like we're in a bit
> of a holding pattern on all of them right now, and I'd like to make sure
> we start merging them this week so that they have breathing space before
> the freeze.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Ian Cordasco
The clients in general do not back port patches. Someone should work with
stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
that those caps were added due to the client breaking other projects.
Proposals can be made though and ideally, openstack/requirements’ gate
jobs will catch any breakage.

On 3/5/15, 10:28, "Dr. Jens Rosenboom"  wrote:

>On 05/03/15 at 06:02, Nikhil Komawar wrote:
>> The python-glanceclient release management team is pleased to announce:
>>  python-glanceclient version 0.16.1 has been released on Thursday,
>>Mar 5th around 04:56 UTC.
>
>The release includes a bugfix for [1], which is affecting us in
>Icehouse; most likely Juno is also affected. However, due to the
>requirements caps recently introduced, the bugfix will not be picked up
>by the stable branches (caps are <=0.14.2 for Icehouse and <=0.15.0 for
>Juno).
>
>The patch itself [2] applies cleanly to the older code, so in theory it
>should be possible to build some 0.14.3 release with that and update the
>stable requirements accordingly. But I guess that this would require
>setting up stable branches for the client git repo, which currently
>don't exist.
>
>Are there plans to do this or is there some other way to backport the
>fix? I assume that the same issue may happen with other client releases
>in the future.
>
>[1] https://bugs.launchpad.net/python-glanceclient/+bug/1423165
>[2] https://review.openstack.org/156975
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-05 Thread Russell Bryant
To turn this stuff off, you don't need to revert.  I'd suggest just
setting the namespace constants to None, and that will result in the same
thing.

http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
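
(For illustration, that amounts to something like the following in
neutron/common/constants.py; check the exact constant name against the
linked file:)

    # None disables the namespace, restoring the pre-Kilo wire format
    RPC_NAMESPACE_STATE = None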

It's definitely a non-backwards compatible change.  That was a conscious
choice as the interfaces are a bit of a tangled mess, IMO.  The
non-backwards compatible changes were simpler so I went that route,
because as far as I could tell, rolling upgrades were not supported.  If
they do work, it's due to luck.  There are multiple things, from the
lack of testing for this scenario to the lack of data versioning, that
make it a pretty shaky area.

However, if it worked for some people, I totally get the argument
against breaking it intentionally.  As mentioned before, a quick fix if
needed is to just set the namespace constants to None.  If someone wants
to do something to make it backwards compatible, that's even better.
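
For illustration, a minimal sketch of the backwards compatible option
using oslo.messaging endpoints; the class names and plumbing here are
assumptions, not Neutron's actual code:

    import oslo_messaging as messaging

    class StateReportEndpoint(object):
        # handles report_state sent inside the 'state' namespace (Kilo agents)
        target = messaging.Target(namespace='state', version='1.0')

        def report_state(self, context, agent_state, time):
            pass  # update the agent heartbeat in the DB

    class LegacyStateReportEndpoint(StateReportEndpoint):
        # same handler, registered without a namespace for pre-Kilo agents
        target = messaging.Target(namespace=None, version='1.0')

    # both endpoints are passed to the RPC server so either form dispatches
    endpoints = [StateReportEndpoint(), LegacyStateReportEndpoint()]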

-- 
Russell Bryant

On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
> To put it another way, I think we might say that change 154670 broke
> backward compatibility on the RPC interface.
> To be fair this probably happened because RPC interfaces were organised
> in a way such that this kind of breakage was unavoidable.
> 
> I think the strategy proposed by Assaf is a viable one. The point about
> being able to do rolling upgrades only from version N to N+1 is a
> sensible one, but it has more to do with general backward compatibility
> rules for RPC interfaces.
> 
> In the meanwhile this is breaking a typical upgrade scenario. If a fix
> allowing agent state updates both namespaced and not is available today
> or tomorrow, that's fine. Otherwise I'd revert just to be safe.
> 
> By the way, we were supposed to have already removed all server rpc
> callbacks in the appropriate package... did we forget about this one or is
> there a reason for which it's still in neutron.db?
> 
> Salvatore
> 
> On 4 March 2015 at 17:23, Miguel Ángel Ajo  > wrote:
> 
> I agree with Assaf; this is an issue across updates, and
> we may want (if that’s technically possible) to provide 
> access to those functions with/without a namespace.
> 
> Or otherwise think about reverting for now until we find a
> migration strategy
> 
> 
> https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
> 
> 
> Best regards,
> Miguel Ángel Ajo
> 
> On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:
> 
>> Hello everyone,
>>
>> I'd like to highlight an issue with:
>> https://review.openstack.org/#/c/154670/
>>
>> According to my understanding, most deployments upgrade the
>> controllers first
>> and compute/network nodes later. During that time period, all
>> agents will
>> fail to report state as they're sending the report_state message
>> outside
>> of any namespace while the server is expecting that message in a
>> namespace.
>> This is a show stopper as the Neutron server will think all of its
>> agents are dead.
>>
>> I think the obvious solution is to modify the Neutron server code
>> so that
>> it accepts the report_state method both in and outside of the
>> report_state
>> RPC namespace and chuck that code away in L (Assuming we support
>> rolling upgrades
>> only from version N to N+1, which, while unfortunate, is the
>> behavior I've
>> seen in multiple places in the code).
>>
>> Finally, are there additional similar issues for other RPC methods
>> placed in a namespace
>> this cycle?
>>
>>
>> Assaf Muller, Cloud Networking Engineer
>> Red Hat
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Doug Hellmann


On Thu, Mar 5, 2015, at 11:37 AM, Ian Cordasco wrote:
> The clients in general do not back port patches. Someone should work with
> stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
> that those caps were added due to the client breaking other projects.
> Proposals can be made though and ideally, openstack/requirements’ gate
> jobs will catch any breakage.

Under my cross-project spec proposal [1], we will want to start managing
stable branches for clients specifically for this sort of situation.

Doug

[1] https://review.openstack.org/#/c/155072/

> 
> On 3/5/15, 10:28, "Dr. Jens Rosenboom"  wrote:
> 
> >On 05/03/15 at 06:02, Nikhil Komawar wrote:
> >> The python-glanceclient release management team is pleased to announce:
> >>  python-glanceclient version 0.16.1 has been released on Thursday,
> >>Mar 5th around 04:56 UTC.
> >
> >The release includes a bugfix for [1], which is affecting us in
> >Icehouse; most likely Juno is also affected. However, due to the
> >requirements caps recently introduced, the bugfix will not be picked up
> >by the stable branches (caps are <=0.14.2 for Icehouse and <=0.15.0 for
> >Juno).
> >
> >The patch itself [2] applies cleanly to the older code, so in theory it
> >should be possible to build some 0.14.3 release with that and update the
> >stable requirements accordingly. But I guess that this would require
> >setting up stable branches for the client git repo, which currently
> >don't exist.
> >
> >Are there plans to do this or is there some other way to backport the
> >fix? I assume that the same issue may happen with other client releases
> >in the future.
> >
> >[1] https://bugs.launchpad.net/python-glanceclient/+bug/1423165
> >[2] https://review.openstack.org/156975
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] [stable] python-glanceclient release 0.16.1

2015-03-05 Thread Dr. Jens Rosenboom

On 05/03/15 at 17:37, Ian Cordasco wrote:

The clients in general do not back port patches. Someone should work with
stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
that those caps were added due to the client breaking other projects.
Proposals can be made though and ideally, openstack/requirements’ gate
jobs will catch any breakage.


I don't think that raising the cap will be feasible anymore with 
incompatible changes like the oslo namespace drop.


What prevents clients from having stable branches except "we never did 
this because up to now it wasn't necessary"?



On 3/5/15, 10:28, "Dr. Jens Rosenboom"  wrote:


On 05/03/15 at 06:02, Nikhil Komawar wrote:

The python-glanceclient release management team is pleased to announce:
  python-glanceclient version 0.16.1 has been released on Thursday,
Mar 5th around 04:56 UTC.


The release includes a bugfix for [1], which is affecting us in
Icehouse; most likely Juno is also affected. However, due to the
requirements caps recently introduced, the bugfix will not be picked up
by the stable branches (caps are <=0.14.2 for Icehouse and <=0.15.0 for
Juno).

The patch itself [2] applies cleanly to the older code, so in theory it
should be possible to build some 0.14.3 release with that and update the
stable requirements accordingly. But I guess that this would require
setting up stable branches for the client git repo, which currently
don't exist.

Are there plans to do this or is there some other way to backport the
fix? I assume that the same issue may happen with other client releases
in the future.

[1] https://bugs.launchpad.net/python-glanceclient/+bug/1423165
[2] https://review.openstack.org/156975


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode

2015-03-05 Thread Jiang, Yunhong

> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Wednesday, March 4, 2015 9:56 AM
> To: Jiang, Yunhong
> Cc: openstack-dev@lists.openstack.org; Xu, Hejie
> Subject: Re: [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode
> 
> On Wed, Mar 04, 2015 at 05:24:53PM +, Jiang, Yunhong wrote:
> > Daniel, thanks for your clarification.
> >
> > Another related question is, what will be the guest's real cpu model
> > is the cpu_model is None? This is about a reported regression at
> 
> The guest CPU will be unspecified - it will be some arbitrary
> hypervisor-decided default which nova cannot know.
> 
> > https://bugs.launchpad.net/nova/+bug/1082414 . When the
> > instance.vcpu_model.mode is None, we should compare the source/target
> > cpu model, as the suggestion from Tony, am I right?
> 
> If CPU model is none, best we can do is compare the *host* CPU of
> the two hosts to make sure the host doesn't lose any features, as
> we have no way of knowing what features the guest is relying on.

Thanks for the clarification. I will cook up a patch for this issue.
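
A rough sketch of that host-CPU comparison, purely illustrative
(virConnect.compareCPU is the real libvirt call; the surrounding
plumbing is assumed):

    import libvirt

    conn = libvirt.open('qemu:///system')  # connection to the destination host
    # source_cpu_xml: the <cpu> element from the source host's capabilities
    source_cpu_xml = '<cpu><arch>x86_64</arch><model>Haswell</model></cpu>'
    ret = conn.compareCPU(source_cpu_xml, 0)
    if ret == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
        raise Exception('destination host CPU lacks required features')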

--jyh

> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-05 Thread Assaf Muller


- Original Message -
> To turn this stuff off, you don't need to revert.  I'd suggest just
> setting the namespace constants to None, and that will result in the same
> thing.
> 
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
> 
> It's definitely a non-backwards compatible change.  That was a conscious
> choice as the interfaces are a bit of a tangled mess, IMO.  The
> non-backwards compatible changes were simpler so I went that route,
> because as far as I could tell, rolling upgrades were not supported.  If
> they do work, it's due to luck.  There's multiple things including the
> lack of testing this scenario to lack of data versioning that make it a
> pretty shaky area.
> 
> However, if it worked for some people, I totally get the argument
> against breaking it intentionally.  As mentioned before, a quick fix if
> needed is to just set the namespace constants to None.  If someone wants
> to do something to make it backwards compatible, that's even better.
> 

I sent out an email to the operators list to get some feedback:
http://lists.openstack.org/pipermail/openstack-operators/2015-March/006429.html

And at least one operator reported that he performed a rolling Neutron upgrade
from I to J successfully. So, I'm agreeing with you agreeing with me that we
probably don't want to mess this up knowingly, even though there is no testing
to make sure that it keeps working.

I'll follow up on IRC with you to figure out who's doing what.

> --
> Russell Bryant
> 
> On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
> > To put it another way, I think we might say that change 154670 broke
> > backward compatibility on the RPC interface.
> > To be fair this probably happened because RPC interfaces were organised
> > in a way such that this kind of breakage was unavoidable.
> > 
> > I think the strategy proposed by Assaf is a viable one. The point about
> > being able to do rolling upgrades only from version N to N+1 is a
> > sensible one, but it has more to do with general backward compatibility
> > rules for RPC interfaces.
> > 
> > In the meanwhile this is breaking a typical upgrade scenario. If a fix
> > allowing agent state updates both namespaced and not is available today
> > or tomorrow, that's fine. Otherwise I'd revert just to be safe.
> > 
> > By the way, we were supposed to have already removed all server rpc
> > callbacks in the appropriate package... did we forget about this one or is
> > there a reason for which it's still in neutron.db?
> > 
> > Salvatore
> > 
> > On 4 March 2015 at 17:23, Miguel Ángel Ajo  > > wrote:
> > 
> > I agree with Assaf; this is an issue across updates, and
> > we may want (if that’s technically possible) to provide
> > access to those functions with/without a namespace.
> > 
> > Or otherwise think about reverting for now until we find a
> > migration strategy
> > 
> > 
> > https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
> > 
> > 
> > Best regards,
> > Miguel Ángel Ajo
> > 
> > On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:
> > 
> >> Hello everyone,
> >>
> >> I'd like to highlight an issue with:
> >> https://review.openstack.org/#/c/154670/
> >>
> >> According to my understanding, most deployments upgrade the
> >> controllers first
> >> and compute/network nodes later. During that time period, all
> >> agents will
> >> fail to report state as they're sending the report_state message
> >> outside
> >> of any namespace while the server is expecting that message in a
> >> namespace.
> >> This is a show stopper as the Neutron server will think all of its
> >> agents are dead.
> >>
> >> I think the obvious solution is to modify the Neutron server code
> >> so that
> >> it accepts the report_state method both in and outside of the
> >> report_state
> >> RPC namespace and chuck that code away in L (Assuming we support
> >> rolling upgrades
> >> only from version N to N+1, which, while unfortunate, is the
> >> behavior I've
> >> seen in multiple places in the code).
> >>
> >> Finally, are there additional similar issues for other RPC methods
> >> placed in a namespace
> >> this cycle?
> >>
> >>
> >> Assaf Muller, Cloud Networking Engineer
> >> Red Hat
> >>
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> 
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)

[openstack-dev] HTTPD Config

2015-03-05 Thread Adam Young
I'm trying to get a grip on what the HTTPD configuration should be for 
Horizon in order for it to use HTTPS.  This really should be the default, 
but the devstack and puppet choice of putting the Horizon config inside 
a VirtualHost *:80 section in the config file makes it tricky.  If I 
remove the

    <VirtualHost *:80>

and corresponding

    </VirtualHost>

then I can enable HTTPS in devstack by running with SSLRequireSSL, and 
it inherits all of the VirtualHost *:443 configuration.


For Keystone, we do:

    <VirtualHost *:5000> (and 35357)

        SSLEngine On
        SSLCertificateFile /opt/stack/data/CA/int-ca/devstack-cert.crt
        SSLCertificateKeyFile /opt/stack/data/CA/int-ca/private/devstack-cert.key

    </VirtualHost>


I'd like to drop port 5000 altogether, as we are using a port assigned 
to a different service.  35357 is also problematic as it is in the 
middle of the ephemeral range.  Since we are talking about running 
everything in one web server anyway, using port 80/443 for all web stuff 
is the right approach.


Yeah, I might have mentioned this a time or two before.

So, assuming we want to be able to make both Horizon and Keystone run on 
port 443 by default, what is the right abstraction for the HTTPD 
configuration?  I am assuming we still want separate values for the 
environment:


WSGIDaemonProcess
WSGIProcessGroup
WSGIApplicationGroup
WSGIPassAuthorization


In Devstack, we set

SetEnv APACHE_RUN_USER ayoung
SetEnv APACHE_RUN_GROUP ayoung

That is for the Horizon service, and making this match for all HTTPD 
services makes sense, but we probably want to allow for separation of 
the users on production deployments.  How should we scope these?  Or does 
it really matter?


We want to make sure we have an extensible approach that will support 
other services running on 443.
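
For concreteness, here is one possible shape for a shared port-443 vhost;
all paths, process names, and URL prefixes below are assumptions for
illustration, not a worked-out proposal:

    <VirtualHost *:443>
        SSLEngine On
        SSLCertificateFile /opt/stack/data/CA/int-ca/devstack-cert.crt
        SSLCertificateKeyFile /opt/stack/data/CA/int-ca/private/devstack-cert.key

        # Keystone under /identity, in its own daemon process
        WSGIDaemonProcess keystone user=keystone group=keystone processes=2
        WSGIScriptAlias /identity /var/www/keystone/main
        <Location /identity>
            WSGIProcessGroup keystone
            WSGIApplicationGroup %{GLOBAL}
            WSGIPassAuthorization On
        </Location>

        # Horizon under /dashboard, isolated from Keystone
        WSGIDaemonProcess horizon user=horizon group=horizon processes=2
        WSGIScriptAlias /dashboard /opt/stack/horizon/openstack_dashboard/wsgi/django.wsgi
        <Location /dashboard>
            WSGIProcessGroup horizon
        </Location>
    </VirtualHost>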




Probably time to update https://wiki.openstack.org/wiki/URLs  with the 
other services.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.policy 0.3.0 (first official) release

2015-03-05 Thread Steve Martinelli
We are thrilled to announce the first official release of oslo.policy:

oslo.policy 0.3.0: Oslo common policy enforcement

The main driving factor behind the graduation work was to have the 
security-sensitive policy code managed in a library. Should a CVE-level 
defect arise, releasing a new version of a library is much less trouble 
than syncing each individual project. Thanks to everyone who assisted with 
the graduation process!
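
A minimal usage sketch (the rule name and credentials below are made up
for illustration):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # loads the policy file per config
    target = {'project_id': 'p1'}
    creds = {'roles': ['admin'], 'project_id': 'p1'}
    if enforcer.enforce('compute:get', target, creds):
        pass  # proceed with the protected operation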

Please report issues through launchpad:

https://bugs.launchpad.net/oslo.policy

oslo.policy is available through PyPI:

https://pypi.python.org/pypi/oslo.policy

Thanks!
Steve Martinelli
OpenStack Development - Keystone Core__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] pci stats format and functional tests

2015-03-05 Thread Murray, Paul (HP Cloud)
Hi All,

I know Yunhong Jiang and Daniel Berrange have been involved in the following, 
but I thought it worth sending to the list for visibility.

While writing code to convert the resource tracker to use the ComputeNode 
object, I realized that the API samples used in the functional tests are not in 
the same format as the PciDevicePool object. For example: 
hypervisor-pci-detail-resp.json has something like this:

"os-pci:pci_stats": [
{
"count": 5,
"extra_info": {
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]"
},
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

My understanding from interactions with yjiang5 in the past leads me to think 
that something like this is what is actually expected:

"os-pci:pci_stats": [
{
"count": 5,
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]",
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

This is the way the PciDevicePool object expects the data structure to be and 
is also the way the libvirt virt driver creates pci device information (i.e. 
without the "extra_info" key). Other than that (which is actually pretty clear) 
I couldn't find anything to tell me definitively if my interpretation is 
correct and I don't want to change the functional tests without being sure they 
are wrong. So if anyone can give some guidance here I would appreciate it.

I separated this stuff out into a patch with a couple of other minor cleanups 
in preparation for the ComputeNode change, see: 
https://review.openstack.org/#/c/161843

Let me know if I am on the right track,

Cheers,
Paul


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as "HP CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] OpenStack options for visually impaired contributors

2015-03-05 Thread John Wood
Hello folks,

I'm curious what tools visually impaired OpenStack contributors have found
helpful for performing Gerrit reviews (the UI is difficult to scan,
especially for in-line code comments) and for Python development in
general?

Thanks,
John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] pci stats format and functional tests

2015-03-05 Thread Daniel P. Berrange
On Thu, Mar 05, 2015 at 07:39:23PM +, Murray, Paul (HP Cloud) wrote:
> Hi All,
> 
> I know Yunhong Jiang and Daniel Berrange have been involved in the following, 
> but I thought it worth sending to the list for visibility.
> 
> While writing code to convert the resource tracker to use the ComputeNode 
> object, I realized that the API samples used in the functional tests are not in 
> the same format as the PciDevicePool object. For example: 
> hypervisor-pci-detail-resp.json has something like this:
> 
> "os-pci:pci_stats": [
> {
> "count": 5,
> "extra_info": {
> "key1": "value1",
> "phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]"
> },
> "keya": "valuea",
> "product_id": "1520",
> "vendor_id": "8086"
> }
> ],
> 
> My understanding from interactions with yjiang5 in the past leads me to think 
> that something like this is what is actually expected:
> 
> "os-pci:pci_stats": [
> {
> "count": 5,
> "key1": "value1",
> "phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]",
> "keya": "valuea",
> "product_id": "1520",
> "vendor_id": "8086"
> }
> ],
> 
> This is the way the PciDevicePool object expects the data structure to be and 
> is also the way the libvirt virt driver creates pci device information (i.e. 
> without the "extra_info" key). Other than that (which is actually pretty 
> clear) I couldn't find anything to tell me definitively if my interpretation 
> is correct and I don't want to change the functional tests without being sure 
> they are wrong. So if anyone can give some guidance here I would appreciate 
> it.

I'm afraid I've not actually done any work with the PCI support at the
API level - only looked at the data reported by the hypervisors in the
get_available_resource method, which isn't API-sensitive as it's internal
code. So we probably need someone else to answer these questions.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.context 0.2.0 release

2015-03-05 Thread Doug Hellmann
We are chuffed to announce the release of:

oslo.context 0.2.0: oslo.context library

This release includes a fix for a stability issue in nova’s
unit tests.

For more details, please see the git log history below and:

http://launchpad.net/oslo.context/+milestone/0.2.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

Changes in /home/dhellmann/repos/openstack/oslo.context 0.1.0..0.2.0


1c4757a ensure we reset contexts when fixture is used
205479f Activate pep8 check that _ is imported

Diffstat (except docs and test files)
-

oslo_context/fixture.py| 12 +++-
tox.ini|  1 -
3 files changed, 27 insertions(+), 2 deletions(-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.vmware 0.11.0 release

2015-03-05 Thread Doug Hellmann
We are pleased to announce the release of:

oslo.vmware 0.11.0: Oslo VMware library for OpenStack projects

For more details, please see the git log history below and:

http://launchpad.net/oslo.vmware/+milestone/0.11.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

Changes in /home/dhellmann/repos/openstack/oslo.vmware 0.10.0..0.11.0
-

4f3bdda Imported Translations from Transifex
a95c0ae Add get_datastore_by_ref method to oslo.vmware
1433975 Change use of random to random.SystemRandom

Diffstat (except docs and test files)
-

.../fr/LC_MESSAGES/oslo.vmware-log-warning.po  |  9 --
oslo.vmware/locale/oslo.vmware-log-warning.pot |  9 --
oslo_vmware/objects/datastore.py   | 27 +++-
oslo_vmware/vim_util.py| 36 +
6 files changed, 128 insertions(+), 5 deletions(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Dr. Jens Rosenboom

On 05/03/15 at 17:52, Doug Hellmann wrote:



On Thu, Mar 5, 2015, at 11:37 AM, Ian Cordasco wrote:

The clients in general do not back port patches. Someone should work with
stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
that those caps were added due to the client breaking other projects.
Proposals can be made though and ideally, openstack/requirements’ gate
jobs will catch any breakage.


Under my cross-project spec proposal [1], we will want to start managing
stable branches for clients specifically for this sort of situation.

Doug

[1] https://review.openstack.org/#/c/155072/


Do you expect this proposal to be applied retrospectively to create 
stable/{icehouse,juno} branches or will this only be applied starting 
from Kilo onwards?


Also, your proposal is talking about libraries in general; is there 
consensus on having the python-*client projects included there? Or 
would it make sense to mention them explicitly?


I'm a bit worried that we have a bug currently making nova and cinder 
API services explode after some uptime, and no short-term way of fixing 
this bug in the stable branches.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] oslo.policy 0.3.0 (first official) release

2015-03-05 Thread Doug Hellmann


On Thu, Mar 5, 2015, at 02:31 PM, Steve Martinelli wrote:
> We are thrilled to announce the first official release of oslo.policy:
> 
> oslo.policy 0.3.0: Oslo common policy enforcement
> 
> The main driving factor behind the graduation work was to have the 
> security-sensitive policy code managed in a library. Should a CVE-level 
> defect arise, releasing a new version of a library is much less trouble 
> than syncing each individual project. Thanks to everyone who assisted
> with 
> the graduation process!
> 
> Please report issues through launchpad:
> 
> https://bugs.launchpad.net/oslo.policy
> 
> oslo.policy is available through PyPI:
> 
> https://pypi.python.org/pypi/oslo.policy
> 
> Thanks!
> Steve Martinelli
> OpenStack Development - Keystone Core
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Congratulations to the oslo.policy core team!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming features (was: [nova] blueprint about multiple workers)

2015-03-05 Thread Attila Fazekas
I see a lot of improvements,
but cPython is still cPython.

When you benchmark query-related things, please try to
get the actual data from the returned objects and try to do
something with the data that is not expected to be optimized out even by
a smarter compiler.

Here is my play script and several numbers:
http://www.fpaste.org/193999/25585380/raw/
Is there any faster ORM way for the same op?

It looks like it is still worth converting the results to dicts
when you access the data multiple times.

A dict is also the typical input type for the JSON serializers. 

The plain dict is good enough if you do not want to manage
which part is changed, especially when you are not planning to `save` it.
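
For anyone who wants to reproduce the effect without the fpaste script,
here is a self-contained toy in the same spirit; the model and sizes are
illustrative only:

    import time
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Item(Base):
        __tablename__ = 'items'
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([Item(name='item-%d' % i) for i in range(1000)])
    session.commit()

    objs = session.query(Item).all()

    start = time.time()
    for _ in range(100):
        sum(len(o.name) for o in objs)       # ORM attribute access every pass
    print('attribute access: %.3fs' % (time.time() - start))

    dicts = [{'id': o.id, 'name': o.name} for o in objs]  # convert once
    start = time.time()
    for _ in range(100):
        sum(len(d['name']) for d in dicts)   # plain dict access
    print('dict access:      %.3fs' % (time.time() - start))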

- Original Message -
> From: "Mike Bayer" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, March 4, 2015 11:30:49 PM
> Subject: Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming  
> features (was: [nova] blueprint about
> multiple workers)
> 
> 
> 
> Mike Bayer  wrote:
> 
> > 
> > 
> > Attila Fazekas  wrote:
> > 
> >> Hi,
> >> 
> >> I wonder what is the planned future of the scheduling.
> >> 
> >> The scheduler does a lot of high field number query,
> >> which is CPU expensive when you are using sqlalchemy-orm.
> >> Does anyone tried to switch those operations to sqlalchemy-core ?
> > 
> > An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
> > overhead from the query side of SQLAlchemy ORM by caching all the work done
> 
> Just to keep the Openstack community informed of what’s upcoming, here’s some more
> detail
> on some of the new SQLAlchemy performance features, which are based on the
> goals I first set up last summer at
> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy.
> 
> As 1.0 features a lot of new styles of doing things that are primarily in
> the name of performance, in order to help categorize and document these
> techniques, 1.0 includes a performance suite in examples/ which features a
> comprehensive collection of common database idioms run under timing and
> function-count profiling. These idioms are broken into major categories like
> “short selects”, “large resultsets”, “bulk inserts”, and serve not only as a
> way to compare the relative performance of different techniques, but also as
> a way to provide example code categorized into use cases that illustrate the
> variety of ways to achieve that case, including the tradeoffs for each,
> across Core and ORM. So in this case, we can see what the “baked” query
> looks like in the “short_selects” suite, which times how long it takes to
> perform 1 queries, each of which return one object or row:
> 
> https://bitbucket.org/zzzeek/sqlalchemy/src/cc58a605d6cded0594f7db1caa840b3c00b78e5a/examples/performance/short_selects.py?at=ticket_3054#cl-73
> 
> The results of this suite look like the following:
> 
> test_orm_query : test a straight ORM query of the full entity. (1
> iterations); total time 7.363434 sec
> test_orm_query_cols_only : test an ORM query of only the entity columns.
> (1 iterations); total time 6.509266 sec
> test_baked_query : test a baked query of the full entity. (1 iterations);
> total time 1.999689 sec
> test_baked_query_cols_only : test a baked query of only the entity columns.
> (1 iterations); total time 1.990916 sec
> test_core_new_stmt_each_time : test core, creating a new statement each time.
> (1 iterations); total time 3.842871 sec
> test_core_reuse_stmt : test core, reusing the same statement (but recompiling
> each time). (1 iterations); total time 2.806590 sec
> test_core_reuse_stmt_compiled_cache : test core, reusing the same statement +
> compiled cache. (1 iterations); total time 0.659902 sec
> 
> Where above, “test_orm” and “test_baked” are both using the ORM API
> exclusively. We can see that the “baked” approach, returning column tuples
> is almost twice as fast as a naive Core approach, that is, one which
> constructs select() objects each time and does not attempt to use any
> compilation caching.
> 
> For the use case of fetching large numbers of rows, we can look at the
> large_resultsets suite
> (https://bitbucket.org/zzzeek/sqlalchemy/src/cc58a605d6cded0594f7db1caa840b3c00b78e5a/examples/performance/large_resultsets.py?at=ticket_3054).
> This suite illustrates a single query which fetches 500K rows. The “Baked”
> approach isn’t relevant here as we are only emitting a query once, however
> the approach we use to fetch rows is significant. Here we can see that
> ORM-based “tuple” approaches are very close in speed to the fetching of rows
> using Core directly. We also have a comparison of Core against raw DBAPI
> access, where we see very little speed improvement; an example where we
> create a very simple object for each DBAPI row fetched is also present to
> illustrate how quickly even the most minimal Python function overhead adds
> up when we do something 500K times.
> 
> test_orm_full_objects_

Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-03-05 Thread Brent Eagles
Hi all,

On Wed, Mar 04, 2015 at 10:52:10AM -0330, Brent Eagles wrote:
> Hi all,


 
> > Thanks Maxime. I've made some updates to the etherpad.
> > (https://etherpad.openstack.org/p/nova_vif_plug_script_spec)
> > I'm going to start some proof of concept work these evening. If I get
> > anything worth reading, I'll put it up as a WIP/Draft review. Whatever
> > state it is in I will be pushing up bits and pieces to github.
> > 
> > https://github.com/beagles/neutron_hacking vif-plug-script
> > https://github.com/beagles/nova vif-plug-script
> > 
> > Cheers,
> > 
> > Brent



The proof-of-concept hacking progressed to the point where I was able to
use the "hacked up" version of the ML2 OVS driver to trigger a test
plug/unplug script, achieving connectivity with a test VM. With the
exception of some boneheaded assumptions, things went reasonably well. I
will squash the commits and post WIP patches on gerrit tomorrow.

In my opinion, the big question now is what to do about rootwrap. The
current proof-of-concept does *not* use it because of the issue of
sudo stripping environment variables. Environment variables can be
passed in on the command line and the 'Env' rootwrap filter employed,
but I don't know what is more workable from a deployment/3rd party
integration point of view: use rootwrap and require rootwrap filters be
added at deploy time; or don't use rootwrap from within nova and leave
it up to the plug script/executable. If rootwrap isn't used when
executing the plug script, the plug script itself could still use
rootwrap. Thoughts?
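
(For reference, the rootwrap route would mean shipping a filter along
these lines; the script path and variable names are placeholders:)

    # /etc/nova/rootwrap.d/vif-plug.filters
    [Filters]
    vif_plug: EnvFilter, env, root, VIF_ID=, VIF_MAC=, /etc/nova/vif-plug.sh

and then invoking the script as, e.g.:

    nova-rootwrap /etc/nova/rootwrap.conf env VIF_ID=... VIF_MAC=... \
        /etc/nova/vif-plug.sh plug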

Cheers,

Brent


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack options for visually impaired contributors

2015-03-05 Thread Mark Voelker
Hi John,

I’m not visually impaired myself, but have you taken a look at Gertty?  It’s a 
console-based interface so you may be able to take advantage of other console 
customizations you’ve made (font sizing for example) and it has options that 
allow you to set color palettes (e.g. to increase contrast), set hotkeys rather 
than using mouse clicks, etc.

https://github.com/stackforge/gertty

At Your Service,

Mark T. Voelker

On Mar 5, 2015, at 2:46 PM, John Wood  wrote:

> Hello folks,
> 
> I'm curious what tools visually impaired OpenStack contributors have found
> helpful for performing Gerrit reviews (the UI is difficult to scan,
> especially for in-line code comments) and for Python development in
> general?
> 
> Thanks,
> John
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] OpenStackClient project proposal

2015-03-05 Thread Dean Troyer
I have submitted a proposal[0] to include OpenStackClient as an official
OpenStack project.  It has existed in the git openstack/ namespace since
the beginning of the project.

In addition we propose to include two library repos that are primarily used
for client support: openstack/cliff and stackforge/os-client-config.  The
Oslo team has confirmed the move for cliff, existing Oslo cores will be
offered the choice to remain core on cliff or be removed, similar to what
has been done with other Oslo libraries that have migrated to different
projects.

os-client-config is a fairly young project created by Monty Taylor to
handle referencing multiple cloud configurations stored in a configuration
file.  It is primarily used by his Shade library and will soon be used by
OpenStackClient.  Similar to cliff, the existing core team will remain as-is.

As of last week (26 Feb 2015 to be exact), the OpenStackClient team has
started holding regular meetings on IRC[1].  A rundown of our evaluation of
OSC with the project requirements can be found in [2].

[0] https://review.openstack.org/161885
[1]
https://wiki.openstack.org/wiki/Meetings/OpenStackClient#Next_Meeting_Agenda
[2] https://etherpad.openstack.org/p/osc-project


Thanks
dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack options for visually impaired contributors

2015-03-05 Thread James E. Blair
Hi,

Also, in either Gerrit's web UI or Gertty, there is an option for
displaying unified diffs, rather than side-by-side (it's a link on the
change page in Gerrit, and a config file option in Gertty).  I do not
know if that would be helpful, but would be happy to learn.
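
(In Gertty that is a one-line config change; the option name below is
taken from Gertty's example reference config, so treat it as an
assumption:)

    # in ~/.gertty.yaml
    diff-view: unified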

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Bryan D. Payne
To security-doc core and other interested parties,

Nathaniel Dillon has been working consistently on the security guide since
our first mid-cycle meet up last summer.  In that time he has come to
understand the inner workings of the book and the doc process very well.
He has also been a consistent participant in improving the book content and
working to define what the book should be going forward.

I'd like to bring him on as a core member of security-doc so that he can
help with the review and approval process for new changes to the book.
Please chime in with your agreement or concerns.

Cheers,
-bryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Nathan Kinder


On 03/05/2015 01:14 PM, Bryan D. Payne wrote:
> To security-doc core and other interested parties,
> 
> Nathaniel Dillon has been working consistently on the security guide
> since our first mid-cycle meet up last summer.  In that time he has come
> to understand the inner workings of the book and the doc process very
> well.  He has also been a consistent participant in improving the book
> content and working to define what the book should be going forward.
> 
> I'd like to bring him on as a core member of security-doc so that he can
> help with the review and approval process for new changes to the book. 
> Please chime in with your agreement or concerns.

+1 on making Nathaniel core!

-NGK

> 
> Cheers,
> -bryan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Clark, Robert Graham
On 05/03/2015 21:37, "Nathan Kinder"  wrote:

>
>
>On 03/05/2015 01:14 PM, Bryan D. Payne wrote:
>> To security-doc core and other interested parties,
>> 
>> Nathaniel Dillon has been working consistently on the security guide
>> since our first mid-cycle meet up last summer.  In that time he has come
>> to understand the inner workings of the book and the doc process very
>> well.  He has also been a consistent participant in improving the book
>> content and working to define what the book should be going forward.
>> 
>> I'd like to bring him on as a core member of security-doc so that he can
>> help with the review and approval process for new changes to the book.
>> Please chime in with your agreement or concerns.
>
>+1 on making Nathaniel core!
>
>-NGK
>
>> 
>> Cheers,
>> -bryan

+1 Excellent idea.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Some questions about Openstack version and module versions

2015-03-05 Thread Kaluarachchi, Weranga
Hello,
I'm a developer at SAP and I'm interested in understanding how OpenStack works. I 
would really appreciate it if some of you could answer the following questions:


1.   How is a given version of OpenStack connected to the versions of its modules? 
E.g. OpenStack Juno and Nova folsom-rc3.

2.   If I want to install OpenStack Juno, how do I know which versions of the 
modules I have to install, given that each module has its own versions?

3.   What are the naming conventions for the modules? Is there a connection 
between OpenStack version naming and the module version naming?

4.   As developers, do we need to keep track of module updates?

5.   How do you update OpenStack from an older version to a newer version? 
Do we have to update module by module?

Thank you very much in advance.

Weranga Kaluarachchi
SAP Canada | 910 Mainland Street, Vancouver, BC, V6B 1A9
Phone: 604-974-2297 | mailto: 
weranga.kaluarach...@sap.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Monty Taylor
On 03/05/2015 12:02 AM, Nikhil Komawar wrote:
> The python-glanceclient release management team is pleased to announce:
> python-glanceclient version 0.16.1 has been released on Thursday, Mar 5th 
> around 04:56 UTC.
> 
> For more information, please find the details at:
> 
> 
> https://launchpad.net/python-glanceclient/+milestone/v0.16.1
> 
> Please report the issues through launchpad:
> 
> https://bugs.launchpad.net/python-glanceclient

Thank you! This includes a fix for my favorite glanceclient bug.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removal of v3 in tree testing

2015-03-05 Thread Sean Dague
On 03/04/2015 07:48 PM, GHANSHYAM MANN wrote:
> Hi Sean,
> 
> Yes having V3 directory/file names is very confusing now.
> 
> But current v3 sample tests cases tests v2.1 plugins. As /v3 url is
> redirected to v21 plugins, v3 sample tests make call through v3 url and
> test v2.1 plugins. 
> 
> I think we can start cleaning up the *v3* from everywhere and change it
> to v2.1 or much appropriate name.
> 
> To cleanup the same from sample files, I was planning to rearrange
> sample files structure. Please check if that direction looks good (still
> need to release patch for directory restructure)
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:sample_files_structure,n,z
>  

I had another chat with Alex this morning on IRC. I think my confusion
is that I don't feel like I understand how we get down to 1 set of API
samples in the tree based on that set of posted patches.

It seems like there should only be 1 set of samples in docs/ and one set
of templates. I would also argue that we should only have 1 set of tests
(though that I'm mid term flexible on).

It seems that if our concern is that both the v2 and v21 endpoints need
to have the same results, we could change the functional tox target to
run twice, once with v2 and once with v21 set as the v2 endpoint.
Eventually we'll be able to drop v2 on v2.

Anyway, in order to both assist my own work unwinding the test tree, and
to help review your work there, can you lay out your vision for cleaning
this up with all the steps involved? Hopefully that will cut down the
confusion and make all this work move faster.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Anne Gentle
Sounds great! Thanks for growing the team, too!

On Thu, Mar 5, 2015 at 3:14 PM, Bryan D. Payne  wrote:

> To security-doc core and other interested parties,
>
> Nathaniel Dillon has been working consistently on the security guide since
> our first mid-cycle meet up last summer.  In that time he has come to
> understand the inner workings of the book and the doc process very well.
> He has also been a consistent participant in improving the book content and
> working to define what the book should be going forward.
>
> I'd like to bring him on as a core member of security-doc so that he can
> help with the review and approval process for new changes to the book.
> Please chime in with your agreement or concerns.
>
> Cheers,
> -bryan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Christian Berendt
On 03/05/2015 10:14 PM, Bryan D. Payne wrote:
> I'd like to bring him on as a core member of security-doc so that he can
> help with the review and approval process for new changes to the book. 
> Please chime in with your agreement or concerns.

+1.

Christian.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] pci stats format and functional tests

2015-03-05 Thread Jiang, Yunhong
Paul, you are right that the 'extra_info' should not be in the 
os-pci:pci_stats, since it's not part of 'pool-keys' anymore, but I'm not sure 
if both 'key1' and 'phys_function' will be part of the pci_stats.

Thanks
--jyh

From: Murray, Paul (HP Cloud) [mailto:pmur...@hp.com]
Sent: Thursday, March 5, 2015 11:39 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] pci stats format and functional tests

Hi All,

I know Yunhong Jiang and Daniel Berrange have been involved in the following, 
but I thought it worth sending to the list for visibility.

While writing code to convert the resource tracker to use the ComputeNode 
object, I realized that the api samples used in the functional tests are not in 
the same format as the PciDevicePool object. For example: 
hypervisor-pci-detail-resp.json has something like this:

"os-pci:pci_stats": [
{
"count": 5,
"extra_info": {
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]"
},
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

My understanding from interactions with yjiang5 in the past leads me to think 
that something like this is what is actually expected:

"os-pci:pci_stats": [
{
"count": 5,
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]",
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

This is the way the PciDevicePool object expects the data structure to be and 
is also the way the libvirt virt driver creates pci device information (i.e. 
without the "extra_info" key). Other than that (which is actually pretty clear) 
I couldn't find anything to tell me definitively if my interpretation is 
correct and I don't want to change the functional tests without being sure they 
are wrong. So if anyone can give some guidance here I would appreciate it.

I separated this stuff out into a patch with a couple of other minor cleanups 
in preparation for the ComputeNode change, see: 
https://review.openstack.org/#/c/161843

Let me know if I am on the right track,

Cheers,
Paul


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as "HP CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] pci stats format and functional tests

2015-03-05 Thread Robert Li (baoli)
“extra_info” is no longer a key in the stats pool, nor the “physical_function”. 
If you check pci/stats.py, the keys are pool_keys = ['product_id', 
'vendor_id', 'numa_node’] plus whatever tags are used in the whitelist. So I 
believe it’s something like this:
"os-pci:pci_stats": [
{
"count": 5,
"key1": "value1”,
…
“keyn": "valuen",
"product_id": "1520",
"vendor_id": “8086”,
“numa_node"
},
],

And each stats entry may have different keys.
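To make the shape concrete, here is a rough sketch of how such a pool entry 
could be built. This is an illustration based on the pool_keys description 
above, not the actual nova/pci/stats.py code, and the tag names are 
hypothetical:

    # Illustration only: pool entries keyed on pool_keys plus whitelist
    # tags, with everything at the top level and no nested 'extra_info'.
    POOL_KEYS = ['product_id', 'vendor_id', 'numa_node']

    def make_pool(dev, tag_names):
        """dev: dict describing one PCI device; tag_names: whitelist tags."""
        pool = {key: dev.get(key) for key in POOL_KEYS}
        pool.update((tag, dev.get(tag)) for tag in tag_names)
        pool['count'] = 0  # incremented as matching devices are added
        return pool

    # Example:
    # make_pool({'product_id': '1520', 'vendor_id': '8086', 'numa_node': 0,
    #            'key1': 'value1'}, ['key1'])
    # -> {'product_id': '1520', 'vendor_id': '8086', 'numa_node': 0,
    #     'key1': 'value1', 'count': 0}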

thanks,
—Robert

On 3/5/15, 5:16 PM, "Jiang, Yunhong" 
mailto:yunhong.ji...@intel.com>> wrote:

Paul, you are right that the ‘extra_info’ should not be in the 
os-pci:pci_stats, since it’s not part of ‘pool-keys’ anymore, but I’m not sure 
if both ‘key1’ and ‘phys_function’ will be part of the pci_stats.

Thanks
--jyh

From: Murray, Paul (HP Cloud) [mailto:pmur...@hp.com]
Sent: Thursday, March 5, 2015 11:39 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] pci stats format and functional tests

Hi All,

I know Yunhong Jiang and Daniel Berrange have been involved in the following, 
but I thought it worth sending to the list for visibility.

While writing code to convert the resource tracker to use the ComputeNode 
object realized that the api samples used in the functional tests are not the 
same as the format as the PciDevicePool object. For example: 
hypervisor-pci-detail-resp.json has something like this:

"os-pci:pci_stats": [
{
"count": 5,
"extra_info": {
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]”
},
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

My understanding from interactions with yjiang5 in the past leads me to think 
that something like this is what is actually expected:

"os-pci:pci_stats": [
{
"count": 5,
"key1": "value1",
"phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]”,
"keya": "valuea",
"product_id": "1520",
"vendor_id": "8086"
}
],

This is the way the PciDevicePool object expects the data structure to be and 
is also the way the libvirt virt driver creates pci device information (i.e. 
without the “extra_info” key). Other than that (which is actually pretty clear) 
I couldn’t find anything to tell me definitively if my interpretation is 
correct and I don’t want to change the functional tests without being sure they 
are wrong. So if anyone can give some guidance here I would appreciate it.

I separated this stuff out into a patch with a couple of other minor cleanups 
in preparation for the ComputeNode change, see: 
https://review.openstack.org/#/c/161843

Let me know if I am on the right track,

Cheers,
Paul


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as "HP CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-05 Thread Mikhail Fedosin
I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines
https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
and it has a section "For cores". It's easy to include some concrete rules
there to add more clarity.

2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka :

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
> > Yes, it's absolutely right. For example, Nova and Neutron have
> > official rules for that:
> > https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: "A
> > member of the team may be removed at any time by the PTL. This is
> > typically due to a drop off of involvement by the member such that
> > they are no longer meeting expectations to maintain team
> > membership". https://wiki.openstack.org/wiki/NeutronCore "The PTL
> > may remove a member from neutron-core at any time. Typically when a
> > member has decreased their involvement with the project through a
> > drop in reviews and participation in general project development,
> > the PTL will propose their removal and remove them. Members who
> > have previously been core may be fast-tracked back into core if
> > their involvement picks back up" So, as Louis has mentioned, it's a
> > routine work, and why should we be any different? Also, I suggest
> > to write the same wiki document for Glance to prevent these issues
> > in the future.
> >
>
> Does the rule belong to e.g. governance repo? It seems like a sane
> requirement for core *reviewers* to actually review code, no? Or are
> there any projects that would not like to adopt such a rule formally?
>
> /Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJU+GxdAAoJEC5aWaUY1u579mEIAMN/wucsahaZ0yMT2/eo8t05
> rIWI+lBLjNueWJgB+zNbVlVcsKBZ/hl4J0O3eE65RtlTS5Rta5hv0ymyRL1nnUZH
> g/tL3ogEF0SsSBkiavVh3klGmUwsvQ+ygPN5rVgnbiJ+uO555EPlbiHwZHbcjBoI
> lyUjIhWzUCX26wq7mgiTsY858UgdEt3urVHD9jTE2WNszMRLXQ7vsoAik9xDfISz
> E0eZ8WVQKlNHNox0UoKbobdb3YDhmY3Ahp9Fj2cT1IScyQGORnm0hXV3+pRdWNhD
> 1M/gDwAf97F1lfNxPpy4JCGutbe5zoPQYLpJExzaXkqqARxUnwhB1gZ9lEG8l2o=
> =lcLY
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-05 Thread Doug Hellmann


On Thu, Mar 5, 2015, at 03:15 PM, Dr. Jens Rosenboom wrote:
> On 05/03/15 at 17:52, Doug Hellmann wrote:
> >
> >
> > On Thu, Mar 5, 2015, at 11:37 AM, Ian Cordasco wrote:
> >> The clients in general do not back port patches. Someone should work with
> >> stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
> >> that those caps were added due to the client breaking other projects.
> >> Proposals can be made though and ideally, openstack/requirements’ gate
> >> jobs will catch any breakage.
> >
> > Under my cross-project spec proposal [1], we will want to start managing
> > stable branches for clients specifically for this sort of situation.
> >
> > Doug
> >
> > [1] https://review.openstack.org/#/c/155072/
> 
> Do you expect this proposal to be applied retrospectively to create 
> stable/{icehouse,juno} branches or will this only be applied starting 
> from Kilo onwards?

Retroactively to stable branches still under support when we need to
backport something. For Kilo we should make them as part of the release
process.

> 
> Also, your proposal is talking about libraries in general, is there 
> consensus on having the python-*client projects being included there? Or 
> would it make sense to mention them explicitly?

They're libraries, so I thought they were covered, but we can mention
them explicitly if it makes a difference.

> 
> I'm a bit worried that we have a bug currently making nova and cinder 
> API services explode after some uptime and no short-term way of fixing 
> this bug in the stable branches.

I can understand that. We need to get someone to determine where to
create the branch for the backports and then we can go ahead with
whatever changes need to be reviewed.

Doug

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Keystone SAML testshib error

2015-03-05 Thread Akshik DBK
Stuck with the following:
https://zenodo.org/record/11982/files/CERN_openlab_Luca_Tartarini.pdf

When I tried:

python ecp.py -d testshib https://Centos-SAML:5000/v3/OSFEDERATION/identity_providers/testshib/protocols/saml2/auth myself

I'm getting:

send: 'GET /v3/OSFEDERATION/identity_providers/testshib/protocols/saml2/auth HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: Centos-SAML:5000\r\nPaos: ver="urn:liberty:paos:2003-08";"urn:oasis:names:tc:SAML:2.0:profiles:SSO:ecp"\r\nConnection: close\r\nAccept: text/html; application/vnd.paos+xml\r\nUser-Agent: Python-urllib/2.6\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Date: Fri, 06 Mar 2015 00:23:16 GMT
header: Server: Apache/2.2.15 (CentOS)
header: Vary: Accept-Encoding
header: Content-Length: 617
header: Connection: close
header: Content-Type: text/html; charset=iso-8859-1
First request to SP failed: HTTP Error 500: Internal Server Error

And in the logs I get these errors:

[Fri Mar 06 05:53:16 2015] [debug] mod_shib.cpp(320): [client 10.1.193.248] get_request_config created per-request structure
[Fri Mar 06 05:53:16 2015] [info] Initial (No.1) HTTPS request received for child 2 (server Centos-SAML:5000)
[Fri Mar 06 05:53:16 2015] [debug] mod_shib.cpp(917): [client 10.1.193.248] shib_fixups entered in pid (31355)
[Fri Mar 06 05:53:16 2015] [debug] mod_shib.cpp(917): [client 10.1.193.248] shib_fixups entered in pid (31355)
[Fri Mar 06 05:53:16 2015] [info] [client 10.1.193.248] mod_wsgi (pid=27246, process='keystone-public', application=''): Loading WSGI script '/var/www/cgi-bin/keystone/main'.
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248] mod_wsgi (pid=27246): Target WSGI script '/var/www/cgi-bin/keystone/main' cannot be loaded as Python module.
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248] mod_wsgi (pid=27246): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/main'.
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248] Traceback (most recent call last):
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248]   File "/var/www/cgi-bin/keystone/main", line 42, in <module>
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248]     config.setup_logging(CONF)
[Fri Mar 06 05:53:16 2015] [error] [client 10.1.193.248] TypeError: setup_logging() takes no arguments (1 given)
[Fri Mar 06 05:53:16 2015] [debug] ssl_engine_kernel.c(1863): OpenSSL: Write: SSL negotiation finished successfully
[Fri Mar 06 05:53:16 2015] [info] [client 10.1.193.248] Connection closed to child 2 with standard shutdown (server Centos-SAML:5000)
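
For reference, the 500 here comes from the Keystone WSGI entry script itself 
rather than from the SAML exchange: the installed keystone's setup_logging() 
takes no arguments, but the script passes CONF. A minimal sketch of the likely 
fix around line 42 of /var/www/cgi-bin/keystone/main (an assumption; verify 
against the keystone version actually installed):

    from keystone import config

    # config.setup_logging(CONF)  # old call; raises the TypeError in the log
    config.setup_logging()        # installed setup_logging takes no arguments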
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Bryan D. Payne
Thanks everyone.  I've added Nathaniel to security-doc core.  Welcome
Nathaniel!

Cheers,
-bryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [Third-party-announce] Cinder Merged patch broke HDS driver

2015-03-05 Thread Asselin, Ramy
Marcus,

Don’t turn off ci, because then you could miss another regression.

Instead, simply exclude that test case:
e.g.
export 
DEVSTACK_GATE_TEMPEST_REGEX="^(?=.*tempest.api.volume)(?!.*test_snapshots_actions).*"

Ramy


From: Marcus Vinícius Ramires do Nascimento [mailto:marcus...@gmail.com]
Sent: Wednesday, March 04, 2015 1:29 PM
To: openstack-dev@lists.openstack.org; Announcements for third party CI 
operators.
Subject: [openstack-dev] [cinder] [Third-party-announce] Cinder Merged patch 
broke HDS driver

Hi folks,

This weekend, the patch "Snapshot and volume objects" 
(https://review.openstack.org/#/c/133566) was merged and this one broke our HDS 
HBSD driver and the respective CI.

When CI tries to run tempest.api.volume.admin.test_snapshots_actions the 
following error is shown:

2015-03-04 14:00:34.368 ERROR oslo_messaging.rpc.dispatcher 
[req-c941792b-963f-4a7d-a6ac-9f1d9f823fd1 915289d113dd4f9db2f2a792c18b3564 
984bc8d228c8497689dde60dc2b8f300] Exception during message handling: '' object 
has no attribute 'snapshot_metadata'
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in 
wrapper
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 156, in lso_inner1
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return 
lso_inner2(inst, context, snapshot, **kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
431, in inner
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 155, in lso_inner2
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return 
f(*_args, **_kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 635, in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher snapshot.save()
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 625, in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher 
self.driver.delete_snapshot(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in 
wrapper
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_iscsi.py", line 314, in 
delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher 
self.common.delete_snapshot(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py", line 635, in 
delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher is_vvol = 
self.get_snapshot_is_vvol(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py", line 189, in 
get_snapshot_is_vvol
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return 
self.get_is_vvol(snapshot, 'snapshot_metadata')
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py", line 183, in 
get_is_vvol
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return 
self.get_value(obj, name, 'type') == 'V-V

Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming features (was: [nova] blueprint about multiple workers)

2015-03-05 Thread Mike Bayer


Attila Fazekas  wrote:

> I see lot of improvements,
> but cPython is still cPython.
> 
> When you benchmarking query related things, please try to
> get the actual data from the returned objects

that goes without saying. I’ve been benching SQLAlchemy and DBAPIs for many
years. New performance improvements tend to be the priority for pretty much
every major release.

> and try to do
> something with data what is not expected to be optimized out even by
> a smarter compiler.

Well I tend to favor breaking out the different elements into individual
tests here, though I guess if you’re trying to trick a JIT then the more
composed versions may be more relevant. For example, I could already tell
you that the AttributeDict thing would perform terribly without having to
mix it up with the DB access. __getattr__ is a poor performer (learned that
in SQLAlchemy 0.1 about 9 years ago).

> Here is my play script and several numbers:
> http://www.fpaste.org/193999/25585380/raw/
> Is there any faster ORM way for the same op?

Absolutely, as I’ve been saying for months all the way back in my wiki entry
on forward, query for individual columns, also skip the session.rollback()
and do a close() instead (the transaction is still rolled back, we just skip
the bookkeeping we don’t need).  You get the nice attribute access 
pattern too:

http://www.fpaste.org/194098/56040781/

def query_sqla_cols(self):
"SQLAlchemy yield(100) named tuples"
session = self.Session()
start = time.time()
summary = 0
for obj in session.query(
Ints.id, Ints.A, Ints.B, Ints.C).yield_per(100):
summary += obj.id + obj.A + obj.B + obj.C
session.rollback()
end = time.time()
return [end-start, summary]

def query_sqla_cols_a3(self):
"SQLAlchemy yield(100) named tuples 3*access"
session = self.Session()
start = time.time()
summary = 0
for obj in session.query(
Ints.id, Ints.A, Ints.B, Ints.C).yield_per(100):
summary += obj.id + obj.A + obj.B + obj.C
summary += obj.id + obj.A + obj.B + obj.C
summary += obj.id + obj.A + obj.B + obj.C
session.rollback()
end = time.time()
return [end-start, summary/3]


Here’s that:

0 SQLAlchemy yield(100) named tuples: time: 0.635045 (data [18356026L])
1 SQLAlchemy yield(100) named tuples: time: 0.630911 (data [18356026L])
2 SQLAlchemy yield(100) named tuples: time: 0.641687 (data [18356026L])
0 SQLAlchemy yield(100) named tuples 3*access: time: 0.807285 (data 
[18356026L])
1 SQLAlchemy yield(100) named tuples 3*access: time: 0.814160 (data 
[18356026L])
2 SQLAlchemy yield(100) named tuples 3*access: time: 0.829011 (data 
[18356026L])

compared to the fastest Core test:

0 SQlAlchemy core simple: time: 0.707205 (data [18356026L])
1 SQlAlchemy core simple: time: 0.702223 (data [18356026L])
2 SQlAlchemy core simple: time: 0.708816 (data [18356026L])


This is using 1.0’s named tuple which is faster than the one in 0.9. As I
discussed in the migration notes I linked, over here
http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html#new-keyedtuple-implementation-dramatically-faster
is where I discuss how I came up with that named tuple approach.

In 0.9, the tuples are much slower (but still faster than straight entities):

0 SQLAlchemy yield(100) named tuples: time: 1.083882 (data [18356026L])
1 SQLAlchemy yield(100) named tuples: time: 1.097783 (data [18356026L])
2 SQLAlchemy yield(100) named tuples: time: 1.113621 (data [18356026L])
0 SQLAlchemy yield(100) named tuples 3*access: time: 1.204280 (data 
[18356026L])
1 SQLAlchemy yield(100) named tuples 3*access: time: 1.245768 (data 
[18356026L])
2 SQLAlchemy yield(100) named tuples 3*access: time: 1.258327 (data 
[18356026L])

Also note that the difference in full object fetches for 0.9 vs. 1.0 are quite 
different:

0.9.8:

0 SQLAlchemy yield(100): time: 2.802273 (data [18356026L])
1 SQLAlchemy yield(100): time: 2.778059 (data [18356026L])
2 SQLAlchemy yield(100): time: 2.841441 (data [18356026L])

1.0:

0 SQLAlchemy yield(100): time: 2.019153 (data [18356026L])
1 SQLAlchemy yield(100): time: 2.052810 (data [18356026L])
2 SQLAlchemy yield(100): time: 2.000401 (data [18356026L])


The tests you have here are favoring row fetches over individual query time.
The original speed complaints were talking about lots of queries, not as
much lots of rows. The baked strategy is appropriate for the lots of queries
use case. Feel free to check out the performance examples I linked which
break all these out.
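
For those who haven't seen it, the baked strategy caches the query-construction
and compilation work across calls; a minimal sketch with the 1.0 extension,
reusing the Ints model and Session factory from the pasted script:

    from sqlalchemy.ext import baked

    bakery = baked.bakery()

    def query_baked(Session):
        # The lambda body only runs on a cache miss; afterwards the compiled
        # query is reused, which is what helps the lots-of-queries case.
        session = Session()
        bq = bakery(lambda s: s.query(Ints.id, Ints.A, Ints.B, Ints.C))
        total = 0
        for row in bq(session).all():
            total += row.id + row.A + row.B + row.C
        session.close()
        return total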




> Looks like still worth to convert the results to dict,
> when you access the data multiple times.
> 
> dict is also the typical input type for the json serializers. 
> 
> The plain dict is good enough if you do not want to manage
> which part is changed, especially when you a

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Rui Chen
Thank you very much for the in-depth discussion about this topic, @Nikola and
@Sylvain.

I agree that we should solve the technical debt firstly, and then make the
scheduler better.

Best Regards.

2015-03-05 21:12 GMT+08:00 Sylvain Bauza :

>
> Le 05/03/2015 13:00, Nikola Đipanov a écrit :
>
>  On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
>>
>>> Le 04/03/2015 04:51, Rui Chen a écrit :
>>>
 Hi all,

 I want to make it easy to launch a bunch of scheduler processes on a
 host, multiple scheduler workers will make use of multiple processors
 of host and enhance the performance of nova-scheduler.

 I had registered a blueprint and commit a patch to implement it.
 https://blueprints.launchpad.net/nova/+spec/scheduler-
 multiple-workers-support

 This patch had applied in our performance environment and pass some
 test cases, like: concurrent booting multiple instances, currently we
 didn't find inconsistent issue.

 IMO, nova-scheduler should been scaled horizontally on easily way, the
 multiple workers should been supported as an out of box feature.

 Please feel free to discuss this feature, thanks.

>>>
>>> As I said when reviewing your patch, I think the problem is not just
>>> making sure that the scheduler is thread-safe, it's more about how the
>>> Scheduler is accounting resources and providing a retry if those
>>> consumed resources are higher than what's available.
>>>
>>> Here, the main problem is that two workers can actually consume two
>>> distinct resources on the same HostState object. In that case, the
>>> HostState object is decremented by the number of taken resources (modulo
>>> what means a resource which is not an Integer...) for both, but nowhere
>>> in that section, it does check that it overrides the resource usage. As
>>> I said, it's not just about decorating a semaphore, it's more about
>>> rethinking how the Scheduler is managing its resources.
>>>
>>>
>>> That's why I'm -1 on your patch until [1] gets merged. Once this BP will
>>> be implemented, we will have a set of classes for managing heterogeneous
>>> types of resouces and consume them, so it would be quite easy to provide
>>> a check against them in the consume_from_instance() method.
>>>
>>>  I feel that the above explanation does not give the full picture in
>> addition to being factually incorrect in several places. I have come to
>> realize that the current behaviour of the scheduler is subtle enough
>> that just reading the code is not enough to understand all the edge
>> cases that can come up. The evidence being that it trips up even people
>> that have spent significant time working on the code.
>>
>> It is also important to consider the design choices in terms of
>> tradeoffs that they were trying to make.
>>
>> So here are some facts about the way Nova does scheduling of instances
>> to compute hosts, considering the amount of resources requested by the
>> flavor (we will try to put the facts into a bigger picture later):
>>
>> * Scheduler receives request to chose hosts for one or more instances.
>> * Upon every request (_not_ for every instance as there may be several
>> instances in a request) the scheduler learns the state of the resources
>> on all compute nodes from the central DB. This state may be inaccurate
>> (meaning out of date).
>> * Compute resources are update by each compute host periodically. This
>> is done by updating the row in the DB.
>> * The wall-clock time difference between the scheduler deciding to
>> schedule an instance, and the resource consumption being reflected in
>> the data the scheduler learns from the DB can be arbitrarily long (due
>> to load on the compute nodes and latency of message arrival).
>> * To cope with the above, there is a concept of retrying the request
>> that fails on a certain compute node due to the scheduling decision
>> being made with data stale at the moment of build, by default we will
>> retry 3 times before giving up.
>> * When running multiple instances, decisions are made in a loop, and
>> internal in-memory view of the resources gets updated (the widely
>> misunderstood consume_from_instance method is used for this), so as to
>> keep subsequent decisions as accurate as possible. As was described
>> above, this is all thrown away once the request is finished.
>>
>> Now that we understand the above, we can start to consider what changes
>> when we introduce several concurrent scheduler processes.
>>
>> Several cases come to mind:
>> * Concurrent requests will no longer be serialized on reading the state
>> of all hosts (due to how eventlet interacts with mysql driver).
>> * In the presence of a single request for a large number of instances
>> there is going to be a drift in accuracy of the decisions made by other
>> schedulers as they will not have the accounted for any of the instances
>> until they actually get claimed on their respective hosts.
>>
>> All of the above limitations w

Re: [openstack-dev] [Trove] request to backport the fix for bug 1333852 to juno

2015-03-05 Thread Amrith Kumar
Ihar, please see responses (inline).

-amrith

| -Original Message-
| From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
| Sent: Thursday, March 05, 2015 9:43 AM
| To: openstack-dev@lists.openstack.org
| Subject: Re: [openstack-dev] [Trove] request to backport the fix for bug
| 1333852 to juno
| 
| -BEGIN PGP SIGNED MESSAGE-
| Hash: SHA1
| 
| Not being involved in trove, but some general comments on backports.
| 
| On 03/04/2015 08:33 PM, Amrith Kumar wrote:
| > There has been a request to backport the fix for bug 1333852
| > (https://bugs.launchpad.net/trove/+bug/1333852) which was fixed in
| > Kilo into the Juno release.
| >
| 
| It would be easier if you directly link to patches in question.

[amrith] These are the patches that merged into Kilo
https://review.openstack.org/#/c/115811/
https://review.openstack.org/#/c/123301/

| 
| >
| >
| > The change includes a database change and a small change to the Trove
| > API. The change also requires a change to the trove client and the
| > trove controller code (trove-api). It is arguable whether this is a
| > backport or a new feature; I'm inclined to think it is more of an
| > extension of an existing feature than a new feature.
| >
| 
| It depends on what is a 'database change' above. If it's a schema change,
| then it's a complete no-go for backports. A change to API is also

[amrith] Yes, it is a schema change. See 
https://review.openstack.org/#/c/115811/

| suspicious, but without details it's hard to say. Finally, the need to
| patch a client to utilize the change probably means that it's not a bug
| fix (or at least, not an easy one).

[amrith] The change to the API is not that complex; it changes flavor from an 
int to a string and adds logic that knows how to tell one from the other.
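
For illustration, the kind of dispatch this implies (a hypothetical helper, not 
the actual Trove code):

    def normalize_flavor_id(flavor):
        """Accept a legacy integer flavor ID or a UUID/name string."""
        try:
            return int(flavor)   # old behaviour: integer IDs keep working
        except (TypeError, ValueError):
            return str(flavor)   # new behaviour: UUIDs/names pass through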

| 
| Where do those flavor UUIDs come from? Were they present/supported in
| nova/juno?

[amrith] Yes. On a nova boot call for example, you could specify flavor-id 
thusly:

  --flavor  Name or ID of flavor (see 'nova flavor-list').

With Trove (prior to this fix) you could only specify the ID which would be an 
integer.

| 
| >
| >
| > As such, I *don't* believe that this change should be considered a
| > good candidate for backport to Juno but I'm going to see whether there
| > is sufficient interest in this, to consider this change to be an
| > exception.
| >
| 
| Without details, it's hard to say for sure, but for initial look, the
| change you describe is too far stretching and has lots of issues that
| would make backport hard if not impossible.
|

[amrith] I agree. But I would appreciate input from others as well.
 
| /Ihar
| -BEGIN PGP SIGNATURE-
| Version: GnuPG v1
| 
| iQEcBAEBAgAGBQJU+GtwAAoJEC5aWaUY1u57+M4IAMjuF/f7OTMkaT1dxmy8GpV4
| /RoF06pPR5hU1oIjbjyvhRaqzTcKJBNqhuLzV7WhbynkyEuctg+QSqM/d2VQZwpp
| Gt59XEiIuLUYn46oC4J/S0DZBYHjRiZqcEXrJRozfzIvMQzqkCH+TeBxo9J5E/U4
| /I2rkGkDUm+XJa88M5PsTJP6Vp0nAvKQLa/Vjpe4/Ute2YMGlvFeH4NAsBy8XVWe
| BSJAIds0Abe1+uNwvaDeRbKaHwcgdAG/ia9WUO+8QHx1oXpLH/190o2P+xfZ8cno
| guPR2kSrzC0JLO5lfvRkjnDJd53kj/0tMf12xjzHHBC++grLUEs9i2AsvV/Dtyk=
| =s/sF
| -END PGP SIGNATURE-
| 
| __
| OpenStack Development Mailing List (not for usage questions)
| Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Dillon, Nathaniel
Awesome, thanks everyone! Very excited to help out with the great work already 
going on!

Nathaniel

> On Mar 5, 2015, at 5:05 PM, Bryan D. Payne  wrote:
> 
> Thanks everyone.  I've added Nathaniel to security-doc core.  Welcome 
> Nathaniel!
> 
> Cheers,
> -bryan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [Third-party-announce] Cinder Merged patch broke HDS driver

2015-03-05 Thread Thang Pham
I commented on this in your patch (https://review.openstack.org/#/c/161837/)
and posted a patch to help you along -
https://review.openstack.org/#/c/161945/.  This patch will
make "create_snapshot" and "create_volume_from_snapshot" method use
snapshot objects.  By using snapshot objects in both methods, you could now
update the driver to use snapshot objects, instead of the workaround you
had originally posted.
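
For illustration, the shape of that driver change (a sketch only; the
'metadata' attribute name is an assumption to verify against
cinder.objects.Snapshot):

    def get_snapshot_is_vvol(snapshot):
        # Old dict-style access that now fails on Snapshot objects:
        #     snapshot['snapshot_metadata']
        # With objects, the metadata is exposed as a plain dict attribute:
        return snapshot.metadata.get('type') == 'V-VOL'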

Regards,
Thang

On Thu, Mar 5, 2015 at 8:10 PM, Asselin, Ramy  wrote:

>  Marcus,
>
>
>
> Don’t turn off ci, because then you could miss another regression.
>
>
>
> Instead, simply exclude that test case:
>
> e.g.
>
> export
> DEVSTACK_GATE_TEMPEST_REGEX="^(?=.*tempest.api.volume)(?!.*test_snapshots_actions).*"
>
>
>
> Ramy
>
>
>
>
>
> *From:* Marcus Vinícius Ramires do Nascimento [mailto:marcus...@gmail.com]
>
> *Sent:* Wednesday, March 04, 2015 1:29 PM
> *To:* openstack-dev@lists.openstack.org; Announcements for third party CI
> operators.
> *Subject:* [openstack-dev] [cinder] [Third-party-announce] Cinder Merged
> patch broke HDS driver
>
>
>
> Hi folks,
>
>
>
> This weekend, the patch "*Snapshot and volume objects*" (
> https://review.openstack.org/#/c/133566) was merged and this one broke
> our HDS HBSD driver and the respective CI.
>
>
>
> When CI tries to run tempest.api.volume.admin.test_snapshots_actions the
> following error is shown:
>
>
>
> 2015-03-04 14:00:34.368 ERROR oslo_messaging.rpc.dispatcher
> [req-c941792b-963f-4a7d-a6ac-9f1d9f823fd1 915289d113dd4f9db2f2a792c18b3564
> 984bc8d228c8497689dde60dc2b8f300] Exception during message handling: '' object
> has no attribute 'snapshot_metadata'
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher Traceback
> (most recent call last):
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
> line 142, in _dispatch_and_reply
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> executor_callback))
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
> line 186, in _dispatch
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> executor_callback)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
> line 130, in _do_dispatch
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher result =
> func(ctxt, **new_args)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105,
> in wrapper
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
> f(*args, **kwargs)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/manager.py", line 156, in lso_inner1
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
> lso_inner2(inst, context, snapshot, **kwargs)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py",
> line 431, in inner
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
> f(*args, **kwargs)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/manager.py", line 155, in lso_inner2
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
> f(*_args, **_kwargs)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/manager.py", line 635, in delete_snapshot
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> snapshot.save()
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82,
> in __exit__
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> six.reraise(self.type_, self.value, self.tb)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/manager.py", line 625, in delete_snapshot
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> self.driver.delete_snapshot(snapshot)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105,
> in wrapper
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
> f(*args, **kwargs)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_iscsi.py", line 314,
> in delete_snapshot
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
> self.common.delete_snapshot(snapshot)
>
> 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
> "/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py",

Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-05 Thread Li Ma
Hi all, I was actually writing a mail on this same topic for the zeromq
driver, but I hadn't finished it yet. Thank you for proposing this topic,
ozamiatin.

1. ZeroMQ functionality

I proposed a session topic for the coming summit to show our
production system, named 'Distributed Messaging System for OpenStack
at Scale'. I don't know yet whether it will be accepted.
Otherwise, if it is possible, I can share my experience at the design
summit.

Currently, AWCloud (the company I work for) has deployed more than 20
private clouds and 3 public clouds for our customers in production,
scaling from 10 to 500 physical nodes without any performance issue.
The driver's performance dominates all the existing drivers in every
aspect. All of these deployments use the ZeroMQ driver. We started
improving the ZeroMQ driver in Icehouse, and the modified driver has
since switched to oslo.messaging.

As everyone knows, the ZeroMQ driver has been unmaintained for a long
time. My colleagues and I continuously contribute patches upstream.
Progress is a little slow because we are doing everything in our spare
time and the review procedure is also not efficient.

Here are two important patches [1], [2] for the redis matchmaker. Once
they land, I think the ZeroMQ driver will be capable of running in
small deployments.

The only functionality for large-scale deployment missing from the
current upstream codebase is socket pool scheduling (lifecycle
management for zeromq sockets, including recycling and reuse). We
implemented it several months ago and are willing to contribute it. I
plan to propose a blueprint in the next release; a sketch of the idea
follows.
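
To make the idea concrete, a minimal sketch of such a pool (an
illustration only, not our production code):

    import time
    import zmq

    class SocketPool(object):
        """Reuse one PUSH socket per endpoint instead of one per call."""

        def __init__(self, context, ttl=60):
            self.context = context
            self.ttl = ttl      # seconds before an idle socket is recycled
            self._pool = {}     # endpoint -> (socket, last_used)

        def get(self, endpoint):
            sock, last_used = self._pool.get(endpoint, (None, 0))
            if sock is None or time.time() - last_used > self.ttl:
                if sock is not None:
                    sock.close()  # recycle the stale socket
                sock = self.context.socket(zmq.PUSH)
                sock.connect(endpoint)
            self._pool[endpoint] = (sock, time.time())
            return sock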

2. Why ZeroMQ matters for OpenStack

ZeroMQ is the only driver that depends on a stable library rather than
on a full open source product. This is the most important point that
comes to my mind. When we deploy clouds with RabbitMQ or Qpid, we need
comprehensive knowledge from their communities, from deployment best
practices to performance tuning at different scales. As with any open
source product, bugs are always there, and you end up having to push
lots of things in communities other than the OpenStack community. In
the end, that doesn't really work; you all know it, right?

The ZeroMQ library itself is just an encapsulation of sockets; it is
stable enough and has long been widely used for large-scale cluster
communication. We can build our own messaging system for
inter-component RPC on top of it, improve it for OpenStack, and keep
governance of the codebase. We don't need to rely on products outside
the community. Only ZeroMQ provides that possibility.

IMO, we can keep it and improve it until it becomes another solid
choice for operators.

3. ZeroMQ integration

I've been working on the integration of ZeroMQ and DevStack for a
while and actually it is working right now. I updated the deployment
guide [3].

I think it is the time to bring a non-voting gate for ZeroMQ and we
can make the functional tests work.

4. ZeroMQ blueprints

We'd love to propose blueprints to improve ZeroMQ, as ozamiatin does.
By my estimation, given the blueprint and patch review procedures,
ZeroMQ can become another production choice in 1-2 release cycles.

5. ZeroMQ discussion

Here I'd like to apologize regarding this driver: due to spare-time
constraints and my timezone, I'm not available for IRC or other
meetings or discussions. But if it is possible, should we create a
subgroup for ZeroMQ and schedule meetings for it? If we can schedule
in advance or at a fixed date & time, I'm in.

6. Feedback to ozamiatin's suggestions

I'm with you on almost all the proposals, but for packages, I think we
can just separate all the components into a sub-directory. This step
is enough at the current stage.

Packaging the components separately is complicated. I don't think it
is feasible to break oslo.messaging into two packages, like
oslo.messaging and oslo.messaging.zeromq, and I cannot clearly see the
benefit.

For priorities, I think numbers 1, 6 and 7 have the highest priority,
especially 7. Because ZeroMQ is pretty new to everyone, we do need
more written material to promote and introduce it to the community. By
the way, I made a wiki page before and everyone is welcome to update
it [4].

[1] https://review.openstack.org/#/c/152471/
[2] https://review.openstack.org/#/c/155673/
[3] https://review.openstack.org/#/c/130943/
[4] https://wiki.openstack.org/wiki/ZeroMQ

Thanks a lot,
Li Ma (Nick)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-05 Thread Eric Windisch
On Wed, Mar 4, 2015 at 12:10 PM, ozamiatin  wrote:

> Hi,
>
> By this e-mail I'd like to start a discussion about current zmq driver
> internal design problems I've found out.
> I wish to collect here all proposals and known issues. I hope this
> discussion will be continued on Liberty design summit.
> And hope it will drive our further zmq driver development efforts.
>
> ZMQ Driver issues list (I address all issues with # and references are in
> []):
>
> 1. ZMQContext per socket (blocker is neutron improper usage of messaging
> via fork) [3]
> 2. Too many different contexts.
> We have InternalContext used for ZmqProxy, RPCContext used in
> ZmqReactor, and ZmqListener.
> There is also zmq.Context which is zmq API entity. We need to consider
> a possibility to unify their usage over inheritance (maybe stick to
> RPCContext)
> or to hide them as internal entities in their modules (see refactoring
> #6)
>

The code, when I abandoned it, was moving toward fixing these issues, but
for backwards compatibility was doing so in a staged fashion across the
stable releases.

I agree it's pretty bad. Fixing this now, with the driver in a less stable
state should be easier, as maintaining compatibility is of less importance.



> 3. Topic related code everywhere. We have no topic entity. It is all
> string operations.
> We need some topics management entity and topic itself as an entity
> (not a string).
> It causes issues like [4], [5]. (I'm already working on it).
> There was a spec related [7].
>

Good! It's ugly. I had proposed a patch at one point, but I believe the
decision was that it was better and cleaner to move toward the
oslo.messaging abstraction as we solve the topic issue. Now that
oslo.messaging exists, I agree it's well past time to fix this particular
ugliness.


> 4. Manual implementation of messaging patterns.
>Now we can observe poor usage of zmq features in zmq driver. Almost
> everything is implemented over PUSH/PULL.
>
> 4.1 Manual polling - use zmq.Poller (listening and replying for
> multiple sockets)
> 4.2 Manual request/reply implementation for call [1].
> Using of REQ/REP (ROUTER/DEALER) socket solves many issues. A lot
> of code may be reduced.
> 4.3 Timeouts waiting
>

There are very specific reasons for the use of PUSH/PULL. I'm firmly of the
belief that it's the only viable solution for an OpenStack RPC driver. This
has to do with how asynchronous programming in Python is performed, with
how edge-triggered versus level-triggered events are processed, and general
state management for REQ/REP sockets.

I could be proven wrong, but I burned quite a bit of time in the beginning
of the ZMQ effort looking at REQ/REP before realizing that PUSH/PULL was
the only reasonable solution. Granted, this was over 3 years ago, so I
would not be too surprised if my assumptions are no longer valid.
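
For readers following along, a minimal pyzmq sketch of the pattern in
question (an illustration only, not driver code): the receiver binds a
PULL socket and senders connect PUSH sockets, so there is no REQ/REP
send/recv lockstep to manage:

    import zmq

    ctx = zmq.Context()

    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://127.0.0.1:5555")

    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://127.0.0.1:5555")

    push.send_string("hello")
    print(pull.recv_string())  # -> hello

    # Replies in this style travel over a second PUSH/PULL pair rather
    # than over the same socket, which is what the driver does manually.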



> 5. Add possibility to work without eventlet [2]. #4.1 is also related
> here, we can reuse many of the implemented solutions
>like zmq.Poller over asynchronous sockets in one separate thread
> (instead of spawning on each new socket).
>I will update the spec [2] on that.
>

Great. This was one of the motivations behind oslo.messaging and it would
be great to see this come to fruition.


> 6. Put all zmq driver related stuff (matchmakers, most classes from
> zmq_impl) into a separate package.
>Don't keep all classes (ZmqClient, ZmqProxy, Topics management,
> ZmqListener, ZmqSocket, ZmqReactor)
>in one impl_zmq.py module.
>

Seems fine. In fact, I think a lot of code could be shared with an AMQP v1
driver...


> 7. Need more technical documentation on the driver like [6].
>I'm willing to prepare a current driver architecture overview with some
> graphics UML charts, and to continue discuss the driver architecture.
>

Documentation has always been a sore point. +2

-- 
Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-05 Thread Mohammad Hanif
Hi all,

I would also vote for (C) with 1600 UTC or later.  This will hopefully 
increase participation from the Pacific time zone.

Thanks,
—Hanif.

From: Mathieu Rohon
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, March 5, 2015 at 1:52 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

Hi,

I'm fine with C) and 1600 UTC would be more adapted for EU time Zone :)

However, I agree that the neutron-vpnaas meetings were mainly focused on 
maintaining the current IPSec implementation: managing the split out, adding 
StrongSwan support, and adding functional tests.
Maybe we will get a broader audience once we speak about adding new use 
cases such as edge-vpn.
Edge-vpn use cases overlap with the Telco WG VPN use case [1]. May be those 
edge-vpn discussions should occur during the Telco WG meeting?

[1]https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#VPN_Instantiation

On Thu, Mar 5, 2015 at 3:02 AM, Sridhar Ramaswamy 
mailto:sric...@gmail.com>> wrote:
Hi Paul.

I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630 UTC (or 
later).

The meetings so far was indeed quite useful. I guess the current busy Kilo 
cycle is also contributing to the low turnout. As we pick up things going 
forward this forum will be quite useful to discuss edge-vpn and, perhaps, other 
vpn variants.

- Sridhar

On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali 
mailto:p...@michali.net>> wrote:
Hi all! The email that I sent on 2/24 didn't make it to the mailing list (no 
wonder I didn't get responses!). I think I had an issue with the email address I 
used - sorry for the confusion!

So, I'll hold the meeting today (1500 UTC meeting-4, if it is still available), 
and we can discuss this...


We've been having very low turnout for meetings for the past several weeks, so 
I'd like to ask those in the community interested in VPNaaS, what the 
preference would be regarding meetings...

A) hold at the same day/time, but only on-demand.
B) hold at a different day/time.
C) hold at a different day/time, but only on-demand.
D) hold as a on-demand topic in main Neutron meeting.

Please vote your interest, and provide desired day/time, if you pick B or C. 
The fallback will be (D), if there's not much interest anymore for meeting, or 
we can't seem to come to a consensus (or super-majority :)

Regards,

PCM

Twitter: @pmichali
TEXT: 6032894458
PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removal of v3 in tree testing

2015-03-05 Thread GHANSHYAM MANN
Hi Sean,

That is a very nice idea: keep only one set of tests and run them twice via
tox.

Actually my main goal was:

1. Create a clean sample file structure for v2, v2.1 and microversions,
   something like below:

     api_samples/
         extensions/
         v2.0/ - v2 sample files
         v2.1/ - v2.1 sample files
         v2.2/ - v2.2 sample files
         and so on

- 2.  Merge sample files between v2 and v2.1.

But your idea is much better and almost covers mine (except the directory
structure for microversions, which can/should be worked on after that).

As many extensions were merged/split going from v2 to v2.1, we need to tweak
the tests to work for both v2 and v2.1.
For example, the v2 flavor-swap, flavor-disable, and flavor-extraData
extensions have been merged into a single flavor plugin in v2.1.

So running the v2.1 flavor tests against v2 requires the above extensions to
be enabled in those tests. It looks something like:
https://review.openstack.org/#/c/162016/
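
As a rough sketch of that approach (the class and attribute names below are
illustrative assumptions, not necessarily nova's actual test API or the
contents of that review), the idea is a thin subclass that turns the
folded-in extensions back on for the v2 run:

    # Hypothetical sketch: reuse the merged v2.1 flavor sample tests
    # against the old v2 endpoint by enabling the extensions that v2.1
    # folded into the core flavors plugin.
    class FlavorsSampleV2Test(FlavorsSampleJsonTest):
        # v2 splits flavor behaviour across several extensions, so the
        # v2 run has to load them all explicitly.
        extra_extensions_to_load = ['flavor-swap',
                                    'flavor-disabled',
                                    'flavor-extra-data']
        # assumed switch telling the base class to hit the v2 endpoint
        use_legacy_v2_endpoint = True

That keeps a single set of test logic while still exercising both endpoints.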

-- 
Thanks & Regards
Ghanshyam Mann


On Fri, Mar 6, 2015 at 7:00 AM, Sean Dague  wrote:
On 03/04/2015 07:48 PM, GHANSHYAM MANN wrote:
> Hi Sean,
>
> Yes, having the v3 directory/file names is very confusing now.
>
> But the current v3 sample test cases test the v2.1 plugins. As the /v3 URL is
> redirected to the v2.1 plugins, the v3 sample tests make calls through the
> v3 URL and test the v2.1 plugins.
>
> I think we can start cleaning up the *v3* from everywhere and change it
> to v2.1 or a more appropriate name.
>
> To clean up the same in the sample files, I was planning to rearrange the
> sample file structure. Please check whether that direction looks good (I
> still need to post the patch for the directory restructure):
>
>
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:sample_files_structure,n,z

I had another chat with Alex this morning on IRC. I think my confusion
is that I don't feel like I understand how we get down to 1 set of API
samples in the tree based on that set of posted patches.

It seems like there should only be one set of samples in docs/ and one set
of templates. I would also argue that we should only have one set of tests
(though I'm flexible on that in the mid-term).

It seems that if our concern is that both the v2 and v21 endpoints need
to have the same results, we could change the functional tox target to
run twice, once with v2 and once with v21 set as the v2 endpoint.
Eventually we'll be able to drop the original v2 code on /v2.
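
A minimal sketch of what that double run could look like, assuming a
hypothetical NOVA_V2_IMPL switch and test path (neither is nova's real
setting; 'env' is used because tox's setenv cannot vary per command):

    # tox.ini -- illustrative sketch only
    [testenv:api-samples]
    commands =
        env NOVA_V2_IMPL=v2  python -m unittest discover -s nova/tests/functional/api_samples
        env NOVA_V2_IMPL=v21 python -m unittest discover -s nova/tests/functional/api_samples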

Anyway, in order to both assist my own work unwinding the test tree, and
to help review your work there, can you lay out your vision for cleaning
this up with all the steps involved? Hopefully that will cut down the
confusion and make all this work move faster.

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-05 Thread Joshua Zhang
Hi all,

I would also vote for (A) with 1500 UTC, which is 23:00 Beijing time
:-)

On Fri, Mar 6, 2015 at 1:22 PM, Mohammad Hanif  wrote:

>   Hi all,
>
>  I would also vote for (C) with 1600 UTC or later. This will hopefully
> increase participation from the Pacific time zone.
>
>  Thanks,
> —Hanif.


-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Andreas Jaeger
+1 on Nathan!

Andreas

On Thu, Mar 5, 2015 at 10:14 PM, Bryan D. Payne  wrote:

> To security-doc core and other interested parties,
>
> Nathaniel Dillon has been working consistently on the security guide since
> our first mid-cycle meet up last summer.  In that time he has come to
> understand the inner workings of the book and the doc process very well.
> He has also been a consistent participant in improving the book content and
> working to define what the book should be going forward.
>
> I'd like to bring him on as a core member of security-doc so that he can
> help with the review and approval process for new changes to the book.
> Please chime in with your agreement or concerns.
>
> Cheers,
> -bryan
>



-- 
Andreas Jaeger
 jaegera...@gmail.com
  http://andreasjaeger.blogspot.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Andreas Jaeger
I meant Nathaniel - sorry for the typo,

Andreas

On Fri, Mar 6, 2015 at 7:18 AM, Andreas Jaeger  wrote:

> +1 on Nathan!
>
> Andreas



-- 
Andreas Jaeger
 jaegera...@gmail.com
  http://andreasjaeger.blogspot.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Group Based Policy project proposal

2015-03-05 Thread Sumit Naiksatam
Hi All,

The OpenStack Group Based Policy team of contributors has submitted a
proposal [1] to add “Group Based Policy” as a project in the OpenStack
namespace in accordance with the new governance changes [2]. We would
request the TC to take note and consider this proposal during the next
meeting. The team will be happy to answer questions and provide more
information via ML, and/or IRC on the #openstack-gbp channel.

Thanks,
~Sumit, IRC: SumitNaiksatam
(on behalf of GBP-team).

[1] https://review.openstack.org/#/c/161902
[2] http://governance.openstack.org/reference/new-projects-requirements.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HTTPD Config

2015-03-05 Thread Matthias Runge
On 05/03/15 19:49, Adam Young wrote:

> 
> I'd like to drop port 5000 altogether, as we are using a port assigned
> to a different service.  35357 is also problematic as it is in the
> middle of the ephemeral range.  Since we are talking about running
> everything in one web server anyway, using port 80/443 for all web stuff
> is the right approach.

I have thought about this as well. The issue here is that URLs for keystone
and horizon will probably clash
(is https://server/api/... a keystone call or a horizon call?).

No matter what we do in devstack, this is something horizon and
keystone devs need to fix first. E.g. in Horizon, we still discover
hard-coded URLs here and there. To catch that kind of thing, I had a patch
up for review to make it easy to move Horizon from the HTTP server
root to something different.

I would expect the same thing for keystone, too.
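
For illustration only, a single vhost could disambiguate the two services
by URL prefix instead of by port (the file paths and prefixes below are
assumptions, not devstack's actual layout; SSL directives omitted):

    <VirtualHost *:443>
        # keystone under /identity instead of ports 5000/35357
        WSGIScriptAlias /identity /var/www/cgi-bin/keystone/main

        # horizon under /dashboard instead of the server root; horizon's
        # WEBROOT setting has to agree with this prefix.
        WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
    </VirtualHost>

This only works once neither service assumes it owns the server root,
which is exactly the hard-coded-URL problem above.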

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Nathaniel Dillon for security-doc core

2015-03-05 Thread Andreas Jaeger
On 03/06/2015 03:24 AM, Dillon, Nathaniel wrote:
> Awesome, thanks everyone! Very excited to help out with the great work 
> already going on!

Welcome, Nathaniel!

Andreas

> Nathaniel
> 
>> On Mar 5, 2015, at 5:05 PM, Bryan D. Payne  wrote:
>>
>> Thanks everyone.  I've added Nathaniel to security-doc core.  Welcome 
>> Nathaniel!
>>
>> Cheers,
>> -bryan


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev