Re: [openstack-dev] [Heat][Neutron] Refactoring heat LBaaS architecture according Neutron API

2014-02-20 Thread Mitsuru Kanabuchi

Hi Sergey,

On Thu, 20 Feb 2014 19:58:14 +0400
Sergey Kraynev  wrote:

> Hello community.
> 
> I'd like to discuss feature of Neutron LBaaS in Heat.
> Currently Heat resources are not identical to Neutron's.
> There are four resources here:
> 'OS::Neutron::HealthMonitor'
> 'OS::Neutron::Pool'
> 'OS::Neutron::PoolMember'
> 'OS::Neutron::LoadBalancer'
> 
> According to this representation the VIP is a part of resource
> Loadbalancer, whereas Neutron has separate object VIP.  I think it should
> be changed to conform with Neutron's implementation.
> So the main question: what is the best way to change it? I see following
> options:
> 
> 1. Move VIP in separate resource in icehouse release (without any
> additions).
> Possibly we should support both (old and new) implementation for users too.
>  IMO, it has also one big danger, because now we have stable version of it
> and have not enough time to check new approach.
> Also I think it does not make sense now, because Neutron team are
> discussing new object model (
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027480.html)
> and it will be implemented in Juno.
> 
> 2. The second idea is to wait all architecture changes that are planed in
> Juno in Neutron. (look at link above)
> Then we could recreate or change Heat LBaaS architecture at all.

+1

In my understand, it's not necessarily the case that be identical with
underlying resources. Actually, several heat resources aren't identical
with underlying resources by mainly dependency reasons.

IMO, we should wait change of neutron's side model definition,
then we should consider what resource model is appropriate from heat's
perspective. Targeting Juno would be appropriate timing to consider
refactoring LBaaS model.

> Your feedback and other ideas about better implementation plan are welcome.
> 
> Regards,
> Sergey.

Regards,


  Mitsuru Kanabuchi
NTT Software Corporation
E-Mail : kanabuchi.mits...@po.ntts.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring IP Availability

2014-02-20 Thread Tim Bell

Are these hooks generic enough to be included upstream ? This may solve a 
problem we've been struggling with.

Tim

> -Original Message-
> From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
> Sent: 20 February 2014 22:59
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Monitoring IP Availability
> 
> On Thu, Feb 20, 2014 at 12:53:51AM +, Vilobh Meshram wrote:
> > Hello OpenStack Dev,
> >
> > We wanted to have your input on how different companies/organizations, 
> > using Openstack, are monitoring IP availability as this
> can be useful to track the used IP’s and total number of IP’s.
> 
> A while ago I added hooks to Nova-network to forward floating-ip allocations 
> into an existing management system, since this system
> was the source of truth for IP address management inside Comcast.
> 
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Kenichi Oomichi
> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Thursday, February 20, 2014 11:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] v3 API in Icehouse
> 
> On Wed, 19 Feb 2014 12:36:46 -0500
> Russell Bryant  wrote:
> 
> > Greetings,
> >
> > The v3 API effort has been going for a few release cycles now.  As we
> > approach the Icehouse release, we are faced with the following
> > question: "Is it time to mark v3 stable?"
> >
> > My opinion is that I think we need to leave v3 marked as experimental
> > for Icehouse.
> >
> 
> Although I'm very eager to get the V3 API released, I do agree with you.
> As you have said we will be living with both the V2 and V3 APIs for a
> very long time. And at this point there would be simply too many last
> minute changes to the V3 API for us to be confident that we have it
> right "enough" to release as a stable API.

Through v3 API development, we have found a lot of the existing v2 API
input validation problems. but we have concentrated v3 API development
without fixing the problems of v2 API.

After Icehouse release, v2 API would be still CURRENT and v3 API would
be EXPERIMENTAL. So should we fix v2 API problems also in the remaining
Icehouse cycle?


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Thomas Goirand
On 02/21/2014 06:13 AM, Vincent Untz wrote:
>Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
>> No action on a negative review means automatic expiration, if you lose
>> interest in something you care about whose fault is that?
> 
> I beg to disagree. If we let patches go to automatic expiration, then we
> as a project will just lose contributors.

Vincent expressed exactly what I think.

For those who didn't get it, I was talking about this one (for SQLite):
https://review.openstack.org/#/c/52757/

which has similarities with this one (for Postgresql):
https://review.openstack.org/#/c/68611/

The only reason why mark did put a -2 was (as per the review):
"SQLite is not a supported production database for Neutron. Placing a -2
until the team can discuss whether or not to change this policy for
Icehouse."

When is this scheduled? Are we waiting for a full release cycle?!?

Restoring my patch so many times is a sign that
- I really though it was just missing sore core reviewers
- I cared for this patch
- I thought it didn't need further modifications or information

I did restore my patche many times, then got tired of it, and just gave
up. After a certain point (more than 2 months in my case), I considered
it wasn't worth the effort. My feeling was that nobody cared or
preferred not to get my patch approved without saying it in the review.

Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
> I feel your frustration

Yes, very frustrating indeed... :)

The only thing that gets me to write again in this list about this is
because someone else came to the conclusion of the exact same patch
through another path (eg: postgresql instead of sqlite), but it seems
it's not getting the attention it should. If this didn't happen, I
believe I would have just completely gave up, and wouldn't have open my
big mouth in this list. :)

> Patch [2]: I did put a -1, but I have nothing against this patch per
> se.

I fail to understand the logic here. If you have "nothing against [a]
patch", then just vote +1 ...

Now, is the gate broken somehow? It doesn't seem to pass the
check-tempest-dsvm-neutron-pg gating test ... :/

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Jay Pipes
On Thu, 2014-02-20 at 08:22 -0500, Sean Dague wrote:
> I agree that we shouldn't be rushing something that's not ready, but I
> guess it raises kind of a meta issue.
> 
> When we started this journey this was because v2 has a ton of warts, is
> completely wonky on the code internals, which leads to plenty of bugs.
> v3 was both a surface clean up, but it was also a massive internals
> clean up. I think comparing servers.py:create is a good look at the
> differences:
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L768
> - v2
> 
> vs.
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L415
> - v3
> 
> v3 was small on user surface changes for a reason, because the idea was
> that it would be a quick cut over, the migration pain would be minimal,
> and v2 could be dropped relatively quickly (2 cycles).
> 
> However if the new thinking is that v2 is going to be around for a
> long time then I think it raises questions about this whole approach.
> Because dual maintenance is bad. We see this today where stable/* trees
> end up broken in CI for weeks because no one is working on it.
> 
> We're also duplicating a lot of test and review energy in having 2 API
> stacks. Even before v3 has come out of experimental it's consumed a huge
> amount of review resource on both the Nova and Tempest sides to get it
> to it's current state.
> 
> So my feeling is that in order to get more energy and focus on the API,
> we need some kind of game plan to get us to a single API version, with a
> single data payload in L (or on the outside, M). If the decision is v2
> must be in both those releases (and possibly beyond), then it seems like
> asking other hard questions.
> 
> * why do a v3 at all? instead do we figure out a way to be able to
> evolve v2 in a backwards compatible way.
> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
> that working around all that code isn't a velocity inhibitor in the
> cleanups required in v2? Because some of the crazy hacks that exist to
> make XML structures work for the json in v2 is kind of special.
> 
> This big bang approach to API development may just have run it's course,
> and no longer be a useful development model. Which is good to find out.
> Would have been nice to find out earlier... but not all lessons are easy
> or cheap. :)

All excellent points, Sean.

I would add that I personally would love to see all API extensions
removed from the API eventually. I'd also love to see the use of JSON
Schema and JSON-Home utilized across the API for discovery purposes.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-20 Thread Sangeeta Singh
Hi,

At my organization we do not use a shared storage for VM disks  but need to 
evacuate VMs  from a HV that is down or having problems to another HV. The 
evacuate command only allows the evacuated VM to have the base image. What I am 
interested in is to create a snapshot of the VM on the down HV and then be able 
to use the evacuate command by specifying the snapshot for the image.

Has anyone had such a use case? Is there a command that uses snapshots in this 
way to recreate VM on a new HV.

Thanks for the pointers.

Sangeeta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][review] Please treat -1s on check-tripleo-*-precise as voting.

2014-02-20 Thread Robert Collins
On 18 February 2014 04:30, Derek Higgins  wrote:
> On 17/02/14 01:25, Robert Collins wrote:
>> Hi!
>>
>> The nascent tripleo-gate is now running on all tripleo repositories,
>> *and should pass*, but are not yet voting. They aren't voting because
>> we cannot submit to the gate unless jenkins votes verified... *and* we
>> have no redundancy for the tripleo-ci cloud now, so any glitch in the
>> current region will take out our ability to land changes.
>>
>> We're working up the path to having two regions as fast as we can- and
>> once we do we should be up to check or perhaps even gate in short
>> order :).
>>
>> Note: unless you *expand* the jenkins vote, you can't tell if a -1 occurred.
>>
>> If, for some reason, we have an infrastructure failure that means
>> spurious -1's will be occurring, then we'll put that in the #tripleo
>> topic.
>
> It looks like we've hit a glitch, network access to our ci-overcloud
> controller seems to be gone, I think invoking this clause is needed
> until the problem is sorted, will update the topic and am working on
> diagnosing the problem.

So we fixed that clause, but infra took us out of rotation as we took
nodepool down before it was fixed.

We've now:
 - improved nodepool to handle downclouds more gracefully
 - moved the tripleo cloud using jobs to dedicated check and
experimental pipelines
 - and been reinstated

So - please look for comments from check-tripleo before approving merges!

The tripleo test cloud is still one region, CI is running on 10
hypervisors and 10 emulated baremetal backend systems, so we have
reasonable capacity.

Additionally, running 'check experimental' will now run tripleo jobs
against everything we include in tripleo images - nova, cinder, swift
etc etc.

See the config layout.yaml for details, and I'll send a broader
announcement once we've had a little bit of run-time with this.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-02-20 Thread Georgy Okrokvertskhov
Hi,


If I am not mistaken Mistral team listed Live migration as a potential use
case for workflow engine. There is no much details though.
https://wiki.openstack.org/wiki/Mistral#Live_migration

As I know Mistral plan to implement generic event handling mechanism when
one can bind any kind of workflow with external event triggered by
Ceilometer or other monitoring system. This bound workflow can actually
define live migration logic.

Thanks
Georgy


On Thu, Feb 20, 2014 at 3:04 PM, Sean Dague  wrote:

> On 02/20/2014 05:32 PM, Russell Bryant wrote:
> > On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
> >> Hi,
> >>
> >> Would like to know if there's any interest on having 'automatic
> >> evacuation' feature when a compute node goes down.
> >> I found 3 bps related to this topic:
> >>[1] Adding a periodic task and using ServiceGroup API for
> >> compute-node status
> >>[2] Using ceilometer to trigger the evacuate api.
> >>[3] Include some kind of H/A plugin  by using a 'resource
> >> optimization service'
> >>
> >> Most of those BP's have comments like 'this logic should not reside in
> >> nova', so that's
> >> why i am asking what should be the best approach to have something like
> >> that.
> >>
> >> Should this be ignored, and just rely on external monitoring tools to
> >> trigger the evacuation?
> >> There are complex scenarios that require lot of logic that won't fit
> >> into nova nor any other OS component. (For instance: sometimes it will
> >> be faster to reboot the node or compute-nova than starting the
> >> evacuation, but if it fail X times then trigger an evacuation, etc )
> >>
> >> Any thought/comment// about this?
> >>
> >> Regards
> >> Leandro
> >>
> >> [1]
> https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
> >> [2]
> >>
> https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
> >> [3]
> >>
> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
> >
> > My opinion is that I would like to see this logic done outside of Nova.
>
> Right now Nova is the only service that really understands the compute
> topology of hosts, though it's understanding of liveness is really not
> sufficient to handle this kind of HA thing anyway.
>
> I think that's the real problem to solve. How to provide notifications
> to somewhere outside of Nova on host death. And the question is, should
> Nova be involved in just that part, keeping track of node liveness and
> signaling up for someone else to deal with it? Honestly that part I'm
> more on the fence about. Because putting another service in place to
> just handle that monitoring seems overkill.
>
> I 100% agree that all the policy, reacting, logic for this should be
> outside of Nova. Be it Heat or somewhere else.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Devstack failing when configuring it to use v3 api

2014-02-20 Thread Vijayendra Bvs
Hi,

In order to explore domains in OS, I tried setting up devstack with the v3
api by changing this line in stackrc from:

IDENTITY_API_VERSION=2.0

to:

IDENTITY_API_VERSION=3


and running stack.sh. However, stack.sh runs into a host of errors that
follow the same pattern, with an error "ERROR: cliff.app The resource could
not be found. (HTTP 404)". An example goes below:



+ export OS_URL=http://192.168.44.184:35357/v2.0

+ OS_URL=http://192.168.44.184:35357/v2.0

+ create_keystone_accounts

++ openstack project create admin

++ grep ' id '

++ get_field 2

++ read data

INFO: urllib3.connectionpool Starting new HTTP connection (1):
192.168.44.184

ERROR: cliff.app The resource could not be found. (HTTP 404)

+ ADMIN_TENANT=

++ openstack user create admin --project '' --email
ad...@example.com--password admin

++ get_field 2



Any idea why this happens? Has anyone seen this before with devstack and/or
know of any workarounds? Am I missing some configuration in that I need to
do more than simply set IDENTITY_API_VERSION=3 ?


Thanks,

Regards,

Vijay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] "SAML consumption" Blueprints

2014-02-20 Thread Dolph Mathews
On Thu, Feb 20, 2014 at 4:18 AM, Marco Fargetta
wrote:

> Dear all,
>
> I am interested to the integration of SAML with keystone and I am analysing
> the following blueprint and its implementation:
>
> https://blueprints.launchpad.net/keystone/+spec/saml-id
>
> https://review.openstack.org/#/c/71353/
>
>
> Looking at the code there is something I cannot undertand. In the code it
> seems you
> will use apache httpd with mod_shib (or other alternatives) to parse saml
> assertion
> and the code inside keystone will read only the values extrapolated by the
> front-end server.
>

That's correct (for icehouse development, at least).


>
> If this is the case, it is not clear to me why you need to register the
> IdPs, with its certificate,
> in keystone using the new federation API. You can filter the IdP in the
> server so why do you need this extra list?
> What is the use of the IdP list and the certificate?
>

This reflects our original design, and it has evolved a bit from the
original design to be a bit more simplified. With the additional dependency
on mod_shib / mod_mellon, we are no longer implementing the certificates
API, but we do still need the IdP API. The IdP API specifically allows us
to track the source of an identity, and apply the correct authorization
mapping (producing the project- and domain-based role assignments that
OpenStack is accostomed to to) to the federated attributes coming from
mod_shib / mod_mellon. The benefit is that federated identities from one
source can have a different level of authorization than those identities
from a different source, even if they (theoretically) had the exact same
SAML assertions.


>
> Is still this implementation open to discussion or the design is frozen
> for the icehouse release?
>

It is certainly still open to discussion (and the implementation open to
review!), but we're past feature proposal freeze; anything that would
require new development (beyond what is already in review) will have to
wait a few weeks for Juno.


>
> Thanks in advance,
> Marco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread 郭小熙
agree with that , thank you the the input ,  let hold on this until we have
a clear roadmap.


2014-02-21 8:47 GMT+08:00 Sean Dague :

> On 02/20/2014 07:34 PM, Joe Gordon wrote:
> > On Thu, Feb 20, 2014 at 3:48 PM, 郭小熙  wrote:
> >> Yes, a Jenkins job is not useful currently. I would like to submit some
> >> commits to
> >> fix known python 3 support issues as we did in oslo-incubator. Another
> >> question
> >> is how to make new changes avoid regression, Maybe we need add more
> rules
> >> about
> >> this in hacking and consider python 3 support  in review process.
> >
> > What is the benefit in doing this? Until we have a roadmap to get the
> > dependencies working this seems premature.
>
> +1
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread Sean Dague
On 02/20/2014 07:34 PM, Joe Gordon wrote:
> On Thu, Feb 20, 2014 at 3:48 PM, 郭小熙  wrote:
>> Yes, a Jenkins job is not useful currently. I would like to submit some
>> commits to
>> fix known python 3 support issues as we did in oslo-incubator. Another
>> question
>> is how to make new changes avoid regression, Maybe we need add more rules
>> about
>> this in hacking and consider python 3 support  in review process.
> 
> What is the benefit in doing this? Until we have a roadmap to get the
> dependencies working this seems premature.

+1

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread Joe Gordon
On Thu, Feb 20, 2014 at 3:48 PM, 郭小熙  wrote:
> Yes, a Jenkins job is not useful currently. I would like to submit some
> commits to
> fix known python 3 support issues as we did in oslo-incubator. Another
> question
> is how to make new changes avoid regression, Maybe we need add more rules
> about
> this in hacking and consider python 3 support  in review process.

What is the benefit in doing this? Until we have a roadmap to get the
dependencies working this seems premature.

>
> 2014-02-21 5:10 GMT+08:00 Russell Bryant :
>
>> On 02/20/2014 09:43 AM, 郭小熙 wrote:
>> > We will move to Python33 in the future. More and more OpenStack projects
>> > including python-novaclient are Python33 compatible. Do we have plan to
>> > make Nova python33 compatible ?
>> >
>> > As I know, oslo.messaging will not support python33 in Icehouse,this is
>> > just one dependency for Nova, that means we can't finish it in Icehouse
>> > for Nova. I registered one blueprint [1]to make us move to Python33
>> > smoothly in the future. Python33 compatibility would be taken into
>> > account while reviewing code.
>> >
>> > We have to add py33 check/gate jobs to check Py33 compatibility. This
>> > blueprint could be marked as implemented only until Nova code can pass
>> > these jobs.
>> >
>> > [1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat
>>
>> Python 3 support is certainly a goal that *all* OpenStack projects
>> should be aiming for.  However, for Nova, I don't think Nova's code is
>> actually our biggest hurdle.  The hardest parts are dependencies that we
>> have that don't support Python 3.  A big example is eventlet.  We're so
>> far off that I don't even think a CI job is useful yet.
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> ChangBo Guo(gcb)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread 郭小熙
Yes, a Jenkins job is not useful currently. I would like to submit some
commits to
fix known python 3 support issues as we did in oslo-incubator. Another
question
is how to make new changes avoid regression, Maybe we need add more rules
about
this in hacking and consider python 3 support  in review process.

2014-02-21 5:10 GMT+08:00 Russell Bryant :

> On 02/20/2014 09:43 AM, 郭小熙 wrote:
> > We will move to Python33 in the future. More and more OpenStack projects
> > including python-novaclient are Python33 compatible. Do we have plan to
> > make Nova python33 compatible ?
> >
> > As I know, oslo.messaging will not support python33 in Icehouse,this is
> > just one dependency for Nova, that means we can't finish it in Icehouse
> > for Nova. I registered one blueprint [1]to make us move to Python33
> > smoothly in the future. Python33 compatibility would be taken into
> > account while reviewing code.
> >
> > We have to add py33 check/gate jobs to check Py33 compatibility. This
> > blueprint could be marked as implemented only until Nova code can pass
> > these jobs.
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat
>
> Python 3 support is certainly a goal that *all* OpenStack projects
> should be aiming for.  However, for Nova, I don't think Nova's code is
> actually our biggest hurdle.  The hardest parts are dependencies that we
> have that don't support Python 3.  A big example is eventlet.  We're so
> far off that I don't even think a CI job is useful yet.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Sean Dague
On 02/20/2014 06:30 PM, Sabari Murugesan wrote:
> But I do think running a job with lowest version may still help a
> developer realize that a feature in the latest library is not available
> in an older supported version. The person can then bump up the library's
> min version in the requirements. Today, it;s not possible to find this
> out until someone else runs into an issue with the older lib.
> 
> Ref:- I have noticed some bugs in the past and I did post
> 
>  about
> this to openstack-infra. From Jeremy Stanley's comment on that thread, I
> understand that it may be tough to setup it up this way due to
> transitive dependencies. 
> 
> -Sabari

So we've actually talked about doing this by using the
global-requirements lever. Basically modify the list to set everything
there to minimums, and see what happens. If you or someone else is
interested, I'd be happy to help walk you through it, but it personally
ended up low priority to get working.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Sabari Murugesan
But I do think running a job with lowest version may still help a developer 
realize that a feature in the latest library is not available in an older 
supported version. The person can then bump up the library's min version in the 
requirements. Today, it;s not possible to find this out until someone else runs 
into an issue with the older lib.

Ref:- I have noticed some bugs in the past and I did post about this to 
openstack-infra. From Jeremy Stanley's comment on that thread, I understand 
that it may be tough to setup it up this way due to transitive dependencies. 

-Sabari

On Feb 20, 2014, at 3:06 PM, Sean Dague wrote:

> On 02/20/2014 05:50 PM, Christopher Yeoh wrote:
>> On Thu, 20 Feb 2014 14:45:03 -0500
>> Sean Dague  wrote:
>>> 
>>> So I'm one of the first people to utter "if it isn't tested, it's
>>> probably broken", however I also think we need to be realistic about
>>> the fact that if you did out the permutations of dependencies and
>>> config options, we'd have as many test matrix scenarios as grains of
>>> sand on the planet.
>> 
>> I think it makes sense to at test with all the most recent versions.
>> Perhaps all the lowest as well, though I'm not sure how common that
>> configuration would really be. And we can't do all of the various
>> combinations. 
>> 
>> But how about testing on a periodic job the version
>> configurations used by some of the major distros? So we know on which
>> distros we're going to be broken on by default (if we don't get around
>> to fixing them straight away) which will really help inform those
>> trying out the latest from git. Or are the distros doing this anyway
>> (either way having the info public and feeding back to our bug tracker
>> would be handy).
> 
> Honestly, I think our experience in even doing a homogenous gate means I
> think we should let the distros come in with 3rd party testing on those
> results. Because we don't really have the bw, in either people or
> machines, to explode that matrix much in infra.
> 
>   -Sean
> 
> -- 
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> https://urldefense.proofpoint.com/v1/url?u=http://dague.net/&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=t%2BccgWNa3RGhEJzfoLbZKcoWP6fDp0ZLoIyTd0CiIdk%3D%0A&m=kRy3u472erydn9aD1jOg%2FCQqzv1p0pQeNd%2BI8WXeZLk%3D%0A&s=493bac7a717a985bdb4d64eca320969ecb2a6df00d7ee8f5d8bcaf895dd24fe8
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=t%2BccgWNa3RGhEJzfoLbZKcoWP6fDp0ZLoIyTd0CiIdk%3D%0A&m=kRy3u472erydn9aD1jOg%2FCQqzv1p0pQeNd%2BI8WXeZLk%3D%0A&s=d484eedf637b01155f20eeda79378a9d506d6a39060517e70e868dcab571f3c6

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-20 Thread Angus Salkeld

On 20/02/14 13:52 +0100, victor stinner wrote:

Hi,


On 19/02/14 10:09 +0100, Julien Danjou wrote:
>On Wed, Feb 19 2014, Angus Salkeld wrote:
>
>> 2) use tulip and give up python 2
>
>+ use trollius to have Python 2 support.
>
>  https://pypi.python.org/pypi/trollius

So I have been giving this a go.


FYI I'm the author of Trollius project.


Cool, Hi.




We use pecan and wsme (like ceilometer), I wanted to use
a httpserver library in place of wsgiref.server so had a
look at a couple and can't use them as they all have "yield from"
all over the place (i.e. python 3 only). The quesion I have
is:
How useful is trollius if we can't use other thirdparty libraries
written for asyncio?
https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/server.py#L171

Maybe I am missing something?


(Tulip and Trollius unit tests use wsgiref.simple_server module of the standard 
library. It works but you said that you don't want to use it.)



I saw that, but it's used as a test server and doesn't run an asyncio loop.
wsgiref.server does not have any of the yields that we need around reads/writes
but more importantly we need an add_reader() so the whole server does not
block waiting for a new connection. My usecase is to receive a REST request
return a 202 (Accepted) and process the request asyncronously. If the
http server then goes around it's loop and blocks on a read (waiting for
the next request) I never get any processing time scheduled.


Honestly, I have no answer to your question right now ("How useful is trollius ..."). 
asyncio developers are working on fixing last bugs in asyncio (Trollius is a fork, I merge regulary 
updates from Tulip into Trollius) and adding some late features before the Python 3.4 release. This 
Python release will be somehow the "version 1.0" of asyncio and will freeze the API. 
Right now, I'm working on a proof-on-concept of eventlet hub using asyncio event loop. So it may be 
possible to use eventlet and asyncio APIs are the same time. And maybe slowly replace eventlet with 
asyncio, or at least use asyncio in new code.


Ok, thanks for the update.

-Angus



I asked your question on Tulip mailing list to see how a single code base could 
support Tulip (yield from) and Trollius (yield). At least check if it's 
technically possible.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 18:06:25 -0500
Sean Dague  wrote:
> 
> Honestly, I think our experience in even doing a homogenous gate
> means I think we should let the distros come in with 3rd party
> testing on those results. Because we don't really have the bw, in
> either people or machines, to explode that matrix much in infra.

Yes I think its even better if the distros would step up and
offer to provide the resources to do that testing and make the results
public.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Sean Dague
On 02/20/2014 05:50 PM, Christopher Yeoh wrote:
> On Thu, 20 Feb 2014 14:45:03 -0500
> Sean Dague  wrote:
>>
>> So I'm one of the first people to utter "if it isn't tested, it's
>> probably broken", however I also think we need to be realistic about
>> the fact that if you did out the permutations of dependencies and
>> config options, we'd have as many test matrix scenarios as grains of
>> sand on the planet.
> 
> I think it makes sense to at test with all the most recent versions.
> Perhaps all the lowest as well, though I'm not sure how common that
> configuration would really be. And we can't do all of the various
> combinations. 
> 
> But how about testing on a periodic job the version
> configurations used by some of the major distros? So we know on which
> distros we're going to be broken on by default (if we don't get around
> to fixing them straight away) which will really help inform those
> trying out the latest from git. Or are the distros doing this anyway
> (either way having the info public and feeding back to our bug tracker
> would be handy).

Honestly, I think our experience in even doing a homogenous gate means I
think we should let the distros come in with 3rd party testing on those
results. Because we don't really have the bw, in either people or
machines, to explode that matrix much in infra.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-02-20 Thread Sean Dague
On 02/20/2014 05:32 PM, Russell Bryant wrote:
> On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
>> Hi,
>>
>> Would like to know if there's any interest on having 'automatic
>> evacuation' feature when a compute node goes down.
>> I found 3 bps related to this topic:
>>[1] Adding a periodic task and using ServiceGroup API for
>> compute-node status
>>[2] Using ceilometer to trigger the evacuate api.
>>[3] Include some kind of H/A plugin  by using a 'resource
>> optimization service'
>>
>> Most of those BP's have comments like 'this logic should not reside in
>> nova', so that's
>> why i am asking what should be the best approach to have something like
>> that.
>>
>> Should this be ignored, and just rely on external monitoring tools to
>> trigger the evacuation?
>> There are complex scenarios that require lot of logic that won't fit
>> into nova nor any other OS component. (For instance: sometimes it will
>> be faster to reboot the node or compute-nova than starting the
>> evacuation, but if it fail X times then trigger an evacuation, etc )
>>
>> Any thought/comment// about this?
>>
>> Regards
>> Leandro
>>
>> [1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
>> [2]
>> https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
>> [3]
>> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
> 
> My opinion is that I would like to see this logic done outside of Nova.

Right now Nova is the only service that really understands the compute
topology of hosts, though it's understanding of liveness is really not
sufficient to handle this kind of HA thing anyway.

I think that's the real problem to solve. How to provide notifications
to somewhere outside of Nova on host death. And the question is, should
Nova be involved in just that part, keeping track of node liveness and
signaling up for someone else to deal with it? Honestly that part I'm
more on the fence about. Because putting another service in place to
just handle that monitoring seems overkill.

I 100% agree that all the policy, reacting, logic for this should be
outside of Nova. Be it Heat or somewhere else.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 14:45:03 -0500
Sean Dague  wrote:
> 
> So I'm one of the first people to utter "if it isn't tested, it's
> probably broken", however I also think we need to be realistic about
> the fact that if you did out the permutations of dependencies and
> config options, we'd have as many test matrix scenarios as grains of
> sand on the planet.

I think it makes sense to at test with all the most recent versions.
Perhaps all the lowest as well, though I'm not sure how common that
configuration would really be. And we can't do all of the various
combinations. 

But how about testing on a periodic job the version
configurations used by some of the major distros? So we know on which
distros we're going to be broken on by default (if we don't get around
to fixing them straight away) which will really help inform those
trying out the latest from git. Or are the distros doing this anyway
(either way having the info public and feeding back to our bug tracker
would be handy).

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
On 20 February 2014 14:13, Vincent Untz  wrote:
> Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
>> Thomas,
>>
>> I feel your frustration, however before complaining please do follow
>> the actual chain of events.
>>
>> Patch [1]: I asked a question which I never received an answer to.
>> Patch [2]: I did put a -1, but I have nothing against this patch per
>> se. This was only been recently abandoned and my -1 lied primarily to
>> give patch [1] the opportunity to be resumed.
>
> Well, I did reply to your comment on the same day, so I'm not sure what
> else I, as submitter, could have done more to address your comment and
> convince you to change the -1 to +1.
>
>> No action on a negative review means automatic expiration, if you lose
>> interest in something you care about whose fault is that?
>
> I beg to disagree. If we let patches go to automatic expiration, then we
> as a project will just lose contributors. I don't think we should accept
> that as a fatality.

The power to restore a change is the hands of the contributor, not the reviewer.

Issues have different priorities and people shouldn't feel singled out
if their changes lose steam. The best course of action is to keep
sticking by them until the light at the end of the tunnel is in sight
:)

That said, I think one of issue that affect the delay of approvals of
patches dealing with DB migrations (that apply across multiple Neutron
releases) is the lack of a stable CI job (like Grenade) that validate
them and relieve the core reviewer of some burden of going through the
patch, the testbed etc.

This is coming though, we just need to be more patient, venting
frustration doesn't fix code!

A.

>
> I just restored the patch, btw :-)
>
> Vincent
>
>> A.
>>
>> [1] = https://review.openstack.org/#/c/52757
>> [2] = https://review.openstack.org/#/c/68611
>>
>> On 19 February 2014 06:28, Thomas Goirand  wrote:
>> > Hi,
>> >
>> > I've seen this one:
>> > https://review.openstack.org/#/c/68611/
>> >
>> > which is suppose to fix something for Postgress. This is funny, because
>> > I was doing the exact same patch for fixing it for SQLite. Though this
>> > was before the last summit in HK.
>> >
>> > Since then, I just gave up on having my Debian specific patch [1] being
>> > upstreamed. No review, despite my insistence. Mark, on the HK summit,
>> > told me that it was pending discussion about what would be the policy
>> > for SQLite.
>> >
>> > Guys, this is disappointing. That's the 2nd time the same patch is being
>> > blocked, with no explanations.
>> >
>> > Could 2 core reviewers have a *serious* look at this patch, and explain
>> > why it's not ok for it to be approved? If nobody says why, then could
>> > this be approved, so we can move on?
>> >
>> > Cheers,
>> >
>> > Thomas Goirand (zigo)
>> >
>> > [1]
>> > http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Les gens heureux ne sont pas pressés.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sphinxcontrib-pecanwsme 0.7 released

2014-02-20 Thread Doug Hellmann
On Thu, Feb 20, 2014 at 1:10 PM, Sylvain Bauza wrote:

> Hi Doug,
>
>
> 2014-02-20 17:37 GMT+01:00 Doug Hellmann :
>
> sphinxcontrib-pecanwsme is an extension to Sphinx for documenting APIs
>> built with the Pecan web framework and WSME.
>>
>> What's New?
>> ===
>>
>> - Remove the trailing slash from the end of the URLs, as it results in
>> misleading feature documentation, see Ceilometer bug #1202744.
>>
>>
>>
> Do you have a review in progress for updating global requirements ? At the
> moment, it's still pointing to 0.6.
>

It's >=0.6, so the new version should be picked up without any change.

Doug



>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-02-20 Thread Russell Bryant
On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
> Hi,
> 
> Would like to know if there's any interest on having 'automatic
> evacuation' feature when a compute node goes down.
> I found 3 bps related to this topic:
>[1] Adding a periodic task and using ServiceGroup API for
> compute-node status
>[2] Using ceilometer to trigger the evacuate api.
>[3] Include some kind of H/A plugin  by using a 'resource
> optimization service'
> 
> Most of those BP's have comments like 'this logic should not reside in
> nova', so that's
> why i am asking what should be the best approach to have something like
> that.
> 
> Should this be ignored, and just rely on external monitoring tools to
> trigger the evacuation?
> There are complex scenarios that require lot of logic that won't fit
> into nova nor any other OS component. (For instance: sometimes it will
> be faster to reboot the node or compute-nova than starting the
> evacuation, but if it fail X times then trigger an evacuation, etc )
> 
> Any thought/comment// about this?
> 
> Regards
> Leandro
> 
> [1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
> [2]
> https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
> [3]
> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service

My opinion is that I would like to see this logic done outside of Nova.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-02-20 Thread 한승진
Im also curious about that.
I think it is proper that ceilometer should have the role of Mornitoring vm.
Nova just opens auto-evacuate API, when a trigger calls the API nova calls
like shelve api for the vm re-spawn.
What do you think of this?
2014. 2. 21. 오전 7:13에 "Costantino, Leandro I" <
leandro.i.costant...@intel.com>님이 작성:

> Hi,
>
> Would like to know if there's any interest on having 'automatic
> evacuation' feature when a compute node goes down.
> I found 3 bps related to this topic:
>[1] Adding a periodic task and using ServiceGroup API for compute-node
> status
>[2] Using ceilometer to trigger the evacuate api.
>[3] Include some kind of H/A plugin  by using a 'resource optimization
> service'
>
> Most of those BP's have comments like 'this logic should not reside in
> nova', so that's
> why i am asking what should be the best approach to have something like
> that.
>
> Should this be ignored, and just rely on external monitoring tools to
> trigger the evacuation?
> There are complex scenarios that require lot of logic that won't fit into
> nova nor any other OS component. (For instance: sometimes it will be faster
> to reboot the node or compute-nova than starting the evacuation, but if it
> fail X times then trigger an evacuation, etc )
>
> Any thought/comment// about this?
>
> Regards
> Leandro
>
> [1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-
> when-host-broken
> [2] https://blueprints.launchpad.net/nova/+spec/evacuate-
> instance-automatically
> [3] https://blueprints.launchpad.net/nova/+spec/resource-
> optimization-service
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread David Kranz

On 02/20/2014 04:53 PM, Sean Dague wrote:

On 02/20/2014 04:31 PM, David Kranz wrote:

Running this test in tempest requires an ami image triple to be on the
disk where tempest is running in order for the test to upload it. It
would be a lot easier if this test could use a simple image file
instead. That image file could even be obtained from the cloud being
tested while configuring tempest. Is there a reason to keep the
three-part image?

I have no issue changing this to a single part image, as long as we
could find a way that we can make it work with cirros in the gate
(mostly because it can run in really low mem footprint).

Is there a cirros single part image somewhere? Honestly it would be much
simpler even in the devstack environment.

-Sean

http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Vincent Untz
Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
> Thomas,
> 
> I feel your frustration, however before complaining please do follow
> the actual chain of events.
> 
> Patch [1]: I asked a question which I never received an answer to.
> Patch [2]: I did put a -1, but I have nothing against this patch per
> se. This was only been recently abandoned and my -1 lied primarily to
> give patch [1] the opportunity to be resumed.

Well, I did reply to your comment on the same day, so I'm not sure what
else I, as submitter, could have done more to address your comment and
convince you to change the -1 to +1.

> No action on a negative review means automatic expiration, if you lose
> interest in something you care about whose fault is that?

I beg to disagree. If we let patches go to automatic expiration, then we
as a project will just lose contributors. I don't think we should accept
that as a fatality.

I just restored the patch, btw :-)

Vincent

> A.
> 
> [1] = https://review.openstack.org/#/c/52757
> [2] = https://review.openstack.org/#/c/68611
> 
> On 19 February 2014 06:28, Thomas Goirand  wrote:
> > Hi,
> >
> > I've seen this one:
> > https://review.openstack.org/#/c/68611/
> >
> > which is suppose to fix something for Postgress. This is funny, because
> > I was doing the exact same patch for fixing it for SQLite. Though this
> > was before the last summit in HK.
> >
> > Since then, I just gave up on having my Debian specific patch [1] being
> > upstreamed. No review, despite my insistence. Mark, on the HK summit,
> > told me that it was pending discussion about what would be the policy
> > for SQLite.
> >
> > Guys, this is disappointing. That's the 2nd time the same patch is being
> > blocked, with no explanations.
> >
> > Could 2 core reviewers have a *serious* look at this patch, and explain
> > why it's not ok for it to be approved? If nobody says why, then could
> > this be approved, so we can move on?
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> > [1]
> > http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Les gens heureux ne sont pas pressés.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Including Domains in Nova

2014-02-20 Thread Vishvananda Ishaya
Hi Henrique

I disagree with the idea that the other services should use domains. They need 
a concept of hierarchical ownership which we have been discussiong. Domains is 
one way of representing such an ownership hierarchy but i think it is too 
limited.

The POC code I created for hierarchical multitenancy[1] makes nova support 
something similar to what you want for listing projects. It needs to be 
extended to quotas and images, but as a concept it seems to work just fine.

There are a few remaining issues to work out around displaying the names of the 
hierarchy but I think this is a superior direction to adding a separate domain 
concept into the other services.

Vish

[1] 
https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
On Feb 19, 2014, at 4:21 AM, Henrique Truta  
wrote:

> Hi everyone.
>  
> It is necessary to make Nova support the Domain quotas and create a new 
> administrative perspective. Here are some reasons why Nova should support 
> domains:
>  
> 1 - It's interesting to keep the main Openstack components sharing the same 
> concept, once it has already been made in Keystone. In Keystone, the domain 
> defines more administrative boundaries and makes management of its entities 
> easier.
>  
> 2 - Nova shouldn’t be so tied in to projects. Keystone was created to 
> abstract concepts like these to other modules, like Nova. In addition, Nova 
> needs to be flexible enough to work with the new functionalities that 
> Keystone will provide. If we keep the Nova tied in to projects (or domains), 
> we will be far from the Nova focus which is providing compute services.
>  
> 3 - There is also the Domain Quota Driver BP 
> (https://blueprints.launchpad.net/nova/+spec/domain-quota-driver), which 
> implementation has already began. This Blueprint allows the user to handle 
> quotas at domain level. Nova requires domains to make this feature work 
> properly, right above the project level. There is also an implementation that 
> includes the domain information on the token context. This implementation 
> have to be included as well: https://review.openstack.org/#/c/55870/ .
>  
> 4 - The Nova API must be extended in order to enable domain-level operations, 
> that only work at project-level such as:
> - Listing, viewing and deleting images;
> - Deleting and listing servers;
> - Perform server actions like changing passwords, reboot, rebuild and 
> resize;
> - CRUD and listing on server metadata;
> In addition to provide quota management through the API and establishment of 
> a new administrative scope.
>  
> In order to accomplish these features, the token must contain the domain 
> informations, which will be included as mentioned in item 3. Then, the Nova 
> API calls will be changed to consider the domain information and when a call 
> referent to a project is made (e.g. servers).
>  
> What do you think about it? Any additional suggestions?
>  
> Thanks.
> 
> Henrique Truta
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Automatic Evacuation

2014-02-20 Thread Costantino, Leandro I

Hi,

Would like to know if there's any interest on having 'automatic 
evacuation' feature when a compute node goes down.

I found 3 bps related to this topic:
   [1] Adding a periodic task and using ServiceGroup API for 
compute-node status

   [2] Using ceilometer to trigger the evacuate api.
   [3] Include some kind of H/A plugin  by using a 'resource 
optimization service'


Most of those BP's have comments like 'this logic should not reside in 
nova', so that's
why i am asking what should be the best approach to have something like 
that.


Should this be ignored, and just rely on external monitoring tools to 
trigger the evacuation?
There are complex scenarios that require lot of logic that won't fit 
into nova nor any other OS component. (For instance: sometimes it will 
be faster to reboot the node or compute-nova than starting the 
evacuation, but if it fail X times then trigger an evacuation, etc )


Any thought/comment// about this?

Regards
Leandro

[1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
[2] 
https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
[3] 
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring IP Availability

2014-02-20 Thread Collins, Sean
On Thu, Feb 20, 2014 at 12:53:51AM +, Vilobh Meshram wrote:
> Hello OpenStack Dev,
> 
> We wanted to have your input on how different companies/organizations, using 
> Openstack, are monitoring IP availability as this can be useful to track the 
> used IP’s and total number of IP’s.

A while ago I added hooks to Nova-network to forward
floating-ip allocations into an existing management system,
since this system was the source of truth for IP address management
inside Comcast.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread Sean Dague
On 02/20/2014 04:31 PM, David Kranz wrote:
> Running this test in tempest requires an ami image triple to be on the
> disk where tempest is running in order for the test to upload it. It
> would be a lot easier if this test could use a simple image file
> instead. That image file could even be obtained from the cloud being
> tested while configuring tempest. Is there a reason to keep the
> three-part image?

I have no issue changing this to a single part image, as long as we
could find a way that we can make it work with cirros in the gate
(mostly because it can run in really low mem footprint).

Is there a cirros single part image somewhere? Honestly it would be much
simpler even in the devstack environment.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread Frittoli, Andrea (HP Cloud)
Thanks David, +++ 

This is a strong dependency to devstack, and it would be nice if we could
lose it.

andrea

-Original Message-
From: David Kranz [mailto:dkr...@redhat.com] 
Sent: 20 February 2014 21:32
To: OpenStack Development Mailing List
Subject: [openstack-dev] [qa] Does scenario.test_minimum_basic need to
upload ami images?

Running this test in tempest requires an ami image triple to be on the disk
where tempest is running in order for the test to upload it. It would be a
lot easier if this test could use a simple image file instead. That image
file could even be obtained from the cloud being tested while configuring
tempest. Is there a reason to keep the three-part image?

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread David Kranz
Running this test in tempest requires an ami image triple to be on the 
disk where tempest is running in order for the test to upload it. It 
would be a lot easier if this test could use a simple image file 
instead. That image file could even be obtained from the cloud being 
tested while configuring tempest. Is there a reason to keep the 
three-part image?


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread Russell Bryant
On 02/20/2014 09:43 AM, 郭小熙 wrote:
> We will move to Python33 in the future. More and more OpenStack projects
> including python-novaclient are Python33 compatible. Do we have plan to
> make Nova python33 compatible ?
> 
> As I know, oslo.messaging will not support python33 in Icehouse,this is
> just one dependency for Nova, that means we can't finish it in Icehouse
> for Nova. I registered one blueprint [1]to make us move to Python33
> smoothly in the future. Python33 compatibility would be taken into
> account while reviewing code.
> 
> We have to add py33 check/gate jobs to check Py33 compatibility. This
> blueprint could be marked as implemented only until Nova code can pass
> these jobs.
> 
> [1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat

Python 3 support is certainly a goal that *all* OpenStack projects
should be aiming for.  However, for Nova, I don't think Nova's code is
actually our biggest hurdle.  The hardest parts are dependencies that we
have that don't support Python 3.  A big example is eventlet.  We're so
far off that I don't even think a CI job is useful yet.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring IP Availability

2014-02-20 Thread Jay Pipes
On Thu, 2014-02-20 at 00:53 +, Vilobh Meshram wrote:
> Hello OpenStack Dev,
> 
> We wanted to have your input on how different companies/organizations,
> using Openstack, are monitoring IP availability as this can be useful
> to track the used IP’s and total number of IP’s.

I presume you are talking about monitoring the number of available
public floating IP addresses? At AT&T, we just had a Nagios check that
queried the Nova or Neutron database to see if the number of available
public IP addresses went below a certain threshold.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
Thomas,

I feel your frustration, however before complaining please do follow
the actual chain of events.

Patch [1]: I asked a question which I never received an answer to.
Patch [2]: I did put a -1, but I have nothing against this patch per
se. This was only been recently abandoned and my -1 lied primarily to
give patch [1] the opportunity to be resumed.

No action on a negative review means automatic expiration, if you lose
interest in something you care about whose fault is that?

A.

[1] = https://review.openstack.org/#/c/52757
[2] = https://review.openstack.org/#/c/68611

On 19 February 2014 06:28, Thomas Goirand  wrote:
> Hi,
>
> I've seen this one:
> https://review.openstack.org/#/c/68611/
>
> which is suppose to fix something for Postgress. This is funny, because
> I was doing the exact same patch for fixing it for SQLite. Though this
> was before the last summit in HK.
>
> Since then, I just gave up on having my Debian specific patch [1] being
> upstreamed. No review, despite my insistence. Mark, on the HK summit,
> told me that it was pending discussion about what would be the policy
> for SQLite.
>
> Guys, this is disappointing. That's the 2nd time the same patch is being
> blocked, with no explanations.
>
> Could 2 core reviewers have a *serious* look at this patch, and explain
> why it's not ok for it to be approved? If nobody says why, then could
> this be approved, so we can move on?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> [1]
> http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Sean Dague
On 02/20/2014 01:31 PM, Joe Gordon wrote:
> Hi All,
> 
> I discussion recently came up inside of nova about what it means
> supported version for a dependency means.  in libvirt we gate on the
> minimal version that we support but for all python dependencies we
> gate on the highest version that passes our requirements. While we all
> agree that having two different ways of choosing which version to test
> (min and max) is bad, there are good arguments for doing both.
> 
> testing most recent version:
> * We want to make sure we support the latest and greatest
> * Bug fixes
> * Quickly discover backwards incompatible changes so we can deal
> with them as they arise instead of in batch
> 
> Testing lowest version supported:
> * Make sure we don't land any code that breaks compatibility with
> the lowest version we say we support
> 
> 
> A few questions and ideas on how to move forward.
> * How do other projects deal with this? This problem isn't unique
> in OpenStack.
> * What are the issues with making one gate job use the latest
> versions and one use the lowest supported versions?
> * Only test some things on every commit or every day (periodic
> jobs)? But no one ever fixes those things when they break? who wants
> to own them? distros? deployers?
> * Other solutions?
> * Does it make sense to gate on the lowest version of libvirt but
> the highest version of python libs?
> * Given our finite resources what gets us the furthest?

So I'm one of the first people to utter "if it isn't tested, it's
probably broken", however I also think we need to be realistic about the
fact that if you did out the permutations of dependencies and config
options, we'd have as many test matrix scenarios as grains of sand on
the planet.

I do think in some ways this is unique to OpenStack, in that our
automated testing is head and shoulders above any other Open Source
project out there, and most proprietary software systems I've seen.

So this is about being pragmatic. In our dependency testing we are
actually testing with most recent versions of everything. So I would
think that even with libvirt, we should err in that direction.

That being said, we also need to be a little bit careful about taking
such a hard line about "supported vs. not" based on only what's in the
gate. Because if we did the following things would be listed as
unsupported (in increasing level of ridiculousness):

 * Live migration
 * Using qpid or zmq
 * Running on anything other than Ubuntu 12.04
 * Running on multiple nodes

Supported to me means we think it should work, and if it doesn't, it's a
high priority bug that will get fixed quickly. Testing is our sanity
check. But it can't be considered that it will catch everything, at
least not before the heat death of the universe.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Feb 20

2014-02-20 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.html
Log:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] renaming: initial voting

2014-02-20 Thread Sergey Lukjanov
We've agreed to send top 5 options to foundation for review, more details -
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.html


On Thu, Feb 20, 2014 at 10:02 PM, Sergey Lukjanov wrote:

> I've contacted foundation and they are ready to verify 5 options, so,
> we'll choose them on todays irc team meeting (starting right now).
>
>
> On Wed, Feb 19, 2014 at 12:27 AM, Sergey Lukjanov 
> wrote:
>
>> The voting is ended, you can ding results here -
>> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=10&id=E_5dd4f18fde38ce8e&algorithm=beatpath
>>
>> So, the new name options for more detailed discussion are:
>>
>> 1. Gravity  (Condorcet winner: wins contests with all other choices)
>> 2. Sahara  loses to Gravity by 10-8
>> 3. Quazar  loses to Gravity by 13-3, loses to Sahara by 12-6
>> 4. Stellar  loses to Gravity by 13-4, loses to Quazar by 9-7
>> 5. Caravan  loses to Gravity by 12-5, loses to Stellar by 9-7
>> 6. Tied:
>> Fusor  loses to Gravity by 13-2, loses to Caravan by 9-4
>> Maestro  loses to Gravity by 15-3, loses to Quazar by 9-5
>> Magellanic  loses to Gravity by 15-0, loses to Caravan by 9-5
>> 9. Magellan  loses to Gravity by 16-1, loses to Maestro by 7-4
>> 10. Stackadoop  loses to Gravity by 14-6, loses to Magellan by 8-6
>>
>> Thanks for voting.
>>
>>
>> On Tue, Feb 18, 2014 at 10:52 AM, Sergey Lukjanov > > wrote:
>>
>>> Currently, we have only 19/47 votes, so, I'm adding one more day.
>>>
>>>
>>> On Fri, Feb 14, 2014 at 3:04 PM, Sergey Lukjanov >> > wrote:
>>>
 Hi folks,

 I've created a poll to select 10 candidates for new Savanna name. It's
 a first round of selecting new name for our lovely project. This poll will
 be ended in Monday, Feb 17.

 You should receive an email from "Sergey Lukjanov (CIVS poll
 supervisor) slukja...@mirantis.com"  via cs.cornell.edu with topic
 "Poll: Savanna new name candidates".

 Thank you!

 P.S. I've bcced all ATCs, don't panic.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

>>>
>>>
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sanchez, Cristian A
I agree with Bauza that the main purpose of Climate is to reserve resources, 
and in the case of keystone it should reserve tenant, users, domains, etc.

So, it could be possible that climate is not the module in which the tenant 
“lease” information should be saved. As stated in the use case, the only 
purpose of this BP is to allow the creation of tenants with start and end 
dates. Then when creating resources in that tenant (like VMs) climate could 
take “lease” information from the tenant itself and create actual leases for 
the VMs.

Any thoughts of this?

From: Sylvain Bauza mailto:sylvain.ba...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: jueves, 20 de febrero de 2014 15:57
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design




2014-02-20 19:32 GMT+01:00 Dina Belova 
mailto:dbel...@mirantis.com>>:
Sylvain, as I understand in BP description, Christian is about not exactly 
reserving tenants itself like we actually do with VMs/hosts - it's just naming 
for that. I think he is about two moments:

1) mark some tenants as "needed to be reserved" - speaking about resources 
assigned to it
2) reserve these resources via Climate (VMs for first approximation)


Well, I understood your BP, that's Christian's message which was a bit 
misunderstanding.
Speaking of marking a tenant as "reserved" would then mean that it does have 
kind of priority vs. another tenant. But again, at said, how could you ensure 
at the marking (ie. at lease creation) that Climate can honor contracts with 
resources that haven't been explicitely defined ?

I suppose Christian is speaking now about hacking tenants creation process to 
mark them as "needed to be reserved" (1st step).


Again, a lease is mutually and exclusively linked with explicit resources. If 
you say "create a lease, for the love" without speaking of what, I don't see 
the interest in Climate, unless I missed something obvious.

-Sylvain
Christian, correct me if I'm wrong, please
Waiting for your comments


On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
mailto:sylvain.ba...@gmail.com>> wrote:
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian 
mailto:christian.marti...@intel.com>>:

Hello all,
I’m working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create “special” tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.


Before speaking about implementation,  I would definitely know the usecases you 
want to design.
What kind of resources do you want to provision using Climate ? The basic thing 
is, what is the rationale thinking about hooking tenant creation ? Could you 
please be more explicit ?

At the tenant creation, Climate wouldn't have no information in terms of 
calculating the resources asked, because the resources wouldn't have been 
allocated before. So, generating a lease on top of this would be like a 
non-formal contract in between Climate and the user, accounting nothing.

The main reason behind Climate is to provide SLAs for either user requests or 
projects requests, meaning that's duty of Climate to guarantee that the desired 
associated resource with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains. In 
that case, if Climate would be hooking Keystone, that would say that Climate 
ensures that the cloud will have enough capacity for creating these resources 
in the future.

IMHO, that's not worth implementing it.


First of all, we need to add some “parameters or flags” during the tenant 
creation so we can know that the associated resources need to have a lease. 
Does anyone know if Keystone has similar functionality to Nova in relation with 
Hooks/API extensions (something like the stuff mentioned on 
http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first idea is 
to intercept the tenant creation call (as it’s being done with climate-nova) 
and use that information to associate a lease quota to the resources assigned 
to that tenant.


Keystone has no way to know which resources are associated within a tenant, see 
how the middleware authentication is done here [1]
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from one 
single tenant. IIRC, that should still be duty of Nova to handle that workflow 
and send the requests to Climate.

-Sylvain

[1] : http://docs.openstack.org/developer/keystone/middlewarearchitecture.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.o

Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Edgar Magana
Congratulations Oleg!!!

No need for welcoming you to the team, you were already part of  ;-)

Edgar

From:  Oleg Bondarev 
Reply-To:  OpenStack List 
Date:  Thursday, February 20, 2014 6:43 AM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

Thanks Mark,

thanks everyone for voiting! I'm so happy to become a member of this really
great team!

Oleg


On Thu, Feb 20, 2014 at 6:29 PM, Mark McClain 
wrote:
>  I¹d like to welcome Oleg as member of the core Neutron team as he has
> received more than enough +1s and no negative votes from the other cores.
> 
> mark
> 
> On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:
> 
>> > All-
>> >
>> > I¹d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg
>> has been valuable contributor to Neutron by actively reviewing, working on
>> bugs, and contributing code.
>> >
>> > Neutron cores please reply back with +1/0/-1 votes.
>> >
>> > mark
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sylvain Bauza
2014-02-20 19:32 GMT+01:00 Dina Belova :

> Sylvain, as I understand in BP description, Christian is about not exactly
> reserving tenants itself like we actually do with VMs/hosts - it's just
> naming for that. I think he is about two moments:
>
> 1) mark some tenants as "needed to be reserved" - speaking about resources
> assigned to it
> 2) reserve these resources via Climate (VMs for first approximation)
>
>
Well, I understood your BP, that's Christian's message which was a bit
misunderstanding.
Speaking of marking a tenant as "reserved" would then mean that it does
have kind of priority vs. another tenant. But again, at said, how could you
ensure at the marking (ie. at lease creation) that Climate can honor
contracts with resources that haven't been explicitely defined ?


> I suppose Christian is speaking now about hacking tenants creation process
> to mark them as "needed to be reserved" (1st step).
>
>
Again, a lease is mutually and exclusively linked with explicit resources.
If you say "create a lease, for the love" without speaking of what, I don't
see the interest in Climate, unless I missed something obvious.

-Sylvain

> Christian, correct me if I'm wrong, please
> Waiting for your comments
>
>
> On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
> wrote:
>
>> Hi Christian,
>>
>> 2014-02-20 18:10 GMT+01:00 Martinez, Christian <
>> christian.marti...@intel.com>:
>>
>>   Hello all,
>>>
>>> I'm working in the following BP:
>>> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
>>> in which the idea is to have the possibility to create "special" tenants
>>> that have a lease for all of its associated resources.
>>>
>>>
>>>
>>> The BP is in discussing phase and we were having conversations on IRC
>>> about what approach should we follow.
>>>
>>>
>>>
>>
>> Before speaking about implementation,  I would definitely know the
>> usecases you want to design.
>> What kind of resources do you want to provision using Climate ? The basic
>> thing is, what is the rationale thinking about hooking tenant creation ?
>> Could you please be more explicit ?
>>
>> At the tenant creation, Climate wouldn't have no information in terms of
>> calculating the resources asked, because the resources wouldn't have been
>> allocated before. So, generating a lease on top of this would be like a
>> non-formal contract in between Climate and the user, accounting nothing.
>>
>> The main reason behind Climate is to provide SLAs for either user
>> requests or projects requests, meaning that's duty of Climate to guarantee
>> that the desired associated resource with the lease will be created in the
>> future.
>> Speaking of Keystone, the Keystone objects are tenants, users or domains.
>> In that case, if Climate would be hooking Keystone, that would say that
>> Climate ensures that the cloud will have enough capacity for creating these
>> resources in the future.
>>
>> IMHO, that's not worth implementing it.
>>
>>
>>  First of all, we need to add some "parameters or flags" during the
>>> tenant creation so we can know that the associated resources need to have a
>>> lease. Does anyone know if Keystone has similar functionality to Nova in
>>> relation with Hooks/API extensions (something like the stuff mentioned on
>>> http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first
>>> idea is to intercept the tenant creation call (as it's being done with
>>> climate-nova) and use that information to associate a lease quota to the
>>> resources assigned to that tenant.
>>>
>>>
>>
>> Keystone has no way to know which resources are associated within a
>> tenant, see how the middleware authentication is done here [1]
>> Regarding the BP, the motivation is to possibly 'leasify' all the VMs
>> from one single tenant. IIRC, that should still be duty of Nova to handle
>> that workflow and send the requests to Climate.
>>
>> -Sylvain
>>
>> [1] :
>> http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Designate Icehouse-2 Release

2014-02-20 Thread Betsy Luzader
Today we have released Designate Icehouse-2. The high-level launchpad details 
can be found at https://launchpad.net/designate/icehouse/icehouse-2, as well as 
a link to the tar file. This release includes almost a dozen blueprints, 
including one for Domain Import/Export, as well as numerous bug fixes.

If you any questions, you can reach the team via this Openstack dev group with 
the subject line [designate] or via our IRC chat room at #openstack-dns.

Betsy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Martinez, Christian
Dina: Yes, I'm talking about that. Thanks for the clarification.

Sylvain, let me put the use case that we have:
As part of project/tenant creation we would like to mark the tenant in such a 
way that climate will automatically create a lease for the resources. All 
non-production tenants/projects will be granted a default quota and all 
resources should have associated leases. Climate leases will trigger work-flows 
via notifications. The work-flows defined in mistral will provide automation to 
achieve some of our non-production capacity management needs. We expect Mistral 
work-flows to trigger emails, ability for customer to extend lease and finally 
for the resource to potentially be backed up and then deleted.
We have also considered implementing a non-climate process to automatically 
create the leases for all non-production tenants.

Regarding the resources to be considered,
For us and our need managing just the VM resource is sufficient for the 
foreseeable future.

Also, I think that we should consider casanch1's comments on the BP:
"we must also have a blueprint that allow the user to create "Tenant Types" 
with 'default' lease attributes. Then when creating a tenant, the user can 
specify lease dates and/or tenant type"



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Thursday, February 20, 2014 3:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

Sylvain, as I understand in BP description, Christian is about not exactly 
reserving tenants itself like we actually do with VMs/hosts - it's just naming 
for that. I think he is about two moments:

1) mark some tenants as "needed to be reserved" - speaking about resources 
assigned to it
2) reserve these resources via Climate (VMs for first approximation)

I suppose Christian is speaking now about hacking tenants creation process to 
mark them as "needed to be reserved" (1st step).

Christian, correct me if I'm wrong, please
Waiting for your comments

On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
mailto:sylvain.ba...@gmail.com>> wrote:
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian 
mailto:christian.marti...@intel.com>>:

Hello all,
I'm working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create "special" tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.


Before speaking about implementation,  I would definitely know the usecases you 
want to design.
What kind of resources do you want to provision using Climate ? The basic thing 
is, what is the rationale thinking about hooking tenant creation ? Could you 
please be more explicit ?

At the tenant creation, Climate wouldn't have no information in terms of 
calculating the resources asked, because the resources wouldn't have been 
allocated before. So, generating a lease on top of this would be like a 
non-formal contract in between Climate and the user, accounting nothing.

The main reason behind Climate is to provide SLAs for either user requests or 
projects requests, meaning that's duty of Climate to guarantee that the desired 
associated resource with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains. In 
that case, if Climate would be hooking Keystone, that would say that Climate 
ensures that the cloud will have enough capacity for creating these resources 
in the future.

IMHO, that's not worth implementing it.


First of all, we need to add some "parameters or flags" during the tenant 
creation so we can know that the associated resources need to have a lease. 
Does anyone know if Keystone has similar functionality to Nova in relation with 
Hooks/API extensions (something like the stuff mentioned on 
http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first idea is 
to intercept the tenant creation call (as it's being done with climate-nova) 
and use that information to associate a lease quota to the resources assigned 
to that tenant.


Keystone has no way to know which resources are associated within a tenant, see 
how the middleware authentication is done here [1]
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from one 
single tenant. IIRC, that should still be duty of Nova to handle that workflow 
and send the requests to Climate.

-Sylvain

[1] : http://docs.openstack.org/developer/keystone/middlewarearchitecture.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
Ope

Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Dina Belova
Sylvain, as I understand in BP description, Christian is about not exactly
reserving tenants itself like we actually do with VMs/hosts - it's just
naming for that. I think he is about two moments:

1) mark some tenants as "needed to be reserved" - speaking about resources
assigned to it
2) reserve these resources via Climate (VMs for first approximation)

I suppose Christian is speaking now about hacking tenants creation process
to mark them as "needed to be reserved" (1st step).

Christian, correct me if I'm wrong, please
Waiting for your comments


On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza wrote:

> Hi Christian,
>
> 2014-02-20 18:10 GMT+01:00 Martinez, Christian <
> christian.marti...@intel.com>:
>
>   Hello all,
>>
>> I'm working in the following BP:
>> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
>> in which the idea is to have the possibility to create "special" tenants
>> that have a lease for all of its associated resources.
>>
>>
>>
>> The BP is in discussing phase and we were having conversations on IRC
>> about what approach should we follow.
>>
>>
>>
>
> Before speaking about implementation,  I would definitely know the
> usecases you want to design.
> What kind of resources do you want to provision using Climate ? The basic
> thing is, what is the rationale thinking about hooking tenant creation ?
> Could you please be more explicit ?
>
> At the tenant creation, Climate wouldn't have no information in terms of
> calculating the resources asked, because the resources wouldn't have been
> allocated before. So, generating a lease on top of this would be like a
> non-formal contract in between Climate and the user, accounting nothing.
>
> The main reason behind Climate is to provide SLAs for either user requests
> or projects requests, meaning that's duty of Climate to guarantee that the
> desired associated resource with the lease will be created in the future.
> Speaking of Keystone, the Keystone objects are tenants, users or domains.
> In that case, if Climate would be hooking Keystone, that would say that
> Climate ensures that the cloud will have enough capacity for creating these
> resources in the future.
>
> IMHO, that's not worth implementing it.
>
>
>  First of all, we need to add some "parameters or flags" during the
>> tenant creation so we can know that the associated resources need to have a
>> lease. Does anyone know if Keystone has similar functionality to Nova in
>> relation with Hooks/API extensions (something like the stuff mentioned on
>> http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first
>> idea is to intercept the tenant creation call (as it's being done with
>> climate-nova) and use that information to associate a lease quota to the
>> resources assigned to that tenant.
>>
>>
>
> Keystone has no way to know which resources are associated within a
> tenant, see how the middleware authentication is done here [1]
> Regarding the BP, the motivation is to possibly 'leasify' all the VMs from
> one single tenant. IIRC, that should still be duty of Nova to handle that
> workflow and send the requests to Climate.
>
> -Sylvain
>
> [1] :
> http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation Request: Murano

2014-02-20 Thread Georgy Okrokvertskhov
All,

Murano is the OpenStack Application Catalog service which has been
developing on stackforge almost 11 months. Murano has been presented on HK
summit on unconference track and now we would like to apply for incubation
during Juno release.

As the first step we would like to get feedback from TC on Murano readiness
from OpenStack processes standpoint as well as open up conversation around
mission and how it fits OpenStack ecosystem.

Murano incubation request form is here:
https://wiki.openstack.org/wiki/Murano/Incubation

As a part of incubation request we are looking for an advice from TC on the
governance model for Murano. Murano may potentially fit to the expanding
scope of Image program, if it will be transformed to Catalog program. Also
it potentially fits Orchestration program, and as a third option there
might be a value in creation of a new standalone Application Catalog
program. We have pros and cons analysis in Murano Incubation request form.

Murano team  has been working on Murano as a community project. All our
code and bugs/specs are hosted at OpenStack Gerrit and Launchpad
correspondingly. Unit tests and all pep8/hacking checks are run at
OpenStack Jenkins and we have integration tests running at our own Jenkins
server for each patch set. Murano also has all necessary scripts for
devstack integration. We have been holding weekly IRC meetings for the last
7 months and discussing architectural questions there and in openstack-dev
mailing lists as well.

Murano related information is here:

Launchpad: https://launchpad.net/murano

Murano Wiki page: https://wiki.openstack.org/wiki/Murano

Murano Documentation: https://wiki.openstack.org/wiki/Murano/Documentation

Murano IRC channel: #murano

With this we would like to start the process of incubation application
review.

Thanks
Georgy

-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Joe Gordon
Hi All,

I discussion recently came up inside of nova about what it means
supported version for a dependency means.  in libvirt we gate on the
minimal version that we support but for all python dependencies we
gate on the highest version that passes our requirements. While we all
agree that having two different ways of choosing which version to test
(min and max) is bad, there are good arguments for doing both.

testing most recent version:
* We want to make sure we support the latest and greatest
* Bug fixes
* Quickly discover backwards incompatible changes so we can deal
with them as they arise instead of in batch

Testing lowest version supported:
* Make sure we don't land any code that breaks compatibility with
the lowest version we say we support


A few questions and ideas on how to move forward.
* How do other projects deal with this? This problem isn't unique
in OpenStack.
* What are the issues with making one gate job use the latest
versions and one use the lowest supported versions?
* Only test some things on every commit or every day (periodic
jobs)? But no one ever fixes those things when they break? who wants
to own them? distros? deployers?
* Other solutions?
* Does it make sense to gate on the lowest version of libvirt but
the highest version of python libs?
* Given our finite resources what gets us the furthest?


best,
Joe Gordon
John Garbutt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Nader Lahouti
Thanks Dolph for link. The document shows the format of the message and doesn't 
give any info on how to listen to the notification. 
Is there any other document showing the detail on how to listen or get these 
notifications ?

Regards,
Nader.

> On Feb 20, 2014, at 9:06 AM, Dolph Mathews  wrote:
> 
> Yes, see:
> 
>   http://docs.openstack.org/developer/keystone/event_notifications.html
> 
>> On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti  
>> wrote:
>> Hi All,
>> 
>> I have a question regarding creating/deleting a tenant in openstack (using 
>> horizon or CLI). Is there any notification mechanism in place so that an 
>> application get informed of such an event?
>> 
>> If not, can it be done using plugin to send create/delete notification to an 
>> application?
>> 
>> Appreciate your suggestion and help.
>> 
>> Regards,
>> Nader.
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sphinxcontrib-pecanwsme 0.7 released

2014-02-20 Thread Sylvain Bauza
Hi Doug,


2014-02-20 17:37 GMT+01:00 Doug Hellmann :

> sphinxcontrib-pecanwsme is an extension to Sphinx for documenting APIs
> built with the Pecan web framework and WSME.
>
> What's New?
> ===
>
> - Remove the trailing slash from the end of the URLs, as it results in
> misleading feature documentation, see Ceilometer bug #1202744.
>
>
>
Do you have a review in progress for updating global requirements ? At the
moment, it's still pointing to 0.6.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sylvain Bauza
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian :

>  Hello all,
>
> I'm working in the following BP:
> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
> in which the idea is to have the possibility to create "special" tenants
> that have a lease for all of its associated resources.
>
>
>
> The BP is in discussing phase and we were having conversations on IRC
> about what approach should we follow.
>
>
>

Before speaking about implementation,  I would definitely know the usecases
you want to design.
What kind of resources do you want to provision using Climate ? The basic
thing is, what is the rationale thinking about hooking tenant creation ?
Could you please be more explicit ?

At the tenant creation, Climate wouldn't have no information in terms of
calculating the resources asked, because the resources wouldn't have been
allocated before. So, generating a lease on top of this would be like a
non-formal contract in between Climate and the user, accounting nothing.

The main reason behind Climate is to provide SLAs for either user requests
or projects requests, meaning that's duty of Climate to guarantee that the
desired associated resource with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains.
In that case, if Climate would be hooking Keystone, that would say that
Climate ensures that the cloud will have enough capacity for creating these
resources in the future.

IMHO, that's not worth implementing it.


 First of all, we need to add some "parameters or flags" during the tenant
> creation so we can know that the associated resources need to have a lease.
> Does anyone know if Keystone has similar functionality to Nova in relation
> with Hooks/API extensions (something like the stuff mentioned on
> http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first
> idea is to intercept the tenant creation call (as it's being done with
> climate-nova) and use that information to associate a lease quota to the
> resources assigned to that tenant.
>
>

Keystone has no way to know which resources are associated within a tenant,
see how the middleware authentication is done here [1]
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from
one single tenant. IIRC, that should still be duty of Nova to handle that
workflow and send the requests to Climate.

-Sylvain

[1] :
http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] renaming: initial voting

2014-02-20 Thread Sergey Lukjanov
I've contacted foundation and they are ready to verify 5 options, so, we'll
choose them on todays irc team meeting (starting right now).


On Wed, Feb 19, 2014 at 12:27 AM, Sergey Lukjanov wrote:

> The voting is ended, you can ding results here -
> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=10&id=E_5dd4f18fde38ce8e&algorithm=beatpath
>
> So, the new name options for more detailed discussion are:
>
> 1. Gravity  (Condorcet winner: wins contests with all other choices)
> 2. Sahara  loses to Gravity by 10-8
> 3. Quazar  loses to Gravity by 13-3, loses to Sahara by 12-6
> 4. Stellar  loses to Gravity by 13-4, loses to Quazar by 9-7
> 5. Caravan  loses to Gravity by 12-5, loses to Stellar by 9-7
> 6. Tied:
> Fusor  loses to Gravity by 13-2, loses to Caravan by 9-4
> Maestro  loses to Gravity by 15-3, loses to Quazar by 9-5
> Magellanic  loses to Gravity by 15-0, loses to Caravan by 9-5
> 9. Magellan  loses to Gravity by 16-1, loses to Maestro by 7-4
> 10. Stackadoop  loses to Gravity by 14-6, loses to Magellan by 8-6
>
> Thanks for voting.
>
>
> On Tue, Feb 18, 2014 at 10:52 AM, Sergey Lukjanov 
> wrote:
>
>> Currently, we have only 19/47 votes, so, I'm adding one more day.
>>
>>
>> On Fri, Feb 14, 2014 at 3:04 PM, Sergey Lukjanov 
>> wrote:
>>
>>> Hi folks,
>>>
>>> I've created a poll to select 10 candidates for new Savanna name. It's a
>>> first round of selecting new name for our lovely project. This poll will be
>>> ended in Monday, Feb 17.
>>>
>>> You should receive an email from "Sergey Lukjanov (CIVS poll supervisor)
>>> slukja...@mirantis.com"  via cs.cornell.edu with topic "Poll: Savanna
>>> new name candidates".
>>>
>>> Thank you!
>>>
>>> P.S. I've bcced all ATCs, don't panic.
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting minutes, and Next Steps/new meeting time.

2014-02-20 Thread Jesse Noller
Hi Everyone;

Our first python-openstack meeting was awesome: and I really want to thank 
everyone who came, and for Doug teaching me the meeting bot :)

Minutes:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.html
Minutes 
(text):http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.txt
Log:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.log.html

Note that coming out of this we will be moving the meetings to Tuesdays, 19:00 
UTC / 1pm CST starting on Tuesday March 4th. Next week there will not be a 
meeting while we discuss and flesh out next steps and requested items (API, 
names, extensions and internal HTTP API).

If you want to participate: please join us on free node: #openstack-sdks 

https://wiki.openstack.org/wiki/PythonOpenStackSDK

Jesse
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-20 Thread Matt Riedemann



On 2/19/2014 4:05 PM, Matt Riedemann wrote:

The os-hosts OS API extension [1] showed up before I was working on the
project and I see that only the VMware and XenAPI drivers implement it,
but was wondering why the libvirt driver doesn't - either no one wants
it, or there is some technical reason behind not implementing it for
that driver?

[1]
http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html




By the way, am I missing something when I think that this extension is 
already covered if you're:


1. Looking to get the node out of the scheduling loop, you can just 
disable it with os-services/disable?


2. Looking to evacuate instances off a failed host (or one that's in 
"maintenance mode"), just use the evacuate server action.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Re: [openstack-qa] Not Able to Run Tempest API Tests

2014-02-20 Thread David Kranz

On 02/20/2014 05:58 AM, om prakash pandey wrote:
I am not able to run Tempest API tests. The typical ERROR I am getting 
is "Connection Timed Out".


When checking into the logs I found out that tempest is trying to 
access the admin URL which is a private IP for our deployment. Now, 
Tempest is designed to access only the Public API endpoints, so is 
this something to do with my Tempest Configuration OR A problem with 
the Deployment itself.
Please use openstack-dev prefixed with [qa] in the subject. The 
openstack-qa list is not being used anymore.


I think the problem you are having is that,
by default, tempest creates a new tenant and user for each test class. 
Doing so requires admin credentials which are specified in tempest.conf. 
You can

run tempest without this feature by setting these values in tempest.conf:

allow_tenant_isolation = false

If you do this you will not be able to run tempest in parallel and a 
number of tests that require admin to run at all will fail.


Also, if you are using master, the use of nose is not supported any 
more. You will need to use testr.


 -David



ERROR: test suite for 'tempest.api.compute.limits.test_absolute_limits.AbsoluteLimitsTestJSON'>

--
Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 208, in run

self.setUp()
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 291, in setUp

self.setupContext(ancestor)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 314, in setupContext

try_run(context, names)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/util.py", 
line 469, in try_run

return func()
  File 
"/opt/stack/tempest/tempest/api/compute/limits/test_absolute_limits.py", 
line 25, in setUpClass

super(AbsoluteLimitsTestJSON, cls).setUpClass()
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 183, in 
setUpClass

super(BaseV2ComputeTest, cls).setUpClass()
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 39, in 
setUpClass

os = cls.get_client_manager()
  File "/opt/stack/tempest/tempest/test.py", line 288, in 
get_client_manager

creds = cls.isolated_creds.get_primary_creds()
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 
367, in get_primary_creds

user, tenant = self._create_creds()
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 
166, in _create_creds

description=tenant_desc)
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 81, 
in _create_tenant

name=name, description=description)
  File 
"/opt/stack/tempest/tempest/services/identity/json/identity_client.py", line 
63, in create_tenant

resp, body = self.post('tenants', post_body, self.headers)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 154, 
in post

return self.request('POST', url, headers, body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 276, 
in request

headers=headers, body=body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 260, 
in _request

req_url, method, headers=req_headers, body=req_body)
  File "/opt/stack/tempest/tempest/common/http.py", line 25, in request
return super(ClosingHttp, self).request(*args, **new_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1571, in request
(response, content) = self._request(conn, authority, uri, 
request_uri, method, body, headers, redirections, cachekey)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1318, in _request
(response, content) = self._conn_request(conn, request_uri, 
method, body, headers)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1291, in _conn_request

conn.connect()
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 913, in connect

raise socket.error, msg
error: [Errno 110] Connection timed out
 >> begin captured stdout << -
connect: (10.135.120.120, 35357) 
connect fail: (10.135.120.120, 35357)

- >> end captured stdout << --

==
ERROR: test suite for 'tempest.api.compute.limits.test_absolute_limits.AbsoluteLimitsTestXML'>

--
Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 208, in run

self.setUp()
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 291, in setUp

self.setupContext(ancestor)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 314, in setupCo

[openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Martinez, Christian
Hello all,
I'm working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create "special" tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.

First of all, we need to add some "parameters or flags" during the tenant 
creation so we can know that the associated resources need to have a lease. 
Does anyone know if Keystone has similar functionality to Nova in relation with 
Hooks/API extensions (something like the stuff mentioned on 
http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first idea is 
to intercept the tenant creation call (as it's being done with climate-nova) 
and use that information to associate a lease quota to the resources assigned 
to that tenant.

I'm not sure if this is the right approach or if this is even possible, so 
feedback is welcomed.

Regards,
Christian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Dolph Mathews
Yes, see:

  http://docs.openstack.org/developer/keystone/event_notifications.html

On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti wrote:

> Hi All,
>
> I have a question regarding creating/deleting a tenant in openstack (using
> horizon or CLI). Is there any notification mechanism in place so that an
> application get informed of such an event?
>
> If not, can it be done using plugin to send create/delete notification to
> an application?
>
> Appreciate your suggestion and help.
>
> Regards,
> Nader.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Nader Lahouti
Hi All,

I have a question regarding creating/deleting a tenant in openstack (using
horizon or CLI). Is there any notification mechanism in place so that an
application get informed of such an event?

If not, can it be done using plugin to send create/delete notification to
an application?

Appreciate your suggestion and help.

Regards,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sphinxcontrib-pecanwsme 0.7 released

2014-02-20 Thread Doug Hellmann
sphinxcontrib-pecanwsme is an extension to Sphinx for documenting APIs
 
built with the Pecan web framework and WSME.

What's New?
===

- Remove the trailing slash from the end of the URLs, as it results in
misleading feature documentation, see Ceilometer bug #1202744.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 15:24:22 +
John Garbutt  wrote:
> 
> > Also micro and extension versioning is not the magic bullet which
> > will get us out of trouble in the future. Especially with the core
> > changes. Because even though versioning allows us to make changes,
> > for similar reasons to not being able to just drop V2 after a
> > couple of cycles we'll still need to keep supporting (and testing)
> > the old behaviour for a significant period of time (we have often
> > quietly ignored this issue in the past).
> 
> I thought the versions were all backwards compatible (for some
> definition of that).

They are, but say you add a backwards compatible change that allows you
to specific a flag that significantly changes the behaviour of the call.
At least from say a tempest point of view you have to test that method
both (with all the various existing possibilities)  with that flag
enabled and with it disabled. So we've doubled the test burden for the
call.

> I did agree with you before now, but I maybe we have "too many" people
> using v2 already. I have been wondering about a half way house...
> 
> So, changes in v3 I would love to keep (highest priority first, as I
> see it):
> * versioning extensions
> * task API
> * internal wiring fix ups (policy, everything is an extension, split
> up extensions)
> * return code fix ups
> * better input validation
> * url (consistency) changes
> * document consistency changes
> 

I think by the time we put these in, we're essentially forcing
people off the old V2 API anyway because we will break existing apps.
We're just being stealthy about it and not bumping the api version.

Why not just instead tell people upfront that they have to move to the
V3 API within X cycles because the V2 API is being removed? 

> Assuming we are happy to retro-fix versions, the question is how much
> do we allow to change between those versions.
> 
> I am good with stuff that would not break a "correct" old API client:
> * allow an API to warn users it is "deprecated"
> * extra attributes may be added in the return document
> * return codes for error cases might get updated
> * the x in 2xx might change for success cases
> * stricter validation of inputs might occur
> 
> Ideally, we only do this once, like at the same time we add in the
> versioning of extensions.
> 
> Having two urls for a single extension seems like quite a low cost
> "fix up", that we can keep in tree. Unit tests should be able to cover
> that, or at least only a small amount of functional tests.
> 
> The clean ups to the format of documents returned, this one is harder:
> * let the client (somehow) choose the new version, maybe a new URL, or
> maybe an Accepts header
> * keep the old version via some "converter" that can be easily unit
> tested in isolation
> 
> The general idea, is to get the fix ups we want, but make it easier to
> maintain, and try to reduce the functional test version, by using the
> knowledge there is so much common code, we don't need a full
> duplication of tests.

Hrm I'm not so sure about that. The API layer is pretty thin and so
essentially there is a lot of common code, but from a tempest point of
view we still fully test against both APIs. I'm not sure I'd feel that
comfortable about not doing so. 

> As far as implementation, I think it would be to make the v3 code
> support v2 and v3, in the process change v3 so thats possible, then
> drop the v2 code.

So I guess my question is adding a v2 backwards compatibility mode to
the v3 code any less burden than just simply keeping the v2 code
around? I don't think it is if it complicates the v2 code too much.

Though I think someone did bring up at the meeting (I'm not sure if it
was you) if it was possible to have a V2<->V3 translation layer. So
perhaps we could have a separate service that just sat in the middle
between a client and nova-api which translated requests and responses
just for V2 API requests to V3 format. And proxied to
neutron/cinder/glance where necessary. That may be one possible solution
to supporting V2 for longer than we'd like whilst at the same time being
able to remove it from Nova. Probably not a trivial thing to implement
but I think it addresses some of the concerns mentioned about keeping
the v2 API around.

This sort of technique might be able to be used to remove the ec2 api
code as well

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] git-review patch: Fix parsing of SCP-style URLs

2014-02-20 Thread Alexander Jones
Really don't want to have to resolve conflicts again or battle through making 
the test suite succeed... Please can someone merge this? 

https://review.openstack.org/#/c/72751/ 
https://bugs.launchpad.net/git-review/+bug/1279016 

Thanks! 

Alexander Jones 
Double Negative R&D 
www.dneg.com 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Neutron] Refactoring heat LBaaS architecture according Neutron API

2014-02-20 Thread Sergey Kraynev
Hello community.

I'd like to discuss feature of Neutron LBaaS in Heat.
Currently Heat resources are not identical to Neutron's.
There are four resources here:
'OS::Neutron::HealthMonitor'
'OS::Neutron::Pool'
'OS::Neutron::PoolMember'
'OS::Neutron::LoadBalancer'

According to this representation the VIP is a part of resource
Loadbalancer, whereas Neutron has separate object VIP.  I think it should
be changed to conform with Neutron's implementation.
So the main question: what is the best way to change it? I see following
options:

1. Move VIP in separate resource in icehouse release (without any
additions).
Possibly we should support both (old and new) implementation for users too.
 IMO, it has also one big danger, because now we have stable version of it
and have not enough time to check new approach.
Also I think it does not make sense now, because Neutron team are
discussing new object model (
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027480.html)
and it will be implemented in Juno.

2. The second idea is to wait all architecture changes that are planed in
Juno in Neutron. (look at link above)
Then we could recreate or change Heat LBaaS architecture at all.

Your feedback and other ideas about better implementation plan are welcome.

Regards,
Sergey.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-20 Thread Sergey Lukjanov
Yup, current implementation depends on GNU getopt.

Julien, cool, that means I'm not crazy :)

About using common getopt functionally - at least long args will be removed
to support non GNU getopt. Rewriting it on pure python will be more useful
IMO.


On Thu, Feb 20, 2014 at 1:45 PM, Julien Danjou  wrote:

> On Thu, Feb 20 2014, Chmouel Boudjnah wrote:
>
> > In which sort of system setup other than macosx/freebsd
> generate_sample.sh
> > is not working?
>
> Likely everywhere GNU tools are not standard. So that's every system
> _except_ GNU/Linux ones I'd say. :)
>
> --
> Julien Danjou
> -- Free Software hacker
> -- http://julien.danjou.info
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread John Garbutt
On 20 February 2014 14:55, Christopher Yeoh  wrote:
> On Thu, 20 Feb 2014 08:22:57 -0500
> Sean Dague  wrote:
>>
>> We're also duplicating a lot of test and review energy in having 2 API
>> stacks. Even before v3 has come out of experimental it's consumed a
>> huge amount of review resource on both the Nova and Tempest sides to
>> get it to it's current state.
>>
>> So my feeling is that in order to get more energy and focus on the
>> API, we need some kind of game plan to get us to a single API
>> version, with a single data payload in L (or on the outside, M). If
>> the decision is v2 must be in both those releases (and possibly
>> beyond), then it seems like asking other hard questions.
>>
>> * why do a v3 at all? instead do we figure out a way to be able to
>> evolve v2 in a backwards compatible way.
>
> So there's lots of changes (cleanups) made between v2 and v3 which are
> really not possible to do in a backwards compatible way. One example
> is that we're a lot stricter and consistent on input validation in v3
> than v2 which is better both from a user and server point of view.
> Another is that the tasks API would be a lot uglier and really look
> "bolted on" if we tried to do so. Also doing so doesn't actually reduce
> the test load as if we're still supporting the old 'look' of the api we
> still need to test for it separately to the new 'look' even if we don't
> bump the api major version.
>
> In terms of code sharing (and we've experimented a bit with this for
> v2/v3) I think in most cases ends up actually being easier having two
> quite completely separate trees because it ends up diverging so much
> that having if statements around everywhere to handle the different
> cases is actually a higher maintenance burden (much harder to read)
> than just knowing that you might have to make changes in two quite
> separate places.

Maybe, but what about a slightly less different v3, that would enable
such an approach?

>> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
>> that working around all that code isn't a velocity inhibitor in the
>> cleanups required in v2? Because some of the crazy hacks that exist to
>> make XML structures work for the json in v2 is kind of special.
>
> So I don't think we can do that for similar reasons we can't just drop
> V2 after a couple of cycles. We should be encouraging people off, not
> forcing them off.

We could look to remove XML support, with the same cycle we considered
dropping v2.

>> This big bang approach to API development may just have run it's
>> course, and no longer be a useful development model. Which is good to
>> find out. Would have been nice to find out earlier... but not all
>> lessons are easy or cheap. :)
>
> So I think what the v3 gives us is much more consistent and clean
> API base to start from. It's a clean break from the past. But we have to
> be much more careful about any future API changes/enhancements than we
> traditionally have done in the past especially with any changes which
> affect the core. I think we've already significantly raised the quality
> bar in what we allow for both v2 and v3 in Icehouse compared to previous
> releases (those frustrated with trying to get API changes in will
> probably agree) but I'd like us to get even stricter about it in the
> future because getting it wrong in the API design has a MUCH higher
> long term impact than bugs in most other areas. Requiring an API spec
> upfront (and reviewing it) with a blueprint for any new API features
> should IMO be compulsory before a blueprint is approved.

I think we need to go down this (slightly painful) path.

Particularly because there is so much continuous deployment going on.

> Also micro and extension versioning is not the magic bullet which will
> get us out of trouble in the future. Especially with the core changes.
> Because even though versioning allows us to make changes, for similar
> reasons to not being able to just drop V2 after a couple of cycles
> we'll still need to keep supporting (and testing) the old behaviour for
> a significant period of time (we have often quietly ignored
> this issue in the past).

I thought the versions were all backwards compatible (for some
definition of that).

> Ultimately the only way to free ourselves from the maintenance of two
> API versions (and I'll claim this is rather misleading as it actually
> has more dimensions to it than this) is to convince users to move from
> the V2 API to the "new one". And it doesn't make much difference
> whether we call it V3 or V2.1 we still have very similar maintenance
> burdens if we want to make the sorts of API changes that we have done
> for V3.

I did agree with you before now, but I maybe we have "too many" people
using v2 already. I have been wondering about a half way house...

So, changes in v3 I would love to keep (highest priority first, as I see it):
* versioning extensions
* task API
* internal wiring fix ups (policy, everything is an ext

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Imre Farkas

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.


How are you going to redeploy without them?



What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these have different password workflow requirements.


I am not sure if all these use cases have different password 
requirement. If you check devtest, no matter whether you are creating or 
just updating your overcloud, all the parameters have to be provided for 
the heat template:

https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125

I would rather not require the user to enter 5/10/15 different passwords 
every time Tuskar updates the stack. I think it's much better to 
autogenerate the passwords for the first time, provide an option to 
override them, then save and encrypt them in Tuskar. So +1 for designing 
a proper system for storing the passwords.


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 16:02, Radomir Dopieralski wrote:
> On 20/02/14 15:57, Tomas Sedovic wrote:
>> On 20/02/14 15:41, Radomir Dopieralski wrote:
>>> On 20/02/14 15:00, Tomas Sedovic wrote:
>>>
 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.
>>>
>>> How are you going to redeploy without them?
>>>
>>
>> What do you mean by redeploy?
>>
>> 1. Deploy a brand new overcloud, overwriting the old one
>> 2. Updating the services in the existing overcloud (i.e. image updates)
>> 3. Adding new machines to the existing overcloud
>> 4. Autoscaling
>> 5. Something else
>> 6. All of the above
> 
> I mean clicking "scale" in tuskar-ui.
> 

Right. So either Heat's able to handle this on its own or we fix it to
be able to do that or we ask for the necessary parameters again.

I really dislike having to do crypto in tuskar.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Sean Dague
On 02/20/2014 09:55 AM, Christopher Yeoh wrote:
> On Thu, 20 Feb 2014 08:22:57 -0500
> Sean Dague  wrote:
>>
>> We're also duplicating a lot of test and review energy in having 2 API
>> stacks. Even before v3 has come out of experimental it's consumed a
>> huge amount of review resource on both the Nova and Tempest sides to
>> get it to it's current state.
>>
>> So my feeling is that in order to get more energy and focus on the
>> API, we need some kind of game plan to get us to a single API
>> version, with a single data payload in L (or on the outside, M). If
>> the decision is v2 must be in both those releases (and possibly
>> beyond), then it seems like asking other hard questions.
>>
>> * why do a v3 at all? instead do we figure out a way to be able to
>> evolve v2 in a backwards compatible way.
> 
> So there's lots of changes (cleanups) made between v2 and v3 which are
> really not possible to do in a backwards compatible way. One example
> is that we're a lot stricter and consistent on input validation in v3
> than v2 which is better both from a user and server point of view.
> Another is that the tasks API would be a lot uglier and really look
> "bolted on" if we tried to do so. Also doing so doesn't actually reduce
> the test load as if we're still supporting the old 'look' of the api we
> still need to test for it separately to the new 'look' even if we don't
> bump the api major version. 
> 
> In terms of code sharing (and we've experimented a bit with this for
> v2/v3) I think in most cases ends up actually being easier having two
> quite completely separate trees because it ends up diverging so much
> that having if statements around everywhere to handle the different
> cases is actually a higher maintenance burden (much harder to read)
> than just knowing that you might have to make changes in two quite
> separate places. 
> 
>> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
>> that working around all that code isn't a velocity inhibitor in the
>> cleanups required in v2? Because some of the crazy hacks that exist to
>> make XML structures work for the json in v2 is kind of special.
> 
> So I don't think we can do that for similar reasons we can't just drop
> V2 after a couple of cycles. We should be encouraging people off, not
> forcing them off. 
> 
>> This big bang approach to API development may just have run it's
>> course, and no longer be a useful development model. Which is good to
>> find out. Would have been nice to find out earlier... but not all
>> lessons are easy or cheap. :)
> 
> So I think what the v3 gives us is much more consistent and clean
> API base to start from. It's a clean break from the past. But we have to
> be much more careful about any future API changes/enhancements than we
> traditionally have done in the past especially with any changes which
> affect the core. I think we've already significantly raised the quality
> bar in what we allow for both v2 and v3 in Icehouse compared to previous
> releases (those frustrated with trying to get API changes in will
> probably agree) but I'd like us to get even stricter about it in the
> future because getting it wrong in the API design has a MUCH higher
> long term impact than bugs in most other areas. Requiring an API spec
> upfront (and reviewing it) with a blueprint for any new API features
> should IMO be compulsory before a blueprint is approved. 
> 
> Also micro and extension versioning is not the magic bullet which will
> get us out of trouble in the future. Especially with the core changes.
> Because even though versioning allows us to make changes, for similar
> reasons to not being able to just drop V2 after a couple of cycles
> we'll still need to keep supporting (and testing) the old behaviour for
> a significant period of time (we have often quietly ignored
> this issue in the past).
> 
> Ultimately the only way to free ourselves from the maintenance of two
> API versions (and I'll claim this is rather misleading as it actually
> has more dimensions to it than this) is to convince users to move from
> the V2 API to the "new one". And it doesn't make much difference
> whether we call it V3 or V2.1 we still have very similar maintenance
> burdens if we want to make the sorts of API changes that we have done
> for V3.

I want to flip this a little bit around. As an API consumer for an
upstream service I actually get excited when they announce a new version
and give me some new nobs to play with. Often times I'll even email
providers asking for certain API interfaces get exposed.

I do think we need to actually start from the end goal and work
backwards. My assumption is that 1 API vs. with 1 Data Format in L/M is
our end goal. I think that there are huge technical debt costs with
anything else. Our current course and speed makes us have 3 APIs/Formats
in that time frame.

There is no easy way out of this, but I think that the current course
and speed inhibits us in a lot of ways, not leas

Re: [openstack-dev] [Cinder]Do you think volume force delete operation should not apply to the volume being used?

2014-02-20 Thread yunling

From cinder code, we know that volume delete operation could be classify into 
three categories;
1. General delelte:  delete volumes that are in the status of available, error, 
error_restoring, error_extending. 
2. Force delete: delete volumes that are in the status of extending, attaching, 
detaching, await-transfering, backing or restoring. 
3. Others: volumes that are attached or in the progress of migrate operation 
can't be force deleted. 
 
We know that volume's status in attaching/detaching also means that the volume 
is "in-use", not only in "attached" status and in the progress of volume 
migration.
Cinder force delete sometimes can delelte "in-use" volumes, and sometimes can 
not deleted "in-use" volumes.
 
My question is as follows:
1. Do you think volume force delete operation should not apply to the volume 
being used?
eg. Should attaching/detaching/backing volume can't be force delete ?




From: yunlingz...@hotmail.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev][Cinder]Do you think volume force delete operation 
should not apply to the volume being used?
Date: Mon, 17 Feb 2014 13:13:45 +






Hi stackers: 

  

I found that volume status become inconsistent (nova volume status is 
attaching, verus cinder volume status is deleted) between nova and cinder when 
doing volume force delete operation on an attaching volume. 

I think volume force delete operation should not apply to the volume being 
used, which included the attached status of attaching, attached and detached. 

  

How do you think? 






thanks___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 15:57, Tomas Sedovic wrote:
> On 20/02/14 15:41, Radomir Dopieralski wrote:
>> On 20/02/14 15:00, Tomas Sedovic wrote:
>>
>>> Are we even sure we need to store the passwords in the first place? All
>>> this encryption talk seems very premature to me.
>>
>> How are you going to redeploy without them?
>>
> 
> What do you mean by redeploy?
> 
> 1. Deploy a brand new overcloud, overwriting the old one
> 2. Updating the services in the existing overcloud (i.e. image updates)
> 3. Adding new machines to the existing overcloud
> 4. Autoscaling
> 5. Something else
> 6. All of the above

I mean clicking "scale" in tuskar-ui.
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 15:41, Radomir Dopieralski wrote:
> On 20/02/14 15:00, Tomas Sedovic wrote:
> 
>> Are we even sure we need to store the passwords in the first place? All
>> this encryption talk seems very premature to me.
> 
> How are you going to redeploy without them?
> 

What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these have different password workflow requirements.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 08:22:57 -0500
Sean Dague  wrote:
> 
> We're also duplicating a lot of test and review energy in having 2 API
> stacks. Even before v3 has come out of experimental it's consumed a
> huge amount of review resource on both the Nova and Tempest sides to
> get it to it's current state.
> 
> So my feeling is that in order to get more energy and focus on the
> API, we need some kind of game plan to get us to a single API
> version, with a single data payload in L (or on the outside, M). If
> the decision is v2 must be in both those releases (and possibly
> beyond), then it seems like asking other hard questions.
> 
> * why do a v3 at all? instead do we figure out a way to be able to
> evolve v2 in a backwards compatible way.

So there's lots of changes (cleanups) made between v2 and v3 which are
really not possible to do in a backwards compatible way. One example
is that we're a lot stricter and consistent on input validation in v3
than v2 which is better both from a user and server point of view.
Another is that the tasks API would be a lot uglier and really look
"bolted on" if we tried to do so. Also doing so doesn't actually reduce
the test load as if we're still supporting the old 'look' of the api we
still need to test for it separately to the new 'look' even if we don't
bump the api major version. 

In terms of code sharing (and we've experimented a bit with this for
v2/v3) I think in most cases ends up actually being easier having two
quite completely separate trees because it ends up diverging so much
that having if statements around everywhere to handle the different
cases is actually a higher maintenance burden (much harder to read)
than just knowing that you might have to make changes in two quite
separate places. 

> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
> that working around all that code isn't a velocity inhibitor in the
> cleanups required in v2? Because some of the crazy hacks that exist to
> make XML structures work for the json in v2 is kind of special.

So I don't think we can do that for similar reasons we can't just drop
V2 after a couple of cycles. We should be encouraging people off, not
forcing them off. 

> This big bang approach to API development may just have run it's
> course, and no longer be a useful development model. Which is good to
> find out. Would have been nice to find out earlier... but not all
> lessons are easy or cheap. :)

So I think what the v3 gives us is much more consistent and clean
API base to start from. It's a clean break from the past. But we have to
be much more careful about any future API changes/enhancements than we
traditionally have done in the past especially with any changes which
affect the core. I think we've already significantly raised the quality
bar in what we allow for both v2 and v3 in Icehouse compared to previous
releases (those frustrated with trying to get API changes in will
probably agree) but I'd like us to get even stricter about it in the
future because getting it wrong in the API design has a MUCH higher
long term impact than bugs in most other areas. Requiring an API spec
upfront (and reviewing it) with a blueprint for any new API features
should IMO be compulsory before a blueprint is approved. 

Also micro and extension versioning is not the magic bullet which will
get us out of trouble in the future. Especially with the core changes.
Because even though versioning allows us to make changes, for similar
reasons to not being able to just drop V2 after a couple of cycles
we'll still need to keep supporting (and testing) the old behaviour for
a significant period of time (we have often quietly ignored
this issue in the past).

Ultimately the only way to free ourselves from the maintenance of two
API versions (and I'll claim this is rather misleading as it actually
has more dimensions to it than this) is to convince users to move from
the V2 API to the "new one". And it doesn't make much difference
whether we call it V3 or V2.1 we still have very similar maintenance
burdens if we want to make the sorts of API changes that we have done
for V3.

Chris

> 
>   -Sean
> 
> On 02/19/2014 12:36 PM, Russell Bryant wrote:
> > Greetings,
> > 
> > The v3 API effort has been going for a few release cycles now.  As
> > we approach the Icehouse release, we are faced with the following
> > question: "Is it time to mark v3 stable?"
> > 
> > My opinion is that I think we need to leave v3 marked as
> > experimental for Icehouse.
> > 
> > There are a number of reasons for this:
> > 
> > 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
> > last week made me come to the realization that v2 won't be going
> > away *any* time soon.  In some cases, users have long term API
> > support expectations (perhaps based on experience with EC2).  In
> > the best case, we have to get all of the SDKs updated to the new
> > API, and then get to the point where everyone is using a new enough
> > ver

Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Oleg Bondarev
Thanks Mark,

thanks everyone for voiting! I'm so happy to become a member of this really
great team!

Oleg


On Thu, Feb 20, 2014 at 6:29 PM, Mark McClain wrote:

>  I'd like to welcome Oleg as member of the core Neutron team as he has
> received more than enough +1s and no negative votes from the other cores.
>
> mark
>
> On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:
>
> > All-
> >
> > I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.
>  Oleg has been valuable contributor to Neutron by actively reviewing,
> working on bugs, and contributing code.
> >
> > Neutron cores please reply back with +1/0/-1 votes.
> >
> > mark
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread 郭小熙
We will move to Python33 in the future. More and more OpenStack projects
including python-novaclient are Python33 compatible. Do we have plan to
make Nova python33 compatible ?

As I know, oslo.messaging will not support python33 in Icehouse,this is
just one dependency for Nova, that means we can't finish it in Icehouse for
Nova. I registered one blueprint [1]to make us move to Python33 smoothly in
the future. Python33 compatibility would be taken into account while
reviewing code.

We have to add py33 check/gate jobs to check Py33 compatibility. This
blueprint could be marked as implemented only until Nova code can pass
these jobs.
[1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat

-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Ha yes, I completely forget IPv6 case.
Sorry and forget that thread.

Édouard.


On Thu, Feb 20, 2014 at 3:34 PM, Veiga, Anthony <
anthony_ve...@cable.comcast.com> wrote:

>  This would break IPv6.  The gateway address, according to RFC 4861[1]
> Section 4.2 regarding Router Advertisements: "Source Address MUST be the
> link-local address assigned to the interface from which this message is
> sent".  This means that if you configure a subnet with a Globally Unique
> Address scope, the gateway by definition cannot be in the configured
> subnet.  Please don't force this option, as it will break work going on in
> the Neutron IPv6 sub-team.
> -Anthony
>
>  [1] http://tools.ietf.org/html/rfc4861
>
>   Hi,
>
>  Neutron permits to set a gateway IP outside of the subnet cidr by
> default. And, thanks to the garyk's patch [1], it's possible to change this
> default behavior with config flag 'force_gateway_on_subnet'.
>
>  This flag was added to keep the backward compatibility for people who
> need to set the gateway outside of the subnet.
>
>  I think this behavior does not reflect the classic usage of subnets. So
> I propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
>  Any thought?
>
>  [1] https://review.openstack.org/#/c/19048/
>
>  Regards,
> Édouard.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 15:00, Tomas Sedovic wrote:

> Are we even sure we need to store the passwords in the first place? All
> this encryption talk seems very premature to me.

How are you going to redeploy without them?
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Veiga, Anthony
This would break IPv6.  The gateway address, according to RFC 4861[1] Section 
4.2 regarding Router Advertisements: "Source Address MUST be the link-local 
address assigned to the interface from which this message is sent".  This means 
that if you configure a subnet with a Globally Unique Address scope, the 
gateway by definition cannot be in the configured subnet.  Please don't force 
this option, as it will break work going on in the Neutron IPv6 sub-team.
-Anthony

[1] http://tools.ietf.org/html/rfc4861

Hi,

Neutron permits to set a gateway IP outside of the subnet cidr by default. And, 
thanks to the garyk's patch [1], it's possible to change this default behavior 
with config flag 'force_gateway_on_subnet'.

This flag was added to keep the backward compatibility for people who need to 
set the gateway outside of the subnet.

I think this behavior does not reflect the classic usage of subnets. So I 
propose to update the default value of the flag 'force_gateway_on_subnet' to 
True.

Any thought?

[1] https://review.openstack.org/#/c/19048/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Mark McClain
 I’d like to welcome Oleg as member of the core Neutron team as he has received 
more than enough +1s and no negative votes from the other cores. 

mark

On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:

> All-
> 
> I’d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
> has been valuable contributor to Neutron by actively reviewing, working on 
> bugs, and contributing code.
> 
> Neutron cores please reply back with +1/0/-1 votes.
> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Looking back, perhaps we should remove that flag and only authorize the
admin user to be able to set the gateway IP outside of the subnet cidr (for
tricky network), like only admin user can create provider network. And
require classic users to set gatway IP inside the subnet cidr.

Édouard.


On Thu, Feb 20, 2014 at 3:15 PM, Édouard Thuleau  wrote:

> Hi,
>
> Neutron permits to set a gateway IP outside of the subnet cidr by default.
> And, thanks to the garyk's patch [1], it's possible to change this default
> behavior with config flag 'force_gateway_on_subnet'.
>
> This flag was added to keep the backward compatibility for people who need
> to set the gateway outside of the subnet.
>
> I think this behavior does not reflect the classic usage of subnets. So I
> propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
> Any thought?
>
> [1] https://review.openstack.org/#/c/19048/
>
> Regards,
> Édouard.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 14:47, Radomir Dopieralski wrote:
> On 20/02/14 14:10, Jiří Stránský wrote:
>> On 20.2.2014 12:18, Radomir Dopieralski wrote:
> 
>>> Thinking about it some more, all the uses of the passwords come as a
>>> result of an action initiated by the user either by tuskar-ui, or by
>>> the tuskar command-line client. So maybe we could put the key in their
>>> configuration and send it with the request to (re)deploy. Tuskar-API
>>> would still need to keep it for the duration of deployment (to register
>>> the services at the end), but that's it.
>>
>> This would be possible, but it would damage the user experience quite a
>> bit. Afaik other deployment tools solve password storage the same way we
>> do now.
> 
> I don't think it would damage the user experience so much. All you need
> is an additional configuration option in Tuskar-UI and Tuskar-client,
> the encryption key.
> 
> That key would be used to encrypt the passwords when they are first sent
> to Tuskar-API, and also added to the (re)deployment calls.


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.

> 
> This way, if the database leaks due to a security hole in MySQL or bad
> engineering practices administering the database, the passwords are
> still inaccessible. To get them, the attacker would need to get
> *both* the database and the config files from host on which Tuskar-UI runs.
> 
> With the tuskar-client it's a little bit more obnoxious, because you
> would need to configure it on every host from which you want to use it,
> but you already need to do some configuration to point it at the
> tuskar-api and authenticate it, so it's not so bad.
> 
> I agree that this complicates the whole process a little, and adds
> another potential failure point though.
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Matt Riedemann



On 2/20/2014 7:22 AM, Sean Dague wrote:

I agree that we shouldn't be rushing something that's not ready, but I
guess it raises kind of a meta issue.

When we started this journey this was because v2 has a ton of warts, is
completely wonky on the code internals, which leads to plenty of bugs.
v3 was both a surface clean up, but it was also a massive internals
clean up. I think comparing servers.py:create is a good look at the
differences:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L768
- v2

vs.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L415
- v3

v3 was small on user surface changes for a reason, because the idea was
that it would be a quick cut over, the migration pain would be minimal,
and v2 could be dropped relatively quickly (2 cycles).

However if the new thinking is that v2 is going to be around for a
long time then I think it raises questions about this whole approach.
Because dual maintenance is bad. We see this today where stable/* trees
end up broken in CI for weeks because no one is working on it.

We're also duplicating a lot of test and review energy in having 2 API
stacks. Even before v3 has come out of experimental it's consumed a huge
amount of review resource on both the Nova and Tempest sides to get it
to it's current state.

So my feeling is that in order to get more energy and focus on the API,
we need some kind of game plan to get us to a single API version, with a
single data payload in L (or on the outside, M). If the decision is v2
must be in both those releases (and possibly beyond), then it seems like
asking other hard questions.

* why do a v3 at all? instead do we figure out a way to be able to
evolve v2 in a backwards compatible way.
* if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
that working around all that code isn't a velocity inhibitor in the
cleanups required in v2? Because some of the crazy hacks that exist to
make XML structures work for the json in v2 is kind of special.


I also have something on the nova meeting agenda today about how some 
things should be handled in the V2 API now that we know it's going to be 
around for awhile and that we're working towards a more transparent 
integration with Neutron, since we have some bugs on that subject with 
differing viewpoints on how to handle them/what's supported:


https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting



This big bang approach to API development may just have run it's course,
and no longer be a useful development model. Which is good to find out.
Would have been nice to find out earlier... but not all lessons are easy
or cheap. :)

-Sean

On 02/19/2014 12:36 PM, Russell Bryant wrote:

Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following question:
"Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.

There are a number of reasons for this:

1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
last week made me come to the realization that v2 won't be going away
*any* time soon.  In some cases, users have long term API support
expectations (perhaps based on experience with EC2).  In the best case,
we have to get all of the SDKs updated to the new API, and then get to
the point where everyone is using a new enough version of all of these
SDKs to use the new API.  I don't think that's going to be quick.

We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the case,
we haven't done our job.

2) There's actually quite a bit still left on the existing v3 todo list.
  We have some notes here:

https://etherpad.openstack.org/p/NovaV3APIDoneCriteria

One thing is nova-network support.  Since nova-network is still not
deprecated, we certainly can't deprecate the v2 API without nova-network
support in v3.  We removed it from v3 assuming nova-network would be
deprecated in time.

Another issue is that we discussed the tasks API as the big new API
feature we would include in v3.  Unfortunately, it's not going to be
complete for Icehouse.  It's possible we may have some initial parts
merged, but it's much smaller scope than what we originally envisioned.
  Without this, I honestly worry that there's not quite enough compelling
functionality yet to encourage a lot of people to migrate.

3) v3 has taken a lot more time and a lot more effort than anyone
thought.  This makes it even more important that we're not going to need
a v4 any time soon.  Due to various things still not quite wrapped up,
I'm just not confident enough that what we have is something we all feel
is Nova's API of the future.


Let's all take some time to reflect on what has happened with

Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Tzu-Mainn Chen
Multiple flavors, but a single flavor per role, correct?

Mainn

- Original Message -
> I think we still are going to multiple flavors for I, e.g.:
> https://review.openstack.org/#/c/74762/
> On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> > 
> > On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > > Hi.
> > >
> > > While implementing CRUD operations for node profiles in Tuskar (which
> > > are essentially Nova flavors renamed) I encountered editing of flavors
> > > and I have some doubts about it.
> > >
> > > Editing of nova flavors in Horizon is implemented as
> > > deleting-then-creating with a _new_ flavor ID.
> > > For us it essentially means that all links to flavor/profile (e.g. from
> > > overcloud role) will become broken. We had the following proposals:
> > > - Update links automatically after editing by e.g. fetching all
> > > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > > concurrent editing of either node profiles or overcloud roles.
> > >Even worse, are we sure that user really wants overcloud roles to be
> > > updated?
> > 
> > This is a big question. Editing has always been a complicated concept in
> > Tuskar. How soon do you want the effects of the edit to be made live?
> > Should it only apply to future creations or should it be applied to
> > anything running off the old configuration? What's the policy on how to
> > apply that (canary v. the-other-one-i-cant-remember-the-name-for v.
> > something else)?
> > 
> > > - The same as previous but with confirmation from user. Also risk of
> > > race conditions.
> > > - Do not update links. User may be confused: operation called "edit"
> > > should not delete anything, nor is it supposed to invalidate links. One
> > > of the ideas was to show also deleted flavors/profiles in a separate
> > > table.
> > > - Implement clone operation instead of editing. Shows user a creation
> > > form with data prefilled from original profile. Original profile will
> > > stay and should be deleted manually. All links also have to be updated
> > > manually.
> > > - Do not implement editing, only creating and deleting (that's what I
> > > did for now in https://review.openstack.org/#/c/73576/ ).
> > 
> > I'm +1 on not implementing editing. It's why we wanted to standardize on
> > a single flavor for Icehouse in the first place, the use cases around
> > editing or multiple flavors are very complicated.
> > 
> > > Any ideas on what to do?
> > >
> > > Thanks in advance,
> > > Dmitry Tantsur
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Hi,

Neutron permits to set a gateway IP outside of the subnet cidr by default.
And, thanks to the garyk's patch [1], it's possible to change this default
behavior with config flag 'force_gateway_on_subnet'.

This flag was added to keep the backward compatibility for people who need
to set the gateway outside of the subnet.

I think this behavior does not reflect the classic usage of subnets. So I
propose to update the default value of the flag 'force_gateway_on_subnet'
to True.

Any thought?

[1] https://review.openstack.org/#/c/19048/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] when icehouse will be frozen

2014-02-20 Thread Miguel Angel Ajo

Ok

My previous answer was actually about the Feature proposal freeze
which happened two days ago.

Cheers,
Miguel Ángel.

On 02/20/2014 11:27 AM, Thierry Carrez wrote:

马煜 wrote:

who know when to freezy icehouse version ?

my bp on ml2 driver has been approved, code is under review,
but I have some trouble to deploy third-party ci on which tempest test run.

Feature freeze is on March 4th [1], so featureful code shall be proposed
*and* merged by then. I suspect Neutron core won't approve it until the
3rd party CI testing is in order, though, so if you can't get it to work
by then it may have to live out of the tree for the Icehouse release.

Neutron drivers shall be able to give you more precisions.

[1] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] [horizon] Reverse order of settings modules inclusion

2014-02-20 Thread Timur Sufiev
Hello!

In Murano's dashboard we have around 20 parameters that should be changed
or added to DJANGO_SETTINGS_MODULE. Currently all these parameters are
embedded into openstack_dashboard.settings during install with sed and
tools alike (and removed during uninstall).

Recently more clean and elegant way was devised [1]: not to insert code
into openstack_dashboard.settings, but define all Murano-specific settings
in its own config file which in turn imports the contents of
openstack_dashboard.settings. That also requires to change
DJANGO_SETTINGS_MODULE environment variable from
'openstack_dashboard.settings' to 'muranodashboard.settings' in django.wsgi
file which is referenced in apache config as the entry point for
Django-served site (we do not touch original openstack_dashboard wsgi-file,
but edit apache config to point to our own muranodashboard/wsgi/django.wsgi
file with appropriate environment variable).

While this approach has obvious advantages:
* Murano-specific parameters are clearly separated from common
openstack_dashboard parameters;
* moreover, customizable Murano parameters inside
/etc/murano/murano-dashboard/settings-{prologue,epilogue}.py files are
separated from constant muranodashboard settings somewhere in /usr/,
it also reverses in some sense relation between openstack_dashboard and
muranodashboard: now muranodashboard.settings file is the main entry point
for the whole openstack_dashboard Django application.

As to me, it doesn't pose a serious drawback, because openstack_dashboard
is still a main Django application which uses Murano as one of its
dashboards, only the order of settings inclusion is reversed. But after
investigating and implementing this scheme I might be a bit biased :)... So
I'd like to hear your opinion, whether this way of augmenting
openstack_dashboard settings is viable or not?

[1] https://review.openstack.org/#/c/68125/

-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Matt Riedemann



On 2/19/2014 12:26 PM, Chris Behrens wrote:

+1. I'd like to leave it experimental as well. I think the task work is 
important to the future of nova-api and I'd like to make sure we're not rushing 
anything. We're going to need to live with old API versions for a long time, so 
it's important that we get it right. I'm also not convinced there's a 
compelling enough reason for one to move to v3 as it is. Extension versioning 
is important, but I'm not sure it can't be backported to v2 in the meantime.


Thinking about what would differentiate V3, tasks is the big one but the 
common request ID [1] is something that could be a nice carrot for 
getting people to move eventually.


[1] https://blueprints.launchpad.net/nova/+spec/cross-service-request-id



- Chris


On Feb 19, 2014, at 9:36 AM, Russell Bryant  wrote:

Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following question:
"Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.

There are a number of reasons for this:

1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
last week made me come to the realization that v2 won't be going away
*any* time soon.  In some cases, users have long term API support
expectations (perhaps based on experience with EC2).  In the best case,
we have to get all of the SDKs updated to the new API, and then get to
the point where everyone is using a new enough version of all of these
SDKs to use the new API.  I don't think that's going to be quick.

We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the case,
we haven't done our job.

2) There's actually quite a bit still left on the existing v3 todo list.
We have some notes here:

https://etherpad.openstack.org/p/NovaV3APIDoneCriteria

One thing is nova-network support.  Since nova-network is still not
deprecated, we certainly can't deprecate the v2 API without nova-network
support in v3.  We removed it from v3 assuming nova-network would be
deprecated in time.

Another issue is that we discussed the tasks API as the big new API
feature we would include in v3.  Unfortunately, it's not going to be
complete for Icehouse.  It's possible we may have some initial parts
merged, but it's much smaller scope than what we originally envisioned.
Without this, I honestly worry that there's not quite enough compelling
functionality yet to encourage a lot of people to migrate.

3) v3 has taken a lot more time and a lot more effort than anyone
thought.  This makes it even more important that we're not going to need
a v4 any time soon.  Due to various things still not quite wrapped up,
I'm just not confident enough that what we have is something we all feel
is Nova's API of the future.


Let's all take some time to reflect on what has happened with v3 so far
and what it means for how we should move forward.  We can regroup for Juno.

Finally, I would like to thank everyone who has helped with the effort
so far.  Many hours have been put in to code and reviews for this.  I
would like to specifically thank Christopher Yeoh for his work here.
Chris has done an *enormous* amount of work on this and deserves credit
for it.  He has taken on a task much bigger than anyone anticipated.
Thanks, Chris!

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
I think we still are going to multiple flavors for I, e.g.:
https://review.openstack.org/#/c/74762/
On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> 
> On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > Hi.
> >
> > While implementing CRUD operations for node profiles in Tuskar (which
> > are essentially Nova flavors renamed) I encountered editing of flavors
> > and I have some doubts about it.
> >
> > Editing of nova flavors in Horizon is implemented as
> > deleting-then-creating with a _new_ flavor ID.
> > For us it essentially means that all links to flavor/profile (e.g. from
> > overcloud role) will become broken. We had the following proposals:
> > - Update links automatically after editing by e.g. fetching all
> > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > concurrent editing of either node profiles or overcloud roles.
> >Even worse, are we sure that user really wants overcloud roles to be
> > updated?
> 
> This is a big question. Editing has always been a complicated concept in 
> Tuskar. How soon do you want the effects of the edit to be made live? 
> Should it only apply to future creations or should it be applied to 
> anything running off the old configuration? What's the policy on how to 
> apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
> something else)?
> 
> > - The same as previous but with confirmation from user. Also risk of
> > race conditions.
> > - Do not update links. User may be confused: operation called "edit"
> > should not delete anything, nor is it supposed to invalidate links. One
> > of the ideas was to show also deleted flavors/profiles in a separate
> > table.
> > - Implement clone operation instead of editing. Shows user a creation
> > form with data prefilled from original profile. Original profile will
> > stay and should be deleted manually. All links also have to be updated
> > manually.
> > - Do not implement editing, only creating and deleting (that's what I
> > did for now in https://review.openstack.org/#/c/73576/ ).
> 
> I'm +1 on not implementing editing. It's why we wanted to standardize on 
> a single flavor for Icehouse in the first place, the use cases around 
> editing or multiple flavors are very complicated.
> 
> > Any ideas on what to do?
> >
> > Thanks in advance,
> > Dmitry Tantsur
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jay Dobies

Just to throw this out there, is this something we need for Icehouse?

Yes, I fully acknowledge that it's an ugly security hole. But what's our 
story for how stable/clean Tuskar will be for Icehouse? I don't believe 
the intention is for people to use this in a production environment yet, 
so it will be people trying things out in a test environment. I don't 
think it's absurd to document that we haven't finished hardening the 
security yet and to not use super-sensitive passwords.


If there was a simple answer, I likely wouldn't even suggest this. But 
there's some real design and thought that needs to take place and, 
frankly, we're running out of time. Keeping in mind the intended usage 
of the Icehouse release of Tuskar, it might make sense to shelve this 
for now and file a big fat bug that we address in Juno.


On 02/20/2014 08:47 AM, Radomir Dopieralski wrote:

On 20/02/14 14:10, Jiří Stránský wrote:

On 20.2.2014 12:18, Radomir Dopieralski wrote:



Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a
bit. Afaik other deployment tools solve password storage the same way we
do now.


I don't think it would damage the user experience so much. All you need
is an additional configuration option in Tuskar-UI and Tuskar-client,
the encryption key.

That key would be used to encrypt the passwords when they are first sent
to Tuskar-API, and also added to the (re)deployment calls.

This way, if the database leaks due to a security hole in MySQL or bad
engineering practices administering the database, the passwords are
still inaccessible. To get them, the attacker would need to get
*both* the database and the config files from host on which Tuskar-UI runs.

With the tuskar-client it's a little bit more obnoxious, because you
would need to configure it on every host from which you want to use it,
but you already need to do some configuration to point it at the
tuskar-api and authenticate it, so it's not so bad.

I agree that this complicates the whole process a little, and adds
another potential failure point though.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Alex Xu

On 2014年02月20日 10:44, Christopher Yeoh wrote:

On Wed, 19 Feb 2014 12:36:46 -0500
Russell Bryant  wrote:


Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following
question: "Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.


Although I'm very eager to get the V3 API released, I do agree with you.
As you have said we will be living with both the V2 and V3 APIs for a
very long time. And at this point there would be simply too many last
minute changes to the V3 API for us to be confident that we have it
right "enough" to release as a stable API.


+1


We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the
case, we haven't done our job.

+1


Let's all take some time to reflect on what has happened with v3 so
far and what it means for how we should move forward.  We can regroup
for Juno.

Finally, I would like to thank everyone who has helped with the effort
so far.  Many hours have been put in to code and reviews for this.  I
would like to specifically thank Christopher Yeoh for his work here.
Chris has done an *enormous* amount of work on this and deserves
credit for it.  He has taken on a task much bigger than anyone
anticipated. Thanks, Chris!

Thanks Russell, that's much appreciated. I'm also very thankful to
everyone who has worked on the V3 API either through patches and/or
reviews, especially Alex Xu and Ivan Zhu who have done a lot of work on
it in Havana and Icehouse.


Thank you, Chris, hope we get a great v3 api.



Chris.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Jay Dobies



On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:

Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I encountered editing of flavors
and I have some doubts about it.

Editing of nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID.
For us it essentially means that all links to flavor/profile (e.g. from
overcloud role) will become broken. We had the following proposals:
- Update links automatically after editing by e.g. fetching all
overcloud roles and fixing flavor ID. Poses risk of race conditions with
concurrent editing of either node profiles or overcloud roles.
   Even worse, are we sure that user really wants overcloud roles to be
updated?


This is a big question. Editing has always been a complicated concept in 
Tuskar. How soon do you want the effects of the edit to be made live? 
Should it only apply to future creations or should it be applied to 
anything running off the old configuration? What's the policy on how to 
apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
something else)?



- The same as previous but with confirmation from user. Also risk of
race conditions.
- Do not update links. User may be confused: operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to show also deleted flavors/profiles in a separate
table.
- Implement clone operation instead of editing. Shows user a creation
form with data prefilled from original profile. Original profile will
stay and should be deleted manually. All links also have to be updated
manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).


I'm +1 on not implementing editing. It's why we wanted to standardize on 
a single flavor for Icehouse in the first place, the use cases around 
editing or multiple flavors are very complicated.



Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 14:10, Jiří Stránský wrote:
> On 20.2.2014 12:18, Radomir Dopieralski wrote:

>> Thinking about it some more, all the uses of the passwords come as a
>> result of an action initiated by the user either by tuskar-ui, or by
>> the tuskar command-line client. So maybe we could put the key in their
>> configuration and send it with the request to (re)deploy. Tuskar-API
>> would still need to keep it for the duration of deployment (to register
>> the services at the end), but that's it.
> 
> This would be possible, but it would damage the user experience quite a
> bit. Afaik other deployment tools solve password storage the same way we
> do now.

I don't think it would damage the user experience so much. All you need
is an additional configuration option in Tuskar-UI and Tuskar-client,
the encryption key.

That key would be used to encrypt the passwords when they are first sent
to Tuskar-API, and also added to the (re)deployment calls.

This way, if the database leaks due to a security hole in MySQL or bad
engineering practices administering the database, the passwords are
still inaccessible. To get them, the attacker would need to get
*both* the database and the config files from host on which Tuskar-UI runs.

With the tuskar-client it's a little bit more obnoxious, because you
would need to configure it on every host from which you want to use it,
but you already need to do some configuration to point it at the
tuskar-api and authenticate it, so it's not so bad.

I agree that this complicates the whole process a little, and adds
another potential failure point though.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 14:10, Jiří Stránský wrote:
> On 20.2.2014 12:18, Radomir Dopieralski wrote:
>> On 20/02/14 12:02, Radomir Dopieralski wrote:
>>> Anybody who gets access to Tuskar-API gets the
>>> passwords, whether we encrypt them or not. Anybody who doesn't have
>>> access to Tuskar-API doesn't get the passwords, whether we encrypt
>>> them or not.
> 
> Yeah, i think so too.
> 
>> Thinking about it some more, all the uses of the passwords come as a
>> result of an action initiated by the user either by tuskar-ui, or by
>> the tuskar command-line client. So maybe we could put the key in their
>> configuration and send it with the request to (re)deploy. Tuskar-API
>> would still need to keep it for the duration of deployment (to register
>> the services at the end), but that's it.
> 
> This would be possible, but it would damage the user experience quite a
> bit. Afaik other deployment tools solve password storage the same way we
> do now.
> 
> Imho keeping the passwords the way we do now is not among the biggest
> OpenStack security risks. I think we can make the assumption that
> undercloud will not be publicly accessible, so a potential external
> attacker would have to first gain network access to the undercloud
> machines and only then they can start trying to exploit Tuskar API to
> hand out the passwords. Overcloud services (which are meant to be
> publicly accessible) have their service passwords accessible in
> plaintext, e.g. in nova.conf you'll find nova password and neutron
> password -- i think this is comparatively greater security risk.

This to me reads as: we should fix the OpenStack services not to store
passwords in their service.conf, not making the situation worse by
storing them in even more places.

> 
> So if we can come up with a solution where the benefits outweigh the
> drawbacks and it makes sense in broader view at OpenStack security, we
> should go for it, but so far i'm not convinced there is such a solution.
> Just my 2 cents :)
> 
> Jirka
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-20 Thread John Dennis
On 02/19/2014 08:58 PM, Adam Young wrote:
>> Can you give more detail here? I can see arguments for both ways of
>> doing this but continuing to use ids for ownership is an easier
>> choice. Here is my thinking:
>>
>> 1. all of the projects use ids for ownership currently so it is a
>> smaller change
> That does not change.  It is the hierarchy that is labeled by name.
> 
>> 2. renaming a project in keystone would not invalidate the ownership
>> hierarchy (Note that moving a project around would invalidate the
>> hierarchy in both cases)
>>
> Renaming would not change anything.
> 
> I would say the rule should be this:  Ids are basically uuids, and are
> immutable.  Names a mutable.  Each project has a parent Id.  A project
> can either be referenced directly by ID, oir hierarchically by name.  In
> addition, you can navigate to a project by traversing the set of ids,
> but you need to know where you are going.  THus the array
> 
> ['abcd1234',fedd3213','3e3e3e3e'] would be a way to find a project, but
> the project ID for the lead node would still be just '3e3e3e3e'.

The analogy I see here is the unix file system which is organized into a
tree structure by inodes, each inode has a name (technically it can have
more than one name). But the fundamental point is the structure is
formed by id's (e.g. inodes), the path name of a file is transitory and
depends only on what name is bound to the id at the moment. It's a very
rich and powerful abstraction. The same concept is used in many database
schemas, an object has a primary key which is numeric and a name. You
can change the name easily without affecting any references to the id.



-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-02-20 Thread Tomas Sedovic
On 19/02/14 08:48, Clint Byrum wrote:
> Since picking up Heat and trying to think about how to express clusters
> of things, I've been troubled by how poorly the CFN language supports
> using lists. There has always been the Fn::Select function for
> dereferencing arrays and maps, and recently we added a nice enhancement
> to HOT to allow referencing these directly in get_attr and get_param.
> 
> However, this does not help us when we want to do something with all of
> the members of a list.
> 
> In many applications I suspect the template authors will want to do what
> we want to do now in TripleO. We have a list of identical servers and
> we'd like to fetch the same attribute from them all, join it with other
> attributes, and return that as a string.
> 
> The specific case is that we need to have all of the hosts in a cluster
> of machines addressable in /etc/hosts (please, Designate, save us,
> eventually. ;). The way to do this if we had just explicit resources
> named NovaCompute0, NovaCompute1, would be:
> 
>   str_join:
> - "\n"
> - - str_join:
> - ' '
> - get_attr:
>   - NovaCompute0
>   - networks.ctlplane.0
> - get_attr:
>   - NovaCompute0
>   - name
>   - str_join:
> - ' '
> - get_attr:
>   - NovaCompute1
>   - networks.ctplane.0
> - get_attr:
>   - NovaCompute1
>   - name
> 
> Now, what I'd really like to do is this:
> 
> map:
>   - str_join:
> - "\n"
> - - str_join:
>   - ' '
>   - get_attr:
> - "$1"
> - networks.ctlplane.0
>   - get_attr:
> - "$1"
> - name
>   - - NovaCompute0
> - NovaCompute1
> 
> This would be helpful for the instances of resource groups too, as we
> can make sure they return a list. The above then becomes:
> 
> 
> map:
>   - str_join:
> - "\n"
> - - str_join:
>   - ' '
>   - get_attr:
> - "$1"
> - networks.ctlplane.0
>   - get_attr:
> - "$1"
> - name
>   - get_attr:
>   - NovaComputeGroup
>   - member_resources
> 
> Thoughts on this idea? I will throw together an implementation soon but
> wanted to get this idea out there into the hive mind ASAP.

I think it's missing lambdas and recursion ;-).

Joking aside, I like it. As long as we don't actually turn this into
anything remotely resembling turing-completeness, having useful data
processing primitives is good.

Now onto the bikeshed: could we denote the arguments with something
that's more obviously looking like a Heat specific notation and not a
user-entered string?

E.g. replace "$1" with {Arg: 1}

It's a bit uglier but more obvious to spot what's going on.

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Sean Dague
I agree that we shouldn't be rushing something that's not ready, but I
guess it raises kind of a meta issue.

When we started this journey this was because v2 has a ton of warts, is
completely wonky on the code internals, which leads to plenty of bugs.
v3 was both a surface clean up, but it was also a massive internals
clean up. I think comparing servers.py:create is a good look at the
differences:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L768
- v2

vs.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L415
- v3

v3 was small on user surface changes for a reason, because the idea was
that it would be a quick cut over, the migration pain would be minimal,
and v2 could be dropped relatively quickly (2 cycles).

However if the new thinking is that v2 is going to be around for a
long time then I think it raises questions about this whole approach.
Because dual maintenance is bad. We see this today where stable/* trees
end up broken in CI for weeks because no one is working on it.

We're also duplicating a lot of test and review energy in having 2 API
stacks. Even before v3 has come out of experimental it's consumed a huge
amount of review resource on both the Nova and Tempest sides to get it
to it's current state.

So my feeling is that in order to get more energy and focus on the API,
we need some kind of game plan to get us to a single API version, with a
single data payload in L (or on the outside, M). If the decision is v2
must be in both those releases (and possibly beyond), then it seems like
asking other hard questions.

* why do a v3 at all? instead do we figure out a way to be able to
evolve v2 in a backwards compatible way.
* if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
that working around all that code isn't a velocity inhibitor in the
cleanups required in v2? Because some of the crazy hacks that exist to
make XML structures work for the json in v2 is kind of special.

This big bang approach to API development may just have run it's course,
and no longer be a useful development model. Which is good to find out.
Would have been nice to find out earlier... but not all lessons are easy
or cheap. :)

-Sean

On 02/19/2014 12:36 PM, Russell Bryant wrote:
> Greetings,
> 
> The v3 API effort has been going for a few release cycles now.  As we
> approach the Icehouse release, we are faced with the following question:
> "Is it time to mark v3 stable?"
> 
> My opinion is that I think we need to leave v3 marked as experimental
> for Icehouse.
> 
> There are a number of reasons for this:
> 
> 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
> last week made me come to the realization that v2 won't be going away
> *any* time soon.  In some cases, users have long term API support
> expectations (perhaps based on experience with EC2).  In the best case,
> we have to get all of the SDKs updated to the new API, and then get to
> the point where everyone is using a new enough version of all of these
> SDKs to use the new API.  I don't think that's going to be quick.
> 
> We really don't want to be in a situation where we're having to force
> any sort of migration to a new API.  The new API should be compelling
> enough that everyone *wants* to migrate to it.  If that's not the case,
> we haven't done our job.
> 
> 2) There's actually quite a bit still left on the existing v3 todo list.
>  We have some notes here:
> 
> https://etherpad.openstack.org/p/NovaV3APIDoneCriteria
> 
> One thing is nova-network support.  Since nova-network is still not
> deprecated, we certainly can't deprecate the v2 API without nova-network
> support in v3.  We removed it from v3 assuming nova-network would be
> deprecated in time.
> 
> Another issue is that we discussed the tasks API as the big new API
> feature we would include in v3.  Unfortunately, it's not going to be
> complete for Icehouse.  It's possible we may have some initial parts
> merged, but it's much smaller scope than what we originally envisioned.
>  Without this, I honestly worry that there's not quite enough compelling
> functionality yet to encourage a lot of people to migrate.
> 
> 3) v3 has taken a lot more time and a lot more effort than anyone
> thought.  This makes it even more important that we're not going to need
> a v4 any time soon.  Due to various things still not quite wrapped up,
> I'm just not confident enough that what we have is something we all feel
> is Nova's API of the future.
> 
> 
> Let's all take some time to reflect on what has happened with v3 so far
> and what it means for how we should move forward.  We can regroup for Juno.
> 
> Finally, I would like to thank everyone who has helped with the effort
> so far.  Many hours have been put in to code and reviews for this.  I
> would like to specifically thank Christopher Yeoh for his work here.
> Chris has done an *enormous* amount of work on this and deserv

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 10:12, Radomir Dopieralski wrote:
> On 19/02/14 18:29, Dougal Matthews wrote:
>> The question for me, is what passwords will we have and when do we need
>> them? Are any of the passwords required long term.
> 
> We will need whatever the Heat template needs to generate all the
> configuration files. That includes passwords for all services that are
> going to be configured, such as, for example, Swift or MySQL.


This is a one-time operation, though, isn't it? You pass those
parameters to Heat when you run stack-create. Heat and os-*-config will
handle the rest.

> 
> I'm not sure about the exact mechanisms in Heat, but I would guess that
> we will need all the parameters, including passwords, when the templates
> are re-generated. We could probably generate new passwords every time,
> though.

What do you mean by regenarating the templates? Do you mean when we want
to update the deployment (e.g. using heat stack-update)?

> 
>> If we do need to store passwords it becomes a somewhat thorny issue, how
>> does Tuskar know what a password is? If this is flagged up by the
>> UI/client then we are relying on the user to tell us which isn't wise.
> 
> All the template parameters that are passwords are marked in the Heat
> parameter list that we get from it as "NoEcho": "true", so we do have an
> idea about which parts are sensitive.
> 

If at all possible, we should not store any passwords or keys whatsoever.

We may have to pass them through from the user to an API (and then
promptly forget them) or possible hold onto them for a little while (in
RAM) but never persisting them anywhere.

Let's go through the specific cases where we do handle passwords and
what to do with them.

Looking at devtest, I can see two places where the user deals with
passwords:

http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html

1) in the step 10. (Deploy an overcloud) we pass the various overcloud
service passwords and keys to Heat (it's things like the Keystone Admin
Token & password, SSL key & cert, nova/heat/cinder/glance service
passwords, etc.).

I'm assuming this could include any database and AMQP passwords in the
future.

2) step 17 & 18 (Perform admin setup of your overcloud) where pass some
of the same passwords to Keystone to set up the Overcloud OpenStack
services (compute, metering, orchestration, etc.)

And that's it.

I'd love if we could eventually push the steps 17 & 18 into our Heat
templates, it's where they belong I think (please correct me if that's
wrong).

Regardless, all the passwords here are user-specified. When you install
OpenStack, you have to come up with a bunch of passwords up front and
use them to set the various services up.

Now Tuskar serves as an intermediary. It should ask for these passwords
and then perform the steps you'd otherwise do manually and then *forget*
the passwords again.

Since we're using the passwords in 2 steps (10 and 17), we can't just
pass them to Heat and immediately forget them. But we can pass them in
step 10, wait for it to finish, pass them to step 17 and forget them then.

So here's the workflow:

1. The user wants to deploy the overcloud through the UI
2. They're asked to fill in all the necessary information (including the
passwords) -- or we autogenerate it which doesn't change anything
3. Tuskar UI sends a request to Tuskar API including the passwords
3.1. Tuskar UI forgets the passwords (this isn't an explicit action, we
don't store them anywhere)
4. Tuskar API fetches/builds the correct Heat template
5. Tuskar API calls heat stack-create and passes in all the params
(including passwords)
6. Tuskar API waits for heat stack-create to finish
7. Tuskar API issues a bunch of keystone calls to set up the services
(with the specified passwords)
8. Tuskar API forgets the passwords

The asynchronous nature of Heat stack-create may make this a bit more
difficult but the point should still stand -- we should not persist the
passwords. We may have to store them somewhere for a short duration, but
not throughout the entire lifecycle of the overcloud.

I'm not sure if we have to pass the unchanged parameters to Heat again
during stack-update (they may or may not be stored on the metadata
server). If we do, I'd vote we ask the user to re-enter them instead of
storing them somewhere.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jiří Stránský

On 20.2.2014 12:18, Radomir Dopieralski wrote:

On 20/02/14 12:02, Radomir Dopieralski wrote:

Anybody who gets access to Tuskar-API gets the
passwords, whether we encrypt them or not. Anybody who doesn't have
access to Tuskar-API doesn't get the passwords, whether we encrypt
them or not.


Yeah, i think so too.


Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a 
bit. Afaik other deployment tools solve password storage the same way we 
do now.


Imho keeping the passwords the way we do now is not among the biggest 
OpenStack security risks. I think we can make the assumption that 
undercloud will not be publicly accessible, so a potential external 
attacker would have to first gain network access to the undercloud 
machines and only then they can start trying to exploit Tuskar API to 
hand out the passwords. Overcloud services (which are meant to be 
publicly accessible) have their service passwords accessible in 
plaintext, e.g. in nova.conf you'll find nova password and neutron 
password -- i think this is comparatively greater security risk.


So if we can come up with a solution where the benefits outweigh the 
drawbacks and it makes sense in broader view at OpenStack security, we 
should go for it, but so far i'm not convinced there is such a solution. 
Just my 2 cents :)


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-20 Thread victor stinner
Hi,

> On 19/02/14 10:09 +0100, Julien Danjou wrote:
> >On Wed, Feb 19 2014, Angus Salkeld wrote:
> >
> >> 2) use tulip and give up python 2
> >
> >+ use trollius to have Python 2 support.
> >
> >  https://pypi.python.org/pypi/trollius
> 
> So I have been giving this a go.

FYI I'm the author of Trollius project.

> We use pecan and wsme (like ceilometer), I wanted to use
> a httpserver library in place of wsgiref.server so had a
> look at a couple and can't use them as they all have "yield from"
> all over the place (i.e. python 3 only). The quesion I have
> is:
> How useful is trollius if we can't use other thirdparty libraries
> written for asyncio?
> https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/server.py#L171
> 
> Maybe I am missing something?

(Tulip and Trollius unit tests use wsgiref.simple_server module of the standard 
library. It works but you said that you don't want to use it.)

Honestly, I have no answer to your question right now ("How useful is trollius 
..."). asyncio developers are working on fixing last bugs in asyncio (Trollius 
is a fork, I merge regulary updates from Tulip into Trollius) and adding some 
late features before the Python 3.4 release. This Python release will be 
somehow the "version 1.0" of asyncio and will freeze the API. Right now, I'm 
working on a proof-on-concept of eventlet hub using asyncio event loop. So it 
may be possible to use eventlet and asyncio APIs are the same time. And maybe 
slowly replace eventlet with asyncio, or at least use asyncio in new code.

I asked your question on Tulip mailing list to see how a single code base could 
support Tulip (yield from) and Trollius (yield). At least check if it's 
technically possible.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Thanks Alexander for the detail explanation, really very helpful!

What I meant for a windows cluster is actually a windows application, such
as a WebSphere cluster or a hadoop windows cluster.

Seems I can use Cloudbase Init to do the post-deploy actions on windows,
but I cannot do some scale up or scale down for this cluster as currently
there is no cfn-tools for windows, is it correct?

Thanks,

Jay



2014-02-20 18:24 GMT+08:00 Alexander Tivelkov :

> Hi Jay,
>
> Windows support in Heat is being developed, but is not complete yet,
> afaik. You may already use Cloudbase Init to do the post-deploy actions on
> windows - check [1] for the details.
>
> Meanwhile, running a windows cluster is a much more complicated task then
> just deploying a number of windows instances (if I understand you correctly
> and you speak about Microsoft Failover Cluster, see [2]): to build it in
> the cloud you will have to execute quite a complex workflow after the nodes
> are actually deployed, which is not possible with Heat (at least for now).
>
> Murano project ([3]) does this on top of Heat, as it was initially
> designed as Windows Data Center as a Service, so I suggest you too take a
> look at it. You may also check this video ([4]) which demonstrates how
> Murano is used to deploy a failover cluster of Windows 2012 with a
> clustered MS SQL server on top of it.
>
>
> [1] http://wiki.cloudbase.it/heat-windows
> [2] http://technet.microsoft.com/library/hh831579
> [3] https://wiki.openstack.org/Murano
> [4] http://www.youtube.com/watch?v=Y_CmrZfKy18
>
> --
> Regards,
> Alexander Tivelkov
>
>
> On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau  wrote:
>
>>
>> Hi,
>>
>> Does HEAT support provisioning windows cluster?  If so, can I also use
>> user-data to do some post install work for windows instance? Is there any
>> example template for this?
>>
>> Thanks,
>>
>> Jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Do you think tanent_id should be verified

2014-02-20 Thread Dong Liu

Dolph, thanks for the information you provided.

Now I have two question:
1. Will neutron handle this event notification in the future?
2. I also wish neutron could verify that tenant_id is existent.

thanks

于 2014-02-20 4:33, Dolph Mathews 写道:

There's an open bug [1] against nova & neutron to handle notifications
[2] from keystone about such events. I'd love to see that happen during
Juno!

[1] https://bugs.launchpad.net/nova/+bug/967832
[2] http://docs.openstack.org/developer/keystone/event_notifications.html

On Mon, Feb 17, 2014 at 6:35 AM, Yongsheng Gong mailto:gong...@unitedstack.com>> wrote:

It is not easy to enhance it. If we check the tenant_id on creation,
if should we  also to do some job when keystone delete tenant?


On Mon, Feb 17, 2014 at 6:41 AM, Dolph Mathews
mailto:dolph.math...@gmail.com>> wrote:

keystoneclient.middlware.auth_token passes a project ID (and
name, for convenience) to the underlying application through the
WSGI environment, and already ensures that this value can not be
manipulated by the end user.

Project ID's (redundantly) passed through other means, such as
URLs, are up to the service to independently verify against
keystone (or equivalently, against the WSGI environment), but
can be directly manipulated by the end user if no checks are in
place.

Without auth_token in place to manage multitenant authorization,
I'd still expect services to blindly trust the values provided
in the environment (useful for both debugging the service and
alternative deployment architectures).

On Sun, Feb 16, 2014 at 8:52 AM, Dong Liu mailto:willowd...@gmail.com>> wrote:

Hi stackers:

I found that when creating network subnet and other
resources, the attribute tenant_id
can be set by admin tenant. But we did not verify that if
the tanent_id is real in keystone.

I know that we could use neutron without keystone, but do
you think tenant_id should
be verified when we using neutron with keystone.

thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift]stable/havana Jenkins failed

2014-02-20 Thread Dong Liu
Thank you Alan and Pete, I will wait for devstack-gate core approve 
patch https://review.openstack.org/#/c/74451/


2014-02-20 17:14, Alan Pevec :

I notice that we have changed "from swiftclient import Connection,
HTTPException" to "from swiftclient import Connection, RequestException"
at 2014-02-14, I don't know is it relational.

I have reported a bug for this:
https://bugs.launchpad.net/swift/+bug/1281886


Bug is a duplicate of https://bugs.launchpad.net/openstack-ci/+bug/1281540
and has been also discussed in the other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027476.html

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I encountered editing of flavors
and I have some doubts about it.

Editing of nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID.
For us it essentially means that all links to flavor/profile (e.g. from
overcloud role) will become broken. We had the following proposals:
- Update links automatically after editing by e.g. fetching all
overcloud roles and fixing flavor ID. Poses risk of race conditions with
concurrent editing of either node profiles or overcloud roles.
  Even worse, are we sure that user really wants overcloud roles to be
updated?
- The same as previous but with confirmation from user. Also risk of
race conditions.
- Do not update links. User may be confused: operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to show also deleted flavors/profiles in a separate
table.
- Implement clone operation instead of editing. Shows user a creation
form with data prefilled from original profile. Original profile will
stay and should be deleted manually. All links also have to be updated
manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).

Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >