[openstack-dev] [Horizon] Is it the time to separate horizon and openstack_dashboard?

2013-06-21 Thread Zhigang Wang
Hi,

I think horizon will be very attractive to Django developers if we
separate it from openstack_dashboard.

Is it time to do it?

I found it's pretty easy to use the horizon app in a standard Django
project and it just works.
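As a rough illustration (module paths are assumptions based on how openstack_dashboard
consumes horizon today, not a tested recipe), wiring the horizon app into a plain
Django project looks roughly like this:

# settings.py -- assumes horizon is installed as a regular Django app
INSTALLED_APPS = (
    'django.contrib.staticfiles',
    'horizon',
    # plus your own project apps
)

# urls.py -- assumes horizon exposes its URLconf as horizon.urls,
# as openstack_dashboard does today
from django.conf.urls import include, patterns, url

import horizon

urlpatterns = patterns(
    '',
    url(r'', include(horizon.urls)),
)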

Current issues:

* Some places are still tied to OpenStack.
* It pulls in all the OpenStack-related dependencies, e.g., the client
API modules, openstack_auth, etc.
* The test cases.

Thanks,

Zhigang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling a tenant still allow user token

2013-06-21 Thread Dolph Mathews
On Fri, Jun 21, 2013 at 5:25 AM, Chmouel Boudjnah wrote:

> Hello,
>
> [moving to the public mailing list since this bug is public anyway]
>
> On 3 Jun 2013, at 17:25, Dolph Mathews  wrote:
>
> Apologies for the delayed response on this. We have several related open
> bugs and I wanted to investigate them all at once, and perhaps fix them all
> in one pass.
> Disabling a tenant/project should result in existing tokens scoped to that
> tenant/project being immediately invalidated, so I think Chmouel's analysis
> is absolutely valid.
> Regarding "list_users_in_project" -- as Guang suggested, the semantics of
> that call are inherently complicated,
>
>
>
> Looking into this, it seems we already have such a function:
>
>
> https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L608
>
> Should it get fixed?
>
> so ideally we can just ask the token driver to revoke tokens with some
> context (a user OR a tenant OR a user+tenant combination). We've been going
> down that direction, but have been incredibly inconsistent in how it's
> utilized. I'd like to have a framework to consistently apply the
> consequences of disabling/deleting any entity in the system.
>
>
> Agreed, I think this should be doable if we can modify:
>
>
> https://github.com/openstack/keystone/blob/master/keystone/token/core.py#L169
>
> changing the default user_id to None
>
> As for getting the tokens for a specific project/tenant: if we are not
> using list_users_in_project, would that mean we need to parse all the
> tokens to get the metadata/extras tenant_id, or is there a more
> efficient way?
>

Currently the memcache token backend and SQL token backend each have their
own advantages, and I'd like to get the best of both worlds and use each as
intended. So, store tokens with these fields indexed appropriately in SQL
and cache them in memcache (and if memcache/etc isn't available, in-memory
in python).
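To make the "revoke by user OR tenant OR user+tenant" idea concrete, here is a
minimal sketch (illustrative only, not Keystone's actual token driver API) of a
scoped revocation that relies on indexed user_id/tenant_id columns rather than
scanning every token:

import datetime

import sqlalchemy as sa

metadata = sa.MetaData()
tokens = sa.Table(
    'token', metadata,
    sa.Column('id', sa.String(64), primary_key=True),
    sa.Column('user_id', sa.String(64), index=True),
    sa.Column('tenant_id', sa.String(64), index=True),
    sa.Column('expires', sa.DateTime()),
)


def revoke_tokens(conn, user_id=None, tenant_id=None):
    # Expire every live token matching the given scope: a user, a tenant,
    # or a user+tenant combination.
    query = tokens.update().values(expires=datetime.datetime.utcnow())
    if user_id is not None:
        query = query.where(tokens.c.user_id == user_id)
    if tenant_id is not None:
        query = query.where(tokens.c.tenant_id == tenant_id)
    conn.execute(query)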


>
> Chmouel.
>
>
> -Dolph
>
>
> On Wed, May 29, 2013 at 9:59 AM, Yee, Guang  wrote:
>
>> Users do not really belong to a project. They have access to, or
>> associated with, a project via role grant(s). Therefore, when disabling a
>> project, we should only invalidate the tokens scoped to that project. But
>> yes, you should be able to use the same code to invalidate the tokens when
>> disabling a project.
>>
>>
>>
>> https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164
>> 
>>
>>
>> We have to be careful with list_users_in_project as user can associate
>> with project with either direct role grant, or indirectly via group
>> membership and group grant.  This is going to get complicated with the
>> addition of inherited role grants.
>>
>> Guang
>>
>> From: Chmouel Boudjnah [mailto:chmo...@enovance.com]
>> Sent: Wednesday, May 29, 2013 2:23 AM
>> To: Adam Young; Dolph Mathews; Henry Nash; Joseph Heck; Yee, Guang;
>> d...@enovance.com
>> Subject: disabling a tenant still allow user token
>>
>>
>> Hi,
>>
>> Apologies for the direct email, but I will be happy to move this to
>> openstack-dev@ once we make sure it's not security-sensitive.
>>
>> I'd like to bring this bug:
>>
>> https://bugs.launchpad.net/keystone/+bug/1179955
>>
>> to your attention.
>>
>> Basically, the TL;DR is that disabling a tenant doesn't disable the tokens
>> of the users attached to it.
>>
>> We could probably do that here:
>>
>>
>> https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164
>>
>> when updating a tenant, but I need to find a way to list the users attached
>> to a tenant (without having to list all the users).
>>
>> Is not being able to list_users_in_project() something intended by
>> keystone?
>>
>> Do you see a workaround for how to delete the tokens of all users belonging
>> to a tenant?
>>
>> Let me know what you think.
>>
>> Cheers,
>> Chmouel.
>>
>
>
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Consolidate CLI Authentication

2013-06-21 Thread Dolph Mathews
That blueprint doesn't have anything to do with the linked code review, so
I'm not sure which you're asking about relative to other projects.

The goal for "Consolidate CLI Authentication" is to allow
python-keystoneclient to own the UX around authentication parameters, and
allow other clients (e.g. python-openstackclient) to benefit from that work.

The v3 auth patch exposed Identity API v3 auth features through the python
API, not python-keystoneclient's CLI. Other projects (with the exception of
horizon) will eventually benefit for free by consuming
keystoneclient.middleware.auth_token.


On Fri, Jun 21, 2013 at 7:59 AM, Ciocari, Juliano (Brazil R&D-ECL) <
juliano.cioc...@hp.com> wrote:

> Hi,
>
> I have some questions regarding the "Consolidate CLI Authentication" (
> https://etherpad.openstack.org/keystoneclient-cli-auth and
> https://review.openstack.org/#/c/21942/).
>
> It looks like the code for the keystone client is almost ready for merge.
> What are the plans for the other clients (nova, glance, etc) to use this
> code (if any)? Is there any related change expected on horizon?
>
> Thanks,
> - Juliano
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [quantum] Deadlock on quantum port-create

2013-06-21 Thread Salvatore Orlando
Hi Jay,

there are indeed downsides to this setting.
The code currently uses connection pooling in a way that each
subtransaction ends up using a distinct connection from the pool. As we have
nested transactions at multiple points in Neutron's code, this leads to a
situation where you can exhaust your pool.
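As a standalone illustration of that failure mode (plain SQLAlchemy, not Neutron
code), holding more connections than the pool allows blocks and then times out,
which is what nested subtransactions effectively do:

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# A deliberately tiny pool: two connections, no overflow, one-second wait.
engine = create_engine('sqlite:///demo.db', poolclass=QueuePool,
                       pool_size=2, max_overflow=0, pool_timeout=1)

held = [engine.connect() for _ in range(2)]   # pool is now fully checked out
try:
    engine.connect()                          # the "nested" third request
except Exception as exc:                      # sqlalchemy.exc.TimeoutError after 1s
    print('pool exhausted: %s' % exc)
finally:
    for conn in held:
        conn.close()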

This issue is already addressed by openstack.common db session management.
Neutron is moving to that too. The patch [1] is under review at the moment,
and we hope to be able to merge it soon.

Another issue [2] has been reported which leads to connection exhaustion
both at small and large scales, independently of whether db pooling is
enabled. We are retriaging this issue after we've been informed that
openstack.common db support (and reduction of db accesses from policy
engine) did not solve the issue.

Further, some plugins which drive a 3rd party backend might incur other
issues when db pooling is enabled. As DB pooling increases the level of
concurrency, it might happen that short-lived queries to the backend are
performed while another long running query is executing. This is usually
not harmful, except in cases when the short-lived query alters the portion
of the state of the backend which the long running query is retrieving.
Such events are usually observed during the initial synchronization of the
DHCP server, and have been significantly mitigated by recent improvements
in this procedure.

Regards,
Salvatore

[1] https://review.openstack.org/#/c/27265/
[2] https://bugs.launchpad.net/tripleo/+bug/1184484



On 21 June 2013 20:44, Jay Buffington  wrote:

> I'm moving a thread we had with some vmware guys to this list to make it
> public.
>
> We had a problem with quantum deadlocking when it got several requests in
> quick
> succession.  Aaron suggested we set sql_dbpool_enable = True.  We did and
> it
> seemed to resolve our issue.
>
> What are the downsides of turning on sql_dbpool_enable?  Should it be on
> by default?
>
> Thanks,
> Jay
>
>
> >> We are currently experiencing the following problem in our environment:
> >> issuing 5 'quantum port-create' commands in parallel effectively
> deadlocks quantum:
> >>
> >> $ for n in $(seq 5); do echo 'quantum --insecure port-create
> stage-net1'; done | parallel
> >> An unknown exception occurred.
> >> Request Failed: internal server error while processing your request.
> >> An unexpected error occurred in the NVP Plugin:Unable to get logical
> switches
>
> On Jun 21, 2013, at 9:36 AM, Aaron Rosen  wrote:
> > We've encountered this issue as well. I'd try enabling:
> > # Enable the use of eventlet's db_pool for MySQL. The flags
> sql_min_pool_size,
> > # sql_max_pool_size and sql_idle_timeout are relevant only if this is
> enabled.
> >
> > sql_dbpool_enable = True
> >
> > in nvp.ini to see if that helps at all. In our internal cloud we removed
> the
> > creations of the lports in nvp from the transaction. Salvatore is
> working on
> > an async back-end to the plugin that will solve this and improve the
> plugin
> > performance.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Allocation of IPs

2013-06-21 Thread Edgar Magana
Mark,

Can you point me to the BP for this feature?
I want to keep an eye on it.

Thanks,

Edgar

From:  Mark McClain 
Reply-To:  OpenStack List 
Date:  Friday, June 21, 2013 12:41 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Networking] Allocation of IPs

There will be a deployment option where you can configure the default IP
allocator.  Additionally, the allocator will be configurable at subnet
creation time.
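In other words, something along these lines (a hypothetical interface sketch, not
the blueprint's actual code), where a "no allocation" allocator simply returns
nothing and leaves addressing to the external DHCP server:

import abc


class SubnetIPAllocator(object):
    # Hypothetical per-subnet allocator interface.
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def allocate_ip(self, context, subnet, port):
        """Return an IP address for the port, or None to skip allocation."""


class NoopIPAllocator(SubnetIPAllocator):
    """Leaves the port without a fixed IP; an external DHCP server assigns one."""

    def allocate_ip(self, context, subnet, port):
        return None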

mark


On Jun 20, 2013, at 4:51 PM, Edgar Magana  wrote:

> Would it be possible to add a flag to disable IP allocation?
> If the "no allocation" flag is enabled, all ports will have an empty value for
> IPs.
> It will increase the number of config parameters in quantum; should we try it?
> 
> Edgar
> 
> From:  Mark McClain 
> Reply-To:  OpenStack List 
> Date:  Thursday, June 20, 2013 1:13 PM
> To:  OpenStack List 
> Subject:  Re: [openstack-dev] [Networking] Allocation of IPs
> 
> There's work under way to make IP allocation pluggable. One of the options
> will include not having an allocator for a subnet.
> 
> mark
> 
> On Jun 20, 2013, at 2:36 PM, Edgar Magana  wrote:
> 
>> Developers,
>> 
>> So far in Networking (formerly Quantum) IPs are pre-allocated when a new port
>> is created by the following def:
>> _allocate_ips_for_port(self, context, network, port):
>> 
>> If we are using a real DHCP (not the dnsmasq process) that does not accept
>> static IP allocation because it only allocates IPs based on its own
>> algorithm, how can we tell Networking to not allocate an IP at all?
>> I don't think that is possible based on the code, but I would like to know if
>> somebody has gone through the same problem and has a workaround solution.
>> 
>> Cheers,
>> 
>> Edgar
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___ OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking][ml2] Common files (was: Device configuration)

2013-06-21 Thread Rich Curran (rcurran)
Hi All -

Some light reading and softball questions for a Friday afternoon.

Here are some common files that the cisco mech driver (and I'm assuming the 
other mechanism drivers) will need to start off.
common/exceptions.py
common/constants.py

We might even want to consider moving config.py into the common/ directory.

In the Cisco (sub)plugin we have some specific DB requirements. I'm thinking we 
may want a db/ directory also.

XML vendor specific strings. Store under common/snippets_"vendorname".py?

How do we want to abbreviate mechanism driver files? md, mech_driver, mdriver, 
mech (contrasting to "type_xyz" drivers)

Thanks,
Rich

From: Andre Pech [mailto:ap...@aristanetworks.com]
Sent: Wednesday, June 19, 2013 12:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [networking][ml2] Device configuration

I don't feel too strongly either way. It seems like mechanism drivers will have 
distinct sets of variables, so having a common subsection name may be 
confusing. Plus I like the clarity of the type of the mechanism driver being in 
the section name rather than hidden in a type name.

Maybe we prepend ML2_MECH to that to make it clear that these are variables for 
mechanism drivers? Something like:

[ML2_MECH_CISCO_NEXUS:<ip>]
...

[ML2_MECH_ARISTA:]
...

As I said before, am fine with either of the proposals below too.

Andre

On Wed, Jun 19, 2013 at 8:59 AM, Rich Curran (rcurran) wrote:
Hi All -

Regarding the configuration (ml2_conf.ini) of mechanism devices. Cisco defines 
external devices in our INI file like this:

# Nexus Switch Format.
# [NEXUS_SWITCH:<ip address of switch>]
# <compute hostname>=<port> <- for cisco nexus devices a port would be in the form 
# "<module>/<port>"
# ssh_port=<ssh port>
# username=<username> <- used as login username to the switch
# password=<password> <- password for that username

Any thoughts on how we want to define this configuration info under ML2?

The simplest solution is to have separate INI sections for each vendor/device.
Ex.
[CISCO_NEXUS:]
Variables defined only for cisco nexus devices

[ARISTA_XYZ]
Whatever you want.

Note that the cisco INI file can define more than one NEXUS_SWITCH section 
(different IPs).

We could think about creating more generic section headers but would need a 
variable telling us what device owns that section.
Ex.
[ML2_MECH_DEVICE:]
variable1 =
variable2 =
type = cisco_nexus

The cisco config.py file doesn't have default values for these sections. At 
init time we create a dictionary of these devices.
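A small sketch of that init-time step (assumed section naming and plain
ConfigParser rather than the real oslo config plumbing):

import ConfigParser  # Python 2 standard library


def load_mech_devices(ini_path, prefix='ML2_MECH_'):
    # Build {vendor: {device_address: {option: value}}} from per-device
    # sections named like [ML2_MECH_CISCO_NEXUS:<device address>].
    parser = ConfigParser.ConfigParser()
    parser.read(ini_path)
    devices = {}
    for section in parser.sections():
        if not section.startswith(prefix):
            continue
        vendor, _, address = section[len(prefix):].partition(':')
        devices.setdefault(vendor, {})[address] = dict(parser.items(section))
    return devices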

Thoughts?

Thanks,
Rich





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Celery

2013-06-21 Thread Jessica Lucci
Hello all,

Included here is a link to a Celery wiki (distributed task queue) explaining 
what the Celery project is and how it works. Currently, celery is being used in 
a distributed pattern for the WIP task flow project. As such, links to both the 
distributed project and its parent task flow project have been included for 
your viewing pleasure. Please feel free to ask any questions/address any 
concerns regarding either celery or the task flow project as a whole.

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Fri, Jun 21, 2013 at 3:46 PM, Vishvananda Ishaya
wrote:

>
> On Jun 21, 2013, at 12:38 PM, Doug Hellmann 
> wrote:
>
>
>
>
> On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya  > wrote:
>
>>
>> On Jun 20, 2013, at 10:22 AM, Brant Knudson  wrote:
>>
>> How about a mapping of JSON concepts to XML like:
>>
>> collections:
>>   the-value  ... 
>>   the-value  ... 
>>
>> values:
>> text
>> 
>> 
>> 
>> number
>>
>> This type of mapping would remove any ambiguities. Ambiguities and
>> complexity are problems I've seen with the XML-JSON mapping in Keystone.
>> Plus the fact that it's so not-XML would convince users to switch to JSON.
>> With a simple mapping, I don't think it would be necessary to test all the
>> interfaces for both XML and JSON, just test the mapping code.
>>
>>
>> +1 for something like this. JSON primary + autogenerated XML. I think the
>> ideal version would be autogeneration of xml from jsonschema and some
>> method for prettifying the xml representation via jsonschema tags. The
>> jsonschema + tags approach is probably a bit further off (maybe for v4?),
>> so having an auto conversion which is ugly but functional seems better than
>> no XML support at all.
>>
>> Vish
>>
>
> Let's please not invent something new for this. We're building a high
> level platform. We shouldn't have to screw around with making so many low
> level frameworks to do things for which tools already exist. WSME will
> handle serialization, cleanly, in both XML and JSON already. Let's just use
> that.
>
> Doug
>
>
> Doug,
>
> Switching to WSME for v3 is out of scope at this point I think. Definitely
> worth considering for v4 though.
>
> Vish
>

Absolutely - we agreed about that weeks ago. I assumed, however, that
decision meant we would just continue to use the existing serialization
code. I thought this discussion was moving toward writing something new,
and I wanted to head that off.
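For context, the "ugly but functional" auto conversion floated earlier in the
thread can be tiny; a rough standard-library sketch (element and attribute names
here are assumptions, not the proposed Nova v3 wire format):

import json
import xml.etree.ElementTree as ET


def json_to_xml(value, tag='value'):
    # Naively map a decoded JSON value onto an XML element tree.
    elem = ET.Element(tag)
    if isinstance(value, dict):
        for key, child in value.items():
            elem.append(json_to_xml(child, tag=key))
    elif isinstance(value, list):
        for child in value:
            elem.append(json_to_xml(child, tag='item'))
    else:
        elem.set('type', type(value).__name__)
        elem.text = str(value)
    return elem


doc = json.loads('{"server": {"name": "vm1", "cpus": 2, "tags": ["web", "prod"]}}')
print(ET.tostring(json_to_xml(doc, tag='response')))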

Doug


>
>
>
>>
>>
>>
>>
>> On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams <
>> jorge.willi...@rackspace.com> wrote:
>>
>>>
>>> On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:
>>>
>>> > On 06/20/2013 11:20 AM, Brian Elliott wrote:
>>> >> On Jun 19, 2013, at 7:34 PM, Christopher Yeoh 
>>> wrote:
>>> >>
>>> >>> Hi,
>>> >>>
>>> >>> Just wondering what people thought about how necessary it is to keep
>>> XML support for the Nova v3 API, given that if we want to drop it doing so
>>> during the v2->v3 transition is pretty much the ideal time to do so.
>>> >>>
>>> >>> The current plan is to keep it and is what we have been doing so far
>>> when porting extensions, but there are pretty obvious long term development
>>> and test savings if we only have one API format to support.
>>> >>>
>>> >>> Regards,
>>> >>>
>>> >>> Chris
>>> >>>
>>> >>
>>> >> Can we support CORBA?
>>> >>
>>> >> No really, it'd be great to drop support for it while we can.
>>> >
>>> > I agree personally ... but this has come up before, and when polling
>>> the
>>> > larger audience (and not just the dev list), there is still a large
>>> > amount of demand for XML support (or at least that was my
>>> > interpretation).  So, I think it should stay.
>>> >
>>> > I'm all for anything that makes supporting both easier.  It doesn't
>>> have
>>> > to be the ideal XML representation.  If we wanted to adopt different
>>> > formatting to make supporting it easier (automatic conversion from json
>>> > in the code I guess), I'd be fine with that.
>>> >
>>>
>>>
>>> I agree, we can change the XML representation to make it easy to convert
>>> between XML and JSON.  If I could go back in time, that would definitely be
>>> something I would do different.  3.0 gives us an opportunity to start over
>>> in that regard. Extensions may still be "tricky" because you still want
>>> to use namespaces, but having a simpler mapping may simplify the process of
>>> supporting both.
>>>
>>> -jOrGe W.
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Vishvananda Ishaya

On Jun 21, 2013, at 12:38 PM, Doug Hellmann  wrote:

> 
> 
> 
> On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya  
> wrote:
> 
> On Jun 20, 2013, at 10:22 AM, Brant Knudson  wrote:
> 
>> How about a mapping of JSON concepts to XML like:
>> 
>> collections:
>>   the-value  ... 
>>   the-value  ... 
>> 
>> values:
>> text
>> 
>> 
>> 
>> number
>> 
>> This type of mapping would remove any ambiguities. Ambiguities and 
>> complexity are problems I've seen with the XML-JSON mapping in Keystone. 
>> Plus the fact that it's so not-XML would convince users to switch to JSON. 
>> With a simple mapping, I don't think it would be necessary to test all the 
>> interfaces for both XML and JSON, just test the mapping code.
> 
> +1 for something like this. JSON primary + autogenerated XML. I think the 
> ideal version would be autogeneration of xml from jsonschema and some method 
> for prettifying the xml representation via jsonschema tags. The jsonschema + 
> tags approach is probably a bit further off (maybe for v4?), so having an 
> auto conversion which is ugly but functional seems better than no XML support 
> at all.
> 
> Vish
> 
> Let's please not invent something new for this. We're building a high level 
> platform. We shouldn't have to screw around with making so many low level 
> frameworks to do things for which tools already exist. WSME will handle 
> serialization, cleanly, in both XML and JSON already. Let's just use that.
> 
> Doug

Doug,

Switching to WSME for v3 is out of scope at this point I think. Definitely 
worth considering for v4 though.

Vish

>  
> 
>> 
>> 
>> 
>> On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams 
>>  wrote:
>> 
>> On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:
>> 
>> > On 06/20/2013 11:20 AM, Brian Elliott wrote:
>> >> On Jun 19, 2013, at 7:34 PM, Christopher Yeoh  wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> Just wondering what people thought about how necessary it is to keep XML 
>> >>> support for the Nova v3 API, given that if we want to drop it doing so 
>> >>> during the v2->v3 transition is pretty much the ideal time to do so.
>> >>>
>> >>> The current plan is to keep it and is what we have been doing so far 
>> >>> when porting extensions, but there are pretty obvious long term 
>> >>> development and test savings if we only have one API format to support.
>> >>>
>> >>> Regards,
>> >>>
>> >>> Chris
>> >>>
>> >>
>> >> Can we support CORBA?
>> >>
>> >> No really, it'd be great to drop support for it while we can.
>> >
>> > I agree personally ... but this has come up before, and when polling the
>> > larger audience (and not just the dev list), there is still a large
>> > amount of demand for XML support (or at least that was my
>> > interpretation).  So, I think it should stay.
>> >
>> > I'm all for anything that makes supporting both easier.  It doesn't have
>> > to be the ideal XML representation.  If we wanted to adopt different
>> > formatting to make supporting it easier (automatic conversion from json
>> > in the code I guess), I'd be fine with that.
>> >
>> 
>> 
>> I agree, we can change the XML representation to make it easy to convert 
>> between XML and JSON.  If I could go back in time, that would definitely be 
>> something I would do different.  3.0 gives us an opportunity to start over 
>> in that regard. Extensions may still be "tricky" because you still want 
>> to use namespaces, but having a simpler mapping may simplify the process of 
>> supporting both.
>> 
>> -jOrGe W.
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [quantum] Deadlock on quantum port-create

2013-06-21 Thread Jay Buffington
I'm moving a thread we had with some vmware guys to this list to make it
public.

We had a problem with quantum deadlocking when it got several requests in
quick
succession.  Aaron suggested we set sql_dbpool_enable = True.  We did and it
seemed to resolve our issue.

What are the downsides of turning on sql_dbpool_enable?  Should it be on by
default?

Thanks,
Jay


>> We are currently experiencing the following problem in our environment:
>> issuing 5 'quantum port-create' commands in parallel effectively
deadlocks quantum:
>>
>> $ for n in $(seq 5); do echo 'quantum --insecure port-create
stage-net1'; done | parallel
>> An unknown exception occurred.
>> Request Failed: internal server error while processing your request.
>> An unexpected error occurred in the NVP Plugin:Unable to get logical
switches

On Jun 21, 2013, at 9:36 AM, Aaron Rosen  wrote:
> We've encountered this issue as well. I'd try enabling:
> # Enable the use of eventlet's db_pool for MySQL. The flags
sql_min_pool_size,
> # sql_max_pool_size and sql_idle_timeout are relevant only if this is
enabled.
>
> sql_dbpool_enable = True
>
> in nvp.ini to see if that helps at all. In our internal cloud we removed
the
> creations of the lports in nvp from the transaction. Salvatore is working
on
> an async back-end to the plugin that will solve this and improve the
plugin
> performance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Allocation of IPs

2013-06-21 Thread Mark McClain
There will be a deployment option where you can configure the default IP 
allocator.  Additionally, the allocator will be configurable at subnet creation 
time.

mark


On Jun 20, 2013, at 4:51 PM, Edgar Magana  wrote:

> Would it be possible to add a flag to disable IP allocation?
> If the "no allocation" flag is enabled, all ports will have an empty value 
> for IPs.
> It will increase the number of config parameters in quantum; should we try it?
> 
> Edgar
> 
> From: Mark McClain 
> Reply-To: OpenStack List 
> Date: Thursday, June 20, 2013 1:13 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Networking] Allocation of IPs
> 
> There's work under way to make IP allocation pluggable. One of the options 
> will include not having an allocator for a subnet.
> 
> mark
> 
> On Jun 20, 2013, at 2:36 PM, Edgar Magana  wrote:
> 
>> Developers,
>> 
>> So far in Networking (formerly Quantum) IPs are pre-allocated when a new 
>> port is created by the following def:
>> _allocate_ips_for_port(self, context, network, port):
>> 
>> If we are using a real DHCP (not the dnsmasq process) that does not accept 
>> static IP allocation because it only allocates IPs based on its own 
>> algorithm, how can we tell Networking to not allocate an IP at all?
>> I don’t think that is possible based on the code but I would like to know if 
>> somebody has gone through the same problem and has a workaround solution.
>> 
>> Cheers,
>> 
>> Edgar
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___ OpenStack-dev mailing list 
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya
wrote:

>
> On Jun 20, 2013, at 10:22 AM, Brant Knudson  wrote:
>
> How about a mapping of JSON concepts to XML like:
>
> collections:
>   the-value  ... 
>   the-value  ... 
>
> values:
> text
> 
> 
> 
> number
>
> This type of mapping would remove any ambiguities. Ambiguities and
> complexity are problems I've seen with the XML-JSON mapping in Keystone.
> Plus the fact that it's so not-XML would convince users to switch to JSON.
> With a simple mapping, I don't think it would be necessary to test all the
> interfaces for both XML and JSON, just test the mapping code.
>
>
> +1 for something like this. JSON primary + autogenerated XML. I think the
> ideal version would be autogeneration of xml from jsonschema and some
> method for prettifying the xml representation via jsonschema tags. The
> jsonschema + tags approach is probably a bit further off (maybe for v4?),
> so having an auto conversion which is ugly but functional seems better than
> no XML support at all.
>
> Vish
>

Let's please not invent something new for this. We're building a high level
platform. We shouldn't have to screw around with making so many low level
frameworks to do things for which tools already exist. WSME will handle
serialization, cleanly, in both XML and JSON already. Let's just use that.

Doug


>
>
>
>
> On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams <
> jorge.willi...@rackspace.com> wrote:
>
>>
>> On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:
>>
>> > On 06/20/2013 11:20 AM, Brian Elliott wrote:
>> >> On Jun 19, 2013, at 7:34 PM, Christopher Yeoh 
>> wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> Just wondering what people thought about how necessary it is to keep
>> XML support for the Nova v3 API, given that if we want to drop it doing so
>> during the v2->v3 transition is pretty much the ideal time to do so.
>> >>>
>> >>> The current plan is to keep it and is what we have been doing so far
>> when porting extensions, but there are pretty obvious long term development
>> and test savings if we only have one API format to support.
>> >>>
>> >>> Regards,
>> >>>
>> >>> Chris
>> >>>
>> >>
>> >> Can we support CORBA?
>> >>
>> >> No really, it'd be great to drop support for it while we can.
>> >
>> > I agree personally ... but this has come up before, and when polling the
>> > larger audience (and not just the dev list), there is still a large
>> > amount of demand for XML support (or at least that was my
>> > interpretation).  So, I think it should stay.
>> >
>> > I'm all for anything that makes supporting both easier.  It doesn't have
>> > to be the ideal XML representation.  If we wanted to adopt different
>> > formatting to make supporting it easier (automatic conversion from json
>> > in the code I guess), I'd be fine with that.
>> >
>>
>>
>> I agree, we can change the XML representation to make it easy to convert
>> between XML and JSON.  If I could go back in time, that would definitely be
>> something I would do different.  3.0 gives us an opportunity to start over
>> in that regard. Extensions may still be "tricky" because you still want
>> to use namespaces, but having a simpler mapping may simplify the process of
>> supporting both.
>>
>> -jOrGe W.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Thu, Jun 20, 2013 at 12:44 PM, Russell Bryant  wrote:

> On 06/20/2013 12:00 PM, Thierry Carrez wrote:
> > Christopher Yeoh wrote:
> >> Just wondering what people thought about how necessary it is to keep XML
> >> support for the Nova v3 API, given that if we want to drop it doing so
> >> during the v2->v3 transition is pretty much the ideal time to do so.
> >
> > Although I hate XML as much as anyone else, I think it would be
> > interesting to raise that question on the general user mailing-list.
> >
> > We have been discussing that in the past, and while there was mostly
> > consensus against XML (in OpenStack API) on the development list, when
> > the issue was raised with users, in the end they made up a
> > sufficiently-good rationale for us to keep it in past versions of the
> API :)
> >
>
> Yes, and I suspect we'd arrive the same result again.
>
> I'd rather hear ideas for things that would make it easier to support
> both.  The window is open for changes to make that easier.
>

Supporting both was one of the benefits I identified in WSME. Think of it
as a declarative layer for the API, just like SQLAlchemy has declarative
table definitions. As a developer, you never have to think about the format
of the data on the wire because by the time you get it in the API endpoint,
it's an object.

Doug


>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quantum's new name is….

2013-06-21 Thread Stefano Maffulli
On 06/19/2013 09:14 AM, Mark McClain wrote:
> The OpenStack Networking team is happy to announce that the Quantum
> project will be changing its name to Neutron. You'll soon see Neutron
> in lots of places as we work to implement the name change within
> OpenStack.

Congratulations for the cool name. I just changed the name of the
mailman topic :)

Tag your subject lines with [Neutron] if you intend to discuss OpenStack
Networking

Cheers,
stef




-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vmware] VMwareAPI sub-team status update

2013-06-21 Thread Shawn Hartsock
Greetings Stackers!

It's time for the end of the week report from your friendly neighborhood VMware 
API team. We are making good progress, but I'll just point out that if you want 
to be sure something makes the Havana-2 deadline (based on length of time for 
code review and so on) I would seriously consider having your work done and 
*posted* by Monday morning July 8th. That means you effectively have 2 working 
weeks to "polish" those Havana-2 patches to a "high-gloss" or risk moving them 
to Havana-3. (And if you are in the US you've got a short work week one of 
those weeks in July.)

As a point of trivia, I've posted a developer's guide drafted (with a lot of 
help from the rest of my team here at VMware). 

OpenStack + VMware Developer's quick-start guide:
  https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide
Pass it around and help us improve it. It belongs to all of us now.

First up, posted reviews and their statuses...

Ready for core review:
* https://review.openstack.org/#/c/30036/ <- (same old patch just rebased)
* https://review.openstack.org/#/c/29396/
* https://review.openstack.org/#/c/30822/
* https://review.openstack.org/#/c/33482/

Needs VMwareAPI expert's attention:
If you know VMware's APIs, please take some time with these so we can pass them 
on to the core-reviewers.
* https://review.openstack.org/#/c/27885/
* https://review.openstack.org/#/c/29453/
* https://review.openstack.org/#/c/30282/
* https://review.openstack.org/#/c/30289/
* https://review.openstack.org/#/c/32695/
* https://review.openstack.org/#/c/33100/

Work In Progress (not ready for general review):
* https://review.openstack.org/#/c/30628/ <- needs traversal spec work & API 
expert participation
* https://review.openstack.org/#/c/33088/ <- needs more discussion
* https://review.openstack.org/#/c/33504/ <- needs more discussion

Thanks for the responses to my last update; quite a few more of our reviews were 
merged! Even though we're dealing with some muddling in the Blueprint area, I 
think we're going to produce a better product in the long-run for our efforts. 
Which brings me to ...

Blueprints
* improve-vmware-disk-usage - 
https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
   * Thanks to Yaguang Tang staying up late to chat with us, we've identified a 
relation to 
 vmware-image-clone-strategy:
 + https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
 I was planning on working on this BP for Havana-3; I will try to move it
 up on my personal schedule, or someone else can talk to me and take
 the blueprint over if I'm not moving fast enough for you.

* 
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
  * We had some internal review with VMware API experts and I hope to provide a
sample of some improvement code to the blueprint next week.
  * We had a new bug report: https://bugs.launchpad.net/nova/+bug/1192192 
this is a reported regression in "live migration" which conflicts with
other changes. We're still discussing how to reconcile clusters (VMware
vSphere clusters, when DRS is turned on, automatically manage 
their own live-migration behaviors without admin intervention) with 
live-migration features which allow manual live-migration. One solution
is to check for DRS then raise an error effectively disabling manual
live-migration. The other is to expose individual hosts (ruining that
lovely automation in the cluster.)

So far, that's *all* the blueprints we have scheduled for Havana right now. I 
think that's a function mostly of how much work has been poured into making the 
existing features solid. 

Critical/High priority & Open Bugs
 * http://goo.gl/lvis7

I'm happy to report that every open bug in Critical or High priority status has 
someone assigned to it and actively working on a fix. This is really great 
progress! Keep it up! Rah rah rah! GO TEAM VMware API!

Weekly Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI

# Shawn Hartsock - VMware's Nova Compute driver maintainer guy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

2013-06-21 Thread Kant, Arun


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Thursday, June 20, 2013 6:30 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

On 06/20/2013 04:50 PM, Ali, Haneef wrote:

1)  I'm really not sure how that will solve the original issue (Token table 
size increase).  Of course we can have a job to remove the expired tokens.
It is not expiry that is the issue, but revocation.  Expiry is handled by the 
fact that the token is a signed document with a timestamp in it.  We don't 
really need to store expired tokens at all.
[Arun] One of the issues is the unlimited number of active tokens possible through 
keystone for the same credentials, which can be turned into a DoS attack on 
cloud services. So we can look to keystone for a solution, as token 
generation is one of its key responsibilities. Removal of expired tokens is a 
separate aspect which will be needed at some point regardless of whether tokens 
are re-used or not.



2)  We really have to think about how the other services are using keystone.  
Keystone "createToken" volume is going to increase. Fixing one issue is going to 
create another one.
Yes it will.  But in the past, the load was on Keystone token validate, and PKI 
has removed that load.  Right now, the greater load on Keystone is coming from 
token create, but that is because token caching is not in place.  With proper 
caching, Keystone would be hit only once for most workloads.  It is currently 
hit for every Remote call.  It is not the token generation that is the issue, 
but the issuing of the tokens that needs to be throttled back.
[Arun] We cannot just have a solution for happy-path situations. Being available 
in the cloud, there are going to be varying types of clients, and we cannot just 
expect that each of them will have caching or will always be able to work with 
the PKI token format (like third-party services/applications running *on the 
cloud*). Throttling token issuance requests will require complex 
rate-limiting logic because of the various input combinations and business rules 
associated with it. Another solution would be for keystone to re-use an active 
token based on some selection logic and still be able to serve auth requests 
without rate-limiting errors.
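As a rough illustration of that re-use idea (a hypothetical helper, not Keystone
code), an auth request would hand back an existing unexpired token for the same
scope instead of minting another one:

import datetime


def issue_token(token_store, user_id, tenant_id, create_token):
    # token_store maps (user_id, tenant_id) -> a token dict carrying an
    # 'expires' datetime; create_token mints a brand new token.
    now = datetime.datetime.utcnow()
    token = token_store.get((user_id, tenant_id))
    if token and token['expires'] > now:
        return token                      # re-use the live token
    token = create_token(user_id, tenant_id)
    token_store[(user_id, tenant_id)] = token
    return token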



1.   If I  understood correctly  swift is using memcache to increase the  
validateToken performance.  What will happen to it?  Obviously load  to  
"validateToken" will also increase.
Validate token happens in process with PKI tokens, not via remote call. 
Memcache just prevents swift from having to make that check more than once per 
token.  Revocation still needs to be checked every time.
[Arun] There are issues with the PKI token approach as well (token lifespan, 
data size limits, role and status changes after token generation). If a shorter 
lifespan is used, then essentially we will be increasing createToken requests.




2.  In a few cases I have seen VM creation taking more than 5 min (download 
image from glance and create the VM). A short-lived token (5 min) will be 
real fun in this case.
That is what trusts are for.  Nova should not be using a bearer token to 
perform operations on behalf of the user.  Nova should be getting a delegated 
token via a trust to perform those operations.  If a vm takes 5 minutes, it 
should not matter if the tokens time out, as Nova will get a token when it 
needs it. Bearer tokens are  a poor design approach, and we have work going on 
that will remedy that.
[Arun] Not sure how a delegated token or the current v3 trust/role model is going to 
work here, as the token needs to carry the user's roles (or at least delegated 
permissions covering *all* of the user's privileges) to act on the user's behalf. 
Are we talking about Nova impersonating the user in some way?
With short-lived (non-PKI) tokens, we are just diverting request load from 
validate token to create token, which is a relatively expensive operation.

We need some smarter mechanism to limit the proliferation of tokens, as they are 
essentially the user's credentials for a limited time.



Thanks
Haneef



From: Ravi Chunduru [mailto:ravi...@gmail.com]
Sent: Thursday, June 20, 2013 11:49 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

+1
On Thu, Jun 20, 2013 at 11:37 AM, Dolph Mathews wrote:

On Wed, Jun 19, 2013 at 2:20 PM, Adam Young wrote:
I really want to go the other way on this:  I want tokens to be very short 
lived, ideally something like 1 minute, but probably 5 minutes to account for 
clock skew.  I want to get rid of token revocation list checking.  I'd like to 
get away from revocation altogether:  tokens are not stored in the backend.  If 
they are ephemeral, we can just check that the token has a valid signature and 
that the time has not expired.

+10







On 06/19/2013 12:59 PM, Ravi Chunduru wrote:
That's still an open item in this thread.

Let me summarize once aga

Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Qing He

Russell,
That's a great initiative!  I'm wondering if a framework/abstraction layer can be 
built so that different algorithms can be plugged in. I'm sure we can learn 
from the non-VM world:
http://en.wikipedia.org/wiki/Processor_affinity
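A bare-bones sketch of such a pluggable layer (hypothetical interface;
os.sched_setaffinity is Linux-only and Python 3.3+, and the vCPU thread IDs would
come from the hypervisor):

import os


class PinningStrategy(object):
    # Maps vCPU thread IDs to the physical CPUs they should run on.

    def place(self, vcpu_tids, physical_cpus):
        raise NotImplementedError


class RoundRobinStrategy(PinningStrategy):
    # Naive placement: spread vCPU threads across physical CPUs in order.

    def place(self, vcpu_tids, physical_cpus):
        cpus = list(physical_cpus)
        return dict((tid, {cpus[i % len(cpus)]})
                    for i, tid in enumerate(vcpu_tids))


def apply_pinning(strategy, vcpu_tids, physical_cpus):
    for tid, cpuset in strategy.place(vcpu_tids, physical_cpus).items():
        os.sched_setaffinity(tid, cpuset)  # pin this vCPU thread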

Thanks,
Qing

> 
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 20 June 2013 17:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Efficiently pin running VMs to physical 
> CPUs automatically
> 
> On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
>> Hello, I created a blueprint for the implementation of:
>> 
>> A tool for pinning automatically each running virtual CPU to a 
>> physical one in the most efficient way, balancing load across 
>> sockets/cores and maximizing cache sharing/minimizing cache misses. 
>> Ideally able to be run on-demand, as a periodic job, or be triggered 
>> by events on the host (vm spawn/destroy).
>> 
>> Find it at 
>> https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
>> 
>> Any input appreciated!
> 
> I'm actually surprised to see a new tool for this kind of thing.
> 
> Have you seen numad?
> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The future of run_tests.sh

2013-06-21 Thread Monty Taylor

On 06/21/2013 01:44 PM, Joe Gordon wrote:
> It sounds like the consensus in this thread is:
> 
> In the long run, we want to kill run_tests.sh in favor of
> explaining how to use the underlying tools in a TESTING file.

I agree. I'd like to add that 'long run' here is potentially a couple
of cycles away. I think we definitely don't want to get rid of a thing
that a project is currently using without an answer for all of its use
cases.

> But in the short term, we should start moving toward using a
> TESTING file (such as https://review.openstack.org/#/c/33456/) but
> keep run_tests.sh for the time being as there are things it does
> that we don't have simple ways of doing yet.  Since run_tests.sh
> will be around for a while it does make sense to move it into
> oslo.
> 
> 
> best, Joe
> 
> 
> On Tue, Jun 18, 2013 at 11:44 AM, Monty Taylor wrote:
> 
> 
> 
> On 06/18/2013 08:44 AM, Julien Danjou wrote:
>> FWIW, I think we never really had a run_tests.sh in Ceilometer 
>> like other projects might have, and we don't have one anymore
>> for weeks, and that never looked like a problem.
> 
>> We just rely on tox and on a good working listing in 
>> requirements.txt and test-requirements.txt, so you can build a
>> venv yourself if you'd like.
> 
> A couple of followups to things in this thread so far:
> 
> - Running tests consistently both in and out of virtualenv.
> 
> Super important. Part of the problem is that setuptools "test"
> command is a broken pile of garbage. So we have a patch coming to
> pbr that will sort that out - and at least as a next step, tox and
> run_tests.sh can both run python setup.py test and it will work
> both in and out of a venv, regardless of whether the repo uses nose
> or testr.
> 
> - Individual tests
> 
> nose and tox and testr and run_tests.sh all support running
> individual tests just fine. The invocation is slightly different
> for each. For me testr is hte friendliest because it defaults to
> regexes - so "testr run test_foo" will happily run 
> nova.tests.integration.deep_directory.foo.TestFoo.test_foo. But -
> all four mechanisms work here fine.
> 
> - pbr
> 
> Dropping in to a debugger while running via testr is currently 
> problematic, but is currently on the table to be sorted. In the 
> meantime, the workaround is to run testtools.run directly, which 
> run_tests.sh does for you if you specify a single test. I think
> this is probably the single greatest current reason to keep
> run_tests.sh at the moment - because as much as you can learn the
> cantrips around doing it, it's not a good UI.
> 
> - nova vs. testr
> 
> In general, things are moving towards testr being the default. I
> don't think there will be anybody cutting off people's hands for
> using nose, but I strongly recommend taking a second to learn testr
> a bit. It's got some great features and is built on top of a
> completely machine parsable test result streaming protocol, which
> means we can do some pretty cool stuff with it.
> 
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Celery

2013-06-21 Thread Joshua Harlow
Sweet, thanks jessica for the awesome docs and work.

From: Jessica Lucci
Reply-To: OpenStack Development Mailing List
Date: Friday, June 21, 2013 10:33 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] Celery

Hello all,

Included here is a link to a Celery wiki, explaining what the Celery project is 
and how it works. Currently, celery is being used in a distributed pattern for 
the WIP task flow project. As such, links to both the distributed project and 
its parent task flow project have been included for your viewing pleasure. 
Please feel free to ask any questions/address any concerns regarding either 
celery or the task flow project as a whole. (:

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The future of run_tests.sh

2013-06-21 Thread Joe Gordon
It sounds like the consensus in this thread is:

In the long run, we want to kill run_tests.sh in favor of explaining how to
use the underlying tools in a TESTING file.

But in the short term, we should start moving toward using a TESTING file
(such as https://review.openstack.org/#/c/33456/) but keep run_tests.sh for
the time being as there are things it does that we don't have simple ways
of doing yet.  Since run_tests.sh will be around for a while it does make
sense to move it into oslo.


best,
Joe


On Tue, Jun 18, 2013 at 11:44 AM, Monty Taylor  wrote:

>
>
>
> On 06/18/2013 08:44 AM, Julien Danjou wrote:
> > FWIW, I think we never really had a run_tests.sh in Ceilometer
> > like other projects might have, and we don't have one anymore for
> > weeks, and that never looked like a problem.
> >
> > We just rely on tox and on a good working listing in
> > requirements.txt and test-requirements.txt, so you can build a venv
> > yourself if you'd like.
>
> A couple of followups to things in this thread so far:
>
> - Running tests consistently both in and out of virtualenv.
>
> Super important. Part of the problem is that setuptools "test" command
> is a broken pile of garbage. So we have a patch coming to pbr that
> will sort that out - and at least as a next step, tox and run_tests.sh
> can both run python setup.py test and it will work both in and out of
> a venv, regardless of whether the repo uses nose or testr.
>
> - Individual tests
>
> nose and tox and testr and run_tests.sh all support running individual
> tests just fine. The invocation is slightly different for each. For me
> testr is the friendliest because it defaults to regexes - so "testr
> run test_foo" will happily run
> nova.tests.integration.deep_directory.foo.TestFoo.test_foo. But - all
> four mechanisms work here fine.
>
> - pbr
>
> Dropping in to a debugger while running via testr is currently
> problematic, but is currently on the table to be sorted. In the
> meantime, the workaround is to run testtools.run directly, which
> run_tests.sh does for you if you specify a single test. I think this
> is probably the single greatest current reason to keep run_tests.sh at
> the moment - because as much as you can learn the cantrips around
> doing it, it's not a good UI.
>
> - nova vs. testr
>
> In general, things are moving towards testr being the default. I don't
> think there will be anybody cutting off people's hands for using nose,
> but I strongly recommend taking a second to learn testr a bit. It's
> got some great features and is built on top of a completely machine
> parsable test result streaming protocol, which means we can do some
> pretty cool stuff with it.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Celery

2013-06-21 Thread Jessica Lucci
Hello all,

Included here is a link to a Celery wiki, explaining what the Celery project is 
and how it works. Currently, celery is being used in a distributed pattern for 
the WIP task flow project. As such, links to both the distributed project, and 
its' parent task flow project have been included for your viewing pleasure. 
Please feel free to ask any questions/address any concerns regarding either 
celery or the task flow project as a whole. (:

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Alessandro Pilotti
> It seems that numad is libvirt specific - is that the case?

Hi, Hyper-V 2012 supports NUMA as well. It'd be great to plan a 
hypervisor-independent solution from the start.


On 21.06.2013, at 11:13, "Bob Ball"  wrote:

> It seems that numad is libvirt specific - is that the case?
> 
> I'm not sure if there is a daemon for other hypervisors but would it make 
> sense to have this functionality in OpenStack so we can extend it to work for 
> each hypervisor allowing it to control the affinity in their own way?  I 
> guess this would need the Pinhead tool to either support multiple hypervisors 
> or provide the pinning strategy to Nova which could then invoke the 
> individual drivers.
> 
> Outside numa optimisations I think there are good reasons for Nova to support 
> modifying the affinity / pinning rules - for example I can imagine that some 
> flavours might be permitted dedicated or isolated vCPUs?  Integrating this 
> tool would allow us to provide it further hints/rules defined by the flavour 
> or administrator.
> 
> Bob
> 
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com] 
> Sent: 20 June 2013 17:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Efficiently pin running VMs to physical CPUs 
> automatically
> 
> On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
>> Hello, I created a blueprint for the implementation of:
>> 
>> A tool for pinning automatically each running virtual CPU to a physical
>> one in the most efficient way, balancing load across sockets/cores and
>> maximizing cache sharing/minimizing cache misses. Ideally able to be run
>> on-demand, as a periodic job, or be triggered by events on the host (vm
>> spawn/destroy).
>> 
>> Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
>> 
>> Any input appreciated!
> 
> I'm actually surprised to see a new tool for this kind of thing.
> 
> Have you seen numad?
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuring Quantum REST Proxy Plugin

2013-06-21 Thread Sumit Naiksatam
Thanks Salvatore. That's right, the configuration for server and port
resides in:
etc/quantum/plugins/bigswitch/restproxy.ini

Let us know if you need further help.

~Sumit.

On Fri, Jun 21, 2013 at 8:37 AM, Salvatore Orlando  wrote:
> Hi Julio,
>
> If I get your message correctly, you have a proxy which is pretty much a
> shim layer between the big switch plugin (QuantumRestProxyV2) and the
> OpenNaaS server.
> In this case all you need to do is to configure the [restproxy] section of
> etc/quantum/plugins/bigswitch/restproxy.ini with the endpoint of your
> OpenNaaS server.
>
> Regards,
> Salvatore
>
>
> On 18 June 2013 14:13, Julio Carlos Barrera Juez
>  wrote:
>>
>> Hi.
>>
>> We're trying to configure Quantum REST Proxy Plugin to use an external
>> Network service developed by ourselves in the context of OpenNaaS Project
>> [1]. We have developed a REST server to listen Proxy requests. We want to
>> modify Plugin configuration as described in OpenStack official documentation
>> [2] and OpenStack Wiki [3].
>>
>> It is possible to configure path of the URL in the plugin configuration
>> like server host and port?
>>
>> Thank you!
>>
>>
>> [1] OpenNaaS Project - http://www.opennaas.org/
>> [2] OpenStack official documentation -
>> http://docs.openstack.org/trunk/openstack-network/admin/content/bigswitch_floodlight_plugin.html
>> [3] OpenStack Wiki -
>> https://wiki.openstack.org/wiki/Quantum/RestProxyPlugin#Quantum_Rest_Proxy_Plugin
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Daniel P. Berrange
On Fri, Jun 21, 2013 at 05:09:24PM +, Bob Ball wrote:
> Sorry, my point about numad being libvirt specific was that I
> couldn't find references to other hypervisors using numad for
> their placement.  I recognise that it's not _tied_ to libvirt
> but the reality seems to be that only libvirt uses it.
> 
> Xen, for example, can't use numad because dom0 might only know
> about a subset of the system - it'd make sense for dom0 to only
> be placed on a single numa node. Xen does of course have its own
> automatic placement to take account of the numa nodes - I assume
> this is also true of other hypervisors.

That is merely a limitation of the current impl, not a technology
roadblock. numad could easily be made to ask the Xen hypervisor
what the topology of the entire host was if that was desired for
Xen.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Mark McLoughlin
On Fri, 2013-06-21 at 09:30 -0700, Chris Behrens wrote:
> > On Mon, Jun 17, 2013 at 2:14 AM, Mark McLoughlin 
> wrote:
> > I don't know whether I like it yet or not, but here's how it might
> look:
> > 
> >  [cells]
> >  parents = parent1
> >  children = child1, child2
> > 
> >  [cell:parent1]
> >  transport_url = qpid://host1/nova
> > 
> >  [cell:child1]
> >  transport_url = qpid://host2/child1_nova
> > 
> >  [cell:child2]
> >  transport_url = qpid://host2/child2_nova
> […]
> 
> Yeah, that's what I was picturing if going that route.  I guess the
> code for it is not bad at all.  But with oslo.config, can I reload
> (re-parse) the config file later, or does the service need to be
> restarted?

Support for reloading should get merged soon:

https://review.openstack.org/32231

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Bob Ball
Sorry, my point about numad being libvirt specific was that I couldn't find 
references to other hypervisors using numad for their placement.  I recognise 
that it's not _tied_ to libvirt but the reality seems to be that only libvirt 
uses it.

Xen, for example, can't use numad because dom0 might only know about a subset 
of the system - it'd make sense for dom0 to only be placed on a single numa 
node. Xen does of course have its own automatic placement to take account of 
the numa nodes - I assume this is also true of other hypervisors.

Perhaps my question is a broader one about whether we want Nova to have some 
influence in the pinning rules, or if we just want to ignore numa placement and 
let each hypervisor do it in its own way?

Bob

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: 21 June 2013 10:55
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Efficiently pin running VMs to physical CPUs 
automatically

On Fri, Jun 21, 2013 at 09:10:32AM +, Bob Ball wrote:
> It seems that numad is libvirt specific - is that the case?

No, it is a completely independant project

  https://git.fedorahosted.org/git/numad.git

It existed before libvirt started using it for automatic placement.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] autoscaling question

2013-06-21 Thread Patrick Petit

Dear All,

I'd like to have some confirmation about the mechanism that is going to 
be used to inform Heat's clients about instance create and destroy in an 
auto-scaling group. I am referring to the wiki page at 
https://wiki.openstack.org/wiki/Heat/AutoScaling.


I assume, but I may be wrong, that the same eventing mechanism as the 
one being used for stack creation will be used...


An instance create in an auto-scaling group will generate an IN_PROGRESS 
event for the instance being created followed by CREATE_COMPLETE or 
CREATE_FAILED based on the value returned by cfn-signal. Similarly, an 
instance destroy will generate a DELETE_IN_PROGRESS event for the 
instance being destroyed followed by a DELETE_COMPLETE or DELETE_FAILED 
in case the instance can't be destroyed in the group.


Adding a group id in the event details will be helpful to figure out 
what group the instance belongs to.


Thanks in advance for the clarification.
Patrick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] re: discussion about passing metadata into provider stacks as parameters

2013-06-21 Thread Zane Bitter

On 21/06/13 07:49, Angus Salkeld wrote:

On 20/06/13 22:19 -0400, cbjc...@linux.vnet.ibm.com wrote:


So anyway, let's get back to the topic this thread was discussing -
"passing metadata into provider stacks".

It seems that we have all reached an agreement that deletepolicy and
updatepolicy will be passed as params, and metadata will be exposed to
provider templates through a function

In terms of implementation,

MetaData:

- add a resolve method to template.py to handle
{'Fn::ProvidedResource': 'Metadata'}


I think the name needs a little thought, how about:

{'Fn::ResourceFacade': 'Metadata'}


It was my thought that we would handle DeletePolicy and UpdatePolicy in 
the same way as Metadata:


{'Fn::ResourceFacade': 'DeletePolicy'}
{'Fn::ResourceFacade': 'UpdatePolicy'}

And, in fact, none of this should be hardcoded, so it should just work 
like Fn::Select on the resource facade's template snippet.


Which actually suggests another possible syntax:

{'Fn::Select': ['DeletePolicy', {'OS::Heat::ResourceFacade'}]}

but I'm persuaded that accessing these will be common enough that it's 
worth sticking with the simpler Fn::ResourceFacade syntax.
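
For illustration only (a hypothetical snippet, not an agreed syntax): picking a
single key out of the facade's metadata could then combine the two functions,
e.g.:

  {'Fn::Select': ['some_key', {'Fn::ResourceFacade': 'Metadata'}]}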


cheers,
Zane.



-Angus


DeletePolicy/UpdatePolicy:

- add stack_resource.StackResource.compose_policy_params() -> Json
encoded delete and update policies

- have create_with_template update params with delete/update policies
composed by compose_policy_params
(json-parameters implementation is already in review, hope it will be
available soon)


I will start the implementation if there is no objection.


Liang



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Basic configuration with VMs with a local private network

2013-06-21 Thread Salvatore Orlando
I reckon the admin guide [1] contains sufficiently up-to-date information
for the grizzly release.
Please let me know if you find it lacks important information. We'll be
more than happy to make the necessary amendments.

Your scenario appears to be fairly simple. On the compute node you will
need nova-compute and the openvswitch plugin's L2 agent.
Quantum server, and all the other Openstack services, should run on the
controller node.
Then you should just create your network and subnet using the Quantum API.
Quantum ports will be created when VMs are booted.
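
For example, something along these lines (names and CIDR are placeholders;
adjust to your setup):

  quantum net-create private-net
  quantum subnet-create private-net 10.0.0.0/24 --name private-subnet

Ports are then created automatically when you boot instances onto that
network with nova boot.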

Salvatore


[1]
http://docs.openstack.org/trunk/openstack-network/admin/content/index.html


On 21 June 2013 09:53, Julio Carlos Barrera Juez <
juliocarlos.barr...@i2cat.net> wrote:

> Hi.
>
> We are trying to get a basic scenario with two VMs with private IP
> addresses configured in a Compute node controlled by a Controller node. We
> want to achieve a basic private network with some VMs. We tried using the Open
> vSwitch Quantum plugin to configure the network, but we have not achieved
> our objective so far.
>
> Is there any guide or basic scenario tutorial like this? We have found the
> documentation about basic networking in OpenStack using existing Quantum
> plugins to be poor, and the Open vSwitch documentation about it is too old (~2
> years).
>
> Thank you in advance.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Chris Behrens

On Jun 21, 2013, at 9:16 AM, Armando Migliaccio  wrote:

> In my view a cell should only know about the queue it's connected to, and let 
> the 'global' message queue do its job of dispatching the messages to the 
> right recipient: that would solve the problem altogether.
> 
> Were federated queues and topic routing not considered fit for the purpose? I 
> guess the drawback with this is that it is tied to Rabbit.

If you're referring to the rabbit federation plugin, no, it was not considered. 
  I'm not even sure that via rabbit queues is the right way to talk cell to 
cell.  But I really do not want to get into a full blown cells communication 
design discussion here.  We can do that in another thread, if we need to do so. 
:)

It is what it is today and this thread is just about how to express the 
configuration for it.

Regarding Mark's config suggestion:

> On Mon, Jun 17, 2013 at 2:14 AM, Mark McLoughlin  wrote:
> I don't know whether I like it yet or not, but here's how it might look:
> 
>  [cells]
>  parents = parent1
>  children = child1, child2
> 
>  [cell:parent1]
>  transport_url = qpid://host1/nova
> 
>  [cell:child1]
>  transport_url = qpid://host2/child1_nova
> 
>  [cell:child2]
>  transport_url = qpid://host2/child2_nova
[…]

Yeah, that's what I was picturing if going that route.  I guess the code for it 
is not bad at all.  But with oslo.config, can I reload (re-parse) the config 
file later, or does the service need to be restarted?

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Kevin L. Mitchell
On Fri, 2013-06-21 at 09:16 -0700, Armando Migliaccio wrote:
> In my view a cell should only know about the queue it's connected to,
> and let the 'global' message queue do its job of dispatching the
> messages to the right recipient: that would solve the problem
> altogether.

There is no "global" message queue in the context of cells.

> Were federated queues and topic routing not considered fit for the
> purpose? I guess the drawback with this is that it is tied to Rabbit.

Again, there's no single message queue in the context of cells.  I'm
assuming that was to avoid a bottleneck, but Chris Behrens would be able
to say better exactly why this design choice was made.  All I'm doing in
this discussion is trying to address one element of the current design;
I'm not trying to redesign cell communication.
-- 
Kevin L. Mitchell 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Daniel P. Berrange
On Fri, Jun 21, 2013 at 12:08:43PM -0400, Yun Mao wrote:
> Interesting. Does it automatically make the commit in "stealth mode" so
> that it's not seen in public? Thanks,

This tag is about asking for design input / code review from people with
security expertise for new work. As such, the code is all public.

Fixes for security flaws in existing code which need to be kept private
should not be sent via Gerrit. They should be reported privately as per
the guidelines here:

  http://www.openstack.org/projects/openstack-security/

> On Fri, Jun 21, 2013 at 11:26 AM, Bryan D. Payne  wrote:
> 
> > This is a quick note to announce that the OpenStack gerrit system supports
> > a SecurityImpact tag.  If you are familiar with the DocImpact tag, this
> > works in a similar fashion.
> >
> > Please use this in the commit message for any commits that you feel would
> > benefit from a security review.  Commits with this tag in the commit
> > message will automatically trigger an email message to the OpenStack
> > Security Group, allowing you to quickly tap into some of the security
> > expertise in our community.
> >
> > PTLs -- Please help spread the word and encourage use of this within your
> > projects.
> >
> > Cheers,
> > -bryan


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Armando Migliaccio
In my view a cell should only know about the queue it's connected to, and
let the 'global' message queue do its job of dispatching the messages to
the right recipient: that would solve the problem altogether.

Were federated queues and topic routing not
considered fit for the purpose? I guess the drawback with this is that
it is tied to Rabbit.

On Mon, Jun 17, 2013 at 2:14 AM, Mark McLoughlin  wrote:

> On Fri, 2013-06-14 at 12:41 -0700, Chris Behrens wrote:
> > On Jun 13, 2013, at 11:26 PM, Mark McLoughlin  wrote:
> >
> > > Is there any reason not to just put it in nova.conf as transport URLs?
> > >
> > > (By nova.conf, I mean cfg.CONF for nova which means you could happily
> do
> > > e.g. --config-file nova-cells.conf too)
> >
> > The issue with using cfg.CONF is that we need the same config options
> > for each cell.  So, I guess you'd have to dynamically create the
> > config groups with cell name as the group name… and have some sort of
> > global option or an option in a different config group that specifies
> > the list of cells.  I think it just gets kinda nasty.  But let me know
> > if I'm missing a good way to do it.  It seems like JSON is going to be
> > a little more flexible. :)
>
> I don't know whether I like it yet or not, but here's how it might look:
>
>  [cells]
>  parents = parent1
>  children = child1, child2
>
>  [cell:parent1]
>  transport_url = qpid://host1/nova
>
>  [cell:child1]
>  transport_url = qpid://host2/child1_nova
>
>  [cell:child2]
>  transport_url = qpid://host2/child2_nova
>
> Code for parsing that is:
>
>  from oslo.config import cfg
>
>  cells_opts = [
>  cfg.ListOpt('parents', default=[]),
>  cfg.ListOpt('children', default=[]),
>  ]
>
>  cell_opts = [
>  cfg.StrOpt('api_url'),
>  cfg.StrOpt('transport_url'),
>  cfg.FloatOpt('weight_offset', default=0.0),
>  cfg.FloatOpt('weight_scale', default=1.0),
>  ]
>
>  conf = cfg.CONF
>  conf(['--config-file=cells.conf'])
>  conf.register_opts(cells_opts, group='cells')
>
>  def get_cell(conf, name):
>  group_name = 'cell:' + name
>  conf.register_opts(cell_opts, group=group_name)
>  return conf[group_name]
>
>  for cell in conf.cells.parents + conf.cells.children:
>  print cell, get_cell(conf, cell).items()
>
>
> Ok ... I think I do like it :)
>
> Cheers,
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Thierry Carrez
Yun Mao wrote:
> Interesting. Does it automatically make the commit in "stealth mode" so
> that it's not seen in public? Thanks,

Not at all.

(1) there is no stealth mode in Gerrit
(2) this is meant to get security design advice on new features, not to
submit exploitable vulnerabilities. For the latter, see
http://www.openstack.org/projects/openstack-security/

Thanks!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Yun Mao
Interesting. Does it automatically make the commit in "stealth mode" so
that it's not seen in public? Thanks,

Yun


On Fri, Jun 21, 2013 at 11:26 AM, Bryan D. Payne  wrote:

> This is a quick note to announce that the OpenStack gerrit system supports
> a SecurityImpact tag.  If you are familiar with the DocImpact tag, this
> works in a similar fashion.
>
> Please use this in the commit message for any commits that you feel would
> benefit from a security review.  Commits with this tag in the commit
> message will automatically trigger an email message to the OpenStack
> Security Group, allowing you to quickly tap into some of the security
> expertise in our community.
>
> PTLs -- Please help spread the word and encourage use of this within your
> projects.
>
> Cheers,
> -bryan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][State-Management] Task/Workflow requirements for Heat

2013-06-21 Thread Joshua Harlow
Sweet, thx Zane. 

Looking forward to figuring out how we can make this all work. Fun times ahead 
;)

Sent from my really tiny device...

On Jun 21, 2013, at 1:24 AM, "Zane Bitter"  wrote:

> On 20/06/13 21:34, Joshua Harlow wrote:
>> Thanks Adrian for adding that,
>> 
>> Zane, it would be great if you could show up. I have a few questions about
>> said heat requirements, especially about how the current mechanism
>> accomplishes those requirements.
> 
> Sorry for missing that meeting, I left the house right after sending that 
> email. Unfortunately (for you ;) I won't be around for the next couple of 
> weeks, but let's definitely sync when I get back.
> 
>> IMHO I'd rather not have 2 workflow libraries (aka your scheduler.py) and
>> taskflow. It would be advantageous I think to focus on one way if we can.
>> This would be beneficial to all and if we can merge those ideas into
>> taskflow I'm all for it personally. Since one of the possible
>> ending-points for taskflow is in oslo, that would seem like a useful merge
>> of ideas and code instead of a divergent approach.
> 
> +1
> 
> I wanted to wait until I had tested it with some more complicated use cases 
> before trying to push it outside of Heat. Now that that is done and I have a 
> reasonable level of confidence in it, it would be good to explore which parts 
> can be rolled into TaskFlow and which can be replaced by existing stuff in 
> TaskFlow. Documenting the requirements that it is currently satisfying in 
> Heat was the first step in that process.
> 
> cheers,
> Zane.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing OS_AUTH_SYSTEM

2013-06-21 Thread Monty Taylor


On 06/21/2013 06:53 AM, Chmouel Boudjnah wrote:
> Hello,
> 
> We have discussed this some time ago to remove the OS_AUTH_SYSTEM from
> novaclient since this was implemented for RAX and these days RAX has
> moved to pyrax.

Rackspace should be ashamed for having spent a single second of effort
on pyrax instead of on python-openstackclient. It's actually really
insulting.

> Since last time I have looked into this it seems that there was some
> updates to it :
> 
> https://github.com/openstack/python-novaclient/blob/master/novaclient/auth_plugin.py
> 
> This made me wonder if it was needed by other people and why?
> 
> This is some preliminary works to move novaclient to use
> keystoneclient instead of implementing its own[1] client to keystone.
> If the OS_AUTH_SYSTEM feature was really needed[2] we should then
> move it to keystoneclient.

Agree. If this is a general feature someone needs, we should move it to
keystoneclient.


> [1] weirdo with bunch of obsoletes stuff I may need to add.
> [2] and IMO this goes against a one true open cloud.
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuring Quantum REST Proxy Plugin

2013-06-21 Thread Salvatore Orlando
Hi Julio,

If I get your message correctly, you have a proxy which is pretty much a
shim layer between the big switch plugin (QuantumRestProxyV2) and the
OpenNaaS server.
In this case all you need to do is to configure the [restproxy] section of
etc/quantum/plugins/bigswitch/restproxy.ini with the endpoint of your
OpenNaaS server.
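
Something along these lines (the option name and value here are illustrative
assumptions only -- check the sample restproxy.ini shipped with the plugin for
the exact option):

  [restproxy]
  # host:port of the REST endpoint the plugin proxies to, i.e. your OpenNaaS server
  servers = opennaas.example.org:8080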

Regards,
Salvatore


On 18 June 2013 14:13, Julio Carlos Barrera Juez <
juliocarlos.barr...@i2cat.net> wrote:

> Hi.
>
> We're trying to configure Quantum REST Proxy Plugin to use an external
> Network service developed by ourselves in the context of OpenNaaS Project
> [1]. We have developed a REST server to listen Proxy requests. We want to
> modify Plugin configuration as described in OpenStack official
> documentation [2] and OpenStack Wiki [3].
>
> It is possible to configure path of the URL in the plugin configuration
> like server host and port?
>
> Thank you!
>
>
>  [1] OpenNaaS Project - http://www.opennaas.org/
> [2] OpenStack official documentation -
> http://docs.openstack.org/trunk/openstack-network/admin/content/bigswitch_floodlight_plugin.html
> [3] OpenStack Wiki -
> https://wiki.openstack.org/wiki/Quantum/RestProxyPlugin#Quantum_Rest_Proxy_Plugin
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Bryan D. Payne
This is a quick note to announce that the OpenStack gerrit system supports
a SecurityImpact tag.  If you are familiar with the DocImpact tag, this
works in a similar fashion.

Please use this in the commit message for any commits that you feel would
benefit from a security review.  Commits with this tag in the commit
message will automatically trigger an email message to the OpenStack
Security Group, allowing you to quickly tap into some of the security
expertise in our community.
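
For example, a commit message using the tag might look like the following (the
change itself is made up for illustration; as with DocImpact, the tag just needs
to appear in the commit message, typically on its own line):

  Harden token validation in the auth middleware

  ... description of the change ...

  SecurityImpact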

PTLs -- Please help spread the word and encourage use of this within your
projects.

Cheers,
-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] about the ovs plugin & ovs setup for the tunnel network type

2013-06-21 Thread Dan Wendlandt
For the list, I'll post the same response I gave you when you pinged me
off-list about this:

It was a long time ago when I wrote that, and the original goal was to use a
single bridge, but there was something about how OVS worked at the time
that required the two bridges.  I think it was related to how the OVS
bridge needed to learn to associate MAC addresses + VLANs to particular
tunnel ports, but I don't remember the details.  This stuff was pretty
simple, so I'm guessing if you mess around with it for a little bit, you'll
either find that it now works (due to some change in OVS) or that it still
doesn't (in which case, it will be obvious why).


On Thu, Jun 20, 2013 at 9:00 AM, Armando Migliaccio
wrote:

> Something similar was discussed a while back on this channel:
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-May/008752.html
>
> See if it helps.
>
> Cheers,
> Armando
>
> On Wed, Jun 5, 2013 at 1:11 PM, Jani, Nrupal wrote:
>
>>  Hi there,
>>
>> I am a little new to the OpenStack networking project, previously known as
>> quantum :)
>>
>> Anyway, I have a few simple questions regarding the way OVS gets configured
>> the way it is in the current form in KVM:
>>
>> - As I understand, OVS sets up two datapath instances, br-int & br-tun, and
>>   uses a patch port to connect them. Additionally, it uses local VLANs in
>>   br-int for the VM-to-VM traffic.
>>   - I understand the reason behind the current setup, but I am not sure why
>>     it needs to be like this.
>>   - Can't the same features be supported with a single instance, like
>>     br-int, with the flows set up correctly to get things right, including
>>     quantum security groups?
>>
>> I know there must be some technical reasons behind all this, but I just
>> want to get some history & also want to know whether anyone is planning to
>> enhance it in the future?
>>
>> Thx,
>>
>> Nrupal.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs] Proposed simplification around blueprint tracking

2013-06-21 Thread Thierry Carrez
Thierry Carrez wrote:
> A script will automatically and regularly align "series goal" with
> "target milestone", so that the series and milestone views are
> consistent (if someone sets target milestone to "havana-3" then the
> series goal will be set to "havana").

Now if the Launchpad API was exporting the "series goal" property, that
would be easier... investigating workarounds.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding 'rm' to compute filter

2013-06-21 Thread Karajgi, Rohit
Hi,

Referring to the Jenkins failure logs on 
https://review.openstack.org/#/c/32549/3,
Log at 
http://logs.openstack.org/32549/3/check/gate-nova-python27/25158/console.html

The command that the test tried to execute using nova's rootwrap was:
COMMAND=/home/jenkins/workspace/gate-nova-python27/.tox/py27/bin/nova-rootwrap 
/etc/nova/rootwrap.conf rm 
/tmp/tmp.WVIZziaxuv/tmp_2n7x0/tmpbuRC0e/instance-fake.log

I am not sure whether the CI infrastructure will allow this, as it is attempting to
perform the 'rm' operation as the root user, which is unsafe. In any case, the test above fails.

Also, some thoughts occurred to me while re-examining the patch:

log_file_path = '%s/%s.log' % (CONF.libvirt_log_path, instance_name)

Assuming libvirt_log_path = /var/log/libvirt, and as /var/log is owned
by the 'root' user, run_as_root=True in the utils.execute call is acceptable.

If libvirt_log_path is configured to something else, say /opt/data/logs/xyz,
which does not require root access to perform 'rm', then we don't need
run_as_root=True.
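
For instance, something along these lines could decide at runtime whether root
is actually needed (just a sketch; the helper name and the exact check are
illustrative, not what the patch currently does):

    import os

    from nova import utils

    def delete_libvirt_log(log_file_path):
        # Escalate to root only when the service user cannot remove the
        # file itself (e.g. for logs under /var/log/libvirt).
        need_root = not os.access(os.path.dirname(log_file_path), os.W_OK)
        utils.execute('rm', log_file_path, run_as_root=need_root)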

As mentioned above, adding '/bin/rm' with root privilege to the compute filter
is unsafe: if some bad tests are added to Jenkins, they might end up doing 'rm'
on another directory as the root user.

Thoughts on how this issue should be addressed in CI, or in the code?


Best Regards,
Rohit Karajgi | Technical Analyst | NTT Data Global Technology Services Private 
Ltd | w. +91.20.6604.1500 x 627 |  m. +91 992.242.9639 | 
rohit.kara...@nttdata.com

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Consolidate CLI Authentication

2013-06-21 Thread
Hi,

I have some questions regarding the "Consolidate CLI Authentication" 
(https://etherpad.openstack.org/keystoneclient-cli-auth and 
https://review.openstack.org/#/c/21942/).

It looks like the code for the keystone client is almost ready for merge. What 
are the plans for the other clients (nova, glance, etc) to use this code (if 
any)? Is there any related change expected on horizon?

Thanks,
- Juliano



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hello Sean, all,

Currently there are ~30 test classes in DB API tests, containing ~370 test
cases. setUpClass()/tearDownClass() would definitely be an improvement, but
applying all DB schema migrations for MySQL 30 times is going to take a
long time...

Thanks,
Roman


On Fri, Jun 21, 2013 at 3:02 PM, Sean Dague  wrote:

> On 06/21/2013 07:40 AM, Roman Podolyaka wrote:
>
>> Hi, all!
>>
>> In Nova we've got a DB access layer known as "DB API" and tests for it.
>> Currently, those tests are run only for SQLite in-memory DB, which is
>> great for speed, but doesn't allow us to spot backend-specific errors.
>>
>> There is a blueprint
>> (https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
>> by Boris Pavlovic, whose goal is to run the DB API tests on all DB
>> backends (e.g. SQLite, MySQL and PostgreSQL). Recently, I've been
>> working on the implementation of this BP
>> (https://review.openstack.org/#/c/33236/).
>>
>> The chosen approach for implementing this is best explained by going
>> through a list of problems which must be solved:
>>
>> 1. Tests should be executed concurrently by testr.
>>
>> testr creates a few worker processes each running a portion of test
>> cases. When SQLite in-memory DB is used for testing, each of those
>> processes has its own DB in its address space, so no race conditions
>> are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
>> would fail due to various race conditions. Thus, we must create a
>> separate DB for each of test running processes and drop those, when all
>> tests end.
>>
>> The question is, where we should create/drop those DBs? There are a few
>> possible places in our code:
>> 1) setUp()/tearDown() methods of test cases. These are executed for
>> each test case (there are ~370 tests in test_db_api). So it must be a
>> bad idea to create/apply migrations/drop DB 370 times, if MySQL or
>> PostgreSQL are used instead of SQLite in-memory DB
>> 2) testr supports creation of isolated test environments
>> (https://testrepository.**readthedocs.org/en/latest/**
>> MANUAL.html#remote-or-**isolated-test-environments
>> ).
>> Long story short: we can specify commands to execute before tests are
>> run, after test have ended and how to run tests
>>  3) module/package level setUp()/tearDown(), but these are probably
>> supported only in nosetest
>>
>
> How many classes are we talking about? We're actually going after a
> similar problem in Tempest, where setUp isn't cheap, so Matt Treinish has an
> experimental patch to testr which allows class level partitioning instead.
> Then you can use setupClass / teardownClass for expensive resource setup.
>
>
>  So:
>> 1) before tests are run, a few test DBs are created (the number of
>> created DBs is equal to the used concurrency level value)
>> 2) for each test running process an env variable, containing the
>> connection string to the created DB, is set;
>> 3) after all test running processes have ended, the created DBs are
>> dropped.
>>
>> 2. Tests cleanup should be fast.
>>
>> For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
>> DB" pattern, but that would be too slow for running tests on MySQL or
>> PostgreSQL.
>>
>> Another option would be to create DB only once for each of test running
>> processes, apply DB migrations and then run each test case within a DB
>> transaction which is rolled back after a test ends. Combining with
>> something like "fsync = off" option of PostgreSQL this approach works
>> really fast (on my PC it takes ~5 s to run DB API tests on SQLite and
>> ~10 s on PostgreSQL).
>>
>
> I like the idea of creating a transaction in setup, and triggering
> rollback in teardown, that's pretty clever.
>
>
>  3. Tests should be easy to run for developers as well as for Jenkins.
>>
>> DB API tests are the only tests which should be run on different
>> backends. All other test cases can be run on SQLite. The convenient way
>> to do this is to create a separate tox env, running only DB API tests.
>> Developers specify the DB connection string which effectively defines
>> the backend that should be used for running tests.
>>
>> I'd rather not run those tests 'opportunistically' in py26 and py27 as
>> we do for migrations, because they are going to be broken for some time
>> (most problems are described here
>> https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
>> So it would be really nice to have a separate non-voting gate test.
>>
>
> Separate tox env is the right approach IMHO, that would let it run
> isolated non-voting until we get to the bottom of the issues.

Re: [openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Sean Dague

On 06/21/2013 07:40 AM, Roman Podolyaka wrote:

Hi, all!

In Nova we've got a DB access layer known as "DB API" and tests for it.
Currently, those tests are run only for SQLite in-memory DB, which is
great for speed, but doesn't allow us to spot backend-specific errors.

There is a blueprint
(https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
by Boris Pavlovic, which goal is to run the DB API tests on all DB
backends (e. g. SQLite, MySQL and PosgreSQL). Recently, I've been
working on implementation of this BP
(https://review.openstack.org/#/c/33236/).

The chosen approach for implementing this is best explained by going
through a list of problems which must be solved:

1. Tests should be executed concurrently by testr.

testr creates a few worker processes each running a portion of test
cases. When SQLite in-memory DB is used for testing, each of those
processes has its own DB in its address space, so no race conditions
are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
would fail due to various race conditions. Thus, we must create a
separate DB for each of test running processes and drop those, when all
tests end.

The question is, where we should create/drop those DBs? There are a few
possible places in our code:
1) setUp()/tearDown() methods of test cases. These are executed for
each test case (there are ~370 tests in test_db_api). So it must be a
bad idea to create/apply migrations/drop DB 370 times, if MySQL or
PostgreSQL are used instead of SQLite in-memory DB
2) testr supports creation of isolated test environments
(https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
Long story short: we can specify commands to execute before tests are
run, after test have ended and how to run tests
 3) module/package level setUp()/tearDown(), but these are probably
supported only in nosetest


How many classes are we talking about? We're actually going after a 
similar problem in Tempest, where setUp isn't cheap, so Matt Treinish has 
an experimental patch to testr which allows class level partitioning 
instead. Then you can use setupClass / teardownClass for expensive 
resource setup.



So:
1) before tests are run, a few test DBs are created (the number of
created DBs is equal to the used concurrency level value)
2) for each test running process an env variable, containing the
connection string to the created DB, is set;
3) after all test running processes have ended, the created DBs are
dropped.

2. Tests cleanup should be fast.

For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
DB" pattern, but that would be too slow for running tests on MySQL or
PostgreSQL.

Another option would be to create DB only once for each of test running
processes, apply DB migrations and then run each test case within a DB
transaction which is rolled back after a test ends. Combining with
something like "fsync = off" option of PostgreSQL this approach works
really fast (on my PC it takes ~5 s to run DB API tests on SQLite and
~10 s on PostgreSQL).


I like the idea of creating a transaction in setup, and triggering 
rollback in teardown, that's pretty clever.



3. Tests should be easy to run for developers as well as for Jenkins.

DB API tests are the only tests which should be run on different
backends. All other test cases can be run on SQLite. The convenient way
to do this is to create a separate tox env, running only DB API tests.
Developers specify the DB connection string which effectively defines
the backend that should be used for running tests.

I'd rather not run those tests 'opportunistically' in py26 and py27 as
we do for migrations, because they are going to be broken for some time
(most problems are described here
https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
So it would be really nice to have a separate non-voting gate test.


Separate tox env is the right approach IMHO, that would let it run 
isolated non-voting until we get to the bottom of the issues. For 
simplicity I'd still use the opportunistic db user / pass, as that will 
mean it could run upstream today.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hi, all!

In Nova we've got a DB access layer known as "DB API" and tests for it.
Currently, those tests are run only for SQLite in-memory DB, which is great
for speed, but doesn't allow us to spot backend-specific errors.

There is a blueprint (
https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
by Boris Pavlovic, whose goal is to run the DB API tests on all DB backends
(e.g. SQLite, MySQL and PostgreSQL). Recently, I've been working on
implementation of this BP (https://review.openstack.org/#/c/33236/).

The chosen approach for implementing this is best explained by going
through a list of problems which must be solved:

1. Tests should be executed concurrently by testr.

testr creates a few worker processes each running a portion of test cases.
When SQLite in-memory DB is used for testing, each of those processes has
its own DB in its address space, so no race conditions are possible. If we
used a shared MySQL/PostgreSQL DB, the test suite would fail due to various
race conditions. Thus, we must create a separate DB for each of the
test-running processes and drop those when all tests end.

The question is: where should we create/drop those DBs? There are a few
possible places in our code:
   1) setUp()/tearDown() methods of test cases. These are executed for each
test case (there are ~370 tests in test_db_api). So it must be a bad idea
to create/apply migrations/drop DB 370 times, if MySQL or PostgreSQL are
used instead of SQLite in-memory DB
   2) testr supports creation of isolated test environments (
https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
Long story short: we can specify commands to execute before tests are run,
after test have ended and how to run tests
3) module/package level setUp()/tearDown(), but these are probably
supported only in nosetest

So:
   1) before tests are run, a few test DBs are created (the number of
created DBs is equal to the used concurrency level value)
   2) for each test running process an env variable, containing the
connection string to the created DB, is set;
   3) after all test running processes have ended, the created DBs are
dropped.
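
A rough sketch of what steps 1)-3) could look like (the helper names, database
naming scheme and admin-URL handling are all illustrative assumptions, MySQL
flavoured; PostgreSQL would additionally need autocommit for the DDL):

    import sqlalchemy

    def provision_test_dbs(admin_url, concurrency):
        # One throwaway database per test-running process; the returned
        # connection strings are handed to the workers via an env variable.
        engine = sqlalchemy.create_engine(admin_url)
        urls = []
        for i in range(concurrency):
            name = 'nova_dbapi_test_%d' % i
            engine.execute('DROP DATABASE IF EXISTS %s' % name)
            engine.execute('CREATE DATABASE %s' % name)
            urls.append(admin_url.rsplit('/', 1)[0] + '/' + name)
        return urls

    def drop_test_dbs(admin_url, concurrency):
        engine = sqlalchemy.create_engine(admin_url)
        for i in range(concurrency):
            engine.execute('DROP DATABASE IF EXISTS nova_dbapi_test_%d' % i)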

2. Tests cleanup should be fast.

For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
DB" pattern, but that would be too slow for running tests on MySQL or
PostgreSQL.

Another option would be to create the DB only once for each test-running
process, apply DB migrations and then run each test case within a DB
transaction which is rolled back after the test ends. Combined with
something like the "fsync = off" option of PostgreSQL, this approach works
really fast (on my PC it takes ~5 s to run DB API tests on SQLite and ~10 s
on PostgreSQL).
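
As an illustration, the per-test transaction trick could look roughly like this
(a sketch only, using plain SQLAlchemy with a made-up connection string; not
necessarily how the change under review implements it):

    import sqlalchemy
    import testtools

    class DBAPITestCase(testtools.TestCase):
        # Created once per test-running process, after migrations have been
        # applied to that process' database.
        engine = sqlalchemy.create_engine(
            'postgresql://openstack_citest:secret@localhost/test_db_0')

        def setUp(self):
            super(DBAPITestCase, self).setUp()
            self.connection = self.engine.connect()
            self.trans = self.connection.begin()
            # DB API calls under test should be bound to self.connection.

        def tearDown(self):
            # Throw away everything the test did instead of rebuilding the schema.
            self.trans.rollback()
            self.connection.close()
            super(DBAPITestCase, self).tearDown()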

3. Tests should be easy to run for developers as well as for Jenkins.

DB API tests are the only tests which should be run on different backends.
All other test cases can be run on SQLite. The convenient way to do this is
to create a separate tox env, running only DB API tests. Developers specify
the DB connection string which effectively defines the backend that should
be used for running tests.

I'd rather not run those tests 'opportunistically' in py26 and py27 as we
do for migrations, because they are going to be broken for some time (most
problems are described here
https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
So it would be really nice to have a separate non-voting gate test.


I would really like to receive some comments from Nova and Infra guys
on whether this is an acceptable approach to running DB API tests and how
we can improve this.

Thanks,
Roman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing OS_AUTH_SYSTEM

2013-06-21 Thread Álvaro López García
Hi,

some comments inline.
On Fri 21 Jun 2013 (12:53), Chmouel Boudjnah wrote:
> Hello,
>
> We have discussed this some time ago to remove the OS_AUTH_SYSTEM from
> novaclient since this was implemented for RAX and these days RAX has
> moved to pyrax.
> 
> Since last time I have looked into this it seems that there was some
> updates to it :
> 
> https://github.com/openstack/python-novaclient/blob/master/novaclient/auth_plugin.py
> 
> This made me wonder if it was needed by other people and why?

I am the one responsible for that commit, so obviously we are
using it :-)

We are using it for X.509 auth, where we need to authenticate against
the HTTP server where keystone is running with the user certificate
instead of using the passwordCredentials dict. Basic or Digest auth are
other use cases for this system.

IMHO, as long as keystone allows for external authentication (as it does),
the auth plugin system on the client side should exist.

> This is some preliminary works to move novaclient to use
> keystoneclient instead of implementing its own[1] client to keystone.
> If the OS_AUTH_SYSTEM feature was really needed[2] we should then
> move it to keystoneclient.

I think it is needed, and I think it should be moved to keystoneclient,
then let the other clients use keystoneclient for auth.

> Thoughts?
> 
> Chmouel.
> 
> [1] weirdo with bunch of obsoletes stuff I may need to add.

Completely agree.

> [2] and IMO this goes against a one true open cloud.

Why do you think [2] goes in that direction?

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://devel.ifca.es/~aloga/
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
"If you haven't used grep, you've missed one of the simple pleasures of
 life." -- Brian Kernighan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Removing OS_AUTH_SYSTEM

2013-06-21 Thread Chmouel Boudjnah
Hello,

We have discussed this some time ago to remove the OS_AUTH_SYSTEM from
novaclient since this was implemented for RAX and these days RAX has
moved to pyrax.

Since last time I have looked into this it seems that there was some
updates to it :

https://github.com/openstack/python-novaclient/blob/master/novaclient/auth_plugin.py

This made me wonder if it was needed by other people and why?

This is some preliminary works to move novaclient to use
keystoneclient instead of implementing its own[1] client to keystone.
If the OS_AUTH_SYSTEM feature was really needed[2] we should then
move it to keystoneclient.

Thoughts?

Chmouel.

[1] weirdo with bunch of obsoletes stuff I may need to add.
[2] and IMO this goes against a one true open cloud.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread John Garbutt
+1 for a world class JSON API, tooling and docs.

I was +1 for a cheap XML API, but maybe that will make those who want
XML even more unhappy?

John

On 20 June 2013 21:57, Kevin L. Mitchell  wrote:
> On Thu, 2013-06-20 at 16:02 -0400, Sean Dague wrote:
>> There are lots of nice things we could do, given time and people. But
>> the reality is that relatively few people are actually working on the
>> API code, documentation, tooling around it.
>>
>> I would much rather have us deliver a world class JSON API with
>> validation and schema and comprehensive testing, than the current state
>> of our JSON + XML approach which is poorly documented and only partially
>> tested.
>
> +1.  Let's drop XML like a hot rock.
> --
> Kevin L. Mitchell 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling a tenant still allow user token

2013-06-21 Thread Chmouel Boudjnah
Hello,

[moving on the public mailing list since this bug is anyway public]

On 3 Jun 2013, at 17:25, Dolph Mathews  wrote:

> Apologies for the delayed response on this. We have several related open bugs 
> and I wanted to investigate them all at once, and perhaps fix them all in one 
> pass.
> Disabling a tenant/project should result in existing tokens scoped to that 
> tenant/project being immediately invalidated, so I think Chmouel's analysis 
> is absolutely valid.
> Regarding "list_users_in_project" -- as Guang suggested, the semantics of 
> that call are inherently complicated,


looking into this it seems that we have already such function :

https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L608

Should it get fixed?

> so ideally we can just ask the token driver to revoke tokens with some 
> context (a user OR a tenant OR a user+tenant combination). We've been going 
> down that direction, but have been incredibly inconsistent in how it's 
> utilized. I'd like to have a framework to consistently apply the consequences 
> of disabling/deleting any entity in the system.
> 

agreed, I think this should be doable if we can modify :

https://github.com/openstack/keystone/blob/master/keystone/token/core.py#L169

changing the default user_id to None

as for the getting the tokens for a specify project/tenant if we are not using 
a list_users_in_project would that mean we need to parse all the tokens to get 
the metadatas/extras tenant_id or there is some more efficient ways?

Chmouel.

> 
> -Dolph
> 
> 
> On Wed, May 29, 2013 at 9:59 AM, Yee, Guang  wrote:
> Users does not really belong to a project. They have access to, or associated 
> with, a project via role grant(s). Therefore, when disabling a project, we 
> should only invalidate the tokens scoped to that project. But yes, you should 
> be able to use the same code to invalidate the tokens when disabling a 
> project.
> 
>  
> 
> https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164
> 
>  
> 
> We have to be careful with list_users_in_project as user can associate with 
> project with either direct role grant, or indirectly via group membership and 
> group grant.  This is going to get complicated with the addition of inherited 
> role grants.
> 
>  
> 
>  
> 
> Guang
> 
>  
> 
>  
> 
> From: Chmouel Boudjnah [mailto:chmo...@enovance.com] 
> Sent: Wednesday, May 29, 2013 2:23 AM
> To: Adam Young; Dolph Mathews; Henry Nash; Joseph Heck; Yee, Guang; 
> d...@enovance.com
> Subject: disabling a tenant still allow user token
> 
>  
> 
> Hi,
> 
> Apologies for the direct email but I will be happy to move this on 
> openstack-dev@ before to make sure it's not security involved.
> 
> I'd like to bring you this bug :
> 
> https://bugs.launchpad.net/keystone/+bug/1179955
> 
> to your attention.
> 
> Basically for the TL;DR when disabling a tenant don't disable the tokens of 
> the user attached to it. 
> 
> We could probably do that :
> 
> https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164
> 
> when updating a tenant. but I need to find a way to list users attached to a 
> tenant (without having to list all the users).
> 
> not being able to list_users_in_project() is it something intended by 
> keystone?
> 
> Do you see a workaround for how to delete tokens of all users belonging to a 
> tenants?
> 
> Let me know what do you think.
> 
> Cheers,
> Chmouel.
> 
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Daniel P. Berrange
On Thu, Jun 20, 2013 at 12:48:16PM -0400, Russell Bryant wrote:
> On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
> > Hello, I created a blueprint for the implementation of:
> > 
> > A tool for pinning automatically each running virtual CPU to a physical
> > one in the most efficient way, balancing load across sockets/cores and
> > maximizing cache sharing/minimizing cache misses. Ideally able to be run
> > on-demand, as a periodic job, or be triggered by events on the host (vm
> > spawn/destroy).
> > 
> > Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
> > 
> > Any input appreciated!
> 
> I'm actually surprised to see a new tool for this kind of thing.
> 
> Have you seen numad?

The approach used by 'pinhead' tool dscribed in the blueprint seems
to be pretty much equivalent to what 'numad' is already providing
for Libvirt KVM and LXC guests.

NB, numad is actually a standalone program for optimizing NUMA
placement of any processes on a server. Libvirt talks to it when
starting a guest to request info on where best to place the guest.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Daniel P. Berrange
On Fri, Jun 21, 2013 at 09:10:32AM +, Bob Ball wrote:
> It seems that numad is libvirt specific - is that the case?

No, it is a completely independant project

  https://git.fedorahosted.org/git/numad.git

It existed before libvirt started using it for automatic placement.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Adding a xenapi_max_ephemeral_disk_size_gb flag

2013-06-21 Thread John Garbutt
It's just that the limit you want to set varies depending on the
flavor. So with flavor = 10TB and limit = 2000GB, the final disk is a bit of
an odd size; maybe 1024GB would be a better split.

Although you make a good point, maybe we should set the "splitting
point" in extra specs in the flavor, rather than a config value.

John

On 20 June 2013 18:21, Russell Bryant  wrote:
> On 06/20/2013 01:02 PM, John Garbutt wrote:
>> Hi,
>>
>> I have had some discussions about if I should add a config flag in this 
>> change:
>> https://review.openstack.org/#/c/32760/
>>
>> I am looking to support adding a large amount of ephemeral disk space
>> to a VM, but the VHD format has a limit of around 2TB per disk. To
>> work around this in XenServer, I plan to add several smaller disks to
>> make up the full ephemeral disk space.
>>
>> To me it seems worth adding a configuration flag (in this case),
>> because there is no easy way to guess the correct value, and there
>> doesn't seem to be a great value to hardcode it to. Having said that,
>> I suspect the default value will be all most people need, whether or
>> not they have very large ephemeral disk space in their flavors.
>>
> >> I am curious what people think (in general, and in this case) about
> >> the tradeoff between hardcoding a value, using a magic heuristic
> >> calculation, and adding a config option.
>
> Is it really worth adding a config option for this when you effectively
> set the limit already by configuring flavors?  Or am I missing something?
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Bob Ball
It seems that numad is libvirt specific - is that the case?

I'm not sure whether there is a similar daemon for other hypervisors, but would it
make sense to have this functionality in OpenStack so we can extend it to work for
each hypervisor, allowing each to control the affinity in its own way? I guess this
would need the Pinhead tool either to support multiple hypervisors or to provide
the pinning strategy to Nova, which could then invoke the individual drivers.

Outside NUMA optimisations, I think there are good reasons for Nova to support
modifying the affinity/pinning rules - for example, I can imagine that some
flavours might be permitted dedicated or isolated vCPUs. Integrating this tool
would allow us to provide it with further hints/rules defined by the flavour or
administrator.
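
For what such per-flavour hints might look like, a small sketch using
novaclient's flavor extra specs (the key names below are purely
hypothetical, invented for illustration; they are not existing Nova
extra specs):

    # Hypothetical sketch: tag a flavour with pinning hints via extra specs.
    # 'pinning:policy' and 'pinning:exclusive' are invented key names.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://controller:5000/v2.0/')
    flavor = nova.flavors.find(name='m1.large')
    flavor.set_keys({'pinning:policy': 'dedicated',
                     'pinning:exclusive': 'true'})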

Bob

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: 20 June 2013 17:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Efficiently pin running VMs to physical CPUs 
automatically

On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
> Hello, I created a blueprint for the implementation of:
> 
> A tool for pinning automatically each running virtual CPU to a physical
> one in the most efficient way, balancing load across sockets/cores and
> maximizing cache sharing/minimizing cache misses. Ideally able to be run
> on-demand, as a periodic job, or be triggered by events on the host (vm
> spawn/destroy).
> 
> Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
> 
> Any input appreciated!

I'm actually surprised to see a new tool for this kind of thing.

Have you seen numad?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Giorgio Franceschi

On 21-06-13 07:24, Kashyap Chamarthy wrote:

On 06/20/2013 10:18 PM, Russell Bryant wrote:

On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:

Hello, I created a blueprint for the implementation of:

A tool for pinning automatically each running virtual CPU to a physical
one in the most efficient way, balancing load across sockets/cores and
maximizing cache sharing/minimizing cache misses. Ideally able to be run
on-demand, as a periodic job, or be triggered by events on the host (vm
spawn/destroy).

Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning

Any input appreciated!

I'm actually surprised to see a new tool for this kind of thing.

Have you seen numad?


And a related post by Dan Berrange (but on lower layers -- libvirt) which 
explains how to
do vcpu pinning and control NUMA affinity --
http://berrange.com/posts/2010/02/12/controlling-guest-cpu-numa-affinity-in-libvirt-with-qemu-kvm-xen/

Yes Kashyap, thanks for the link, I had read that article while
researching the problem. It addresses the same issues, but the solution
is based on install-time, static configuration. We want something
requiring as little config as possible and able to allocate VMs at
runtime. Basically, what the author suggests in section "Fine tuning CPU
affinity at runtime", but automated for large-scale, hands-off environments.

Russell, thanks for your suggestion, I did not know of numad. It looks
interesting, but the way I understand it, it is a system-wide
NUMA-binding daemon without any configuration options or fine-tuning
capabilities. We want something that only deals with relevant kvm
processes, not all processes on the system, and also we would like to
make it configurable so that VM can advertise their "pinnability",
because sometimes you might not want all running domains on a host to be
treated the same. This is planned for a future release. Would numad be
suited to this task, in your opinion? I suppose one could use it as a
querying tool, with the -w switch, and apply its suggestions
selectively. Then, it would basically replace the strategy-making part
of pinhead. I will investigate this shortly.
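
For illustration, a rough sketch of that "query numad -w, then apply its
advice selectively" idea, using the libvirt Python bindings. Assumptions
(not verified here): numad -w prints a NUMA nodeset string such as "0" or
"0-1", and /sys/devices/system/node/node<N>/cpulist lists each node's
physical CPUs; the domain name at the bottom is a placeholder.

    # Illustrative sketch only.
    import subprocess

    import libvirt


    def parse_cpu_list(s):
        """Expand a cpulist string like '0-3,8' into a set of ints."""
        cpus = set()
        for part in s.strip().split(','):
            if '-' in part:
                lo, hi = part.split('-')
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
        return cpus


    def node_cpus(node):
        """Physical CPUs belonging to one NUMA node (Linux sysfs)."""
        with open('/sys/devices/system/node/node%d/cpulist' % node) as f:
            return parse_cpu_list(f.read())


    def numad_advice(vcpus, memory_mb):
        """Ask numad for a placement nodeset for a vcpus/memory workload."""
        out = subprocess.check_output(['numad', '-w',
                                       '%d:%d' % (vcpus, memory_mb)])
        return parse_cpu_list(out.decode().strip())


    def pin_domain(conn, name, vcpus, memory_mb):
        """Pin every vCPU of one chosen domain to the advised nodes' CPUs."""
        allowed = set()
        for node in numad_advice(vcpus, memory_mb):
            allowed |= node_cpus(node)
        ncpus = conn.getInfo()[2]  # total physical CPUs on the host
        cpumap = tuple(cpu in allowed for cpu in range(ncpus))
        dom = conn.lookupByName(name)
        for vcpu in range(vcpus):
            dom.pinVcpu(vcpu, cpumap)


    if __name__ == '__main__':
        conn = libvirt.open('qemu:///system')
        # Only domains selected as "pinnable" get touched.
        pin_domain(conn, 'instance-00000001', vcpus=4, memory_mb=4096)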

Thanks everyone for your help, any further input much appreciated!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Basic configuration with VMs with a local private network

2013-06-21 Thread Julio Carlos Barrera Juez
Hi.

We are trying to set up a basic scenario with two VMs with private IP
addresses on a Compute node controlled by a Controller node; in short, a
basic private network with some VMs. We tried using the Open vSwitch
Quantum plugin to configure the network, but we have not achieved our
objective so far.

Is there any guide or basic scenario like this tutorial? The documentation
we have found about basic networking in OpenStack with the existing
Quantum plugins is poor, and the Open vSwitch documentation on the subject
is too old (~2 years).

Thank you in advance.
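
For reference, the flow being asked about - create a private network and
subnet through the Quantum API, then boot VMs attached to it - looks
roughly like the sketch below using the Grizzly-era Python clients
(credentials, endpoint, image and flavor IDs are placeholders, and this
assumes the OVS plugin is already configured on the nodes):

    # Illustrative sketch only.
    from novaclient.v1_1 import client as nova_client
    from quantumclient.v2_0 import client as quantum_client

    AUTH = dict(username='demo', password='secret', tenant_name='demo',
                auth_url='http://controller:5000/v2.0/')

    quantum = quantum_client.Client(**AUTH)
    nova = nova_client.Client(AUTH['username'], AUTH['password'],
                              AUTH['tenant_name'], AUTH['auth_url'])

    # Private network and subnet (no router or floating IPs in this
    # minimal case).
    net = quantum.create_network(
        {'network': {'name': 'private-net'}})['network']
    quantum.create_subnet({'subnet': {'network_id': net['id'],
                                      'ip_version': 4,
                                      'cidr': '10.10.0.0/24'}})

    # Boot two VMs attached to the private network.
    image = 'IMAGE_UUID'   # placeholder
    flavor = 'FLAVOR_ID'   # placeholder
    for name in ('vm1', 'vm2'):
        nova.servers.create(name, image, flavor,
                            nics=[{'net-id': net['id']}])
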
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][State-Management] Task/Workflow requirements for Heat

2013-06-21 Thread Zane Bitter

On 20/06/13 21:34, Joshua Harlow wrote:

Thanks Adrian for adding that,

Zane, it would be great if you could show up. I have a few questions about
said heat requirements, especially about how the current mechanism
accomplishes those requirements.


Sorry for missing that meeting, I left the house right after sending 
that email. Unfortunately (for you ;) I won't be around for the next 
couple of weeks, but let's definitely sync when I get back.



IMHO I'd rather not have 2 workflow libraries (aka your scheduler.py) and
taskflow. It would be advantageous I think to focus on one way if we can.
This would be beneficial to all and if we can merge those ideas into
taskflow I'm all for it personally. Since one of the possible
ending-points for taskflow is in oslo, that would seem like a useful merge
of ideas and code instead of a divergent approach.


+1

I wanted to wait until I had tested it with some more complicated use 
cases before trying to push it outside of Heat. Now that that is done 
and I have a reasonable level of confidence in it, it would be good to 
explore which parts can be rolled into TaskFlow and which can be 
replaced by existing stuff in TaskFlow. Documenting the requirements 
that it is currently satisfying in Heat was the first step in that process.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] re: discussion about passing metadata into provider stacks as parameters

2013-06-21 Thread Liang Chen
On Fri, 2013-06-21 at 15:49 +1000, Angus Salkeld wrote:
> On 20/06/13 22:19 -0400, cbjc...@linux.vnet.ibm.com wrote:
> >
> >So anyway, let's get back to the topic this thread was discussing -
> >"passing metadata into provider stacks".
> >
> >It seems that we have all reached an agreement that deletepolicy and 
> >updatepolicy will be passed as params, and metadata will be exposed 
> >to provider templates through a function
> >
> >In terms of implemetation,
> >
> >MetaData:
> >
> >- add a resolve method to template.py to handle  
> >{'Fn::ProvidedResource': 'Metadata'}
> 
> I think the name needs a little thought, how about:
> 
> {'Fn::ResourceFacade': 'Metadata'}
> 
> -Angus

Yeah, sounds better~
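
For illustration, a rough, self-contained sketch of the kind of resolve
step being discussed for Fn::ResourceFacade (this is not the actual Heat
patch; the helper name and attribute handling are assumptions based on
the plan above):

    # Illustrative sketch: walk a parsed provider template and replace
    # {'Fn::ResourceFacade': ...} with attributes of the facade resource
    # in the parent stack.

    FACADE_ATTRS = ('Metadata', 'DeletionPolicy', 'UpdatePolicy')


    def resolve_resource_facade(snippet, facade):
        """Recursively resolve Fn::ResourceFacade in a template snippet.

        facade is a dict exposing the parent resource's Metadata,
        DeletionPolicy and UpdatePolicy to the provider template.
        """
        if isinstance(snippet, dict):
            if list(snippet.keys()) == ['Fn::ResourceFacade']:
                attr = snippet['Fn::ResourceFacade']
                if attr not in FACADE_ATTRS:
                    raise ValueError('Invalid Fn::ResourceFacade '
                                     'argument: %s' % attr)
                return facade.get(attr)
            return dict((k, resolve_resource_facade(v, facade))
                        for k, v in snippet.items())
        if isinstance(snippet, list):
            return [resolve_resource_facade(v, facade) for v in snippet]
        return snippet


    if __name__ == '__main__':
        facade = {'Metadata': {'AWS::CloudFormation::Init': {'config': {}}}}
        tmpl = {'server': {'Metadata': {'Fn::ResourceFacade': 'Metadata'}}}
        print(resolve_resource_facade(tmpl, facade))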

> >
> >DeletePolicy/UpdatePolicy:
> >
> >- add stack_resource.StackResource.compose_policy_params() -> Json 
> >encoded delete and update policies
> >
> >- have create_with_template update params with delete/update policies 
> >composed by compose_policy_params
> >(json-parameters implementation is already in review, hope it will be 
> >available soon)
> >
> >
> >I will start the implementation if there is no objection.
> >
> >
> >Liang
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev