Re: [openstack-dev] [neutron] Can somebody describe all the roles of networks' admin_state_up

2014-02-24 Thread Assaf Muller


- Original Message -
 Hi,
 
 I want to understand the admin_state_up attribute of networks, but I
 have not found any description of it.
 
 Can you help me to understand it? Thank you very much.
 

There's a discussion about this in this bug [1].
From what I gather, nobody knows what admin_state_up is actually supposed
to do with respect to networks.

[1] https://bugs.launchpad.net/neutron/+bug/1237807
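For readers landing on this thread: while bug 1237807 notes the semantics for networks are undefined, the behaviour people commonly *expect* can be sketched as below. This is an illustrative model only (the class and method names are hypothetical), not Neutron code.

```python
# Sketch of the *expected* -- but, per bug 1237807, not actually
# enforced -- semantics of a network's admin_state_up: setting it to
# False should administratively bring the network, and traffic on its
# ports, down. Names here are illustrative, not Neutron's real code.

class Port:
    def __init__(self, name):
        self.name = name
        self.admin_state_up = True

class Network:
    def __init__(self, name):
        self.name = name
        self.admin_state_up = True
        self.ports = []

    def add_port(self, name):
        port = Port(name)
        self.ports.append(port)
        return port

    def effective_port_status(self, port):
        # A port only carries traffic if both it and its network are
        # administratively up.
        up = self.admin_state_up and port.admin_state_up
        return "ACTIVE" if up else "DOWN"

net = Network("net1")
p = net.add_port("port1")
print(net.effective_port_status(p))  # ACTIVE
net.admin_state_up = False           # the "switch power button" off
print(net.effective_port_status(p))  # DOWN
```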

 
 Regards,
 
 Lee Li
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-24 Thread Petr Blaho
On Fri, Feb 21, 2014 at 10:24:24AM +0100, Tomas Sedovic wrote:
 On 20/02/14 16:24, Imre Farkas wrote:
  On 02/20/2014 03:57 PM, Tomas Sedovic wrote:
  On 20/02/14 15:41, Radomir Dopieralski wrote:
  On 20/02/14 15:00, Tomas Sedovic wrote:
 
  Are we even sure we need to store the passwords in the first place? All
  this encryption talk seems very premature to me.
 
  How are you going to redeploy without them?
 
 
  What do you mean by redeploy?
 
  1. Deploy a brand new overcloud, overwriting the old one
  2. Updating the services in the existing overcloud (i.e. image updates)
  3. Adding new machines to the existing overcloud
  4. Autoscaling
  5. Something else
  6. All of the above
 
  I'd guess each of these has different password workflow requirements.
  
  I am not sure if all these use cases have different password
  requirements. If you check devtest, no matter whether you are creating or
  just updating your overcloud, all the parameters have to be provided for
  the heat template:
  https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125
  
  
  I would rather not require the user to enter 5/10/15 different passwords
  every time Tuskar updates the stack. I think it's much better to
  autogenerate the passwords for the first time, provide an option to
  override them, then save and encrypt them in Tuskar. So +1 for designing
  a proper system for storing the passwords.
 
 Well if that is the case and we can't change the templates/heat to
 change that, the secrets should be put in Keystone or at least go
 through Keystone. Or use Barbican or whatever.
 
 We shouldn't be implementing crypto in Tuskar.

+1

 
  
  Imre
  

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-24 Thread Lingxian Kong
I think 'tenant_id' should always be validated when creating neutron
resources, whether or not Neutron can handle the notifications from
Keystone when a tenant is deleted.

thoughts?


2014-02-20 20:21 GMT+08:00 Dong Liu willowd...@gmail.com:

 Dolph, thanks for the information you provided.

 Now I have two questions:
 1. Will neutron handle this event notification in the future?
 2. I also wish neutron could verify that the tenant_id exists.

 thanks

 On 2014-02-20 4:33, Dolph Mathews wrote:

 There's an open bug [1] against nova & neutron to handle notifications
 [2] from keystone about such events. I'd love to see that happen during
 Juno!

 [1] https://bugs.launchpad.net/nova/+bug/967832
 [2] http://docs.openstack.org/developer/keystone/event_notifications.html

 On Mon, Feb 17, 2014 at 6:35 AM, Yongsheng Gong gong...@unitedstack.com
 mailto:gong...@unitedstack.com wrote:

 It is not easy to enhance. If we check the tenant_id on creation,
 should we also do something when Keystone deletes a tenant?


 On Mon, Feb 17, 2014 at 6:41 AM, Dolph Mathews
 dolph.math...@gmail.com mailto:dolph.math...@gmail.com wrote:

 keystoneclient.middleware.auth_token passes a project ID (and
 name, for convenience) to the underlying application through the
 WSGI environment, and already ensures that this value can not be
 manipulated by the end user.

 Project IDs (redundantly) passed through other means, such as
 URLs, are up to the service to independently verify against
 keystone (or equivalently, against the WSGI environment), but
 can be directly manipulated by the end user if no checks are in
 place.
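The check Dolph describes can be sketched as follows: trust the project ID that keystoneclient's auth_token middleware put into the WSGI environment, and verify any tenant_id passed through other means (e.g. the URL) against it. The environ key name follows the middleware's header convention, but treat it as an assumption here; this is an illustration, not service code.

```python
# Sketch: verify a URL-supplied tenant_id against the authenticated
# project ID that auth_token placed in the WSGI environment. The
# "HTTP_X_PROJECT_ID" key mirrors the X-Project-Id header convention
# and is an assumption for illustration purposes.

def verify_tenant_id(environ, url_tenant_id):
    """Return True if the tenant_id from the URL matches the
    authenticated project ID from the WSGI environment."""
    authenticated_id = environ.get("HTTP_X_PROJECT_ID")
    if authenticated_id is None:
        # No auth_token in place: blindly trust the value, as services
        # do for debugging or alternative deployment architectures.
        return True
    return url_tenant_id == authenticated_id

env = {"HTTP_X_PROJECT_ID": "abc123"}
print(verify_tenant_id(env, "abc123"))   # True
print(verify_tenant_id(env, "evil999"))  # False
print(verify_tenant_id({}, "anything"))  # True (no auth_token in place)
```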

 Without auth_token in place to manage multitenant authorization,
 I'd still expect services to blindly trust the values provided
 in the environment (useful for both debugging the service and
 alternative deployment architectures).

 On Sun, Feb 16, 2014 at 8:52 AM, Dong Liu willowd...@gmail.com
 mailto:willowd...@gmail.com wrote:

 Hi stackers:

 I found that when creating networks, subnets and other
 resources, the attribute tenant_id
 can be set by the admin tenant. But we do not verify whether
 the tenant_id is real in keystone.

 I know that we could use neutron without keystone, but do
 you think tenant_id should
 be verified when we use neutron with keystone?

 thanks




-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


Re: [openstack-dev] [neutron] Can somebody describe all the roles of networks' admin_state_up

2014-02-24 Thread Damon Wang
Hi Lee,

Here is a discussion that may help:
https://answers.launchpad.net/neutron/+question/230508

Damon


2014-02-24 16:03 GMT+08:00 Assaf Muller amul...@redhat.com:



 - Original Message -
  Hi,
 
  I want to understand the admin_state_up attribute of networks, but I
  have not found any description of it.
 
  Can you help me to understand it? Thank you very much.
 

 There's a discussion about this in this bug [1].
 From what I gather, nobody knows what admin_state_up is actually supposed
 to do with respect to networks.

 [1] https://bugs.launchpad.net/neutron/+bug/1237807

 
  Regards,
 
  Lee Li
 


Re: [openstack-dev] [neutron] Can somebody describe all the roles of networks' admin_state_up

2014-02-24 Thread 黎林果
Thank you very much.

IMHO, when admin_state_up is false, that entity should be down, meaning
the network should be down; otherwise, what is the use of admin_state_up?
The same is true for a port's admin_state_up.

Is it like a switch's power button?

2014-02-24 16:03 GMT+08:00 Assaf Muller amul...@redhat.com:


 - Original Message -
 Hi,

 I want to understand the admin_state_up attribute of networks, but I
 have not found any description of it.

 Can you help me to understand it? Thank you very much.


 There's a discussion about this in this bug [1].
 From what I gather, nobody knows what admin_state_up is actually supposed
 to do with respect to networks.

 [1] https://bugs.launchpad.net/neutron/+bug/1237807


 Regards,

 Lee Li



[openstack-dev] [Mistral] Community meeting reminder - 02/24/2014

2014-02-24 Thread Renat Akhmerov
Hi Mistral team,

This is a reminder that we’ll have another IRC community meeting today 
(#openstack-meeting) at 16.00 UTC.

Here’s the agenda:
Review action items
Discuss current status
Continue DSL discussion
Open discussion (roadblocks, suggestions, etc.)

You can also find the agenda and the links to the previous meeting minutes and 
logs at https://wiki.openstack.org/wiki/Meetings/MistralAgenda.

Please follow up on this email if you have additional items to discuss.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] Win 2008 R2 VMDK upload to glance

2014-02-24 Thread Alessandro Pilotti
Hi,

Did you install the VMware Tools and sysprep the image before adding it to
Glance?

Here's how we automate the creation of Windows OpenStack images for KVM,
Hyper-V and VMware (unattended setup, hypervisor drivers, Windows updates,
Cloudbase-Init and sysprep):

https://github.com/cloudbase/windows-openstack-imaging-tools

More details:

http://www.cloudbase.it/create-windows-openstack-images/

Alessandro

 On 24.02.2014, at 09:29, Abhishek Soni virtualserv...@gmail.com wrote:
 
 Hi All,
 
 I wanted to upload a new Win 2k8 R2 VMDK file to Glance.

 I have created a new Windows VM with a 20 GB HDD on ESXi 5.1 and then uploaded
 the flat file to Glance. But whenever the VM boots from OpenStack, it goes to
 recovery mode.

 Any help or a pointer to the correct solution document would be of great help! I am
 facing this issue only with Windows VMs.
 
 Thanks!
 
 Abhishek


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-24 Thread Ladislav Smola

On 02/23/2014 01:16 AM, Clint Byrum wrote:

Excerpts from Imre Farkas's message of 2014-02-20 15:24:17 +:

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.

How are you going to redeploy without them?


What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these has different password workflow requirements.

I am not sure if all these use cases have different password
requirements. If you check devtest, no matter whether you are creating or
just updating your overcloud, all the parameters have to be provided for
the heat template:
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125

I would rather not require the user to enter 5/10/15 different passwords
every time Tuskar updates the stack. I think it's much better to
autogenerate the passwords for the first time, provide an option to
override them, then save and encrypt them in Tuskar. So +1 for designing
a proper system for storing the passwords.

Tuskar does not need to reinvent this.

Use OS::Heat::RandomString in the templates.

If any of them need to be exposed to the user, use an output on the
template.

If they need to be regenerated, you can pass a salt parameter.
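The salt-driven regeneration Clint describes can be illustrated with a small model: the generated secret stays stable across stack updates until the salt parameter changes, at which point a fresh random value is produced. This mimics OS::Heat::RandomString's behaviour for discussion purposes only; it is not Heat code, and the class name is hypothetical.

```python
# Illustration of salt-driven secret regeneration: stable until the
# salt changes, regenerated when it does. Not Heat's implementation.

import secrets

class RandomStringResource:
    def __init__(self, length=32):
        self.length = length
        self._salt = None
        self._value = None

    def apply(self, salt=None):
        # Regenerate only on first creation or when the salt changes.
        if self._value is None or salt != self._salt:
            self._salt = salt
            self._value = secrets.token_hex(self.length // 2)
        return self._value

res = RandomStringResource()
first = res.apply(salt="v1")
assert res.apply(salt="v1") == first  # stack update, same salt: stable
assert res.apply(salt="v2") != first  # new salt: regenerated
```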


Do we actually need to expose to the user anything other than AdminPassword?

We are using tripleo-heat-templates currently, so we will need to make
the change there.





Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-24 Thread Dougal Matthews

On 21/02/14 10:03, Radomir Dopieralski wrote:

On 21/02/14 10:38, Dougal Matthews wrote:

At the moment we are guessing, we don't even know how many passwords
we need to store. So we should progress with the safest and simplest
option (which is to not store them) and consider changing this later.


I think we have a pretty good idea:
https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/infrastructure/overcloud/workflows/undeployed_configuration.py#L23-L222

(just count the NoEcho: true lines)


Right, that's a good list. There seemed to be guessing in other emails,
which is what I was referring to. Thanks for the pointer.




[openstack-dev] [Neutron] Auto-created ports count against port quota bug status

2014-02-24 Thread Jaume Devesa
Hello all, 

I have the following bug assigned for icehouse~3 milestone: 
https://bugs.launchpad.net/neutron/+bug/1212338

I proposed a (maybe too) trivial solution in the launchpad comments, but I
haven't received confirmation that the solution would solve the issue. Could a
more experienced neutron developer confirm that this is the path to follow, or
propose another one, please?

Thanks!
-- 
Jaume Devesa



Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-24 Thread Thierry Carrez
Boris Renski wrote:
 There are a couple of additional things we are working on driver
 verification front that, I believe, it is now time to socialize with the
 dev community:
 
 1. In addition to the scare tactic of deprecating the code for those
 that don't test their drivers, we also want to implement a carrot tactic
 of granting a trademark privilege to those that do. Specifically,
 storage vendors that have achieved Stage 4 of Thierry's ladder below
 will have the right, granted by the OpenStack Foundation, to publicly
 endorse themselves as OpenStack Verified Block Storage Vendors. I've
 spoken to the vast majority of the foundation board members about
 this, as well as Mark and Jonathan, and everybody appears to be on board.
 I plan to formally have a foundation board discussion on this topic
 during the upcoming March 3rd board meeting and would like to gather the
 feedback of the dev community on this. So please feedback away...

The end result of the scare tactic will be that only tested drivers are
kept *in* OpenStack. The others might be compatible with OpenStack, but
they won't be shipped within the main code. So it's quite natural to me
if the Block Storage vendors, Network plugin vendors and Compute
hypervisor vendors that fulfill 3rd-party testing requirements can call
their drivers a part of OpenStack, and have trademark usage rules they
can use for public self-endorsement.

 2. As a stepping stone to #1, we are working on implementing an
 interactive dashboard (not just for devs, but for OpenStack users as
 well) that will display the results of driver tests against trunk
 and stable dot releases pulled directly from the CI event stream (mock
 attached). The way we are thinking of implementing this right now is
 next to each of the drivers in
 https://wiki.openstack.org/wiki/CinderSupportMatrix, specify if the
 compatibility is self-reported or verified. Verified will be a link
 to the dashboard for a particular driver kinda like in the mock.

This sounds especially useful as we go through steps 1-2 of the ladder,
and try to track the extent of our testing for current drivers.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-24 Thread Dmitry
I think this is a great new service which will accompany OpenStack Solum,
similarly to how Bosh accompanies Cloud Foundry and CloudForms
accompanies OpenShift.
I wouldn't call it the Application Catalog but the Service Catalog, because
of the primary focus on service life-cycle management (as opposed to
application lifecycle management, which is focused on code-to-binary,
execution, remote debugging, log grabbing, etc).
In order to make Murano a real Service Catalog, it should support (over
the DSL) run-time event processing such as service scaling (up/out/in/down),
healing, replication, live-migration, etc.
In addition, it should support template creation which could be used by
Solum, similar to a Heroku BuildPack.
I would be happy to hear your opinion on how you envision Murano's roadmap.

Thanks,
Dmitry


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 02:06:50 -0500
Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-02-24 at 17:20 +1030, Christopher Yeoh wrote:
  - Proposed way forward:
- Release the V3 API in Juno with nova-network and tasks support
- Feature freeze the V2 API when the V3 API is released
  - Set the timeline for deprecation of V2 so users have a lot
of warning
  - Fallback for those who really don't want to move after
deprecation is an API service which translates between V2 and V3
requests, but removes the dual API support burden from Nova.
 
 And when do you think we can begin the process of deprecating the V3
 API and removing API extensions and XML translation support?

So did you mean V2 API here? I don't understand why you think the V3
API would need deprecating any time soon.

XML support has already been removed from the V3 API and I think the
patch to mark XML as deprecated for the V2 API and eventual removal in
Juno has already landed. So at least for part of the V2 API a one cycle
deprecation period has been seen as reasonable.

When it comes to API extensions I think that is actually more a
question of policy than anything else. The actual implementation behind
the scenes of a plugin architecture makes a lot of sense whether we
have extensions or not. It forces a good level of isolation between API
features and clarity of interaction where it's needed - all of which
is much better from a maintenance point of view.

Now whether we have parts of the API which are optional or not is
really a policy decision as to whether we will force deployers to use
all of the plugins or a subset (e.g. currently the core). There is
the technical support for doing so in the V3 API (essentially what is
used to enforce the core of the API). And a major API version bump is
not required to change this. Perhaps this part falls in to the
DefCore discussions :-)

Chris



Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-24 Thread Imre Farkas

On 02/24/2014 09:39 AM, Ladislav Smola wrote:

On 02/23/2014 01:16 AM, Clint Byrum wrote:

Excerpts from Imre Farkas's message of 2014-02-20 15:24:17 +:

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first
place? All
this encryption talk seems very premature to me.

How are you going to redeploy without them?


What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these has different password workflow requirements.

I am not sure if all these use cases have different password
requirements. If you check devtest, no matter whether you are creating or
just updating your overcloud, all the parameters have to be provided for
the heat template:
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125


I would rather not require the user to enter 5/10/15 different passwords
every time Tuskar updates the stack. I think it's much better to
autogenerate the passwords for the first time, provide an option to
override them, then save and encrypt them in Tuskar. So +1 for designing
a proper system for storing the passwords.

Tuskar does not need to reinvent this.

Use OS::Heat::RandomString in the templates.

If any of them need to be exposed to the user, use an output on the
template.

If they need to be regenerated, you can pass a salt parameter.


Do we actually need to expose to the user anything other than
AdminPassword?


I think we should not. The MySQL password and the service
passwords are implementation details of the cloud; they should not be
used by anyone except OpenStack internally. The administrator
should set up separate user accounts with the proper privileges to access
these services.


Imre


We are using tripleo-heat-templates currently, so we will need to make
the change there.





Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Swann Croiset
Hi Nader,

These notifications must be handled by Ceilometer like the others [1].
It is surprising that it does not already include identity meters...
probably nobody needed them before you.
I guess it remains to open a blueprint and code them like I recently did
for Heat [2].


[1] http://docs.openstack.org/developer/ceilometer/measurements.html
[2] https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
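A handler for these events, in the spirit of the Heat notification handler in [2], might turn a keystone identity notification into a Ceilometer sample. The payload shape ("identity.<resource>.<operation>" event types carrying a resource_info id) follows the keystone notification docs linked earlier in the thread, but the sample field names here are illustrative assumptions, not Ceilometer's actual plugin API.

```python
# Sketch: map a keystone identity notification to a Ceilometer-style
# sample dict. Field names are assumptions for illustration.

def sample_from_notification(message):
    event_type = message["event_type"]  # e.g. identity.project.created
    prefix, resource, operation = event_type.split(".")
    if prefix != "identity":
        return None  # not an identity event; not ours to handle
    return {
        "name": event_type,
        "type": "delta",
        "volume": 1,
        "resource_id": message["payload"]["resource_info"],
    }

msg = {
    "event_type": "identity.project.created",
    "payload": {"resource_info": "9b9...e2"},  # hypothetical project id
}
print(sample_from_notification(msg)["resource_id"])
```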


2014-02-20 19:10 GMT+01:00 Nader Lahouti nader.laho...@gmail.com:

 Thanks Dolph for the link. The document shows the format of the message and
 doesn't give any info on how to listen to the notifications.
 Is there any other document showing the details of how to listen for or get
 these notifications?

 Regards,
 Nader.

 On Feb 20, 2014, at 9:06 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 Yes, see:

   http://docs.openstack.org/developer/keystone/event_notifications.html

 On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti 
 nader.laho...@gmail.comwrote:

 Hi All,

 I have a question regarding creating/deleting a tenant in openstack
 (using horizon or CLI). Is there any notification mechanism in place so
 that an application get informed of such an event?

 If not, can it be done using plugin to send create/delete notification to
 an application?

 Appreciate your suggestion and help.

 Regards,
 Nader.



[openstack-dev] [oslo] AMQP 1.0 based messaging driver available for review

2014-02-24 Thread Gordon Sim
I have been working with Ken Giusti on an AMQP 1.0 messaging driver for 
oslo.messaging. The code for this is now available for review at

https://review.openstack.org/#/c/75815/

As previously mentioned on this list, this uses the Apache Qpid Proton 
'protocol engine' library. This is not a standalone client library, it 
includes no IO or threading, and simply provides the encoding and 
protocol rules. This was chosen to give full flexibility in other 
aspects of the client, allowing it to be tailored to best suit the 
oslo.messaging use case. It does mean however that the driver has a 
little more code than might be expected had a full client library been used.


The code uses a new directory (package) layout for this driver, based on 
suggestions from Flavio Percoco (thanks Flavio!). There is a protocols 
directory under _drivers, with amqp being used for this one. This name 
was preferred to proton to make it clear that the intention is to speak 
clean, neutral AMQP 1.0 and to avoid tying the driver code to specific 
intermediaries.


With that new amqp package, there is a subdirectory called engine which 
contains some generic wrappers around the lower level proton APIs. It 
may be that at some point this layer, or something similar, is available 
either directly in proton or in some supplementary library at which 
point we would have the option of dropping some of the code from within 
oslo.messaging itself.


The driver.py module implements the defined driver API, mostly by making 
requests on the controller.py module where most of the protocol mapping 
logic lies. The io is driven by a separate thread and the event loop for 
this is defined in eventloop.py. The threads calling on the driver 
communicate with the io thread (which uses non-blocking io), using Queues.
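The threading pattern described above can be sketched as follows: caller threads hand requests to a dedicated (non-blocking) I/O thread through a Queue and wait on a per-request reply Queue. The names are illustrative; this is not the driver's actual code, and the "wire round-trip" is faked.

```python
# Sketch: caller threads talk to a single I/O thread via Queues, as in
# the driver's eventloop.py arrangement. Names are illustrative.

import queue
import threading

class IoThread:
    def __init__(self):
        self.requests = queue.Queue()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        # Stand-in for the proton-driven event loop.
        while True:
            payload, reply_q = self.requests.get()
            if payload is None:            # shutdown sentinel
                break
            reply_q.put(("ack", payload))  # pretend the wire round-trip happened

    def send(self, payload, timeout=5):
        # Called from any caller thread; blocks until the io thread replies.
        reply_q = queue.Queue()
        self.requests.put((payload, reply_q))
        return reply_q.get(timeout=timeout)

    def stop(self):
        self.requests.put((None, None))
        self.thread.join()

io = IoThread()
print(io.send({"method": "ping"}))  # ('ack', {'method': 'ping'})
io.stop()
```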


So far the testing has largely been through ad hoc clients and servers 
for the oslo.messaging API. I have also used it with nova under 
devstack. My knowledge of openstack is still very low. I followed the 
very helpful advice offered by Russell Bryant here: 
https://ask.openstack.org/en/question/28/openstack-api-mocker-or-simulator/


However I am eager to learn more and any suggestions for things to try 
out with a real openstack service will be greatly appreciated. I also plan 
to work on some functional tests at the oslo.messaging API level that 
could be used to test any of the drivers. This would allow the lack of 
coverage with the current qpid driver to be addressed as well as 
providing more confidence in this new driver (and would I think be 
useful for any other driver implementer who lacks sufficient knowledge 
of the other openstack services, or simply as a way of catching more 
issues earlier).


The current driver available for review requires an intermediary of some 
form, whether a 'broker' that supports AMQP 1.0 or something slightly 
different such as the Qpid Dispatch Router previously mentioned on this 
list.


I have tested successfully with qpidd and qpid dispatch router. At 
present RabbitMQ does not support 'dynamic nodes' necessary for 
temporary reply queues in 1.0, but I have a workaround planned for that.


This email is already growing rather long, so I'll leave it at that for 
now, but would be delighted to answer any questions or address any 
feedback, whether here or through the review request above, or the 
associated blueprint: 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation


I will communicate regarding enhancements and additions to the code.

--Gordon



[openstack-dev] [qa] : error during gate-tempest-dsvm-neutron-large-ops when attaching floating IP

2014-02-24 Thread LELOUP Julien
Hello everyone,

I'm currently pushing a stress test in Tempest which at some point tries to 
attach a floating IP to newly started servers. The change I'm talking about is 
available here: https://review.openstack.org/#/c/74067/.

This test runs fine on my local devstack, but I have an issue during Jenkins 
tests, on the gate 'gate-tempest-dsvm-neutron-large-ops'.
The error happens when my test wants to attach a floating IP using the Neutron 
API:

* Last week, the error was about the fact that no port was attached to 
my servers: I wasn't specifying any nics during the server creation, but since 
there was only one network available from the nova point of view, I believed it 
was automatically attached to my server, thus creating a port for it. This 
explains why this code is working on my local devstack and openstack deployment.

* In order to resolve this, I'm currently forcing the 'nics' parameter 
during the server creation in order to have my server attached to the 'private' 
network and so have a port to use later during floating IP attachment. This 
triggers a new error: I can request Neutron to get the UUID of the network 
named 'private', but when I'm specifying this UUID in the 'nics' parameter, I 
get an error saying that this network is not found.

I don't know if the issue is related to my code, the tempest configuration used 
in the gate or the devstack deployed for the gate.

Can someone give me some insights about this gate? Is there something peculiar 
in the devstack deployment used for this gate that can explain the errors I'm 
having?

Thanks in advance for your time :)

Best Regards,

Julien LELOUP
julien.lel...@3ds.com



Re: [openstack-dev] Incubation Request: Murano

2014-02-24 Thread Thierry Carrez
Mark Washenberger wrote:
 Prior to this email, I was imagining that we would expand the Images
 program to go beyond storing just block device images, and into more
 structured items like whole Nova instance templates, Heat templates, and
 Murano packages. In this scheme, Glance would know everything there is
 to know about a resource--its type, format, location, size, and
 relationships to other resources--but it would not know or offer any
 links for how a resource is to be used.

I'm a bit uncomfortable as well. Part of our role at the Technical
Committee is to make sure additions do not overlap in scope and make
sense as a whole.

Murano seems to cover two functions. The first one is publishing,
cataloging and discovering software stacks. The second one is to deploy
those software stacks and potentially manage their lifecycle.

In the OpenStack integrated release we already have Glance as a
publication/catalog/discovery component and Heat as the workload
orchestration end. Georgy clearly identified those two facets, since the
incubation request lists those two programs as potential homes for Murano.

The problem is, Orchestration doesn't care about the Catalog part of
Murano, and Glance doesn't care about the Orchestration part of Murano.
Murano spans the scope of two established programs. It's not different
enough to really warrant its own program, and it's too monolithic to fit
in our current landscape.

I see two ways out: Murano can continue to live as a separate
application that lives on top of OpenStack and consumes various
OpenStack components. Or its functionality can be split and subsumed by
Glance and Heat, with Murano developers pushing it there. There seems to
be interest in both those programs to add features that Murano covers.
The question is, could we replicate Murano's featureset completely in
those existing components ? Or is there anything Murano-unique that
wouldn't fit in existing projects ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-24 Thread Alexander Tivelkov
Hi Dmitry,

I agree with you on this vision; however, we have to think more about the
terminology: Service Catalog in OpenStack relates to Keystone (where by
Service we mean OpenStack's infrastructure-level services).
I understand your concern about runtime lifecycle vs code-to-binary
lifecycle though - it is absolutely valid, and we do not want to have any
overlap with Solum in this matter.

--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 1:47 PM, Dmitry mey...@gmail.com wrote:

 I think this is the great new service which will accompany OpenStack Solum
 similarly as Bosh is accompanying Cloud Foundry and CloudForms is
 accompanying OpenShift.
 I wouldn't call it the Application Catalog but the Service Catalog,
 because of the primary focus on Service Life-cycle management (as
 opposed to application lifecycle management, which is focused on
 code-to-binary, execution, remote debugging, log grabbing, etc.).
 In order to make Murano a real Service Catalog, it should support (over
 DSL) run-time event processing such as service scaling (up/out/in/down),
 healing, replication, live-migration etc.
 In addition, it should support a template creation which could be used
 by Solum similar to Heroku BuildPack.
 I would be happy to hear your opinion on how you envision Murano's
 roadmap.

 Thanks,
 Dmitry





Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job very close to voting - call to arms by neutron team

2014-02-24 Thread Rossella Sblendido

Ciao Salvatore,

thanks a lot for analyzing the failures!

This link is not working for me:
7) https://bugs.launchpad.net/neutron/+bug/1253533

I took a minor bug that was not assigned. Most of the bugs are assigned 
to you, I was wondering if you'd use some help. I guess we can 
coordinate better when you are online.


cheers,

Rossella

On 02/23/2014 03:14 AM, Salvatore Orlando wrote:

I have tried to collect more information on neutron full job failures.

So far there have been 219 failures and 891 successes, for an overall 
failure rate of 19.7%, which is in line with Sean's evaluation.
The count was performed exclusively on jobs executed against the master 
branch. The failure rate for stable/havana is higher; indeed the job 
there still triggers bug 1273386 as it performs nbd mounting, and 
several fixes for the l2/l3 agents were not backported (or not 
backportable).


It is worth noting that actually some of the failures were because of 
infra issues. Unfortunately, it is not obvious to me how to define a 
logstash query for that. Nevertheless, it will be better to err on the 
side of safety and estimate failure rate to be about 20%.


I did then a classification of 63 failures, finding out the following:
- 25 failures were for infra issues, 1 failure was due to a flaw in a 
patch, leaving 37 real failures to analyse
   * In the same timeframe 203 jobs succeeded, giving a potential 
failure rate of 15.7% after excluding infra issues

- 2 bugs were responsible for 25 of these 37 failures
   * they are the SSH protocol banner issue, and the well-known DB 
lock timeouts
- bug 1253896 (the infamous SSH timeout bug) was hit only twice. The 
elastic recheck count is much higher because failures for the SSH 
protocol banner error (1265495) are being classified as bug 1253896.
   * actually in the past 48 hours only 2 voting neutron jobs hit this 
failure. This is probably a great improvement compared with a few 
weeks ago.
- Some failures are due to bugs already known and tracked; other 
failures are due to bugs either unforeseen so far or not tracked. In 
the latter case a bug report has been filed.


It seems therefore that there are two high priority bugs to address:
1) https://bugs.launchpad.net/neutron/+bug/1283522 (16 occurrences, 
43.2% of failures, 6.67% globally)
* Check whether we can resume the split between API server and RPC 
server discussion
2) https://bugs.launchpad.net/neutron/+bug/1265495 (9/37 = 24.3% of 
failures, 3.75% globally)
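For anyone double-checking the arithmetic, the percentages quoted above follow directly from the raw counts (a quick sketch using only the figures in this post):

```python
# Sanity-check the failure rates quoted in the analysis (illustrative).
failures, successes = 219, 891
overall = 100.0 * failures / (failures + successes)

real_failures, succeeded = 37, 203  # after excluding infra issues and one bad patch
bug_1283522, bug_1265495 = 16, 9    # occurrences of the two top bugs

print(round(overall, 1))                                            # 19.7
print(round(100.0 * bug_1283522 / real_failures, 1))                # 43.2
print(round(100.0 * bug_1283522 / (real_failures + succeeded), 2))  # 6.67
print(round(100.0 * bug_1265495 / real_failures, 1))                # 24.3
print(round(100.0 * bug_1265495 / (real_failures + succeeded), 2))  # 3.75
```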


And several minor bugs (affecting tempest and/or neutron)
Each one of the following bugs was found no more than twice in our 
analysis:
3) https://bugs.launchpad.net/neutron/+bug/1254890 (possibly a nova 
bug, but it hit the neutron full job once)

4) https://bugs.launchpad.net/neutron/+bug/1283599
5) https://bugs.launchpad.net/neutron/+bug/1277439
6) https://bugs.launchpad.net/neutron/+bug/1253896
7) https://bugs.launchpad.net/neutron/+bug/1253533
8) https://bugs.launchpad.net/tempest/+bug/1283535 (possibly not a 
neutron bug)
9) https://bugs.launchpad.net/tempest/+bug/1253993 (need to devise new 
solutions for improving agent loop times)
   * there is already a patch under review for bulking device details 
requests

10) https://bugs.launchpad.net/neutron/+bug/1283518

In my humble opinion, it is therefore important to immediately have a 
plan for ensuring bugs #1 and #2 are solved or at least consistently 
mitigated by icehouse. It would also be good to identify assignees for 
bug #3 to bug #10.


Regards,
Salvatore


On 21 February 2014 14:44, Sean Dague s...@dague.net 
mailto:s...@dague.net wrote:


Yesterday during the QA meeting we realized that the neutron full job,
which includes tenant isolation, and full parallelism, was passing
quite
often in the experimental queue. Which was actually news to most
of us,
as no one had been keeping a close eye on it.

I moved that to a non-voting job on all projects. A spot check
overnight
shows that it's failing about twice as often as the regular neutron job.
Which is too high a failure rate to make it voting, but it's close.

This would be the time for a final hard push by the neutron team
to get
to the bottom of these failures to bring the pass rate to the level of
the existing neutron job, then we could make neutron full voting.

This is a *huge* move forward from where things were at the Havana
summit. I want to thank the Neutron team for getting so aggressive
about
getting this testing working. I was skeptical we could get there
within
the cycle, but a last push could actually get us neutron parity in the
gate by i3.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net mailto:s...@dague.net / sean.da...@samsung.com
mailto:sean.da...@samsung.com
http://dague.net



[openstack-dev] [neutron]The mechanism of physical_network segmentation_id is logical?

2014-02-24 Thread 黎林果
Hi stackers,

  When creating a network, if we don't set provider:network_type,
provider:physical_network or provider:segmentation_id, the
network_type will be taken from cfg, but the other two come from the
DB's first available record. The code is

(physical_network,
 segmentation_id) = ovs_db_v2.reserve_vlan(session)



  There are two questions.
  1, network_vlan_ranges = physnet1:100:200
 Can we configure multiple physical_networks in cfg?

  2, If yes, the chosen physical_network is nondeterministic. Is this logical?


Regards!

Lee Li



[openstack-dev] Missing tests

2014-02-24 Thread Vinod Kumar Boppanna
Hi,

I have uploaded the code for Domain Quota Management to Gerrit. One of the tests 
is failing due to missing tests for the following extensions.

Extensions are missing tests: ['os-extended-hypervisors', 
'os-extended-services-delete']

What can I do now? (These extensions were not written by me.)

Regards,
Vinod Kumar Boppanna



[openstack-dev] Unified Guest Agent in Savanna

2014-02-24 Thread Dmitry Mescheryakov
Hello folks,

Not long ago we had a discussion on unified guest agent [1] - a way for
performing actions 'inside' VMs. Such a thing is needed for PaaS projects for
tasks such as application reconfiguration and user requests pass-through.

As a proof of concept I've made os_collect_config as a guest agent [2]
based on the design proposed in [1].

Now I am focused on making an agent for Savanna. I'd like to invite
everyone to review my initial CR [3]. All subsequent changes
will be listed as dependent on this one.

This is going to be a more complex thing than the os_collect_config rewrite.
For instance, here we need to handle agent installation and configuration.
Also I am going to check what can be done for more fine grained
authorization.

Also, Sergey Lukjanov and I proposed a talk on the agent [4], so feel free to
vote for it in case you're interested.

Thanks,

Dmitry


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021476.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024968.html
[3] https://review.openstack.org/#/c/71015
[4]
https://www.openstack.org/vote-atlanta/Presentation/unified-guest-agent-for-openstack


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job very close to voting - call to arms by neutron team

2014-02-24 Thread Salvatore Orlando
Hi Rossella,

I had no idea most of the bugs were assigned to me.

I have pushed several patches for bug 1253896 and that's why launchpad is
stating I own the ticket.
But if you find another fault causing that bug, feel free to push a patch
for it.
I think today I will push only a patch for bug 1283518, I won't be able to
work on any other of them, so feel free to pick all the bugs you want!

I will ensure I de-assign myself from all the other bugs. It would be a
shame if contributors are turned away because of this!

Salvatore

PS: the correct link is https://bugs.launchpad.net/neutron/+bug/1283533



On 24 February 2014 11:14, Rossella Sblendido rsblend...@suse.com wrote:

  Ciao Salvatore,

 thanks a lot for analyzing the failures!

 This link is not working for me:
 7) https://bugs.launchpad.net/neutron/+bug/1253533

 I took a minor bug that was not assigned. Most of the bugs are assigned to
 you, I was wondering if you'd use some help. I guess we can coordinate
 better when you are online.

 cheers,

 Rossella


 On 02/23/2014 03:14 AM, Salvatore Orlando wrote:

 I have tried to collect more information on neutron full job failures.

  So far there have been 219 failures and 891 successes, for an overall
 failure rate of 19.7%, which is in line with Sean's evaluation.
 The count was performed exclusively on jobs executed against the master
 branch. The failure rate for stable/havana is higher; indeed the job there
 still triggers bug 1273386 as it performs nbd mounting, and several fixes
 for the l2/l3 agents were not backported (or not backportable).

  It is worth noting that actually some of the failures were because of
 infra issues. Unfortunately, it is not obvious to me how to define a
 logstash query for that. Nevertheless, it will be better to err on the side
 of safety and estimate failure rate to be about 20%.

  I did then a classification of 63 failures, finding out the following:
 - 25 failures were for infra issues, 1 failure was due to a flaw in a
 patch, leaving 37 real failures to analyse
   * In the same timeframe 203 jobs succeeded, giving a potential failure
 rate of 15.7% after excluding infra issues
 - 2 bugs were responsible for 25 of these 37 failures
 * they are the SSH protocol banner issue, and the well-known DB lock
 timeouts
 - bug 1253896 (the infamous SSH timeout bug) was hit only twice. The
 elastic recheck count is much higher because failures for the SSH protocol
 banner error (1265495) are being classified as bug 1253896.
* actually in the past 48 hours only 2 voting neutron jobs hit this
 failure. This is probably a great improvement compared with a few weeks ago.
 - Some failures are due to bugs already known and tracked; other failures
 are due to bugs either unforeseen so far or not tracked. In the latter case
 a bug report has been filed.

  It seems therefore that there are two high priority bugs to address:
 1) https://bugs.launchpad.net/neutron/+bug/1283522 (16 occurrences, 43.2%
 of failures, 6.67% globally)
  * Check whether we can resume the split between API server and RPC
 server discussion
 2) https://bugs.launchpad.net/neutron/+bug/1265495 (9/37 = 24.3% of
 failures, 3.75% globally)

  And several minor bugs (affecting tempest and/or neutron)
 Each one of the following bugs was found no more than twice in our
 analysis:
 3) https://bugs.launchpad.net/neutron/+bug/1254890 (possibly a nova bug,
 but it hit the neutron full job once)
 4) https://bugs.launchpad.net/neutron/+bug/1283599
 5) https://bugs.launchpad.net/neutron/+bug/1277439
 6) https://bugs.launchpad.net/neutron/+bug/1253896
 7) https://bugs.launchpad.net/neutron/+bug/1253533
  8) https://bugs.launchpad.net/tempest/+bug/1283535 (possibly not a
 neutron bug)
 9) https://bugs.launchpad.net/tempest/+bug/1253993 (need to devise new
 solutions for improving agent loop times)
* there is already a patch under review for bulking device details
 requests
 10) https://bugs.launchpad.net/neutron/+bug/1283518

  In my humble opinion, it is therefore important to immediately have a
 plan for ensuring bugs #1 and #2 are solved or at least consistently
 mitigated by icehouse. It would also be good to identify assignees for bug
 #3 to bug #10.

  Regards,
 Salvatore


 On 21 February 2014 14:44, Sean Dague s...@dague.net wrote:

 Yesterday during the QA meeting we realized that the neutron full job,
 which includes tenant isolation, and full parallelism, was passing quite
 often in the experimental queue. Which was actually news to most of us,
 as no one had been keeping a close eye on it.

 I moved that to a non-voting job on all projects. A spot check overnight
 shows that it's failing about twice as often as the regular neutron job.
 Which is too high a failure rate to make it voting, but it's close.

 This would be the time for a final hard push by the neutron team to get
 to the bottom of these failures to bring the pass rate to the level of
 the existing neutron job, then we could make neutron 

[openstack-dev] [Keystone] Tenant expiration dates

2014-02-24 Thread Sanchez, Cristian A
Hi,
I’m thinking about creating a blueprint to allow the creation of tenants 
with a defined start-date and end-date. These dates will define a time 
window in which the tenant is considered ‘enabled’, and auth tokens will be 
issued only when the current time is between those dates.
This can be particularly useful for projects like Climate where resources are 
reserved. And any resource (like VMs) created for a tenant will have the same 
expiration dates as the tenant.

Do you think this is something that can be added to Keystone?

Thanks

Cristian



[openstack-dev] [nova] oslo.messaging rampant errors in nova-api logs

2014-02-24 Thread Sean Dague
I'm looking at whether we can get ourselves to enforcing only known
ERRORs in logs. In doing so one of the most visible issues on non
neutron runs is oslo.messaging spewing approximately 50:

ERROR oslo.messaging.notify._impl_messaging [-] Could not send
notification to notifications

http://logs.openstack.org/45/75245/3/check/check-tempest-dsvm-full/7ad149e/logs/screen-n-api.txt.gz?level=TRACE

We could whitelist this, however, this looks like a deeper issue.
Something that should actually be solved prior to release.

Really need some eyes in here from people more familiar with the
oslo.messaging code, and why we'd be tripping a circular reference
violation here.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Keystone] Tenant expiration dates

2014-02-24 Thread Dina Belova
Cristian, hello

I believe that should not be done in such a direct way, really.
Why not use the project.extra field in the DB to store this info? Is that not
appropriate for your idea, or would there be problems with
implementing it using extras?
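For illustration, storing the window in the free-form blob could look like this (a sketch; the field names and date format are assumptions, not Keystone's actual schema):

```python
import json
from datetime import datetime

# Sketch: keep the validity window in the project's free-form 'extra'
# JSON blob instead of adding new schema columns.
extra = json.dumps({'start_date': '2014-03-01T00:00:00Z',
                    'end_date': '2014-06-01T00:00:00Z'})

window = json.loads(extra)
start = datetime.strptime(window['start_date'], '%Y-%m-%dT%H:%M:%SZ')
end = datetime.strptime(window['end_date'], '%Y-%m-%dT%H:%M:%SZ')
print(start < datetime(2014, 4, 1) < end)  # True
```

The trade-off is that extras are opaque to the SQL layer, so the check has to happen in Python rather than in a query.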

Thanks,
Dina


On Mon, Feb 24, 2014 at 5:25 PM, Sanchez, Cristian A 
cristian.a.sanc...@intel.com wrote:

 Hi,
  I'm thinking about creating a blueprint to allow the creation of tenants
  with a defined start-date and end-date. These dates will define a
  time window in which the tenant is considered 'enabled', and auth tokens
  will be issued only when the current time is between those dates.
 This can be particularly useful for projects like Climate where resources
 are reserved. And any resource (like VMs) created for a tenant will have
 the same expiration dates as the tenant.

 Do you think this is something that can be added to Keystone?

 Thanks

 Cristian





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [neutron]The mechanism of physical_network segmentation_id is logical?

2014-02-24 Thread Robert Kukura
On 02/24/2014 07:09 AM, 黎林果 wrote:
 Hi stackers,
 
    When creating a network, if we don't set provider:network_type,
  provider:physical_network or provider:segmentation_id, the
  network_type will be taken from cfg, but the other two come from the
  DB's first available record. The code is
 
 (physical_network,
  segmentation_id) = ovs_db_v2.reserve_vlan(session)
 
 
 
    There are two questions.
    1, network_vlan_ranges = physnet1:100:200
   Can we configure multiple physical_networks in cfg?

Hi Lee,

You can configure multiple physical_networks. For example:

network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3

This makes ranges of VLAN tags on physnet1 and physnet2 available for
allocation as tenant networks (assuming tenant_network_type = vlan).

This also makes physnet1, physnet2, and physnet3 available for
allocation of VLAN (and flat for OVS) provider networks (with admin
privilege). Note that physnet3 is available for allocation of provider
networks, but not for tenant networks because it does not have a range
of VLANs specified.

 
    2, If yes, the chosen physical_network is nondeterministic. Is this logical?

Each physical_network is considered to be a separate VLAN trunk, so VLAN
2345 on physnet1 is a different isolated network than VLAN 2345 on
physnet2. All the specified (physical_network,segmentation_id) tuples
form a pool of available tenant networks. Normal tenants have no
visibility of which physical_network trunk their networks get allocated on.
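The semantics above can be sketched in a few lines of Python (illustrative only; this is not Neutron's actual parsing code, and the function name is made up):

```python
def parse_network_vlan_ranges(cfg_value):
    """Map each physical_network to its list of (vlan_min, vlan_max)
    tenant VLAN ranges; an empty list means the physnet is available
    for provider networks only."""
    ranges = {}
    for entry in cfg_value.split(','):
        parts = entry.strip().split(':')
        physnet = parts[0]
        ranges.setdefault(physnet, [])
        if len(parts) == 3:
            ranges[physnet].append((int(parts[1]), int(parts[2])))
    return ranges

pools = parse_network_vlan_ranges(
    'physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3')
# pools == {'physnet1': [(100, 200), (1000, 3000)],
#           'physnet2': [(2000, 4000)], 'physnet3': []}
```

Every (physical_network, segmentation_id) pair drawn from these ranges then contributes to the tenant network pool described above.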

-Bob

 
 
 Regards!
 
 Lee Li
 
 




[openstack-dev] [Ironic] ilo driver need to submit a code change in nova ironic driver

2014-02-24 Thread Faizan Barmawer
Hi All,

I am currently working on ilo driver for ironic project.
As part of this implementation and to integrate with nova ironic driver (
https://review.openstack.org/#/c/51328/) we need to make changes to
driver.py and ironic_driver_fields.py files, to pass down ilo driver
specific fields to the ironic node. Since nova ironic driver code review
still in progress and not yet integrated into openstack, we have not
included this piece of code in the ilo driver code review patch (
https://review.openstack.org/#/c/73787/).

We need your suggestion on delivering this part of ilo driver code change
in nova ironic driver.
- Should we wait for the completion of the nova ironic driver and then raise a
defect to submit these changes? Or
- should we raise a defect now and submit it for review, noting the dependency
on the nova ironic driver review? Or
- can we use the existing blueprint for the ilo driver to raise a separate
review for this code change, with the nova ironic driver as a dependency?

Please suggest a better way of delivering these changes.

Thanks & Regards,
Barmawer


[openstack-dev] Outreach Program for Women - May-Aug 2014

2014-02-24 Thread Anne Gentle
Hi all,
Thanks to the OpenStack Foundation for funding a spot for an intern with the
GNOME Outreach Program for Women for May-August 2014. I've updated the
OpenStack wiki page and would like to recruit mentors for this round. If
you're interested in mentoring an intern and have a good idea for a 3-month
project, please sign up here:

https://wiki.openstack.org/wiki/OutreachProgramForWomen

and add your project idea here:

https://wiki.openstack.org/wiki/OutreachProgramForWomen/Ideas

If your org would like to fund an intern for $6250 please reach out!

We should find out today if we're also participating in the Google Summer
of Code internship program so it's an exciting week for mentors. Your
support is much appreciated!

Thanks,
Anne


Re: [openstack-dev] [All] Fixed recent gate issues

2014-02-24 Thread Alan Pevec
2014-02-23 10:52 GMT+01:00 Gary Kotton gkot...@vmware.com:
 It looks like this does not solve the issue.

Yeah https://review.openstack.org/74451 doesn't solve the issue
completely, we have
SKIP_EXERCISES=boot_from_volume,bundle,client-env,euca,swift,client-args
but failure is now in Grenade's Javelin script:

+ swift upload javelin /etc/hosts
...(same Traceback)...
[ERROR] /opt/stack/new/grenade/setup-javelin:151 Swift upload failed


  I wonder if we need the same change for stable/havana.

The devstack-gate master branch handles all projects' branches; the patch above
was inside an 'if stable/grizzly' conditional

Cheers,
Alan



[openstack-dev] [oslo] review priority list

2014-02-24 Thread Doug Hellmann
Team -

I've made a list of the reviews associated with bugs and blueprints for
icehouse-3, to make it easier to prioritize them. Some are very close to
being ready to merge, so please take a look through the list for any
changes you haven't already reviewed.

Thanks!
Doug


* oslo.db graduation (other db work should probably move to the oslo.db
library repository to let the incubator version of the code stabilize)

https://review.openstack.org/#/c/71874/ - Refactor database migration
manager to use given engine

https://review.openstack.org/#/c/74963/ - Prevent races in opportunistic db
test cases

https://review.openstack.org/#/c/74081/ - Add a base test case for DB
schema comparison

* systemd integration

https://blueprints.launchpad.net/oslo/+spec/service-readiness

https://review.openstack.org/#/c/72683/ - notify calling process we are
ready to serve

* once-per-request-filters

https://blueprints.launchpad.net/oslo/+spec/once-per-request-filters

https://review.openstack.org/#/c/65424/ - Allow filters to only run once
per request if their data is static

* lockutils  posix_ipc

https://review.openstack.org/#/c/69420/ - Use Posix IPC in lockutils

* notification subscription

https://blueprints.launchpad.net/oslo.messaging/+spec/notification-subscriber-server

https://review.openstack.org/#/c/61675/ - Allow to requeue the notification
message

https://review.openstack.org/#/c/70106/ - Add multiple exchange per
listener in fake driver

* Bug: lack of tests for qpid driver

https://bugs.launchpad.net/oslo.messaging/+bug/1255239

https://review.openstack.org/#/c/75853/ - Adds unit test cases to impl_qpid

* Bug: qpid reconnection delay can't be configured

https://bugs.launchpad.net/oslo.messaging/+bug/1281148

https://review.openstack.org/#/c/74315/ - Use a more accurate max_delay
for reconnects

* Bug: Misleading warning about MySQL TRADITIONAL mode not being set

https://bugs.launchpad.net/oslo/+bug/1271706

https://review.openstack.org/#/c/68473/17 - Introduce a method to set any
MySQL session SQL mode


Re: [openstack-dev] [oslo] review priority list

2014-02-24 Thread Doug Hellmann
I suppose if I'm going to leave anything out, it's best for me to leave out
the blueprint I'm working on.

* oslo.test graduation

https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-test

https://review.openstack.org/#/c/74408/ - Set up tox to run cross-project
tests



On Mon, Feb 24, 2014 at 9:52 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 Team -

 I've made a list of the reviews associated with bugs and blueprints for
 icehouse-3, to make it easier to prioritize them. Some are very close to
 being ready to merge, so please take a look through the list for any
 changes you haven't already reviewed.

 Thanks!
 Doug


 * oslo.db graduation (other db work should probably move to the oslo.db
 library repository to let the incubator version of the code stabilize)

 https://review.openstack.org/#/c/71874/ - Refactor database migration
 manager to use given engine

 https://review.openstack.org/#/c/74963/ - Prevent races in opportunistic
 db test cases

 https://review.openstack.org/#/c/74081/ - Add a base test case for DB
 schema comparison

 * systemd integration

 https://blueprints.launchpad.net/oslo/+spec/service-readiness

 https://review.openstack.org/#/c/72683/ - notify calling process we are
 ready to serve

 * once-per-request-filters

 https://blueprints.launchpad.net/oslo/+spec/once-per-request-filters

 https://review.openstack.org/#/c/65424/ - Allow filters to only run once
 per request if their data is static

 * lockutils  posix_ipc

 https://review.openstack.org/#/c/69420/ - Use Posix IPC in lockutils

 * notification subscription


 https://blueprints.launchpad.net/oslo.messaging/+spec/notification-subscriber-server

 https://review.openstack.org/#/c/61675/ - Allow to requeue the
 notification message

 https://review.openstack.org/#/c/70106/ - Add multiple exchange per
 listener in fake driver

 * Bug: lack of tests for qpid driver

 https://bugs.launchpad.net/oslo.messaging/+bug/1255239

 https://review.openstack.org/#/c/75853/ - Adds unit test cases to
 impl_qpid

 * Bug: qpid reconnection delay can't be configured

 https://bugs.launchpad.net/oslo.messaging/+bug/1281148

 https://review.openstack.org/#/c/74315/ - Use a more accurate max_delay
 for reconnects

 * Bug: Misleading warning about MySQL TRADITIONAL mode not being set

 https://bugs.launchpad.net/oslo/+bug/1271706

 https://review.openstack.org/#/c/68473/17 - Introduce a method to set any
 MySQL session SQL mode



Re: [openstack-dev] [nova] oslo.messaging rampant errors in nova-api logs

2014-02-24 Thread Matt Riedemann



On Monday, February 24, 2014 7:26:04 AM, Sean Dague wrote:

I'm looking at whether we can get ourselves to enforcing only known
ERRORs in logs. In doing so one of the most visible issues on non
neutron runs is oslo.messaging spewing approximately 50:

ERROR oslo.messaging.notify._impl_messaging [-] Could not send
notification to notifications

http://logs.openstack.org/45/75245/3/check/check-tempest-dsvm-full/7ad149e/logs/screen-n-api.txt.gz?level=TRACE

We could whitelist this, however, this looks like a deeper issue.
Something that should actually be solved prior to release.

Really need some eyes in here from people more familiar with the
oslo.messaging code, and why we'd be tripping a circular reference
violation here.

-Sean





FYI there is a bug for it too:

https://bugs.launchpad.net/nova/+bug/1283270

--

Thanks,

Matt Riedemann




[openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Randy Tuttle
Has anyone experienced this issue when running tox? I'm trying to figure out if
this is some limit of the tox environment or something else. I've seen this
referenced in other projects, but can't seem to zero in on a proper fix.

tox -e py27

[...8...snip a lot]

neutron.tests.unit.test_routerserviceinsertion\nneutron.tests.unit.test_security_groups_rpc\nneutron.tests.unit.test_servicetype=\xc1\xf1\x19',
stderr=None
error: testr failed (3)
ERROR: InvocationError:
'/Users/rtuttle/projects/neutron/.tox/py27/bin/python -m
neutron.openstack.common.lockutils python setup.py testr --slowest
--testr-args='
__ summary
__
ERROR:   py27: commands failed

It seems that what it may be complaining about is a missing oslo.config. If
I try to import the final module noted above (i.e.,
neutron.tests.unit.test_servicetype), I get an error about the missing
module.

Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import neutron.tests.unit.test_servicetype
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "neutron/tests/unit/__init__.py", line 20, in <module>
from oslo.config import cfg
ImportError: No module named oslo.config

Cheers,
Randy


[openstack-dev] [oslo] meeting 1400UTC 28 Feb 2014

2014-02-24 Thread Doug Hellmann
I'd like for the oslo team to meet this Friday, 28 Feb, at 1400 UTC to
review our icehouse-3 status and talk about integrating with the security
response team.

If you have anything else we need to go over, please add it to
https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting

Doug


[openstack-dev] What's Up Doc? Feb 24 2014

2014-02-24 Thread Anne Gentle
Hi all,
Just kicked off announcing another round for the Outreach Program for
Women, exciting! I also wanted to give a round up from the world of docs so
here goes.

1. In review and merged this past week:

We've had a good number of reviews going through, as well as changes to doc
tools, including JSON validation improvements. Also, you can now substitute
"docs-draft" for "logs" in the URL from a review and see the built
documentation. WOO.

2. High priority doc work:

I'd say our first priority is bug triage and fixing, for core projects
first. For the operator docs (openstack-manuals) we're at 360 open bugs
with over 50 to be triaged. For the API docs we have about 150 open bugs.

I attended the Trove mid-cycle meetup last week. They've got their API docs
as a priority, and have a writer assigned to work on install docs. This is
great news! So reviewers, be ready to take a look at patches.

As always, the install guide is a high priority. We're having a good
discussion about the install guide and default configuration on the
openstack-docs mailing list, feel free to join us.

3. Doc work going on that I know of:

Matt Kassawara and the install crew have been working on a 3-node install with
neutron on icehouse, nice work! Thanks, Phil Hopkins.

The final-final edits to the Operations Guide ship today to go to their
production team. I have been porting to the feature/edits branch about once
a week to keep our master branch where we do reviews, and the feature/edits
branch where production will occur. I'll back port the index entries about
once a week.

Shaun McCance continues to work on scraping code for configuration options
information and ensuring we capture all of the options across projects.

4. New incoming doc requests:

On the openstack-dev list we've had a conversation [2] about adding a Heat
template authors chapter to the End User Guide. Who's interested in this?
I'm going to ask a few select people I have in mind but also wanted to let
you all know we're looking for great integration there.

5. Doc tools updates:

In order to publish Markdown documents like the Image API v2 spec, Andreas
worked hard to get the openstack-doc-tools utility released a few times.
Release notes are here:
https://github.com/openstack/openstack-doc-tools#release-notes. Next up for
the doc tools are testing JSON and XML API sample requests and responses
(which uncovered over 80 flaws). Great improvements here.

6. Other doc news:

I've sent a request to add three questions about API documentation to the
User Survey. [1]

Diane split the Conventions page into three parts: Writing Style, DocBook
Markup, and WADL Markup.

I'm in San Antonio this week for an internal developer conference, so I'll
be less available on IRC than usual. Still planning to run the weekly docs
meeting though, so join in the fun! This week is the 4th Wednesday, see you
at 14:00:00 UTC in #openstack-meeting-alt.

[1]
http://lists.openstack.org/pipermail/user-committee/2014-February/000242.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027129.html


[openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Alexander Tivelkov
Hi,

I would like to initiate one more discussion about an approach we selected
to solve a particular problem in Murano.

The problem statement is the following: we have multiple entities, like
low-level resources and high-level application definitions. Each entity
performs some specific actions, for example creating a VM or deploying an
application configuration. We want each entity's workflow code to be reusable
in order to simplify the development of new applications, as the current
approach with XML-based rules requires significant effort.

After internal discussions inside the Murano team we came up with a solution
which uses well-known programming concepts: classes, their inheritance and
composition.

In this thread I would like to share our ideas and discuss the
implementation details.

We want to represent each and every entity manipulated by Murano as an
instance of some class. These classes will define the structure of the
entities and their behavior. Different entities may be combined together,
interacting with each other and forming a composite environment. Inheritance
may be used to extract common structure and functionality into generic
superclasses, letting their subclasses define only their specific attributes
and actions.

This approach is best explained with an example. Let's consider the
Active Directory Windows service. This is one of the currently available
Murano Applications, and its structure and deployment workflow are pretty
complex. Let's see how it may be simplified by using the proposed
object-oriented approach.

First, let's just describe an Active Directory service in plain English.

Active Directory service consists of several Controllers: exactly one
Primary Domain Controller and, optionally, several Secondary Domain
Controllers. Controllers (both Primary and Secondary) are special Windows
Instances, having an Active Directory server role activated. Their specific
difference is in the configuration scripts which are executed on them after
the roles are activated. Also, Secondary Domain Controllers are able to join
a domain, while the Primary Domain Controller cannot.

Windows Instances are regular machines with some limitations on their
images (it should, obviously, be a Windows image) and hardware flavor
(Windows is usually demanding on resources). Also, Windows machines may
have some specific operations, like configuring Windows firewall rules or
setting the local administrator password.

And machines in general (both Windows and others) are simple
entities which know how to create virtual machines in OpenStack clouds.

Now, let's map this model to object-oriented concepts. We get the following
classes:


   1. Instance. Defines common properties of virtual machines (flavor, image,
      hostname) and a deployment workflow which executes a Heat template to
      create an instance in the cloud.
   2. WindowsInstance - inherits Instance. Defines the local administrator
      account password and extends the base deployment workflow to set this
      password and configure the Windows firewall after the instance is
      deployed.
   3. DomainMember - inherits WindowsInstance; defines a machine which can
      join an Active Directory domain. Adds a "join domain" workflow step.
   4. DomainController - inherits WindowsInstance; adds an "Install AD Role"
      workflow step and extends the Deploy step to call it.
   5. PrimaryController - inherits DomainController; adds a "Configure as
      Primary DC" workflow step and extends the Deploy step to call it. Also
      adds a domainIpAddress property which is set during the deployment.
   6. SecondaryController - inherits both DomainMember and DomainController.
      Adds a "Configure as Secondary DC" workflow step and extends the
      Deploy() step to call it and the "join domain" step inherited from the
      DomainMember class.
   7. ActiveDirectory - the primary class which defines an Active Directory
      application. Defines properties for the PrimaryController and
      SecondaryControllers, and a Deploy workflow which calls the appropriate
      workflows on the controllers.


The simplified class diagram may look like this:





So, this approach allows us to decompose the AD deployment workflow into
simple isolated parts, explicitly manage the state and create reusable
entities (of course, classes like Instance, WindowsInstance and DomainMember
may be used by other Murano Applications). To me this looks much, much
better than the current implicit state machine which we run based on XML
rules.
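To make the composition above concrete, here is a rough Python 3 sketch of the hierarchy just described. This is purely illustrative — the proposal does not settle on a language or method names, and the deploy steps are represented here as plain strings:

```python
class Instance:
    """Common VM properties plus the base deployment workflow."""

    def __init__(self, flavor=None, image=None, hostname=None):
        self.flavor, self.image, self.hostname = flavor, image, hostname

    def deploy(self):
        # Base step: create the instance via a Heat template.
        return ["create instance via Heat template"]


class WindowsInstance(Instance):
    def deploy(self):
        # Extend the base workflow after the instance is deployed.
        return super().deploy() + ["set local admin password",
                                   "configure Windows firewall"]


class DomainMember(WindowsInstance):
    def join_domain(self):
        return "join domain"


class DomainController(WindowsInstance):
    def deploy(self):
        return super().deploy() + ["install AD role"]


class PrimaryController(DomainController):
    def deploy(self):
        self.domain_ip_address = "10.0.0.2"  # illustrative; set during deployment
        return super().deploy() + ["configure as primary DC"]


class SecondaryController(DomainMember, DomainController):
    # Cooperative super() walks the MRO, so the DomainController steps
    # are inherited automatically alongside DomainMember's join_domain.
    def deploy(self):
        return super().deploy() + [self.join_domain(),
                                   "configure as secondary DC"]
```

An ActiveDirectory class would then simply hold one PrimaryController plus a list of SecondaryControllers and call deploy() on each, matching item 7 in the list above.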

What do you think about this approach, folks? Do you think it will be
easily understood by application developers? Will it be easy to write
workflows this way? Do you see any drawbacks here?

Waiting for your feedback.


--
Regards,
Alexander Tivelkov


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Alexander Tivelkov
Sorry folks, I didn't put the proper image url. Here it is:


https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov
ativel...@mirantis.com wrote:

 Hi,

 [...snip: quoted text of the original message...]

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Dan Smith
 - We want to make backwards incompatible changes to the API
   and whether we do it in-place with V2 or by releasing V3
   we'll have some form of dual API support burden.

IMHO, the cost of maintaining both APIs (which are largely duplicated)
for almost any amount of time outweighs the cost of localized changes.

   - Not making backwards incompatible changes means:
 - retaining an inconsistent API
 - not being able to fix numerous input validation issues
 - have to forever proxy for glance/cinder/neutron with all
   the problems that entails.

The neutron stickiness aside, I don't see a problem leaving the proxying
in place for the foreseeable future. I think that it's reasonable to
mark them as deprecated, encourage people not to use them, and maybe
even (with a core api version to mark the change) say that they're not
supported anymore.

I also think that breaking our users because we decided to split A into
B and C on the backend kind of sucks. I imagine that continuing to do
that at the API layer (when we're clearly going to keep doing it on the
backend) is going to earn us a bit of a reputation.

   - Backporting V3 infrastructure changes to V2 would be a
 considerable amount of programmer/review time

While acknowledging that you (and others) have done that for v3 already,
I have to think that such an effort is much less costly than maintaining
two complete overlapping pieces of API code.

 - The V3 API as-is has:
   - lower maintenance
   - is easier to understand and use (consistent).
   - Much better input validation which is baked-in (json-schema)
 rather than ad-hoc and incomplete.

In case it's not clear, there is no question that the implementation of
v3 is technically superior in my mind. So, thanks for that :)

IMHO, it is also:

- twice the code
- different enough to be annoying to convert existing clients to use
- not currently different enough to justify the pain

 - Proposed way forward:
   - Release the V3 API in Juno with nova-network and tasks support
   - Feature freeze the V2 API when the V3 API is released
 - Set the timeline for deprecation of V2 so users have a lot
   of warning

This feels a lot like holding our users hostage in order to get them to
move. We're basically saying "We tweaked a few things, fixed some
spelling errors, and changed some date stamp formats. You will have to
port your client, or no new features for you!" That's obviously a little
hyperbolic, but I think that deployers of API v2 would probably feel like
that's the story they have to give to their users.

 Firstly I'd like to step back a bit and ask the question whether we
 ever want to fix up the various problems with the V2 API that involve
 backwards incompatible changes. These range from inconsistent naming
 through the urls and data expected and returned, to poor and
 inconsistent input validation and removal of all the proxying Nova
 does to cinder, glance and neutron. I believe the answer to this is
 yes - inconsistencies in the API make it harder to use (e.g. do I have an
 instance or a server, and a project or a tenant, just to name a
 couple) and more error prone, and proxying has caused several
 painful-to-fix issues for us.

I naively think that we could figure out a way to move things forward
without having to completely break older clients. It's clear that other
services (with much larger and more widely-used APIs) are able to do it.

That said, I think the corollary to the above question is: do we ever
want to knowingly break an existing client for either of:

1. arbitrary user-invisible backend changes in implementation or
   service organization
2. purely cosmetic aspects like spelling, naming, etc

IMHO, we should do whatever we can to avoid breaking them except for the
most extreme cases.

--Dan



[openstack-dev] [Neutron] L3 HA VRRP concerns

2014-02-24 Thread Assaf Muller
Hi everyone,

A few concerns have popped up recently about [1] which I'd like to share and 
discuss,
and would love to hear your thoughts Sylvain.

1) Is there a way through the API to know, for a given router, what agent is 
hosting
the active instance? This might be very important for admins to know.

2) The current approach is to create an administrative network and subnet for 
VRRP traffic per router group /
per router. Is this network counted in the tenant's quota? (Clearly it 
shouldn't be.) Same
question for the HA ports created for each router instance.

3) The administrative network is created per router and takes away from the 
VLAN ranges if using
VLAN tenant networks (For a tunneling based deployment this is a non-issue). 
Maybe we could
consider a change that creates an administrative network per tenant (Which 
would then limit
the solution to up to 255 routers because of VRRP's group limit), or an admin 
network per 255
routers?

4) Maybe the VRRP hello and dead times should be configurable? I can see admins 
that would love to
up or down these numbers.

5) The administrative / VRRP networks, subnets and ports that are created - 
Will they be marked in any way
as an 'internal' network or some equivalent tag? Otherwise they'd show up when 
running neutron net-list,
in the Horizon networks listing as well as the graphical topology drawing 
(Which, personally, is what
bothers me most about this). I'd love them tagged and hidden from the normal 
net-list output,
and something like a 'neutron net-list --all' introduced.

6) The IP subnet chosen for VRRP traffic is specified in neutron.conf. If a 
tenant creates a subnet
with the same range, and attaches an HA router to that subnet, the operation 
will fail as the router
cannot have different interfaces belonging to the same subnet. Nir suggested to 
look into using
the 169.254.0.0/16 range as the default because we know it will (hopefully) not 
be allocated by tenants.
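On point 6, the overlap check itself is cheap to do at router-attach time. A sketch using the Python 3 stdlib `ipaddress` module (or its backport) — note the 169.254.192.0/18 default shown here is only an illustrative slice of the link-local range, not a decided value:

```python
import ipaddress

# Illustrative default only: a slice of the 169.254.0.0/16 link-local
# range that tenants are unlikely to allocate themselves.
VRRP_NET = ipaddress.ip_network("169.254.192.0/18")


def conflicts_with_vrrp(tenant_cidr):
    """True if a tenant subnet overlaps the VRRP administrative range."""
    return ipaddress.ip_network(tenant_cidr).overlaps(VRRP_NET)
```

A check like this could refuse (or warn about) attaching an HA router to a subnet that collides with the administrative range, instead of failing later during deployment.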

[1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability


Assaf Muller, Cloud Networking Engineer 
Red Hat 



Re: [openstack-dev] [Neutron]Do you think tanent_id should be verified

2014-02-24 Thread Jay Pipes
On Mon, 2014-02-24 at 16:23 +0800, Lingxian Kong wrote:
 I think 'tenant_id' should always be validated when creating neutron
 resources, whether or not Neutron can handle the notifications from
 Keystone when tenant is deleted.

-1

Personally, I think this cross-service request is likely too expensive
to do on every single request to Neutron. It's already expensive enough
to use Keystone when not using PKI tokens, and adding another round trip
to Keystone for this kind of thing is not appealing to me. The tenant is
already validated when it is used to get the authentication token used
in requests to Neutron, so other than the scenarios where a tenant is
deleted in Keystone (which, with notifications in Keystone, there is now
a solution for), I don't see much value in the extra expense this would
cause.

Best,
-jay





[openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-24 Thread David Peraza
Hello all,

I have been trying some new ideas on the scheduler and I think I'm reaching a 
resource issue. I'm running 6 compute services right on my 4-CPU, 4-gig VM, and I 
started to get some memory allocation issues. Keystone and Nova are already 
complaining there is not enough memory. The obvious solution to add more 
candidates is to get another VM and set up another 6 fake compute services. I could 
do that, but I think I need to be able to scale more without using this many 
resources. I would like to simulate a cloud of 100, maybe 1000, compute 
nodes that do nothing (Fake driver); this should not take this much memory. 
Does anyone know of a more efficient way to simulate many computes? I was thinking 
of changing the Fake driver to report many compute services in different threads 
instead of having to spawn a process per compute service. Any other ideas?
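For what it's worth, a single-process, thread-per-service simulation could look roughly like the sketch below. `FakeComputeService` here is a made-up stand-in, not the actual nova Fake driver — a real change would have each thread register its own service record and periodically report resource usage:

```python
import threading
import time


class FakeComputeService(threading.Thread):
    """Hypothetical lightweight stand-in for one nova-compute service."""

    def __init__(self, host):
        super().__init__(daemon=True)
        self.host = host
        self.heartbeats = 0
        self._stop_evt = threading.Event()

    def run(self):
        while not self._stop_evt.is_set():
            self.heartbeats += 1          # real code: publish resource usage
            self._stop_evt.wait(0.01)     # periodic report interval

    def stop(self):
        self._stop_evt.set()


def simulate(n_hosts, duration=0.1):
    """Run n_hosts fake services as threads in one process, then stop them."""
    services = [FakeComputeService("fake-host-%d" % i) for i in range(n_hosts)]
    for svc in services:
        svc.start()
    time.sleep(duration)
    for svc in services:
        svc.stop()
        svc.join()
    return services
```

Thread stacks are far cheaper than full processes, so hundreds of these should fit comfortably where six nova-compute processes already strain a 4 GB VM.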

Regards,
David Peraza


DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Mark McClain

On Feb 21, 2014, at 1:29 PM, Jay Pipes jaypi...@gmail.com wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay. The API needs to be user-centric and free of 
implementation details. One of my concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Stan Lagun
Hi Alex,

Personally I like the approach and how you explain it. I would just like to
know your opinion on how this is better than someone writing a Heat template
that creates an Active Directory, let's say with one primary and one secondary
controller, and then publishing it somewhere. Since Heat does support software
configuration as of late, and has the concept of environments [1] (which
Steven Hardy generously pointed out in another mailing thread can be used for
composition as well), it seems like everything you said can be done by Heat
alone.

[1]:
http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html


On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov
ativel...@mirantis.com wrote:

 Sorry folks, I didn't put the proper image url. Here it is:


 https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


 --
 Regards,
 Alexander Tivelkov


 On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Hi,

 [...snip: quoted text of the original message...]

Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Collins, Sean
Yes - it's a problem with non-Linux platforms not being able to install
pyudev, which is a requirement for the linuxbridge plugin, which makes
testr barf when it hits an ImportError.

http://lists.openstack.org/pipermail/openstack-dev/2014-January/023268.html

In the past, I've run tox -e py26 as a workaround, since for some reason
testr shrugs off the ImportError in python 2.6.


-- 
Sean M. Collins


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Jay Pipes
On Mon, 2014-02-24 at 20:22 +1030, Christopher Yeoh wrote:
 On Mon, 24 Feb 2014 02:06:50 -0500
 Jay Pipes jaypi...@gmail.com wrote:
 
  On Mon, 2014-02-24 at 17:20 +1030, Christopher Yeoh wrote:
   - Proposed way forward:
 - Release the V3 API in Juno with nova-network and tasks support
 - Feature freeze the V2 API when the V3 API is released
   - Set the timeline for deprecation of V2 so users have a lot
 of warning
   - Fallback for those who really don't want to move after
 deprecation is an API service which translates between V2 and V3
 requests, but removes the dual API support burden from Nova.
  
  And when do you think we can begin the process of deprecating the V3
  API and removing API extensions and XML translation support?
 
 So did you mean V2 API here? I don't understand why you think the V3
 API would need deprecating any time soon.

No, I meant v3.

 XML support has already been removed from the V3 API and I think the
 patch to mark XML as deprecated for the V2 API and eventual removal in
 Juno has already landed. So at least for part of the V2 API a one cycle
 deprecation period has been seen as reasonable.

OK, very sorry, I must have missed that announcement. I did not realize
that XML support had already been removed from v3.

 When it comes to API extensions I think that is actually more a
 question of policy than anything else. The actual implementation behind
 the scenes of a plugin architecture makes a lot of sense whether we
 have extensions or not. 

An API extension is not a plugin. And I'm not arguing against a plugin
architecture -- the difference is that a driver/plugin architecture
enables a single public API to have different backend implementations.

Please see my diatribe on that here:

https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13660.html

 It forces a good level of isolation between API
 features and clarity of interaction where its needed - all of which
 much is better from a maintenance point of view.

Sorry, I have to violently disagree with you on that one. The API
extensions (in Nova, Neutron, Keystone, et al) have muddied the code
immeasurably and bled implementation into the public API -- something
that is antithetical to good public API design.

Drivers and plugins belong in the implementation layer. Not in the
public API layer.
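As an illustration of the distinction (a toy sketch, not actual Nova code — class names here are invented): with a driver pattern the response shape is owned by the API layer, so swapping backends is invisible to clients:

```python
from abc import ABC, abstractmethod


class NetworkDriver(ABC):
    """Implementation layer: each backend satisfies the same contract."""

    @abstractmethod
    def list_networks(self):
        ...


class NovaNetworkDriver(NetworkDriver):
    def list_networks(self):
        return [{"id": "net-1", "label": "private"}]


class NeutronDriver(NetworkDriver):
    def list_networks(self):
        # Different backend, identical response shape.
        return [{"id": "net-1", "label": "private"}]


class PublicAPI:
    """Public layer: one fixed surface, whichever driver is configured."""

    def __init__(self, driver):
        self.driver = driver

    def get_networks(self):
        return {"networks": self.driver.list_networks()}
```

Clients only ever see `get_networks()`; the deployer's choice of backend never leaks into the public contract.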

 Now whether we have parts of the API which are optional or not is
 really a policy decision as to whether we will force deployers to use
 all of the plugins or a subset (eg currently the core). 

It's not about forcing providers to support all of the public API.
It's about providing a single, well-documented, consistent HTTP REST API
for *consumers* of that API. Whether a provider chooses to, for example,
deploy with nova-network or Neutron, or Xen vs. KVM, or support block
migration for that matter *should have no effect on the public API*. The
fact that those choices currently *do* affect the public API that is
consumed by the client is a major indication of the weakness of the API.

 There is
 the technical support for doing so in the V3 API (essentially what is
 used to enforce the core of the API). And a major API version bump is
 not required to change this. Perhaps this part falls in to the
 DefCore discussions :-)

I don't see how this discussion falls into the DefCore discussion.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-24 Thread John Garbutt
On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com wrote:
 Hello all,

 I have been trying some new ideas on the scheduler and I think I'm reaching a
 resource issue. I'm running 6 compute services right on my 4 CPU 4 Gig VM,
 and I started to get some memory allocation issues. Keystone and Nova are
 already complaining there is not enough memory. The obvious solution to add
 more candidates is to get another VM and set up another 6 fake compute services.
 I could do that, but I think I need to be able to scale more without using
 this many resources. I would like to simulate a cloud of 100, maybe
 1000, compute nodes that do nothing (Fake driver); this should not take this
 much memory. Does anyone know of a more efficient way to simulate many
 computes? I was thinking of changing the Fake driver to report many compute
 services in different threads instead of having to spawn a process per
 compute service. Any other ideas?

It depends what you want to test, but I was able to look at tuning the
filters and weights using the test at the end of this file:
https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py

Cheers,
John
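The threads-instead-of-processes idea from the original question can be sketched with just the standard library. Everything here (FakeCompute, the reported stats) is an illustrative stand-in, not nova code:

```python
# Toy model of "many fake computes in one process": each simulated
# nova-compute is a thread that records a heartbeat/resource report
# into a shared registry, instead of being a full OS process.
import threading

class FakeCompute(threading.Thread):
    def __init__(self, hostname, registry, lock):
        super().__init__(daemon=True)
        self.hostname = hostname
        self.registry = registry
        self.lock = lock

    def run(self):
        # A real fake driver would report stats periodically; a single
        # heartbeat is enough to show the footprint difference.
        with self.lock:
            self.registry[self.hostname] = {"vcpus": 4, "memory_mb": 8192}

def simulate(n):
    registry, lock = {}, threading.Lock()
    threads = [FakeCompute("fake-%04d" % i, registry, lock) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return registry

if __name__ == "__main__":
    print(len(simulate(1000)))  # prints 1000
```

A thousand such threads cost megabytes rather than the gigabytes that a thousand separate service processes would.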



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 01:50 AM, Christopher Yeoh wrote:
 Hi,
 
 There has recently been some speculation around the V3 API and whether
 we should go forward with it or instead backport many of the changes
 to the V2 API. I believe that the core of the concern is the extra
 maintenance and test burden that supporting two APIs means and the
 length of time before we are able to deprecate the V2 API and return
 to maintaining only one (well two including EC2) API again.

Yes, this is a major concern.  It has taken an enormous amount of work
to get to where we are, and v3 isn't done.  It's a good time to
re-evaluate whether we are on the right path.

The more I think about it, the more I think that our absolute top goal
should be to maintain a stable API for as long as we can reasonably do
so.  I believe that's what is best for our users.  I think if you gave
people a choice, they would prefer an inconsistent API that works for
years over dealing with non-backwards compatible jumps to get a nicer
looking one.

The v3 API and its unit tests are roughly 25k lines of code.  This also
doesn't include the changes necessary in novaclient or tempest.  That's
just *our* code.  It explodes out from there into every SDK, and then
end user apps.  This should not be taken lightly.

 This email is rather long so here's the TL;DR version:
 
 - We want to make backwards incompatible changes to the API
   and whether we do it in-place with V2 or by releasing V3
   we'll have some form of dual API support burden.
   - Not making backwards incompatible changes means:
 - retaining an inconsistent API

I actually think this isn't so bad, as discussed above.

 - not being able to fix numerous input validation issues

I'm not convinced, actually.  Surely we can do a lot of cleanup here.
Perhaps you have some examples of what we couldn't do in the existing API?

If it's a case of wanting to be more strict, some would argue that the
current behavior isn't so bad (see robustness principle [1]):

Be conservative in what you do, be liberal in what you accept from
others (often reworded as Be conservative in what you send, be
liberal in what you accept).

There's a decent counter argument to this, too.  However, I still fall
back on it being best to just not break existing clients above all else.

 - have to forever proxy for glance/cinder/neutron with all
   the problems that entails.

I don't think I'm as bothered by the proxying as others are.  Perhaps
it's not architecturally pretty, but it's worth it to maintain
compatibility for our users.

   - Backporting V3 infrastructure changes to V2 would be a
 considerable amount of programmer/review time

Agreed, but so is the ongoing maintenance and development of v3.

 
 - The V3 API as-is has:
   - lower maintenance
   - is easier to understand and use (consistent).
   - Much better input validation which is baked-in (json-schema)
 rather than ad-hoc and incomplete.

So here's the rub ... with the exception of the consistency bits, none
of this is visible to users, which makes me think we should be able to
do all of this on v2.
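For readers following the thread, the sort of baked-in, schema-driven input validation being discussed looks roughly like the following. This is a deliberately tiny hand-rolled checker for illustration - the real v3 work uses the jsonschema library, and the schema below only mimics json-schema key names while taking Python types:

```python
# Illustrative "strict" request-body validation: unknown properties are
# rejected rather than silently accepted (the liberal alternative).
SCHEMA = {
    "properties": {
        "name": {"type": str, "maxLength": 255},
        "min_count": {"type": int, "minimum": 1},
    },
    "required": ["name"],
    "additionalProperties": False,
}

def validate(body, schema):
    errors = []
    if not isinstance(body, dict):
        return ["request body must be an object"]
    for key in schema["required"]:
        if key not in body:
            errors.append("missing required property: %s" % key)
    for key, value in body.items():
        prop = schema["properties"].get(key)
        if prop is None:
            if not schema.get("additionalProperties", True):
                errors.append("unexpected property: %s" % key)
            continue
        if not isinstance(value, prop["type"]):
            errors.append("%s: wrong type" % key)
            continue
        if "maxLength" in prop and len(value) > prop["maxLength"]:
            errors.append("%s: too long" % key)
        if "minimum" in prop and value < prop["minimum"]:
            errors.append("%s: below minimum" % key)
    return errors
```

Whether rejecting the unexpected property is a bug fix or a compatibility break for existing liberal clients is exactly the robustness-principle debate above.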

 - Whilst we have existing users of the API we also have a lot more
   users in the future. It would be much better to allow them to use
   the API we want to get to as soon as possible, rather than trying
   to evolve the V2 API and forcing them along the transition that they
   could otherwise avoid.

I'm not sure I understand this.  A key point is that I think any
evolving of the V2 API has to be backwards compatible, so there's no
forcing them along involved.

 - We already have feature parity for the V3 API (nova-network being
   the exception due to the very recent unfreezing of it), novaclient
   support, and a reasonable transition path for V2 users.
 
 - Proposed way forward:
   - Release the V3 API in Juno with nova-network and tasks support
   - Feature freeze the V2 API when the V3 API is released
 - Set the timeline for deprecation of V2 so users have a lot
   of warning
 - Fallback for those who really don't want to move after
   deprecation is an API service which translates between V2 and V3
   requests, but removes the dual API support burden from Nova.

One of my biggest principles with a new API is that we should not have
to force a migration with a strict timeline like this.  If we haven't
built something compelling enough to get people to *want* to migrate as
soon as they are able, then we haven't done our job.  Deprecation of the
old thing should only be done when we feel it's no longer wanted or used
by the vast majority.  I just don't see that happening any time soon.
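As an aside, the translation service mentioned in the quoted proposal would, in spirit, be a thin mapper between the two API dialects. A toy sketch - the field renames here are invented purely for illustration and are not the actual v2/v3 differences:

```python
# Hypothetical v2 -> v3 request shim: rename known fields, pass the
# rest through untouched. RENAMES is illustrative, not real API data.
RENAMES = {"imageRef": "image_ref", "flavorRef": "flavor_ref"}

def translate_v2_to_v3(body):
    server = body.get("server", {})
    return {"server": {RENAMES.get(k, k): v for k, v in server.items()}}
```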

We have a couple of ways forward right now.

1) Continue as we have been, and plan to release v3 once we have a
compelling enough feature set.

2) Take what we have learned from v3 and apply it to v2.  For example:

 - The plugin infrastructure is an internal implementation detail that
   can be done with the existing API.

 - 

[openstack-dev] [Mistral] Community meeting minutes - 02/24/2014

2014-02-24 Thread Renat Akhmerov
Folks,

Thanks for joining us at #openstack-meeting. Here are the links to the meeting 
minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-24-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-24-16.00.log.html

Next meeting will be held on March 3. Looking forward to chat with you again.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-24 Thread Chris Friesen

On 02/20/2014 11:38 AM, Matt Riedemann wrote:



On 2/19/2014 4:05 PM, Matt Riedemann wrote:

The os-hosts OS API extension [1] showed up before I was working on the
project and I see that only the VMware and XenAPI drivers implement it,
but was wondering why the libvirt driver doesn't - either no one wants
it, or there is some technical reason behind not implementing it for
that driver?

[1]
http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html



By the way, am I missing something when I think that this extension is
already covered if you're:

1. Looking to get the node out of the scheduling loop, you can just
disable it with os-services/disable?

2. Looking to evacuate instances off a failed host (or one that's in
maintenance mode), just use the evacuate server action.


In compute/api.py the API.evacuate() routine errors out if
self.servicegroup_api.service_is_up(service) is true, which means that
you can't evacuate from a compute node whose service is still up (merely
disabled); you need to migrate instead.


So, the alternative is basically to disable the service, then get a list 
of all the servers on the compute host, then kick off the migration 
(either cold or live) of each of the servers.  Then because migration 
uses a cast instead of a call you need to poll all the migrations 
for success or late failures.  Once you have no failed migrations and no 
servers running on the host then you're good.
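That disable/migrate/poll sequence can be sketched as a small loop. The client below is an in-memory stand-in with novaclient-flavoured method names so the sketch is runnable; it is not the real novaclient API:

```python
# Drain a host: take it out of scheduling, kick off a migration per
# server (a cast, so it returns immediately), then poll for results.
import time

def drain_host(client, host, poll_interval=0.0):
    client.disable(host, "nova-compute")
    pending = {s: client.live_migrate(s) for s in client.servers_on_host(host)}
    failed = []
    while pending:
        for server, mig_id in list(pending.items()):
            status = client.migration_status(mig_id)
            if status == "completed":
                del pending[server]
            elif status == "error":
                failed.append(server)
                del pending[server]
        if pending:
            time.sleep(poll_interval)
    return failed  # empty list means the host is drained cleanly

class StubClient:
    """In-memory stand-in so the sketch is runnable without a cloud."""
    def __init__(self, servers):
        self._servers = list(servers)
        self._migrations = {}
    def disable(self, host, binary):
        pass
    def servers_on_host(self, host):
        return list(self._servers)
    def live_migrate(self, server):
        mig_id = len(self._migrations)
        self._migrations[mig_id] = "completed"  # stub: always succeeds
        return mig_id
    def migration_status(self, mig_id):
        return self._migrations[mig_id]
```

The polling step is the part the mail calls out: because the migration RPC is a cast rather than a call, success or late failure only shows up by re-checking each migration.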


Chris



Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Collins, Sean
Sorry - fired off this e-mail without looking too closely at your log
output - I just saw the escape characters and the long lines from tox
and it reminded me of the last discussion we had about it. It's
probably not the same error as I was describing.

That's the tough thing that I strongly dislike about Testr - when it
fails, it fails spectacularly and it's very hard to determine what
happened, for mere idiots like myself.

-- 
Sean M. Collins


[openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

2014-02-24 Thread Dugger, Donald D
All-

I'm tempted to cancel the gantt meeting for tomorrow.  The only topics I have 
are the no-db scheduler update (we can probably do that via email) and the 
gantt code forklift (I've been out with the flu and there's no progress on 
that).

I'm willing to chair but I'd like to have some specific topics to talk about.

Suggestions anyone?

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Mark,

I'm not sure I understand which parts of the workflow I proposed in the
email above are implementation details - could you point them out?

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 8:31 PM, Mark McClain mmccl...@yahoo-inc.comwrote:


  On Feb 21, 2014, at 1:29 PM, Jay Pipes jaypi...@gmail.com wrote:

 I disagree on this point. I believe that the more implementation details
 bleed into the API, the harder the API is to evolve and improve, and the
 less flexible the API becomes.

 I'd personally love to see the next version of the LBaaS API be a
 complete breakaway from any implementation specifics and refocus itself
 to be a control plane API that is written from the perspective of the
 *user* of a load balancing service, not the perspective of developers of
 load balancer products.


  I agree with Jay.  The API needs to be user centric and free of
 implementation details.  One of my concerns I've voiced in some of the IRC
 discussions is that too many implementation details are exposed to the user.

  mark





Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Nader Lahouti
Hi Swann,

I was able to listen to keystone notification by setting notifications in
the keystone.conf file. I only needed the notifications (CRUD) for projects
and handle them in my plugin code, so I don't need ceilometer to handle them.
The other issue is that the notification is limited to the resource_id
and doesn't include other information such as the project name.
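To make that limitation concrete, here is a minimal consumer of a keystone identity.project.created payload (shape per the keystone event notification docs). Only the resource id is present, so a consumer that needs the project name must fetch it from the Keystone API separately:

```python
# Sample keystone identity notification: the payload carries only the
# project's id in resource_info, not its name.
SAMPLE = {
    "event_type": "identity.project.created",
    "payload": {"resource_info": "8f9dfbd6c29f4e2d8e66f9e4a06e2468"},
}

def handle_identity_event(message):
    event = message.get("event_type", "")
    if not event.startswith("identity.project."):
        return None                       # not a project CRUD event
    action = event.rsplit(".", 1)[-1]     # created / updated / deleted
    project_id = message["payload"]["resource_info"]
    return action, project_id

if __name__ == "__main__":
    print(handle_identity_event(SAMPLE))  # ('created', <project id>)
```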


Thanks,
Nader.




On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset swan...@gmail.com wrote:


 Hi Nader,

  These notifications must be handled by Ceilometer like the others [1].
  It is surprising that it does not already include identity meters...
  probably nobody needed them before you.
   I guess it remains to open a blueprint and code them, as I recently did for
  Heat [2]


 http://docs.openstack.org/developer/ceilometer/measurements.html
 https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications


 2014-02-20 19:10 GMT+01:00 Nader Lahouti nader.laho...@gmail.com:

 Thanks Dolph for link. The document shows the format of the message and
 doesn't give any info on how to listen to the notification.
 Is there any other document showing the detail on how to listen or get
 these notifications ?

 Regards,
 Nader.

 On Feb 20, 2014, at 9:06 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 Yes, see:

   http://docs.openstack.org/developer/keystone/event_notifications.html

 On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti 
 nader.laho...@gmail.comwrote:

 Hi All,

 I have a question regarding creating/deleting a tenant in openstack
 (using horizon or CLI). Is there any notification mechanism in place so
 that an application get informed of such an event?

 If not, can it be done using plugin to send create/delete notification
 to an application?

 Appreciate your suggestion and help.

 Regards,
 Nader.












Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Samuel Bercovici
Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be made
purely logical. This means that the vip <-> pool relationship also needs to
become many-to-many.
Eugene has rightly pointed out that the current state management will not
handle such a relationship well.
To me this means that the state management is broken, not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:


I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user centric and free of 
implementation details.  One of my concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark


Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-24 Thread yunhong jiang
On Mon, 2014-02-24 at 04:10 +, Liuji (Jeremy) wrote:
 I have found a BP about USB device passthrough in
 https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough. 
 I have also read the latest nova code and make sure it doesn't support
 USB passthrough by now.
 
 Are there any progress or plan for USB passthrough?

I don't know anyone is working on USB passthrough.

--jyh




Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Matt Riedemann



On 2/24/2014 10:13 AM, Russell Bryant wrote:

On 02/24/2014 01:50 AM, Christopher Yeoh wrote:

Hi,

There has recently been some speculation around the V3 API and whether
we should go forward with it or instead backport many of the changes
to the V2 API. I believe that the core of the concern is the extra
maintenance and test burden that supporting two APIs means and the
length of time before we are able to deprecate the V2 API and return
to maintaining only one (well two including EC2) API again.


Yes, this is a major concern.  It has taken an enormous amount of work
to get to where we are, and v3 isn't done.  It's a good time to
re-evaluate whether we are on the right path.

The more I think about it, the more I think that our absolute top goal
should be to maintain a stable API for as long as we can reasonably do
so.  I believe that's what is best for our users.  I think if you gave
people a choice, they would prefer an inconsistent API that works for
years over dealing with non-backwards compatible jumps to get a nicer
looking one.

The v3 API and its unit tests are roughly 25k lines of code.  This also
doesn't include the changes necessary in novaclient or tempest.  That's
just *our* code.  It explodes out from there into every SDK, and then
end user apps.  This should not be taken lightly.


This email is rather long so here's the TL;DR version:

- We want to make backwards incompatible changes to the API
   and whether we do it in-place with V2 or by releasing V3
   we'll have some form of dual API support burden.
   - Not making backwards incompatible changes means:
 - retaining an inconsistent API


I actually think this isn't so bad, as discussed above.


 - not being able to fix numerous input validation issues


I'm not convinced, actually.  Surely we can do a lot of cleanup here.
Perhaps you have some examples of what we couldn't do in the existing API?

If it's a case of wanting to be more strict, some would argue that the
current behavior isn't so bad (see robustness principle [1]):

 Be conservative in what you do, be liberal in what you accept from
 others (often reworded as Be conservative in what you send, be
 liberal in what you accept).

There's a decent counter argument to this, too.  However, I still fall
back on it being best to just not break existing clients above all else.


 - have to forever proxy for glance/cinder/neutron with all
   the problems that entails.


I don't think I'm as bothered by the proxying as others are.  Perhaps
it's not architecturally pretty, but it's worth it to maintain
compatibility for our users.


+1 to this, I think this is also related to what Jay Pipes is saying in 
his reply:


Whether a provider chooses to, for example,
deploy with nova-network or Neutron, or Xen vs. KVM, or support block
migration for that matter *should have no effect on the public API*. The
fact that those choices currently *do* affect the public API that is
consumed by the client is a major indication of the weakness of the API.

As a consumer, I don't want to have to know which V2 APIs work and which 
don't depending on whether I'm using nova-network or Neutron.





   - Backporting V3 infrastructure changes to V2 would be a
 considerable amount of programmer/review time


Agreed, but so is the ongoing maintenance and development of v3.



- The V3 API as-is has:
   - lower maintenance
   - is easier to understand and use (consistent).
   - Much better input validation which is baked-in (json-schema)
 rather than ad-hoc and incomplete.


So here's the rub ... with the exception of the consistency bits, none
of this is visible to users, which makes me think we should be able to
do all of this on v2.


- Whilst we have existing users of the API we also have a lot more
   users in the future. It would be much better to allow them to use
   the API we want to get to as soon as possible, rather than trying
   to evolve the V2 API and forcing them along the transition that they
   could otherwise avoid.


I'm not sure I understand this.  A key point is that I think any
evolving of the V2 API has to be backwards compatible, so there's no
forcing them along involved.


- We already have feature parity for the V3 API (nova-network being
   the exception due to the very recent unfreezing of it), novaclient
   support, and a reasonable transition path for V2 users.

- Proposed way forward:
   - Release the V3 API in Juno with nova-network and tasks support
   - Feature freeze the V2 API when the V3 API is released
 - Set the timeline for deprecation of V2 so users have a lot
   of warning
 - Fallback for those who really don't want to move after
   deprecation is an API service which translates between V2 and V3
   requests, but removes the dual API support burden from Nova.


One of my biggest principles with a new API is that we should not have
to force a migration with a strict timeline like this.  If we haven't
built something 

Re: [openstack-dev] [Ironic] ilo driver need to submit a code change in nova ironic driver

2014-02-24 Thread Chris K
Hi Barmawer,

Currently the Ironic Nova driver is blocked from merging. The Ironic team
is working on getting all the pieces in place for our C.I. testing. At this
point I would say your best path is to create your patch with 51328 as a
dependency. Please note that the nova driver will most likely be going
through several more revisions as we get closer. This will mean that your
dependent patch will need to be rebased as new Nova driver patches are pushed
up. This is very common; I am just pointing it out so that you can keep an
eye out for the [OUTDATED] tag on the review. Also, please tag your
dependent patch with implements bp:deprecate-baremetal-driver; this will
ensure your patch is added to the Blueprint and make it clear that it is
part of the deprecate-baremetal-driver patch set.


Chris Krelle


On Mon, Feb 24, 2014 at 6:05 AM, Faizan Barmawer
faizan.barma...@gmail.comwrote:

 Hi All,

 I am currently working on ilo driver for ironic project.
 As part of this implementation and to integrate with nova ironic driver (
 https://review.openstack.org/#/c/51328/) we need to make changes to
 driver.py and ironic_driver_fields.py files, to pass down ilo driver
 specific fields to the ironic node. Since nova ironic driver code review
 still in progress and not yet integrated into openstack, we have not
 included this piece of code in the ilo driver code review patch (
 https://review.openstack.org/#/c/73787/).

 We need your suggestion on how to deliver this part of the ilo driver code
 change in the nova ironic driver.
 - Should we wait for the completion of nova ironic driver and then raise a
 defect to submit these changes? or
 - should we raise a defect now and submit for review, giving the
 dependency on the nova ironic driver review? or
 - Can we use the existing blueprint for ilo driver to raise a separate
 review for this code change giving nova ironic driver as dependency?

 Please suggest a better way of delivering these changes.

 Thanks & Regards,
 Barmawer





Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Folks,

So far everyone agrees that the model should be purely logical, but no one
has come up with the API and meaningful implementation details (at least at
the idea level) of such an object model.
As I've pointed out, a 'pure logical' object model has some API and user
experience inconsistencies that we need to sort out before we implement it.
I'd like to see real details proposed for such 'pure logical' object model.

Let's also consider the cost of the change - it's easier to do it gradually
than rewrite it from scratch.
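To give the discussion something concrete to poke at, one possible shape for such a purely logical model - plain objects, a many-to-many vip/pool association, and no backend or device fields - might look like this (entirely illustrative, not a concrete API proposal):

```python
# Purely logical LBaaS objects: no provider, device, or driver fields,
# and a many-to-many association between vips and pools.
class Pool:
    def __init__(self, name):
        self.name = name
        self.members = []   # logical members only: (address, port) tuples
        self.vips = []

class Vip:
    def __init__(self, address, port):
        self.address = address
        self.port = port
        self.pools = []

    def attach(self, pool):
        # Many-to-many: a vip can front several pools and vice versa.
        if pool not in self.pools:
            self.pools.append(pool)
            pool.vips.append(self)

vip_http = Vip("10.0.0.10", 80)
vip_https = Vip("10.0.0.11", 443)
web, api = Pool("web"), Pool("api")
for vip in (vip_http, vip_https):
    vip.attach(web)
vip_http.attach(api)
assert len(web.vips) == 2 and len(vip_http.pools) == 2
```

The open question Eugene raises - what the per-object status fields mean once an object can be bound to several peers - is exactly where a sketch like this stops and the real design work starts.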

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 9:36 PM, Samuel Bercovici samu...@radware.comwrote:

  Hi,



 I also agree that the model should be pure logical.

  I think that the existing model is almost correct but the pool should be
  made purely logical. This means that the vip <-> pool relationship needs
  also to become many-to-many.

  Eugene has rightly pointed out that the current state management will
  not handle such a relationship well.

 To me this means that the state management is broken and not the model.

 I will propose an update to the state management in the next few days.



 Regards,

 -Sam.









 *From:* Mark McClain [mailto:mmccl...@yahoo-inc.com]
 *Sent:* Monday, February 24, 2014 6:32 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion





 On Feb 21, 2014, at 1:29 PM, Jay Pipes jaypi...@gmail.com wrote:



  I disagree on this point. I believe that the more implementation details
 bleed into the API, the harder the API is to evolve and improve, and the
 less flexible the API becomes.

 I'd personally love to see the next version of the LBaaS API be a
 complete breakaway from any implementation specifics and refocus itself
 to be a control plane API that is written from the perspective of the
 *user* of a load balancing service, not the perspective of developers of
 load balancer products.



  I agree with Jay.  The API needs to be user centric and free of
 implementation details.  One of my concerns I've voiced in some of the IRC
 discussions is that too many implementation details are exposed to the user.



 mark





Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-24 Thread gustavo panizzo gfa
On 02/24/2014 01:10 AM, Liuji (Jeremy) wrote:
 Hi, Boris and all other guys:

 I have found a BP about USB device passthrough in 
 https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough. 
 I have also read the latest nova code and make sure it doesn't support USB 
 passthrough by now.

 Are there any progress or plan for USB passthrough?
use usbip, it works today and is awesome!

http://usbip.sourceforge.net/



 Thanks,
 Jeremy Liu



-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333



Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Alexander Tivelkov
Hi Stan,

It is good that we are on a common ground here :)

Of course this can be done by Heat. In fact, it will be, in the very same
manner as it always was - I am pretty sure we've discussed this many times
already. When Heat Software Config is fully implemented, it will be
possible to use it instead of our Agent execution plans for software
configuration - in the very same manner as we use regular heat templates
for resource allocation.

Heat does indeed support template composition - but we don't want our
end-users to learn how to do that: we want them just to combine existing
applications at a higher level. Murano will use the template composition under
the hood, but only in the way designed by the application publisher.
If the publisher has decided to configure the software using Heat
Software Config, then this option will be used. If some other (probably
legacy) way of doing this was preferred, Murano should be able to
support that and allow such workflows to be created.

Also, there may be workflow steps which are not covered by Heat by design.
For example, an application publisher may include creating instance snapshots,
data migrations, backups etc. in the deployment or maintenance workflows.
I don't see how these may be done by Heat, while Murano should definitely
support these scenarios.

So, as a conclusion, Murano should not be thought of as a Heat alternative:
it is a different tool located at a different layer of the stack, aimed at a
different user audience - and, most importantly, using Heat
underneath.


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 8:36 PM, Stan Lagun sla...@mirantis.com wrote:

 Hi Alex,

  Personally I like the approach and how you explain it. I would just like
  to know your opinion on how this is better than someone writing a Heat template
  that creates Active Directory, let's say with one primary and one secondary
  controller, and then publishing it somewhere. Since Heat does support software
  configuration as of late, and has the concept of environments [1] (which Steven
  Hardy generously pointed out in another mailing thread can be used for
  composition as well), it seems like everything you said can be done by Heat
  alone.

 [1]:
 http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html


 On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Sorry folks, I didn't put the proper image url. Here it is:


 https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


 --
 Regards,
 Alexander Tivelkov


 On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Hi,

 I would like to initiate one more discussion about an approach we
 selected to solve a particular problem in Murano.

  The problem statement is the following: we have multiple entities, like
  low-level resources and high-level application definitions. Each entity
  performs some specific actions, for example creating a VM or deploying an
  application configuration. We want each entity's workflow code to be
  reusable in order to simplify development of new applications, as the
  current approach with XML-based rules requires significant effort.

  After internal discussions inside the Murano team we came up with a solution
  which uses well-known programming concepts - classes, their inheritance,
  and composition.

 In this thread I would like to share our ideas and discuss the
 implementation details.

 We want to represent each and every entity being manipulated by Murano,
 as an instance of some class. These classes will define structure of the
 entities and their behavior. Different entities may be combined together,
 interacting with each other, forming a composite environment. The
 inheritance may be used to extract common structure and functionality into
 generic superclasses, while having their subclasses to define only their
 specific attributes and actions.

 This approach is better explained with an example. Let's consider the
 Active Directory windows service. This is one of the currently present
 Murano Applications, and its structure and deployment workflow are pretty
 complex. Let's see how they may be simplified by using the proposed
 object-oriented approach.

 First, let's just describe an Active Directory service in plain English.

 An Active Directory service consists of several Controllers: exactly one
 Primary Domain Controller and, optionally, several Secondary Domain
 Controllers. Controllers (both Primary and Secondary) are special Windows
 Instances with the Active Directory server role activated. Their specific
 difference is in the configuration scripts which are executed on them after
 the roles are activated. Also, Secondary Domain Controllers are able to
 join a domain, while the Primary Domain Controller cannot.
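
 As a hedged illustration, the structure just described might look roughly
 like this in plain Python (the actual Murano DSL is not Python; every class
 and method name here is hypothetical):

```python
# Hypothetical Python rendering of the class hierarchy described above.
# The real Murano DSL is not Python; all names here are illustrative only.

class Instance:
    """A generic virtual machine."""
    def __init__(self, image, flavor):
        self.image = image
        self.flavor = flavor


class WindowsInstance(Instance):
    """An Instance limited to Windows images."""
    def __init__(self, image, flavor):
        if "windows" not in image:
            raise ValueError("WindowsInstance requires a Windows image")
        super().__init__(image, flavor)


class DomainController(WindowsInstance):
    """A WindowsInstance with the Active Directory server role activated."""
    def activate_ad_role(self):
        return "AD role activated on %s" % self.image


class PrimaryDomainController(DomainController):
    # Subclasses differ only in their configuration scripts...
    def configure(self):
        return "run primary DC configuration script"


class SecondaryDomainController(DomainController):
    def configure(self):
        return "run secondary DC configuration script"

    # ...and in the ability to join an existing domain, which only
    # Secondary Domain Controllers have.
    def join_domain(self, domain):
        return "joined %s" % domain


class ActiveDirectory:
    """Composition: exactly one primary plus optional secondaries."""
    def __init__(self, primary, secondaries=()):
        self.primary = primary
        self.secondaries = list(secondaries)
```

 Common structure lives in the superclasses (Instance, DomainController),
 while subclasses add only their specific behavior, mirroring the
 plain-English description above.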

 Windows Instances are regular machines having some limitations on
 their images (it should, obviously, be a Windows image) and hardware flavor
 (windows is usually 

Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Ben Nemec
 

On 2014-02-24 09:02, Randy Tuttle wrote: 

 Has anyone experienced this issue when running tox. I'm trying to figure if 
 this is some limit of tox environment or something else. I've seen this 
 referenced in other projects, but can't seem to zero in on a proper fix.
 
 tox -e py27
 
 [...8...snip a lot]
 
 neutron.tests.unit.test_routerserviceinsertion\nneutron.tests.unit.test_security_groups_rpc\nneutron.tests.unit.test_servicetype=\xc1\xf1\x19',
  stderr=None
 error: testr failed (3)
 ERROR: InvocationError: '/Users/rtuttle/projects/neutron/.tox/py27/bin/python 
 -m neutron.openstack.common.lockutils python setup.py testr --slowest 
 --testr-args='
 __ summary 
 __
 ERROR: py27: commands failed
 
 It seems that what it may be complaining about is a missing oslo.config. If I 
 try to load the final module noted from above (i.e., 
 neutron.tests.unit.test_servicetype), I get an error about the missing module.
 
 Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
 [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import neutron.tests.unit.test_servicetype
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "neutron/tests/unit/__init__.py", line 20, in <module>
     from oslo.config import cfg
 ImportError: No module named oslo.config
 
 Cheers, Randy

We hit a similar problem in some of the other projects recently, but it
doesn't look like that applies to Neutron because it isn't using
site-packages in its tox runs anyway. The first thing I would check is
whether oslo.config is installed in the py27 tox venv. It might be a
good idea to just wipe your .tox directory and start fresh if you
haven't done that recently. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Randy Tuttle
Thanks guys.

Yes, Ben, I can see oslo.config installed in tox sub-directory. I will try
to wipe tox out and try again. You are right though, the tox.ini only has
site-packages for Jenkins noted.

Sean, I think your first email response might be right. I am running on a
Mac instead of Ubuntu box. I think, based on my research on this, that the
last module (or even a series of them) may not have loaded, and this is
proven when I try with import. Here's the thread I've been reading.

https://bugs.launchpad.net/nova/+bug/1271097

Cheers


On Mon, Feb 24, 2014 at 1:05 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-24 09:02, Randy Tuttle wrote:

Has anyone experienced this issue when running tox. I'm trying to
 figure if this is some limit of tox environment or something else. I've
 seen this referenced in other projects, but can't seem to zero in on a
 proper fix.

 tox -e py27

 [...8...snip a lot]

 neutron.tests.unit.test_routerserviceinsertion\nneutron.tests.unit.test_security_groups_rpc\nneutron.tests.unit.test_servicetype=\xc1\xf1\x19',
 stderr=None
 error: testr failed (3)
 ERROR: InvocationError:
 '/Users/rtuttle/projects/neutron/.tox/py27/bin/python -m
 neutron.openstack.common.lockutils python setup.py testr --slowest
 --testr-args='
 __ summary
 __
 ERROR:   py27: commands failed

 It seems that what it may be complaining about is a missing oslo.config.
 If I try to load the final module noted from above (i.e.,
 neutron.tests.unit.test_servicetype), I get an error about the missing
 module.

 Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
 [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import neutron.tests.unit.test_servicetype
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "neutron/tests/unit/__init__.py", line 20, in <module>
     from oslo.config import cfg
 ImportError: No module named oslo.config

 Cheers,
 Randy

 We hit a similar problem in some of the other projects recently, but it
 doesn't look like that applies to Neutron because it isn't using
 site-packages in its tox runs anyway.  The first thing I would check is
 whether oslo.config is installed in the py27 tox venv.  It might be a good
 idea to just wipe your .tox directory and start fresh if you haven't done
 that recently.

 -Ben


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Lance D Bragstad

Response below.


Best Regards,

Lance Bragstad
ldbra...@us.ibm.com

Nader Lahouti nader.laho...@gmail.com wrote on 02/24/2014 11:31:10 AM:

 From: Nader Lahouti nader.laho...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 02/24/2014 11:37 AM
 Subject: Re: [openstack-dev] [keystone] Notification When Creating/
 Deleting a Tenant in openstack

 Hi Swann,

 I was able to listen to keystone notification by setting
 notifications in the keystone.conf file. I only needed the
 notification (CURD) for project and handle it in my plugin code so
 don't need ceilometer to handle them.
 The other issue is that the notification is only for limited to
 resource_id  and don't have other information such as project name.

The idea behind this, when we originally implemented notifications in
Keystone, was to provide the resource being changed, such as 'user',
'project', 'trust', and the uuid of that resource. From there your plugin
could request more information from Keystone by doing a GET on that
resource. This way we could keep the payload of the notification minimal
in case not all the information on the resource was required.
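
As a rough sketch of that flow (the payload key follows the Keystone
notification docs of that era, but verify against your release; the helper
names are assumptions and the Keystone lookup is stubbed out):

```python
# Illustrative sketch only, not a real Keystone plugin API: the consumer
# receives a minimal notification carrying just the resource uuid, then
# does a follow-up GET to fetch the full resource -- e.g. the project name.

# Stand-in for a GET against the Keystone API; a real plugin would issue an
# authenticated HTTP request such as GET /v3/projects/{id}.
FAKE_KEYSTONE = {
    "projects": {"abc123": {"id": "abc123", "name": "demo", "enabled": True}},
}

def keystone_get(collection, resource_id):
    return FAKE_KEYSTONE[collection][resource_id]

def handle_notification(event_type, payload):
    # The minimal payload holds only the uuid of the changed resource.
    resource_id = payload["resource_info"]
    if event_type == "identity.project.created":
        project = keystone_get("projects", resource_id)
        return "created project %s (%s)" % (project["name"], project["id"])
    return None

msg = handle_notification("identity.project.created", {"resource_info": "abc123"})
print(msg)  # created project demo (abc123)
```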


 Thanks,
 Nader.



 On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset swan...@gmail.com wrote:

 Hi Nader,

 These notifications must be handled by Ceilometer like the others [1].
 It is surprising that it does not already have identity meters, indeed...
 probably nobody needed them before you.
 I guess it remains to open a BP and code them, like I recently did for
Heat [2]


 [1] http://docs.openstack.org/developer/ceilometer/measurements.html
 [2] https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications


 2014-02-20 19:10 GMT+01:00 Nader Lahouti nader.laho...@gmail.com:

 Thanks Dolph for link. The document shows the format of the message
 and doesn't give any info on how to listen to the notification.
 Is there any other document showing the detail on how to listen or
 get these notifications ?

 Regards,
 Nader.

 On Feb 20, 2014, at 9:06 AM, Dolph Mathews dolph.math...@gmail.com
wrote:

 Yes, see:

   http://docs.openstack.org/developer/keystone/event_notifications.html

 On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti nader.laho...@gmail.com
  wrote:
 Hi All,

 I have a question regarding creating/deleting a tenant in openstack
 (using horizon or CLI). Is there any notification mechanism in place
 so that an application get informed of such an event?

 If not, can it be done using plugin to send create/delete
 notification to an application?

 Appreciate your suggestion and help.

 Regards,
 Nader.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA VRRP concerns

2014-02-24 Thread Salvatore Orlando
Hi Assaf,

some comments inline.
As a general comment, I'd prefer to move all the discussions to gerrit
since the patches are now in review.
This unless you have design concerns (the ones below look more related to
the implementation to me)

Salvatore


On 24 February 2014 15:58, Assaf Muller amul...@redhat.com wrote:

 Hi everyone,

 A few concerns have popped up recently about [1] which I'd like to share
 and discuss,
 and would love to hear your thoughts Sylvain.

 1) Is there a way through the API to know, for a given router, what agent
 is hosting
 the active instance? This might be very important for admins to know.


I reckon the current agent management extension already provides this
information, but I'll double check this.
This is an admin-only extension.



 2) The current approach is to create an administrative network and subnet
 for VRRP traffic per router group /
 per router. Is this network counted in the quota for the tenant? (Clearly
 it shouldn't). Same
 question for the HA ports created for each router instance.


That is a good point. I have not reviewed the implementation so I cannot
provide a final answer.
I think it should be possible to assign these resources to admins rather than
tenants; if not, I would consider this an important enhancement, but I would
not hold up the patches currently in review because of this.


 3) The administrative network is created per router and takes away from
 the VLAN ranges if using
 VLAN tenant networks (For a tunneling based deployment this is a
 non-issue). Maybe we could
 consider a change that creates an administrative network per tenant (Which
 would then limit
 the solution to up to 255 routers because of VRRP'd group limit), or an
 admin network per 255
 routers?


I am not able to comment on this question. I'm sure the author(s) will be
able to.



 4) Maybe the VRRP hello and dead times should be configurable? I can see
 admins that would love to
 up or down these numbers.


I reckon this a reasonable thing to have. This could be either pointed out
in the reviews or pushed as an additional change on top of the other ones
in review.


 5) The administrative / VRRP networks, subnets and ports that are created
 - Will they be marked in any way
 as an 'internal' network or some equivalent tag? Otherwise they'd show up
 when running neutron net-list,
 in the Horizon networks listing as well as the graphical topology drawing
 (Which, personally, is what
 bothers me most about this). I'd love them tagged and hidden from the
 normal net-list output,
 and something like a 'neutron net-list --all' introduced.


I agree this should be avoided; this is also connected to the point you
raised at #2.



 6) The IP subnet chosen for VRRP traffic is specified in neutron.conf. If
 a tenant creates a subnet
 with the same range, and attaches a HA router to that subnet, the
 operation will fail as the router
 cannot have different interfaces belonging to the same subnet. Nir
 suggested to look into using
 the 169.254.0.0/16 range as the default because we know it will
 (hopefully) not be allocated by tenants.


We adopted a similar approach in the NSX plugin for a service network which
the plugin uses for metadata access.
In that case we used the link-local network, but perhaps an easier solution
would be to make the cidr specified in neutron.conf reserved thus
preventing tenants from specifying subnets overlapping with this range in
the first place.
I reckon the link-local range is a good candidate for the default value.
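
A minimal sketch of the reservation idea, using the stdlib ipaddress module
(the configured CIDR and the function name here are just examples, not the
actual Neutron implementation):

```python
# Sketch of the reserved-range check suggested above: reject tenant subnets
# that overlap the CIDR configured in neutron.conf for HA/VRRP traffic.
import ipaddress

# Hypothetical config value: a slice of the link-local range.
L3_HA_NET_CIDR = ipaddress.ip_network("169.254.192.0/18")

def validate_tenant_subnet(cidr):
    subnet = ipaddress.ip_network(cidr)
    if subnet.overlaps(L3_HA_NET_CIDR):
        raise ValueError("%s overlaps reserved HA range %s"
                         % (subnet, L3_HA_NET_CIDR))
    return subnet

validate_tenant_subnet("10.0.0.0/24")           # accepted
try:
    validate_tenant_subnet("169.254.200.0/24")  # rejected: inside reserved range
except ValueError as e:
    print(e)
```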



 [1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability


 Assaf Muller, Cloud Networking Engineer
 Red Hat

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Missing tests

2014-02-24 Thread Martins, Tiago
HI!
I'm sorry it took me this long to answer you.
During the fixtures of the UT , there must be somewhere where you can add the 
extensions to load them, so their tests won't break.Could you send me a link to 
your patch in gerrit?

From: Vinod Kumar Boppanna [mailto:vinod.kumar.boppa...@cern.ch]
Sent: segunda-feira, 24 de fevereiro de 2014 09:47
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Missing tests

Hi,

I had uploaded to Gerrit the code for Domain Quota Management. One of the test 
is failing due to the missing tests for the following extensions.

Extensions are missing tests: ['os-extended-hypervisors', 
'os-extended-services-delete']

What can i do now? (these extensions are not done by me)

Regards,
Vinod Kumar Boppanna

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack and GSoC 2014

2014-02-24 Thread Davanum Srinivas
Hi all,

We're in! Just got notified by Admin Team that our Organization
Application has been accepted. I've updated the etherpad with the full
responses from them.

https://etherpad.openstack.org/p/gsoc2014orgapp

thanks,
dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday February 25th at 19:00 UTC

2014-02-24 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday February 25th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-24 Thread W Chan
Renat,

Regarding your comments on change https://review.openstack.org/#/c/75609/,
I don't think the port to oslo.messaging is just a swap from pika to
oslo.messaging.  OpenStack services as I understand is usually implemented
as an RPC client/server over a messaging transport.  Sync vs async calls
are done via the RPC client call and cast respectively.  The messaging
transport is abstracted and concrete implementation is done via
drivers/plugins.  So the architecture of the executor if ported to
oslo.messaging needs to include a client, a server, and a transport.  The
consumer (in this case the mistral engine) instantiates an instance of the
client for the executor, makes the method call to handle task, the client
then sends the request over the transport to the server.  The server picks
up the request from the exchange and processes the request.  If cast
(async), the client side returns immediately.  If call (sync), the client
side waits for a response from the server over a reply_q (a unique queue
for the session in the transport).  Also, oslo.messaging allows versioning
in the message. Major version change indicates API contract changes.  Minor
version indicates backend changes but with API compatibility.
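
The call vs. cast distinction described above can be illustrated with a
stdlib-only sketch (oslo.messaging hides the transport behind drivers; this is
not the oslo.messaging API, it just mimics the reply-queue mechanism):

```python
# Stdlib-only illustration of the sync 'call' vs async 'cast' semantics:
# 'call' blocks on a per-request reply queue; 'cast' is fire-and-forget.
import queue
import threading

request_q = queue.Queue()

def server_loop():
    while True:
        method, args, reply_q = request_q.get()
        if method == "stop":
            break
        result = "handled %s(%r)" % (method, args)
        if reply_q is not None:        # 'call': a client is waiting for a reply
            reply_q.put(result)

class ExecutorClient:
    def call(self, method, **args):
        # Synchronous: block until the server replies on a private queue.
        reply_q = queue.Queue()
        request_q.put((method, args, reply_q))
        return reply_q.get(timeout=5)

    def cast(self, method, **args):
        # Asynchronous: enqueue the request and return immediately.
        request_q.put((method, args, None))

server = threading.Thread(target=server_loop, daemon=True)
server.start()
client = ExecutorClient()
print(client.call("handle_task", task_id=42))
client.cast("handle_task_error", task_id=42)
request_q.put(("stop", {}, None))
server.join()
```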

So, where I'm headed with this change...  I'm implementing the basic
structure/scaffolding for the new executor service using oslo.messaging
(default transport with rabbit).  Since the whole change will take a few
rounds, I don't want to disrupt any changes that the team is making at the
moment and so I'm building the structure separately.  I'm also adding
versioning (v1) in the module structure to anticipate any versioning
changes in the future.   I expect the change request will lead to some
discussion as we are doing here.  I will migrate the core operations of the
executor (handle_task, handle_task_error, do_task_action) to the server
component when we agree on the architecture and switch the consumer
(engine) to use the new RPC client for the executor instead of sending the
message to the queue over pika.  Also, the launcher for
./mistral/cmd/task_executor.py will change as well in subsequent round.  An
example launcher is here
https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
 The interceptor project here is what I use to research how oslo.messaging
works.  I hope this is clear. The blueprint only changes how the request
and response are being transported.  It shouldn't change how the executor
currently works.

Finally, can you clarify the difference between the local vs. scalable engine?
 I personally would prefer not to explicitly name the engine scalable, because
this requirement should be met by the engine by default and we do not need to
explicitly state/separate it. But if this is a roadblock for the change,
I can put the scalable structure back in the change to move this forward.

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-24 Thread David Peraza
Thanks John,

I also think it is a good idea to test the algorithm at the unit-test level, but I
would like to try it out over AMQP as well, that is, with processes and threads
talking to each other over Rabbit or Qpid. I'm trying to test performance as well.

Regards,
David Peraza

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Monday, February 24, 2014 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for 
scheduler testing

On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com wrote:
 Hello all,

 I have been trying some new ideas on the scheduler and I think I'm
 reaching a resource limit. I'm running 6 compute services right on my 4
 CPU, 4 GB VM, and I have started to get some memory allocation issues.
 Keystone and Nova are already complaining there is not enough memory.
 The obvious solution to add more candidates is to get another VM and set up
 another 6 fake compute services.
 I could do that, but I think I need to be able to scale more without
 needing this many resources. I would like to simulate a cloud of 100,
 maybe 1000, compute nodes that do nothing (fake driver); this should not
 take this much memory. Does anyone know of a more efficient way to simulate
 many computes? I was thinking of changing the fake driver to report many
 compute services in different threads instead of having to spawn a
 process per compute service. Any other ideas?
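
The threads-instead-of-processes idea quoted above can be sketched like this
(stdlib-only; a real version would send each service's periodic state report
to the scheduler over RPC, and the resource figures are made up):

```python
# One process simulating many fake compute services: each "service" reports
# its state from a lightweight thread instead of a separate heavyweight
# nova-compute process.
import threading

state_lock = threading.Lock()
reported = {}

def report_state(host):
    # Stand-in for the periodic service heartbeat each nova-compute sends.
    with state_lock:
        reported[host] = {"vcpus": 4, "memory_mb": 8192, "alive": True}

threads = [threading.Thread(target=report_state,
                            args=("fake-compute-%03d" % i,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(reported))  # 100 simulated compute services from one process
```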

It depends what you want to test, but I was able to look at tuning the filters 
and weights using the test at the end of this file:
https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py

Cheers,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Satori Project Update (Configuration Discovery)

2014-02-24 Thread Ziad Sawalha
We had our first team meeting[1] today and will be holding weekly team meetings 
on Mondays at 15:00 UTC on #openstack-meeting-alt.

An early prototype of Satori is available on pypi [2].

We’re working towards adding the following features before making an 
announcement to the user list on availability of satori:

- usability improvements such as update docs and additional CLI error trapping
- include an in-host discovery component (that logs on to servers and discover 
running and/or installed software).

We’re available on #satori and eager to get feedback on the work we are doing.

Ziad

[1] https://wiki.openstack.org/wiki/Satori/MeetingLogs
[2] https://pypi.python.org/pypi/satori


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-24 Thread Ben Nemec

On 2014-02-21 17:09, Sean Dague wrote:

On 02/21/2014 05:28 PM, Clark Boylan wrote:
On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec openst...@nemebean.com 
wrote:

On 2014-02-21 13:01, Mike Spreitzer wrote:

https://bugs.launchpad.net/devstack/+bug/1203680 is literally about 
Glance
but Nova has the same problem.  There is a fix released, but just 
merging
that fix accomplishes nothing --- we need people who run DevStack to 
set the
new variable (INSTALL_TESTONLY_PACKAGES).  This is something that 
needs to
be documented (in http://devstack.org/configuration.html and all the 
places
that tell people how to do unit testing, for examples), so that 
people know

to do it, right?



IMHO, that should be enabled by default.  Every developer using 
devstack is
going to want to run unit tests at some point (or should anyway...), 
and if
the gate doesn't want the extra install time for something like 
tempest that
probably doesn't need these packages, then it's much simpler to 
disable it
in that one config instead of every separate config used by every 
developer.


-Ben



I would be wary of relying on devstack to configure your unittest
environments. Just like it takes over the node you run it on, devstack
takes full ownership of the repos it clones and will do potentially
lossy things like `git reset --hard` when you don't expect it to. +1
to documenting the requirements for unittesting, not sure I would
include devstack in that documentation.


Agreed, I never run unit tests in the devstack tree. I run them on my
laptop or other non dedicated computers. That's why we do unit tests in
virtual envs, they don't need a full environment.

Also many of the unit tests can't be run when openstack services are
actually running, because they try to bind to ports that openstack
services use.

It's one of the reasons I've never considered that path a priority in
devstack.

-Sean



What is the point of devstack if we can't use it for development?  Are 
we really telling people that they shouldn't be altering the code in 
/opt/stack because it's owned by devstack, and devstack reserves the 
right to blow it away any time it feels the urge?  And if that's not 
what we're saying, aren't they going to want to run unit tests before 
they push their changes from /opt/stack?  I don't think it's reasonable 
to tell them that they have to copy their code to another system to run 
unit tests on it.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack and GSoC 2014

2014-02-24 Thread Victoria Martínez de la Cruz
So happy to hear that! Congrats all!


2014-02-24 16:16 GMT-03:00 Davanum Srinivas dava...@gmail.com:

 Hi all,

 We're in! Just got notified by Admin Team that our Organization
 Application has been accepted. I've updated the etherpad with the full
 responses from them.

 https://etherpad.openstack.org/p/gsoc2014orgapp

 thanks,
 dims

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Jay Pipes
Thanks, Eugene! I've given the API a bit of thought today and jotted
down some thoughts below.

On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
 Could you provide some examples -- even in the pseudo-CLI
 commands like
 I did below. It's really difficult to understand where the
 limits are
 without specific examples.
 You know, I always look at the API proposal from implementation
 standpoint also, so here's what I see.
 In the cli workflow that you described above, everything is fine,
 because the driver knows how and where to deploy each object
 that you provide in your command, because it's basically a batch.

Yes, that is true.

 When we're talking about separate objects that form a loadbalancer -
 vips, pools, members, it becomes unclear how to map them to backends,
 and at which point.

Understood, but I think we can make some headway here. Examples below.

 So here's an example I usually give:
 We have 2 VIPs (in fact, one address and 2 ports listening for http
 and https, now we call them listeners), 
 both listeners pass request to a webapp server farm, and http listener
 also passes requests to static image servers by processing incoming
 request URIs by L7 rules.
 So object topology is:
 
 
  Listener1 (addr:80)   Listener2(addr:443)
| \/
| \/
|  X
|  / \
  pool1(webapp) pool2(static imgs)
 sorry for that stone age pic :)
 
 
 The proposal that we discuss can create such object topology by the
 following sequence of commands:
 1) create-vip --name VipName address=addr
 returns vid_id
 2) create-listener --name listener1 --port 80 --protocol http --vip_id
 vip_id
 returns listener_id1
 3) create-listener --name listener2 --port 443 --protocol https
 --sl-params params --vip_id vip_id
 
 returns listener_id2

 4) create-pool --name pool1 members
 
 returns pool_id1
 5) create-pool --name pool2 members
 returns pool_id2
 
 6) set-listener-pool listener_id1 pool_id1 --default
 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
 
 8) set-listener-pool listener_id2 pool_id1 --default

 That's a generic workflow that allows you to create such config. The
 question is at which point the backend is chosen.

From a user's perspective, they don't care about VIPs, listeners or
pools :) All the user cares about is:

 * being able to add or remove backend nodes that should be balanced
across
 * being able to set some policies about how traffic should be directed

I do realize that AWS ELB's API uses the term listener in its API, but
I'm not convinced this is the best term. And I'm not convinced that
there is a need for a pool resource at all.

Could the above steps #1 through #6 be instead represented in the
following way?

# Assume we've created a load balancer with ID $BALANCER_ID using
# Something like I showed in my original response:

neutron balancer-create --type=advanced --front=ip \
 --back=list_of_ips --algorithm=least-connections \
 --topology=active-standby

neutron balancer-configure $BALANCER_ID --front-protocol=http \
 --front-port=80 --back-protocol=http --back-port=80

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=https --back-port=443

Likewise, we could configure the load balancer to send front-end HTTPS
traffic (terminated at the load balancer) to back-end HTTP services:

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=http --back-port=80

No mention of listeners, VIPs, or pools at all.

The REST API for the balancer-update CLI command above might be
something like this:

PUT /balancers/{balancer_id}

with JSON body of request like so:

{
  front-port: 443,
  front-protocol: https,
  back-port: 80,
  back-protocol: http
}

And the code handling the above request would simply look to see if the
load balancer had a routing entry for the front-end port and protocol
of (443, https) and set the entry to route to back-end port and protocol
of (80, http).
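
One possible sketch of that handler logic (purely illustrative, not a real
Neutron API; the data layout is an assumption): the balancer keeps a routing
table keyed by front-end (port, protocol), and the PUT simply upserts the
back-end entry for that key.

```python
# Minimal sketch of the PUT /balancers/{id} handler described above.
# All names and structures here are hypothetical.
balancers = {"b-1": {"routes": {(443, "https"): (443, "https")}}}

def update_balancer(balancer_id, body):
    front = (body["front-port"], body["front-protocol"])
    back = (body["back-port"], body["back-protocol"])
    # Upsert: create or overwrite the route for this front-end entry.
    balancers[balancer_id]["routes"][front] = back
    return balancers[balancer_id]

update_balancer("b-1", {"front-port": 443, "front-protocol": "https",
                        "back-port": 80, "back-protocol": "http"})
print(balancers["b-1"]["routes"][(443, "https")])  # (80, 'http')
```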

For the advanced L7 policy heuristics, it makes sense to me to use a
similar strategy. For example (using a similar example from ELB):

neutron l7-policy-create --type=ssl-negotiation \
 --attr=ProtocolSSLv3=true \
 --attr=ProtocolTLSv1.1=true \
 --attr=DHE-RSA-AES256-SHA256=true \
 --attr=Server-Defined-Cipher-Order=true

Presume above returns an ID for the policy $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer by
doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=443

There's no need to specify --front-port of course, since the policy only
applies to the front-end.

There is also no need to refer to a listener object, no need to call
anything a VIP, nor any reason to use the term pool in the API.

Best,
-jay

 In our current 

Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-24 Thread Sean Dague
On 02/24/2014 03:10 PM, Ben Nemec wrote:
 On 2014-02-21 17:09, Sean Dague wrote:
 On 02/21/2014 05:28 PM, Clark Boylan wrote:
 On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec openst...@nemebean.com
 wrote:
 On 2014-02-21 13:01, Mike Spreitzer wrote:

 https://bugs.launchpad.net/devstack/+bug/1203680 is literally about
 Glance
 but Nova has the same problem.  There is a fix released, but just
 merging
 that fix accomplishes nothing --- we need people who run DevStack to
 set the
 new variable (INSTALL_TESTONLY_PACKAGES).  This is something that
 needs to
 be documented (in http://devstack.org/configuration.html and all the
 places
 that tell people how to do unit testing, for examples), so that
 people know
 to do it, right?



 IMHO, that should be enabled by default.  Every developer using
 devstack is
 going to want to run unit tests at some point (or should anyway...),
 and if
 the gate doesn't want the extra install time for something like
 tempest that
 probably doesn't need these packages, then it's much simpler to
 disable it
 in that one config instead of every separate config used by every
 developer.

 -Ben


 I would be wary of relying on devstack to configure your unittest
 environments. Just like it takes over the node you run it on, devstack
 takes full ownership of the repos it clones and will do potentially
 lossy things like `git reset --hard` when you don't expect it to. +1
 to documenting the requirements for unittesting, not sure I would
 include devstack in that documentation.

 Agreed, I never run unit tests in the devstack tree. I run them on my
 laptop or other non dedicated computers. That's why we do unit tests in
 virtual envs, they don't need a full environment.

 Also many of the unit tests can't be run when openstack services are
 actually running, because they try to bind to ports that openstack
 services use.

 It's one of the reasons I've never considered that path a priority in
 devstack.

 -Sean

 
 What is the point of devstack if we can't use it for development?  

It builds you a consistent cloud.

 Are
 we really telling people that they shouldn't be altering the code in
 /opt/stack because it's owned by devstack, and devstack reserves the
 right to blow it away any time it feels the urge? 

Actually, I tell people that all the time. Most of them don't listen to
me. :)

Devstack defaults to RECLONE=False, but that tends to break people in
other ways (like having month-old trees they are building against). But
the reality is I've watched tons of people have their work reset on them
because they were developing in /opt/stack, so I tell people not to do
that (and if they do it anyway, at least they realize it's dangerous).

 And if that's not
 what we're saying, aren't they going to want to run unit tests before
 they push their changes from /opt/stack?  I don't think it's reasonable
 to tell them that they have to copy their code to another system to run
 unit tests on it.

Devstack can clone from alternate sources, and that's my approach on
anything long-running. For instance, keeping trees in ~/code/ and adjusting
localrc to use the trees/branches that I'm working on (with the added
benefit of being able to easily reclone the rest of the tree).
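For concreteness, that localrc workflow might look roughly like this (a sketch; the paths and branch names are hypothetical, while the variable names are devstack's own, including the INSTALL_TESTONLY_PACKAGES knob discussed earlier in this thread):

```shell
# Hypothetical localrc sketch; paths and branch names are examples only.
NOVA_REPO=${HOME}/code/nova        # use a local tree instead of upstream git
NOVA_BRANCH=my-feature-branch      # the branch being developed
RECLONE=False                      # don't re-fetch/reset trees on each run
INSTALL_TESTONLY_PACKAGES=True     # install the deps needed to run unit tests
```

With this, devstack leaves the checkout under your control and a plain `git reset --hard` surprise in /opt/stack is avoided.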

Lots of people use devstack + vagrant, and do basically the same thing
with their laptop repos being mounted up into the guest.

And some people do it the way you are suggesting above.

The point is, for better or worse, what we have is a set of tools from
which you can assemble a workflow that suits your needs. We don't have a
prescribed "this is the one way to develop" approach. There is some
assumption that you'll pull together something from the tools provided.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Feedback on SSL implementation

2014-02-24 Thread Eugene Nikanorov
Hi,

Barbican is the storage option we're considering, however it seems that
there's not much progress with incubation of it.

Another weak point of our current state is the lack of secure communication
between the neutron server and the agent, but that is solvable.

Thanks,
Eugene.
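For context on the header insertion discussed below: once SSL terminates at the balancer and the back-end sees plain HTTP, the original client address and scheme survive only if the proxy injects them. A minimal sketch, with a hypothetical helper name (not Neutron code):

```python
def backend_headers(client_headers, client_ip, tls_terminated):
    """Build the header set a terminating proxy would forward to a back-end.

    The back-end receives a plain-HTTP request, so the only record of the
    original client and scheme is what gets injected here.
    """
    headers = dict(client_headers)
    # Append to an existing X-Forwarded-For chain rather than overwrite it.
    prior = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    headers["X-Forwarded-Proto"] = "https" if tls_terminated else "http"
    return headers

h = backend_headers({"Host": "example.com"}, "203.0.113.7", tls_terminated=True)
```

This is the "considerable extra CPU" trade-off in miniature: the back-end skips TLS entirely and trusts these headers instead, which is only safe on a trusted back-end network.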


On Fri, Feb 21, 2014 at 11:42 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-02-19 at 22:01 -0800, Stephen Balukoff wrote:

  Front-end versus back-end protocols:
  It's actually really common for a HTTPS-enabled front-end to speak
  HTTP to the back-end.  The assumption here is that the back-end
  network is trusted and therefore we don't need to bother with the
  (considerable) extra CPU overhead of encrypting the back-end traffic.
  To be honest, if you're going to speak HTTPS on the front-end and the
  back-end, then the only possible reason for even terminating SSL on
  the load balancer is to insert the X-Fowarded-For header. In this
  scenario, you lose almost all the benefit of doing SSL offloading at
  all!

 This is exactly correct.

  If we make a policy decision right here not to allow front-end and
  back-end protocol to mismatch, this will break a lot of topologies.

 Yep.

 Best,
 -jay





Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-24 Thread Sergey Lukjanov
Unanimously.

Congratulations, Andrew, welcome to the core team!


On Fri, Feb 21, 2014 at 4:46 PM, Matthew Farrellee m...@redhat.com wrote:

 On 02/19/2014 05:40 PM, Sergey Lukjanov wrote:

 Hey folks,

 I'd like to nominate Andrew Lazarew (alazarev) for savanna-core.

 He is among the top reviewers of Savanna subprojects. Andrew has been working
 on Savanna full time since September 2013 and is very familiar with the
 current codebase. His code contributions and reviews have demonstrated a
 good knowledge of Savanna internals. Andrew has valuable knowledge of
 both the core and EDP parts, the IDH plugin, and Hadoop itself. He's working on
 both bug fixes and new feature implementation.

 Some links:

 http://stackalytics.com/report/reviews/savanna-group/30
 http://stackalytics.com/report/reviews/savanna-group/90
 http://stackalytics.com/report/reviews/savanna-group/180
 https://review.openstack.org/#/q/owner:alazarev+savanna+AND+
 -status:abandoned,n,z
 https://launchpad.net/~alazarev

 Savanna cores, please, reply with +1/0/-1 votes.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.


 fyi, some of those links don't work, but these do,

 http://stackalytics.com/report/contribution/savanna-group/30
 http://stackalytics.com/report/contribution/savanna-group/90
 http://stackalytics.com/report/contribution/savanna-group/180

 i'm very happy to see andrew evolving in the savanna community, making
 meaningful contributions, demonstrating a reasoned approach to resolve
 disagreements, and following guidelines such as GitCommitMessages more
 closely. i expect he will continue his growth as well as influence others
 to contribute positively.

 +1

 best,


 matt




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Hi Jay,

Thanks for suggestions. I get the idea.
I'm not sure the essence of this API is much different from what we have
now.
1) We operate on parameters of the loadbalancer rather than on
vips/pools/listeners. No matter how we name them, the notions are there.
2) I see two opposite preferences: one is that the user doesn't care about
the 'loadbalancer' in favor of pools/vips/listeners (a 'pure logical API');
the other is vice versa (yours).
3) The approach of providing $BALANCER_ID to pretty much every call solves
all my concerns; I like it.
Basically that was my initial code proposal (it's not exactly the same, but
it's very close).
The idea of my proposal was to have that 'balancer' resource plus being
able to operate on vips/pools/etc.
In this direction we could evolve from existing API to the API in your
latest suggestion.

Thanks,
Eugene.


On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes jaypi...@gmail.com wrote:

 Thanks, Eugene! I've given the API a bit of thought today and jotted
 down some thoughts below.

 On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
  Could you provide some examples -- even in the pseudo-CLI
  commands like
  I did below. It's really difficult to understand where the
  limits are
  without specific examples.
  You know, I always look at the API proposal from implementation
  standpoint also, so here's what I see.
  In the cli workflow that you described above, everything is fine,
  because the driver knows how and where to deploy each object
  that you provide in your command, because it's basically a batch.

 Yes, that is true.

  When we're talking about the separate objects that form a loadbalancer -
  vips, pools, members - it becomes unclear how to map them to backends,
  and at which point.

 Understood, but I think we can make some headway here. Examples below.

  So here's an example I usually give:
  We have 2 VIPs (in fact, one address and 2 ports listening for http
  and https, now we call them listeners),
  both listeners pass request to a webapp server farm, and http listener
  also passes requests to static image servers by processing incoming
  request URIs by L7 rules.
  So object topology is:
 
 
    Listener1 (addr:80)    Listener2 (addr:443)
        |       \               /
        |        \             /
        |         \           /
        |            \      /
        |               X
        |             /    \
        |           /        \
    pool1 (webapp)      pool2 (static imgs)
  sorry for that stone age pic :)
 
 
  The proposal that we discuss can create such object topology by the
  following sequence of commands:
  1) create-vip --name VipName address=addr
  returns vip_id
  2) create-listener --name listener1 --port 80 --protocol http --vip_id
  vip_id
  returns listener_id1
  3) create-listener --name listener2 --port 443 --protocol https
  --sl-params params --vip_id vip_id
 
  returns listener_id2

  4) create-pool --name pool1 members
 
  returns pool_id1
  5) create-pool --name pool2 members
  returns pool_id2
 
  6) set-listener-pool listener_id1 pool_id1 --default
  7) set-listener-pool listener_id1 pool_id2 --l7policy policy
 
  8) set-listener-pool listener_id2 pool_id1 --default

  That's a generic workflow that allows you to create such config. The
  question is at which point the backend is chosen.

 From a user's perspective, they don't care about VIPs, listeners or
 pools :) All the user cares about is:

  * being able to add or remove backend nodes that should be balanced
 across
  * being able to set some policies about how traffic should be directed

 I do realize that AWS ELB's API uses the term listener in its API, but
 I'm not convinced this is the best term. And I'm not convinced that
 there is a need for a pool resource at all.

 Could the above steps #1 through #6 be instead represented in the
 following way?

 # Assume we've created a load balancer with ID $BALANCER_ID using
 # Something like I showed in my original response:

 neutron balancer-create --type=advanced --front=ip \
  --back=list_of_ips --algorithm=least-connections \
  --topology=active-standby

 neutron balancer-configure $BALANCER_ID --front-protocol=http \
  --front-port=80 --back-protocol=http --back-port=80

 neutron balancer-configure $BALANCER_ID --front-protocol=https \
  --front-port=443 --back-protocol=https --back-port=443

 Likewise, we could configure the load balancer to send front-end HTTPS
 traffic (terminated at the load balancer) to back-end HTTP services:

 neutron balancer-configure $BALANCER_ID --front-protocol=https \
  --front-port=443 --back-protocol=http --back-port=80

 No mention of listeners, VIPs, or pools at all.

 The REST API for the balancer-update CLI command above might be
 something like this:

 PUT /balancers/{balancer_id}

 with JSON body of request like so:

  {
    "front-port": 443,
    "front-protocol": "https",
    "back-port": 80,
    "back-protocol": "http"
  }

 And the code handling the above request 
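A purely illustrative sketch of how a handler might validate a request body like the one above (hypothetical helper, not the actual Neutron implementation; note that, per the thread, a front/back protocol mismatch such as https-in/http-out is deliberately allowed):

```python
VALID_PROTOCOLS = {"http", "https", "tcp"}

def validate_balancer_config(body):
    """Check a balancer-configure request body.

    Returns a list of error strings; an empty list means the body is valid.
    """
    errors = []
    for key in ("front-port", "back-port"):
        port = body.get(key)
        if not isinstance(port, int) or not 1 <= port <= 65535:
            errors.append(f"{key} must be an integer in 1-65535")
    for key in ("front-protocol", "back-protocol"):
        if body.get(key) not in VALID_PROTOCOLS:
            errors.append(f"{key} must be one of {sorted(VALID_PROTOCOLS)}")
    return errors

errs = validate_balancer_config(
    {"front-port": 443, "front-protocol": "https",
     "back-port": 80, "back-protocol": "http"})
```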

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 07:56:19 -0800
Dan Smith d...@danplanet.com wrote:

  - We want to make backwards incompatible changes to the API
and whether we do it in-place with V2 or by releasing V3
we'll have some form of dual API support burden.
 
 IMHO, the cost of maintaining both APIs (which are largely duplicated)
 for almost any amount of time outweighs the cost of localized changes.

The API layer is actually quite a thin layer on top of the rest
of Nova. Most of the logic in the API code is really just checking
incoming data, calling the underlying nova logic and then massaging
what is returned into the correct format. So as soon as you change the
format, the cost of localised changes is pretty much the same as
duplicating the APIs. In fact I'd argue in many cases it's more, because
in terms of code readability it's a lot worse, and techniques like using
jsonschema decorators for input validation are a lot harder to
implement. And unit and tempest tests still need to be duplicated.
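For readers unfamiliar with the pattern being referred to: attaching a validation schema to an API method via a decorator looks roughly like this (a toy sketch with a hand-rolled checker in place of the real jsonschema library, not actual Nova v3 code):

```python
import functools

def schema(required=(), types=None):
    """Toy stand-in for a jsonschema validation decorator: reject bad
    bodies before the handler runs. The real pattern validates a full
    jsonschema document; this only checks required keys and types."""
    types = types or {}

    def decorator(func):
        @functools.wraps(func)
        def wrapper(body):
            for key in required:
                if key not in body:
                    raise ValueError(f"missing required field: {key}")
            for key, expected in types.items():
                if key in body and not isinstance(body[key], expected):
                    raise ValueError(f"{key} must be {expected.__name__}")
            return func(body)
        return wrapper
    return decorator

@schema(required=("name",), types={"name": str, "min_count": int})
def create_server(body):
    # Handler sees only pre-validated input.
    return {"created": body["name"]}
```

The appeal is that the input contract sits right next to the handler, which is much harder to do cleanly in the v2 extension style.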

 
 The neutron stickiness aside, I don't see a problem leaving the
 proxying in place for the foreseeable future. I think that it's
 reasonable to mark them as deprecated, encourage people not to use
 them, and maybe even (with a core api version to mark the change) say
 that they're not supported anymore.
 

I don't understand why this is not also seen as forcing people off V2
to V3 - the very thing being given as a reason for not being able to set a
reasonable deprecation time for V2. This will require major changes for
people using the V2 API to change how they use it.


 I also think that breaking our users because we decided to split A
 into B and C on the backend kind of sucks. I imagine that continuing
 to do that at the API layer (when we're clearly going to keep doing
 it on the backend) is going to earn us a bit of a reputation.

In all the discussions we've (as in the Nova group) had over the API
there has been a pretty clear consensus that proxying is quite
suboptimal (there are caching issues etc) and the long term goal is to
remove it from Nova. Why the change now? 

 
- Backporting V3 infrastructure changes to V2 would be a
  considerable amount of programmer/review time
 
 While acknowledging that you (and others) have done that for v3
 already, I have to think that such an effort is much less costly than
 maintaining two complete overlapping pieces of API code.

I strongly disagree here. I think you're overestimating the
amount of maintenance effort this involves and significantly
underestimating how much effort and review time a backport is going to
take.

 - twice the code
 - different enough to be annoying to convert existing clients to use
 - not currently different enough to justify the pain

For starters, it's not twice the code, because we don't do things like
proxying and because we are able to logically separate out input
validation into jsonschema.

v2 API: ~14600 LOC
v3 API: ~7300 LOC (~8600 LOC if nova-network as-is added back in,
though the actual increase would almost certainly be a lot smaller)

And that's with a lot of the jsonschema patches not landed. So it's
actually getting *smaller*. Long term, which looks better from a
maintenance point of view?

And I think you're continuing to look at it solely from the point of
view of pain for existing users of the API and not considering the pain
for new users who have to work out how to use the API. E.g. just one
simple example: how many people new to the API get confused about
what they are meant to send when it asks for instance_uuid when
they've never received one - is it a server uuid, and if so what's the
difference? Do I have to do some sort of conversion? Similar issues
exist around project and tenant. And when writing code they have to
remember that for this part of the API they pass it as server_uuid, in
another instance_uuid, or maybe it's just id? Each of these, looked at
individually, may seem like a small cost or barrier to using the API,
but they all add up, and they end up being imposed on a lot of people.

 This feels a lot like holding our users hostage in order to get them
 to move. We're basically saying We tweaked a few things, fixed some
 spelling errors, and changed some date stamp formats. You will have to
 port your client, or no new features for you! That's obviously a
 little hyperbolic, but I think that deployers of APIv2 would probably
 feel like that's the story they have to give to their users.

And how is, say, removing proxying or making *any* backwards-incompatible
change any different? And this sort of situation is very common with
major library version upgrades. If you want new features you have to
port to the new library version, which requires changes to your app (that's
why it's a major library version, not a minor one).

 I naively think that we could figure out a way to move things forward
 without having to completely break older clients. It's clear that
 other services (with much larger and more widely-used APIs) are 

Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Keith Bray
Have you considered writing Heat resource plug-ins that perform (or configure 
within other services) instance snapshots, backups, or whatever other 
maintenance workflow possibilities you want that don't exist?  Then these 
maintenance workflows you mention could be expressed in the Heat template 
forming a single place for the application architecture definition, including 
defining the configuration for services that need to be application aware 
throughout the application's life.  As you describe things in Murano, I 
interpret that you are layering application architecture specific information 
and workflows into a DSL in a layer above Heat, which means information 
pertinent to the application as an ongoing concern would be disjoint.  
Fragmenting the necessary information to wholly define an 
infrastructure/application architecture could make it difficult to share the 
application and modify the application stack.

I would be interested in a library that allows for composing Heat templates 
from snippets or fragments of pre-written Heat DSL... The library's job 
could be to ensure that the snippets, when combined, create a valid Heat 
template free from conflict amongst resources, parameters, and outputs.  The 
interaction with the library, I think, would belong in Horizon, and the 
Application Catalog and/or Snippets Catalog could be implemented within 
Glance.
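The conflict check such a snippet library would need is easy to sketch (illustrative only; the section names follow HOT conventions):

```python
def merge_snippets(*snippets):
    """Merge Heat-template-like dicts, refusing silent collisions.

    Combines the 'resources', 'parameters', and 'outputs' sections of
    each snippet and raises if two snippets define the same name in the
    same section.
    """
    merged = {"resources": {}, "parameters": {}, "outputs": {}}
    for snippet in snippets:
        for section in merged:
            for name, definition in snippet.get(section, {}).items():
                if name in merged[section]:
                    raise ValueError(f"duplicate {section} entry: {name}")
                merged[section][name] = definition
    return merged
```

A real library would of course also resolve intrinsic-function references across snippets, but the name-collision check is the core guarantee described above.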

Also, there may be workflow steps which are not covered by Heat by design. 
For example, an application publisher may include creating instance snapshots, 
data migrations, backups, etc. in the deployment or maintenance workflows. I 
don't see how these may be done by Heat, while Murano should definitely 
support these scenarios.

From: Alexander Tivelkov ativel...@mirantis.commailto:ativel...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, February 24, 2014 12:18 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Murano] Object-oriented approach for defining 
Murano Applications

Hi Stan,

It is good that we are on a common ground here :)

Of course this can be done by Heat. In fact - it will be, in the very same 
manner as it always was; I am pretty sure we've discussed this many times 
already. When Heat Software Config is fully implemented, it will be possible to 
use it instead of our Agent execution plans for software configuration - in the 
very same manner as we use regular Heat templates for resource allocation.

Heat does indeed support template composition - but we don't want our end-users 
to have to learn how to do that: we want them just to combine existing 
applications at a higher level. Murano will use the template composition under 
the hood, but only in the way which is designed by the application publisher. If 
the publisher has decided to configure the software using Heat Software Config, 
then this option will be used. If some other (probably legacy) way of doing 
this was preferred, Murano should be able to support that and allow creating 
such workflows.

Also, there may be workflow steps which are not covered by Heat by design. For 
example, an application publisher may include creating instance snapshots, data 
migrations, backups, etc. in the deployment or maintenance workflows. I don't 
see how these may be done by Heat, while Murano should definitely support these 
scenarios.

So, as a conclusion, Murano should not be thought of as a Heat alternative: it 
is a different tool located on a different layer of the stack, aimed at a 
different user audience - and, most importantly, using Heat underneath.


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 8:36 PM, Stan Lagun 
sla...@mirantis.commailto:sla...@mirantis.com wrote:
Hi Alex,

Personally I like the approach and how you explain it. I just would like to 
know your opinion on how this is better than someone writing a Heat template 
that creates an Active Directory, let's say with one primary and one secondary 
controller, and then publishing it somewhere. Since Heat does support software 
configuration as of late, and has the concept of environments [1] that Steven 
Hardy generously pointed out in another mailing thread can be used for 
composition as well, it seems like everything you said can be done by Heat alone.

[1]: 
http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html


On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov 
ativel...@mirantis.commailto:ativel...@mirantis.com wrote:
Sorry folks, I didn't put the proper image url. Here it is:


https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov 
ativel...@mirantis.commailto:ativel...@mirantis.com wrote:

Hi,


I would like to initiate one 

Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-24 Thread Stefano Maffulli
On 02/17/2014 05:21 PM, Steve Kowalik wrote:
 I found it completely non-obvious too, and had to go back and look for
 the link. If the promotion code text box was always visible with the
 Apply button grayed out when the text box is empty, I think that would help.

Unfortunately the site is managed by eventbrite and we have little
control over their UX choices.

Since we know it's quite easy to miss the spot to redeem the invitation
code, we include a screenshot in the invitation email: there is an arrow
there, showing where to click to enter the discount code. If you have
other ideas on how to make the process more obvious let us know.

Cheers,
Stef

-- 
Ask and answer questions on https://ask.openstack.org



[openstack-dev] [Ironic] Starting to postpone work to Juno

2014-02-24 Thread Devananda van der Veen
Hi all,

For the last few meetings, we've been discussing how to prioritize the work
that we need to get done as we approach the close of Icehouse development.
There's still some distance between where we are and where we need to be --
integration with other projects (eg. Nova), CI testing of that integration
(eg. via devstack), and fixing bugs that we continue to find.

As core reviewers need to focus their time during the last week of I-3,
we've discussed postponing cosmetic changes, particularly patches that just
refactor code without any performance or feature benefit, to the start of
Juno. [1] So, later today I am going to block patches that do not have
important functional changes and are non-trivial in scope (eg, take more
than a minute to read), are related to low-priority or wishlist items, or
are not targeted to Icehouse.

Near the end of the week, I will retarget incomplete blueprints to the Juno
release.

Next week is the TripleO developer sprint, which coincides with the close
of I-3. Many Ironic developers and more than half of our core review team
will also be there. This will give us a good opportunity to hammer out
testing and integration issues and work on bug fixes.

Over the next month, I would like us to stabilize what we have, add further
integration and functional testing to our gate, and write deployer/usage
documentation.

Regards,
Devananda


[1]

We actually voted on this last week, I didn't follow through, and Chris
reminded me during the meeting today...

http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-02-17-19.00.html


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Morgan Fainberg
On the topic of backwards incompatible changes:

I strongly believe that breaking current clients that use the APIs directly is 
the worst option possible. All the arguments about needing to know which APIs 
work based upon which backend drivers are used are all valid, but making an API 
incompatible change when we’ve made the contract that the current API will be 
stable is a very bad approach. Breaking current clients isn't just breaking 
novaclient; it would also break any customers that are developing directly 
against the API. In the case of cloud deployments with real-world production 
loads on them (and custom development around the APIs) upgrading between major 
versions is already difficult to orchestrate (timing, approvals, etc.); if we 
add in the need to re-work large swaths of code due to API changes, it will 
become even more onerous, and perhaps drive deployers to forgo the upgrades in 
favor of stability.

If the perception is that we don’t have stable APIs (especially when we are 
ostensibly versioning them), driving adoption of OpenStack becomes 
significantly more difficult. Difficulty in driving further adoption would be a 
big negative to both the project and the community.

TL;DR, “don’t break the contract”. If we are seriously making incompatible 
changes (and we will be regardless of the direction) the only reasonable option 
is a new major version.
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On February 24, 2014 at 10:16:31, Matt Riedemann (mrie...@linux.vnet.ibm.com) 
wrote:



On 2/24/2014 10:13 AM, Russell Bryant wrote:  
 On 02/24/2014 01:50 AM, Christopher Yeoh wrote:  
 Hi,  
  
 There has recently been some speculation around the V3 API and whether  
 we should go forward with it or instead backport many of the changes  
 to the V2 API. I believe that the core of the concern is the extra  
 maintenance and test burden that supporting two APIs means and the  
 length of time before we are able to deprecate the V2 API and return  
 to maintaining only one (well two including EC2) API again.  
  
 Yes, this is a major concern. It has taken an enormous amount of work  
 to get to where we are, and v3 isn't done. It's a good time to  
 re-evaluate whether we are on the right path.  
  
 The more I think about it, the more I think that our absolute top goal  
 should be to maintain a stable API for as long as we can reasonably do  
 so. I believe that's what is best for our users. I think if you gave  
 people a choice, they would prefer an inconsistent API that works for  
 years over dealing with non-backwards compatible jumps to get a nicer  
 looking one.  
  
 The v3 API and its unit tests are roughly 25k lines of code. This also  
 doesn't include the changes necessary in novaclient or tempest. That's  
 just *our* code. It explodes out from there into every SDK, and then  
 end user apps. This should not be taken lightly.  
  
 This email is rather long so here's the TL;DR version:  
  
 - We want to make backwards incompatible changes to the API  
 and whether we do it in-place with V2 or by releasing V3  
 we'll have some form of dual API support burden.  
 - Not making backwards incompatible changes means:  
 - retaining an inconsistent API  
  
 I actually think this isn't so bad, as discussed above.  
  
 - not being able to fix numerous input validation issues  
  
 I'm not convinced, actually. Surely we can do a lot of cleanup here.  
 Perhaps you have some examples of what we couldn't do in the existing API?  
  
 If it's a case of wanting to be more strict, some would argue that the  
 current behavior isn't so bad (see robustness principle [1]):  
  
 Be conservative in what you do, be liberal in what you accept from  
 others (often reworded as Be conservative in what you send, be  
 liberal in what you accept).  
  
 There's a decent counter argument to this, too. However, I still fall  
 back on it being best to just not break existing clients above all else.  
  
 - have to forever proxy for glance/cinder/neutron with all  
 the problems that entails.  
  
 I don't think I'm as bothered by the proxying as others are. Perhaps  
 it's not architecturally pretty, but it's worth it to maintain  
 compatibility for our users.  

+1 to this, I think this is also related to what Jay Pipes is saying in  
his reply:  

Whether a provider chooses to, for example,  
deploy with nova-network or Neutron, or Xen vs. KVM, or support block  
migration for that matter *should have no effect on the public API*. The  
fact that those choices currently *do* effect the public API that is  
consumed by the client is a major indication of the weakness of the API.  

As a consumer, I don't want to have to know which V2 APIs work and which  
don't depending on if I'm using nova-network or Neutron.  

  
 - Backporting V3 infrastructure changes to V2 would be a  
 considerable amount of programmer/review time  
  
 Agreed, but so is the 

Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-24 Thread Collins, Sean
Make sure that you also log in, or have your username and password handy before 
you redeem it.

If you click a link to send a password reset, you'll lose your session, and the 
invite code is a one-time use – I had to dig through my history to get the URL 
back, since the back button did not work correctly.

--
Sean M. Collins


[openstack-dev] [nova] why doesn't _rollback_live_migration() always call rollback_live_migration_at_destination()?

2014-02-24 Thread Chris Friesen

I'm looking at the live migration rollback code and I'm a bit confused.

When setting up a live migration we unconditionally run 
ComputeManager.pre_live_migration() on the destination host to do 
various things including setting up networks on the host.


If something goes wrong with the live migration in 
ComputeManager._rollback_live_migration() we will only call 
self.compute_rpcapi.rollback_live_migration_at_destination() if we're 
doing block migration or volume-backed migration that isn't shared storage.


However, looking at 
ComputeManager.rollback_live_migration_at_destination(), I also see it 
cleaning up networking as well as block device.


What happens if we have a shared-storage instance that we try to migrate 
and fail and end up rolling back?  Are we going to end up with messed-up 
networking on the destination host because we never actually cleaned it up?
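For reference, the rollback condition being questioned can be paraphrased roughly as follows (a sketch of the logic as described above, not the actual Nova source):

```python
def rolls_back_at_destination(block_migration, volume_backed, shared_storage):
    """Destination-side cleanup only runs for block migration, or for
    volume-backed migration that isn't on shared storage -- so the plain
    shared-storage case never triggers it, even though
    pre_live_migration() set up networking there unconditionally."""
    return block_migration or (volume_backed and not shared_storage)
```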


Chris



Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Christopher Armstrong
On Mon, Feb 24, 2014 at 4:20 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Keith,

 Thank you for bringing up this question. We think that it could be done
 inside Heat. This is a part of our future roadmap to bring more stuff to
 Heat and pass all actual work to the heat engine. However it will require a
 collaboration between Heat and Murano teams, so that is why we want to have
 incubated status, to start better integration with other projects being a
 part of OpenStack community. I will understand Heat team when they refuse
 to change Heat templates to satisfy the requirements of the project which
 does not officially belong to OpenStack. With incubation status it will be
 much easier.
 As for the actual work, backups and snapshots are processes. It will be
 hard to express them in a good way in the current HOT template format. We
 expect to use Mistral resources defined in Heat which will trigger the
 backup events, while the backup workflow associated with the application
 can be defined outside of Heat. I don't think the Heat team will include
 workflow definitions as a part of the template format, though they may
 allow us to use resources which reference such workflows stored in a
 catalog. It could be an extension to HOT Software Config, for example, but
 we need to validate this approach with the Heat team.


For what it's worth, there's already precedent for including non-OpenStack
resource plugins in Heat, in a contrib directory (which is still tested
with the CI infrastructure).




-- 
IRC: radix
Christopher Armstrong


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 11:13:11 -0500
Russell Bryant rbry...@redhat.com wrote:
 
 Yes, this is a major concern.  It has taken an enormous amount of work
 to get to where we are, and v3 isn't done.  It's a good time to
 re-evaluate whether we are on the right path.

So I think it's important to point out that we pretty much were done
before the last-minute nova-network unfreezing, which became a new
requirement for V3 in I-3, and the unfortunate unexpected delay in the
tasks API work. If either of those hadn't occurred we could have made
up the difference in I-3 - and even then we *could* have made it, but
for reasonable risk-management purposes we decided to delay rather than
try to merge a lot of code at the last minute.

 The more I think about it, the more I think that our absolute top goal
 should be to maintain a stable API for as long as we can reasonably do
 so.  I believe that's what is best for our users.  I think if you gave
 people a choice, they would prefer an inconsistent API that works for
 years over dealing with non-backwards compatible jumps to get a nicer
 looking one.
 
 The v3 API and its unit tests are roughly 25k lines of code.  This
 also doesn't include the changes necessary in novaclient or tempest.
 That's just *our* code.  It explodes out from there into every SDK,
 and then end user apps.  This should not be taken lightly.

So the v2 API and its unit tests are around 43k LOC. And this is even
with the v3 API having more tests for the better input validation we do.

Just taking this down to burden in terms of LOC (and this may be one
of the worst metrics ever): if we proceeded with the v3 API and
maintained the V2 API for say 4 cycles, that's an extra burden of 100k
LOC compared to just doing the v2 API. But we'd pay that off in just two
and a bit cycles once the v2 API is removed, because we'd then be
maintaining around 25k LOC instead of 43k LOC.

 
 If it's a case of wanting to be more strict, some would argue that the
 current behavior isn't so bad (see robustness principle [1]):
 
  "Be conservative in what you do, be liberal in what you accept
  from others" (often reworded as "Be conservative in what you send, be
  liberal in what you accept").

Sometimes the problem is that people send extraneous data and they're
never told that what they're doing is wrong. But really no harm is
caused; everything still works. I'm sure there are plenty of examples
of this happening.

But the bigger issue around input validation being too lax is
that people send optional parameters (perhaps with a typo, or perhaps
simply in the wrong place) and the API layer quietly ignores them. The
users think they've requested some behaviour, the API says "yep,
sure!", but it doesn't actually do what they want. We've even seen
this sort of thing in our api samples which automatically flows through
to our documentation!
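To make the difference concrete, here is a hand-rolled sketch of the kind of strict validation being described (this is not Nova's actual jsonschema-based code, and the parameter names are made up for illustration): an unknown key fails loudly instead of being quietly dropped.

```python
# Sketch of "strict" request-body validation of the sort jsonschema gives
# the v3 API.  Schema and parameter names are illustrative, not Nova's.

RESIZE_ALLOWED = {"flavor_ref"}    # keys a resize request may contain
RESIZE_REQUIRED = {"flavor_ref"}   # keys a resize request must contain

def validate_resize(body):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for key in body:
        if key not in RESIZE_ALLOWED:
            # A lax v2-style layer would silently ignore this key.
            errors.append("unknown parameter: %s" % key)
    for key in RESIZE_REQUIRED:
        if key not in body:
            errors.append("missing parameter: %s" % key)
    return errors

# A typo'd parameter now produces an error instead of doing nothing.
assert validate_resize({"flavor_ref": "m1.small"}) == []
assert validate_resize({"flavour_ref": "m1.small"}) == [
    "unknown parameter: flavour_ref",
    "missing parameter: flavor_ref",
]
```

With validation like this baked in at the API layer, a misplaced or misspelled optional parameter surfaces immediately rather than silently producing the wrong behaviour.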

 There's a decent counter argument to this, too.  However, I still fall
 back on it being best to just not break existing clients above all
 else.

I agree, we shouldn't break existing clients - within a major version.
That's why we need to make an API rev.

  - The V3 API as-is has:
- lower maintenance
- is easier to understand and use (consistent).
- Much better input validation which is baked-in (json-schema)
  rather than ad-hoc and incomplete.
 
 So here's the rub ... with the exception of the consistency bits, none
 of this is visible to users, which makes me think we should be able to
 do all of this on v2.

As discussed above we can't really do a lot on input validation
either. And I think the pain of doing the backport is being greatly
underestimated. In doing the v3 port we arranged the patches so that much
of it, in terms of review, was similar to doing patches to V2 rather than
starting from new code. And I know how hard it was to get it all in, even
during a period when it was easier to get review bandwidth.

 
  - Whilst we have existing users of the API we also have a lot more
users in the future. It would be much better to allow them to use
the API we want to get to as soon as possible, rather than trying
to evolve the V2 API and forcing them along the transition that
  they could otherwise avoid.
 
 I'm not sure I understand this.  A key point is that I think any
 evolving of the V2 API has to be backwards compatible, so there's no
 forcing them along involved.

Well other people have been suggesting we can just deprecate parts (be
it proxying or other bits we really don't like) and then make the
backwards incompatible change. I think we've already said we'll do it
for XML for the V2 API and force them off to JSON.

  - Proposed way forward:
- Release the V3 API in Juno with nova-network and tasks support
- Feature freeze the V2 API when the V3 API is released
  - Set the timeline for deprecation of V2 so users have a lot
of warning
  - Fallback for those who really don't want to move after
deprecation is an API service which translates between V2 and 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:01 PM, Morgan Fainberg wrote:
 On the topic of backwards incompatible changes:
 
 I strongly believe that breaking current clients that use the APIs
 directly is the worst option possible. All the arguments about needing
 to know which APIs work based upon which backend drivers are used are
 all valid, but making an API incompatible change when we’ve made the
 contract that the current API will be stable is a very bad approach.
 Breaking current clients isn’t just breaking “novaclient”, it would also
 break any customers that are developing directly against the API. In the
 case of cloud deployments with real-world production loads on them (and
 custom development around the APIs) upgrading between major versions is
 already difficult to orchestrate (timing, approvals, etc), if we add in
 the need to re-work large swaths of code due to API changes, it will
 become even more onerous and perhaps drive deployers to forego the
 upgrades in lieu of stability.
 
 If the perception is that we don’t have stable APIs (especially when we
 are ostensibly versioning them), driving adoption of OpenStack becomes
 significantly more difficult. Difficulty in driving further adoption
 would be a big negative to both the project and the community.
 
 TL;DR, “don’t break the contract”. If we are seriously making
 incompatible changes (and we will be regardless of the direction) the
 only reasonable option is a new major version.

FWIW, I do *not* consider non backwards compatible changes to be on the
table for the existing API.  Evolving it would have to be done in a
backwards compatible way.  I'm completely in agreement with that.

-- 
Russell Bryant



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Dan Smith
 The API layer is a actually quite a very thin layer on top of the
 rest of Nova. Most of the logic in the API code is really just
 checking incoming data, calling the underlying nova logic and then
 massaging what is returned in the correct format. So as soon as you
 change the format the cost of localised changes is pretty much the
 same as duplicating the APIs. In fact I'd argue in many cases it's
 more, because in terms of code readability it's a lot worse, and
 techniques like using decorators for jsonschema for input validation
 are a lot harder to implement. And unit and tempest tests still need
 to be duplicated.

Making any change to the backend is double the effort with the two trees
as it would be with one API. I agree that changing/augmenting the format
of a call means some localized "if this then that" code, but that's
minor compared to what it takes to do things on the backend, IMHO.

 I don't understand why this is also not seen as forcing people off
 V2 to V3 which is being given as a reason for not being able to set
 a reasonable deprecation time for V2. This will require major changes
 for people using the V2 API to change how they use it.

Well, deprecating them doesn't require the change. Removing them does. I
think we can probably keep the proxying in a deprecated form for a very
long time, hopefully encouraging new users to do it right without
breaking existing users who don't care. Hopefully losing out on the
functionality they miss by not talking directly to Neutron (for example)
will be a good carrot to avoid using the proxy APIs.

 In all the discussions we've (as in the Nova group) had over the API 
 there has been a pretty clear consensus that proxying is quite 
 suboptimal (there are caching issues etc) and the long term goal is
 to remove it from Nova. Why the change now?

This is just MHO, of course. I don't think I've been party to those
conversations. I understand why the proxying is bad, but that's a
different issue from whether we drop it and break our users.

 I strongly disagree here. I think you're overestimating the amount of
 maintenance effort this involves and significantly underestimating
 how much effort and review time a backport is going to take.

Fair enough. I'm going from my experience over the last few cycles of
changing how the API communicates with the backend. This is something
we'll have to continue to evolve over time, and right now it
Sucks Big Time(tm) :)

 - twice the code
 For starters, it's not twice the code, because we don't do things
 like proxying and because we are able to logically separate out
 input validation into jsonschema.

You're right, I should have said "twice the code for changes between the
API and the backend".

 Eg just one simple example, but how many people new to the API get
 confused about what they are meant to send when it asks for an
 instance_uuid when they've never received one - is it a server uuid, and
 if so what's the difference? Do I have to do some sort of
 conversion? Similar issues arise around project and tenant. And when
 writing code they have to remember that for this part of the API they pass
 it as server_uuid, in another instance_uuid, or maybe it's just id?
 All of these looked at individually may look like small costs or
 barriers to using the API but they all add up and they end up being
 imposed over a lot of people.

Yup, it's ugly, no doubt. I think that particular situation is probably
(hopefully?) covered up by the various client libraries (and/or docs)
that we have. If not, I think it's probably something we can improve
from an experience perspective on that end. But yeah, I know the public
API docs would still have that ambiguity.

 And how is say removing proxying or making *any* backwards
 incompatible change any different?

It's not. That's why I said "maybe remove it some day" :)

 Well if you never deprecate the only way to do it is to maintain the 
 old API forever (including test). And just take the hit on all that 
 involves.

Sure. Hopefully people that actually deploy and support our API will
chime in here about whether they think that effort is worth not telling
their users to totally rewrite their clients.

If we keep v2 and v3, I think we start in icehouse with a very large
surface, which will increase over time. If we don't, then we start with
v2 and end up with only the delta over time.

 What about the tasks API? We discussed that at the mid-cycle summit
 and decided that the alternative backwards-compatible way of doing it
 was too ugly and we didn't want to do that. But that's exactly what
 we'd be doing if we implemented them in the v2 API, and it would be a
 feature which ends up looking bolted on because of the otherwise
 significant non-backwards-compatible API changes we can't do.

If we version the core API and let the client declare the version it
speaks in a header, we could iterate on that interface, right? If they're
version <X, return the server object and a task header; if >=X, return
the task. We 

Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Georgy Okrokvertskhov
Hi Keith,

Thank you for bringing up this question. We think that it could be done
inside Heat. This is a part of our future roadmap to bring more stuff to
Heat and pass all actual work to the heat engine. However it will require a
collaboration between Heat and Murano teams, so that is why we want to have
incubated status, to start better integration with other projects being a
part of OpenStack community. I will understand Heat team when they refuse
to change Heat templates to satisfy the requirements of the project which
does not officially belong to OpenStack. With incubation status it will be
much easier.
As for the actual work, backups and snapshots are processes. It will be
hard to express them in a good way in current HOT template. We see that we
will use Mistral resources defined in Heat which will trigger the events for
backup, and the backup workflow associated with the application can be defined
outside of Heat. I don't think that the Heat team will include workflow
definitions as a part of template format, while they can allow us to use
resources which reference such workflows stored in a catalog. It can be an
extension for HOT Software config for example, but we need to validate this
approach with the heat team.

The idea of Heat template generation library\engine is exactly what we have
implemented. Murano engine uses its own application definition to generate
valid Heat templates from snippets. As there is no preliminary knowledge of
actual snippet content, Murano package definition language allows
application writer to specify application requirements, application
constraints, data transformation rules and assertions to make a heat
template generation process predictable and manageable. I think this is an
essential part of Catalog as it tightly coupled with the way how
applications and its resources are defined.

Thanks
Georgy


On Mon, Feb 24, 2014 at 1:44 PM, Keith Bray keith.b...@rackspace.com wrote:

  Have you considered writing Heat resource plug-ins that perform (or
 configure within other services) instance snapshots, backups, or whatever
 other maintenance workflow possibilities you want that don't exist?  Then
 these maintenance workflows you mention could be expressed in the Heat
 template forming a single place for the application architecture
 definition, including defining the configuration for services that need to
 be application aware throughout the application's life.  As you describe
 things in Murano, I interpret that you are layering application
 architecture specific information and workflows into a DSL in a layer above
 Heat, which means information pertinent to the application as an ongoing
 concern would be disjoint.  Fragmenting the necessary information to wholly
 define an infrastructure/application architecture could make it difficult
 to share the application and modify the application stack.

  I would be interested in a library that allows for composing Heat
 templates from snippets or fragments of pre-written Heat DSL... The
 library's job could be to ensure that the snippets, when combined, create a
 valid Heat template free from conflict amongst resources, parameters, and
 outputs.  The interaction with the library, I think, would belong in
 Horizon, and the Application Catalog and/or Snippets Catalog could be
 implemented within Glance.
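As a rough sketch of what such a composition library might do (purely illustrative - compose_templates and the simplified fragment format are invented here, not an existing Murano or Heat API): merge template fragments section by section, and refuse to combine snippets whose resources, parameters, or outputs collide.

```python
# Illustrative sketch of composing Heat-template fragments with conflict
# detection.  The fragment format is a simplified stand-in for real HOT.

def compose_templates(*fragments):
    """Merge template fragments, failing loudly on any duplicate names."""
    merged = {"resources": {}, "parameters": {}, "outputs": {}}
    for fragment in fragments:
        for section in merged:
            for name, definition in fragment.get(section, {}).items():
                if name in merged[section]:
                    raise ValueError(
                        "conflict in %s: %r defined twice" % (section, name))
                merged[section][name] = definition
    return merged

# Two snippets with distinct resource names compose cleanly.
web = {"resources": {"web_server": {"type": "OS::Nova::Server"}}}
db = {"resources": {"db_server": {"type": "OS::Nova::Server"}}}

combined = compose_templates(web, db)
assert set(combined["resources"]) == {"web_server", "db_server"}
```

Two snippets that both define, say, a `web_server` resource would raise a ValueError instead of silently shadowing one another, which is the kind of validation such a library would be responsible for.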

  Also, there may be workflow steps which are not covered by Heat by
 design. For example, application publisher may include creating instance
 snapshots, data migrations, backups etc into the deployment or maintenance
 workflows. I don't see how these may be done by Heat, while Murano should
 definitely support these scenarios.

   From: Alexander Tivelkov ativel...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 24, 2014 12:18 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Murano] Object-oriented approach for
 defining Murano Applications

   Hi Stan,

  It is good that we are on a common ground here :)

  Of course this can be done by Heat. In fact - it will be, in the very
 same manner as it always was, I am pretty sure we've discussed this many
 times already. When Heat Software config is fully implemented, it will be
 possible to use it instead of our Agent execution plans for software
 configuration - in the very same manner as we use regular heat templates
 for resource allocation.

  Heat does indeed support template composition - but we don't want our
 end-users to learn how to do that: we want them just to combine existing
 applications at a higher level. Murano will use the template composition under
 the hood, but only in the way which is designed by the application publisher.
 If the publisher has decided to configure the software using Heat
 Software Config, then this option will be used. If some other (probably
 some legacy) way of 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Chris Friesen

On 02/24/2014 04:01 PM, Morgan Fainberg wrote:


TL;DR, “don’t break the contract”. If we are seriously making
incompatible changes (and we will be regardless of the direction) the
only reasonable option is a new major version.


Agreed.  I don't think we can possibly consider making 
backwards-incompatible changes without changing the version number.


We could stay with V2 and make as many backwards-compatible changes as 
possible using a minor version. This could include things like adding 
support for unified terminology as long as we *also* continue to support 
the old terminology.  The downside of this is that the code gets messy.


On the other hand, if we need to make backwards incompatible changes 
then we need to bump the version number.


Chris



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 11:48:41 -0500
Jay Pipes jaypi...@gmail.com wrote:
 It's not about forcing providers to support all of the public API.
 It's about providing a single, well-documented, consistent HTTP REST
 API for *consumers* of that API. Whether a provider chooses to, for
 example, deploy with nova-network or Neutron, or Xen vs. KVM, or
 support block migration for that matter *should have no effect on the
 public API*. The fact that those choices currently *do* affect the
 public API that is consumed by the client is a major indication of
 the weakness of the API.

So for the nova-network/neutron issue, it's more a result of either
support for neutron never being implemented or new nova-network features
being added without corresponding neutron support. I agree it's not a
good place to be in, but isn't really relevant to whether we have
extensions or not.

Similarly with the Xen vs KVM situation, I don't think it's an
extension-related issue. In V2 we have features in *core* which are only supported
by some virt backends. It perhaps comes down to not being willing to
say either that we will force all virt backends to support all features
in the API or they don't get in the tree. Or alternatively be willing
to say no to any feature in the API which can not be currently
implemented in all virt backends. The former greatly increases the
barrier to getting a hypervisor included, the latter restricts Nova
development to the speed of the slowest developing and least
mature hypervisor supported.

Chris



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
 - Whilst we have existing users of the API we also have a lot more
   users in the future. It would be much better to allow them to use
   the API we want to get to as soon as possible, rather than trying
   to evolve the V2 API and forcing them along the transition that
 they could otherwise avoid.

 I'm not sure I understand this.  A key point is that I think any
 evolving of the V2 API has to be backwards compatible, so there's no
 forcing them along involved.
 
 Well other people have been suggesting we can just deprecate parts (be
 it proxying or other bits we really don't like) and then make the
 backwards incompatible change. I think we've already said we'll do it
 for XML for the V2 API and force them off to JSON.

Well, marking deprecated is different than removing it.  We have to get
good data that shows that it's not actually being used before we can
actually remove it.  Marking it deprecated at least signals that we
don't consider it actively maintained and that it may go away in the future.

I also consider the XML situation a bit different than changing
specifics of a given API extension, for example.  We're talking about
potentially removing an entire API vs changing an API while it's in use.

 2) Take what we have learned from v3 and apply it to v2.  For example:

 snip
  - revisit a new major API when we get to the point of wanting to
effectively do a re-write, where we are majorly re-thinking the
way our API is designed (from an external perspective, not internal
implementation).
 
 Ultimately I think what this would mean is punting any significant API
 improvements several years down the track and effectively throwing away
 a lot of the work we've done in the last year on the API

One of the important questions is how much improvement can we make to v2
without breaking backwards compatibility?

What can we *not* do in a backwards compatible manner?  How much does it
hurt to give those things up?  How does that compare to the cost of dual
maintenance?

-- 
Russell Bryant



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Michael Davies
On Tue, Feb 25, 2014 at 8:31 AM, Morgan Fainberg m...@metacloud.com wrote:

 On the topic of backwards incompatible changes:

 I strongly believe that breaking current clients that use the APIs
 directly is the worst option possible. All the arguments about needing to
 know which APIs work based upon which backend drivers are used are all
 valid, but making an API incompatible change when we've made the contract
 that the current API will be stable is a very bad approach. Breaking
 current clients isn't just breaking novaclient, it would also break any
 customers that are developing directly against the API. In the case of
 cloud deployments with real-world production loads on them (and custom
 development around the APIs) upgrading between major versions is already
 difficult to orchestrate (timing, approvals, etc), if we add in the need to
 re-work large swaths of code due to API changes, it will become even more
 onerous and perhaps drive deployers to forego the upgrades in lieu of
 stability.

 If the perception is that we don't have stable APIs (especially when we
 are ostensibly versioning them), driving adoption of OpenStack becomes
 significantly more difficult. Difficulty in driving further adoption would
 be a big negative to both the project and the community.

 TL;DR, don't break the contract. If we are seriously making incompatible
 changes (and we will be regardless of the direction) the only reasonable
 option is a new major version


I'm absolutely in agreement here - thanks Morgan for raising this.

Changing the API on consumers means forcing them to re-evaluate their
options: Should I fix my usage of the API, or is it time to try another
solution?  The implementation cost is mostly the same.  We can't assume
that API breakages won't lead to customers leaving.  It's worth noting that
competing cloud APIs are inconsistent, and frankly awful.  But they don't
change because it's all about the commercial interest of retaining
customers and supporting a cornucopia of SDKs.

Any changes to a versioned API need to be completely backwards compatible,
and we shouldn't assume changes aren't going to break things - we should
test the crap out of them so as to ensure this is the case. Or put another
way, any time we touch a stable API, we need to be extremely careful.

If we want new features, if we want to clean up existing interfaces, it's
far better to move to a new API version (even with the maintenance burden
of supporting another API) than try and bolt something on the side.  This
includes improving input validation, because we should not be changing the
functionality presented to end-users on a stable API, even if it's for
their own good.  What it comes down to is strongly supporting the consumers
of our software.  We need to make things easy for those who support and
develop against the APIs.

Hope this helps,

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Australia


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Sean Dague
It's really easy to just say "don't break the contract". Until we got
the level of testing that we currently have in Tempest, the contract was
broken pretty regularly. I'm sure there are still breaks in it around
the edges where we aren't clamping down on people today.

So the history of v2 is far from being a stable API in the traditional
sense.

Which isn't to say we're trying to go and make the whole thing fluid.
However there has to be a path forward for incremental improvement,
because there are massive shortcomings in the existing API.

While a big bang approach might work for smaller interfaces, the Nova
API surface is huge. So huge, it's not even fully documented. Which
means we're at a state where you aren't implementing to an API, you are
implementing to an implementation. And if you look at HP and RAX you'll
find enough differences to make you scratch your head a bunch. And
that's only 2 data points. I'm sure the private cloud products have all
kinds of funkiness in them.

So we do really need to be pragmatic here as well. Because our
experience with v3 so far has been that doing a major version bump on
Nova takes a minimum of 2 years, and even then it doesn't reach a
completion point that anyone's happy to switch over to.

So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project. So whatever
gets decided about v3, the thing that's important to me is a sane way to
be able to add backwards compatible changes (which we actually don't
have today, and I don't think any other service in OpenStack does
either), as well a mechanism for deprecating parts of the API. With some
future decision about whether removing them makes sense.
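As a sketch of what such a mechanism for backwards-compatible change might look like (the header name and version semantics here are hypothetical, not anything Nova has agreed on): the client declares the version it speaks in a request header, the server varies the response accordingly, and clients that send nothing keep the oldest behaviour.

```python
# Hypothetical sketch of client-declared API versioning via a header.
# Header name, version numbers, and response shapes are all invented.

MIN_VERSION = 1
CURRENT_VERSION = 3

def negotiate_version(headers):
    """Pick the API version for a request; absent header means oldest."""
    raw = headers.get("X-Compute-API-Version")
    if raw is None:
        return MIN_VERSION  # existing clients are never broken
    version = int(raw)
    if not MIN_VERSION <= version <= CURRENT_VERSION:
        raise ValueError("unsupported version %d" % version)
    return version

def create_server_response(headers):
    version = negotiate_version(headers)
    if version >= 2:
        # Newer clients opted in to the task-based response shape.
        return {"task": {"id": "abc", "state": "pending"}}
    return {"server": {"id": "abc"}}

assert "server" in create_server_response({})
assert "task" in create_server_response({"X-Compute-API-Version": "2"})
```

The point of the sketch is that new response shapes (like tasks) can be introduced behind an opt-in version number while the default behaviour stays frozen, which gives an incremental path without a big-bang major version.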

-Sean

On 02/24/2014 05:01 PM, Morgan Fainberg wrote:
 On the topic of backwards incompatible changes:
 
 I strongly believe that breaking current clients that use the APIs
 directly is the worst option possible. All the arguments about needing
 to know which APIs work based upon which backend drivers are used are
 all valid, but making an API incompatible change when we’ve made the
 contract that the current API will be stable is a very bad approach.
 Breaking current clients isn’t just breaking “novaclient”, it would also
 break any customers that are developing directly against the API. In the
 case of cloud deployments with real-world production loads on them (and
 custom development around the APIs) upgrading between major versions is
 already difficult to orchestrate (timing, approvals, etc), if we add in
 the need to re-work large swaths of code due to API changes, it will
 become even more onerous and perhaps drive deployers to forego the
 upgrades in lieu of stability.
 
 If the perception is that we don’t have stable APIs (especially when we
 are ostensibly versioning them), driving adoption of OpenStack becomes
 significantly more difficult. Difficulty in driving further adoption
 would be a big negative to both the project and the community.
 
 TL;DR, “don’t break the contract”. If we are seriously making
 incompatible changes (and we will be regardless of the direction) the
 only reasonable option is a new major version.
 
 *—*
 *Morgan Fainberg*
 Principal Software Engineer
 Core Developer, Keystone
 m...@metacloud.com
 
 
 On February 24, 2014 at 10:16:31, Matt Riedemann
 (mrie...@linux.vnet.ibm.com) wrote:
 


 On 2/24/2014 10:13 AM, Russell Bryant wrote:
  On 02/24/2014 01:50 AM, Christopher Yeoh wrote:
  Hi,
 
  There has recently been some speculation around the V3 API and whether
  we should go forward with it or instead backport many of the changes
  to the V2 API. I believe that the core of the concern is the extra
  maintenance and test burden that supporting two APIs means and the
  length of time before we are able to deprecate the V2 API and return
  to maintaining only one (well two including EC2) API again.
 
  Yes, this is a major concern.  It has taken an enormous amount of work
  to get to where we are, and v3 isn't done.  It's a good time to
  re-evaluate whether we are on the right path.
 
  The more I think about it, the more I think that our absolute top goal
  should be to maintain a stable API for as long as we can reasonably do
  so.  I believe that's what is best for our users.  I think if you gave
  people a choice, they would prefer an inconsistent API that works for
  years over dealing with non-backwards compatible jumps to get a nicer
  looking one.
 
  The v3 API and its unit tests are roughly 25k lines of code.  This also
  doesn't include the changes necessary in novaclient or tempest.  That's
  just *our* code.  It explodes out from there into every SDK, and then
  end user apps.  This should not be taken lightly.
 
  This email is rather long so here's the TL;DR version:
 
  - We want to make backwards incompatible changes to the API
 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:49 PM, Michael Davies wrote:
 On Tue, Feb 25, 2014 at 8:31 AM, Morgan Fainberg m...@metacloud.com wrote:
 
 On the topic of backwards incompatible changes:
 
 I strongly believe that breaking current clients that use the APIs
 directly is the worst option possible. All the arguments about
 needing to know which APIs work based upon which backend drivers are
 used are all valid, but making an API incompatible change when we’ve
 made the contract that the current API will be stable is a very bad
 approach. Breaking current clients isn’t just breaking “novaclient”,
 it would also break any customers that are developing directly
 against the API. In the case of cloud deployments with real-world
 production loads on them (and custom development around the APIs)
 upgrading between major versions is already difficult to orchestrate
 (timing, approvals, etc), if we add in the need to re-work large
 swaths of code due to API changes, it will become even more onerous
 and perhaps drive deployers to forego the upgrades in lieu of stability.
 
 If the perception is that we don’t have stable APIs (especially when
 we are ostensibly versioning them), driving adoption of OpenStack
 becomes significantly more difficult. Difficulty in driving further
 adoption would be a big negative to both the project and the community.
 
 TL;DR, “don’t break the contract”. If we are seriously making
 incompatible changes (and we will be regardless of the direction)
 the only reasonable option is a new major version
 
 
 I'm absolutely in agreement here - thanks Morgan for raising this.
 
 Changing the API on consumers means forcing them to re-evaluate their
 options: Should I fix my usage of the API, or is it time to try another
 solution?  The implementation cost is mostly the same.  We can't assume
 that API breakages won't lead to customers leaving.  It's worth noting
 that competing cloud APIs are inconsistent, and frankly awful.  But they
 don't change because it's all about the commercial interest of retaining
 customers and supporting a cornucopia of SDKs.
 
 Any changes to a versioned API need to be completely backwards
 compatible, and we shouldn't assume changes aren't going to break things
 - we should test the crap out of them so as to ensure this is the case.
 Or put another way, any time we touch a stable API, we need to be
 extremely careful.
 
 If we want new features, if we want to clean up existing interfaces,
 it's far better to move to a new API version (even with the maintenance
 burden of supporting another API) than try and bolt something on the
 side.  This includes improving input validation, because we should not
 be changing the functionality presented to end-users on a stable API,
 even if it's for their own good.  What it comes down to is strongly
 supporting the consumers of our software.  We need to make things easy
 for those who support and develop against the APIs.

Let's please avoid too much violent agreement on this.  There seems to
have been some confusion spurred by Morgan's post.

I don't think *anybody* is in favor of non backwards compatible changes
to an existing API.  The short version of choices discussed in this thread:

1) Continue developing v3 (non backwards compat changes until we call it
stable).  Maintain v2 and v3 until we reach a point that we can drop v2
(there is debate about when that could be)

2) Focus on v2 only, and figure out ways to add features and evolve it
**but only in backwards compatible ways**

3) Some other possible view of a way forward that hasn't been brought up
yet, but I'm totally open to ideas

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] tox error

2014-02-24 Thread Shixiong Shang
Hi, guys:

I ran into this error while running tox. It seems related to the Neutron
Linux bridge (LB) tests. Have you seen this issue before? If so, how do I
fix it?

Thanks!

Shixiong


shshang@net-ubuntu2:~/github/neutron$ tox -v -e py27
……...
tests.unit.test_wsgi.XMLDictSerializerTest.test_xml_with_utf8\xa2\xbe\xf7u\xb3 
`@d\x17text/plain;charset=utf8\rimport 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\', 
stderr=None
error: testr failed (3)
ERROR: InvocationError: '/home/shshang/github/neutron/.tox/py27/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args='

 summary 

ERROR:   py27: commands failed


(py27)shshang@net-ubuntu2:~/github/neutron/.tox/py27/bin$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
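The "errors4" prefix on that module path looks like stray bytes from the
subunit/testr output stream fused onto the real test id
(neutron.tests.unit.linuxbridge.test_lb_neutron_agent), so importing the
garbled name can only fail. A generic sketch of the check, not part of
neutron itself:

```python
import importlib.util

def module_importable(dotted_path):
    """Return True if dotted_path can be located on the current sys.path."""
    try:
        return importlib.util.find_spec(dotted_path) is not None
    except ImportError:  # raised when a parent package does not exist
        return False

print(module_importable("os.path"))               # real module -> True
print(module_importable("errors4neutron.tests"))  # garbled prefix -> False
```

If the ungarbled path also fails to import from the tox venv, the failure is
a genuine test-environment problem rather than output corruption.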







Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Chris Friesen

On 02/24/2014 04:59 PM, Sean Dague wrote:


So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project.


Does it necessarily need to be that way though?  Maybe we bump the 
version number every time we make a non-backwards-compatible change, 
even if it's just removing an API call that has been deprecated for a while.


Chris



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Sean Dague
On 02/24/2014 06:13 PM, Chris Friesen wrote:
 On 02/24/2014 04:59 PM, Sean Dague wrote:
 
 So, that begs a new approach. Because I think at this point even if we
 did put out Nova v3, there can never be a v4. It's too much, too big,
 and doesn't fit in the incremental nature of the project.
 
 Does it necessarily need to be that way though?  Maybe we bump the
 version number every time we make a non-backwards-compatible change,
 even if it's just removing an API call that has been deprecated for a
 while.

So I'm not sure how this is different than the keep v2 and use
microversioning suggestion that is already in this thread.
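The "keep v2 and use microversioning" suggestion amounts to header-driven
dispatch: a client asks for a version in a request header and the server
picks the newest handler whose minimum version is satisfied, so clients that
send nothing keep the old v2 behavior. A minimal illustrative sketch — the
header name, registry, and handlers here are invented for the example, not
Nova's actual implementation:

```python
DEFAULT_VERSION = (2, 0)

def parse_version(header_value):
    """Parse 'X.Y' into an (X, Y) tuple, falling back to the default."""
    if not header_value:
        return DEFAULT_VERSION
    major, minor = header_value.split(".")
    return (int(major), int(minor))

class VersionedEndpoint:
    def __init__(self):
        self._handlers = []  # list of (min_version, callable)

    def register(self, min_version):
        def decorator(fn):
            self._handlers.append((min_version, fn))
            # keep newest-first so dispatch picks the most recent match
            self._handlers.sort(key=lambda pair: pair[0], reverse=True)
            return fn
        return decorator

    def dispatch(self, headers, *args):
        requested = parse_version(headers.get("X-API-Version"))
        for min_version, fn in self._handlers:
            if min_version <= requested:
                return fn(*args)
        raise ValueError("no handler for version %d.%d" % requested)

servers = VersionedEndpoint()

@servers.register((2, 0))
def list_servers_v2_0():
    return {"servers": []}

@servers.register((2, 3))
def list_servers_v2_3():
    # backwards-compatible addition: an extra field, same base shape
    return {"servers": [], "servers_links": []}

print(servers.dispatch({}))                        # v2.0 shape
print(servers.dispatch({"X-API-Version": "2.5"}))  # v2.3 shape
```

Under this scheme the v2 contract never breaks: only clients that opt in by
sending a higher version see the new behavior, which is the distinction
between this and a non-backwards-compatible v3.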

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net




