Re: [openstack-dev] [qa][tempest] Where to do response body validation

2014-03-12 Thread Kenichi Oomichi

Hi Chris,

Thank you for picking it up,

> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Thursday, March 13, 2014 1:56 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [qa][tempest] Where to do response body validation
> 
> The new tempest body response validation is being added to individual
> testcases. See this as an example:
> 
> https://review.openstack.org/#/c/78149
> 
> After having a look at https://review.openstack.org/#/c/80174/
> I'm now thinking that perhaps we should be doing the response validation
> in the tempest/services/compute classes. And only check the
> response body if the status code is a success code (and then check that
> it is an appropriate success code).
> 
> I think this will lead to fewer changes needed in the end as the
> response body checking will not need to be added to individual tests.
> 
> There may be some complications with handling extensions, but I think
> they all implement backwards-compatible behaviour so it should be ok.
> 
> Anyone have any thoughts about this alternative approach?

I like the idea of performing the response body validation in the REST
clients. Tempest would be able to check the response body on every call,
and the amount of test code would be reduced.

One concern is that the Nova API returns a different response body depending
on whether the caller is an admin user or not. I'd like to also check the
response body attributes that only an admin user can get. For example, the
"get server info" API returns a response including the "OS-EXT-STS" and
"OS-EXT-SRV-ATT" attributes for an admin user.

So how about performing the basic validation (without the special attributes)
in each REST client, and the special validation (such as the admin-only
attributes above) in each test? The schema size for the special cases could
then be reduced.
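
For illustration, here is a rough sketch (not actual Tempest code; the helper
name and the schema contents are made up) of how the basic validation could be
done once in a REST client using jsonschema:

    import jsonschema


    def validate_response(schema, resp, body):
        """Validate a response body only when the call succeeded.

        A helper like this could live in the common rest client so that
        every service client method calls it right after the request.
        """
        status = int(resp['status'])
        if 200 <= status < 300:
            jsonschema.validate(body, schema)


    # Hypothetical schema covering only the basic, non-admin attributes.
    get_server_basic = {
        'type': 'object',
        'properties': {
            'server': {
                'type': 'object',
                'properties': {
                    'id': {'type': 'string'},
                    'status': {'type': 'string'},
                },
                'required': ['id', 'status'],
            },
        },
        'required': ['server'],
    }

    # Inside the servers client's get_server(), after the GET:
    # validate_response(get_server_basic, resp, body)

The admin-only attributes (OS-EXT-STS, OS-EXT-SRV-ATT, ...) would then be
checked with a separate schema applied only in the admin tests.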


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-12 Thread Yuriy Taraday
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo wrote:

> I'm not familiar with unix domain sockets at a low level, but I wonder
> if authentication could be achieved just with permissions (only users in
> group "neutron" or group "rootwrap" accessing this service).
>

It can be enforced, but it is not needed at all (see below).


> I find it an interesting alternative, to the other proposed solutions, but
> there are some challenges associated with this solution, which could make
> it more complicated:
>
> 1) Access control, file system permission based or token based,
>

If we pass the token to the calling process through a pipe bound to stdout,
it won't be intercepted so token-based authentication for further requests
is secure enough.

> 2) stdout/stderr/return encapsulation/forwarding to the caller,
>if we have a simple/fast RPC mechanism we can use, it's a matter
>of serializing a dictionary.
>

The RPC implementation in the multiprocessing module uses either xmlrpclib or
pickle-based RPC. It should be enough to pass the output of a command.
If we ever hit performance problems with passing long strings we can even
pass the opened pipe's descriptors over the UNIX socket to let the caller
interact with the spawned process directly.


> 3) client side implementation for 1 + 2.
>

Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning rootwrap daemon and interacting with it should be
hidden so that changes to services will be minimum.

> 4) It would need to accept new domain socket connections in green threads
> to avoid spawning a new process to handle a new connection.
>

We can do connection pooling if we ever run into performance problems with
connecting a new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid using fancy libraries (eventlet) because
of both the new fat requirement for oslo.rootwrap (it depends only on six
currently) and running more, possibly buggy and unsafe, code with elevated
privileges.
A simple threaded daemon should be enough given that it will handle the
needs of only one service process.
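
To make that concrete, here is a rough sketch (this is not oslo.rootwrap code;
the socket path, the key handling and the missing command-filter check are all
placeholders) of a simple threaded daemon plus client built on
multiprocessing.connection over a UNIX socket with an authkey:

    import shlex
    import subprocess
    import threading
    from multiprocessing import connection

    SOCKET_PATH = '/var/run/rootwrap-demo.sock'
    AUTHKEY = 'secret-passed-once-over-stdout-pipe'


    def handle(conn):
        # Daemon side (runs with elevated privileges).
        while True:
            try:
                cmd = conn.recv()          # a command line as a string
            except EOFError:
                return
            # A real daemon would check cmd against the rootwrap filters here.
            proc = subprocess.Popen(shlex.split(cmd),
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            conn.send({'stdout': out, 'stderr': err,
                       'returncode': proc.returncode})


    def serve():
        listener = connection.Listener(SOCKET_PATH, family='AF_UNIX',
                                       authkey=AUTHKEY)
        while True:
            threading.Thread(target=handle, args=(listener.accept(),)).start()


    def call(cmd):
        # Client side (inside the service process).
        conn = connection.Client(SOCKET_PATH, family='AF_UNIX', authkey=AUTHKEY)
        conn.send(cmd)
        return conn.recv()

multiprocessing.connection already does challenge/response authentication with
the authkey, so the key itself is never sent over the socket.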


> The advantages:
>* we wouldn't need to break the only-python-rule.
>* we don't need to rewrite/translate rootwrap.
>
> The disadvantages:
>   * it needs changes on the client side (neutron + other projects),
>

As I said, changes should be minimal.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest] Where to do response body validation

2014-03-12 Thread Valeriy Ponomaryov
I disagree with moving this logic to "tempest/services/*". The idea of these
modules is to assemble requests and return responses. Testing and verification
should be wrapped around them, in either a base class or the tests, depending
on the situation...

-- 
Kind Regards
Valeriy Ponomaryov


On Thu, Mar 13, 2014 at 6:55 AM, Christopher Yeoh  wrote:

> Hi,
>
> The new tempest body response validation is being added to individual
> testcases. See this as an example:
>
> https://review.openstack.org/#/c/78149
>
> After having a look at https://review.openstack.org/#/c/80174/
> I'm now thinking that perhaps we should be doing the response validation
> in the tempest/services/compute classes. And only check the
> response body if the status code is a success code (and then check that
> it is an appropriate success code).
>
> I think this will lead to fewer changes needed in the end as the
> response body checking will not need to be added to individual tests.
>
> There may be some complications with handling extensions, but I think
> they all implement backwards-compatible behaviour so it should be ok.
>
> Anyone have any thoughts about this alternative approach?
>
> Regards,
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] icehouse-3 release cross reference is added into www.xrefs.info

2014-03-12 Thread John Smith
The icehouse-3 release cross reference has been added to www.xrefs.info; check
it out at http://www.xrefs.info. Thx. xrefs.info admin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova API meeting

2014-03-12 Thread Christopher Yeoh
Hi,

Just a reminder that the first Nova API meeting is being held tomorrow
Friday UTC. In other timezones:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
AEDT 11:00 (Fri)
ACDT 10:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance]A question abount copy_from.

2014-03-12 Thread 王宏
Hi all.

When creating an image using the v1 API, we can upload the image file
indirectly from an external source using the x-glance-api-copy-from header.
But there is no copy_from parameter in the v2 API. I think this parameter has
perhaps been replaced by "locations". May I know the reason why the copy_from
parameter was removed from the v2 API? When using locations, if the external
source is destroyed we cannot get the image file anymore.
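
For reference, this is roughly how the copy_from flow looks today with the v1
API through python-glanceclient (just a sketch; the endpoint, token and image
URL are placeholders, and copy_from is the client-side name for the
x-glance-api-copy-from header as far as I understand):

    from glanceclient import Client

    glance = Client('1', endpoint='http://glance-host:9292', token='ADMIN_TOKEN')

    # Glance itself fetches the bits from the external URL and stores them
    # in its own backend, so the image survives if the source disappears.
    image = glance.images.create(name='cirros',
                                 disk_format='qcow2',
                                 container_format='bare',
                                 copy_from='http://example.com/cirros.qcow2')
    print(image.status)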

Best regards.
wanghong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][tempest] Where to do response body validation

2014-03-12 Thread Christopher Yeoh
Hi,

The new tempest body response validation is being added to individual
testcases. See this as an example:

https://review.openstack.org/#/c/78149

After having a look at https://review.openstack.org/#/c/80174/ 
I'm now thinking that perhaps we should be doing the response validation
in the tempest/services/compute classes. And only check the
response body if the status code is a success code (and then check that
it is an appropriate success code).

I think this will lead to fewer changes needed in the end as the
response body checking will not need to be added to individual tests.

There may be some complications with handling extensions, but I think
they all implement backwards-compatible behaviour so it should be ok.

Anyone have any thoughts about this alternative approach?

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] 5 unicode unit test failures when building Debian package

2014-03-12 Thread Thomas Goirand
Hi,

Since Havana, I've been ignoring the 5 unit test failures that I always
get. Though I think it'd be nice to have them fixed. The log file is
available over here:

https://icehouse.dev-debian.pkgs.enovance.com/job/keystone/59/console

Does anyone know what's going on? It'd be nice if I could solve these.

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Advanced Services Common requirements IRC meeting

2014-03-12 Thread Sumit Naiksatam
Hi,

This is a reminder - we will be having this meeting in
#openstack-meeting-3 on March 13th (Thursday) at 18:00 UTC. The
proposed agenda is as follows:

* Flavors/service-type framework
* Service insertion/chaining
* Group policy requirements
* Vendor plugins for L3 services

We can also decide the time/day/frequency of future meetings.

Meeting wiki: https://wiki.openstack.org/wiki/Meetings/AdvancedServices

Thanks,
~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-12 Thread Mohammad Banikazemi


Tom Fifield  wrote on 03/12/2014 10:51:54 PM:

> From: Tom Fifield 
> To: "OpenStack Development Mailing List (not for usage questions)"
> , Edgar Magana ,
> Date: 03/12/2014 10:59 PM
> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>
> On 13/03/14 13:43, Mohammad Banikazemi wrote:
> > Thanks for your response.
> >
> > It looks like the page you are referring to gets populated automatically
> > and I see a link already added to it for the new plugin. I also see a
> > file corresponding to the new plugin having been created and populated
> > with the plugin config options in the latest openstack-manuals cloned
> > from github.
> >
> > After talking to the docs people on #openstack-docs, now I know that
> > these files get created automatically and periodically. Any changes to
> > the docs should come through changes to the config file in the code
> > which will be automatically picked up at some point when the docs
> > scripts get executed.
>
> Just to clarify one point - the text comes from the code, in the oslo
> option registration's helptext, not from the configuration files in etc.
>

Thanks for clarifying this point and for the initial information as well.
Yes, by "config file in the code" I was referring to the config.py file in
our plugin (and a few other Neutron plugins I have seen) where the plugin
options and corresponding helptexts get registered by using register_opts()
from oslo.
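
For example, a minimal sketch (the option names here are made up, not our
plugin's actual options) of how such a config.py registers options whose help
text then ends up in the generated configuration reference:

    from oslo.config import cfg

    example_opts = [
        cfg.StrOpt('controller_ip',
                   default='127.0.0.1',
                   help='IP address of the example plugin controller. '
                        'This help string is what the docs scripts pick up.'),
        cfg.IntOpt('request_timeout',
                   default=30,
                   help='Timeout in seconds for requests to the controller.'),
    ]

    cfg.CONF.register_opts(example_opts, 'EXAMPLE_PLUGIN')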


> > It looks like there is nothing to be done on this front for adding the
> > docs for the new plugin. If that seems reasonable, I will close the bug
> > I had opened for the docs for our plugin.
> >
> > Thanks,
> >
> > -Mohammad
> >
> >
> >
> >
> >
> >
> > From: Edgar Magana 
> > To: Mohammad Banikazemi/Watson/IBM@IBMUS, "OpenStack Development
Mailing
> > List (not for usage questions)" ,
> > Date: 03/12/2014 06:10 PM
> > Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
> >
> >

> >
> >
> >
> > You should be able to add your plugin here:
> > http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html
> >
> > Thanks,
> >
> > Edgar
> >
> > From: Mohammad Banikazemi <...@us.ibm.com>
> > Date: Monday, March 10, 2014 2:40 PM
> > To: OpenStack List <openstack-dev@lists.openstack.org>
> > Cc: Edgar Magana <emagana@plumgrid.com>
> > Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
> >
> > Would like to know what to do for adding documentation for a new
plugin.
> > Can someone point me to the right place/process please.
> >
> > Thanks,
> >
> > Mohammad
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-12 Thread W Chan
   - I can write a method in base test to start local executor.  I will do
   that as a separate bp.
   - After the engine is made standalone, the API will communicate with the
   engine and the engine with the executor via the oslo.messaging transport.
   This means that for the "local" option, we need to start all three
   components (API, engine, and executor) in the same process.  If the long
   term goal as you stated above is to use separate launchers for these
   components, this means that the API launcher needs to duplicate all the
   logic to launch the engine and the executor. Hence, my proposal here is to
   move the logic to launch the components into a common module and either
   have a single generic launch script that launches specific components based
   on the CLI options or have separate launch scripts that reference the
   appropriate launch function from the common module.
   - The RPC client/server in oslo.messaging does not determine the
   transport.  The transport is determined via oslo.config and then given
   explicitly to the RPC client/server.
   https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
   and
   https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
   are examples for the client and server respectively.  The in-process Queue
   is instantiated within this transport object from the fake driver.  For the
   "local" option, all three components need to share the same transport in
   order to have the Queue in scope. Thus, we will need some method to have
   this transport object visible to all three components and hence my proposal
   to use a global variable and a factory method (a rough sketch follows below).
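
Here is a rough sketch of what I mean by the factory method and shared
transport (module, topic and function names here are just illustrative, not
the actual Mistral code):

    from oslo.config import cfg
    from oslo import messaging

    _TRANSPORT = None


    def get_transport():
        # With the fake driver the in-process queue lives inside this
        # transport object, so for the "local" option the API, engine and
        # executor must all share the same instance.
        global _TRANSPORT
        if _TRANSPORT is None:
            _TRANSPORT = messaging.get_transport(cfg.CONF)
        return _TRANSPORT


    def make_engine_client():
        # Used by the API process to talk to the engine over RPC.
        target = messaging.Target(topic='mistral_engine')
        return messaging.RPCClient(get_transport(), target)


    def make_executor_server(endpoint):
        # Used by the executor to listen on its own topic.
        target = messaging.Target(topic='mistral_executor', server='executor-1')
        return messaging.get_rpc_server(get_transport(), target, [endpoint],
                                        executor='blocking')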



On Tue, Mar 11, 2014 at 10:34 PM, Renat Akhmerov wrote:

>
> On 12 Mar 2014, at 06:37, W Chan  wrote:
>
> Here're the proposed changes.
> 1) Rewrite the launch script to be more generic which contains option to
> launch all components (i.e. API, engine, executor) on the same process but
> over separate threads or launch each individually.
>
>
> You mentioned test_executor.py so I think it would make sense first to
> refactor the code in there related with acquiring transport and launching
> executor. My suggestions are:
>
>- In test base class (mistral.tests.base.BaseTest) create the new
>method *start_local_executor()* that would deal with getting a fake
>driver inside and all that stuff. This would be enough for tests where we
>need to run engine and check something. start_local_executor() can be just
>a part of setUp() method for such tests.
>- As for the launch script I have the following thoughts:
>   - Long-term launch scripts should be different for all API, engine
>   and executor. Now API and engine start within the same process but it's
>   just a temporary solution.
>   - Launch script for engine (which is the same as API's for now)
>   should have an option *--use-local-executor* to be able to run an
>   executor along with engine itself within the same process.
>
>
> 2) Move transport to a global variables, similar to global _engine and
> then shared by the different component.
>
>
> Not sure why we need it. Can you please explain more detailed here? The
> better way would be to initialize engine and executor with transport when
> we create them. If our current structure doesn't allow this easily we
> should discuss it and change it.
>
> In mistral.engine.engine.py we now have:
>
>  def load_engine():
>      global _engine
>      module_name = cfg.CONF.engine.engine
>      module = importutils.import_module(module_name)
>      _engine = module.get_engine()
>
> As an option we could have the code that loads engine in engine launch
> script (once we decouple it from API process) so that when we call
> get_engine() we could pass in all needed configuration parameters like
> transport.
>
> 3) Modified the engine and the executor to use a factory method to get the
> global transport
>
>
> If we made a decision on #2 we won't need it.
>
>
> A side note: when we discuss things like that I really miss DI container :)
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-12 Thread Fei Long Wang
+1 !!!

Thanks & Best regards,
Fei Long Wang (王飞龙)
-
Core of Glance and Marconi
IBM Cloud OpenStack Platform
Tel: 8610-82450513 | T/L: 905-0513
Email: flw...@cn.ibm.com
China Systems & Technology Laboratory in Beijing
-




From:   Mark Washenberger 
To: OpenStack Development Mailing List
,
Date:   03/13/2014 10:24 AM
Subject:[openstack-dev] [Glance] Nominating Arnaud Legendre for Glance
Core



Hi folks,

I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
cycle his reviews have been consistently high quality and I feel confident
in his ability to assess the design of new features and the overall
direction for Glance.

If anyone has any concerns, please share them with me. If I don't hear any,
I'll make the membership change official in about a week.

Thanks for your consideration. And thanks for all your hard work, Arnaud!

markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-12 Thread Damon Wang
Good work on the documentation, it's helpful.

Damon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-12 Thread Tom Fifield

On 13/03/14 13:43, Mohammad Banikazemi wrote:

Thanks for your response.

It looks like the page you are referring to gets populated automatically
and I see a link already added to it for the new plugin. I also see a
file corresponding to the new plugin having been created and populated
with the plugin config options in the latest openstack-manuals cloned
from github.

After talking to the docs people on #openstack-docs, now I know that
these files get created automatically and periodically. Any changes to
the docs should come through changes to the config file in the code
which will be automatically picked up at some point when the docs
scripts get executed.


Just to clarify one point - the text comes from the code, in the oslo 
option registration's helptext, not from the configuration files in etc.



It looks like there is nothing to be done on this front for adding the
docs for the new plugin. If that seems reasonable, I will close the bug
I had opened for the docs for our plugin.

Thanks,

-Mohammad






From: Edgar Magana 
To: Mohammad Banikazemi/Watson/IBM@IBMUS, "OpenStack Development Mailing
List (not for usage questions)" ,
Date: 03/12/2014 06:10 PM
Subject: Re: [openstack-dev] [Neutron] Docs for new plugins





You should be able to add your plugin here:
http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html

Thanks,

Edgar

From: Mohammad Banikazemi <...@us.ibm.com>
Date: Monday, March 10, 2014 2:40 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Cc: Edgar Magana <emagana@plumgrid.com>
Subject: Re: [openstack-dev] [Neutron] Docs for new plugins

Would like to know what to do for adding documentation for a new plugin.
Can someone point me to the right place/process please.

Thanks,

Mohammad



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-12 Thread Mohammad Banikazemi

Thanks for your response.

It looks like the page you are referring to gets populated automatically
and I see a link already added to it for the new plugin. I also see a file
corresponding to the new plugin having been created and populated with the
plugin config options in the latest openstack-manuals cloned from github.

After talking to the docs people on #openstack-docs, now I know that these
files get created automatically and periodically. Any changes to the docs
should come through changes to the config file in the code which will be
automatically picked up at some point when the docs scripts get executed.

It looks like there is nothing to be done on this front for adding the docs
for the new plugin. If that seems reasonable, I will close the bug I had
opened for the docs for our plugin.

Thanks,

-Mohammad







From:   Edgar Magana 
To: Mohammad Banikazemi/Watson/IBM@IBMUS, "OpenStack Development
Mailing List (not for usage questions)"
,
Date:   03/12/2014 06:10 PM
Subject:Re: [openstack-dev] [Neutron] Docs for new plugins



You should be able to add your plugin here:
http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html

Thanks,

Edgar

From: Mohammad Banikazemi 
Date: Monday, March 10, 2014 2:40 PM
To: OpenStack List 
Cc: Edgar Magana 
Subject: Re: [openstack-dev] [Neutron] Docs for new plugins



Would like to know what to do for adding documentation for a new plugin.
Can someone point me to the right place/process please.

Thanks,

Mohammad

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-12 Thread Mark Washenberger
Hi folks,

I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
cycle his reviews have been consistently high quality and I feel confident
in his ability to assess the design of new features and the overall
direction for Glance.

If anyone has any concerns, please share them with me. If I don't hear any,
I'll make the membership change official in about a week.

Thanks for your consideration. And thanks for all your hard work, Arnaud!

markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-12 Thread Joshua Harlow
So taskflow has tasks, which seems comparable to actions?

I guess I should get tired of asking but why recreate the same stuff ;)

The questions listed:

- Does an action need to have a revert() method along with a run() method?
- How does an action expose errors occurring during its work?

- In what form does an action return a result?


And more @ https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign

And quite a few others that haven't been mentioned (how does an action
retry? How does an action report partial progress? What's the
intertask/state persistence mechanism?) have been worked on by the
taskflow team for a while now...

https://github.com/openstack/taskflow/blob/master/taskflow/task.py#L31
(and others...)
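
For example, a minimal taskflow task (names made up) showing the
execute/revert pairing that answers the revert() question above:

    from taskflow import task


    class CreateServer(task.Task):
        # The return value of execute() is published under this name so
        # later tasks in the flow can consume it.
        default_provides = 'server_id'

        def execute(self, flavor):
            # Would call the compute API here; errors are just exceptions.
            return 'fake-server-id-for-%s' % flavor

        def revert(self, flavor, result=None, **kwargs):
            # Called when this task (or a later one in the flow) fails;
            # 'result' is whatever execute() returned, if it ran at all.
            if result is not None:
                print('cleaning up server %s' % result)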

Anyways, I know mistral is still POC/pilot/prototype... but seems like
more duplicated work that could just be avoided ;)

-Josh

-Original Message-
From: Renat Akhmerov 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Tuesday, March 11, 2014 at 11:32 PM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [Mistral] Actions design BP

>Team,
>
>I started summarizing all the thoughts and ideas that we've been
>discussing for a while regarding actions. The main driver for this work
>is that the system keeps evolving and we still don't have a comprehensive
>understanding of that part. Additionally, we keep getting a lot of
>requests and questions from our potential users which are related to
>actions ('will they be extensible?', 'will they have a dry-run feature?',
>'what are the ways to configure and group them?' and so on and so forth).
>So although we're still in a Pilot phase we need to start this work in
>parallel. Even now the lack of a solid understanding of it creates a lot of
>problems in pilot development.
>
>I created a BP at launchpad [0] which has a reference to a detailed
>specification [1]. It's still in progress but you could already leave
>your early feedback so that I don't go in a wrong direction too far.
>
>The highest priority now is still finishing the pilot so we shouldn't
>start implementing everything described in the BP right now. However, some of
>the things have to be adjusted asap (like the Action interface and the main
>implementation principles).
>
>[0]: 
>https://blueprints.launchpad.net/mistral/+spec/mistral-actions-design
>[1]: https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
>
>Renat Akhmerov
>@ Mirantis Inc.
>
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 14-00 UTC

2014-03-12 Thread Samuel Bercovici
Hi Eugene,

I am with Evgeny on a business trip so we will not be able to join this time.
I have not seen any progress on the model side. Did I miss anything?
Will look for the meeting summary

Regards,
-Sam.


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, March 12, 2014 10:21 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 14-00 UTC

Hi neutron and lbaas folks,

Let's keep our regular meeting on Thursday, at 14-00 UTC at #openstack-meeting

We'll update on current status and continue object model discussion.
We have many new folks that have recently shown interest in the lbaas project 
and are asking for a mini summit. I think it would be helpful for everyone 
interested in lbaas to join the meeting.

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-12 Thread Andronidis Anastasios
Ok, thank you very much!

Anastasis

On 13 Mar 2014, at 1:58 a.m., Davanum Srinivas  wrote:

> Andronidis,
> 
> not sure, we can ask others on the irc meeting tomorrow.
> 
> Please answer the questions on the template, and if you see the last
> one is about links to your proposal on the openstack wiki.
> 
> On Wed, Mar 12, 2014 at 8:40 PM, Andronidis Anastasios
>  wrote:
>> Hello everyone,
>> 
>> I am a student and I can not see "Connections" anywhere. I also tried to 
>> re-loging, but still nothing. Is it sure that this "Connections" link exists 
>> in students too?
>> 
>> I also have a second question, concerning the template on google-melange. Do 
>> we have to just answer the questions on the template? Or shall we also paste 
>> our proposal that we wrote on the openstack wiki?
>> 
>> Kindly,
>> Anastasis
>> 
>> On 12 Μαρ 2014, at 10:46 μ.μ., Sriram Subramanian  
>> wrote:
>> 
>>> Victoria,
>>> 
>>> When you click "My Dashboard" on the left hand side, you will see 
>>> Connections, Proposals etc on your right, in the dashboard. Right below 
>>> "Connections", there are two links in smaller font, one which is the link 
>>> to Connect (circled in blue in the attached snapshot).
>>> If you tried right after creating your profile, try logging out and in. 
>>> When I created the profile, I remember having some issues around accessing 
>>> profile (not the dashboard, but entire profile).
>>> 
>>> thanks
>>> -Sriram
>>> 
>>> 
>>> On Wed, Mar 12, 2014 at 1:32 PM, Victoria Martínez de la Cruz 
>>>  wrote:
>>> Hi,
>>> 
>>> Thanks for working on the template, it sure ease things for students.
>>> 
>>> I can't find the "Connect with organizations" link, does anyone have the 
>>> same problem?
>>> 
>>> I confirm my assistance to tomorrow's meeting, thanks for organizing it! +1
>>> 
>>> Cheers,
>>> 
>>> Victoria
>>> 
>>> 
>>> 
>>> 2014-03-11 14:29 GMT-03:00 Davanum Srinivas :
>>> 
>>> Hi,
>>> 
>>> Mentors:
>>> * Please click on "My Dashboard" then "Connect with organizations" and
>>> request a connection as a mentor (on the GSoC web site -
>>> http://www.google-melange.com/)
>>> 
>>> Students:
>>> * Please see the Application template you will need to fill in on the GSoC 
>>> site.
>>>  http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
>>> * Please click on "My Dashboard" then "Connect with organizations" and
>>> request a connection
>>> 
>>> Both Mentors and Students:
>>> Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
>>> UTC for about 30 mins to meet and greet since all application deadline
>>> is next week. If this time is not convenient, please send me a note
>>> and i'll arrange for another time say on friday as well.
>>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30
>>> 
>>> We need to get an idea of how many slots we need to apply for based on
>>> really strong applications with properly fleshed out project ideas and
>>> mentor support. Hoping the meeting on IRC will nudge the students and
>>> mentors work towards that goal.
>>> 
>>> Thanks,
>>> dims
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> 
>>> --
>>> Thanks,
>>> -Sriram
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Davanum Srinivas :: http://davanum.wordpress.com
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-12 Thread Davanum Srinivas
Andronidis,

not sure, we can ask others on the irc meeting tomorrow.

Please answer the questions on the template, and if you see the last
one is about links to your proposal on the openstack wiki.

On Wed, Mar 12, 2014 at 8:40 PM, Andronidis Anastasios
 wrote:
> Hello everyone,
>
> I am a student and I can not see "Connections" anywhere. I also tried to 
> re-loging, but still nothing. Is it sure that this "Connections" link exists 
> in students too?
>
> I also have a second question, concerning the template on google-melange. Do 
> we have to just answer the questions on the template? Or shall we also paste 
> our proposal that we wrote on the openstack wiki?
>
> Kindly,
> Anastasis
>
> On 12 Μαρ 2014, at 10:46 μ.μ., Sriram Subramanian  
> wrote:
>
>> Victoria,
>>
>> When you click "My Dashboard" on the left hand side, you will see 
>> Connections, Proposals etc on your right, in the dashboard. Right below 
>> "Connections", there are two links in smaller font, one which is the link to 
>> Connect (circled in blue in the attached snapshot).
>> If you tried right after creating your profile, try logging out and in. When 
>> I created the profile, I remember having some issues around accessing 
>> profile (not the dashboard, but entire profile).
>>
>> thanks
>> -Sriram
>>
>>
>> On Wed, Mar 12, 2014 at 1:32 PM, Victoria Martínez de la Cruz 
>>  wrote:
>> Hi,
>>
>> Thanks for working on the template, it sure ease things for students.
>>
>> I can't find the "Connect with organizations" link, does anyone have the 
>> same problem?
>>
>> I confirm my assistance to tomorrow's meeting, thanks for organizing it! +1
>>
>> Cheers,
>>
>> Victoria
>>
>>
>>
>> 2014-03-11 14:29 GMT-03:00 Davanum Srinivas :
>>
>> Hi,
>>
>> Mentors:
>> * Please click on "My Dashboard" then "Connect with organizations" and
>> request a connection as a mentor (on the GSoC web site -
>> http://www.google-melange.com/)
>>
>> Students:
>> * Please see the Application template you will need to fill in on the GSoC 
>> site.
>>   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
>> * Please click on "My Dashboard" then "Connect with organizations" and
>> request a connection
>>
>> Both Mentors and Students:
>> Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
>> UTC for about 30 mins to meet and greet since all application deadline
>> is next week. If this time is not convenient, please send me a note
>> and i'll arrange for another time say on friday as well.
>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30
>>
>> We need to get an idea of how many slots we need to apply for based on
>> really strong applications with properly fleshed out project ideas and
>> mentor support. Hoping the meeting on IRC will nudge the students and
>> mentors work towards that goal.
>>
>> Thanks,
>> dims
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Thanks,
>> -Sriram
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-12 Thread Andronidis Anastasios
Hello everyone,

I am a student and I cannot see "Connections" anywhere. I also tried 
re-logging in, but still nothing. Are you sure that this "Connections" link 
exists for students too?

I also have a second question, concerning the template on google-melange. Do we 
have to just answer the questions on the template? Or shall we also paste our 
proposal that we wrote on the openstack wiki?

Kindly,
Anastasis

On 12 Mar 2014, at 10:46 p.m., Sriram Subramanian  wrote:

> Victoria,
> 
> When you click "My Dashboard" on the left hand side, you will see 
> Connections, Proposals etc on your right, in the dashboard. Right below 
> "Connections", there are two links in smaller font, one which is the link to 
> Connect (circled in blue in the attached snapshot). 
> If you tried right after creating your profile, try logging out and in. When 
> I created the profile, I remember having some issues around accessing profile 
> (not the dashboard, but entire profile). 
> 
> thanks
> -Sriram
> 
> 
> On Wed, Mar 12, 2014 at 1:32 PM, Victoria Martínez de la Cruz 
>  wrote:
> Hi,
> 
> Thanks for working on the template, it sure ease things for students.
> 
> I can't find the "Connect with organizations" link, does anyone have the same 
> problem? 
> 
> I confirm my assistance to tomorrow's meeting, thanks for organizing it! +1
> 
> Cheers,
> 
> Victoria
> 
> 
> 
> 2014-03-11 14:29 GMT-03:00 Davanum Srinivas :
> 
> Hi,
> 
> Mentors:
> * Please click on "My Dashboard" then "Connect with organizations" and
> request a connection as a mentor (on the GSoC web site -
> http://www.google-melange.com/)
> 
> Students:
> * Please see the Application template you will need to fill in on the GSoC 
> site.
>   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
> * Please click on "My Dashboard" then "Connect with organizations" and
> request a connection
> 
> Both Mentors and Students:
> Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
> UTC for about 30 mins to meet and greet since all application deadline
> is next week. If this time is not convenient, please send me a note
> and i'll arrange for another time say on friday as well.
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30
> 
> We need to get an idea of how many slots we need to apply for based on
> really strong applications with properly fleshed out project ideas and
> mentor support. Hoping the meeting on IRC will nudge the students and
> mentors work towards that goal.
> 
> Thanks,
> dims
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Thanks,
> -Sriram
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-12 Thread Arnaud Legendre
Hi Matt,

I totally agree with you; we have actually been discussing this a lot 
internally over the last few weeks.
. As a top priority, the driver MUST integrate with oslo.vmware. This will be 
achieved through this chain of patches [1]. We want these patches to be merged 
before other things.
I think we should stop introducing more complexity which makes the task of 
refactoring more and more complicated. The integration with oslo.vmware is not 
a refactoring but should be seen as a way to get a more "lightweight" version 
of the driver which will make the task of refactoring a bit easier.
. Then, we want to actually refactor; we have had several meetings to work out 
the best strategy to adopt going forward (and avoid reproducing the same 
mistakes).
The highest priority is spawn(): we need to make it modular and remove nested 
methods. This refactoring work should include the integration with the image 
handler framework [2] and introducing the notion of an image type object to 
avoid all these conditionals on image types inside the core logic.
. I would like to see you cores be "involved" in this design since you will 
be reviewing the code at some point. "Involved" here can be interpreted as 
reviewing the design and/or actually participating in the design discussions. 
I would like to get your POV on this.

Let me know if this approach makes sense.

Thanks,
Arnaud

[1] https://review.openstack.org/#/c/70175/
[2] https://review.openstack.org/#/c/33409/


- Original Message -
From: "Matt Riedemann" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, March 12, 2014 11:28:23 AM
Subject: Re: [openstack-dev] [nova] An analysis of code review in Nova



On 2/25/2014 6:36 AM, Matthew Booth wrote:
> I'm new to Nova. After some frustration with the review process,
> specifically in the VMware driver, I decided to try to visualise how the
> review process is working across Nova. To that end, I've created 2
> graphs, both attached to this mail.
>
> Both graphs show a nova directory tree pruned at the point that a
> directory contains less than 2% of total LOCs. Additionally, /tests and
> /locale are pruned as they make the resulting graph much busier without
> adding a great deal of useful information. The data for both graphs was
> generated from the most recent 1000 changes in gerrit on Monday 24th Feb
> 2014. This includes all pending changes, just over 500, and just under
> 500 recently merged changes.
>
> pending.svg shows the percentage of LOCs which have an outstanding
> change against them. This is one measure of how hard it is to write new
> code in Nova.
>
> merged.svg shows the average length of time between the
> ultimately-accepted version of a change being pushed and being approved.
>
> Note that there are inaccuracies in these graphs, but they should be
> mostly good. Details of generation here:
> https://github.com/mdbooth/heatmap. This code is obviously
> single-purpose, but is free for re-use if anyone feels so inclined.
>
> The first graph above (pending.svg) is the one I was most interested in,
> and shows exactly what I expected it to. Note the size of 'vmwareapi'.
> If you check out Nova master, 24% of the vmwareapi driver has an
> outstanding change against it. It is practically impossible to write new
> code in vmwareapi without stomping on an oustanding patch. Compare that
> to the libvirt driver at a much healthier 3%.
>
> The second graph (merged.svg) is an attempt to look at why that is.
> Again comparing the VMware driver with the libvirt we can see that at 12
> days, it takes much longer for a change to be approved in the VMware
> driver than in the libvirt driver. I suspect that this isn't the whole
> story, which is likely a combination of a much longer review time with
> very active development.
>
> What's the impact of this? As I said above, it obviously makes it very
> hard to come in as a new developer of the VMware driver when almost a
> quarter of it has been rewritten, but you can't see it. I am very new to
> this and others should validate my conclusions, but I also believe this
> is having a detrimental impact to code quality. Remember that the above
> 12 day approval is only the time for the final version to be approved.
> If a change goes through multiple versions, each of those also has an
> increased review period, meaning that the time from first submission to
> final inclusion is typically very, very protracted. The VMware driver
> has its fair share of high priority issues and functionality gaps, and
> the developers are motived to get it in the best possible shape as
> quickly as possible. However, it is my impression that when problems
> stem from structural issues, the developers choose to add metaphori

[openstack-dev] [QA][Tempest] Bug Day - Wed, 19th

2014-03-12 Thread Mauro S M Rodrigues

Hello everybody!

In the last QA meeting I stepped up and volunteered to organize 
another QA Bug Day.


This week wasn't a good one, so I thought to schedule it for next 
Wednesday (March 19th). If you think we need more time or something, 
please let me know.


== Actions ==
Basically I'm proposing the follow actions for the QA Bug Day, nothing 
much new here:


1st - Triage those 48 bugs in [1], this includes:
* Prioritize them;
* Mark any duplicates;
* Add tags and any other project that can be related to the bug so 
we can have the right eyes on it;
* Some cool extra stuff: comments with any suggestions, links to 
logstash queries so we can get a real sense of how critical the bug 
in question is;


2nd - Assign yourself to some of the unassigned bugs if possible so we 
can c(see [2])


3rd - Dedicate some time to review the 55 In Progress bugs (see [3]) 
AND/OR be in touch with the current assignee in case the bug hasn't had 
recent activity (see [4]) so we can put it back into the triage queue.


And depending on how things go, I would suggest not forgetting 
Grenade, which is also part of the QA Program, and extending this effort 
to it (see the Grenade references with the same indexes as Tempest's).


So that's pretty much it, I would like to hear any suggestion or opinion 
that you guys may have.



== Tempest references ==
[1] - 
https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - 
https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - 
https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress
[4] - 
https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated


== Grenade references ==
[1] - 
https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - 
https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - 
https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress
[4] - 
https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated



--
mauro(sr)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for meeting (tommorow) at 2000 UTC

2014-03-12 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-03-13!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow
Docs: http://docs.openstack.org/developer/taskflow
Blueprints: https://blueprints.launchpad.net/taskflow


## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Open reviews for 0.2!
- Progress on gathering initial glance and cinder status and next steps...
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-12 Thread Matt Riedemann



On 3/12/2014 6:32 PM, Dan Smith wrote:

I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this
decision, and what new steps we need to take to get this back in for Juno.


Jay's fix resolved the immediate problem that was reported by the user.
However, after realizing why the bug manifested itself and why it didn't
occur during our testing, all of the core members involved recommended a
revert as the least-risky course of action at this point. If it took
almost no time for that change to break a user that wasn't even using
the feature, we're fearful about what may crop up later.

We talked with the patch author (zhiyan) in IRC for a while after making
the decision to revert about what the path forward for Juno is. The
tl;dr as I recall is:

  1. Full Glance v2 API support merged
  2. Tests in tempest and nova that exercise Glance v2, and the new
 feature
  3. Push the feature patches back in

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Those are essentially the steps as I remember them too.  Sean changed 
the dependencies in the blueprints so the nova glance v2 blueprint is 
the root dependency, then multiple images and then the other download 
handler blueprints at the top.  I haven't checked but the blueprints 
should be marked as not complete (not sure what that would be now) and 
marked for next; the v2 glance root blueprint should be marked as high 
priority too so we get the proper focus when Juno opens up.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-12 Thread Dan Smith
> I'm confused as to why we arrived at the decision to revert the commits
> since Jay's patch was accepted. I'd like some details about this
> decision, and what new steps we need to take to get this back in for Juno.

Jay's fix resolved the immediate problem that was reported by the user.
However, after realizing why the bug manifested itself and why it didn't
occur during our testing, all of the core members involved recommended a
revert as the least-risky course of action at this point. If it took
almost no time for that change to break a user that wasn't even using
the feature, we're fearful about what may crop up later.

We talked with the patch author (zhiyan) in IRC for a while after making
the decision to revert about what the path forward for Juno is. The
tl;dr as I recall is:

 1. Full Glance v2 API support merged
 2. Tests in tempest and nova that exercise Glance v2, and the new
feature
 3. Push the feature patches back in

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-12 Thread Andrew Woodward
I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this decision,
and what new steps we need to take to get this back in for Juno.


On Wed, Mar 12, 2014 at 3:57 AM, Sean Dague  wrote:

> On 03/12/2014 05:51 AM, Daniel P. Berrange wrote:
> > On Tue, Mar 11, 2014 at 03:31:19PM -0500, Matt Riedemann wrote:
> >>
> >>
> >> On 3/11/2014 3:11 PM, Jay Pipes wrote:
> >>> On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:
> 
>  On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:
> > On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:
> >> On 03/07/2014 11:16 AM, Russell Bryant wrote:
> >>> On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
>  On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
> > I'd Like to request A FFE for the remaining patches in the
> Ephemeral
> > RBD image support chain
> >
> > https://review.openstack.org/#/c/59148/
> > https://review.openstack.org/#/c/59149/
> >
> > are still open after their dependency
> > https://review.openstack.org/#/c/33409/ was merged.
> >
> > These should be low risk as:
> > 1. We have been testing with this code in place.
> > 2. It's nearly all contained within the RBD driver.
> >
> > This is needed as it implements an essential functionality that
> has
> > been missing in the RBD driver and this will become the second
> release
> > it's been attempted to be merged into.
> 
>  Add me as a sponsor.
> >>>
> >>> OK, great.  That's two.
> >>>
> >>> We have a hard deadline of Tuesday to get these FFEs merged
> (regardless
> >>> of gate status).
> >>>
> >>
> >> As alt release manager, FFE approved based on Russell's approval.
> >>
> >> The merge deadline for Tuesday is the release meeting, not end of
> day.
> >> If it's not merged by the release meeting, it's dead, no exceptions.
> >
> > Both commits were merged, thanks a lot to everyone who helped land
> > this in Icehouse! Especially to Russel and Sean for approving the
> FFE,
> > and to Daniel, Michael, and Vish for reviewing the patches!
> >
> 
>  There was a bug reported today [1] that looks like a regression in
> this
>  new code, so we need people involved in this looking at it as soon as
>  possible because we have a proposed revert in case we need to yank it
>  out [2].
> 
>  [1] https://bugs.launchpad.net/nova/+bug/1291014
>  [2]
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z
> >>>
> >>> Note that I have identified the source of the problem and am pushing a
> >>> patch shortly with unit tests.
> >>>
> >>
> >> My concern is how much else where assumes nova is working with the
> >> glance v2 API because there was a nova blueprint [1] to make nova
> >> work with the glance V2 API but that never landed in Icehouse, so
> >> I'm worried about wack-a-mole type problems here, especially since
> >> there is no tempest coverage for testing multiple image location
> >> support via nova.
> >>
> >> [1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
> >
> > Does anyone understand how we can have missed this glance API compat
> > problem in gate and/or day-to-day development. Presumably the people
> > developing this feature were using a standard devstack environment
> > and so would have been relying on whatever is currently committed
> > in tree, and so not impacted by whatever blueprint did not land.
> > So why would it have worked for them and passed gate tests but then
> > fail in this way due to glance API changes ?
>
> It's a little complicated, and comes down to a few reasons.
>
> First, it's a client compatibility issue. Tempest doesn't test the
> clients (mostly) because the API is our unbreakable interface, not the
> clients (that being said, client compatibility should be something those
> teams strive for). The clients actually hide too much of the API, so
> testing through the clients won't give the Tempest API tests strict
> enough results.
>
> So direct testing of this wasn't expected.
>
> We do a ton of indirect testing as well. Where we call Nova and it calls
> Glance. Because no one made progress on -
> https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api, no one
> got to the point of realizing it was non trivial to enable on the gate
> side. So all the indirect calls are v1 still (
>
> http://logs.openstack.org/29/79329/1/check/check-tempest-dsvm-full/cea4ff0/logs/screen-g-api.txt.gz?level=INFO
>
> There is a third way we could have caught this, which is the scenario
> tests in tempest, which use the official clients. Probably for the same
> reasons as #2, those haven't been enabled on v2. Realistically the
> scenario tests probably wouldn't have caught this bre

Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-12 Thread Christopher Yeoh
On Wed, 12 Mar 2014 14:31:13 +
"Murray, Paul (HP Cloud Services)"  wrote:

> Reviewing this thread to come to a conclusion (for myself at
> least - and hopefully so I can document something so reviewers know
> why I did it)
> 
> For approach:
> 1. plugins should use stevedore with entry points (as stated by
>    Russell)
> 2. the plugins should be explicitly selected through configuration
> 
> For api stability:
> I'm not sure there was a consensus. Personally I would write a base
> class for the plugins and document in it that the interface is
> unstable. Sound good?

Even if you don't want to make any guarantees around API stability I'd
suggest still putting some versioning info in at the start. So at least
the various parts can detect and warn when they might be broken.
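
For illustration only, here is a minimal sketch of what that might look like
with stevedore plus an explicit list of enabled plugins; the namespace, class
and option names below are invented and are not anything that exists in Nova
today:

from stevedore import named

PLUGIN_API_VERSION = (1, 0)   # (major, minor) of the (unstable) interface


class ResourcePlugin(object):
    """Hypothetical base class for out-of-tree plugins.

    The interface is explicitly unstable; a plugin records the API
    version it was written against so loaders can detect mismatches.
    """
    api_version = PLUGIN_API_VERSION


def load_plugins(enabled_names):
    # Load only the plugins named in configuration, not everything
    # that happens to advertise the entry point.
    mgr = named.NamedExtensionManager(
        namespace='nova.resource_plugins',   # invented namespace
        names=enabled_names,
        invoke_on_load=True)
    for ext in mgr:
        if ext.obj.api_version[0] != PLUGIN_API_VERSION[0]:
            raise RuntimeError('plugin %s targets incompatible plugin API '
                               'version %s' % (ext.name, ext.obj.api_version))
    return mgr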

Chris

> 
> BTW: this is one of those things that could be put in a place to make
> and record decisions (like the gerrit idea for blueprints). But now I
> am referring to another thread
> [http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html
> ]
> 
> Paul.
> 
> 
> -Original Message-
> From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
> Sent: 04 March 2014 21:25
> To: Murray, Paul (HP Cloud Services)
> Cc: OpenStack Development Mailing List (not for usage questions);
> d...@danplanet.com Subject: Re: [openstack-dev] [Nova] What is the
> currently accepted way to do plugins
> 
> And sorry, as to your original problem, the loadables approach is
> kinda messy since only the classes that are loaded when *that* module
> is loaded are used (vs. explicitly specifying them in a config). You
> may get different results when the flow changes.
> 
> Either entry-points or config would give reliable results.
> 
> 
> On 03/04/2014 03:21 PM, Murray, Paul (HP Cloud Services) wrote:
> > In a chat with Dan Smith on IRC, he was suggesting that the
> > important thing was not to use class paths in the config file. I
> > can see that internal implementation should not be exposed in the
> > config files - that way the implementation can change without
> > impacting the nova users/operators.
> 
> There's plenty of easy ways to deal with that problem vs. entry
> points.
> 
> MyModule.get_my_plugin() ... which can point to anywhere in the
> module permanently.
> 
> Also, we don't have any of the headaches of merging setup.cfg
> sections (as we see with oslo.* integration).
> 
> > Sandy, I'm not sure I really get the security argument. Python
> > provides every means possible to inject code, not sure plugins are
> > so different. Certainly agree on choosing which plugins you want to
> > use though.
> 
> The concern is that any compromised part of the python eco-system can
> get auto-loaded with the entry-point mechanism. Let's say Nova
> auto-loads all modules with entry-points in the [foo] section. All I
> have to do is create a setup that has a [foo] section and my code is
> loaded. Explicit is better than implicit.
> 
> So, assuming we don't auto-load modules ... what does the entry-point
> approach buy us?
> 
> 
> > From: Russell Bryant [rbry...@redhat.com] We should be careful
> > though. We need to limit what we expose as external plug points,
> > even if we consider them unstable.  If we don't want it to be
> > public, it may not make sense for it to be a plugin interface at
> > all.
> 
> I'm not sure what the concern with introducing new extension points
> is? OpenStack is basically just a big bag of plugins. If it's
> optional, it's supposed to be a plugin (according to the design
> tenets).
> 
> 
> 
> > 
> > --
> > Russell Bryant
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Marconi] oslo.messaging on VMs

2014-03-12 Thread Dmitry Mescheryakov
Hey folks,

Just wanted to thank you all for the input, it is really valuable.
Indeed it seems like overall Marconi does what is needed, so I'll
experiment with it.

Thanks,

Dmitry

2014-03-07 0:16 GMT+04:00 Georgy Okrokvertskhov :
> As a result of this discussion, I think we need also involve Marconi  team
> to this discussion. (I am sorry for changing the Subject).
>
> I am not very familiar with the Marconi project details, but at first look
> it seems it can help to set up a separate MQ infrastructure for agent <->
> service communication.
>
> I don't have any specific design suggestions and I hope Marconi team will
> help us to find a right approach.
>
> It looks like the option using the oslo.messaging framework now has lower
> priority due to security reasons.
>
> Thanks
> Georgy
>
>
> On Thu, Mar 6, 2014 at 11:33 AM, Steven Dake  wrote:
>>
>> On 03/06/2014 10:24 AM, Daniel P. Berrange wrote:
>>>
>>> On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:

 Hello folks,

 A number of OpenStack and related projects have a need to perform
 operations inside VMs running on OpenStack. A natural solution would
 be an agent running inside the VM and performing tasks.

 One of the key questions here is how to communicate with the agent. An
 idea which was discussed some time ago is to use oslo.messaging for
 that. That is an RPC framework - what is needed. You can use different
 transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
 connectivity your OpenStack networking can provide. At the same time
 there is a number of things to consider, like networking, security,
 packaging, etc.

 So, messaging people, what is your opinion on that idea? I've already
 raised that question in the list [1], but seems like not everybody who
 has something to say participated. So I am resending with the
 different topic. For example, yesterday we started discussing security
 of the solution in the openstack-oslo channel. Doug Hellmann at the
 start raised two questions: is it possible to separate different
 tenants or applications with credentials and ACL so that they use
 different queues? My opinion that it is possible using RabbitMQ/Qpid
 management interface: for each application we can automatically create
 a new user with permission to access only her queues. Another question
 raised by Doug is how to mitigate a DOS attack coming from one tenant
 so that it does not affect another tenant. The thing is though
 different applications will use different queues, they are going to
 use a single broker.
>>>
>>> Looking at it from the security POV, I'd absolutely not want to
>>> have any tenant VMs connected to the message bus that openstack
>>> is using between its hosts. Even if you have security policies
>>> in place, the inherent architectural risk of such a design is
>>> just far too great. One small bug or misconfiguration and it
>>> opens the door to a guest owning the entire cloud infrastructure.
>>> Any channel between a guest and host should be isolated per guest,
>>> so there's no possibility of guest messages finding their way out
>>> to either the host or to other guests.
>>>
>>> If there was still a desire to use oslo.messaging, then at the
>>> very least you'd want a completely isolated message bus for guest
>>> comms, with no connection to the message bus used between hosts.
>>> Ideally the message bus would be separate per guest too, which
>>> means it ceases to be a bus really - just a point-to-point link
>>> between the virt host + guest OS that happens to use the oslo.messaging
>>> wire format.
>>>
>>> Regards,
>>> Daniel
>>
>> I agree and have raised this in the past.
>>
>> IMO oslo.messaging is a complete nonstarter for guest communication
>> because of security concerns.
>>
>> We do not want guests communicating on the same message bus as
>> infrastructure.  The response to that was "well just have all the guests
>> communicate on their own unique messaging server infrastructure".  The
>> downside of this is one guests activity could damage a different guest
>> because of a lack of isolation and the nature in which message buses work.
>> The only workable solution which ensures security is a unique message bus
>> per guest - which means a unique daemon per guest.  Surely there has to be a
>> better way.
>>
>> The idea of isolating guests on a user basis, but allowing them to all
>> exchange messages on one topic doesn't make logical sense to me.  I just
>> don't think its possible, unless somehow rpc delivery were changed to
>> deliver credentials enforced by the RPC server in addition to calling
>> messages.  Then some type of credential management would need to be done for
>> each guest in the infrastructure wishing to use the shared message bus.
>>
>> The requirements of oslo.messaging solution for a shared agent is that the
>> agent would only be able to listen a

Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-12 Thread Edgar Magana
You should be able to add your plugin here:
http://docs.openstack.org/havana/config-reference/content/networking-options
-plugins.html

Thanks,

Edgar

From:  Mohammad Banikazemi 
Date:  Monday, March 10, 2014 2:40 PM
To:  OpenStack List 
Cc:  Edgar Magana 
Subject:  Re: [openstack-dev] [Neutron] Docs for new plugins

Would like to know what to do for adding documentation for a new plugin. Can
someone point me to the right place/process please.

Thanks,

Mohammad


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-12 Thread Sriram Subramanian
Victoria,

When you click "My Dashboard" on the left hand side, you will see
Connections, Proposals etc on your right, in the dashboard. Right below
"Connections", there are two links in smaller font, one which is the link
to Connect (circled in blue in the attached snapshot).
If you tried right after creating your profile, try logging out and in.
When I created the profile, I remember having some issues around accessing
profile (not the dashboard, but entire profile).

thanks
-Sriram


On Wed, Mar 12, 2014 at 1:32 PM, Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> Hi,
>
> Thanks for working on the template, it sure eases things for students.
>
> I can't find the "Connect with organizations" link, does anyone have the
> same problem?
>
> I confirm my attendance at tomorrow's meeting, thanks for organizing it! +1
>
> Cheers,
>
> Victoria
>
>
>
> 2014-03-11 14:29 GMT-03:00 Davanum Srinivas :
>
> Hi,
>>
>> Mentors:
>> * Please click on "My Dashboard" then "Connect with organizations" and
>> request a connection as a mentor (on the GSoC web site -
>> http://www.google-melange.com/)
>>
>> Students:
>> * Please see the Application template you will need to fill in on the
>> GSoC site.
>>   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
>> * Please click on "My Dashboard" then "Connect with organizations" and
>> request a connection
>>
>> Both Mentors and Students:
>> Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
>> UTC for about 30 mins to meet and greet since all application deadline
>> is next week. If this time is not convenient, please send me a note
>> and i'll arrange for another time say on friday as well.
>>
>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30
>>
>> We need to get an idea of how many slots we need to apply for based on
>> really strong applications with properly fleshed out project ideas and
>> mentor support. Hoping the meeting on IRC will nudge the students and
>> mentors work towards that goal.
>>
>> Thanks,
>> dims
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,
-Sriram
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Multi-Node][ML2][MechanismDriver] How to load mechanism driver on compute nodes

2014-03-12 Thread Sławek Kapłoński
Hello,

Maybe I'm not an expert, but you should probably use an agent (like the
openvswitch agent) and send such messages from the neutron server to the
compute nodes via RPC. This is how I do such things (I'm using an openvswitch
agent which I have modified a little bit).
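
Purely as a sketch of the idea (the topic and method names here are invented,
and oslo.messaging is used only as an example transport, not an existing
neutron RPC interface): the mechanism driver runs only inside neutron-server,
and anything that must happen on a compute node is cast over RPC to the agent
running there, e.g.

from oslo import messaging
from oslo.config import cfg


def notify_agents_port_updated(port):
    # Fire-and-forget cast to every agent listening on a fanout topic;
    # the agents then do the node-local work.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='my-plugin-agent')   # invented topic
    client = messaging.RPCClient(transport, target)
    client.prepare(fanout=True).cast(
        {}, 'port_updated',
        port_id=port['id'], network_id=port['network_id'])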

--
Regards
Sławek Kapłoński

On Wednesday, 12 March 2014 14:27:01, Nader Lahouti wrote:
> Hi All,
> 
> I installed a multi-node openstack (using devstack) with ML2 as the core
> plugin. I need to perform a task when the update_port_pre/post_commit
> methods are called during installation of a VM.
> As I enabled q-svc and q-agt on the control node and only q-agt on the
> compute node, the compute node won't run neutron-server, so no mechanism
> driver is loaded.
> I see the port event on the compute node with the failure shown below.
>
> Is there any way to load the mechanism driver on compute nodes?
> 
> 2014-03-12 10:35:05.418 TRACE
> neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
> "/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py", line 516, in
> __iter__
> 
> 2014-03-12 10:35:05.418 TRACE
> neutron.plugins.openvswitch.agent.ovs_neutron_agent raise result
> 
> 2014-03-12 10:35:05.418 TRACE
> neutron.plugins.openvswitch.agent.ovs_neutron_agent RemoteError: Remote
> error: MechanismDriverError update_port_postcommit failed.
> 
> 2014-03-12 10:35:05.418 TRACE
> neutron.plugins.openvswitch.agent.ovs_neutron_agent [u'Traceback (most
> recent call last):\n', u'  File
> "/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py", line 438, in
> _process_data\n**args)\n', u'  File
> "/opt/stack/neutron/neutron/common/rpc.py", line 44, in dispatch\n
> neutron_ctxt, version, method, namespace, **kwargs)\n', u'  File
> "/opt/stack/neutron/neutron/openstack/common/rpc/dispatcher.py", line 172,
> in dispatch\nresult = getattr(proxyobj, method)(ctxt, **kwargs)\n', u'
> File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 192, in
> update_device_up\nq_const.PORT_STATUS_ACTIVE)\n', u'  File
> "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 681, in
> update_port_status\n
> self.mechanism_manager.update_port_postcommit(mech_context)\n', u'  File
> "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 395, in
> update_port_postcommit\nself._call_on_drivers("update_port_postcommit",
> context)\n', u'  File "/opt/stack/neutron/neutron/plugins/ml2/managers.py",
> line 167, in _call_on_drivers\nmethod=method_name\n',
> u'MechanismDriverError: update_port_postcommit failed.\n'].
> 
> 
> Regards,
> 
> Nader

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][Multi-Node][ML2][MechanismDriver] How to load mechanism driver on compute nodes

2014-03-12 Thread Nader Lahouti
Hi All,

I installed a multi-node openstack (using devstack) with ML2 as the core
plugin. I need to perform a task when the update_port_pre/post_commit
methods are called during installation of a VM.
As I enabled q-svc and q-agt on the control node and only q-agt on the
compute node, the compute node won't run neutron-server, so no mechanism
driver is loaded.
I see the port event on the compute node with the failure shown below.

Is there any way to load the mechanism driver on compute nodes?

2014-03-12 10:35:05.418 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
"/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py", line 516, in
__iter__

2014-03-12 10:35:05.418 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise result

2014-03-12 10:35:05.418 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent RemoteError: Remote
error: MechanismDriverError update_port_postcommit failed.

2014-03-12 10:35:05.418 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent [u'Traceback (most
recent call last):\n', u'  File
"/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py", line 438, in
_process_data\n**args)\n', u'  File
"/opt/stack/neutron/neutron/common/rpc.py", line 44, in dispatch\n
neutron_ctxt, version, method, namespace, **kwargs)\n', u'  File
"/opt/stack/neutron/neutron/openstack/common/rpc/dispatcher.py", line 172,
in dispatch\nresult = getattr(proxyobj, method)(ctxt, **kwargs)\n', u'
File "/opt/stack/neutron/neutron/plugins/ml2/rpc.py", line 192, in
update_device_up\nq_const.PORT_STATUS_ACTIVE)\n', u'  File
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 681, in
update_port_status\n
self.mechanism_manager.update_port_postcommit(mech_context)\n', u'  File
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 395, in
update_port_postcommit\nself._call_on_drivers("update_port_postcommit",
context)\n', u'  File "/opt/stack/neutron/neutron/plugins/ml2/managers.py",
line 167, in _call_on_drivers\nmethod=method_name\n',
u'MechanismDriverError: update_port_postcommit failed.\n'].


Regards,

Nader
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Mar 13 1800 UTC [savanna]

2014-03-12 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_March.2C_13

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140313T18

The main topic is "Do we need backward compat for renaming?"

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup "start: job failed to start"

2014-03-12 Thread Sukhdev Kapur
Hi Roey,

I made this change and have been running this fix on 4 different servers. I
believe this fix works.  Things are working very smoothly.

I think we need to incorporate this change into the devstack scripts or capture
it in the documentation so that it saves the next person some grief.

Thanks
-Sukhdev




On Tue, Mar 11, 2014 at 3:06 AM, Roey Chen  wrote:

>  Forwarding the answer to the relevant mailing lists:
>
>
>
> ---
>
>
>
> Hi,
>
>
>
> Hope this could help,
>
>
>
> I've encountered this issue myself not to long ago on Ubuntu 12.04 host,
>
> it didn't happen again after messing with the Kernel Semaphore Limits
> parameters [1]:
>
>
>
> Adding this [2] line to `/etc/sysctl.conf` seems to do the trick.
>
>
>
>
>
> - Roey
>
>
>
>
>
> [1] http://paste.openstack.org/show/73086/
>
> [2] http://paste.openstack.org/show/73082/
>
>
>
>
>
> *From:* Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
> *Sent:* Monday, March 10, 2014 5:56 PM
> *To:* Dane Leblanc (leblancd)
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> openstack-in...@lists.openstack.org; openstack...@lists.openstack.org
>
> *Subject:* Re: [OpenStack-Infra] tgt restart fails in Cinder startup
> "start: job failed to start"
>
>
>
> I see the same issue. This issue has crept in during the latest flurry of
> check-ins. I started noticing this issue a day or two before the Icehouse
> Feature Freeze deadline.
>
>
>
> I tried restarting tgt as well, but, it does not help.
>
>
>
> However, rebooting the VM helps clear it up.
>
>
>
> Has anybody else seen it as well? Does anybody have a solution for it?
>
>
>
> Thanks
>
> -Sukhdev
>
>
>
>
>
>
>
>
>
> On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) <
> lebla...@cisco.com> wrote:
>
> I don't know if anyone can give me some troubleshooting advice with this
> issue.
>
> I'm seeing an occasional problem whereby after several DevStack
> unstack.sh/stack.sh cycles, the tgt daemon (tgtd) fails to start during
> Cinder startup.  Here's a snippet from the stack.sh log:
>
> 2014-03-10 07:09:45.214 | Starting Cinder
> 2014-03-10 07:09:45.215 | + return 0
> 2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
> 2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
> 2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
> 2014-03-10 07:09:45.219 | + is_ubuntu
> 2014-03-10 07:09:45.220 | + [[ -z deb ]]
> 2014-03-10 07:09:45.221 | + '[' deb = deb ']'
> 2014-03-10 07:09:45.222 | + sudo service tgt restart
> 2014-03-10 07:09:45.223 | stop: Unknown instance:
> 2014-03-10 07:09:45.619 | start: Job failed to start
> jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | + exit_trap
> 2014-03-10 07:09:45.622 | + local r=1
> 2014-03-10 07:09:45.623 | ++ jobs -p
> 2014-03-10 07:09:45.624 | + jobs=
> 2014-03-10 07:09:45.625 | + [[ -n '' ]]
> 2014-03-10 07:09:45.626 | + exit 1
>
> I try to restart tgt manually, without success:
>
> jenkins@neutronpluginsci:~$ sudo service tgt restart
> stop: Unknown instance:
> start: Job failed to start
> jenkins@neutronpluginsci:~$ sudo tgtd
> librdmacm: couldn't read ABI version.
> librdmacm: assuming: 4
> CMA: unable to get RDMA device list
> (null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
> (null): fcoe_init(214) (null)
> (null): fcoe_create_interface(171) no interface specified.
> jenkins@neutronpluginsci:~$
>
> The config in /etc/tgt is:
>
> jenkins@neutronpluginsci:/etc/tgt$ ls -l
> total 8
> drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
> lrwxrwxrwx 1 root root   30 Mar 10 06:50 stack.d ->
> /opt/stack/data/cinder/volumes
> -rw-r--r-- 1 root root   58 Mar 10 07:07 targets.conf
> jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
> include /etc/tgt/conf.d/*.conf
> include /etc/tgt/stack.d/*
> jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
> jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
> jenkins@neutronpluginsci:/etc/tgt$
>
> I don't know if there's any missing Cinder config in my DevStack localrc
> files. Here's one that I'm using:
>
> MYSQL_PASSWORD=nova
> RABBIT_PASSWORD=nova
> SERVICE_TOKEN=nova
> SERVICE_PASSWORD=nova
> ADMIN_PASSWORD=nova
>
> ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
> enable_service mysql
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-l3
> enable_service q-dhcp
> enable_service q-meta
> enable_service q-lbaas
> enable_service neutron
> enable_service tempest
> VOLUME_BACKING_FILE_SIZE=2052M
> Q_PLUGIN=cisco
> declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
> declare -A
> Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
> NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
> PHYSICAL_NETWORK=physnet1
> OVS_PHYSICAL_BRIDGE=br-eth1
> TENANT_VLAN_RANGE=810:819
> ENABLE_TENANT_VLANS=True
> API_RATE_LIMIT=False
> VERBOSE=True
> DEBUG=True
> LOGFILE=/opt/stack/logs/stack.sh.log

Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-12 Thread Chris Armstrong
Hi Kevin,

The design of OS::Heat::AutoScalingGroup should not require explicit support 
for load balancers. The design is meant to allow you to create a resource that 
wraps up both a OS::Heat::Server and a PoolMember in a template and use it via 
a Stack resource.

(Note that Mike was talking about the new OS::Heat::AutoScalingGroup resource, 
not AWS::AutoScaling::AutoScalingGroup).

So, while I haven’t tested this case with PoolMember specifically, and there 
may still be bugs, no more feature implementation should be necessary (I hope).

--
Christopher Armstrong
IRC: radix



On March 12, 2014 at 1:52:53 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code changes 
as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing me 
from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested 
stack that includes a OS::Neutron::PoolMember?  Should I expect this to work?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy types

2014-03-12 Thread prabhakar Kudva

 Hi Tim,

Thanks for your comments. 
Would be happy to contribute to the proposal and code.
 
The existing code already reflects the thoughts below, and got me thinking
along these lines. Please correct me if I am wrong, as I am learning through
these discussions:
 
One part (reflected by the code in the "policy" directory) is the generic
"condition -> action" engine, which could take logic primitives and
(in the future) python functions, evaluate the conditions and
execute the action.  This portable core engine could be used for any kind of
policy enforcement (as by other OpenStack projects), such as data center
monitoring and repair, service level enforcement, compliance policies,
optimization (energy, performance), etc., at any level of the stack.  This
core engine seems to be a combination of logic reasoning/unification, python
function evaluation, and python code actions.
 
The second part (reflected by the code in "server") is the applications
for various purposes.  These could be project specific or task specific.
We could add a diverse set of examples.  The example I have worked
with seems closest to compliance (as in the net owner / vm owner check),
and we will add more.
 
Prabhakar
 
Date: Wed, 12 Mar 2014 12:33:35 -0700
From: thinri...@vmware.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress] Policy types

Hi Prabhakar,
Thanks for the feedback.  I'd be interested to hear what other policy types you 
have in mind.
To answer your questions...
We're planning on extending our policy language in such a way that you can use 
Python functions as conditions ("" in the grammar) in rules.  That's on 
my todo-list, but I didn't mention it yesterday as we were short on time.  There 
will be some syntactic restrictions so that we can properly execute those 
Python functions (i.e. we need to always be able to compute the inputs to the 
function).  I had thought it was just an implementation detail I hadn't gotten 
around to (all Datalog implementations I've seen have such things), but it 
sounds like it's worth writing up a proposal and sending it around before 
implementing.  If that's a pressing concern for you, let me know and I'll bump 
it up the stack (a little).  If you'd like, feel free to draft a proposal (or 
remind me to do it once in a while).
As for actions, I typically think of them as API calls to other OS components 
like Nova.  But they could just as easily be Python functions.  But I would 
want to avoid an action that changes Congress's internal data structures 
directly (e.g. adding a new policy statement).  Such actions have caused 
trouble in the past for policy languages (though for declarative programming 
languages like Prolog they are less problematic).  I don't think there's anyway 
we can stop people from creating such actions, but I think we should advocate 
against them.  
Tim
From: "prabhakar Kudva" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 12, 2014 11:34:04 AM
Subject: Re: [openstack-dev] [Congress] Policy types




Hi Tim, All,
 
I was in the discussion yesterday (kudva), and would like to start gradually
contributing to the code base.
 
So, this discussion below is based on my limited exploration of Congress
code and from running it. I am trying to implement some small pieces to familiarize myself.
Please view it as such. As I start adding code, I am sure, my thoughts will
be more evolved.
 
I agree with the three types you outline. I also agree that these will grow.
We are already thinking of expanding congress for various other types of
policies.  But those would be a manageable start.
 
Regarding the comment below: I was wondering if all conditions and actions
could be both:
1. python functions (for conditions, these evaluate to True or False)
2. policy primitives.
 
The advantage of 1 is that a condition is just executed and Python returns
True or False; for actions, python functions are executed in response to
conditions.
This limits the growth of policies and the need to add more primitives, and
makes it flexible (say, to use alarms, monitors, OS clients, nova actions,
etc.).
 
The advantage of 2 is the ability to use unification (as in unify.py) and do
some logic reduction.  This gives us the full strength of extensive and mature
logic reasoning and reduction methods.
 
One possibility is that the engine checks which one of the two it is and does
the appropriate evaluation for the condition and the action.
 

> There are drawbacks to this proposal as well.
> - We will have 3 separate policies that are conceptually very similar. As the
>   policies grow larger, it will become increasingly difficult to keep the
>   policies synchronized. This problem can be mitigated to some extent by
>   having all 3 share a library of policy statements that they all apply in
>   different ways (and such a library mechanism is already implemented).
> - As cloud services change their behavior, policies may need to be
>   re-written. For example, right now Nova does not consult Congress before c

Re: [openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode

2014-03-12 Thread Clint Byrum
Excerpts from Chris Jones's message of 2014-03-12 13:07:21 -0700:
> Hey
> 
> I wanted to throw out an idea that came to me while I was working on
> diagnosing some hardware issues in the Tripleo CD rack at the sprint last
> week.
> 
> Specifically, if a particular node has been dropped from automatic
> scheduling by the operator, I think it would be super useful to be able to
> still manually schedule the node. Examples might be that someone is
> diagnosing a hardware issue and wants to boot an image that has all their
> favourite diagnostic tools in it, or they might be booting an image they
> use for updating firmwares, etc (frankly, just being able to boot a
> generic, unmodified host OS on a node can be super useful if you're trying
> to crash cart the machine for something hardware related).
> 
> Any thoughts? :)
> 

+1 from me, as I've been in the exact same boat (perhaps the same piece
of hardware even. ;)

I imagine it as a nova scheduler hint that finds its way into Ironic
eventually.
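
Just to make that concrete, an entirely hypothetical sketch (neither the hint
name nor a scheduler filter honouring it exists today; the scheduler_hints
argument of python-novaclient is the only real piece here):

from novaclient.v1_1 import client

USERNAME = 'admin'                 # placeholders
PASSWORD = 'secret'
TENANT = 'admin'
AUTH_URL = 'http://keystone:5000/v2.0'
NODE_UUID = 'node-in-maintenance'  # the node pulled out of scheduling
DIAG_IMAGE = 'diag-image-uuid'     # image with the diagnostic tools
BAREMETAL_FLAVOR = 'bm-flavor-id'

nova = client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)

# A hypothetical admin-only filter would honour the hint and hand back
# exactly this node, even though it is in maintenance mode.
nova.servers.create(name='diagnostics',
                    image=DIAG_IMAGE,
                    flavor=BAREMETAL_FLAVOR,
                    scheduler_hints={'force_maintenance_node': NODE_UUID})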

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-12 Thread Victoria Martínez de la Cruz
Hi,

Thanks for working on the template, it sure eases things for students.

I can't find the "Connect with organizations" link, does anyone have the
same problem?

I confirm my attendance at tomorrow's meeting, thanks for organizing it! +1

Cheers,

Victoria



2014-03-11 14:29 GMT-03:00 Davanum Srinivas :

> Hi,
>
> Mentors:
> * Please click on "My Dashboard" then "Connect with organizations" and
> request a connection as a mentor (on the GSoC web site -
> http://www.google-melange.com/)
>
> Students:
> * Please see the Application template you will need to fill in on the GSoC
> site.
>   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
> * Please click on "My Dashboard" then "Connect with organizations" and
> request a connection
>
> Both Mentors and Students:
> Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
> UTC for about 30 mins to meet and greet since all application deadline
> is next week. If this time is not convenient, please send me a note
> and i'll arrange for another time say on friday as well.
>
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30
>
> We need to get an idea of how many slots we need to apply for based on
> really strong applications with properly fleshed out project ideas and
> mentor support. Hoping the meeting on IRC will nudge the students and
> mentors work towards that goal.
>
> Thanks,
> dims
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 14-00 UTC

2014-03-12 Thread Eugene Nikanorov
Hi neutron and lbaas folks,

Let's keep our regular meeting on Thursday, at 14-00 UTC at
#openstack-meeting

We'll update on current status and continue object model discussion.
We have many new folks that have recently shown interest in the lbaas
project and are asking for a mini summit. I think it would be helpful for everyone
interested in lbaas to join the meeting.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Optimizing ML2 <-> EOS Sync (#12)

2014-03-12 Thread Sukhdev Kapur
Can somebody explain to me the purpose of this email. Is there any action
required of me regarding this?

Thanks
-Sukhdev



On Wed, Mar 12, 2014 at 1:10 PM, openstack-gerrit
wrote:

> Thank you for contributing to openstack/neutron!
>
> openstack/neutron uses Gerrit for code review.
>
> Please visit http://wiki.openstack.org/GerritWorkflow and follow the
> instructions there to upload your change to Gerrit.
>
> —
> Reply to this email directly or view it on 
> GitHub
> .
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Tests depending on conf boolean values that are False in the gate

2014-03-12 Thread David Kranz
I have recently reviewed a few patches (around testing consoles) that 
add new tests whose execution depends on False config options or 
introduce new ones that are False by default. The result is that the new 
tests do not run in any job. Since we have a policy that there should 
only be code in tempest that we see actually running, it would be good 
to enable these features in devstack if possible, and change the default 
to be True if that makes sense.


I just wanted to clarify that the "must run" policy covers this case in 
addition to the other configurations such as different drivers, plugins, 
etc.
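
For anyone newer to this, the pattern being discussed looks roughly like the
sketch below (the class and option names are just an example in the style
tempest already uses, not a pointer at any specific patch):

import testtools

from tempest.api.compute import base
from tempest import config

CONF = config.CONF


class ConsoleOutputTestJSON(base.BaseV2ComputeTest):

    @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
                          'Console output feature is disabled.')
    def test_get_console_output(self):
        # Only runs in a job where devstack enables the feature and the
        # option is set to True; otherwise it silently never executes.
        pass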


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode

2014-03-12 Thread Chris Jones
Hey

I wanted to throw out an idea that came to me while I was working on
diagnosing some hardware issues in the Tripleo CD rack at the sprint last
week.

Specifically, if a particular node has been dropped from automatic
scheduling by the operator, I think it would be super useful to be able to
still manually schedule the node. Examples might be that someone is
diagnosing a hardware issue and wants to boot an image that has all their
favourite diagnostic tools in it, or they might be booting an image they
use for updating firmwares, etc (frankly, just being able to boot a
generic, unmodified host OS on a node can be super useful if you're trying
to crash cart the machine for something hardware related).

Any thoughts? :)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Data Integration]

2014-03-12 Thread Tim Hinrichs
Hi Rajdeep, 

This is a great problem to work on because it confronts one of the assumptions 
we're making in Congress: that cloud services can be represented as a 
collection of tables in a reasonable way. You're asking good questions here. 

More responses inline. 

Tim 


- Original Message -



From: "Rajdeep Dua"  
To: openstack-dev@lists.openstack.org 
Sent: Wednesday, March 12, 2014 11:54:28 AM 
Subject: [openstack-dev] [Congress][Data Integration] 

Need some guidance on how to convert nested types into flat tuples. 
Also should we reorder the tuple values in a particular sequence? 




Order of tuples doesn't matter. Order of columns (values) within a tuple 
doesn't really matter either, except that all tuples must use the same order 
and the policies we write must know which column is which. 




Thanks 
Rajdeep 

As an example i have shown networks and ports tuples with some nested types 

networks - tuple format 
--- 

keys (for reference) 

{'status','subnets', 
'name','test-network','provider:physical_network','admin_state_up', 
'tenant_id','provider:network_type','router:external', 
'shared',id,'provider:segmentation_id'} 

values 
--- 
('ACTIVE', ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'], 'test-network', None, 
True, 
'570fe78a1dc54cffa053bd802984ede2', 'gre', False, False, 
'240ff9df-df35-43ae-9df5-27fae87f2492', 4) 




Here we'd want to pull the List out and replace it with an ID. Then create 
another table that shows which subnets belong to the list with that ID. (You 
can think of the ID as a pointer to the list---in the C/C++ sense.) So 
something like... 

network( 'ACTIVE', 'ID1', 'test-network', None, True, 
'570fe78a1dc54cffa053bd802984ede2', 'gre', False, False, 
'240ff9df-df35-43ae-9df5-27fae87f2492', 4) 

element('ID1', '4cef03d0-1d02-40bb-8c99-2f442aac6ab0') 
element('ID1', ) 

The other thing to think about is whether we want 1 table with 10 columns or we 
want 10 tables with 2 columns each. In this example, we would have... 

network('net1') 
network.status('net1', 'ACTIVE' ) 
network.subnets('net1', 'ID1') 
network.name('net1', 'test-network') 
... 

The period is just another character in the tablename. Nothing fancy happening 
here. 

The ports example below would need a similar flattening. To handle 
dictionaries, I would use the dot-notation shown above. 
A single Neutron API call might populate several Congress tables. 
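
A rough, stand-alone sketch of that flattening (not Congress code; the helper
and the 'element' table name only loosely follow the example above):

import uuid


def flatten(table, obj_id, obj, tuples):
    # One nested dict becomes a set of narrow (table, id, value) tuples;
    # lists are replaced by a generated ID plus 'element' tuples.
    for col, val in obj.items():
        col_table = table + '.' + col
        if isinstance(val, list):
            list_id = uuid.uuid4().hex          # the "pointer" to the list
            tuples.append((col_table, obj_id, list_id))
            for item in val:
                tuples.append(('element', list_id, item))
        elif isinstance(val, dict):
            flatten(col_table, obj_id, val, tuples)
        else:
            tuples.append((col_table, obj_id, val))


network = {'status': 'ACTIVE',
           'subnets': ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'],
           'name': 'test-network'}
tuples = [('network', 'net1')]
flatten('network', 'net1', network, tuples)
# tuples now holds, among others:
#   ('network.status',  'net1', 'ACTIVE')
#   ('network.subnets', 'net1', '<generated list id>')
#   ('element', '<generated list id>', '4cef03d0-1d02-40bb-8c99-2f442aac6ab0')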

Tim 




ports - tuple format 
 
keys (for reference) 

{'status','binding:host_id', 'name', 'allowed_address_pairs', 
'admin_state_up', 'network_id', 
'tenant_id', 'extra_dhcp_opts': [], 
'binding:vif_type', 'device_owner', 
'binding:capabilities', 'mac_address', 
'fixed_ips' , 'id', 'security_groups', 
'device_id'} 

Values 

('ACTIVE', 'havana', '', [], True, '240ff9df-df35-43ae-9df5-27fae87f2492', 
'570fe78a1dc54cffa053bd802984ede2', [], 'ovs', 'network:router_interface', 
{'port_filter': True}, 'fa:16:3e:ab:90:df', [{'subnet_id': 
'4cef03d0-1d02-40bb-8c99-2f442aac6ab0', 'ip_address': '90.0.0.1'}], 
'0a2ce569-85a8-45ec-abb3-0d4b34ff69ba', [], 
'864e4acf-bf8e-4664-8cf7-ad5daa95681e') 



___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2bqaf%2FKm29ZfiqAKXxeo%3D%0A&m=A86YVKfBX5U3g6F7eNScJYjr6Qwjv4dyDyVxE9Im8g8%3D%0A&s=0345ab3711a58ec1ebcee08649f047826cec593f57e9843df0fec2f8cfb03b42
 






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-12 Thread Ben Nemec
 

On 2014-03-11 20:34, Joshua Harlow wrote: 

> https://status.github.com/messages [1] 
> 
> * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
> mitigations we have in place are proving effective in protecting us and we're 
> hopeful that we've got this one resolved.' 
> 
> If you were cloning from github.com and not http://git.openstack.org [2] then 
> you were likely seeing some of the DDoS attack in action.

Unfortunately I don't think novnc is in git.openstack.org because it's
not an OpenStack project. I wonder if we should investigate adopting it
(if the author(s) are amenable to that) since we're using the git
version of it. Maybe that's already been considered and I just don't
know about it. :-) 

-Ben 

> From: Sukhdev Kapur 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Tuesday, March 11, 2014 at 4:08 PM
> To: "Dane Leblanc (leblancd)" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> , "openstack-in...@lists.openstack.org" 
> 
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
> noVNC from github.com/kanaka 
> 
> I have noticed that even a clone of devstack has failed a few times within
> the last couple of hours - it was running fairly smoothly until now.
> 
> -Sukhdev 
> 
> On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  wrote:
> 
> [adding openstack-dev list as well ] 
> 
> I have noticed that this has started hitting my builds within the last few
> hours. I have noticed the exact same failures on almost 10 builds.
> Looks like something has happened within the last few hours - perhaps the load?
> 
> -Sukhdev 
> 
> On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  
> wrote: 
> 
> Apologies if this is the wrong audience for this question... 
> 
> I'm seeing intermittent failures running stack.sh whereby 'git clone 
> https://github.com/kanaka/noVNC.git [3] /opt/stack/noVNC' is returning 
> various errors. Below are 2 examples. 
> 
> Is this a known issue? Are there any localrc settings which might help here? 
> 
> Example 1: 
> 
> 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 
> 
> 2014-03-11 15:00:33.780 | + return 0 
> 
> 2014-03-11 15:00:33.781 | ++ trueorfalse False 
> 
> 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False 
> 
> 2014-03-11 15:00:33.783 | + '[' False = True ']' 
> 
> 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git [3] 
> /opt/stack/noVNC master 
> 
> 2014-03-11 15:00:33.786 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git 
> [3] 
> 
> 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.789 | + GIT_REF=master 
> 
> 2014-03-11 15:00:33.790 | ++ trueorfalse False False 
> 
> 2014-03-11 15:00:33.791 | + RECLONE=False 
> 
> 2014-03-11 15:00:33.792 | + [[ False = True ]] 
> 
> 2014-03-11 15:00:33.793 | + echo master 
> 
> 2014-03-11 15:00:33.794 | + egrep -q '^refs' 
> 
> 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]] 
> 
> 2014-03-11 15:00:33.796 | + [[ False = True ]] 
> 
> 2014-03-11 15:00:33.797 | + git_timed clone 
> https://github.com/kanaka/noVNC.git [3] /opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.798 | + local count=0 
> 
> 2014-03-11 15:00:33.799 | + local timeout=0 
> 
> 2014-03-11 15:00:33.801 | + [[ -n 0 ]] 
> 
> 2014-03-11 15:00:33.802 | + timeout=0 
> 
> 2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone 
> https://github.com/kanaka/noVNC.git [3] /opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'... 
> 
> 2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200 
> 
> 2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly 
> 
> 2014-03-11 15:03:13.697 | fatal: early EOF 
> 
> 2014-03-11 15:03:13.698 | fatal: index-pack failed 
> 
> 2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]] 
> 
> 2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone' 
> https://github.com/kanaka/noVNC.git [3] '/opt/stack/noVNC]' 
> 
> 2014-03-11 15:03:13.701 | + local exitcode=0 
> 
> 2014-03-11 15:03:13.702 | [Call Trace] 
> 
> 2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova 
> 
> 2014-03-11 15:03:13.705 | /var/lib/jenkins/devstack/lib/nova:618:git_clone 
> 
> 2014-03-11 15:03:13.706 | 
> /var/lib/jenkins/devstack/functions-common:543:git_timed 
> 
> 2014-03-11 15:03:13.707 | /var/lib/jenkins/devstack/functions-common:596:die 
> 
> 2014-03-11 15:03:13.708 | [ERROR] 
> /var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone 
> https://github.com/kanaka/noVNC.git [3] /opt/stack/noVNC] 
> 
> Example 2: 
> 
> 2014-03-11 14:12:58.472 | + is_service_enabled n-novnc
> 
> 2014-03-11 14:12:58.473 | + return 0
> 
> 2014-03-11 14:12:58.474 | ++ trueorfalse False
> 
> 2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False
> 
> 2014-03-11 14:12:58.476 | + '[' False = True ']'
> 
> 2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC
> 
> 2014-03-11 14:12:58.478 | + g

Re: [openstack-dev] [Congress] Policy types

2014-03-12 Thread Tim Hinrichs
Hi Prabhakar, 

Thanks for the feedback. I'd be interested to hear what other policy types you 
have in mind. 

To answer your questions... 

We're planning on extending our policy language in such a way that you can use 
Python functions as conditions ("" in the grammar) in rules. That's on my 
todo-list, but I didn't mention it yesterday as we were short on time. There will 
be some syntactic restrictions so that we can properly execute those Python 
functions (i.e. we need to always be able to compute the inputs to the 
function). I had thought it was just an implementation detail I hadn't gotten 
around to (all Datalog implementations I've seen have such things), but it 
sounds like it's worth writing up a proposal and sending it around before 
implementing. If that's a pressing concern for you, let me know and I'll bump 
it up the stack (a little). If you'd like, feel free to draft a proposal (or 
remind me to do it once in a while). 

As for actions, I typically think of them as API calls to other OS components 
like Nova. But they could just as easily be Python functions. But I would want 
to avoid an action that changes Congress's internal data structures directly 
(e.g. adding a new policy statement). Such actions have caused trouble in the 
past for policy languages (though for declarative programming languages like 
Prolog they are less problematic). I don't think there's any way we can stop 
people from creating such actions, but I think we should advocate against them. 
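
To make the distinction concrete, a tiny sketch in plain Python (this is not
Congress's actual interface; the state layout and the names are invented):

def vm_on_shared_network(state):
    # A "condition": a side-effect-free Python predicate over the cloud
    # state Congress has collected.  It only returns True or False.
    return any(server['network_id'] == net['id'] and net['shared']
               for server in state['servers']
               for net in state['networks'])


def stop_server(nova_client, server_id):
    # An "action": an API call out to another OpenStack service.  It
    # changes the cloud, never Congress's own policy or data structures.
    nova_client.servers.stop(server_id)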

Tim 

- Original Message -

From: "prabhakar Kudva"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Wednesday, March 12, 2014 11:34:04 AM 
Subject: Re: [openstack-dev] [Congress] Policy types 

Hi Tim, All, 

I was in the discussion yesterday (kudva), and would like to start gradually 
contributing to the code base. 

So, this discussion below is based on my limited exploration of Congress 
code and from running it. I am trying to implement some small pieces to familiarize myself. 
Please view it as such. As I start adding code, I am sure, my thoughts will 
be more evolved. 

I agree with the three types you outline. I also agree that these will grow. 
We are already thinking of expanding congress for various other types of 
policies. But those would be a manageable start. 

Regarding the comment below: I was wondering if all conditions and actions 
could be both: 
1. python functions (for conditions, these evaluate to True or False) 
2. policy primitives. 

The advantage of 1, is that it is just executed and a True or False returned 
by Python for conditions. For actions, python functions are executed to respond 
to conditions. 
This controls the growth of policies and adding more primitives, and makes it 
flexible (say 
to use alarms, monitors, os clients, nova actions etc). 

The advantage of 2, is the ability to use unification (as in unify.py) and do 
some logic reduction. This gives us the full strength of extensive and mature 
logic reasoning and reduction methods. 

One possibility is that the engine checks which one of the two it is and does 
the appropriate evaluation for the condition and the action. 


> There are drawbacks to this proposal as well.
> - We will have 3 separate policies that are conceptually very similar. As the
>   policies grow larger, it will become increasingly difficult to keep the
>   policies synchronized. This problem can be mitigated to some extent by
>   having all 3 share a library of policy statements that they all apply in
>   different ways (and such a library mechanism is already implemented).
> - As cloud services change their behavior, policies may need to be re-written.
>   For example, right now Nova does not consult Congress before creating a VM;
>   thus, to enforce policy surrounding VMs, the best we could do is write a
>   Condition-Action policy that adjusts VM configuration when it learns about
>   new VMs being created. If we later make Nova consult with Congress before
>   creating a VM, we need to write an Access-control policy that puts the
>   proper controls in place.

Thanks, 

Prabhakar Kudva 




Date: Wed, 12 Mar 2014 10:05:23 -0700 
From: thinri...@vmware.com 
To: openstack-dev@lists.openstack.org 
Subject: [openstack-dev] [Congress] Policy types 

Hi all, 

We started a discussion on IRC yesterday that I'd like to continue. The main 
question is what kind of policy does a Congress user actually write? I can see 
three options. The first two focus on actions (API calls that make changes to 
the state of the cloud) and the last focuses on just the cloud state. (By 
"state of the cloud" I mean all the information Congress can see about all the 
cloud services it is managing, e.g. all the information we can get through API 
calls to Nova, Neutron, Cinder, Heat, ...). 

1) Access Control (e.g. Linux, XACML, AD): which *actions* can be performed by 
other cloud services (for each state of the cloud) 
2) Condition Action: which *actions* Congress should execute (for each state of 
the cloud) 
3) Classifi

[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-03-12 Thread Carl Baldwin
Tomorrow's meeting will be at 1500 UTC in #openstack-meeting-3.  The
current agenda can be found at
https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

Watch out for your local daylight savings time shifts.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Error on running tox

2014-03-12 Thread Manas Kelshikar
Works ok if I directly run nosetests from the virtual environment or even
in the IDE. I see this error only when I run tox.

In any case, after looking at the specific test cases I can tell that the test
makes some ordering assumptions which may not be required. I will send out
a patch soon and we can move the conversation to the review.
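
(Without having seen the patch, the kind of fix meant here is simply comparing
results order-insensitively, along these lines:)

import unittest


class TaskListTest(unittest.TestCase):

    def test_task_names_regardless_of_order(self):
        # Pretend result from the service; its ordering is not guaranteed.
        tasks = [{'name': 'task2'}, {'name': 'task1'}]
        # A brittle test would assert the exact list:
        #     self.assertEqual(['task1', 'task2'],
        #                      [t['name'] for t in tasks])
        # Comparing sorted copies removes the ordering assumption:
        self.assertEqual(sorted(['task1', 'task2']),
                         sorted(t['name'] for t in tasks))


if __name__ == '__main__':
    unittest.main()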

Thanks!


On Wed, Mar 12, 2014 at 11:00 AM, Manas Kelshikar wrote:

> I pasted only for python 2.6 but exact same errors with 2.7. Also, I
> posted this question after I nuked my entire dev folder so this was being
> run on a new environment.
>
> /Manas
>
>
> On Wed, Mar 12, 2014 at 4:44 AM, Renat Akhmerov wrote:
>
>> I would just try to recreate virtual environments. We haven't been able
>> to reproduce this problem so far.
>>
>> Renat Akhmerov
>> @ Mirantis Inc.
>>
>>
>>
>> On 12 Mar 2014, at 16:32, Nikolay Makhotkin 
>> wrote:
>>
>> maybe something wrong with python2.6?
>>
>> .tox/py26/lib/python2.6/site-packages/mock.py", line 1201, in patched
>>
>>
>> what if you try it on py27?
>>
>>
>>
>> On Wed, Mar 12, 2014 at 10:08 AM, Renat Akhmerov 
>> wrote:
>>
>>> Ok. I might be related with oslo.messaging change that we merged in
>>> yesterday but I don't see at this point how exactly.
>>>
>>> Renat Akhmerov
>>> @ Mirantis Inc.
>>>
>>>
>>>
>>> On 12 Mar 2014, at 12:38, Manas Kelshikar  wrote:
>>>
>>> Yes it is 100% reproducible.
>>>
>>> Was hoping it was environmental i.e. missing some dependency etc. but
>>> since that does not seem to be the case I shall debug locally and report
>>> back.
>>>
>>> Thanks!
>>>
>>>
>>> On Tue, Mar 11, 2014 at 9:54 PM, Renat Akhmerov 
>>> wrote:
>>>
 Hm.. Interesting. CI wasn't able to reveal this for some reason.

 My first guess is that there's a race condition somewhere. Did you try
 to debug it? And is this error 100% repeatable?

 Renat Akhmerov
 @ Mirantis Inc.



 On 12 Mar 2014, at 11:18, Manas Kelshikar  wrote:

 I see this error when I run tox. I pulled down a latest copy of master
 and tried to setup the environment. Any ideas?

 See http://paste.openstack.org/show/73213/ for details. Any help is
 appreciated.


 Thanks,

 Manas
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Nikolay
>>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-12 Thread Davanum Srinivas
As was brought up on IRC, +1 to refactor/rebase the code and adopt
oslo.vmware in the process as well. The downside is a hit/rebase to
all the reviews in progress. I strongly believe this is the right time
to do this.

-- dims

On Wed, Mar 12, 2014 at 2:28 PM, Matt Riedemann
 wrote:
>
>
> On 2/25/2014 6:36 AM, Matthew Booth wrote:
>>
>> I'm new to Nova. After some frustration with the review process,
>> specifically in the VMware driver, I decided to try to visualise how the
>> review process is working across Nova. To that end, I've created 2
>> graphs, both attached to this mail.
>>
>> Both graphs show a nova directory tree pruned at the point that a
>> directory contains less than 2% of total LOCs. Additionally, /tests and
>> /locale are pruned as they make the resulting graph much busier without
>> adding a great deal of useful information. The data for both graphs was
>> generated from the most recent 1000 changes in gerrit on Monday 24th Feb
>> 2014. This includes all pending changes, just over 500, and just under
>> 500 recently merged changes.
>>
>> pending.svg shows the percentage of LOCs which have an outstanding
>> change against them. This is one measure of how hard it is to write new
>> code in Nova.
>>
>> merged.svg shows the average length of time between the
>> ultimately-accepted version of a change being pushed and being approved.
>>
>> Note that there are inaccuracies in these graphs, but they should be
>> mostly good. Details of generation here:
>> https://github.com/mdbooth/heatmap. This code is obviously
>> single-purpose, but is free for re-use if anyone feels so inclined.
>>
>> The first graph above (pending.svg) is the one I was most interested in,
>> and shows exactly what I expected it to. Note the size of 'vmwareapi'.
>> If you check out Nova master, 24% of the vmwareapi driver has an
>> outstanding change against it. It is practically impossible to write new
>> code in vmwareapi without stomping on an oustanding patch. Compare that
>> to the libvirt driver at a much healthier 3%.
>>
>> The second graph (merged.svg) is an attempt to look at why that is.
>> Again comparing the VMware driver with the libvirt we can see that at 12
>> days, it takes much longer for a change to be approved in the VMware
>> driver than in the libvirt driver. I suspect that this isn't the whole
>> story, which is likely a combination of a much longer review time with
>> very active development.
>>
>> What's the impact of this? As I said above, it obviously makes it very
>> hard to come in as a new developer of the VMware driver when almost a
>> quarter of it has been rewritten, but you can't see it. I am very new to
>> this and others should validate my conclusions, but I also believe this
>> is having a detrimental impact to code quality. Remember that the above
>> 12 day approval is only the time for the final version to be approved.
>> If a change goes through multiple versions, each of those also has an
>> increased review period, meaning that the time from first submission to
>> final inclusion is typically very, very protracted. The VMware driver
>> has its fair share of high priority issues and functionality gaps, and
>> the developers are motived to get it in the best possible shape as
>> quickly as possible. However, it is my impression that when problems
>> stem from structural issues, the developers choose to add metaphorical
>> gaffer tape rather than fix them, because fixing both creates a
>> dependency chain which pushes the user-visible fix months into the
>> future. In this respect the review process is dysfunctional, and is
>> actively detrimental to code quality.
>>
>> Unfortunately I'm not yet sufficiently familiar with the project to
>> offer a solution. A core reviewer who regularly looks at it is an
>> obvious fix. A less obvious fix might involve a process which allows
>> developers to work on a fork which is periodically merged, rather like
>> the kernel.
>>
>> Matt
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> When I originally read this I had some ideas in mind for a response
> regarding review latency with the vmware driver patches, but felt like
> anything I said, albeit what I consider honest, would sound bad/offensive in
> some way, and didn't want to convey that.
>
> But this came up in IRC today:
>
> https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor
>
> That spurred some discussion around this same topic and I think highlights
> one of the major issues, which is code quality and the design of the driver.
>
> For example, the driver's spawn method is huge and there are a lot of nested
> methods within it.  There are a lot of vmware patches and a lot of
> blueprints, and a lot of them touch spawn.  When I'm reviewing them, I'm
> looking for new conditions and checking to see if those are unit tested

[openstack-dev] [Congress][Data Integration]

2014-03-12 Thread Rajdeep Dua
Need some guidance on how to convert nested types into flat tuples.
Also should we reorder the tuple values in a particular sequence?

Thanks
Rajdeep

As an example, I have shown networks and ports tuples with some nested types.

networks - tuple format
---

keys (for reference)

{'status','subnets',
'name','test-network','provider:physical_network','admin_state_up',
'tenant_id','provider:network_type','router:external',
'shared',id,'provider:segmentation_id'}

values
---
('ACTIVE', ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'], 'test-network', None,
True,
'570fe78a1dc54cffa053bd802984ede2', 'gre', False, False,
'240ff9df-df35-43ae-9df5-27fae87f2492', 4)


ports - tuple format

keys (for reference)

{'status','binding:host_id', 'name', 'allowed_address_pairs',
'admin_state_up', 'network_id',
'tenant_id', 'extra_dhcp_opts': [],
'binding:vif_type', 'device_owner',
'binding:capabilities', 'mac_address',
'fixed_ips' , 'id', 'security_groups',
'device_id'}

Values

('ACTIVE', 'havana', '', [], True, '240ff9df-df35-43ae-9df5-27fae87f2492',
'570fe78a1dc54cffa053bd802984ede2', [], 'ovs', 'network:router_interface',
{'port_filter': True}, 'fa:16:3e:ab:90:df', [{'subnet_id':
'4cef03d0-1d02-40bb-8c99-2f442aac6ab0', 'ip_address': '90.0.0.1'}],
'0a2ce569-85a8-45ec-abb3-0d4b34ff69ba', [],
'864e4acf-bf8e-4664-8cf7-ad5daa95681e')
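
One way to handle both questions (a sketch only, not existing Congress code):
keep the scalar fields in the parent tuple in a fixed, schema-defined order,
and push each nested list or dict into its own child table keyed by the
parent's id so the rows can be joined back later. For the ports example above
that might look roughly like:

    # Hypothetical flattening helper: scalar values stay in the "ports" tuple,
    # nested collections become rows in child tables keyed by the port id.
    def flatten_port(port):
        port_row = (port['id'], port['status'], port['name'],
                    port['admin_state_up'], port['network_id'],
                    port['tenant_id'], port['device_owner'],
                    port['mac_address'], port['device_id'])
        fixed_ip_rows = [(port['id'], ip['subnet_id'], ip['ip_address'])
                         for ip in port['fixed_ips']]
        sec_group_rows = [(port['id'], sg) for sg in port['security_groups']]
        return port_row, fixed_ip_rows, sec_group_rows

Keeping an explicit key order (rather than iterating over dict keys) is what
makes the tuple positions stable from row to row.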
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-12 Thread Fox, Kevin M
I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code changes 
as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing me 
from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested 
stack that includes a OS::Neutron::PoolMember?  Should I expect this to work?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-12 Thread Bruce Montague

Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
the following list sketches some speculative OpenStack DR use cases. These use 
cases do not reflect any specific product behavior and span a wide spectrum. 
This list is not a proposal, it is intended primarily to solicit additional 
discussion. The first basic use case, (1), is described in a bit more detail 
than the others; many of the others are elaborations on this basic theme. 



* (1) [Single VM]

A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
Services) installed runs a key application and integral database. VSS can 
quiesce the app, database, filesystem, and I/O on demand and can be invoked 
external to the guest.

   a. The VM's volumes, including the boot volume, are replicated to a remote 
DR site (another OpenStack deployment).

   b. Some form of replicated VM or VM metadata exists at the remote site. This 
VM/description includes the replicated volumes. Some systems might use cold 
migration or some form of wide-area live VM migration to establish this remote 
site VM/description.

   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
volumes in an application-consistent state. This state is flushed all the way 
through to the remote volumes. As each remote volume reaches its 
application-consistent state, this is recognized in some fashion, perhaps by an 
in-band signal, and a snapshot of the volume is made at the remote site. Volume 
replication is re-enabled immediately following the snapshot. A backup is then 
made of the snapshot on the remote site. At the completion of this cycle, 
application-consistent volume snapshots and backups exist on the remote site.

   d.  When a disaster or firedrill happens, the replication network connection 
is cut. The remote site VM pre-created or defined so as to use the replicated 
volumes is then booted, using the latest application-consistent state of the 
replicated volumes. The entire VM environment (management accounts, networking, 
external firewalling, console access, etc..), similar to that of the primary, 
either needs to pre-exist in some fashion on the secondary or be created 
dynamically by the DR system. The booting VM either needs to attach to a 
virtual network environment similar to at the primary site or the VM needs to 
have boot code that can alter its network personality. Networking configuration 
may occur in conjunction with an update to DNS and other networking 
infrastructure. It is necessary for all required networking configuration  to 
be pre-specified or done automatically. No manual admin activity should be 
required. Environment requirements may be stored in a DR configuration or 
database associated with the replication. 

   e. In a firedrill or test, the virtual network environment at the remote 
site may be a "test bubble" isolated from the real network, with some provision 
for protected access (such as NAT). Automatic testing is necessary to verify 
that replication succeeded. These tests need to be configurable by the end-user 
and admin and integrated with DR orchestration.

   f. After the VM has booted and been operational, the network connection 
between the two sites is re-established. A replication connection between the 
replicated volumes is re-established, and the replicated volumes are re-synced, 
with the roles of primary and secondary reversed. (Ongoing replication in this 
configuration may occur, driven from the new primary.)

   g. A planned failback of the VM to the old primary proceeds similar to the 
failover from the old primary to the old replica, but with roles reversed and 
the process minimizing offline time and data loss.



* (2) [Core tenant/project infrastructure VMs] 

Twenty VMs power the core infrastructure of a group using a private cloud 
(OpenStack in their own datacenter). Not all VMs run Windows with VSS, some run 
Linux with some equivalent mechanism, such as qemu-ga, driving fsfreeze and 
signal scripts. These VMs are replicated to a remote OpenStack deployment, in a 
fashion similar to (1). Orchestration occurring at the remote site on failover 
is more complex (correct VM boot order is orchestrated, DHCP service is 
configured as expected, all IPs are made available and verified). An equivalent 
virtual network topology consisting of multiple networks or subnets might be 
pre-created or dynamically created at failover time. 

   a. Storage for all volumes of all VMs might be on a single storage backend 
(logically a single large volume containing many smaller sub-volumes, examples 
being a VMware datastore or Hyper-V CSV). This entire large volume might be 
replicated between similar storage backends at the primary and secondary site. 
A single replicated large volume thus replicates all the tenant VM's volumes. 
The DR system must trigger quiesce of all volumes to applica

Re: [openstack-dev] [Congress] Policy types

2014-03-12 Thread prabhakar Kudva
Hi Tim, All,
 
I was in the discussion yesterday (kudva), and would like to start gradually
contributing to the code base.
 
So, the discussion below is based on my limited exploration of the Congress
code and on running it. I am implementing some small pieces to familiarize
myself with it. Please view it as such. As I start adding code, I am sure my
thoughts will become more evolved.
 
I agree with the three types you outline. I also agree that these will grow.
We are already thinking of expanding congress for various other types of
policies.  But those would be a manageable start.
 
Regarding the comment below: I was wondering if all conditions and actions
could take both forms:
1. Python functions (for conditions, these evaluate to True or False)
2. policy primitives.
 
The advantage of 1 is that the function is simply executed, and Python returns
True or False for conditions; for actions, Python functions are executed to
respond to conditions. This limits the growth of policies and the need to add
more primitives, and makes it flexible (say, to use alarms, monitors, OS
clients, Nova actions, etc.).
 
The advantage of 2 is the ability to use unification (as in unify.py) and do
some logic reduction. This gives us the full strength of extensive and mature
logic reasoning and reduction methods.
 
One possibility is that the engine checks which of the two it is and performs
the appropriate evaluation for the condition or action, as in the sketch below.
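
A minimal sketch of that dispatch (the policy_engine interface here is an
assumption for illustration, not existing Congress code):

    # Hypothetical dispatcher between the two kinds of conditions above.
    def evaluate_condition(condition, cloud_state, policy_engine):
        if callable(condition):
            # Option 1: a plain Python function; just run it and coerce to bool.
            return bool(condition(cloud_state))
        # Option 2: a policy primitive; hand it to the logic engine
        # (unification/reduction, as in unify.py).
        return policy_engine.evaluate(condition, cloud_state)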
 

>There are drawbacks to this proposal as well. >- We will have 3 separate 
>policies that are conceptually very similar. As the policies grow larger, it 
>will become >increasingly difficult to keep the policies synchronized. This 
>problem can be mitigated to some extent by having >all 3 share a library of 
>policy statements that they all apply in different ways (and such a library 
>mechanism is >already implemented). >- As cloud services change their 
>behavior, policies may need to be re-written. For example, right now Nova does 
>>not consult Congress before creating a VM; thus, to enforce policy 
>surrounding VMs, the best we could do is >write a Condition-Action policy that 
>adjusts VM configuration when it learns about new VMs being created. If we 
>>later make Nova consult with Congress before creating a VM, we need to write 
>an Access-control policy that puts >the proper controls in place. 
Thanks,
 
Prabhakar Kudva



 Date: Wed, 12 Mar 2014 10:05:23 -0700
From: thinri...@vmware.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Congress] Policy types

Hi all,
We started a discussion on IRC yesterday that I'd like to continue.  The main 
question is what kind of policy does a Congress user actually write?  I can see 
three options.  The first two focus on actions (API calls that make changes to 
the state of the cloud) and the last focuses on just the cloud state.  (By 
"state of the cloud" I mean all the information Congress can see about all the 
cloud services it is managing, e.g. all the information we can get through API 
calls to Nova, Neutron, Cinder, Heat, ...).
1) Access Control (e.g. Linux, XACML, AD): which *actions* can be performed by 
other cloud services (for each state of the cloud)
2) Condition Action: which *actions* Congress should execute (for each state of 
the cloud)
3) Classification (currently supported in Congress): which *states* violate 
real-world policy.  [For those of you who have read docs/white-papers/etc., 
I'm using "Classification" in this note to mean the combination of the current 
"Classification" and "Action Description" policies.]
The important observation is that each of these policies could contain 
different information from each of the others.
- Access Control vs Condition Action.  The Access Control policy tells *other 
cloud services* which actions they are *allowed* to execute.  The Condition 
Action policy tells *Congress* which actions it *must* execute.  These policies 
differ because they constrain different sets of cloud services.
- Access Control vs. Classification.  The Access Control policy might permit 
some users to violate the Classification policy in some situations  (e.g. to 
fix violation A, we might need to cause violation B before eliminating both).   
These policies differ because a violation in one policy might not be a violation 
in the other.
- Classification vs. Condition Action.  The Classification policy might imply 
which actions *could* eliminate a given violation, but the Condition Action 
policy would dictate which of those actions *should* be executed (e.g. the 
Classification policy might tell us that disconnecting a network and deleting a 
VM would both eliminate a particular violation, but the Condition Action policy 
would tell us which to choose).  And the Condition Action policy need not 
eliminate all the violations present in the Classification policy.  Again these 
policies differ because a violation in one policy might not be a violation in 
the other. 
I'm proposing that for the first release of Congress we support all 3 of these 
polici

[openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-12 Thread Mike Spreitzer
Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a 
nested stack that includes a OS::Neutron::PoolMember?  Should I expect 
this to work?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday March 13th at 17:00UTC

2014-03-12 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 13th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
03:30 ACDT
18:00 CET
12:00 CDT
10:00 PDT

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-12 Thread Jason Dunsmore
On Wed, Mar 12 2014, John Dennis wrote:

> On 03/12/2014 01:22 PM, Zane Bitter wrote:
>> On 10/03/14 20:29, Robert Collins wrote:
>>> Which bits look raw? It should only show text/* attachments, non-text
>>> should be named but not dumped.
>> 
>> I was thinking of the:
>> 
>> pythonlogging:'': {{{
>> 
>> part.
>
> Yes, this is the primary culprit, it's output obscures most everything
> else concerning test results. Sometimes it's essential information.
> Therefore you should be able to control whether it's displayed or not.

The pythonlogging section didn't used to be so verbose, at least for
Heat's unit tests.  I submitted 3 bugs to clean up the test output a few
weeks ago:

https://bugs.launchpad.net/heat/+bug/1281226
https://bugs.launchpad.net/oslo/+bug/1280454
https://bugs.launchpad.net/oslo/+bug/1280435

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-12 Thread Matt Riedemann



On 2/25/2014 6:36 AM, Matthew Booth wrote:

I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
graphs, both attached to this mail.

Both graphs show a nova directory tree pruned at the point that a
directory contains less than 2% of total LOCs. Additionally, /tests and
/locale are pruned as they make the resulting graph much busier without
adding a great deal of useful information. The data for both graphs was
generated from the most recent 1000 changes in gerrit on Monday 24th Feb
2014. This includes all pending changes, just over 500, and just under
500 recently merged changes.

pending.svg shows the percentage of LOCs which have an outstanding
change against them. This is one measure of how hard it is to write new
code in Nova.

merged.svg shows the average length of time between the
ultimately-accepted version of a change being pushed and being approved.

Note that there are inaccuracies in these graphs, but they should be
mostly good. Details of generation here:
https://github.com/mdbooth/heatmap. This code is obviously
single-purpose, but is free for re-use if anyone feels so inclined.

The first graph above (pending.svg) is the one I was most interested in,
and shows exactly what I expected it to. Note the size of 'vmwareapi'.
If you check out Nova master, 24% of the vmwareapi driver has an
outstanding change against it. It is practically impossible to write new
code in vmwareapi without stomping on an oustanding patch. Compare that
to the libvirt driver at a much healthier 3%.

The second graph (merged.svg) is an attempt to look at why that is.
Again comparing the VMware driver with the libvirt we can see that at 12
days, it takes much longer for a change to be approved in the VMware
driver than in the libvirt driver. I suspect that this isn't the whole
story, which is likely a combination of a much longer review time with
very active development.

What's the impact of this? As I said above, it obviously makes it very
hard to come in as a new developer of the VMware driver when almost a
quarter of it has been rewritten, but you can't see it. I am very new to
this and others should validate my conclusions, but I also believe this
is having a detrimental impact to code quality. Remember that the above
12 day approval is only the time for the final version to be approved.
If a change goes through multiple versions, each of those also has an
increased review period, meaning that the time from first submission to
final inclusion is typically very, very protracted. The VMware driver
has its fair share of high priority issues and functionality gaps, and
the developers are motived to get it in the best possible shape as
quickly as possible. However, it is my impression that when problems
stem from structural issues, the developers choose to add metaphorical
gaffer tape rather than fix them, because fixing both creates a
dependency chain which pushes the user-visible fix months into the
future. In this respect the review process is dysfunctional, and is
actively detrimental to code quality.

Unfortunately I'm not yet sufficiently familiar with the project to
offer a solution. A core reviewer who regularly looks at it is an
obvious fix. A less obvious fix might involve a process which allows
developers to work on a fork which is periodically merged, rather like
the kernel.

Matt



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



When I originally read this I had some ideas in mind for a response 
regarding review latency with the vmware driver patches, but felt like 
anything I said, albeit what I consider honest, would sound 
bad/offensive in some way, and didn't want to convey that.


But this came up in IRC today:

https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor

That spurred some discussion around this same topic and I think 
highlights one of the major issues, which is code quality and the design 
of the driver.


For example, the driver's spawn method is huge and there are a lot of 
nested methods within it.  There are a lot of vmware patches and a lot 
of blueprints, and a lot of them touch spawn.  When I'm reviewing them, 
I'm looking for new conditions and checking to see if those are unit 
tested (positive/error cases) and a lot of the time I find it very 
difficult to tell if they are or not.  I think a lot of that has to do 
with how the vmware tests are scaffolded with fakes to simulate a 
vcenter backend, but it makes it very difficult if you're not extremely 
familiar with that code to know if something is covered or not.


And I've actually asked in bp reviews before, 'you have this new/changed 
condition, where is it tested' and the response is more or less 'we plan 
on refactoring this later


Re: [openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-12 Thread Mark Washenberger
On Wed, Mar 12, 2014 at 6:40 AM, Sean Dague  wrote:

> On 03/12/2014 09:01 AM, Flavio Percoco wrote:
> > On 11/03/14 16:25 -0700, Clint Byrum wrote:
> >> Hi. I asked in #openstack-glance a few times today but got no response,
> >> so sorry for the list spam.
> >>
> >> https://review.openstack.org/#/c/79710/
> >>
> >> This change introduces a backward incompatible change to defaults with
> >> Havana. If a user has chosen to configure swift, but did not add swift
> >> to the known_stores, then when that user upgrades Glance, Glance will
> >> fail to start because their swift configuration will be invalid.
> >>
> >> This broke TripleO btw, which tries hard to use default configurations.
> >>
> >> Also I am not really sure why this approach was taken. If a user has
> >> explicitly put swift configuration options in their config file, why
> >> not just load swift store? Oslo.config will help here in that you can
> >> just add all of the config options but not actually expect them to be
> >> set. It seems entirely backwards to just fail in this case.
> >>
> >
> > Here's an attempt to fix this issues without reverting the patch.
> > Feedback appreciated.
> >
> > https://review.openstack.org/#/c/79935/
>
> ACK. Looks pretty good. You might want to consider using one of the oslo
> deprecation functions to make it consistent on the deprecation side.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Sorry, I suppose I should have interrogated the backwards-incompatibility
assumptions people were making about this change a bit more.

It looks like the latest patch is a great deprecation mechanism. Thanks for
working out a solution, Flavio et al.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travis yaml file on Horizon

2014-03-12 Thread Monty Taylor

On 03/12/2014 05:00 AM, Adam Nelson wrote:

I see that horizon doesn't use Travis CI


Is that a political decision?


No. It's a technical one. I'm not sure if you know, but OpenStack runs a 
pretty massive and amazing CI system.



If not, would there be resistance to adding a minimal .travis.yml file
to horizon/master so that other people can use travis for their public
horizon repos?  Travis requires the file to exist on all branches for
historical reasons.


Yes. There would be resistance. We don't use Travis.


This wouldn't oblige the use of Travis but it would make it easier for
me since we're actively developing on a public fork and use Travis.


I suggest doing your development directly on horizon rather than on a 
fork. If you do, you'll get the benefit of our CI and project gating 
infrastructure, as well as the ability to collaborate with a bunch of 
really awesome developers.


We're a really open project and love contributions. We're also pretty 
against things that incentivize people not contributing back.



--
Kili - Cloud for Africa: kili.io 
Musings: twitter.com/varud 
More Musings: varud.com 
About Adam: www.linkedin.com/in/adamcnelson



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Tim Bell



> 
> If you want to archive images per se, on deletion just export it to a 
> 'backup tape' (for example) and store enough of the metadata
> on that 'tape' to re-insert it if this is really desired and then delete it 
> from the database (or do the export... asynchronously). The
> same could be said with VMs, although likely not all resources, aka 
> networks/.../ make sense to do this.
> 
> So instead of deleted = 1, wait for cleaner, just save the resource (if
> possible) + enough metadata on some other system ('backup tape', alternate 
> storage location, hdfs, ceph...) and leave it there unless
> it's really needed. Making the database more complex (and all associated 
> code) to achieve this same goal seems like a hack that just
> needs to be addressed with a better way to do archiving.
> 
> In a cloudy world of course people would be able to recreate everything they 
> need on-demand so who needs undelete anyway ;-)
> 

I have no problem if there was an existing process integrated into all of the 
OpenStack components which would produce me an archive trail with meta data and 
a command to recover the object from that data.

Currently, my understanding is that there is no such function and thus the 
proposal to remove the deleted column is premature.

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Jay Pipes
On Wed, 2014-03-12 at 17:35 +, Tim Bell wrote:
> And if the same mistake is done for a cinder volume or a trove database ?

Snapshots and backups?

Best,
-jay

> 
> > -Original Message-
> > From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
> > Sent: 12 March 2014 17:02
> > To: OpenStack Development Mailing List (not for usage questions)
> > Cc: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of 
> > soft deletion (step by step)
> > 
> > Understandable,
> > 
> > Humans will be humans after all.
> > 
> > To me if OpenStack is a cloud platform then coming along with it should be 
> > best practices that come with the usage of a cloud
> > platform (treat your instances as ephemeral, use configuration management, 
> > save your stuff in source control...). I have been
> > preaching similar stuff at y! and getting people into the right mindset 
> > around "the cloud" is IMHO more important than making
> > openstack fit peoples non-cloudy mindset.
> > 
> > Because once u teach a person to use the cloud right u don't need to have 
> > openstack compensate for them using it incorrectly.
> > 
> > Sent from my really tiny device...
> > 
> > > On Mar 12, 2014, at 4:45 AM, "CARVER, PAUL"  wrote:
> > >
> > > I have personally witnessed someone (honestly, not me) select "Terminate 
> > > Instance" when they meant "Reboot Instance" and that
> > mistake is way too easy. I'm not sure if it was a brain mistake or mere 
> > slip of the mouse, but it's enough to make people really
> > nervous in a production environment. If there's one thing you can count on 
> > about human beings, it's that they'll make mistakes
> > sooner or later. Any system that assumes infallible human beings as a 
> > design criteria is making an invalid assumption.
> > >
> > > --
> > > Paul Carver
> > > VO: 732-545-7377
> > > Cell: 908-803-1656
> > > E: pcar...@att.com
> > > Q Instant Message
> > >
> > >
> > > -Original Message-
> > > From: Tim Bell [mailto:tim.b...@cern.ch]
> > > Sent: Tuesday, March 11, 2014 15:43
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid
> > > of soft deletion (step by step)
> > >
> > >
> > > Typical cases are user error where someone accidentally deletes an item 
> > > from a tenant. The image guys have a good structure
> > where images become unavailable and are recoverable for a certain period of 
> > time. A regular periodic task cleans up deleted items
> > after a configurable number of seconds to avoid constant database growth.
> > >
> > > My preference would be to follow this model universally (an archive table 
> > > is a nice way to do it without disturbing production).
> > >
> > > Tim
> > >
> > >
> > >>> On Tue, Mar 11, 2014, Mike Wilson  wrote:
> > >>> Undeleting things is an important use case in my opinion. We do this
> > >>> in our environment on a regular basis. In that light I'm not sure
> > >>> that it would be appropriate just to log the deletion and git rid of
> > >>> the row. I would like to see it go to an archival table where it is 
> > >>> easily restored.
> > >>
> > >> I'm curious, what are you undeleting and why?
> > >>
> > >> JE
> > >>
> > >>
> > >> ___
> > >> OpenStack-dev mailing list
> > >> OpenStack-dev@lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Error on running tox

2014-03-12 Thread Manas Kelshikar
I pasted only for python 2.6 but exact same errors with 2.7. Also, I posted
this question after I nuked my entire dev folder so this was being run on a
new environment.

/Manas


On Wed, Mar 12, 2014 at 4:44 AM, Renat Akhmerov wrote:

> I would just try to recreate virtual environments. We haven't been able to
> reproduce this problem so far.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 12 Mar 2014, at 16:32, Nikolay Makhotkin 
> wrote:
>
> maybe something wrong with python2.6?
>
> .tox/py26/lib/python2.6/site-packages/mock.py", line 1201, in patched
>
>
> what if you try it on py27?
>
>
>
> On Wed, Mar 12, 2014 at 10:08 AM, Renat Akhmerov 
> wrote:
>
>> Ok. I might be related with oslo.messaging change that we merged in
>> yesterday but I don't see at this point how exactly.
>>
>> Renat Akhmerov
>> @ Mirantis Inc.
>>
>>
>>
>> On 12 Mar 2014, at 12:38, Manas Kelshikar  wrote:
>>
>> Yes it is 100% reproducible.
>>
>> Was hoping it was environmental i.e. missing some dependency etc. but
>> since that does not seem to be the case I shall debug locally and report
>> back.
>>
>> Thanks!
>>
>>
>> On Tue, Mar 11, 2014 at 9:54 PM, Renat Akhmerov 
>> wrote:
>>
>>> Hm.. Interesting. CI wasn't able to reveal this for some reason.
>>>
>>> My first guess is that there's a race condition somewhere. Did you try
>>> to debug it? And is this error 100% repeatable?
>>>
>>> Renat Akhmerov
>>> @ Mirantis Inc.
>>>
>>>
>>>
>>> On 12 Mar 2014, at 11:18, Manas Kelshikar  wrote:
>>>
>>> I see this error when I run tox. I pulled down a latest copy of master
>>> and tried to setup the environment. Any ideas?
>>>
>>> See http://paste.openstack.org/show/73213/ for details. Any help is
>>> appreciated.
>>>
>>>
>>> Thanks,
>>>
>>> Manas
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Best Regards,
> Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] test_launch_instance_post questions

2014-03-12 Thread Abishek Subramanian (absubram)
Hi all, jpich, amotoki and toshi,

I'm including a link to a small diff to show you what I'm trying to do.
It is obviously a small subset of what I want to do.
https://www.dropbox.com/s/r7khv7gvdcd02gl/launch_instance_post_diff.patch

But to illustrate the issue I am seeing and to help my understanding of
this 
test, all I'm doing is this - I've added a new network in the neutron_data
that is similar to the first network. Then I have replaced the nics
argument 
which is needed to launch an instance. It now takes my new network instead
of
the first network.

However the test fails because when it is run, it is supposed to expect my
new network
but it actually finds the first network in the code.


The problem in code is here -
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashbo
ards/project/instances/workflows/create_instance.py

Lines 700-705:
netids = context.get('network_id', None)
if netids:
    nics = [{"net-id": netid, "v4-fixed-ip": ""}
            for netid in netids]
else:
    nics = None

This part is confusing. In the UT environment, can I please get some help in
understanding how this line seems to always pick the first network when this
particular test is run?
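
For what it's worth, with two networks selected, the list built by that code
would presumably need to come out looking like the following (illustrative
placeholder values only, not names from the test data):

    # Illustrative only: net_id_1 / net_id_2 stand for the two ids that
    # context.get('network_id') would have to return.
    net_id_1 = "id-of-first-network"
    net_id_2 = "id-of-new-network"
    nics = [{"net-id": net_id_1, "v4-fixed-ip": ""},
            {"net-id": net_id_2, "v4-fixed-ip": ""}]

So the question above boils down to why context['network_id'] only ever
contains the first id when the test runs.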


Thanks!


On 3/11/14 9:58 AM, "Abishek Subramanian (absubram)" 
wrote:

>Hi,
>
>Can I please get some help with this UT?
>I am having a little issue with the nics argument -
>nics = [{"net-id": netid, "v4-fixed-ip": ""}
>
>
>I wish to add a second network to this argument, but somehow
>the UT only picks up the first network.
>
>Any guidance will be appreciated.
>
>
>Thanks!
>
>
>On 3/6/14 12:06 PM, "Abishek Subramanian (absubram)" 
>wrote:
>
>>Hi,
>>
>>I had a couple of questions regarding this UT and the
>>JS template that it ends up using.
>>Hopefully someone can point me in the right direction
>>and help me understand this a little better.
>>
>>I see that for this particular UT, we have a total of 3 networks
>>in the network_list (the second network is supposed to be disabled
>>though).
>>For the nic argument needed by the nova/server_create API though we
>>only pass the first network's net_id.
>>
>>I am trying to modify this unit test so as to be able to accept 2
>>network_ids 
>>instead of just one. This should be possible yes?
>>We can have two nics in an instance of just one?
>>However, I always see that when the test runs,
>>in code it only finds the first network from the list.
>>
>>This line of code -
>>
>> if netids:
>>     nics = [{"net-id": netid, "v4-fixed-ip": ""}
>>             for netid in netids]
>>
>>There's always just one net-id in this dictionary even though I've added
>>a new network in the neutron test_data. Can someone please help me
>>figure out what I might be doing wrong?
>>
>>How does the JS code in horizon.instances.js file work?
>>I assume this is where the network list is obtained from?
>>How does this translate in the unit test environment?
>>
>>
>>
>>Thanks!
>>Abishek
>>
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] development workflows

2014-03-12 Thread Greg Lucas
Craig Vyvial wrote:
> That's really cool.

+1, thanks Mat for writing this up. I'm going to experiment with this
approach...

Mat Lowery wrote:
> Assuming my doc is desirable in some form, where is the best place to put it? 
> Thanks.

This seems similar to some of the pages under
https://wiki.openstack.org/wiki/Trove#Development so perhaps add it there?

You could also pull the 'Testing' section out as a separate page since
it does not seem tied to VM set-up and is itself a useful overview of
the various test frameworks in play here.

Thanks,
-- 
Greg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-12 Thread John Dennis
On 03/12/2014 01:22 PM, Zane Bitter wrote:
> On 10/03/14 20:29, Robert Collins wrote:
>> Which bits look raw? It should only show text/* attachments, non-text
>> should be named but not dumped.
> 
> I was thinking of the:
> 
> pythonlogging:'': {{{
> 
> part.

Yes, this is the primary culprit, it's output obscures most everything
else concerning test results. Sometimes it's essential information.
Therefore you should be able to control whether it's displayed or not.


-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Tim Bell

And if the same mistake is done for a cinder volume or a trove database ?

Tim

> -Original Message-
> From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
> Sent: 12 March 2014 17:02
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
> deletion (step by step)
> 
> Understandable,
> 
> Humans will be humans after all.
> 
> To me if OpenStack is a cloud platform then coming along with it should be 
> best practices that come with the usage of a cloud
> platform (treat your instances as ephemeral, use configuration management, 
> save your stuff in source control...). I have been
> preaching similar stuff at y! and getting people into the right mindset 
> around "the cloud" is IMHO more important than making
> openstack fit peoples non-cloudy mindset.
> 
> Because once u teach a person to use the cloud right u don't need to have 
> openstack compensate for them using it incorrectly.
> 
> Sent from my really tiny device...
> 
> > On Mar 12, 2014, at 4:45 AM, "CARVER, PAUL"  wrote:
> >
> > I have personally witnessed someone (honestly, not me) select "Terminate 
> > Instance" when they meant "Reboot Instance" and that
> mistake is way too easy. I'm not sure if it was a brain mistake or mere slip 
> of the mouse, but it's enough to make people really
> nervous in a production environment. If there's one thing you can count on 
> about human beings, it's that they'll make mistakes
> sooner or later. Any system that assumes infallible human beings as a design 
> criteria is making an invalid assumption.
> >
> > --
> > Paul Carver
> > VO: 732-545-7377
> > Cell: 908-803-1656
> > E: pcar...@att.com
> > Q Instant Message
> >
> >
> > -Original Message-
> > From: Tim Bell [mailto:tim.b...@cern.ch]
> > Sent: Tuesday, March 11, 2014 15:43
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid
> > of soft deletion (step by step)
> >
> >
> > Typical cases are user error where someone accidentally deletes an item 
> > from a tenant. The image guys have a good structure
> where images become unavailable and are recoverable for a certain period of 
> time. A regular periodic task cleans up deleted items
> after a configurable number of seconds to avoid constant database growth.
> >
> > My preference would be to follow this model universally (an archive table 
> > is a nice way to do it without disturbing production).
> >
> > Tim
> >
> >
> >>> On Tue, Mar 11, 2014, Mike Wilson  wrote:
> >>> Undeleting things is an important use case in my opinion. We do this
> >>> in our environment on a regular basis. In that light I'm not sure
> >>> that it would be appropriate just to log the deletion and git rid of
> >>> the row. I would like to see it go to an archival table where it is 
> >>> easily restored.
> >>
> >> I'm curious, what are you undeleting and why?
> >>
> >> JE
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-12 Thread Jay Pipes
On Wed, 2014-03-12 at 13:21 -0400, Sean Dague wrote:
> On 03/12/2014 01:10 PM, Jay Pipes wrote:
> > On Wed, 2014-03-12 at 07:18 -0400, Sean Dague wrote:
> >> On 03/12/2014 06:38 AM, Flavio Percoco wrote:
> >>> On 11/03/14 16:25 -0700, Clint Byrum wrote:
>  Hi. I asked in #openstack-glance a few times today but got no response,
>  so sorry for the list spam.
> 
>  https://review.openstack.org/#/c/79710/
> 
>  This change introduces a backward incompatible change to defaults with
>  Havana. If a user has chosen to configure swift, but did not add swift
>  to the known_stores, then when that user upgrades Glance, Glance will
>  fail to start because their swift configuration will be invalid.
> 
>  This broke TripleO btw, which tries hard to use default configurations.
> >>>
> >>> I don't think this change has to be reverted. We could add an upgrade
> >>> path for this. We could enable a driver if its config options were set
> >>> and warn the user about this change. Also, we could make sure we
> >>> import all drivers and the config options are registered but that we
> >>> don't try to enable them.
> >>>
> >>> Also, I don't expect (yeah, I know this is not always the case) users to
> >>> blindly upgrade if they really care about their cloud deployment.
> >>> Since this change will be part of the change log and the release
> >>> notes, I expect the user to be aware of it.
> >>
> >> OpenStack's 2 largest public clouds don't wait for releases, so that's
> >> not really a good answer.
> >>
> 
>  Also I am not really sure why this approach was taken. If a user has
>  explicitly put swift configuration options in their config file, why
>  not just load swift store? Oslo.config will help here in that you can
>  just add all of the config options but not actually expect them to be
>  set. It seems entirely backwards to just fail in this case.
> >>>
> >>> This is exactly the problem. With the current approach, the user has
> >>> not explicitly enabled the swift store. The user just put swift
> >>> configs. With the current 'enable all and let it fail' approach, it is
> >>> really confusing for users to see all the failures and it's not nice to
> >>> enable things by default for the user.
> >>>
> >>> Thanks for raising this issue, I didn't think about this corner
> >>> case.
> >>> Flavio
> >>
> >> In fairness, this wasn't a corner case. Grenade was blocking this change
> >> for the whole cycle until a change was made in stable/havana devstack
> >> that sneaked around it with https://review.openstack.org/#/c/75827/.  :)
> >>
> >> In addition, the commit in question for glance -
> >> https://review.openstack.org/#/c/59150/ didn't have UpgradeImpact, which
> >> is the signaling mechanism for these kinds of issues.
> >>
> >> I do think this is a real issue. OpenStack really is expected to be CD
> >> upgradable, not just post release and post release notes. Commits in
> >> OpenStack need to take that into account.
> >>
> >> A compatibility behavior should be put in place here.
> >>
> >> I do agree the current behavior isn't nice with gorpy error messages all
> >> the time. However, a completely legitimate approach would be:
> >>
> >> If configuration for a storage back end existed, but the driver wasn't
> >> explicitly set, load and configure that driver and throw a big
> >> DEPRECATION WARNING in the logs that Glance will require explicit
> >> loading of drivers in an upcoming release. That would let you move
>> forward, and provide some user signalling.
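
Roughly, that compatibility behaviour might look like the following sketch
(hypothetical helper names, not actual Glance code):

    import logging

    LOG = logging.getLogger(__name__)

    def resolve_stores(known_stores, configured_stores):
        # known_stores: drivers explicitly listed in the known_stores option.
        # configured_stores: drivers whose backend options are present in the
        # config file (a hypothetical helper would compute this set).
        stores = set(known_stores)
        for store in configured_stores - stores:
            LOG.warning("Store %s has configuration set but is not listed in "
                        "known_stores; enabling it anyway. This behaviour is "
                        "DEPRECATED and explicit listing will be required in "
                        "a future release.", store)
            stores.add(store)
        return stores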
> > 
> > That's pretty much what already happens. On startup, Glance will log a
> > message about a particular store driver being disabled [1] because
> > configuration settings were not set properly. [2]
> > 
> > IIRC, on startup is the only place these messages occur (not, for
> > instance, every time somebody uploads an image), so I'm not entirely
> > sure what the big deal was.
> > 
> > Long term, moving stores into entrypoints might be a cleaner solution,
> > but you will still need to validate configuration for those endpoints on
> > startup -- all endpoints give you is a cleaner method than "set your new
> > store in the known_stores configuration option".
> 
> The issue is this was a failure going from old defaults to new defaults,
> which is why it was actually blocked by grenade for months.
> 
> The difference is actually we should be working with the drivers that
> were configured, and deprecate the fact that you can get away without
> specifying them.
> 
> Then you can roll forward gracefully, see a warning message on your
> working new config, go handle the situation, and later when the
> backwards compatible behavior is removed you are ok.

I think you may have misunderstood me :) I was saying that I don't
understand what the big deal was about log messages on startup around
failure to configure drivers properly. I didn't think that was something
that needed to be "fixed".

Best,
-jay



Re: [openstack-dev] [oslo.messaging] mongodb notification driver

2014-03-12 Thread Sandy Walsh
You may want to consider StackTach for troubleshooting (that's what it was 
initially created for)

https://github.com/rackerlabs/stacktach

It will consume and record the events as well as give you a gui and cmdline 
tools for tracing calls by server, request_id, event type, etc. 

Ping me if you have any issues getting it going.

Cheers
-S


From: Hiroyuki Eguchi [h-egu...@az.jp.nec.com]
Sent: Tuesday, March 11, 2014 11:09 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo.messaging] mongodb notification driver

I'm envisioning a mongodb notification driver.

Currently, for troubleshooting, I'm using the log notification driver, sending 
the notification log to an rsyslog server, and storing it in a database using 
the rsyslog-mysql package.

I would like to make this simpler, so I came up with this feature.

Ceilometer can manage notifications using mongodb, but Ceilometer should have 
the role of Metering, not Troubleshooting.
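
For reference, the storage side of such a driver would presumably be little
more than one insert per notification, along these lines (a minimal sketch
using pymongo; the wiring into oslo.messaging as a notification driver is
omitted, and the names here are assumptions, not an existing API):

    import pymongo

    class MongoNotificationStore(object):
        """Hypothetical sink that records notification payloads in MongoDB."""

        def __init__(self, uri="mongodb://localhost:27017",
                     db_name="notifications", collection="events"):
            self._events = pymongo.MongoClient(uri)[db_name][collection]

        def record(self, ctxt, message, priority):
            # A notification already carries event_type, payload, timestamp,
            # etc., so store it as-is and tag it with the priority level.
            doc = dict(message)
            doc["priority"] = priority
            self._events.insert_one(doc)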

If you have any comments or suggestion, please let me know.
And please let me know if there's any discussion about this.

Thanks.
--hiroyuki

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Johannes Erdfelt
On Wed, Mar 12, 2014, CARVER, PAUL  wrote:
> I have personally witnessed someone (honestly, not me) select "Terminate
> Instance" when they meant "Reboot Instance" and that mistake is way too
> easy. I'm not sure if it was a brain mistake or mere slip of the mouse,
> but it's enough to make people really nervous in a production
> environment. If there's one thing you can count on about human beings,
> it's that they'll make mistakes sooner or later. Any system that
> assumes infallible human beings as a design criteria is making an
> invalid assumption.

I think there might be some confusion about what soft-delete we're
talking about.

Nova has two orthogonal "soft-delete" features:
1) Database rows are never deleted from the database. They are just
   marked as deleted via a column. This is unexposed to users and is an
   implementation detail in the current code.
2) Instance deletion can be deferred until a later time. This is called
   deferred-delete and soft-delete in the code. If the feature is
   enabled and the instance hasn't yet been reclaimed, it can be
   restored with the 'nova restore' command.

This thread is about the database soft-delete and not the instance
soft-delete.
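
For readers unfamiliar with the first flavor, the "deleted column" pattern
looks roughly like this (a generic SQLAlchemy sketch, not Nova's actual model
code):

    import datetime

    from sqlalchemy import Column, DateTime, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        hostname = Column(String(255))
        deleted = Column(Integer, default=0)    # 0 = live, non-zero = soft-deleted
        deleted_at = Column(DateTime, nullable=True)

    def soft_delete(session, instance):
        # The row never leaves the table; "live" queries filter on deleted == 0.
        instance.deleted = instance.id
        instance.deleted_at = datetime.datetime.utcnow()
        session.add(instance)
        session.commit()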

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-12 Thread Zane Bitter

On 10/03/14 20:29, Robert Collins wrote:

Which bits look raw? It should only show text/* attachments, non-text
should be named but not dumped.


I was thinking of the:

pythonlogging:'': {{{

part.

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-12 Thread Sean Dague
On 03/12/2014 01:10 PM, Jay Pipes wrote:
> On Wed, 2014-03-12 at 07:18 -0400, Sean Dague wrote:
>> On 03/12/2014 06:38 AM, Flavio Percoco wrote:
>>> On 11/03/14 16:25 -0700, Clint Byrum wrote:
 Hi. I asked in #openstack-glance a few times today but got no response,
 so sorry for the list spam.

 https://review.openstack.org/#/c/79710/

 This change introduces a backward incompatible change to defaults with
 Havana. If a user has chosen to configure swift, but did not add swift
 to the known_stores, then when that user upgrades Glance, Glance will
 fail to start because their swift configuration will be invalid.

 This broke TripleO btw, which tries hard to use default configurations.
>>>
>>> I don't think this change has to be reverted. We could add an upgrade
>>> path for this. We could enable a driver if its config options were set
>>> and warn the user about this change. Also, we could make sure we
>>> import all drivers and the config options are registered but that we
>>> don't try to enable them.
>>>
>>> Also, I don't expect (yeah, I know this is not always the case) users to
>>> blindly upgrade if they really care about their cloud deployment.
>>> Since this change will be part of the change log and the release
>>> notes, I expect the user to be aware of it.
>>
>> OpenStack's 2 largest public clouds don't wait for releases, so that's
>> not really a good answer.
>>

 Also I am not really sure why this approach was taken. If a user has
 explicitly put swift configuration options in their config file, why
 not just load swift store? Oslo.config will help here in that you can
 just add all of the config options but not actually expect them to be
 set. It seems entirely backwards to just fail in this case.
>>>
>>> This is exactly the problem. With the current approach, the user has
>>> not explicitly enabled the swift store. The user just put swift
>>> configs. With the current 'enable all and let it fail' approach, it is
>>> really confusing for users to see all the failures and it's not nice to
>>> enable things by default for the user.
>>>
>>> Thanks for raising this issue, I didn't think about this corner
>>> case.
>>> Flavio
>>
>> In fairness, this wasn't a corner case. Grenade was blocking this change
>> for the whole cycle until a change was made in stable/havana devstack
>> that sneaked around it with https://review.openstack.org/#/c/75827/.  :)
>>
>> In addition, the commit in question for glance -
>> https://review.openstack.org/#/c/59150/ didn't have UpgradeImpact, which
>> is the signaling mechanism for these kinds of issues.
>>
>> I do think this is a real issue. OpenStack really is expected to be CD
>> upgradable, not just post release and post release notes. Commits in
>> OpenStack need to take that into account.
>>
>> A compatibility behavior should be put in place here.
>>
>> I do agree the current behavior isn't nice with gorpy error messages all
>> the time. However, a completely legitimate approach would be:
>>
>> If configuration for a storage back end existed, but the driver wasn't
>> explicitly set, load and configure that driver and throw a big
>> DEPRECATION WARNING in the logs that Glance will require explicit
>> loading of drivers in an upcoming release. That would let you move
>> forward, and provide some user signalling.
> 
> That's pretty much what already happens. On startup, Glance will log a
> message about a particular store driver being disabled [1] because
> configuration settings were not set properly. [2]
> 
> IIRC, on startup is the only place these messages occur (not, for
> instance, every time somebody uploads an image), so I'm not entirely
> sure what the big deal was.
> 
> Long term, moving stores into entrypoints might be a cleaner solution,
> but you will still need to validate configuration for those entry points on
> startup -- all entry points give you is a cleaner method than "set your new
> store in the known_stores configuration option".

The issue is this was a failure going from old defaults to new defaults,
which is why it was actually blocked by grenade for months.

The difference is actually we should be working with the drivers that
were configured, and deprecate the fact that you can get away without
specifying them.

Then you can roll forward gracefully, see a warning message on your
working new config, go handle the situation, and later when the
backwards compatible behavior is removed you are ok.
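
To make that concrete, a rough sketch of the kind of compatibility shim I
mean (illustrative only -- the swift example and option names are
assumptions for the sketch, not actual Glance code):

from oslo.config import cfg
import logging

LOG = logging.getLogger(__name__)
CONF = cfg.CONF

# Options whose presence implies the operator configured a store, even if
# it was not listed in known_stores (example values, not exhaustive).
_IMPLICIT_STORE_OPTS = {
    'glance.store.swift.Store': ('swift_store_auth_address',
                                 'swift_store_user'),
}


def stores_to_enable():
    """Return known_stores plus any store that looks configured but was
    not listed, logging a deprecation warning for the latter."""
    stores = list(CONF.known_stores)
    for store, opts in _IMPLICIT_STORE_OPTS.items():
        if store in stores:
            continue
        if any(getattr(CONF, opt, None) for opt in opts):
            LOG.warning("DEPRECATED: %s appears to be configured but is "
                        "not listed in known_stores; enabling it for "
                        "backwards compatibility. Please list it "
                        "explicitly, as this behaviour will be removed.",
                        store)
            stores.append(store)
    return stores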

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Jay Pipes
On Wed, 2014-03-12 at 11:37 +, CARVER, PAUL wrote:
> I have personally witnessed someone (honestly, not me) select "Terminate 
> Instance" when they meant "Reboot Instance" and that mistake is way too easy. 
> I'm not sure if it was a brain mistake or mere slip of the mouse, but it's 
> enough to make people really nervous in a production environment. If there's 
> one thing you can count on about human beings, it's that they'll make 
> mistakes sooner or later. Any system that assumes infallible human beings as 
> a design criteria is making an invalid assumption.

That's why GUIs should have a dialog box that says "Are you sure you
want to terminate this server?".

There's prevention of common mistakes, and then there's going out of
your way to ensure that the cloud acts like a text editor with an
unlimited undo buffer.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-12 Thread Jay Pipes
On Wed, 2014-03-12 at 07:18 -0400, Sean Dague wrote:
> On 03/12/2014 06:38 AM, Flavio Percoco wrote:
> > On 11/03/14 16:25 -0700, Clint Byrum wrote:
> >> Hi. I asked in #openstack-glance a few times today but got no response,
> >> so sorry for the list spam.
> >>
> >> https://review.openstack.org/#/c/79710/
> >>
> >> This change introduces a backward incompatible change to defaults with
> >> Havana. If a user has chosen to configure swift, but did not add swift
> >> to the known_stores, then when that user upgrades Glance, Glance will
> >> fail to start because their swift configuration will be invalid.
> >>
> >> This broke TripleO btw, which tries hard to use default configurations.
> > 
> > I don't think this change has to be reverted. We could add an upgrade
> > path for this. We could enable a driver if its config options were set
> > and warn the user about this change. Also, we could make sure we
> > import all drivers and the config options are registered but that we
> > don't try to enable them.
> > 
> > Also, I don't expect (yeah, I know this is not always the case) users to
> > blindly upgrade if they really care about their cloud deployment.
> > Since this change will be part of the change log and the release
> > notes, I expect the user to be aware of it.
> 
> OpenStack's 2 largest public clouds don't wait for releases, so that's
> not really a good answer.
> 
> >>
> >> Also I am not really sure why this approach was taken. If a user has
> >> explicitly put swift configuration options in their config file, why
> >> not just load swift store? Oslo.config will help here in that you can
> >> just add all of the config options but not actually expect them to be
> >> set. It seems entirely backwards to just fail in this case.
> > 
> > This is exactly the problem. With the current approach, the user has
> > not explicitly enabled the swift store. The user just put swift
> > configs. With the current 'enable all and let it fail' approach, it is
> > really confusing for users to see all the failures and it's not nice to
> > enable things by default for the user.
> > 
> > Thanks for raising this issue, I didn't think about this corner
> > case.
> > Flavio
> 
> In fairness, this wasn't a corner case. Grenade was blocking this change
> for the whole cycle until a change was made in stable/havana devstack
> that sneaked around it with https://review.openstack.org/#/c/75827/.  :)
> 
> In addition, the commit in question for glance -
> https://review.openstack.org/#/c/59150/ didn't have UpgradeImpact, which
> is the signaling mechanism for these kinds of issues.
> 
> I do think this is a real issue. OpenStack really is expected to be CD
> upgradable, not just post release and post release notes. Commits in
> OpenStack need to take that into account.
> 
> A compatibility behavior should be put in place here.
> 
> I do agree the current behavior isn't nice with gorpy error messages all
> the time. However, a completely legitimate approach would be:
> 
> If configuration for a storage back end existed, but the driver wasn't
> explicitly set, load and configure that driver and throw a big
> DEPRECATION WARNING in the logs that Glance will require explicit
> loading of drivers in an upcoming release. That would let you move
> forward, and provide some user signalling.

That's pretty much what already happens. On startup, Glance will log a
message about a particular store driver being disabled [1] because
configuration settings were not set properly. [2]

IIRC, on startup is the only place these messages occur (not, for
instance, every time somebody uploads an image), so I'm not entirely
sure what the big deal was.

Long term, moving stores into entrypoints might be a cleaner solution,
but you will still need to validate configuration for those entry points on
startup -- all entry points give you is a cleaner method than "set your new
store in the known_stores configuration option".

Best,
-jay

[1]
https://github.com/openstack/glance/blob/master/glance/store/__init__.py#L180

[2]
for example:
https://github.com/openstack/glance/blob/master/glance/store/filesystem.py#L179
https://github.com/openstack/glance/blob/master/glance/store/swift.py#L655

> That is definitely more effort, but as a community we've decided to
> support the CD method for OpenStack, which means we need to take account
> of these kinds of cases.
> 
>   -Sean
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-12 Thread Dina Belova
>
> The biggest concern seemed to be that we weren't sure whether Climate
> makes sense as an independent project or not.  We think it may make more
> sense to integrate what Climate does today into Nova directly.  More
> generally, we think reservations of resources may best belong in the
> APIs responsible for managing those resources, similar to how quota
> management for resources lives in the resource APIs.
> There is some expectation that this type of functionality will extend
> beyond Nova, but for that we could look at creating a shared library of
> code to ease implementing this sort of thing in each API that needs it.


Russell, sure. I guess we'll discuss that more carefully at the summit, and
I'd love to see that feature implemented in the best way it can be. I
think an in-person discussion will help a lot here. I'm hoping to collect more
feedback before the summit so we have multiple views on this problem.

I truly agree with the fact that possibly users should not use a separate
> API for reserving resources, but that would be worth duty for the project
> itself (Nova, Cinder or even Heat). That said, we think that there is need
> for having a global ordonancer managing resources and not siloing the
> resources. Hence that's why we still think there is still a need for a
> Climate Manager.
> Once I said that, there are different ways to plug in with the Manager,
> our proposal is to deliver a REST API and a python client so that there
> could be still some operator access for managing the resources if needed.
> The other way would be to only expose an RPC interface like the scheduler
> does at the moment but as the move to Pecan/WSME is already close to be
> done (reviews currently in progress), that's still a good opportunity for
> leveraging the existing bits of code.


Sylvain, I quite agree with you.

-- Dina


On Wed, Mar 12, 2014 at 8:14 PM, Sylvain Bauza wrote:

> Hi Russell,
> Thanks for replying,
>
>
> 2014-03-12 16:46 GMT+01:00 Russell Bryant :
>
> On 03/12/2014 07:35 AM, Dina Belova wrote:
>> > Thanks TC for spending time on Blazar (ex. Climate, in process of
>> > renaming) discussion!
>> >
>> > It was decided that potentially reservation idea is interesting for OS
>> > and it'll be great to have cross-project session on ongoing Atlanta
>> > Summit and discuss future of reservation/scheduling management in
>> OpenStack.
>> >
>> > Here is link to cross-project session proposal:
>> >
>> > http://summit.openstack.org/cfp/details/45
>> >
>> > Thanks everyone and let's keep working on that idea.
>>
>> Yes, I do think it would be useful to discuss this in person.  However,
>> I don't think that was the most important feedback from the TC meeting.
>>
>> The biggest concern seemed to be that we weren't sure whether Climate
>> makes sense as an independent project or not.  We think it may make more
>> sense to integrate what Climate does today into Nova directly.  More
>> generally, we think reservations of resources may best belong in the
>> APIs responsible for managing those resources, similar to how quota
>> management for resources lives in the resource APIs.
>>
>> There is some expectation that this type of functionality will extend
>> beyond Nova, but for that we could look at creating a shared library of
>> code to ease implementing this sort of thing in each API that needs it.
>>
>
>
> That's really a good question, so maybe I could give some feedback on how
> we deal with the existing use-cases.
> About the possible integration with Nova, that's already something we did
> for the virtual instances use-case, thanks to an API extension responsible
> for checking if a scheduler hint called 'reservation' was spent, and if so,
> take use of the python-climateclient package to send a request to Climate.
>
> I truly agree with the fact that possibly users should not use a separate
> API for reserving resources, but that would be worth duty for the project
> itself (Nova, Cinder or even Heat). That said, we think that there is need
> for having a global ordonancer managing resources and not siloing the
> resources. Hence that's why we still think there is still a need for a
> Climate Manager.
>
> Once I said that, there are different ways to plug in with the Manager,
> our proposal is to deliver a REST API and a python client so that there
> could be still some operator access for managing the resources if needed.
> The other way would be to only expose an RPC interface like the scheduler
> does at the moment but as the move to Pecan/WSME is already close to be
> done (reviews currently in progress), that's still a good opportunity for
> leveraging the existing bits of code.
>
> -Sylvain
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev

[openstack-dev] [Congress] Policy types

2014-03-12 Thread Tim Hinrichs
Hi all, 

We started a discussion on IRC yesterday that I'd like to continue. The main 
question is what kind of policy does a Congress user actually write? I can see 
three options. The first two focus on actions (API calls that make changes to 
the state of the cloud) and the last focuses on just the cloud state. (By 
"state of the cloud" I mean all the information Congress can see about all the 
cloud services it is managing, e.g. all the information we can get through API 
calls to Nova, Neutron, Cinder, Heat, ...). 

1) Access Control (e.g. Linux, XACML, AD): which *actions* can be performed by 
other cloud services (for each state of the cloud) 
2) Condition Action: which *actions* Congress should execute (for each state of 
the cloud) 
3) Classification (currently supported in Congress): which *states* violate 
real-world policy. [For those of you who have read docs/white-papers/etc. I'm 
using "Classification" in this note to mean the combination of the current 
"Classification" and "Action Description" policies.] 

The important observation is that each of these policies could contain 
different information from each of the others. 

- Access Control vs Condition Action. The Access Control policy tells *other 
cloud services* which actions they are *allowed* to execute. The Condition 
Action policy tells *Congress* which actions it *must* execute. These policies 
differ because they constrain different sets of cloud services. 

- Access Control vs. Classification. The Access Control policy might permit 
some users to violate the Classification policy in some situations (e.g. to fix 
violation A, we might need to cause violation B before eliminating both). These 
policies differ because a violation in one policy might not be a violation in
the other. 

- Classification vs. Condition Action. The Classification policy might imply 
which actions *could* eliminate a given violation, but the Condition Action 
policy would dictate which of those actions *should* be executed (e.g. the 
Classification policy might tell us that disconnecting a network and deleting a 
VM would both eliminate a particular violation, but the Condition Action policy 
would tell us which to choose). And the Condition Action policy need not 
eliminate all the violations present in the Classification policy. Again these 
policies differ because a violation in one policy might not be a violation in 
the other. 

I'm proposing that for the first release of Congress we support all 3 of these 
policies. When a user inserts/deletes a policy statement, she chooses which 
policy it belongs to. All would be written in basically the same syntax but 
would be used in 3 different scenarios: 

- Prevention: If a component wants to consult Congress before taking action to 
see if that action is allowed, Congress checks the Access Control policy. 

- Reaction: When Congress learns of a change in the cloud's state, it checks 
the Condition Action policy to see which actions should be executed (if any). 

- Monitoring: If a user wants to simply check if the cloud's state is in 
compliance and monitor compliance over time, she writes and queries the 
Classification policy. 

There are several benefits to this proposal. 
- It allows users to choose any of the policy types, if they only want one of 
them. From our discussions with potential users, most seem to want one of these 
3 policy types (and are uninterested in the others). 
- It makes the introduction to Congress relatively simple. We describe 3 
different uses of policy (Prevention, Reaction, Monitoring) and then explain 
which policy to use in which case. 
- This allows us to focus on implementing a single policy-engine technology (a 
Datalog policy language and evaluation algorithms), which gives us the 
opportunity to make it solid. 

There are drawbacks to this proposal as well. 
- We will have 3 separate policies that are conceptually very similar. As the 
policies grow larger, it will become increasingly difficult to keep the 
policies synchronized. This problem can be mitigated to some extent by having 
all 3 share a library of policy statements that they all apply in different 
ways (and such a library mechanism is already implemented). 
- As cloud services change their behavior, policies may need to be re-written. 
For example, right now Nova does not consult Congress before creating a VM; 
thus, to enforce policy surrounding VMs, the best we could do is write a 
Condition-Action policy that adjusts VM configuration when it learns about new 
VMs being created. If we later make Nova consult with Congress before creating 
a VM, we need to write an Access-control policy that puts the proper controls 
in place. 

These drawbacks were the original motivation for supporting only the 
Classification policy and attempting to derive the Access Control and Condition 
Action policies from it. But given that we can't always derive the proper 
Access Control and Condition Action policies from t

[openstack-dev] any recommendations for live debugging of openstack services?

2014-03-12 Thread Chris Friesen


Are there any tools that people can recommend for live debugging of 
openstack services?


I'm looking for a mechanism where I could take a running system that 
isn't behaving the way I expect and somehow poke around inside the 
program while it keeps running.  (Sort of like tracepoints in gdb.)


I've seen mention of things like twisted.manhole and 
eventlet.backdoor...has anyone used this sort of thing with openstack? 
Are there better options?
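
(For reference, the usual way to wire up eventlet's backdoor is something
like the minimal sketch below; telnetting to the port then gives an
interactive Python prompt inside the running process. I believe the
oslo-incubator eventlet_backdoor module exposes roughly this through a
backdoor_port option for services that have synced it.)

# Minimal sketch: spawn an eventlet backdoor green thread in the service.
import eventlet
from eventlet import backdoor


def enable_backdoor(port=3000):
    # Bind to localhost only; the prompt gives full access to the process.
    sock = eventlet.listen(('127.0.0.1', port))
    eventlet.spawn(backdoor.backdoor_server, sock)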


Also, has anyone ever seen an implementation of watchpoints for python? 
 By that I mean the ability to set a breakpoint if the value of a 
variable changes.  I found 
"https://sourceforge.net/blog/watchpoints-in-python/"; but it looks 
pretty hacky.
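
(The closest cheap trick I know of is property-based, which only works for
attributes you control and isn't a real watchpoint -- a tiny sketch:)

# Log every assignment to an attribute, with a stack trace of the caller.
import traceback


class Watched(object):

    def __init__(self):
        self._value = None

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        print("value changed: %r -> %r" % (self._value, new))
        traceback.print_stack()
        self._value = new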


Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-12 Thread Dan Smith
> Hmm... I guess the blueprint summary led me to believe that nova-network
> no longer needs to hit the database.

Yeah, using objects doesn't necessarily mean that the rest of the direct
database accesses go away. However, I quickly cooked up the rest of what
is required to get this done:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1290568,n,z

Review would be great. The last patch wedges the database like we do in
compute to make sure that the tests pass without talking to the database
itself. Would be a nice feature for icehouse to say that multihost
compute nodes are now db-clean.

Thanks!

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-12 Thread Duncan Thomas
On 11 March 2014 09:09, Zhangleiqiang  wrote:

> For example, one tenant's volume quota is five, and has 5 volumes and 1 
> snapshot already. If the data in base volume of the snapshot is corrupted, 
> the user will need to create a new volume from the snapshot, but this 
> operation will be failed because there are already 5 volumes, and the 
> original volume cannot be deleted, too.

That original volume is still taking up disk space, so absolutely
needs to be part of the quota and billing.

We talked about allowing snapshots to exist when their origin volume
is deleted in cinder (I was an advocate of it), but it turns out to be
impossible on some backends without lots of data copying, and having a
quota system that does not represent the actual resource usage is
begging for a DoS attack.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-12 Thread Sylvain Bauza
Hi Russell,
Thanks for replying,


2014-03-12 16:46 GMT+01:00 Russell Bryant :

> On 03/12/2014 07:35 AM, Dina Belova wrote:
> > Thanks TC for spending time on Blazar (ex. Climate, in process of
> > renaming) discussion!
> >
> > It was decided that potentially reservation idea is interesting for OS
> > and it'll be great to have cross-project session on ongoing Atlanta
> > Summit and discuss future of reservation/scheduling management in
> OpenStack.
> >
> > Here is link to cross-project session proposal:
> >
> > http://summit.openstack.org/cfp/details/45
> >
> > Thanks everyone and let's keep working on that idea.
>
> Yes, I do think it would be useful to discuss this in person.  However,
> I don't think that was the most important feedback from the TC meeting.
>
> The biggest concern seemed to be that we weren't sure whether Climate
> makes sense as an independent project or not.  We think it may make more
> sense to integrate what Climate does today into Nova directly.  More
> generally, we think reservations of resources may best belong in the
> APIs responsible for managing those resources, similar to how quota
> management for resources lives in the resource APIs.
>
> There is some expectation that this type of functionality will extend
> beyond Nova, but for that we could look at creating a shared library of
> code to ease implementing this sort of thing in each API that needs it.
>


That's really a good question, so maybe I can give some feedback on how
we deal with the existing use cases.
Regarding a possible integration with Nova, that's already something we did
for the virtual-instances use case, thanks to an API extension that checks
whether a scheduler hint called 'reservation' was passed and, if so, makes
use of the python-climateclient package to send a request to Climate.

I truly agree that users should probably not have to use a separate
API for reserving resources; that would rather be the duty of the project
itself (Nova, Cinder or even Heat). That said, we think there is a need
for a global scheduler managing resources rather than siloing the
resources per project. Hence we still think there is a need for a
Climate Manager.

Having said that, there are different ways to plug in to the Manager. Our
proposal is to deliver a REST API and a Python client so that there is
still some operator access for managing the resources if needed. The
other way would be to only expose an RPC interface like the scheduler does
at the moment, but since the move to Pecan/WSME is already close to done
(reviews currently in progress), that's still a good opportunity for
leveraging the existing bits of code.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travis yaml file on Horizon

2014-03-12 Thread Joshua Harlow
OpenStack has its own CI system that is independent of Travis, so I'm not sure
of the benefit of adding it into horizon's repo.

If u have a public fork what stops u from putting the file/s in that fork? 
Seems like u should be able to manipulate your fork however u want.

Sent from my really tiny device...

On Mar 12, 2014, at 5:09 AM, "Adam Nelson" <a...@kili.io> wrote:

I see that horizon doesn't use Travis CI

Is that a political decision?

If not, would there be resistance to adding a minimal .travis.yml file to 
horizon/master so that other people can use travis for their public horizon 
repos?  Travis requires the file to exist on all branches for historical 
reasons.

This wouldn't oblige the use of Travis but it would make it easier for me since 
we're actively developing on a public fork and use Travis.

-Adam
--
Kili - Cloud for Africa: kili.io
Musings: twitter.com/varud
More Musings: varud.com
About Adam: 
www.linkedin.com/in/adamcnelson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-12 Thread Duncan Thomas
On 15 January 2014 18:53, Brant Knudson  wrote:

> At no point do I care what are the different commits that are being brought
> in from oslo-incubator. If the commits are listed in the commit message then
> I feel an obligation to verify that they got the right commits in the
> message and that takes extra time for no gain.

I find that I very much *do* want a list of what changes have been
pulled in, so I've some idea of the intent of the changes. Some of the
OSLO changes can be large and complicated, and the more clues as to
why things changed, the better the chance I've got of spotting
breakages or differing assumptions between cinder and OSLO (of which
there have been a number)

I don't very often verify that the version that has been pulled in is
the very latest or anything like that - generally I want to know:
 - What issue are you trying to fix by doing an update? (The fact OSLO
is ahead of us is rarely a good enough reason on its own to do an
update - preferably reference a specific bug that exists in cinder)
 - What other incidental changes are being pulled in (by intent, not
just the code)
 - If I'm unsure about one of the incidental changes, how do I go find
the history for it without lots of searching (hence the commit ID or the
change ID) - this lets me find bugs, reviews, etc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-12 Thread Jiří Stránský

On 11.3.2014 15:50, Adam Young wrote:

On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:

For what it's worth in Sahara (former Savanna) we inject the second
key by userdata. I.e. we add
echo "${public_key}" >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.

Dmitry

2014-03-10 17:10 GMT+04:00 Jiří Stránský :

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling "keystone-manage pki_setup". Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].


You really should not be doing this.  I should never have written
pki_setup:  it is a developer's tool:  use a real CA and a real certificate.


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] - 
upload pre-created signing cert, signing key and CA cert to controller 
nodes using Heat. This seems like a much cleaner approach to 
initializing the overcloud than having to SSH into it, and it will solve 
both problems I outlined in the initial e-mail.


It creates another problem though - for simple (think PoC) deployments 
without an external CA we'll need to create the keys/certs 
somehow/somewhere anyway :) It shouldn't be hard because it's already 
implemented in keystone-manage pki_setup, but we should figure out a way 
to avoid copy-pasting the world. Maybe Tuskar could call pki_setup locally 
and pass it a parameter to override the default location where 
new keys/certs are generated?



Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread Joshua Harlow
Understandable,

Humans will be humans after all. 

To me if OpenStack is a cloud platform then coming along with it should be best
practices that come with the usage of a cloud platform (treat your instances as 
ephemeral, use configuration management, save your stuff in source control...). 
I have been preaching similar stuff at y! and getting people into the right 
mindset around "the cloud" is IMHO more important than making openstack fit 
people's non-cloudy mindset.

Because once u teach a person to use the cloud right u don't need to have 
openstack compensate for them using it incorrectly.

Sent from my really tiny device...

> On Mar 12, 2014, at 4:45 AM, "CARVER, PAUL"  wrote:
> 
> I have personally witnessed someone (honestly, not me) select "Terminate 
> Instance" when they meant "Reboot Instance" and that mistake is way too easy. 
> I'm not sure if it was a brain mistake or mere slip of the mouse, but it's 
> enough to make people really nervous in a production environment. If there's 
> one thing you can count on about human beings, it's that they'll make 
> mistakes sooner or later. Any system that assumes infallible human beings as 
> a design criteria is making an invalid assumption.
> 
> -- 
> Paul Carver
> VO: 732-545-7377
> Cell: 908-803-1656
> E: pcar...@att.com
> Q Instant Message
> 
> 
> -Original Message-
> From: Tim Bell [mailto:tim.b...@cern.ch] 
> Sent: Tuesday, March 11, 2014 15:43
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
> deletion (step by step)
> 
> 
> Typical cases are user error where someone accidentally deletes an item from 
> a tenant. The image guys have a good structure where images become 
> unavailable and are recoverable for a certain period of time. A regular 
> periodic task cleans up deleted items after a configurable number of seconds 
> to avoid constant database growth.
> 
> My preference would be to follow this model universally (an archive table is 
> a nice way to do it without disturbing production).
> 
> Tim
> 
> 
>>> On Tue, Mar 11, 2014, Mike Wilson  wrote:
>>> Undeleting things is an important use case in my opinion. We do this
>>> in our environment on a regular basis. In that light I'm not sure that
>>> it would be appropriate just to log the deletion and git rid of the
>>> row. I would like to see it go to an archival table where it is easily 
>>> restored.
>> 
>> I'm curious, what are you undeleting and why?
>> 
>> JE
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-12 Thread Russell Bryant
On 03/12/2014 07:35 AM, Dina Belova wrote:
> Thanks TC for spending time on Blazar (ex. Climate, in process of
> renaming) discussion!
> 
> It was decided that potentially reservation idea is interesting for OS
> and it'll be great to have cross-project session on ongoing Atlanta
> Summit and discuss future of reservation/scheduling management in OpenStack.
> 
> Here is link to cross-project session proposal:
> 
> http://summit.openstack.org/cfp/details/45
> 
> Thanks everyone and let's keep working on that idea.

Yes, I do think it would be useful to discuss this in person.  However,
I don't think that was the most important feedback from the TC meeting.

The biggest concern seemed to be that we weren't sure whether Climate
makes sense as an independent project or not.  We think it may make more
sense to integrate what Climate does today into Nova directly.  More
generally, we think reservations of resources may best belong in the
APIs responsible for managing those resources, similar to how quota
management for resources lives in the resource APIs.

There is some expectation that this type of functionality will extend
beyond Nova, but for that we could look at creating a shared library of
code to ease implementing this sort of thing in each API that needs it.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-12 Thread Sergey Lukjanov
Blueprint for renaming in docs has been added -
https://blueprints.launchpad.net/sahara/+spec/savanna-renaming-docs

Erik B. volunteered to cover it, thanks to him.

On Wed, Mar 12, 2014 at 6:09 PM, Sergey Lukjanov  wrote:
> All repos has been renamed, all launchpad projects too, bps/issues
> mappings updated too. Additionally, we're ready to start moving to the
> new irc channel, I'll make separated announcements for both events.
>
> On Tue, Mar 11, 2014 at 5:38 PM, Sergey Lukjanov  
> wrote:
>> RE blueprints assignments - it looks like all bps have initial assignments.
>>
>> On the renaming the main service code side Alex I. is contact person,
>> I'll help him with some setup stuff.
>>
>> Additionally, you can find a bunch of my patches for external renaming
>> related changes -
>> https://review.openstack.org/#/q/status:open+topic:savanna-sahara+-savanna,n,z
>> and internal changes -
>> https://review.openstack.org/#/q/status:open+topic:savanna-sahara+savanna,n,z
>> (only open changes).
>>
>> Thanks.
>>
>> On Tue, Mar 11, 2014 at 5:33 PM, Sergey Lukjanov  
>> wrote:
>>> All launchpad projects has been renamed keeping full path redirects.
>>> It means that you can still reference to the bugs and blueprints under
>>> the savanna launchpad project and it'll be redirected to the new
>>> sahara project.
>>>
>>> All savanna repositories will be renamed to sahara ones on Wednesday,
>>> March 12 between 12:00 to 12:30 UTC [0]
>>>
>>>
>>> [0] 
>>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12&am=30
>>>
>>> On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov  
>>> wrote:
 Matt,

 thanks for moving etherpad notes to the blueprints. I've added some
 notes and details to them and add some assignments to the blueprints
 where we have no choice.

 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
 Sergey Kolekonov
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 - Dmitry Mescheryakov

 Thanks.

 On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee  wrote:
> On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:
>>
>> Hey folks,
>>
>> we're now starting working on the project renaming. You can find
>> details in the etherpad [0]. We'll move all work items to the
>> blueprints, one blueprint per sub-project to well track progress and
>> work items. The general blueprint is [1], it'll depend on all other
>> blueprints and it's currently consists of general renaming tasks.
>>
>> Current plan is to assign each subproject blueprint to volunteer.
>> Please, contact me and Matthew Farrellee if you'd like to take the
>> renaming bp.
>>
>> Please, share your ideas/suggestions in ML or etherpad.
>>
>> [0] https://etherpad.openstack.org/p/savanna-renaming-process
>> [1] 
>> https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming
>>
>> Thanks.
>>
>> P.S. Please, prepend email topics with [sahara] and append [savanna]
>> to the end of topic (like in this email) for the transition period.
>
>
> savann^wsahara team,
>
> i've separated out most of the activities that can happen in parallel,
> aligned them on repository boundaries, and filed blueprints for the 
> efforts.
> now we need community members to take ownership (be the assignee) of the
> blueprints. taking ownership means you'll be responsible for the renaming 
> in
> the repository, coordinating with other owners and getting feedback from 
> the
> community about important questions (such as compatibility requirements).
>
> to take ownership, just go to the blueprint and assign it to yourself. if
> there is already an assignee, reach out to that person and offer them
> assistance.
>
> blueprints up for grabs -
>
> what: savanna^wsahara ci
> blueprint:
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
> comments: this should be taken by someone already familiar with the ci. 
> i'd
> nominate skolekonov
>
> what: saraha puppet modules
> blueprint:
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
> comments: this should be taken by someone who can validate the changes. 
> i'd
> nominate sbadia or dizz
>
> what: sahara extras
> blueprint:
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
> comments: this could be taken by anyone
>
> what: sahara dib image elements
> blueprint:
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
> comments: this could be taken by anyone
>
> what: sahara python client
> blueprint:
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
> comments: this should be done by someone w/ experience in the client. i'd
> nominate tmckay

[openstack-dev] [Infra] [Savanna] [Sahara] [Ceilometer] [Chef] Gerrit maintenance concluded

2014-03-12 Thread Jeremy Stanley
This message is a reminder to update remotes on any of your local
clones of the following repositories which we just renamed:

openstack/python-savannaclient
-> openstack/python-saharaclient

openstack/savanna
-> openstack/sahara

openstack/savanna-dashboard
-> openstack/sahara-dashboard

openstack/savanna-extra
-> openstack/sahara-extra

openstack/savanna-image-elements
-> openstack/sahara-image-elements

stackforge/puppet-savanna
-> stackforge/puppet-sahara

stackforge/savanna-ci-config
-> stackforge/sahara-ci-config

stackforge/savanna-guestagent
-> stackforge/sahara-guestagent

stackforge/cookbook-openstack-metering
-> stackforge/cookbook-openstack-telemetry

Today's Gerrit maintenance concluded successfully. Due to some
unrelated test failures and Jenkins jobs running longer than
anticipated, the Gerrit outage began late and lasted from roughly
12:30-12:45 UTC.

We had a bit of a miscalculation on the criticality of the
corresponding devstack-gate patch for cloning sahara repositories,
and so a few DevStack-based jobs may have failed between 12:45 and
13:00 (the workspace log will mention a failure to fetch the origin
remote for openstack/savanna if so).

A minor issue with some recent statusbot improvements left stale
topics in a lot of channels, but I have manually corrected them all
at this point and a fix is already in review for that.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] moving to the #openstack-sahara channel [savanna]

2014-03-12 Thread Sergey Lukjanov
Hi folks,

let's start using the new IRC channel #openstack-sahara on freenode (let's
do it right now).

Channel logs are available at eavesdrop [0] and the git bot already works ok.

**Sahara core team, please keep an eye on the obsolete #savanna
channel to help folks move to the right place.**

Thanks.

[0] http://eavesdrop.openstack.org/irclogs/%23openstack-sahara/

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-12 Thread Paul Czarkowski
There is a bug in the default docker registry.   Set this in your localrc
and it should work

DOCKER_REGISTRY_IMAGE=samalba/docker-registry

On 3/12/14 1:35 AM, "urgensherpa"  wrote:

>hello there,i setup using devstack ..below is my docker version output
>--
>
>redhat@test:~/devstack$ docker version
>Client version: 0.7.6
>Go version (client): go1.2
>Git commit (client): bc3b2ec
>Server version: 0.7.6
>Git commit (server): bc3b2ec
>Go version (server): go1.2
>Last stable version: 0.9.0, please update docker
>--
>-
>I followed a guide from
>*http://damithakumarage.wordpress.com/2014/01/31/how-to-setup-openstack-ha
>vana-with-docker-driver/*
>
>--
>-
>I tagged an image using
>
>$ docker tag urgensherpa/lamp6 192.168.140.193:5042/lamp6
>Below is my 'docker push' commad output.
>
>redhat@test:~/devstack$ docker push 192.168.140.193:5042/lamp6
>
>The push refers to a repository [192.168.140.193:5042/lamp6] (len: 1)
>Sending image list
>Pushing repository 192.168.140.193:5042/lamp6 (1 tags)
>2014/03/11 13:22:03 HTTP code 500 while uploading metadata: invalid
>character Œ<' looking for beginning of value
>--
>
>Please let me know what i need to do thanks
>
>
>
>--
>View this message in context:
>http://openstack.10931.n7.nabble.com/Openstack-Nova-Docker-Devstack-with-d
>ocker-driver-tp28361p34942.html
>Sent from the Developer mailing list archive at Nabble.com.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][devstack] Working config for Neutron install by DevStack?

2014-03-12 Thread Mike Spreitzer
I want to use DevStack to install and configure OpenStack with Neutron, 
into a VM in an OpenStack undercloud.  I looked at 
https://wiki.openstack.org/wiki/NeutronDevstack and tried that, and 
failed.  Looking deeper, I see there are very important additional details 
to pay attention to: flat networking vs. more complex, with attention to 
address ranges all around.  Is there a short sharp description of the 
DevStack defaults in those regards?  Does anybody have a simple example 
that works, with comments about the networking in the undercloud?

I have three underclouds available.  Undercloud 9 is built on bare metal 
machines in the 10.0.0.0/8 address space, but does not in any way use 
addresses in the 10.0.0.0/15 space; this is Havana with Neutron doing 
non-flat networking.  Undercloud 15 is built on a single bare metal 
machine in the 9.0.0.0/8 address space; this is Icehouse as of a few weeks 
ago, with nova networking giving VMs static IP addresses in the 
10.0.0.0/16 space (obviously I can add networks and subnets inside this 
undercloud).  Undercloud 22 is built on a single bare metal machine in the 
9.0.0.0/8 address space; this is Grizzly with Quantum doing flat 
networking.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-12 Thread Murray, Paul (HP Cloud Services)
Reviewing this thread to come to a conclusion (for myself at least - and 
hopefully so I can document something so reviewers know why I did it)

For approach:
1. plugins should use stevedore with entry points (as stated by Russell)
2. the plugins should be explicitly selected through configuration 

For api stability:
I'm not sure there was a consensus. Personally I would write a base class for 
the plugins and document in it that the interface is unstable. Sound good?
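
To make that concrete, here is a minimal sketch of what I have in mind.
The namespace, option name and entry point below are illustrative
assumptions, not an existing Nova interface.

# setup.cfg of the package that ships a plugin (illustrative namespace):
#   [entry_points]
#   nova.example.plugins =
#       noop = myplugin.noop:NoopPlugin

from oslo.config import cfg
from stevedore import driver

CONF = cfg.CONF
CONF.register_opts([
    cfg.StrOpt('example_plugin', default='noop',
               help='Entry point name of the plugin to load'),
])


class ExamplePluginBase(object):
    """Base class for plugins.

    NOTE: this interface is unstable and may change without notice.
    """

    def do_something(self, resource):
        raise NotImplementedError()


def load_plugin():
    # Only the explicitly configured plugin is loaded; nothing gets
    # pulled in just because it happens to be installed.
    mgr = driver.DriverManager(namespace='nova.example.plugins',
                               name=CONF.example_plugin,
                               invoke_on_load=True)
    return mgr.driver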

BTW: this is one of those things that could be put in a place to make and 
record decisions (like the gerrit idea for blueprints). But now I am referring 
to another thread 
[http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html ]

Paul.


-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: 04 March 2014 21:25
To: Murray, Paul (HP Cloud Services)
Cc: OpenStack Development Mailing List (not for usage questions); 
d...@danplanet.com
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

And sorry, as to your original problem, the loadables approach is kinda messy 
since only the classes that are loaded when *that* module is loaded are used
(vs. explicitly specifying them in a config). You may get different results 
when the flow changes.

Either entry-points or config would give reliable results.


On 03/04/2014 03:21 PM, Murray, Paul (HP Cloud Services) wrote:
> In a chat with Dan Smith on IRC, he was suggesting that the important thing 
> was not to use class paths in the config file. I can see that internal 
> implementation should not be exposed in the config files - that way the 
> implementation can change without impacting the nova users/operators.

There's plenty of easy ways to deal with that problem vs. entry points.

MyModule.get_my_plugin() ... which can point to anywhere in the module 
permanently.

Also, we don't have any of the headaches of merging setup.cfg sections (as we 
see with oslo.* integration).

> Sandy, I'm not sure I really get the security argument. Python provides every 
> means possible to inject code, not sure plugins are so different. Certainly 
> agree on choosing which plugins you want to use though.

The concern is that any compromised part of the python eco-system can get 
auto-loaded with the entry-point mechanism. Let's say Nova auto-loads all 
modules with entry points in the [foo] section. All I have to do is create a setup
that has a [foo] section and my code is loaded.
Explicit is better than implicit.

So, assuming we don't auto-load modules ... what does the entry-point approach 
buy us?


> From: Russell Bryant [rbry...@redhat.com] We should be careful though.  
> We need to limit what we expose as external plug points, even if we consider 
> them unstable.  If we don't want it to be public, it may not make sense for 
> it to be a plugin interface at all.

I'm not sure what the concern with introducing new extension points is?
OpenStack is basically just a big bag of plugins. If it's optional, it's 
supposed to be a plugin (according to the design tenets).



> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-12 Thread Russell Bryant
On 03/12/2014 08:02 AM, Sean Dague wrote:
> On 03/12/2014 07:36 AM, Russell Bryant wrote:
>> Note that devstack is going to break for docker and Nova master
>> right now.  We're in the middle of moving the docker driver.  In
>> the meantime, use a rev of Nova before this merge:
>> 
>> https://review.openstack.org/#/c/79740/
>> 
>> Once the following change for the new repo merges, we can update 
>> devstack to reflect the new location:
>> 
>> https://review.openstack.org/#/c/79900/
>> 
>> If you really want to use HEAD of Nova master in the meantime,
>> you can install the docker driver from here:
>> 
>> https://github.com/russellb/nova-docker
>> 
>> Then configure nova.conf with:
>> 
>> compute_driver=novadocker.virt.docker.driver.DockerDriver
>> 
> 
> Honestly, I expect that removing the driver from Nova means we
> remove it from devstack. Devstack really tries to minimize being
> some random assembly tool for things out of tree. If the driver
> isn't good enough for Nova, I don't see why we'd be encouraging
> people to use it in devstack.

Yep, given devstack support for plugins, it's not a big deal to have
the devstack support move over to the nova-docker repo.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] mongodb notification driver

2014-03-12 Thread Doug Hellmann
On Tue, Mar 11, 2014 at 10:09 PM, Hiroyuki Eguchi wrote:

> I'm envisioning a mongodb notification driver.
>
> Currently, For troubleshooting, I'm using a log driver of notification,
> and sent notification log to rsyslog server, and store log in database
> using rsyslog-mysql package.
>
> I would like to make it more simple, So I came up with this feature.
>
> Ceilometer can manage notifications using mongodb, but Ceilometer should
> have the role of Metering, not Troubleshooting.
>
> If you have any comments or suggestion, please let me know.
> And please let me know if there's any discussion about this.
>

Ceilometer can record the raw notification event, so you could use it to
collect debugging data without needing a separate messaging driver or
database connections in every service that sends notifications.

Doug



>
> Thanks.
> --hiroyuki
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Refresher on OSLO-Incubator

2014-03-12 Thread Doug Hellmann
On Tue, Mar 11, 2014 at 11:07 PM, John Griffith  wrote:

> Hey Everyone,
>
> I wanted to send an email out to point out something that we ran across in
> Cinder yesterday.  First I want to review my understanding of how
> OSLO-Incubator is intended to work:
>
> The idea behind having the OSLO repository is to consolidate the various
> modules and such that all of the OpenStack projects use.  Not only is this
> great to reduce code duplication (or at least avoid reinventing the wheel), it also
> provides consistency and what should in the end be more reliable modules
> for all of those methods and functionality that all of the OpenStack
> projects share.
>
> Typically in Cinder if a patch comes along that attempts to modify
> anything in cinder/openstack/common directly it's rejected, the reason is
> that the idea of OSLO is that it is to be the master/upstream repository
> for the shared code.  If a change is needed or a bug needs fixing it needs
> to be fixed their first, and then synched back to the other projects.
>
> In my personal opinion the whole concept of OSLO-Incubator falls apart and
> doesn't work if this process isn't followed.  If the OSLO code needs a
> special customization for a single project then we need to look at the
> module and see if it can be modified to suit everyones needs, or said
> project just shouldn't import that module and should use their own (I know
> some won't like that but hey, it's reality).
>
> Anyway, the reason I'm sending this email out is that recently we had a
> problem showing up in CI with Cinder-API logging a ton of tracebacks.  It
> wasn't overly visible at first because the tests were actually passing, but
> it was a problem in logging and the logging messages.  After some digging
> it turned out that the problem was actually a bug in the
> openstack/common/log.py module which we just recently synched from OSLO,
> bug here [1].
>
> When I first started looking at this I discounted the synch with log.py
> because I noticed that other project (based on git history) had performed
> the same sync recently and had the same version.  After some digging and
> some work by Luis and others however we noticed that those projects had
> patched the log.py file directly in the project (Nova and Glance
> in particular).
>
> So the problem now is that even though we have what we call "common" it
> seems there's a good chance that a number of projects have their own custom
> version of the code that's there.  That defeats the purpose in my opinion.
>  I don't want to argue the concept or policy of OSLO-Incubator code, but my
> point is that we do have a policy and we agreed on it so we should be
> careful to make sure we follow it.  It's easy for things like this to slip
> by so I'm by no means criticizing (especially since I'm sure there's
> similar things in Cinder), I just mentioned it in the project meeting today
> and folks thought it might be good to get it out on the ML to remind all of
> us about the process here.
>

Thanks for raising this, John, it's a good reminder.

This is also one reason we are going to be working so hard during Juno to
move code out of the incubator and into libraries. In addition to
eliminating some of the copying, it will force us to address some of these
sorts of slight incompatibility issues, because projects won't have private
copies to modify.

Doug



>
> Thanks,
> John
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] all git repos / lp projects have been renamed to sahara [savanna]

2014-03-12 Thread Sergey Lukjanov
Hi folks,

please note that all repos have been renamed. You should update your
git remotes and Gerrit queries. .gitreview updates are on the way.

DevStack/Tempest jobs aren't working right now, pending an update in
DevStack; I'll send a note when they start working again. Savanna-ci
will be updated soon too.

BTW, we have redirects from the old name to the new one for both the
GitHub repos and the Launchpad projects, and the same for the wiki pages.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-12 Thread Sergey Lukjanov
All repos have been renamed, all Launchpad projects too, and the
bp/issue mappings have been updated as well. Additionally, we're ready
to start moving to the new IRC channel; I'll make separate announcements
for both events.

On Tue, Mar 11, 2014 at 5:38 PM, Sergey Lukjanov  wrote:
> RE blueprints assignments - it looks like all bps have initial assignments.
>
> On the main service code renaming side, Alex I. is the contact person;
> I'll help him with some setup stuff.
>
> Additionally, you can find a bunch of my patches for external renaming
> related changes -
> https://review.openstack.org/#/q/status:open+topic:savanna-sahara+-savanna,n,z
> and internal changes -
> https://review.openstack.org/#/q/status:open+topic:savanna-sahara+savanna,n,z
> (only open changes).
>
> Thanks.
>
> On Tue, Mar 11, 2014 at 5:33 PM, Sergey Lukjanov  
> wrote:
>> All Launchpad projects have been renamed, keeping full-path redirects.
>> That means you can still refer to bugs and blueprints under the
>> savanna Launchpad project and they'll be redirected to the new
>> sahara project.
>>
>> All savanna repositories will be renamed to sahara ones on Wednesday,
>> March 12, between 12:00 and 12:30 UTC [0]
>>
>>
>> [0] 
>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12&am=30
>>
>> On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov  
>> wrote:
>>> Matt,
>>>
>>> thanks for moving etherpad notes to the blueprints. I've added some
>>> notes and details to them and added some assignments to the blueprints
>>> where we have no choice.
>>>
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
>>> Sergey Kolekonov
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
>>> - Dmitry Mescheryakov
>>>
>>> Thanks.
>>>
>>> On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee  wrote:
 On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:
>
> Hey folks,
>
> we're now starting working on the project renaming. You can find
> details in the etherpad [0]. We'll move all work items to the
> blueprints, one blueprint per sub-project to well track progress and
> work items. The general blueprint is [1], it'll depend on all other
> blueprints and it's currently consists of general renaming tasks.
>
> Current plan is to assign each subproject blueprint to volunteer.
> Please, contact me and Matthew Farrellee if you'd like to take the
> renaming bp.
>
> Please, share your ideas/suggestions in ML or etherpad.
>
> [0] https://etherpad.openstack.org/p/savanna-renaming-process
> [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming
>
> Thanks.
>
> P.S. Please, prepend email topics with [sahara] and append [savanna]
> to the end of topic (like in this email) for the transition period.


 savann^wsahara team,

 i've separated out most of the activities that can happen in parallel,
 aligned them on repository boundaries, and filed blueprints for the 
 efforts.
 now we need community members to take ownership (be the assignee) of the
 blueprints. taking ownership means you'll be responsible for the renaming 
 in
 the repository, coordinating with other owners and getting feedback from 
 the
 community about important questions (such as compatibility requirements).

 to take ownership, just go to the blueprint and assign it to yourself. if
 there is already an assignee, reach out to that person and offer them
 assistance.

 blueprints up for grabs -

 what: savanna^wsahara ci
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
 comments: this should be taken by someone already familiar with the ci. i'd
 nominate skolekonov

 what: saraha puppet modules
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
 comments: this should be taken by someone who can validate the changes. i'd
 nominate sbadia or dizz

 what: sahara extras
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
 comments: this could be taken by anyone

 what: sahara dib image elements
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
 comments: this could be taken by anyone

 what: sahara python client
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
 comments: this should be done by someone w/ experience in the client. i'd
 nominate tmckay

 what: sahara horizon plugin
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
 comments: this will require experience and care. i'd nominate croberts

 what: sahara guestagent
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 comments: i'd nominate dmit

Re: [openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-12 Thread Sean Dague
On 03/12/2014 09:01 AM, Flavio Percoco wrote:
> On 11/03/14 16:25 -0700, Clint Byrum wrote:
>> Hi. I asked in #openstack-glance a few times today but got no response,
>> so sorry for the list spam.
>>
>> https://review.openstack.org/#/c/79710/
>>
>> This change introduces a backward-incompatible change to the defaults
>> compared with Havana. If a user has chosen to configure swift, but did
>> not add swift
>> to the known_stores, then when that user upgrades Glance, Glance will
>> fail to start because their swift configuration will be invalid.
>>
>> This broke TripleO btw, which tries hard to use default configurations.
>>
>> Also I am not really sure why this approach was taken. If a user has
>> explicitly put swift configuration options in their config file, why
>> not just load swift store? Oslo.config will help here in that you can
>> just add all of the config options but not actually expect them to be
>> set. It seems entirely backwards to just fail in this case.
>>
> 
> Here's an attempt to fix this issue without reverting the patch.
> Feedback appreciated.
> 
> https://review.openstack.org/#/c/79935/

ACK. Looks pretty good. You might want to consider using one of the oslo
deprecation functions to make it consistent on the deprecation side.
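
For instance, a rough sketch of the optional-registration idea (using
oslo.config; the option name and helper here are illustrative, not
Glance's actual code):

import logging

from oslo.config import cfg

LOG = logging.getLogger(__name__)
CONF = cfg.CONF

# Register the swift options up front with no value required, so having
# them in the schema never forces the store to be configured.
CONF.register_opts([
    cfg.StrOpt('swift_store_auth_address', default=None,
               help='Swift auth endpoint (only needed for the swift store).'),
])


def effective_stores(known_stores):
    # Enable the swift store implicitly when its options are actually set,
    # instead of failing at startup, and warn that this behaviour is
    # deprecated in favour of listing it explicitly in known_stores.
    stores = set(known_stores)
    swift_store = 'glance.store.swift.Store'
    if CONF.swift_store_auth_address and swift_store not in stores:
        LOG.warning('Enabling the swift store implicitly from its options '
                    'is deprecated; please add it to known_stores.')
        stores.add(swift_store)
    return stores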

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

2014-03-12 Thread yatin kumbhare
When ZeroMQ is used as the RPC backend, the "nova-rpc-zmq-receiver"
service needs to run on every node.

zmq-receiver receives messages on tcp://*:9501 with a PULL socket and,
based on the topic name (which is extracted from the received data),
forwards the data to the respective local service over IPC.

Meanwhile, the OpenStack services themselves listen/bind on an IPC
socket, also of type PULL.

I see zmq-receiver as a bottleneck and an overhead in the current design:
1. If this service crashes, communication is lost.
2. There is the overhead of running this extra service on every node,
which just forwards messages as-is.


I would like to remove the zmq-receiver service and enable direct
communication between services (nova-*, cinder-*) across and within nodes.

I believe this will make the ZeroMQ experience more seamless.

The communication will change from IPC to a ZeroMQ TCP socket for each
service.

For example, an rpc.cast from the scheduler to compute would be direct
RPC message passing, with no routing through zmq-receiver.

With TCP, each service will bind to a unique port (the port range could
be, say, 9501-9510).

from nova.conf, rpc_zmq_matchmaker =
nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing.

I have put arbitrary port numbers after each service name.

file:///etc/oslo/matchmaker_ring.json

{
 "cert:9507": [
 "controller"
 ],
 "cinder-scheduler:9508": [
 "controller"
 ],
 "cinder-volume:9509": [
 "controller"
 ],
 "compute:9501": [
 "controller","computenodex"
 ],
 "conductor:9502": [
 "controller"
 ],
 "consoleauth:9503": [
 "controller"
 ],
 "network:9504": [
 "controller","computenodex"
 ],
 "scheduler:9506": [
 "controller"
 ],
 "zmq_replies:9510": [
 "controller","computenodex"
 ]
 }

Here, the JSON file keeps track of the port for each service.
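
To make the direct-TCP idea more concrete, here is a very rough pyzmq
sketch (illustrative only, not the oslo.messaging driver code; in a real
deployment the two halves run on different hosts):

import zmq

ctx = zmq.Context()

# Receiving side (e.g. nova-compute): the service itself binds a PULL
# socket on its well-known TCP port from matchmaker_ring.json, so no
# per-node zmq-receiver is needed to fan messages out over IPC.
listener = ctx.socket(zmq.PULL)
listener.bind("tcp://*:9501")  # "compute:9501" in the ring file

# Sending side (e.g. nova-scheduler doing an rpc.cast): connect straight
# to the target service; 127.0.0.1 stands in for "computenodex" here.
caster = ctx.socket(zmq.PUSH)
caster.connect("tcp://127.0.0.1:9501")
caster.send_json({"method": "run_instance", "args": {"instance_uuid": "x"}})

print(listener.recv_json())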

Looking forward to community feedback on this idea.


Regards,
Yatin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-12 Thread Adam Young

On 03/11/2014 01:20 PM, Clint Byrum wrote:

Excerpts from Adam Young's message of 2014-03-11 07:50:58 -0700:

On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:

For what it's worth, in Sahara (formerly Savanna) we inject the second
key via userdata. I.e. we add
echo "${public_key}" >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.
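
Roughly, for illustration (assuming python-novaclient; the endpoint,
paths and names are placeholders, not our actual provisioning code):

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'demo', 'http://keystone:5000/v2.0')

extra_public_key = 'ssh-rsa AAAA... second management key'
user_data = ('#!/bin/bash\n'
             'echo "%s" >> /home/ec2-user/.ssh/authorized_keys\n'
             % extra_public_key)

# The first key pair is injected by Nova as usual; the second one rides
# along in the user-data script above.
nova.servers.create(name='cluster-node-1',
                    image='<image-uuid>',
                    flavor='<flavor-id>',
                    key_name='main-keypair',
                    userdata=user_data)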

Dmitry

2014-03-10 17:10 GMT+04:00 Jiří Stránský :

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling "keystone-manage pki_setup". Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].

You really should not be doing this.  I should never have written
pki_setup: it is a developer's tool. Use a real CA and a real certificate.


This alludes to your point, but also says that keystone-manage can be used:

http://docs.openstack.org/developer/keystone/configuration.html#certificates-for-pki

Yep.  And we need to get a better story for certificate management.


Seems that some time should be spent making this more clear if for some
reason pki_setup is weak for production use cases. My brief analysis
of the code says that the weakness is that the CA should generally be
kept apart from the CSRs so that a compromise of a node does not lead
to an attacker being able to generate their own keystone service. This
seems like a low probability attack vector, as compromise of the keystone
machines also means write access to the token backend, and thus no need
to generate one's own tokens (you can just steal all the existing tokens).


This is a pretty good explanation.  I would love to see it submitted as 
part of the keystone configuration document above.




I'd like to see it called out in the section above though, so that
users can know what risk they're accepting when they use what looks like a
recommended tool. Another thing would be to log copious warnings when
pki_setup is run that it is not for production usage. That should be
sufficient to scare some diligent deployers into reading the docs closely
and mitigating the risk.

Very good idea.
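
Something as simple as this would go a long way (illustrative only, using
plain Python logging rather than the actual keystone-manage code):

import logging

logging.basicConfig(level=logging.WARNING)
LOG = logging.getLogger('keystone.cli')


def pki_setup():
    # Shout loudly before doing anything else.
    LOG.warning('pki_setup is intended for development and test '
                'deployments only; for production, obtain certificates '
                'from a real CA and configure them explicitly.')
    # ... then generate the self-signed CA, key and signing cert as today ...


pki_setup()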



Anyway, shaking fist at users and devs in -dev for using tools in the
documentation probably _isn't_ going to convince anyone to spend more
time setting up PKI tokens.


The only one I am shaking my fist at is myself...and maybe those that 
browbeat me into writing the utility.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Difficult to understand message when using incorrect role against object in Neutron

2014-03-12 Thread Adam Young

On 03/11/2014 11:42 AM, Sudipta Biswas3 wrote:

Hi all,

I'm hitting a scenario where a user runs an action against an object 
in Neutron for which they don't have the authority to perform the 
action (perhaps their role allows reading the object, but not updating 
it). The following is returned to the user when such an action is 
performed: "The resource could not be found". This can be confusing 
to users. For example, a basic user may not have the privilege to edit 
a network, attempts to do so, and ends up getting the resource-not-found 
message, even though they have read privileges.


This is a confusing message because the object they just read is now 
reported as not existing. This is not true; the root issue is that they 
do not have authority over it. One can argue that, for security reasons, 
we should state that the object does not exist. However, it creates an 
odd scenario where certain roles can read an object but not 
create/update/delete it.


I have filed a community bug for the same: 
https://bugs.launchpad.net/neutron/+bug/1290895


I'm proposing that we change the message to "The resource could not be 
found or user's role does not have sufficient privileges to run the 
operation."
There is a serious security concern with people probing for information 
that they do not have access to. The 404 is a way to make it impossible 
to distinguish between "the object does not exist" and "it exists but it 
does not belong to you."
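
For illustration, the pattern under discussion is roughly the following
(not Neutron's actual code; the exception name and helper are stand-ins):

import webob.exc


class PolicyNotAuthorized(Exception):
    """Stand-in for the real policy-enforcement failure."""


def enforce_or_404(enforce, context, action, target):
    # Map an authorization failure onto a 404 so a caller cannot tell
    # "does not exist" apart from "exists, but you may not modify it".
    try:
        enforce(context, action, target)
    except PolicyNotAuthorized:
        raise webob.exc.HTTPNotFound('The resource could not be found.')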





I'm sending this to the mailing list to see if there are any discussion 
points against making this change.


Thanks,
Sudipto


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

