Re: [openstack-dev] [qa][tempest] Where to do response body validation

2014-03-13 Thread Valeriy Ponomaryov
I disagree with moving this logic to tempest/services/*. The idea of these
modules is to assemble requests and return responses. Testing and verification
should be layered on top of them, either in a base class or in the tests,
depending on the situation...

-- 
Kind Regards
Valeriy Ponomaryov


On Thu, Mar 13, 2014 at 6:55 AM, Christopher Yeoh cbky...@gmail.com wrote:

 Hi,

 The new tempest body response validation is being added to individual
 testcases. See this as an example:

 https://review.openstack.org/#/c/78149

 After having a look at https://review.openstack.org/#/c/80174/
 I'm now thinking that perhaps we should be doing the response validation
 in the tempest/services/compute classes. And only check the
 response body if the status code is a success code (and then check that
 it is an appropriate success code).

 I think this will lead to fewer changes being needed in the end, as the
 response body checking will not need to be added to individual tests.

 There may be some complications with handling extensions, but I think
 they all implement backwards-compatible behaviour, so it should be OK.

 Anyone have any thoughts about this alternative approach?

 Regards,

 Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo majop...@redhat.com wrote:

 I'm not familiar with unix domain sockets at a low level, but I wonder
 if authentication could be achieved just with permissions (only users in
 group neutron or group rootwrap accessing this service).


It can be enforced, but it is not needed at all (see below).


 I find it an interesting alternative to the other proposed solutions, but
 there are some challenges associated with it which could make it more
 complicated:

 1) Access control, file system permission based or token based,


If we pass the token to the calling process through a pipe bound to stdout,
it won't be intercepted so token-based authentication for further requests
is secure enough.

2) stdout/stderr/return encapsulation/forwarding to the caller,
if we have a simple/fast RPC mechanism we can use, it's a matter
of serializing a dictionary.


The RPC implementation in the multiprocessing module uses either xmlrpclib or
pickle-based RPC. It should be enough to pass the output of a command.
If we ever hit a performance problem with passing long strings we can even
pass an opened pipe's descriptors over the UNIX socket to let the caller
interact with the spawned process directly.
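
To make this more concrete, here is a rough sketch (hypothetical socket path,
token and command handling, not the actual oslo.rootwrap code) of what such a
daemon could look like using the stdlib multiprocessing.connection module,
which already does pickle-based RPC over a UNIX socket with HMAC
authentication via authkey:

import subprocess
from multiprocessing.connection import Listener

SOCKET_PATH = '/var/run/neutron/rootwrap.sock'     # assumed path
TOKEN = b'token-passed-to-caller-via-stdout-pipe'  # assumed shared secret

def serve():
    listener = Listener(SOCKET_PATH, family='AF_UNIX', authkey=TOKEN)
    while True:
        conn = listener.accept()
        cmd = conn.recv()  # e.g. ['ip', 'netns', 'list']
        # A real daemon would first check cmd against the rootwrap filters.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        # Serialize stdout/stderr/returncode back to the caller as a dict.
        conn.send({'stdout': out, 'stderr': err,
                   'returncode': proc.returncode})
        conn.close()

if __name__ == '__main__':
    serve()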


 3) client side implementation for 1 + 2.


Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning the rootwrap daemon and interacting with it should be
hidden, so that changes to services will be minimal.
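
For illustration only (the real client module and call signature may well
differ), the client side of the sketch above could be as small as:

from multiprocessing.connection import Client

def call(cmd, socket_path='/var/run/neutron/rootwrap.sock',
         token=b'token-passed-to-caller-via-stdout-pipe'):
    # Hypothetical client helper matching the daemon sketch above.
    conn = Client(socket_path, family='AF_UNIX', authkey=token)
    try:
        conn.send(cmd)
        return conn.recv()  # {'stdout': ..., 'stderr': ..., 'returncode': ...}
    finally:
        conn.close()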

4) It would need to accept new domain socket connections in green threads
 to avoid spawning a new process to handle a new connection.


We can do connection pooling if we ever run into performance problems with
connecting a new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid using fancy libraries (eventlet), both
because they would add a fat new requirement for oslo.rootwrap (it currently
depends only on six) and because they would run more possibly buggy and
unsafe code with elevated privileges.
A simple threaded daemon should be enough, given that it will handle the needs
of only one service process.


 The advantages:
* we wouldn't need to break the only-python-rule.
* we don't need to rewrite/translate rootwrap.

 The disadvantages:
   * it needs changes on the client side (neutron + other projects),


As I said, changes should be minimal.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest] Where to do response body validation

2014-03-13 Thread Kenichi Oomichi

Hi Chris,

Thank you for picking it up,

 -Original Message-
 From: Christopher Yeoh [mailto:cbky...@gmail.com]
 Sent: Thursday, March 13, 2014 1:56 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [qa][tempest] Where to do response body validation
 
 The new tempest body response validation is being added to individual
 testcases. See this as an example:
 
 https://review.openstack.org/#/c/78149
 
 After having a look at https://review.openstack.org/#/c/80174/
 I'm now thinking that perhaps we should be doing the response validation
 in the tempest/services/compute classes. And only check the
 response body if the status code is a success code (and then check that
 it is an appropriate success code).
 
 I think this will lead to fewer changes being needed in the end, as the
 response body checking will not need to be added to individual tests.
 
 There may be some complications with handling extensions, but I think
 they all implement backwards-compatible behaviour, so it should be OK.
 
 Anyone have any thoughts about this alternative approach?

I like the idea of performing the response body validation in the REST
client. Tempest would then be able to check the response body every time,
and it would reduce the amount of test code.

One concern is that the Nova API returns a different response body depending
on whether the caller is an admin user or not. I'd like to check that the
response body contains the attributes an admin user can get. For example, the
"get server info" API returns a response including the OS-EXT-STS and
OS-EXT-SRV-ATT attributes for an admin user.

So how about performing the basic validation (without the special attributes)
only in each REST client?
The special validation (such as the admin info above) would then be performed
in each test. The schema size for the special cases could be reduced.
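
To illustrate the split (this is only a sketch with a made-up schema and
helper name, not actual Tempest code), the REST client could do the basic
check itself with jsonschema, and only when the call succeeded:

import jsonschema

# Hypothetical reduced schema without admin-only attributes such as
# OS-EXT-STS / OS-EXT-SRV-ATT.
basic_get_server_schema = {
    'type': 'object',
    'properties': {
        'server': {
            'type': 'object',
            'properties': {
                'id': {'type': 'string'},
                'status': {'type': 'string'},
            },
            'required': ['id', 'status'],
        },
    },
    'required': ['server'],
}

def validate_success_response(resp, body, schema=basic_get_server_schema,
                              expected_codes=(200,)):
    # Only validate the body when the status code is a success code.
    assert resp.status in expected_codes, 'Unexpected status %s' % resp.status
    jsonschema.validate(body, schema)

The admin-only attributes would then be checked by a separate schema applied
only in the admin tests.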


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Avoiding dhcp port loss using conductors

2014-03-13 Thread Zhongyue Luo
Hi folks,

I have been working on enhancing the base performance of Neutron using ML2 OVS
during my free cycles for the past few weeks. The current bottleneck in
Neutron is, as we all know, the point where a neutron agent requests
security_group_rules_for_devices from the server in order to update its
ports' SG rules. This operation takes 24 seconds on average according to my
observation, and it is also the main cause of some VMs not being able to
receive DHCP responses under severe conditions.

To enhance the throughput performance of VM provisioning with Neutron, I've
created a neutron-conductor service which currently only handles
security_group_rules_for_devices requests. Though it is still in prototype
status, it works great in my devstack env and in the OpenStack deployment
in our lab. In my all-in-one devstack env with 4cores and 8G mem, I was
able to provision 50 nano flavor VMs without any network failures. In the
lab deployment, which has 16 compute nodes, I was able to provision 500
tiny flavor VMs with 8sec interval between every nova boot. All of which
successfully obtained fixed IPs.

I've pushed my devstack patch which launches the neutron conductor service
to my github account:
https://github.com/zyluo/devstack/tree/havana_neutron_conductor_sg_only

The patch for the neutron-conductor is located at:
https://github.com/zyluo/neutron/tree/havana_conductor_sg_only

If you are interested, feel free to try this on your devstack env.
Instructions are in the commit message:
https://github.com/zyluo/devstack/commit/07382fa11ec9049b5e732390aca67d4bc9a7443b

I've only tested on CentOS 6.4 using ML2 OVS. Any feedback would be
appreciated. Thanks!

-- 
*Intel SSG/STO/DCST/CIT*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 All,

 I was writing down a summary of all of this and decided to just do it
 on an etherpad.  Will you help me capture the big picture there?  I'd
 like to come up with some actions this week to try to address at least
 part of the problem before Icehouse releases.

 https://etherpad.openstack.org/p/neutron-agent-exec-performance


Great idea! I've added some details on my proposal there.

As for your proposed multitool, I'm very concerned about moving logic into a
bash script. I think we should not deviate from having a Python-based agent
rather than a bash-based one.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Zhangleiqiang (Trump)
Regarding (1) [Single VM], the following use cases can supplement it.

1. Protection Group: Define the set of instances to be protected.
2. Protection Policy: Define the policy for the protection group, such as sync
period, sync priority, advanced features, etc.
3. Recovery Plan: Define the recovery steps during recovery, such as the
power-off and boot order of instances, etc.

--
zhangleiqiang (Ray)

Best Regards


 -Original Message-
 From: Bruce Montague [mailto:bruce_monta...@symantec.com]
 Sent: Thursday, March 13, 2014 2:38 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for
 stakeholder
 
 
 Hi, regarding the call to create a list of disaster recovery (DR) use cases
 ( http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html
  ), the following list sketches some speculative OpenStack DR use cases. These
 use cases do not reflect any specific product behavior and span a wide
 spectrum. This list is not a proposal, it is intended primarily to solicit 
 additional
 discussion. The first basic use case, (1), is described in a bit more detail 
 than
 the others; many of the others are elaborations on this basic theme.
 
 
 
 * (1) [Single VM]
 
 A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy
 Services) installed runs a key application and integral database. VSS can 
 quiesce
 the app, database, filesystem, and I/O on demand and can be invoked external
 to the guest.
 
a. The VM's volumes, including the boot volume, are replicated to a remote
 DR site (another OpenStack deployment).
 
b. Some form of replicated VM or VM metadata exists at the remote site.
 This VM/description includes the replicated volumes. Some systems might use
 cold migration or some form of wide-area live VM migration to establish this
 remote site VM/description.
 
c. When specified by an SLA or policy, VSS is invoked, putting the VM's
 volumes in an application-consistent state. This state is flushed all the way
 through to the remote volumes. As each remote volume reaches its
 application-consistent state, this is recognized in some fashion, perhaps by 
 an
 in-band signal, and a snapshot of the volume is made at the remote site.
 Volume replication is re-enabled immediately following the snapshot. A backup
 is then made of the snapshot on the remote site. At the completion of this 
 cycle,
 application-consistent volume snapshots and backups exist on the remote site.
 
d.  When a disaster or firedrill happens, the replication network
 connection is cut. The remote site VM pre-created or defined so as to use the
 replicated volumes is then booted, using the latest application-consistent 
 state
 of the replicated volumes. The entire VM environment (management accounts,
 networking, external firewalling, console access, etc..), similar to that of 
 the
 primary, either needs to pre-exist in some fashion on the secondary or be
 created dynamically by the DR system. The booting VM either needs to attach
 to a virtual network environment similar to at the primary site or the VM 
 needs
 to have boot code that can alter its network personality. Networking
 configuration may occur in conjunction with an update to DNS and other
 networking infrastructure. It is necessary for all required networking
 configuration to be pre-specified or done automatically. No manual admin
 activity should be required. Environment requirements may be stored in a DR
 configuration or database associated with the replication.
 
e. In a firedrill or test, the virtual network environment at the remote 
 site
 may be a test bubble isolated from the real network, with some provision for
 protected access (such as NAT). Automatic testing is necessary to verify that
 replication succeeded. These tests need to be configurable by the end-user and
 admin and integrated with DR orchestration.
 
f. After the VM has booted and been operational, the network connection
 between the two sites is re-established. A replication connection between the
 replicated volumes is re-established, and the replicated volumes are re-synced,
 with the roles of primary and secondary reversed. (Ongoing replication in this
 configuration may occur, driven from the new primary.)
 
g. A planned failback of the VM to the old primary proceeds similar to the
 failover from the old primary to the old replica, but with roles reversed and 
 the
 process minimizing offline time and data loss.
 
 
 
 * (2) [Core tenant/project infrastructure VMs]
 
 Twenty VMs power the core infrastructure of a group using a private cloud
 (OpenStack in their own datacenter). Not all VMs run Windows with VSS, some
 run Linux with some equivalent mechanism, such as qemu-ga, driving fsfreeze
 and signal scripts. These VMs are replicated to a remote OpenStack
 deployment, in a fashion similar to (1). Orchestration occurring at the remote
 site on failover is more 

Re: [openstack-dev] [Mistral] Error on running tox

2014-03-13 Thread Renat Akhmerov
Ok, awesome. Btw, when I was writing tests for Data Flow I didn’t make 
assumptions about order of tasks. You can take a look at how it’s achieved.

Renat Akhmerov
@ Mirantis Inc.



On 13 Mar 2014, at 02:04, Manas Kelshikar ma...@stackstorm.com wrote:

 Works ok if I directly run nosetests from the virtual environment or even in 
 the IDE. I see this error only when I run tox.
 
 In any case, after looking at the specific test cases I can tell that the test
 makes some ordering assumptions which may not be required. I will send out a
 patch soon and we can move the conversation to the review.
 
 Thanks!
 
 
 On Wed, Mar 12, 2014 at 11:00 AM, Manas Kelshikar ma...@stackstorm.com 
 wrote:
 I pasted only for python 2.6 but exact same errors with 2.7. Also, I posted 
 this question after I nuked my entire dev folder so this was being run on a 
 new environment.
 
 /Manas
 
 
 On Wed, Mar 12, 2014 at 4:44 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 I would just try to recreate virtual environments. We haven’t been able to 
 reproduce this problem so far.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 12 Mar 2014, at 16:32, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 maybe something wrong with python2.6? 
 
 .tox/py26/lib/python2.6/site-packages/mock.py, line 1201, in patched
 
 what if you try it on py27?
 
 
 On Wed, Mar 12, 2014 at 10:08 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 Ok. It might be related to the oslo.messaging change that we merged
 yesterday, but I don’t see at this point how exactly.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 12 Mar 2014, at 12:38, Manas Kelshikar ma...@stackstorm.com wrote:
 
 Yes it is 100% reproducible.
 
 Was hoping it was environmental i.e. missing some dependency etc. but since 
 that does not seem to be the case I shall debug locally and report back.
 
 Thanks!
 
 
 On Tue, Mar 11, 2014 at 9:54 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 Hm.. Interesting. CI wasn’t able to reveal this for some reason.
 
 My first guess is that there’s a race condition somewhere. Did you try to 
 debug it? And is this error 100% repeatable?
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 12 Mar 2014, at 11:18, Manas Kelshikar ma...@stackstorm.com wrote:
 
 I see this error when I run tox. I pulled down a latest copy of master and 
 tried to setup the environment. Any ideas?
 
 See http://paste.openstack.org/show/73213/ for details. Any help is 
 appreciated.
 
 
 
 Thanks,
 
 Manas
 
 
 
 
 
 -- 
 Best Regards,
 Nikolay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-13 Thread Edgar Magana
Hi There,

Basically, we just wanted to be sure that the pre-populated information is
accurate and if there is nothing else to add you can close the bug.

Thanks,

Edgar

From:  Mohammad Banikazemi m...@us.ibm.com
Date:  Wednesday, March 12, 2014 8:53 PM
To:  OpenStack List openstack-dev@lists.openstack.org
Cc:  Edgar Magana emag...@plumgrid.com, OpenStack List
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron] Docs for new plugins

Tom Fifield t...@openstack.org wrote on 03/12/2014 10:51:54 PM:

 From: Tom Fifield t...@openstack.org
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, Edgar Magana emag...@plumgrid.com,
 Date: 03/12/2014 10:59 PM
 Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
 
 On 13/03/14 13:43, Mohammad Banikazemi wrote:
  Thanks for your response.
 
  It looks like the page you are referring to gets populated automatically
  and I see a link already added to it for the new plugin. I also see a
  file corresponding to the new plugin having been created and populated
  with the plugin config options in the latest openstack-manuals cloned
  from github.
 
  After talking to the docs people on #openstack-docs, now I know that
  these files get created automatically and periodically. Any changes to
  the docs should come through changes to the config file in the code
  which will be automatically picked up at some point when the docs
  scripts get executed.
 
 Just to clarify one point - the text comes from the code, in the oslo
 option registration's helptext, not from the configuration files in etc.
 

Thanks for clarifying this point and for the initial information as well.
Yes, by config file in the code I was referring to the config.py file in
our plugin (and a few other Neutron plugins I have seen) where the plugin
options and corresponding helptexts get registered by using register_opts()
from oslo.
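
For readers not familiar with that mechanism, here is a minimal example
(hypothetical option names, not our plugin's actual options) of the kind of
registration done in a plugin's config.py, with the help strings that the doc
tooling picks up:

from oslo.config import cfg

example_plugin_opts = [
    cfg.StrOpt('controller_address',
               default='127.0.0.1',
               help='Address of the example plugin controller. '
                    'This help text is what ends up in the generated docs.'),
    cfg.IntOpt('request_timeout',
               default=30,
               help='Timeout in seconds for controller requests.'),
]

cfg.CONF.register_opts(example_plugin_opts, 'EXAMPLE_PLUGIN')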


  It looks like there is nothing to be done on this front for adding the
  docs for the new plugin. If that seems reasonable, I will close the bug
  I had opened for the docs for our plugin.
 
  Thanks,
 
  -Mohammad
 
 
 
 
 
 
  From: Edgar Magana emag...@plumgrid.com
  To: Mohammad Banikazemi/Watson/IBM@IBMUS, OpenStack Development Mailing
  List (not for usage questions) openstack-dev@lists.openstack.org,
  Date: 03/12/2014 06:10 PM
  Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
 
  
 
 
 
  You should be able to add your plugin here:
  http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html
 
  Thanks,
 
  Edgar
 
  From: Mohammad Banikazemi m...@us.ibm.com
  Date: Monday, March 10, 2014 2:40 PM
  To: OpenStack List openstack-dev@lists.openstack.org
  Cc: Edgar Magana emag...@plumgrid.com
  Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
 
  Would like to know what to do for adding documentation for a new plugin.
  Can someone point me to the right place/process please.
 
  Thanks,
 
  Mohammad
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Miguel Angel Ajo


Yuriy, could you elaborate your idea in detail? I'm lost at some
points with your unix domain / token authentication.

Where does the token come from?

Who starts rootwrap the first time?

If you could write a full interaction sequence on the etherpad, from
rootwrap daemon start to a simple call to the system happening, I think
that'd help my understanding.


Best regards,
Miguel Ángel.


On 03/13/2014 07:42 AM, Yuriy Taraday wrote:

On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo majop...@redhat.com
mailto:majop...@redhat.com wrote:

I'm not familiar with unix domain sockets at a low level, but I wonder
if authentication could be achieved just with permissions (only
users in group neutron or group rootwrap accessing this service).


It can be enforced, but it is not needed at all (see below).

I find it an interesting alternative to the other proposed
solutions, but there are some challenges associated with it,
which could make it more complicated:

1) Access control, file system permission based or token based,


If we pass the token to the calling process through a pipe bound to
stdout, it won't be intercepted so token-based authentication for
further requests is secure enough.

2) stdout/stderr/return encapsulation/forwarding to the caller,
if we have a simple/fast RPC mechanism we can use, it's a matter
of serializing a dictionary.


The RPC implementation in the multiprocessing module uses either xmlrpclib or
pickle-based RPC. It should be enough to pass the output of a command.
If we ever hit a performance problem with passing long strings we can even
pass an opened pipe's descriptors over the UNIX socket to let the caller
interact with the spawned process directly.

3) client side implementation for 1 + 2.


Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning the rootwrap daemon and interacting with it should
be hidden, so that changes to services will be minimal.

4) It would need to accept new domain socket connections in green
threads to avoid spawning a new process to handle a new connection.


We can do connection pooling if we ever run into performance problems
with connecting a new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid using fancy libraries (eventlet), both
because they would add a fat new requirement for oslo.rootwrap (it currently
depends only on six) and because they would run more possibly buggy and
unsafe code with elevated privileges.
A simple threaded daemon should be enough, given that it will handle the
needs of only one service process.

The advantages:
* we wouldn't need to break the only-python-rule.
* we don't need to rewrite/translate rootwrap.

The disadvantages:
   * it needs changes on the client side (neutron + other projects),


As I said, changes should be minimal.

--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread Renat Akhmerov

On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:

 I can write a method in base test to start local executor.  I will do that as 
 a separate bp.  
Ok.
 After the engine is made standalone, the API will communicate to the engine 
 and the engine to the executor via the oslo.messaging transport.  This means 
 that for the local option, we need to start all three components (API, 
 engine, and executor) on the same process.  If the long term goal as you 
 stated above is to use separate launchers for these components, this means 
 that the API launcher needs to duplicate all the logic to launch the engine 
 and the executor. Hence, my proposal here is to move the logic to launch the 
 components into a common module and either have a single generic launch 
 script that launch specific components based on the CLI options or have 
 separate launch scripts that reference the appropriate launch function from 
 the common module.
Ok, I see your point. Then I would suggest we have one script which we could
use to run all the components (any subset of them). So for the components we
specify when launching the script, we use this local transport. Btw, the
scheduler should eventually become a standalone component too, so we have 4
components.

 The RPC client/server in oslo.messaging do not determine the transport.  The 
 transport is determine via oslo.config and then given explicitly to the RPC 
 client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in process Queue 
 is instantiated within this transport object from the fake driver.  For the 
 local option, all three components need to share the same transport in 
 order to have the Queue in scope. Thus, we will need some method to have this 
 transport object visible to all three components and hence my proposal to use 
 a global variable and a factory method. 
I’m still not sure I follow your point here... Looking at the links you provided
I see this:

transport = messaging.get_transport(cfg.CONF)

So my point here is that we can make this call once in the launching script and
pass it to the engine/executor (and now the API too, if we want it to be
launched by the same script). Of course, we’ll have to change the way we
initialize these components, but I believe we can do it. So it’s just
dependency injection. And in this case we wouldn’t need to use a global
variable. Am I still missing something?
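
To make that concrete, here is a rough sketch (with hypothetical component
classes, not the actual Mistral code) of a single launch script creating the
transport once and injecting it into whichever components are requested:

from oslo import messaging
from oslo.config import cfg

class Engine(object):
    def __init__(self, transport):
        self.transport = transport

class Executor(object):
    def __init__(self, transport):
        self.transport = transport

class Api(object):
    def __init__(self, transport):
        self.transport = transport

def launch(component_names):
    # Created exactly once; with the fake driver this is the shared
    # in-memory transport, so no global variable is needed.
    transport = messaging.get_transport(cfg.CONF)
    factories = {'engine': Engine, 'executor': Executor, 'api': Api}
    return [factories[name](transport) for name in component_names]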


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [I18n][Horizon] I18n compliance test string freeze exception

2014-03-13 Thread Ying Chun Guo

Hello, all

Our translators started translation and I18n compliance testing on the string
freeze date.
During translation and testing, we may report bugs.
Some bugs are about incorrect or incomprehensible messages.
Some bugs are about user-facing messages that are not marked with _().
All of these bugs might introduce string changes and add new strings to be
translated.
I noticed some patches that fix these bugs got -1 because of the string freeze.
For example, https://review.openstack.org/#/c/79679/
and https://review.openstack.org/#/c/79948/
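
For clarity, the second kind of bug is usually a one-line fix, e.g. (a
contrived Horizon-style example, not taken from the patches above):

from django.utils.translation import ugettext_lazy as _

# before (not translatable):
error_message = "Unable to retrieve instance list."

# after (translatable, but it adds a new string for translators):
error_message = _("Unable to retrieve instance list.")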

StringFreeze -> Start translation & test -> Report bugs which may cause
string changes -> Cannot fix these bugs because of StringFreeze.
So I'd like to bring this question to dev: when shall we fix these errors,
then?

From my point of view, FeatureFreeze means not accepting new features; it
doesn't mean we cannot fix bugs in features.
StringFreeze should mean not adding new strings, but we should still be able
to improve strings and fix bugs.
I think shipping with incorrect messages is worse than a strict string
freeze.

From my experience in the Havana release, since StringFreeze there were
volunteers from the Horizon team who would
keep an eye on string changes. If string changes happened, they would send
email
to the I18n ML to announce those changes. Many thanks for their work.
In the Havana release, we kept up that kind of bug reporting and fixing until
RC2.
Most of it happened before RC1.

Now I hope to hear your input on this situation: when and how should we fix
these kinds of bugs in Icehouse?

Best regards
Ying Chun Guo (Daisy)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-13 Thread Qin Zhao
Hi Vincent,
I feel your blueprint is interesting, too. Its objective seems similar to
the existing one, and some new APIs also look similar to existing ones. For
instance, 'RestoreFromSnapshot' looks like 'rebuild', and
'SpawnFromSnapshot' looks like 'spawn'. Do we benefit a lot if we define
this set of new APIs?


On Wed, Mar 12, 2014 at 11:14 AM, Sheng Bo Hou sb...@cn.ibm.com wrote:

 Hi everyone,

 I was excited to hear that live snapshot has been brought up for
 discussion in our community. Recently my clients in China came up with this
 live snapshot requirement as well, because they already have their
 legacy environment and expect the original functions to work fine when they
 move to OpenStack. In my opinion, we need to think a little bit
 about these clients' needs, because it is also a potential market for
 OpenStack.

 I registered a new blueprint for Nova:
 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot. It
 is named driver-specific for now, but that can be changed later.

 The Nova API could be implemented via an extension; the following APIs may
 be added:
 * CreateSnapshot: create a snapshot from the VM. *The snapshot can be
 live snapshot or other hypervisor native way to create a snapshot.*
 * RestoreFromSnapshot: restore/revert the VM from a snapshot.
 * DeleteSnapshot: delete a snapshot.
 * ListSnapshot: list all the snapshots or list all the snapshots if a VM
 id is given.
 * SpawnFromSnapshot: spawn a new VM from an existing snapshot, which is
 the live snapshot or the snapshot of other snapshot created in a hypervisor
 native way.
 The features in this blueprint can be optional for any driver. If a
 driver does not have a native way to do live snapshots or other kinds of
 snapshots, it is fine to leave the API unimplemented; if a driver can
 provide a native snapshot feature, it is an opportunity to
 reinforce Nova with this snapshot support.

 I sincerely need your comments and hope we can figure it out in a most
 favorable way.
 Thank you so much.

 Best wishes,
 Vincent Hou (侯胜博)

 Staff Software Engineer, Open Standards and Open Source Team, Emerging
 Technology Institute, IBM China Software Development Lab

 Tel: 86-10-82450778 Fax: 86-10-82453660
 Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com
 Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
 West Road, Haidian District, Beijing, P.R.C.100193
 Address: 3F, Ring Building, Building 28, Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, 100193


  From: Jay Pipes jaypi...@gmail.com
  Date: 2014/03/12 03:15
  Reply-To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova] a question about instance snapshot




 On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
   -Original Message-
   From: Jay Pipes [mailto:jaypi...@gmail.com jaypi...@gmail.com]
   Sent: Tuesday, March 11, 2014 3:20 AM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [nova] a question about instance snapshot
  
   On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
We have very strong interest in pursing this feature in the VMware
driver as well. I would like to see the revert instance feature
implemented at least.
   
When I used to work in multi-discipline roles involving operations it
would be common for us to snapshot a vm, run through an upgrade
process, then revert if something did not upgrade smoothly. This
ability alone can be exceedingly valuable in long-lived virtual
machines.
   
I also have some comments from parties interested in refactoring how
the VMware drivers handle snapshots but I'm not certain how much that
plays into this live snapshot discussion.
  
   I think the reason that there isn't much interest in doing this kind
 of thing is
   because the worldview that VMs are pets is antithetical to the
 worldview that
   VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
   vSphere tends to favor the former).
  
   There's nothing about your scenario above of being able to revert an
 instance
   to a particular state that isn't possible with today's Nova.
   Snapshotting an instance, doing an upgrade of software on the
 instance, and
   then restoring from the snapshot if something went wrong (reverting) is
   already fully possible to do with the regular Nova snapshot and restore
   operations. The only difference is that the live-snapshot
   stuff would include saving the memory view of a VM in addition to its
 disk state.
   And that, at least in my opinion, is only needed when you are treating
 VMs like
   pets and not cattle.
  
 
  Hi Jay,
 
  I read every word in your reply and respect what you said.
 
  But I can't agree with you that memory snapshot is a feature for pets, not
 for cattle.
  I think it's a feature regardless of what you consider the instance to be.
 
  The 

Re: [openstack-dev] [Mistral] Actions design BP

2014-03-13 Thread Renat Akhmerov
So no need to ask the same…”

Renat Akhmerov
@ Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-13 Thread Renat Akhmerov
Joshua,

Thanks for your interest and feedback.

I believe you were able to deliver your message already; we definitely hear
you, so there's no need to ask the same stuff again and again ;) As I promised
before, we are now evaluating what’s been going on with TaskFlow for the last
couple of months and preparing our questions/concerns/suggestions on using it
within Mistral. But that’s not a very easy or quick thing to do since there’s a
bunch of details to take into account, especially given the fact that the
Mistral codebase has become much more solid and Mistral itself now has lots of
requirements dictated by its use cases and roadmap vision. So patience would be
really appreciated here.

If you don’t mind, I would prefer to discuss things like that in separate
threads, not in threads devoted to daily project activities, so that we can
separate our conceptual discussions from the current work that’s going on
according to our plans. Otherwise we risk making spaghetti out of our ML threads.

Thanks

Renat Akhmerov
@ Mirantis Inc.



On 13 Mar 2014, at 08:54, Joshua Harlow harlo...@yahoo-inc.com wrote:

 So taskflow has tasks, which seems comparable to actions?
 
 I guess I should get tired of asking but why recreate the same stuff ;)
 
 The questions listed:
 
 - Does an action need to have a revert() method along with a run() method?
 - How does an action expose errors occurring during its work?
 
 - In what form does an action return a result?
 
 
 And more @ https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
 
 And quite a few others that haven't been mentioned (how does an action
 retry? How does an action report partial progress? What's the
 inter-task/state persistence mechanism?) have been worked on by the
 taskflow team for a while now...
 
 https://github.com/openstack/taskflow/blob/master/taskflow/task.py#L31
 (and others...)
 
 Anyways, I know mistral is still POC/pilot/prototype... but it seems like
 more duplicated work that could just be avoided ;)
 
 -Josh
 
 -Original Message-
 From: Renat Akhmerov rakhme...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 11, 2014 at 11:32 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Mistral] Actions design BP
 
 Team,
 
 I started summarizing all the thoughts and ideas that we’ve been
 discussing for a while regarding actions. The main driver for this work
 is that the system keeps evolving and we still don’t have a comprehensive
 understanding of that part. Additionally, we keep getting a lot of
 requests and questions from our potential users which are related to
 actions (‘will they be extensible?’, ‘will they have a dry-run feature?’,
 ‘what are the ways to configure and group them?’ and so on and so forth).
 So although we’re still in a Pilot phase we need to start this work in
 parallel. Even now, lack of a solid understanding of it creates a lot of
 problems in pilot development.
 
 I created a BP at launchpad [0] which has a reference to a detailed
 specification [1]. It’s still in progress but you could already leave
 your early feedback so that I don’t go in a wrong direction too far.
 
 The highest priority now is still finishing the pilot so we shouldn’t
 start implementing everything described in the BP right now. However, some of
 the things have to be adjusted asap (like the Action interface and the main
 implementation principles).
 
 [0]: 
 https://blueprints.launchpad.net/mistral/+spec/mistral-actions-design
 [1]: https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-03-13 Thread Liuji (Jeremy)
Hi,

I have written a wiki page about the USB controller and USB passthrough at
https://wiki.openstack.org/wiki/Nova/proposal_about_usb_passthrough.

I hope I can get your advice.

Thanks,
Jeremy Liu

 -Original Message-
 From: Liuji (Jeremy) [mailto:jeremy@huawei.com]
 Sent: Thursday, February 27, 2014 9:59 AM
 To: yunhong.ji...@linux.intel.com; OpenStack Development Mailing List (not
 for usage questions)
 Cc: Luohao (brian); Yuanjing (D)
 Subject: Re: [openstack-dev] [nova] Question about USB passthrough
 
 Yes, PCI devices like GPUs or HBAs are common resources; the admin/user does
 not need to specify which device goes to which VM, so the current PCI
 passthrough function can cover those user scenarios.
 
 But USB devices have different user scenarios. Take a USB key or USB disk as
 an example: the admin/user may need the content on the USB device rather than
 the device itself, so the admin/user should specify which USB device goes to
 which VM.
 
 There are other things that need to be considered too; for example, a USB
 device may need a matching USB controller rather than the default USB 1.1
 controller created by qemu.
 
 I'm not clear yet about how to provide this function, but I still want to
 write a wiki page so
 that more people can participate in the discussion.
 
 Thanks,
 Jeremy Liu
 
  -Original Message-
  From: yunhong jiang [mailto:yunhong.ji...@linux.intel.com]
  Sent: Wednesday, February 26, 2014 1:17 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: bpavlo...@mirantis.com; Luohao (brian); Yuanjing (D)
  Subject: Re: [openstack-dev] [nova] Question about USB passthrough
 
  On Tue, 2014-02-25 at 03:05 +, Liuji (Jeremy) wrote:
   Now that USB devices are used so widely in private/hybrid cloud like
   used as USB key, and there are no technical issues in libvirt/qemu.
   I think it a valuable feature in openstack.
 
  USB key is an interesting scenario. I assume the USB key is just for
  some specific VM; I'm wondering how the admin/user knows which USB disk goes
 to which VM?
 
  --jyh
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-13 Thread Flavio Percoco

On 12/03/14 19:19 -0700, Mark Washenberger wrote:

Hi folks,

I'd like to nominate Arnaud Legendre to join Glance Core. Over the past cycle
his reviews have been consistently high quality and I feel confident in his
ability to assess the design of new features and the overall direction for
Glance.

If anyone has any concerns, please share them with me. If I don't hear any,
I'll make the membership change official in about a week.

Thanks for your consideration. And thanks for all your hard work, Arnaud!


+1

Thanks Arnaud.



markwash






--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-13 Thread Timur Sufiev
Hello!

Recently I discovered (and it was really surprising to me) that the
horizon package isn't published on PyPI (see
http://paste.openstack.org/show/73348/). The reason I needed to
install horizon this way is that it is desirable for the muranodashboard
unit tests to have horizon in the test environment (and that currently
does not seem possible).

Could you please give me a clue why the horizon distribution is missing from PyPI?

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Stephen Gran

On 03/12/2014 06:34 PM, Mike Spreitzer wrote:

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a
nested stack that includes a OS::Neutron::PoolMember?  Should I expect
this to work?


This sort of thing works fine for us.  It needs some patches that missed 
Havana, though.


{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Sample webserver config",
  "Resources": {
    "LBMonitor": {
      "Type": "OS::Neutron::HealthMonitor",
      "Properties": {
        "delay": 3,
        "max_retries": 5,
        "url_path": "/_",
        "type": "HTTP",
        "timeout": 10
      }
    },
    "LaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "SecurityGroups": [ { "Ref": "SecGroup" } ],
        "InstanceType": "m1.small",
        "ImageId": "CentOS65-1312-1"
      }
    },
    "ELB": {
      "Type": "OS::Neutron::LoadBalancer",
      "Properties": {
        "protocol_port": 8080,
        "pool_id": { "Ref": "LBPool" }
      }
    },
    "ASG": {
      "Version": "2009-05-15",
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "LaunchConfigurationName": { "Ref": "LaunchConfig" },
        "MinSize": 1,
        "MaxSize": 2,
        "VPCZoneIdentifier": [ "mumbleuuid" ],
        "LoadBalancerNames": [ { "Ref": "ELB" } ],
        "AvailabilityZones": { "Fn::GetAZs": "" }
      }
    },
    "LBPool": {
      "Type": "OS::Neutron::Pool",
      "Properties": {
        "lb_method": "ROUND_ROBIN",
        "protocol": "HTTP",
        "description": "Test Pool",
        "subnet_id": "mumbleuuid",
        "vip": {
          "protocol_port": 8080,
          "connection_limit": 1000,
          "description": "Test",
          "name": "Test"
        },
        "monitors": [ { "Ref": "LBMonitor" } ],
        "name": "test"
      }
    },
    "SecGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "SecurityGroupIngress": [
          {
            "SourceSecurityGroupId": "mumbleuuid",
            "IpProtocol": "tcp",
            "ToPort": 8080,
            "FromPort": 8080
          }
        ],
        "GroupDescription": "Test"
      }
    }
  }
}

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com
Please consider the environment before printing this email.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-13 Thread Matthias Runge
On Thu, Mar 13, 2014 at 01:10:06PM +0400, Timur Sufiev wrote:
 Recently I've discovered (and it was really surprising for me) that
 horizon package isn't published on PyPi (see
 http://paste.openstack.org/show/73348/). The reason why I needed to
 install horizon this way is that it is desirable for muranodashboard
 unittests to have horizon in the test environment (and it currently
 seems not possible).

I'd expect this to change when horizon and the OpenStack Dashboard
are finally separated. I agree, it makes sense to have something
comparable to the package now called horizon on PyPI.

https://blueprints.launchpad.net/horizon/+spec/separate-horizon-from-dashboard
-- 
Matthias Runge mru...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nominating Arnaud Legendre for Glance Core

2014-03-13 Thread Zhi Yan Liu
+1. Nice work Arnaud!

On Thu, Mar 13, 2014 at 5:09 PM, Flavio Percoco fla...@redhat.com wrote:
 On 12/03/14 19:19 -0700, Mark Washenberger wrote:

 Hi folks,

 I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
 cycle
 his reviews have been consistently high quality and I feel confident in
 his
 ability to assess the design of new features and the overall direction for
 Glance.

 If anyone has any concerns, please share them with me. If I don't hear
 any,
 I'll make the membership change official in about a week.

 Thanks for your consideration. And thanks for all your hard work, Arnaud!


 +1

 Thanks Arnaud.


 markwash





 --
 @flaper87
 Flavio Percoco



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] test environment requirements

2014-03-13 Thread Robert Collins
So we already have pretty high requirements - it's basically a 16G
workstation as a minimum.

Specifically to test the full story:
 - a seed VM
 - an undercloud VM (bm deploy infra)
 - 1 overcloud control VM
 - 2 overcloud hypervisor VMs

   5 VMs with 2+G RAM each.

To test the overcloud alone against the seed we save 1 VM, to skip the
overcloud we save 3.

However, as HA matures we're about to add 4 more VMs: we need a HA
control plane for both the under and overclouds:
 - a seed VM
 - 3 undercloud VMs (HA bm deploy infra)
 - 3 overcloud control VMs (HA)
 - 2 overcloud hypervisor VMs

   9 VMs with 2+G RAM each == 18GB

What should we do about this?

A few thoughts to kick start discussion:
 - use Ironic to test across multiple machines (involves tunnelling
brbm across machines, fairly easy)
 - shrink the VM sizes (causes thrashing)
 - tell folk to toughen up and get bigger machines (ahahahahaha, no)
 - make the default configuration inline the hypervisors on the
overcloud with the control plane:
   - a seed VM
   - 3 undercloud VMs (HA bm deploy infra)
   - 3 overcloud all-in-one VMs (HA)
  
 7 VMs with 2+G RAM each == 14GB


I think it's important that developers exercise features like HA and live
migration regularly, so I'm quite keen to have a fairly
solid systematic answer that will let us catch things like bad
firewall rules on the control node preventing network tunnelling,
etc. E.g. we benefit the more things are split out, as scale
deployments are. OTOH, testing the micro-cloud that folk may start with
is also a really good idea.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-13 Thread Matthew Booth
On 12/03/14 18:28, Matt Riedemann wrote:
 
 
 On 2/25/2014 6:36 AM, Matthew Booth wrote:
 I'm new to Nova. After some frustration with the review process,
 specifically in the VMware driver, I decided to try to visualise how the
 review process is working across Nova. To that end, I've created 2
 graphs, both attached to this mail.

 Both graphs show a nova directory tree pruned at the point that a
 directory contains less than 2% of total LOCs. Additionally, /tests and
 /locale are pruned as they make the resulting graph much busier without
 adding a great deal of useful information. The data for both graphs was
 generated from the most recent 1000 changes in gerrit on Monday 24th Feb
 2014. This includes all pending changes, just over 500, and just under
 500 recently merged changes.

 pending.svg shows the percentage of LOCs which have an outstanding
 change against them. This is one measure of how hard it is to write new
 code in Nova.

 merged.svg shows the average length of time between the
 ultimately-accepted version of a change being pushed and being approved.

 Note that there are inaccuracies in these graphs, but they should be
 mostly good. Details of generation here:
 https://github.com/mdbooth/heatmap. This code is obviously
 single-purpose, but is free for re-use if anyone feels so inclined.

 The first graph above (pending.svg) is the one I was most interested in,
 and shows exactly what I expected it to. Note the size of 'vmwareapi'.
 If you check out Nova master, 24% of the vmwareapi driver has an
 outstanding change against it. It is practically impossible to write new
 code in vmwareapi without stomping on an oustanding patch. Compare that
 to the libvirt driver at a much healthier 3%.

 The second graph (merged.svg) is an attempt to look at why that is.
 Again comparing the VMware driver with the libvirt we can see that at 12
 days, it takes much longer for a change to be approved in the VMware
 driver than in the libvirt driver. I suspect that this isn't the whole
 story, which is likely a combination of a much longer review time with
 very active development.

 What's the impact of this? As I said above, it obviously makes it very
 hard to come in as a new developer of the VMware driver when almost a
 quarter of it has been rewritten, but you can't see it. I am very new to
 this and others should validate my conclusions, but I also believe this
 is having a detrimental impact to code quality. Remember that the above
 12 day approval is only the time for the final version to be approved.
 If a change goes through multiple versions, each of those also has an
 increased review period, meaning that the time from first submission to
 final inclusion is typically very, very protracted. The VMware driver
 has its fair share of high priority issues and functionality gaps, and
 the developers are motived to get it in the best possible shape as
 quickly as possible. However, it is my impression that when problems
 stem from structural issues, the developers choose to add metaphorical
 gaffer tape rather than fix them, because fixing both creates a
 dependency chain which pushes the user-visible fix months into the
 future. In this respect the review process is dysfunctional, and is
 actively detrimental to code quality.

 Unfortunately I'm not yet sufficiently familiar with the project to
 offer a solution. A core reviewer who regularly looks at it is an
 obvious fix. A less obvious fix might involve a process which allows
 developers to work on a fork which is periodically merged, rather like
 the kernel.

 Matt



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 When I originally read this I had some ideas in mind for a response
 regarding review latency with the vmware driver patches, but felt like
 anything I said, albeit what I consider honest, would sound
 bad/offensive in some way, and didn't want to convey that.
 
 But this came up in IRC today:
 
 https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor
 
 That spurred some discussion around this same topic and I think
 highlights one of the major issues, which is code quality and the design
 of the driver.
 
 For example, the driver's spawn method is huge and there are a lot of
 nested methods within it.  There are a lot of vmware patches and a lot
 of blueprints, and a lot of them touch spawn.  When I'm reviewing them,
 I'm looking for new conditions and checking to see if those are unit
 tested (positive/error cases) and a lot of the time I find it very
 difficult to tell if they are or not.  I think a lot of that has to do
 with how the vmware tests are scaffolded with fakes to simulate a
 vcenter backend, but it makes it very difficult if you're not extremely
 familiar with that code to know if something is covered or not.
 
 And I've actually asked in bp reviews before, 'you have this 

Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Thomas Herve

 Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested
 stack that includes a OS::Neutron::PoolMember? Should I expect this to work?

Hi Mike,

Yes, I tested it and it works. I'm trying to build an example for heat-templates
putting it all together. I'm mostly struggling with how to include the nested
stack nicely (without an HTTP URL), but everything else seems fine.

Cheers,

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.com wrote:

 Yuriy, could you elaborate your idea in detail? I'm lost at some
 points with your unix domain / token authentication.

 Where does the token come from?

 Who starts rootwrap the first time?

 If you could write a full interaction sequence on the etherpad, from
 rootwrap daemon start to a simple call to the system happening, I think that'd
 help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [neutron] Top Gate Race - Bug 1248757 - test_snapshot_pattern fails with paramiko ssh EOFError

2014-03-13 Thread Sean Dague
You may have noticed that gate race failures on neutron jobs seem to have
gone up a lot in the last couple of days. You aren't just imagining it...

The current top gate race is https://bugs.launchpad.net/bugs/1248757

That has a massive uptick in the last 2 days, which is pretty
concerning, as there has been a rush on landing code for FFEs right now.
This could really use eyes from the nova and neutron teams to figure out
what changed in the last 2 days that could account for that becoming
much less reliable.

Here is the latest marked fail -
http://logs.openstack.org/28/79628/4/check/check-tempest-dsvm-neutron/11f8293/

The gate at 7am EST is 43 items deep with a 10hr backlog, and this is
the primary reason for that.

Until this gets resolved we're looking at a completely jammed gate. So
your code won't merge in any reasonable amount of time.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Replication multi cloud

2014-03-13 Thread Fargetta Marco
Hi all,

we would like to use the replication mechanism in swift to replicate data
between two swift instances deployed in different clouds, with different
keystones and administrative domains.

Is this possible with the current replication facilities, or do the two
instances have to stay in the same cloud, sharing the same keystone?

Cheers,
Marco



-- 

Eng. Marco Fargetta, PhD

Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-13 Thread John Garbutt
On 13 March 2014 10:09, Matthew Booth mbo...@redhat.com wrote:
 On 12/03/14 18:28, Matt Riedemann wrote:
 On 2/25/2014 6:36 AM, Matthew Booth wrote:
 I'm new to Nova. After some frustration with the review process,
 specifically in the VMware driver, I decided to try to visualise how the
 review process is working across Nova. To that end, I've created 2
 graphs, both attached to this mail.

 Both graphs show a nova directory tree pruned at the point that a
 directory contains less than 2% of total LOCs. Additionally, /tests and
 /locale are pruned as they make the resulting graph much busier without
 adding a great deal of useful information. The data for both graphs was
 generated from the most recent 1000 changes in gerrit on Monday 24th Feb
 2014. This includes all pending changes, just over 500, and just under
 500 recently merged changes.

 pending.svg shows the percentage of LOCs which have an outstanding
 change against them. This is one measure of how hard it is to write new
 code in Nova.

 merged.svg shows the average length of time between the
 ultimately-accepted version of a change being pushed and being approved.

 Note that there are inaccuracies in these graphs, but they should be
 mostly good. Details of generation here:
 https://github.com/mdbooth/heatmap. This code is obviously
 single-purpose, but is free for re-use if anyone feels so inclined.

 The first graph above (pending.svg) is the one I was most interested in,
 and shows exactly what I expected it to. Note the size of 'vmwareapi'.
 If you check out Nova master, 24% of the vmwareapi driver has an
 outstanding change against it. It is practically impossible to write new
 code in vmwareapi without stomping on an outstanding patch. Compare that
 to the libvirt driver at a much healthier 3%.

 The second graph (merged.svg) is an attempt to look at why that is.
 Again comparing the VMware driver with the libvirt we can see that at 12
 days, it takes much longer for a change to be approved in the VMware
 driver than in the libvirt driver. I suspect that this isn't the whole
 story, which is likely a combination of a much longer review time with
 very active development.

 What's the impact of this? As I said above, it obviously makes it very
 hard to come in as a new developer of the VMware driver when almost a
 quarter of it has been rewritten, but you can't see it. I am very new to
 this and others should validate my conclusions, but I also believe this
 is having a detrimental impact to code quality. Remember that the above
 12 day approval is only the time for the final version to be approved.
 If a change goes through multiple versions, each of those also has an
 increased review period, meaning that the time from first submission to
 final inclusion is typically very, very protracted. The VMware driver
 has its fair share of high priority issues and functionality gaps, and
 the developers are motivated to get it in the best possible shape as
 quickly as possible. However, it is my impression that when problems
 stem from structural issues, the developers choose to add metaphorical
 gaffer tape rather than fix them, because fixing both creates a
 dependency chain which pushes the user-visible fix months into the
 future. In this respect the review process is dysfunctional, and is
 actively detrimental to code quality.

 Unfortunately I'm not yet sufficiently familiar with the project to
 offer a solution. A core reviewer who regularly looks at it is an
 obvious fix. A less obvious fix might involve a process which allows
 developers to work on a fork which is periodically merged, rather like
 the kernel.

 Matt



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 When I originally read this I had some ideas in mind for a response
 regarding review latency with the vmware driver patches, but felt like
 anything I said, albeit what I consider honest, would sound
 bad/offensive in some way, and didn't want to convey that.

 But this came up in IRC today:

 https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor

 That spurred some discussion around this same topic and I think
 highlights one of the major issues, which is code quality and the design
 of the driver.

 For example, the driver's spawn method is huge and there are a lot of
 nested methods within it.  There are a lot of vmware patches and a lot
 of blueprints, and a lot of them touch spawn.  When I'm reviewing them,
 I'm looking for new conditions and checking to see if those are unit
 tested (positive/error cases) and a lot of the time I find it very
 difficult to tell if they are or not.  I think a lot of that has to do
 with how the vmware tests are scaffolded with fakes to simulate a
 vcenter backend, but it makes it very difficult if you're not extremely
 familiar with that code to know if something is covered or not.

 And I've 

Re: [openstack-dev] [I18n][Horizon] I18n compliance test string freeze exception

2014-03-13 Thread Julie Pichon
On 13/03/14 09:28, Akihiro Motoki wrote:
 +1
 
 In my understanding String Freeze is a SOFT freeze, as Daisy describes.
 Applying the string freeze to incorrect or incomprehensible messages is
 not good from a UX point of view; shipping the release with such strings
 will make the situation worse, and people may feel OpenStack is not mature
 or misunderstand it as not caring about such details :-(
 
 From my experience working as a translator and bridging the Horizon and
 I18n communities in the previous releases, the proposed policy sounds
 good and it can be accepted by translators.

That sounds good to me as well. I think we should modify the
StringFreeze page [1] to reflect this, as it sounds a lot more strict
than what the translation team actually wishes for.

Thanks,

Julie

[1] https://wiki.openstack.org/wiki/StringFreeze

 Thanks,
 Akihiro
 
 
 On Thu, Mar 13, 2014 at 5:41 PM, Ying Chun Guo guoyi...@cn.ibm.com wrote:
 Hello, all

 Our translators have been doing translation and I18n compliance testing
 since the string freeze date.
 During the translation and test, we may report bugs.
 Some bugs are incorrect and incomprehensible messages.
 Some bugs are user facing messages but not marked with _().
 All of these bugs might introduce string changes and add new strings to be
 translated.
 I noticed some patches to fix these bugs got -1 because of string freeze.
 For example, https://review.openstack.org/#/c/79679/
 and https://review.openstack.org/#/c/79948/

 StringFreeze - start translation and test - report bugs which may cause
 string changes - cannot fix these bugs because of StringFreeze.
 So I'd like to bring this question to dev: when shall we fix these errors
 then?

 From my point of view, FeatureFreeze means not accepting new features; it
 doesn't mean we cannot fix bugs in features.
 StringFreeze should mean not adding new strings, but we should still be
 able to improve strings and fix bugs.
 I think shipping with incorrect messages is worse than a strict string freeze.

 From my experience in the Havana release, after StringFreeze there were
 volunteers from the Horizon team who kept an eye on string changes. If
 string changes happened, they would send email to the I18n ML to announce
 them. Many thanks for their work.
 In the Havana release, we kept that kind of bug reporting and fixing going
 until RC2. Most of it happened before RC1.

 Now I hope to hear your input on this situation: when and how should we fix
 these kinds of bugs in Icehouse?

 Best regards
 Ying Chun Guo (Daisy)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] FFE Request: Allow enabling both v1 and v2 registries

2014-03-13 Thread stuart . mclaren

Hi,

Right now you can select running either the v1 or v2 registries but not
both at the same time. I'd like to ask for an FFE for this functionality
(Erno's patch in progress here: https://review.openstack.org/#/c/79957/)

With v1 on the road to deprecation I think this may help migrating from
a v1 to a v2 registry.

Thanks,

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 48 hours failures

2014-03-13 Thread Rossella Sblendido

Hello devs,

I wanted to update the analysis performed by Salvatore Orlando a few 
weeks ago [1].
I used the following Logstash query [2] to detect the failures of 
the last 48 hours.


There were 77 failures (40% of the total).
I classified them and obtained the following:

21% due to infra issues
16% https://bugs.launchpad.net/tempest/+bug/1253896
14% https://bugs.launchpad.net/neutron/+bug/1291922
12% https://bugs.launchpad.net/tempest/+bug/1281969
10% https://bugs.launchpad.net/neutron/+bug/1291920
7% https://bugs.launchpad.net/neutron/+bug/1291918
7% https://bugs.launchpad.net/neutron/+bug/1291926
5% https://bugs.launchpad.net/neutron/+bug/1291947
3% https://bugs.launchpad.net/neutron/+bug/1277439
3% https://bugs.launchpad.net/neutron/+bug/1283599
2% https://bugs.launchpad.net/nova/+bug/1255627

I had to file 5 new bugs, which are on the previous list and can be 
viewed here [3].


cheers,

Rossella

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027862.html
[2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiICBBTkQgcHJvamVjdDpcIm9wZW5zdGFjay9uZXV0cm9uXCIgQU5EIG1lc3NhZ2U6XCJGaW5pc2hlZDpcIiBBTkQgYnVpbGRfc3RhdHVzOlwiRkFJTFVSRVwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDcwNzAzODk5NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[3] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-full-job

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-13 Thread Victoria Martínez de la Cruz
@Sriram Thanks for the pointers! Though I'm afraid that students don't have
a Connections option. Maybe after submitting the proof of enrollment?

Cheers,




2014-03-12 22:07 GMT-03:00 Andronidis Anastasios andronat_...@hotmail.com:

 Ok, thank you very much!

 Anastasis

 On 13 Μαρ 2014, at 1:58 π.μ., Davanum Srinivas dava...@gmail.com wrote:

  Andronidis,
 
  not sure, we can ask others on the irc meeting tomorrow.
 
  Please answer the questions on the template, and if you see the last
  one is about links to your proposal on the openstack wiki.
 
  On Wed, Mar 12, 2014 at 8:40 PM, Andronidis Anastasios
  andronat_...@hotmail.com wrote:
  Hello everyone,
 
  I am a student and I cannot see Connections anywhere. I also tried
 to re-login, but still nothing. Are you sure this Connections link
 exists for students too?
 
  I also have a second question, concerning the template on
 google-melange. Do we have to just answer the questions on the template? Or
 shall we also paste our proposal that we wrote on the openstack wiki?
 
  Kindly,
  Anastasis
 
  On 12 Μαρ 2014, at 10:46 μ.μ., Sriram Subramanian 
 sri...@sriramhere.com wrote:
 
  Victoria,
 
  When you click My Dashboard on the left hand side, you will see
 Connections, Proposals etc on your right, in the dashboard. Right below
 Connections, there are two links in smaller font, one which is the link
 to Connect (circled in blue in the attached snapshot).
  If you tried right after creating your profile, try logging out and
 in. When I created the profile, I remember having some issues around
 accessing profile (not the dashboard, but entire profile).
 
  thanks
  -Sriram
 
 
  On Wed, Mar 12, 2014 at 1:32 PM, Victoria Martínez de la Cruz 
 victo...@vmartinezdelacruz.com wrote:
  Hi,
 
  Thanks for working on the template, it sure eases things for students.
 
  I can't find the Connect with organizations link, does anyone have
 the same problem?
 
  I confirm my attendance at tomorrow's meeting, thanks for organizing
 it! +1
 
  Cheers,
 
  Victoria
 
 
 
  2014-03-11 14:29 GMT-03:00 Davanum Srinivas dava...@gmail.com:
 
  Hi,
 
  Mentors:
  * Please click on My Dashboard then Connect with organizations and
  request a connection as a mentor (on the GSoC web site -
  http://www.google-melange.com/)
 
  Students:
  * Please see the Application template you will need to fill in on the
 GSoC site.
   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
  * Please click on My Dashboard then Connect with organizations and
  request a connection
 
  Both Mentors and Students:
  Let's meet on the #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
  UTC for about 30 mins to meet and greet, since the application deadline
  is next week. If this time is not convenient, please send me a note
  and I'll arrange for another time, say on Friday as well.
 
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09p1=43am=30
 
  We need to get an idea of how many slots we need to apply for based on
  really strong applications with properly fleshed out project ideas and
  mentor support. Hoping the meeting on IRC will nudge the students and
  mentors work towards that goal.
 
  Thanks,
  dims
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Thanks,
  -Sriram
  melange.PNG___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Davanum Srinivas :: http://davanum.wordpress.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] FFE Request: Allow enabling both v1 and v2 registries

2014-03-13 Thread Sean Dague
On 03/13/2014 08:00 AM, stuart.mcla...@hp.com wrote:
 Hi,
 
 Right now you can select running either the v1 or v2 registries but not
 both at the same time. I'd like to ask for an FFE for this functionality
 (Erno's patch in progress here: https://review.openstack.org/#/c/79957/)
 
 With v1 on the road to deprecation I think this may help migrating from
 a v1 to a v2 registry.
 
 Thanks,
 
 -Stuart
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

We're over a week past freeze; it is really time to focus on existing
issues, not to bring in additional feature freeze exceptions.

This should wait until the Juno tree opens up.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-13 Thread Davanum Srinivas
FYI, here's the log for the OpenStack GSoC meeting we just wrapped up

http://paste.openstack.org/show/73389/



On Tue, Mar 11, 2014 at 1:29 PM, Davanum Srinivas dava...@gmail.com wrote:
 Hi,

 Mentors:
 * Please click on My Dashboard then Connect with organizations and
 request a connection as a mentor (on the GSoC web site -
 http://www.google-melange.com/)

 Students:
 * Please see the Application template you will need to fill in on the GSoC 
 site.
   http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
 * Please click on My Dashboard then Connect with organizations and
 request a connection

 Both Mentors and Students:
 Let's meet on the #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
 UTC for about 30 mins to meet and greet, since the application deadline
 is next week. If this time is not convenient, please send me a note
 and I'll arrange for another time, say on Friday as well.
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09p1=43am=30

 We need to get an idea of how many slots we need to apply for based on
 really strong applications with properly fleshed out project ideas and
 mentor support. Hoping the meeting on IRC will nudge the students and
 mentors work towards that goal.

 Thanks,
 dims



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Alessandro Pilotti
Those use cases are very important for enterprise scenario requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server) with e.g. vSphere and XenServer supporting it as well 
(quiescing) and with the option for third-party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


 On 12/mar/2014, at 20:45, Bruce Montague bruce_monta...@symantec.com 
 wrote:
 
 
 Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
 the following list sketches some speculative OpenStack DR use cases. These 
 use cases do not reflect any specific product behavior and span a wide 
 spectrum. This list is not a proposal, it is intended primarily to solicit 
 additional discussion. The first basic use case, (1), is described in a bit 
 more detail than the others; many of the others are elaborations on this 
 basic theme. 
 
 
 
 * (1) [Single VM]
 
 A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
 Services) installed runs a key application and integral database. VSS can 
 quiesce the app, database, filesystem, and I/O on demand and can be invoked 
 external to the guest.
 
   a. The VM's volumes, including the boot volume, are replicated to a remote 
 DR site (another OpenStack deployment).
 
   b. Some form of replicated VM or VM metadata exists at the remote site. 
 This VM/description includes the replicated volumes. Some systems might use 
 cold migration or some form of wide-area live VM migration to establish this 
 remote site VM/description.
 
   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
 volumes in an application-consistent state. This state is flushed all the way 
 through to the remote volumes. As each remote volume reaches its 
 application-consistent state, this is recognized in some fashion, perhaps by 
 an in-band signal, and a snapshot of the volume is made at the remote site. 
 Volume replication is re-enabled immediately following the snapshot. A backup 
 is then made of the snapshot on the remote site. At the completion of this 
 cycle, application-consistent volume snapshots and backups exist on the 
 remote site.
 
   d.  When a disaster or firedrill happens, the replication network 
 connection is cut. The remote site VM pre-created or defined so as to use the 
 replicated volumes is then booted, using the latest application-consistent 
 state of the replicated volumes. The entire VM environment (management 
 accounts, networking, external firewalling, console access, etc..), similar 
 to that of the primary, either needs to pre-exist in some fashion on the 
 secondary or be created dynamically by the DR system. The booting VM either 
 needs to attach to a virtual network environment similar to at the primary 
 site or the VM needs to have boot code that can alter its network 
 personality. Networking configuration may occur in conjunction with an update 
 to DNS and other networking infrastructure. It is necessary for all required 
 networking configuration  to be pre-specified or done automatically. No 
 manual admin activity should be required. Environment requirements may be 
 stored in a DR configuration or database associated with the replication. 
 
   e. In a firedrill or test, the virtual network environment at the remote 
 site may be a test bubble isolated from the real network, with some 
 provision for protected access (such as NAT). Automatic testing is necessary 
 to verify that replication succeeded. These tests need to be configurable by 
 the end-user and admin and integrated with DR orchestration.
 
   f. After the VM has booted and been operational, the network connection 
 between the two sites is re-established. A replication connection between the 
 replicated volumes is re-established, and the replicated volumes are re-synced, 
 with the roles of primary and secondary reversed. (Ongoing replication in 
 this configuration may occur, driven from the new primary.)
 
   g. A planned failback of the VM to the old primary proceeds similar to the 
 failover from the old primary to the old replica, but with roles reversed and 
 the process minimizing offline time and data loss.
 
 
 
 * (2) [Core tenant/project infrastructure VMs] 
 
 Twenty VMs 

Re: [openstack-dev] [Glance] FFE Request: Allow enabling both v1 and v2 registries

2014-03-13 Thread Flavio Percoco

On 13/03/14 09:32 -0400, Sean Dague wrote:

On 03/13/2014 08:00 AM, stuart.mcla...@hp.com wrote:

Hi,

Right now you can select running either the v1 or v2 registries but not
both at the same time. I'd like to ask for an FFE for this functionality
(Erno's patch in progress here: https://review.openstack.org/#/c/79957/)

With v1 on the road to deprecation I think this may help migrating from
a v1 to a v2 registry.

Thanks,

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We're over a week past freeze, it is really time to focus on existing
issues, and not trying to bring in additional new feature freezes.

This should wait until the Juno tree opens up.


FWIW, the proposed change has a very low risk of regression and it
looks quite relevant for deployments still running Glance's v1 and
willing to upgrade slowly to v2. Furthermore, it'll make the
registry service's API configuration consistent with Glance's API.

Just bringing this up for consideration. I'd be fine with reviewing it
and letting it land. The patch is pretty small and it looks quite good
already.

To be fair, it actually fixes a bug. What makes this change worth
asking for a FFE is the fact that it adds 2 new configuration options
and makes it possible to run the registry with both versions enabled.

Cheers,
Flavio.

--
@flaper87
Flavio Percoco


pgpr_7bvKaAd4.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Top Gate Race - Bug 1248757 - test_snapshot_pattern fails with paramiko ssh EOFError

2014-03-13 Thread Dan Smith

 Here is the latest marked fail - 
 http://logs.openstack.org/28/79628/4/check/check-tempest-dsvm-neutron/11f8293/

So, looking at this a little bit, you can see from the n-cpu log that
it is getting failures when talking to neutron. Specifically, from
neutronclient:

throwing ConnectionFailed : timed out _cs_request
ConnectionFailed: Connection to neutron failed: Maximum attempts reached

From a brief look at neutronclient, it looks like it tries several
times to send a request, before it falls back to the above error.
Given the debug timed out log there, I would assume that neutron's
API isn't accepting the connection in time.
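
Roughly, the retry behaviour described above amounts to something like the
following sketch (illustrative only, not the actual neutronclient code):

    import time

    class ConnectionFailed(Exception):
        pass

    def cs_request(send_once, max_attempts=3, backoff=1):
        # try the HTTP call a few times; if every attempt times out we
        # give up with the error quoted from the n-cpu log above
        for attempt in range(max_attempts):
            try:
                return send_once()
            except IOError:  # stand-in for a socket timeout
                time.sleep(backoff)
        raise ConnectionFailed(
            'Connection to neutron failed: Maximum attempts reached')

So a neutron-server that is merely slow (rather than down) can still surface
as a hard failure once the attempt budget is exhausted.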

Later in the log, it successfully reaches neutron again, and then
falls over again in the same way. This is a parallel job, so load is
high, which makes me suspect just a load issue.

From talking to salv-orlando on IRC just now, it sounds like this
might just be some lock contention on the Neutron side, which is
slowing it down enough to cause failures occasionally.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Solly Ross
@bnemec: I don't think that's been considered.  I'm actually one of the 
upstream maintainers for noVNC.  The only concern that I'd have with OpenStack 
adopting noVNC (there are other maintainers, as well as the author, so I'd have 
to check with them as well) is that there are a few other projects that use 
noVNC, so we'd need to make sure that no OpenStack-specific code gets merged 
into noVNC if we adopt it.  Other than that, though, adopting noVNC doesn't 
sound like a horrible idea.

Best Regards,
Solly Ross

- Original Message -
From: Ben Nemec openst...@nemebean.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning
noVNC from github.com/kanaka



On 2014-03-11 20:34, Joshua Harlow wrote: 


https://status.github.com/messages 
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.' 
If you were cloning from github.com and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action. 
Unfortunately I don't think novnc is in git.openstack.org because it's not an 
OpenStack project. I wonder if we should investigate adopting it (if the 
author(s) are amenable to that) since we're using the git version of it. Maybe 
that's already been considered and I just don't know about it. :-) 
-Ben 



From: Sukhdev Kapur  sukhdevka...@gmail.com  
Reply-To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org  
Date: Tuesday, March 11, 2014 at 4:08 PM 
To: Dane Leblanc (leblancd)  lebla...@cisco.com  
Cc: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org ,  openstack-in...@lists.openstack.org   
openstack-in...@lists.openstack.org  
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka 



I have noticed that even cloning devstack has failed a few times within the 
last couple of hours - it was running fairly smoothly until now. 
-Sukhdev 


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  sukhdevka...@gmail.com  
wrote: 



[adding openstack-dev list as well ] 
I have noticed that this has started hitting my builds within the last few hours. I 
have noticed the exact same failures on almost 10 builds. 
Looks like something has happened within the last few hours - perhaps the load? 
-Sukhdev 


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  lebla...@cisco.com  
wrote: 





Apologies if this is the wrong audience for this question... 



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
errors. Below are 2 examples. 



Is this a known issue? Are there any localrc settings which might help here? 



Example 1: 



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 

2014-03-11 15:00:33.780 | + return 0 

2014-03-11 15:00:33.781 | ++ trueorfalse False 

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False 

2014-03-11 15:00:33.783 | + '[' False = True ']' 

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC 

2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC master 

2014-03-11 15:00:33.786 | + GIT_REMOTE= https://github.com/kanaka/noVNC.git 

2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC 

2014-03-11 15:00:33.789 | + GIT_REF=master 

2014-03-11 15:00:33.790 | ++ trueorfalse False False 

2014-03-11 15:00:33.791 | + RECLONE=False 

2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]] 

2014-03-11 15:00:33.793 | + echo master 

2014-03-11 15:00:33.794 | + egrep -q '^refs' 

2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]] 

2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]] 

2014-03-11 15:00:33.797 | + git_timed clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC 

2014-03-11 15:00:33.798 | + local count=0 

2014-03-11 15:00:33.799 | + local timeout=0 

2014-03-11 15:00:33.801 | + [[ -n 0 ]] 

2014-03-11 15:00:33.802 | + timeout=0 

2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC 

2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'... 

2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200 

2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly 

2014-03-11 15:03:13.697 | fatal: early EOF 

2014-03-11 15:03:13.698 | fatal: index-pack failed 

2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]] 

2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone' 
https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]' 

2014-03-11 15:03:13.701 | + local exitcode=0 

2014-03-11 15:03:13.702 | [Call Trace] 

2014-03-11 15:03:13.703 | 

Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-13 Thread Luke Gorrie
Howdy!

I have some tech questions I'd love some pointers on from people who've
succeeded in setting up CI for Neutron based on the upstream devstack-gate.

Here are the parts where I'm blocked now:

1. I need to enable an ML2 mech driver. How can I do this? I have been
trying to create a localrc with a Q_ML2_PLUGIN_MECHANISM_DRIVERS=...
line, but it appears that the KEEP_LOCALRC option in devstack-gate is
broken (confirmed on #openstack-infra).

2. How do I streamline which tests are run? I tried adding export
DEVSTACK_GATE_TEMPEST_REGEX=network in the Jenkins job configuration but I
don't see any effect. (word on #openstack-infra is this option is not used
by them so status unknown.)

3. How do I have Jenkins copy the log files into a directory on the Jenkins
master node (that I can serve up with Apache)? This is left as an exercise
to the reader in the blog tutorial but I would love a cheat, since I am
getting plenty of exercise already :-).

I also have the meta-question: How can I test changes/fixes to
devstack-gate? I've attempted many times to modify how scripts work, but I
don't have a global understanding of the whole openstack-infra setup, and
somehow my changes always end up being clobbered by a fresh checkout from
the upstream repo on Github. That's crazy frustrating when it takes 10+
minutes to fire up a test via Jenkins even when I'm only e.g. trying to add
an echo to a shell script somewhere to see what's in an environment
variable at a certain point in a script. I'd love a faster edit-compile-run
loop, especially one that doesn't involve needing to get changes merged
upstream into the official openstack-infra repo.

I also have an issue that worries me. I once started seeing tempest tests
failing due to a resource leak where the kernel ran out of loopback mounts
and that broke tempest. Here is what I saw:

root@egg-slave:~# losetup -a
/dev/loop0: [fc00]:5248399
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop1: [fc00]:5248409 (/opt/stack/data/stack-volumes-backing-file)
/dev/loop2: [fc00]:5248467
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop3: [fc00]:5248496
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop4: [fc00]:5248702
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop5: [fc00]:5248735
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop6: [fc00]:5248814
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop7: [fc00]:5248825
(/opt/stack/data/swift/drives/images/swift.img)

and trying to remove this with 'losetup -d ...' had no effect. I rebooted.
(I'm on Ubuntu 13.10.)

This kind of spurious error has the potential to cause my CI to start
casting negative votes (again) and upsetting everybody's workflows, not
because my tests have actually found a problem but just because it's a
non-trivial problem for me to keep a devstack-gate continuously
operational. I hope that doesn't happen, but with this level of
infrastructure complexity it does feel a little like playing russian
roulette that the next glitch in
devstack/devstack-gate/Jenkins/Gerrit/Zuul/Gearman/... will manifest itself
in the copy that's running on my server. /vent

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Duncan Thomas
On 12 March 2014 17:35, Tim Bell tim.b...@cern.ch wrote:

 And if the same mistake is done for a cinder volume or a trove database ?

Deferred deletion for cinder has been proposed, and there have been
few objections to it... nobody has put forward code yet, but anybody
is welcome to do so.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Bruce Montague
Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems, was that done?  If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different than qemu/KVM's, is this true? 
Can it also be accessed through libvirt?)
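
For reference, a minimal sketch of what triggering that from libvirt might
look like with the Python bindings, assuming a qemu/KVM guest (hypothetically
named 'guest1') with qemu-ga running inside it and a libvirt new enough to
support quiesced snapshots:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('guest1')

    snapshot_xml = ('<domainsnapshot>'
                    '<name>app-consistent</name>'
                    '</domainsnapshot>')
    # QUIESCE asks the guest agent to run fsfreeze/thaw around the
    # snapshot; it currently has to be combined with DISK_ONLY
    flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
             libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)
    dom.snapshotCreateXML(snapshot_xml, flags)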

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

 Those use cases are very important for enterprise scenario requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server) with e.g. vSphere and XenServer supporting it as well 
 (quiescing) and with the option for third-party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


 On 12/mar/2014, at 20:45, Bruce Montague bruce_monta...@symantec.com 
 wrote:


 Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
 the following list sketches some speculative OpenStack DR use cases. These 
 use cases do not reflect any specific product behavior and span a wide 
 spectrum. This list is not a proposal, it is intended primarily to solicit 
 additional discussion. The first basic use case, (1), is described in a bit 
 more detail than the others; many of the others are elaborations on this 
 basic theme.



 * (1) [Single VM]

 A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
 Services) installed runs a key application and integral database. VSS can 
 quiesce the app, database, filesystem, and I/O on demand and can be invoked 
 external to the guest.

   a. The VM's volumes, including the boot volume, are replicated to a remote 
 DR site (another OpenStack deployment).

   b. Some form of replicated VM or VM metadata exists at the remote site. 
 This VM/description includes the replicated volumes. Some systems might use 
 cold migration or some form of wide-area live VM migration to establish this 
 remote site VM/description.

   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
 volumes in an application-consistent state. This state is flushed all the way 
 through to the remote volumes. As each remote volume reaches its 
 application-consistent state, this is recognized in some fashion, perhaps by 
 an in-band signal, and a snapshot of the volume is made at the remote site. 
 Volume replication is re-enabled immediately following the snapshot. A backup 
 is then made of the snapshot on the remote site. At the completion of this 
 cycle, application-consistent volume snapshots and backups exist on the 
 remote site.

   d.  When a disaster or firedrill happens, the replication network
 connection is cut. The remote site VM pre-created or defined so as to use the 
 replicated volumes is then booted, using the latest application-consistent 
 state of the replicated volumes. The entire VM environment (management 
 accounts, networking, external firewalling, console access, etc..), similar 
 to that of the primary, either needs to pre-exist in some fashion on the 
 secondary or be created dynamically by the DR system. The booting VM either 
 needs to attach to a virtual network environment similar to at the primary 
 site or the VM needs to have boot code that can alter its network 
 personality. Networking configuration may occur in conjunction with an update 
 to DNS and other networking infrastructure. It is necessary for all required 
 networking configuration  to be pre-specified or done automatically. No 
 manual admin activity should be required. Environment requirements may be 
 stored in a DR configuration or database associated with the replication.

   e. In a firedrill or test, the virtual network environment at the remote 
 site may be a test bubble isolated 

Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Sean Dague
I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in a language-specific package manager).

-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:
 @bnemec: I don't think that's been considered.  I'm actually one of the 
 upstream maintainers for noVNC.  The only concern that I'd have with 
 OpenStack adopting noVNC (there are other maintainers, as well as the author, 
 so I'd have to check with them as well) is that there are a few other 
 projects that use noVNC, so we'd need to make sure that no OpenStack-specific 
 code gets merged into noVNC if we adopt it.  Other than that, though, 
 adopting noVNC doesn't sound like a horrible idea.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
 From: Ben Nemec openst...@nemebean.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: openstack-in...@lists.openstack.org
 Sent: Wednesday, March 12, 2014 3:38:19 PM
 Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning  
 noVNC from github.com/kanaka
 
 
 
 On 2014-03-11 20:34, Joshua Harlow wrote: 
 
 
 https://status.github.com/messages 
 * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
 mitigations we have in place are proving effective in protecting us and we're 
 hopeful that we've got this one resolved.' 
 If you were cloning from github.com and not http://git.openstack.org then you 
 were likely seeing some of the DDoS attack in action. 
 Unfortunately I don't think novnc is in git.openstack.org because it's not an 
 OpenStack project. I wonder if we should investigate adopting it (if the 
 author(s) are amenable to that) since we're using the git version of it. 
 Maybe that's already been considered and I just don't know about it. :-) 
 -Ben 
 
 
 
 From: Sukhdev Kapur  sukhdevka...@gmail.com  
 Reply-To: OpenStack Development Mailing List (not for usage questions)  
 openstack-dev@lists.openstack.org  
 Date: Tuesday, March 11, 2014 at 4:08 PM 
 To: Dane Leblanc (leblancd)  lebla...@cisco.com  
 Cc: OpenStack Development Mailing List (not for usage questions)  
 openstack-dev@lists.openstack.org ,  openstack-in...@lists.openstack.org  
  openstack-in...@lists.openstack.org  
 Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
 noVNC from github.com/kanaka 
 
 
 
 I have noticed that even cloning devstack has failed a few times within the 
 last couple of hours - it was running fairly smoothly until now. 
 -Sukhdev 
 
 
 On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  sukhdevka...@gmail.com  
 wrote: 
 
 
 
 [adding openstack-dev list as well ] 
 I have noticed that this has started hitting my builds within the last few hours. 
 I have noticed the exact same failures on almost 10 builds. 
 Looks like something has happened within the last few hours - perhaps the load? 
 -Sukhdev 
 
 
 On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  lebla...@cisco.com 
  wrote: 
 
 
 
 
 
 Apologies if this is the wrong audience for this question... 
 
 
 
 I'm seeing intermittent failures running stack.sh whereby 'git clone 
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
 errors. Below are 2 examples. 
 
 
 
 Is this a known issue? Are there any localrc settings which might help here? 
 
 
 
 Example 1: 
 
 
 
 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 
 
 2014-03-11 15:00:33.780 | + return 0 
 
 2014-03-11 15:00:33.781 | ++ trueorfalse False 
 
 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False 
 
 2014-03-11 15:00:33.783 | + '[' False = True ']' 
 
 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC 
 
 2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git 
 /opt/stack/noVNC master 
 
 2014-03-11 15:00:33.786 | + GIT_REMOTE= https://github.com/kanaka/noVNC.git 
 
 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC 
 
 2014-03-11 15:00:33.789 | + GIT_REF=master 
 
 2014-03-11 15:00:33.790 | ++ trueorfalse False False 
 
 2014-03-11 15:00:33.791 | + RECLONE=False 
 
 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]] 
 
 2014-03-11 15:00:33.793 | + echo master 
 
 2014-03-11 15:00:33.794 | + egrep -q '^refs' 
 
 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]] 
 
 2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]] 

Re: [openstack-dev] Climate Incubation Application

2014-03-13 Thread Russell Bryant
On 03/12/2014 12:14 PM, Sylvain Bauza wrote:
 Hi Russell,
 Thanks for replying,
 
 
 2014-03-12 16:46 GMT+01:00 Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com:
 The biggest concern seemed to be that we weren't sure whether Climate
 makes sense as an independent project or not.  We think it may make more
 sense to integrate what Climate does today into Nova directly.  More
 generally, we think reservations of resources may best belong in the
 APIs responsible for managing those resources, similar to how quota
 management for resources lives in the resource APIs.
 
 There is some expectation that this type of functionality will extend
 beyond Nova, but for that we could look at creating a shared library of
 code to ease implementing this sort of thing in each API that needs it.
 
 
 
 That's really a good question, so maybe I could give some feedback on
 how we deal with the existing use-cases.
 About the possible integration with Nova, that's already something we
 did for the virtual instances use-case, thanks to an API extension
 responsible for checking if a scheduler hint called 'reservation' was
 sent, and if so, making use of the python-climateclient package to send a
 request to Climate.
 
 I truly agree that users should probably not use a separate API for
 reserving resources, and that this would rather be the duty of the project
 itself (Nova, Cinder or even Heat). That said, we think there is a need for
 a global scheduler managing resources rather than siloing them per project.
 Hence we still think there is a need for a Climate Manager.

What we need to dig into is *why* you feel it needs to be global.

I'm trying to understand what you're saying here ... do you mean that
since we're trying to get to where there's a global scheduler, that it
makes sense there should be a central point for this, even if the API is
through the existing compute/networking/storage APIs?

If so, I think that makes sense.  However, until we actually have
something for scheduling, I think we should look at implementing all of
this in the services, and perhaps share some code with a Python library.
 So, I'm thinking along the lines of ...

1) Take what Climate does today and work to integrate it into Nova,
using as much of the existing Climate code as makes sense.  Be careful
about coupling in Nova so that we can easily split out the right code
into a library once we're ready to work on integration in another project.

2) In parallel, continue working on decoupling nova-scheduler from the
rest of Nova, so that we can split it out into its own project.

3) Once the scheduler is split out, re-visit what part of reservations
functionality belongs in the new scheduling project and what parts
should remain in each of the projects responsible for managing resources.

 That said, there are different ways to plug in to the Manager. Our
 proposal is to deliver a REST API and a Python client so that there
 could still be some operator access for managing the resources if
 needed. The other way would be to only expose an RPC interface like the
 scheduler does at the moment, but as the move to Pecan/WSME is already
 close to being done (reviews currently in progress), that's still a good
 opportunity for leveraging the existing bits of code.

Yes, I would want to use as much of the existing code as possible.

As I said above, I just think it's premature to make this its own
project on its own, unless we're able to look at scheduling more broadly
as its own project.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Aaron Rosen
The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel, like the dhcp-agent does for this exact reason.
See: https://review.openstack.org/#/c/28914/ which did this for the
dhcp-agent.
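
For illustration, a minimal sketch of that approach, assuming a hypothetical
per-router sync_router() helper and an eventlet-based agent such as the
l3-agent:

    import eventlet
    eventlet.monkey_patch()

    def sync_router(router_id):
        # placeholder for the real per-router work (plugin RPC calls,
        # namespace setup, iptables rules, ...)
        print('synced %s' % router_id)

    def sync_all_routers(router_ids, concurrency=4):
        # fan the initial sync out over a bounded pool of green threads
        # instead of processing routers strictly one by one
        pool = eventlet.GreenPool(size=concurrency)
        for router_id in router_ids:
            pool.spawn_n(sync_router, router_id)
        pool.waitall()

    sync_all_routers(['router-1', 'router-2', 'router-3'])

A bounded pool keeps the number of concurrent calls under control while
still hiding most of the per-router round-trip latency.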

Best,

Aaron
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.comwrote:

 Yuri, could you elaborate your idea in detail? , I'm lost at some
 points with your unix domain / token authentication.

 Where does the token come from?,

 Who starts rootwrap the first time?

 If you could write a full interaction sequence, on the etherpad, from
 rootwrap daemon start ,to a simple call to system happening, I think that'd
 help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

-- 

Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Joe Hakim Rahme
On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:

 There are a number of patches up for review that make various changes to use 
 six apis instead of Python 2 constructs. While I understand the desire to 
 get a head start on getting Tempest to run in Python 3, I'm not sure it makes 
 sense to do this work piecemeal until we are near ready to introduce a py3 
 gate job. Many contributors will not be aware of what all the differences are 
 and py2-isms will creep back in resulting in more overall time spent making 
 these changes and reviewing. Also, the core review team is busy trying to do 
 stuff important to the icehouse release which is barely more than 5 weeks 
 away. IMO we should hold off on various kinds of cleanup patches for now.

+1 I agree with you David.

However, what’s the best way we can go about making this a goal for the
next release cycle?

---
Joe H. Rahme
IRC: rahmu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS design proposals

2014-03-13 Thread Prashanth Hari
Hi,

I am a latecomer to this discussion.
Can someone please point me to the design proposal documentation in
addition to the object model?


Thanks,
Prashanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-13 Thread Luke Gorrie
oh and in my haste I forgot to say: thank you extremely much to everybody
who's been giving me pointers on IRC and especially to Jay for the blog
walkthrough!


On 13 March 2014 15:30, Luke Gorrie l...@tail-f.com wrote:

 Howdy!

 I have some tech questions I'd love some pointers on from people who've
 succeeded in setting up CI for Neutron based on the upstream devstack-gate.

 Here are the parts where I'm blocked now:

 1. I need to enable an ML2 mech driver. How can I do this? I have been
 trying to create a localrc with a Q_ML2_PLUGIN_MECHANISM_DRIVERS=...
 line, but it appears that the KEEP_LOCALRC option in devstack-gate is
 broken (confirmed on #openstack-infra).

 2. How do I streamline which tests are run? I tried adding export
 DEVSTACK_GATE_TEMPEST_REGEX=network in the Jenkins job configuration but I
 don't see any effect. (word on #openstack-infra is this option is not used
 by them so status unknown.)

 3. How do I have Jenkins copy the log files into a directory on the
 Jenkins master node (that I can serve up with Apache)? This is left as an
 exercise to the reader in the blog tutorial but I would love a cheat, since
 I am getting plenty of exercise already :-).

 I also have the meta-question: How can I test changes/fixes to
 devstack-gate? I've attempted many times to modify how scripts work, but I
 don't have a global understanding of the whole openstack-infra setup, and
 somehow my changes always end up being clobbered by a fresh checkout from
 the upstream repo on Github. That's crazy frustrating when it takes 10+
 minutes to fire up a test via Jenkins even when I'm only e.g. trying to add
 an echo to a shell script somewhere to see what's in an environment
 variable at a certain point in a script. I'd love a faster edit-compile-run
 loop, especially one that doesn't involve needing to get changes merged
 upstream into the official openstack-infra repo.

 I also have an issue that worries me. I once started seeing tempest tests
 failing due to a resource leak where the kernel ran out of loopback mounts
 and that broke tempest. Here is what I saw:

 root@egg-slave:~# losetup -a
 /dev/loop0: [fc00]:5248399
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop1: [fc00]:5248409 (/opt/stack/data/stack-volumes-backing-file)
 /dev/loop2: [fc00]:5248467
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop3: [fc00]:5248496
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop4: [fc00]:5248702
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop5: [fc00]:5248735
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop6: [fc00]:5248814
 (/opt/stack/data/swift/drives/images/swift.img)
 /dev/loop7: [fc00]:5248825
 (/opt/stack/data/swift/drives/images/swift.img)

 and trying to remove this with 'losetup -d ...' had no effect. I rebooted.
 (I'm on Ubuntu 13.10.)

 This kind of spurious error has the potential to cause my CI to start
 casting negative votes (again) and upsetting everybody's workflows, not
 because my tests have actually found a problem but just because it's a
 non-trivial problem for me to keep a devstack-gate continuously
 operational. I hope that doesn't happen, but with this level of
 infrastructure complexity it does feel a little like playing russian
 roulette that the next glitch in
 devstack/devstack-gate/Jenkins/Gerrit/Zuul/Gearman/... will manifest itself
 in the copy that's running on my server. /vent

 Cheers,
 -Luke



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] any recommendations for live debugging of openstack services?

2014-03-13 Thread Solly Ross
Well, for a non-interactive view of things, you can use the 
openstack.common.report functionality.  It's currently integrated into Nova, 
and I believe that the other projects are working to get it integrated as well. 
 To use it, you just send a SIGUSR1 to any Nova process, and a report of the 
current stack traces of threads and green threads, as well as the current 
configuration options, will be dumped.
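
The rough shape of the mechanism is something like this (an illustrative
sketch only, not the actual oslo report code):

import signal
import sys
import traceback

def _dump_stacks(signum, frame):
    # print a stack trace for every thread in the process; the real report
    # also covers greenthreads and the currently loaded config options
    for thread_id, stack in sys._current_frames().items():
        print("Thread %s:" % thread_id)
        print("".join(traceback.format_stack(stack)))

signal.signal(signal.SIGUSR1, _dump_stacks)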

It doesn't look like exactly what you want, but I figured it might be useful to 
you anyway.

Best Regards,
Solly Ross

- Original Message -
From: Chris Friesen chris.frie...@windriver.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 12, 2014 12:47:32 PM
Subject: [openstack-dev] any recommendations for live debugging of openstack
services?


Are there any tools that people can recommend for live debugging of 
openstack services?

I'm looking for a mechanism where I could take a running system that 
isn't behaving the way I expect and somehow poke around inside the 
program while it keeps running.  (Sort of like tracepoints in gdb.)

I've seen mention of things like twisted.manhole and 
eventlet.backdoor...has anyone used this sort of thing with openstack? 
Are there better options?
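
(For context, wiring in the eventlet backdoor is typically just a couple of
lines inside a long-running service -- a sketch, with an arbitrary port:)

import eventlet
from eventlet import backdoor

# run an interactive Python prompt inside the running process; connect with
# "telnet 127.0.0.1 3000" and poke at live state while the service keeps going
eventlet.spawn(backdoor.backdoor_server, eventlet.listen(('127.0.0.1', 3000)))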

Also, has anyone ever seen an implementation of watchpoints for python? 
  By that I mean the ability to set a breakpoint if the value of a 
variable changes.  I found 
https://sourceforge.net/blog/watchpoints-in-python/; but it looks 
pretty hacky.
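
(The hacks I've seen boil down to something like the following sketch: trace
every executed line and compare the watched value, which is why they're slow.)

import pdb
import sys

def watch_attr(obj, name):
    # crude watchpoint: break into pdb when obj.<name> changes value
    state = {'last': getattr(obj, name, None)}

    def tracer(frame, event, arg):
        current = getattr(obj, name, None)
        if current != state['last']:
            print("%s changed: %r -> %r" % (name, state['last'], current))
            state['last'] = current
            pdb.set_trace()
        return tracer

    sys.settrace(tracer)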

Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [Infra] pep8 issues in tempest gate / testscenarios lib

2014-03-13 Thread Koderer, Marc
Hi folks,

I can't make it to the QA meeting today, so I wanted to summarize the issue
that we have with the pep8 check in the tempest gate. An example of the issue you can
find here:
  https://review.openstack.org/#/c/79256/ 
  
http://logs.openstack.org/56/79256/1/gate/gate-tempest-pep8/088cc12/console.html

The pep8 check shows an error, but the check itself is marked as a success.

For me this shows two issues. First, flake8 should return an exit code != 0.
I will have a closer look into hacking and what went wrong here.

The second issue is the current implementation of the negative testing framework:
we are using the testscenarios lib with the load_tests variable interpreted
by the test runner. This forces us to build the scenarios at import time, and if
we want to have tempest configuration options for this (like those introduced in
https://review.openstack.org/#/c/73982/), the lazy loading of the config doesn't
work.
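
For reference, the pattern looks roughly like this (a self-contained
illustration, not the actual tempest code; the class and scenarios are made up):

import testscenarios
import testtools

class FlavorsNegativeTest(testtools.TestCase):
    # the scenario list is built when this module is imported, i.e. before
    # the tempest configuration is necessarily loaded -- the laziness problem
    scenarios = [
        ('missing_name', {'payload': {}}),
        ('bad_ram', {'payload': {'ram': -1}}),
    ]

    def test_create_flavor_fails(self):
        self.assertIsInstance(self.payload, dict)

# the runner picks this hook up at load time and multiplies each test
# by its scenarios
load_tests = testscenarios.load_tests_apply_scenarios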

Although the error no longer appears if I remove the inheritance of the XML class
from the JSON class
(https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_flavors_negative_xml.py#L24),
I see a general problem with
the usage of import-time code, and we should think about a better solution in
general.

I'll try to address the missing pieces tomorrow.
Bug: https://bugs.launchpad.net/tempest/+bug/1291826

Regards,
Marc

DEUTSCHE TELEKOM AG
Digital Business Unit, Cloud Services (PI)
Marc Koderer
Cloud Technology Software Developer
T-Online-Allee 1, 64211 Darmstadt
E-Mail: m.kode...@telekom.de
www.telekom.com   

LIFE IS FOR SHARING. 

DEUTSCHE TELEKOM AG
Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
Board of Management: René Obermann (Chairman),
Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
Commercial register: Amtsgericht Bonn HRB 6794
Registered office: Bonn

BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY E-MAIL.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS design proposals

2014-03-13 Thread Brandon Logan
This is the object model proposals:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

From: Prashanth Hari [hvpr...@gmail.com]
Sent: Thursday, March 13, 2014 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS design proposals

Hi,

I am a latecomer to this discussion.
Can someone please point me to the design proposal documentation in addition 
to the object model?


Thanks,
Prashanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread McCabe, Donagh
Marco,

The replication *inside* Swift is not intended to move data between two 
different Swift instances -- it's an internal data repair and rebalance 
mechanism.

However, there is a different mechanism, called container-to-container 
synchronization that might be what you are looking for. It will sync two 
containers in different swift instances. The swift instances may be in 
different Keystone administrative domains -- the authentication is not based on 
Keystone. It does require that each swift instance be configured to recognise 
each other. However, this is only usable for low update rates.
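
Setting it up looks roughly like this with python-swiftclient (a sketch; the
container name, key and endpoints are illustrative, and each cluster must also
list the other in its container-sync configuration):

from swiftclient import client as swift

# illustrative endpoints and tokens for the two clusters
url_a, token_a = 'https://swift-a.example.com/v1/AUTH_a', 'token-a'
url_b, token_b = 'https://swift-b.example.com/v1/AUTH_b', 'token-b'

# on cluster A: point container 'photos' at its peer on cluster B
swift.put_container(url_a, token_a, 'photos', headers={
    'X-Container-Sync-To': url_b + '/photos',
    'X-Container-Sync-Key': 'shared-secret'})

# on cluster B: the mirror configuration pointing back at cluster A
swift.put_container(url_b, token_b, 'photos', headers={
    'X-Container-Sync-To': url_a + '/photos',
    'X-Container-Sync-Key': 'shared-secret'})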

Regards,
Donagh

-Original Message-
From: Fargetta Marco [mailto:marco.farge...@ct.infn.it] 
Sent: 13 March 2014 11:24
To: OpenStack Development Mailing List
Subject: [openstack-dev] [swift] Replication multi cloud

Hi all,

we would like to use the replication mechanism in swift to replicate the data in two 
swift instances deployed in different clouds with different keystones and 
administrative domains.

Is this possible with the current replication facilities, or should the instances stay in 
the same cloud, sharing the keystone?

Cheers,
Marco



--

Eng. Marco Fargetta, PhD

Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy

EMail: marco.farge...@ct.infn.it


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Ben Nemec

On 2014-03-13 09:44, Sean Dague wrote:
I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).


IIRC, when I looked into using the distro-packaged noVNC it broke all 
kinds of things because for some reason noVNC has a dependency on 
nova-common (now python-nova it looks like), so we end up pulling in all 
kinds of distro nova stuff that conflicts with the devstack installed 
pieces.  It doesn't seem like a correct dep to me, but maybe Solly can 
comment on whether it's necessary or not.


-Ben



-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:
@bnemec: I don't think that's been considered.  I'm actually one of 
the upstream maintainers for noVNC.  The only concern that I'd have 
with OpenStack adopting noVNC (there are other maintainers, as well as 
the author, so I'd have to check with them as well) is that there are 
a few other projects that use noVNC, so we'd need to make sure that no 
OpenStack-specific code gets merged into noVNC if we adopt it.  Other 
than that, though, adopting noVNC doesn't sound like a horrible idea.


Best Regards,
Solly Ross

- Original Message -
From: Ben Nemec openst...@nemebean.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org

Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
cloning	noVNC from github.com/kanaka




On 2014-03-11 20:34, Joshua Harlow wrote:


https://status.github.com/messages
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. 
The mitigations we have in place are proving effective in protecting 
us and we're hopeful that we've got this one resolved.'
If you were cloning from github.com and not http://git.openstack.org 
then you were likely seeing some of the DDoS attack in action.
Unfortunately I don't think novnc is in git.openstack.org because it's 
not an OpenStack project. I wonder if we should investigate adopting 
it (if the author(s) are amenable to that) since we're using the git 
version of it. Maybe that's already been considered and I just don't 
know about it. :-)

-Ben



From: Sukhdev Kapur  sukhdevka...@gmail.com 
Reply-To: OpenStack Development Mailing List (not for usage 
questions)  openstack-dev@lists.openstack.org 

Date: Tuesday, March 11, 2014 at 4:08 PM
To: Dane Leblanc (leblancd)  lebla...@cisco.com 
Cc: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org ,  
openstack-in...@lists.openstack.org   
openstack-in...@lists.openstack.org 
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
cloning noVNC from github.com/kanaka




I have noticed that even clone of devstack has failed few times within 
last couple of hours - it was running fairly smooth so far.

-Sukhdev


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  
sukhdevka...@gmail.com  wrote:




[adding openstack-dev list as well ]
I have noticed that this has stated hitting my builds within last few 
hours. I have noticed exact same failures on almost 10 builds.
Looks like something has happened within last few hours - perhaps the 
load?

-Sukhdev


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  
lebla...@cisco.com  wrote:






Apologies if this is the wrong audience for this question...



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning 
various errors. Below are 2 examples.




Is this a known issue? Are there any localrc settings which might help 
here?




Example 1:



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc

2014-03-11 15:00:33.780 | + return 0

2014-03-11 15:00:33.781 | ++ trueorfalse False

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False

2014-03-11 15:00:33.783 | + '[' False = True ']'

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC

2014-03-11 15:00:33.785 | + git_clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC master


2014-03-11 15:00:33.786 | + GIT_REMOTE= 
https://github.com/kanaka/noVNC.git


2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC

2014-03-11 15:00:33.789 | + GIT_REF=master

2014-03-11 15:00:33.790 | ++ trueorfalse 

[openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Anna A Sortland
[A] The current keystone LDAP community driver returns all users that 
exist in LDAP via the API call v3/users, instead of returning just users 
that have role grants (similar processing is true for groups). This could 
potentially be a very large number of users. We have seen large companies 
with LDAP servers containing hundreds of thousands of users. We are aware 
of the filters available in keystone.conf ([ldap].user_filter and 
[ldap].query_scope) to cut down on the number of results, but they do not 
provide sufficient filtering (for example, it is not possible to set 
user_filter to members of certain known groups for OpenLDAP without 
creating a memberOf overlay on the LDAP server). 

[Nathan Kinder] What attributes would you filter on?  It seems to me that 
LDAP would need to have knowledge of the roles to be able to filter based 
on the roles.  This is not necessarily the case, as identity and 
assignment can be split in Keystone such that identity is in LDAP and role 
assignment is in SQL.  I believe it was designed this way to deal with 
deployments
where LDAP already exists and there is no need (or possibility) of adding 
role info into LDAP. 

[A] That's our main use case. The users and groups are in LDAP and role 
assignments are in SQL. 
You would filter on role grants, and this information is in the SQL backend. So a 
new API would need to query both the identity and assignment drivers. 

[Nathan Kinder] Without filtering based on a role attribute in LDAP, I 
don't think that there is a good solution if you have OpenStack and 
non-OpenStack users mixed in the same container in LDAP.
If you want to first find all of the users that have a role assigned to 
them in the assignments backend, then pull their information from LDAP, I 
think that you will end up with one LDAP search operation per user. This 
also isn't a very scalable solution.

[A] What was the reason the LDAP driver was written this way, instead of 
returning just the users that have OpenStack-known roles? Was the creation 
of a separate API for this function considered? 
Are other consumers of OpenStack (or users of Horizon) experiencing this 
issue? If so, what was their approach to overcome this issue? We have been 
prototyping a keystone extension that provides an API that provides this 
filtering capability, but it seems like a function that should be 
generally available in keystone.

[Nathan Kinder] I'm curious to know how your prototype is looking to 
handle this. 

[A] The prototype basically first calls assignment API 
list_role_assignments() to get a list of users and groups with role 
grants. It then iterates the retrieved list and calls identity API 
list_users_in_group() to get the list of users in these groups with grants 
and get_user() to get users that have role grants but do not belong to the 
groups with role grants (a call for each user). Both calls ignore groups 
and users that are not found in the LDAP registry but exist in SQL (this 
could be the result of a user or group being removed from LDAP, but the 
corresponding role grant was not revoked). Then the code removes 
duplicates if any and returns the combined list. 
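
In rough pseudo-Python, the flow is (names are illustrative, not the actual
extension code):

def list_users_with_role_grants(assignment_api, identity_api):
    # 1. ask the assignment backend (SQL) for all role grants
    users = {}
    for grant in assignment_api.list_role_assignments():
        if 'group_id' in grant:
            # 2. expand group grants into member users (one LDAP query)
            try:
                members = identity_api.list_users_in_group(grant['group_id'])
            except Exception:  # group removed from LDAP, grant not revoked
                continue
            for user in members:
                users[user['id']] = user
        elif 'user_id' in grant:
            # 3. fetch directly granted users one by one (one LDAP query each)
            try:
                user = identity_api.get_user(grant['user_id'])
            except Exception:  # user removed from LDAP, grant not revoked
                continue
            users[user['id']] = user
    # 4. keying on the user id removes duplicates
    return list(users.values())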

The new extension API is /v3/my_new_extension/users. Maybe the better 
naming would be v3/roles/users (list users with any role) - compare to 
existing v3/roles/{role_id}/users (list users with a specified role). 


Another alternative that we've tried is just a new identity driver that 
inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides 
just the list_users() function. That's probably not the best approach from 
an OpenStack standards point of view, but I would like to get the community's 
feedback on whether this is acceptable. 


I posted this question to openstack-security last week but could not 
get any feedback after Nathan's first reply. Reposting to openstack-dev.



Anna Sortland
Cloud Systems Software Development
IBM Rochester, MN
annas...@us.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Brian Haley
Aaron,

I thought the l3-agent already did this if doing a full sync?

_sync_routers_task() -> _process_routers() -> spawn_n(self.process_router, ri)

So each router gets processed in a greenthread.
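
i.e., the existing pattern is roughly (a sketch, not the actual agent code):

import eventlet
eventlet.monkey_patch()

pool = eventlet.GreenPool()

def process_router(ri):
    # namespace / iptables / ip link work for one router goes here
    pass

def _process_routers(routers):
    for ri in routers:
        # one greenthread per router, but each one still serializes on
        # its own sudo/rootwrap and /sbin/ip calls
        pool.spawn_n(process_router, ri)
    pool.waitall()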

It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on network nodes with large numbers of namespaces.

-Brian

On 03/13/2014 10:48 AM, Aaron Rosen wrote:
 The easiest/quickest thing to do for Icehouse would probably be to run the
 initial sync in parallel like the dhcp-agent does for this exact reason. See:
 https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
 
 Best,
 
 Aaron
 
 On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.com
 mailto:majop...@redhat.com wrote:
 
 Yuri, could you elaborate your idea in detail? , I'm lost at some
 points with your unix domain / token authentication.
 
 Where does the token come from?,
 
 Who starts rootwrap the first time?
 
 If you could write a full interaction sequence, on the etherpad, from
 rootwrap daemon start ,to a simple call to system happening, I think 
 that'd
 help my understanding.
 
 
 Here it is: https://etherpad.openstack.org/p/rootwrap-agent
 Please take a look.
 
 -- 
 
 Kind regards, Yuriy.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Michael Factor
Bruce,

Nice list of use cases; thank you for sharing.  One thought

Bruce Montague bruce_monta...@symantec.com wrote on 13/03/2014 04:34:59 
PM:


  * (2) [Core tenant/project infrastructure VMs]
 
  Twenty VMs power the core infrastructure of a group using a 
 private cloud (OpenStack in their own datacenter). Not all VMs run 
 Windows with VSS, some run Linux with some equivalent mechanism, 
 such as qemu-ga, driving fsfreeze and signal scripts. These VMs are 
 replicated to a remote OpenStack deployment, in a fashion similar to
 (1). Orchestration occurring at the remote site on failover is more 
 complex (correct VM boot order is orchestrated, DHCP service is 
 configured as expected, all IPs are made available and verified). An
 equivalent virtual network topology consisting of multiple networks 
 or subnets might be pre-created or dynamically created at failover time.
 
a. Storage for all volumes of all VMs might be on a single 
 storage backend (logically a single large volume containing many 
 smaller sub-volumes, examples being a VMware datastore or Hyper-V 
 CSV). This entire large volume might be replicated between similar 
 storage backends at the primary and secondary site. A single 
 replicated large volume thus replicates all the tenant VM's volumes.
 The DR system must trigger quiesce of all volumes to application-
 consistent state.

A variant of having logically a single volume on a single storage backend 
is having all the volumes allocated from storage that provides consistency 
groups.  This may also be related to cross-VM consistent 
backups/snapshots.  Of course, a question would be whether, and if so how, 
to surface this.

-- Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy types

2014-03-13 Thread Tim Hinrichs
Hi Prabhakar,

I'm not sure the functionality is split between 'policy' and 'server' as 
cleanly as you describe.

The 'policy' directory contains the Policy Engine.  At its core, the policy 
engine has a generic Datalog implementation that could feasibly be used by 
other OS components.  (I don't want to think about pulling it out into Oslo 
though.  There are just too many other things going on and no demand yet.)  But 
there are also Congress-specific things in that directory, e.g. the class 
Runtime in policy/runtime.py will be the one that we hook up external API calls 
to.

The 'server' directory contains the code for the API web server that calls into 
the Runtime class.

So if you're digging through code, I'd suggest focusing on the 'policy' 
directory and looking at compile.py (responsible for converting Datalog rules 
written as strings into an internal representation) and runtime.py (responsible 
for everything else).  The docs I mentioned in the IRC should have a decent 
explanation of the functions in Runtime that the API web server will hook into. 
 

Be warned though that unless someone raises some serious objections to the 
proposal that started this thread, we'll be removing some of the more 
complicated functions from Runtime.  The compile.py code won't change (much).  
All of the 3 new theories will be instances of MaterializedViewTheory.  That's 
also the code that must change to add in the Python functions we talked about 
(more specifically see MaterializedViewTheory::propagate_rule(), which calls 
TopDownTheory::top_down_evaluation(), which is what will need modification).

Tim
 



- Original Message -
| From: prabhakar Kudva nandava...@hotmail.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Wednesday, March 12, 2014 1:38:55 PM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| 
| 
| 
| Hi Tim,
| 
| Thanks for your comments.
| Would be happy to contribute to the propsal and code.
| 
| The existing code already reflects the thoughts below, and got me
| thinking along these lines. Please correct me if I am wrong, as I am
| learning with these discussions:
| 
| One part (reflected by the code in the policy directory) is the generic
| condition-action engine which could take logic primitives and
| (in the future) python functions, evaluate the conditions and
| execute the action. This portable core engine could be used for any kind of
| policy enforcement
| (as by other OS projects), such as for data center monitoring and
| repair,
| service level enforcement, compliance policies, optimization (energy,
| performance) etc... at any level of the stack. This core engine seems
| possibly
| a combination of logic reasoning/unification and python function
| evaluation, and python code actions.
| 
| Second part (reflected by code in server) are the applications
| for various purposes. These could be project specific, task specific.
| We could add a diverse set of examples. The example I have worked
| with seems closer to compliance (as in net owner, vm owner check),
| and we will add more.
| 
| Prabhakar
| 
| 
| 
| Date: Wed, 12 Mar 2014 12:33:35 -0700
| From: thinri...@vmware.com
| To: openstack-dev@lists.openstack.org
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| 
| 
| Hi Prabhakar,
| 
| 
| Thanks for the feedback. I'd be interested to hear what other policy
| types you have in mind.
| 
| 
| To answer your questions...
| 
| 
| We're planning on extending our policy language in such a way that
| you can use Python functions as conditions (atom in the grammar)
| in rules. That's on my todo-list but didn't mention it yesterday as
| we were short on time. There will be some syntactic restrictions so
| that we can properly execute those Python functions (i.e. we need to
| always be able to compute the inputs to the function). I had thought
| it was just an implementation detail I hadn't gotten around to (all
| Datalog implementations I've seen have such things), but it sounds
| like it's worth writing up a proposal and sending it around before
| implementing. If that's a pressing concern for you, let me know and
| I'll bump it up the stack (a little). If you'd like, feel free to
| draft a proposal (or remind me to do it once in a while).
| 
| 
| As for actions, I typically think of them as API calls to other OS
| components like Nova. But they could just as easily be Python
| functions. But I would want to avoid an action that changes
| Congress's internal data structures directly (e.g. adding a new
| policy statement). Such actions have caused trouble in the past for
| policy languages (though for declarative programming languages like
| Prolog they are less problematic). I don't think there's anyway we
| can stop people from creating such actions, but I think we should
| advocate against them.
| 
| 
| Tim
| 
| 
| 
| From: prabhakar Kudva nandava...@hotmail.com
| To: OpenStack Development Mailing List (not for usage 

[openstack-dev] [Horizon] Regarding bug/bp https://bugs.launchpad.net/horizon/+bug/1285298

2014-03-13 Thread Abishek Subramanian (absubram)
Hi all, Akihiro, David,

This is regarding the review for - https://review.openstack.org/#/c/76653/

Akihiro - Thanks for the review as always, and as I mentioned in the review
comment, I completely agree with you. This is a small featurette.

However, this is small in that it adds to a ChoiceField in an existing
forms.py an attribute that I had left out and which neutron supports.
So in addition, I also had to add some code to my clean routine and, yes,
update the string in the create description to include this new option.
I have more test code, really, than actual code.

It was small enough, and hence I made the request that this be treated as
a bug and not a bp. Only then did I proceed to open the bug.

I will respect what the community decides on this. Please let me know
how we wish to proceed.


Thanks and regards,
Abishek


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Monty Taylor

I agree.

Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
train of thought ...


As we started working on storyboard in infra, we've started using the 
bower tool for html/javascript packaging - and we have some ability to 
cache the output of that pretty easily. Would you accept patches to 
noVNC to add bower config things and/or publication of tarballs of 
releases via it? Since noVNC isn't likely to be participating in the 
integrated gate in either case, we could potentially split the question 
of "how do we get copies of it in a way that doesn't depend on OS 
distros" (which is why we use pip for our python depends) from "does 
noVNC want to have its git repo exist in OpenStack Infra systems".


Monty

On 03/13/2014 07:44 AM, Sean Dague wrote:

I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).

-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:

@bnemec: I don't think that's been considered.  I'm actually one of the 
upstream maintainers for noVNC.  The only concern that I'd have with OpenStack 
adopting noVNC (there are other maintainers, as well as the author, so I'd have 
to check with them as well) is that there are a few other projects that use 
noVNC, so we'd need to make sure that no OpenStack-specific code gets merged 
into noVNC if we adopt it.  Other than that, though, adopting noVNC doesn't 
sound like a horrible idea.

Best Regards,
Solly Ross

- Original Message -
From: Ben Nemec openst...@nemebean.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning
noVNC from github.com/kanaka



On 2014-03-11 20:34, Joshua Harlow wrote:


https://status.github.com/messages
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.'
If you were cloning from github.com and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action.
Unfortunately I don't think novnc is in git.openstack.org because it's not an 
OpenStack project. I wonder if we should investigate adopting it (if the 
author(s) are amenable to that) since we're using the git version of it. Maybe 
that's already been considered and I just don't know about it. :-)
-Ben



From: Sukhdev Kapur  sukhdevka...@gmail.com 
Reply-To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org 
Date: Tuesday, March 11, 2014 at 4:08 PM
To: Dane Leblanc (leblancd)  lebla...@cisco.com 
Cc: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org ,  openstack-in...@lists.openstack.org   
openstack-in...@lists.openstack.org 
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka



I have noticed that even clone of devstack has failed few times within last 
couple of hours - it was running fairly smooth so far.
-Sukhdev


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  sukhdevka...@gmail.com  wrote:



[adding openstack-dev list as well ]
I have noticed that this has stated hitting my builds within last few hours. I 
have noticed exact same failures on almost 10 builds.
Looks like something has happened within last few hours - perhaps the load?
-Sukhdev


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  lebla...@cisco.com  
wrote:





Apologies if this is the wrong audience for this question...



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
errors. Below are 2 examples.



Is this a known issue? Are there any localrc settings which might help here?



Example 1:



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc

2014-03-11 15:00:33.780 | + return 0

2014-03-11 15:00:33.781 | ++ trueorfalse False

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False

2014-03-11 15:00:33.783 | + '[' False = True ']'

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-13 Thread James Slagle
On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
robe...@robertcollins.net wrote:
 So we already have pretty high requirements - it's basically a 16G
 workstation as a minimum.

 Specifically to test the full story:
  - a seed VM
  - an undercloud VM (bm deploy infra)
  - 1 overcloud control VM
  - 2 overcloud hypervisor VMs
 
5 VMs with 2+G RAM each.

 To test the overcloud alone against the seed we save 1 VM, to skip the
 overcloud we save 3.

 However, as HA matures we're about to add 4 more VMs: we need a HA
 control plane for both the under and overclouds:
  - a seed VM
  - 3 undercloud VMs (HA bm deploy infra)
  - 3 overcloud control VMs (HA)
  - 2 overcloud hypervisor VMs
 
9 VMs with 2+G RAM each == 18GB

 What should we do about this?

 A few thoughts to kick start discussion:
  - use Ironic to test across multiple machines (involves tunnelling
 brbm across machines, fairly easy)
  - shrink the VM sizes (causes thrashing)
  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
  - make the default configuration inline the hypervisors on the
 overcloud with the control plane:
- a seed VM
- 3 undercloud VMs (HA bm deploy infra)
- 3 overcloud all-in-one VMs (HA)
   
  7 VMs with 2+G RAM each == 14GB


 I think its important that we exercise features like HA and live
 migration regularly by developers, so I'm quite keen to have a fairly
 solid systematic answer that will let us catch things like bad
 firewall rules on the control node preventing network tunnelling
 etc... e.g. we benefit the more things are split out like scale
 deployments are. OTOH testing the micro-cloud that folk may start with
 is also a really good idea


The idea I was thinking of was to make a testenv host available to
tripleo atc's. Or, perhaps make it a bit more locked down and only
available to a new group of tripleo folk, existing somewhere between
the privileges of tripleo atc's and tripleo-cd-admins.  We could
document how you use the cloud (Red Hat's or HP's) rack to start up an
instance to run devtest on one of the compute hosts, request and lock
yourself a testenv environment on one of the testenv hosts, etc.
Basically, how our CI works. Although I think we'd want different
testenv hosts for development vs what runs the CI, and would need to
make sure everything was locked down appropriately security-wise.

Some other ideas:

- Allow an option to get rid of the seed VM, or make it so that you
can shut it down after the Undercloud is up. This only really gets rid
of 1 VM though, so it doesn't buy you much nor solve any long term
problem.

- Make it easier to see how you'd use virsh against any libvirt host
you might have lying around.  We already have the setting exposed, but
make it a bit more public and call it out more in the docs. I've
actually never tried it myself, but have been meaning to.

- I'm really reaching now, and this may be entirely unrealistic :),
but somehow use the fake baremetal driver and expose a mechanism to
let the developer specify the already setup undercloud/overcloud
environment ahead of time.
For example:
* Build your undercloud images with the vm element since you won't be
PXE booting it
* Upload your images to a public cloud, and boot instances for them.
* Use this new mechanism when you run devtest (presumably running from
another instance in the same cloud) to say I'm using the fake
baremetal driver, and here are the IPs of the undercloud instances.
* Repeat steps for the overcloud (e.g., configure undercloud to use
fake baremetal driver, etc).
* Maybe it's not the fake baremetal driver, and instead a new driver
that is a noop for the pxe stuff, and the power_on implementation
powers on the cloud instances.
* Obviously if your aim is to test the pxe and disk deploy process
itself, this wouldn't work for you.
* Presumably said public cloud is OpenStack, so we've also achieved
another layer of On OpenStack.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Marco Fargetta
Thanks Donagh,

I will take a look at the container-to-container synchronization to understand 
if it fits my scenario.

Cheers,
Marco

On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
 Marco,
 
 The replication *inside* Swift is not intended to move data between two 
 different Swift instances -- it's an internal data repair and rebalance 
 mechanism.
 
 However, there is a different mechanism, called container-to-container 
 synchronization that might be what you are looking for. It will sync two 
 containers in different swift instances. The swift instances may be in 
 different Keystone administrative domains -- the authentication is not based 
 on Keystone. It does require that each swift instance be configured to 
 recognise each other. However, this is only usable for low update rates.
 
 Regards,
 Donagh
 
 -Original Message-
 From: Fargetta Marco [mailto:marco.farge...@ct.infn.it] 
 Sent: 13 March 2014 11:24
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [swift] Replication multi cloud
 
 Hi all,
 
 we would use the replication mechanism in swift to replicate the data in two 
 swift instances deployed in different clouds with different keystones and 
 administrative domains.
 
 Is this possible with the current replication facilities or they should stay 
 in the same cloud sharing the keystone?
 
 Cheers,
 Marco
 
 
 
 --
 
 Eng. Marco Fargetta, PhD
 
 Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
 
 EMail: marco.farge...@ct.infn.it
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Dan Smith
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

 Because of where we are in the freeze, I think this should wait 
 until Juno opens to fix. Icehouse will only be compatible with
 SQLA 0.8, which I think is fine. I expect the rest of the issues
 can be addressed during Juno 1.

Agreed. I think we have some other things to check before we make this
move, like how we currently check to see if something is loaded in a
SQLA object. ISTR it changed between 0.8 and 0.9 and so likely tests
would not fail, but we'd lazy load a bunch of stuff that we didn't
intend to.
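
(The kind of check I mean is roughly the following -- a sketch, not the exact
code we have today:)

from sqlalchemy import inspect

def attr_is_loaded(obj, name):
    # True if the attribute/relationship is already populated, so
    # touching it will not silently trigger a lazy load
    return name not in inspect(obj).unloaded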

Even without that, I think it's really way too late to make such a switch.

- --Dan
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTIeATAAoJEBeZxaMESjNVm8QH/0kjEjXYTHuj3jmuiL0P8ccy
KVMaXTL3NmIhaNm1UD/OcWIebgkOKk1BjYAloSRRewulvt0XcK5yr272FLhwuLqr
IJBtF15/4pG1b9B8Ol/sOlgAUzcgQ68pu8jIHRd7S5cxjWlEuCP7y2H3pUG38rfq
lqUZhrltMpBbcZ0/ewG1BlIgfCWjuv6c/U+S8K2D4zcKkfuOG2hfzPk4ZEy99+wh
UYiLfaW+dvku8rN6Lll+6S8VfKM1V+I9hFpKs2exxbX65KJinNgymHxLAj2iQD6Y
Ubpk8LO2DElpUm2gULgUqKh0kddmXL7Cuqa2/B5Bm3BAa89CAUVny4ASAWk868c=
=Qet4
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Weekly meeting summary

2014-03-13 Thread Ilya Sviridov
Hello openstackers,

You can find MagnetoDB team weekly meeting notes below

Meeting summary

   1. *General project status overview* (isviridov, 13:02:15)
   2. *MagnetoDB API Draft status* (isviridov, 13:08:37)
      1. https://wiki.openstack.org/wiki/MagnetoDB/api (isviridov, 13:09:28)
      2. ACTION: achudnovets start ML thread with API discussion (isviridov, 13:13:19)
      3. https://launchpad.net/magnetodb/+milestone/2.0.1 (isviridov, 13:14:24)
   3. *Third party CI status* (isviridov, 13:14:41)
      1. https://blueprints.launchpad.net/magnetodb/+spec/third-party-ci (isviridov, 13:16:39)
      2. ACTION: achuprin discuss with infra the best way for our CI (isviridov, 13:27:36)
      3. ACTION: achuprin create wiki page with CI description (isviridov, 13:28:01)
   4. *Support of other database backends except Cassandra. Support of HBase* (isviridov, 13:29:24)
      1. ACTION: isviridov ikhudoshyn start mail thread about evaluation of other databases as backends for MagnetoDB (isviridov, 13:38:16)
   5. *Devstack integration status* (isviridov, 13:38:35)
      1. https://blueprints.launchpad.net/magnetodb/+spec/devstack-integration (isviridov, 13:39:07)
      2. https://github.com/pcmanus/ccm (vnaboichenko, 13:40:13)
      3. ACTION: vnaboichenko devstack integration guide in OpenStack wiki (isviridov, 13:42:15)
   6. *Weekly meeting time slot* (isviridov, 13:42:33)
      1. ACTION: isviridov find better time slot for meeting (isviridov, 13:44:47)
      2. ACTION: isviridov start ML voting on meeting time (isviridov, 13:45:05)
   7. *Open discussion* (isviridov, 13:45:31)


For more details, please follow the links

Minutes:
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.txt
Log:
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.log.html


Have a nice day,
Ilya Sviridov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Can I use a new plugin based on Ml2Plugin instead of Ml2Plugin as core_plugin

2014-03-13 Thread Nader Lahouti
-- edited the subject

I'm resending this question.
The issue is described in the email thread below. In brief, I need to load
new extensions and it seems the mechanism driver does not support that. In
order to do that I was thinking of having a new ml2 plugin based on the existing
Ml2Plugin, adding my changes there, and having it as core_plugin.
Please read the email thread; I'd be glad to have your suggestions.
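
Roughly what I have in mind is something like this (an illustrative sketch
only; the extension alias 'my-ext' is a placeholder):

from neutron.plugins.ml2 import plugin as ml2_plugin

class ExtendedMl2Plugin(ml2_plugin.Ml2Plugin):
    # advertise an extra extension alias on top of what ML2 already supports,
    # then point core_plugin in neutron.conf at this class
    _supported_extension_aliases = (
        ml2_plugin.Ml2Plugin._supported_extension_aliases + ['my-ext'])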


On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.comwrote:

 1) Does it mean an interim solution is to have our own plugin (and have
 all the changes in it) and declare it as core_plugin instead of Ml2Plugin?

 2) The other issue, as I mentioned before, is that the extension(s) are not
 showing up in the result, for instance when create_network is called
 [result = super(Ml2Plugin, self).create_network(context, network)], and
 as a result they cannot be used in the mechanism drivers when needed.

 Looks like process_extensions was disabled when the fix for Bug 1201957
 was committed; here is the change below.
 Any idea why it was disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando)

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                  'status': constants.NET_STATUS_ACTIVE}
          network = models_v2.Network(**args)
          context.session.add(network)
 -        return self._make_network_dict(network)
 +        return self._make_network_dict(network, process_extensions=False)

      def update_network(self, context, id, network):
          n = network['network']

 ---


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.comwrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with the ML2 plugin is the list of
 extensions supported by default [1].
 The extensions should only be loaded by MDs and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no single
 MD can really control what set of extensions are active. Drivers need to be
 able to load private extensions that only pertain to that driver, but we
 also need to be able to share common extensions across subsets of drivers.
 Furthermore, the semantics of the extensions need to be correct in the face
 of multiple co-existing drivers, some of which know about the extension,
 and some of which don't. Getting this properly defined and implemented
 seems like a good goal for juno.

 -Bob



  Any though ?
 Édouard.

  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugine and I talked about a related topic (allowing drivers to load
 extensions) at the Icehouse Summit,
 but I could not find enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint
 on it.

 The etherpad from the Icehouse summit has the baseline thoughts on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi Kyle,
 
  Just wanted to clarify: Should I continue using this mailing list to
 post my
  question/concerns about ML2? Please advise.
 
  Thanks,
  Nader.
 
 
 
  On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  Thanks Edgar, I think this is the appropriate place to continue this
  discussion.
 
 
  On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana emag...@plumgrid.com
 wrote:
 
  Nader,
 
  I would encourage you to first discuss the possible extension with
 the
  ML2 team. Rober and Kyle are leading this effort and they have a IRC
 meeting
  every week:
 
 https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
 
  Bring your concerns on this meeting and get the right feedback.
 
  Thanks,
 
  Edgar
 
  From: Nader Lahouti nader.laho...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Thursday, March 6, 2014 12:14 PM
  To: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron][ML2]
 
  Hi Aaron,
 
  I appreciate your reply.
 
  Here is some more details on what I'm 

[openstack-dev] [MagnetoDB] MagnetoDB API draft

2014-03-13 Thread Aleksandr Chudnovets
Hi all,

Here is the draft for MagnetoDB API:
https://wiki.openstack.org/wiki/MagnetoDB/api

Your comments and proposals are welcome, and you are welcome to discuss this
draft and any other key-value-as-a-service related subjects in our IRC channel:
#magnetodb. Please note, the MagnetoDB team is mostly in UTC+2.

Best regards,
Alexander Chudnovets
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Solly Ross
@Monty: having a packaging system sounds like a good idea.  Send us a pull 
request on github.com/kanaka/noVNC.

Best Regards,
Solly Ross

- Original Message -
From: Monty Taylor mord...@inaugust.com
To: Sean Dague s...@dague.net, OpenStack Development Mailing List (not for 
usage questions) openstack-dev@lists.openstack.org, openst...@nemebean.com
Cc: openstack-in...@lists.openstack.org
Sent: Thursday, March 13, 2014 12:09:01 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka

I agree.

Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
train of thought ...

As we started working on storyboard in infra, we've started using the 
bower tool for html/javascript packaging - and we have some ability to 
cache the output of that pretty easily. Would you accept patches to 
noVNC to add bower config things and/or publication of tarballs of 
releases via it? Since noVNC isn't likely to be participating in the 
integrated gate in either case, we could potentially split the question 
of how do we get copies of it in a way that doesn't depend on OS 
distros (which is why we use pip for our python depends) and does 
noVNC want to have its git repo exist in OpenStack Infra systems.

Monty

On 03/13/2014 07:44 AM, Sean Dague wrote:
 I think a bigger question is why are we using a git version of something
 outside of OpenStack.

 Where is a noVNC release we can point to and use?

 In Juno I'd really be pro removing all the devstack references to git
 repos not on git.openstack.org, because these kinds of failures have
 real impact.

 Currently we have 4 repositories that fit this bill:

 SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
 NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
 RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
 SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

 I think all of these probably need to be removed from devstack. We
 should be using release versions (preferably in distros, though allowed
 to be in language specific package manager).

   -Sean

 On 03/13/2014 10:26 AM, Solly Ross wrote:
 @bnemec: I don't think that's been considered.  I'm actually one of the 
 upstream maintainers for noVNC.  The only concern that I'd have with 
 OpenStack adopting noVNC (there are other maintainers, as well as the 
 author, so I'd have to check with them as well) is that there are a few 
 other projects that use noVNC, so we'd need to make sure that no 
 OpenStack-specific code gets merged into noVNC if we adopt it.  Other than 
 that, though, adopting noVNC doesn't sound like a horrible idea.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Ben Nemec openst...@nemebean.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: openstack-in...@lists.openstack.org
 Sent: Wednesday, March 12, 2014 3:38:19 PM
 Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
 noVNC from github.com/kanaka



 On 2014-03-11 20:34, Joshua Harlow wrote:


 https://status.github.com/messages
 * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
 mitigations we have in place are proving effective in protecting us and 
 we're hopeful that we've got this one resolved.'
 If you were cloning from github.com and not http://git.openstack.org then 
 you were likely seeing some of the DDoS attack in action.
 Unfortunately I don't think novnc is in git.openstack.org because it's not 
 an OpenStack project. I wonder if we should investigate adopting it (if the 
 author(s) are amenable to that) since we're using the git version of it. 
 Maybe that's already been considered and I just don't know about it. :-)
 -Ben



 From: Sukhdev Kapur  sukhdevka...@gmail.com 
 Reply-To: OpenStack Development Mailing List (not for usage questions)  
 openstack-dev@lists.openstack.org 
 Date: Tuesday, March 11, 2014 at 4:08 PM
 To: Dane Leblanc (leblancd)  lebla...@cisco.com 
 Cc: OpenStack Development Mailing List (not for usage questions)  
 openstack-dev@lists.openstack.org ,  openstack-in...@lists.openstack.org  
  openstack-in...@lists.openstack.org 
 Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
 noVNC from github.com/kanaka



 I have noticed that even clone of devstack has failed few times within last 
 couple of hours - it was running fairly smooth so far.
 -Sukhdev


 On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  sukhdevka...@gmail.com  
 wrote:



 [adding openstack-dev list as well ]
 I have noticed that this has stated hitting my builds within last few hours. 
 I have noticed exact same failures on almost 10 builds.
 Looks like something has happened within last few hours - perhaps the load?
 -Sukhdev


 On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  
 lebla...@cisco.com  wrote:





 Apologies if this is the wrong audience 

Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Chmouel Boudjnah
You may be interested in this project as well:

https://github.com/stackforge/swiftsync

You would need to replicate your keystone in both directions via MySQL replication
or something like that (and have the endpoint URLs changed there as well,
obviously).

Chmouel



On Thu, Mar 13, 2014 at 5:25 PM, Marco Fargetta
marco.farge...@ct.infn.itwrote:

 Thanks Donagh,

 I will take a look to the ontainer-to-container synchronization to
 understand if it fits with my scenario.

 Cheers,
 Marco

 On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
  Marco,
 
  The replication *inside* Swift is not intended to move data between two
 different Swift instances -- it's an internal data repair and rebalance
 mechanism.
 
  However, there is a different mechanism, called container-to-container
 synchronization that might be what you are looking for. It will sync two
 containers in different swift instances. The swift instances may be in
 different Keystone administrative domains -- the authentication is not
 based on Keystone. It does require that each swift instance be configured
 to recognise each other. However, this is only usable for low update
 rates.
 
  Regards,
  Donagh
 
  -Original Message-
  From: Fargetta Marco [mailto:marco.farge...@ct.infn.it]
  Sent: 13 March 2014 11:24
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [swift] Replication multi cloud
 
  Hi all,
 
  we would use the replication mechanism in swift to replicate the data in
 two swift instances deployed in different clouds with different keystones
 and administrative domains.
 
  Is this possible with the current replication facilities or they should
 stay in the same cloud sharing the keystone?
 
  Cheers,
  Marco
 
 
 
  --
  
  Eng. Marco Fargetta, PhD
 
  Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
 
  EMail: marco.farge...@ct.infn.it
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 
 Eng. Marco Fargetta, PhD

 Istituto Nazionale di Fisica Nucleare (INFN)
 Catania, Italy

 EMail: marco.farge...@ct.infn.it
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Sean Dague
On 03/13/2014 12:42 PM, Dan Smith wrote:
 Because of where we are in the freeze, I think this should wait
 until Juno opens to fix. Icehouse will only be compatible with
 SQLA 0.8, which I think is fine. I expect the rest of the issues
 can be addressed during Juno 1.
 
 Agreed. I think we have some other things to check before we make this
 move, like how we currently check to see if something is loaded in a
 SQLA object. ISTR it changed between 0.8 and 0.9 and so likely tests
 would not fail, but we'd lazy load a bunch of stuff that we didn't
 intend to.
 
 Even without that, I think it's really way too late to make such a switch.
 
 --Dan

Yeh, the initial look at Tempest failures wasn't terrible once I fixed a
ceilometer issue. However something is definitely different on delete
semantics, enough to make us fail a bunch of Nova Tempest tests.

That seems dangerous to address during freeze.

I consider this something which should be dealt with in Juno 1 though,
as I'm very interested in whether the new optimizer in sqla 0.9 helps us
on performance.
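
Regarding Dan's point above about how we check whether something is loaded in
a SQLA object: for reference, one way to test load state without triggering a
lazy load, which I believe behaves the same on 0.8 and 0.9 (untested sketch,
not current Nova code):

from sqlalchemy import inspect


def attr_is_loaded(obj, attr_name):
    # inspect() returns the InstanceState; .unloaded is the set of
    # attribute names that have not been loaded (or were expired).
    return attr_name not in inspect(obj).unloaded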

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

2014-03-13 Thread yatin kumbhare
Hello Folks,

When zeromq is used as the rpc backend, the nova-rpc-zmq-receiver service
needs to run on every node.

zmq-receiver receives messages on tcp://*:9501 with socket type PULL and,
based on the topic name (which is extracted from the received data), forwards
the data to the respective local services over IPC.

Meanwhile, the OpenStack services listen/bind on an IPC socket with socket
type PULL.

I see zmq-receiver as a bottleneck and an overhead in the current design:
1. If this service crashes, communication is lost.
2. There is the overhead of running this extra service on every node, which
just forwards messages as-is.


I'm looking to remove the zmq-receiver service and enable direct
communication (nova-* and cinder-*) across and within nodes.

I believe this will make the zmq experience more seamless.

The communication will change from IPC to a zmq TCP socket for each
service.

For example, an rpc.cast from scheduler to compute would be direct RPC
message passing, with no routing through zmq-receiver.

With TCP, each service will bind to a unique port (the port range could be,
say, 9501-9510).

In nova.conf, rpc_zmq_matchmaker =
nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing.

I have put arbitrary port numbers after each service name.

file:///etc/oslo/matchmaker_ring.json

{
    "cert:9507": [
        "controller"
    ],
    "cinder-scheduler:9508": [
        "controller"
    ],
    "cinder-volume:9509": [
        "controller"
    ],
    "compute:9501": [
        "controller", "computenodex"
    ],
    "conductor:9502": [
        "controller"
    ],
    "consoleauth:9503": [
        "controller"
    ],
    "network:9504": [
        "controller"
    ],
    "scheduler:9506": [
        "controller"
    ],
    "zmq_replies:9510": [
        "controller", "computenodex"
    ]
}

Here, the JSON file keeps track of the port for each service.
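
To make the proposal a bit more concrete, here is a toy sketch (pyzmq only,
not oslo.messaging code) of what per-service TCP sockets would look like,
using the port map above:

import json
import zmq

# Port map mirroring matchmaker_ring.json above (illustrative subset).
PORTS = {'compute': 9501, 'conductor': 9502, 'scheduler': 9506}


def bind_service(service, host='*'):
    """Each service binds its own TCP PULL socket - no local zmq-receiver."""
    sock = zmq.Context.instance().socket(zmq.PULL)
    sock.bind('tcp://%s:%d' % (host, PORTS[service]))
    return sock


def cast(target_host, service, msg):
    """Direct PUSH to the target service on its well-known port."""
    sock = zmq.Context.instance().socket(zmq.PUSH)
    sock.connect('tcp://%s:%d' % (target_host, PORTS[service]))
    sock.send(json.dumps(msg).encode('utf-8'))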

Looking forward to community feedback on this idea.


Regards,
Yatin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Joe Gordon
On Thu, Mar 13, 2014 at 7:50 AM, Joe Hakim Rahme 
joe.hakim.ra...@enovance.com wrote:

 On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:

  There are a number of patches up for review that make various changes to
 use six apis instead of Python 2 constructs. While I understand the
 desire to get a head start on getting Tempest to run in Python 3, I'm not
 sure it makes sense to do this work piecemeal until we are near ready to
 introduce a py3 gate job. Many contributors will not be aware of what all
 the differences are and py2-isms will creep back in resulting in more
 overall time spent making these changes and reviewing. Also, the core
 review team is busy trying to do stuff important to the icehouse release
 which is barely more than 5 weeks away. IMO we should hold off on various
 kinds of cleanup patches for now.

 +1 I agree with you David.

 However, what's the best way to go about making sure this becomes a goal
 for the next release cycle?


On a related note, we have been -2ing these patches in nova until there is
a plan to get all the dependencies python3 compatible.
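
For context, the kind of mechanical change under discussion looks roughly
like this (illustrative only):

import six


def normalize(mapping):
    # Python 2 only: mapping.iteritems(), unicode(), basestring
    # Portable:      six.iteritems(), six.text_type, six.string_types
    result = {}
    for key, value in six.iteritems(mapping):
        if isinstance(value, six.string_types):
            value = value.strip()
        result[six.text_type(key)] = value
    return result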



 ---
 Joe H. Rahme
 IRC: rahmu


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Marco Fargetta
Hi Chmouel,

Using this approach, would I need to have the same users in both keystones?

Is there any way to map user A from cloud X to user B in cloud Y?

Our clouds have different users, and replicating the keystone could pose
some problems, not only technical ones.

Cheers,
Marco

On Thu, Mar 13, 2014 at 06:19:29PM +0100, Chmouel Boudjnah wrote:
 You may be interested by this project as well :
 
 https://github.com/stackforge/swiftsync
 
 you would need to replicate your keystone both ways via MySQL replication
 or something like that (and have the endpoint URLs changed as well,
 obviously).
 
 Chmouel
 
 
 
 On Thu, Mar 13, 2014 at 5:25 PM, Marco Fargetta
 marco.farge...@ct.infn.it wrote:
 
  Thanks Donagh,
 
  I will take a look at the container-to-container synchronization to
  understand if it fits with my scenario.
 
  Cheers,
  Marco
 
  On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
   Marco,
  
   The replication *inside* Swift is not intended to move data between two
  different Swift instances -- it's an internal data repair and rebalance
  mechanism.
  
   However, there is a different mechanism, called container-to-container
  synchronization that might be what you are looking for. It will sync two
  containers in different swift instances. The swift instances may be in
  different Keystone administrative domains -- the authentication is not
  based on Keystone. It does require that each swift instance be configured
  to recognise each other. However, this is only usable for low update
  rates.
  
   Regards,
   Donagh
  
   -Original Message-
   From: Fargetta Marco [mailto:marco.farge...@ct.infn.it]
   Sent: 13 March 2014 11:24
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] [swift] Replication multi cloud
  
   Hi all,
  
   we would use the replication mechanism in swift to replicate the data in
  two swift instances deployed in different clouds with different keystones
  and administrative domains.
  
   Is this possible with the current replication facilities or they should
  stay in the same cloud sharing the keystone?
  
   Cheers,
   Marco
  
  
  
   --
   
   Eng. Marco Fargetta, PhD
  
   Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
  
   EMail: marco.farge...@ct.infn.it
   
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  
  Eng. Marco Fargetta, PhD
 
  Istituto Nazionale di Fisica Nucleare (INFN)
  Catania, Italy
 
  EMail: marco.farge...@ct.infn.it
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Mark Washenberger
Hi Anna,


On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland annas...@us.ibm.com wrote:

 [A] The current keystone LDAP community driver returns all users that
 exist in LDAP via the API call v3/users, instead of returning just users
 that have role grants (similar processing is true for groups). This could
 potentially be a very large number of users. We have seen large companies
 with LDAP servers containing hundreds and thousands of users. We are aware
 of the filters available in keystone.conf ([ldap].user_filter and
 [ldap].query_scope) to cut down on the number of results, but they do not
 provide sufficient filtering (for example, it is not possible to set
 user_filter to members of certain known groups for OpenLDAP without
 creating a memberOf overlay on the LDAP server).

 [Nathan Kinder] What attributes would you filter on?  It seems to me that
 LDAP would need to have knowledge of the roles to be able to filter based
 on the roles.  This is not necessarily the case, as identity and assignment
 can be split in Keystone such that identity is in LDAP and role assignment
 is in SQL.  I believe it was designed this way to deal with deployments
 where LDAP already exists and there is no need (or possibility) of adding
 role info into LDAP.

 [A] That's our main use case. The users and groups are in LDAP and role
 assignments are in SQL.
 You would filter on role grants and this information is in SQL backend. So
 new API would need to query both identity and assignment drivers.


From my perspective, it seems there is a chicken-and-egg problem with this
proposal. If a user doesn't have a role assigned, the user does not show up
in the list. But if the user doesn't show up in the list, the user doesn't
exist. If the user doesn't exist, you cannot add a role to it.

Perhaps what is needed is just some sort of filter to listing users that
only returns users with a role in the cloud?




 [Nathan Kinder] Without filtering based on a role attribute in LDAP, I
 don't think that there is a good solution if you have OpenStack and
 non-OpenStack users mixed in the same container in LDAP.
 If you want to first find all of the users that have a role assigned to
 them in the assignments backend, then pull their information from LDAP, I
 think that you will end up with one LDAP search operation per user. This
 also isn't a very scalable solution.

 [A] What was the reason the LDAP driver was written this way, instead of
 returning just the users that have OpenStack-known roles? Was the creation
 of a separate API for this function considered?
 Are other exploiters of OpenStack (or users of Horizon) experiencing this
 issue? If so, what was their approach to overcome this issue? We have been
 prototyping a keystone extension that provides an API that provides this
 filtering capability, but it seems like a function that should be generally
 available in keystone.

 [Nathan Kinder] I'm curious to know how your prototype is looking to
 handle this.

 [A] The prototype basically first calls assignment API
 list_role_assignments() to get a list of users and groups with role grants.
 It then iterates the retrieved list and calls identity API
 list_users_in_group() to get the list of users in these groups with grants
 and get_user() to get users that have role grants but do not belong to the
 groups with role grants (a call for each user). Both calls ignore groups
 and users that are not found in the LDAP registry but exist in SQL (this
 could be the result of a user or group being removed from LDAP, but the
 corresponding role grant was not revoked). Then the code removes duplicates
 if any and returns the combined list.
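
 In rough pseudo-code the prototype does something like the following (driver
 method and exception names follow the Keystone identity/assignment APIs;
 error handling is simplified):

from keystone import exception


def list_users_with_role_grants(identity_api, assignment_api):
    users = {}
    for assignment in assignment_api.list_role_assignments():
        if assignment.get('group_id'):
            try:
                for user in identity_api.list_users_in_group(
                        assignment['group_id']):
                    users[user['id']] = user
            except exception.GroupNotFound:
                # group has a grant in SQL but was removed from LDAP
                continue
        elif assignment.get('user_id'):
            try:
                user = identity_api.get_user(assignment['user_id'])
                users[user['id']] = user
            except exception.UserNotFound:
                continue
    # duplicates are removed by keying on user id
    return list(users.values())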

 The new extension API is /v3/my_new_extension/users. Maybe the better
 naming would be v3/roles/users (list users with any role) - compare to
 existing v3/roles/{role_id}/users  (list users with a specified role).

 Another alternative that we've tried is just a new identity driver that
 inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides
 just the list_users() function. That's probably not the best approach from
 OpenStack standards point of view but I would like to get community's
 feedback on whether this is acceptable.


 I've posted this question to openstack-security last week but could not
 get any feedback after Nathan's first reply. Reposting to openstack-dev..



 Anna Sortland
 Cloud Systems Software Development
 IBM Rochester, MN
 annas...@us.ibm.com



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Sean Dague
On 03/13/2014 12:31 PM, Thomas Goirand wrote:
 On 03/12/2014 07:07 PM, Sean Dague wrote:
 Because of where we are in the freeze, I think this should wait until
 Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
 I think is fine. I expect the rest of the issues can be addressed during
 Juno 1.

  -Sean
 
 Sean,
 
 No, it's not fine for me. I'd like things to be fixed so we can move
 forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
 will be released with SQLA 0.9 and Icehouse, not Juno.

We're past freeze, and this requires deep changes in Nova DB to work. So
it's not going to happen. Nova provably does not work with SQLA 0.9, as
seen in Tempest tests.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo incubator deprecation policy tweak

2014-03-13 Thread Doug Hellmann
The Oslo team is working hard to move code from the incubator into
libraries, and that work will speed up during Juno. As part of the
planning, we have been developing our deprecation policy for code in the
oslo-incubator repository. We recognize that it may take some projects
longer than others to adopt the new libraries, but we need to balance the
need for long-term support with the amount of effort it requires to
maintain multiple copies of the code.

We have, during icehouse, been treating the master branch of oslo-incubator
as the stable branch for oslo.messaging. In practice, that has meant
refusing new features in the incubator copy of the rpc code and requiring
bug fixes to land in oslo.messaging first. This policy is described in the
wiki (https://wiki.openstack.org/wiki/Oslo#Graduation):

After the first release of the new library, the status of the module(s)
should be updated to Obsolete. During this phase, only critical bug fixes
will be allowed in the incubator version of the code. New features and
minor bugs should be fixed in the released library, and effort should be
spent focusing on having downstream projects consume the library.

After all integrated projects that use the code are using the library
instead of the incubator, the module(s) can be deleted from the incubator.

We would like to clarify the first part, and add a time limit to the second
part:

After the first release of the new library, the status of the module(s)
should be updated to Obsolete. During this phase, only critical bug fixes
will be allowed in the incubator version of the code. All changes should be
proposed first to the new library repository, and then bug fixes can be
back-ported to the incubator. New features and minor bugs should be fixed
in the released library only, and effort should be spent focusing on having
downstream projects consume the library.

The incubator version of the code will be supported with critical bug
fixes for one full release cycle after the library graduates, and then be
deleted. If all integrated projects using the module(s) update to use the
library before this time period, the module(s) may be deleted early. Old
versions will be maintained in the stable branches of the incubator under
the usual long-term deprecation policy.

I will update the wiki, but I also wanted to announce the change here on
the list so everyone is aware.


Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Fox, Kevin M
Hi Chris,

That's great to hear. I'm looking forward to installing icehouse and testing 
that out. :)

Thanks,
Kevin


From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Hi Kevin,

The design of OS::Heat::AutoScalingGroup should not require explicit support 
for load balancers. The design is meant to allow you to create a resource that 
wraps up both a OS::Heat::Server and a PoolMember in a template and use it via 
a Stack resource.

(Note that Mike was talking about the new OS::Heat::AutoScalingGroup resource, 
not AWS::AutoScaling::AutoScalingGroup).

So, while I haven’t tested this case with PoolMember specifically, and there 
may still be bugs, no more feature implementation should be necessary (I hope).

--
Christopher Armstrong
IRC: radix



On March 12, 2014 at 1:52:53 PM, Fox, Kevin M 
(kevin@pnnl.govmailto:kevin@pnnl.gov) wrote:

I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code changes 
as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing me 
from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested 
stack that includes a OS::Neutron::PoolMember?  Should I expect this to work?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread W Chan
On the transport variable, the problem I see isn't with passing the
variable to the engine and executor.  It's passing the transport into the
API layer.  The API layer is a pecan app and I currently don't see a way
where the transport variable can be passed to it directly.  I'm looking at
https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
 Do you have any suggestion?  Thanks.
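
For what it's worth, here is a rough sketch of the dependency-injection idea
(the setup_app() signature below is hypothetical - the real mistral.api.app
and engine/executor factories would need a small change to accept the
transport):

from oslo.config import cfg
from oslo import messaging


def setup_app(transport):
    """Stand-in for a pecan app factory that takes the transport explicitly
    (hypothetical signature; shown here as a trivial WSGI app)."""
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']
    app.transport = transport
    return app


def launch(argv=()):
    cfg.CONF(list(argv), project='mistral')
    # Create the transport exactly once, in the launcher, and hand it to
    # every component instead of relying on a module-level global.
    transport = messaging.get_transport(cfg.CONF)
    wsgi_app = setup_app(transport)
    # The engine and executor would be constructed here the same way,
    # receiving the same transport object.
    return wsgi_app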


On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:


- I can write a method in base test to start local executor.  I will
do that as a separate bp.

 Ok.


- After the engine is made standalone, the API will communicate to the
engine and the engine to the executor via the oslo.messaging transport.
 This means that for the local option, we need to start all three
components (API, engine, and executor) on the same process.  If the long
term goal as you stated above is to use separate launchers for these
components, this means that the API launcher needs to duplicate all the
logic to launch the engine and the executor. Hence, my proposal here is to
move the logic to launch the components into a common module and either
have a single generic launch script that launch specific components based
on the CLI options or have separate launch scripts that reference the
appropriate launch function from the common module.

 Ok, I see your point. Then I would suggest we have one script which we
 could use to run all the components (any subset of them). So for those
 components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.


- The RPC client/server in oslo.messaging do not determine the
transport.  The transport is determine via oslo.config and then given
explicitly to the RPC client/server.

 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and

 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
  examples for the client and server respectively.  The in process Queue
is instantiated within this transport object from the fake driver.  For the
local option, all three components need to share the same transport in
order to have the Queue in scope. Thus, we will need some method to have
this transport object visible to all three components and hence my proposal
to use a global variable and a factory method.

 I'm still not sure I follow your point here.. Looking at the links you
 provided I see this:

 transport = messaging.get_transport(cfg.CONF)

 So my point here is we can make this call once in the launching script and
 pass it to engine/executor (and now API too if we want it to be launched by
 the same script). Of course, we'll have to change the way we initialize
 these components, but I believe we can do it. So it's just a dependency
 injection. And in this case we wouldn't need to use a global variable. Am I
 still missing something?


 Renat Akhmerov
 @ Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Mike Spreitzer
Therve told me he actually tested this and it works.  Now if I could only 
configure DevStack to install a working Neutron...

Regards,
Mike



From:   Fox, Kevin M kevin@pnnl.gov
To: Chris Armstrong chris.armstr...@rackspace.com, OpenStack 
Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/13/2014 02:19 PM
Subject:Re: [openstack-dev] [heat][neutron] 
OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?



Hi Chris,

That's great to hear. I'm looking forward to installing icehouse and 
testing that out. :)

Thanks,
Kevin


From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup 
and OS::Neutron::PoolMember?

Hi Kevin,

The design of OS::Heat::AutoScalingGroup should not require explicit 
support for load balancers. The design is meant to allow you to create a 
resource that wraps up both a OS::Heat::Server and a PoolMember in a 
template and use it via a Stack resource.

(Note that Mike was talking about the new OS::Heat::AutoScalingGroup 
resource, not AWS::AutoScaling::AutoScalingGroup).

So, while I haven’t tested this case with PoolMember specifically, and 
there may still be bugs, no more feature implementation should be 
necessary (I hope).

-- 
Christopher Armstrong
IRC: radix


On March 12, 2014 at 1:52:53 PM, Fox, Kevin M (kevin@pnnl.gov) wrote:
I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code 
changes as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing 
me from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a 
nested stack that includes a OS::Neutron::PoolMember?  Should I expect 
this to work?

Thanks,
Mike
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Fox, Kevin M
Funny this topic came up. I was just looking into some of this yesterday. 
Here's some links that I came up with:

*  
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-qemu-ga-freeze-thaw.html
 - Describes how application level safe backups of vm's can be accomplished. 
Didn't have the proper framework prior to RedHat 6.5. Looks reasonable now.

* http://lists.gnu.org/archive/html/qemu-devel/2012-11/msg01043.html - An 
example of a hook that lets you snapshot mysql safely while it is still running.

* https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent - A 
blueprint for making safe live snapshots enabled via the Cinder api. Its not 
there yet, but being worked on.

 * https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support - Nova 
supports freeze/thawing the instance.
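
To Bruce's question below about whether this can be driven through libvirt:
the guest agent can be reached via the qemu bindings, roughly like this
(sketch; assumes libvirt-python with the libvirt_qemu module and a
guest-agent channel configured for the domain - newer libvirt also grew
dedicated fsFreeze/fsThaw calls):

import json

import libvirt
import libvirt_qemu


def quiesce(domain_name, freeze=True, uri='qemu:///system'):
    """Ask qemu-ga inside the guest to freeze (or thaw) its filesystems."""
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        cmd = 'guest-fsfreeze-freeze' if freeze else 'guest-fsfreeze-thaw'
        reply = libvirt_qemu.qemuAgentCommand(
            dom, json.dumps({'execute': cmd}), 10, 0)  # 10 second timeout
        return json.loads(reply)
    finally:
        conn.close()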

Thanks,
Kevin

From: Bruce Montague [bruce_monta...@symantec.com]
Sent: Thursday, March 13, 2014 7:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems, was that done?  If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different than qemu/KVM's, is this true? 
Can it also be accessed through libvirt?)

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important for enterprise requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server) with e.g. vSphere and XenServer supporting it as well 
(quiescing) and with the option for third party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


 On 12/mar/2014, at 20:45, Bruce Montague bruce_monta...@symantec.com 
 wrote:


 Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
 the following list sketches some speculative OpenStack DR use cases. These 
 use cases do not reflect any specific product behavior and span a wide 
 spectrum. This list is not a proposal, it is intended primarily to solicit 
 additional discussion. The first basic use case, (1), is described in a bit 
 more detail than the others; many of the others are elaborations on this 
 basic theme.



 * (1) [Single VM]

 A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
 Services) installed runs a key application and integral database. VSS can 
 quiesce the app, database, filesystem, and I/O on demand and can be invoked 
 external to the guest.

   a. The VM's volumes, including the boot volume, are replicated to a remote 
 DR site (another OpenStack deployment).

   b. Some form of replicated VM or VM metadata exists at the remote site. 
 This VM/description includes the replicated volumes. Some systems might use 
 cold migration or some form of wide-area live VM migration to establish this 
 remote site VM/description.

   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
 volumes in an application-consistent state. This state is flushed all the way 
 through to the remote volumes. As each remote volume reaches its 
 application-consistent state, this is recognized in some fashion, perhaps by 
 an in-band signal, and a snapshot of the volume is made at the remote site. 
 Volume replication is re-enabled immediately following the snapshot. A backup 
 is then made of the snapshot on the remote site. At the completion of this 
 cycle, application-consistent volume snapshots and backups exist on the 

Re: [openstack-dev] [Congress][Data Integration]

2014-03-13 Thread Rajdeep Dua
Thanks for the feedback.
Will create a design on these lines and send across for review


On Wed, Mar 12, 2014 at 3:53 PM, Tim Hinrichs thinri...@vmware.com wrote:

 Hi Rajdeep,

 This is an great problem to work on because it confronts one of the
 assumptions we're making in Congress: that cloud services can be
 represented as a collection of tables in a reasonable way.  You're asking
 good questions here.

 More responses inline.

 Tim


 --

 *From: *Rajdeep Dua rajdeep@gmail.com
 *To: *openstack-dev@lists.openstack.org
 *Sent: *Wednesday, March 12, 2014 11:54:28 AM
 *Subject: *[openstack-dev] [Congress][Data Integration]


 Need some guidance on how to convert nested types into flat tuples.
 Also should we reorder the tuple values in a particular sequence?

 Order of tuples doesn't matter. Order of columns (values) within a tuple
 doesn't really matter either, except that all tuples must use the same
 order and the policies we write must know which column is which.


 Thanks
 Rajdeep

 As an example I have shown networks and ports tuples with some nested types

 networks - tuple format
 ---

 keys (for reference)

 {'status','subnets',
 'name','test-network','provider:physical_network','admin_state_up',
 'tenant_id','provider:network_type','router:external',
 'shared',id,'provider:segmentation_id'}

 values
 ---
 ('ACTIVE', ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'], 'test-network', None,
 True,
 '570fe78a1dc54cffa053bd802984ede2', 'gre', False, False,
 '240ff9df-df35-43ae-9df5-27fae87f2492', 4)

 Here we'd want to pull the List out and replace it with an ID. Then create
 another table that shows which subnets belong to the list with that ID.
 (You can think of the ID as a pointer to the list---in the C/C++ sense.)
  So something like...

 network( 'ACTIVE', 'ID1', 'test-network', None, True,

 '570fe78a1dc54cffa053bd802984ede2', 'gre', False, False,
 '240ff9df-df35-43ae-9df5-27fae87f2492', 4)

 element('ID1', '4cef03d0-1d02-40bb-8c99-2f442aac6ab0')
 element('ID1', another subnet if one existed)

 The other thing to think about is whether we want 1 table with 10 columns
 or we want 10 tables with 2 columns each. In this example, we would have...


 network('net1')
 network.status('net1', 'ACTIVE' )
 network.subnets('net1', 'ID1')
 network.name('net1', 'test-network')
 ...

 The period is just another character in the tablename. Nothing fancy
 happening here.

 The ports example below would need a similar flattening.  To handle
 dictionaries, I would use the dot-notation shown above.

 A single Neutron API call might populate several Congress tables.
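
 A rough sketch of this kind of flattening, purely for illustration (not
 Congress code; the ID generation and table naming are arbitrary):

import itertools

_ids = itertools.count(1)


def flatten(table, row):
    """Turn one nested API dict into flat tuples: lists become an ID plus
    'element' rows, nested dicts get dot-notation table names."""
    tuples = []
    values = []
    for key, value in sorted(row.items()):
        if isinstance(value, list):
            list_id = 'ID%d' % next(_ids)
            tuples.extend([('element', (list_id, item)) for item in value])
            values.append(list_id)
        elif isinstance(value, dict):
            dict_id = 'ID%d' % next(_ids)
            for subkey, subvalue in sorted(value.items()):
                tuples.append(('%s.%s' % (key, subkey), (dict_id, subvalue)))
            values.append(dict_id)
        else:
            values.append(value)
    tuples.append((table, tuple(values)))
    return tuples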

 Tim


 ports - tuple format
 
 keys (for reference)

 {'status','binding:host_id', 'name', 'allowed_address_pairs',
 'admin_state_up', 'network_id',
 'tenant_id', 'extra_dhcp_opts': [],
 'binding:vif_type', 'device_owner',
 'binding:capabilities', 'mac_address',
 'fixed_ips' , 'id', 'security_groups',
 'device_id'}

 Values

 ('ACTIVE', 'havana', '', [], True, '240ff9df-df35-43ae-9df5-27fae87f2492',
 '570fe78a1dc54cffa053bd802984ede2', [], 'ovs', 'network:router_interface',
 {'port_filter': True}, 'fa:16:3e:ab:90:df', [{'subnet_id':
 '4cef03d0-1d02-40bb-8c99-2f442aac6ab0', 'ip_address': '90.0.0.1'}],
 '0a2ce569-85a8-45ec-abb3-0d4b34ff69ba', [],
 '864e4acf-bf8e-4664-8cf7-ad5daa95681e')

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

 https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-devk=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2bqaf%2FKm29ZfiqAKXxeo%3D%0Am=A86YVKfBX5U3g6F7eNScJYjr6Qwjv4dyDyVxE9Im8g8%3D%0As=0345ab3711a58ec1ebcee08649f047826cec593f57e9843df0fec2f8cfb03b42



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

2014-03-13 Thread iain macdonnell
I've been playing with an icehouse build grabbed from fedorapeople. My
hypervisor platform is libvirt-xen, which I understand may be
deprecated for icehouse(?) but I'm stuck with it for now, and I'm
using VLAN networking. It almost works, but I have a problem with
networking. In havana, the VIF gets placed on a legacy ethernet
bridge, and a veth pair connects that to the OVS integration bridge.
I understand that this was done to enable iptables filtering at the
VIF. In icehouse, the VIF appears to get placed directly on the
integration bridge - i.e. the libvirt XML includes something like:

<interface type='bridge'>
  <mac address='fa:16:3e:e7:1e:c3'/>
  <source bridge='br-int'/>
  <script path='vif-bridge'/>
  <target dev='tap43b9d367-32'/>
</interface>


The problem is that the port on br-int does not have the VLAN tag.
i.e. I'll see something like:

Bridge br-int
Port tap43b9d367-32
Interface tap43b9d367-32
Port qr-cac87198-df
tag: 1
Interface qr-cac87198-df
type: internal
Port int-br-bond0
Interface int-br-bond0
Port br-int
Interface br-int
type: internal
Port tapb8096c18-cf
tag: 1
Interface tapb8096c18-cf
type: internal


If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
tag=1', traffic starts flowing where it needs to.

I've traced this back a bit through the agent code, and find that the
bridge port is ignored by the agent because it does not have any
external_ids (observed with 'ovs-vsctl list Interface'), and so the
update process that normally sets the tag is not invoked. It appears
that Xen is adding the port to the bridge, but nothing is updating it
with the neutron-specific external_ids that the agent expects to
see.
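
For reference, a workaround sketch that stamps onto the port the external_ids
the agent looks for (the field names follow what Nova's OVS VIF plugging
normally sets - please double-check them against your agent version before
relying on this):

import subprocess


def tag_xen_vif(dev, neutron_port_id, mac, instance_uuid):
    """Add the neutron-style external_ids to a port that Xen attached to
    br-int itself, so the OVS agent will pick it up and set the VLAN tag."""
    subprocess.check_call([
        'ovs-vsctl', '--timeout=10', 'set', 'Interface', dev,
        'external_ids:iface-id=%s' % neutron_port_id,
        'external_ids:iface-status=active',
        'external_ids:attached-mac=%s' % mac,
        'external_ids:vm-uuid=%s' % instance_uuid,
    ])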

Before I dig any further, I thought I'd ask; is this stuff supposed to
work at this point? Is it intentional that the VIF is getting placed
directly on the integration bridge now? Might I be missing something
in my configuration?

FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
the same result.

TIA,

~iain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Monty Taylor
Will do!

On Mar 13, 2014 10:13 AM, Solly Ross sr...@redhat.com wrote:

 @Monty: having a packaging system sounds like a good idea.  Send us a pull 
 request on github.com/kanaka/noVNC. 

 Best Regards, 
 Solly Ross 

 - Original Message - 
 From: Monty Taylor mord...@inaugust.com 
 To: Sean Dague s...@dague.net, OpenStack Development Mailing List (not 
 for usage questions) openstack-dev@lists.openstack.org, 
 openst...@nemebean.com 
 Cc: openstack-in...@lists.openstack.org 
 Sent: Thursday, March 13, 2014 12:09:01 PM 
 Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
 noVNC from github.com/kanaka 

 I agree. 

 Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
 train of thought ... 

 As we started working on storyboard in infra, we've started using the 
 bower tool for html/javascript packaging - and we have some ability to 
 cache the output of that pretty easily. Would you accept patches to 
 noVNC to add bower config things and/or publication of tarballs of 
 releases via it? Since noVNC isn't likely to be participating in the 
 integrated gate in either case, we could potentially split the question 
 of how do we get copies of it in a way that doesn't depend on OS 
 distros (which is why we use pip for our python depends) and does 
 noVNC want to have its git repo exist in OpenStack Infra systems. 

 Monty 

 On 03/13/2014 07:44 AM, Sean Dague wrote: 
  I think a bigger question is why are we using a git version of something 
  outside of OpenStack. 
  
  Where is a noVNC release we can point to and use? 
  
  In Juno I'd really be pro removing all the devstack references to git 
  repos not on git.openstack.org, because these kinds of failures have 
  real impact. 
  
  Currently we have 4 repositories that fit this bill: 
  
  SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git} 
  NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git} 
  RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git} 
  SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
   
  
  I think all of these probably need to be removed from devstack. We 
  should be using release versions (preferably in distros, though allowed 
  to be in language specific package manager). 
  
  -Sean 
  
  On 03/13/2014 10:26 AM, Solly Ross wrote: 
  @bnemec: I don't think that's been considered.  I'm actually one of the 
  upstream maintainers for noVNC.  The only concern that I'd have with 
  OpenStack adopting noVNC (there are other maintainers, as well as the 
  author, so I'd have to check with them as well) is that there are a few 
  other projects that use noVNC, so we'd need to make sure that no 
   OpenStack-specific code gets merged into noVNC if we adopt it.  Other than 
   that, though, adopting noVNC doesn't sound like a horrible idea. 
  
  Best Regards, 
  Solly Ross 
  
  - Original Message - 
  From: Ben Nemec openst...@nemebean.com 
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org 
  Cc: openstack-in...@lists.openstack.org 
  Sent: Wednesday, March 12, 2014 3:38:19 PM 
  Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
  cloning noVNC from github.com/kanaka 
  
  
  
  On 2014-03-11 20:34, Joshua Harlow wrote: 
  
  
  https://status.github.com/messages 
  * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
  mitigations we have in place are proving effective in protecting us and 
  we're hopeful that we've got this one resolved.' 
  If you were cloning from github.org and not http://git.openstack.org then 
  you were likely seeing some of the DDoS attack in action. 
  Unfortunately I don't think novnc is in git.openstack.org because it's not 
  an OpenStack project. I wonder if we should investigate adopting it (if 
  the author(s) are amenable to that) since we're using the git version of 
  it. Maybe that's already been considered and I just don't know about it. 
  :-) 
  -Ben 
  
  
  
  From: Sukhdev Kapur  sukhdevka...@gmail.com  
  Reply-To: OpenStack Development Mailing List (not for usage questions)  
  openstack-dev@lists.openstack.org  
  Date: Tuesday, March 11, 2014 at 4:08 PM 
  To: Dane Leblanc (leblancd)  lebla...@cisco.com  
  Cc: OpenStack Development Mailing List (not for usage questions)  
  openstack-dev@lists.openstack.org ,  openstack-in...@lists.openstack.org 
openstack-in...@lists.openstack.org  
  Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
  cloning noVNC from github.com/kanaka 
  
  
  
  I have noticed that even clone of devstack has failed few times within 
  last couple of hours - it was running fairly smooth so far. 
  -Sukhdev 
  
  
  On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur  sukhdevka...@gmail.com  
  wrote: 
  
  
  
  [adding openstack-dev list as well ] 
  I have noticed that this has stated hitting my builds within last few 
  hours. I have 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Boris Pavlovic
Hi all,


I would like to fix the direction of this thread, because it is going in the
wrong direction.

To summarize:
1) Yes, restoring already-deleted resources could be useful.
2) The current approach with soft deletion is broken by design and we should
get rid of it.

More about why I think it is broken:
1) When you restore some resource you have to restore N records from N
tables (e.g. a VM).
2) Restoring sometimes means not only restoring DB records.
3) Not all resources should be restorable (e.g. why would I need to restore a
fixed_ip or key pairs?).


So what we should think about is:
1) How to implement restore functionality in a common way (e.g. a framework
that will live in oslo).
2) How to split the work of getting rid of soft deletion into steps (which I
already mentioned):
a) remove soft deletion from the places where we are not using it
b) replace internal code that uses soft deletion with that framework
c) replace API stuff using ceilometer (for logs) or this framework (for
restorable stuff)


To put it in a nutshell: restoring deleted resources / delayed deletion != soft
deletion.
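
To make the distinction concrete, a minimal SQLAlchemy sketch of the two
patterns (purely illustrative - not the proposed framework and not the
oslo.db code):

import datetime

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    hostname = Column(String(255))
    # the soft-deletion columns this proposal wants to retire
    deleted = Column(Integer, default=0)
    deleted_at = Column(DateTime)


class ShadowInstance(Base):
    __tablename__ = 'shadow_instances'
    id = Column(Integer, primary_key=True)
    hostname = Column(String(255))
    deleted_at = Column(DateTime)


def soft_delete(session, instance):
    # today: the row stays in the hot table and every query must filter on it
    instance.deleted = instance.id
    instance.deleted_at = datetime.datetime.utcnow()
    session.add(instance)


def delete_with_archive(session, instance):
    # restorable/delayed deletion: copy what is needed elsewhere, then
    # really delete the row from the hot table
    session.add(ShadowInstance(id=instance.id, hostname=instance.hostname,
                               deleted_at=datetime.datetime.utcnow()))
    session.delete(instance)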


Best regards,
Boris Pavlovic



On Thu, Mar 13, 2014 at 9:21 PM, Mike Wilson geekinu...@gmail.com wrote:

 For some guests we use the LVM imagebackend and there are times when the
 guest is deleted on accident. Humans, being what they are, don't back up
 their files and don't take care of important data, so it is not uncommon to
 use lvrestore and undelete an instance so that people can get their data.
 Of course, this is not always possible if the data has been subsequently
 overwritten. But it is common enough that I imagine most of our operators
 are familiar with how to do it. So I guess my saying that we do it on a
 regular basis is not quite accurate. Probably would be better to say that
 it is not uncommon to do this, but definitely not a daily task or something
 of that ilk.

 I have personally undeleted an instance a few times after accidental
 deletion also. I can't remember the specifics, but I do remember doing it
 :-).

 -Mike


 On Tue, Mar 11, 2014 at 12:46 PM, Johannes Erdfelt 
 johan...@erdfelt.comwrote:

 On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
  Undeleting things is an important use case in my opinion. We do this in
 our
  environment on a regular basis. In that light I'm not sure that it
 would be
  appropriate just to log the deletion and git rid of the row. I would
 like
  to see it go to an archival table where it is easily restored.

 I'm curious, what are you undeleting and why?

 JE


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Josh Durgin

On 03/12/2014 04:54 PM, Matt Riedemann wrote:



On 3/12/2014 6:32 PM, Dan Smith wrote:

I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this
decision, and what new steps we need to take to get this back in for
Juno.


Jay's fix resolved the immediate problem that was reported by the user.
However, after realizing why the bug manifested itself and why it didn't
occur during our testing, all of the core members involved recommended a
revert as the least-risky course of action at this point. If it took
almost no time for that change to break a user that wasn't even using
the feature, we're fearful about what may crop up later.

We talked with the patch author (zhiyan) in IRC for a while after making
the decision to revert about what the path forward for Juno is. The
tl;dr as I recall is:

  1. Full Glance v2 API support merged
  2. Tests in tempest and nova that exercise Glance v2, and the new
 feature
  3. Push the feature patches back in

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Those are essentially the steps as I remember them too.  Sean changed
the dependencies in the blueprints so the nova glance v2 blueprint is
the root dependency, then multiple images and then the other download
handler blueprints at the top.  I haven't checked but the blueprints
should be marked as not complete (not sure what that would be now) and
marked for next, the v2 glance root blueprint should be marked as high
priority too so we get the proper focus when Juno opens up.


These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.

These patches use the v2 glance api for exactly one call - to get
image locations. This has been available and used by other
features in nova and cinder since 2012.

Jay's patch fixed the one issue that was found, and added tests for
several other cases. No other calls to glance v2 are used. The method
Jay fixed is the only one that accesses the response from glanceclient.
Furthermore, it's trivial to guard against more incompatibilities and
fall back to downloading normally if any errors occur. This already
happens if glance does not expose image locations.

Can we consider adding this safety valve and un-reverting these patches?

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Sergey Lukjanov
Thanks to everyone who joined the Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.log.html

It was decided not to keep backward compatibility for the renaming due to
the large amount of additional effort needed. We'll discuss the starting date
for full backward compatibility at the next meeting.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Jay Pipes
On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
 Thanks to everyone who joined the Savanna meeting.

You mean Sahara? :P

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-03-12 10:58:36 -0700:
 On Wed, 2014-03-12 at 17:35 +, Tim Bell wrote:
  And if the same mistake is done for a cinder volume or a trove database ?
 
 Snapshots and backups?
 

and bears, oh my!

+1, whether it is large data on a volume or saved state in the RAM of
a compute node, it isn't safe unless it is duplicated.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Clint Byrum
Excerpts from Tim Bell's message of 2014-03-12 11:02:25 -0700:
 
  
  If you want to archive images per se, on deletion just export it to a 
  'backup tape' (for example) and store enough of the metadata
  on that 'tape' to re-insert it if this is really desired and then delete it 
  from the database (or do the export... asynchronously). The
  same could be said with VMs, although likely not all resources, aka 
  networks/.../ make sense to do this.
  
  So instead of deleted = 1, wait for cleaner, just save the resource (if
  possible) + enough metadata on some other system ('backup tape', alternate 
  storage location, hdfs, ceph...) and leave it there unless
  it's really needed. Making the database more complex (and all associated 
  code) to achieve this same goal seems like a hack that just
  needs to be addressed with a better way to do archiving.
  
  In a cloudy world of course people would be able to recreate everything 
  they need on-demand so who needs undelete anyway ;-)
  
 
 I have no problem if there was an existing process integrated into all of the 
 OpenStack components which would produce me an archive trail with meta data 
 and a command to recover the object from that data.
 
 Currently, my understanding is that there is no such function and thus the 
 proposal to remove the deleted column is premature.
 

That seems like an unreasonable request of low level tools like Nova. End
user applications and infrastructure management should be responsible
for these things and will do a much better job of it, as you can work
your own business needs for reliability and recovery speed into an
application aware solution. If Nova does it, your cloud just has to
provide everybody with the same un-delete, which is probably overkill
for _many_ applications.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [db][all] (Proposal) Restorable Delayed deletion of OS Resources

2014-03-13 Thread Boris Pavlovic
Hi stackers,

As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
(step by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

I understood that there should be another proposal, about how we should
implement restorable & delayed deletion of OpenStack resources in a common way,
without these hacks with soft deletion in the DB. It is actually very
simple; take a look at this document:

https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel-dev] [Fuel] Fuel Project 4.1 milestone reached!

2014-03-13 Thread David Easter
Hi All,

  Just wanted to let everyone know that the Fuel Project met its 4.1
milestone on Monday, March 7th.  This latest version includes (among other
things):
* Support for the OpenStack Havana 2013.2.2 release
(https://wiki.openstack.org/wiki/ReleaseNotes/2013.2.2)
* Ability to stop a deployment before completion
* Ability to reset an environment back to pre-deployment state without
losing original configuration settings
* NIC Bonding configuration in the Fuel UI
* Ceph acting as a backend for ephemeral volumes is no longer experimental
* The Ceilometer section within Horizon is now enabled by default
* Multiple network roles can share a single physical NIC
* Hundreds of defect fixes
Please feel free to visit https://launchpad.net/fuel/4.x/4.1 to view the
blueprints implemented and defects fixed.

Thanks to everyone in the community who contributed to hitting this
milestone!

- David J. Easter
  Product Line Manager,  Mirantis


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Mike Wilson
The restore use case is for sure inconsistently implemented and used. I
think I agree with Boris that we treat it as separate and just move on with
cleaning up soft delete. I imagine most deployments don't like having most
of the rows in their table be useless and make db access slow? That being
said, I am a little sad my hacky restore method will need to be reworked
:-).

-Mike


On Thu, Mar 13, 2014 at 1:30 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Tim Bell's message of 2014-03-12 11:02:25 -0700:
 
  
   If you want to archive images per se, on deletion just export it to a
 'backup tape' (for example) and store enough of the metadata
   on that 'tape' to re-insert it if this is really desired and then
 delete it from the database (or do the export... asynchronously). The
   same could be said with VMs, although likely not all resources, aka
 networks/.../ make sense to do this.
  
   So instead of deleted = 1, wait for cleaner, just save the resource (if
   possible) + enough metadata on some other system ('backup tape',
 alternate storage location, hdfs, ceph...) and leave it there unless
   it's really needed. Making the database more complex (and all
 associated code) to achieve this same goal seems like a hack that just
   needs to be addressed with a better way to do archiving.
  
   In a cloudy world of course people would be able to recreate
 everything they need on-demand so who needs undelete anyway ;-)
  
 
  I have no problem if there was an existing process integrated into all
 of the OpenStack components which would produce me an archive trail with
 meta data and a command to recover the object from that data.
 
  Currently, my understanding is that there is no such function and thus
 the proposal to remove the deleted column is premature.
 

 That seems like an unreasonable request of low level tools like Nova. End
 user applications and infrastructure management should be responsible
 for these things and will do a much better job of it, as you can work
 your own business needs for reliability and recovery speed into an
 application aware solution. If Nova does it, your cloud just has to
 provide everybody with the same un-delete, which is probably overkill
 for _many_ applications.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Russell Bryant
On 03/13/2014 03:04 PM, Josh Durgin wrote:
 These reverts are still confusing me. The use of glance's v2 api
 is very limited and easy to protect from errors.
 
 These patches use the v2 glance api for exactly one call - to get
 image locations. This has been available and used by other
 features in nova and cinder since 2012.
 
 Jay's patch fixed the one issue that was found, and added tests for
 several other cases. No other calls to glance v2 are used. The method
 Jay fixed is the only one that accesses the response from glanceclient.
 Furthermore, it's trivial to guard against more incompatibilities and
 fall back to downloading normally if any errors occur. This already
 happens if glance does not expose image locations.

There was some use of the v2 API, but not by default.  These patches
changed that, and it was broken.  We went from not requiring the v2 API
to requiring it, without a complete view for what that means, including
a severe lack of testing of that API.

I think it's the right call to block any non-optional use of the API
until it's properly tested, and ideally, supported more generally in nova.

 Can we consider adding this safety valve and un-reverting these patches?

No.  We're already well into the freeze and we can't afford risk or
distraction.  The time it took to deal with and discuss the issue this
caused is exactly why we're hesitant to approve FFEs at all.  It's a
distraction during critical time as we work toward the RC.

The focus right now has to be on high/critical priority bugs and
regressions.  We can revisit this properly in Juno.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Anna A Sortland
Hi Mark, 

The existing v3/users API will still exist and will show all users. So you 
will still be able to grant a role to a user who does not have one now.
I wonder if it makes sense to add a new API that would show only users 
that have role grants. 

So we would have:
v3/users - list all users   (existing API)
v3/roles/users - list users that have role grants   (new API)
v3/roles/{role_id}/users - list users with a specified role (existing API)



Anna Sortland
Cloud Systems Software Development
IBM Rochester, MN
annas...@us.ibm.com






From:   Mark Washenberger mark.washenber...@markwash.net
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/13/2014 01:01 PM
Subject:Re: [openstack-dev] [keystone] All LDAP users returned 
using keystone v3/users API



Hi Anna,


On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland annas...@us.ibm.com 
wrote:
[A] The current keystone LDAP community driver returns all users that 
exist in LDAP via the API call v3/users, instead of returning just users 
that have role grants (similar processing is true for groups). This could 
potentially be a very large number of users. We have seen large companies 
with LDAP servers containing hundreds of thousands of users. We are aware 
of the filters available in keystone.conf ([ldap].user_filter and 
[ldap].query_scope) to cut down on the number of results, but they do not 
provide sufficient filtering (for example, it is not possible to set 
user_filter to members of certain known groups for OpenLDAP without 
creating a memberOf overlay on the LDAP server). 

[Nathan Kinder] What attributes would you filter on?  It seems to me that 
LDAP would need to have knowledge of the roles to be able to filter based 
on the roles.  This is not necessarily the case, as identity and 
assignment can be split in Keystone such that identity is in LDAP and role 
assignment is in SQL.  I believe it was designed this way to deal with 
deployments
where LDAP already exists and there is no need (or possibility) of adding 
role info into LDAP. 

[A] That's our main use case. The users and groups are in LDAP and role 
assignments are in SQL. 
You would filter on role grants, and this information is in the SQL backend. So 
the new API would need to query both the identity and assignment drivers. 

From my perspective, it seems there is a chicken-and-egg problem with this 
proposal. If a user doesn't have a role assigned, the user does not show 
up in the list. But if the user doesn't show up in the list, the user 
doesn't exist. If the user doesn't exist, you cannot add a role to it.

Perhaps what is needed is just some sort of filter on listing users that 
only returns users with a role in the cloud?

 

[Nathan Kinder] Without filtering based on a role attribute in LDAP, I 
don't think that there is a good solution if you have OpenStack and 
non-OpenStack users mixed in the same container in LDAP.
If you want to first find all of the users that have a role assigned to 
them in the assignments backend, then pull their information from LDAP, I 
think that you will end up with one LDAP search operation per user. This 
also isn't a very scalable solution.

[A] What was the reason the LDAP driver was written this way, instead of 
returning just the users that have OpenStack-known roles? Was the creation 
of a separate API for this function considered? 
Are other consumers of OpenStack (or users of Horizon) experiencing this 
issue? If so, what was their approach to overcoming it? We have been 
prototyping a keystone extension that provides an API with this 
filtering capability, but it seems like a function that should be 
generally available in keystone. 

[Nathan Kinder] I'm curious to know how your prototype is looking to 
handle this. 

[A] The prototype basically first calls assignment API 
list_role_assignments() to get a list of users and groups with role 
grants. It then iterates the retrieved list and calls identity API 
list_users_in_group() to get the list of users in these groups with grants 
and get_user() to get users that have role grants but do not belong to the 
groups with role grants (a call for each user). Both calls ignore groups 
and users that are not found in the LDAP registry but exist in SQL (this 
could be the result of a user or group being removed from LDAP, but the 
corresponding role grant was not revoked). Then the code removes 
duplicates if any and returns the combined list. 
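
(For illustration, a rough sketch of that flow -- using the driver method
names mentioned above, assuming v3-style assignment dicts with user_id/group_id
keys, and eliding the handling of users/groups missing from LDAP:)

    def list_users_with_role_grants(assignment_api, identity_api):
        users = {}
        # 1. Ask the (SQL) assignment backend for everything that has a grant.
        for assignment in assignment_api.list_role_assignments():
            if assignment.get('group_id'):
                # 2a. Expand group grants into member users via the LDAP backend.
                for user in identity_api.list_users_in_group(
                        assignment['group_id']):
                    users[user['id']] = user
            elif assignment.get('user_id'):
                # 2b. Direct user grants: one lookup per user.
                users[assignment['user_id']] = identity_api.get_user(
                    assignment['user_id'])
        # 3. Keying on id removes duplicates; return the combined list.
        return list(users.values())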

The new extension API is /v3/my_new_extension/users. Maybe the better 
naming would be v3/roles/users (list users with any role) - compare to 
existing v3/roles/{role_id}/users  (list users with a specified role). 

Another alternative that we've tried is just a new identity driver that 
inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides 
just the list_users() function. That's probably not the best approach from 
OpenStack standards point of view but I 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Jorge Miramontes
Hey everyone,

Now that the thread has had enough time for people to reply it appears that the 
majority of people that vocalized their opinion are in favor of a mini-summit, 
preferably to occur in Atlanta days before the Openstack summit. There are 
concerns however, most notably the concern that the mini-summit is not 100% 
inclusive (this seems to imply that other mini-summits are not 100% inclusive). 
Furthermore, there seems to be a concern about timing. I am relatively new to 
Openstack processes so I want to make sure I am following them. In this case, 
does majority vote win? If so, I'd like to further this discussion into 
actually planning a mini-summit. Thoughts?

Cheers,
--Jorge

From: Mike Wilson geekinu...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 11, 2014 11:57 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Hangouts  worked well at the nova mid-cycle meetup. Just make sure you have 
your network situation sorted out before hand. Bandwidth and firewalls are what 
comes to mind immediately.

-Mike


On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton 
tom.creigh...@rackspace.com wrote:
When the Designate team had their mini-summit, they had an open Google Hangout 
for remote participants.  We could even have an open conference bridge if you 
are not partial to video conferencing.  With the issue of inclusion solved, 
let’s focus on a date that is good for the team!

Cheers,

Tom Creighton


On Mar 10, 2014, at 4:10 PM, Edgar Magana 
emag...@plumgrid.com wrote:

 Eugene,

 I have a few arguments why I believe this is not 100% inclusive:
   • Is the foundation involved in this process? How? What is the budget? 
 Who is responsible from the foundation side?
   • If somebody already made travel arrangements, it won't be possible to 
 make changes at no cost.
   • Staying extra days in a different city could impact anyone's budget.
   • As an OpenStack developer, I want to understand why the summit is not 
 enough for deciding the next steps for each project. If that is the case, I 
 would prefer to make changes to the organization of the summit instead of 
 creating mini-summits all around!
 I could continue, but I think these are good enough.

 I could agree with your point about previous summits being distracting for 
 developers; this is why this time the OpenStack foundation is trying very 
 hard to allocate specific days for the conference and specific days for the 
 summit.
 The point on which I totally agree with you is that we SHOULD NOT have sessions 
 about work that will be done no matter what!  Those are just a waste of good 
 time that could be invested in very interesting discussions about topics that 
 are still not clear.
 I would recommend that you express this opinion to Mark. He is the right guy 
 to decide which sessions will bring interesting discussions and which ones 
 will be just a declaration of intents.

 Thanks,

 Edgar

 From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 10:32 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

 Hi Edgar,

 I'm neutral to the suggestion of mini summit at this point.
 Why do you think it will exclude developers?
 If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
 that would allow anyone who joins OS Summit to save on extra travelling.
 OS Summit itself is too distracting to have really productive discussions, 
 unless you're skipping the sessions and spending the time discussing.
 For instance, design sessions are basically only good for declarations of intent, 
 but not for real discussion of a complex topic at a meaningful level of detail.

 What would be your suggestions to make this more inclusive?
 I think the time and place are the key here - hence Atlanta and a few days prior 
 to the OS summit.

 Thanks,
 Eugene.



 On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana 
 emag...@plumgrid.com wrote:
 Team,

 I found that having a mini-summit on very short notice means excluding
 a lot of developers from such an interesting topic for Neutron.
 The OpenStack summit is the opportunity for all developers to come
 together and discuss the next steps; there are many developers that CAN
 NOT afford another trip for a special summit. I am personally against
 that and I do support Mark's proposal of having all the conversation over
 IRC and mailing 

Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Matthew Farrellee

On 03/13/2014 03:24 PM, Jay Pipes wrote:

On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:

Thanks everyone who have joined Savanna meeting.


You mean Sahara? :P

-jay


sergey now has to put some bitcoins in the jar...


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Andrew Woodward
I disagree with the new dependency graph here; I don't think it's reasonable to
continue to have the Ephemeral RBD patch behind both glance v2 support and
image-multiple-location. Given the time that this has been in flight, we
should not be holding up features that do exist for features that don't.

I think we should go back to the original work proposed by Josh in [1] and
clean it up to be resubmitted once we re-open for Juno. If some
re-factoring for RBD is needed when glance v2 or image-multiple-location
does land, we would be happy to assist.

[1]  https://review.openstack.org/46879

Andrew
Mirantis
Ceph Community


On Thu, Mar 13, 2014 at 12:04 PM, Josh Durgin josh.dur...@inktank.com wrote:

 On 03/12/2014 04:54 PM, Matt Riedemann wrote:



 On 3/12/2014 6:32 PM, Dan Smith wrote:

 I'm confused as to why we arrived at the decision to revert the commits
 since Jay's patch was accepted. I'd like some details about this
 decision, and what new steps we need to take to get this back in for
 Juno.


 Jay's fix resolved the immediate problem that was reported by the user.
 However, after realizing why the bug manifested itself and why it didn't
 occur during our testing, all of the core members involved recommended a
 revert as the least-risky course of action at this point. If it took
 almost no time for that change to break a user that wasn't even using
 the feature, we're fearful about what may crop up later.

 We talked with the patch author (zhiyan) in IRC for a while after making
 the decision to revert about what the path forward for Juno is. The
 tl;dr as I recall is:

   1. Full Glance v2 API support merged
   2. Tests in tempest and nova that exercise Glance v2, and the new
  feature
   3. Push the feature patches back in

 --Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Those are essentially the steps as I remember them too.  Sean changed
 the dependencies in the blueprints so the nova glance v2 blueprint is
 the root dependency, then multiple images and then the other download
 handler blueprints at the top.  I haven't checked but the blueprints
 should be marked as not complete (not sure what that would be now) and
 marked for next; the v2 glance root blueprint should be marked as high
 priority too so we get the proper focus when Juno opens up.


 These reverts are still confusing me. The use of glance's v2 api
 is very limited and easy to protect from errors.

 These patches use the v2 glance api for exactly one call - to get
 image locations. This has been available and used by other
 features in nova and cinder since 2012.

 Jay's patch fixed the one issue that was found, and added tests for
 several other cases. No other calls to glance v2 are used. The method
 Jay fixed is the only one that accesses the response from glanceclient.
 Furthermore, it's trivial to guard against more incompatibilities and
 fall back to downloading normally if any errors occur. This already
 happens if glance does not expose image locations.

 Can we consider adding this safety valve and un-reverting these patches?

 Josh


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
If google has done it, Google did it right!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-13 Thread Matt Riedemann



On 3/12/2014 7:29 PM, Arnaud Legendre wrote:

Hi Matt,

I totally agree with you and actually we have been discussing this a lot 
internally the last few weeks.
. As a top priority, the driver MUST integrate with oslo.vmware. This will be 
achieved through this chain of patches [1]. We want these patches to be merged 
before other things.
I think we should stop introducing more complexity which makes the task of refactoring 
more and more complicated. The integration with oslo.vmware is not a refactoring but 
should be seen as a way to get a more lightweight version of the driver which 
will make the task of refactoring a bit easier.
. Then, we want to actually refactor; we have several meetings to figure out 
the best strategy to adopt going forward (and avoid reproducing the same 
mistakes).
The highest priority is spawn(): we need to make it modular and remove nested 
methods. This refactoring work should include the integration with the image 
handler framework [2] and introducing the notion of an image type object to avoid 
all these conditionals on image types inside the core logic.
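
(Purely as an illustration of "modular, no nested methods" -- the class and
helper names below are hypothetical stubs, not the actual driver code:)

    class VMwareVMOps(object):
        def spawn(self, context, instance, image_meta, network_info,
                  block_device_info=None):
            # spawn() becomes a thin sequence of small, separately testable
            # steps instead of one huge method full of nested functions.
            vi = self._get_vm_config_info(instance, image_meta)
            self._fetch_image_if_missing(context, vi)
            vm_ref = self._create_vm(vi, network_info, block_device_info)
            self._power_on(vm_ref)
            return vm_ref

        # Stubs standing in for the real logic:
        def _get_vm_config_info(self, instance, image_meta):
            return {'instance': instance, 'image': image_meta}

        def _fetch_image_if_missing(self, context, vi):
            pass

        def _create_vm(self, vi, network_info, block_device_info):
            return 'vm-ref'

        def _power_on(self, vm_ref):
            pass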


Breaking up the spawn method to make it modular and thus testable, or 
refactoring to use oslo.vmware - the order there doesn't seem to really 
matter to me since both sound good.  But this scares me:


This refactoring work should include the integration with the image 
handler framework


Hopefully the refactoring being talked about here with oslo.vmware and 
breaking spawn into chunks can be done *before* any work to refactor the 
vmware driver to use the multiple image locations feature - it will 
probably have to be, given that feature was reverted out of Icehouse and will 
have some prerequisite work to do before it lands in Juno.



. I would like to see you cores be involved in this design since you will be 
reviewing the code at some point. "Involved" here can be interpreted as reviewing the 
design and/or actually participating in the design discussions. I would like to get your POV on 
this.

Let me know if this approach makes sense.

Thanks,
Arnaud

[1] https://review.openstack.org/#/c/70175/
[2] https://review.openstack.org/#/c/33409/


- Original Message -
From: Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Sent: Wednesday, March 12, 2014 11:28:23 AM
Subject: Re: [openstack-dev] [nova] An analysis of code review in Nova



On 2/25/2014 6:36 AM, Matthew Booth wrote:

I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
graphs, both attached to this mail.

Both graphs show a nova directory tree pruned at the point that a
directory contains less than 2% of total LOCs. Additionally, /tests and
/locale are pruned as they make the resulting graph much busier without
adding a great deal of useful information. The data for both graphs was
generated from the most recent 1000 changes in gerrit on Monday 24th Feb
2014. This includes all pending changes, just over 500, and just under
500 recently merged changes.

pending.svg shows the percentage of LOCs which have an outstanding
change against them. This is one measure of how hard it is to write new
code in Nova.

merged.svg shows the average length of time between the
ultimately-accepted version of a change being pushed and being approved.

Note that there are inaccuracies in these graphs, but they should be
mostly good. Details of generation here:
https://github.com/mdbooth/heatmap. This code is obviously
single-purpose, but is free for re-use if anyone feels so inclined.

The first graph above (pending.svg) is the one I was most interested in,
and shows exactly what I expected it to. Note the size of 'vmwareapi'.
If you check out Nova master, 24% of the vmwareapi driver has an
outstanding change against it. It is practically impossible to write new
code in vmwareapi without stomping on an oustanding patch. Compare that
to the libvirt driver at a much healthier 3%.

The second graph (merged.svg) is an attempt to look at why that is.
Again comparing the VMware driver with the libvirt we can see that at 12
days, it takes much longer for a change to be approved in the VMware
driver than in the libvirt driver. I suspect that this isn't the whole
story, which is likely a combination of a much longer review time with
very active development.

What's the impact of this? As I said above, it obviously makes it very
hard to come in as a new developer of the VMware driver when almost a
quarter of it has been rewritten, but you can't see it. I am very new to
this and others should validate my conclusions, but I also believe this
is having a detrimental 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Jay Pipes
On Thu, 2014-03-13 at 20:06 +, Jorge Miramontes wrote:
 Now that the thread has had enough time for people to reply it appears
 that the majority of people that vocalized their opinion are in favor
 of a mini-summit, preferably to occur in Atlanta days before the
 Openstack summit. There are concerns however, most notably the concern
 that the mini-summit is not 100% inclusive (this seems to imply that
 other mini-summits are not 100% inclusive). Furthermore, there seems
 to be a concern about timing. I am relatively new to Openstack
 processes so I want to make sure I am following them. In this case,
 does majority vote win? If so, I'd like to further this discussion
 into actually planning a mini-summit. Thoughts?



I personally would not be able to attend a mini-summit days before the
regular summit. I would, however, support a mini-summit about a month
after the regular summit, where the focus would be on implementing the
designs that are discussed at the regular summit.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Sergey Lukjanov
Heh, we should have a fathomless jar for it :(

On Thu, Mar 13, 2014 at 11:30 PM, Matthew Farrellee m...@redhat.com wrote:
 On 03/13/2014 03:24 PM, Jay Pipes wrote:

 On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:

 Thanks everyone who have joined Savanna meeting.


 You mean Sahara? :P

 -jay


 sergey now has to put some bitcoins in the jar...



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread David Kranz

On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:

On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:


There are a number of patches up for review that make various changes to use six apis 
instead of Python 2 constructs. While I understand the desire to get a head start on getting 
Tempest to run in Python 3, I'm not sure it makes sense to do this work piecemeal until we are near 
ready to introduce a py3 gate job. Many contributors will not be aware of what all the differences 
are and py2-isms will creep back in resulting in more overall time spent making these changes and 
reviewing. Also, the core review team is busy trying to do stuff important to the icehouse release 
which is barely more than 5 weeks away. IMO we should hold off on various kinds of 
cleanup patches for now.

+1 I agree with you David.

However, what’s the best way we can go about making sure to make this a
goal for the next release cycle?

Basically we just need to decide that it is important. Then we would set 
up a non-voting py3.3 job that fails miserably. We would have a list of 
all the changes that are needed. Implement the changes and turn the 
py3.3 job voting as soon as it passes. The more quickly this is done 
once it starts, the better, both because it will cause rebase havoc and 
new non-working-in-3.3 stuff will come in. So it is best done in a 
highly coordinated way where the patches are submitted according to a 
planned sequence and reviewed immediately.
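
(For reference, the kind of py2-ism these patches convert is small but
pervasive -- a minimal before/after sketch using six, which runs on both 2.7
and 3.3:)

    from __future__ import print_function

    import six

    data = {'flavor': 'm1.tiny', 'status': 'ACTIVE'}

    # Python 2 only:
    #     for key, value in data.iteritems(): ...
    #     isinstance(value, basestring)
    # Portable via six:
    for key, value in six.iteritems(data):
        print(key, value)

    def is_name(value):
        return isinstance(value, six.string_types)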


 -David


---
Joe H. Rahme
IRC: rahmu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Josh Durgin

On 03/13/2014 12:48 PM, Russell Bryant wrote:

On 03/13/2014 03:04 PM, Josh Durgin wrote:

These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.

These patches use the v2 glance api for exactly one call - to get
image locations. This has been available and used by other
features in nova and cinder since 2012.

Jay's patch fixed the one issue that was found, and added tests for
several other cases. No other calls to glance v2 are used. The method
Jay fixed is the only one that accesses the response from glanceclient.
Furthermore, it's trivial to guard against more incompatibilities and
fall back to downloading normally if any errors occur. This already
happens if glance does not expose image locations.


There was some use of the v2 API, but not by default.  These patches
changed that, and it was broken.  We went from not requiring the v2 API
to requiring it, without a complete view for what that means, including
a severe lack of testing of that API.


That's my point - these patches did not need to require the v2 API. They
could easily try it and fall back, or detect when only the default
handler was enabled and not even try the v2 API in that case.

There is no hard requirement on the v2 API.


I think it's the right call to block any non-optional use of the API
until it's properly tested, and ideally, supported more generally in nova.


Can we consider adding this safety valve and un-reverting these patches?


No.  We're already well into the freeze and we can't afford risk or
distraction.  The time it took to deal with and discuss the issue this
caused is exactly why we're hesitant to approve FFEs at all.  It's a
distraction during critical time as we work toward the RC.


FWIW the patch that caused the issue was merged before FF.


The focus right now has to be on high/critical priority bugs and
regressions.  We can revisit this properly in Juno.


Ok.

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-13 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 3:45 AM, yongli he yongli...@intel.com wrote:

  refer to :
 https://wiki.openstack.org/wiki/Translations

 now some exceptions use _() and some do not.  The wiki suggests not doing
 that, but I'm not sure.

 What's the correct way?


 F.Y.I

 What To Translate

 At present the convention is to translate *all* user-facing strings. This
 means API messages, CLI responses, documentation, help text, etc.

 There has been a lack of consensus about the translation of log messages;
 the current ruling is that while it is not against policy to mark log
 messages for translation if your project feels strongly about it,
 translating log messages is not actively encouraged.


I've updated the wiki to replace that paragraph with a pointer to 
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which
explains the log translation rules. We will be adding the job needed to
have different log translations during Juno.



 Exception text should *not* be marked for translation, because if an
 exception occurs there is no guarantee that the translation machinery will
 be functional.


This makes no sense to me. Exceptions should be translated. By far the
largest number of errors will be presented to users through the API or
through Horizon (which gets them from the API). We will ensure that the
translation code does its best to fall back to the original string if the
translation fails.
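
(Concretely, marking an exception message for translation looks roughly like
this -- using gettext directly as a stand-in for the _() helper each project
imports from its own gettext/i18n module; the exception class is just an
example:)

    import gettext

    _ = gettext.gettext  # stand-in for the project's usual _() helper

    class VolumeNotFound(Exception):
        pass

    def get_volume(volumes, volume_id):
        if volume_id not in volumes:
            # The message is marked for translation because it is user-facing:
            # it comes back to the user through the API or Horizon.
            raise VolumeNotFound(_("Volume %s could not be found.") % volume_id)
        return volumes[volume_id]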

Doug





 Regards
 Yongli He


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Sean Dague
On 03/13/2014 04:29 PM, David Kranz wrote:
 On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:
 On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:

 There are a number of patches up for review that make various changes
 to use six apis instead of Python 2 constructs. While I understand
 the desire to get a head start on getting Tempest to run in Python 3,
 I'm not sure it makes sense to do this work piecemeal until we are
 near ready to introduce a py3 gate job. Many contributors will not be
 aware of what all the differences are and py2-isms will creep back in
 resulting in more overall time spent making these changes and
 reviewing. Also, the core review team is busy trying to do stuff
 important to the icehouse release which is barely more than 5 weeks
 away. IMO we should hold off on various kinds of cleanup patches
 for now.
 +1 I agree with you David.

 However, what’s the best way we can go about making sure to make this a
 goal for the next release cycle?
 Basically we just need to decide that it is important. Then we would set
 up a non-voting py3.3 job that fails miserably. We would have a list of
 all the changes that are needed. Implement the changes and turn the
 py3.3 job voting as soon as it passes. The more quickly this is done
 once it starts, the better, both because it will cause rebase havoc and
 new non-working-in-3.3 stuff will come in. So it is best done in a
 highly coordinated way where the patches are submitted according to a
 planned sequence and reviewed immediately.

So it's important that there is a full plan about how to get there,
including the python 3 story for everything in requirements.txt and
test-requirements.txt being resolved first.

Because partial work is pretty pointless, it bit rots. And if we can't
get to running tempest regularly with python3 then it will regress (I
would see us doing an extra python3 full run to prove that).

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >