Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Flavio Percoco

On 29/11/13 13:47 -0500, Eric Windisch wrote:

Based on that, I would like to say that we do not add new features to
incubated code after it starts moving into a library, and only provide
stable-like bug fix support until integrated projects are moved over to
the graduated library (although even that is up for discussion). After all
integrated projects that use the code are using the library instead of the
incubator, we can delete the module(s) from the incubator.


+1

Although never formalized, this is how I had expected we would handle
the graduation process. It is also how we have been responding to
patches and blueprints offering improvements and feature requests for
oslo.messaging.


+1

FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-02 Thread Vijay Venkatachalam

LBaaS enthusiasts: could you please vote on the revised model for SSL Termination?

Here is a comparison between the original and revised model for SSL Termination:

***
Original Basic Model that was proposed at the summit
***
* Certificate parameters are introduced as part of the VIP resource.
* This model covers basic configuration; a more detailed model will be
introduced in the future for advanced use cases.
* Each certificate is created for one and only one VIP.
* Certificate parameters are not stored in the DB; they are sent directly to
the load balancer.
* In case of failure, there is no way to restart the operation from details
stored in the DB.
***
Revised New Model
***
* Certificate parameters will be part of an independent certificate resource,
a first-class citizen handled by the LBaaS plugin.
* It is a forward-looking model and aligns with AWS's approach to uploading
server certificates.
* A certificate can be reused by many VIPs.
* Certificate parameters are stored in the DB.
* In case of failure, the parameters stored in the DB will be used to restore
the system.
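
Purely as an illustration of the revised model (the names and fields below
are hypothetical and for discussion only, not the proposed API):

    # hypothetical certificate resource, created once and stored in the DB
    certificate = {
        "id": "cert-uuid",
        "name": "dev-wildcard",
        "certificate": "<PEM-encoded certificate>",
        "private_key": "<PEM-encoded private key>",
    }

    # hypothetical VIPs reusing the same certificate by reference
    vip_a = {"name": "vip-a", "certificate_id": "cert-uuid"}
    vip_b = {"name": "vip-b", "certificate_id": "cert-uuid"}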

A more detailed comparison can be viewed at the following link:
 
https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVeZISh07iGs/edit?usp=sharing

Thanks,
Vijay V.


 -Original Message-
 From: Vijay Venkatachalam
 Sent: Friday, November 29, 2013 2:18 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as
 first level citizen - SSL Termination
 
 
 To summarize:
 Certificates will be first-class citizens which can be reused, and for
 certificate management nothing sophisticated is required.
 
 Can you please Vote (+1, -1)?
 
 We can move on if there is consensus around this.
 
  -Original Message-
  From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
  Sent: Wednesday, November 20, 2013 3:01 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
 
  Hi,
 
  On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
   Hi,
  
  
  
   Evgeny has outlined the wiki for the proposed change at:
   https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
   with what was discussed during the summit.
  
   The document at
   https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYTvMkMJ_inbo/edit
   additionally discusses Certificate Chains.
  
  
  
    What would be the benefit of having a certificate that must be
    connected to a VIP vs. embedding it in the VIP?
 
   You could reuse the same certificate for multiple load balancer VIPs.
   This is a fairly common pattern - we have a dev wildcard cert that is
   self-signed, and is used for lots of VIPs.
 
   When we get a system that can store certificates (ex: Barbican), we
   will add support to it in the LBaaS model.
 
  It probably doesn't need anything that complicated, does it?
 
  Cheers,
  --
  Stephen Gran
  Senior Systems Integrator - The Guardian
 



Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Julien Danjou
On Fri, Nov 29 2013, Doug Hellmann wrote:

 Before we make this policy official, I want to solicit feedback from the
 rest of the community and the Oslo core team.

+1

I think it's a good idea. It'll push people to migrate to the split-out
library if they want the new features.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-02 Thread Kenichi Oomichi

Hi Adalberto,

 -Original Message-
 From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
 Sent: Saturday, November 30, 2013 11:29 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [tempest] list of negative tests that need to be 
 separated from other tests.
 
 Hi!
 
 I understand that one action toward negative tests, even before
 implementing the automatic schema generation, is to move them to their
 own file (.py), thus separating them from the 'positive' tests. (See
 patch https://review.openstack.org/#/c/56807/ as an example).
 
 In order to do so, I've got a list of testcases that still have both
 negative and positive tests together, and listed them in the following
 etherpad link: https://etherpad.openstack.org/p/bp_negative_tests_list
 
 The idea here is to have patches for each file until we get all the
 negative tests in their own files. I also linked the etherpad to the
 specific blueprint created by Marc for negative tests in icehouse
 (https://blueprints.launchpad.net/tempest/+spec/negative-tests ).
 
 Please, send any comments and whether you think this is the right
 approach to keep track on that task.

We already have a similar etherpad, and we are working on it.
Please check the following:
https://etherpad.openstack.org/p/TempestTestDevelopment


Thanks
Ken'ichi Ohmichi




Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-02 Thread Lingxian Kong
I agree with Alex. Maybe 'reset_network' and 'inject_network_info' are only
used by XenServer now, and I think the same functionality can be provided by
Neutron or the Nova metadata service. Am I right?


2013/12/2 Alex Xu x...@linux.vnet.ibm.com

  On 2013-12-01 21:39, Christopher Yeoh wrote:

  Hi,

 At the summit we agreed to split the lock/unlock, pause/unpause, and
 suspend/unsuspend functionality out of the V3 version of admin actions into
 separate extensions, to make it easier for deployers to load only the
 functionality that they want.

 Remaining in admin_actions we have:

  migrate
 live_migrate
  reset_network
  inject_network_info
  create_backup
  reset_state

  I think it makes sense to separate out migrate and live_migrate into a
 migrate plugin as well.

 What do people think about the others? There is no real overhead in having
 them in separate plugins and removing admin_actions entirely. Does anyone
 have any objections to this being done?


 I have a question about reset_network and inject_network_info. Are they
 useful for the v3 API? The network info (IP address, gateway, ...) should be
 pushed by the DHCP service provided by Neutron, and we don't like any
 injection.


  Also, in terms of grouping, I don't think any of the others remaining above
 really belong together, but I welcome any suggestions.

  Chris










-- 
---
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


[openstack-dev] [Climate] Meeting minutes

2013-12-02 Thread Dina Belova
Thanks everybody who joined us on our Climate weekly meeting.

Here are logs/minutes from it:


Minutes:
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-12-02-10.02.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-12-02-10.02.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-12-02-10.02.log.html

-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


[openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Christian Schwede
Hello everyone,

I'd like to discuss a way to increase the partition power of an existing
Swift cluster.
This is most likely interesting for smaller clusters that are growing
beyond their originally planned size.

As discussed earlier [1], rehashing is required after changing the
partition power to make existing data available again.

My idea is to increase the partition power by 1 and then assign the same
devices to the new partitions (old_partition * 2 and old_partition * 2 + 1).
For example:

Assigned devices on the old ring:

    Partition 0: 2 3 0
    Partition 1: 1 0 3

Assigned devices on the new ring with partition power +1:

    Partition 0: 2 3 0
    Partition 1: 2 3 0
    Partition 2: 1 0 3
    Partition 3: 1 0 3

The hash of an object doesn't change with the new partition power, only the
assigned partition. An object in partition 1 on the old ring will be
assigned to partition 2 OR 3 on the ring with the increased partition
power. Because the devices used for the new partitions are the same, no data
movement to other devices or storage nodes is required (only local movement
on each device).
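
To make the partition mapping concrete, here is a minimal sketch (assuming
the usual md5-based partition calculation with a 32-bit part shift; this is
illustrative only, not the swift-ring-tool code):

    import hashlib
    import struct

    def partition(obj_path, part_power):
        # Swift-style partition: the top part_power bits of the md5 of the path.
        digest = hashlib.md5(obj_path.encode()).digest()
        return struct.unpack('>I', digest[:4])[0] >> (32 - part_power)

    old = partition('/account/container/object', 16)
    new = partition('/account/container/object', 17)
    # Raising the power by one keeps one more bit of the hash, so the new
    # partition is always old * 2 or old * 2 + 1.
    assert new in (old * 2, old * 2 + 1)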

A longer example together with a small tool can be found at
https://github.com/cschwede/swift-ring-tool

Since the device distribution on the new ring might not be optimal, it is
possible to start with a fresh distribution and migrate the ring with the
increased partition power to a ring with the new distribution.

So far this worked for smaller clusters (with a few hundred TB) as well
as in local SAIO installations.

I'd like to discuss this approach and see if it makes sense to continue
working on this and to add this tool to swift, python-swiftclient or
stackforge (or wherever else might be appropriate).

Please let me know what you think.

Best regards,

Christian


[1]
http://lists.openstack.org/pipermail/openstack-operators/2013-January/002544.html



Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-02 Thread Adalberto Medeiros

Thanks Ken'ichi. I added my name to a couple of them in that list.

Adalberto Medeiros
Linux Technology Center
Openstack and Cloud Development
IBM Brazil
Email: adal...@linux.vnet.ibm.com

On Mon 02 Dec 2013 07:36:38 AM BRST, Kenichi Oomichi wrote:


Hi Adalberto,


-Original Message-
From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com]
Sent: Saturday, November 30, 2013 11:29 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tempest] list of negative tests that need to be 
separated from other tests.

Hi!

I understand that one action toward negative tests, even before
implementing the automatic schema generation, is to move them to their
own file (.py), thus separating them from the 'positive' tests. (See
patch https://review.openstack.org/#/c/56807/ as an example).

In order to do so, I've got a list of testcases that still have both
negative and positive tests together, and listed them in the following
etherpad link: https://etherpad.openstack.org/p/bp_negative_tests_list

The idea here is to have patches for each file until we get all the
negative tests in their own files. I also linked the etherpad to the
specific blueprint created by Marc for negative tests in icehouse
(https://blueprints.launchpad.net/tempest/+spec/negative-tests ).

Please, send any comments and whether you think this is the right
approach to keep track on that task.


We already have a similar etherpad, and we are working on it.
Please check the following:
https://etherpad.openstack.org/p/TempestTestDevelopment


Thanks
Ken'ichi Ohmichi




[openstack-dev] [Nova] Tokens in memcache become unauthorized

2013-12-02 Thread Nadya Privalova
Hi guys,

I've run into a nova+memcache issue on a cluster in HA mode.

The issue is related to nova when using the REST API (which includes Horizon):
it's impossible to use an auth token several times because it becomes marked
as unauthorized in the cache.

Logs:

Nov 13 06:58:49 controller-1461 nova keystoneclient.middleware.auth_token
DEBUG Token validation failure.
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
  line 684, in _validate_user_token
    cached = self._cache_get(token_id)
  File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
  line 898, in _cache_get
    raise InvalidUserToken('Token authorization failed')
InvalidUserToken: Token authorization failed
Nov 13 06:58:49 controller-1461 nova keystoneclient.middleware.auth_token
DEBUG Marking token 211b590c4ba94d62a3981fbf91e934dc as unauthorized in
memcache

The issue was fixed after the following was added to the [keystone_authtoken]
section of nova.conf on all controller nodes:

[keystone_authtoken]
memcache_security_strategy = ENCRYPT
memcache_secret_key = any_key

But from the docs I see that these options are not required. The issue does
not appear with any other services, and all of them leave
memcache_security_strategy empty. So is this a nova bug, or should I be
configuring these params for HA mode? Could the issue be caused by the fact
that the memcaches are not synced across the controllers?

Thanks,
Nadya


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread gustavo panizzo gfa
On 11/27/2013 01:50 PM, Maru Newby wrote:
 Just a heads up, the console output for neutron gate jobs is about to get a 
 lot noisier.  Any log output that contains 'ERROR' is going to be dumped into 
 the console output so that we can identify and eliminate unnecessary error 
 logging.  Once we've cleaned things up, the presence of unexpected 
 (non-whitelisted) error output can be used to fail jobs, as per the following 
 Tempest blueprint:

 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

 I've filed a related Neutron blueprint for eliminating the unnecessary error 
 logging:

 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error

 I'm looking for volunteers to help with this effort, please reply in this 
 thread if you're willing to assist.
I want to help. I'm a newbie to the OpenStack process, and looking at
status.openstack.org I don't see any obvious place to start. Could you
share a list of tests or a bug list?

thanks

 Thanks,


 Maru


-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333



Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Sandy Walsh


On 12/01/2013 06:40 PM, Doug Hellmann wrote:
 
 
 
  On Sat, Nov 30, 2013 at 3:52 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 
 
 On 11/29/2013 03:58 PM, Doug Hellmann wrote:
 
 
 
   On Fri, Nov 29, 2013 at 2:14 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
  So, as I mention in the branch, what about deployments that
 haven't
  transitioned to the library but would like to cherry pick this
 feature?
 
  after it starts moving into a library can leave a very big gap
  when the functionality isn't available to users.
 
 
  Are those deployments tracking trunk or a stable branch? Because IIUC,
  we don't add features like this to stable branches for the main
  components, either, and if they are tracking trunk then they will get
  the new feature when it ships in a project that uses it. Are you
  suggesting something in between?
 
 Tracking trunk. If the messaging branch has already landed in Nova, then
 this is a moot discussion. Otherwise we'll still need it in incubator.
 
 That said, consider if messaging wasn't in nova trunk. According to this
 policy the new functionality would have to wait until it was. And, as
 we've seen with messaging, that was a very long time. That doesn't seem
 reasonable.
 
 
 The alternative is feature drift between the incubated version of rpc
 and oslo.messaging, which makes the task of moving the other projects to
 messaging even *harder*.
 
 What I'm proposing seems like a standard deprecation/backport policy;
 I'm not sure why you see the situation as different. Sandy, can you
 elaborate on how you would expect to maintain feature parity between the
 incubator and library while projects are in transition?

Deprecation usually assumes there is something in place to replace the
old way.

If I'm reading this correctly, you're proposing we stop adding to the
existing library as soon as the new library has started?

Shipping code always wins out. We can't stop development simply based on
the promise that something new is on the way. Leaving the existing code
at bug-fix-only status is far too limiting. In the case of messaging
this would have meant an entire release cycle with no new features in
oslo.rpc.

Until the new code replaces the old, we have to suffer the pain of
updating both codebases.


 Doug
 
  
 
 
 
  Doug
 
 
 
 
  -S
 
  
   From: Eric Windisch [e...@cloudscaling.com]
  Sent: Friday, November 29, 2013 2:47 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [oslo] maintenance policy for code
  graduating from the incubator
 
   Based on that, I would like to say that we do not add new
 features to
   incubated code after it starts moving into a library, and
 only provide
   stable-like bug fix support until integrated projects are
 moved
  over to
   the graduated library (although even that is up for discussion).
  After all
   integrated projects that use the code are using the library
  instead of the
   incubator, we can delete the module(s) from the incubator.
 
  +1
 
  Although never formalized, this is how I had expected we would
 handle
  the graduation process. It is also how we have been responding to
  patches and blueprints offerings improvements and feature
 requests for
  oslo.messaging.
 
  --
  Regards,
  Eric Windisch
 
 
 
 

Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread mar...@redhat.com
On 02/12/13 15:05, gustavo panizzo gfa wrote:
 On 11/27/2013 01:50 PM, Maru Newby wrote:
 Just a heads up, the console output for neutron gate jobs is about to get a 
 lot noisier.  Any log output that contains 'ERROR' is going to be dumped 
 into the console output so that we can identify and eliminate unnecessary 
 error logging.  Once we've cleaned things up, the presence of unexpected 
 (non-whitelisted) error output can be used to fail jobs, as per the 
 following Tempest blueprint:

 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

 I've filed a related Neutron blueprint for eliminating the unnecessary error 
 logging:

 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error

 I'm looking for volunteers to help with this effort, please reply in this 
 thread if you're willing to assist.
 i want to help, i'm a newbie on openstack process, looking at
 status.openstack.org i don't see any obvious place to start, could you
 share a list of tests or a bug list?
 

Hey Gustavo, I am in the same position as you.
https://wiki.openstack.org/wiki/NeutronDevelopment and especially
https://wiki.openstack.org/wiki/NeutronStarterBugs helped me.

Good luck! Marios

 thanks

 Thanks,


 Maru
 
 




Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread Joe Gordon
On Dec 2, 2013 3:39 AM, Maru Newby ma...@redhat.com wrote:


 On Dec 2, 2013, at 2:07 AM, Anita Kuno ante...@anteaya.info wrote:

  Great initiative putting this plan together, Maru. Thanks for doing
  this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
  for you to be cloned - once that becomes available.) if you add your
  patch urls (as you create them) to the blueprint Maru started [0] that
  would help to track the work.
 
  Armando, thanks for doing this work as well. Could you add the urls of
  the patches you reference to the exceptional-conditions blueprint?
 
  For icehouse-1 to be a realistic goal for this assessment and clean-up,
  patches for this would need to be up by Tuesday Dec. 3 at the latest
  (does 13:00 UTC sound like a reasonable target?) so that they can make
  it through review and check testing, gate testing and merging prior to
  the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
  this, I just want the timeline to be conscious.

 My mistake, getting this done by Tuesday does not seem realistic.
 icehouse-2, then.


With icehouse-2 being the nova-network feature freeze reevaluation point
(possibly lifting it), I think gating on new stack traces by icehouse-2 is
too late. Even a huge whitelist of errors is better than letting new
errors in.


 m.

 
  I would like to say talk to me tomorrow in -neutron to ensure you are
  getting the support you need to achieve this but I will be flying (wifi
  uncertain). I do hope that some additional individuals come forward to
  help with this.
 
  Thanks Maru, Salvatore and Armando,
  Anita.
 
  [0]
 
https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
 
  On 11/30/2013 08:24 PM, Maru Newby wrote:
 
  On Nov 28, 2013, at 1:08 AM, Salvatore Orlando sorla...@nicira.com
wrote:
 
  Thanks Maru,
 
  This is something my team had on the backlog for a while.
  I will push some patches to contribute towards this effort in the
next few days.
 
  Let me know if you're already thinking of targeting the completion of
this job for a specific deadline.
 
  I'm thinking this could be a task for those not involved in fixing
race conditions, and be done in parallel.  I guess that would be for
icehouse-1 then?  My hope would be that the early signs of race conditions
would then be caught earlier.
 
 
  m.
 
 
  Salvatore
 
 
  On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:
  Just a heads up, the console output for neutron gate jobs is about to
get a lot noisier.  Any log output that contains 'ERROR' is going to be
dumped into the console output so that we can identify and eliminate
unnecessary error logging.  Once we've cleaned things up, the presence of
unexpected (non-whitelisted) error output can be used to fail jobs, as per
the following Tempest blueprint:
 
  https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
 
  I've filed a related Neutron blueprint for eliminating the
unnecessary error logging:
 
 
https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
 
  I'm looking for volunteers to help with this effort, please reply in
this thread if you're willing to assist.
 
  Thanks,
 
 
  Maru


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Russell Bryant
On 11/29/2013 01:39 PM, Doug Hellmann wrote:
 We have a review up (https://review.openstack.org/#/c/58297/) to add
 some features to the notification system in the oslo incubator. THe
 notification system is being moved into oslo.messaging, and so we have
 the question of whether to accept the patch to the incubated version,
 move it to oslo.messaging, or carry it in both.
 
 As I say in the review, from a practical standpoint I think we can't
 really support continued development in both places. Given the number of
 times the topic of just make everything a library has come up, I would
 prefer that we focus our energy on completing the transition for a given
 module or library once it the process starts. We also need to avoid
 feature drift, and provide a clear incentive for projects to update to
 the new library.
 
 Based on that, I would like to say that we do not add new features to
 incubated code after it starts moving into a library, and only provide
 stable-like bug fix support until integrated projects are moved over
 to the graduated library (although even that is up for discussion).
 After all integrated projects that use the code are using the library
 instead of the incubator, we can delete the module(s) from the incubator. 
 
 Before we make this policy official, I want to solicit feedback from the
 rest of the community and the Oslo core team.

+1 in general.

You may want to make "after it starts moving into a library" more
specific, though.  One approach could be to reflect this status in the
MAINTAINERS file.  Right now there is a status field for each module in
the incubator:

 S: Status, one of the following:
  Maintained:  Has an active maintainer
  Orphan:  No current maintainer, feel free to step up!
  Obsolete:Replaced by newer code, or a dead end, or out-dated

It seems that the types of code we're talking about should just be
marked as Obsolete.  Obsolete code should only get stable-like bug fixes.

That would mean marking 'rpc' and 'notifier' as Obsolete (currently
listed as Maintained).  I think that is accurate, though.

https://review.openstack.org/59367

-- 
Russell Bryant



[openstack-dev] [Keystone] Store quotas in Keystone

2013-12-02 Thread Chmouel Boudjnah
Hello,

I was wondering what the status is of Keystone becoming the central place
for quotas across all OpenStack projects.

There is already an implementation from Dmitry here:

https://review.openstack.org/#/c/40568/

but it hasn't seen activity since October, waiting for icehouse development
to start and for a few bits to be cleaned up and added (e.g., the sqlite
migration).

It would be great if we could get this kicked off again and land it for
icehouse-2.

Thanks,
Chmouel.


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 8:08 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:



 On 12/01/2013 06:40 PM, Doug Hellmann wrote:
 
 
 
   On Sat, Nov 30, 2013 at 3:52 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 
 
  On 11/29/2013 03:58 PM, Doug Hellmann wrote:
  
  
  
    On Fri, Nov 29, 2013 at 2:14 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
  
   So, as I mention in the branch, what about deployments that
  haven't
   transitioned to the library but would like to cherry pick this
  feature?
  
   after it starts moving into a library can leave a very big
 gap
   when the functionality isn't available to users.
  
  
   Are those deployments tracking trunk or a stable branch? Because
 IIUC,
   we don't add features like this to stable branches for the main
   components, either, and if they are tracking trunk then they will
 get
   the new feature when it ships in a project that uses it. Are you
   suggesting something in between?
 
  Tracking trunk. If the messaging branch has already landed in Nova,
 then
  this is a moot discussion. Otherwise we'll still need it in
 incubator.
 
  That said, consider if messaging wasn't in nova trunk. According to
 this
  policy the new functionality would have to wait until it was. And, as
  we've seen with messaging, that was a very long time. That doesn't
 seem
  reasonable.
 
 
  The alternative is feature drift between the incubated version of rpc
  and oslo.messaging, which makes the task of moving the other projects to
  messaging even *harder*.
 
  What I'm proposing seems like a standard deprecation/backport policy;
  I'm not sure why you see the situation as different. Sandy, can you
  elaborate on how you would expect to maintain feature parity between the
  incubator and library while projects are in transition?

 Deprecation usually assumes there is something in place to replace the
 old way.

 If I'm reading this correctly, you're proposing we stop adding to the
 existing library as soon as the new library has started?

 Shipping code always wins out. We can't stop development simply based on
 the promise that something new is on the way. Leaving the existing code
 to bug fix only status is far too limiting. In the case of messaging
 this would have meant an entire release cycle with no new features in
 oslo.rpc.

 Until the new code replaces the old, we have to suffer the pain of
 updating both codebases.


I think you misunderstand either my intent or the status of the library.

During Havana we accepted patches to the rpc code and developed
oslo.messaging as a standalone library. Now that oslo.messaging has been
released, it is shipping code and the rpc portion of the incubator can be
deprecated during Icehouse.

Doug





  Doug
 
 
 
 
  
   Doug
  
  
  
  
   -S
  
   
    From: Eric Windisch [e...@cloudscaling.com]
   Sent: Friday, November 29, 2013 2:47 PM
   To: OpenStack Development Mailing List (not for usage
 questions)
   Subject: Re: [openstack-dev] [oslo] maintenance policy for code
   graduating from the incubator
  
Based on that, I would like to say that we do not add new
  features to
incubated code after it starts moving into a library, and
  only provide
stable-like bug fix support until integrated projects are
  moved
   over to
the graduated library (although even that is up for
 discussion).
   After all
integrated projects that use the code are using the library
   instead of the
incubator, we can delete the module(s) from the incubator.
  
   +1
  
   Although never formalized, this is how I had expected we would
  handle
   the graduation process. It is also how we have been responding
 to
   patches and blueprints offerings improvements and feature
  requests for
   oslo.messaging.
  
   --
   Regards,
   Eric Windisch
  

Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com wrote:

 On 11/29/2013 01:39 PM, Doug Hellmann wrote:
  We have a review up (https://review.openstack.org/#/c/58297/) to add
  some features to the notification system in the oslo incubator. THe
  notification system is being moved into oslo.messaging, and so we have
  the question of whether to accept the patch to the incubated version,
  move it to oslo.messaging, or carry it in both.
 
  As I say in the review, from a practical standpoint I think we can't
  really support continued development in both places. Given the number of
  times the topic of just make everything a library has come up, I would
  prefer that we focus our energy on completing the transition for a given
  module or library once it the process starts. We also need to avoid
  feature drift, and provide a clear incentive for projects to update to
  the new library.
 
  Based on that, I would like to say that we do not add new features to
  incubated code after it starts moving into a library, and only provide
  stable-like bug fix support until integrated projects are moved over
  to the graduated library (although even that is up for discussion).
  After all integrated projects that use the code are using the library
  instead of the incubator, we can delete the module(s) from the incubator.
 
  Before we make this policy official, I want to solicit feedback from the
  rest of the community and the Oslo core team.

 +1 in general.

 You may want to make after it starts moving into a library more
 specific, though.


I think my word choice is probably what threw Sandy off, too.

How about "after it has been moved into a library with at least a release
candidate published"?



  One approach could be to reflect this status in the
 MAINTAINERS file.  Right now there is a status field for each module in
 the incubator:


  S: Status, one of the following:
   Maintained:  Has an active maintainer
   Orphan:  No current maintainer, feel free to step up!
   Obsolete:Replaced by newer code, or a dead end, or out-dated

 It seems that the types of code we're talking about should just be
 marked as Obsolete.  Obsolete code should only get stable-like bug fixes.

 That would mean marking 'rpc' and 'notifier' as Obsolete (currently
 listed as Maintained).  I think that is accurate, though.


Good point.

Doug




 https://review.openstack.org/59367

 --
 Russell Bryant



Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




 On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com wrote:

 On 11/29/2013 01:39 PM, Doug Hellmann wrote:
  We have a review up (https://review.openstack.org/#/c/58297/) to add
  some features to the notification system in the oslo incubator. THe
  notification system is being moved into oslo.messaging, and so we have
  the question of whether to accept the patch to the incubated version,
  move it to oslo.messaging, or carry it in both.
 
  As I say in the review, from a practical standpoint I think we can't
  really support continued development in both places. Given the number of
  times the topic of just make everything a library has come up, I would
  prefer that we focus our energy on completing the transition for a given
  module or library once it the process starts. We also need to avoid
  feature drift, and provide a clear incentive for projects to update to
  the new library.
 
  Based on that, I would like to say that we do not add new features to
  incubated code after it starts moving into a library, and only provide
  stable-like bug fix support until integrated projects are moved over
  to the graduated library (although even that is up for discussion).
  After all integrated projects that use the code are using the library
  instead of the incubator, we can delete the module(s) from the
 incubator.
 
  Before we make this policy official, I want to solicit feedback from the
  rest of the community and the Oslo core team.

 +1 in general.

 You may want to make after it starts moving into a library more
 specific, though.


 I think my word choice is probably what threw Sandy off, too.

 How about after it has been moved into a library with at least a release
 candidate published?



  One approach could be to reflect this status in the
 MAINTAINERS file.  Right now there is a status field for each module in
 the incubator:


  S: Status, one of the following:
   Maintained:  Has an active maintainer
   Orphan:  No current maintainer, feel free to step up!
   Obsolete:Replaced by newer code, or a dead end, or out-dated

 It seems that the types of code we're talking about should just be
 marked as Obsolete.  Obsolete code should only get stable-like bug fixes.

 That would mean marking 'rpc' and 'notifier' as Obsolete (currently
 listed as Maintained).  I think that is accurate, though.


 Good point.


I also added a "Graduating" status as an indicator for code in that
intermediate phase where there are two copies to be maintained. I hope we
don't have to use it very often, but it's best to be explicit.

https://review.openstack.org/#/c/59373/

Doug



Re: [openstack-dev] [eventlet] should we use spawn instead of spawn_n?

2013-12-02 Thread Russell Bryant
On 11/29/2013 01:01 AM, Jian Wen wrote:
 eventlet.spawn_n is the same as eventlet.spawn, but it’s not possible
 to know how the function terminated (i.e. no return value or exceptions)[1].
 If an exception is raised in the function, spawn_n prints a stack trace.
 The stack trace will not be written to the log file. It will be lost if we
 restart the daemon.
 
 Maybe we need to replace spawn_n with spawn. If an exception is raised
 in the
 function, we can log it if needed. Any thoughts?
 
 related bug: https://bugs.launchpad.net/neutron/+bug/1254984
 
 [1] http://eventlet.net/doc/basic_usage.html

In most cases the use of spawn_n is intentional.  There certainly should
not be a mass search and replace of this.

In the case of the patch you submitted [1], I think the change makes
sense.  The code was already waiting for all operations to finish.
Adding some error handling seems like a good idea.

[1] https://review.openstack.org/#/c/58668/
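
For what it's worth, a minimal sketch of the difference being discussed
(assuming eventlet; illustrative only, not the patch itself):

    import logging

    import eventlet

    LOG = logging.getLogger(__name__)

    def worker():
        raise RuntimeError('boom')

    # spawn_n is fire-and-forget: the traceback is printed to stderr and
    # cannot be captured or logged by the caller.
    eventlet.spawn_n(worker)

    # spawn returns a GreenThread; wait() re-raises the exception, so the
    # caller can log or handle it explicitly.
    gt = eventlet.spawn(worker)
    try:
        gt.wait()
    except Exception:
        LOG.exception('worker greenthread failed')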
-- 
Russell Bryant



Re: [openstack-dev] [Solum] Configuration options placement

2013-12-02 Thread Russell Bryant
On 11/28/2013 03:55 PM, Doug Hellmann wrote:
 
 
 
 On Wed, Nov 27, 2013 at 5:21 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com mailto:gokrokvertsk...@mirantis.com wrote:
 
 Hi,
 
  I am working on the user-authentication BP implementation. I need to
  introduce a new configuration option to enable or disable keystone
  authentication for incoming requests. I am looking for the right place
  for this option.
  
  The current situation is that we have two places for configuration:
  one is oslo.config and the second one is the pecan configuration. My
  initial intention was to add all parameters to the solum.conf file like
  it is done for nova. The keystone middleware uses oslo.config for the
  keystone connection parameters anyway.
  At the same time there are projects (Ceilometer and Ironic) which
  have an enable_acl parameter as part of the pecan config.
  
  From my perspective it is not reasonable to have authentication
  options in two different places. I would rather use solum.conf for
  all parameters and limit pecan config usage to pecan-specific options.
 
 
  Yes, I think the intent for ceilometer was to add a separate
  configuration option to replace the one in the pecan config and that we
  just overlooked doing that. It will certainly happen before any of that
  app code makes its way into Oslo (planned for this cycle).

Alright, sounds good to me.  Thanks!

-- 
Russell Bryant



Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Russell Bryant
On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 11/29/2013 01:39 PM, Doug Hellmann wrote:
  We have a review up (https://review.openstack.org/#/c/58297/)
 to add
  some features to the notification system in the oslo
 incubator. THe
  notification system is being moved into oslo.messaging, and so
 we have
  the question of whether to accept the patch to the incubated
 version,
  move it to oslo.messaging, or carry it in both.
 
  As I say in the review, from a practical standpoint I think we
 can't
  really support continued development in both places. Given the
 number of
  times the topic of just make everything a library has come
 up, I would
  prefer that we focus our energy on completing the transition
 for a given
  module or library once it the process starts. We also need to
 avoid
  feature drift, and provide a clear incentive for projects to
 update to
  the new library.
 
  Based on that, I would like to say that we do not add new
 features to
  incubated code after it starts moving into a library, and only
 provide
  stable-like bug fix support until integrated projects are
 moved over
  to the graduated library (although even that is up for
 discussion).
  After all integrated projects that use the code are using the
 library
  instead of the incubator, we can delete the module(s) from the
 incubator.
 
  Before we make this policy official, I want to solicit
 feedback from the
  rest of the community and the Oslo core team.
 
 +1 in general.
 
 You may want to make after it starts moving into a library more
 specific, though.
 
 
 I think my word choice is probably what threw Sandy off, too.
 
 How about after it has been moved into a library with at least a
 release candidate published?

Sure, that's better.  That gives a specific bit of criteria for when the
switch is flipped.

  
 
  One approach could be to reflect this status in the
 MAINTAINERS file.  Right now there is a status field for each
 module in
 the incubator: 
 
 
  S: Status, one of the following:
   Maintained:  Has an active maintainer
   Orphan:  No current maintainer, feel free to step up!
   Obsolete:Replaced by newer code, or a dead end, or
 out-dated
 
 It seems that the types of code we're talking about should just be
 marked as Obsolete.  Obsolete code should only get stable-like
 bug fixes.
 
 That would mean marking 'rpc' and 'notifier' as Obsolete (currently
 listed as Maintained).  I think that is accurate, though.
 
 
 Good point.

So, to clarify, possible flows would be:

1) An API moving to a library as-is, like rootwrap

   Status: Maintained
   -> Status: Graduating (short term)
   -> Code removed from oslo-incubator once library is released

2) An API being replaced with a better one, like rpc being replaced by
oslo.messaging

   Status: Maintained
   -> Status: Obsolete (once an RC of a replacement lib has been released)
   -> Code removed from oslo-incubator once all integrated projects have
been migrated off of the obsolete code


Does that match your view?

 
 I also added a Graduating status as an indicator for code in that
 intermediate phase where there are 2 copies to be maintained. I hope we
 don't have to use it very often, but it's best to be explicit.
 
 https://review.openstack.org/#/c/59373/

Sounds good to me.

-- 
Russell Bryant



Re: [openstack-dev] [Olso][DB] Remove eventlet from oslo.db

2013-12-02 Thread Russell Bryant
On 12/02/2013 09:02 AM, Victor Sergeyev wrote:
 Hi folks!
 
 At the moment I and Roman Podoliaka are working on splitting of
 openstack.common.db code into a separate library. And it would be nice
 to drop dependency on eventlet before oslo.db is released.
 
  Currently, there is only one place in oslo.db where we use eventlet -
  wrapping of DB API method calls to be executed by tpool threads. This is
  only needed when eventlet is used together with a DB-API driver implemented
  as a Python C extension (eventlet can't monkey patch C code, so we end up
  with DB API calls blocking all green threads when using Python-MySQLdb).
  eventlet has a workaround known as 'tpool', which is basically a pool of
  real OS threads that can play nicely with the eventlet event loop. The
  tpool feature is experimental and known to have stability problems. There
  is doubt that anyone is using it in production at all. The Nova API (and
  probably other API services) has an option to prefork the process on
  start, so that they don't need to use tpool when using eventlet together
  with Python-MySQLdb.
 
  We'd really like to drop tpool support from oslo.db, because as a
  library we should not be bound to any particular concurrency model. If a
  target project is using eventlet, we believe it is that project's problem
  how to make it play nicely with the Python-MySQLdb lib, not the problem of
  oslo.db. However, we could put the tpool wrapper into another helper module
  within oslo-incubator.
  
  But we would really, really like not to have any eventlet-related code in
  oslo.db.
  
  Are you using CONF.database.use_tpool in production? Does the approach
  with a separate tpool wrapper class seem reasonable? Or can we just drop
  tpool support entirely, if no one is using it?

Please don't remove it completely.  Putting it in oslo-incubator for now
seems fine, though.
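
For reference, here is a minimal sketch of the kind of tpool wrapping being
discussed (illustrative only; the function names are hypothetical, not the
oslo.db API):

    from eventlet import tpool

    def _execute_blocking(sql):
        # Placeholder for a DB-API call implemented as a C extension
        # (e.g. Python-MySQLdb), which eventlet cannot monkey patch.
        pass

    def execute(sql, use_tpool=False):
        if use_tpool:
            # Run the blocking call in a real OS thread from eventlet's
            # thread pool so it doesn't block the green thread event loop.
            return tpool.execute(_execute_blocking, sql)
        return _execute_blocking(sql)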

-- 
Russell Bryant



Re: [openstack-dev] [Olso][DB] Remove eventlet from oslo.db

2013-12-02 Thread Jay Pipes

On 12/02/2013 09:02 AM, Victor Sergeyev wrote:

Hi folks!

At the moment I and Roman Podoliaka are working on splitting of
openstack.common.db code into a separate library. And it would be nice
to drop dependency on eventlet before oslo.db is released.

Currently, there is only one place in oslo.db where we use eventlet -
wrapping of DB API method calls to be executed by tpool threads. This is
only needed when eventlet is used together with a DB-API driver implemented
as a Python C extension (eventlet can't monkey patch C code, so we end up
with DB API calls blocking all green threads when using Python-MySQLdb).
eventlet has a workaround known as 'tpool', which is basically a pool of
real OS threads that can play nicely with the eventlet event loop. The
tpool feature is experimental and known to have stability problems. There
is doubt that anyone is using it in production at all. The Nova API (and
probably other API services) has an option to prefork the process on
start, so that they don't need to use tpool when using eventlet together
with Python-MySQLdb.

We'd really like to drop tpool support from oslo.db, because as a
library we should not be bound to any particular concurrency model. If a
target project is using eventlet, we believe it is that project's problem
how to make it play nicely with the Python-MySQLdb lib, not the problem of
oslo.db. However, we could put the tpool wrapper into another helper module
within oslo-incubator.

But we would really, really like not to have any eventlet-related code in
oslo.db.

Are you using CONF.database.use_tpool in production?


No.


Does the approach
with a separate tpool wrapper class seem reasonable? Or can we just drop
tpool support entirely, if no one is using it?


Incubator :)

Best,
-jay




[openstack-dev] [oslo][messaging]: some questions on the 'driver' API

2013-12-02 Thread Gordon Sim
The BaseDriver - which I take to be the API that must be fulfilled by 
any transport implementation - has a listen() method taking a target, 
but no way to stop listening on that specific target.


Is this because there is no need to ever cancel a subscription?

There is a cleanup() method on the BaseDriver and the Transport object
that contains it. However, if more than one Transport object exists (e.g.
because there is more than one Target), then they will share the same
driver, so calling cleanup() on one would affect them all.


On the topic of the relationship between Transport (and therefore RPCClient)
and driver instances, it would be nice if there were some more explicit
association.  For example, some transport mechanisms may require some
setup for sending to each distinct address, and would perhaps want to
avoid doing that on every individual call. They could of course cache
their own context against the target instance. However, it seems like it
might be generally useful for drivers to be aware of the lifecycle of
the different Transport or RPCClient instances that use them (it might
help inform other caching or pooling choices, for example).


This is admittedly coming from a very limited understanding of the
anticipated use cases, and perhaps assuming it is a more general purpose
rpc mechanism than is actually intended.


--Gordon.




Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 11/29/2013 10:01 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 https://etherpad.openstack.org/p/icehouse-external-scheduler
 
 Just looked into it with release management / TC hat on and I have a
 (possibly minor) concern on the deprecation path/timing.
 
 Assuming everything goes well, the separate scheduler will be
 fast-tracked through incubation in I, graduate at the end of the I cycle
 to be made a fully-integrated project in the J release.
 
 Your deprecation path description mentions that the internal scheduler
 will be deprecated in I, although there is no released (or
 security-supported) alternative to switch to at that point. It's not
 until the J release that such an alternative will be made available.
 
  So IMHO for the release/security-oriented users, the switch point is
  when they start upgrading to J, and not the final step of their upgrade
  to I (as suggested by the "deploy the external scheduler and switch over
  before you consider your migration to I complete" wording in the
  Etherpad). As the first step towards *switching to J* you would install
 the new scheduler before upgrading Nova itself. That works whether
 you're a CD user (and start deploying pre-J stuff just after the I
 release), or a release user (and wait until J final release to switch to
 it).
 
 Maybe we are talking about the same thing (the migration to the separate
 scheduler must happen after the I release and, at the latest, when you
 switch to the J release) -- but I wanted to make sure we were on the
 same page.

Sounds good to me.

 I also assume that all the other scheduler-consuming projects would
 develop the capability to talk to the external scheduler during the J
  cycle, so that their own schedulers would be deprecated in the J release and
  removed at the start of K. That would be, to me, the condition to
 considering the external scheduler as integrated with (even if not
 mandatory for) the rest of the common release components.
 
 Does that work for you ?

I would change "all the other" to "at least one other" here.  I think
once we prove that a second project can be integrated into it, the
project is ready to be integrated.  Adding support for even more
projects is something that will continue to happen over a longer period
of time, I suspect, especially since new projects are coming in every cycle.

-- 
Russell Bryant



[openstack-dev] [Neutron][Tempest] Static IPv6 injection test case

2013-12-02 Thread Yang XY Yu
Hi all stackers,

Currently the Neutron/Nova code supports static IPv6 injection, but
there is no tempest scenario coverage for an IPv6 injection test case.
So I finished the test case and ran it successfully in my local
environment, and have already submitted it for code review in the community:
https://review.openstack.org/#/c/58721/. However, the community Jenkins
environment does not yet support IPv6, and there are still a few
prerequisites, listed below, for running the test case correctly:

1. A special image is needed to support IPv6 via cloud-init; currently the
cirros image used by tempest does not have cloud-init installed.

2. Prepare the interfaces.template file below on the compute node by editing
/usr/share/nova/interfaces.template:

# Injected by Nova on instance boot
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

{% for ifc in interfaces -%}
auto {{ ifc.name }}
{% if use_ipv6 -%}
iface {{ ifc.name }} inet6 static
address {{ ifc.address_v6 }}
netmask {{ ifc.netmask_v6 }}
{%- if ifc.gateway_v6 %}
gateway {{ ifc.gateway_v6 }}
{%- endif %}
{%- endif %}

{%- endfor %}
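
(For anyone who wants to sanity-check the template before pushing it to a
compute node, a quick local render along these lines should work; this is
only a sketch, it assumes python-jinja2 is installed and the sample interface
values are made up for illustration:)

from jinja2 import Template

with open('/usr/share/nova/interfaces.template') as f:
    template = Template(f.read(), trim_blocks=True)

# Render with one fake IPv6 interface to see what Nova would inject.
print(template.render(
    use_ipv6=True,
    interfaces=[{
        'name': 'eth0',
        'address_v6': '2001:db8::10',
        'netmask_v6': '64',
        'gateway_v6': '2001:db8::1',
    }],
))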


So considering these two prerequisites, what should be done to enable 
this patch for IPv6 injection? Should I open a bug for cirros to enable 
cloud-init? Or skip the test case because of this bug?
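
If the answer for now ends up being "skip", something as simple as the
following would do (shown with plain unittest and a placeholder flag, since
the exact tempest config option is precisely what is up for discussion here):

import unittest

# Placeholder: in the real test this flag would come from tempest's config.
IPV6_INJECTION_READY = False


class TestStaticIPv6Injection(unittest.TestCase):

    @unittest.skipUnless(IPV6_INJECTION_READY,
                         'cloud-init capable image and interfaces.template '
                         'are not available in this environment')
    def test_static_ipv6_injection(self):
        # The real scenario steps live in the review linked above.
        pass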
Any comments are appreciated!

Thanks & Best Regards,

Yang Yu(于杨)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 9:06 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
  doug.hellm...@dreamhost.com mailto:doug.hellm...@dreamhost.com
 wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com
  mailto:rbry...@redhat.com wrote:
 
  On 11/29/2013 01:39 PM, Doug Hellmann wrote:
   We have a review up (https://review.openstack.org/#/c/58297/)
  to add
   some features to the notification system in the oslo
  incubator. The
   notification system is being moved into oslo.messaging, and so
  we have
   the question of whether to accept the patch to the incubated
  version,
   move it to oslo.messaging, or carry it in both.
  
   As I say in the review, from a practical standpoint I think we
  can't
   really support continued development in both places. Given the
  number of
   times the topic of "just make everything a library" has come
  up, I would
   prefer that we focus our energy on completing the transition
  for a given
   module or library once it the process starts. We also need to
  avoid
   feature drift, and provide a clear incentive for projects to
  update to
   the new library.
  
   Based on that, I would like to say that we do not add new
  features to
   incubated code after it starts moving into a library, and only
  provide
   stable-like bug fix support until integrated projects are
  moved over
   to the graduated library (although even that is up for
  discussion).
   After all integrated projects that use the code are using the
  library
   instead of the incubator, we can delete the module(s) from the
  incubator.
  
   Before we make this policy official, I want to solicit
  feedback from the
   rest of the community and the Oslo core team.
 
  +1 in general.
 
   You may want to make "after it starts moving into a library" more
  specific, though.
 
 
  I think my word choice is probably what threw Sandy off, too.
 
  How about after it has been moved into a library with at least a
  release candidate published?

 Sure, that's better.  That gives a specific bit of criteria for when the
 switch is flipped.

 
 
   One approach could be to reflect this status in the
  MAINTAINERS file.  Right now there is a status field for each
  module in
  the incubator:
 
 
   S: Status, one of the following:
Maintained:  Has an active maintainer
Orphan:  No current maintainer, feel free to step up!
Obsolete:Replaced by newer code, or a dead end, or
  out-dated
 
  It seems that the types of code we're talking about should just
 be
  marked as Obsolete.  Obsolete code should only get stable-like
  bug fixes.
 
  That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
  listed as Maintained).  I think that is accurate, though.
 
 
  Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
- Status: Graduating (short term)
- Code removed from oslo-incubator once library is released


I think it's ok to mark it as obsolete for a short time, after the release,
until we are sure that the adoption will be as painless as we expect. I'm
not sure we need a hard rule here, but I do agree that the distinction is
the degree to which the API has changed.




 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
- Status: Obsolete (once an RC of a replacement lib has been released)
- Code removed from oslo-incubator once all integrated projects have
 been migrated off of the obsolete code


 Does that match your view?


If we had been using the graduating status, the rpc, zmq, and
notifications modules would have been marked as graduating during the
havana cycle, too.




 
  I also added a Graduating status as an indicator for code in that
  intermediate phase where there are 2 copies to be maintained. I hope we
  don't have to use it very often, but it's best to be explicit.
 
  https://review.openstack.org/#/c/59373/

 Sounds good to me.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Joe Gordon
On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
  doug.hellm...@dreamhost.com mailto:doug.hellm...@dreamhost.com
 wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com
  mailto:rbry...@redhat.com wrote:
 
  On 11/29/2013 01:39 PM, Doug Hellmann wrote:
   We have a review up (https://review.openstack.org/#/c/58297/)
  to add
   some features to the notification system in the oslo
  incubator. The
   notification system is being moved into oslo.messaging, and so
  we have
   the question of whether to accept the patch to the incubated
  version,
   move it to oslo.messaging, or carry it in both.
  
   As I say in the review, from a practical standpoint I think we
  can't
   really support continued development in both places. Given the
  number of
   times the topic of "just make everything a library" has come
  up, I would
   prefer that we focus our energy on completing the transition
  for a given
   module or library once it the process starts. We also need to
  avoid
   feature drift, and provide a clear incentive for projects to
  update to
   the new library.
  
   Based on that, I would like to say that we do not add new
  features to
   incubated code after it starts moving into a library, and only
  provide
   stable-like bug fix support until integrated projects are
  moved over
   to the graduated library (although even that is up for
  discussion).
   After all integrated projects that use the code are using the
  library
   instead of the incubator, we can delete the module(s) from the
  incubator.
  
   Before we make this policy official, I want to solicit
  feedback from the
   rest of the community and the Oslo core team.
 
  +1 in general.
 
   You may want to make "after it starts moving into a library" more
  specific, though.
 
 
  I think my word choice is probably what threw Sandy off, too.
 
  How about after it has been moved into a library with at least a
  release candidate published?

 Sure, that's better.  That gives a specific bit of criteria for when the
 switch is flipped.

 
 
   One approach could be to reflect this status in the
  MAINTAINERS file.  Right now there is a status field for each
  module in
  the incubator:
 
 
   S: Status, one of the following:
Maintained:  Has an active maintainer
Orphan:  No current maintainer, feel free to step up!
Obsolete:Replaced by newer code, or a dead end, or
  out-dated
 
  It seems that the types of code we're talking about should just
 be
  marked as Obsolete.  Obsolete code should only get stable-like
  bug fixes.
 
  That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
  listed as Maintained).  I think that is accurate, though.
 
 
  Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
- Status: Graduating (short term)
- Code removed from oslo-incubator once library is released

 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
- Status: Obsolete (once an RC of a replacement lib has been released)
- Code removed from oslo-incubator once all integrated projects have
 been migrated off of the obsolete code


 Does that match your view?

 
  I also added a Graduating status as an indicator for code in that
  intermediate phase where there are 2 copies to be maintained. I hope we
  don't have to use it very often, but it's best to be explicit.
 
  https://review.openstack.org/#/c/59373/

 Sounds good to me.


So is messaging in 'graduating' since it isn't used by all core projects
yet (nova - https://review.openstack.org/#/c/39929/)?

--
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Flavio Percoco

On 02/12/13 09:06 -0500, Russell Bryant wrote:

On 12/02/2013 08:53 AM, Doug Hellmann wrote:
So, to clarify, possible flows would be:

1) An API moving to a library as-is, like rootwrap

  Status: Maintained
  - Status: Graduating (short term)
  - Code removed from oslo-incubator once library is released


We should make the module print a deprecation warning, which would be
more like a 'transition' warning, so that people know the module is
being moved to its own package.
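
Something as small as the following at the top of the incubated module would
be enough (just a sketch; exact wording and placement are up for review):

import warnings

warnings.warn(
    'openstack.common.rpc is graduating to oslo.messaging; new features land '
    'in the library and this copy only receives critical bug fixes.',
    DeprecationWarning,
    stacklevel=2,
)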



2) An API being replaced with a better one, like rpc being replaced by
oslo.messaging

  Status: Maintained
  - Status: Obsolete (once an RC of a replacement lib has been released)
  - Code removed from oslo-incubator once all integrated projects have
been migrated off of the obsolete code


We have a deprecated package in oslo-incubator. It may complicate things
a bit, but moving obsolete packages there may make sense. I'd also
update the module - or package - and make it print a deprecation
warning.

FF

--
@flaper87
Flavio Percoco


pgpNqMKh1PkJM.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why are we backporting low priority v3 api fixes to v2?

2013-12-02 Thread Joe Gordon
On Sat, Nov 30, 2013 at 2:25 PM, Christopher Yeoh cbky...@gmail.com wrote:


 On Sun, Dec 1, 2013 at 8:02 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
  wrote:

 I've seen a few bugs/reviews like this [1] lately which are essentially
 backporting fixes from the nova openstack v3 API to the v2 API. While this
 is goodness for the v2 API, I'm not sure why we're spending time on low
 priority bug fixes like this for the v2 API when v3 is the future.
 Shouldn't only high impact / high probability fixes get backported to the
 nova v2 API now?  I think most people are still using v2 so they are
 probably happy to get the fixes, but it kind of seems to prolong the
 inevitable.

 Am I missing something?


 The V2 API is going to be with us for quite a while even if, as planned,
 the V3 API becomes official with the icehouse release. At the moment the
 V2 API is still even open for new features - this will probably
 change at the end of I-2.

 I agree those bugs are quite low priority fixes and the V3 work is a lot
 more important, but I don't think we should be blocking
 them yet. We should perhaps reconsider the acceptance of very low priority
 fixes like the ones you reference towards or at the end of
 Icehouse.


I don't think we should be blocking them per se as long as they fit the API
change guidelines https://wiki.openstack.org/wiki/APIChangeGuidelines.



 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
  doug.hellm...@dreamhost.com mailto:doug.hellm...@dreamhost.com
 wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant rbry...@redhat.com
  mailto:rbry...@redhat.com wrote:
 
  On 11/29/2013 01:39 PM, Doug Hellmann wrote:
   We have a review up (https://review.openstack.org/#/c/58297/)
  to add
   some features to the notification system in the oslo
  incubator. The
   notification system is being moved into oslo.messaging, and so
  we have
   the question of whether to accept the patch to the incubated
  version,
   move it to oslo.messaging, or carry it in both.
  
   As I say in the review, from a practical standpoint I think we
  can't
   really support continued development in both places. Given the
  number of
   times the topic of "just make everything a library" has come
  up, I would
   prefer that we focus our energy on completing the transition
  for a given
   module or library once it the process starts. We also need to
  avoid
   feature drift, and provide a clear incentive for projects to
  update to
   the new library.
  
   Based on that, I would like to say that we do not add new
  features to
   incubated code after it starts moving into a library, and only
  provide
   stable-like bug fix support until integrated projects are
  moved over
   to the graduated library (although even that is up for
  discussion).
   After all integrated projects that use the code are using the
  library
   instead of the incubator, we can delete the module(s) from the
  incubator.
  
   Before we make this policy official, I want to solicit
  feedback from the
   rest of the community and the Oslo core team.
 
  +1 in general.
 
   You may want to make "after it starts moving into a library"
 more
  specific, though.
 
 
  I think my word choice is probably what threw Sandy off, too.
 
  How about after it has been moved into a library with at least a
  release candidate published?

 Sure, that's better.  That gives a specific bit of criteria for when the
 switch is flipped.

 
 
   One approach could be to reflect this status in the
  MAINTAINERS file.  Right now there is a status field for each
  module in
  the incubator:
 
 
   S: Status, one of the following:
Maintained:  Has an active maintainer
Orphan:  No current maintainer, feel free to step up!
Obsolete:Replaced by newer code, or a dead end, or
  out-dated
 
  It seems that the types of code we're talking about should just
 be
  marked as Obsolete.  Obsolete code should only get stable-like
  bug fixes.
 
  That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
  listed as Maintained).  I think that is accurate, though.
 
 
  Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
- Status: Graduating (short term)
- Code removed from oslo-incubator once library is released

 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
- Status: Obsolete (once an RC of a replacement lib has been released)
- Code removed from oslo-incubator once all integrated projects have
 been migrated off of the obsolete code


 Does that match your view?

 
  I also added a Graduating status as an indicator for code in that
  intermediate phase where there are 2 copies to be maintained. I hope we
  don't have to use it very often, but it's best to be explicit.
 
  https://review.openstack.org/#/c/59373/

 Sounds good to me.


 So is messaging in 'graduating' since it isn't used by all core projects
 yet (nova - https://review.openstack.org/#/c/39929/)?


Graduation is a status within the oslo project, not the other projects. We
can't control adoption downstream, so I am trying to set a reasonable
policy for maintenance until we have an official release.

Graduating means there is a git repo with a library but the library has no
releases yet.

Obsolete means there is a library, but we are providing a grace period for
adoption during which critical issues in the incubated version of the code
will be accepted -- but no features.

Doug




 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [nova] why are we backporting low priority v3 api fixes to v2?

2013-12-02 Thread Jonathan Proulx
On Mon, Dec 2, 2013 at 9:27 AM, Joe Gordon joe.gord...@gmail.com wrote:


 I don't think we should be blocking them per se as long as they fit the API
 change guidelines https://wiki.openstack.org/wiki/APIChangeGuidelines.

Agreed, it's possibly not what one would assign developers to do, but as an
open project, if it is important enough to someone that they've already
done the work, why not accept the change?

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 9:25 AM, Flavio Percoco fla...@redhat.com wrote:

 On 02/12/13 09:06 -0500, Russell Bryant wrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

   Status: Maintained
   - Status: Graduating (short term)
   - Code removed from oslo-incubator once library is released


 We should make the module print a deprecation warning which would be
 more like a 'transition' warning. So that people know the module is
 being moved to it's own package.


I thought about that, too. We could do it, but it feels like code churn. I
would rather spend the effort on updating projects to have the libraries
adopted.






 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

   Status: Maintained
   - Status: Obsolete (once an RC of a replacement lib has been released)
   - Code removed from oslo-incubator once all integrated projects have
 been migrated off of the obsolete code


 We've a deprecated package in oslo-incubator. It may complicate things
 a bit but, moving obsolete packages there may make sense. I'd also
 update the module - or package - and make it print a deprecation
 warning.


The deprecated package is for modules we are no longer maintaining but for
which there is not a direct replacement. Right now that only applies to the
wsgi module, since Pecan isn't an Oslo library.

Doug





 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Gregory Holt
On Dec 2, 2013, at 5:19 AM, Christian Schwede christian.schw...@enovance.com 
wrote:

 I'd like to discuss a way to increase the partition power of an existing
 Swift cluster.

Achieving this transparently is part of the ongoing plans, starting with things 
like the DiskFile refactoring and SSync. The idea is to isolate the direct disk 
access from other servers/tools, something that (for instance) RSync has today. 
Once the isolation is there, it should be fairly straightforward to have 
incoming requests for a ring^20 partition look on the local disk in a directory 
structure that was originally created for a ring^19 partition, or even vice 
versa. Then, there will be no need to move data around just for a ring-doubling 
or halving, and no down time to do so.
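
To make the partition-power relationship concrete: the ring derives a
partition from the top bits of the md5 of the object path, so a power-19
partition is simply the corresponding power-20 partition shifted right by one
bit. A rough sketch (the real code also mixes a configured hash prefix/suffix
into the path before hashing):

import hashlib
import struct

def get_part(path, part_power):
    # Take the top `part_power` bits of the md5 of the path.
    digest = hashlib.md5(path.encode('utf-8')).digest()
    return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

path = '/AUTH_test/container/object'
assert get_part(path, 19) == get_part(path, 20) >> 1
# i.e. doubling the power maps old partition p onto new partitions 2p and 2p+1.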

That said, if you want to create a tool that allows such ring shifting in the 
interim, it should work with smaller clusters that don't mind downtime. I would 
prefer that it not become a core tool checked directly into 
swift/python-swiftclient, just because of the plans stated above that should 
one day make it obsolete.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-02 Thread Jay Pipes

On 12/01/2013 10:04 PM, John Dickinson wrote:

Just to add to the story, Swift uses X-Trans-Id and generates it in
the outer-most catch_errors middleware.

Swift's catch errors middleware is responsible for ensuring that the
transaction id exists on each request, and that all errors previously
uncaught, anywhere in the pipeline, are caught and logged. If there
is not a common way to do this, yet, I submit it as a great template
for solving this problem. It's simple, scalable, and well-tested (ie
tests and running in prod for years).

https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py


++

If there's prior art here, might as well use it. I'm not a huge fan of 
using the term transaction within things that do not have a 
transactional safety context... but that's just because of my background 
in RDBMS stuff. If X-Trans-Id is already in use by another OpenStack 
project, it should probably take precedence over something new unless 
there is a really good reason otherwise (and my personal opinion about 
the semantics of transactions ain't a good reason!).



 Leaving aside error handling and only focusing on the transaction id
(or request id) generation, since OpenStack services are exposed to
untrusted clients, how would you propose communicating the
appropriate transaction id to a different service? I can see great
benefit to having a glance transaction ID carry through to Swift
requests (and so on), but how should the transaction id be
communicated? It's not sensitive info, but I can imagine a pretty big
problem when trying to track down errors if a client application
decides to set eg the X-Set-Transaction-Id header on every request to
the same thing.


I suppose if this were really a problem (and I'm not sold on the idea 
that it is a problem), one solution might be to store a checksum 
somewhere for the transaction ID and some other piece of data. But I 
don't really see that as super useful, and it would slow things down. 
Glance already stores a checksum for important things like the data in 
an image. If a service needs to be absolutely sure that a piece of data 
hasn't been messed with, this cross-service request ID probably isn't 
the thing to use...



Thanks for bringing this up, and I'd welcome a patch in Swift that
would use a common library to generate the transaction id, if it were
installed. I can see that there would be huge advantage to operators
to trace requests through multiple systems.


Hmm, so does that mean that you'd be open to (gradually) moving to an 
x-openstack-request-id header to replace x-trans-id?



Another option would be for each system that calls an another
OpenStack system to expect and log the transaction ID for the request
that was given. This would be looser coupling and be more forgiving
for a heterogeneous cluster. Eg when Glance makes a call to Swift,
Glance cloud log the transaction id that Swift used (from the Swift
response). Likewise, when Swift makes a call to Keystone, Swift could
log the Keystone transaction id. This wouldn't result in a single
transaction id across all systems, but it would provide markers so an
admin could trace the request.


Sure, this is a perfectly fine option, but doesn't really provide the 
single traceable ID value that I think we're looking for here.


Best,
-jay


On Dec 1, 2013, at 5:48 PM, Maru Newby ma...@redhat.com wrote:



On Nov 30, 2013, at 1:00 AM, Sean Dague s...@dague.net wrote:


On 11/29/2013 10:33 AM, Jay Pipes wrote:

On 11/28/2013 07:45 AM, Akihiro Motoki wrote:

Hi,

I am working on adding request-id to API response in
Neutron. After I checked what header is used in other
projects header name varies project by project. It seems
there is no consensus what header is recommended and it is
better to have some consensus.

nova: x-compute-request-id cinder:
x-compute-request-id glance:   x-openstack-request-id
neutron:  x-network-request-id  (under review)

request-id is assigned and used inside of each project now,
so x-service-request-id looks good. On the other hand, if
we have a plan to enhance request-id across projects,
x-openstack-request-id looks better.


My vote is for:

x-openstack-request-id

With an implementation of create a request UUID if none exists
yet in some standardized WSGI middleware...


Agreed. I don't think I see any value in having these have
different service names, having just x-openstack-request-id
across all the services seems a far better idea, and come back
through and fix nova and cinder to be that as well.


+1

An openstack request id should be service agnostic to allow
tracking of a request across many services (e.g. a call to nova to
boot a VM should generate a request id that is provided to other
services in requests to provision said VM).  All services would
ideally share a facility for generating new request ids and for
securely accepting request ids from other services.


m.



-Sean

-- Sean Dague http://dague.net


Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-02 Thread Andrew Laski

On 12/02/13 at 08:38am, Russell Bryant wrote:

On 12/01/2013 08:39 AM, Christopher Yeoh wrote:

Hi,

 At the summit we agreed to split the lock/unlock, pause/unpause and
 suspend/unsuspend
 functionality out of the V3 version of admin actions into separate
 extensions, to make it easier for deployers to load only the
 functionality that they want.

Remaining in admin_actions we have:

migrate
live_migrate
reset_network
inject_network_info
create_backup
reset_state

I think it makes sense to separate out migrate and live_migrate into a
migrate plugin as well.

 What do people think about the others? There is no real overhead to
 having them in separate
 plugins and totally removing admin_actions. Does anyone have any
 objections to this being done?

Also in terms of grouping I don't think any of the others remaining
above really belong together, but welcome any suggestions.


+1 to removing admin_actions and splitting everything out.


+1 from me as well.



--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-02 Thread Joe Gordon
On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mn wrote:

 Just to add to the story, Swift uses X-Trans-Id and generates it in the
 outer-most catch_errors middleware.

 Swift's catch errors middleware is responsible for ensuring that the
 transaction id exists on each request, and that all errors previously
 uncaught, anywhere in the pipeline, are caught and logged. If there is not
 a common way to do this, yet, I submit it as a great template for solving
 this problem. It's simple, scalable, and well-tested (ie tests and running
 in prod for years).


 https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py

 Leaving aside error handling and only focusing on the transaction id (or
 request id) generation, since OpenStack services are exposed to untrusted
 clients, how would you propose communicating the appropriate transaction id
 to a different service? I can see great benefit to having a glance
 transaction ID carry through to Swift requests (and so on), but how should
 the transaction id be communicated? It's not sensitive info, but I can
 imagine a pretty big problem when trying to track down errors if a client
 application decides to set eg the X-Set-Transaction-Id header on every
 request to the same thing.


-1 to cross service request IDs, for the reasons John mentions above.



 Thanks for bringing this up, and I'd welcome a patch in Swift that would
 use a common library to generate the transaction id, if it were installed.
 I can see that there would be huge advantage to operators to trace requests
 through multiple systems.

 Another option would be for each system that calls an another OpenStack
 system to expect and log the transaction ID for the request that was given.
 This would be looser coupling and be more forgiving for a heterogeneous
 cluster. Eg when Glance makes a call to Swift, Glance cloud log the
 transaction id that Swift used (from the Swift response). Likewise, when
 Swift makes a call to Keystone, Swift could log the Keystone transaction
 id. This wouldn't result in a single transaction id across all systems, but
 it would provide markers so an admin could trace the request.


There was a session on this at the summit, and although the notes are a
little scarce, this was the conclusion we came up with.  Every time a
cross-service call is made, we will log and send a notification for
ceilometer to consume with both request ids.  One of the benefits of this
approach is that we can easily generate a tree of all the API calls that are
made (and clearly show when multiple calls are made to the same service),
something that a single cross-service request id alone would have trouble
with.

https://etherpad.openstack.org/p/icehouse-summit-qa-gate-debugability


With that in mind I think having a standard x-openstack-request-id makes
things a little more uniform, and means that adding new services doesn't
require new logic to handle new request ids.
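
As a strawman for what that shared piece could look like (a sketch only, not
an existing oslo module; the header and environ key names are illustrative):

import uuid


class RequestIdMiddleware(object):
    """Always generate a fresh x-openstack-request-id for each request.

    Incoming values from the client are deliberately ignored, per the
    concern above about untrusted callers; the id is exposed to the app
    via the WSGI environ and echoed back in the response headers.
    """

    HEADER = 'X-Openstack-Request-Id'

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        req_id = 'req-' + str(uuid.uuid4())
        environ['openstack.request_id'] = req_id

        def replacement_start_response(status, headers, exc_info=None):
            headers.append((self.HEADER, req_id))
            return start_response(status, headers, exc_info)

        return self.app(environ, replacement_start_response)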



 --John




 On Dec 1, 2013, at 5:48 PM, Maru Newby ma...@redhat.com wrote:

 
  On Nov 30, 2013, at 1:00 AM, Sean Dague s...@dague.net wrote:
 
  On 11/29/2013 10:33 AM, Jay Pipes wrote:
  On 11/28/2013 07:45 AM, Akihiro Motoki wrote:
  Hi,
 
  I am working on adding request-id to API response in Neutron.
  After I checked what header is used in other projects
  header name varies project by project.
  It seems there is no consensus what header is recommended
  and it is better to have some consensus.
 
  nova: x-compute-request-id
  cinder:   x-compute-request-id
  glance:   x-openstack-request-id
  neutron:  x-network-request-id  (under review)
 
  request-id is assigned and used inside of each project now,
  so x-service-request-id looks good. On the other hand,
  if we have a plan to enhance request-id across projects,
  x-openstack-request-id looks better.
 
  My vote is for:
 
  x-openstack-request-id
 
  With an implementation of create a request UUID if none exists yet in
  some standardized WSGI middleware...
 
  Agreed. I don't think I see any value in having these have different
  service names, having just x-openstack-request-id across all the
  services seems a far better idea, and come back through and fix nova and
  cinder to be that as well.
 
  +1
 
  An openstack request id should be service agnostic to allow tracking of
 a request across many services (e.g. a call to nova to boot a VM should
 generate a request id that is provided to other services in requests to
 provision said VM).  All services would ideally share a facility for
 generating new request ids and for securely accepting request ids from
 other services.
 
 
  m.
 
 
   -Sean
 
  --
  Sean Dague
  http://dague.net
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  

[openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim
All,

Barbican is the OpenStack key management service and we’d like to request 
incubation for the Icehouse release. A Rackspace sponsored team has been 
working for about 9 months now, including following the Havana release cycle 
for our first release.

Our incubation request is here:
https://wiki.openstack.org/wiki/Barbican

Our documentation is mostly hosted at GitHub for the moment, though we are in 
the process of converting much of it to DocBook.
https://github.com/cloudkeep/barbican
https://github.com/cloudkeep/barbican/wiki


The Barbican team will be on IRC today at #openstack-barbican and you can 
contact us using the barbi...@lists.rackspace.com mailing list if desired.



Thanks,

Jarret Raim   |Security Intrapreneur
-
5000 Walzem RoadOffice: 210.312.3121
San Antonio, TX 78218   Cellular: 210.437.1217
-
rackspace hosting   |   experience fanatical support
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][DB] Remove eventlet from oslo.db

2013-12-02 Thread Russell Bryant
On 12/02/2013 09:51 AM, Andrew Laski wrote:
 On 12/02/13 at 09:08am, Russell Bryant wrote:
 On 12/02/2013 09:02 AM, Victor Sergeyev wrote:
 Hi folks!

 At the moment I and Roman Podoliaka are working on splitting of
 openstack.common.db code into a separate library. And it would be nice
 to drop dependency on eventlet before oslo.db is released.

  Currently, there is only one place in oslo.db where we use eventlet -
  wrapping of DB API method calls so they are executed by tpool threads. This
  is only needed when eventlet is used together with a DB-API driver
  implemented as a Python C extension (eventlet can't monkey patch C code, so
  we end up with DB API calls blocking all green threads when using
  Python-MySQLdb). eventlet has a workaround known as 'tpool', which is
  basically a pool of real OS threads that can play nicely with the eventlet
  event loop. The tpool feature is experimental and known to have stability
  problems, and it is doubtful that anyone is using it in production at all.
  Nova API (and probably other API services) has an option to prefork the
  process on start, so that it doesn't need to use tpool when using eventlet
  together with Python-MySQLdb.

  We'd really like to drop tpool support from oslo.db, because as a
  library we should not be bound to any particular concurrency model. If a
  target project is using eventlet, we believe it is that project's problem
  how to make it play nicely with the Python-MySQLdb lib, not the problem of
  oslo.db. Though we could put a tpool wrapper into another helper module
  within oslo-incubator.

 But we would really-really like not to have any eventlet related code in
 oslo.db.

  Are you using CONF.database.use_tpool in production? Does the approach
  with a separate tpool wrapper class seem reasonable? Or should we just drop
  tpool support altogether, if no one is using it?

 Please don't remove it completely.  Putting it in oslo-incubator for now
 seems fine, though.
 
 +1.  We are using it in production, though only in certain places
 because it does have stability issues that we'd like to track down and fix.

Related bug report: https://bugs.launchpad.net/nova/+bug/1171601
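
For anyone following along, the tpool wrapping under discussion boils down to
something like this (a sketch, not the exact oslo code; assumes eventlet is
installed):

from eventlet import tpool


class TpoolDbApiWrapper(object):
    """Dispatch every DB API call to eventlet's pool of real OS threads.

    This keeps a C-extension driver such as Python-MySQLdb from blocking
    the whole green-thread event loop while a query is in flight.
    """

    def __init__(self, db_api):
        self._db_api = db_api

    def __getattr__(self, name):
        attr = getattr(self._db_api, name)
        if not callable(attr):
            return attr
        return lambda *args, **kwargs: tpool.execute(attr, *args, **kwargs)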

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all-projects] PLEASE READ: Code to fail builds on new log errors is merging

2013-12-02 Thread David Kranz
As mentioned several times on this list, 
https://review.openstack.org/#/c/58848/ is in the process of merging. 
Once it does, builds will start failing if there are log ERRORs that are 
not filtered out by 
https://github.com/openstack/tempest/blob/master/etc/whitelist.yaml. 
There are too many issues with neutron so, for the moment, neutron will 
not fail in this case.


If a build fails due to log errors, which will be shown in the console 
right after the tempest output, and you really think the log error is 
not a bug, you can propose a change to the whitelist. It will be great 
to also see the whitelist reduced as bugs are fixed.
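
For reference, entries in that file are keyed by the service's log name and
pair a module with a message regex, so a proposed addition would look roughly
like this (the service key, module and message below are invented purely to
show the shape):

n-cpu:
- module: nova.virt.libvirt.driver
  message: Error from libvirt during undefine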


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Monty Taylor


On 12/02/2013 09:13 AM, Russell Bryant wrote:
 On 11/29/2013 10:01 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 https://etherpad.openstack.org/p/icehouse-external-scheduler

 Just looked into it with release management / TC hat on and I have a
 (possibly minor) concern on the deprecation path/timing.

 Assuming everything goes well, the separate scheduler will be
 fast-tracked through incubation in I, graduate at the end of the I cycle
 to be made a fully-integrated project in the J release.

 Your deprecation path description mentions that the internal scheduler
 will be deprecated in I, although there is no released (or
 security-supported) alternative to switch to at that point. It's not
 until the J release that such an alternative will be made available.

 So IMHO for the release/security-oriented users, the switch point is
 when they start upgrading to J, and not the final step of their upgrade
  to I (as suggested by the "deploy the external scheduler and switch over
  before you consider your migration to I complete" wording in the
 Etherpad). As the first step towards *switching to J* you would install
 the new scheduler before upgrading Nova itself. That works whether
 you're a CD user (and start deploying pre-J stuff just after the I
 release), or a release user (and wait until J final release to switch to
 it).

 Maybe we are talking about the same thing (the migration to the separate
 scheduler must happen after the I release and, at the latest, when you
 switch to the J release) -- but I wanted to make sure we were on the
 same page.
 
 Sounds good to me.
 
 I also assume that all the other scheduler-consuming projects would
 develop the capability to talk to the external scheduler during the J
 cycle, so that their own schedulers would be deprecated in J release and
  removed at the start of K. That would be, to me, the condition to
 considering the external scheduler as integrated with (even if not
 mandatory for) the rest of the common release components.

 Does that work for you ?
 
 I would change "all the other" to "at least one other" here.  I think
 once we prove that a second project can be integrated into it, the
 project is ready to be integrated.  Adding support for even more
 projects is something that will continue to happen over a longer period
 of time, I suspect, especially since new projects are coming in every cycle.

Just because I'd like to argue - if what we do here is an actual
forklift, do we really need a cycle of deprecation?

The reason I ask is that this is, on first stab, not intended to be a
service that has user-facing API differences. It's a reorganization of
code from one repo into a different one. It's very strongly designed to
not be different. It's not even adding a new service like conductor was
- it's simply moving the repo where the existing service is held.

Why would we need/want to deprecate? I say that if we get the code
ectomied and working before nova feature freeze, that we elevate the new
nova repo and delete the code from nova. Process for process sake here
I'm not sure gets us anywhere.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][qa] log errors

2013-12-02 Thread David Kranz
So the last two bugs I filed were coming from check builds that were not 
merged. It was necessary to look at all builds to get to the point where 
gate failing on errors could be turned on. Here is the current 
whitelist. If you folks think the '.*' entries are no longer needed then 
please let me know and I will delete them.


 -David

ceilometer-acompute:
- module: ceilometer.compute.pollsters.disk
  message: Unable to read from monitor: Connection reset by peer
- module: ceilometer.compute.pollsters.disk
  message: Requested operation is not valid: domain is not running
- module: ceilometer.compute.pollsters.net
  message: Requested operation is not valid: domain is not running
- module: ceilometer.compute.pollsters.disk
  message: Domain not found: no domain with matching uuid
- module: ceilometer.compute.pollsters.net
  message: Domain not found: no domain with matching uuid
- module: ceilometer.compute.pollsters.net
  message: No module named libvirt
- module: ceilometer.compute.pollsters.net
  message: Unable to write to monitor: Broken pipe
- module: ceilometer.compute.pollsters.cpu
  message: Domain not found: no domain with matching uuid
- module: ceilometer.compute.pollsters.net
  message: .*
- module: ceilometer.compute.pollsters.disk
  message: .*

ceilometer-alarm-evaluator:
- module: ceilometer.alarm.service
  message: alarm evaluation cycle failed
- module: ceilometer.alarm.evaluator.threshold
  message: .*

ceilometer-api:
- module: wsme.api
  message: .*


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:33 AM, Monty Taylor wrote:
 Just because I'd like to argue - if what we do here is an actual
 forklift, do we really need a cycle of deprecation?
 
 The reason I ask is that this is, on first stab, not intended to be a
 service that has user-facing API differences. It's a reorganization of
 code from one repo into a different one. It's very strongly designed to
 not be different. It's not even adding a new service like conductor was
 - it's simply moving the repo where the existing service is held.
 
 Why would we need/want to deprecate? I say that if we get the code
 ectomied and working before nova feature freeze, that we elevate the new
 nova repo and delete the code from nova. Process for process sake here
 I'm not sure gets us anywhere.

That makes sense to me, actually.

I suppose part of the issue is that we're not positive how much work
will happen to the code *after* the forklift.  Will we have other
services integrated?  Will it have its own database?  How different is
different enough to warrant needing a deprecation cycle?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-02 Thread Daniel P. Berrange
On Tue, Nov 19, 2013 at 12:15:51PM +, Daniel P. Berrange wrote:
 For attention of maintainers of Nova virt drivers

Does anyone from the Hyper-V or VMWare drivers wish to comment on this
proposal?


 A while back there was a bug requesting the ability to set the CPU
 topology (sockets/cores/threads) for guests explicitly
 
https://bugs.launchpad.net/nova/+bug/1199019
 
 I countered that setting explicit topology doesn't play well with
 booting images with a variety of flavours with differing vCPU counts.
 
 This led to the following change which used an image property to
 express maximum constraints on CPU topology (max-sockets/max-cores/
 max-threads) which the libvirt driver will use to figure out the
 actual topology (sockets/cores/threads)
 
   https://review.openstack.org/#/c/56510/
 
 I believe this is a prime example of something we must co-ordinate
 across virt drivers to maximise happiness of our users.
 
 There's a blueprint but I find the description rather hard to
 follow
 
   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
 
 So I've created a standalone wiki page which I hope describes the
 idea more clearly
 
   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
 
 Launchpad doesn't let me link the URL to the blueprint since I'm not
 the blueprint creator :-(
 
 Anyway, this mail is to solicit input on the proposed standard,
 hypervisor-portable way to express this, and on the addition of some
 shared code for doing the calculations which virt driver impls can
 just call into rather than re-inventing it.
 
 I'm looking for buy-in to the idea from the maintainers of each
 virt driver that this conceptual approach works for them, before
 we go merging anything with the specific impl for libvirt.
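
Roughly the kind of shared helper being proposed, as an illustrative sketch
only (argument names are made up; the real version would also have to cope
with flavour overrides and "no preference" values):

def pick_cpu_topology(vcpus, max_sockets, max_cores, max_threads):
    """Choose a (sockets, cores, threads) split for `vcpus` within the maxima.

    Prefers sockets over cores over threads, which is generally the most
    guest-OS-friendly ordering.
    """
    for threads in range(1, max_threads + 1):
        for cores in range(1, max_cores + 1):
            sockets, remainder = divmod(vcpus, cores * threads)
            if remainder == 0 and sockets <= max_sockets:
                return sockets, cores, threads
    raise ValueError('no topology fits %d vCPUs within the given limits'
                     % vcpus)

For example, vcpus=8 with maxima (2, 8, 1) comes out as sockets=2, cores=4,
threads=1.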


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Maintaining backwards compatibility for RPC calls

2013-12-02 Thread Russell Bryant
tl;dr - live upgrades are hard.  oops.

On 11/27/2013 07:38 AM, Day, Phil wrote:
 I’m a bit confused about the expectations of a manager class to be able
 to receive and process messages from a previous RPC version.   I thought
 the objective was to always make changes such that the manager can
 process any previous version of the call that could come from the last
 release.  For example, Icehouse code should be able to receive any
 version that could be generated by a version of Havana.   Generally of
 course that means new parameters have to have a default value.
 
 I’m kind of struggling then to see why we’ve now removed, for example,
 the default values from terminate_instance() as part of
 moving the RPC version to 3.0:
 
 def terminate_instance(self, context, instance, bdms=None,
 reservations=None):
 
 def terminate_instance(self, context, instance, bdms, reservations):
 
 https://review.openstack.org/#/c/54493/
 
 Doesn’t this mean that you can’t deploy Icehouse (3.0) code into a
 Havana system but leave the RPC version pinned at Havana until all of
 the code has been updated?

Thanks for bringing this up.  We realized a problem with the way I had
done these patches after some of them had merged.

First, some history.  The first time we did some major rpc version
bumps, they were done exactly like I did them here. [1][2]

This approach allows live upgrades for CD based deployments.  It does
*not* allow live upgrades from N-1 to N releases.  We didn't bother
because we knew there were other reasons that N-1 to N live upgrades
would not work at that point.

When I did this patch series, I took the same approach.  I didn't
account for the fact that we were going to try to pull off allowing live
upgrades from Havana to Icehouse.  The patches only supported live
upgrades in a CD environment.

I need to go back and add a shim layer that can handle receiving the
latest version of messages sent by Havana to all APIs.

[1] https://review.openstack.org/#/c/12130/
[2] https://review.openstack.org/#/c/12131/
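
One concrete shape for that shim is simply keeping Havana-compatible defaults
on the manager side, along these lines (a sketch only; the helper name is
hypothetical):

def terminate_instance(self, context, instance, bdms=None, reservations=None):
    # A caller pinned to the Havana RPC version may omit the new arguments,
    # so keep the defaults and fill the values in server-side.
    if bdms is None:
        bdms = self._lookup_bdms(context, instance)  # hypothetical helper
    # ... the existing termination path continues unchanged ...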

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation / Graduation / New program requirements

2013-12-02 Thread Thierry Carrez
Hi!

The TC has been working to formalize the set of the criteria it applies
when considering project incubation and graduation to integrated status,
and when considering new programs. The goal is to be more predictable
and that candidate projects and teams know what is expected of them
before they can apply for a given specific status.

This is still very much work in progress, but the draft documents have
been posted here for further review and comments:

https://review.openstack.org/#/c/59454/

Feel free to comment on the thread and/or on the review itself.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Gary Kotton


On 12/2/13 5:39 PM, Russell Bryant rbry...@redhat.com wrote:

On 12/02/2013 10:33 AM, Monty Taylor wrote:
 Just because I'd like to argue - if what we do here is an actual
 forklift, do we really need a cycle of deprecation?
 
 The reason I ask is that this is, on first stab, not intended to be a
 service that has user-facing API differences. It's a reorganization of
 code from one repo into a different one. It's very strongly designed to
 not be different. It's not even adding a new service like conductor was
 - it's simply moving the repo where the existing service is held.
 
 Why would we need/want to deprecate? I say that if we get the code
 ectomied and working before nova feature freeze, that we elevate the new
 nova repo and delete the code from nova. Process for process sake here
 I'm not sure gets us anywhere.

That makes sense to me, actually.

I suppose part of the issue is that we're not positive how much work
will happen to the code *after* the forklift.  Will we have other
services integrated?  Will it have its own database?  How different is
different enough to warrant needing a deprecation cycle?

I have concerns with the forklift. There are at least 1 or 2 changes a
week to the scheduling code (and this is not taking into account new
features being added). Will these need to be updated in 2 separate code
bases? How do we ensure that both are in sync for the interim period? I am
really sorry for playing devil's advocate, but I really think that there are
too many issues and we have yet to iron them out. This should not prevent
us from doing it, but let's at least be aware of what is waiting ahead.


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Christian Schwede
Am 02.12.13 15:47, schrieb Gregory Holt:
 Achieving this transparently is part of the ongoing plans, starting
 with things like the DiskFile refactoring and SSync. The idea is to
 isolate the direct disk access from other servers/tools, something
 that (for instance) RSync has today. Once the isolation is there, it
 should be fairly straightforward to have incoming requests for a
 ring^20 partition look on the local disk in a directory structure
 that was originally created for a ring^19 partition, or even vice
 versa. Then, there will be no need to move data around just for a
 ring-doubling or halving, and no down time to do so.

That sounds great! Is someone already working on this (I know about the
ongoing DiskFile refactoring), or is there even a blueprint available? I was
aware of the idea of multiple rings for the same policy, but not of
support for rings with a modified partition power.

  That said, if you want to create a tool that allows such ring shifting
 in the interim, it should work with smaller clusters that don't mind
 downtime. I would prefer that it not become a core tool checked
 directly into swift/python-swiftclient, just because of the plans
 stated above that should one day make it obsolete.

Yes, that makes a lot of sense. In fact the tool is already working; I
think the best way is to enhance the docs and to list it as a related
Swift project once I'm done with this.

Christian



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:53 AM, Gary Kotton wrote:
 
 
 On 12/2/13 5:39 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 12/02/2013 10:33 AM, Monty Taylor wrote:
 Just because I'd like to argue - if what we do here is an actual
 forklift, do we really need a cycle of deprecation?

 The reason I ask is that this is, on first stab, not intended to be a
 service that has user-facing API differences. It's a reorganization of
 code from one repo into a different one. It's very strongly designed to
 not be different. It's not even adding a new service like conductor was
 - it's simply moving the repo where the existing service is held.

 Why would we need/want to deprecate? I say that if we get the code
 ectomied and working before nova feature freeze, that we elevate the new
 nova repo and delete the code from nova. Process for process sake here
 I'm not sure gets us anywhere.

 That makes sense to me, actually.

 I suppose part of the issue is that we're not positive how much work
 will happen to the code *after* the forklift.  Will we have other
 services integrated?  Will it have its own database?  How different is
 different enough to warrant needing a deprecation cycle?
 
  I have concerns with the forklift. There are at least 1 or 2 changes a
  week to the scheduling code (and this is not taking into account new
  features being added). Will these need to be updated in 2 separate code
  bases? How do we ensure that both are in sync for the interim period? I am
  really sorry for playing devil's advocate, but I really think that there are
  too many issues and we have yet to iron them out. This should not prevent
  us from doing it, but let's at least be aware of what is waiting ahead.

This is one of the reasons that I think the forklift is a *good* idea.
It's what will enable us to do it as fast as possible and minimize the
time we're dealing with 2 code bases.  It could be just 1 deprecation
cycle, or just a matter of a few weeks if we settle on what Monty is
suggesting.

What we *don't* want is something like Neutron and nova-network, where
we end up maintaining two implementations of a thing for a long, long time.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Gary Kotton


On 12/2/13 5:33 PM, Monty Taylor mord...@inaugust.com wrote:



On 12/02/2013 09:13 AM, Russell Bryant wrote:
 On 11/29/2013 10:01 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 
https://etherpad.openstack.org/p/icehouse-external-scheduler

 Just looked into it with release management / TC hat on and I have a
 (possibly minor) concern on the deprecation path/timing.

 Assuming everything goes well, the separate scheduler will be
 fast-tracked through incubation in I, graduate at the end of the I
cycle
 to be made a fully-integrated project in the J release.

 Your deprecation path description mentions that the internal scheduler
 will be deprecated in I, although there is no released (or
 security-supported) alternative to switch to at that point. It's not
 until the J release that such an alternative will be made available.

 So IMHO for the release/security-oriented users, the switch point is
 when they start upgrading to J, and not the final step of their upgrade
 to I (as suggested by the "deploy the external scheduler and switch
 over
 before you consider your migration to I complete" wording in the
 Etherpad). As the first step towards *switching to J* you would install
 the new scheduler before upgrading Nova itself. That works whether
 you're a CD user (and start deploying pre-J stuff just after the I
 release), or a release user (and wait until J final release to switch
to
 it).

 Maybe we are talking about the same thing (the migration to the
separate
 scheduler must happen after the I release and, at the latest, when you
 switch to the J release) -- but I wanted to make sure we were on the
 same page.
 
 Sounds good to me.
 
 I also assume that all the other scheduler-consuming projects would
 develop the capability to talk to the external scheduler during the J
 cycle, so that their own schedulers would be deprecated in J release
and
 removed at the start of K. That would be, to me, the condition to
 considering the external scheduler as integrated with (even if not
 mandatory for) the rest of the common release components.

 Does that work for you ?
 
 I would change all the other to at least one other here.  I think
 once we prove that a second project can be integrated into it, the
 project is ready to be integrated.  Adding support for even more
 projects is something that will continue to happen over a longer period
 of time, I suspect, especially since new projects are coming in every
cycle.

Just because I'd like to argue - if what we do here is an actual
forklift, do we really need a cycle of deprecation?

The reason I ask is that this is, on first stab, not intended to be a
service that has user-facing API differences. It's a reorganization of
code from one repo into a different one. It's very strongly designed to
not be different. It's not even adding a new service like conductor was
- it's simply moving the repo where the existing service is held.

I think that this is certainly different. It is something for which we want
and need a user-facing API.
Examples:
 - aggregates
 - per host scheduling
 - instance groups

Etc.

That is just taking the nova options into account and not the other
modules. How would one configure that we would like to have storage
proximity for a VM? This is where things start to get very interesting and
enables the cross-service scheduling (which is the goal of this, no?).

Thanks
Gary


Why would we need/want to deprecate? I say that if we get the code
ectomied and working before nova feature freeze, that we elevate the new
nova repo and delete the code from nova. Process for process sake here
I'm not sure gets us anywhere.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ml2] Proposing a new time for the ML2 sub-team meeting

2013-12-02 Thread Kyle Mestery (kmestery)
FYI:

I have moved the ML2 meeting per my email below. The new
timeslot per the meeting page [1] is Wednesday at 1600UTC
on #openstack-meeting-alt.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting

On Nov 26, 2013, at 9:26 PM, Kyle Mestery (kmestery) kmest...@cisco.com wrote:
 Folks:
 
 I'd like to propose moving the weekly ML2 meeting [1]. The new
 proposed time is Wednesdays at 1600 UTC, and will be in the
 #openstack-meeting-alt channel. Please respond back if this time
 conflicts for you, otherwise starting next week on 12-4-2013
 we'll have the meeting at the new time.
 
 We will have a short meeting tomorrow at 1400 UTC in the
 #openstack-meeting channel to cover action items from last
 week and some proposed VIF changes.
 
 Thanks!
 Kyle
 
 [1] https://wiki.openstack.org/wiki/Meetings/ML2
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 10:26 AM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Mon, Dec 2, 2013 at 10:05 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Mon, Dec 2, 2013 at 6:37 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon joe.gord...@gmail.comwrote:




 On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant rbry...@redhat.comwrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
  doug.hellm...@dreamhost.com mailto:doug.hellm...@dreamhost.com
 wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant 
 rbry...@redhat.com
  mailto:rbry...@redhat.com wrote:
 
  On 11/29/2013 01:39 PM, Doug Hellmann wrote:
   We have a review up (
 https://review.openstack.org/#/c/58297/)
  to add
   some features to the notification system in the oslo
  incubator. THe
   notification system is being moved into oslo.messaging,
 and so
  we have
   the question of whether to accept the patch to the
 incubated
  version,
   move it to oslo.messaging, or carry it in both.
  
   As I say in the review, from a practical standpoint I
 think we
  can't
   really support continued development in both places. Given
 the
  number of
   times the topic of just make everything a library has
 come
  up, I would
   prefer that we focus our energy on completing the
 transition
  for a given
   module or library once it the process starts. We also need
 to
  avoid
   feature drift, and provide a clear incentive for projects
 to
  update to
   the new library.
  
   Based on that, I would like to say that we do not add new
  features to
   incubated code after it starts moving into a library, and
 only
  provide
   stable-like bug fix support until integrated projects are
  moved over
   to the graduated library (although even that is up for
  discussion).
   After all integrated projects that use the code are using
 the
  library
   instead of the incubator, we can delete the module(s) from
 the
  incubator.
  
   Before we make this policy official, I want to solicit
  feedback from the
   rest of the community and the Oslo core team.
 
  +1 in general.
 
  You may want to make after it starts moving into a library
 more
  specific, though.
 
 
  I think my word choice is probably what threw Sandy off, too.
 
  How about after it has been moved into a library with at least a
  release candidate published?

 Sure, that's better.  That gives a specific bit of criteria for when
 the
 switch is flipped.

 
 
   One approach could be to reflect this status in the
  MAINTAINERS file.  Right now there is a status field for each
  module in
  the incubator:
 
 
   S: Status, one of the following:
Maintained:  Has an active maintainer
Orphan:  No current maintainer, feel free to step
 up!
Obsolete:Replaced by newer code, or a dead end, or
  out-dated
 
  It seems that the types of code we're talking about should
 just be
  marked as Obsolete.  Obsolete code should only get
 stable-like
  bug fixes.
 
  That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
  listed as Maintained).  I think that is accurate, though.
 
 
  Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
- Status: Graduating (short term)
- Code removed from oslo-incubator once library is released

 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
- Status: Obsolete (once an RC of a replacement lib has been
 released)
- Code removed from oslo-incubator once all integrated projects
 have
 been migrated off of the obsolete code


 Does that match your view?

 
  I also added a Graduating status as an indicator for code in that
  intermediate phase where there are 2 copies to be maintained. I hope
 we
  don't have to use it very often, but it's best to be explicit.
 
  https://review.openstack.org/#/c/59373/

 Sounds good to me.


 So is messaging in 'graduating' since it isn't used by all core
 projects yet (nova - https://review.openstack.org/#/c/39929/)?


 Graduation is a status within the oslo project, not the other projects.
 We can't control adoption downstream, so I am trying to set a reasonable
 policy for maintenance until we have an official release.


 Although oslo cannot fully control downstream adoption, they can help
 facilitate that process; we are all in the same 

Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-02 Thread Gregory Holt
On Dec 2, 2013, at 9:48 AM, Christian Schwede christian.schw...@enovance.com 
wrote:

 That sounds great! Is someone already working on this (I know about the 
 ongoing DiskFile refactoring) or even a blueprint available?

There is https://blueprints.launchpad.net/swift/+spec/ring-doubling though I'm 
uncertain how up to date it is. Everybody works on everything ;) but Peter 
Portante has been the point on the DiskFile refactoring and I have been for the 
SSync part. David Hadas will probably kick back in once we (Peter and I) get a 
bit further down the line on our parts.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-02 Thread Julien Danjou
On Mon, Nov 18 2013, Julien Danjou wrote:

   https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg

So I've gone through the code and started to write a plan on how I'd do
things:

  https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg

I don't think I missed too much, though I didn't run into all tiny
details.

Please feel free to tell me if I miss anything obvious, otherwise I'll
try to start submitting patches, one at a time, to get this into shape
step by step.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-02 Thread Joe Gordon
On Mon, Dec 2, 2013 at 7:26 AM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Mon, Dec 2, 2013 at 10:05 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Mon, Dec 2, 2013 at 6:37 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Mon, Dec 2, 2013 at 9:22 AM, Joe Gordon joe.gord...@gmail.comwrote:




 On Mon, Dec 2, 2013 at 6:06 AM, Russell Bryant rbry...@redhat.comwrote:

 On 12/02/2013 08:53 AM, Doug Hellmann wrote:
 
 
 
  On Mon, Dec 2, 2013 at 8:46 AM, Doug Hellmann
  doug.hellm...@dreamhost.com mailto:doug.hellm...@dreamhost.com
 wrote:
 
 
 
 
  On Mon, Dec 2, 2013 at 8:36 AM, Russell Bryant 
 rbry...@redhat.com
  mailto:rbry...@redhat.com wrote:
 
  On 11/29/2013 01:39 PM, Doug Hellmann wrote:
   We have a review up (
 https://review.openstack.org/#/c/58297/)
  to add
   some features to the notification system in the oslo
  incubator. THe
   notification system is being moved into oslo.messaging,
 and so
  we have
   the question of whether to accept the patch to the
 incubated
  version,
   move it to oslo.messaging, or carry it in both.
  
   As I say in the review, from a practical standpoint I
 think we
  can't
   really support continued development in both places. Given
 the
  number of
   times the topic of just make everything a library has
 come
  up, I would
   prefer that we focus our energy on completing the
 transition
  for a given
   module or library once it the process starts. We also need
 to
  avoid
   feature drift, and provide a clear incentive for projects
 to
  update to
   the new library.
  
   Based on that, I would like to say that we do not add new
  features to
   incubated code after it starts moving into a library, and
 only
  provide
   stable-like bug fix support until integrated projects are
  moved over
   to the graduated library (although even that is up for
  discussion).
   After all integrated projects that use the code are using
 the
  library
   instead of the incubator, we can delete the module(s) from
 the
  incubator.
  
   Before we make this policy official, I want to solicit
  feedback from the
   rest of the community and the Oslo core team.
 
  +1 in general.
 
  You may want to make after it starts moving into a library
 more
  specific, though.
 
 
  I think my word choice is probably what threw Sandy off, too.
 
  How about after it has been moved into a library with at least a
  release candidate published?

 Sure, that's better.  That gives a specific bit of criteria for when
 the
 switch is flipped.

 
 
   One approach could be to reflect this status in the
  MAINTAINERS file.  Right now there is a status field for each
  module in
  the incubator:
 
 
   S: Status, one of the following:
Maintained:  Has an active maintainer
Orphan:  No current maintainer, feel free to step
 up!
Obsolete:Replaced by newer code, or a dead end, or
  out-dated
 
  It seems that the types of code we're talking about should
 just be
  marked as Obsolete.  Obsolete code should only get
 stable-like
  bug fixes.
 
  That would mean marking 'rpc' and 'notifier' as Obsolete
 (currently
  listed as Maintained).  I think that is accurate, though.
 
 
  Good point.

 So, to clarify, possible flows would be:

 1) An API moving to a library as-is, like rootwrap

Status: Maintained
- Status: Graduating (short term)
- Code removed from oslo-incubator once library is released

 2) An API being replaced with a better one, like rpc being replaced by
 oslo.messaging

Status: Maintained
- Status: Obsolete (once an RC of a replacement lib has been
 released)
- Code removed from oslo-incubator once all integrated projects
 have
 been migrated off of the obsolete code


 Does that match your view?

 
  I also added a Graduating status as an indicator for code in that
  intermediate phase where there are 2 copies to be maintained. I hope
 we
  don't have to use it very often, but it's best to be explicit.
 
  https://review.openstack.org/#/c/59373/

 Sounds good to me.


 So is messaging in 'graduating' since it isn't used by all core
 projects yet (nova - https://review.openstack.org/#/c/39929/)?


 Graduation is a status within the oslo project, not the other projects.
 We can't control adoption downstream, so I am trying to set a reasonable
 policy for maintenance until we have an official release.


 Although oslo cannot fully control downstream adoption, they can help
 facilitate that process; we are all in the same 

Re: [openstack-dev] [Neutron][LBaaS] L7 rules design

2013-12-02 Thread Avishay Balderman
1)  Done

2)  Done

3)  Attaching a pool to a vip is a private use case. The 'action' can also
be 'reject the traffic' or something else. So the 'Action' may tell us that we need
to attach Vip X to Pool Y

4)  Not sure .. It is an open discussion for now.

5)  See #4

   Yes - CRUD operations should be supported as well for the policy and rules

Thanks

Avishay

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Friday, November 22, 2013 5:24 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] L7 rules design

Hi Avishay, lbaas folks,

I've reviewed the wiki and have some questions/suggestions:

1) Looks like L7Policy is lacking a 'name' attribute in its description. However
I see little benefit of having a name for this object.
2) lbaas-related neutron client commands start with lb-, please fix this.
3) How does L7Policy specify the relation of vip and pool?
4) How will the default pool be associated with the vip? Will it be an l7 rule of
a special kind?
It is not quite clear what associating a vip with a pool via a policy means if
each rule in the policy contains a 'SelectedPool' attribute.
5) What is 'action'? What other actions besides SELECT_POOL can there be?

In fact my suggestion will be slightly different:
- Instead of having L7Policy, which I believe serves only for rule grouping, we
introduce VipPoolAssociation as follows:

VipPoolAssociation
 id,
 vip_id,
 pool_id,
 default - boolean flag for default pool for the vip

L7Rule will then have a vippoolassociation_id (it's better to shorten the attr
name) just like it has policy_id in your proposal. L7Rule doesn't need the
SelectedPool attribute since, once it is attached to a VipPoolAssociation, the rule
points to the pool with the pool_id of that association.
In other words, VipPoolAssociation is almost the same as L7Policy but named
closer to its primary goal of associating vips and pools.
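
To make the shape concrete, here is a rough SQLAlchemy-style sketch of the two
objects described above; the table names, column types and the extra rule columns
are made up for illustration only:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class VipPoolAssociation(Base):
    __tablename__ = 'vip_pool_associations'
    id = sa.Column(sa.String(36), primary_key=True)
    vip_id = sa.Column(sa.String(36), sa.ForeignKey('vips.id'))
    pool_id = sa.Column(sa.String(36), sa.ForeignKey('pools.id'))
    default = sa.Column(sa.Boolean)  # default pool for the vip

class L7Rule(Base):
    __tablename__ = 'l7rules'
    id = sa.Column(sa.String(36), primary_key=True)
    # shortened attribute name pointing at the association
    association_id = sa.Column(sa.String(36),
                               sa.ForeignKey('vip_pool_associations.id'))
    type = sa.Column(sa.String(32))    # e.g. HEADER, PATH (illustrative)
    value = sa.Column(sa.String(255))  # illustrative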

I would also suggest adding add/delete-rule operations for the association.

What do you think?

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-12-02 Thread Matt Wagner
On Sun Dec  1 00:27:30 2013, Tzu-Mainn Chen wrote:

 I think it's far more important that we list out requirements and
 create a design document that people agree upon first.  Otherwise, we
 run the risk of focusing on feature X for release 1 without ensuring
 that our architecture supports feature Y for release 2.

+1 to this.

I think that lifeless'
https://etherpad.openstack.org/p/tripleo-feature-map pad might be a good
way to get moving in that direction.


 The point of disagreement here - which actually seems quite minor to
 me - is how far we want to go in defining heterogeneity.  Are existing
 node attributes such as cpu and memory enough?  Or do we need to go
 further?  To take examples from this thread, some additional
 possibilities include: rack, network connectivity, etc.  Presumably,
 such attributes will be user defined and managed within TripleO itself.

I took the point of disagreement more about the allowance of manual
control. Should a user be able to override the list of what gets
provisioned, where?

And I don't think you always want heterogeneity. For example, if we
treat 'rack' as one of those attributes, a system administrator might
specifically want things to NOT share a rack, e.g. for redundancy.

That said, I suspect that many of us (myself included) have never
designed a data center, so I worry that some of our examples might be a
bit contrived. Not necessarily just for this conversation, but I think
it'd be handy to have real-world stories here. I'm sure no two are
identical, but it'd help make sure we're focused on real-world scenarios.



 If that understanding is correct, it seems to me that the requirements
 are broadly in agreement, and that TripleO defined node attributes
 is a feature that can easily be slotted into this sort of
 architecture.  Whether it needs to come first. . . should be a
 different discussion (my gut feel is that it shouldn't come first, as
 it depends on everything else working, but maybe I'm wrong).

So to me, that question -- what should come first? -- is exactly what
started this discussion. It didn't start out as a question of whether we
should allow users to override the schedule, but as a question of where
we should start building. Should we start off just letting Nova
scheduler do all the hard work for us and let overrides maybe come in
later? Or should we start off requiring that everything is manual and
later transition to using Nova? (I don't have a strong opinion either
way, but I hope we land one way or the other soon.)

-- 
Matt Wagner
Software Engineer, Red Hat



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][database] Update compute_nodes table

2013-12-02 Thread Abbass MAROUNI
Hello,

I'm looking for a way to add a new attribute to the compute nodes by adding
a column to the compute_nodes table in the nova database in order to track a
metric on the compute nodes and use it later in nova-scheduler.

I checked the sqlalchemy/migrate_repo/versions directory and thought about adding my
own upgrade script and then syncing using nova-manage db sync.
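
Something like the following sketch is what I had in mind (the column name
'my_metric' is made up purely for illustration; nova-manage loads
sqlalchemy-migrate, which provides the create_column/drop_column helpers used here):

from sqlalchemy import Column, Integer, MetaData, Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    compute_nodes = Table('compute_nodes', meta, autoload=True)
    # 'my_metric' is an illustrative name, not an existing column
    compute_nodes.create_column(Column('my_metric', Integer, nullable=True))


def downgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    compute_nodes = Table('compute_nodes', meta, autoload=True)
    compute_nodes.drop_column('my_metric')

I assume I would also need to add a matching attribute to the ComputeNode model
in nova/db/sqlalchemy/models.py, which is part of my question below.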

My question is:
What is the process of upgrading a table in the database? Do I have to
modify or add a new variable in some class in order to associate the newly
added column with a variable that I can use?

Best regards,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Thierry Carrez
Monty Taylor wrote:
 On 12/02/2013 09:13 AM, Russell Bryant wrote:
 On 11/29/2013 10:01 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 https://etherpad.openstack.org/p/icehouse-external-scheduler

 Just looked into it with release management / TC hat on and I have a
 (possibly minor) concern on the deprecation path/timing.

 Assuming everything goes well, the separate scheduler will be
 fast-tracked through incubation in I, graduate at the end of the I cycle
 to be made a fully-integrated project in the J release.

 Your deprecation path description mentions that the internal scheduler
 will be deprecated in I, although there is no released (or
 security-supported) alternative to switch to at that point. It's not
 until the J release that such an alternative will be made available.

 So IMHO for the release/security-oriented users, the switch point is
 when they start upgrading to J, and not the final step of their upgrade
 to I (as suggested by the deploy the external scheduler and switch over
 before you consider your migration to I complete wording in the
 Etherpad). As the first step towards *switching to J* you would install
 the new scheduler before upgrading Nova itself. That works whether
 you're a CD user (and start deploying pre-J stuff just after the I
 release), or a release user (and wait until J final release to switch to
 it).

 Maybe we are talking about the same thing (the migration to the separate
 scheduler must happen after the I release and, at the latest, when you
 switch to the J release) -- but I wanted to make sure we were on the
 same page.

 Sounds good to me.

 I also assume that all the other scheduler-consuming projects would
 develop the capability to talk to the external scheduler during the J
 cycle, so that their own schedulers would be deprecated in J release and
 removed at the start of K. That would be, to me, the condition to
 considering the external scheduler as integrated with (even if not
 mandatory for) the rest of the common release components.

 Does that work for you ?

 I would change all the other to at least one other here.  I think
 once we prove that a second project can be integrated into it, the
 project is ready to be integrated.  Adding support for even more
 projects is something that will continue to happen over a longer period
 of time, I suspect, especially since new projects are coming in every cycle.
 
 Just because I'd like to argue - if what we do here is an actual
 forklift, do we really need a cycle of deprecation?
 
 The reason I ask is that this is, on first stab, not intended to be a
 service that has user-facing API differences. It's a reorganization of
 code from one repo into a different one. It's very strongly designed to
 not be different. It's not even adding a new service like conductor was
 - it's simply moving the repo where the existing service is held.
 
 Why would we need/want to deprecate? I say that if we get the code
 ectomied and working before nova feature freeze, that we elevate the new
 nova repo and delete the code from nova. Process for process sake here
 I'm not sure gets us anywhere.

I don't really care that much about deprecation in that case, but I care
about which release the new project is made part of. Would you make it
part of the Icehouse common release ? That means fast-tracking through
incubation *and* integration in less than one cycle... I'm not sure we
want that.

I agree it's the same code (at least at the beginning), but the idea
behind forcing all projects to undergo a full cycle before being made
part of the release is not really about code stability, it's about
integration with the other projects and all the various programs. We
want them to go through a whole cycle to avoid putting unnecessary
stress on packagers, QA, docs, infrastructure and release management.

So while I agree that we could play tricks around deprecation, I'm not
sure we should go from forklifted to part of the common release in less
than 3 months.

I'm not sure it would buy us anything, either. Having the scheduler
usable by the end of the Icehouse cycle and integrated in the J cycle
lets you have one release where both options are available, remove it
first thing in J and then anyone running J (be it tracking trunk or
using the final release) is using the external scheduler. That makes
more sense to me and technically, you still have the option to use it
with Icehouse.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:19 AM, Jarret Raim wrote:
 All,
 
 Barbican is the OpenStack key management service and we’d like to
 request incubation for the Icehouse release. A Rackspace sponsored team
 has been working for about 9 months now, including following the Havana
 release cycle for our first release.
 
 Our incubation request is here:
 https://wiki.openstack.org/wiki/Barbican
 
 Our documentation is mostly hosted at GitHub for the moment, though we
 are in the process of converting much of it to DocBook.
 https://github.com/cloudkeep/barbican
 https://github.com/cloudkeep/barbican/wiki
 
 
 The Barbican team will be on IRC today at #openstack-barbican and you
 can contact us using the barbi...@lists.rackspace.com mailing list if
 desired.

The TC is currently working on formalizing requirements for new programs
and projects [3].  I figured I would give them a try against this
application.

First, I'm assuming that the application is for a new program that
contains the new project.  The application doesn't make that bit clear,
though.

 Teams in OpenStack can be created as-needed and grow organically. As the team
 work matures, some technical efforts will be recognized as essential to the
 completion of the OpenStack project mission. By becoming an official Program,
 they place themselves under the authority of the OpenStack Technical
 Committee. In return, their contributors get to vote in the Technical
 Committee election, and they get some space and time to discuss future
 development at our Design Summits. When considering new programs, the TC will
 look into a number of criteria, including (but not limited to):

 * Scope
 ** Team must have a specific scope, separated from others teams scope

I would like to see a statement of scope for Barbican on the
application.  It should specifically cover how the scope differs from
other programs, in particular the Identity program.

 ** Team must have a mission statement

This is missing.

 * Maturity
 ** Team must already exist, have regular meetings and produce some work

This seems covered.

 ** Team should have a lead, elected by the team contributors

Was the PTL elected?  I can't seem to find record of that.  If not, I
would like to see an election held for the PTL.

 ** Team should have a clear way to grant ATC (voting) status to its
significant contributors

Related to the above

 * Deliverables
 ** Team should have a number of clear deliverables

barbican and python-barbicanclient, I presume.  It would be nice to have
this clearly defined on the application.


Now, for the project specific requirements:

  Projects wishing to be included in the integrated release of OpenStack must
  first apply for incubation status. During their incubation period, they will
  be able to access new resources and tap into other OpenStack programs (in
  particular the Documentation, QA, Infrastructure and Release management 
 teams)
  to learn about the OpenStack processes and get assistance in their 
 integration
  efforts.
  
  The TC will evaluate the project scope and its complementarity with existing
  integrated projects and other official programs, look into the project
  technical choices, and check a number of requirements, including (but not
  limited to):
  
  * Scope
  ** Project must have a clear and defined scope

This is missing

  ** Project should not inadvertently duplicate functionality present in other
 OpenStack projects. If they do, they should have a clear plan and 
 timeframe
 to prevent long-term scope duplication.
  ** Project should leverage existing functionality in other OpenStack projects
 as much as possible

I'm going to hold off on diving into this too far until the scope is
clarified.

  * Maturity
  ** Project should have a diverse and active team of contributors

Using a mailmap file [4]:

$ git shortlog -s -e | sort -n -r
   172  John Wood john.w...@rackspace.com
   150  jfwood john.w...@rackspace.com
65  Douglas Mendizabal douglas.mendiza...@rackspace.com
39  Jarret Raim jarret.r...@rackspace.com
17  Malini K. Bhandaru malini.k.bhand...@intel.com
10  Paul Kehrer paul.l.keh...@gmail.com
10  Jenkins jenk...@review.openstack.org
 8  jqxin2006 jqxin2...@gmail.com
 7  Arash Ghoreyshi arashghorey...@gmail.com
 5  Chad Lung chad.l...@gmail.com
 3  Dolph Mathews dolph.math...@gmail.com
 2  John Vrbanac john.vrba...@rackspace.com
 1  Steven Gonzales stevendgonza...@gmail.com
 1  Russell Bryant rbry...@redhat.com
 1  Bryan D. Payne bdpa...@acm.org

It appears to be an effort done by a group, and not an individual.  Most
commits by far are from Rackspace, but there is at least one non-trivial
contributor (Malini) from another company (Intel), so I think this is OK.

  ** Project should not have a major architectural rewrite planned. API should
 be reasonably stable.

Thoughts from the Barbican team on this?

  
  * Process
  ** Project must be hosted under stackforge 

Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:53 AM, Russell Bryant wrote:
  ** Project must have no library dependencies which effectively restrict how
 the project may be distributed [1]
 
 http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
 
 It looks like the only item here not in the global requirements is
 Celery, which is licensed under a 3-clause BSD license.
 
 https://github.com/celery/celery/blob/master/LICENSE
 
 A notable point is this clause:
 
   * Redistributions in binary form must reproduce the above copyright
 notice, this list of conditions and the following disclaimer in the
 documentation and/or other materials provided with the distribution.
 
 I'm not sure if we have other dependencies using this license already.
 It's also not clear how to interpret this when Python is always
 distributed as source.  We can take this up on the legal-discuss mailing
 list.

If you have comments on this point, please jump over to the
legal-discuss list and respond to this thread:

http://lists.openstack.org/pipermail/legal-discuss/2013-December/000106.html

We can post the outcome back to the -dev list once that thread reaches a
conclusion.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 12/02/2013

2013-12-02 Thread Renat Akhmerov
Hi,

Thanks for joining our community meeting today at #openstack-meeting!

Here’re the links to the meeting minutes and the full log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-02-16.00.html
Logs: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-02-16.00.log.html

Feel free to join us next time!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][database] Update compute_nodes table

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:47 AM, Abbass MAROUNI wrote:
 Hello,
 
 I'm looking for a way to add a new attribute to the compute nodes by
 adding a column to the compute_nodes table in the nova database in order to
 track a metric on the compute nodes and use it later in nova-scheduler.
 
 I checked the  sqlalchemy/migrate_repo/versions and thought about adding
 my own upgrade then sync using nova-manage db sync. 
 
 My question is:
 What is the process of upgrading a table in the database? Do I have to
 modify or add a new variable in some class in order to associate the
 newly added column with a variable that I can use?

Don't add this.  :-)

There is work in progress to just have a column with a json blob in it
for additional metadata like this.

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://wiki.openstack.org/wiki/ExtensibleResourceTracking

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 11:41 AM, Thierry Carrez wrote:
 I don't really care that much about deprecation in that case, but I care
 about which release the new project is made part of. Would you make it
 part of the Icehouse common release ? That means fast-tracking through
 incubation *and* integration in less than one cycle... I'm not sure we
 want that.
 
 I agree it's the same code (at least at the beginning), but the idea
 behind forcing all projects to undergo a full cycle before being made
 part of the release is not really about code stability, it's about
 integration with the other projects and all the various programs. We
 want them to go through a whole cycle to avoid putting unnecessary
 stress on packagers, QA, docs, infrastructure and release management.
 
 So while I agree that we could play tricks around deprecation, I'm not
 sure we should go from forklifted to part of the common release in less
 than 3 months.
 
 I'm not sure it would buy us anything, either. Having the scheduler
 usable by the end of the Icehouse cycle and integrated in the J cycle
 lets you have one release where both options are available, remove it
 first thing in J and then anyone running J (be it tracking trunk or
 using the final release) is using the external scheduler. That makes
 more sense to me and technically, you still have the option to use it
 with Icehouse.
 

Not having to maintain code in 2 places is what it buys us.  However,
this particular point is a bit moot until we actually have it done and
working.  Perhaps we should just revisit the deprecation plan once we
actually have the thing split out and running.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Compute Node Interfaces configuration in Nova Configuration file

2013-12-02 Thread Balaji P
Hi,

Can anyone share the current deployment use case for compute node interface
configuration in the Nova configuration file, where hundreds of compute nodes [with
two or more physical interfaces] in a DC network have to be configured with
IP addresses for Management Networks and Data Networks?

Is there any other tool used for this purpose, or does manual configuration have to
be done?

Regards,
Balaji.P


From: Balaji Patnala [mailto:patnala...@gmail.com]
Sent: Tuesday, November 19, 2013 4:21 PM
To: OpenStack Development Mailing List
Cc: Addepalli Srini-B22160; B Veera-B37207; P Balaji-B37839; Lingala Srikanth 
Kumar-B37208; Somanchi Trinath-B39208; Mannidi Purandhar Sairam-B39209
Subject: [openstack-dev] [nova] Static IPAddress configuration in nova.conf file

Hi,
Nova-compute on compute nodes sends a fanout_cast to the scheduler on the Controller
Node once every 60 seconds.  The nova.conf configuration file on a Compute Node has
to be configured with the Management Network IP address, and there is no provision
to configure the Data Network IP address in the configuration file. So if there is
any change in the IP address of the Management Network Interface or Data
Network Interface, then we have to update them manually in the
configuration file of the compute node.
We would like to create a BP to address this issue of static configuration of
IP addresses for the Management Network Interface and Data Network Interface of a
Compute Node by providing the interface names in the nova.conf file,
so that any change in the IP address of these interfaces is reflected
dynamically in the fanout_cast message to the Controller and updates the DB.
We came to know that current deployments are using some scripts to handle
this static IP address configuration in the nova.conf file.
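
To make the idea concrete, a rough sketch of how such options could be registered
with oslo.config is below; the option names are hypothetical and do not exist in
nova.conf today:

from oslo.config import cfg

# hypothetical options for the proposed blueprint
interface_opts = [
    cfg.StrOpt('management_interface',
               help='NIC whose current address is reported as the '
                    'management network IP'),
    cfg.StrOpt('data_interface',
               help='NIC whose current address is reported as the '
                    'data network IP'),
]

CONF = cfg.CONF
CONF.register_opts(interface_opts)

nova-compute would then resolve the addresses of these interfaces at runtime
instead of relying on statically configured IP addresses.
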
Any comments/suggestions will be useful.
Regards,
Balaji.P


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 10:59 AM, Gary Kotton wrote:
 I think that this is certainly different. It is something for which we want
 and need a user-facing API.
 Examples:
  - aggregates
  - per host scheduling
  - instance groups
 
 Etc.
 
 That is just taking the nova options into account and not the other
 modules. How would one configure that we would like to have storage
 proximity for a VM? This is where things start to get very interesting and
 enables the cross-service scheduling (which is the goal of this, no?).

An explicit part of this plan is that all of the things you're talking
about are *not* in scope until the forklift is complete and the new
thing is a functional replacement for the existing nova-scheduler.  We
want to get the project established and going so that it is a place
where this work can take place.  We do *not* want to slow down the work
of getting the project established by making these things a prerequisite.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Sylvain Bauza

Le 02/12/2013 18:12, Russell Bryant a écrit :

On 12/02/2013 10:59 AM, Gary Kotton wrote:

I think that this is certainly different. It is something for which we want
and need a user-facing API.
Examples:
  - aggregates
  - per host scheduling
  - instance groups

Etc.

That is just taking the nova options into account and not the other
modules. How would one configure that we would like to have storage
proximity for a VM? This is where things start to get very interesting and
enables the cross-service scheduling (which is the goal of this, no?).

An explicit part of this plan is that all of the things you're talking
about are *not* in scope until the forklift is complete and the new
thing is a functional replacement for the existing nova-scheduler.  We
want to get the project established and going so that it is a place
where this work can take place.  We do *not* want to slow down the work
of getting the project established by making these things a prerequisite.

While I can understand we need to secure the forklift, I also think it
would be a good thing to start defining what scheduling-as-a-service
would look like in the next steps, even if those steps are not planned yet.


There is already an RPC interface defining what the scheduler *is*; my
concern is just to make sure this API (RPC or REST anyway) isn't
too tied to Nova instances, which would make it hard to later deliver
scheduling for Cinder volumes or Nova hosts.
In other terms, having the RPC methods generic enough to say add this
entity to this group or elect entities from this group would be fine.


-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday December 3rd at 19:00 UTC

2013-12-02 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday December 3rd, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-12-02 Thread Julien Danjou
On Thu, Nov 28 2013, Julien Danjou wrote:

 On Thu, Nov 28 2013, Sean Dague wrote:

 I'm totally in favor of going further and saying empty files shouldn't
 have license headers, because their content of emptiness isn't
 copyrightable [1]. That's just not how it's written today.

 I went ahead and sent a first patch:

   https://review.openstack.org/#/c/59090/

 Help appreciated. :)

The patch is ready for review, but it is also a bit stricter as it
completely disallows files with _only_ comments in them.

This is something that sounds like a good idea, but Joe wanted to bring
this to the mailing list for attention first, in case there would be a
problem.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Russell Bryant
On 12/02/2013 12:46 PM, Monty Taylor wrote:
 On 12/02/2013 11:53 AM, Russell Bryant wrote:
  * Scope
  ** Project must have a clear and defined scope

 This is missing

  ** Project should not inadvertently duplicate functionality present in 
 other
 OpenStack projects. If they do, they should have a clear plan and 
 timeframe
 to prevent long-term scope duplication.
  ** Project should leverage existing functionality in other OpenStack 
 projects
 as much as possible

 I'm going to hold off on diving into this too far until the scope is
 clarified.
 
 I'm not.
 
 *snip*
 

Ok, I can't help it now.


 The list looks reasonable right now.  Barbican should put migrating to
 oslo.messaging on the Icehouse roadmap though.
 
 *snip*

Yeahhh ... I looked and even though rpc and notifier are imported, they
do not appear to be used at all.


 http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires

 It looks like the only item here not in the global requirements is
 Celery, which is licensed under a 3-clause BSD license.
 
 I'd like to address the use of Celery.
 
 WTF
 
 Barbican has been around for 9 months, which means that it does not
 predate the work that has become oslo.messaging. It doesn't even try. It
 uses a completely different thing.
 
 The use of celery needs to be replaced with oslo. Full stop. I do not
 believe it makes any sense to spend further time considering a project
 that's divergent on such a core piece. Which is a shame - because I
 think that Barbican is important and fills an important need and I want
 it to be in. BUT - We don't get to end-run around OpenStack project
 choices by making a new project on the side and then submitting it for
 incubation. It's going to be a pile of suck to fix this I'm sure, and
 I'm sure that it's going to delay getting actually important stuff done
 - but we deal with too much crazy as it is to pull in a non-oslo
 messaging and event substrata.
 

Yeah, I'm afraid I agree with Monty here.  I didn't really address this
because I was trying to just do a first pass and not go too far into the
tech bits.

I think such a big divergence is going to be a hard sell for a number of
reasons.  It's a significant dependency that I don't think is justified.
 Further, it won't work in all of the same environments that OpenStack
works in today.  You can't use Celery with all of the same messaging
transports as oslo.messaging (or the older rpc lib).  One example is Qpid.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Working group on language packs

2013-12-02 Thread Clayton Coleman


- Original Message -
 Clayton, good detail documentation of the design approach @
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts
 
 A few questions -
 1. On the objective of supporting both heroku build packs and openshift
 cartridges: how compatible are these two with each other? i.e. is there
 enough common denominator between the two that Solum can be compatible with
 both?

A buildpack is:

1) an ubuntu base image with a set of preinstalled packages
2) a set of buildpack files downloaded into a specific location on disk
3) a copy of the application git repository downloaded into a specific location 
on disk
4) a script that is invoked to transform the buildpack files and git repository 
into a set of binaries (the deployment unit) in an output directory

Since the Solum proposal is a transformation of a base (#1) via a script (which 
can call #4) with injected app data (#2 and #3), we should have all the 
ingredients.
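
(As a rough illustration of #4, driving a Heroku-style buildpack boils down to
something like the following sketch; the paths are made up and error handling is
omitted:)

import subprocess


def run_buildpack(buildpack_dir, build_dir, cache_dir):
    # bin/detect exits non-zero if the buildpack cannot handle this app
    subprocess.check_call([buildpack_dir + '/bin/detect', build_dir])
    # bin/compile transforms the app source into the deployment unit
    subprocess.check_call([buildpack_dir + '/bin/compile', build_dir, cache_dir])


run_buildpack('/tmp/heroku-buildpack-python', '/tmp/app', '/tmp/cache')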

An openshift cartridge is:

1) a rhel base image with a set of preinstalled packages
   a) some additional tools and environment to execute cartridges
2) a metadata file that describes how the cartridge can be used (which ports
it exposes, how it can be used)
3) a set of cartridge files that include binaries and control scripts
   a) a bin/install script that does setup of the binaries on disk, and the
basic preparation
   b) a bin/control script that manages running processes and to do builds

An OpenShift cartridge would be a base image (#1) with a script that invokes 
the correct steps in the cartridge (#3) to create, build, and deploy the 
contents.

 
 2. What is the intended user persona for the image author? Is it the
 service operator, the developer end user, or could be both?

Yes, and there is a 3rd persona which could be a vendor packaging up their 
software in distributable format for consumption by multiple service operators. 
 Will add a section to the proposal.

These were discussed in the API meeting as well 
(http://irclogs.solum.io/2013/solum.2013-12-02-17.05.html), the additional info 
will be added to the proposal.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-12-02 Thread Vishvananda Ishaya

On Nov 29, 2013, at 9:24 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 11/29/2013 06:37 PM, David Koo wrote:
 On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
 We're currently running Grizzly (going to Havana soon) and we're
 running into an issue where if the active controller is ungracefully
 killed then nova-compute on the compute node doesn't properly
 connect to the new rabbitmq server on the newly-active controller
 node.
 
 Interestingly, killing and restarting nova-compute on the compute
 node seems to work, which implies that the retry code is doing
 something less effective than the initial startup.
 
 Has anyone doing HA controller setups run into something similar?
 
 As a followup, it looks like if I wait for 9 minutes or so I see a message in 
 the compute logs:
 
 2013-11-30 00:02:14.756 1246 ERROR nova.openstack.common.rpc.common [-] 
 Failed to consume message from queue: Socket closed
 
 It then reconnects to the AMQP server and everything is fine after that.  
 However, any instances that I tried to boot during those 9 minutes stay stuck 
 in the BUILD status.
 
 
 
 So the rabbitmq server and the controller are on the same node?
 
 Yes, they are.
 
  My
 guess is that it's related to this bug 856764 (RabbitMQ connections
 lack heartbeat or TCP keepalives). The gist of it is that since there
 are no heartbeats between the MQ and nova-compute, if the MQ goes down
 ungracefully then nova-compute has no way of knowing. If the MQ goes
 down gracefully then the MQ clients are notified and so the problem
 doesn't arise.
 
 Sounds about right.
 
 We got bitten by the same bug a while ago when our controller node
 got hard reset without any warning!. It came down to this bug (which,
 unfortunately, doesn't have a fix yet). We worked around this bug by
 implementing our own crude fix - we wrote a simple app to periodically
 check if the MQ was alive (write a short message into the MQ, then
 read it out again). When this fails n-times in a row we restart
 nova-compute. Very ugly, but it worked!
 
 Sounds reasonable.
 
 I did notice a kombu heartbeat change that was submitted and then backed out 
 again because it was buggy. I guess we're still waiting on the real fix?

Hi Chris,

This general problem comes up a lot, and one fix is to use keepalives. Note 
that more is needed if you are using multi-master rabbitmq, but for failover I 
have had great success with the following (also posted to the bug):

When a connection to a socket is cut off completely, the receiving side doesn't 
know that the connection has dropped, so you can end up with a half-open 
connection. The general solution for this in linux is to turn on 
TCP_KEEPALIVES. Kombu will enable keepalives if the version number is high 
enough (1.0 iirc), but rabbit needs to be specially configured to send 
keepalives on the connections that it creates.
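
(As a client-side illustration only: enabling keepalives on a socket in Python is
just the SO_KEEPALIVE option, which is what kombu does for you on new enough
versions:)

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.connect(('rabbit-host', 5672))  # illustrative host/port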

So solving the HA issue generally involves a rabbit config with a section like 
the following:

[
 {rabbit, [{tcp_listen_options, [binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true},
{exit_on_close, false},
{keepalive, true}]}
  ]}
].

Then you should also shorten the keepalive sysctl settings or it will still 
take ~2 hrs to terminate the connections:

echo 5 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 1 > /proc/sys/net/ipv4/tcp_keepalive_intvl

Obviously this should be done in a sysctl config file instead of at the command 
line. Note that if you only want to shorten the rabbit keepalives but keep 
everything else as a default, you can use an LD_PRELOAD library to do so. For 
example you could use:

https://github.com/meebey/force_bind/blob/master/README

Vish

 
 Chris
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-02 Thread David Kranz

On 12/02/2013 10:24 AM, Julien Danjou wrote:

On Fri, Nov 29 2013, David Kranz wrote:


In preparing to fail builds with log errors I have been trying to make
things easier for projects by maintaining a whitelist. But these bugs in
ceilometer are coming in so fast that I can't keep up. So I am  just putting
.* in the white list for any cases I find before gate failing is turned
on, hopefully early this week.

Following the chat on IRC and the bug reports, it seems this might come
from the tempest tests that are under review, as currently I don't
think Ceilometer generates any errors as it's not tested.

So I'm not sure we want to whitelist anything?
So I tested this with https://review.openstack.org/#/c/59443/. There are 
flaky log errors coming from ceilometer. You
can see that the build at 12:27 passed, but the last build failed twice, 
each with a different set of errors. So the whitelist needs to remain 
and the ceilometer team should remove each entry when it is believed to 
be unnecessary.


The tricky part is going to be for us to fix Ceilometer on one side and
re-run Tempest reviews on the other side once a potential fix is merged.
This is another use case for the promised 
dependent-patch-between-projects thing.


 -David





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Salvatore Orlando
Hi,

As you might have noticed, there has been some progress on parallel tests
for neutron.
In a nutshell:
* Armando fixed the issue with IP address exhaustion on the public network
[1]
* Salvatore now has a patch which has a 50% success rate (the last failures
are because of me playing with it) [2]
* Salvatore is looking at putting back on track full isolation [3]
* All the bugs affecting parallel tests can be queried here [10]
* This blueprint tracks progress made towards enabling parallel testing [11]

-
The long story is as follows:
Parallel testing basically is not working because parallelism means higher
contention for public IP addresses. This was made worse by the fact that
some tests created a router with a gateway set but never deleted it. As a
result, there were even fewer addresses in the public range.
[1] was already merged and with [4] we shall make the public network for
neutron a /24 (the full tempest suite is still showing a lot of IP
exhaustion errors).

However, this was just one part of the issue. The biggest part actually
lay with the OVS agent and its interactions with the ML2 plugin. A few
patches ([5], [6], [7]) were already pushed to reduce the number of
notifications sent from the plugin to the agent. However, the agent is
organised in a way such that a notification is immediately acted upon thus
preempting the main agent loop, which is the one responsible for wiring
ports into networks. Considering the high level of notifications currently
sent from the server, this becomes particularly wasteful if one considers
that security group membership updates for ports trigger global
iptables-save/restore commands which are often executed in rapid
succession, thus resulting in long delays for wiring VIFs to the
appropriate network.
With the patch [2] we are refactoring the agent to make it more efficient.
This is not production code, but once we'll get close to 100% pass for
parallel testing this patch will be split in several patches, properly
structured, and hopefully easy to review.
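
To give an idea of the direction taken in [2], the sketch below shows the general
pattern of deferring notification handling to the main loop; it is illustrative
only and not the actual patch:

import eventlet


class OVSAgentSketch(object):
    def __init__(self):
        self.updated_ports = set()

    def port_update(self, context, port=None, **kwargs):
        # RPC callback: just record the port, defer the heavy lifting
        self.updated_ports.add(port['id'])

    def rpc_loop(self, poll_interval=2):
        while True:
            ports, self.updated_ports = self.updated_ports, set()
            if ports:
                # wire VIFs and refresh security groups once per batch,
                # instead of one iptables-save/restore per notification
                self.process_ports(ports)
            eventlet.sleep(poll_interval)

    def process_ports(self, ports):
        pass  # placeholder for the real wiring/iptables work
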
It is worth noting there is still work to do: in some cases the loop still
takes too long, and ovs commands have been observed taking up to 10
seconds to complete. To this aim, it is worth considering use of async
processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
limiting queries to ovs database.
We're still unable to explain some failures where the network appears to be
correctly wired (floating IP, router port, dhcp port, and VIF port), but
the SSH connection fails. We're hoping to reproduce this failure pattern
locally.

Finally, the tempest patch for full tempest isolation should be made usable
soon. Having another experimental job for it is something worth considering,
as for some reason it is not always easy to reproduce the same failure modes
exhibited on the gate.

Regards,
Salvatore

[1] https://review.openstack.org/#/c/58054/
[2] https://review.openstack.org/#/c/57420/
[3] https://review.openstack.org/#/c/53459/
[4] https://review.openstack.org/#/c/58284/
[5] https://review.openstack.org/#/c/58860/
[6] https://review.openstack.org/#/c/58597/
[7] https://review.openstack.org/#/c/58415/
[8] https://review.openstack.org/#/c/45676/
[9] https://bugs.launchpad.net/neutron/+bug/1177973
[10]
https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallelfield.tags_combinator=ANY
[11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-02 Thread Eric Windisch
I'd like to move this conversation along. It seems to have both
stalled and digressed.

 Just like with compute drivers, the raised bar applies to existing
 drivers, not just new ones.  We just gave a significant amount of
 lead-time to reach it.

I agree with this. The bar needs to be raised. The Tempest tests we
have should be passing. If they can't pass, they shouldn't be skipped;
the underlying support in Nova should be fixed. Is anyone arguing
against this?

The GCE code will need Tempest tests too (thankfully, the proposed
patches include Tempest tests). While it might be an even greater
uphill battle for GCE, versus EC2, to gain community momentum, it
cannot ever gain that momentum without an opportunity to do so. I
agree that Stackforge might normally be a reasonable path here, but I
share Mark's reservations around tracking the internal Nova APIs.

 I'm actually quite optimistic about the future of EC2 in Nova.  There is
 certainly interest.  I've followed up with Rohit who led the session at
 the design summit and we should see a sub-team ramping up soon.  The
 things we talked about the sub-team focusing on are in-line with moving

It sounds like the current model and process, while not perfect, aren't
too dysfunctional. Attempting to move the EC2 or GCE code into a
Stackforge repository might kill them before they can reach that bar
you're looking to set.

What more is needed from the blueprint or the patch authors to proceed?

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [Security]

2013-12-02 Thread Paul Montgomery
Yep, certainly interested in a broader OpenStack view of security; sign me
up. :)


On 11/27/13 9:06 PM, Adrian Otto adrian.o...@rackspace.com wrote:


On Nov 27, 2013, at 11:39 AM, Nathan Kinder nkin...@redhat.com
 wrote:

 On 11/27/2013 08:58 AM, Paul Montgomery wrote:
 I created some relatively high level security best practices that I
 thought would apply to Solum.  I don't think it is ever too early to
get
 mindshare around security so that developers keep that in mind
throughout
 the project.  When a design decision point could easily go two ways,
 perhaps these guidelines can sway direction towards a more secure path.
 
 This is a living document, please contribute and let's discuss topics.
 I've worn a security hat in various jobs so I'm always interested. :)
 Also, I realize that many of these features may not directly be
 encapsulated by Solum but rather components such as KeyStone or
Horizon.
 
 https://wiki.openstack.org/wiki/Solum/Security
 
 This is a great start.
 
 I think we really need to work towards a set of overarching security
 guidelines and best practices that can be applied to all of the
 projects.  I know that each project may have unique security needs, but
 it would be really great to have a central set of agreed upon
 cross-project guidelines that a developer can follow.
 
 This is a goal that we have in the OpenStack Security Group.  I am happy
 to work on coordinating this.  For defining these guidelines, I think a
 working group approach might be best, where we have an interested
 representative from each project be involved.  Does this approach make
 sense to others?

Nathan, that sounds great. I'd like Paul Montgomery to be involved for
Solum, and possibly others from Solum if we have volunteers with this
skill set to join him.

 
 Thanks,
 -NGK
 
 
 I would like to build on this list and create blueprints or tasks
based on
 topics that the community agrees upon.  We will also need to start
 thinking about timing of these features.
 
 Is there an OpenStack standard for code comments that highlight
potential
 security issues to investigate at a later point?  If not, what would
the
 community think of making a standard for Solum?  I would like to
identify
 these areas early while the developer is still engaged/thinking about
the
 code.  It is always harder to go back later and find everything in my
 experience.  Perhaps something like:
 
 # (SECURITY) This exception may contain database field data which could
 expose passwords to end users unless filtered.
 
 Or
 
 # (SECURITY) The admin password is read in plain text from a
configuration
 file.  We should fix this later.
 
 Regards,
 Paulmo
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-02 Thread Kurt Griffiths
Folks,

Want to surface events to end users?

Following up on some conversations we had at the summit, I’d like to get folks 
together on IRC tomorrow to crystallize the design for a notifications project 
under the Marconi program. The project’s goal is to create a service for 
surfacing events to end users (where a user can be a cloud app developer, or a 
customer using one of those apps). For example, a developer may want to be 
notified when one of their servers is low on disk space. Alternatively, a user 
of MyHipsterApp may want to get a text when one of their friends invites them 
to listen to That Band You’ve Never Heard Of.

Interested? Please join me and other members of the Marconi team tomorrow, Dec. 
3rd, for a brainstorming session in #openstack-marconi at 1500 UTC
(http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0).
Your contributions are crucial to making this project awesome.

I’ve seeded an etherpad for the discussion:

https://etherpad.openstack.org/p/marconi-notifications-brainstorm

@kgriffs




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-12-02 Thread Jordan OMara

On 01/12/13 00:27 -0500, Tzu-Mainn Chen wrote:
I think we may all be approaching the planning of this project in the wrong way, because of confusions such as: 


Well, I think there is one small misunderstanding. I've never said that
manual way should be primary workflow for us. I agree that we should lean
toward as much automation and smartness as possible. But in the same time, I
am adding that we need manual fallback for user to change that smart
decision.



Primary way would be to let TripleO decide, where the stuff go. I think we
agree here.
That's a pretty fundamental requirement that both sides seem to agree upon - but that agreement got lost in the discussions of what feature should come in which release, etc. That seems backwards to me. 

I think it's far more important that we list out requirements and create a design document that people agree upon first. Otherwise, we run the risk of focusing on feature X for release 1 without ensuring that our architecture supports feature Y for release 2. 

To make this example more specific: it seems clear that everyone agrees that the current Tuskar design (where nodes must be assigned to racks, which are then used as the primary means of manipulation) is not quite correct. Instead, we'd like to introduce a philosophy where we assume that users don't want to deal with homogeneous nodes individually, instead letting TripleO make decisions for them. 



I agree; getting buy-in on a design document up front is going to
save us future anguish

Regarding this - I think we may want to clarify what the purpose of our releases are at the moment. Personally, I don't think our current planning is about several individual product releases that we expect to be production-ready and usable by the world; I think it's about milestone releases which build towards a more complete product. 

From that perspective, if I were a prospective user, I would be less concerned with each release containing exactly what I need. Instead, what I would want most out of the project is: 

a) frequent stable releases (so I can be comfortable with the pace of development and the quality of code) 
b) design documentation and wireframes (so I can be comfortable that the architecture will support features I need) 
c) a roadmap (so I have an idea when my requirements will be met) 



+1
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


pgp7rupTEuBS0.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti

On 02 Dec 2013, at 04:52 , Kyle Mestery (kmestery) kmest...@cisco.com wrote:

 On Dec 1, 2013, at 4:10 PM, Alessandro Pilotti 
 apilo...@cloudbasesolutions.com wrote:
 
 Hi all,
 
 At Cloudbase we are heavily using VMware Workstation and Fusion for 
 development, demos and PoCs, so we thought: why not replacing our automation 
 scripts with a fully functional Nova driver and use OpenStack APIs and Heat 
 for the automation? :-)
 
 Here’s the repo for this Nova driver project: 
 https://github.com/cloudbase/nova-vix-driver/
 
 The driver is already working well and supports all the basic features you’d 
 expect from a Nova driver, including a VNC console accessible via Horizon. 
 Please refer to the project README for additional details.
 The usage of CoW images (linked clones) makes deploying images particularly 
 fast, which is a good thing when you develop or run demos. Heat or Puppet, 
 Chef, etc make the whole process particularly sweet of course. 
 
 
 The main idea was to create something to be used in place of solutions like 
 Vagrant, with a few specific requirements:
 
 1) Full support for nested virtualization (VMX and EPT).
 
 For the time being the VMware products are the only ones supporting Hyper-V 
 and KVM as guests, so this became a mandatory path, at least until EPT 
 support will be fully functional in KVM.
 This rules out Vagrant as an option. Their VMware support is not free and 
 beside that they don’t support nested virtualization (yet, AFAIK). 
 
 Other workstation virtualization options, including VirtualBox and Hyper-V 
 are currently ruled out due to the lack of support for this feature as well.
 Beside that Hyper-V and VMware Workstation VMs can work side by side on 
 Windows 8.1, all you need is to fire up two nova-compute instances.
 
 2) Work on Windows, Linux and OS X workstations
 
 Here’s a snapshot of Nova compute  running on OS X and showing Novnc 
 connected to a Fusion VM console:
 
 https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
 
 3) Use OpenStack APIs
 
 We wanted to have a single common framework for automation and bring 
 OpenStack on the workstations. 
 Beside that, dogfooding is a good thing. :-) 
 
 4) Offer a free alternative for community contributions
 
 VMware Player is fair enough, even with the “non commercial use” limits, etc.
 
 Communication with VMware components is based on the freely available Vix 
 SDK libs, using ctypes to call the C APIs. The project provides a library to 
 easily interact with the VMs, in case it sould be needed, e.g.:
 
 from vix import vixutils
 with vixutils.VixConnection() as conn:
     with conn.open_vm(vmx_path) as vm:
         vm.power_on()
 
 We though about using libvirt, since it has support for those APIs as well, 
 but it was way easier to create a lightweight driver from scratch using the 
 Vix APIs directly.
 
 TODOs:
 
 1) A minimal Neutron agent for attaching networks (now all networks are 
 attached to the NAT interface).
 2) Resize disks on boot based on the flavor size
 3) Volume attach / detach (we can just reuse the Hyper-V code for the 
 Windows case)
 4) Same host resize
 
 Live migration is not making particularly sense in this context, so the 
 implementation is not planned. 
 
 Note: we still have to commit the unit tests. We’ll clean them during next 
 week and push them.
 
 
 As usual, any idea, suggestions and especially contributions are highly 
 welcome!
 
 We’ll follow up with a blog post with some additional news related to this 
 project quite soon. 
 
 
 This is very cool Alessandro, thanks for sharing! Any plans to try and get 
 this
 nova driver upstreamed?

Thanks Kyle!

My personal opinion is that drivers should stay outside of Nova in a separate 
project. That said, this driver is way easier to maintain than the Hyper-V one, 
for example, so I wouldn’t have objections if this is what the community 
prefers.
On the other hand, this driver is unlikely to ever have a CI, as it’s not a 
requirement for the expected usage, and that wouldn’t fit with the current 
(correct, IMO) decision that only drivers with a CI gate will stay in Nova 
starting with Icehouse.
That said, of course I wouldn’t have anything against somebody (VMware?) 
volunteering for the CI. ;-)

IMO, as Chmouel also suggested in this thread, a Stackforge project could be a 
good start. It would make it easier for people to contribute, and we’d have 
a couple of release cycles to decide what to do with it.

Alessandro


 Thanks,
 Kyle
 
 Thanks,
 
 Alessandro
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

[openstack-dev] [Cinder] Cloning vs copying images

2013-12-02 Thread Dmitry Borodaenko
Hi OpenStack, particularly Cinder backend developers,

Please consider the following two competing fixes for the same problem:

https://review.openstack.org/#/c/58870/
https://review.openstack.org/#/c/58893/

The problem being fixed is that some backends, specifically Ceph RBD,
can only boot from volumes created from images in a certain format (RAW,
in RBD's case). When an image in a different format gets cloned into
a volume, that volume cannot be booted from. The obvious solution is to
refuse the clone operation and copy/convert the image instead.

And now the principal question: is it safe to assume that this
restriction applies to all backends? Should the fix enforce copy of
non-RAW images for all backends? Or should the decision whether to
clone or copy the image be made in each backend?

The first fix puts this logic into the RBD backend, and makes the changes
necessary for all other backends to have enough information to make a
similar decision if needed. The problem with this approach is that
it's relatively intrusive, because the driver clone_image() method
signature has to be changed.
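
To make the first option a bit more concrete, here is a hypothetical sketch
of what the per-backend decision could look like as a driver method (the
names and signature below are illustrative only, not the actual change
under review):

    # Hypothetical fragment of the RBD driver class; illustrative only.
    def clone_image(self, volume, image_location, image_meta):
        if image_meta.get('disk_format') != 'raw':
            # Signal "not cloned" so the manager falls back to the
            # copy_image_to_volume() path, which converts the image.
            return None, False
        return self._clone_from_image(volume, image_location), True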

The second fix involves significantly fewer code changes, but it does
prevent cloning non-RAW images for all backends. I am not sure whether this
is a real problem or not.

Can anyone point at a backend that can boot from a volume cloned from
a non-RAW image? I can think of one candidate: GPFS is a file-based
backend, and GPFS has a file clone operation. Is the GPFS backend able
to boot from, say, a QCOW2 volume?

Thanks,

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fw: [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-12-02 Thread Collins, Sean (Contractor)
Thank you - that'll work great for the short term until Nachi's patch
lands.

We still need a +2 from another Nova dev for a patch that disables
hairpinning when Neutron is being used [1].

The patch to allow ICMPv6 into instances for SLAAC
just landed today[2]. So we're making progress.

[1] https://review.openstack.org/#/c/56381/
[2] https://review.openstack.org/#/c/53028/

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti

On 02 Dec 2013, at 20:54 , Vishvananda Ishaya 
vishvana...@gmail.com wrote:

Very cool stuff!

Seeing your special glance properties for iso and floppy connections made me 
think
of something. They seem, but it would be nice if they were done in a way that 
would
work in any hypervisor.

I think we have sufficient detail in block_device_mapping to do essentially 
the
same thing, and it would be awesome to verify and add some nicities to the nova
cli, something like:


Thanks Vish!

I was also thinking about bringing this to any hypervisor.

About the block storage option, we would have an issue with Hyper-V. We need 
local (or SMB-accessible) ISOs and floppy images to be assigned to the 
instances.
IMO this isn’t a bad thing: ISOs could potentially be shared among instances in 
read-only mode, and it’s easy to pull them from Glance during live migration.
Floppy images, on the other hand, are of negligible size. :-)


nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to blank 
image)


This would be great! For the reasons above, I’d still go with some simple 
extensions to pass the ISO and floppy refs in the instance data instead of 
block device mappings.

There’s also one additional scenario that would greatly benefit from those 
options: our Windows Heat templates (think SQL Server, Exchange, etc.) 
need access to the product media for installation, and due to
license constraints the tenant needs to provide the media; we cannot simply 
download it. So far we have solved this by attaching a volume containing the 
install media, but it’s of course a very unnatural process for the user.

Alessandro


Clearly that requires a few things:

1) vix block device mapping support
2) cli ux improvements
3) testing!

Vish

On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

Hi all,

At Cloudbase we are heavily using VMware Workstation and Fusion for 
development, demos and PoCs, so we thought: why not replacing our automation 
scripts with a fully functional Nova driver and use OpenStack APIs and Heat for 
the automation? :-)

Here’s the repo for this Nova driver project: 
https://github.com/cloudbase/nova-vix-driver/

The driver is already working well and supports all the basic features you’d 
expect from a Nova driver, including a VNC console accessible via Horizon. 
Please refer to the project README for additional details.
The usage of CoW images (linked clones) makes deploying images particularly 
fast, which is a good thing when you develop or run demos. Heat or Puppet, 
Chef, etc make the whole process particularly sweet of course.


The main idea was to create something to be used in place of solutions like 
Vagrant, with a few specific requirements:

1) Full support for nested virtualization (VMX and EPT).

For the time being the VMware products are the only ones supporting Hyper-V and 
KVM as guests, so this became a mandatory path, at least until EPT support will 
be fully functional in KVM.
This rules out Vagrant as an option. Their VMware support is not free and 
beside that they don’t support nested virtualization (yet, AFAIK).

Other workstation virtualization options, including VirtualBox and Hyper-V are 
currently ruled out due to the lack of support for this feature as well.
Beside that Hyper-V and VMware Workstation VMs can work side by side on Windows 
8.1, all you need is to fire up two nova-compute instances.

2) Work on Windows, Linux and OS X workstations

Here’s a snapshot of Nova compute  running on OS X and showing Novnc connected 
to a Fusion VM console:

https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png

3) Use OpenStack APIs

We wanted to have a single common framework for automation and bring OpenStack 
on the workstations.
Beside that, dogfooding is a good thing. :-)

4) Offer a free alternative for community contributions

VMware Player is fair enough, even with the “non commercial use” limits, etc.

Communication with VMware components is based on the freely available Vix SDK 
libs, using ctypes to call the C APIs. The project provides a library to easily 
interact with the VMs, in case it sould be needed, e.g.:

from vix import vixutils
with vixutils.VixConnection() as conn:
    with conn.open_vm(vmx_path) as vm:
        vm.power_on()

We though about using libvirt, since it has support for those APIs as well, but 
it was way easier to create a lightweight driver from scratch using the Vix 
APIs directly.

TODOs:

1) A minimal Neutron agent for attaching networks (now all networks are 
attached to the NAT interface).
2) Resize disks on boot based on the flavor size
3) Volume attach / detach (we can just reuse the Hyper-V code for the Windows 
case)
4) Same host resize

Live migration is not making particularly sense in this context, so the 
implementation is not planned.

Note: we still have to commit the unit tests. We’ll clean them during 

Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Joe Gordon
On Dec 2, 2013 9:04 PM, Salvatore Orlando sorla...@nicira.com wrote:

 Hi,

 As you might have noticed, there has been some progress on parallel tests
for neutron.
 In a nutshell:
 * Armando fixed the issue with IP address exhaustion on the public
network [1]
 * Salvatore has now a patch which has a 50% success rate (the last
failures are because of me playing with it) [2]
 * Salvatore is looking at putting back on track full isolation [3]
 * All the bugs affecting parallel tests can be queried here [10]
 * This blueprint tracks progress made towards enabling parallel testing
[11]

 -
 The long story is as follows:
 Parallel testing basically is not working because parallelism means
higher contention for public IP addresses. This was made worse by the fact
that some tests created a router with a gateway set but never deleted it.
As a result, there were even less addresses in the public range.
 [1] was already merged and with [4] we shall make the public network for
neutron a /24 (the full tempest suite is still showing a lot of IP
exhaustion errors).

 However, this was just one part of the issue. The biggest part actually
lied with the OVS agent and its interactions with the ML2 plugin. A few
patches ([5], [6], [7]) were already pushed to reduce the number of
notifications sent from the plugin to the agent. However, the agent is
organised in a way such that a notification is immediately acted upon thus
preempting the main agent loop, which is the one responsible for wiring
ports into networks. Considering the high level of notifications currently
sent from the server, this becomes particularly wasteful if one consider
that security membership updates for ports trigger global
iptables-save/restore commands which are often executed in rapid
succession, thus resulting in long delays for wiring VIFs to the
appropriate network.
 With the patch [2] we are refactoring the agent to make it more
efficient. This is not production code, but once we'll get close to 100%
pass for parallel testing this patch will be split in several patches,
properly structured, and hopefully easy to review.
 It is worth noting there is still work to do: in some cases the loop
still takes too long, and it has been observed ovs commands taking even 10
seconds to complete. To this aim, it is worth considering use of async
processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
limiting queries to ovs database.
 We're still unable to explain some failures where the network appears to
be correctly wired (floating IP, router port, dhcp port, and VIF port), but
the SSH connection fails. We're hoping to reproduce this failure patter
locally.

 Finally, the tempest patch for full tempest isolation should be made
usable soon. Having another experimental job for it is something worth
considering as for some reason it is not always easy reproducing the same
failure modes exhibited on the gate.

 Regards,
 Salvatore


Awesome work, thanks for the update.

 [1] https://review.openstack.org/#/c/58054/
 [2] https://review.openstack.org/#/c/57420/
 [3] https://review.openstack.org/#/c/53459/
 [4] https://review.openstack.org/#/c/58284/
 [5] https://review.openstack.org/#/c/58860/
 [6] https://review.openstack.org/#/c/58597/
 [7] https://review.openstack.org/#/c/58415/
 [8] https://review.openstack.org/#/c/45676/
 [9] https://bugs.launchpad.net/neutron/+bug/1177973
 [10]
https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallelfield.tags_combinator=ANY
 [11]
https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with rabbitmq on HA controller failure...anyone seen this?

2013-12-02 Thread Ravi Chunduru
We did have the same problem in our deployment.  Here is a brief
description of what we saw and how we fixed it.
http://l4tol7.blogspot.com/2013/12/openstack-rabbitmq-issues.html


On Mon, Dec 2, 2013 at 10:37 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:


 On Nov 29, 2013, at 9:24 PM, Chris Friesen chris.frie...@windriver.com
 wrote:

  On 11/29/2013 06:37 PM, David Koo wrote:
  On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
  We're currently running Grizzly (going to Havana soon) and we're
  running into an issue where if the active controller is ungracefully
  killed then nova-compute on the compute node doesn't properly
  connect to the new rabbitmq server on the newly-active controller
  node.
 
  Interestingly, killing and restarting nova-compute on the compute
  node seems to work, which implies that the retry code is doing
  something less effective than the initial startup.
 
  Has anyone doing HA controller setups run into something similar?
 
  As a followup, it looks like if I wait for 9 minutes or so I see a
 message in the compute logs:
 
  2013-11-30 00:02:14.756 1246 ERROR nova.openstack.common.rpc.common [-]
 Failed to consume message from queue: Socket closed
 
  It then reconnects to the AMQP server and everything is fine after that.
  However, any instances that I tried to boot during those 9 minutes stay
 stuck in the BUILD status.
 
 
 
  So the rabbitmq server and the controller are on the same node?
 
  Yes, they are.
 
   My
  guess is that it's related to this bug 856764 (RabbitMQ connections
  lack heartbeat or TCP keepalives). The gist of it is that since there
  are no heartbeats between the MQ and nova-compute, if the MQ goes down
  ungracefully then nova-compute has no way of knowing. If the MQ goes
  down gracefully then the MQ clients are notified and so the problem
  doesn't arise.
 
  Sounds about right.
 
  We got bitten by the same bug a while ago when our controller node
  got hard reset without any warning!. It came down to this bug (which,
  unfortunately, doesn't have a fix yet). We worked around this bug by
  implementing our own crude fix - we wrote a simple app to periodically
  check if the MQ was alive (write a short message into the MQ, then
  read it out again). When this fails n-times in a row we restart
  nova-compute. Very ugly, but it worked!
 
  Sounds reasonable.
 
  I did notice a kombu heartbeat change that was submitted and then backed
 out again because it was buggy. I guess we're still waiting on the real fix?

 Hi Chris,

 This general problem comes up a lot, and one fix is to use keepalives.
 Note that more is needed if you are using multi-master rabbitmq, but for
 failover I have had great success with the following (also posted to the
 bug):

 When a connection to a socket is cut off completely, the receiving side
 doesn't know that the connection has dropped, so you can end up with a
 half-open connection. The general solution for this in linux is to turn on
 TCP_KEEPALIVES. Kombu will enable keepalives if the version number is high
 enough (1.0 iirc), but rabbit needs to be specially configured to send
 keepalives on the connections that it creates.

 So solving the HA issue generally involves a rabbit config with a section
 like the following:

 [
  {rabbit, [{tcp_listen_options, [binary,
 {packet, raw},
 {reuseaddr, true},
 {backlog, 128},
 {nodelay, true},
 {exit_on_close, false},
 {keepalive, true}]}
   ]}
 ].

 Then you should also shorten the keepalive sysctl settings or it will
 still take ~2 hrs to terminate the connections:

 echo 5  /proc/sys/net/ipv4/tcp_keepalive_time
 echo 5  /proc/sys/net/ipv4/tcp_keepalive_probes
 echo 1  /proc/sys/net/ipv4/tcp_keepalive_intvl

 Obviously this should be done in a sysctl config file instead of at the
 command line. Note that if you only want to shorten the rabbit keepalives
 but keep everything else as a default, you can use an LD_PRELOAD library to
 do so. For example you could use:

 https://github.com/meebey/force_bind/blob/master/README

 Vish

 
  Chris
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Eugene Nikanorov
Salvatore and Armando, thanks for your great work and detailed explanation!

Eugene.


On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Dec 2, 2013 9:04 PM, Salvatore Orlando sorla...@nicira.com wrote:
 
  Hi,
 
  As you might have noticed, there has been some progress on parallel
 tests for neutron.
  In a nutshell:
  * Armando fixed the issue with IP address exhaustion on the public
 network [1]
  * Salvatore has now a patch which has a 50% success rate (the last
 failures are because of me playing with it) [2]
  * Salvatore is looking at putting back on track full isolation [3]
  * All the bugs affecting parallel tests can be queried here [10]
  * This blueprint tracks progress made towards enabling parallel testing
 [11]
 
  -
  The long story is as follows:
  Parallel testing basically is not working because parallelism means
 higher contention for public IP addresses. This was made worse by the fact
 that some tests created a router with a gateway set but never deleted it.
 As a result, there were even less addresses in the public range.
  [1] was already merged and with [4] we shall make the public network for
 neutron a /24 (the full tempest suite is still showing a lot of IP
 exhaustion errors).
 
  However, this was just one part of the issue. The biggest part actually
 lied with the OVS agent and its interactions with the ML2 plugin. A few
 patches ([5], [6], [7]) were already pushed to reduce the number of
 notifications sent from the plugin to the agent. However, the agent is
 organised in a way such that a notification is immediately acted upon thus
 preempting the main agent loop, which is the one responsible for wiring
 ports into networks. Considering the high level of notifications currently
 sent from the server, this becomes particularly wasteful if one consider
 that security membership updates for ports trigger global
 iptables-save/restore commands which are often executed in rapid
 succession, thus resulting in long delays for wiring VIFs to the
 appropriate network.
  With the patch [2] we are refactoring the agent to make it more
 efficient. This is not production code, but once we'll get close to 100%
 pass for parallel testing this patch will be split in several patches,
 properly structured, and hopefully easy to review.
  It is worth noting there is still work to do: in some cases the loop
 still takes too long, and it has been observed ovs commands taking even 10
 seconds to complete. To this aim, it is worth considering use of async
 processes introduced in [8] as well as leveraging ovsdb monitoring [9] for
 limiting queries to ovs database.
  We're still unable to explain some failures where the network appears to
 be correctly wired (floating IP, router port, dhcp port, and VIF port), but
 the SSH connection fails. We're hoping to reproduce this failure patter
 locally.
 
  Finally, the tempest patch for full tempest isolation should be made
 usable soon. Having another experimental job for it is something worth
 considering as for some reason it is not always easy reproducing the same
 failure modes exhibited on the gate.
 
  Regards,
  Salvatore
 

 Awesome work, thanks for the update.

  [1] https://review.openstack.org/#/c/58054/
  [2] https://review.openstack.org/#/c/57420/
  [3] https://review.openstack.org/#/c/53459/
  [4] https://review.openstack.org/#/c/58284/
  [5] https://review.openstack.org/#/c/58860/
  [6] https://review.openstack.org/#/c/58597/
  [7] https://review.openstack.org/#/c/58415/
  [8] https://review.openstack.org/#/c/45676/
  [9] https://bugs.launchpad.net/neutron/+bug/1177973
  [10]
 https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallelfield.tags_combinator=ANY
  [11]
 https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 9:12 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/02/2013 10:59 AM, Gary Kotton wrote:
 I think that this is certainly different. It is something that we we want
 and need a user facing API.
 Examples:
 - aggregates
 - per host scheduling
 - instance groups
 
 Etc.
 
 That is just taking the nova options into account and not the other
 modules. How doul one configure that we would like to have storage
 proximity for a VM? This is where things start to get very interesting and
 enable the cross service scheduling (which is the goal of this no?).
 
 An explicit part of this plan is that all of the things you're talking
 about are *not* in scope until the forklift is complete and the new
 thing is a functional replacement for the existing nova-scheduler.  We
 want to get the project established and going so that it is a place
 where this work can take place.  We do *not* want to slow down the work
 of getting the project established by making these things a prerequisite.


I'm all for the forklift approach, since I don't think we will make any progress
if we stall by going back into REST API design.

I'm going to reopen a can of worms, though. I think the most difficult part of
the forklift will be moving stuff out of the existing databases into
a new database. We had to deal with this in cinder and having a db export
and import strategy is annoying to say the least. Managing the db-related
code was the majority of the work during the cinder split.

I think this forklift will be way easier if we merge the no-db-scheduler[1]
patches first before separating the scheduler out into its own project:

https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

I think the effort to get this finished is smaller than the effort to write
db migrations and syncing scripts for the new project.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Russell Bryant
On 12/02/2013 03:31 PM, Vishvananda Ishaya wrote:
 
 On Dec 2, 2013, at 9:12 AM, Russell Bryant rbry...@redhat.com
 wrote:
 
 On 12/02/2013 10:59 AM, Gary Kotton wrote:
 I think that this is certainly different. It is something that
 we we want and need a user facing API. Examples: - aggregates -
 per host scheduling - instance groups
 
 Etc.
 
 That is just taking the nova options into account and not the
 other modules. How doul one configure that we would like to
 have storage proximity for a VM? This is where things start to
 get very interesting and enable the cross service scheduling
 (which is the goal of this no?).
 
 An explicit part of this plan is that all of the things you're
 talking about are *not* in scope until the forklift is complete
 and the new thing is a functional replacement for the existing
 nova-scheduler.  We want to get the project established and going
 so that it is a place where this work can take place.  We do
 *not* want to slow down the work of getting the project
 established by making these things a prerequisite.
 
 
 I'm all for the forklift approach since I don't think we will make
 any progress if we stall going back into REST api design.
 
 I'm going to reopen a can of worms, though. I think the most
 difficult part of the forklift will be moving stuff out of the
 existing databases into a new database. We had to deal with this in
 cinder and having a db export and import strategy is annoying to
 say the least. Managing the db-related code was the majority of the
 work during the cinder split.
 
 I think this forklift will be way easier if we merge the
 no-db-scheduler[1] patches first before separating the scheduler
 out into its own project:
 
 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
 
 I think the effort to get this finished is smaller than the effort
 to write db migrations and syncing scripts for the new project.

Agreed that this should make it easier.

My thought was that the split out scheduler could just import nova's
db API and use it against nova's database directly until this work
gets done.  If the forklift goes that route instead of any sort of
work to migrate data from one db to another, they could really happen
in any order, I think.
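
Just to illustrate the interim state I have in mind (module paths are
nova's today; this is only a sketch, not a proposal for the actual layout):

    # Sketch: the forklifted scheduler keeps using nova's db API directly
    # against nova's database until the no-db-scheduler work lands.
    from nova import context as nova_context
    from nova import db

    def get_all_compute_nodes():
        # Roughly what the host manager does today to build host state.
        ctxt = nova_context.get_admin_context()
        return db.compute_node_get_all(ctxt)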

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-02 Thread Joshua Harlow
Thanks for writing this up; looking forward to seeing this happen so that
oslo.messaging can be used outside of the core OpenStack projects (and be
used in libraries that do not want to force an oslo.cfg model onto users of
said libraries).

Any idea of a timeline as to when this would be reflected in
https://github.com/openstack/oslo.messaging/ (even a rough idea is fine)?

-Josh

On 12/2/13 7:45 AM, Julien Danjou jul...@danjou.info wrote:

On Mon, Nov 18 2013, Julien Danjou wrote:

   https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg

So I've gone through the code and started to write a plan on how I'd do
things:

  https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg

I don't think I missed too much, though I didn't run into all tiny
details.

Please feel free to tell me if I miss anything obvious, otherwise I'll
try to start submitting patches, one at a time, to get this into shape
step by step.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][TripleO] Nested resources

2013-12-02 Thread Fox, Kevin M
Hi all,

I just want to run a crazy idea up the flagpole. TripleO has the concept of an 
undercloud and an overcloud. In starting to experiment with Docker, I see a 
pattern starting to emerge.

 * As a User, I may want to allocate a BareMetal node so that it is entirely 
mine. I may want to run multiple VMs on it to reduce my own cost. Now I have 
to manage the BareMetal nodes myself or nest OpenStack into them.
 * As a User, I may want to allocate a VM. I then want to run multiple Docker 
containers on it to use it more efficiently. Now I have to manage the VMs 
myself or nest OpenStack into them.
 * As a User, I may want to allocate a BareMetal node so that it is entirely 
mine. I then want to run multiple Docker containers on it to use it more 
efficiently. Now I have to manage the BareMetal nodes myself or nest OpenStack 
into them.

I think this can then be generalized to:
As a User, I would like to ask for resources of one type (One AZ?), and be able 
to delegate resources back to Nova so that I can use Nova to subdivide and give 
me access to my resources as a different type. (As a different AZ?)

I think this could potentially cover some of the TripleO stuff without needing 
an over/undercloud. For that use case, all the BareMetal nodes could be added 
to Nova as such, allocated by the services tenant as running a nested VM 
image type resource stack, and then made available to all tenants. Sysadmins 
could then dynamically shift resources from VM-providing nodes to BareMetal 
nodes and back as needed.

This allows a user to allocate some raw resources as a group, then schedule 
higher level services to run only in that group, all with the existing api.

Just how crazy an idea is this?

Thanks,
Kevin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Chris Friesen

On 12/02/2013 02:31 PM, Vishvananda Ishaya wrote:


I'm going to reopen a can of worms, though. I think the most difficult part of
the forklift will be moving stuff out of the existing databases into
a new database.


Do we really need to move it to a new database for the forklift?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2013-12-02 Thread Kyle Mestery (kmestery)
Yes, this is all great Salvatore and Armando! Thank you for all of this work
and the explanation behind it all.

Kyle

On Dec 2, 2013, at 2:24 PM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Salvatore and Armando, thanks for your great work and detailed explanation!
 
 Eugene.
 
 
 On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 On Dec 2, 2013 9:04 PM, Salvatore Orlando sorla...@nicira.com wrote:
 
  Hi,
 
  As you might have noticed, there has been some progress on parallel tests 
  for neutron.
  In a nutshell:
  * Armando fixed the issue with IP address exhaustion on the public network 
  [1]
  * Salvatore has now a patch which has a 50% success rate (the last failures 
  are because of me playing with it) [2]
  * Salvatore is looking at putting back on track full isolation [3]
  * All the bugs affecting parallel tests can be queried here [10]
  * This blueprint tracks progress made towards enabling parallel testing [11]
 
  -
  The long story is as follows:
  Parallel testing basically is not working because parallelism means higher 
  contention for public IP addresses. This was made worse by the fact that 
  some tests created a router with a gateway set but never deleted it. As a 
  result, there were even less addresses in the public range.
  [1] was already merged and with [4] we shall make the public network for 
  neutron a /24 (the full tempest suite is still showing a lot of IP 
  exhaustion errors).
 
  However, this was just one part of the issue. The biggest part actually 
  lied with the OVS agent and its interactions with the ML2 plugin. A few 
  patches ([5], [6], [7]) were already pushed to reduce the number of 
  notifications sent from the plugin to the agent. However, the agent is 
  organised in a way such that a notification is immediately acted upon thus 
  preempting the main agent loop, which is the one responsible for wiring 
  ports into networks. Considering the high level of notifications currently 
  sent from the server, this becomes particularly wasteful if one consider 
  that security membership updates for ports trigger global 
  iptables-save/restore commands which are often executed in rapid 
  succession, thus resulting in long delays for wiring VIFs to the 
  appropriate network.
  With the patch [2] we are refactoring the agent to make it more efficient. 
  This is not production code, but once we'll get close to 100% pass for 
  parallel testing this patch will be split in several patches, properly 
  structured, and hopefully easy to review.
  It is worth noting there is still work to do: in some cases the loop still 
  takes too long, and it has been observed ovs commands taking even 10 
  seconds to complete. To this aim, it is worth considering use of async 
  processes introduced in [8] as well as leveraging ovsdb monitoring [9] for 
  limiting queries to ovs database.
  We're still unable to explain some failures where the network appears to be 
  correctly wired (floating IP, router port, dhcp port, and VIF port), but 
  the SSH connection fails. We're hoping to reproduce this failure patter 
  locally.
 
  Finally, the tempest patch for full tempest isolation should be made usable 
  soon. Having another experimental job for it is something worth considering 
  as for some reason it is not always easy reproducing the same failure modes 
  exhibited on the gate.
 
  Regards,
  Salvatore
 
 
 Awesome work, thanks for the update.
 
 
  [1] https://review.openstack.org/#/c/58054/
  [2] https://review.openstack.org/#/c/57420/
  [3] https://review.openstack.org/#/c/53459/
  [4] https://review.openstack.org/#/c/58284/
  [5] https://review.openstack.org/#/c/58860/
  [6] https://review.openstack.org/#/c/58597/
  [7] https://review.openstack.org/#/c/58415/
  [8] https://review.openstack.org/#/c/45676/
  [9] https://bugs.launchpad.net/neutron/+bug/1177973
  [10] 
  https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallelfield.tags_combinator=ANY
  [11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim

The TC is currently working on formalizing requirements for new programs
and projects [3].  I figured I would give them a try against this
application.

First, I'm assuming that the application is for a new program that
contains the new project.  The application doesn't make that bit clear,
though.

In looking through the documentation for incubating [1], there doesn't
seem to be any mention of also having to be associated with a program. Is
it a requirement that all projects belong to a program at this point? If
so, I guess we would be asking for a new program as I think that
encryption and key management is a separate concern from the rest of the
programs listed here [2].

[1] https://wiki.openstack.org/wiki/Governance/NewProjects
[2] https://wiki.openstack.org/wiki/Programs


 Teams in OpenStack can be created as-needed and grow organically. As
the team
 work matures, some technical efforts will be recognized as essential to
the
 completion of the OpenStack project mission. By becoming an official
Program,
 they place themselves under the authority of the OpenStack Technical
 Committee. In return, their contributors get to vote in the Technical
 Committee election, and they get some space and time to discuss future
 development at our Design Summits. When considering new programs, the
TC will
 look into a number of criteria, including (but not limited to):

 * Scope
 ** Team must have a specific scope, separated from others teams scope

I would like to see a statement of scope for Barbican on the
application.  It should specifically cover how the scope differs from
other programs, in particular the Identity program.

Happy to add this, I'll put it in the wiki today.

 ** Team must have a mission statement

This is missing.

We do have a mission statement on the Barbican/Incubation page here:
https://wiki.openstack.org/wiki/Barbican/Incubation


 ** Team should have a lead, elected by the team contributors

Was the PTL elected?  I can't seem to find record of that.  If not, I
would like to see an election held for the PTL.

We're happy to do an election. Is this something we can do as part of the
next election cycle? Or something that needs to be done out of band?



 ** Team should have a clear way to grant ATC (voting) status to its
significant contributors

Related to the above

I thought that the process of becoming an ATC was pretty well set [3]. Is
there something specific that Barbican would have to do that is different from
the ATC rules in the Tech Committee documentation?

[3] 
https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee



 * Deliverables
 ** Team should have a number of clear deliverables

barbican and python-barbicanclient, I presume.  It would be nice to have
this clearly defined on the application.

I will add a deliverables section, but you are correct.


Now, for the project specific requirements:

  Projects wishing to be included in the integrated release of OpenStack
must
  first apply for incubation status. During their incubation period,
they will
  be able to access new resources and tap into other OpenStack programs
(in
  particular the Documentation, QA, Infrastructure and Release
management teams)
  to learn about the OpenStack processes and get assistance in their
integration
  efforts.
  
  The TC will evaluate the project scope and its complementarity with
existing
  integrated projects and other official programs, look into the project
  technical choices, and check a number of requirements, including (but
not
  limited to):
  
  * Scope
  ** Project must have a clear and defined scope

This is missing

As mentioned above, I'll add this to the wiki today.


  ** Project should not inadvertently duplicate functionality present in
other
 OpenStack projects. If they do, they should have a clear plan and
timeframe
 to prevent long-term scope duplication.
  ** Project should leverage existing functionality in other OpenStack
projects
 as much as possible

I'm going to hold off on diving into this too far until the scope is
clarified.

  * Maturity
  ** Project should have a diverse and active team of contributors

Using a mailmap file [4]:

$ git shortlog -s -e | sort -n -r
   172 John Wood john.w...@rackspace.com
   150 jfwood john.w...@rackspace.com
65 Douglas Mendizabal douglas.mendiza...@rackspace.com
39 Jarret Raim jarret.r...@rackspace.com
17 Malini K. Bhandaru malini.k.bhand...@intel.com
10 Paul Kehrer paul.l.keh...@gmail.com
10 Jenkins jenk...@review.openstack.org
 8 jqxin2006 jqxin2...@gmail.com
 7 Arash Ghoreyshi arashghorey...@gmail.com
 5 Chad Lung chad.l...@gmail.com
 3 Dolph Mathews dolph.math...@gmail.com
 2 John Vrbanac john.vrba...@rackspace.com
 1 Steven Gonzales stevendgonza...@gmail.com
 1 Russell Bryant rbry...@redhat.com
 1 Bryan D. Payne bdpa...@acm.org

It appears to be an effort done by a group, and not an individual.  Most
commits by far are from Rackspace, but 

Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 11:40 AM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

 
 On 02 Dec 2013, at 20:54 , Vishvananda Ishaya vishvana...@gmail.com wrote:
 
 Very cool stuff!
 
 Seeing your special glance properties for iso and floppy connections made me 
 think
 of something. They seem, but it would be nice if they were done in a way 
 that would
 work in any hypervisor.
 
 I think we have sufficient detail in block_device_mapping to do 
 essentially the
 same thing, and it would be awesome to verify and add some nicities to the 
 nova
 cli, something like:
 
 
 Thanks Vish!
 
 I was also thinking about bringing this to any hypervisor.
 
 About the block storage option, we would have an issue with Hyper-V. We need 
 local (or SMB accessible) ISOs and floppy images to be assigned to the 
 instances.
 IMO this isn’t a bad thing: ISOs would be potentially shared among instances 
 in read only mode and it’s easy to pull them from Glance during live 
 migration. 
 Floppy images on the other hand are insignificant quantity. :-)

I'm not sure exactly what the problem is here. Pulling them down from glance 
before boot seems like the way other hypervisors would implement it as 
well. The block device mapping code was extended in Havana so it doesn't just 
support volume/iSCSI connections. You can specify where the item should be 
attached in addition to the source of the device (glance, cinder, etc.).
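
For illustration, this is roughly the shape of a block_device_mapping_v2 
entry that could express an ISO pulled from glance (the values below are 
only an example, and whether every combination is accepted and honored by 
each driver is exactly the part that needs verifying):

    bdm_v2_entry = {
        "uuid": "<ISO_IMAGE_UUID>",       # glance image id (placeholder)
        "source_type": "image",           # pull the device from glance
        "destination_type": "local",      # attach locally, not as a volume
        "device_type": "cdrom",
        "boot_index": 1,
    }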

 
 
 nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to 
 blank image)
 
 
 This would be great! For the reasons above, I’d go anyway with some simple 
 extensions to pass in the ISO and floppy refs in the instance data instead of 
 block device mappings.

I think my above comment handles this. Block device mapping was designed to 
support floppies and ISOs as well.

 
 There’s also one additional scenario that would greatly benefit from those 
 options: our Windows Heat templates (think about SQL Server, Exchange, etc) 
 need to access the product media for installation and due to 
 license constraints the tenant needs to provide the media, we cannot simply 
 download them. So far we solved it by attaching a volume containing the 
 install media, but it’s of course a very unnatural process for the user.

Couldn't this also be handled by the above, i.e. uploading the install media to 
glance as an ISO instead of a volume?

Vish

 
 Alessandro
 
 
 Clearly that requires a few things:
 
 1) vix block device mapping support
 2) cli ux improvements
 3) testing!
 
 Vish
 
 On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
 apilo...@cloudbasesolutions.com wrote:
 
 Hi all,
 
 At Cloudbase we are heavily using VMware Workstation and Fusion for 
 development, demos and PoCs, so we thought: why not replacing our 
 automation scripts with a fully functional Nova driver and use OpenStack 
 APIs and Heat for the automation? :-)
 
 Here’s the repo for this Nova driver project: 
 https://github.com/cloudbase/nova-vix-driver/
 
 The driver is already working well and supports all the basic features 
 you’d expect from a Nova driver, including a VNC console accessible via 
 Horizon. Please refer to the project README for additional details.
 The usage of CoW images (linked clones) makes deploying images particularly 
 fast, which is a good thing when you develop or run demos. Heat or Puppet, 
 Chef, etc make the whole process particularly sweet of course. 
 
 
 The main idea was to create something to be used in place of solutions like 
 Vagrant, with a few specific requirements:
 
 1) Full support for nested virtualization (VMX and EPT).
 
 For the time being the VMware products are the only ones supporting Hyper-V 
 and KVM as guests, so this became a mandatory path, at least until EPT 
 support will be fully functional in KVM.
 This rules out Vagrant as an option. Their VMware support is not free and 
 beside that they don’t support nested virtualization (yet, AFAIK). 
 
 Other workstation virtualization options, including VirtualBox and Hyper-V, 
 are currently ruled out due to the lack of support for this feature as well.
 Besides that, Hyper-V and VMware Workstation VMs can work side by side on 
 Windows 8.1; all you need to do is fire up two nova-compute instances.
 
 2) Work on Windows, Linux and OS X workstations
 
 Here’s a screenshot of nova-compute running on OS X and showing noVNC 
 connected to a Fusion VM console:
 
 https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
 
 3) Use OpenStack APIs
 
 We wanted to have a single common framework for automation and bring 
 OpenStack on the workstations. 
 Besides that, dogfooding is a good thing. :-) 
 
 4) Offer a free alternative for community contributions
   
 VMware Player is fair enough, even with the “non commercial use” limits, 
 etc.
 
 Communication with VMware components is based on the freely available Vix 
 SDK libs, using ctypes to call the C APIs. The project provides a library 
 to easily interact with the VMs, in case it 

Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-02 Thread Vishvananda Ishaya

On Dec 2, 2013, at 12:38 PM, Russell Bryant rbry...@redhat.com wrote:

 On 12/02/2013 03:31 PM, Vishvananda Ishaya wrote:
 
 On Dec 2, 2013, at 9:12 AM, Russell Bryant rbry...@redhat.com
 wrote:
 
 On 12/02/2013 10:59 AM, Gary Kotton wrote:
 I think that this is certainly different. It is something that
 we want and need a user-facing API. Examples: - aggregates -
 per host scheduling - instance groups
 
 Etc.
 
 That is just taking the nova options into account and not the
 other modules. How would one configure that we would like to
 have storage proximity for a VM? This is where things start to
 get very interesting and enable the cross service scheduling
 (which is the goal of this no?).
 
 An explicit part of this plan is that all of the things you're
 talking about are *not* in scope until the forklift is complete
 and the new thing is a functional replacement for the existing
 nova-scheduler.  We want to get the project established and going
 so that it is a place where this work can take place.  We do
 *not* want to slow down the work of getting the project
 established by making these things a prerequisite.
 
 
 I'm all for the forklift approach since I don't think we will make
 any progress if we stall going back into REST api design.
 
 I'm going to reopen a can of worms, though. I think the most
 difficult part of the forklift will be moving stuff out of the
 existing databases into a new database. We had to deal with this in
 cinder and having a db export and import strategy is annoying to
 say the least. Managing the db-related code was the majority of the
 work during the cinder split.
 
 I think this forklift will be way easier if we merge the
 no-db-scheduler[1] patches first before separating the scheduler
 out into its own project:
 
 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
 
 I think the effort to get this finished is smaller than the effort
 to write db migrations and syncing scripts for the new project.
 
 Agreed that this should make it easier.
 
 My thought was that the split out scheduler could just import nova's
 db API and use it against nova's database directly until this work
 gets done.  If the forklift goes that route instead of any sort of
 work to migrate data from one db to another, they could really happen
 in any order, I think.


That is a good point. If the forklift is still talking to nova's db
then there would be significantly less duplication and I could see doing
it in the reverse order. The no-db stuff should be done before trying
to implement Cinder support so we don't have the messiness of the
scheduler talking to multiple db APIs.

Vish

 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-02 Thread Maru Newby

On Dec 2, 2013, at 10:19 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 On Dec 2, 2013 3:39 AM, Maru Newby ma...@redhat.com wrote:
 
 
  On Dec 2, 2013, at 2:07 AM, Anita Kuno ante...@anteaya.info wrote:
 
   Great initiative putting this plan together, Maru. Thanks for doing
   this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
   for you to be cloned - once that becomes available.) if you add your
   patch urls (as you create them) to the blueprint Maru started [0] that
   would help to track the work.
  
   Armando, thanks for doing this work as well. Could you add the urls of
   the patches you reference to the exceptional-conditions blueprint?
  
   For icehouse-1 to be a realistic goal for this assessment and clean-up,
   patches for this would need to be up by Tuesday Dec. 3 at the latest
   (does 13:00 UTC sound like a reasonable target?) so that they can make
   it through review and check testing, gate testing and merging prior to
   the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
   this, I just want the timeline to be conscious.
 
  My mistake, getting this done by Tuesday does not seem realistic.  
  icehouse-2, then.
 
 
 With icehouse-2 being the nova-network feature freeze reevaluation point 
 (possibly lifting it) I think gating on new stacktraces by icehouse-2 is too 
 late.  Even a huge whitelist of errors is better than letting new errors in. 

No question that it needs to happen asap.  If we're talking about milestones, 
though, and icehouse-1 patches need to be in by Tuesday, I don't think 
icehouse-1 is realistic.  It will have to be early in icehouse-2.


m.

 
  m.
 
  
   I would like to say talk to me tomorrow in -neutron to ensure you are
   getting the support you need to achieve this but I will be flying (wifi
   uncertain). I do hope that some additional individuals come forward to
   help with this.
  
   Thanks Maru, Salvatore and Armando,
   Anita.
  
   [0]
   https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
  
   On 11/30/2013 08:24 PM, Maru Newby wrote:
  
   On Nov 28, 2013, at 1:08 AM, Salvatore Orlando sorla...@nicira.com 
   wrote:
  
   Thanks Maru,
  
   This is something my team had on the backlog for a while.
   I will push some patches to contribute towards this effort in the next 
   few days.
  
   Let me know if you're already thinking of targeting the completion of 
   this job for a specific deadline.
  
   I'm thinking this could be a task for those not involved in fixing race 
   conditions, and be done in parallel.  I guess that would be for 
   icehouse-1 then?  My hope would be that the early signs of race 
   conditions would then be caught earlier.
  
  
   m.
  
  
   Salvatore
  
  
   On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:
   Just a heads up, the console output for neutron gate jobs is about to 
   get a lot noisier.  Any log output that contains 'ERROR' is going to be 
   dumped into the console output so that we can identify and eliminate 
   unnecessary error logging.  Once we've cleaned things up, the presence 
   of unexpected (non-whitelisted) error output can be used to fail jobs, 
   as per the following Tempest blueprint:
  
   https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
  
   I've filed a related Neutron blueprint for eliminating the unnecessary 
   error logging:
  
   https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
  
   I'm looking for volunteers to help with this effort, please reply in 
   this thread if you're willing to assist.
  
   Thanks,
  
  
   Maru
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Neutron] [Fuel] Implementing Elastic Applications

2013-12-02 Thread David Easter
Support for deploying the Neutron LBaaS is on the roadmap for the Fuel
project, yes - but most likely not before Icehouse at current velocity.

- David J. Easter
  Product Line Manager,  Mirantis

-- Forwarded message --
From: Serg Melikyan smelik...@mirantis.com
Date: Wed, Nov 27, 2013 at 6:52 PM
Subject: Re: [openstack-dev] [Murano] [Neutron] [Fuel] Implementing Elastic
Applications
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, fuel-...@lists.launchpad.net


I have added the Neutron and Fuel teams to this e-mail thread. Guys, what are
your thoughts on the subject?

We see three possible ways to implement Elastic Applications in Murano:
using Heat & Neutron LBaaS, Heat & the AWS::ElasticLoadBalancing::LoadBalancer
resource, or our own solution using HAProxy directly (see more details in the
mail thread).

Previously we were using Heat and the AWS::ElasticLoadBalancing::LoadBalancer
resource, but this approach has certain limitations.

Does the Fuel team have plans to implement support for Neutron LBaaS any time
soon? 

The guys from Heat suggest Neutron LBaaS as the best long-term solution. Neutron
team, what are your thoughts?
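
For context, the Neutron LBaaS v1 workflow that OS::Neutron::LoadBalancer wraps
boils down to a handful of calls. A minimal CLI sketch (assuming the q-lbaas
service is enabled, e.g. ENABLED_SERVICES+=,q-lbaas in devstack; the names,
addresses and IDs below are purely illustrative):

  neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id <subnet-uuid>
  neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
  neutron lb-member-create --address 10.0.0.12 --protocol-port 80 web-pool
  neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id <subnet-uuid> web-pool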


On Fri, Nov 15, 2013 at 6:53 PM, Thomas Hervé the...@gmail.com wrote:
 On Fri, Nov 15, 2013 at 12:56 PM, Serg Melikyan smelik...@mirantis.com
 wrote:
  Murano has several applications which support scaling via load balancing.
  These applications (Internet Information Services Web Farm, ASP.NET
  Application Web Farm) are currently based on Heat, particularly on a resource
  called AWS::ElasticLoadBalancing::LoadBalancer, which currently does not
  support specification of any network-related parameters.
 
  The inability to specify network-related params leads to incorrect behavior
  during deployment in tenants with an advanced Quantum deployment
  configuration, like Per-tenant Routers with Private Networks, and this makes
  deployment of our * Web Farm applications fail.
 
  We need to resolve the issues with our * Web Farm applications and make them
  the reference implementation for elastic applications in Murano.
 
  This issue may be resolved in three ways: by extending the configuration
  capabilities of AWS::ElasticLoadBalancing::LoadBalancer, by using another
  implementation of load balancing in Heat - OS::Neutron::LoadBalancer - or by
  implementing our own load-balancing application (which would balance other
  applications), for example based on HAProxy (as both previous ones are).
 
  Please respond with your thoughts on the question: which implementation
  should we use to resolve the issue with our Web Farm applications, and why?
  Below you can find more details about each of the options.
 
  AWS::ElasticLoadBalancing::LoadBalancer
 
  AWS::ElasticLoadBalancing::LoadBalancer is an Amazon CloudFormation compatible
  resource that implements a load balancer via a hard-coded nested stack that
  deploys and configures HAProxy. This resource requires a specific image with
  CFN tools, named F17-x86_64-cfntools, to be available in Glance. It looks
  like we are missing the implementation of only one property in this
  resource - Subnets.
 
  OS::Neutron::LoadBalancer
 
  OS::Neutron::LoadBalancer is another Heat resource that implements a load
  balancer. This resource is based on the Load Balancer as a Service (LBaaS)
  feature in Neutron. OS::Neutron::LoadBalancer is much more configurable and
  sophisticated, but the underlying implementation makes usage of this resource
  quite complex.
  LBaaS is a set of services installed and configured as a part of Neutron.
  Fuel does not support LBaaS; devstack has support for LBaaS, but LBaaS is not
  installed by default with Neutron.
 
  Own, Based on HAProxy
 
  We may implement a load balancer as a regular application in Murano using
  HAProxy. This service may look like our Active Directory application, with
  almost the same user experience. The user may create a load balancer inside
  the environment and join any web application (with any number of instances)
  directly to the load balancer.
  The load balancer may also be implemented at the Conductor workflows level;
  this implementation strategy is not going to change the user experience (in
  fact we would be changing only the underlying implementation details for our
  * Web Farm applications, without introducing new ones).
 
 Hi,
 
 I would strongly encourage using OS::Neutron::LoadBalancer. The AWS
 resource is supposed to mirror Amazon capabilities, so any extension,
 while not impossible, is frowned upon. On the other hand the Neutron
 load balancer can be extended to your needs, and being able to use an
 API gives you much more flexibility. It is also in active development and
 will get more interesting features in the future.
 
 If you're having concerns about deploying Neutron LBaaS, you should
 bring it up with the team, and I'm sure they can improve the
 situation. My limited experience with it in devstack has been really
 good.
 
 Cheers,
 
 --
 Thomas
 
 ___
 

Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim

  * Process
  ** Project must be hosted under stackforge (and therefore use git as
its VCS)
 
 I see that barbican is now on stackforge,  but python-barbicanclient is
 still on github.  Is that being moved soon?
 
  ** Project must obey OpenStack coordinated project interface (such as
tox,
 pbr, global-requirements...)
 
 Uses tox, but not pbr or global requirements

It's also pretty easy for a stackforge project to opt-in to the global
requirements sync job now too.

Are there some docs on how to do this somewhere? I added a task for us to
complete the work as part of the incubation request here:
https://wiki.openstack.org/wiki/Barbican/Incubation


  ** Project should use oslo libraries or oslo-incubator where
appropriate
 
 The list looks reasonable right now.  Barbican should put migrating to
 oslo.messaging on the Icehouse roadmap though.

*snip*

 
 
http://git.openstack.org/cgit/stackforge/barbican/tree/tools/pip-requires
 
 It looks like the only item here not in the global requirements is
 Celery, which is licensed under a 3-clause BSD license.

I'd like to address the use of Celery.

WTF

Barbican has been around for 9 months, which means that it does not
predate the work that has become oslo.messaging. It doesn't even try. It
uses a completely different thing.

The use of celery needs to be replaced with oslo. Full stop. I do not
believe it makes any sense to spend further time considering a project
that's divergent on such a core piece. Which is a shame - because I
think that Barbican is important and fills an important need and I want
it to be in. BUT - We don't get to end-run around OpenStack project
choices by making a new project on the side and then submitting it for
incubation. It's going to be a pile of suck to fix this I'm sure, and
I'm sure that it's going to delay getting actually important stuff done
- but we deal with too much crazy as it is to pull in a non-oslo
messaging and event substrata.


Is the challenge here that celery has some weird license requirements? Or
that it is a new library?

When we started the Barbican project in February of this year,
oslo.messaging did not exist. If I remember correctly, at the time we were
doing architecture setup, the messaging piece was not available as a
standalone library, was not available on PyPI and had no documentation.

It looks like the project was moved to its own repo in April. However, I
can't seem to find the docs anywhere. The only thing I see is a design doc
here [1]. Are there plans for it to be packaged and put on PyPI?

We are probably overdue to look at oslo.messaging again, but I don't think
it should be a blocker for our incubation. I'm happy to take a look to see
what we can do during the Icehouse release cycle. Would that be
sufficient? 


[1] https://wiki.openstack.org/wiki/Oslo/Messaging




Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-02 Thread Alessandro Pilotti


On 02/Dec/2013, at 23:47, Vishvananda Ishaya 
vishvana...@gmail.com wrote:


On Dec 2, 2013, at 11:40 AM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:


On 02 Dec 2013, at 20:54, Vishvananda Ishaya 
vishvana...@gmail.com wrote:

Very cool stuff!

Seeing your special glance properties for iso and floppy connections made me 
think
of something. They seem, but it would be nice if they were done in a way that 
would
work in any hypervisor.

I think we have sufficient detail in block_device_mapping to do essentially 
the
same thing, and it would be awesome to verify and add some niceties to the nova
cli, something like:


Thanks Vish!

I was also thinking about bringing this to any hypervisor.

About the block storage option, we would have an issue with Hyper-V. We need 
local (or SMB accessible) ISOs and floppy images to be assigned to the 
instances.
IMO this isn’t a bad thing: ISOs would be potentially shared among instances in 
read only mode and it’s easy to pull them from Glance during live migration.
Floppy images, on the other hand, are an insignificant quantity. :-)

I'm not sure exactly what the problem is here. Pulling them down from glance 
before boot seems like the way that other hypervisors would implement it as 
well. The block device mapping code was extended in Havana so it doesn't just 
support volume/iSCSI connections. You can specify where the item should be 
attached in addition to the source of the device (glance, cinder, etc.).

I have to say that I have still only used it for iSCSI volumes. If Glance is 
supported, not only are all my objections below irrelevant, but it's also a 
great solution! Adding it right away!




nova boot --flavor 1 --iso CentOS-64-ks --floppy Kickstart (defaults to blank 
image)


This would be great! For the reasons above, I’d go anyway with some simple 
extensions to pass in the ISO and floppy refs in the instance data instead of 
block device mappings.

I think my above comment handles this. Block device mapping was designed to 
support floppies and ISOs as well


There’s also one additional scenario that would greatly benefit from those 
options: our Windows Heat templates (think about SQL Server, Exchange, etc) 
need to access the product media for installation and, due to
license constraints, the tenant needs to provide the media; we cannot simply 
download them. So far we have solved it by attaching a volume containing the install 
media, but it’s of course a very unnatural process for the user.

Couldn't this also be handled by the above, i.e. by uploading the install media 
to glance as an ISO instead of a volume?

Vish


Alessandro


Clearly that requires a few things:

1) vix block device mapping support
2) cli ux improvements
3) testing!

Vish

On Dec 1, 2013, at 2:10 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

Hi all,

At Cloudbase we are heavily using VMware Workstation and Fusion for 
development, demos and PoCs, so we thought: why not replace our automation 
scripts with a fully functional Nova driver and use OpenStack APIs and Heat for 
the automation? :-)

Here’s the repo for this Nova driver project: 
https://github.com/cloudbase/nova-vix-driver/

The driver is already working well and supports all the basic features you’d 
expect from a Nova driver, including a VNC console accessible via Horizon. 
Please refer to the project README for additional details.
The usage of CoW images (linked clones) makes deploying images particularly 
fast, which is a good thing when you develop or run demos. Heat or Puppet, 
Chef, etc make the whole process particularly sweet of course.


The main idea was to create something to be used in place of solutions like 
Vagrant, with a few specific requirements:

1) Full support for nested virtualization (VMX and EPT).

For the time being the VMware products are the only ones supporting Hyper-V and 
KVM as guests, so this became a mandatory path, at least until EPT support is 
fully functional in KVM.
This rules out Vagrant as an option. Their VMware support is not free and, 
besides that, they don’t support nested virtualization (yet, AFAIK).

Other workstation virtualization options, including VirtualBox and Hyper-V, are 
currently ruled out due to the lack of support for this feature as well.
Besides that, Hyper-V and VMware Workstation VMs can work side by side on Windows 
8.1; all you need to do is fire up two nova-compute instances.

2) Work on Windows, Linux and OS X workstations

Here’s a screenshot of nova-compute running on OS X and showing noVNC connected 
to a Fusion VM console:

https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png

3) Use OpenStack APIs

We wanted to have a single common framework for automation and bring OpenStack 
on the workstations.
Besides that, dogfooding is a good thing. :-)

4) Offer a free alternative for community contributions


Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Monty Taylor


On 12/02/2013 04:12 PM, Jarret Raim wrote:

 The TC is currently working on formalizing requirements for new programs
 and projects [3].  I figured I would give them a try against this
 application.

 First, I'm assuming that the application is for a new program that
 contains the new project.  The application doesn't make that bit clear,
 though.
 
 In looking through the documentation for incubating [1], there doesn't
 seem to be any mention of also having to be associated with a program. Is
 it a requirement that all projects belong to a program at this point? If
 so, I guess we would be asking for a new program as I think that
 encryption and key management is a separate concern from the rest of the
 programs listed here [2].
 
 [1] https://wiki.openstack.org/wiki/Governance/NewProjects
 [2] https://wiki.openstack.org/wiki/Programs
 
 
 Teams in OpenStack can be created as-needed and grow organically. As
 the team
 work matures, some technical efforts will be recognized as essential to
 the
 completion of the OpenStack project mission. By becoming an official
 Program,
 they place themselves under the authority of the OpenStack Technical
 Committee. In return, their contributors get to vote in the Technical
 Committee election, and they get some space and time to discuss future
 development at our Design Summits. When considering new programs, the
 TC will
 look into a number of criteria, including (but not limited to):

 * Scope
 ** Team must have a specific scope, separated from others teams scope

 I would like to see a statement of scope for Barbican on the
 application.  It should specifically cover how the scope differs from
 other programs, in particular the Identity program.
 
 Happy to add this, I'll put it in the wiki today.
 
 ** Team must have a mission statement

 This is missing.
 
 We do have a mission statement on the Barbican/Incubation page here:
 https://wiki.openstack.org/wiki/Barbican/Incubation
 
 
 ** Team should have a lead, elected by the team contributors

 Was the PTL elected?  I can't seem to find record of that.  If not, I
 would like to see an election held for the PTL.
 
 We're happy to do an election. Is this something we can do as part of the
 next election cycle? Or something that needs to be done out of band?
 
 

 ** Team should have a clear way to grant ATC (voting) status to its
significant contributors

 Related to the above
 
 I thought that the process of becoming an ATC was pretty well set [3]. Is
 there something specific that Barbican would have to do that is different from
 the ATC rules in the Tech Committee documentation?
 
 [3] 
 https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
 
 

 * Deliverables
 ** Team should have a number of clear deliverables

 barbican and python-barbicanclient, I presume.  It would be nice to have
 this clearly defined on the application.
 
 I will add a deliverables section, but you are correct.
 
 
 Now, for the project specific requirements:

  Projects wishing to be included in the integrated release of OpenStack
 must
  first apply for incubation status. During their incubation period,
 they will
  be able to access new resources and tap into other OpenStack programs
 (in
  particular the Documentation, QA, Infrastructure and Release
 management teams)
  to learn about the OpenStack processes and get assistance in their
 integration
  efforts.
  
  The TC will evaluate the project scope and its complementarity with
 existing
  integrated projects and other official programs, look into the project
  technical choices, and check a number of requirements, including (but
 not
  limited to):
  
  * Scope
  ** Project must have a clear and defined scope

 This is missing
 
 As mentioned above, I'll add this to the wiki today.
 

  ** Project should not inadvertently duplicate functionality present in
 other
 OpenStack projects. If they do, they should have a clear plan and
 timeframe
 to prevent long-term scope duplication.
  ** Project should leverage existing functionality in other OpenStack
 projects
 as much as possible

 I'm going to hold off on diving into this too far until the scope is
 clarified.

  * Maturity
  ** Project should have a diverse and active team of contributors

 Using a mailmap file [4]:

 $ git shortlog -s -e | sort -n -r
   172  John Wood john.w...@rackspace.com
   150  jfwood john.w...@rackspace.com
    65  Douglas Mendizabal douglas.mendiza...@rackspace.com
    39  Jarret Raim jarret.r...@rackspace.com
    17  Malini K. Bhandaru malini.k.bhand...@intel.com
    10  Paul Kehrer paul.l.keh...@gmail.com
    10  Jenkins jenk...@review.openstack.org
     8  jqxin2006 jqxin2...@gmail.com
     7  Arash Ghoreyshi arashghorey...@gmail.com
     5  Chad Lung chad.l...@gmail.com
     3  Dolph Mathews dolph.math...@gmail.com
     2  John Vrbanac john.vrba...@rackspace.com
     1  Steven Gonzales 

Re: [openstack-dev] Incubation Request for Barbican

2013-12-02 Thread Jarret Raim

 Uses tox, but not pbr or global requirements
 
 I added a 'Tasks' section for stuff we need to do from this review and
 I've added these tasks. [4]
 
 [4] https://wiki.openstack.org/wiki/Barbican/Incubation

Awesome. Also, if you don't mind - we should add testr to that list.
We're looking at some things in infra that will need the subunit
processing.

Added that to our list.



Jarret

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

