Re: [openstack-dev] [oslo][nova] Issues syncing latest db changes

2014-02-06 Thread Victor Sergeyev
Hello Joe.

Thanks for pointing out this issue. We will investigate this situation and fix
it.

In the future in such cases you can just create a bug on launchpad.
Also feel free to ping me (and the other db maintainers) on IRC.

Thanks,
Victor


On Wed, Feb 5, 2014 at 9:12 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi Boris, Roman, Victor (oslo-incubator db maintainers),

 Last night I stumbled across bug https://launchpad.net/bugs/1272500 in
 nova, which says the issue has been fixed in the latest oslo-incubator
 code. So I ran:

 ./update.sh --base nova --dest-dir ../nova --modules db.sqlalchemy

 https://review.openstack.org/#/c/71191/

 And that appeared to fix the specific issues I was seeing from Bug
 1272500, but it introduced some new failures.


 I would like to get the nova unit tests working with sqlite 3.8.2-1 if
 possible. How can this situation be resolved?


 best,
 Joe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-06 Thread Andreas Jaeger
On 02/06/2014 07:42 AM, Andreas Jaeger wrote:
 On 02/05/2014 06:38 PM, Jonathan Bryce wrote:
 On Feb 5, 2014, at 10:18 AM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: Mark McLoughlin mar...@redhat.com, OpenStack Development Mailing 
 List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: Jonathan Bryce jonat...@openstack.org
 Sent: Wednesday, February 5, 2014 9:17:39 AM
 Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming

 On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
 On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
 Steve Gordon wrote:
 From: Anne Gentle anne.gen...@rackspace.com
 Based on today's Technical Committee meeting and conversations with the
 OpenStack board members, I need to change our Conventions for service
 names
 at
 https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
 .

 Previously we have indicated that Ceilometer could be named OpenStack
 Telemetry and Heat could be named OpenStack Orchestration. That's not
 the
 case, and we need to change those names.

 To quote the TC meeting, ceilometer and heat are "other modules" (second
 sentence from 4.1 in
 http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
 distributed with the Core OpenStack Project.

 Here's what I intend to change the wiki page to:
 Here's the list of project and module names and their official names
 and
 capitalization:

 Ceilometer module
 Cinder: OpenStack Block Storage
 Glance: OpenStack Image Service
 Heat module
 Horizon: OpenStack dashboard
 Keystone: OpenStack Identity Service
 Neutron: OpenStack Networking
 Nova: OpenStack Compute
 Swift: OpenStack Object Storage

 Small correction. The TC had not indicated that Ceilometer could be
 named OpenStack Telemetry and Heat could be named OpenStack
 Orchestration. We formally asked[1] the board to allow (or disallow)
 that naming (or more precisely, that use of the trademark).

 [1]
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names

 We haven't got a formal and clear answer from the board on that request
 yet. I suspect they are waiting for progress on DefCore before deciding.

 If you need an answer *now* (and I suspect you do), it might make sense
 to ask foundation staff/lawyers about using those OpenStack names with
 the current state of the bylaws and trademark usage rules, rather than
 the hypothetical future state under discussion.

 Basically, yes - I think having the Foundation confirm that it's
 appropriate to use OpenStack Telemetry in the docs is the right thing.

 There's an awful lot of confusion about the subject and, ultimately,
 it's the Foundation staff who are responsible for enforcing (and giving
 advice to people on) the trademark usage rules. I've cc-ed Jonathan so
 he knows about this issue.

 But FWIW, the TC's request is asking for Ceilometer and Heat to be
 allowed to use their Telemetry and Orchestration names in *all* of the
 circumstances where e.g. Nova is allowed to use its Compute name.

 Reading again this clause in the bylaws:

  The other modules which are part of the OpenStack Project, but
   not the Core OpenStack Project may not be identified using the
   OpenStack trademark except when distributed with the Core OpenStack
   Project.

 it could well be said that this case of naming conventions in the docs
 for the entire OpenStack Project falls under the "distributed with" case
 and it is perfectly fine to refer to OpenStack Telemetry in the docs.
 I'd really like to see the Foundation staff give their opinion on this,
 though.

 In this case, we are talking about documentation that is produced and 
 distributed with the integrated release to cover the Core OpenStack Project 
 and the “modules that are distributed together with the Core OpenStack 
 Project” in the integrated release. This is the intended use case for the 
 exception Mark quoted above from the Bylaws, and I think it is perfectly 
 fine to refer to the integrated components in the OpenStack release 
 documentation as OpenStack components.
 
 
 What about if I talk about OpenStack at a conference (like I'm doing
 today)? What should I say: Orchestration, Heat module (or just Heat)?
 
 
 What about all the OpenStack distributors and users like SUSE,
 Rackspace, HP, Red Hat etc? What should they use in their documentation
 and software?

Should other OpenStack projects adjust, e.g. Horizon shows
"Orchestration"? I guess this is fine - isn't it?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

[openstack-dev] [Neutron] [ML2] l2-pop bugs review

2014-02-06 Thread Édouard Thuleau
Hi all,

Just to point out 2 reviews [1] & [2] I submitted to correct the l2-pop
mechanism driver in the ML2 plugin.
I had some reviews and +1s but they don't progress anymore.
Could you check them?
I'd also like to backport them to the stable Havana branch.

[1] https://review.openstack.org/#/c/63917/
[2] https://review.openstack.org/#/c/63913/

Thanks,
Édouard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Making periodic tasks config consistent.

2014-02-06 Thread Matthew Gilliard
Hello everyone.

  wrt these bugs: https://bugs.launchpad.net/nova/+bug/1276203
https://bugs.launchpad.net/nova/+bug/1272830 - I'd just like to make sure
that the approach I'm planning makes sense.

  To summarise: Currently there are a number of methods in
compute/manager.py that use the @periodic_task decorator.  Some of them
also do their own checks about how often they are called, and use a
convention of polling period <= 0 to disable the method by returning early
(although this is sometimes implemented as <=0 [1] and sometimes as ==0
[2]).  In the decorator itself though, a polling period of 0 is used to
mean "call this method any time any other periodic task is run" [3].  It's
difficult to predict how often this might be, and it may not be at regular
intervals.

  I'd like to make this more consistent and predictable.  My plan is to use
the following:

  - Any positive integer: the method is called every this many seconds; a
best effort is made not to call it more or less often.
  - 0: the method will be called regularly at the default period.
Currently hard-coded to 60s [4], this could be made into a config option.
  - Any negative integer: the method will not be called.
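
Roughly, I'm imagining the decorator ending up something like this (an
untested sketch, not the actual oslo code - just to illustrate the proposed
semantics):

DEFAULT_PERIOD = 60  # seconds; could become a config option

def periodic_task(spacing=0):
    def decorator(fn):
        if spacing < 0:
            fn._periodic_enabled = False           # never run
            fn._periodic_spacing = None
        elif spacing == 0:
            fn._periodic_enabled = True            # run at the default period
            fn._periodic_spacing = DEFAULT_PERIOD
        else:
            fn._periodic_enabled = True            # run every 'spacing' seconds
            fn._periodic_spacing = spacing
        fn._periodic_last_run = None
        return fn
    return decorator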

  All this logic would be contained in the decorator so that the methods
themselves can just get on with whatever business they have.  So far, I
hope this isn't too contentious - just clean code.  Is there any case that
I've missed?  The fix will necessarily be a breaking change.  So how do you
suggest I approach that aspect?  As it's common code, should I actually be
looking to make these changes in Oslo first then porting them in?

  Thanks,

Matthew Gilliard

[1]
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4702
[2]
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4702
[3]
https://github.com/openstack/nova/blob/master/nova/openstack/common/periodic_task.py#L144
[4]
https://github.com/openstack/nova/blob/master/nova/openstack/common/periodic_task.py#L39
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress [for appliances]

2014-02-06 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-02-05 22:17:50 -0800:
  From: Prasad Vellanki prasad.vella...@oneconvergence.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org, 
  Date: 01/21/2014 02:16 AM
  Subject: Re: [openstack-dev] [heat] Sofware Config progress
  
  Steve  Clint
  
  That should work. We will look at implementing a resource that spins
  up a shortlived VM for bootstrapping a service VM and informing 
  configuration server for further configuration. 
  
  thanks
  prasadv
  
 
  On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake sd...@redhat.com wrote:
  On 01/14/2014 09:27 PM, Clint Byrum wrote:
  Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
  Steve
  
  I did not mean to have custom solution at all. In fact that would be
  terrible.  I think Heat model of software config and deployment is 
 really
  good. That allows configurators such as Chef, Puppet, Salt or Ansible to 
 be
  plugged into it and all users need to write are modules for those.
  
  What I was  thinking is if there is a way to use software 
 config/deployment
to do initial configuration of the appliance by using agentless system
  such  as Ansible or Salt, thus requiring no cfminit. I am not sure this
  will work either, since it might require ssh keys to be installed for
  getting ssh to work without password prompting. But I do see that 
 ansible
  and salt support username/password option.
  If this would not work, I agree that the best option is to make them
  support cfminit...
  Ansible is not agent-less. It just makes use of an extremely flexible
  agent: sshd. :) AFAIK, salt does use an agent though maybe they've added
  SSH support.
  
  Anyway, the point is, Heat's engine should not be reaching into your
  machines. It talks to API's, but that is about it.
  
  What you really want is just a VM that spins up and does the work for
  you and then goes away once it is done.
  Good thinking.  This model might work well without introducing the 
  "groan, another daemon" problems pointed out elsewhere in this thread
  that were snipped.  Then the modules could simply be heat 
  templates available to the Heat engine to do the custom config setup.
  
  The custom config setup might still be a problem with the original 
  constraints (not modifying images to inject SSH keys).
  
  That model wfm.
  
  Regards
  -steve
  
 
 (1) What destroys the short-lived VM if the heat engine crashes between 
 creating and destroying that short-lived VM?
 

The heat-engine that takes over the stack. Same as the answer for what
happens when a stack is half-created and heat-engine dies.

 (2) What if something goes wrong and the heat engine never gets the signal 
 it is waiting for?
 

Timeouts already cause failed state or rollback.

 (3) This still has the problem that something needs to be configured 
 some(client-ish)where to support the client authorization solution 
 (usually username/password).


The usual answer is "that's cloud-init's job" but we're discussing
working around not having cloud-init, so I suspect it has to be built
into the image (which, btw, is a really really bad idea). Another option
is that these weird proprietary systems might reach out to an auth
service which the short-lived VM would also be able to contact given
appropriate credentials for said auth service fed in via parameters.

 (4) Given that everybody seems sanguine about solving the client 
 authorization problem, what is wrong with code in the heat engine opening 
 and using a connection to code in an appliance?  Steve, what do you mean 
 by "reaching into your machines" that is critically different from calling 
 their APIs?
 

We can, and should, poke holes from heat-engine, out through a firewall,
so it can connect to all of the endpoints. However, if we start letting
it talk to all the managed machines, it becomes a really handy DoS tool
and also spends a ton of time talking to things that we have no control
over, thus taking up resources to an unknown degree.

Heat-engine is precious, it has access to a database with a ton of really
sensitive information. It is also expensive when heat-engine dies (until
we can make all tasks distributed) as it may force failure states. So
I think we need to be very careful about what we let it do.

 (5) Are we really talking about the same kind of software configuration 
 here?  Many appliances do not let you SSH into a bash shell and do 
 whatever you want; they provide only their own API or special command 
 language over a telnet/ssh sort of connection.  Is hot-software-config 
 intended to cover that?  Is this what the OneConvergence guys are 
 concerned with?
 

No. We are suggesting a solution to their unique problem of having to
talk to said API/special command language/telnet/IP-over-avian-carrier.
The short-lived VM can just have a UserData section which does all of
this really.


Re: [openstack-dev] [nova] vmware minesweeper

2014-02-06 Thread Gary Kotton
Hi,
The following patches will really help minesweeper:
1. Treat exceptions that are caused by parallel tests. This enables us to
run parallel jobs (we have very promising results of running 8 parallel
test jobs):
https://review.openstack.org/#/c/70137/
https://review.openstack.org/#/c/65306/
https://review.openstack.org/#/c/69622/
2. Improve test time considerably:
https://review.openstack.org/#/c/70079/

One of the requirements for Nova is that each driver provide CI that
validates patches. If there is no CI then the driver will be marked with:
- a quality warning (for example the VMware ESX driver) -
https://review.openstack.org/#/c/70850/
- a deprecation warning (for example the Xen driver) -
https://review.openstack.org/#/c/71289/

Having met the requirements is the first step in a long and adventurous
road ahead. As we can see from the gate upstream it is really important
that we constantly work on improving the stability of the CI and the
drivers that are being used. I would also like to note that we are
approaching the end of the cycle - there will soon be a mad rush for
gating. If we have stable CIs then we can provide quicker and more
reliable results. Without these we are shooting ourselves in the foot.

Thanks
Gary


On 2/6/14 2:38 AM, Ryan Hsu r...@vmware.com wrote:

Did you mean flags as in excluding tests? We have an exclude list but
these bugs are intermittent problems that can affect tests at random.

 On Feb 5, 2014, at 4:18 PM, John Dickinson m...@not.mn wrote:
 
 
 On Feb 5, 2014, at 4:04 PM, Ryan Hsu r...@vmware.com wrote:
 
 Also, I have added a section noting crucial bugs/patches that are
blocking Minesweeper.
 
 
 Can we just put flags around them and move on?
 
 
 
 signature.asc
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly meeting 06.02.2014

2014-02-06 Thread Eugene Nikanorov
Hi,

Let's discuss lbaas progress and plans in #openstack-meetings at 14-00 UTC
today.

Meeting agenda: https://wiki.openstack.org/wiki/Network/LBaaS

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Image caching aging

2014-02-06 Thread Gary Kotton
Hi,
It has come to my attention that this blueprint 
(https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management) has 
been deferred to the next milestone series. The blueprint ensures that the 
driver has aging for cached images. This is a critical issue for the driver and 
is important as a parity feature.

I really am not sure why it has been deferred and would like to clarify a few 
points.

 1.  The blueprint code was implemented and pending review in November. This 
was blocked due to a blueprint 
(https://blueprints.launchpad.net/nova/+spec/multiple-image-cache-handlers) 
that is not at all related to the driver itself. Russell's point for blocking 
the BP was that there is common code between all drivers. This was addressed by: 
https://review.openstack.org/#/c/59994/ (approved on 17th December).
 2.  That aging code was rebased and pushed again in December.

It would really be nice to understand why the BP has been deferred to Juno. 
There was a mail that stated the following:

 1.  All BP's need to be in by Feb 4th (this BP was in and approved in I2)
 2.  The code for BP's needs to be in by February the 18th. The code was ready 
for review in December.

The code is specific to the VMware driver and provides parity to other drivers 
that have image caching. It would really be nice to understand why this has 
been deferred and why a feature that is totally isolated to a virtualization 
driver is deferred.

Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Making periodic tasks config consistent.

2014-02-06 Thread Michael Still
On Thu, Feb 6, 2014 at 8:16 PM, Matthew Gilliard
matthew.gilli...@gmail.com wrote:
 Hello everyone.

   wrt these bugs: https://bugs.launchpad.net/nova/+bug/1276203
 https://bugs.launchpad.net/nova/+bug/1272830 - I'd just like to make sure
 that the approach I'm planning makes sense.

   To summarise: Currently there are a number of methods in
 compute/manager.py that use the @periodic_task decorator.  Some of them also
 do their own checks about how often they are called, and use a convention of
 polling period = 0 to disable the method by returning early (although this
 is sometimes implemented as =0 [1] and sometimes as ==0 [2]).  In the
 decorator itself though, a polling period of 0 is used to mean call this
 method any time any other period task is run [3].  It's difficult to
 predict how often this might be, and it may not be at regular intervals.

   I'd like to make this more consistent and predictable.  My plan is to use
 the following:

   - Any positive integer: the method is called every this many seconds,
 best effort is made not to call it more or less often.
   - 0: the method will be called regularly at the default period.  Currently
 hard-coded to 60s [4] this could be made into a config option
   - Any negative integer: the method will not be called

   All this logic would be contained in the decorator so that the methods
 themselves can just get on with whatever business they have.  So far, I hope
 this isn't too contentious - just clean code.  Is there any case that I've
 missed?  The fix will necessarily be a breaking change.  So how do you
 suggest I approach that aspect?  As it's common code, should I actually be
 looking to make these changes in Oslo first then porting them in?

The decorator comes from oslo, so you're talking about changing the
default flag behaviour for pretty much every openstack project here.
How do we do this in a way which doesn't have unexpected side effects
for deployments?

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread victor stinner
Hi,

Joshua Harlow:
 Any mysql DB drivers (I think the majority of openstack deployments use
 mysql?).

I don't know. Here are some asynchronous clients for MySQL:

https://github.com/PyMySQL/PyMySQL/
https://launchpad.net/myconnpy
https://github.com/hybridlogic/txMySQL
http://chartio.com/blog/2011/06/making-mysql-queries-asynchronous-in-tornado
http://www.arpalert.org/mysac.html
http://code.google.com/p/perl-mysql-async/

IMO to have an efficient driver for asyncio, it should give access to the 
socket / file descriptor, so asyncio can watch it and execute a callback when 
some data can be read on the socket. A pure Python connector should fit such 
requirements. Or the API should use a callback when the result is available.
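
For example, something like this (a minimal sketch in Python 3 asyncio
syntax; Trollius exposes the same add_reader() API), where the event loop
watches a driver's socket and fires a callback when data is readable:

import asyncio
import socket

def on_readable(sock):
    # Called by the event loop whenever data is waiting on the socket,
    # e.g. a response from an asynchronous MySQL connector.
    data = sock.recv(4096)
    print("got %d bytes" % len(data))

def main():
    loop = asyncio.new_event_loop()
    server, client = socket.socketpair()
    client.setblocking(False)
    loop.add_reader(client.fileno(), on_readable, client)
    server.send(b"hello")            # simulate the DB server replying
    loop.call_later(0.1, loop.stop)  # let the callback run, then stop
    loop.run_forever()
    loop.close()

main()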

 How about sqlalchemy (what would possibly need to change there for it to
 work)?

I found some projects using SQLAlchemy asynchronously, but only with PostgreSQL.

 The pain that I see is that to connect all these libraries into
 asyncio they have to invert how they work (sqlalchemy would have to become
 asyncio compatible (?), which probably means a big rewrite).

There is no problem with calling slow blocking functions in asyncio.

But if you want to have efficient code, it's better to run the blocking code 
asynchronously. For example, use loop.run_in_executor() with a thread pool.
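
For example (a minimal sketch using Python 3.4 asyncio syntax; with
Trollius you would write yield From(...) instead):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_query():
    # Stand-in for a blocking call into a MySQL driver or SQLAlchemy.
    time.sleep(0.1)
    return ["row1", "row2"]

@asyncio.coroutine
def handler(loop, executor):
    # The blocking call runs in a worker thread; the event loop stays free.
    rows = yield from loop.run_in_executor(executor, blocking_query)
    print(rows)

def main():
    loop = asyncio.new_event_loop()
    executor = ThreadPoolExecutor(max_workers=4)
    try:
        loop.run_until_complete(handler(loop, executor))
    finally:
        executor.shutdown()
        loop.close()

main()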

 This is where
 it would be great to have a 'eventlet' like-thing built ontop of asyncio
 (letting existing libraries work without rewrites). Eventually I guess
 in-time (if tulip succeeds) then this 'eventlet' like-thing could be
 removed.

It's a little bit weird to design an abstraction on top of asyncio, since 
asyncio was designed as an abstraction of existing event loops. But I wrote 
an asyncio executor for Oslo Messaging which already has such an abstraction.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread victor stinner
Sean Dague wrote:
 First, very cool!

Thanks.

 This is very promising work. It might be really interesting to figure
 out if there was a smaller project inside of OpenStack that could be
 test ported over to this (even as a stackforge project), and something
 we could run in the gate.

Oslo Messaging is a small project, but it's more of a library. For a full daemon, 
my colleague Mehdi Abaakouk has a proof-of-concept for Ceilometer replacing 
eventlet with asyncio. Mehdi told me that he doesn't like debugging eventlet 
race conditions :-)

 Our experience is the OpenStack CI system catches bugs in libraries and
 underlying components that no one else catches, and definitely getting
 something running workloads hard on this might be helpful in maturing
 Trollius. Basically coevolve it with a piece of OpenStack to know that
 it can actually work on OpenStack and be a viable path forward.

Replacing eventlet with asyncio is a huge change. I don't want to force users 
to use it right now, nor to do the change in one huge commit. The change will 
be done step by step, and when possible, optional. For example, in Oslo 
Messaging, you can choose the executor: eventlet or blocking (and I want to add 
asyncio).
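
For illustration, choosing the executor when building an RPC server looks
roughly like this with the oslo.messaging API (a sketch - check the library
docs for the exact signatures):

from oslo.config import cfg
from oslo import messaging

def make_server(endpoints):
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='demo_topic', server='demo_server')
    # executor='blocking' processes each message in the calling thread,
    # executor='eventlet' uses greenthreads; an asyncio executor would be
    # another value here once it exists.
    return messaging.get_rpc_server(transport, target, endpoints,
                                    executor='blocking')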

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp: glance-snapshot-tasks

2014-02-06 Thread Alexander Gorodnev
Hi,

A blueprint was created and Joshua even wrote quite a lot of text. Right now
this BP is in the Drafting stage, so I want to bring it to life and continue
working on the topic. I even tried to make some changes without approval
(just as an experiment) and got negative feedback.
These are the steps I took when I tried to implement this BP:

1) Moved the snapshot functionality from Compute to Conductor (as I understood
it's the best place for such things; needs clarification);
Even this step should be done in two steps:
a) Add a snapshot_instance() method to Conductor that just calls the same
method from Compute;
b) After that, move all error-handling / state transition / etc. logic from
Compute to Conductor. Compute exposes an API for drivers (see step 2);

2) The hardest part is a common, convenient, complete API for drivers. Most
drivers do almost the same things in the snapshot method:
a) Go to Glance and register a new image there;
b) Make the snapshot;
c) Upload the image to Glance;
d) Clean up temporary files;

I would really appreciate any thoughts and questions.

Thanks,
Alexander
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling the gantt scheduler in devstack

2014-02-06 Thread Swapnil Kulkarni
Small correction inline
s/filter_shceduler/filter_scheduler in the SCHEDULER environment variable.




On Wed, Feb 5, 2014 at 5:35 AM, Dugger, Donald D
donald.d.dug...@intel.comwrote:

  Now that a preview version of the new gantt scheduler is available there
 is the problem of configuring devstack to utilize this new tree.
 Fortunately, it's a simple process that only requires 3 changes to your
 `localrc' file:



 1)  Enable gantt

 2)  Disable the nova scheduler

 3)  Change the SCHEDULER environment variable



 Specifically, adding these 3 lines to `localrc' should get gantt working
 in your environment:



 disable_service n-sch
 enable_service gantt
 SCHEDULER=gantt.scheduler.filter_scheduler.FilterScheduler



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Matthew Booth
There's currently an effort to create a common internal API to the
vSphere/ESX API:

https://blueprints.launchpad.net/oslo/+spec/vmware-api

I see there's some code already in place which essentially copies what's
currently in Nova. Having spent some time digging in this code recently,
I would take the opportunity of this refactor to fix a few design issues
in the current code, which has an 'organic' feel to it.

The current code has 2 main objects:

* driver.VMwareAPISession

This object creates a Vim object and manages a session in it. It
provides _call_method(), which calls a method in an external module and
retries it if it failed because of a bad session. _call_method has also
had shoehorned into it the ability to make direct Vim calls.

* vim.Vim

This object creates a connection to vSphere/ESX and provides an API to
make remote calls.

Here are 2 typical uses of the API, both taken from vmops.py:

---
  hardware_devices = self._session._call_method(vim_util,
          "get_dynamic_property", vm_ref,
          "VirtualMachine", "config.hardware.device")
---

This is using _call_method() to wrap:

  vim_util.get_dynamic_property(vm_ref, "VirtualMachine",
                                "config.hardware.device")

vim_util.get_dynamic_property() does an amount of work and creates a
number of objects before ultimately calling:

 return vim.RetrievePropertiesEx(...)

Note that in the event that this call fails, for example due to a
session timeout or a network error, the entire function will be
needlessly re-executed.

---
  reconfig_task = self._session._call_method(
          self._session._get_vim(),
          "ReconfigVM_Task", vm_ref,
          spec=vmdk_attach_config_spec)
  self._session._wait_for_task(instance_uuid, reconfig_task)
---

This is using _call_method() to wrap:

  reconfig_task = vim.ReconfigVM_Task(
  vm_ref, spec=vmdk_attach_config_spec)
  wait_for_task(reconfig_task)  [1]

Things wrong with both of the above:
* It obfuscates the intention.
* It makes backtraces confusing.
* It's possible to forget to use _call_method() and it will still work,
resulting in uncaught intermittent faults.

Additionally, the choice of the separation of driver.VMwareAPISession
and vim.Vim results in several confused messes. In particular, notice
how the fault checker called by vim_request_handler can raise
FAULT_NOT_AUTHENTICATED, which then has to be caught and rechecked by
the driver because the required context isn't available in Vim.

As somebody who has come to this code recently, I can also attest that
the varying uses of _call_method with a module or a vim object, and the
fact that it isn't used at all in some places, is highly confusing to
anybody who isn't intimately familiar with the driver.

There's also a subtle point to do with the Vim object's use of
__getattr__ to syntactically sugar remote API calls: it can lead to
non-obvious behaviour. An example of this is
https://bugs.launchpad.net/nova/+bug/1275773, where the use of a Vim
object in boolean context to test for None[2] resulted in an attempt to
make a remote API call '__nonzero__'. Another example is
https://review.openstack.org/#/c/69652/ where the indirection, combined
with my next point about object orientedness, disguised a syntactically
incorrect call which wasn't being covered by a test.

The syntactic sugar isn't even working in our favour, as it doesn't
model the remote API. The vSphere API is very much object oriented, with
methods being properties of the managed object they are being called on
(e.g. a PropertyCollector or SessionManager). With that in mind, a
python programmer would expect to do something like:

  propertyCollector.RetrievePropertiesEx(args)

However, what we actually do is:

  vim.RetrievePropertiesEx(propertyCollector, args)

With all of the above in mind, I would replace both VMwareAPISession and
Vim with a single new class called VIM (not to be confused with the old
one).

class VIM(object):
    def __init__(self, host_ip=CONF.vmware.host_ip,
                 username=CONF.vmware.host_username,
                 password=CONF.vmware.host_password,
                 retry_count=CONF.vmware.api_retry_count,
                 scheme="https"):
        # Same arguments as the old VMwareAPISession.
        # Create a suds client and log in.

    def get_service_object(self):
        # Return a service object using the suds client.

    def call(self, object, method, *args, **kwargs):
        # Ditch __getattr__(). No unexpected remote API calls.
        # call() always takes the same first 2 arguments.
        #
        # call() will do session management, and retry the call if it fails.
        # It will also create a new suds client if necessary, for example
        # in the event of a network error. Note that it will only retry a
        # single api call, not a whole function.
        #
        # All remote API calls outside the VIM object will use this method.
        #
        # Fault checking lives 

Re: [openstack-dev] [Murano] Repositoris re-organization

2014-02-06 Thread Serg Melikyan
Hi, Alexander,

In general I completely agree with Clint and Robert, and as one of the
contributors to Murano I don't see any practical reasons for repository
reorganization. Regarding your proposal, I have a few thoughts that I
would like to share below:

This enourmous amount of repositories adds too much infrustructural
complexity
Creating a new repository is a quick, easy and completely automated
procedure that requires only a simple commit to the Zuul configuration. All
infrastructure related to repositories is handled by OpenStack CI and
supported by the OpenStack Infra Team, and doesn't actually require anything
from the project development team. What infrastructure complexity are you
talking about?

I actually think keeping them separate is a great way to make sure you
have ongoing API stability. (c) Clint
I would like to share a small statistic gathered by Stan Lagun
a while ago regarding repository counts in different PaaS solutions.
If you are concerned about the large number of repositories used by Murano,
you will be quite amused:

   - https://github.com/heroku - 275
   - https://github.com/cloudfoundry - 132
   - https://github.com/openshift - 49
   - https://github.com/CloudifySource - 46

First of all, I would suggest to have a single reposository for all the
three main components of Murano: main murano API (the contents of the
present), workflow execution engine (currently murano-conductor; also it
was suggested to rename the component itself to murano-engine for more
consistent naming) and metadata repository (currently murano-repository).

*murano-api* and *murano-repository* have many things in common: they
both present an HTTP API to the user, and I hope they will be rewritten to a
common framework (Pecan?). But *murano-conductor* has only one thing in
common with the other two components: the code shared under *murano-common*.
That repository may eventually be eliminated by moving to Oslo (as it should
be).

Also, it has been suggested to move our agents (both windows and unified
python) into the main repository as well - just to put them into a separate
subfolder. I don't see any reasons why they should be separated from core
Murano: I don't believe we are going to have any third-party
implementations of our Unified agent proposals, while this possibility
was the main reason for separatinng them.

The main reason for murano-agent to have a separate repository was not the
possibility of another implementation, but that all sources which should be
built as a package, have tests, and be uploaded to PyPI (or pass any other
gate job) should be placed in a separate repository. OpenStack CI has
several rules regarding how repositories should be organized to support
running different gate jobs. For example, to run tests *tox.ini* needs to be
present in the root directory, and to build a package *setup.py* should be
present in the root directory. So we could not simply move them to separate
directories in the main repository and keep the same capabilities as with a
separate repository.

Next, deployment scripts and auto-generated docs: are there reasons why
they should be in their own repositories, instead of docs and
tools/deployment folders of the primary repo? I would prefer the latter:
docs and deployment scripts have no meaning without the sources which they
document/deploy - so it is better to have them consistent.
We have *developer documentation* alongside all the sources:
murano-conductor (https://github.com/stackforge/murano-conductor/tree/master/doc/source),
murano-api (https://github.com/stackforge/murano-api/tree/master/doc/source)
and so on. It is true that we do not have much documentation there, and not
much code is documented enough to add auto-generated documentation. The
documentation found in the *murano-docs* repository is actually DocBook
documentation, presented in book form, and follows the documentation
patterns found in the core projects themselves: openstack-manuals
(https://github.com/openstack/openstack-manuals/tree/master/doc).

*murano-deployment* contains scripts and other artefacts related to
deployment, but not necessarily to the source code. This repository doesn't
use much of the CI capabilities, but rather it is a logical place where we
can put different things related to deployment: various scripts, specs,
patches and so on. Also, with a separate repository we avoid spamming our
deployment engineers with software-engineering commits.



On Tue, Jan 21, 2014 at 11:55 PM, Alexander Tivelkov ativel...@mirantis.com
 wrote:

 Hi folks,

 As we are moving towards incubation application, I took a closer look at
 what is going on with our repositories.
  And here is what I found. We currently have 11 repositories at stackforge:

- murano-api
- murano-conductor
- murano-repository
- murano-dashboard
- murano-common
- python-muranoclient
- murano-metadataclient
- murano-agent
- murano-docs
- murano-tests
- murano-deployment

 This enourmous amount of repositories adds 

Re: [openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Gary Kotton
Hi,
Thanks for the detailed mail. For the first step of moving the code into
OSLO we are trying to be as conservative as possible (similar to the fork
lift of the scheduler code). That is, we are taking working code and
moving it to the common library, not doing any rewrites and using the same
functionality and flows. Once that is in and stable then we should start
to work on making it more robust. Incidentally the code that was copied is
not Nova's but Cinder's. When the Cinder driver was being posted the team
made a considerable amount of improvements to the driver.
The reason that the _call_method is used is that this deals with session
failures and has support for retries etc. Please see
https://review.openstack.org/#/c/61555/.
I certainly agree that we can improve the code moving forwards. I think
that the first priority should be getting a working version in OSLO. Once it is
up and running then we should certainly start addressing the issues that you
have raised.
Thanks
Gary


On 2/6/14 12:43 PM, Matthew Booth mbo...@redhat.com wrote:

There's currently an effort to create a common internal API to the
vSphere/ESX API:

https://blueprints.launchpad.net/oslo/+spec/vmware-api

I see there's some code already in place which essentially copies what's
currently in Nova. Having spent some time digging in this code recently,
I would take the opportunity of this refactor to fix a few design issues
in the current code, which has an 'organic' feel to it.

The current code has 2 main objects:

* driver.VMwareAPISession

This object creates a Vim object and manages a session in it. It
provides _call_method(), which calls a method in an external module and
retries it if it failed because of a bad session. _call_method has also
had shoehorned into it the ability to make direct Vim calls.

* vim.Vim

This object creates a connection to vSphere/ESX and provides an API to
make remote calls.

Here are 2 typical uses of the API, both taken from vmops.py:

---
  hardware_devices = self._session._call_method(vim_util,
  get_dynamic_property, vm_ref,
  VirtualMachine, config.hardware.device)
---

This is using _call_method() to wrap:

  vim_util.get_dynamic_property(vm_ref, VirtualMachine,
config.hardware.device)

vim_util.get_dynamic_property() does an amount of work and creates a
number of objects before ultimately calling:

 return vim.RetrievePropertiesEx(...)

Note that in the event that this call fails, for example due to a
session timeout or a network error, the entire function will be
needlessly re-executed.

---
  reconfig_task = self._session._call_method(
  self._session._get_vim(),
  ReconfigVM_Task, vm_ref,
  spec=vmdk_attach_config_spec)
  self._session._wait_for_task(instance_uuid, reconfig_task)
---

This is using _call_method() to wrap:

  reconfig_task = vim.ReconfigVM_Task(
  vm_ref, spec=vmdk_attach_config_spec)
  wait_for_task(reconfig_task)  [1]

Things wrong with both of the above:
* It obfuscates the intention.
* It makes backtraces confusing.
* It's possible to forget to use _call_method() and it will still work,
resulting in uncaught intermittent faults.

Additionally, the choice of the separation of driver.VMwareAPISession
and vim.Vim results in several confused messes. In particular, notice
how the fault checker called by vim_request_handler can raise
FAULT_NOT_AUTHENTICATED, which then has to be caught and rechecked by
the driver because the required context isn't available in Vim.

As somebody who has come to this code recently, I can also attest that
the varying uses of _call_method with a module or a vim object, and the
fact that it isn't used at all in some places, is highly confusing to
anybody who isn't intimately familiar with the driver.

There's also a subtle point to do with the Vim object's use of
__getattr__ to syntactically sugar remote API calls: it can lead to
non-obvious behaviour. An example of this is
https://bugs.launchpad.net/nova/+bug/1275773, where the use of a Vim
object in boolean context to test for None[2] resulted in an attempt to
make a remote API call '__nonzero__'. Another example is
https://review.openstack.org/#/c/69652/
Re: [openstack-dev] [nova] Making periodic tasks config consistent.

2014-02-06 Thread Matthew Gilliard
If there is agreement that it's a change worth making, then I expect
something like:

1/ Add a warning for users who use a period of 0 or use the default.  Both in
the literal sense of log.warning() and in the documentation.
2/ wait for a full release-cycle
3/ make the actual change in Juno.

Does that make sense?
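
For step 1, something along these lines inside the decorator (illustrative
only, not a final patch):

import logging

LOG = logging.getLogger(__name__)

def _warn_if_legacy_spacing(task_name, spacing):
    # Warn users who rely on the current "0 means run whenever any other
    # periodic task runs" behaviour before the semantics change in Juno.
    if spacing == 0:
        LOG.warning("Periodic task %s uses spacing=0; in a future release "
                    "this will mean 'run at the default period' rather than "
                    "'run with every other periodic task'. Set an explicit "
                    "spacing to keep the current behaviour.", task_name)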


On Thu, Feb 6, 2014 at 9:46 AM, Michael Still mi...@stillhq.com wrote:

 On Thu, Feb 6, 2014 at 8:16 PM, Matthew Gilliard
 matthew.gilli...@gmail.com wrote:
  Hello everyone.
 
wrt these bugs: https://bugs.launchpad.net/nova/+bug/1276203
  https://bugs.launchpad.net/nova/+bug/1272830 - I'd just like to make
 sure
  that the approach I'm planning makes sense.
 
To summarise: Currently there are a number of methods in
  compute/manager.py that use the @periodic_task decorator.  Some of them
 also
  do their own checks about how often they are called, and use a
 convention of
  polling period = 0 to disable the method by returning early (although
 this
  is sometimes implemented as =0 [1] and sometimes as ==0 [2]).  In the
  decorator itself though, a polling period of 0 is used to mean call this
  method any time any other period task is run [3].  It's difficult to
  predict how often this might be, and it may not be at regular intervals.
 
I'd like to make this more consistent and predictable.  My plan is to
 use
  the following:
 
- Any positive integer: the method is called every this many seconds,
  best effort is made not to call it more or less often.
- 0: the method will be called regularly at the default period.
  Currently
  hard-coded to 60s [4] this could be made into a config option
- Any negative integer: the method will not be called
 
All this logic would be contained in the decorator so that the methods
  themselves can just get on with whatever business they have.  So far, I
 hope
  this isn't too contentious - just clean code.  Is there any case that
 I've
  missed?  The fix will necessarily be a breaking change.  So how do you
  suggest I approach that aspect?  As it's common code, should I actually
 be
  looking to make these changes in Oslo first then porting them in?

 The decorator comes from oslo, so you're talking about changing the
 default flag behaviour for pretty much every openstack project here.
 How do we do this in a way which doesn't have unexpected side effects
 for deployments?

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 Sass support)

2014-02-06 Thread Jiri Tomasek

Hey,

Switching to SASS/Compass seems to me like a nice idea. Although, reading 
the Compass docs on using it in django/python projects [1], they recommend 
serving compiled CSS as output for production, so the production 
servers don't have to carry the ruby/compass gem dependencies.


Also, in django project development you need to run compass --watch if 
you want SCSS to compile automatically, and developers need to install a 
Ruby environment with the necessary gems.


Switching to sass/compass is a good thing as it resolves the issue with 
the nodejs dependency for Less and also brings Compass goodness into play. I 
think this solution is a bit rough for a python/django developer though.


Independently of whether we choose to stick with Less or change to Sass, 
we'll still need to add a dependency (nodejs or ruby). What we need to 
consider is whether we want to compile CSS in production or not.


The recently mentioned solution of separating CSS and JS into a separate 
project that outputs compiled JS and CSS comes into play. The problem I see 
with sass/compass here is that we'll probably need a nodejs dependency 
for JS tools like bower, grunt, JS test suites etc. With sass/compass 
we'd need an additional Ruby dependency.


Jirka



[1] http://compass-style.org/blog/2011/05/09/compass-django/


On 02/05/2014 08:23 PM, Gabriel Hurley wrote:

I would imagine the downstream distros won't have the same problems with Ruby 
as they did with Node.js from a dependency standpoint, though it still doesn't 
jive with the community's all-Python bias.

My real concern, though, is anyone who may have extended the Horizon 
stylesheets using the capabilities of LESS. There are lots of ways you can 
customize the appearance of Horizon, and some folks may have gone that route.

My recommended course of action would be to think deeply on some recommended ways of 
upgrading from LESS to SASS for existing deployments who may have written 
their own stylesheets. Treat this like a feature deprecation (which is what it is).

Otherwise, if it makes people's lives better to use SASS instead of LESS, it 
sounds good to me.

 - Gabriel


-Original Message-
From: Jason Rist [mailto:jr...@redhat.com]
Sent: Wednesday, February 05, 2014 9:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from
Less to Sass (Bootstrap 3  Sass support)

On Wed 05 Feb 2014 09:32:54 AM MST, Jaromir Coufal wrote:

Dear Horizoners,

in last days there were couple of interesting discussions about
updating to Bootstrap 3. In this e-mail, I would love to give a small
summary and propose a solution for us.

As Bootstrap was heavily dependent on Less, when we got rid of node.js
we started to use lesscpy. Unfortunately because of this change we
were unable to update to Bootstrap 3. Fixing lesscpy looks problematic
- there are issues with supporting all use-cases and even if we fix
this in some time, we might challenge these issues again in the future.

There is great news for Bootstrap. It started to support Sass [0].
(Thanks Toshi and MaxV for highlighting this news!)

Thanks to this step forward, we might get out of our lesscpy issues by
switching to Sass. I am very happy with this possible change, since
Sass is more powerful than Less and we will be able to update our
libraries without any constraints.

There are few downsides - we will need to change our Horizon Less
files to Sass, but it shouldn't be very big deal as far as we
discussed it with some Horizon folks. We can actually do it as a part
of Bootstrap update [1] (or CSS files restructuring [2]).

Other concern will be with compilers. So far I've found 3 ways:
* rails dependency (how big problem would it be?)
* https://pypi.python.org/pypi/scss/0.7.1
* https://pypi.python.org/pypi/SassPython/0.2.1
* ... (other suggestions?)

Nice benefit of Sass is, that we can use advantage of Compass
framework [3], which will save us a lot of energy when writing (not
just cross-browser) stylesheets thanks to their mixins.

When we discussed on IRC with Horizoners, it looks like this is good
way to go in order to move us forward. So I am here, bringing this
suggestion up to whole community.

My proposal for Horizon is to *switch from Less to Sass*. Then we can
unblock our already existing BPs, get Bootstrap updates and include
Compass framework. I believe this is all doable in Icehouse timeframe
if there are no problems with compilers.

Thoughts?

-- Jarda

[0] http://getbootstrap.com/getting-started/
[1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
[2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
[3] http://compass-style.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think this is a fantastic idea. Having no experience with Less, but seeing 
that
it is troublesome - if 

Re: [openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Matthew Booth
On 06/02/14 11:24, Gary Kotton wrote:
 Hi,
 Thanks for the detailed mail. For the first step of moving the code into
 OSLO we are trying to be as conservative as possible (similar to the fork
 lift of the scheduler code). That is, we are taking working code and
 moving it to the common library, not doing any rewrites and using the same
 functionality and flows. Once that is in and stable then we should start
 to work on making it more robust.

In moving the code to oslo there's going to be some donkey work required
to find all current uses of the code and fix them up. I'm proposing this
change now because it would be a great opportunity to do that donkey
work just once.

Also notice that the actual usage of the proposed API is very similar to
the old one. The donkey work would essentially amount to:

* Change all instances of vim.Method(object, args) in vim_util to
vim.call(object, Method, args)

* Change all instances of session._call_method(vim_util, 'method', args)
everywhere to vim_util.method(session, args)

Note that the changes are mechanical, and you're going to have to touch
it for the move to oslo anyway.

Also note that the proposed API would, at least in the first instance,
be substantially a cut and paste of existing code.

Incidentally the code that was copied is
 not Nova's but Cinders. When the Cinder driver was being posted the team
 made a considerable amount of improvements to the driver.

I've read it. It's certainly much prettier python, but the design is the
same as the Nova driver.

 The reason that the _call_method is used is that this deals with session
 failures and has support for retries etc. Please see
 https://review.openstack.org/#/c/61555/.

That's one of the explicit design goals of the proposed API. Notice that
mistakes like this are no longer possible, as the call() method does
session management and retries, and there is no other exposed interface
to make remote API calls.

 I certainly agree that we can improve the code moving forwards. I think
 that the first priority should get a working version in OSLO. Once it is
 up and running then we should certain start addressing issues that you
 have raised.

I think it's almost no additional work to fix it at this stage, given
that the code is being refactored anyway and it will require donkey work
in the driver to match up with it. If we wait until later it becomes a
whole additional task.

Matt

 Thanks
 Gary
 
 
 On 2/6/14 12:43 PM, Matthew Booth mbo...@redhat.com wrote:
 
 There's currently an effort to create a common internal API to the
 vSphere/ESX API:

  https://blueprints.launchpad.net/oslo/+spec/vmware-api

 I see there's some code already in place which essentially copies what's
 currently in Nova. Having spent some time digging in this code recently,
 I would take the opportunity of this refactor to fix a few design issues
 in the current code, which has an 'organic' feel to it.

 The current code has 2 main objects:

 * driver.VMwareAPISession

 This object creates a Vim object and manages a session in it. It
 provides _call_method(), which calls a method in an external module and
 retries it if it failed because of a bad session. _call_method has also
 had shoehorned into it the ability to make direct Vim calls.

 * vim.Vim

 This object creates a connection to vSphere/ESX and provides an API to
 make remote calls.

 Here are 2 typical uses of the API, both taken from vmops.py:

 ---
  hardware_devices = self._session._call_method(vim_util,
  get_dynamic_property, vm_ref,
  VirtualMachine, config.hardware.device)
 ---

 This is using _call_method() to wrap:

  vim_util.get_dynamic_property(vm_ref, VirtualMachine,
config.hardware.device)

 vim_util.get_dynamic_property() does an amount of work and creates a
 number of objects before ultimately calling:

 return vim.RetrievePropertiesEx(...)

 Note that in the event that this call fails, for example due to a
 session timeout or a network error, the entire function will be
 needlessly re-executed.

 ---
  reconfig_task = self._session._call_method(
  self._session._get_vim(),
  ReconfigVM_Task, vm_ref,
  spec=vmdk_attach_config_spec)
  self._session._wait_for_task(instance_uuid, reconfig_task)
 ---

 This is using _call_method() to wrap:

  reconfig_task = vim.ReconfigVM_Task(
  vm_ref, spec=vmdk_attach_config_spec)
  wait_for_task(reconfig_task)  [1]

 Things wrong with both of the above:
 * It obfuscates the intention.
 * It makes backtraces confusing.
 * It's possible to forget to use _call_method() and it will still work,
 resulting in 

Re: [openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Gary Kotton


On 2/6/14 1:58 PM, Matthew Booth mbo...@redhat.com wrote:

On 06/02/14 11:24, Gary Kotton wrote:
 Hi,
 Thanks for the detailed mail. For the first step of moving the code into
 OSLO we are trying to be as conservative as possible (similar to the
fork
 lift of the scheduler code). That is, we are taking working code and
 moving it to the common library, not doing any rewrites and using the
same
 functionality and flows. Once that is in and stable then we should start
 to work on making it more robust.

In moving the code to oslo there's going to be some donkey work required
to find all current uses of the code and fix them up. I'm proposing this
change now because it would be a great opportunity to do that donkey
work just once.

The work has already been done - https://review.openstack.org/#/c/70175/
This also has a +1 from the minesweeper which means that the APIs are
working correctly.


Also notice that the actual usage of the proposed API is very similar to
the old one. The donkey work would essentially amount to:

* Change all instances of vim.Method(object, args) in vim_util to
vim.call(object, Method, args)

* Change all instances of session._call_method(vim_util, 'method', args)
everywhere to vim_util.method(session, args)

Note that the changes are mechanical, and you're going to have to touch
it for the move to oslo anyway.

Also note that the proposed API would, at least in the first instance,
be substantially a cut and paste of existing code.

Incidentally the code that was copied is
 not Nova's but Cinders. When the Cinder driver was being posted the team
 made a considerable amount of improvements to the driver.

I've read it. It's certainly much prettier python, but the design is the
same as the Nova driver.

 The reason that the _call_method is used is that this deals with session
 failures and has support for retries etc. Please see
 
https://review.openstack.org/#/c/61555/.

That's one of the explicit design goals of the proposed API. Notice that
mistakes like this are no longer possible, as the call() method does
session management and retries, and there is no other exposed interface
to make remote API calls.

 I certainly agree that we can improve the code moving forwards. I think
 that the first priority should be to get a working version into oslo. Once
 it is up and running we should certainly start addressing the issues that
 you have raised.

I think it's almost no additional work to fix it at this stage, given
that the code is being refactored anyway and it will require donkey work
in the driver to match up with it. If we wait until later it becomes a
whole additional task.

Matt

 Thanks
 Gary
 
 
 On 2/6/14 12:43 PM, Matthew Booth mbo...@redhat.com wrote:
 
 There's currently an effort to create a common internal API to the
 vSphere/ESX API:

 
 https://blueprints.launchpad.net/oslo/+spec/vmware-api

 I see there's some code already in place which essentially copies
what's
 currently in Nova. Having spent some time digging in this code
recently,
 I would take the opportunity of this refactor to fix a few design
issues
 in the current code, which has an 'organic' feel to it.

 The current code has 2 main objects:

 * driver.VMwareAPISession

 This object creates a Vim object and manages a session in it. It
 provides _call_method(), which calls a method in an external module and
 retries it if it failed because of a bad session. _call_method has also
 had shoehorned into it the ability to make direct Vim calls.

 * vim.Vim

 This object creates a connection to vSphere/ESX and provides an API to
 make remote calls.

 Here are 2 typical uses of the API, both taken from vmops.py:

 ---
  hardware_devices = self._session._call_method(vim_util,
  "get_dynamic_property", vm_ref,
  "VirtualMachine", "config.hardware.device")
 ---

 This is using _call_method() to wrap:

  vim_util.get_dynamic_property(vm_ref, "VirtualMachine",
"config.hardware.device")

 vim_util.get_dynamic_property() does an amount of work and creates a
 number of objects before ultimately calling:

 return vim.RetrievePropertiesEx(...)

 Note that in the event that this call fails, for example due to a
 session timeout or a network error, the entire function will be
 needlessly re-executed.

 ---
  reconfig_task = self._session._call_method(
  self._session._get_vim(),
  "ReconfigVM_Task", vm_ref,

Re: [openstack-dev] why do we put a license in every file?

2014-02-06 Thread Radomir Dopieralski
On 05/02/14 17:46, Jay Pipes wrote:
 On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code 
 in our projects.
 
 Meh, probably just habit and copy/paste behavior.

It's actually not just a habit, it's a requirement, and hacking has a
check for files with missing licenses.
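
For reference, what the hacking check expects is the standard Apache 2.0
notice at the top of every module, something like the following (quoting from
memory, so double-check the exact wording against HACKING.rst and the check
itself):

    #    Licensed under the Apache License, Version 2.0 (the "License"); you may
    #    not use this file except in compliance with the License. You may obtain
    #    a copy of the License at
    #
    #         http://www.apache.org/licenses/LICENSE-2.0
    #
    #    Unless required by applicable law or agreed to in writing, software
    #    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    #    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    #    License for the specific language governing permissions and limitations
    #    under the License.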

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 Sass support)

2014-02-06 Thread Radomir Dopieralski
On 05/02/14 17:32, Jaromir Coufal wrote:
[snip]

 Another concern will be the compilers. So far I've found 3 ways:
 * rails dependency (how big problem would it be?)
 * https://pypi.python.org/pypi/scss/0.7.1
 * https://pypi.python.org/pypi/SassPython/0.2.1
 * ... (other suggestions?)

The first of those links actually points to a newer and better
maintained project: https://github.com/Kronuz/pyScss

The question is, are we jumping out of the frying pan into the fire? Are we
going to have exactly the same problems a couple of months or years down
the road with SASS? How healthy is the SASS community compared to that
of LESS, and how good are the available tools?
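
For what it's worth, the pure-Python route is fairly small -- something like
this with pyScss (written from memory, so the exact entry point may differ
between pyScss versions):

    from scss import Scss

    compiler = Scss()
    # Compile a SASS/SCSS snippet straight to CSS, no Ruby toolchain needed.
    print(compiler.compile("$brand: #336699; .btn { color: $brand; }"))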

We have to remember that we are adding the burden of learning a new
language for all developers -- although SASS and LESS are relatively
similar, we are still running the risk of losing some contributors who
don't want to jump through one more hoop.

Finally, I wonder how well it would work with multiple SASS files
collected from different Django apps. Currently, all the LESS files for
Bootstrap are included in OpenStack Dashboard, simply because the
dashboard's files need to inherit from them, and putting them in Horizon
would make that impossible. It's a workaround, but it also makes it
impossible for apps other than the dashboard (like tuskar-ui) to inherit
from those LESS files. I think that if we had a solution for that in
SASS, that would be another strong advantage of switching.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Integrating with 3rd party DB

2014-02-06 Thread Noorul Islam Kamal Malmiyoda
Hello stackers,

We have a database with tables for users, projects, roles, etc. Are there
any reference implementations or best practices for making keystone use
this DB instead of its own?

I have been reading
https://wiki.openstack.org/wiki/Keystone/Federation/Blueprint but I
could not find an open reference implementation for the same.
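
For context, what we have in mind is something along these lines: pointing
keystone at a custom identity driver and implementing the driver interface
against our existing tables. A very rough sketch follows -- the config key,
base class location and method signatures here are from memory and only
illustrative, so they need checking against the actual keystone release:

    # keystone.conf (illustrative):
    #   [identity]
    #   driver = mycompany.identity_backend.LegacyDbIdentity

    import sqlalchemy

    from keystone.identity import core


    class LegacyDbIdentity(core.Driver):
        """Identity backend reading an existing users/projects/roles DB."""

        def __init__(self):
            super(LegacyDbIdentity, self).__init__()
            self.engine = sqlalchemy.create_engine("mysql://user:pw@db/legacy")

        def get_user(self, user_id):
            row = self.engine.execute(
                sqlalchemy.text("SELECT id, name, enabled FROM users "
                                "WHERE id = :uid"), uid=user_id).first()
            return {"id": row.id, "name": row.name,
                    "enabled": bool(row.enabled)}

        def authenticate(self, user_id, password):
            # Verify the password against the existing credential store here.
            raise NotImplementedError()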

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-06 Thread Sandhya Dasu (sadasu)
Hi Bob and Irena,
   Thanks for the clarification. Irena, I am not opposed to a
SriovMechanismDriverBase/Mixin approach, but I want to first figure out
how much common functionality there is. Have you already looked at this?

Thanks,
Sandhya

On 2/5/14 1:58 AM, Irena Berezovsky ire...@mellanox.com wrote:

Please see inline my understanding

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support of
 SR-IOV ports.

I'll try, but I think these questions might be more about how the various
SR-IOV implementations will work than about ML2 itself...

 1. The SR-IOV ports would not be managed by OVS or linuxbridge L2
 agents. So, how does an MD for SR-IOV ports bind/unbind its ports to
 the host? Will it just be a DB update?

I think whether or not to use an L2 agent depends on the specific SR-IOV
implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the nova
VIF driver take care of setting up the port without an L2 agent.
[IrenaB] Based on the VIF_Type that the MD defines, and going forward with
other binding:vif_details attributes, the VIFDriver should do the VIF
plugging part. As for the required networking configuration, it is usually
done either by an L2 agent or an external controller, depending on the MD.

 
 2. Also, how do we handle the functionality in mech_agent.py, within
 the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent would
inherit the AgentMechanismDriverBase class if it provides useful
functionality, but any MechanismDriver implementation is free to not use
this base class if it's not applicable. I'm not sure if an
SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being
planned, and how that would relate to AgentMechanismDriverBase.

[IrenaB] I agree with Bob, and as I stated before I think there is a need
for a SriovMechanismDriverBase/Mixin that provides all the generic
functionality and helper methods that are common to SR-IOV ports.
-Bob
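
To make the question about common functionality concrete, a shared SR-IOV base
might look roughly like the following. This is purely illustrative -- the
vif_type value is made up and the PortContext/set_binding details should be
checked against the current ML2 code before assuming anything:

    from neutron.plugins.ml2 import driver_api as api


    class SriovMechanismDriverBase(api.MechanismDriver):
        """Common plumbing shared by SR-IOV mechanism drivers (sketch only)."""

        vif_type = "hw_veb"  # illustrative value

        def initialize(self):
            pass

        def bind_port(self, context):
            for segment in context.network.network_segments:
                if self.check_segment(segment):
                    context.set_binding(segment[api.ID],
                                        self.vif_type,
                                        self.get_vif_details(context))
                    return

        def check_segment(self, segment):
            return segment[api.NETWORK_TYPE] in ("vlan", "flat")

        def get_vif_details(self, context):
            # Vendor drivers override this to pass PCI/VF info to the VIF driver.
            return {}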

 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, Irena Berezovsky ire...@mellanox.com,
 Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com,
 Brian Bowen (brbowen) brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi,
 Since openstack-meeting-alt seems to be in use, Baoli and I
 are moving to openstack-meeting. Hopefully Bob Kukura & Irena can
 join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com, Robert Kukura rkuk...@redhat.com, OpenStack Development
 Mailing List (not for usage questions) openstack-dev@lists.openstack.org,
 Brian Bowen (brbowen) brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi all,
 Both openstack-meeting and openstack-meeting-alt are available
 today. Let's meet at UTC 2000 @ openstack-meeting-alt.
 
 Thanks,
 Sandhya
 
 From: Irena Berezovsky ire...@mellanox.com
 Date: Monday, February 3, 2014 12:52 AM
 To: Sandhya Dasu sad...@cisco.com, Robert Li (baoli) ba...@cisco.com,
 Robert Kukura rkuk...@redhat.com, OpenStack Development Mailing List (not
 for usage questions) openstack-dev@lists.openstack.org, Brian Bowen
 (brbowen) brbo...@cisco.com
 Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
 Jan. 30th
 
 Hi Sandhya,
 
 Can you please elaborate on how you suggest extending the below BP for
 SR-IOV ports managed by a different mechanism driver?
 
 I am not 

Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-06 Thread Thierry Carrez
Andreas Jaeger wrote:
 On 02/05/2014 06:38 PM, Jonathan Bryce wrote:
 In this case, we are talking about documentation that is produced and 
 distributed with the integrated release to cover the Core OpenStack Project 
 and the “modules that are distributed together with the Core OpenStack 
 Project in the integrated release. This is the intended use case for the 
 exception Mark quoted above from the Bylaws, and I think it is perfectly 
 fine to refer to the integrated components in the OpenStack release 
 documentation as OpenStack components.
 
 What about if I talk about OpenStack at a conference (like I'm doing
 today)? What should I say: Orchestration, Heat module (or just Heat)?

The way I read that clause: if you mention Heat as a part of the
integrated release (i.e. your talk is about OpenStack), then you can use
OpenStack Orchestration. If you're talking about the project
separately (i.e. your talk is just about Heat), then you should probably
just say Heat. It makes sense, I think: Heat is the project that fills
the role of orchestration in the OpenStack integrated release.

 What about all the OpenStack distributors and users like SUSE,
 Rackspace, HP, Red Hat etc? What should they use in their documentation
 and software?

They should probably ask their legal counsel on how they can interpret
that clause.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Thierry Carrez
Russell Bryant wrote:
 Perhaps going through this process for a single project first would be
 helpful.  I agree that some clarification is needed on the details of
 the expected result.

At this point, I think we can break their request into two separate
questions.

The first one is high level, simple and technical: which parts of each
project have a pluggable interface? We should be able to list those in
an objective fashion, and feed objective input into the second question.

The second question is more subjective: where, in that list, is it
acceptable to run an out-of-tree implementation? Where would you say
you can substitute code in Nova and still consider the resulting beast
to be running Nova? The scheduler, for example, was explicitly designed
so that you can plug in your own algorithm -- so I think an out-of-tree
scheduler class is fine, and I would exclude the scheduler classes
(ChanceScheduler and others) from the designated sections.

Since that second question is more subjective, I think the answer should
be a recommendation that the TC would collect and pass to the board.

As a first step, I think we should answer the technical first question.
There is no imposed format for the answer, so any freeform list will do.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Thierry Carrez
Mark Washenberger wrote:
 I don't have any issue defining what I think of as typical extension /
 variation seams in the Glance code base. However, I'm still struggling
 to understand what all this means for our projects and our ecosystem.
 Basically, why do I care? What are the implications of a 0% vs 100%
 designation? Are we hoping to improve interoperability, or encourage
 more upstream collaboration, or what?
 
 How many deployments do we expect to get the trademark after this core
 definition process is completed?

Yes... what is the end goal? I agree that's important (and influences our
response here a bit). But that's a separate discussion, one I've
started on the foundation ML:

http://lists.openstack.org/pipermail/foundation/2014-February/001620.html

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-06 Thread Thierry Carrez
Dina Belova wrote:
 Perhaps we should start putting each project on the TC agenda for a
 review of its current standing.  For any gaps, I think we should set a
 specific timeframe for when we expect these gaps to be filled.
 
 
 Really good idea. New requirements are great, but frankly speaking not
 all currently integrated projects fit all of them.
 Will be nice to find out all gaps there and fix them asap.

Agreed. I propose we do this in future TC meetings, time permitting. I
propose we start with projects where the PTL was also elected to the TC,
so that we give this new review's process some mileage.

So.. Nova, Cinder, Neutron ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Modularity of generic driver (network mediated)

2014-02-06 Thread Swartzlander, Ben
Raja, this is one of a few workable approaches that I've thought about. I'm not
convinced it's the best approach, but it does look to be less effort so we
should examine it carefully.

One thing to consider is that if we go down the route of using service VMs for
the mediated drivers (such as gluster) then we don't need to be tied to
Ganesha-NFS -- we could use nfs-kernel-server instead. Perhaps Ganesha-NFS is
still the better choice but I'd like to compare the two in this context.

One downside is that service VMs with full virtualization are a relatively
heavyweight way to deliver file share services to tenants. If there were
approaches that could use container-based virtualization or no virtualization
at all, then those would probably be more efficient (although also possibly
more work).

-Ben


-Original Message-
From: Ramana Raja [mailto:rr...@redhat.com] 
Sent: Wednesday, February 05, 2014 11:42 AM
To: openstack-dev@lists.openstack.org
Cc: vponomar...@mirantis.com; aostape...@mirantis.com; yportn...@mirantis.com; 
Csaba Henk; Vijay Bellur; Swartzlander, Ben
Subject: [Manila] Modularity of generic driver (network mediated)

Hi,

The first prototype of the multi-tenant capable GlusterFS driver would 
piggyback on the generic driver, which implements the network plumbing model 
[1]. We'd have NFS-Ganesha server running on the service VM. The Ganesha server 
would mediate access to the GlusterFS backend (or any other Ganesha compatible 
clustered file system backends such as CephFS, GPFS, among others), while the 
tenant network isolation would be done by the service VM networking [2][3]. To 
implement this idea, we'd have to reuse much of the generic driver code 
especially that related to the service VM networking.

So we were wondering whether the current generic driver could be made more
modular. The service VM would then not just be used to expose a formatted
Cinder volume, but could instead be used as an instrument to convert the
existing single-tenant drivers (with slight modification) - LVM, GlusterFS -
into multi-tenant-ready drivers. Do you see any issues with this idea - a
generic, modular multi-tenant driver that implements the network plumbing
model? And is it feasible?


[1] https://wiki.openstack.org/wiki/Manila_Networking
[2] 
https://docs.google.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
[3] 
https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit

Thanks,

Ram
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Dolph Mathews
On Wed, Feb 5, 2014 at 10:22 AM, Thierry Carrez thie...@openstack.orgwrote:

 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you're actually needed to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.

 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?


I'm curious about the level of granularity that's envisioned in each
definition. Designated sections could be as broad as keystone.* or as
narrow as keystone.token.controllers.Auth.validate_token_head(). It could
be defined in terms of executables, package paths, or line numbers.

The definition is likely to change over time (i.e. per stable release). For
example, where support for password-based authentication might have been
mandated for an essex deployment, a havana deployment has the ability to
remove the password auth plugin and replace it with something else.

The definition may also be conditional, and require either A or B. In
havana for example, where keystone shipped two stable APIs side by side,
I wouldn't expect all deployments to enable both (or even bother to update
their paste pipeline from a previous release).



 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-06 Thread Dina Belova

 I propose we do this in future TC meetings, time permitting. I
 propose we start with projects where the PTL was also elected to the TC,
 so that we give this new review's process some mileage.


+1, good idea


On Thu, Feb 6, 2014 at 5:47 PM, Thierry Carrez thie...@openstack.orgwrote:

 Dina Belova wrote:
  Perhaps we should start putting each project on the TC agenda for a
  review of its current standing.  For any gaps, I think we should set
 a
  specific timeframe for when we expect these gaps to be filled.
 
 
  Really good idea. New requirements are great, but frankly speaking not
  all currently integrated projects fit all of them.
  Will be nice to find out all gaps there and fix them asap.

 Agreed. I propose we do this in future TC meetings, time permitting. I
 propose we start with projects where the PTL was also elected to the TC,
 so that we give this new review's process some mileage.

 So.. Nova, Cinder, Neutron ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-06 Thread Sergey Lukjanov
Probably all PTLs could be asked to prepare an initial report on the
requirements, as was done last time for graduating projects.


On Thu, Feb 6, 2014 at 6:07 PM, Dina Belova dbel...@mirantis.com wrote:

 I propose we do this in future TC meetings, time permitting. I
 propose we start with projects where the PTL was also elected to the TC,
 so that we give this new review's process some mileage.


 +1, good idea


 On Thu, Feb 6, 2014 at 5:47 PM, Thierry Carrez thie...@openstack.orgwrote:

 Dina Belova wrote:
  Perhaps we should start putting each project on the TC agenda for a
  review of its current standing.  For any gaps, I think we should
 set a
  specific timeframe for when we expect these gaps to be filled.
 
 
  Really good idea. New requirements are great, but frankly speaking not
  all currently integrated projects fit all of them.
  Will be nice to find out all gaps there and fix them asap.

 Agreed. I propose we do this in future TC meetings, time permitting. I
 propose we start with projects where the PTL was also elected to the TC,
 so that we give this new review's process some mileage.

 So.. Nova, Cinder, Neutron ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Thierry Carrez
Dolph Mathews wrote:
 I'm curious about the level of granularity that's envisioned in each
 definition. Designated sections could be as broad as keystone.* or as
 narrow as keystone.token.controllers.Auth.validate_token_head(). It
 could be defined in terms of executables, package paths, or line numbers.
 
 The definition is likely to change over time (i.e. per stable release).
 For example, where support for password-based authentication might have
 been mandated for an essex deployment, a havana deployment has the
 ability to remove the password auth plugin and replace it with something
 else.
 
 The definition may also be conditional, and require either A or B. In
 havana for example, where keystone shipped two stable APIs side by
 side, I wouldn't expect all deployments to enable both (or even bother
 to update their paste pipeline from a previous release).

That's why I think it's not practical to define which code needs to be
run (because you never run all the code paths or all the drivers at
the same time), and we should define where *external code* can be
plugged instead.

Then, in your example, if A and B are both in the code we ship, there is
no need to talk about them, unless you also allow an external C driver
to be run instead.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Doug Hellmann
On Thu, Feb 6, 2014 at 8:42 AM, Thierry Carrez thie...@openstack.orgwrote:

 Russell Bryant wrote:
  Perhaps going through this process for a single project first would be
  helpful.  I agree that some clarification is needed on the details of
  the expected result.

 At this point, I think we can break their request into two separate
 questions.

 The first one is high level, simple and technical: which parts of each
 project have a pluggable interface ? We should be able to list those in
 an objective fashion, and feed objective input into the second question.

 The second question is more subjective: where, in that list, is it
 acceptable to run an out of tree implementation ? Where would you say
 you can substitute code in Nova and still consider running the resulting
 beast is running nova ? The scheduler for example was explicitly
 designed so that you can plug your own algorithm -- so I think an out of
 tree scheduler class is fine... so I would exclude the scheduler classes
 (ChanceScheduler and others) from the designated sections.


 Since that second question is more subjective, I think the answer should
 be a recommendation that the TC would collect and pass to the board.

 As a first step, I think we should answer the technical first question.
 There is no imposed format for the answer, so any freeform list will do.


+1 -- having good developer docs for all of our publicly pluggable APIs
will be a good thing anyway.

Doug






 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Doug Hellmann
On Thu, Feb 6, 2014 at 9:21 AM, Thierry Carrez thie...@openstack.orgwrote:

 Dolph Mathews wrote:
  I'm curious about the level of granularity that's envisioned in each
  definition. Designated sections could be as broad as keystone.* or as
  narrow as keystone.token.controllers.Auth.validate_token_head(). It
  could be defined in terms of executables, package paths, or line numbers.
 
  The definition is likely to change over time (i.e. per stable release).
  For example, where support for password-based authentication might have
  been mandated for an essex deployment, a havana deployment has the
  ability to remove the password auth plugin and replace it with something
  else.
 
  The definition may also be conditional, and require either A or B. In
  havana for example, where keystone shipped two stable APIs side by
  side, I wouldn't expect all deployments to enable both (or even bother
  to update their paste pipeline from a previous release).

 That's why I think it's not practical to define which code needs to be
 run (because you never run all the code paths or all the drivers at
 the same time), and we should define where *external code* can be
 plugged instead.

 Then, in your example, if A and B are both in the code we ship, there is
 no need to talk about them, unless you also allow an external C driver
 to be run instead.


+1

Doug




 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-06 Thread Gil Rapaport
Mike, exactly: we would like to allow flexibility & complexity at the
Advisor level without it affecting the placement computation.
Advisors are expected to manifest complex behavior as suggested by these 
BPs and gather constraints from multiple sources (users and providers).

The idea is indeed to define a protocol that can express placement 
requests without exposing the engine to 
complex/high-level/rapidly-changing/3rd-party semantics.
I think BPs such as the group API and flexible-evacuation, combined with
the power of the LP solvers Yathiraj mentioned, do push the scheduler towards
being a more generic placement oracle, so the protocol should probably not
be limited to the current "deploy one or more instances of the same kind"
request.

Here's a more detailed description of our thoughts on how such a protocol 
might look:
https://wiki.openstack.org/wiki/Nova/PlacementAdvisorAndEngine
We've concentrated on the Nova scheduler; it would be interesting to see if
this protocol aligns with Yathiraj's thoughts on a global scheduler
addressing compute+storage+network.
Feedback is most welcome.
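
To give a flavour of the kind of request we mean (purely illustrative -- the
wiki page above is the authoritative description, and the field names below
are made up for the example), a placement problem handed from the Advisor to
the Engine might look something like:

    placement_problem = {
        "instances": [
            {"name": "db-0", "flavor": "m1.large"},
            {"name": "web-0", "flavor": "m1.small"},
            {"name": "web-1", "flavor": "m1.small"},
        ],
        "constraints": [
            # Gathered from both users and providers by the Advisor.
            {"type": "anti-affinity", "instances": ["web-0", "web-1"]},
            {"type": "same-rack", "instances": ["db-0", "web-0"]},
        ],
        "objective": {"type": "pack", "weight": 1.0},
    }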

Regards,
Gil



From:   Mike Spreitzer mspre...@us.ibm.com
To: OpenStack Development Mailing List \(not for usage questions\) 
openstack-dev@lists.openstack.org, 
Date:   02/04/2014 10:10 AM
Subject:Re: [openstack-dev] [Nova][Scheduler] Policy Based 
Scheduler and Solver Scheduler



 From: Khanh-Toan Tran khanh-toan.t...@cloudwatt.com 
...
 There is an unexpected line break in the middle of the link, so I post 
it
 again:
 
 
https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOri
 IQB2Y

The mailing list software keeps inserting that line break.  I 
re-constructed the URL and looked at the document.  As you point out at 
the end, the way you attempt to formulate load balancing as a linear 
objective does not work.  I think load-balancing is a non-linear thing. 

I also doubt that simple load balancing is what cloud providers want; I 
think cloud providers want to bunch up load, within limits, for example to 
keep some hosts idle so that they can be powered down to save on costs or 
left available for future exclusive use. 


 From: Gil Rapaport g...@il.ibm.com 
...
 As Alex Glikson hinted a couple of weekly meetings ago, our approach
 to this is to think of the driver's work as split between two entities: 
 -- A Placement Advisor, that constructs placement problems for 
 scheduling requests (filter-scheduler and policy-based-scheduler) 
 -- A Placement Engine, that solves placement problems (HostManager 
 in get_filtered_hosts() and solver-scheduler with its LP engine). 

Yes, I see the virtue in that separation.  Let me egg it on a little. What 
Alex and KTT want is more structure in the Placement Advisor, where there 
is a multiplicity of plugins, each bound to some fraction of the whole 
system, and a protocol for combining the advice from the plugins.  I would 
also like to remind you of another kind of structure: some of the 
placement desiderata come from the cloud users, and some from the cloud 
provider. 


 From: Yathiraj Udupi (yudupi) yud...@cisco.com
...
 Like you point out, I do agree on the two entities of the placement
 advisor and the placement engine, but I think there should be a
 third one – the provisioning engine, which should be responsible for
 whatever it takes to finally create the instances, after the
 placement decision has been taken.

I'm not sure what you mean by "whatever it takes to finally create the
instances", but that sounds like what I had assumed everybody meant by
"orchestration" (until I heard that there is no widespread agreement) ---
and I think we need to take a properly open approach to that.  I think the 
proper API for cross-service whole-pattern scheduling should primarily 
focus on conveying the placement problem to the thing that will make the 
joint decision.  After the joint decision is made comes the time to create 
the individual resources.  I think we can NOT mandate one particular agent 
or language for that.  We will have to allow general clients to make calls 
on Nova, Cinder, etc. to do the individual resource creations (with some 
sort of reference to the decision that was already made).  My original 
position was that we could use Heat for this, but I think we have gotten 
push-back saying it is NOT OK to *require* that.  For example, note that 
some people do not want to use Heat at all, they prefer to make individual 
calls on Nova, Cinder, etc.  Of course, we definitely want to support, 
among others, the people who *do* use Heat. 


 From: Yathiraj Udupi (yudupi) yud...@cisco.com 
...
 The solver-scheduler is designed to solve for an arbitrary list of
 instances of different flavors. We need to have some updated APIs in
 the scheduler to be able to pass on such requests. The instance group
 API is an initial effort to specify such groups.

I'll remind the other readers of our draft of such a thing, at 


Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-06 Thread Anne Gentle
On Wed, Feb 5, 2014 at 11:38 AM, Jonathan Bryce jbr...@jbryce.com wrote:

 On Feb 5, 2014, at 10:18 AM, Steve Gordon sgor...@redhat.com wrote:

  - Original Message -
  From: Andreas Jaeger a...@suse.com
  To: Mark McLoughlin mar...@redhat.com, OpenStack Development
 Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Cc: Jonathan Bryce jonat...@openstack.org
  Sent: Wednesday, February 5, 2014 9:17:39 AM
  Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
 
  On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
  On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
  Steve Gordon wrote:
  From: Anne Gentle anne.gen...@rackspace.com
  Based on today's Technical Committee meeting and conversations with
 the
  OpenStack board members, I need to change our Conventions for
 service
  names
  at
 
 https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
  .
 
  Previously we have indicated that Ceilometer could be named
 OpenStack
  Telemetry and Heat could be named OpenStack Orchestration. That's
 not
  the
  case, and we need to change those names.
 
  To quote the TC meeting, ceilometer and heat are other modules
 (second
  sentence from 4.1 in
  http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
  distributed with the Core OpenStack Project.
 
  Here's what I intend to change the wiki page to:
  Here's the list of project and module names and their official names
  and
  capitalization:
 
  Ceilometer module
  Cinder: OpenStack Block Storage
  Glance: OpenStack Image Service
  Heat module
  Horizon: OpenStack dashboard
  Keystone: OpenStack Identity Service
  Neutron: OpenStack Networking
  Nova: OpenStack Compute
  Swift: OpenStack Object Storage
 
  Small correction. The TC had not indicated that Ceilometer could be
  named OpenStack Telemetry and Heat could be named OpenStack
  Orchestration. We formally asked[1] the board to allow (or disallow)
  that naming (or more precisely, that use of the trademark).
 
  [1]
 
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
  We haven't got a formal and clear answer from the board on that
 request
  yet. I suspect they are waiting for progress on DefCore before
 deciding.
 
  If you need an answer *now* (and I suspect you do), it might make
 sense
  to ask foundation staff/lawyers about using those OpenStack names with
  the current state of the bylaws and trademark usage rules, rather than
  the hypothetical future state under discussion.
 
  Basically, yes - I think having the Foundation confirm that it's
  appropriate to use OpenStack Telemetry in the docs is the right
 thing.
 
  There's an awful lot of confusion about the subject and, ultimately,
  it's the Foundation staff who are responsible for enforcing (and giving
  advise to people on) the trademark usage rules. I've cc-ed Jonathan so
  he knows about this issue.
 
  But FWIW, the TC's request is asking for Ceilometer and Heat to be
  allowed use their Telemetry and Orchestration names in *all* of the
  circumstances where e.g. Nova is allowed use its Compute name.
 
  Reading again this clause in the bylaws:
 
   The other modules which are part of the OpenStack Project, but
not the Core OpenStack Project may not be identified using the
OpenStack trademark except when distributed with the Core OpenStack
Project.
 
  it could well be said that this case of naming conventions in the docs
  for the entire OpenStack Project falls under the distributed with
 case
  and it is perfectly fine to refer to OpenStack Telemetry in the docs.
  I'd really like to see the Foundation staff give their opinion on this,
  though.

 In this case, we are talking about documentation that is produced and
 distributed with the integrated release to cover the Core OpenStack Project
 and the modules that are distributed together with the Core OpenStack
 Project in the integrated release. This is the intended use case for the
 exception Mark quoted above from the Bylaws, and I think it is perfectly
 fine to refer to the integrated components in the OpenStack release
 documentation as OpenStack components.


  What Steve is asking IMO is whether we have to change OpenStack
  Telemetry to Ceilometer module or whether we can just say Telemetry
  without the OpenStack in front of it,
 
  Andreas
 
  Constraining myself to the topic of what we should be using in the
 documentation, yes this is what I'm asking. This makes more sense to me
 than switching to calling them the Heat module and Ceilometer module
 because:
 
  1) It resolves the issue of using the OpenStack mark where it
 (apparently) shouldn't be used.
  2) It means we're still using the formal name for the program as
 defined by the TC [1] (it is my understanding this remains the purview of
 the TC, it's control of the mark that the board are exercising here).
  3) It is a more minor change/jump and 

[openstack-dev] WSME 0.6 released

2014-02-06 Thread Doug Hellmann
I have just tagged WSME 0.6. It is now on PyPI, and should be picked up
automatically by gate jobs as soon as the mirror updates.

Changes since the 0.5b6 release we have been using:

$ git log --format=oneline 0.5b6..0.6
e26d1b608cc5a05940c0b6b7fc176a0d587ba611 Add 'readonly' parameter to wsattr
9751ccebfa8c3cfbbc6b38e398f35ab557d7747c Fix typos in documents and comments
1d6b3a471b8afb3e96253d539f44506428314049 Merge Support dynamic types
cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
984e9e360be74ff0b403a8548927aa3619ed7098 Support building wheels (PEP-427)
ec7d49f33cb777ecb05d6e6481de41320b37df52 Fix a typo in the types
documentation
f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some classes
for validation
a59576226dd4affde0afdd028f54c423b8786e24 Merge Drop description from 403
flask test case
e5927c8e30714c5e53cf8dc90f97b8f56b6d8cff Merge Fix SyntaxWarning under
Python 3
c63ad8bbfea78957d79bbb4a573cee97e0a8bd66 Merge Remove the duplicated error
message from Enum
db6c337526a6dbba11d260537f1eb95e7cabac4f Use assertRaises() for negative
tests
29547eae59d244adb681b6182b604e7085d8c1a8 Remove the duplicated error
message from Enum
1bf6317a3c7f3e9c7f61776ac269d617cee8f3fe Drop description from 403 flask
test case
0fa306fa4f70180641132d060a346ac0f02dbae3 Fix SyntaxWarning under Python 3
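
For example, the new 'readonly' wsattr parameter and the IntegerType
validation type from the list above can be combined roughly like this (a
quick sketch from memory -- see the WSME docs for the authoritative
signatures and parameter names):

    from wsme import types as wtypes


    class Server(wtypes.Base):
        # Clients cannot set read-only attributes in a request body.
        id = wtypes.wsattr(wtypes.text, readonly=True)
        name = wtypes.wsattr(wtypes.text, mandatory=True)
        # IntegerType allows simple range validation.
        vcpus = wtypes.wsattr(wtypes.IntegerType(minimum=1, maximum=64))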

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] Maintaining support for the Tail-f NCS mech driver in Icehouse

2014-02-06 Thread Luke Gorrie
Howdy!

My name is Luke and I'm helping my friends at Tail-f Systems to
support Neutron with their NCS [1] product. This went really smoothly
for us on the Havana cycle, but lately we're having a harder time with
Icehouse. In particular, our attempt to fulfill the 3rd party testing
requirements has caused a lot of frustration for the #openstack-infra
team around New Year. So I'm writing now to look for a good solution.

Our goal for Icehouse is to keep our mechanism driver officially
supported. The code already works, and has unit tests to keep it
working. The risk we want to avoid is going on the dreaded
deprecated list for some other reason, which would confuse our
customers.

For background, our mechanism driver is about 150 lines of code. It
translates each network/port/subnet API call into a REST/JSON
notification to an external system. That external system returns HTTP
200 OK. That's about all. It's a pretty trivial program.

In December I sat down with Tobbe Tornqvist and we tried to set up
Jenkins 3rd party testing. We created a Vagrantfile that spins up an
Ubuntu VM, installs Neutron and NCS, and performs integration tests
with them connected together. We hooked this into Jenkins with a
service account.

This went fine to start with, but after Christmas our tests started
failing due to random setup issues with 'tox', and the script started
making -1 votes. Those -1's created major headaches for the
infrastructure team and others upstream, I am sorry to say, and ended
up requiring a messy process of manual cleanup, and a public flogging
for us on IRC. Bad feeling all around ...

And that's where we are today.

Now, reviewing the situation, I would love to find a solution that
doesn't make us deprecated _and_ doesn't require us to be so deeply
hooked into the OpenStack Jenkins infrastructure.

Could we, for example, write an accurate emulator for the external
system so that the MD code can be tested on OpenStack's own servers?
That would suit us very well. It seems like a reasonable request to
make given the simplicity of our driver code.
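
To be concrete, the emulator could be as dumb as an HTTP listener that accepts
our JSON notifications and answers 200. A sketch only (host, port and handler
names are just placeholders, not what NCS actually speaks beyond REST/JSON):

    import json
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer


    class FakeNcsHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.getheader('Content-Length', 0))
            body = self.rfile.read(length)
            if body:
                json.loads(body)  # just check the payload parses
            self.send_response(200)
            self.end_headers()

        do_PUT = do_DELETE = do_POST


    if __name__ == '__main__':
        HTTPServer(('127.0.0.1', 8888), FakeNcsHandler).serve_forever()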

Hoping for a simple solution...

Cheers,
-Luke & friends at Tail-f

[1] http://blog.ipspace.net/2013/05/tail-f-network-control-system-first.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Extending] Binding/Restricting subnets to specific hosts more

2014-02-06 Thread Joe Harrison
Hi,

(Scroll down for tl;dr)

Unfortunately due to networking constraints I don't have the leisure
of a large and flat layer two network.

As such, different compute nodes and network nodes will be in separate,
distinct subnets on the same network.

There will be hundreds if not thousands of subnets, and it does not
seem very user-friendly to create a one-to-one mapping between these
subnets and Neutron network objects.

Is there a resilient way to restrict and map subnets to compute nodes
and network nodes (or nodes running Neutron plugin agents) without
having to hack the IP allocation code to bits or extend/modify
the existing code?

Further to this, for auditing and network configuration purposes,
information such as MAC address, IP address and hostname needs to be
forwarded to an external system by means of a proprietary API.

To do this, my idea was to create a separate special agent which
attaches to the messaging server and manages this workflow for us,
hooking in with a few RPC calls here and there and subscribing to the
needed messaging queues and exchanges, while also adding my own API
extension to expose it.
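
In case it helps, the listening side of such an agent can be fairly small --
for example with kombu directly. The exchange, queue and routing key names
below are illustrative only (they depend on your notification driver and
topics), and forward_to_external_system stands in for the proprietary API call:

    from kombu import Connection, Exchange, Queue

    def forward_to_external_system(payload):
        pass  # placeholder for the proprietary API call

    def on_message(body, message):
        # oslo notifications carry an event_type such as subnet.create.end
        if body.get('event_type', '').startswith(('subnet.', 'port.')):
            forward_to_external_system(body)
        message.ack()

    exchange = Exchange('neutron', type='topic')
    queue = Queue('subnet-audit', exchange, routing_key='notifications.info')

    with Connection('amqp://guest:guest@localhost//') as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()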

Does anyone have any advice, pointers or (hopefully) solutions to this
issue beyond what I'm already doing?

tl;dr need to restrict subnets to specific hosts. Also need to manage
an external networking workflow with an API extension and special
agent.

Thanks in advance,
Joe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME 0.6 released

2014-02-06 Thread Sylvain Bauza
Thanks Doug,




2014-02-06 15:54 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:


 cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
 f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some classes
 for validation

 Doug



Do you know when the docs will be updated? [1]
Some complex types can already be found in Ironic/Ceilometer/Climate and I
would love to see if some have been backported to WSME as native types
(like the UUID type or the String one).

-Sylvain

[1] : http://pythonhosted.org//WSME/types.html


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] meeting time updated

2014-02-06 Thread John Dickinson
Historically, the Swift team meetings have been every other week. In order to 
keep better track of things (and hopefully to get more specific attention on 
languishing reviews), we're moving to a weekly meeting schedule.

New meeting time: every Wednesday at 1900UTC in #openstack-meeting

The meeting agenda is tracked at https://wiki.openstack.org/wiki/Meetings/Swift


--John






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] meeting time updated

2014-02-06 Thread Luse, Paul E
What a corporate thing to do :)  Good call though, John.

-Paul

-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Thursday, February 6, 2014 9:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Swift] meeting time updated

Historically, the Swift team meetings have been every other week. In order to 
keep better track of things (and hopefully to get more specific attention on 
languishing reviews), we're moving to a weekly meeting schedule.

New meeting time: every Wednesday at 1900UTC in #openstack-meeting

The meeting agenda is tracked at https://wiki.openstack.org/wiki/Meetings/Swift


--John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-06 Thread Jonathan Bryce
On Feb 6, 2014, at 8:08 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 I'm curious about the level of granularity that's envisioned in each 
 definition. Designated sections could be as broad as keystone.* or as 
 narrow as keystone.token.controllers.Auth.validate_token_head(). It could be 
 defined in terms of executables, package paths, or line numbers.
 
 The definition is likely to change over time (i.e. per stable release). For 
 example, where support for password-based authentication might have been 
 mandated for an essex deployment, a havana deployment has the ability to 
 remove the password auth plugin and replace it with something else.
 
 The definition may also be conditional, and require either A or B. In 
 havana for example, where keystone shipped two stable APIs side by side, I 
 wouldn't expect all deployments to enable both (or even bother to update 
 their paste pipeline from a previous release).


Here’s an example of the real world application in the current implementation 
of the commercial usage agreements (Russell alluded to this earlier):

http://www.openstack.org/brand/powered-by-openstack/
--
PRODUCT REQUIREMENTS. You must meet the following requirements in order to 
qualify for an OpenStack Powered trademark license:

1) A primary purpose of your product must be to run a functioning operational 
instance of the OpenStack software.

2) To ensure compatibility, your product must:

i. include the entirety of the OpenStack Compute (Nova) code from either of the 
latest two releases and associated milestones, but no older, and

ii. expose the associated OpenStack APIs published on http://www.openstack.org 
without modification.

3) As of January 1st, 2012, your product must pass any Faithful Implementation 
Test Suite (FITS) defined by the Technical Committee that will be made 
available on http://www.openstack.org/FITS , to verify that you are 
implementing a sufficiently current and complete version of the software (and 
exposing associated APIs) to ensure compatibility and interoperability. Your 
product will be required to pass the current FITS test on an annual basis, 
which will generally require you to be running either of the latest two 
software releases.
--

The request from the DefCore committee around designated sections would replace 
Section 2(i) in the above example. The external API testing that is being 
developed would fulfill Section 3. You’ll notice that Section 2(i) is not 
granular at all, but does include a version requirement. I think Thierry’s 
proposal around breaking it into two separate steps makes a lot of sense. 
Ultimately, it all has to find its way into a form that can be included into 
the legal agreements these organizations sign.

Jonathan






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.config error on running Devstack

2014-02-06 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 3:01 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Feb 5, 2014 at 1:25 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-05 10:58, Doug Hellmann wrote:




 On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec openst...@nemebean.comwrote:

   On 2014-02-05 09:05, Doug Hellmann wrote:


 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.comwrote:

  On 2014-01-08 12:14, Doug Hellmann wrote:



 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.comwrote:

 On 2014-01-08 11:16, Sean Dague wrote:

 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip

 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.

 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.


 Doing the git thing is definitely not the right thing. But I guess I
 got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.

  The problem seems to be when we pip install -e oslo.config on the
 system, then pip install oslo.sphinx in a venv.  oslo.config is 
 unavailable
 in the venv, apparently because the namespace package for o.s causes the
 egg-link for o.c to be ignored.  Pretty much every other combination I've
 tried (regular pip install of both, or pip install -e of both, regardless
 of where they are) works fine, but there seem to be other issues with all
 of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
 used for gating, and we can't pip install -e oslo.sphinx because it's not 
 a
 runtime dep so it doesn't belong in the gate.  Changing the toplevel
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.

 I think that about covers what I know so far.

  Here's a link dstufft provided to the pip bug tracking this problem:
 https://github.com/pypa/pip/issues/3
 Doug

   This just bit me again trying to run unit tests against a fresh Nova
 tree.  I don't think it's just me either - Matt Riedemann said he has
 been disabling site-packages in tox.ini for local tox runs.  We really need
 to do _something_ about this, even if it's just disabling site-packages by
 default in tox.ini for the affected projects.  A different option would be
 nice, but based on our previous discussion I'm not sure we're going to find
 one.
 Thoughts?

  Is the problem isolated to oslo.sphinx? That is, do we end up with any
 configurations where we have 2 oslo libraries installed in different modes
 (development and regular) where one of those 2 libraries is not
 oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename
 that to move it out of the namespace package.

oslo.sphinx is the only one that has triggered this for me so far.
 I think it's less likely to happen with the others because they tend to be
 runtime dependencies so they get installed in devstack, whereas oslo.sphinx
 doesn't because it's a build dep (AIUI anyway).


  That's pretty much what I expected.

 Can we get a volunteer to work on renaming oslo.sphinx?


   I'm winding down on the parallel testing work so I could look at this
 next.  I don't know exactly what is going to be involved in the rename
 though.

 We also need to decide what we're going to call it.  I haven't come up
 with any suggestions that I'm particularly in love with so far. :-/


 Yeah, I haven't come up with anything good, either.

 oslosphinx?

 openstacksphinx?

 We will need to:

 - rename the git repository -- we have some other renames planned for this
 Friday, so we could possibly take care of that one this week
 - make sure the metadata file for packaging the new library is correct in
 the new repo
 - prepare a release under the new name so it ends up on PyPI
 - update the sphinx conf.py in all consuming projects to use the new name,
 and change their test-requirements.txt to refer to the new name (or finally
 add a doc-requirements.txt for doc jobs)
 - remove oslo.sphinx from pypi so no one uses it accidentally

 Doug



This work has started:
https://review.openstack.org/#/q/I7788a9d6b5984fdfcc4678f2182104d2eb8a2be0,n,z

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] l2-pop bugs review

2014-02-06 Thread Kyle Mestery

On Feb 6, 2014, at 3:09 AM, Édouard Thuleau thul...@gmail.com wrote:

 Hi all,
 
 Just to point out 2 reviews, [1] & [2], which I submitted to correct the
 l2-pop mechanism driver in the ML2 plugin.
 I had some reviews and +1s but they don't progress anymore.
 Could you check them?
 I would also like to backport them to the stable Havana branch.
 
 [1] https://review.openstack.org/#/c/63917/
 [2] https://review.openstack.org/#/c/63913/
 
Hi Edouard:

I’ll take a look at these later today, thanks for bringing them to
my attention!

Kyle

 Thanks,
 Édouard.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-06 Thread Stefano Maffulli
Hello folks,

we're ready to import the answers from Launchpad into Ask OpenStack. A
script will import all questions, answers, comments (and data about user
accounts) from LP into Ask, tagging them with the project of origin (nova,
swift, etc). You can see the results of the test runs at
http://ask-staging.openstack.org/en/questions/
For example, the questions migrated from LP Answers Swift are
http://ask-staging.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/

We'll also try to sync accounts already existing on Ask with those
imported from LP, matching on usernames, OpenID and email addresses as
exposed by the LP API. If there is no match, a new account will be created.

I'm writing to you to make sure that you're aware of this effort and to
ask you if you are really, adamantly against closing LP Answers. In case
you are against, I'll try to convince you otherwise :)

You can see the history of the effort and its current status on

https://bugs.launchpad.net/openstack-community/+bug/1212089

Next step is to set a date to run the import. The process will be:

 1 - run the import script
 2 - put Ask down for maintenance
 3 - import data into Ask
 4 - check that it ran correctly
 5 - close all LP Answers, reconfigure LP projects to redirect to Ask

I think we can run this process one project at a time so we minimize
interruptions. If the PTLs authorize me, I think I have the necessary
permissions to edit LP Answers and remove the archives from public view once
the data is replicated correctly on Ask, so you can focus on coding.

Let me know what you think about closing LP Answers, using Ask exclusively
to handle support requests, and delegating the closing of LP Answers
for your projects to me.

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-06 Thread John Dickinson
Sounds like a good plan. My only concern with the import is that the users are 
matched up, and it looks like that's being handled. The only reason I've wanted 
to keep LP Answers open is to not lose that content, and this takes care of 
that. Thanks for doing it, and lgtm.

--John



On Feb 6, 2014, at 9:07 AM, Stefano Maffulli stef...@openstack.org wrote:

 Hello folks,
 
 we're ready to import the answers from Launchpad into Ask OpenStack. A
 script will import all questions, answers, comments (and data abou user
 accounts) from LP into Ask, tag them as the project of origin (nova,
 swift, etc). You can see the results of the test runs on
 http://ask-staging.openstack.org/en/questions/
 For example, the questions migrated from LP Answers Swift are
 http://ask-staging.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/
 
 We'll try also to sync accounts already existing on Ask with those
 imported from LP, matching on usernames, OpenID and email addresses as
 exposed by LP API. If there is no match, a new account will be created.
 
 I'm writing to you to make sure that you're aware of this effort and to
 ask you if you are really, adamantly against closing LP Answers. In case
 you are against, I'll try to convince you otherwise :)
 
 You can see the history of the effort and its current status on
 
 https://bugs.launchpad.net/openstack-community/+bug/1212089
 
 Next step is to set a date to run the import. The process will be:
 
 1 - run the import script
 2 - put Ask down for maintenance
 3 - import data into Ask
 4 - check that it run correctly
 5 - close all LP Answers, reconfigure LP projects to redirect to Ask
 
 I think we can run this process one project at the time so we minimize
 interruptions. If the PTLs authorize me I think I have the necessary
 permissions to edit LP Answers, remove the archives from the public once
 the data is replicated correctly on Ask, so you can focus on coding.
 
 Let me know what you think about closing LP Answers, use Ask exclusively
 to handle support requests and about delegating to me closing LP Answers
 for your projects.
 
 Cheers,
 stef
 
 -- 
 Ask and answer questions on https://ask.openstack.org



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-06 Thread Steven Dake

On 02/06/2014 10:07 AM, Stefano Maffulli wrote:

Hello folks,

we're ready to import the answers from Launchpad into Ask OpenStack. A
script will import all questions, answers, comments (and data abou user
accounts) from LP into Ask, tag them as the project of origin (nova,
swift, etc). You can see the results of the test runs on
http://ask-staging.openstack.org/en/questions/
For example, the questions migrated from LP Answers Swift are
http://ask-staging.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/

We'll try also to sync accounts already existing on Ask with those
imported from LP, matching on usernames, OpenID and email addresses as
exposed by LP API. If there is no match, a new account will be created.

I'm writing to you to make sure that you're aware of this effort and to
ask you if you are really, adamantly against closing LP Answers. In case
you are against, I'll try to convince you otherwise :)

You can see the history of the effort and its current status on

https://bugs.launchpad.net/openstack-community/+bug/1212089

Next step is to set a date to run the import. The process will be:

  1 - run the import script
  2 - put Ask down for maintenance
  3 - import data into Ask
  4 - check that it run correctly
  5 - close all LP Answers, reconfigure LP projects to redirect to Ask

I think we can run this process one project at the time so we minimize
interruptions. If the PTLs authorize me I think I have the necessary
permissions to edit LP Answers, remove the archives from the public once
the data is replicated correctly on Ask, so you can focus on coding.

Let me know what you think about closing LP Answers, use Ask exclusively
to handle support requests and about delegating to me closing LP Answers
for your projects.

Cheers,
stef


Stefano,

It really would be a huge load off the core Heat team's shoulders to move
to one system, and Ask rocks.  Maintaining a presence in both places is
too much work. :)


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Replication Contract Verbiage

2014-02-06 Thread Craig Vyvial
Daniel,

Couple questions.

So what happens if/when the volume is different on the nodes in the
replication cluster? If you need to resize the volume to be larger to handle more
data, are you required to resize all the nodes individually? It makes sense
that maybe all the instances could have a different flavor if it's not the
master in the cluster/grouping.

So is there a status of the replication set? Whether it's healthy? Or is that
just managed by the individual instances?
Because what would you expect to see if the first instance you create is
the master and the second is the slave, and for whatever reason the slave
never comes online or connects up to the master?

Is the writable flag completely optional for creating the metadata on an
instance? Would that mean that there is a default per datastore or overall?
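
Just to make sure I'm reading the proposal right, I'm picturing the instance
details growing something roughly like this (field names from the wiki,
everything else made up purely for illustration):

--
# Illustrative only -- not a settled schema.
instance_details = {
    "instance": {
        "id": "inst-B",                    # a read-only slave
        "replicates_from": ["inst-A"],     # its master
        "replicates_to": [],               # no slaves of its own
        "writable": False,
    }
}
--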

Thanks for putting all this together. Great work man.

- Craig Vyvial



On Wed, Feb 5, 2014 at 4:38 PM, Daniel Salinas imsplit...@gmail.com wrote:


 https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API#REPLICATION

 I have updated the wiki page to reflect the current proposal for
 replication verbiage with some explanation of the choices.  I would like to
 open discussion here regarding that verbiage.  Without completely
 duplicating everything I just wrote in the wiki here are the proposed words
 that could be used to describe replication between two datastore instances
 of the same type.  Please take a moment to consider them and let me know
 what you think.  I welcome all feedback.

 replicates_from:  This term will be used in an instance that is a slave of
 another instance. It is a clear indicator that it is a slave of another
 instance.

 replicates_to: This term will be used in an instance that has slaves of
 itself. It is a clear indicator that it is a master of one or more
 instances.

 writable: This term will be used in an instance to indicate whether it is
 intended to be used for writes. As replication is used commonly to scale
 read operations it is very common to have a read-only slave in many
 datastore types. It is beneficial to the user to be able to see this
 information when viewing the instance details via the api.

 The intention here is to:
 1.  have a clearly defined replication contract between instances.
 2.  allow users to create a topology map simply by querying the api for
 details of instances linked in the replication contracts
 3.  allow the greatest level of flexibility for users when replicating
 their data so that Trove doesn't prescribe how they should make use of
 replication.

 I also think there is value in documenting common replication topologies
 per datastore type with example replication contracts and/or steps to
 recreate them in our api documentation.  There are currently no examples of
 this yet

 e.g. To create multi-master replication in mysql...

 As previously stated I welcome all feedback and would love input.

 Regards,

 Daniel Salinas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress [for appliances]

2014-02-06 Thread Steven Dake

On 02/06/2014 02:19 AM, Clint Byrum wrote:

Excerpts from Mike Spreitzer's message of 2014-02-05 22:17:50 -0800:

From: Prasad Vellanki prasad.vella...@oneconvergence.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 01/21/2014 02:16 AM
Subject: Re: [openstack-dev] [heat] Sofware Config progress

Steve  Clint

That should work. We will look at implementing a resource that spins
up a shortlived VM for bootstrapping a service VM and informing
configuration server for further configuration.

thanks
prasadv

On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake sd...@redhat.com wrote:
On 01/14/2014 09:27 PM, Clint Byrum wrote:
Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
Steve

I did not mean to have custom solution at all. In fact that would be
terrible.  I think Heat model of software config and deployment is

really

good. That allows configurators such as Chef, Puppet, Salt or Ansible to

be

plugged into it and all users need to write are modules for those.

What I was  thinking is if there is a way to use software

config/deployment

   to do initial configuration of the appliance by using agentless system
such  as Ansible or Salt, thus requiring no cfminit. I am not sure this
will work either, since it might require ssh keys to be installed for
getting ssh to work without password prompting. But I do see that

ansible

and salt support username/password option.
If this would not work, I agree that the best option is to make them
support cfminit...
Ansible is not agent-less. It just makes use of an extremely flexible
agent: sshd. :) AFAIK, salt does use an agent though maybe they've added
SSH support.

Anyway, the point is, Heat's engine should not be reaching into your
machines. It talks to API's, but that is about it.

What you really want is just a VM that spins up and does the work for
you and then goes away once it is done.
Good thinking.  This model might work well without introducing the
"groan, another daemon" problems pointed out elsewhere in this thread
that were snipped.  Then the modules could simply be heat
templates available to the Heat engine to do the custom config setup.

The custom config setup might still be a problem with the original
constraints (not modifying images to inject SSH keys).

That model wfm.

Regards
-steve


(1) What destroys the short-lived VM if the heat engine crashes between
creating and destroying that short-lived VM?


The heat-engine that takes over the stack. Same as the answer for what
happens when a stack is half-created and heat-engine dies.


(2) What if something goes wrong and the heat engine never gets the signal
it is waiting for?


Timeouts already cause failed state or rollback.


(3) This still has the problem that something needs to be configured
some(client-ish)where to support the client authorization solution
(usually username/password).


The usual answer is that's cloud-init's job but we're discussing
working around not having cloud-init, so I suspect it has to be built
into the image (which, btw, is a really really bad idea). Another option
is that these weird proprietary systems might reach out to an auth
service which the short-lived VM would also be able to contact given
appropriate credentials for said auth service fed in via parameters.


(4) Given that everybody seems sanguine about solving the client
authorization problem, what is wrong with code in the heat engine opening
and using a connection to code in an appliance?  Steve, what do you mean
by reaching into your machines that is critically different from calling
their APIs?


We can, and should, poke holes from heat-engine, out through a firewall,
so it can connect to all of the endpoints. However, if we start letting
it talk to all the managed machines, it becomes a really handy DoS tool
and also spends a ton of time talking to things that we have no control
over, thus taking up resources to an unknown degree.

Heat-engine is precious, it has access to a database with a ton of really
sensitive information. It is also expensive when heat-engine dies (until
we can make all tasks distributed) as it may force failure states. So
I think we need to be very careful about what we let it do.

Just to expand on this, modeling scalability (not that we are doing 
this, but I expect it will happen in the future) is difficult when one 
heat engine could be totally bogged down by a bunch of ssh connections 
while other heat-engines are less busy.


From a security attack vector standpoint, I really don't think it makes 
sense to open connections to untrusted virtual machines from a service 
trusted by the openstack RPC infrastructure.  I don't know for certain 
this model could be attacked, but it does create new attack vectors 
which could potentially crater an entire operator's environment and I'd 
prefer not to play with that fire.


Regards
-steve


(5) Are we really talking about the same kind of software 

Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
Hello.


On Tue, Feb 4, 2014 at 5:38 PM, victor stinner
victor.stin...@enovance.com wrote:

 I would like to replace eventlet with asyncio in OpenStack for the
 asynchronous programming. The new asyncio module has a better design and is
 less magical. It is now part of python 3.4 arguably becoming the de-facto
 standard for asynchronous programming in Python world.


I think that before doing this big move to yet another asynchronous
framework we should ask the main question: Do we need it? Why do we
actually need async framework inside our code?
There most likely is some historical reason why (almost) every OpenStack
project runs every its process with eventlet hub, but I think we should
reconsider this now when it's clear that we can't go forward with eventlet
(because of py3k mostly) and we're going to put considerable amount of
resources into switching to another async framework.

Let's take Nova for example.

There are two kinds of processes there: nova-api and others.

- nova-api process forks to a number of workers listening on one socket and
running a single greenthread for each incoming request;
- other services (workers) constantly poll some queue and spawn a
greenthread for each incoming request.

Both kinds do basically the same job: receive a request, run a handler in a
greenthread. Sounds very much like a job for some application server that
does just that and does it well.
If we remove all dependencies on eventlet or any other async framework,
we would not only be able to write Python code without the need to keep in mind
that we're running in some reactor (that's why eventlet was chosen over
Twisted IIRC), but we could also forget about all these frameworks altogether.

I suggest an approach like this:
- for API services use a dead-simple threaded WSGI server (we have one in the
stdlib by the way - in wsgiref; see the sketch below);
- for workers use simple threading-based oslo.messaging loop (it's on its
way).
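
Minimal sketch of the API-service half (stdlib only; Python 3 module names,
s/socketserver/SocketServer/ on 2.x):

--
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """wsgiref's plain WSGIServer, but with a thread per request."""
    daemon_threads = True

def app(environ, start_response):
    # Stand-in for a real API handler: do some DB work, drop a message
    # on the queue, return a response.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

if __name__ == '__main__':
    make_server('0.0.0.0', 8080, app,
                server_class=ThreadingWSGIServer).serve_forever()
--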

Of course, it won't be production-ready. A dumb threaded approach won't scale,
but we don't have to write our own scaling here. There are other tools
around to do this: Apache httpd, Gunicorn, uWSGI, etc. And they will work
better in a production environment than any code we write because they are
proven with time and at huge scale.

So once we want to go to production, we can deploy things this way for
example:
- API services can be deployed within Apache server or any other HTTP
server with WSGI backend (Keystone already can be deployed within Apache);
- workers can be deployed in any non-HTTP application server, uWSGI is a
great example of one that can work in this mode.

With this approach we can leave the burden of process management, load
balancing, etc. to the services that are really good at it.

What do you think about this?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RFC: Generate API sample files from API schemas

2014-02-06 Thread Vishvananda Ishaya

On Feb 6, 2014, at 5:38 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

 
 Hi,
 
 I'd like to propose one idea that autogenerates API sample files from API
 schema for Nova v3 API.
 
 We are working on API validation for v3 API, the works require API schema
 which is defined with JSONSchema for each API. On the other hand, API sample
 files of v3 API are autogenerated from the template files of v3 API under
 nova/tests/integrated/v3/api_samples, as api_samples's README.rst[1].
 The API schema files are similar to the template files, because both represent
 the API parameter structures and each API name.
 
 For example, the template file of keypairs is
 --
  {
  keypair: {
  name: %(keypair_name)s
  }
  }
 --
 
 and the API schema file is
 --
  create = {
  'type': 'object',
  'properties': {
  'keypair': {
  'type': 'object',
  'properties': {
  'name': {
  'type': 'string', 'minLength': 1, 'maxLength': 255,
  'pattern': '^[a-zA-Z0-9 _-]+$'
  },
  'public_key': {'type': 'string'},
  },
  'required': ['name'],
  'additionalProperties': False,
  },
  },
  'required': ['keypair'],
  'additionalProperties': False,
  }
 --
 
 When implementing new v3 API, we need to write/review both files and that
 would be hard works. For reducing the workload, I'd like to propose one
 idea[2] that autogenerates API sample files from API schema instead of
 template files. We would not need to write a template file of a request.

+1

The template files were there because we didn’t have a clear schema defined.

It would be awesome to get rid of the templates.
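
Something along these lines ought to be enough to build a request sample
straight from the schema (rough sketch only, not the code proposed in [2];
sample_from_schema is a made-up helper name):

--
def sample_from_schema(schema, name_hint='value'):
    """Turn a JSONSchema fragment into an api_samples-style template dict."""
    stype = schema.get('type')
    if stype == 'object':
        return dict((key, sample_from_schema(sub, key))
                    for key, sub in schema.get('properties', {}).items())
    if stype == 'string':
        return '%%(%s)s' % name_hint   # e.g. "%(name)s", as in the templates
    if stype == 'integer':
        return 1
    if stype == 'boolean':
        return True
    return None
--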

Vish

 
 The XML support is dropped from Nova v3 API, and the decision could make
 this implementation easier. The NOTE is that we still need response template
 files even if implementing this idea, because API schema files of response
 don't exist.
 
 Any comments are welcome.
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ---
 [1]: 
 https://github.com/openstack/nova/blob/master/nova/tests/integrated/api_samples/README.rst
 [2]: https://review.openstack.org/#/c/71465/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
Has there been any investigation into heat.

Heat has already used parts of the coroutine approach (for better or
worse).

An example: 
https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py#L230


Decorator for a task that needs to drive a subtask.

This is essentially a replacement for the Python 3-only yield from
keyword (PEP 380), using the yield keyword that is supported in
Python 2. For example::



I bet trollius would somewhat easily replace a big piece of that code.
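
For instance, that yield-based subtask driving maps fairly directly onto
trollius' backported coroutine syntax (rough sketch using trollius'
From/Return helpers):

--
import trollius
from trollius import From, Return

@trollius.coroutine
def subtask():
    yield From(trollius.sleep(1))   # stand-in for real work
    raise Return('done')

@trollius.coroutine
def task():
    # The Python 2 equivalent of "yield from subtask()".
    result = yield From(subtask())
    raise Return(result)

loop = trollius.get_event_loop()
print(loop.run_until_complete(task()))
loop.close()
--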

-Josh

-Original Message-
From: victor stinner victor.stin...@enovance.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 1:55 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
with asyncio

Sean Dague wrote:
 First, very cool!

Thanks.

 This is very promising work. It might be really interesting to figure
 out if there was a smaller project inside of OpenStack that could be
 test ported over to this (even as a stackforge project), and something
 we could run in the gate.

Oslo Messaging is a small project, but it's more of a library. For a full
daemon, my colleague Mehdi Abaakouk has a proof-of-concept for Ceilometer
replacing eventlet with asyncio. Mehdi told me that he doesn't like to
debug eventlet race conditions :-)

 Our experience is the OpenStack CI system catches bugs in libraries and
 underlying components that no one else catches, and definitely getting
 something running workloads hard on this might be helpful in maturing
 Trollius. Basically coevolve it with a piece of OpenStack to know that
 it can actually work on OpenStack and be a viable path forward.

Replacing eventlet with asyncio is a huge change. I don't want to force
users to use it right now, nor to do the change in one huge commit. The
change will be done step by step and, when possible, it will be optional. For
example, in Oslo Messaging, you can choose the executor: eventlet or
blocking (and I want to add asyncio).

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
It's a good question. I see openstack as mostly falling into the following 2 groups of
applications.

Group 1:

API entrypoints using [apache/nginx]+wsgi (nova-api, glance-api…)

In this group we can just let the underlying framework/app deal with the 
scaling and just use native wsgi as it was intended. Scale more [apache/nginx] 
if u need more requests per second. For any kind of long-term work these apps
should be dropping all work to be done on an MQ and letting someone pick that
work up to be finished at some future time.

Group 2:

Workers that pick things up off MQ. In this area we are allowed to be a little 
more different and change as we want, but it seems like the simple approach we 
have been doing is the daemon model (forking N child worker processes). We've 
also added eventlet in these children (so it becomes more like NxM where M is 
the number of greenthreads). For the use cases where workers are used, has it been
beneficial to add those M greenthreads? If we just scaled out more N
(processes), how bad would it be? (I don't have the answers here actually, but
it does make you wonder why we couldn't just eliminate eventlet/asyncio 
altogether and just use more N processes).

-Josh

From: Yuriy Taraday yorik@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 10:06 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet with 
asyncio

Hello.


On Tue, Feb 4, 2014 at 5:38 PM, victor stinner 
victor.stin...@enovance.com wrote:
I would like to replace eventlet with asyncio in OpenStack for the asynchronous 
programming. The new asyncio module has a better design and is less magical. 
It is now part of python 3.4 arguably becoming the de-facto standard for 
asynchronous programming in Python world.

I think that before doing this big move to yet another asynchronous framework 
we should ask the main question: Do we need it? Why do we actually need async 
framework inside our code?
There most likely is some historical reason why (almost) every OpenStack 
project runs every its process with eventlet hub, but I think we should 
reconsider this now when it's clear that we can't go forward with eventlet 
(because of py3k mostly) and we're going to put considerable amount of 
resources into switching to another async framework.

Let's take Nova for example.

There are two kinds of processes there: nova-api and others.

- nova-api process forks to a number of workers listening on one socket and 
running a single greenthread for each incoming request;
- other services (workers) constantly poll some queue and spawn a greenthread 
for each incoming request.

Both kinds to basically the same job: receive a request, run a handler in a 
greenthread. Sounds very much like a job for some application server that does 
just that and does it good.
If we remove all dependencies from eventlet or any other async framework, we 
would not only be able to write Python code without need to keep in mind that 
we're running in some reactor (that's why eventlet was chosen over Twisted 
IIRC), but we can also forget about all these frameworks altogether.

I suggest approach like this:
- for API services use dead-simple threaded WSGI server (we have one in the 
stdlib by the way - in wsgiref);
- for workers use simple threading-based oslo.messaging loop (it's on its way).

Of course, it won't be production-ready. Dumb threaded approach won't scale but 
we don't have to write our own scaling here. There are other tools around to do 
this: Apache httpd, Gunicorn, uWSGI, etc. And they will work better in 
production environment than any code we write because they are proven with time 
and on huge scales.

So once we want to go to production, we can deploy things this way for example:
- API services can be deployed within Apache server or any other HTTP server 
with WSGI backend (Keystone already can be deployed within Apache);
- workers can be deployed in any non-HTTP application server, uWSGI is a great 
example of one that can work in this mode.

With this approach we can leave the burden of process management, load 
balancing, etc. to the services that are really good at it.

What do you think about this?

--

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] bp: glance-snapshot-tasks

2014-02-06 Thread Joshua Harlow
Hi alex,

I think u are referring to the following: 
https://blueprints.launchpad.net/nova/+spec/glance-snapshot-tasks

Can u describe the #2 part in more detail. Do some of the drivers already 
implement these new steps?

The goal I think u are having is to make the snapshot functionality resume
better and clean up better, right? In the end this will even allow for resuming
if nova-compute (the process that does the snapshot) crashes/is restarted… Just 
wanted to make sure people understand the larger goal here (without having to 
read the whole blueprint, which might be wordy, haha).

-Josh

From: Alexander Gorodnev gorod...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 1:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] bp: glance-snapshot-tasks

Hi,

A blueprint was created and Joshua even wrote quite a lot of text for it. Right now this
BP is in the Drafting stage, so I want to bring this BP to life and continue working
on the topic. I even tried to make some changes without approval (just as an
experiment) and got negative feedback.
These are the steps I took when I tried to implement this BP:

1) Moved snapshot functionality from Compute to Conductor (as I understood it's
the best place for such things; needs clarification);
Even this step should be done in two parts:
a) Add a snapshot_instance() method to Conductor that just calls the same method
from Compute;
b) After that, move all error-handling / state transition / etc. logic from
Compute to Conductor. Compute exposes an API for drivers (see step 2);

2) The hardest part is a common, convenient, complete API for drivers. Most
drivers do almost the same things in the snapshot method:
a) Goes to Glance and registers a new image there;
b) Makes the snapshot;
c) Uploads the image to Glance;
d) Cleans up temporary files;

I would really appreciate any thoughts and questions.

Thanks,
Alexander
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] delayed delete and user credentials

2014-02-06 Thread Pete Zaitcev
Hi, guys:

I looked briefly at a bug/fix, which looks exceedingly strange to me:
 https://review.openstack.org/59689

As far as I can tell, the problem (lp:1238604) is that pending delete
fails because, by the time the delete actually occurs, Glance API does
not have proper permissions to talk to Glance Registry.

So far so good, but the solution that we accepted is to forward
the user credentials to Registry... but only if configured to do so.
Does it make any sense to anyone? Why configure something that must
always work? How can sysadmin select the correct value?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
On Thu, Feb 6, 2014 at 10:34 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Its a good question, I see openstack as mostly like the following 2
 groups of applications.

  Group 1:

  API entrypoints using [apache/nginx]+wsgi (nova-api, glance-api…)

  In this group we can just let the underlying framework/app deal with the
 scaling and just use native wsgi as it was intended. Scale more
 [apache/nginx] if u need more requests per second. For any kind of long
 term work these apps should be dropping all work to be done on a MQ and
 letting someone pick that work up to be finished in some future time.


They should and from what I see they do. API services either provide some
work to workers or do some DB work, nothing more.


 Group 2:

  Workers that pick things up off MQ. In this area we are allowed to be a
 little more different and change as we want, but it seems like the simple
 approach we have been doing is the daemon model (forking N child worker
 processes). We've also added eventlet in these children (so it becomes more
 like NxM where M is the number of greenthreads). For the usages where
 workers are used has it been beneficial to add those M greenthreads? If we
 just scaled out more N (processes) how bad would it be? (I don't have the
 answers here actually, but it does make you wonder why we couldn't just
 eliminate eventlet/asyncio altogether and just use more N processes).


If you really want greenthreads within your worker processes, you can use a
greenable server for it. For example, Gunicorn can work with eventlet, and
uWSGI has its uGreen. Btw, you don't have to import eventlet every time you
need to spawn a thread or sleep a bit - you can just monkey-patch the world
(like almost everybody using eventlet in OpenStack does) if and when you
actually need it.
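
I.e. nothing more than:

--
import eventlet
eventlet.monkey_patch()   # socket, threading, time, ... become green from here on

import threading           # now backed by greenthreads
--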

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-06 Thread Russell Bryant
On 02/06/2014 12:07 PM, Stefano Maffulli wrote:
 Hello folks,
 
 we're ready to import the answers from Launchpad into Ask OpenStack. A
 script will import all questions, answers, comments (and data abou user
 accounts) from LP into Ask, tag them as the project of origin (nova,
 swift, etc). You can see the results of the test runs on
 http://ask-staging.openstack.org/en/questions/
 For example, the questions migrated from LP Answers Swift are
 http://ask-staging.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/
 
 We'll try also to sync accounts already existing on Ask with those
 imported from LP, matching on usernames, OpenID and email addresses as
 exposed by LP API. If there is no match, a new account will be created.
 
 I'm writing to you to make sure that you're aware of this effort and to
 ask you if you are really, adamantly against closing LP Answers. In case
 you are against, I'll try to convince you otherwise :)
 
 You can see the history of the effort and its current status on
 
 https://bugs.launchpad.net/openstack-community/+bug/1212089
 
 Next step is to set a date to run the import. The process will be:
 
  1 - run the import script
  2 - put Ask down for maintenance
  3 - import data into Ask
  4 - check that it run correctly
  5 - close all LP Answers, reconfigure LP projects to redirect to Ask
 
 I think we can run this process one project at the time so we minimize
 interruptions. If the PTLs authorize me I think I have the necessary
 permissions to edit LP Answers, remove the archives from the public once
 the data is replicated correctly on Ask, so you can focus on coding.
 
 Let me know what you think about closing LP Answers, use Ask exclusively
 to handle support requests and about delegating to me closing LP Answers
 for your projects.

All sounds good to me!  Thanks for doing this!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME 0.6 released

2014-02-06 Thread Renat Akhmerov
I guess it should be but just in case…

Renat Akhmerov
@ Mirantis Inc.

On 06 Feb 2014, at 07:58, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 
 
 
 On Thu, Feb 6, 2014 at 10:11 AM, Sylvain Bauza sylvain.ba...@gmail.com 
 wrote:
 Thanks Doug,
 
 
 
 
 2014-02-06 15:54 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:
 
 cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
 f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some classes for 
 validation
 
 Doug
 
 
 
 Do you know when the docs will be updated ? [1]
 Some complex types can already be found on Ironic/Ceilometer/Climate and I 
 would love to see if some have been backported to WSME as native types (like 
 the UUID type or the String one)
 
 I'll look into why the doc build is broken. The gerrit merge should have 
 triggered an update on http://wsme.readthedocs.org/en/latest/
 
 Doug
 
  
 
 -Sylvain
 
 [1] : http://pythonhosted.org//WSME/types.html 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME 0.6 released

2014-02-06 Thread Renat Akhmerov
Doug, is it backwards compatible with 0.5b6?

Renat Akhmerov
@ Mirantis Inc.

On 06 Feb 2014, at 07:58, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 
 
 
 On Thu, Feb 6, 2014 at 10:11 AM, Sylvain Bauza sylvain.ba...@gmail.com 
 wrote:
 Thanks Doug,
 
 
 
 
 2014-02-06 15:54 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:
 
 cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
 f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some classes for 
 validation
 
 Doug
 
 
 
 Do you know when the docs will be updated ? [1]
 Some complex types can already be found on Ironic/Ceilometer/Climate and I 
 would love to see if some have been backported to WSME as native types (like 
 the UUID type or the String one)
 
 I'll look into why the doc build is broken. The gerrit merge should have 
 triggered an update on http://wsme.readthedocs.org/en/latest/
 
 Doug
 
  
 
 -Sylvain
 
 [1] : http://pythonhosted.org//WSME/types.html 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Kevin Conway
There's an incredibly valid reason why we use green thread abstractions
like eventlet and gevent in Python. The CPython implementation is
inherently single threaded so we need some other form of concurrency to get
the most effective use out of our code. You can import threading all you
want but it won't work the way you expect it to. If you are considering
doing anything threading related in Python then
http://www.youtube.com/watch?v=Obt-vMVdM8s is absolutely required watching.

Green threads give us a powerful way to manage concurrency where it counts:
I/O. Everything in openstack is waiting on something else in openstack.
That is our natural state of being. If your plan for increasing the number
of concurrent requests is to fork more processes, then you're in for a rude
awakening when your hosts start kernel panicking from a lack of memory.
With green threads, on the other hand, we maintain the use of one process,
one thread but are able to manage multiple, concurrent network operations.
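
A toy example of what that buys us (eventlet's GreenPool; sleep() stands in
for the network wait):

--
import eventlet

def fetch(url):
    eventlet.sleep(1)      # pretend this is a blocking network call
    return url

pool = eventlet.GreenPool(size=100)
urls = ['http://example.com/%d' % i for i in range(100)]
# 100 concurrent waits in one process and one OS thread, ~1 second total.
for result in pool.imap(fetch, urls):
    print(result)
--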

In the case of API nodes: yes, they should (at most) do some db work and
drop a message on the queue. That means they almost exclusively deal with
I/O. Expecting your wsgi server to scale that up for you is wrong and, in
fact, the reason we have eventlet in the first place.

What's more, this conversation has turned from let's use asyncio to
let's make eventlet work with asyncio. If the aim is to convert eventlet
to use the asyncio interface then this seems like a great idea so long as
it takes place within the eventlet project and not openstack. I don't see
the benefit of shimming in asyncio and a fork/backport of asyncio into any
of our code bases if the point is to integrate it into a third party module.

On Thu, Feb 6, 2014 at 12:34 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Its a good question, I see openstack as mostly like the following 2
 groups of applications.

  Group 1:

  API entrypoints using [apache/nginx]+wsgi (nova-api, glance-api…)

  In this group we can just let the underlying framework/app deal with the
 scaling and just use native wsgi as it was intended. Scale more
 [apache/nginx] if u need more requests per second. For any kind of long
 term work these apps should be dropping all work to be done on a MQ and
 letting someone pick that work up to be finished in some future time.

  Group 2:

  Workers that pick things up off MQ. In this area we are allowed to be a
 little more different and change as we want, but it seems like the simple
 approach we have been doing is the daemon model (forking N child worker
 processes). We've also added eventlet in these children (so it becomes more
 like NxM where M is the number of greenthreads). For the usages where
 workers are used has it been beneficial to add those M greenthreads? If we
 just scaled out more N (processes) how bad would it be? (I don't have the
 answers here actually, but it does make you wonder why we couldn't just
 eliminate eventlet/asyncio altogether and just use more N processes).

  -Josh

   From: Yuriy Taraday yorik@gmail.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, February 6, 2014 at 10:06 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
 with asyncio

   Hello.


 On Tue, Feb 4, 2014 at 5:38 PM, victor stinner 
 victor.stin...@enovance.com wrote:

 I would like to replace eventlet with asyncio in OpenStack for the
 asynchronous programming. The new asyncio module has a better design and is
 less magical. It is now part of python 3.4 arguably becoming the de-facto
 standard for asynchronous programming in Python world.


  I think that before doing this big move to yet another asynchronous
 framework we should ask the main question: Do we need it? Why do we
 actually need async framework inside our code?
 There most likely is some historical reason why (almost) every OpenStack
 project runs every its process with eventlet hub, but I think we should
 reconsider this now when it's clear that we can't go forward with eventlet
 (because of py3k mostly) and we're going to put considerable amount of
 resources into switching to another async framework.

  Let's take Nova for example.

  There are two kinds of processes there: nova-api and others.

  - nova-api process forks to a number of workers listening on one socket
 and running a single greenthread for each incoming request;
 - other services (workers) constantly poll some queue and spawn a
 greenthread for each incoming request.

  Both kinds to basically the same job: receive a request, run a handler
 in a greenthread. Sounds very much like a job for some application server
 that does just that and does it good.
 If we remove all dependencies from eventlet or any other async framework,
 we would not only be able to write Python code without need to keep in 

Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Clint Byrum
All due respect to Zane who created the scheduler. We simply could not
do what we do without it (and I think one of the first things I asked
for was parallel create ;).

IMO it is the single most confusing thing in Heat whenever one has to
deal with it. If we could stick to a threading model instead, I would
much prefer that.

Excerpts from Joshua Harlow's message of 2014-02-06 10:22:24 -0800:
 Has there been any investigation into heat.
 
 Heat has already used parts of the coroutine approach (for better or
 worse).
 
 An example: 
 https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py#L230
 
 
 Decorator for a task that needs to drive a subtask.
 
 This is essentially a replacement for the Python 3-only yield from
 keyword (PEP 380), using the yield keyword that is supported in
 Python 2. For example::
 
 
 
 I bet trollius would somewhat easily replace a big piece of that code.
 
 -Josh
 
 -Original Message-
 From: victor stinner victor.stin...@enovance.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, February 6, 2014 at 1:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Asynchrounous programming: replace eventlet
 with asyncio
 
 Sean Dague wrote:
  First, very cool!
 
 Thanks.
 
  This is very promising work. It might be really interesting to figure
  out if there was a smaller project inside of OpenStack that could be
  test ported over to this (even as a stackforge project), and something
  we could run in the gate.
 
 Oslo Messaging is a small project, but it's more a library. For a full
 daemon, my colleague Mehdi Abaakouk has a proof-on-concept for Ceilometer
 replacing eventlet with asyncio. Mehdi told me that he doesn't like to
 debug eventlet race conditions :-)
 
  Our experience is the OpenStack CI system catches bugs in libraries and
  underlying components that no one else catches, and definitely getting
  something running workloads hard on this might be helpful in maturing
  Trollius. Basically coevolve it with a piece of OpenStack to know that
  it can actually work on OpenStack and be a viable path forward.
 
 Replacing eventlet with asyncio is a huge change. I don't want to force
 users to use it right now, nor to do the change in one huge commit. The
 change will be done step by step, and when possible, optional. For
 example, in Olso Messaging, you can choose the executor: eventlet or
 blocking (and I want to add asyncio).
 
 Victor
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Centralized policy rules and quotas

2014-02-06 Thread Raildo Mascena
Hello,

Currently, there is a blueprint for creating a new Domain Quota Driver
which is waiting for approval, but it is already implemented. I believe it is
worth checking out.

https://blueprints.launchpad.net/nova/+spec/domain-quota-driver

Any questions I am available.

Regards,

Raildo Mascena


2014-02-06 7:22 GMT-03:00 Florent Flament florent.flament-...@cloudwatt.com:

 Splitting from thread [openstack-dev][keystone][nova] Re: Hierarchicical
 Multitenancy Discussion

 Vinod, Vish:

 I understand that actions are different from one service to the
 other. What I meant is that the RBAC enforcement engine doesn't need
 to understand the meaning of an action. It can allow (or not) an
 access, based on the action (a string - without understanding it), a
 context (e.g. a dictionary, with data about the user, role, ...)  and
 a set of rules.

 From the performance point of view, I agree that there may be an
 issue. Centralizing RBAC enforcement would mean that every API call
 has to be checked against a centralized controler, which could
 generate a heavy load on it, especially for services that require a
 heavy use of the APIs (like Swift for object storage). I believe that
 the issue would be the same for quotas enforcement. Now that I'm
 thinking about that, there's actually a similar issue with UUID tokens
 that have to be checked against Keystone for each API call. And the
 solution chosen to avoid Keystone to become a single point of failure
 (SPOF) has been to implement the PKI tokens. They allow Openstack
 services to work without checking Keystone every call.

 I agree with Vish that a good compromise may be to have RBAC/quotas
 enforcement done in each specific service (although by using a common
 middleware, like for token validation?). At the same time, RBAC rules
 and Quotas limits may be stored in a central place. There's already
 some discussion that have been made (at least on the Quotas) some
 months ago:

 http://lists.openstack.org/pipermail/openstack-dev/2013-December/020799.html

 I've got to catch up with what's been done on RBAC and Quotas, and see
 if I can propose some improvements. If you have some interesting links
 about blueprints / reviews about that I'd be interested.

 +1 for the implementation of domain Quotas for Nova.

 Florent Flament

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Raildo Mascena
Bacharel em Ciência da Computação - UFCG
Desenvolvedor no Laboratório de Sistemas Distribuidos - UFCG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
Hello, Kevin.

On Fri, Feb 7, 2014 at 12:32 AM, Kevin Conway kevinjacobcon...@gmail.com wrote:

 There's an incredibly valid reason why we use green thread abstractions
 like eventlet and gevent in Python. The CPython implementation is
 inherently single threaded so we need some other form of concurrency to get
 the most effective use out of our code. You can import threading all you
 want but it won't work the way you expect it to. If you are considering
 doing anything threading related in Python then
 http://www.youtube.com/watch?v=Obt-vMVdM8s is absolutely required
 watching.


I suggest using the threading module and letting it be either eventlet's greenthreads
(after monkey-patching) or built-in (OS) threads, depending on the deployment
scenario you use.

Green threads give us a powerful way to manage concurrency where it counts:
 I/O.


And that's exactly where GIL is released and other native threads are
executed, too. So they do not provide benefits because of overcoming GIL
but because of smart work with network connections.


 Everything in openstack is waiting on something else in openstack. That is
 our natural state of being. If your plan for increasing the number of
 concurrent requests is fork more processes then you're in for a rude
 awakening when your hosts start kernel panicking from a lack of memory.


There are threaded WSGI servers, there are even some greenthreaded ones. We
shouldn't burden ourselves with managing those processes, threads and
greenthreads.


 With green threads, on the other hand, we maintain the use of one process,
 one thread but are able to manage multiple, concurrent network operations.


But we still get one thread of execution, just as with native threads
(because of GIL).

In the case of API nodes: yes, they should (at most) do some db work and
 drop a message on the queue. That means they almost exclusively deal with
 I/O. Expecting your wsgi server to scale that up for you is wrong and, in
 fact, the reason we have eventlet in the first place.


But I'm sure it's not using eventlet's potential. In fact, I'm sure it
doesn't, since DB calls (they are the most frequent ones in the API, aren't they?)
block anyway and eventlet or any other coroutine-based framework can't do
much about it, while an application server can spawn more processes and/or
threads to handle the load.

I would like to refer to Adam Young here:
http://adam.younglogic.com/2012/03/keystone-should-move-to-apache-httpd/ -
as he provides more point in favor of external WSGI server (native calls,
IPv6, extensibility, stability and security).

Please take a look at this well-known benchmark:
http://nichol.as/benchmark-of-python-web-servers, where mod_wsgi performs
better than eventlet in the simple case and eventlet is not present in the
second case because of lack of HTTP/1.1 support.

Of course it's a matter for benchmarking. My point is that we can develop
our services with a simple threaded server and, as long as they work
correctly, we can always bring in greenthreads by monkey-patching later, if
and only if they prove themselves better than other options on the market.
Our codebase should not be dependent on any one single WSGI server or
reactor loop, whether eventlet's or anyone else's.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [HA] blueprint: Provide agent service status which can be queried via init.d script or parent process

2014-02-06 Thread Miguel Angel Ajo Pelayo


During the design of HA deployments for Neutron, I have found
that agents can run into problems and keep running,
but they have no method to expose their status to a parent process
or one that could be queried via an init.d script.

So I'm proposing this blueprint,

https://blueprints.launchpad.net/neutron/+spec/agent-service-status

to make agents expose internal status conditions via the filesystem,
as an extension of the current pid file.
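
Something as small as this would already give init.d scripts and monitoring
agents something to act on (all names here are illustrative only, not what
the blueprint prescribes):

--
import json
import os
import time

STATUS_FILE = '/var/run/neutron/dhcp-agent.status'   # next to the pid file

def report_status(ok, detail=''):
    tmp = STATUS_FILE + '.tmp'
    with open(tmp, 'w') as f:
        json.dump({'ok': ok, 'detail': detail, 'ts': time.time()}, f)
    os.rename(tmp, STATUS_FILE)   # atomic replace, safe for readers

# e.g. when the agent notices its spawned dnsmasq died:
# report_status(False, 'dnsmasq for network X is not running')
--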

This way, permanent or transient error conditions could be handled
by standard monitoring (or HA) solutions, to notify or take action
as appropriate.



It's a simple change that can make HA deployments more robust,
and capable of handling situations like this:

(If the neutron-spawned dnsmasq dies, neutron-dhcp-agent will be totally unaware)
https://bugs.launchpad.net/neutron/+bug/1257524 

We have the exact same problem with the other agents and sub-processes.

So I'm interested in getting this done for icehouse-3.

Any feedback?

Best regards, 
Miguel Ángel Ajo.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Integrating with 3rd party DB

2014-02-06 Thread Dolph Mathews
On Thu, Feb 6, 2014 at 6:38 AM, Noorul Islam Kamal Malmiyoda 
noo...@noorul.com wrote:

 Hello stackers,

 We have a database with tables users, projects, roles, etc. Is there
 any reference implementation or best practices to make keystone use
 this DB instead of its own?


What's the problem you're having? Does the schema in this database differ
from what keystone expects? What have you tried so far?



 I have been reading
 https://wiki.openstack.org/wiki/Keystone/Federation/Blueprint but I
 could not find a open reference implementation for the same.

 Regards,
 Noorul

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME 0.6 released

2014-02-06 Thread Doug Hellmann
On Thu, Feb 6, 2014 at 3:24 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Doug, is it backwards compatible with 0.5b6?


Yes, it should be. If you find otherwise, let me know so we can address
the problem.

Doug




 Renat Akhmerov
 @ Mirantis Inc.

 On 06 Feb 2014, at 07:58, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




On Thu, Feb 6, 2014 at 10:11 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Thanks Doug,




 2014-02-06 15:54 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:


 cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
 f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some
 classes for validation

 Doug



 Do you know when the docs will be updated ? [1]
 Some complex types can already be found on Ironic/Ceilometer/Climate and
 I would love to see if some have been backported to WSME as native types
 (like the UUID type or the String one)


 I'll look into why the doc build is broken. The gerrit merge should have
 triggered an update on http://wsme.readthedocs.org/en/latest/

 Doug




 -Sylvain

 [1] : http://pythonhosted.org//WSME/types.html


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Shawn Hartsock
Hi folks,

Just following up on what we were talking about in IRC.

The BP: 
https://blueprints.launchpad.net/nova/+spec/vmware-soap-session-management

Is supposed to capture some of this work/discussion. Earlier in
Icehouse we had thought that having some kind of pseudo transaction
that could encompass a set of calls would be a nice way to allow a
method to roll back to some point and re-try a set of API calls as a
unit. This proved to be messy so I've abandoned that line of work.
Instead, (as pointed out by Matthew) the session details should not be
exposed at all above the Vim object. I think this is generally a good
direction; the only problems would be timing of releases and refactors.

The core change I would like to propose to fit well with this idea of
restructuring around the Vim object revolves around how to verify and
terminate a session.

In particular, vim.SessionIsActive and vim.TerminateSession ... these
are intended as a system administrator's control API, so a root user
could evict other users. Think of administering a session through
these APIs as using 'kill -KILL pid', which might be appropriate
if you were a root or super user cleaning out a session list. If you
were to log out of SSH using 'kill -KILL -1' it would work, but it
would also be a little silly and would bypass logout scripts.

Individual users have the ability to check if their session is logged
in by using vim.CurrentTime or
ServiceContent.sessionManager.currentSession (you should see that
sessionManager and currentSession are not None). To log out your own
session there's a method you can use called vim.Logout, which will
only affect the current session. The vim.TerminateSession can force
*any* open session offline, so if there was a session ID bug in your
code you could randomly knock other driver instances offline, which
could cause interesting unreproducible bugs for other users of the
system.
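
To make the distinction concrete, something like this (a sketch only; the
vim wrapper and its get_property helper here are hypothetical, while the
method names come from the SDK docs below):

def session_is_alive(vim, session_manager):
    # currentSession is None once *our own* session has expired or been killed.
    return vim.get_property(session_manager, 'currentSession') is not None

def logout_my_session(vim, session_manager):
    # Polite logout: only ever affects the caller's own session.
    vim.Logout(session_manager)

def evict_session(vim, session_manager, session_id):
    # Administrative 'kill -KILL': can knock *any* session offline, so a stale
    # or mixed-up session_id here disconnects some other driver instance.
    vim.TerminateSession(session_manager, sessionId=[session_id])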

References (reading very carefully):
 * 
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html
 * 
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html#logout
 * 
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html#sessionIsActive

... IN PROGRESS ...
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:session-management-refactor,n,z

I will be shuffling this patch set around to reflect these changes.
I've tried to communicate the real purpose of this refactor, not to
introduce new API but to change how sessions are logged out and/or
validated.

As for
https://blueprints.launchpad.net/oslo/+spec/vmware-api

I know we're trying to keep this a lightweight fork-lift, but as we
address other problems it's becoming clear to me we need to
incorporate certain key fixes.

I emailed with Vipin about https://review.openstack.org/#/c/65075/ and he
is currently waiting for someone to direct him toward the correct
place to start committing this code. I'll have to refactor
https://review.openstack.org/#/c/63229/ so it can be used alongside
that library.

I do have a question:
* if Vipin manages to ship an Oslo lib in Icehouse, is it too late for
us to change Nova over to that lib, since the BP proposal
deadlines are past?

-- 
# Shawn.Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchronous programming: replace eventlet with asyncio

2014-02-06 Thread Joshua Harlow
+1 lots of respect for Zane in doing this :)

I'm still very much interested in seeing how we can connect taskflow into
your model.

I think the features that you guys were wanting (remote workers) are
showing up and hopefully will be all they can be!

It helps (imho) that taskflow doesn't connect itself to one model
(threaded, asyncio, yielding) since its model is tasks, dependencies, and
the order in which those are defined which is separate from how it runs
(via engines). Engines are allowed to and encouraged to use threads
underneath, asyncio or greenthreads, or remote workers.
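
A tiny sketch of what I mean (assuming taskflow's linear_flow / engines API
of the moment; adjust names to whatever the library actually ships):

from taskflow import engines, task
from taskflow.patterns import linear_flow

class Fetch(task.Task):
    default_provides = 'payload'

    def execute(self):
        return 'some-data'

class Process(task.Task):
    def execute(self, payload):
        # 'payload' is wired in from Fetch by the engine, not by us
        print('processing %s' % payload)

flow = linear_flow.Flow('fetch-then-process').add(Fetch(), Process())
engines.run(flow)  # swap in a threaded/greenthreaded/worker engine, keep the flow

The flow only says what runs and what data flows where; the engine decides how.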


-Original Message-
From: Clint Byrum cl...@fewbar.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 6, 2014 at 12:46 PM
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Asynchronous programming: replace
eventlet with asyncio

All due respect to Zane who created the scheduler. We simply could not
do what we do without it (and I think one of the first things I asked
for was parallel create ;).

IMO it is the single most confusing thing in Heat whenever one has to
deal with it. If we could stick to a threading model instead, I would
much prefer that.

Excerpts from Joshua Harlow's message of 2014-02-06 10:22:24 -0800:
 Has there been any investigation into Heat?
 
 Heat has already used parts of the coroutine approach (for better or
 worse).
 
 An example: 
 
 https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py#L230
 
 
 Decorator for a task that needs to drive a subtask.
 
 This is essentially a replacement for the Python 3-only yield from
 keyword (PEP 380), using the yield keyword that is supported in
 Python 2. For example::
 
 
 
 I bet trollius would somewhat easily replace a big piece of that code.
 
 -Josh
 
 -Original Message-
 From: victor stinner victor.stin...@enovance.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, February 6, 2014 at 1:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Asynchronous programming: replace eventlet
 with asyncio
 
 Sean Dague wrote:
  First, very cool!
 
 Thanks.
 
  This is very promising work. It might be really interesting to figure
  out if there was a smaller project inside of OpenStack that could be
  test ported over to this (even as a stackforge project), and something
  we could run in the gate.
 
 Oslo Messaging is a small project, but it's more of a library. For a full
 daemon, my colleague Mehdi Abaakouk has a proof-of-concept for Ceilometer
 replacing eventlet with asyncio. Mehdi told me that he doesn't like to
 debug eventlet race conditions :-)
 
 Our experience is the OpenStack CI system catches bugs in libraries and
 underlying components that no one else catches, and definitely getting
 something running workloads hard on this might be helpful in maturing
 Trollius. Basically coevolve it with a piece of OpenStack to know that
 it can actually work on OpenStack and be a viable path forward.
 
 Replacing eventlet with asyncio is a huge change. I don't want to force
 users to use it right now, nor to do the change in one huge commit. The
 change will be done step by step, and when possible, optional. For
 example, in Oslo Messaging, you can choose the executor: eventlet or
 blocking (and I want to add asyncio).
 
 Victor
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Devstack for IPv6 in the Comcast lab

2014-02-06 Thread Collins, Sean
Hi,

During our last meeting, there was an action item to share the Devstack
configuration that we have in the lab. Anthony Veiga, Paul Richie,
and other members of the infrastructure team did the majority of
the work involved in setting up the lab, while I was given the easier
task of just modifying DevStack to build Neutron networks that fit
our physical layout.

https://github.com/netoisstools/devstack/compare/openstack-dev:stable%2Fhavana...comcast_havana

This Devstack branch works in conjunction with a branch of
Neutron that has the IPv6 patches that only used one IPv6 subnet
keyword. 

https://github.com/netoisstools/neutron/tree/comcast_milestone_proposed

Hopefully I can build some documentation that explains our physical
layout for this lab, as well as rebasing these branches to use the newer
code and blueprints we've been working on.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RFC: Generate API sample files from API schemas

2014-02-06 Thread Rochelle.RochelleGrober
+1

Really lots more than just +1

This leads to so many more efficiencies and increases in effectiveness.

--Rocky

-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com] 
Sent: Thursday, February 06, 2014 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] RFC: Generate API sample files from API 
schemas


On Feb 6, 2014, at 5:38 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

 
 Hi,
 
 I'd like to propose one idea that autogenerates API sample files from 
 API schema for Nova v3 API.
 
 We are working on API validation for v3 API; the work requires API 
 schema which is defined with JSONSchema for each API. On the other 
 hand, API sample files of v3 API are autogenerated from the template 
 files of v3 API under nova/tests/integrated/v3/api_samples, as api_samples's 
 README.rst[1].
 The API schema files are similar to the template files, because both 
 represent the API parameter structures and each API name.
 
 For example, the template file of keypairs is
 --
 
  {
  keypair: {
  name: %(keypair_name)s
  }
  }
 --
 
 
 and the API schema file is
 --
 
  create = {
  'type': 'object',
  'properties': {
  'keypair': {
  'type': 'object',
  'properties': {
  'name': {
  'type': 'string', 'minLength': 1, 'maxLength': 255,
  'pattern': '^[a-zA-Z0-9 _-]+$'
  },
  'public_key': {'type': 'string'},
  },
  'required': ['name'],
  'additionalProperties': False,
  },
  },
  'required': ['keypair'],
  'additionalProperties': False,
  }
 --
 
 
 When implementing new v3 API, we need to write/review both files and 
 that would be hard work. To reduce the workload, I'd like to 
 propose one idea[2] that autogenerates API sample files from API 
 schema instead of template files. We would not need to write a template file 
 of a request.

+1

The template files were there because we didn't have a clear schema defined.

It would be awesome to get rid of the templates.

Vish

 
 The XML support is dropped from Nova v3 API, and the decision could 
 make this implementation easier. The NOTE is that we still need 
 response template files even if implementing this idea, because API 
 schema files of response don't exist.
 
 Any comments are welcome.
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ---
 [1]: 
 https://github.com/openstack/nova/blob/master/nova/tests/integrated/ap
 i_samples/README.rst
 [2]: https://review.openstack.org/#/c/71465/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RFC: Generate API sample files from API schemas

2014-02-06 Thread Christopher Yeoh
On Thu, 6 Feb 2014 13:38:22 +
Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

 
 Hi,
 
 I'd like to propose one idea that autogenerates API sample files from
 API schema for Nova v3 API.
 
 We are working on API validation for v3 API; the work requires API
 schema which is defined with JSONSchema for each API. On the other
 hand, API sample files of v3 API are autogenerated from the template
 files of v3 API under nova/tests/integrated/v3/api_samples, as
 api_samples's README.rst[1]. The API schema files are similar to the
 template files, because both represent the API parameter structures
 and each API name.
 
 For example, the template file of keypairs is
 --
   {
   keypair: {
   name: %(keypair_name)s
   }
   }
 --
 
 and the API schema file is
 --
   create = {
   'type': 'object',
   'properties': {
   'keypair': {
   'type': 'object',
   'properties': {
   'name': {
   'type': 'string', 'minLength': 1, 'maxLength':
 255, 'pattern': '^[a-zA-Z0-9 _-]+$'
   },
   'public_key': {'type': 'string'},
   },
   'required': ['name'],
   'additionalProperties': False,
   },
   },
   'required': ['keypair'],
   'additionalProperties': False,
   }
 --
 
 When implementing new v3 API, we need to write/review both files and
 that would be hard work. To reduce the workload, I'd like to
 propose one idea[2] that autogenerates API sample files from API
 schema instead of template files. We would not need to write a
 template file of a request.

+1 to automating this. The more the better :-) 

There are probably some details to sort out, such as wanting to generate
multiple template files from the same schema file where parameters are
optional, because the generation of the API samples doubles as real
integration test cases which often pick up issues that the normal
unit tests don't. But I'd love to see these autogenerated (they are a
bit of a pain to create manually).
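
Just to illustrate the sort of generator I have in mind (a made-up helper,
not the code in [2]):

def sample_from_schema(schema):
    # Emit a placeholder request body by walking a JSONSchema definition.
    stype = schema.get('type')
    if stype == 'object':
        return dict((name, sample_from_schema(prop))
                    for name, prop in schema.get('properties', {}).items())
    if stype == 'string':
        return '%(string)s'
    if stype in ('integer', 'number'):
        return 1
    if stype == 'boolean':
        return True
    if stype == 'array':
        return [sample_from_schema(schema.get('items', {}))]
    return None

Feeding it the keypairs create schema above would give something like
{'keypair': {'name': '%(string)s', 'public_key': '%(string)s'}}, and optional
properties like 'public_key' are exactly where the multiple-samples-per-schema
question comes in.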

Longer term I'd like us to think about how we can use the schema
files to automate more of the doc generation too (given full Pecan/WSME
support is likely still a reasonable way off).

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-06 Thread Sukhdev Kapur
Hi Jay,

Thanks for bringing this up. I have been trying to make the recheck work
and have not had much success. Therefore, I agree that we should go with
option a) for the short term until b) or c) becomes available.
I would prefer b) because we have already invested a lot in our solution
and it is fully operational.

Thanks
-Sukhdev




On Tue, Feb 4, 2014 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry for cross-posting to both mailing lists, but there's lots of folks
 working on setting up third-party testing platforms that are not members
 of the openstack-infra ML...

 tl;dr
 -

 The third party testing documentation [1] has requirements [2] that
 include the ability to trigger a recheck based on a gerrit comment.

 Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not have the
 ability to trigger job runs based on a regex-filtered comment (only on
 the existence of any new comment to the code review).

 Therefore, we either should:

 a) Relax the requirement that the third party system trigger test
 re-runs when a comment including the word recheck appears in the
 Gerrit event stream

 b) Modify the Jenkins Gerrit plugin to support regex filtering on the
 comment text (in the same way that it currently supports regex filtering
 on the project name)

 or

 c) Add documentation to the third party testing pages that explains how
 to use Zuul as a replacement for the Jenkins Gerrit plugin.

 I propose we do a) for the short term, and I'll work on c) long term.
 However, I'm throwing this out there just in case there are some Java
 and Jenkins whizzes out there that could get b) done in a jiffy.

 details
 ---

 OK, so I've been putting together documentation on how to set up an
 external Jenkins platform that is linked [4] with the upstream
 OpenStack CI system.

 Recently, I wrote an article detailing how the upstream CI system
 worked, including a lot of the gory details from the
 openstack-infra/config project's files. [5]

 I've been working on a follow-up article that goes through how to set up
 a Jenkins system, and in writing that article, I created a source
 repository [6] that contains scripts, instructions and Puppet modules
 that set up a Jenkins system, the Jenkins Job Builder tool, and
 installs/configures the Jenkins Gerrit plugin [7].

 I planned to use the Jenkins Gerrit plugin as the mechanism that
 triggers Jenkins jobs on the external system based on gerrit events
 published by the OpenStack review.openstack.org Gerrit service. In
 addition to being mentioned in the third party documentation, Jenkins
 Job Builder has the ability to construct Jenkins jobs that are triggered
 by the Jenkins Gerrit plugin [8].

 Unfortunately, I've run into a bit of a snag.

 The third party testing documentation has requirements that include the
 ability to trigger a recheck based on a gerrit comment:

 quote
 Support recheck to request re-running a test.
  * Support the following syntaxes: "recheck no bug" and "recheck bug ###".
  * Recheck means recheck everything. A single recheck comment should
 re-trigger all testing systems.
 /quote

 The documentation has a section on using the Gerrit Jenkins Trigger
 plugin [3] to accept notifications from the upstream OpenStack Gerrit
 instance.

 But unfortunately, the Jenkins Gerrit plugin does not support the
 ability to trigger a re-run of a job given a regex match of the word
 recheck. :(
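
 Concretely, the kind of comment filter that's needed is trivial to express
 in Python (a sketch, nothing the plugin provides today):

 import re

 RECHECK_RE = re.compile(r'^\s*recheck(\s+no\s+bug|\s+bug\s+#?\d+)?\s*$',
                         re.IGNORECASE | re.MULTILINE)

 def should_retrigger(comment):
     return bool(RECHECK_RE.search(comment))

 assert should_retrigger('recheck no bug')
 assert should_retrigger('recheck bug 1272500')
 assert not should_retrigger('please recheck your math')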

 So, we either need to a) change the requirements of third party testers,
 b) enhance the Jenkins Gerrit plugin with the missing functionality, or
 c) add documentation on how to set up Zuul as the triggering system
 instead of the Jenkins Gerrit plugin.

 I'm happy to work on c), but I think relaxing the restriction (a) is
 probably needed short-term.

 Best,
 -jay

 [1] http://ci.openstack.org/third_party.html
 [2] http://ci.openstack.org/third_party.html#requirements
 [3]

 http://ci.openstack.org/third_party.html#the-jenkins-gerrit-trigger-plugin-way
 [4] By linked I mean it both reads from the OpenStack Gerrit system
 and writes (adds comments) to it
 [5] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
 [6] http://github.com/jaypipes/os-ext-testing
 [7] https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
 [8]

 https://github.com/openstack-infra/jenkins-job-builder/blob/master/jenkins_jobs/modules/triggers.py#L121



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][vmware] A new VMwareAPISession

2014-02-06 Thread Davanum Srinivas
Shawn,

We are waiting on this infra review to pass - to create the
oslo.vmware git repo.
https://review.openstack.org/#/c/70761/

-- dims

On Thu, Feb 6, 2014 at 5:23 PM, Shawn Hartsock harts...@acm.org wrote:
 Hi folks,

 Just following up on what we were talking about in IRC.

 The BP: 
 https://blueprints.launchpad.net/nova/+spec/vmware-soap-session-management

 Is supposed to capture some of this work/discussion. Earlier in
 Icehouse we had thought that having some kind of pseudo transaction
 that could encompass a set of calls would be a nice way to allow a
 method to roll back to some point and re-try a set of API calls as a
 unit. This proved to be messy so I've abandoned that line of work.
 Instead, (as pointed out by Matthew) the session details should not be
 exposed at all above the Vim object. I think this is generally a good
 direction; the only problems would be timing of releases and refactors.

 The core change I would like to propose to fit well with this idea of
 restructuring around the Vim object revolves around how to verify and
 terminate a session.

 In particular, vim.SessionIsActive and vim.TerminateSession ... these
 are intended as a system administrator's control API, so a root user
 could evict other users. Think of administering a session through
 these APIs as using 'kill -KILL pid', which might be appropriate
 if you were a root or super user cleaning out a session list. If you
 were to log out of SSH using 'kill -KILL -1' it would work, but it
 would also be a little silly and would bypass logout scripts.

 Individual users have the ability to check if their session is logged
 in by using vim.CurrentTime or
 ServiceContent.sessionManager.currentSession (you should see that
 sessionManager and currentSession are not None). To log out your own
 session there's a method you can use called vim.Logout, which will
 only affect the current session. The vim.TerminateSession can force
 *any* open session offline, so if there was a session ID bug in your
 code you could randomly knock other driver instances offline, which
 could cause interesting unreproducible bugs for other users of the
 system.

 References (reading very carefully):
  * 
 http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html
  * 
 http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html#logout
  * 
 http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.SessionManager.html#sessionIsActive

 ... IN PROGRESS ...
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:session-management-refactor,n,z

 I will be shuffling this patch set around to reflect these changes.
 I've tried to communicate the real purpose of this refactor, not to
 introduce new API but to change how sessions are logged out and/or
 validated.

 As for
 https://blueprints.launchpad.net/oslo/+spec/vmware-api

 I know we're trying to keep this a lightweight fork-lift, but as we
 address other problems it's becoming clear to me we need to
 incorporate certain key fixes.

 I emailed with Vipin about https://review.openstack.org/#/c/65075/ and he
 is currently waiting for someone to direct him toward the correct
 place to start committing this code. I'll have to refactor
 https://review.openstack.org/#/c/63229/ so it can be used alongside
 that library.

 I do have a question:
 * if Vipin manages to ship an Oslo lib in Icehouse, is it too late for
 us to change Nova over to that lib, since the BP proposal
 deadlines are past?

 --
 # Shawn.Hartsock

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchronous programming: replace eventlet with asyncio

2014-02-06 Thread Zane Bitter

On 04/02/14 13:53, Kevin Conway wrote:

On 2/4/14 12:07 PM, victor stinner victor.stin...@enovance.com wrote:

The purpose of replacing eventlet with asyncio is to get a well defined
control flow, no more surprising task switching at random points.


I disagree with this. Eventlet and gevent yield the execution context
anytime an IO call is made or the 'sleep()' function is called explicitly.
The order in which greenthreads gain execution context is deterministic
even if not entirely obvious. There is no context switching at random.


This is technically correct of course, but in reality there's no way to 
know whether a particular piece of code is safe from context switches 
unless you have the entire codebase of the program and all of its 
libraries in your head at the same time. So no, it's not *random*, but 
it might as well be. And it's certainly not explicit in the way that 
asyncio is explicit; it's very much implicit in other operations.
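
To put it concretely, here is a toy example (nothing to do with any real
OpenStack code) of the kind of lost update you only avoid by knowing every
library on the call path:

import eventlet
eventlet.monkey_patch()

import time  # now green: sleep() yields to the hub, just like any patched IO call

counter = {'value': 0}

def worker():
    current = counter['value']
    time.sleep(0.01)                 # stands in for any patched call (socket, DB driver, ...)
    counter['value'] = current + 1   # may overwrite updates made by other greenthreads

pool = eventlet.GreenPool()
for _ in range(10):
    pool.spawn(worker)
pool.waitall()
print(counter['value'])              # typically 1, not 10: the classic lost update

With real threads you at least know you need a lock; here the code *looks*
safe until some library two levels down starts doing IO.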



What's more is it shouldn't matter when the context switch happens. When
writing green threaded code you just pretend you have real threads and
understand that things happen in an order other than A = B = C.


If you like pretending you have real threads, you could just use Python 
threads. Greenthreads exist because people don't want to deal with 
actual pretend threads.



One of the great benefits of using a green thread abstraction, like
eventlet or gevent, is that it lets you write normal Python code and slap
your concurrency management over the top.


Right, it lets you do that and neglects to mention that it doesn't 
actually work.


The whole premise of eventlet is that it allows you to write your code 
without thinking about thread safety, except that you *do* still have to 
think about thread safety. So the whole reason for its existence is to 
write cheques that it can't cash. It's conceptually unsound at the most 
fundamental level.


I'm not suggesting for a second that it would be an easy change - I'm 
not even sure it would be a good change - but let's not kid ourselves 
that everything is fine here in happy-land and there's nothing to discuss.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress [for appliances]

2014-02-06 Thread Prasad Vellanki
On Thu, Feb 6, 2014 at 1:19 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mike Spreitzer's message of 2014-02-05 22:17:50 -0800:
   From: Prasad Vellanki prasad.vella...@oneconvergence.com
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org,
   Date: 01/21/2014 02:16 AM
   Subject: Re: [openstack-dev] [heat] Sofware Config progress
  
   Steve  Clint
  
   That should work. We will look at implementing a resource that spins
   up a shortlived VM for bootstrapping a service VM and informing
   configuration server for further configuration.
  
   thanks
   prasadv
  
 
   On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake sd...@redhat.com wrote:
   On 01/14/2014 09:27 PM, Clint Byrum wrote:
   Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
   Steve
  
   I did not mean to have custom solution at all. In fact that would be
   terrible.  I think Heat model of software config and deployment is
  really
   good. That allows configurators such as Chef, Puppet, Salt or Ansible
 to
  be
   plugged into it and all users need to write are modules for those.
  
   What I was  thinking is if there is a way to use software
  config/deployment
 to do initial configuration of the appliance by using agentless
 system
   such  as Ansible or Salt, thus requiring no cfminit. I am not sure this
   will work either, since it might require ssh keys to be installed for
   getting ssh to work without password prompting. But I do see that
  ansible
   and salt support username/password option.
   If this would not work, I agree that the best option is to make them
   support cfminit...
   Ansible is not agent-less. It just makes use of an extremely flexible
   agent: sshd. :) AFAIK, salt does use an agent though maybe they've
 added
   SSH support.
  
   Anyway, the point is, Heat's engine should not be reaching into your
   machines. It talks to API's, but that is about it.
  
   What you really want is just a VM that spins up and does the work for
   you and then goes away once it is done.
   Good thinking.  This model might work well without introducing the
   groan another daemon problems pointed out elsewhere in this thread
   that were snipped.  Then the modules could simply be heat
   templates available to the Heat engine to do the custom config setup.
  
   The custom config setup might still be a problem with the original
   constraints (not modifying images to inject SSH keys).
  
   That model wfm.
  
   Regards
   -steve
  
 
  (1) What destroys the short-lived VM if the heat engine crashes between
  creating and destroying that short-lived VM?
 

 The heat-engine that takes over the stack. Same as the answer for what
 happens when a stack is half-created and heat-engine dies.

  (2) What if something goes wrong and the heat engine never gets the
 signal
  it is waiting for?
 

 Timeouts already cause failed state or rollback.

  (3) This still has the problem that something needs to be configured
  some(client-ish)where to support the client authorization solution
  (usually username/password).
 

 The usual answer is that's cloud-init's job but we're discussing
 working around not having cloud-init, so I suspect it has to be built
 into the image (which, btw, is a really really bad idea). Another option
 is that these weird proprietary systems might reach out to an auth
 service which the short-lived VM would also be able to contact given
 appropriate credentials for said auth service fed in via parameters.

The idea I thought was that the short-lived VM will act as a proxy to the
configuration engine, such as Puppet or Chef, to bootstrap, i.e. get the
credentials for the appliance. Once the initial bootstrap is done, the regular
configuration process as suggested by Heat will work.

Though I had one question as to how Heat will send configuration
information to Puppet or Chef to configure the VM in the tenant domain.
Assuming that Chef or Puppet is reachable from the tenant VM, how does
Heat reach the Chef server?

One scenario that needs a little thought is if the service VM is actually
owned by the provider but is invoked in the tenant domain. The management
of such a VM will come via the Neutron API, but then on the backend the driver for that
service will configure the service VM. It would be great if Heat were used
for this.

 (4) Given that everybody seems sanguine about solving the client
  authorization problem, what is wrong with code in the heat engine opening
  and using a connection to code in an appliance?  Steve, what do you mean
  by reaching into your machines that is critically different from
 calling
  their APIs?
 

 We can, and should, poke holes from heat-engine, out through a firewall,
 so it can connect to all of the endpoints. However, if we start letting
 it talk to all the managed machines, it becomes a really handy DoS tool
 and also spends a ton of time talking to things that we have no control
 over, thus taking up 

Re: [openstack-dev] WSME 0.6 released

2014-02-06 Thread Kenichi Oomichi

Hi Doug,

 -Original Message-
 From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 Sent: Thursday, February 06, 2014 11:55 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] WSME 0.6 released
 
 I have just tagged WSME 0.6. It is now on PyPI, and should be picked up 
 automatically by gate jobs as soon as the mirror
 updates.
 
 Changes since the 0.5b6 release we have been using:
 
 $ git log --format=oneline 0.5b6..0.6
 e26d1b608cc5a05940c0b6b7fc176a0d587ba611 Add 'readonly' parameter to wsattr
 9751ccebfa8c3cfbbc6b38e398f35ab557d7747c Fix typos in documents and comments
 1d6b3a471b8afb3e96253d539f44506428314049 Merge Support dynamic types
 cdf74daac2a204d5fe77f4b2bf5a956f65a73a6f Support dynamic types
 984e9e360be74ff0b403a8548927aa3619ed7098 Support building wheels (PEP-427)
 ec7d49f33cb777ecb05d6e6481de41320b37df52 Fix a typo in the types documentation
 f191f32a722ef0c2eaad71dd33da4e7787ac2424 Add IntegerType and some classes for 
 validation
 a59576226dd4affde0afdd028f54c423b8786e24 Merge Drop description from 403 
 flask test case
 e5927c8e30714c5e53cf8dc90f97b8f56b6d8cff Merge Fix SyntaxWarning under 
 Python 3
 c63ad8bbfea78957d79bbb4a573cee97e0a8bd66 Merge Remove the duplicated error 
 message from Enum
 db6c337526a6dbba11d260537f1eb95e7cabac4f Use assertRaises() for negative tests
 29547eae59d244adb681b6182b604e7085d8c1a8 Remove the duplicated error message 
 from Enum
 1bf6317a3c7f3e9c7f61776ac269d617cee8f3fe Drop description from 403 flask test 
 case
 0fa306fa4f70180641132d060a346ac0f02dbae3 Fix SyntaxWarning under Python 3

Thank you for releasing new WSME.
I will try to apply the IntegerType class to Ceilometer with the new WSME.
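
Something along these lines (assuming the new IntegerType takes
minimum/maximum keyword arguments; I will check the actual 0.6 signature):

from wsme import types as wtypes

class SampleQuery(wtypes.Base):
    # reject out-of-range limits at the WSME layer instead of in handler code
    limit = wtypes.wsattr(wtypes.IntegerType(minimum=1, maximum=1000))
    marker = wtypes.wsattr(wtypes.text)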


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-06 Thread Adam Young

On 02/06/2014 05:18 AM, Florent Flament wrote:

Vish:

+1 for hierchical IDs (e.g:
b04f9ea01a9944ac903526885a2666de.c45674c5c2c6463dad3c0cb9d7b8a6d8)


Please keep names and identifiers separate.  Identifiers should *NOT* be 
hierarchical.  Names can be.


Think of the operating system distinction between dentries (names) and
inode identifiers (IDs): only dentries are hierarchical.



If you want to move something from one container to another, and
maintain identity, use the same id; just mount it somewhere else.
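
In other words (made-up values, just to illustrate the split):

project = {
    'id': 'c45674c5c2c6463dad3c0cb9d7b8a6d8',          # opaque, never changes
    'name': 'orgb.team1.projectx',                     # hierarchical, human-facing
    'parent_id': 'b04f9ea01a9944ac903526885a2666de',   # re-parenting only touches this
}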



(names used for clarity of explanations).

Chris:

+1 for hierarchical /project flavors, images, and so on ..

Vinod, Vish:

Starting a new thread, "[openstack-dev][keystone] Centralized policy rules
and quotas", for thoughts about centralized RBAC rules and quotas.


Florent Flament


- Original Message -
From: Chris Behrens cbehr...@codestud.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, February 5, 2014 8:43:08 PM
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
Multitenancy Discussion


On Feb 5, 2014, at 3:38 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Feb 5, 2014, at 12:27 AM, Chris Behrens cbehr...@codestud.com wrote:


1) domain ‘a’ cannot see instances (or resources in general) in domain ‘b’. It 
doesn’t matter if domain ‘a’ and domain ‘b’ share the same tenant ID. If you 
act with the API on behalf of domain ‘a’, you cannot see your instances in 
domain ‘b’.
2) Flavors per domain. domain ‘a’ can have different flavors than domain ‘b’.

I hadn’t thought of this one, but we do have per-project flavors so I think this 
could work in a project hierarchy world. We might have to rethink the idea of global 
flavors and just stick them in the top-level project. That way the flavors could be 
removed. The flavor list would have to be composed by matching all parent projects. 
 It might make sense to have an option for flavors to be “hidden” in sub 
 projects somehow as well. In other words if orgb wants to delete a flavor from the 
global list they could do it by hiding the flavor.

Definitely some things to be thought about here.

Yeah, it's completely do-able in some way. The per-project flavors is a good 
start.


3) Images per domain. domain ‘a’ could see different images than domain ‘b’.

Yes this would require similar hierarchical support in glance.

Yup :)


4) Quotas and quota limits per domain. your instances in domain ‘a’ don’t count 
against quotas in domain ‘b’.

Yes we’ve talked about quotas for sure. This is definitely needed.

Also: not really related to this, but if we're making considerable quota 
changes, I would also like to see the option for separate quotas _per flavor_, 
even. :)


5) Go as far as using different config values depending on what domain you’re 
using. This one is fun. :)

Curious for some examples here.

With the idea that I want to be able to provide multiple virtual clouds within 
1 big cloud, these virtual clouds may desire different config options. I'll 
pick one that could make sense:

# When set, compute API will consider duplicate hostnames
# invalid within the specified scope, regardless of case.
# Should be empty, project or global. (string value)
#osapi_compute_unique_server_name_scope=

This is the first one that popped into my mind for some reason, and it turns out that this is actually a more 
complicated example than I was originally intending. I left it here, because there might be a potential issue 
with this config option when using 'org.tenant' as project_id. Ignoring that, let's say this config option 
had a way to say "I don't want duplicate hostnames within my organization at all", "I don't 
want any single tenant in my organization to have duplicate hostnames", or "I don't care at all 
about duplicate hostnames". Ideally each organization could have its own config for this.


volved with this. I am not sure that I currently have the time to help with 
implementation, however.

Come to the meeting on friday! 1600 UTC

I meant to hit the first one. :-/   I'll try to hit it this week.

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] http://www.xrefs.info: OpenStack source code cross reference and browse

2014-02-06 Thread xrefs.info Admin
hello,

I made http://www.xrefs.info available to the open source community in
the hope of making open source developers more productive.
The site hosts many open source code projects' cross references based
on OpenGrok, which is a very fast cross reference tool and easy to use.

OpenStack is a big open source project that plays a key role in cloud computing.
It is covered by the site, including the last few releases and the latest code
from the git repository, updated nightly.
To access a specific version of OpenStack, go to
http://www.xrefs.info and select a version of a component in the OpenStack
select box.
If you want to search for the definition of a function, simply type it in
the definition box; if you want to do a full search, type your text in
the first box; if you want to search for a file, simply type the file name in
the file path box. Hit the search button. That's it!

The site covers other projects like:
 - Linux kernel from version 0.01 to 3.13.2, plus nightly latest.
 - Linux boot loader (u-boot, lilo, grub, syslinux)
 - Linux user space core packages
 - Android
 - Other cloud computing projects like the Xen and QEMU hypervisors, CloudStack, etc.
 - Big data project: Hadoop
 - BSD: FreeBSD, NetBSD, DragonflyBSD
 - Languages: OpenJDK, Perl, Python, PHP

If you have any questions, comments or suggestions for the site,
please let me know.

Thanks.
xrefs.info admin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchronous programming: replace eventlet with asyncio

2014-02-06 Thread Chris Behrens

On Feb 6, 2014, at 11:07 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 +1
 
 To give an example as to why eventlet's implicit monkey-patch-the-world approach isn't 
 especially great (although it's what we are currently using throughout 
 openstack).
 
 The way I think about how it works is to think about what libraries a 
 single piece of code calls and how it is very hard to predict whether that 
 code will trigger an implicit switch (conceptually similar to a context 
 switch).

Conversely, switching to asyncio means that every single module call that would 
have blocked before monkey patching… will now block. What is worse? :)

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev