Re: [openstack-dev] [WSME] Complex Type Validation Issue On Deserialization

2014-09-11 Thread Brandon Logan
Fixed the issue and it's a boneheaded one as I suspected.

I need to instantiate the type, i.e. wtypes.IPv4AddressType(), instead of
just passing the class wtypes.IPv4AddressType.

It does appear that the validation for IPv4AddressType has a bug: it
should return the value, but there is no return at all.
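
For reference, the corrected declaration looks like this (a minimal
sketch, using the class from my original report below):

    from wsme import types as wtypes

    class LoadBalancer(wtypes.Base):
        # pass an *instance* of the user type, not the class itself
        ip_address = wtypes.wsattr(wtypes.IPv4AddressType())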

Thanks,
Brandon

On Thu, 2014-09-11 at 04:45 +, Brandon Logan wrote:
 I'm having an issue where incoming validation of a complex type with
 an attribute of type IPv4Address is failing validation even when the
 value is a correct IPv4 address.  The same is happening for UuidType
 and IPv6AddressType.  I am using Pecan with WSME 0.6.1 and Python
 2.7.6.
 
 Complex Type:
 class LoadBalancer(wtypes.Base):
     ip_address = wtypes.wsattr(wtypes.IPv4AddressType)
 
 Controller Method:
 @pecan.wsexpose(v1types.LoadBalancer, body=v1types.LoadBalancer)
 def post(self, load_balancer):
     return load_balancer
 
 
 When doing a POST to the correct resource with this body:
 
 {"ip_address": "10.0.0.1"}
 
 I am getting this error:
 
 {
     "debuginfo": null,
     "faultcode": "Server",
     "faultstring": "Value should be IPv4 format"
 }
 
 
 It looks like, in the fromjson method, when it starts validating the
 actual IP 10.0.0.1, it is passing an instantiation of the
 IPv4AddressType instead of the actual IP address 10.0.0.1 to
 wsme.types.validate_value.  Am I doing something wrong or is this a
 bug?  The stack trace did not yield any more information, unless you
 want the actual methods called.  I can provide that if needed.
 
 Thanks,
 Brandon 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-11 Thread Flavio Percoco
On 09/10/2014 03:18 PM, Gordon Sim wrote:
 On 09/10/2014 09:58 AM, Flavio Percoco wrote:
 To clarify the doubts about what Zaqar is or isn't, let me quote what's
 written in the project's overview section[0]:

 Zaqar is a multi-tenant cloud messaging service for web developers.
 
 How are different tenants isolated from each other? Can different
 tenants access the same queue? If so, what does Zaqar do to prevent one
 tenant from negatively affecting the other? If not, how is communication
 with other tenants achieved?
 
 Most messaging systems allow authorisation to be used to restrict what a
 particular user can access and quotas to restrict their resource
 consumption. What does Zaqar do differently?

Zaqar keeps queues/groups isolated on a per-tenant basis. As of now,
there's still no way for 2 tenants to access the same group of
messages. However, we've already discussed a way to provide more
fine-grained access control on messages - we'll likely work on that
during Kilo.


 The service features a fully RESTful API, which developers can use to
 send messages between various components of their SaaS and mobile
 applications, by using a variety of communication patterns. Underlying
 this API is an efficient messaging engine designed with scalability and
 security in mind.

 Other OpenStack components can integrate with Zaqar to surface events
 to end users and to communicate with guest agents that run in the
 over-cloud layer.
 
 I may be misunderstanding the last sentence, but I think *direct*
 integration of other OpenStack services with Zaqar would be a bad idea.
 
 Wouldn't this be better done through oslo.messaging's notifications in
 some way? and/or through some standard protocol (and there's more than
 one to choose from)?
 
 Communicating through a specific, fixed messaging system, with its own
 unique protocol is actually a step backwards in my opinion, especially
 for things that you want to keep as loosely coupled as possible. This is
 exactly why various standard protocols emerged.
 

Yes and no. The answer is yes most of the time, but there are use cases,
like the ones mentioned here[0], that make Zaqar a good tool for the job.

[0] https://etherpad.openstack.org/p/zaqar-integrated-projects-use-cases

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Jesse Pretorius
On 10 September 2014 17:20, Chris Friesen chris.frie...@windriver.com
wrote:

 I see that the OpenStack high availability guide is still recommending the
 active/standby method of configuring RabbitMQ.

 Has anyone tried using active/active with mirrored queues as recommended
 by the RabbitMQ developers?  If so, what problems did you run into?


Whoops - finger trouble led to my last email being sent prematurely.

We've been running RabbitMQ 3.1.5 as a high availability cluster in
production for over a year now. Previous versions had some nasty memory
leaks, and later versions changed the way authentication was handled; we
haven't yet worked out the changes to our chef recipes to facilitate
the upgrades.

We're still only using one IP address in the OpenStack conf files - this
points to a virtual IP address which floats from one node to the other, so
one may consider it an active-passive cluster in actual usage.

We previously used two nodes configured as single servers and used DRBD and
pacemaker to manage the data partition and failover, but RabbitMQ's queue
mirroring is much less pain to deal with than DRBD.
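
If it helps anyone, turning mirroring on is a one-line policy (a sketch
for RabbitMQ 3.x; adjust the queue pattern and ha-mode to taste):

    # mirror every queue across all nodes in the cluster
    rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'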

The only trouble we've had has been when there have been network partitions
for extended periods, but I would expect a little pain to be normal in that
situation. In our case it's been simple enough
to just restart the node to get the queues back to a normal running state.
We have seen some issues with the service restarts not working too well
(the service stays running), but that's easy enough to resolve too.

I would recommend that you ask this question on the openstack-operators list
as you'll likely get more feedback.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in build

2014-09-11 Thread Solly Ross
I want to apologize for my rapid response, I was incorrect about the license
because of the file you pointed out.  I did not intend to sound snarky or
anything like that in either the original email or the reply.

Anyway, for future reference, I believe the last thread where this was 
discussed was
here: http://lists.openstack.org/pipermail/openstack-dev/2014-April/031689.html,
which basically reiterates what David says above (it's good to have links to the
past discussions, IMO).

Best Regards,
Solly Ross

P.S. Here's hoping that the JSHint devs eventually find a way to remove that 
line
from the file -- according to https://github.com/jshint/jshint/issues/1234, not
much of the original remains.

- Original Message -
 From: Aaron Sahlin asah...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, September 10, 2014 1:35:48 PM
 Subject: Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in 
 build
 
 What you are finding is the same as I found, which raised my concern.
 
 Thanks for the pointer to legal-disc...@lists.openstack.org, I will post
 the question there (let the lawyers figure it out).
 
 
 
 
 On 9/10/2014 12:16 PM, Solly Ross wrote:
  - Original Message -
  From: Jeremy Stanley fu...@yuggoth.org
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Wednesday, September 10, 2014 1:10:18 PM
  Subject: Re: [openstack-dev] [Horizon] Licensing issue with using JSHint
  in build
 
  On 2014-09-10 13:00:29 -0400 (-0400), Solly Ross wrote:
  JSHint *isn't* Douglas Crockford. It was written by someone who
  (understandably) thought Douglas Crockford had some good ideas,
  but was overzealous.
  [...]
 
  Overzealous enough to copy his code.
  ?? This sentence doesn't make much sense.  I meant to say that
  Douglas Crockford was overzealous (which he is, IMO).
 
  The license is as such:
  https://github.com/jshint/jshint/blob/master/LICENSE
  Ahem. https://github.com/jshint/jshint/blob/master/src/jshint.js#L19
  Fair enough.  I stand corrected.  I didn't catch that.
  The general license, however, is as stated.
 
  You are thinking of JSLint, which is written by Douglas Crockford.
  JSHint is a derivative project of JSLint. Sorry to burst your
  bubble.
  To be fair, it's been undergoing *major* revisions lately, making it
  resemble JSLint less and less in terms of what it checks for.  Having
  used it in the past, functionality-wise it's very different.  While it
  maintains some backwards compatibility, it has added new checks and
  doesn't complain about nearly the number of things that JSLint
  complains about (for good reasons).
 
  --
  Jeremy Stanley
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Sylvain Bauza


Le 11/09/2014 01:10, Joe Cropper a écrit :

Agreed - I’ll draft up a formal proposal in the next week or two and we can 
focus the discussion there.  Thanks for the feedback - this provides a good 
framework for implementation considerations.


Count me in, I'm interested in discussing the next stage.

When preparing the scheduler split, I just discovered it was unnecessary 
to keep the instance groups setup in the scheduler, because it was 
creating dependencies on other Nova objects that the Scheduler doesn't 
necessarily need to handle.
Accordingly, I proposed a patch for moving the logic to the conductor 
instead; see the proposal here:

https://review.openstack.org/110043

Reviews are welcome of course.

-Sylvain



- Joe
On Sep 10, 2014, at 6:00 PM, Russell Bryant rbry...@redhat.com wrote:


On 09/10/2014 06:46 PM, Joe Cropper wrote:

Hmm, not sure I follow the concern, Russell.  How is that any different
from putting a VM into the group when it’s booted as is done today?
This simply defers the ‘group insertion time’ to some time after
initial the VM’s been spawned, so I’m not sure this creates anymore race
conditions than what’s already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or
something, but that would be pretty simple to check make sure its task
state is None or some such.

The way this works at boot is already a nasty hack.  It does policy
checking in the scheduler, and then has to re-do some policy checking at
launch time on the compute node.  I'm afraid of making this any worse.
In any case, it's probably better to discuss this in the context of a
more detailed design proposal.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Joe Cropper
Great to hear.  I started a blueprint for this [1].  More detail can be added 
once the kilo nova-specs directory is created… for now, I’ve tried to put some 
fairly detailed notes on the blueprint’s description.

[1] https://blueprints.launchpad.net/nova/+spec/dynamic-server-groups

- Joe
On Sep 11, 2014, at 2:11 AM, Sylvain Bauza sba...@redhat.com wrote:

 
 Le 11/09/2014 01:10, Joe Cropper a écrit :
 Agreed - I’ll draft up a formal proposal in the next week or two and we can 
 focus the discussion there. Thanks for the feedback - this provides a good 
 framework for implementation considerations.
 
 Count me on it, I'm interested in discussing the next stage.
 
 When preparing the scheduler split, I just discovered it was unnecessary to 
 keep the instance groups setup in the scheduler, because it was creating 
 dependencies to other Nova objects that the Scheduler doesn't necessarly need 
 to handle.
 I proposed accordingly a patch for moving the logic to the conductor instead, 
 see the proposal here :
 https://review.openstack.org/110043
 
 Reviews are welcome of course.
 
 -Sylvain
 
 
 - Joe
 On Sep 10, 2014, at 6:00 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/10/2014 06:46 PM, Joe Cropper wrote:
 Hmm, not sure I follow the concern, Russell.  How is that any different
 from putting a VM into the group when it’s booted as is done today?
 This simply defers the ‘group insertion time’ to some time after
 initial the VM’s been spawned, so I’m not sure this creates anymore race
 conditions than what’s already there [1].
 
 [1] Sure, the to-be-added VM could be in the midst of a migration or
 something, but that would be pretty simple to check make sure its task
 state is None or some such.
 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-09-11 Thread Richard Jones
[This is Horizon-related but affects every service in OpenStack, hence no
filter in the subject]

I would like for OpenStack to support browser-based Javascript API clients.
Currently this is not possible because of cross-origin resource blocking in
Javascript clients - that is, given some Javascript hosted on
https://horizon.company.com/; you cannot, for example, call from that
Javascript code to an API on https://apis.company.com:5000/v2.0/tokens; to
authenticate with Keystone.

There are three solutions to this problem:

1. the Horizon solution, in which those APIs are proxied by a very thick
   layer of additional Python API, plus some Python view code with some
   Javascript on the top only calling the Horizon view code,
2. add CORS support to all the OpenStack APIs through a new WSGI middleware
   (for example oslo.middleware.cors) and configured into each of the API
   services individually since they all exist on different origin
   host:port combinations, or
3. a new web service that proxies all the APIs and serves the static
   Javascript (etc) content from the one origin (host). APIs are then served
   from new URL roots /name/ where the name is from the serviceCatalog
   entry. Static content can be served from /static/. The serviceCatalog
   from keystone will be rewritten on the fly to point the API publicURLs at
   the new service. Requests are no longer cross-origin.

I have implemented options 2 and 3 as an exercise to see how horrid each one
is.


== CORS Middleware ==

For those wanting a bit of background, I have written up a spec for oslo
that talks about how this could work: https://review.openstack.org/#/c/119485/

The middleware option results in a reasonably nice bit of middleware. It's
short and relatively easy to test. The big problem with it comes in
configuring it in all the APIs. The configuration for the middleware takes
two forms:

1. hooking oslo.middleware.cors into the WSGI pipeline (there's more than
   one in each API),
2. adding the CORS configuration itself for the middleware in the API's main
   configuration file (eg. keystone.conf or nova.conf).

So for each service, that's two configuration files *and* the kicker is that
the paste configuration file is non-trivially different in almost every
case.

That's a lot of work, and confusing for deployers. Configuration management
tools can ease *some* of this burden (the *.conf files) but those paste
files are a bit of a mess :(
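
To give a flavour of the changes (a hypothetical sketch only -- the
filter factory path, pipeline names and the [cors] option names all
assume the middleware lands roughly as described in the spec above):

    # in each API's paste config, for every relevant pipeline
    [filter:cors]
    paste.filter_factory = oslo.middleware.cors:CORSMiddleware.factory

    [pipeline:public_api]
    pipeline = cors sizelimit authtoken public_service

    # and in the service's main configuration file (eg. keystone.conf)
    [cors]
    allowed_origin = https://horizon.company.com
    allow_methods = GET,POST,PUT,DELETE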

Once the config change is in place, it works (well, except for an issue I
ran into relating to oslo.middleware.sizelimit which I'll go into in
another place).

The implementation hasn't been pushed up for review as I'm not sure it
should be. I can do this if people wish me to.


== New Single-Point API Service ==

Actually, this is not horrid in any way - unless that publicURL rewriting
gives you the heebie-jeebies.

It works, and offers us some nice additional features like being able to
host the service behind SSL without needing to get a bazillion
certificates. And maybe load balancing. And maybe API access filtering.

I note that https://openrepose.org already exists to be *something* like
this, but it's not *precisely* what I'm proposing. Also Java euwww ;)
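
To illustrate the rewriting idea (a minimal Python 2 sketch, not the
actual implementation; the function and parameter names are made up):

    import copy
    import urlparse

    def rewrite_catalog(catalog, proxy_root):
        """Point each publicURL at /<service name>/ under the one origin."""
        catalog = copy.deepcopy(catalog)
        for entry in catalog:
            for endpoint in entry['endpoints']:
                old = urlparse.urlparse(endpoint['publicURL'])
                endpoint['publicURL'] = '%s/%s%s' % (
                    proxy_root.rstrip('/'), entry['name'], old.path)
        return catalog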


So, I propose that the idea of CORS-in-all-the-things as an idea be
put aside as unworkable.

I intend to pursue the single-point API service that I have described as a
way of moving forward in prototyping a pure-Javascript OpenStack Dashboard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-11 Thread Thomas Goirand
Hi Matt,

On 09/10/2014 04:30 AM, Matt Riedemann wrote:
 It took me a while to untangle this so prepare for links. :)
 
 I noticed this change [1] today for global-requirements to require tooz
 [2] for a ceilometer blueprint [3].
 
 The sad part is that tooz requires pymemcache [4] which is, from what I
 can tell, a memcached client that is not the same as python-memcached [5].
 
 Note that python-memcached is listed in global-requirements already [6].
 
 The problem I have with this is it doesn't appear that RHEL/Fedora
 package pymemcache (they do package python-memcached).  I see that
 openSUSE builds separate packages for each.  It looks like Ubuntu also
 has separate packages.
 
 My question is, is this a problem?  I'm assuming RDO will just have to
 package python-pymemcache themselves but what about people not using RDO
 (SOL? Don't care? Other?).
 
 Reverting the requirements change would probably mean reverting the
 ceilometer blueprint (or getting a version of tooz out that works with
 python-memcached which is probably too late for that right now).  Given
 the point in the schedule that seems pretty drastic.
 
 Maybe I'm making more of this than it's worth but wanted to bring it up
 in case anyone else has concerns.
 [1] https://review.openstack.org/#/c/93443/
 [2] https://github.com/stackforge/tooz/blob/master/requirements.txt#L6
 [3]

http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/central-agent-partitioning.html

 [4] https://pypi.python.org/pypi/pymemcache
 [5] https://pypi.python.org/pypi/python-memcached/
 [6]

https://github.com/openstack/requirements/blob/master/global-requirements.txt#L108



On my side (as the Debian package maintainer of OpenStack), I was more
than happy to see that Ceilometer made the choice to use a Python module
for memcache which supports Python 3. Currently python-memcache does
*not* support Python 3. It's in fact standing in the way to add Python 3
compatibility to *a lot* of the OpenStack packages, because this
directly impact python-keystoneclient, which is a (build-)dependency of
almost everything.

This situation has been very frustrating for me. I really would like
this to be solved. I see 2 ways to have it solved:
1- Complete the Python 3 support for python-memcache
or
2- Switch all OpenStack packages to pymemcache

There have been a few attempts at 1 already, even a complete fork
of the project, which you can see on both GitHub and PyPI. But it's a
real problem that it hasn't been upstreamed, and that the fork doesn't
support *both* Python 2 and 3.

I've tried to work out the port to Python 3 myself, but lamely failed.
It's not that easy, as one has to deal with unicode strings from and to
the memcache server, and I have to admit I lack knowledge of how this
works. Also, the unit tests of python-memcache would have to be
re-written in a much better way than they are right now.

It would be really cool if someone could work on porting python-memcache
to Python 3, regardless of whether we decide to switch to pymemcache: that
way, we'd address the Python 3 compatibility quickly. If one wants to work
out Python 3 compatibility in python-memcache, it'd be really great to have
it before Debian Jessie is frozen on the 5th of Nov, so that I get a
chance to package it. I asked several people, it seems that none really
wants to do it (is the code too ugly? probably...).

As for 2, well, that would be a lot of work, I guess. I haven't
compared the two APIs yet, but they are likely different. And we have calls
to python-memcache in a lot of our projects. But this seems to be the
best way forward, as pymemcache seems to be written in a much better
way: smaller methods, code that is easier to understand, and more
extensive unit tests.
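
For those curious, the two client APIs look like this side by side (a
quick sketch; module paths assume recent releases of each library):

    import memcache                            # python-memcached, Python 2 only
    from pymemcache.client.base import Client  # pymemcache, Python 3 capable

    mc = memcache.Client(['127.0.0.1:11211'])
    mc.set('greeting', 'hello')
    print(mc.get('greeting'))

    pmc = Client(('127.0.0.1', 11211))
    pmc.set('greeting', 'hello')
    print(pmc.get('greeting'))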

As for the issue with Red Hat packaging, well, I'm sorry if that's a
problem for you, though really, pymemcache is a good choice, and I
support Julien on that one.

Cheers,

Thomas Goirand (zigo)

P.S: It's to be noted that python-memcached is called
python-memcache in Debian, because that's the name of the module when
doing imports...


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
Hi all,
what about using an experimental tag for experimental features?

After we implemented feature groups [1], we can divide our features, and
complex features, or those which don't get enough QA resources in the dev
cycle, can be declared experimental. It would mean that those are not
production-ready features.
Shipping them in experimental mode still allows early adopters to give
them a try and bring feedback to the development team.

I think we should not count bugs towards HCF criteria if they affect only
experimental feature(s). At the moment, we have Zabbix as an experimental
feature, and Patching of OpenStack [2] is under consideration: if today QA
doesn't approve it as ready for production use, we have no other
choice. All deadlines have passed, and we need to get 5.1 finally out.

Any objections / other ideas?

[1]
https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
[2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in build

2014-09-11 Thread Martin Geisler
Solly Ross sr...@redhat.com writes:

Hi,

I recently began using ESLint for all my JavaScript linting:

  http://eslint.org/

It has nice documentation, a normal license, and you can easily write
new rules for it.

 P.S. Here's hoping that the JSHint devs eventually find a way to
 remove that line from the file -- according to
 https://github.com/jshint/jshint/issues/1234, not much of the original
 remains.

I don't think it matters how much of the original code remains -- what
matters is that any rewrite is a derived work. Otherwise Debian and
others could have made the license pure MIT long ago.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Horizon] Last remaining Django 1.7 issue in Horizon (and other OpenStack packages)

2014-09-11 Thread Thomas Goirand
Hi,

Here's a message from the maintainer of python-django in Debian. We've
been trying to switch to Django 1.7, because we would like to benefit
from its security support for the life of Debian Jessie.

I have already fixed numerous Debian packages regarding Django 1.7
compatibility (for example: python-appconf,
python-django-openstack-auth, python-django-compressor,
python-django-pycss, tuskar-ui, many issues in Horizon, and more...).

This is (hopefully...) the very last remaining issue, so it'd be really
cool to get at least comments on it, so I can consider everything
solved. Input from the Horizon team would be great.

BTW, I'd like to publicly thank Raphael for all the help he provided
investigating many of the Django 1.7 issues in OpenStack related
packages. He's been really great and supportive, and he warned us soon
enough that we had time to fix everything together.

Cheers,

Thomas Goirand (zigo)

 Original Message 
Subject: [PKG-Openstack-devel] Bug#755651: [openstack-dev] [horizon]
Support for Django 1.7: there's a bit of work, though it looks fixable
to me...
Resent-Date: Wed, 10 Sep 2014 20:45:23 +
Resent-From: Raphael Hertzog hert...@debian.org
Resent-To: debian-bugs-d...@lists.debian.org
Resent-CC: PKG OpenStack openstack-de...@lists.alioth.debian.org
Date: Wed, 10 Sep 2014 22:42:09 +0200
From: Raphael Hertzog hert...@debian.org
Reply-To: Tracking bugs and development for OpenStack
openstack-de...@lists.alioth.debian.org
To: Thomas Goirand z...@debian.org
CC: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, 755...@bugs.debian.org

[ I'm not subscribed to openstack-devel, please cc me ]

On Tue, 05 Aug 2014, Thomas Goirand wrote:
 I'm now down to only a single error not solved:
 
 ==
 FAIL: test_update_project_when_default_role_does_not_exist
 (openstack_dashboard.dashboards.admin.projects.tests.UpdateProjectWorkflowTests)
 --
 Traceback (most recent call last):
   File
 /home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/openstack_dashboard/test/helpers.py,
 line 83, in instance_stub_out
 return fn(self, *args, **kwargs)
   File
 /home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/openstack_dashboard/dashboards/admin/projects/tests.py,
 line 1458, in test_update_project_when_default_role_does_not_exist
 self.client.get(url)
 AssertionError: NotFound not raised
 
 Any idea?

I looked further into this and in fact the exception is caught by the
template rendering code (IncludeNode.render() in
django/template/loader_tags.py while processing the include of
'horizon/common/_workflow.html' and to be more precise the processing of
'{% if step.has_required_fields %}' is the exact place where the exception
is fired). I can get the test to pass by changing the Django setting
TEMPLATE_DEBUG to True.
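
A sketch of how the test could opt into that behaviour without changing
the production default (assuming Django's override_settings helper,
available since 1.4):

    from django.test.utils import override_settings

    class UpdateProjectWorkflowTests(helpers.TestCase):
        @override_settings(TEMPLATE_DEBUG=True)
        def test_update_project_when_default_role_does_not_exist(self):
            # existing test body unchanged; with TEMPLATE_DEBUG on,
            # IncludeNode.render() re-raises instead of swallowing NotFound
            pass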

Django 1.6 has slightly different code here and doesn't intercept
it (I'm not sure why, but I did two parallel step by step execution
with pdb to verify this).

But such a setting is not appropriate for production usage, and somehow I
expect horizon to rely on the fact that the exception can be caught at
some upper level. So I'm not quite sure what to suggest here.

The expected behaviour is not clear to me and relying on exception
propagation from code executed indirectly in template rendering seems
bad design from the start.

That said, the consequence of this failing test is just that we get a
page without any useful content instead of an internal server error.
Not sure that it matters much either...

Cheers,
-- 
Raphaël Hertzog ◈ Debian Developer

Discover the Debian Administrator's Handbook:
→ http://debian-handbook.info/get/

___
Openstack-devel mailing list
openstack-de...@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/openstack-devel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Eoghan Glynn


 Hi,
 
 Nejc has been doing great work and has been very helpful during the
 Juno cycle and his help is very valuable.
 
 I'd like to propose that we add Nejc Saje to the ceilometer-core group.
 
 Please, dear ceilometer-core members, reply with your votes!

A hearty +1 from me, Nejc has made a great impact in Juno.

Cheers,
Eoghan 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Steven Hardy
On Wed, Sep 10, 2014 at 04:44:01PM -0500, Jason Greathouse wrote:
I'm trying to find a way to create a set of servers and attach a new
volume to each server.  
I first tried to use block_device_mapping but that requires an existing
snapshot or volume and the deployment would fail when Rackspace
intermittently timed out trying to create the new volume from a
snapshot.  
I'm now trying with 3 ResourceGroups: OS::Cinder::Volume to build volumes
followed by OS::Nova::Server and then trying to attach the volumes
with  OS::Cinder::VolumeAttachment.

Basically creating lots of resource groups for related things is the wrong
pattern.  You need to create one nested stack template containing the
related things (Server, Volume and VolumeAttachment in this case), and use
ResourceGroup to multiply them as a unit.
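
As a sketch (file names and property values illustrative), the group
template multiplies a nested template holding the related trio:

    # group.yaml
    heat_template_version: 2013-05-23
    resources:
      the_group:
        type: OS::Heat::ResourceGroup
        properties:
          count: 3
          resource_def:
            type: server_with_volume.yaml

    # server_with_volume.yaml
    heat_template_version: 2013-05-23
    resources:
      server:
        type: OS::Nova::Server
        properties:
          image: my-image      # placeholder
          flavor: m1.small
      volume:
        type: OS::Cinder::Volume
        properties:
          size: 1
      attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          instance_uuid: { get_resource: server }
          volume_id: { get_resource: volume }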

I answered a similar question here on the openstack general ML recently
(which for future reference may be a better ML for usage questions like
this, as it's not really development discussion):

http://lists.openstack.org/pipermail/openstack/2014-September/009216.html

Here's another example which I used in a summit demo, which I think
basically does what you need?

https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group

Steve.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Mehdi Abaakouk
+1

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
 if we point somewhere about known issues in those experimental features
there might be dozens of bugs.
Maybe we can use a tag per feature, for example zabbix, so it will be easy
to search in LP for all open bugs regarding the Zabbix feature?

On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s).

 +1, I'm totally agree with you - it makes no sense to count
 experimental bugs as HCF criteria.

  Any objections / other ideas?

 I think it would be great for customers if we point somewhere about
 knowing issues in those experimental features. IMHO, it should help
 them to understand what's wrong in case of errors and may prevent bug
 duplication in LP.


 On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  Hi all,
  what about using experimental tag for experimental features?
 
  After we implemented feature groups [1], we can divide our features and
 for
  complex features, or those which don't get enough QA resources in the dev
  cycle, we can declare as experimental. It would mean that those are not
  production ready features.
  Giving them live still in experimental mode allows early adopters to
 give a
  try and bring a feedback to the development team.
 
  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s). At the moment, we have Zabbix as experimental
  feature, and Patching of OpenStack [2] is under consideration: if today
 QA
  doesn't approve it to be as ready for production use, we have no other
  choice. All deadlines passed, and we need to get 5.1 finally out.
 
  Any objections / other ideas?
 
  [1]
 
 https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
  [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-11 Thread Germy Lure
Hi stackers,

According to my statistics (J2), the LOC of vendors' plugins and drivers is
about 102K, while the whole neutron tree is 220K.
That is to say, the community has paid and is paying over 46% of its energy
to maintain vendors' code. If we take mails, bugs,
BPs and so on into consideration, this percentage will be even higher.

Most of this code is just plugins and drivers implementing almost the
same functions. Every vendor submits a plugin,
and the community does the same thing over and over. Meaningless. I
think it's time to move them out.
Let's focus on improving those features that exist but are still weak, and
on introducing important and interesting new features.

My suggestions now:
1. monopolized plugins
  1) The community only standardizes the NB API and keeps the built-ins,
such as the ML2, OVS and Linux bridge plugins.
  2) Vendors maintain their plugins locally.
  3) Users get neutron from the community and a plugin from some vendor on
demand.
2. service plugins
  1) The community standardizes the SB API and keeps the open source
drivers (iptables, Openswan, etc.) as built-ins.
  2) Vendors only provide drivers, not plugins. And those drivers also need
not be delivered to the community.
  3) Like above, users can get code on demand from vendors or just use open
source.
3. ML2 plugin
  1) Like the service and monopolized plugins, the community just keeps the
open source implementations as built-ins.
  2) L2-population should be kept.

I am very happy to discuss this further.

vendors' code stat. table (excluding built-in plugins and drivers)

Path                               Size (LOC)
neutron-master\neutron\plugins\         63170
neutron-master\neutron\services\         4052
neutron-master\neutron\tests\           35756

BR,
Germy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Igor Kalnitsky
 May be we can use tag per feature, for example zabbix

Tags are ok, but I still think that we should mention at least some
significant bugs. For example, if some feature doesn't work in some
deployment mode (e.g. simple, with ceilometer, etc.) we can at least
notify users so they don't even try.

Another opinions?


On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
mscherba...@mirantis.com wrote:
 if we point somewhere about knowing issues in those experimental features
 there are might be dozens of bugs.
 May be we can use tag per feature, for example zabbix, so it will be easy
 to search in LP all open bugs regarding Zabbix feature?

 On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s).

 +1, I'm totally agree with you - it makes no sense to count
 experimental bugs as HCF criteria.

  Any objections / other ideas?

 I think it would be great for customers if we point somewhere about
 knowing issues in those experimental features. IMHO, it should help
 them to understand what's wrong in case of errors and may prevent bug
 duplication in LP.


 On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  Hi all,
  what about using experimental tag for experimental features?
 
  After we implemented feature groups [1], we can divide our features and
  for
  complex features, or those which don't get enough QA resources in the
  dev
  cycle, we can declare as experimental. It would mean that those are not
  production ready features.
  Giving them live still in experimental mode allows early adopters to
  give a
  try and bring a feedback to the development team.
 
  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s). At the moment, we have Zabbix as experimental
  feature, and Patching of OpenStack [2] is under consideration: if today
  QA
  doesn't approve it to be as ready for production use, we have no other
  choice. All deadlines passed, and we need to get 5.1 finally out.
 
  Any objections / other ideas?
 
  [1]
 
  https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
  [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Daniel P. Berrange
On Thu, Sep 11, 2014 at 09:23:34AM +1000, Michael Still wrote:
 On Thu, Sep 11, 2014 at 8:11 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  a) Sorting out the common code is already accounted for in Dan B's original
  proposal -- it's a prerequisite for the split.
 
 Its a big prerequisite though. I think we're talking about a release
 worth of work to get that right. I don't object to us doing that work,
 but I think we need to be honest about how long its going to take. It
 will also make the core of nova less agile, as we'll find it hard to
 change the hypervisor driver interface over time. Do we really think
 its ready to be stable?

Yes, in my proposal I explicitly said we'd need to have Kilo
for all the prep work to clean up the virt API, before doing
the split in Lx.

The actual nova/virt/driver.py has been more stable over the
past few releases than I thought it would be. In terms of APIs,
we've not really modified existing APIs, mostly added new ones.
Where we did modify existing APIs, we could have easily taken
the approach of adding a new API in parallel and deprecating
the old entry point to maintain compat.

The big change which isn't visible directly is the conversion
of internal nova code to use objects. Finishing this conversion
is clearly a pre-requisite to any such split, since we'd need
to make sure all data passed into the nova virt APIs as parameters
is stable and well defined. 

 As an alternative approach...
 
 What if we pushed most of the code for a driver into a library?
 Imagine a library which controls the low level operations of a
 hypervisor -- create a vm, attach a NIC, etc. Then the driver would
 become a shim around that which was relatively thin, but owned the
 interface into the nova core. The driver handles the nova specific
 things like knowing how to create a config drive, or how to
 orchestrate with cinder, but hands over all the hypervisor operations
 to the library. If we found a bug in the library we just pin our
 dependancy on the version we know works whilst we fix things.
 
 In fact, the driver inside nova could be a relatively generic library
 driver, and we could have multiple implementations of the library,
 one for each hypervisor.

I don't think that particularly solves the problem, particularly
the ones you are most concerned about above of API stability. The
naive impl of any library for the virt driver would pretty much
mirror the nova virt API. The virt driver impls would thus have to
do the job of taking the Nova objects passed in as parameters and
turning them into something stable to pass to the library. Except
now, instead of us only having to figure out a stable API in one
place, every single driver has to reinvent the wheel defining its
own stable interface and objects. I'd also be concerned that ongoing
work on drivers is still going to require a lot of patches to Nova
to update the shims all the time, so we're still going to contend
for resources fairly heavily.

  b) The conflict Dan is speaking of is around the current situation where we
  have a limited core review team bandwidth and we have to pick and choose
  which virt driver-specific features we will review. This leads to bad
  feelings and conflict.
 
 The way this worked in the past is we had cores who were subject
 matter experts in various parts of the code -- there is a clear set of
 cores who get xen or libvirt for example and I feel like those
 drivers get reasonable review times. What's happened though is that
 we've added a bunch of drivers without adding subject matter experts
 to core to cover those drivers. Those newer drivers therefore have a
 harder time getting things reviewed and approved.

FYI, for Juno at least I really don't consider that even the libvirt
driver got acceptable review times in any sense. The pain of waiting
for reviews in libvirt code I've submitted this cycle is what prompted
me to start this thread. All the virt drivers are suffering way more
than they should be, but those without core team representation suffer
to an even greater degree.  And this is ignoring the point Jay and I
were making about how the use of a single team means that there is
always contention for feature approval, so much work gets cut right
at the start even if maintainers of that area felt it was valuable
and worth taking.

  c) It's the impact to the CI and testing load that I see being the biggest
  benefit to the split-out driver repos. Patches proposed to the XenAPI driver
  shouldn't have the Hyper-V CI tests run against the patch. Likewise, running
  libvirt unit tests in the VMWare driver repo doesn't make a whole lot of
  sense, and all of these tests add a not-insignificant load to the overall
  upstream and external CI systems. The long wait time for tests to come back
  means contributors get frustrated, since many reviewers tend to wait until
  Jenkins returns some result before they review. All of this leads to
  increased conflict that would be somewhat ameliorated by 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Daniel P. Berrange
On Wed, Sep 10, 2014 at 07:35:05PM -0700, Armando M. wrote:
 Hi,
 
 I devoured this thread, so much it was interesting and full of
 insights. It's not news that we've been pondering about this in the
 Neutron project for the past and existing cycle or so.
 
 Likely, this effort is going to take more than two cycles, and would
 require a very focused team of people working closely together to
 address this (most likely the core team members plus a few other folks
 interested).
 
 One question I was unable to get a clear answer was: what happens to
 existing/new bug fixes and features? Would the codebase go in lockdown
 mode, i.e. not accepting anything else that isn't specifically
 targeting this objective? Just using NFV as an example, I can't
 imagine having changes supporting NFV still being reviewed and merged
 while this process takes place...it would be like shooting at a moving
 target! If we did go into lockdown mode, what happens to all the
 corporate-backed agendas that aim at delivering new value to
 OpenStack?

I don't think it is credible to say we'd go into lockdown refusing
all other feature proposals, precisely for the kind of reasons you
mention. We have to recognise that people will want to continue to
contribute stuff and that's fine in general. The primary impact will
be around prioritization of work. eg in the event of contention for
attention / approval, work on refactoring would be given priority
over other feature work. I'd expect that we'd still accept a
reasonable amount of other feature work, because the mythical man-month
paradigm means you can't put every contributor and reviewer to work on
the same refactoring problem at once.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Daniel P. Berrange
On Wed, Sep 10, 2014 at 12:41:44PM -0700, Vishvananda Ishaya wrote:
 
 On Sep 5, 2014, at 4:12 AM, Sean Dague s...@dague.net wrote:
 
  On 09/05/2014 06:40 AM, Nikola Đipanov wrote:
  
  
  Just some things to think about with regards to the whole idea, by no
  means exhaustive.
  
  So maybe the better question is: what are the top sources of technical
  debt in Nova that we need to address? And if we did, everyone would be
  more sane, and feel less burnt.
  
  Maybe the drivers are the worst debt, and jettisoning them makes them
  someone else's problem, so that helps some. I'm not entirely convinced
  right now.
  
  I think Cells represents a lot of debt right now. It doesn't fully work
  with the rest of Nova, and produces a ton of extra code paths special
  cased for the cells path.
  
  The Scheduler has a ton of debt as has been pointed out by the efforts
  in and around Gannt. The focus has been on the split, but realistically
  I'm with Jay is that we should focus on the debt, and exposing a REST
  interface in Nova.
  
  What about the Nova objects transition? That continues to be slow
  because it's basically Dan (with a few other helpers from time to time).
  Would it be helpful if we did an all hands on deck transition of the
  rest of Nova for K1 and just get it done? Would be nice to have the bulk
  of Nova core working on one thing like this and actually be in shared
  context with everyone else for a while.
 
 In my mind, splitting helps with all of these things. A lot of the cleanup
 related work is completely delayed because the review queue starts to seem
 like an insurmountable hurdle. There are various cleanups needed in the
 drivers as well but they are not progressing due to the glacial pace we
 are moving at right now. Some examples: Vmware spawn refactor, Hyper-v bug
 fixes, Libvirt resize/migrate (this is still using ssh to copy data!)
 
 People need smaller areas of work. And they need a sense of pride and
 ownership of the things that they work on. In my mind that is the best
 way to ensure success.

I do like to look at past experience for guidance, and with Nova we have
had a history of splitting out pieces of code, and I think it is fair to
say that all those splits have been very successful for both sides (the
new project and Nova). E.g. if we look at the size and scope of the cinder
project and team today, I don't think it could ever have grown to that
scale if it had remained part of Nova. Splitting it out unleashed its
latent potential for success.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-11 Thread Kevin Benton
This has been brought up several times already and I believe is going to be
discussed at the Kilo summit.

I agree that reviewing third party patches eats community time. However,
claiming that the community pays 46% of its energy to maintain
vendor-specific code doesn't make any sense. LOC in the repo has very
little to do with ongoing required maintenance. Assuming the APIs for the
plugins stay consistent, there should be few 'maintenance' changes required
to a plugin once it's in the tree. If there are that many changes to
plugins just to keep them operational, that means Neutron is far too
unstable to support drivers living outside of the tree anyway.

On a related note, if we are going to pull plugins/drivers out of Neutron,
I think all of them should be removed, including the OVS and LinuxBridge
ones. There is no reason for them to be there if Neutron has stable enough
internal APIs to eject the 3rd party plugins from the repo. They should be
able to live in a separate neutron-opensource-drivers repo or something
along those lines. This will free up significant amounts of
developer/reviewer cycles for neutron to work on the API refactor, task
based workflows, performance improvements for the DB operations, etc.

If the open source drivers stay in the tree and the others are removed,
there is little incentive to keep the internal APIs stable and 3rd party
drivers sitting outside of the tree will break on every refactor or data
structure change. If that's the way we want to treat external driver
developers, let's be explicit about it and just post warnings that 3rd
party drivers can break at any point and that the onus is on the external
developers to learn what changed and react to it. At some point they will
stop bothering with Neutron completely in their deployments and mimic its
public API.

A clear separation of the open source drivers/plugins and core Neutron
would give a much better model for 3rd party driver developers to follow
and would enforce a stable internal API in the Neutron core.



On Thu, Sep 11, 2014 at 1:54 AM, Germy Lure germy.l...@gmail.com wrote:

 Hi stackers,

 According to my statistics(J2), the LOC of vendors' plugin and driver is
 about 102K, while the whole under neutron is 220K.
 That is to say the community has paid and is paying over 46% energy to
 maintain vendors' code. If we take mails, bugs,
 BPs  and so on into consideration, this percentage will be more.

 Most of these codes are just plugins and drivers implementing almost  the
 same functions. Every vendor submits a plugin,
 and the community only do the same thing, repeat and repeat. Meaningless.I
 think it's time to move them out.
 Let's focus on improving those exist but still weak features, on
 introducing important and interesting new features.

 My suggestions now:
 1.monopolized plugins
   1)The community only standards NB API and keeps built-ins, such as ML2,
 OVS and Linux bridge plugins.
   2)Vendors maintain their plugins locally.
   3)Users get neutron from community and plugin from some vendor on demand.
 2.service plugins
   1)The community standards SB API and keeps open source driver(iptables,
 openSwan and etc.) as built-in.
   2)Vendors only provide drivers not plugin. And those drivers also need
 not deliver to community.
   3)Like above, Users can get code on demand from vendors or just use open
 source.
 3.ML2 plugin
   1)Like service and monopolized plugin, the community just keep open
 source implementations as built-in.
   2)L2-population should be kept.

 I am very happy to discuss this further.

 vendors' code stat. table(excluding built-in plugins and drivers)
 
 Path Size
 neutron-master\neutron\plugins\63170
 neutron-master\neutron\services\ 4052
 neutron-master\neutron\tests\ 35756

 BR,
 Germy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] master access control - future work

2014-09-11 Thread Evgeniy L
Hi Lukasz,

Regarding 'Node agent authorization', do you have some ideas about how it
could be done?
For me it looks really complicated, because we don't upgrade agents on
slave nodes and
I'm not sure if we will be able to do so in the near future.

Thanks,

On Tue, Sep 9, 2014 at 1:50 PM, Lukasz Oles lo...@mirantis.com wrote:

 Dear Fuelers,

 I have some ideas and questions to share regarding Fuel Master access
 control.

 During the 5.1 cycle we made some non-optimal decisions which we have to fix.
 The following blueprint describes required changes:


 https://blueprints.launchpad.net/fuel/+spec/access-control-master-node-improvments

 The next step to improve security is to introduce secure connection using
 HTTPS, it is described here:

 https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints

 And now, there is question about next stages from original blueprint:

 https://blueprints.launchpad.net/fuel/+spec/access-control-master-node

 For example, from stage 3:
 - Node agent authorization, which will increase security. Currently, any
 one can change node data.
 What do you think do we need it now?

 Please read and comment first two blueprints.

 --
 Łukasz Oleś

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.0.0: A console interface to Gerrit

2014-09-11 Thread Daniele Pizzolli
James E. Blair wrote:

[]

 We write code in a terminal.  We read logs in a terminal.  We debug code
 in a terminal.  We commit in a terminal.  You know what's next.

[]

Hello James,

thanks for the announcement and for gertty; I am new to OpenStack, and
this tool is really interesting to me.  In fact I am still searching
for tools that allow smart interaction with the OpenStack
infrastructure.

Since it was not on the wiki I just added it:
https://wiki.openstack.org/wiki/ReviewWorkflowTips

I wish something similar existed for emacs, but I did not find anything
searching around.

My main purpose right now is to broadly follow the review process for
some projects, and gertty seems to be the best option for this.

Last point: why not add an explicit reference to the git repo in
the page shown on PyPI?  The last line referencing the
CONTRIBUTING.rst file without a reference to the repo forces the
reader to open a search engine.  I know that there is a little link in
the middle of the text on 'source distribution' but it is out of
context and can easily go unnoticed.

Best,
Daniele

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Error in deploying ironicon Ubuntu 12.04

2014-09-11 Thread Peeyush

Hi all,

I have been trying to deploy OpenStack Ironic on an Ubuntu 12.04 VM.
I encountered the following error:

2014-09-11 10:08:11.166 | Reading package lists...
2014-09-11 10:08:11.471 | Building dependency tree...
2014-09-11 10:08:11.475 | Reading state information...
2014-09-11 10:08:11.610 | E: Unable to locate package docker.io
2014-09-11 10:08:11.610 | E: Couldn't find any package by regex 'docker.io'
2014-09-11 10:08:11.611 | + exit_trap
2014-09-11 10:08:11.612 | + local r=100
2014-09-11 10:08:11.612 | ++ jobs -p
2014-09-11 10:08:11.612 | + jobs=
2014-09-11 10:08:11.612 | + [[ -n '' ]]
2014-09-11 10:08:11.612 | + kill_spinner
2014-09-11 10:08:11.613 | + '[' '!' -z '' ']'
2014-09-11 10:08:11.613 | + [[ 100 -ne 0 ]]
2014-09-11 10:08:11.613 | + echo 'Error on exit'
2014-09-11 10:08:11.613 | Error on exit
2014-09-11 10:08:11.613 | + [[ -z /opt/stack ]]
2014-09-11 10:08:11.613 | + ./tools/worlddump.py -d /opt/stack
2014-09-11 10:08:11.655 | + exit 100

I tried to make it work on a separate machine, but got the same error.
I understand that it could be because the script is looking for the
docker.io package, but I guess only the docker package is available.
I tried to install docker.io, but couldn't find it.

Can you please help me out to resolve this?

Thanks,

--
Peeyush Gupta
gpeey...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Nikolay Markov
Probably even an experimental feature should at least pretend to be
working, or it shouldn't be publicly announced. But I think
it's important to describe the limitations of these features (or mark
some of them as untested), and I think a list of known issues with
links to the most important bugs is a good approach. And tags will just
make things simpler.

On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote:
 Maybe we can use a tag per feature, for example zabbix

 Tags are ok, but I still think that we can mention at least some
 significant bugs. For example, if some feature doesn't work in some
 deployment mode (e.g. simple, with ceilometer, etc.) we can at least
 notify users so they don't even try.

 Another opinions?


 On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  if we point somewhere about known issues in those experimental features
  there might be dozens of bugs.
  Maybe we can use a tag per feature, for example zabbix, so it will be easy
  to search LP for all open bugs regarding the Zabbix feature?

 On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s).

  +1, I totally agree with you - it makes no sense to count
 experimental bugs as HCF criteria.

  Any objections / other ideas?

  I think it would be great for customers if we point somewhere to the
  known issues in those experimental features. IMHO, it should help
  them to understand what's wrong in case of errors and may prevent bug
  duplication in LP.


 On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  Hi all,
   what about using an experimental tag for experimental features?
 
   Now that we have implemented feature groups [1], we can divide our
   features; complex features, or those which don't get enough QA
   resources in the dev cycle, can be declared experimental. That would
   mean those are not production-ready features.
   Shipping them in experimental mode still allows early adopters to
   give them a try and bring feedback to the development team.
 
   I think we should not count bugs for HCF criteria if they affect only
   experimental feature(s). At the moment we have Zabbix as an
   experimental feature, and Patching of OpenStack [2] is under
   consideration: if today QA doesn't approve it as ready for production
   use, we have no other choice. All deadlines have passed, and we need
   to finally get 5.1 out.
 
  Any objections / other ideas?
 
  [1]
 
  https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
  [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Best regards,
Nick Markov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-09-11 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow,
Friday, at 0000 UTC.

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [LBaaS] Packet flow between instances using a load balancer

2014-09-11 Thread Maish Saidel-Keesing
I am trying to find out how traffic currently flows when sent to an
instance through a LB.

Say I have the following scenario:


RHA1 ---+                          +--- RHB1
        +--- LB_A ------ LB_B ----+
RHA2 ---+                          +--- RHB2


A packet is sent from RHA1 to LB_B (with a final destination of course
being either RHB1 or RHB2)

I have a few questions about the flow.

1. When the packet is received by RHB1 - what is the source and
destination address?
 Is the source RHA1 or LB_B?
 Is the destination LB_B or RHB1?
2. When is the packet modified (if it is)? And how?
3. Traffic in the opposite direction. RHB1 - RHA1. What is the path
that will be taken?

The catalyst for this question was how to control traffic that is coming
into instances through a load balancer with security groups. At the
moment you can either define a source IP/range or a security group.
There is no way to add a LB to a security group (at least not that I
know of).

If the source IP that the packet is identified with is the load
balancer's (and I suspect it is), then there is no way to enforce the
traffic flow.
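
If that is the case, the closest approximation I can see is the source
IP/range option mentioned above, keyed on the subnet that holds the LB's
address. A rough sketch (CIDR and group name are hypothetical):

  neutron security-group-rule-create --direction ingress \
    --protocol tcp --port-range-min 80 --port-range-max 80 \
    --remote-ip-prefix 10.0.0.0/24 members-secgroup

That of course opens the members up to everything on the LB's subnet,
not just the LB itself.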

How would you all deal with this scenario and controlling the traffic flow?

Any help / thoughts is appreciated!

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Angus Salkeld
On 10/09/2014 8:37 PM, Julien Danjou jul...@danjou.info wrote:

 Hi,

 Nejc has been doing a great work and has been very helpful during the
 Juno cycle and his help is very valuable.

 I'd like to propose that we add Nejc Saje to the ceilometer-core group.

 Please, dear ceilometer-core members, reply with your votes!

+1


 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Error in deploying ironicon Ubuntu 12.04

2014-09-11 Thread Lucas Alvares Gomes
Oh, it's because Precise doesn't have the docker.io package[1] (nor docker).

AFAIK the -infra team is now using Trusty in gate, so it won't be a
problem. But if you think that we should still support Ironic DevStack
with Precise, please file a bug about it so the Ironic team can take a
look at it.

[1] 
http://packages.ubuntu.com/search?suite=trustysection=allarch=anykeywords=docker.iosearchon=names
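
For anyone debugging this, you can confirm what a given release ships
straight from the node, e.g.:

  apt-cache policy docker.io

which on Precise should report that it is unable to locate the package.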

Cheers,
Lucas

On Thu, Sep 11, 2014 at 11:12 AM, Peeyush gpeey...@linux.vnet.ibm.com wrote:
 Hi all,

 I have been trying to deploy Openstack-ironic on a Ubuntu 12.04 VM.
 I encountered the following error:

 2014-09-11 10:08:11.166 | Reading package lists...
 2014-09-11 10:08:11.471 | Building dependency tree...
 2014-09-11 10:08:11.475 | Reading state information...
 2014-09-11 10:08:11.610 | E: Unable to locate package docker.io
 2014-09-11 10:08:11.610 | E: Couldn't find any package by regex 'docker.io'
 2014-09-11 10:08:11.611 | + exit_trap
 2014-09-11 10:08:11.612 | + local r=100
 2014-09-11 10:08:11.612 | ++ jobs -p
 2014-09-11 10:08:11.612 | + jobs=
 2014-09-11 10:08:11.612 | + [[ -n '' ]]
 2014-09-11 10:08:11.612 | + kill_spinner
 2014-09-11 10:08:11.613 | + '[' '!' -z '' ']'
 2014-09-11 10:08:11.613 | + [[ 100 -ne 0 ]]
 2014-09-11 10:08:11.613 | + echo 'Error on exit'
 2014-09-11 10:08:11.613 | Error on exit
 2014-09-11 10:08:11.613 | + [[ -z /opt/stack ]]
 2014-09-11 10:08:11.613 | + ./tools/worlddump.py -d /opt/stack
 2014-09-11 10:08:11.655 | + exit 100

 I tried to make it work on a separate machine, but got the same error.
 I understand that it could be because the script is looking for the
 docker.io package, but I guess only the docker package is available.
 I tried to install docker.io, but couldn't find it.

 Can you please help me out to resolve this?

 Thanks,

 --
 Peeyush Gupta
 gpeey...@linux.vnet.ibm.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-11 Thread Sebastian Kalinowski
I have some topics for [1] that I want to discuss:

1) Should we allow users to turn SSL on/off for Fuel master?
I think we should, since some users may not care about SSL and
enabling it will just make them unhappy (like warnings in browsers,
expiring certs).

2) Will we allow users (in first iteration) to use their own certs?
If we will (which I think we should, and other people also seem to
share this point of view), we have some options for that:
 A) Add information to the docs on where to upload your own certificate on
the master node (no UI) - less work, but requires a little more action from
users
 B) A simple form in the UI where the user can paste his certs -
a little bit more work, but user-friendly
Are there any reasons we shouldn't do that?

3) How will we manage cert expiration?
Stanislaw proposed that we show the user a notification about cert
expiration. We could check for that in a cron job.
I think that we should also allow the user to generate a new cert in Fuel
if the old one expires.
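
The cron job itself can be a trivial openssl check; a sketch, with a
hypothetical certificate path, that warns 30 days ahead:

  #!/bin/sh
  CERT=/etc/pki/fuel/master.crt  # hypothetical location
  # exit status is non-zero if the cert expires within 30 days
  if ! openssl x509 -checkend 2592000 -noout -in "$CERT"; then
      echo "Fuel master certificate expires within 30 days" | logger -t ssl-check
      # here we would also create a nailgun notification for the UI
  fi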

I'll also remove the part about adding cert validation in fuel agent since it
would require a significant amount of work and it's not essential for the
first iteration.

Best,
Sebastian


[1] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-11 Thread Flavio Percoco
On 09/10/2014 03:45 PM, Gordon Sim wrote:
 On 09/10/2014 01:51 PM, Thierry Carrez wrote:
 I think we do need, as Samuel puts it, some sort of durable
 message-broker/queue-server thing. It's a basic application building
 block. Some claim it's THE basic application building block, more useful
 than database provisioning. It's definitely a layer above pure IaaS, so
 if we end up splitting OpenStack into layers this clearly won't be in
 the inner one. But I think IaaS+ basic application building blocks
 belong in OpenStack one way or another. That's the reason I supported
 Designate (everyone needs DNS) and Trove (everyone needs DBs).

 With that said, I think yesterday there was a concern that Zaqar might
 not fill the some sort of durable message-broker/queue-server thing
 role well. The argument goes something like: if it was a queue-server
 then it should actually be built on top of Rabbit; if it was a
 message-broker it should be built on top of postfix/dovecot; the current
 architecture is only justified because it's something in between, so
 it's broken.
 
 What is the distinction between a message broker and a queue server? To
 me those terms both imply something broadly similar (message broker
 perhaps being a little bit more generic). I could see Zaqar perhaps as
 somewhere between messaging and data-storage.

I agree with Gordon here. I really don't know how to say this without
creating more confusion. Zaqar is a messaging service. Messages are the
most important entity in Zaqar. This, however, does not prevent anyone
from using Zaqar as a queue. It has the required semantics: it guarantees
FIFO and other queuing-specific patterns. This doesn't mean Zaqar is trying
to do something outside its scope; it comes for free.
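
To make the "comes for free" part concrete, the same HTTP primitives serve
both styles. Roughly (quoting the v1 API from memory, so treat the exact
paths and fields as approximate):

  # publish a message
  POST /v1/queues/demo/messages
  [{"ttl": 300, "body": {"event": "backup.start"}}]

  # consume queue-style: claim the oldest messages, process, delete
  POST /v1/queues/demo/claims
  {"ttl": 300, "grace": 60}

Nothing queue-specific had to be bolted on for the second call to work.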

Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
to optimize Zaqar for delivering messages and supporting different
messaging patterns.

Should we remove all the semantics that allow people to use Zaqar as a
queue service? I don't think so either. Again, the semantics are there
because Zaqar is using them to do its job. Whether other folks may/may
not use Zaqar as a queue service is out of our control.

This doesn't mean the project is broken.


 There are of course quite a lot of durable message-broker/queue-server
 things around already. I understood Zaqar to have been created to
 address perceived limitations in existing solutions (e.g. requiring less
 'babysitting', being 'designed for the cloud' etc). All solutions
 certainly have their limitations. Zaqar has limitations with respect to
 existing solutions also.

Agreed, again. Zaqar has a long way ahead but a clear goal. New features
will be proposed by the community once, hopefully, it's adopted by more
and more users. The project will obviously evaluate them all and make
sure that whatever gets implemented fits the project's goals.

 So while I agree that there is great value in a basic building block for
 'messaging as a service' I think the ideal solution would allow
 different variations, tailored to different patterns of use with a
 common API for provisioning, managing and monitoring coupled with
 support for standard protocols.

I agree there's lots of space for other messaging-related services. One
of the ideas that has come up quite a few times is having a service that
provisions message brokers (or other messaging technologies). However, I
believe that's a completely different service and it's out of the scope
of what Zaqar wants to do.

That said, I'm really looking forward to see a project like that emerge
in the future.

Thanks a lot for your feedback,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL in Fuel

2014-09-11 Thread Evgeniy L
Hi,

We definitely need a person who will help with design for the feature.

Here is the list of open questions:

1. UI design for certificates uploading
2. CLI
3. diagnostic snapshot sanitising
4. REST API/DB design
5. background tasks for nailgun (?)
6. do we need a separate container for certificate signing? I don't think
we need one if it's not a separate service. If it's a command-line tool,
it can be installed in the nailgun container (in case we implement
background tasks for nailgun) or in the mcollective container.

Thanks,

On Tue, Sep 9, 2014 at 2:09 PM, Guillaume Thouvenin thouv...@gmail.com
wrote:

 I think that the management of certificates should be discussed in the
 ca-deployment blueprint [3]

 We had some discussions and it seems that one idea is to use a docker
 container as the root authority. By doing this we should be able to sign
 certificates from Nailgun and distribute them to the
 corresponding controllers. So one way to see this is:

 1) a new environment is created
 2) Nailgun generates a key pair that will be used for the new env.
 3) Nailgun sends a CSR that contains the VIP used by the new environment
 and signed by the newly created private key to the docker root CA.
 4) the docker CA will send back a signed certificate.
 5) Nailgun distributes this signed certificate and the env private key to
 the corresponding controller through mcollective.
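
 Expressed with plain openssl, steps 2-4 would look roughly like this (all
 file names are hypothetical, and the signing would happen inside the CA
 container):

   # 2-3) per-environment key pair and a CSR carrying the VIP
   openssl req -new -nodes -newkey rsa:2048 \
       -keyout env-42.key -out env-42.csr -subj /CN=ENV_VIP

   # 4) the root CA signs the CSR and returns the certificate
   openssl x509 -req -in env-42.csr -CA rootCA.crt -CAkey rootCA.key \
       -CAcreateserial -out env-42.crt -days 365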

 It's not clear to me how Nailgun will interact with the docker CA, and I
 also have some concerns about the storage of the different environments'
 private keys, but that is the idea...
 If needed I can start to fill in the ca-deployment blueprint according to
 this scenario, but I guess that we need to approve the BP [3] first.

 So I think that we need to start on [3]. As this is required both for
 OpenStack public endpoint SSL and for Fuel SSL, it may be quicker to have
 a first stage where a self-signed certificate is managed from nailgun and
 a second stage with the docker CA...

 Best regards,
 Guillaume

 [3] https://blueprints.launchpad.net/fuel/+spec/ca-deployment

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/10/2014 06:11 PM, Jay Pipes wrote:
 
 
 On 09/10/2014 05:55 PM, Chris Friesen wrote:
 On 09/10/2014 02:44 AM, Daniel P. Berrange wrote:
 On Tue, Sep 09, 2014 at 05:14:43PM -0700, Stefano Maffulli wrote:

 I have the impression this idea has been circling around for a while
 but
 for some reason or another (like lack of capabilities in gerrit and
 other reasons) we never tried to implement it. Maybe it's time to think
 about an implementation. We have been thinking about mentors
 https://wiki.openstack.org/wiki/Mentors, maybe that's a way to go?
 Sub-team with +1.5 scoring capabilities?

 I think that setting up subteams is neccessary to stop us imploding but
 I don't think it is enough. As long as we have one repo we're forever
 going to have conflict and contention in deciding which features to
 accept,
 which is a big factor in problems today.

 If each hypervisor team mostly only modifies their own code, why would
 there be conflict?

 As I see it, the only causes for conflict would be in the shared code,
 and you'd still need to sort out the issues with the shared code even if
 you split out the individual drivers into separate repos.
 
 a) Sorting out the common code is already accounted for in Dan B's
 original proposal -- it's a prerequisite for the split.
 
 b) The conflict Dan is speaking of is around the current situation where
 we have a limited core review team bandwidth and we have to pick and
 choose which virt driver-specific features we will review. This leads to
 bad feelings and conflict.
 
 c) It's the impact to the CI and testing load that I see being the
 biggest benefit to the split-out driver repos. Patches proposed to the
 XenAPI driver shouldn't have the Hyper-V CI tests run against the patch.
 Likewise, running libvirt unit tests in the VMWare driver repo doesn't
 make a whole lot of sense, and all of these tests add a
 not-insignificant load to the overall upstream and external CI systems.
 The long wait time for tests to come back means contributors get
 frustrated, since many reviewers tend to wait until Jenkins returns some
 result before they review. All of this leads to increased conflict that
 would be somewhat ameliorated by having separate code repos for the virt
 drivers.

So I haven't done the math recently, what do you expect the time savings
to be here? Because unit tests aren't run by 3rd party today.

On my fancy desktop (test time including testr overhead):
 * tox -epy27: 330s
 * tox -epy27 libvirt: 18s
 * tox -epy27 vmware: 9s
 * tox -epy27 xen: 18s
 * tox -epy27 hyperv: 13s

The testr overhead is about 8s for discovery (yes, I do realize that's
probably more than it should be, that's a different story), so we'd be
looking at a reduction of about 10% of the total run time of unit tests
if we don't have the virt drivers in tree. That's not very much.

The only reason we're asking 3rd party CI folks to test everything... is
policy. I don't think it's a big deal to only require them to test
changes that hit their driver. Just decide that.

The conflict isn't going to go away; it's going to now exist at
integration time, where there isn't a single core team to work through it
in a holistic way. That is a hugely more painful place to pay it.

...

Right now the top of the gate is 26 hrs.

One of the reasons that that continues to grow and get worse over time
is related to the total # of git trees that we have to integrate that
don't have common core teams across them that understand their
interactions correctly. I firmly believe that anything that creates more
git trees that we have to integrate after the fact makes that worse. I
believe the 10+ oslo lib trees have made this worse. I believe
continuing to add new integrated projects has made this worse. And I
believe that a virt driver split by any project will make it worse, if
we expect to test that code upstream.

The docker driver in stackforge has been a success in merging docker
code. It's been much less of a success in terms of making it easy for
anyone to run it, use it, or for us to get on common ground for a
containers service moving forward.

The bulk of the folks that would be on the driver teams don't really
look at failures that surface in the gate. So I have a lot of trepidation
about claims that this will make integration better, coming from folks
that don't spend a lot of time looking at and helping with our current
integration.

...

That being said, I'm entirely pro cleaning up the virt interfaces as a
matter of paying down debt. That was a blueprint when I first joined the
project, that died on the vine somewhere. I think more common
infrastructure for virt drivers would be a good thing, and make the code
a lot more understandable. And as it's the prereq for any of this, so
let's do it.

Why don't we start with let's clean up the virt interface and make it
more sane, as I don't think there is any disagreement there. If it's
going to take a cycle, it's going to take a cycle anyway (it will
probably take 2 cycles, 

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Eoghan Glynn


 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on this
 thread ...

Here's my list of high-level cycle goals, for consideration ...


1. Address our usability debts

With some justification, we've been saddled with the perception
of not caring enough about the plight of users and operators. The
frustrating thing is that much of this is very fixable, *if* we take
time out from the headlong rush to add features. Achievable things
like documentation completeness, API consistency, CLI intuitiveness,
logging standardization, would all go a long way here.

These things are of course all not beyond the wit of man, but we
need to take the time out to actually do them. This may involve
a milestone, or even longer, where we accept that the rate of
feature addition will be deliberately slowed down. 


2. Address the drags on our development velocity

Despite the Trojan efforts of the QA team, the periodic brownouts
in the gate are having a serious impact on our velocity. Over the
past few cycles, we've seen the turnaround time for patch check/
verification spike up unacceptably long multiple times, mostly
around the milestones.

Whatever we can do to smoothen out these spikes, whether it be
moving much of the Tempest coverage into the project trees, or
switching focus onto post-merge verification as suggested by
Sean on this thread, or even considering some more left-field
approaches such as staggered milestones, we need to grasp this
nettle as a matter of urgency.

Further back in the pipeline, the effort required to actually get
something shepherded through review is steadily growing. To the
point that we need to consider some radical approaches that
retain the best of our self-organizing model, while setting more
reasonable and reliable expectations for patch authors, and making
it more likely that narrow domain expertise is available to review
their contributions in a timely way. For the larger projects, this
is likely to mean something different (along the lines of splits
or sub-domains) than it does for the smaller projects.


3. Address the long-running what's in and what's out questions

The way some of the discussions about integration and incubation 
played out this cycle has made me sad. Not all of these discussions
have been fully supported by the facts on the ground IMO. And not
all of the issues that have been held up as justifications for
whatever course of exclusion or inclusion would IMO actually be
solved in that way.

I think we need to move the discussion around a new concept of
layering, or redefining what it means to be in the tent, to a
more constructive and collaborative place than heretofore.


4. Address the fuzziness in cross-service interactions

In a semi-organic way, we've gone and built ourselves a big ol'
service-oriented architecture. But without necessarily always
following the strong contracts, loose coupling, discoverability,
and autonomy that a SOA approach implies.

We need to take the time to go back and pay down some of the debt
that has accreted over multiple cycles around these
cross-service interactions. The most pressing of these would
include finally biting the bullet on the oft-proposed but never
delivered-upon notion of stabilizing notifications behind a
well-defined contract. Also, the more recently advocated notions
of moving away from coarse-grained versioning of the inter-service
APIs, and supporting better introspection and discovery of
capabilities.

 by end of day Wednesday, September 10th.

Oh, yeah, and impose fewer arbitrary deadlines ;)

Cheers,
Eoghan

 After which time we can
 begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.
 
 
 best,
 Joe Gordon
 
 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
 [1]
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 05:18 AM, Daniel P. Berrange wrote:
 On Thu, Sep 11, 2014 at 09:23:34AM +1000, Michael Still wrote:
 On Thu, Sep 11, 2014 at 8:11 AM, Jay Pipes jaypi...@gmail.com wrote:

 a) Sorting out the common code is already accounted for in Dan B's original
 proposal -- it's a prerequisite for the split.

 Its a big prerequisite though. I think we're talking about a release
 worth of work to get that right. I don't object to us doing that work,
 but I think we need to be honest about how long its going to take. It
 will also make the core of nova less agile, as we'll find it hard to
 change the hypervisor driver interface over time. Do we really think
 its ready to be stable?
 
 Yes, in my proposal I explicitly said we'd need all of Kilo
 for the prep work to clean up the virt API, and only do
 the split in Lx.
 
 The actual nova/virt/driver.py has been more stable over the
 past few releases than I thought it would be. In terms of APIs
 we've not really modified existing APIs, mostly added new ones.
 Where we did modify existing APIs, we could have easily taken
 the approach of adding a new API in parallel and deprecating
 the old entry point to maintain compat.
 
 The big change which isn't visible directly is the conversion
 of internal nova code to use objects. Finishing this conversion
 is clearly a pre-requisite to any such split, since we'd need
 to make sure all data passed into the nova virt APIs as parameters
 is stable and well defined.
 
 As an alternative approach...

 What if we pushed most of the code for a driver into a library?
 Imagine a library which controls the low level operations of a
 hypervisor -- create a vm, attach a NIC, etc. Then the driver would
 become a shim around that which was relatively thin, but owned the
 interface into the nova core. The driver handles the nova specific
 things like knowing how to create a config drive, or how to
 orchestrate with cinder, but hands over all the hypervisor operations
 to the library. If we found a bug in the library we just pin our
 dependancy on the version we know works whilst we fix things.

 In fact, the driver inside nova could be a relatively generic library
 driver, and we could have multiple implementations of the library,
 one for each hypervisor.
 
 I don't think that particularly solves the problem, particularly
 the ones you are most concerned about above of API stability. The
 naive impl of any library for the virt driver would pretty much
 mirror the nova virt API. The virt driver impls would thus have to
 do the job of taking the Nova objects passed in as parameters and
 turning them into something stable to pass to the library. Except
 now instead of us only having to figure out a stable API in one
 place, every single driver has to reinvent the wheel defining their
 own stable interface  objects. I'd also be concerned that ongoing
 work on drivers is still going to require alot of patches to Nova
 to update the shims all the time, so we're still going to contend
 on resource fairly highly.
 
 b) The conflict Dan is speaking of is around the current situation where we
 have a limited core review team bandwidth and we have to pick and choose
 which virt driver-specific features we will review. This leads to bad
 feelings and conflict.

 The way this worked in the past is we had cores who were subject
 matter experts in various parts of the code -- there is a clear set of
 cores who get xen or libivrt for example and I feel like those
 drivers get reasonable review times. What's happened though is that
 we've added a bunch of drivers without adding subject matter experts
 to core to cover those drivers. Those newer drivers therefore have a
 harder time getting things reviewed and approved.
 
 FYI, for Juno at least I really don't consider that even the libvirt
 driver got acceptable review times in any sense. The pain of waiting
 for reviews in libvirt code I've submitted this cycle is what prompted
 me to start this thread. All the virt drivers are suffering way more
 than they should be, but those without core team representation suffer
 to an even greater degree.  And this is ignoring the point Jay  I
 were making about how the use of a single team means that there is
 always contention for feature approval, so much work gets cut right
 at the start even if maintainers of that area felt it was valuable
 and worth taking.

I continue to not understand how N non-overlapping teams make this any
better. You have to pay the integration cost somewhere. Right now we're
trying to pay it one patch at a time. This model means the integration
units get much bigger, and with less common ground.

Look at how much active work in crossing core teams we've had to do to
make any real progress on the neutron replacing nova-network front. And
how slow that process is. I think you'll see that hugely show up here.

 c) It's the impact to the CI and testing load that I see being the biggest
 benefit to the split-out driver 

Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Sean Dague
On 09/10/2014 08:46 PM, Jamie Lennox wrote:
 
 - Original Message -
 From: Steven Hardy sha...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, September 11, 2014 1:55:49 AM
 Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
 tokens leads to overall OpenStack fragility

 On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
 Going through the untriaged Nova bugs, and there are a few on a similar
 pattern:

 Nova operation in progress takes a while
 Crosses keystone token expiration time
 Timeout thrown
 Operation fails
 Terrible 500 error sent back to user

 We actually have this exact problem in Heat, which I'm currently trying to
 solve:

 https://bugs.launchpad.net/heat/+bug/1306294

 Can you clarify, is the issue either:

 1. Create novaclient object with username/password
 2. Do series of operations via the client object which eventually fail
 after $n operations due to token expiry

 or:

 1. Create novaclient object with username/password
 2. Some really long operation which means token expires in the course of
 the service handling the request, blowing up and 500-ing

 If the former, then it does sound like a client, or usage-of-client bug,
 although note if you pass a *token* vs username/password (as is currently
 done for glance and heat in tempest, because we lack the code to get the
 token outside of the shell.py code..), there's nothing the client can do,
 because you can't request a new token with longer expiry with a token...

 However if the latter, then it seems like not really a client problem to
 solve, as it's hard to know what action to take if a request failed
 part-way through and thus things are in an unknown state.

 This issue is a hard problem, which can possibly be solved by
 switching to a trust scoped token (service impersonates the user), but then
 you're effectively bypassing token expiry via delegation which sits
 uncomfortably with me (despite the fact that we may have to do this in heat
 to solve the aforementioned bug)

 It seems like we should have a standard pattern that on token expiration
 the underlying code at least gives one retry to try to establish a new
 token to complete the flow, however as far as I can tell *no* clients do
 this.

 As has been mentioned, using sessions may be one solution to this, and
 AFAIK session support (where it doesn't already exist) is getting into
 various clients via the work being carried out to add support for v3
 keystone by David Hu:

 https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z

 I see patches for Heat (currently gating), Nova and Ironic.

 I know we had to add that into Tempest because tempest runs can exceed 1
 hr, and we want to avoid random fails just because we cross a token
 expiration boundary.

 I can't claim great experience with sessions yet, but AIUI you could do
 something like:

 from keystoneclient.auth.identity import v3
 from keystoneclient import session
 from keystoneclient.v3 import client

 auth = v3.Password(auth_url=OS_AUTH_URL,
                    username=USERNAME,
                    password=PASSWORD,
                    project_id=PROJECT,
                    user_domain_name='default')
 sess = session.Session(auth=auth)
 ks = client.Client(session=sess)

 And if you can pass the same session into the various clients tempest
 creates then the Password auth-plugin code takes care of reauthenticating
 if the token cached in the auth plugin object is expired, or nearly
 expired:

 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120

 So in the tempest case, it seems like it may be a case of migrating the
 code creating the clients to use sessions instead of passing a token or
 username/password into the client object?

 That's my understanding of it atm anyway, hopefully jamielennox will be along
 soon with more details :)

 Steve
 
 
 By clients here are you referring to the CLIs or the python libraries? 
 Implementation is at different points with each. 
 
 Sessions will handle automatically reauthenticating and retrying a request, 
 however it relies on the service throwing a 401 (Unauthorized) error. If a 
 service is returning a 500 (or a timeout?) then there isn't much that a 
 client can/should do for that because we can't assume that trying again with 
 a new token will solve anything. 
 
 At the moment we have keystoneclient, novaclient, cinderclient neutronclient 
 and then a number of the smaller projects with support for sessions. That 
 obviously doesn't mean that existing users of that code have transitioned to 
 the newer way though. David Hu has been working on using this code within the 
 existing CLIs. I have prototypes for at least nova to talk to neutron and 
 cinder which i'm waiting for Kilo to push. From there it should be easier to 
 do this for other services. 
 
 For service to 

Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Sean Dague
On 09/10/2014 11:55 AM, Steven Hardy wrote:
 On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
 Going through the untriaged Nova bugs, and there are a few on a similar
 pattern:

 Nova operation in progress takes a while
 Crosses keystone token expiration time
 Timeout thrown
 Operation fails
 Terrible 500 error sent back to user
 
 We actually have this exact problem in Heat, which I'm currently trying to
 solve:
 
 https://bugs.launchpad.net/heat/+bug/1306294
 
 Can you clarify, is the issue either:
 
 1. Create novaclient object with username/password
 2. Do series of operations via the client object which eventually fail
 after $n operations due to token expiry
 
 or:
 
 1. Create novaclient object with username/password
 2. Some really long operation which means token expires in the course of
 the service handling the request, blowing up and 500-ing

From what I can tell of the Nova bugs, both are issues. Honestly, it
would probably be really telling to set up a test env with 10s token
timeouts and see how badly it breaks. I expect that our expiration logic,
and how our components react to it, is actually a lot less coherent than
we believe.
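
That experiment should only need the token TTL cranked down in
keystone.conf (the option is in seconds; the default is an hour):

  [token]
  expiration = 10

then restart keystone and watch the fireworks.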

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Thierry Carrez
Sean Dague wrote:
 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.

Yes, that sounds like the logical next step. We can't split drivers
without first doing that anyway. I still think people need smaller
areas of work, as Vish eloquently put it. I still hope that refactoring
our test architecture will let us reach the same level of quality with
only a fraction of the tests being run at the gate, which should address
most of the harm you see in adding additional repositories. But I agree
there is little point in discussing splitting virt drivers (or anything
else, really) until the internal interface below that potential split is
fully cleaned up and it becomes an option.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Bringing back auto-abandon

2014-09-11 Thread Ryan Brown
On 09/10/2014 06:32 PM, James E. Blair wrote:
 James Polley j...@jamezpolley.com writes:
 Incidentally, that is the query in the Wayward Changes section of the
 Review Inbox dashboard (thanks Sean!); for nova, you can see it here:
 
   
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
 
 The key here is that there are a lot of changes in a lot of different
 states, and one query isn't going to do everything that everyone wants
 it to do.  Gerrit has a _very_ powerful query language that can actually
 help us make sense of all the changes we have in our system without
 externalizing the cost of that onto contributors in the form of
 forced-abandoning of changes.  Dashboards can help us share the
 knowledge of how to get the most out of it.
 
   https://review.openstack.org/Documentation/user-dashboards.html
   https://review.openstack.org/Documentation/user-search.html
 
 -Jim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Also if you don't feel existing dashboards scratch your project's
particular itch, there's always gerrit-dash-creator[1] to help you make
one that fits your needs.

[1]: https://github.com/stackforge/gerrit-dash-creator
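
The input is a small ini-style .dash file; a hypothetical minimal example
(section name and query invented for illustration):

  [dashboard]
  title = Nova Review Inbox
  foreach = project:openstack/nova status:open

  [section "Wayward Changes"]
  query = age:2d NOT label:Code-Review<=-1 NOT label:Code-Review>=1

The tool turns that into the encoded Gerrit dashboard URL for you.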

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-11 Thread Simon Pasquier
Hi,

On Thu, Sep 11, 2014 at 1:03 PM, Sebastian Kalinowski 
skalinow...@mirantis.com wrote:

 I have some topics for [1] that I want to discuss:

 1) Should we allow users to turn SSL on/off for Fuel master?
  I think we should, since some users may not care about SSL and
 enabling it will just make them unhappy (like warnings in browsers,
 expiring certs).


Definitely +1. I think that Tomasz mentioned somewhere that HTTP should be
kept as the default.


 2) Will we allow users (in first iteration) to use their own certs?
  If we will (which I think we should, and other people also seem to
 share this point of view), we have some options for that:
   A) Add information to the docs on where to upload your own certificate on
 master node (no UI) - less work, but requires a little more action from
 users
   B) A simple form in the UI where the user can paste his certs -
  a little bit more work, but user-friendly
 Are there any reasons we shouldn't do that?


Option A is enough. If there is enough time to implement option B, that's
cool but this should not be a blocker.


 3) How we will manage cert expiration?
  Stanislaw proposed that we should show the user a notification that will
  tell the user about cert expiration. We could check that in a cron job.
  I think that we should also allow the user to generate a new cert in Fuel
  if the old one expires.


As long as the user cannot upload a certificate, we don't need to care
about this point but it should be mentioned in the doc.
And to avoid this problem, Fuel should generate certificates that expire in
many years (e.g. 10 or more).
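
Generating such a certificate is a one-liner anyway; a sketch with
hypothetical file names:

  openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
      -keyout fuel-master.key -out fuel-master.crt \
      -subj /CN=FUEL_MASTER_ADDRESS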

BR

Simon


 I'll also remove part about adding cert validation in fuel agent since it
 would require a significant amount of work and it's not essential for first
 iteration.

 Best,
 Sebastian


 [1] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugable solution for running abstract commands on nodes

2014-09-11 Thread Evgeniy L
Hi,

In most cases, for plugin developers or fuel users it will be much
easier to just write the command they want to run on the nodes
instead of describing some abstract task which doesn't add
any information/logic and looks like unnecessary complexity.

But for complicated cases the user will have to write some code for tasklib.

Thanks,

On Wed, Sep 10, 2014 at 8:10 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hi,

 you described a transport mechanism for running commands based on facts; we
 have another one, which stores
 all business logic in nailgun and only provides the orchestrator with a set
 of tasks to execute. This is not a problem.

 I am talking about the API for the plugin writer/developer, and how to
 implement it to be more friendly.

 On Wed, Sep 10, 2014 at 6:46 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Hi,

 as for execution of arbitrary code across the OpenStack cluster - I was
 thinking of mcollective + fact filters:

 1) we need to start using mcollective facts [0] [2] - we don't
 use/configure this currently
 2) use mcollective execute_shell_command agent (or any other agent) with
 fact filter [1]

  So, for example, if we have an mcollective fact called node_roles:
 node_roles: compute ceph-osd

 Then we can execute shell cmd on all compute nodes like this:

 mco rpc execute_shell_command execute cmd=/some_script.sh -F
 node_role=/compute/

 Of course, we can use more complicated filters to run commands more
 precisely.

 [0]
 https://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
 [1]
 https://docs.puppetlabs.com/mcollective/reference/ui/filters.html#fact-filters
 [2] https://docs.puppetlabs.com/mcollective/reference/plugins/facts.html


 On Wed, Sep 10, 2014 at 6:04 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

  Some of you may know that there is ongoing work to achieve a kind of
  data-driven orchestration for Fuel. If this is new to you, please get
  familiar with the spec:

 https://review.openstack.org/#/c/113491/

 Knowing that running random command on nodes will be probably most
 usable type of
 orchestration extension, i want to discuss our solution for this problem.

 Plugin writer will need to do two things:

  1. Provide a custom task.yaml (I am using /etc/puppet/tasks, but this is
  completely configurable; we just need to reach agreement)

   /etc/puppet/tasks/echo/task.yaml

   with next content:

type: exec
cmd: echo 1

 2. Provide control plane with orchestration metadata

 /etc/fuel/tasks/echo_task.yaml

  controller:
    - task: echo
      description: Simple echo for you
      priority: 1000
  compute:
    - task: echo
      description: Simple echo for you
      priority: 1000

 This is done in order to separate concerns of orchestration logic and
 tasks.

 From plugin writer perspective it is far more usable to provide exact
 command in orchestration metadata itself, like:

 /etc/fuel/tasks/echo_task.yaml

  controller:
    - task: echo
      description: Simple echo for you
      priority: 1000
      cmd: echo 1
      type: exec

  compute:
    - task: echo
      description: Simple echo for you
      priority: 1000
      cmd: echo 1
      type: exec

  I would prefer to stick to the first, because there are benefits to using
  one interface between all task executors (puppet, exec, maybe chef), which
  will improve the debugging and development process.

  So my question is: is the first good enough? Or is the second an
  essential type of plugin to support?

 If you want additional implementation details check:
 https://review.openstack.org/#/c/118311/
 https://review.openstack.org/#/c/113226/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] rss for specs

2014-09-11 Thread Sergey Lukjanov
Hi folks,

you can subscribe to the specs RSS feed now -
http://specs.openstack.org/openstack/sahara-specs/rss

Thanks to Doug Hellmann for implementing it.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Triage Bug Day Today!

2014-09-11 Thread Cindy Pallares

Hi Folks!

Glance is having its bug triage day today! Please help out if you can. 
You can check out the tasks here:


http://etherpad.openstack.org/p/glancebugday

Also here are some handy links to the untriaged bugs in glance and the 
client:


https://bugs.launchpad.net/glance/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on

https://bugs.launchpad.net/python-glanceclient/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on



-Cindy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Triage Bug Day Today!

2014-09-11 Thread Flavio Percoco
On 09/11/2014 02:28 PM, Cindy Pallares wrote:
 Hi Folks!
 
 Glance is having its bug triage day today! Please help out if you can.
 You can check out the tasks here:
 
 http://etherpad.openstack.org/p/glancebugday
 
 Also here are some handy links to the untriaged bugs in glance and the
 client:
 
 https://bugs.launchpad.net/glance/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on
 
 
 https://bugs.launchpad.net/python-glanceclient/+bugs?field.searchtext=orderby=-importancesearch=Searchfield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on
 
 
 

Awesome,

Thanks for organizing this, Cindy.
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread gordon chung
 Nejc has been doing great work and has been very helpful during the Juno
 cycle and his help is very valuable.

 I'd like to propose that we add Nejc Saje to the ceilometer-core group.

can we minus because he makes me look bad? /sarcasm

+1 for core.

cheers,
gord
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
 I think we should not count bugs for HCF criteria if they affect only
 experimental feature(s).

+1, I absolutely agree, but we should determine the number of allowed bugs
for experimental features by severity.

On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov nmar...@mirantis.com
wrote:

  Probably even an experimental feature should at least pretend to be
  working, or it shouldn't be publicly announced. But I think
  it's important to describe the limitations of these features (or mark
  some of them as untested), and I think a list of known issues with
  links to the most important bugs is a good approach. And tags will just
  make things simpler.

 On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
   Maybe we can use a tag per feature, for example zabbix
 
  Tags are ok, but I still think that we can mention at least some
  significant bugs. For example, if some feature doesn't work in some
  deployment mode (e.g. simple, with ceilometer, etc) we can at least
  notify users so they even don't try.
 
  Another opinions?
 
 
  On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
  if we point somewhere about knowing issues in those experimental
 features
  there are might be dozens of bugs.
  May be we can use tag per feature, for example zabbix, so it will be
 easy
  to search in LP all open bugs regarding Zabbix feature?
 
  On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky 
 ikalnit...@mirantis.com
  wrote:
 
   I think we should not count bugs for HCF criteria if they affect only
   experimental feature(s).
 
  +1, I totally agree with you - it makes no sense to count
  experimental bugs as HCF criteria.
 
   Any objections / other ideas?
 
  I think it would be great for customers if we point somewhere about
  knowing issues in those experimental features. IMHO, it should help
  them to understand what's wrong in case of errors and may prevent bug
  duplication in LP.
 
 
  On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
   Hi all,
   what about using experimental tag for experimental features?
  
   After we implemented feature groups [1], we can divide our features
 and
   for
   complex features, or those which don't get enough QA resources in the
   dev
   cycle, we can declare as experimental. It would mean that those are
 not
   production ready features.
   Giving them live still in experimental mode allows early adopters to
   give a
   try and bring a feedback to the development team.
  
   I think we should not count bugs for HCF criteria if they affect only
   experimental feature(s). At the moment, we have Zabbix as
 experimental
   feature, and Patching of OpenStack [2] is under consideration: if
 today
   QA
   doesn't approve it to be as ready for production use, we have no
 other
   choice. All deadlines passed, and we need to get 5.1 finally out.
  
   Any objections / other ideas?
  
   [1]
  
  
 https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
   [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
   --
   Mike Scherbakov
   #mihgen
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Best regards,
 Nick Markov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Gary Kotton


On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:

Sean Dague wrote:
 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.

Yes, that sounds like the logical next step. We can't split drivers
without first doing that anyway. I still think people need smaller
areas of work, as Vish eloquently put it. I still hope that refactoring
our test architecture will let us reach the same level of quality with
only a fraction of the tests being run at the gate, which should address
most of the harm you see in adding additional repositories. But I agree
there is little point in discussing splitting virt drivers (or anything
else, really) until the internal interface below that potential split is
fully cleaned up and it becomes an option.

How about we start to try and patch gerrit to provide +2 permissions for
people who can be assigned 'driver core' status. This is something that is
relevant to Nova and Neutron and I guess Cinder too.

Thanks
Gary


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread David Kranz

On 09/11/2014 07:32 AM, Eoghan Glynn wrote:



As you all know, there have recently been several very active discussions
around how to improve assorted aspects of our development process. One idea
that was brought up is to come up with a list of cycle goals/project
priorities for Kilo [0].

To that end, I would like to propose an exercise as discussed in the TC
meeting yesterday [1]:
Have anyone interested (especially TC members) come up with a list of what
they think the project wide Kilo cycle goals should be and post them on this
thread ...

Here's my list of high-level cycle goals, for consideration ...


1. Address our usability debts

With some justification, we've been saddled with the perception
of not caring enough about the plight of users and operators. The
frustrating thing is that much of this is very fixable, *if* we take
time out from the headlong rush to add features. Achievable things
like documentation completeness, API consistency, CLI intuitiveness,
logging standardization, would all go a long way here.

These things are of course all not beyond the wit of man, but we
need to take the time out to actually do them. This may involve
a milestone, or even longer, where we accept that the rate of
feature addition will be deliberately slowed down.


2. Address the drags on our development velocity

Despite the Trojan efforts of the QA team, the periodic brownouts
in the gate are having a serious impact on our velocity. Over the
past few cycles, we've seen the turnaround time for patch check/
verification spike up unacceptably long multiple times, mostly
around the milestones.

Whatever we can do to smoothen out these spikes, whether it be
moving much of the Tempest coverage into the project trees, or
switching focus onto post-merge verification as suggested by
Sean on this thread, or even considering some more left-field
approaches such as staggered milestones, we need to grasp this
nettle as a matter of urgency.

Further back in the pipeline, the effort required to actually get
something shepherded through review is steadily growing. To the
point that we need to consider some radical approaches that
retain the best of our self-organizing model, while setting more
reasonable and reliable expectations for patch authors, and making
it more likely that narrow domain expertise is available to review
their contributions in a timely way. For the larger projects, this
is likely to mean something different (along the lines of splits
or sub-domains) than it does for the smaller projects.


3. Address the long-running "what's in and what's out" questions

The way some of the discussions about integration and incubation
played out this cycle have made me sad. Not all of these discussions
have been fully supported by the facts on the ground IMO. And not
all of the issues that have been held up as justifications for
whatever course of exclusion or inclusion would IMO actually be
solved in that way.

I think we need to move the discussion around a new concept of
layering, or redefining what it means to be in the tent, to a
more constructive and collaborative place than heretofore.


4. Address the fuzziness in cross-service interactions

In a semi-organic way, we've gone and built ourselves a big ol'
service-oriented architecture. But without necessarily always
following the strong contracts, loose coupling, discoverability,
and autonomy that a SOA approach implies.

We need to take the time to go back and pay down some of the debt
that has accreted over multiple cycles around these
cross-service interactions. The most pressing of these would
include finally biting the bullet on the oft-proposed but never
delivered-upon notion of stabilizing notifications behind a
well-defined contract. Also, the more recently advocated notions
of moving away from coarse-grained versioning of the inter-service
APIs, and supporting better introspection and discovery of
capabilities.

+1
IMO, almost all of the other ills discussed recently derive from this 
single failure.


 -David

by end of day Wednesday, September 10th.

Oh, yeah, and impose fewer arbitrary deadlines ;)

Cheers,
Eoghan


After which time we can
begin discussing the results.
The goal of this exercise is to help us see if our individual world views
align with the greater community, and to get the ball rolling on a larger
discussion of where as a project we should be focusing more time.


best,
Joe Gordon

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Andrew Laski


On 09/10/2014 07:23 PM, Michael Still wrote:

On Thu, Sep 11, 2014 at 8:11 AM, Jay Pipes jaypi...@gmail.com wrote:


a) Sorting out the common code is already accounted for in Dan B's original
proposal -- it's a prerequisite for the split.

It's a big prerequisite though. I think we're talking about a release
worth of work to get that right. I don't object to us doing that work,
but I think we need to be honest about how long it's going to take. It
will also make the core of nova less agile, as we'll find it hard to
change the hypervisor driver interface over time. Do we really think
it's ready to be stable?


I don't.  For a long time now I've wanted to split the gigantic spawn() 
method in the virt api into more discrete steps.  I think there's some 
opportunity for doing some steps in parallel and the potential to have 
failures reported earlier and handled better.  But I've been sitting on 
it because I wanted to use 'tasks' as a way to address the 
parallelization and that work hasn't happened yet.  But this work would 
be introducing new calls which would be used based on some sort of 
capability query to the driver, so I don't think this work is 
necessarily hindered by stabilizing the interface.
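
As a rough sketch of that direction (all class and method names here are
hypothetical, not the actual Nova virt API), a capability query could let
new-style drivers expose discrete spawn steps while legacy drivers keep
the single monolithic call:

class ComputeDriver(object):
    def get_capabilities(self):
        return set()

    def spawn(self, context, instance):
        raise NotImplementedError()


class SteppedDriver(ComputeDriver):
    def get_capabilities(self):
        return {'stepped_spawn'}

    def prepare_image(self, context, instance):
        print('image prepared')

    def plug_vifs(self, context, instance):
        print('vifs plugged')

    def create_and_boot(self, context, instance):
        print('domain booted')


def do_spawn(driver, context, instance):
    if 'stepped_spawn' in driver.get_capabilities():
        # Each step can fail, be reported, or eventually be run in
        # parallel independently of the others.
        driver.prepare_image(context, instance)
        driver.plug_vifs(context, instance)
        driver.create_and_boot(context, instance)
    else:
        # Legacy path: the monolithic spawn() call.
        driver.spawn(context, instance)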


I also think the migration/resize methods could use some analysis before 
making a determination that they are what we want in a stable interface.




As an alternative approach...

What if we pushed most of the code for a driver into a library?
Imagine a library which controls the low level operations of a
hypervisor -- create a vm, attach a NIC, etc. Then the driver would
become a shim around that which was relatively thin, but owned the
interface into the nova core. The driver handles the nova specific
things like knowing how to create a config drive, or how to
orchestrate with cinder, but hands over all the hypervisor operations
to the library. If we found a bug in the library we just pin our
dependency on the version we know works whilst we fix things.

In fact, the driver inside nova could be a relatively generic library
driver, and we could have multiple implementations of the library,
one for each hypervisor.

This would make testing nova easier too, because we know how to mock
libraries already.

Now, that's kind of what we have in the hypervisor driver API now.
What I'm proposing is that the point where we break out of the nova
code base should be closer to the hypervisor than what that API
presents.
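
To make the shape of that concrete, here is a minimal sketch (all names
hypothetical) of a generic library-backed driver: Nova-specific
orchestration stays in the thin in-tree shim, while each hypervisor
supplies an implementation of a small library interface that can be
released and pinned independently:

import abc


class HypervisorLib(object):
    """Interface a per-hypervisor library would implement."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def create_vm(self, name, memory_mb):
        pass

    @abc.abstractmethod
    def attach_nic(self, vm, mac):
        pass

    @abc.abstractmethod
    def power_on(self, vm):
        pass


class LibraryBackedDriver(object):
    """Thin in-tree shim owning the interface into nova core."""

    def __init__(self, lib):
        self._lib = lib  # any HypervisorLib implementation

    def spawn(self, context, instance, network_info):
        # Nova-specific concerns (config drive, cinder orchestration)
        # would live here; only low-level ops cross into the library.
        vm = self._lib.create_vm(instance['uuid'], instance['memory_mb'])
        for vif in network_info:
            self._lib.attach_nic(vm, vif['address'])
        self._lib.power_on(vm)

Mocking HypervisorLib in unit tests is then straightforward, which is the
testing benefit described above.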


b) The conflict Dan is speaking of is around the current situation where we
have a limited core review team bandwidth and we have to pick and choose
which virt driver-specific features we will review. This leads to bad
feelings and conflict.

The way this worked in the past is we had cores who were subject
matter experts in various parts of the code -- there is a clear set of
cores who get xen or libvirt for example and I feel like those
drivers get reasonable review times. What's happened though is that
we've added a bunch of drivers without adding subject matter experts
to core to cover those drivers. Those newer drivers therefore have a
harder time getting things reviewed and approved.

That said, a heap of cores have spent time reviewing vmware driver
code this release, so it's obviously not as simple as I describe above.


c) It's the impact to the CI and testing load that I see being the biggest
benefit to the split-out driver repos. Patches proposed to the XenAPI driver
shouldn't have the Hyper-V CI tests run against the patch. Likewise, running
libvirt unit tests in the VMWare driver repo doesn't make a whole lot of
sense, and all of these tests add a not-insignificant load to the overall
upstream and external CI systems. The long wait time for tests to come back
means contributors get frustrated, since many reviewers tend to wait until
Jenkins returns some result before they review. All of this leads to
increased conflict that would be somewhat ameliorated by having separate
code repos for the virt drivers.

It is already possible to filter CI runs to specific paths in the
code. We just didn't choose to do that for policy reasons. We could
change that right now with a trivial tweak to each CI system's zuul
config.

Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Error in deploying ironicon Ubuntu 12.04

2014-09-11 Thread Jim Rollenhagen


On September 11, 2014 3:52:59 AM PDT, Lucas Alvares Gomes 
lucasago...@gmail.com wrote:
Oh, it's because Precise doesn't have the docker.io package[1] (nor
docker).

AFAIK the -infra team is now using Trusty in gate, so it won't be a
problem. But if you think that we should still support Ironic DevStack
with Precise, please file a bug about it so the Ironic team can take a
look at it.

[1]
http://packages.ubuntu.com/search?suite=trusty&section=all&arch=any&keywords=docker.io&searchon=names

Cheers,
Lucas

On Thu, Sep 11, 2014 at 11:12 AM, Peeyush gpeey...@linux.vnet.ibm.com
wrote:
 Hi all,

 I have been trying to deploy Openstack-ironic on a Ubuntu 12.04 VM.
 I encountered the following error:

 2014-09-11 10:08:11.166 | Reading package lists...
 2014-09-11 10:08:11.471 | Building dependency tree...
 2014-09-11 10:08:11.475 | Reading state information...
 2014-09-11 10:08:11.610 | E: Unable to locate package docker.io
 2014-09-11 10:08:11.610 | E: Couldn't find any package by regex
'docker.io'
 2014-09-11 10:08:11.611 | + exit_trap
 2014-09-11 10:08:11.612 | + local r=100
 2014-09-11 10:08:11.612 | ++ jobs -p
 2014-09-11 10:08:11.612 | + jobs=
 2014-09-11 10:08:11.612 | + [[ -n '' ]]
 2014-09-11 10:08:11.612 | + kill_spinner
 2014-09-11 10:08:11.613 | + '[' '!' -z '' ']'
 2014-09-11 10:08:11.613 | + [[ 100 -ne 0 ]]
 2014-09-11 10:08:11.613 | + echo 'Error on exit'
 2014-09-11 10:08:11.613 | Error on exit
 2014-09-11 10:08:11.613 | + [[ -z /opt/stack ]]
 2014-09-11 10:08:11.613 | + ./tools/worlddump.py -d /opt/stack
 2014-09-11 10:08:11.655 | + exit 100

 I tried to make it work on a separate machine, but got the same
error.
 I understand that it could be because script is looking for docker.io
 package,
 but I guess only docker package is available. I tried to install
docker.io,
 but couldn't
 find it.

 Can you please help me out to resolve this?

Ouch. I added this as a dependency in devstack for building IPA. 

As Lucas said, it works fine on 14.04. On 12.04, if you are using Ironic with
the PXE driver (the default), you can likely remove that line from
devstack/files/apts/ironic. I won't promise that everything will work after
that, but chances are good.

// jim

 Thanks,

 --
 Peeyush Gupta
 gpeey...@linux.vnet.ibm.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-11 Thread Julien Danjou
On Tue, Sep 09 2014, Matt Riedemann wrote:

 I noticed this change [1] today for global-requirements to require tooz [2] 
 for
 a ceilometer blueprint [3].

 The sad part is that tooz requires pymemcache [4] which is, from what I can
 tell, a memcached client that is not the same as python-memcached [5].

 Note that python-memcached is listed in global-requirements already [6].

You're not going to control the entire list of dependencies of the things
we use in OpenStack, so this kind of situation is going to arise anyway.

 The problem I have with this is it doesn't appear that RHEL/Fedora package
 pymemcache (they do package python-memcached).  I see that openSUSE builds
 separate packages for each.  It looks like Ubuntu also has separate packages.

 My question is, is this a problem?  I'm assuming RDO will just have to package
 python-pymemcache themselves but what about people not using RDO (SOL? Don't
 care? Other?).

 Reverting the requirements change would probably mean reverting the ceilometer
 blueprint (or getting a version of tooz out that works with python-memcached
 which is probably too late for that right now).  Given the point in the 
 schedule
 that seems pretty drastic.

python-memcached is a terrible memcache client, which does not support
Python 3. pymemcache is way better than python-memcached, and everybody
should switch to it. When we started tooz from scratch a year ago, there
was no point starting to use a non-Python 3 compatible and crappy
memcache client.

pymemcache shouldn't be a problem to package anyway. :)
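
For anyone who hasn't tried it, the client is simple to use. A quick
illustration (assuming the Client class is importable as below, which
matches the pymemcache releases current at the time of writing, and a
memcached listening locally):

from pymemcache.client import Client

client = Client(('localhost', 11211))
client.set('some_key', 'some_value')
print(client.get('some_key'))  # b'some_value'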

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Vladimir Kuklin
+1

On Thu, Sep 11, 2014 at 5:05 PM, Anastasia Urlapova aurlap...@mirantis.com
wrote:

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s).

 +1, absolutely agree, but we should determine count of allowed bugs for
 experimental features by severity.

 On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov nmar...@mirantis.com
 wrote:

 Probably even an experimental feature should at least pretend to be
 working, or it shouldn't be publicly announced. But I think
 it's important to describe the limitations of these features (or mark some
 of them as untested), and I think a list of known issues with links to
 the most important bugs is a good approach. And tags will just make things
 simpler.

 On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
  Maybe we can use a tag per feature, for example zabbix

  Tags are ok, but I still think that we can mention at least some
  significant bugs. For example, if some feature doesn't work in some
  deployment mode (e.g. simple, with ceilometer, etc.) we can at least
  notify users so they don't even try.

  Other opinions?
 
 
  On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
  if we point somewhere about known issues in those experimental features
  there might be dozens of bugs.
  Maybe we can use a tag per feature, for example zabbix, so it will be easy
  to search LP for all open bugs regarding the Zabbix feature?
 
  On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky 
 ikalnit...@mirantis.com
  wrote:
 
   I think we should not count bugs for HCF criteria if they affect
 only
   experimental feature(s).
 
  +1, I totally agree with you - it makes no sense to count
  experimental bugs as HCF criteria.
 
   Any objections / other ideas?
 
  I think it would be great for customers if we point somewhere about
  known issues in those experimental features. IMHO, it should help
  them to understand what's wrong in case of errors and may prevent bug
  duplication in LP.
 
 
  On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
   Hi all,
   what about using an experimental tag for experimental features?

   After we implemented feature groups [1], we can divide our features:
   complex features, or those which don't get enough QA resources in the
   dev cycle, we can declare as experimental. It would mean that those are
   not production-ready features.
   Keeping them live in experimental mode still allows early adopters to
   give them a try and bring feedback to the development team.

   I think we should not count bugs for HCF criteria if they affect only
   experimental feature(s). At the moment, we have Zabbix as an
   experimental feature, and Patching of OpenStack [2] is under
   consideration: if today QA doesn't approve it as ready for production
   use, we have no other choice. All deadlines have passed, and we need
   to get 5.1 finally out.

   Any objections / other ideas?
  
   [1]
  
  
 https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
   [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
   --
   Mike Scherbakov
   #mihgen
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Best regards,
 Nick Markov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugable solution for running abstract commands on nodes

2014-09-11 Thread Vladimir Kuklin
Let's not create architectural leaks here. Let there be only tasks, but
let's create a really simple task template that the user will be able to
fill in with just the command itself.
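
To illustrate (with purely hypothetical names and defaults, not actual
nailgun code), such a template could let plugin authors supply nothing
but the command, with everything else filled in:

TASK_TEMPLATE = {'type': 'exec', 'priority': 1000, 'description': ''}


def expand_task(raw):
    # Accept either a bare command string or a partial task dict and
    # fill in the remaining fields from the template.
    if isinstance(raw, str):
        return dict(TASK_TEMPLATE, cmd=raw)
    return dict(TASK_TEMPLATE, **raw)


print(expand_task('echo 1'))
# -> a full exec task with type, priority, description and cmd='echo 1'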

On Thu, Sep 11, 2014 at 4:17 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 In most cases, for plugin developers or fuel users it will be much
 easier to just write the command they want to run on nodes
 instead of describing some abstract task which doesn't have
 any additional information/logic and looks like unnecessary complexity.

 But for complicated cases the user will have to write some code for tasklib.

 Thanks,

 On Wed, Sep 10, 2014 at 8:10 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi,

 you described a transport mechanism for running commands based on facts; we
 have another one, which stores
 all business logic in nailgun and only provides the orchestrator with a set
 of tasks to execute. This is not a problem.

 I am talking about the API for the plugin writer/developer, and how to
 implement it to be more friendly.

 On Wed, Sep 10, 2014 at 6:46 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 Hi,

 as for execution of arbitrary code across the OpenStack cluster - I was
 thinking of mcollective + fact filters:

 1) we need to start using mcollective facts [0] [2] - we don't
 use/configure this currently
 2) use mcollective execute_shell_command agent (or any other agent) with
 fact filter [1]

 So, for example, if we have mcollective fact called node_roles:
 node_roles: compute ceph-osd

 Then we can execute shell cmd on all compute nodes like this:

 mco rpc execute_shell_command execute cmd=/some_script.sh -F
 node_role=/compute/

 Of course, we can use more complicated filters to run commands more
 precisely.

 [0]
 https://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
 [1]
 https://docs.puppetlabs.com/mcollective/reference/ui/filters.html#fact-filters
 [2] https://docs.puppetlabs.com/mcollective/reference/plugins/facts.html


 On Wed, Sep 10, 2014 at 6:04 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 Some of you may know that there is ongoing work to achieve a kind of
 data-driven orchestration
 for Fuel. If this is new to you, please get familiar with the spec:

 https://review.openstack.org/#/c/113491/

 Knowing that running a random command on nodes will probably be the most
 used type of
 orchestration extension, I want to discuss our solution to this
 problem.

 Plugin writer will need to do two things:

 1. Provide a custom task.yaml (I am using /etc/puppet/tasks, but this is
 completely configurable,
 we just need to reach agreement)

   /etc/puppet/tasks/echo/task.yaml

   with next content:

type: exec
cmd: echo 1

 2. Provide control plane with orchestration metadata

 /etc/fuel/tasks/echo_task.yaml

 controller:
  -
   task: echo
   description: Simple echo for you
   priority: 1000
 compute:
 -
   task: echo
   description: Simple echo for you
   priority: 1000

 This is done in order to separate concerns of orchestration logic and
 tasks.

 From the plugin writer's perspective it is far more usable to provide the
 exact command in the orchestration metadata itself, like:

 /etc/fuel/tasks/echo_task.yaml

 controller:
  -
   task: echo
   description: Simple echo for you
   priority: 1000
   cmd: echo 1
   type: exec

 compute:
 -
  task: echo
   description: Simple echo for you
   priority: 1000
   cmd: echo 1
   type: exec

 I would prefer to stick to the first, because there are benefits to
 using one interface between all task executors (puppet, exec, maybe chef),
 which will improve the debugging and development process.

 So my question is: is the first good enough? Or is the second an essential
 type of plugin to support?

 If you want additional implementation details check:
 https://review.openstack.org/#/c/118311/
 https://review.openstack.org/#/c/113226/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Nadya Privalova
I'm in :)
+1

On Thu, Sep 11, 2014 at 4:58 PM, gordon chung g...@live.ca wrote:

  Nejc has been doing great work and has been very helpful during the

  Juno cycle and his help is very valuable.

  I'd like to propose that we add Nejc Saje to the ceilometer-core group.

 can we minus because he makes me look bad? /sarcasm

 +1 for core.

 cheers,
 *gord*

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Julien Danjou
Hi,

Dina has been doing great work and has been very helpful during the
Juno cycle and her help is very valuable. She's been doing a lot of
reviews and has been very active in our community.

I'd like to propose that we add Dina Belova to the ceilometer-core
group, as I'm convinced it'll help the project.

Please, dear ceilometer-core members, reply with your votes!

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Nadya Privalova
May I be the first? :) Big +1 from me. Thanks, Dina!

On Thu, Sep 11, 2014 at 5:24 PM, Julien Danjou jul...@danjou.info wrote:

 Hi,

 Dina has been doing great work and has been very helpful during the
 Juno cycle and her help is very valuable. She's been doing a lot of
 reviews and has been very active in our community.

 I'd like to propose that we add Dina Belova to the ceilometer-core
 group, as I'm convinced it'll help the project.

 Please, dear ceilometer-core members, reply with your votes!

 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 09:09 AM, Gary Kotton wrote:
 
 
 On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:
 
 Sean Dague wrote:
 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.

 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think people need smaller
 areas of work, as Vish eloquently put it. I still hope that refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split is
 fully cleaned up and it becomes an option.
 
 How about we start to try and patch gerrit to provide +2 permissions for
 people
 who can be assigned 'driver core' status. This is something that is
 relevant to Nova and Neutron and I guess Cinder too.

If you think that's the right solution, I'd say go and investigate it
with folks that understand enough gerrit internals to be able to figure
out how hard it would be. Start a conversation in #openstack-infra to
explore it.

My expectation is that there is more complexity there than you give it
credit for. That being said one of the biggest limitations we've had on
gerrit changes is we've effectively only got one community member, Kai,
who does any of that. If other people, or teams, were willing to dig in
and own things like this, that might be really helpful.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
 +1, absolutely agree, but we should determine count of allowed bugs for
experimental features by severity.
Anastasia, can you please give an example? I think we should not count them
at all. Experimental features, if they are isolated, can be in any
state. Maybe just at the very beginning of the development cycle.

On Thu, Sep 11, 2014 at 5:20 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 +1

 On Thu, Sep 11, 2014 at 5:05 PM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s).

 +1, absolutely agree, but we should determine count of allowed bugs for
 experimental features by severity.

 On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov nmar...@mirantis.com
 wrote:

 Probably even an experimental feature should at least pretend to be
 working, or it shouldn't be publicly announced. But I think
 it's important to describe the limitations of these features (or mark some
 of them as untested), and I think a list of known issues with links to
 the most important bugs is a good approach. And tags will just make things
 simpler.

 On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
  Maybe we can use a tag per feature, for example zabbix

  Tags are ok, but I still think that we can mention at least some
  significant bugs. For example, if some feature doesn't work in some
  deployment mode (e.g. simple, with ceilometer, etc.) we can at least
  notify users so they don't even try.

  Other opinions?
 
 
  On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
  if we point somewhere about known issues in those experimental features
  there might be dozens of bugs.
  Maybe we can use a tag per feature, for example zabbix, so it will be easy
  to search LP for all open bugs regarding the Zabbix feature?
 
  On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky 
 ikalnit...@mirantis.com
  wrote:
 
   I think we should not count bugs for HCF criteria if they affect
 only
   experimental feature(s).
 
  +1, I totally agree with you - it makes no sense to count
  experimental bugs as HCF criteria.
 
   Any objections / other ideas?
 
  I think it would be great for customers if we point somewhere about
  known issues in those experimental features. IMHO, it should help
  them to understand what's wrong in case of errors and may prevent bug
  duplication in LP.
 
 
  On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
   Hi all,
   what about using an experimental tag for experimental features?

   After we implemented feature groups [1], we can divide our features:
   complex features, or those which don't get enough QA resources in the
   dev cycle, we can declare as experimental. It would mean that those are
   not production-ready features.
   Keeping them live in experimental mode still allows early adopters to
   give them a try and bring feedback to the development team.

   I think we should not count bugs for HCF criteria if they affect only
   experimental feature(s). At the moment, we have Zabbix as an
   experimental feature, and Patching of OpenStack [2] is under
   consideration: if today QA doesn't approve it as ready for production
   use, we have no other choice. All deadlines have passed, and we need
   to get 5.1 finally out.

   Any objections / other ideas?
  
   [1]
  
  
 https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
   [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
   --
   Mike Scherbakov
   #mihgen
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Best regards,
 Nick Markov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Radoslav Gerganov

On 09/11/2014 04:30 PM, Sean Dague wrote:

On 09/11/2014 09:09 AM, Gary Kotton wrote:



On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:


Sean Dague wrote:

[...]
Why don't we start with let's clean up the virt interface and make it
more sane, as I don't think there is any disagreement there. If it's
going to take a cycle, it's going to take a cycle anyway (it will
probably take 2 cycles, realistically, we always underestimate these
things, remember when no-db-compute was going to be 1 cycle?). I don't
see the need to actually decide here and now that the split is clearly
at least 7 - 12 months away. A lot happens in the intervening time.


Yes, that sounds like the logical next step. We can't split drivers
without first doing that anyway. I still think people need smaller
areas of work, as Vish eloquently put it. I still hope that refactoring
our test architecture will let us reach the same level of quality with
only a fraction of the tests being run at the gate, which should address
most of the harm you see in adding additional repositories. But I agree
there is little point in discussing splitting virt drivers (or anything
else, really) until the internal interface below that potential split is
fully cleaned up and it becomes an option.


How about we start to try and patch gerrit to provide +2 permissions for
people
who can be assigned 'driver core' status. This is something that is
relevant to Nova and Neutron and I guess Cinder too.


If you think that's the right solution, I'd say go and investigate it
with folks that understand enough gerrit internals to be able to figure
out how hard it would be. Start a conversation in #openstack-infra to
explore it.

My expectation is that there is more complexity there than you give it
credit for. That being said one of the biggest limitations we've had on
gerrit changes is we've effectively only got one community member, Kai,
who does any of that. If other people, or teams, were willing to dig in
and own things like this, that might be really helpful.


I don't think we need to modify gerrit to support this functionality. We 
can simply have a gerrit job (similar to the existing CI jobs) which is 
run on every patch set and checks if:

1) the changes are only under /nova/virt/XYZ and /nova/tests/virt/XYZ
2) it has two +1 from maintainers of driver XYZ

if the above conditions are met, the job will post W+1 for this 
patchset. Does that make sense?
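
The core of such a job could be very small. A minimal sketch (hypothetical
maintainer data and function names; not a real Zuul/Gerrit integration):

MAINTAINERS = {'vmware': {'rgerganov', 'garyk'}}  # illustrative only


def can_auto_approve(driver, changed_files, plus_one_voters):
    prefixes = ('nova/virt/%s/' % driver, 'nova/tests/virt/%s/' % driver)
    # Condition 1: every changed file lives under the driver's own tree.
    if not all(f.startswith(prefixes) for f in changed_files):
        return False
    # Condition 2: at least two +1 votes from the driver's maintainers.
    approvals = MAINTAINERS.get(driver, set()) & set(plus_one_voters)
    return len(approvals) >= 2


print(can_auto_approve('vmware',
                       ['nova/virt/vmware/driver.py'],
                       ['rgerganov', 'garyk']))  # True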



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Duncan Thomas
On 11 September 2014 12:36, Sean Dague s...@dague.net wrote:

 I continue to not understand how N non overlapping teams makes this any
 better. You have to pay the integration cost somewhere. Right now we're
 trying to pay it 1 patch at a time. This model means the integration
 units get much bigger, and with less common ground.

 Look at how much active work in crossing core teams we've had to do to
 make any real progress on the neutron replacing nova-network front. And
 how slow that process is. I think you'll see that hugely show up here.

Cinder has also suffered extreme latency trying to make changes to the
nova-cinder interface, to a sufficient degree that work is under
consideration to move the interface to give cinder more control over
parts of it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Tomasz Napierala

On 11 Sep 2014, at 09:19, Mike Scherbakov mscherba...@mirantis.com wrote:

 Hi all,
 what about using an experimental tag for experimental features?

 After we implemented feature groups [1], we can divide our features:
 complex features, or those which don't get enough QA resources in the dev
 cycle, we can declare as experimental. It would mean that those are not
 production-ready features.
 Keeping them live in experimental mode still allows early adopters to give
 them a try and bring feedback to the development team.

 I think we should not count bugs for HCF criteria if they affect only
 experimental feature(s). At the moment, we have Zabbix as an experimental
 feature, and Patching of OpenStack [2] is under consideration: if today QA
 doesn't approve it as ready for production use, we have no other choice.
 All deadlines have passed, and we need to get 5.1 finally out.

 Any objections / other ideas?

+1

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Duncan Thomas
On 11 September 2014 03:17, Angus Lees g...@inodes.org wrote:

 (As inspired by eg kerberos)
 2. Ensure at some environmental/top layer that the advertised token lifetime
 exceeds the timeout set on the request, before making the request.  This
 implies (since there's no special handling in place) failing if the token was
 expired earlier than expected.

We've a related problem in cinder (cinder-backup uses the user's token
to talk to swift, and the backup can easily take longer than the token
expiry time) which could not be solved by this, since the time the
backup takes is unknown (compression, service and resource contention,
etc alter the time by multiple orders of magnitude)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][db] Need help resolving a strange error with db connections in tests

2014-09-11 Thread Anna Kamyshnikova
Hello everyone!

I'm working on implementing a test in Neutron that checks that models are
synchronized with the database state [1] [2]. This is a very important change,
as big changes to the database structure were made during the Juno cycle.

I had been working on it for rather a long time, but about three weeks ago a
strange error appeared [3]; using AssertionPool shows [4]. The problem is that
somehow there is more than one connection to the database from each test. I
tried to use locks from lockutils, but it didn't help. At the db meeting we
decided to add a TestCase just for the Ml2 plugin for starters, and then
continue working on this strange error; that is why there are two change
requests [1] and [2]. But I found out that somehow even one testcase fails
with the same error [5] from time to time.

I'm asking for any suggestions on what could be done in this case. It is very
important to get at least [1] merged in Juno.

[1] - https://review.openstack.org/76520

[2] - https://review.openstack.org/120040

[3] - http://paste.openstack.org/show/110158/

[4] - http://paste.openstack.org/show/110159/

[5] -
http://logs.openstack.org/20/76520/68/check/gate-neutron-python27/63938f9/testr_results.html.gz
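
For context, here is a self-contained illustration (not Neutron code) of
what AssertionPool enforces: it allows at most one checked-out connection
at a time and raises as soon as a second checkout happens, which is how
the extra connections show up:

from sqlalchemy import create_engine
from sqlalchemy.pool import AssertionPool

engine = create_engine('sqlite://', poolclass=AssertionPool)

c1 = engine.connect()
try:
    c2 = engine.connect()  # second checkout -> AssertionError
except AssertionError as exc:
    print('caught: %s' % exc)
c1.close()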

Regards,

Ann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
Mike, I just want to say: if a feature isn't ready for production use and
we have no other choice, we should provide detailed limitations and
examples of proper use.

On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


 On 11 Sep 2014, at 09:19, Mike Scherbakov mscherba...@mirantis.com
 wrote:

  Hi all,
  what about using an experimental tag for experimental features?

  After we implemented feature groups [1], we can divide our features:
  complex features, or those which don't get enough QA resources in the
  dev cycle, we can declare as experimental. It would mean that those are
  not production-ready features.
  Keeping them live in experimental mode still allows early adopters to
  give them a try and bring feedback to the development team.

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s). At the moment, we have Zabbix as an experimental
  feature, and Patching of OpenStack [2] is under consideration: if today QA
  doesn't approve it as ready for production use, we have no other choice.
  All deadlines have passed, and we need to get 5.1 finally out.

  Any objections / other ideas?

 +1

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Steven Hardy
On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
 
 - Original Message -
  From: Steven Hardy sha...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Thursday, September 11, 2014 1:55:49 AM
  Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
  tokens leads to overall OpenStack fragility
  
  On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
   Going through the untriaged Nova bugs, and there are a few on a similar
   pattern:
   
   Nova operation in progress takes a while
   Crosses keystone token expiration time
   Timeout thrown
   Operation fails
   Terrible 500 error sent back to user
  
  We actually have this exact problem in Heat, which I'm currently trying to
  solve:
  
  https://bugs.launchpad.net/heat/+bug/1306294
  
  Can you clarify, is the issue either:
  
  1. Create novaclient object with username/password
  2. Do series of operations via the client object which eventually fail
  after $n operations due to token expiry
  
  or:
  
  1. Create novaclient object with username/password
  2. Some really long operation which means token expires in the course of
  the service handling the request, blowing up and 500-ing
  
  If the former, then it does sound like a client, or usage-of-client bug,
  although note if you pass a *token* vs username/password (as is currently
  done for glance and heat in tempest, because we lack the code to get the
  token outside of the shell.py code..), there's nothing the client can do,
  because you can't request a new token with longer expiry with a token...
  
  However if the latter, then it seems like not really a client problem to
  solve, as it's hard to know what action to take if a request failed
  part-way through and thus things are in an unknown state.
  
  This issue is a hard problem, which can possibly be solved by
  switching to a trust scoped token (service impersonates the user), but then
  you're effectively bypassing token expiry via delegation which sits
  uncomfortably with me (despite the fact that we may have to do this in heat
  to solve the aforementioned bug)
  
   It seems like we should have a standard pattern that on token expiration
   the underlying code at least gives one retry to try to establish a new
   token to complete the flow, however as far as I can tell *no* clients do
   this.
  
  As has been mentioned, using sessions may be one solution to this, and
  AFAIK session support (where it doesn't already exist) is getting into
  various clients via the work being carried out to add support for v3
  keystone by David Hu:
  
  https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
  
  I see patches for Heat (currently gating), Nova and Ironic.
  
   I know we had to add that into Tempest because tempest runs can exceed 1
   hr, and we want to avoid random fails just because we cross a token
   expiration boundary.
  
  I can't claim great experience with sessions yet, but AIUI you could do
  something like:
  
  from keystoneclient.auth.identity import v3
  from keystoneclient import session
  from keystoneclient.v3 import client
  
  auth = v3.Password(auth_url=OS_AUTH_URL,
 username=USERNAME,
 password=PASSWORD,
 project_id=PROJECT,
 user_domain_name='default')
  sess = session.Session(auth=auth)
  ks = client.Client(session=sess)
  
  And if you can pass the same session into the various clients tempest
  creates then the Password auth-plugin code takes care of reauthenticating
  if the token cached in the auth plugin object is expired, or nearly
  expired:
  
  https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
  
  So in the tempest case, it seems like it may be a case of migrating the
  code creating the clients to use sessions instead of passing a token or
  username/password into the client object?
  
  That's my understanding of it atm anyway, hopefully jamielennox will be 
  along
  soon with more details :)
  
  Steve
 
 
 By clients here are you referring to the CLIs or the python libraries? 
 Implementation is at different points with each. 

I think for both heat and tempest we're talking about the python libraries
(Client objects).

 Sessions will handle automatically reauthenticating and retrying a request;
 however, this relies on the service throwing a 401 Unauthenticated error. If a
 service is returning a 500 (or a timeout?) then there isn't much that a
 client can/should do for that, because we can't assume that trying again with
 a new token will solve anything.

Hmm, I was hoping it would reauthenticate based on the auth_ref
will_expire_soon, as it would fit better with our current usage of the
auth_ref in heat.
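
For reference, a rough sketch of that proactive pattern (illustrative
only; the exact refresh call would depend on how the client object was
constructed):

from keystoneclient.v2_0 import client as ks_client

ks = ks_client.Client(username='demo', password='secret',
                      tenant_name='demo',
                      auth_url='http://keystone:5000/v2.0')


def fresh_token(ks, stale_duration=30):
    # Refresh proactively instead of waiting for a 401 mid-operation.
    if ks.auth_ref.will_expire_soon(stale_duration=stale_duration):
        ks.authenticate()
    return ks.auth_token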

 
 At the moment we have keystoneclient, novaclient, cinderclient neutronclient 
 and then a number of the smaller 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Davanum Srinivas
Rados,

personally, I'd want a human to do the +W. Also, the criteria would
include a 3), which is the CI for the driver, if applicable.

On Thu, Sep 11, 2014 at 9:53 AM, Radoslav Gerganov rgerga...@vmware.com wrote:
 On 09/11/2014 04:30 PM, Sean Dague wrote:

 On 09/11/2014 09:09 AM, Gary Kotton wrote:



 On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:

 Sean Dague wrote:

 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.


 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think people need smaller
 areas of work, as Vish eloquently put it. I still hope that refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split is
 fully cleaned up and it becomes an option.


 How about we start to try and patch gerrit to provide +2 permissions for
 people
 who can be assigned 'driver core' status. This is something that is
 relevant to Nova and Neutron and I guess Cinder too.


 If you think that's the right solution, I'd say go and investigate it
 with folks that understand enough gerrit internals to be able to figure
 out how hard it would be. Start a conversation in #openstack-infra to
 explore it.

 My expectation is that there is more complexity there than you give it
 credit for. That being said one of the biggest limitations we've had on
 gerrit changes is we've effectively only got one community member, Kai,
 who does any of that. If other people, or teams, were willing to dig in
 and own things like this, that might be really helpful.


 I don't think we need to modify gerrit to support this functionality. We can
 simply have a gerrit job (similar to the existing CI jobs) which is run on
 every patch set and checks if:
 1) the changes are only under /nova/virt/XYZ and /nova/tests/virt/XYZ
 2) it has two +1 from maintainers of driver XYZ

 if the above conditions are met, the job will post W+1 for this patchset.
 Does that make sense?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
 Mike, I just want to say: if a feature isn't ready for production use
and we have no other choice, we should provide detailed limitations and
examples of proper use.
Fully agree, such features should become experimental. We should have this
information in the release notes.

Basically, Patching of OpenStack becomes such a feature, unfortunately. We
still have bugs, and there is no guarantee that we won't find more.

So, let's add the experimental tag to issues around Zabbix and Patching of
OpenStack.

On Thu, Sep 11, 2014 at 6:19 PM, Anastasia Urlapova aurlap...@mirantis.com
wrote:

 Mike, I just want to say: if a feature isn't ready for production use and
 we have no other choice, we should provide detailed limitations and
 examples of proper use.

 On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala tnapier...@mirantis.com
  wrote:


 On 11 Sep 2014, at 09:19, Mike Scherbakov mscherba...@mirantis.com
 wrote:

  Hi all,
  what about using an experimental tag for experimental features?

  After we implemented feature groups [1], we can divide our features:
  complex features, or those which don't get enough QA resources in the
  dev cycle, we can declare as experimental. It would mean that those are
  not production-ready features.
  Keeping them live in experimental mode still allows early adopters to
  give them a try and bring feedback to the development team.

  I think we should not count bugs for HCF criteria if they affect only
  experimental feature(s). At the moment, we have Zabbix as an experimental
  feature, and Patching of OpenStack [2] is under consideration: if today QA
  doesn't approve it as ready for production use, we have no other choice.
  All deadlines have passed, and we need to get 5.1 finally out.

  Any objections / other ideas?

 +1

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 07:36 -0400, Sean Dague wrote:
  b) The conflict Dan is speaking of is around the current situation where 
  we
  have a limited core review team bandwidth and we have to pick and choose
  which virt driver-specific features we will review. This leads to bad
  feelings and conflict.
 
  The way this worked in the past is we had cores who were subject
  matter experts in various parts of the code -- there is a clear set of
  cores who get xen or libvirt for example and I feel like those
  drivers get reasonable review times. What's happened though is that
  we've added a bunch of drivers without adding subject matter experts
  to core to cover those drivers. Those newer drivers therefore have a
  harder time getting things reviewed and approved.
  
  FYI, for Juno at least I really don't consider that even the libvirt
  driver got acceptable review times in any sense. The pain of waiting
  for reviews in libvirt code I've submitted this cycle is what prompted
  me to start this thread. All the virt drivers are suffering way more
  than they should be, but those without core team representation suffer
  to an even greater degree.  And this is ignoring the point Jay and I
  were making about how the use of a single team means that there is
  always contention for feature approval, so much work gets cut right
  at the start even if maintainers of that area felt it was valuable
  and worth taking.
 
 I continue to not understand how N non overlapping teams makes this any
 better. You have to pay the integration cost somewhere. Right now we're
 trying to pay it 1 patch at a time. This model means the integration
 units get much bigger, and with less common ground.

OK, so look at a concrete example: in 2002, the Linux kernel went with
bitkeeper precisely because we'd reached the scaling limit of a single
integration point, so we took the kernel from a single contributing team
to a bunch of them.  This was expanded with git in 2005 and leads to the
hundreds of contributing teams we have today.

The reason this scales nicely is precisely because the integration costs
are lower.  However, there are a couple of principles that really assist
us getting there.  The first is internal API management: an Internal API
is a contract between two teams (may be more, but usually two).  If
someone wants to change this API they have to negotiate between the two
(or more) teams.  This naturally means that only the affected components
review this API change, but *only* they need to review it, so it doesn't
bubble up to the whole kernel community.  The second is automation:
linux-next and the zero day test programme build and smoke test an
integration of all our development trees.  If one team does something
that impacts another in their development tree, this system gives us
immediate warning.  Basically we run continuous integration, so when
Linus does his actual integration pull, everything goes smoothly (that's
how we integrate all the 300 or so trees for a kernel release in about
ten days).  We also now have a lot of review automation (checkpatch.pl
for instance), but that's independent of the number of teams.

In this model the scaling comes from the local reviews and integration.
The more teams the greater the scaling.  The factor which obstructs
scaling is the internal API ... it usually doesn't make sense to
separate a component where there's no API between the two pieces ...
however, if you think there should be, separating and telling the teams
to figure it out is a great way to generate the API.   The point here is
that since an API is a contract, forcing people to negotiate and abide
by the contract tends to make them think much more carefully about it.
Internal API moves from being a global issue to being a local one.

By the way, the extra link work is actually time well spent because it
means the link APIs are negotiated by teams with use cases not just
designed by abstract architecture.  The greater the link pain the
greater the indication that there's an API problem and the greater the
pressure on the teams either end to fix it.  Once the link pain is
minimised, the API is likely a good one.

 Look at how much active work in crossing core teams we've had to do to
 make any real progress on the neutron replacing nova-network front. And
 how slow that process is. I think you'll see that hugely show up here.

Well, as I said, separating the components leads to API negotiation
between the teams.  Because of the API negotiation, taking one thing and
making it two does cause more work, and it's visible work because the
two new teams get to do the API negotiation which didn't exist before.
The trick to getting the model to scale is the network effect.  The
scaling comes by splitting out into high numbers of teams (say N); the
added work comes in the links (the API contracts) between the N teams.
If the network is star shaped (everything touches everything else), then
you've achieved nothing other than a 

Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Robert Kukura


On 9/10/14, 6:54 PM, Kevin Benton wrote:
Being in the incubator won't help with this if it's a different repo 
as well.

Agreed.

Given the requirement for GBP to intercept API requests, the potential 
couplings between policy drivers, ML2 mechanism drivers, and even 
service plugins (L3 router), and the fact Neutron doesn't have a stable 
[service] plugin API, along with the goal to eventually merge GBP into 
Neutron, I'd rank the options as follows in descending order:


1) Merge the GBP patches to the neutron repo early in Kilo and iterate, 
just like we had planned for Juno;-) .


2) Like 1, but with the code initially in a preview subtree to clarify 
its level of stability and support, and to facilitate packaging it as an 
optional component.


3) Like 1, but merge to a feature branch in the neutron repo and iterate 
there.


4) Develop in an official neutron-incubator repo, with neutron core 
reviews of each GBP patch.


5) Develop in StackForge, without neutron core reviews.


Here's how I see these options in terms of the various considerations 
that have come up during this discussion:


* Options 1, 2 and 3 most easily support whatever coupling is needed 
with the rest of Neutron. Options 4 and 5 would sometimes require 
synchronized changes across repos since dependencies aren't in terms of 
stable interfaces.


* Options 1, 2 and 3 provide a clear path to eventually graduate GBP 
into a fully supported Neutron feature, without loss of git history. 
Option 4 would have some hope of eventually merging into the neutron 
repo due to the code having already had core reviews. With option 5, 
reviewing and merging a complete GBP implementation from StackForge into 
the neutron repo would be a huge effort, with significant risk that 
reviewers would want design changes not practical to make at that stage.


* Options 1 and 2 take full advantage of existing review, CI, packaging 
and release processes and mechanisms. All the other options require 
extra work to put these in place.


* Options 1 and 2 can easily make GBP consumable by early adopters 
through normal channels such as devstack and OpenStack distributions. 
The other options all require the operator or the packager to pull GBP 
code from a different source than the base Neutron code.


* Option 1 relies on the historical understanding that new Neutron 
extension APIs are not initially considered stable, and incompatible 
changes can occur in future releases. Options 2, 3 and 4 make this 
explicit. Option 5 really has nothing to do with Neutron.


* Option 5 allows rapid iteration by the GBP team, without waiting for 
core review. This is essential during experimentation and prototyping, 
but at least some participants consider the GBP implementation to be 
well beyond that phase.


* Options 3, 4, and 5 potentially decouple the GBP release schedule from 
the Neutron release schedule. With options 1 or 2, GBP snapshots would 
be included in all normal Neutron releases. With any of the options, the 
GBP team, vendors, or distributions would be able to back-port arbitrary 
snapshots of GBP to a branch off the stable/juno branch (in the neutron 
repo itself or in a clone) to allow early adopters to use GBP with 
Juno-based OpenStack distributions.



Does the above make some sense? What have I missed?

Of course this all assumes there is consensus that we should proceed 
with GBP, that we should continue by iterating the currently proposed 
design and code, and that GBP should eventually become part of Neutron. 
These assumptions may still be the real issues:-( . If we can't agree on 
whether GBP is in an experimentation/rapid-prototyping phase vs. an 
almost-ready-to-beta-test phase, I don't see how we can get consensus on 
the next steps for its development.


-Bob


On Wed, Sep 10, 2014 at 7:22 AM, Robert Kukura 
kuk...@noironetworks.com wrote:



On 9/9/14, 7:51 PM, Jay Pipes wrote:

On 09/09/2014 06:57 PM, Kevin Benton wrote:

Hi Jay,

The main component that won't work without direct
integration is
enforcing policy on calls directly to Neutron and calls
between the
plugins inside of Neutron. However, that's only one
component of GBP.
All of the declarative abstractions, rendering of policy,
etc can be
experimented with here in the stackforge project until the
incubator is
figured out.


OK, thanks for the explanation Kevin, that helps!

I'll add that there is likely to be a close coupling between ML2
mechanism drivers and corresponding GBP policy drivers for some of
the back-end integrations. These will likely share local state
such as connections to controllers, and may interact with each
other as part of processing core and GBP API requests.
Development, review, and packaging of these would be facilitated
by having 

Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Eoghan Glynn


 Hi,
 
 Dina has been doing a great work and has been very helpful during the
 Juno cycle and her help is very valuable. She's been doing a lot of
 reviews and has been very active in our community.
 
 I'd like to propose that we add Dina Belova to the ceilometer-core
 group, as I'm convinced it'll help the project.
 
 Please, dear ceilometer-core members, reply with your votes!

A definite +1 from me.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Jason Greathouse
My mistake about the mailing list; the OpenStack Heat wiki page
(https://wiki.openstack.org/wiki/Heat) only lists the dev list. I will
make sure to ask future usage questions on the other one.

Thank you for the response and example. This is what I was missing.

On Thu, Sep 11, 2014 at 3:21 AM, Steven Hardy sha...@redhat.com wrote:

 On Wed, Sep 10, 2014 at 04:44:01PM -0500, Jason Greathouse wrote:
 I'm trying to find a way to create a set of servers and attach a new
 volume to each server.
 I first tried to use block_device_mapping but that requires an
 existing
 snapshot or volume and the deployment would fail when Rackspace
 intermittently timed out trying to create the new volume from a
 snapshot.
 I'm now trying with 3 ResourceGroups: OS::Cinder::Volume to build volumes
 followed by OS::Nova::Server and then trying to attach the volumes
 with OS::Cinder::VolumeAttachment.

 Basically creating lots of resource groups for related things is the wrong
 pattern.  You need to create one nested stack template containing the
 related things (Server, Volume and VolumeAttachment in this case), and use
 ResourceGroup to multiply them as a unit.

 I answered a similar question here on the openstack general ML recently
 (which for future reference may be a better ML for usage questions like
 this, as it's not really development discussion):

 http://lists.openstack.org/pipermail/openstack/2014-September/009216.html

 Here's another example which I used in a summit demo, which I think
 basically does what you need?


 https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group

 Steve.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Jason Greathouse*
Sr. Systems Engineer

LeanKit - https://leankit.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in build

2014-09-11 Thread Solly Ross
Thanks!  ESLint looks interesting.  I'm curious to see what it
says about the Horizon source.  I'll keep it in mind for future
personal projects and the like.

Best Regards,
Solly Ross

- Original Message -
 From: Martin Geisler mar...@geisler.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, September 11, 2014 3:20:56 AM
 Subject: Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in build
 
 Solly Ross sr...@redhat.com writes:
 
 Hi,
 
 I recently began using ESLint for all my JavaScript linting:
 
   http://eslint.org/
 
 It has nice documentation, a normal license, and you can easily write
 new rules for it.
 
  P.S. Here's hoping that the JSHint devs eventually find a way to
  remove that line from the file -- according to
  https://github.com/jshint/jshint/issues/1234, not much of the original
  remains.
 
 I don't think it matters how much of the original code remains -- what
 matters is that any rewrite is a derived work. Otherwise Debian and
 others could have made the license pure MIT long ago.
 
 --
 Martin Geisler
 
 http://google.com/+MartinGeisler
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Gary Kotton


On 9/11/14, 4:30 PM, Sean Dague s...@dague.net wrote:

On 09/11/2014 09:09 AM, Gary Kotton wrote:
 
 
 On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:
 
 Sean Dague wrote:
 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.

 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think people need smaller
 areas of work, as Vish eloquently put it. I still hope that
refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should
address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split
is
 fully cleaned up and it becomes an option.
 
 How about we start to try and patch gerrit to provide +2 permissions for
 people who can be assigned 'driver core' status. This is something that is
 relevant to Nova and Neutron and I guess Cinder too.

If you think that's the right solution, I'd say go and investigate it
with folks that understand enough gerrit internals to be able to figure
out how hard it would be. Start a conversation in #openstack-infra to
explore it.

My expectation is that there is more complexity there than you give it
credit for. That being said one of the biggest limitations we've had on
gerrit changes is we've effectively only got one community member, Kai,
who does any of that. If other people, or teams, were willing to dig in
and own things like this, that might be really helpful.

What about what Radoslav suggested? Having a background task running -
that can set a flag indicating that the code has been approved by the
driver ‘maintainers’. This can be something that driver CI should run -
that is, driver code can only be approved if it has X +1’s from the driver
maintainers and a +1 from the driver CI.



   -Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Jeremy Stanley
On 2014-09-11 01:27:23 -0400 (-0400), Russell Bryant wrote:
[...]
 But seriously, we should probably put out a more official notice about
 this once Kilo opens up.

It's probably worth carrying in the release notes for all Juno
servers... This is the last release of OpenStack with official
support for Python 2.6-based platforms.

Of course we're still supporting it on the Juno stable branch for
its lifetime (probably something like a year depending on what the
stable branch managers feel they can provide), and in all involved
clients and libraries until Juno reaches end of support. So don't
get all excited that 2.6 is going away entirely in a couple
months.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Duncan Thomas
On 11 September 2014 15:35, James Bottomley
james.bottom...@hansenpartnership.com wrote:

 OK, so look at a concrete example: in 2002, the Linux kernel went with
 bitkeeper precisely because we'd reached the scaling limit of a single
 integration point, so we took the kernel from a single contributing team
 to a bunch of them.  This was expanded with git in 2005 and leads to the
 hundreds of contributing teams we have today.


One thing the kernel has that OpenStack doesn't, and that alters the way
this model plays out, is a couple of very strong, forthright and frank
personalities at the top who are pretty well respected. Both Andrew
and Linus (and others) regularly, if not frequently, rip into ideas
quite scathingly, even after those ideas have passed other barriers and
gauntlets, and just say no to things. OpenStack has nothing of this
sort, and there is no evidence that e.g. the TC can, should, or desires
to fill this role.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Mike Spreitzer
Steven Hardy sha...@redhat.com wrote on 09/11/2014 04:21:18 AM:

 On Wed, Sep 10, 2014 at 04:44:01PM -0500, Jason Greathouse wrote:
 I'm trying to find a way to create a set of servers and attach a new
 volume to each server.
 ...
 
 Basically creating lots of resource groups for related things is the wrong
 pattern.  You need to create one nested stack template containing the
 related things (Server, Volume and VolumeAttachment in this case), and use
 ResourceGroup to multiply them as a unit.
 
 I answered a similar question here on the openstack general ML recently
 (which for future reference may be a better ML for usage questions like
 this, as it's not really development discussion):
 
 
http://lists.openstack.org/pipermail/openstack/2014-September/009216.html
 
 Here's another example which I used in a summit demo, which I think
 basically does what you need?
 
 https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group

There is also an example of exactly this under review.  See 
https://review.openstack.org/#/c/97366/

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 16:20 +0100, Duncan Thomas wrote:
 On 11 September 2014 15:35, James Bottomley
 james.bottom...@hansenpartnership.com wrote:
 
  OK, so look at a concrete example: in 2002, the Linux kernel went with
  bitkeeper precisely because we'd reached the scaling limit of a single
  integration point, so we took the kernel from a single contributing team
  to a bunch of them.  This was expanded with git in 2005 and leads to the
  hundreds of contributing teams we have today.
 
 
 One thing the kernel has that OpenStack doesn't, and that alters the way
 this model plays out, is a couple of very strong, forthright and frank
 personalities at the top who are pretty well respected. Both Andrew
 and Linus (and others) regularly, if not frequently, rip into ideas
 quite scathingly, even after those ideas have passed other barriers and
 gauntlets, and just say no to things. OpenStack has nothing of this
 sort, and there is no evidence that e.g. the TC can, should, or desires
 to fill this role.

Linus is the court of last appeal.  It's already a team negotiation
failure if stuff bubbles up to him.  The somewhat abrasive response
you'll get if you're being stupid acts as a strong downward incentive on
the teams to sort out their own API squabbles *before* they get this
type of visibility.

The whole point of open source is aligning the structures with the
desire to fix it yourself.  In an ideal world, everything would get
sorted at the local level and nothing would bubble up.  Of course, the
world isn't ideal, so you need some court of last appeal, but it doesn't
have to be an individual ... it just has to be something that's
daunting, to encourage local settlement, and decisive.

Every process has to have something like this anyway.  If there's no
process way of sorting out intractable disputes, they go on for ever and
damage the project.

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Steven Hardy
On Thu, Sep 11, 2014 at 10:06:01AM -0500, Jason Greathouse wrote:
My mistake about the mailing list; the OpenStack Heat wiki page
(https://wiki.openstack.org/wiki/Heat) only lists the dev list. I will
make sure to ask future usage questions on the other one.

No worries, we should update the wiki by the sounds of it.

Since you're not the first person to ask this question this week, I wrote a
quick blog post with some more info:

http://hardysteven.blogspot.co.uk/2014/09/using-heat-resourcegroup-resources.html
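For the archives, the shape of the pattern is roughly this - a minimal
sketch in HOT syntax, not taken from the blog post (resource names,
image/flavor values and the volume size are illustrative assumptions):

    # server_with_volume.yaml - the unit to be multiplied
    heat_template_version: 2013-05-23
    parameters:
      image: {type: string}
      flavor: {type: string}
    resources:
      server:
        type: OS::Nova::Server
        properties:
          image: {get_param: image}
          flavor: {get_param: flavor}
      volume:
        type: OS::Cinder::Volume
        properties:
          size: 10   # GB, illustrative
      attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          instance_uuid: {get_resource: server}
          volume_id: {get_resource: volume}

    # parent template - multiplies the whole unit
    heat_template_version: 2013-05-23
    resources:
      server_group:
        type: OS::Heat::ResourceGroup
        properties:
          count: 3
          resource_def:
            type: server_with_volume.yaml
            properties:
              image: fedora-20
              flavor: m1.small

Each group member then gets its own Server, Volume and VolumeAttachment,
created and deleted as a unit.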

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Sep 11 1800 UTC

2014-09-11 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140911T18

P.S. I'm on vacation this week, so, Andrew Lazarev will chair the meeting.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Chris Friesen

On 09/11/2014 12:50 AM, Jesse Pretorius wrote:

On 10 September 2014 17:20, Chris Friesen chris.frie...@windriver.com
wrote:

I see that the OpenStack high availability guide is still
recommending the active/standby method of configuring RabbitMQ.

Has anyone tried using active/active with mirrored queues as
recommended by the RabbitMQ developers?  If so, what problems did
you run into?



I would recommend that you ask this question on the openstack-operators
list as you'll likely get more feedback.


Thanks for the suggestion, will do.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Ildikó Váncsa
Hi,

+1 from me too, thanks for all the hard work so far.

Best Regards,
Ildikó

-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Thursday, September 11, 2014 3:25 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

Hi,

Dina has been doing a great work and has been very helpful during the Juno 
cycle and her help is very valuable. She's been doing a lot of reviews and has 
been very active in our community.

I'd like to propose that we add Dina Belova to the ceilometer-core group, as 
I'm convinced it'll help the project.

Please, dear ceilometer-core members, reply with your votes!

--
Julien Danjou
// Free Software hacker
// http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Abel Lopez
Yes, not sure why the HA guide says that.
The only problems I've run into were around cluster upgrades. If you're
running 3.2+ you'll likely have a better experience.

Set ha_queues in all your configs, and list all your rabbit hosts (I don't
use a VIP, as heartbeats weren't working when I did this).
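For anyone finding this in the archives, that amounts to something like the
following - a sketch based on the Juno-era oslo.messaging rabbit options in
nova.conf (host names and the RabbitMQ policy are assumptions about your
cluster):

    [DEFAULT]
    # all cluster nodes listed directly, no VIP in front
    rabbit_hosts = rabbit1:5672,rabbit2:5672,rabbit3:5672
    # declare queues as mirrored/HA
    rabbit_ha_queues = True

Note that on RabbitMQ 3.x mirroring is controlled by policies rather than
queue arguments, so the cluster also needs something along the lines of:

    rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'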

On Wednesday, September 10, 2014, Chris Friesen chris.frie...@windriver.com
wrote:

 Hi,

 I see that the OpenStack high availability guide is still recommending the
 active/standby method of configuring RabbitMQ.

 Has anyone tried using active/active with mirrored queues as recommended
 by the RabbitMQ developers?  If so, what problems did you run into?

 Thanks,
 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Allow for per-subnet dhcp options

2014-09-11 Thread Jonathan Proulx
Hi All,

I'm hoping to get this blueprint
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
some love...seems it's been hanging around since January so my
assumption is it's not going anywhere.

As a private cloud operator I make heavy use of vlan based provider
networks to plug VMs into exiting datacenter networks.

Some of these are jumbo frame networks and some use standard 1500 MTUs,
so I really want to specify the MTU per subnet; there is currently no
way to do this.  I can set it globally in dnsmasq.conf or I can set it
per port using extra-dhcp-opt, neither of which really does what I need.
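For reference, the per-port workaround looks roughly like this with
python-neutronclient (a sketch; the credentials, the network ID and the
idea of sweeping every port on the jumbo network are my assumptions):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    jumbo_net_id = '...'  # the provider network carrying jumbo frames

    # Set DHCP option 26 (interface MTU; dnsmasq calls it "mtu") on
    # every port, since there is no per-subnet equivalent yet.
    for port in neutron.list_ports(network_id=jumbo_net_id)['ports']:
        neutron.update_port(port['id'], {
            'port': {'extra_dhcp_opts': [
                {'opt_name': 'mtu', 'opt_value': '9000'},
            ]}
        })

A per-subnet attribute would reduce all of this to a single property on
the subnet, which is what the blueprint asks for.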

Given that extra-dhcp-opt is implemented per port, it seems to me that
making a similar implementation per subnet would not be a difficult
task for someone familiar with the code.

I'm not that person but if you are, then you can be my Neutron hero
for the next release cycle :)

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Kevin Benton
Thanks. This is a good writeup.

Of course this all assumes there is consensus that we should proceed with
GBP, that we should continue by iterating the currently proposed design and
code, and that GBP should eventually become part of Neutron. These
assumptions may still be the real issues :-( .

Unfortunately I think this is the real root cause. Most of the people that
worked on GBP definitely want to see it merged into Neutron and are in
general agreement there. However, some of the other cores disagreed and now
GBP is sitting in limbo. IIUC, this thread was started just to get GBP to
some location where it can be developed and tested that isn't a big
string of rejected gerrit patches.

Does the above make some sense? What have I missed?

Option 1 is great, but I don't see how the same thing that happened in Juno
would be avoided.

Option 2 is also good, but that idea didn't seem to catch on. If this
option is on the table, this seems like the best way to go.

Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
regard to how the merging workflow would work.

Option 4 is unknown until the incubator details are hashed out.

Option 5 is stackforge. I see this as a better place just to do what is
already being done right now. You're right that patches would occur without
core reviewers, but that's essentially what's happening now since nothing
is getting merged.




On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura kuk...@noironetworks.com
wrote:


 On 9/10/14, 6:54 PM, Kevin Benton wrote:

 Being in the incubator won't help with this if it's a different repo as
 well.

 Agreed.

 Given the requirement for GBP to intercept API requests, the potential
 couplings between policy drivers, ML2 mechanism drivers, and even service
 plugins (L3 router), and the fact Neutron doesn't have a stable [service]
 plugin API, along with the goal to eventually merge GBP into Neutron, I'd
 rank the options as follows in descending order:

 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
 just like we had planned for Juno ;-) .

 2) Like 1, but with the code initially in a preview subtree to clarify
 its level of stability and support, and to facilitate packaging it as an
 optional component.

 3) Like 1, but merge to a feature branch in the neutron repo and iterate
 there.

 4) Develop in an official neutron-incubator repo, with neutron core
 reviews of each GBP patch.

 5) Develop in StackForge, without neutron core reviews.


 Here's how I see these options in terms of the various considerations that
 have come up during this discussion:

 * Options 1, 2 and 3 most easily support whatever coupling is needed with
 the rest of Neutron. Options 4 and 5 would sometimes require synchronized
 changes across repos since dependencies aren't in terms of stable
 interfaces.

 * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
 a fully supported Neutron feature, without loss of git history. Option 4
 would have some hope of eventually merging into the neutron repo due to the
 code having already had core reviews. With option 5, reviewing and merging
 a complete GBP implementation from StackForge into the neutron repo would
 be a huge effort, with significant risk that reviewers would want design
 changes not practical to make at that stage.

 * Options 1 and 2 take full advantage of existing review, CI, packaging
 and release processes and mechanisms. All the other options require extra
 work to put these in place.

 * Options 1 and 2 can easily make GBP consumable by early adopters through
 normal channels such as devstack and OpenStack distributions. The other
 options all require the operator or the packager to pull GBP code from a
 different source than the base Neutron code.

 * Option 1 relies on the historical understanding that new Neutron
 extension APIs are not initially considered stable, and incompatible
 changes can occur in future releases. Options 2, 3 and 4 make this
 explicit. Option 5 really has nothing to do with Neutron.

 * Option 5 allows rapid iteration by the GBP team, without waiting for
 core review. This is essential during experimentation and prototyping, but
 at least some participants consider the GBP implementation to be well
 beyond that phase.

 * Options 3, 4, and 5 potentially decouple the GBP release schedule from
 the Neutron release schedule. With options 1 or 2, GBP snapshots would be
 included in all normal Neutron releases. With any of the options, the GBP
 team, vendors, or distributions would be able to back-port arbitrary
 snapshots of GBP to a branch off the stable/juno branch (in the neutron
 repo itself or in a clone) to allow early adopters to use GBP with
 Juno-based OpenStack distributions.


 Does the above make some sense? What have I missed?

 Of course this all assumes there is consensus that we should proceed with
 GBP, that we should continue by iterating the currently proposed design and
 code, and that GBP 

Re: [openstack-dev] [Cinder] Request for J3 Feature Freeze Exception

2014-09-11 Thread Mike Perez
On 19:32 Fri 05 Sep , David Pineau wrote:
 So I asked Duncan what could be done, learned about the FFE, and I am
 now humbly asking you guys to give us a last chance to get in for
 Juno. I was told that if it was possible the last delay would be next
 week, and believe me, we're doing everything we can on our side to be
 able to meet that.

As given in the comments [1], there will be a better chance for an exception
with this after cert results are provided.

[1] - https://review.openstack.org/#/c/110236/

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-11 Thread Mike Perez
On 12:23 Tue 09 Sep , yunling wrote:
 Hi Cinder Folks, I would like to request an FFE for add reset-state
 function for backups [1][2]. The spec of add reset-state function for
 backups has been reviewed and merged [2]. These code changes have been
 well tested and are not very complex [3]. I would appreciate any
 consideration for an FFE. Thanks,

It looks like the current review has some comments that are waiting to be
addressed now.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Request for J3 Feature Freeze Exception

2014-09-11 Thread Duncan Thomas
Mike

This FFE request was withdrawn. I updated the etherpad but didn't mail
the list, sorry.

On 11 September 2014 18:07, Mike Perez thin...@gmail.com wrote:
 On 19:32 Fri 05 Sep , David Pineau wrote:
 So I asked Duncan what could be done, learned about the FFE, and I am
 now humbly asking you guys to give us a last chance to get in for
 Juno. I was told that if it was possible the last delay would be next
 week, and believe me, we're doing everything we can on our side to be
 able to meet that.

 As given in the comments [1], there will be a better chance for an exception
 with this after cert results are provided.

 [1] - https://review.openstack.org/#/c/110236/

 --
 Mike Perez

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Mandeep Dhami
I agree with Kevin. Any in-tree or in-incubator option would need core
review time, and the cores are already oversubscribed with Nova parity
issues (for Juno). So the only option for continuing to collaborate on
experimenting with policy-based networking on current OpenStack is
StackForge (option 5).

So the summary is: we develop in StackForge for the Juno code, and we
should keep our options open and review this as a community again during
the Kilo conference.

Regards,
Mandeep



On Thu, Sep 11, 2014 at 10:02 AM, Kevin Benton blak...@gmail.com wrote:

 Thanks. This is a good writeup.

 Of course this all assumes there is consensus that we should proceed
 with GBP, that we should continue by iterating the currently proposed
 design and code, and that GBP should eventually become part of Neutron.
 These assumptions may still be the real issues :-( .

 Unfortunately I think this is the real root cause. Most of the people that
 worked on GBP definitely want to see it merged into Neutron and are in
 general agreement there. However, some of the other cores disagreed and now
 GBP is sitting in limbo. IIUC, this thread was started just to get GBP to
 some location where it can be developed and tested that isn't a big
 string of rejected gerrit patches.

 Does the above make some sense? What have I missed?

 Option 1 is great, but I don't see how the same thing that happened in
 Juno would be avoided.

 Option 2 is also good, but that idea didn't seem to catch on. If this
 option is on the table, this seems like the best way to go.

 Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
 regard to how the merging workflow would work.

 Option 4 is unknown until the incubator details are hashed out.

 Option 5 is stackforge. I see this as a better place just to do what is
 already being done right now. You're right that patches would occur without
 core reviewers, but that's essentially what's happening now since nothing
 is getting merged.




 On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura kuk...@noironetworks.com
 wrote:


 On 9/10/14, 6:54 PM, Kevin Benton wrote:

 Being in the incubator won't help with this if it's a different repo as
 well.

 Agreed.

 Given the requirement for GBP to intercept API requests, the potential
 couplings between policy drivers, ML2 mechanism drivers, and even service
 plugins (L3 router), and the fact Neutron doesn't have a stable [service]
 plugin API, along with the goal to eventually merge GBP into Neutron, I'd
 rank the options as follows in descending order:

 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
 just like we had planned for Juno ;-) .

 2) Like 1, but with the code initially in a preview subtree to clarify
 its level of stability and support, and to facilitate packaging it as an
 optional component.

 3) Like 1, but merge to a feature branch in the neutron repo and iterate
 there.

 4) Develop in an official neutron-incubator repo, with neutron core
 reviews of each GBP patch.

 5) Develop in StackForge, without neutron core reviews.


 Here's how I see these options in terms of the various considerations
 that have come up during this discussion:

 * Options 1, 2 and 3 most easily support whatever coupling is needed with
 the rest of Neutron. Options 4 and 5 would sometimes require synchronized
 changes across repos since dependencies aren't in terms of stable
 interfaces.

 * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
 a fully supported Neutron feature, without loss of git history. Option 4
 would have some hope of eventually merging into the neutron repo due to the
 code having already had core reviews. With option 5, reviewing and merging
 a complete GBP implementation from StackForge into the neutron repo would
 be a huge effort, with significant risk that reviewers would want design
 changes not practical to make at that stage.

 * Options 1 and 2 take full advantage of existing review, CI, packaging
 and release processes and mechanisms. All the other options require extra
 work to put these in place.

 * Options 1 and 2 can easily make GBP consumable by early adopters
 through normal channels such as devstack and OpenStack distributions. The
 other options all require the operator or the packager to pull GBP code
 from a different source than the base Neutron code.

 * Option 1 relies on the historical understanding that new Neutron
 extension APIs are not initially considered stable, and incompatible
 changes can occur in future releases. Options 2, 3 and 4 make this
 explicit. Option 5 really has nothing to do with Neutron.

 * Option 5 allows rapid iteration by the GBP team, without waiting for
 core review. This is essential during experimentation and prototyping, but
 at least some participants consider the GBP implementation to be well
 beyond that phase.

 * Options 3, 4, and 5 potentially decouple the GBP release schedule from
 the Neutron release schedule. With options 1 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 11:14 AM, Gary Kotton wrote:
 
 
 On 9/11/14, 4:30 PM, Sean Dague s...@dague.net wrote:
 
 On 09/11/2014 09:09 AM, Gary Kotton wrote:


 On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:

 Sean Dague wrote:
 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.

 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think people need smaller
 areas of work, as Vish eloquently put it. I still hope that
 refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should
 address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split
 is
 fully cleaned up and it becomes an option.

 How about we start to try and patch gerrit to provide +2 permissions for
 people who can be assigned 'driver core' status. This is something that is
 relevant to Nova and Neutron and I guess Cinder too.

 If you think that's the right solution, I'd say go and investigate it
 with folks that understand enough gerrit internals to be able to figure
 out how hard it would be. Start a conversation in #openstack-infra to
 explore it.

 My expectation is that there is more complexity there than you give it
 credit for. That being said one of the biggest limitations we've had on
 gerrit changes is we've effectively only got one community member, Kai,
 who does any of that. If other people, or teams, were willing to dig in
 and own things like this, that might be really helpful.
 
 What about what Radoslav suggested? Having a background task running -
 that can set a flag indicating that the code has been approved by the
 driver ‘maintainers’. This can be something that driver CI should run -
 that is, driver code can only be approved if it has X +1’s from the driver
 maintainers and a +1 from the driver CI.

There is a ton of complexity and open questions with that approach as
well, largely, again because people are designing systems based on
gerrit from the hip without actually understanding gerrit.

If someone wants to devote time to that kind of system and architecture,
they should engage the infra team to understand what can and can't be
done here. And take that on as a Kilo cycle goal. It would be useful,
but there is no 'simply' about it.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Stephen Wong
I agree with Kevin. Like in Juno, we as a subteam will be shooting for
option 1 (again) for Kilo - ideally we can land in Kilo, and we will work
closely with the community to try to accomplish that. In the meantime, we
need a repo to iterate on our implementation, build packages (Juno-based)
for early adopters, and be as transparent as if the code were on gerrit.
With option 2 never picking up momentum when Bob suggested it on the ML,
option 3 being more of an idea discussed by several cores during the
mid-cycle meetup, and option 4 currently in a holding pattern without any
detail but tons of concerns raised on the ML --- option 5 (StackForge)
seems like the best available option at this point.

Thanks,
- Stephen

On Thu, Sep 11, 2014 at 10:02 AM, Kevin Benton blak...@gmail.com wrote:

 Thanks. This is a good writeup.

 Of course this all assumes there is consensus that we should proceed
 with GBP, that we should continue by iterating the currently proposed
 design and code, and that GBP should eventually become part of Neutron.
 These assumptions may still be the real issues :-( .

 Unfortunately I think this is the real root cause. Most of the people that
 worked on GBP definitely want to see it merged into Neutron and are in
 general agreement there. However, some of the other cores disagreed and now
 GBP is sitting in limbo. IIUC, this thread was started just to get GBP to
 some location where it can be developed and tested that isn't a big
 string of rejected gerrit patches.

 Does the above make some sense? What have I missed?

 Option 1 is great, but I don't see how the same thing that happened in
 Juno would be avoided.

 Option 2 is also good, but that idea didn't seem to catch on. If this
 option is on the table, this seems like the best way to go.

 Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
 regard to how the merging workflow would work.

 Option 4 is unknown until the incubator details are hashed out.

 Option 5 is stackforge. I see this as a better place just to do what is
 already being done right now. You're right that patches would occur without
 core reviewers, but that's essentially what's happening now since nothing
 is getting merged.




 On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura kuk...@noironetworks.com
 wrote:


 On 9/10/14, 6:54 PM, Kevin Benton wrote:

 Being in the incubator won't help with this if it's a different repo as
 well.

 Agreed.

 Given the requirement for GBP to intercept API requests, the potential
 couplings between policy drivers, ML2 mechanism drivers, and even service
 plugins (L3 router), and the fact Neutron doesn't have a stable [service]
 plugin API, along with the goal to eventually merge GBP into Neutron, I'd
 rank the options as follows in descending order:

 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
 just like we had planned for Juno ;-) .

 2) Like 1, but with the code initially in a preview subtree to clarify
 its level of stability and support, and to facilitate packaging it as an
 optional component.

 3) Like 1, but merge to a feature branch in the neutron repo and iterate
 there.

 4) Develop in an official neutron-incubator repo, with neutron core
 reviews of each GBP patch.

 5) Develop in StackForge, without neutron core reviews.


 Here's how I see these options in terms of the various considerations
 that have come up during this discussion:

 * Options 1, 2 and 3 most easily support whatever coupling is needed with
 the rest of Neutron. Options 4 and 5 would sometimes require synchronized
 changes across repos since dependencies aren't in terms of stable
 interfaces.

 * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
 a fully supported Neutron feature, without loss of git history. Option 4
 would have some hope of eventually merging into the neutron repo due to the
 code having already had core reviews. With option 5, reviewing and merging
 a complete GBP implementation from StackForge into the neutron repo would
 be a huge effort, with significant risk that reviewers would want design
 changes not practical to make at that stage.

 * Options 1 and 2 take full advantage of existing review, CI, packaging
 and release processes and mechanisms. All the other options require extra
 work to put these in place.

 * Options 1 and 2 can easily make GBP consumable by early adopters
 through normal channels such as devstack and OpenStack distributions. The
 other options all require the operator or the packager to pull GBP code
 from a different source than the base Neutron code.

 * Option 1 relies on the historical understanding that new Neutron
 extension APIs are not initially considered stable, and incompatible
 changes can occur in future releases. Options 2, 3 and 4 make this
 explicit. Option 5 really has nothing to do with Neutron.

 * Option 5 allows rapid iteration by the GBP team, without waiting for
 core review. This is essential during 

[openstack-dev] [Trove] Cluster implementation is grabbing instance's guts

2014-09-11 Thread Tim Simpson
Hi everyone,

I was looking through the clustering code today and noticed a lot of it is 
grabbing what I'd call the guts of the instance models code.

The best example is here:
https://github.com/openstack/trove/commit/06196fcf67b27f0308381da192da5cc8ae65b157#diff-a4d09d28bd2b650c2327f5d8d81be3a9R89

In the _all_instances_ready function, I would have expected 
trove.instance.models.load_any_instance to be called for each instance ID and 
its status to be checked.

Instead, the service_status is being read directly. That is a big mistake. 
For now it works, but in general it leaks the concern of what is an instance 
status? into code outside of the instance class itself.

For an example of why this is bad, look at the method 
_instance_ids_with_failures. The code is checking for failures by seeing if 
the service status is failed. What if the Nova server or Cinder volume has 
tanked instead? The code won't work as expected.

It could be we need to introduce another status besides BUILD to instance 
statuses, or we need to introduce a new internal property to the SimpleInstance 
base class we can check. But whatever we do we should add this extra logic to 
the instance class itself rather than put it in the clustering models code.
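To make that concrete, the cluster code would end up looking something
like this - a sketch only; load_any_instance is from the commit above,
but the status constants and overall shape are my assumptions, not the
actual Trove API:

    from trove.instance import models as inst_models

    def _all_instances_ready(context, instance_ids):
        # Ask the instance model for its overall status, which can fold
        # in the Nova server and Cinder volume states as well as the
        # guest service status.
        for instance_id in instance_ids:
            instance = inst_models.load_any_instance(context, instance_id)
            if instance.status != 'ACTIVE':   # assumed status constant
                return False
        return True

    def _instance_ids_with_failures(context, instance_ids):
        # "Failed" is now whatever the instance model says it is.
        return [i for i in instance_ids
                if inst_models.load_any_instance(context, i).status == 'ERROR']

That keeps the definition of ready/failed in one place, so introducing a
new status besides BUILD later would only touch the instance class.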

This is a minor nitpick but I think we should fix it before too much time 
passes.

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Dan Prince
On Thu, 2014-09-04 at 11:24 +0100, Daniel P. Berrange wrote:
 Position statement
 ==
 
 Over the past year I've increasingly come to the conclusion that
 Nova is heading for (or probably already at) a major crisis. If
 steps are not taken to avert this, the project is likely to loose
 a non-trivial amount of talent, both regular code contributors and
 core team members. That includes myself. This is not good for
 Nova's long term health and so should be of concern to anyone
 involved in Nova and OpenStack.
 
 For those who don't want to read the whole mail, the executive
 summary is that the nova-core team is an unfixable bottleneck
 in our development process with our current project structure.
 The only way I see to remove the bottleneck is to split the virt
 drivers out of tree and let them all have their own core teams
 in their area of code, leaving current nova core to focus on
 all the common code outside the virt driver impls. I nonetheless urge
 people to read the whole mail.
 


I've always referred to the virt/driver.py API as an internal API
meaning there are no guarantees about it being preserved across
releases. I'm not saying this is correct... just that it is what we've
got.  While OpenStack attempts to do a good job at stabilizing its
public API's we haven't done the same for internal API's. It is actually
quite painful to be out of tree at this point as I've seen with the
Ironic driver being out of the Nova tree. (really glad that is back in
now!)

So because we haven't designed things to be split out in this regard we
can't just go and do it. 

I tinkered with some numbers... not sure if this helps or hurts my
stance but here goes. By my calculation this is the number of commits
we've made that touched each virt driver tree for the last 3 releases
plus stuff done to-date in Juno.

Created using a command like this in each virt directory for each release:

  git log origin/stable/havana..origin/stable/icehouse --no-merges --pretty=oneline . | wc -l

essex - folsom:

 baremetal: 26
 hyperv: 9
 libvirt: 222
 vmwareapi: 18
 xenapi: 164
   * total for above: 439

folsom - grizzly:

 baremetal: 83
 hyperv: 58
 libvirt: 254
 vmwareapi: 59
 xenapi: 126
   * total for above: 580

grizzly - havana:

 baremetal: 48
 hyperv: 55
 libvirt: 157
 vmwareapi: 105
 xenapi: 123
   * total for above: 488

havana - icehouse:

 baremetal: 45
 hyperv: 42
 libvirt: 212
 vmwareapi: 121
 xenapi: 100
   * total for above: 520

icehouse - master:

 baremetal: 26
 hyperv: 32
 libvirt: 188
 vmwareapi: 121
 xenapi: 71
   * total for above: 438

---

A couple of things jump out at me from the numbers:

 - drivers that are being deprecated (baremetal) still have lots of
changes. Some of these changes are valid bug fixes for the driver but a
majority of them are actually related to internal cleanups and interface
changes. This goes towards the fact that Nova isn't mature enough to do
a split like this yet.

 - the number of commits landed isn't growing *that* much across releases
in the virt driver trees. Presumably we think we were doing a better job
2 years ago? But the number of changes in the virt trees is largely the
same... perhaps this is because people aren't submitting stuff because
they are frustrated though?

---

For comparison here are the total number of commits for each Nova
release (includes the above commits):

essex - folsom: 1708
folsom - grizzly: 2131
grizzly - havana: 2188
havana - icehouse: 1696
icehouse - master: 1493

---

So say around 30% of the commits for a given release touch the virt
drivers themselves (e.g. 438 of 1493, about 29%, for icehouse - master)...
many of them aren't specifically related to the virt drivers, but rather
are just general Nova internal cleanups because the interfaces aren't
stable.

And while splitting out the Nova virt drivers might help some, I'm not sure
it helps the general Nova issue: we have more reviews, with fewer
of the good ones landing. Nova is a weird beast at the moment, and just
splitting things like this is probably going to harm as much as it helps
(like we saw with Ironic) unless we stabilize the APIs... and even then
I'm skeptical of death by a million tiny sub-projects. I'm just not
convinced this is the number #1 pain point around Nova reviews. What
about the other 70%?

For me a lot of the frustration with reviews is around test/gate time,
pushing things through, rechecks, etc... and if we break something it
takes just as much time to get the revert in. The last point (the
ability to revert code quickly) is a really important one as it
sometimes takes days to get a simple (obvious) revert landed. This
leaves groups like TripleO who have their own CI and 3rd party testing
systems which also capable of finding many critical issues in the
difficult position of having to revert/cherry pick critical changes for
days at a time in order to keep things running.

Maybe I'm impatient (I totally am!) but I see much of the review
slowdown as a result of the feedback loop times increasing over 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Armando M.
On 10 September 2014 22:23, Russell Bryant rbry...@redhat.com wrote:
 On 09/10/2014 10:35 PM, Armando M. wrote:
 Hi,

 I devoured this thread; it was so interesting and full of
 insights. It's not news that we've been pondering this in the
 Neutron project for the past cycle or so.

 Likely, this effort is going to take more than two cycles, and would
 require a very focused team of people working closely together to
 address this (most likely the core team members plus a few other folks
 interested).

 One question I was unable to get a clear answer to was: what happens to
 existing/new bug fixes and features? Would the codebase go into lockdown
 mode, i.e. not accepting anything else that isn't specifically
 targeting this objective? Just using NFV as an example, I can't
 imagine having changes supporting NFV still being reviewed and merged
 while this process takes place...it would be like shooting at a moving
 target! If we did go into lockdown mode, what happens to all the
 corporate-backed agendas that aim at delivering new value to
 OpenStack?

 Yes, I imagine a temporary slow-down on new feature development makes
 sense.  However, I don't think it has to be across the board.  Things
 should be considered case by case, like usual.

Aren't we trying to move away from the 'usual'? Considering things on
a case by case basis still requires review cycles, etc. Keeping the
status quo would mean prolonging the exact pain we're trying to
address.


 For example, a feature that requires invasive changes to the virt driver
 interface might have a harder time during this transition, but a more
 straight forward feature isolated to the internals of a driver might be
 fine to let through.  Like anything else, we have to weight cost/benefit.

 Should we relax what goes into the stable branches, i.e. considering
 having  a Juno on steroids six months from now that includes some of
 the features/fixes that didn't land in time before this process kicks
 off?

 No ... maybe I misunderstand the suggestion, but I definitely would not
 be in favor of a Juno branch with features that haven't landed in master.


I was thinking of the bold move of having Kilo (and beyond)
developments solely focused on this transition. Until this is
complete, nothing would be merged that does not directly pertain to this
objective. At the same time, we'd still want pending features/fixes
(and possibly new features) to land somewhere stable-ish. I fear that
doing so in master, while stuff is churned up and moved out into
external repos, will make this whole task harder than it already is.

Thanks,
Armando

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Chris Friesen

On 09/11/2014 12:02 PM, Dan Prince wrote:


Maybe I'm impatient (I totally am!) but I see much of the review
slowdown as a result of the feedback loop times increasing over the
years. OpenStack has some really great CI and testing but I think our
focus on not breaking things actually has us painted into a corner. We
are losing our agility and the review process is paying the price. At
this point I think splitting out the virt drivers would be more of a
distraction than a help.


I think the only solution to feedback loop times increasing is to scale 
the review process, which I think means giving more people 
responsibility for a smaller amount of code.


I don't think it's strictly necessary to split the code out into a 
totally separate repo, but I do think it would make sense to have 
changes that are entirely contained within a virt driver be reviewed 
only by developers of that virt driver rather than requiring review by 
the project as a whole.  And they should only have to pass a subset of 
the CI testing--that way they wouldn't be held up by gating bugs in 
other areas.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-11 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
 On 09/10/2014 03:45 PM, Gordon Sim wrote:
  On 09/10/2014 01:51 PM, Thierry Carrez wrote:
  I think we do need, as Samuel puts it, some sort of durable
  message-broker/queue-server thing. It's a basic application building
  block. Some claim it's THE basic application building block, more useful
  than database provisioning. It's definitely a layer above pure IaaS, so
  if we end up splitting OpenStack into layers this clearly won't be in
  the inner one. But I think IaaS+ basic application building blocks
  belong in OpenStack one way or another. That's the reason I supported
  Designate (everyone needs DNS) and Trove (everyone needs DBs).
 
  With that said, I think yesterday there was a concern that Zaqar might
  not fill the some sort of durable message-broker/queue-server thing
  role well. The argument goes something like: if it was a queue-server
  then it should actually be built on top of Rabbit; if it was a
  message-broker it should be built on top of postfix/dovecot; the current
  architecture is only justified because it's something in between, so
  it's broken.
  
  What is the distinction between a message broker and a queue server? To
  me those terms both imply something broadly similar (message broker
  perhaps being a little bit more generic). I could see Zaqar perhaps as
  somewhere between messaging and data-storage.
 
 I agree with Gordon here. I really don't know how to say this without
 creating more confusion. Zaqar is a messaging service. Messages are the
 most important entity in Zaqar. This, however, does not forbid anyone
 from using Zaqar as a queue. It has the required semantics; it guarantees
 FIFO and other queuing-specific patterns. This doesn't mean Zaqar is
 trying to do something outside its scope; it comes for free.
 

It comes with a huge cost actually, so saying it comes for free is a
misrepresentation. It is a side effect of developing a superset of
queueing. But that superset is only useful to a small number of your
stated use cases. Many of your use cases (including the one I've been
involved with, Heat pushing metadata to servers) are entirely served by
the much simpler, much lighter weight, pure queueing service.
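
To make the semantics in question concrete: below is a rough sketch of
the post/claim/delete flow against a Zaqar v1 endpoint over plain HTTP.
The endpoint, project id, and queue name are placeholders, and the exact
payloads should be double-checked against the v1 API docs - this is an
illustration, not a reference client.

import json
import uuid

import requests

BASE = 'http://localhost:8888/v1/queues/demo'  # placeholder endpoint
HEADERS = {
    'Client-ID': str(uuid.uuid4()),  # v1 expects a per-client UUID
    'X-Project-Id': 'demo',          # queues are isolated per tenant
    'Content-Type': 'application/json',
}

# Producer: enqueue one message with a 5-minute TTL.
requests.post(BASE + '/messages', headers=HEADERS,
              data=json.dumps([{'ttl': 300, 'body': {'event': 'ping'}}]))

# Consumer: claim up to 10 messages so no other worker sees them,
# then delete (ack) each one once it has been processed.
resp = requests.post(BASE + '/claims?limit=10', headers=HEADERS,
                     data=json.dumps({'ttl': 300, 'grace': 60}))
if resp.status_code == 201:  # 204 would mean the queue was empty
    for msg in resp.json():
        print(msg['body'])
        requests.delete('http://localhost:8888' + msg['href'],
                        headers=HEADERS)

Note that the claimed message's href already carries the claim id, which
is what gives you the per-consumer exclusivity being discussed here.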

 Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
 to optimize Zaqar for delivering messages and supporting different
 messaging patterns.
 

Awesome! Just please don't expect people to get excited about it for
the lighter weight queueing workloads that you've claimed as use cases.

I totally see Horizon using it to keep events for users. I see Heat
using it for stack events as well. I would bet that Trove would benefit
from being able to communicate messages to users.

But I think in between Zaqar and the backends will likely be a lighter
weight queue-only service that the users can just subscribe to when they
don't want an inbox. And I think that lighter weight queue service is
far more important for OpenStack than the full blown random access
inbox.

I think the reason such a thing has not appeared is because we were all
sort of running into "but Zaqar is already incubated". Now that we've
fleshed out the difference, I think those of us that need a lightweight
multi-tenant queue service should add it to OpenStack.  Separately. I hope
that doesn't offend you and the rest of the excellent Zaqar developers. It
is just a different thing.

 Should we remove all the semantics that allow people to use Zaqar as a
 queue service? I don't think so either. Again, the semantics are there
 because Zaqar is using them to do its job. Whether other folks may/may
 not use Zaqar as a queue service is out of our control.
 
 This doesn't mean the project is broken.
 

No, definitely not broken. It just isn't actually necessary for many of
the stated use cases.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-11 Thread Devananda van der Veen
On Wed, Sep 10, 2014 at 6:09 PM, Kurt Griffiths
kurt.griffi...@rackspace.com wrote:
 On 9/10/14, 3:58 PM, Devananda van der Veen devananda@gmail.com
 wrote:

I'm going to assume that, for these benchmarks, you configured all the
services optimally.

 Sorry for any confusion; I am not trying to hide anything about the setup.
 I thought I was pretty transparent about the way uWSGI, MongoDB, and Redis
 were configured. I tried to stick to mostly default settings to keep
 things simple, making it easier for others to reproduce/verify the results.

 Is there further information about the setup that you were curious about
 that I could provide? Was there a particular optimization that you didn’t
 see that you would recommend?


Nope.

I'm not going to question why you didn't run tests
with tens or hundreds of concurrent clients,

 If you review the different tests, you will note that a couple of them
 used at least 100 workers. That being said, I think we ought to try higher
 loads in future rounds of testing.


Perhaps I misunderstand what "2 processes with 25 gevent workers"
means - I think this means you have two _processes_ which are using
greenthreads and eventlet, and so each of those two python processes
is swapping between 25 coroutines. From a load generation standpoint,
this is not the same as having 100 concurrent client _processes_.
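
For what it's worth, here is a minimal sketch of the distinction I am
drawing - two OS processes, each multiplexing 25 greenlets over a single
thread. Everything here is illustrative; client() is a hypothetical
stand-in for one benchmark request.

from multiprocessing import Process

import gevent


def client(worker_id):
    gevent.sleep(0.01)  # stand-in for one benchmark HTTP request


def process_main():
    # 25 greenlets cooperatively scheduled on ONE thread; they only
    # switch at yield points such as gevent.sleep or socket I/O.
    gevent.joinall([gevent.spawn(client, i) for i in range(25)])


if __name__ == '__main__':
    procs = [Process(target=process_main) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

The 50 greenlets can never exert more concurrent pressure than two
runnable threads allow, which is why this is not equivalent to 100
independent client processes.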

or why you only ran the
tests for 10 seconds.

 In Round 1 I did mention that I wanted to do a follow-up with a longer
 duration. However, as I alluded to in the preamble for Round 2, I kept
 things the same for the redis tests to compare with the mongo ones done
 previously.

 We’ll increase the duration in the next round of testing.


Sure - consistency between tests is good. But I don't believe that a
10-second benchmark is ever enough to suss out service performance.
Lots of things only appear after high load has been applied for a
period of time as, e.g., caches fill up, though this leads to my next
point below...

Instead, I'm actually going to question how it is that, even with
relatively beefy dedicated hardware (128 GB RAM in your storage
nodes), Zaqar peaked at around 1,200 messages per second.

 I went back and ran some of the tests and never saw memory go over ~20M
 (as observed with redis-top) so these same results should be obtainable on
 a box with a lot less RAM.

Whoa. So, that's a *really* important piece of information which was,
afaict, missing from your previous email(s). I hope you can understand
how, with the information you provided (the Redis server has 128GB of
RAM), I was shocked at the low performance.

 Furthermore, the tests only used 1 CPU on the
 Redis host, so again, similar results should be achievable on a much more
 modest box.

You described fairly beefy hardware but didn't utilize it fully -- I
was expecting your performance test to attempt to stress the various
components of a Zaqar installation and, at least in some way, attempt
to demonstrate what the capacity of a Zaqar deployment might be on the
hardware you have available. Thus my surprise at the low numbers. If
that wasn't your intent (and given the CPU/RAM usage your tests
achieved, it's not what you achieved) then my disappointment in those
performance numbers is unfounded.

But I hope you can understand, if I'm looking at a service benchmark
to gauge how well that service might perform in production, seeing
expensive hardware perform disappointingly slowly is not a good sign.


 FWIW, I went back and ran a couple scenarios to get some more data points.
 First, I did one with 50 producers and 50 observers. In that case, the
 single CPU on which the OS scheduled the Redis process peaked at 30%. The
 second test I did was with 50 producers + 5 observers + 50 consumers
 (which claim messages and delete them rather than simply page through
 them). This time, Redis used 78% of its CPU. I suppose this should not be
 surprising because the consumers do a lot more work than the observers.
 Meanwhile, load on the web head was fairly high; around 80% for all 20
 CPUs. This tells me that python and/or uWSGI are working pretty hard to
 serve these requests, and there may be some opportunities to optimize that
 layer. I suspect there are also some opportunities to reduce the number of
 Redis operations and roundtrips required to claim a batch of messages.
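
For illustration, the kind of roundtrip reduction being hinted at here
is what Redis pipelining gives you: queue N operations client-side and
flush them in a single network exchange. The list-per-queue layout below
is purely a stand-in, not Zaqar's actual Redis schema.

import redis

r = redis.StrictRedis(host='localhost', port=6379)


def claim_batch(queue, n=10):
    # One network roundtrip for n pops, instead of n roundtrips.
    pipe = r.pipeline()
    for _ in range(n):
        pipe.lpop(queue)
    return [m for m in pipe.execute() if m is not None]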


OK - those resource usages sound better. At least you generated enough
load to saturate the uWSGI process CPU, which is a good point at which
to examine the performance of the system.

At that peak, what was the:
- average msgs/sec
- min/max/avg/stdev time to [post|get|delete] a message
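
Those summary statistics are cheap to produce if the load generator
records raw per-request latencies; a trivial sketch, assuming a list of
latencies in seconds:

import math


def summarize(latencies):
    n = len(latencies)
    mean = sum(latencies) / n
    stdev = math.sqrt(sum((x - mean) ** 2 for x in latencies) / n)
    return {'min': min(latencies), 'max': max(latencies),
            'avg': mean, 'stdev': stdev}


# e.g. summarize([0.012, 0.015, 0.011, 0.042, 0.013])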

 The other thing to consider is that in these first two rounds I did not
 test increasing amounts of load (number of clients performing concurrent
 requests) and graph that against latency and throughput. Out of curiosity,
 I just now did a quick test to compare the messages enqueued with 50
 producers + 5 observers + 50 consumers vs. adding another 50 producer
 clients 

Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
QA-agree.

--
nurla

On Thu, Sep 11, 2014 at 6:28 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

  Mike, I just want to say: if a feature isn't ready for production use
 and we have no other choice, we should provide detailed limitations and
 examples of proper use.
 Fully agree, such features should become experimental. We should have this
 information in release notes.

 Basically, Patching of OpenStack becomes one such feature, unfortunately. We still
 have bugs, and there is no guarantee that we won't find more.

 So, let's add the experimental tag to issues around Zabbix & Patching of
 OpenStack.
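
Mechanically, excluding such bugs from the count could look like the
sketch below: query Launchpad for open bugs that do not carry the
experimental tag. The project name and tag are assumptions, and the
'-tag' exclusion syntax should be verified against the launchpadlib
docs before relying on it.

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('hcf-count', 'production')
project = lp.projects['fuel']  # assumed project name
tasks = project.searchTasks(
    status=['New', 'Confirmed', 'Triaged', 'In Progress'],
    tags=['-experimental'],  # exclude experimental-feature bugs
    tags_combinator='All')
print('Bugs counted toward HCF:', len(list(tasks)))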

 On Thu, Sep 11, 2014 at 6:19 PM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

  Mike, I just want to say: if a feature isn't ready for production use
 and we have no other choice, we should provide detailed limitations and
 examples of proper use.

 On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala 
 tnapier...@mirantis.com wrote:


 On 11 Sep 2014, at 09:19, Mike Scherbakov mscherba...@mirantis.com
 wrote:

  Hi all,
  what about using an experimental tag for experimental features?
 
  After we implemented feature groups [1], we can divide our features;
 complex features, or those which don't get enough QA resources in
 the dev cycle, can be declared experimental. It would mean that those are
 not production-ready features.
  Letting them go live in experimental mode allows early adopters to
 give them a try and bring feedback to the development team.
 
  I think we should not count bugs for HCF criteria if they affect only
 experimental feature(s). At the moment, we have Zabbix as an experimental
 feature, and Patching of OpenStack [2] is under consideration: if today QA
 doesn't approve it as ready for production use, we have no other
 choice. All deadlines passed, and we need to get 5.1 finally out.
 
  Any objections / other ideas?

 +1

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-11 Thread Roman Vyalov
Mike,
2 jobs for Icehouse and Juno mean 2 different repositories with packages for
Fuel 6.0. This can be a problem for the current osci workflow.
For example: we need to build new packages. Into which repository must we put
them? Icehouse, Juno, or both?
What if new packages break the Icehouse repository but are required for Juno?

On Wed, Sep 10, 2014 at 12:39 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Aleksandra,
 you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but 4
 fuel-library tests should happen right after we create stable/5.1. Also,
 for Fuel CI for OSTF - I don't think it's actually necessary to support
 5.0 envs.

 Your questions:

1. Create jobs for both Icehouse and Juno, but it doesn't make sense
to do staging for Juno till it starts to pass deployment in HA mode. Once
it passes deployment in HA, staging should be enabled. Then, once it passes
OSTF - we extend the criteria and pass only those mirrors which also pass
the OSTF phase
2. Once Juno starts to pass BVT with OSTF check enabled, I think we
can disable Icehouse checks. Not sure about fuel-library tests on Fuel CI
with Icehouse - we might want to continue using them.

 Thanks,

 On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova 
 afedor...@mirantis.com wrote:

  Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
 Icehouse packages; 2 non-voting, with Juno packages.
  Then, I'd suggest creating an ISO with 2 releases (Icehouse, Juno)
 actually before Juno becomes stable. We will be able to run 2 sets of BVTs
 (against Icehouse and Juno), and it means that we will be able to see
 almost immediately if something in nailgun/astute/puppet integration broke.
 For Juno builds it's going to be all red initially.

 Let me rephrase:

 We keep one Fuel master branch for two OpenStack releases. And we make
 sure that Fuel master code is compatible with both of them. And we use the
 current release (Icehouse) as a reference for test results of the upcoming
 release, till we obtain a stable enough reference point in Juno itself.
 Moreover, we'd like to have OSTF code running on all previous Fuel releases.

 Changes to CI workflow look as follows:

 Nightly builds:
   1) We build two mirrors: one for Icehouse and one for Juno.
   2) From each mirror we build Fuel ISO using exactly the same fuel
 master branch code.
   3) Then we run BVT tests on both (using the same fuel-main code for
 system tests).
   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
 failed Juno tests) onto Fuel CI.

 On Fuel CI we should run:
   - 4 fuel-library tests (revert master node, inject fuel-library code in
 master node and run deployment):
 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
 Juno tests
   - 5 OSTF tests (revert deployed environment, inject OSTF code into
 master node, run OSTF):
 voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
 master/Juno
   - other tests, which don't use prebuilt environment, work as before

 The major action point here would be OSTF tests, as we don't yet have a
 working implementation of injecting OSTF code into a deployed environment.
 And we don't run any tests on old environments.
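
To make the gating logic of the nightly flow explicit, here is a
schematic, runnable sketch; every build/test function is a hypothetical
stand-in for an existing Jenkins job, and only the control flow is the
point.

RELEASES = ('icehouse', 'juno')


def build_mirror(release):
    return 'mirror-%s' % release   # stand-in for step 1


def build_iso(mirror):
    return 'iso-from-%s' % mirror  # stand-in for step 2


def run_bvt(iso, release):
    return release == 'icehouse'   # stand-in for step 3; Juno still red


def nightly():
    passed = {}
    for release in RELEASES:
        iso = build_iso(build_mirror(release))
        passed[release] = run_bvt(iso, release)
    # Step 4: only the Icehouse result gates deployment; both ISO
    # images go to Fuel CI even while the Juno BVT is failing.
    if passed['icehouse']:
        for release in RELEASES:
            print('deploying %s ISO to Fuel CI' % release)


nightly()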


 Questions:

 1) How should we test mirrors?

 Current master mirrors go through the 4-hour test cycle involving a Fuel
 ISO build:
   1. we build temporary mirror
   2. build custom iso from it
   3. run two custom bvt jobs
   4. if they pass, we move the mirror to stable and switch to it for our
 primary fuel_master_iso

 Should we test only Icehouse mirrors, or both while again ignoring failed
 BVTs for Juno? Maybe we should enable these tests only later in the release
 cycle, say, after SCF?

 2) It is not clear to me when and how we will switch from supporting two
 releases back to one.
 Should we add one more milestone to our release process - a "Switching
 point" when we disable and remove Icehouse tasks and move to Juno
 completely? I guess it should happen before the next SCF?



 On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

  What we need to achieve that is to have 2 build series based on Fuel
 master: one with Icehouse packages, and one with Juno, and, as Mike
 proposed, keep our manifests backwards compatible with Icehouse.
 Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting,
 with Icehouse packages; 2 non-voting, with Juno packages.

 Then, I'd suggest creating an ISO with 2 releases (Icehouse, Juno)
 actually before Juno becomes stable. We will be able to run 2 sets of BVTs
 (against Icehouse and Juno), and it means that we will be able to see
 almost immediately if something in nailgun/astute/puppet integration broke.
 For Juno builds it's going to be all red initially.

 Another suggestion would be to lower the bar for a green result in BVTs for
 Juno: first, when it passes deployment; and then, when it finally passes OSTF.

 I'd like to hear QA & DevOps opinions on all the above. Immediately we
 would need 

[openstack-dev] [qa] Tempest Bug triage

2014-09-11 Thread David Kranz
So we had a Bug Day this week and the results were a bit disappointing 
due to lack of participation. We went from 124 New bugs to 75. There 
were also many cases where bugs referred to logs that no longer existed. 
This suggests that we really need to keep up with bug triage in real 
time. Since bug triage should involve the Core review team, we propose 
to rotate the responsibility of triaging bugs weekly. I put up an 
etherpad here https://etherpad.openstack.org/p/qa-bug-triage-rotation 
and I hope the tempest core review team will sign up. Given our size, 
this should involve signing up once every two months or so. I took next 
week.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Zane Bitter

On 04/09/14 08:14, Sean Dague wrote:


I've been one of the consistent voices concerned about a hard
requirement on adding NoSQL into the mix. So I'll explain that thinking
a bit more.

I feel like, when the TC makes an integration decision, this has
previously been about evaluating the project applying for integration
and whether it met some specific criteria it was told about some time in
the past. I think that's the wrong approach. It's a locally optimized
approach that fails to ask the more interesting question.

Is OpenStack better as a whole if this is a mandatory component of
OpenStack? Better being defined as technically better (more features,
less janky code work arounds, less unexpected behavior from the stack).
Better from the sense of easier or harder to run an actual cloud by our
Operators (taking into account what kinds of moving parts they are now
expected to manage). Better from the sense of a better user experience
in interacting with OpenStack as whole. Better from a sense that the
OpenStack release will experience fewer bugs, fewer unexpected
cross-project interactions, and a greater overall feel of consistency so
that the OpenStack API feels like one thing.

https://dague.net/2014/08/26/openstack-as-layers/


I don't want to get off-topic here, but I want to state before this 
becomes the de-facto starting point for a layering discussion that I 
don't accept this model at all. It is not based on any analysis 
whatsoever but appears to be entirely arbitrary - a collection of 
individual prejudices arranged visually.


On a hopefully more constructive note, I believe there are at least two 
analyses that _would_ produce interesting data here:


1) Examine the dependencies, both hard and optional, between projects 
and enumerate the things you lose when ignoring each optional one.
2) Analyse projects based on the type of user consuming the service - 
e.g. Nova is mostly used (directly or indirectly via e.g. Heat and/or 
Horizon) by actual, corporeal persons, while Zaqar is used by both 
persons (to set up queues) and services (which actually send and receive 
messages) - of both OpenStack and applications. I believe, BTW that this 
analysis will uncover a lot of missing features in Keystone[1].


What you can _not_ produce is a linear model of the different types of 
clouds for different use cases, because different organisations have 
wildly differing needs.



One of the interesting qualities of Layers 1 & 2 is they all follow an
AMQP + RDBMS pattern (excepting swift). You can have a very effective
IaaS out of that stack. They are the things that you can provide pretty
solid integration testing on (and if you look at where everything stood
before the new TC mandates on testing / upgrade that was basically what
was getting integration tested). (Also note, I'll accept Barbican is
probably in the wrong layer, and should be a Layer 2 service.)


Swift is the current exception here, but one could argue, and people 
have[2], that Swift is also the only project that actually conforms to 
our stated design tenets for OpenStack. I'd struggle to tell the Zaqar 
folks they've done the Wrong Thing... especially when abandoning the 
RDBMS driver was done largely at the direction of the TC iirc.


Speaking of Swift, I would really love to see it investigated as a 
potential storage backend for Zaqar. If it proves to have the right 
guarantees (and durability is the crucial one, so it sounds promising) 
then that has the potential to smooth over a lot of the deployment problem.
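
Purely to illustrate the idea, one conceivable layout would be a
per-queue Swift container holding one object per message, named by a
zero-padded timestamp so container listings come back in FIFO order.
The connection parameters below are placeholders, and this is
speculation about a possible driver, not a description of anything
Zaqar does today.

import json
import time

from swiftclient import client as swift

conn = swift.Connection(authurl='http://localhost:5000/v2.0',
                        user='demo', key='secret', tenant_name='demo',
                        auth_version='2.0')  # placeholder credentials

QUEUE = 'zaqar_queue_demo'
conn.put_container(QUEUE)


def post_message(body, ttl=300):
    # Zero-padded timestamps sort lexicographically, so listings are
    # effectively FIFO-ordered.
    name = '%020.6f' % time.time()
    conn.put_object(QUEUE, name, json.dumps({'ttl': ttl, 'body': body}),
                    content_type='application/json')


def list_messages(limit=10):
    _, objects = conn.get_container(QUEUE, limit=limit)
    for obj in objects:
        _, raw = conn.get_object(QUEUE, obj['name'])
        yield json.loads(raw)

Durability would come from Swift's replication; the open question is
whether claim semantics could be built on top of eventually consistent
listings.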



While large shops can afford to have a dedicated team to figure out how
to make mongo or redis HA, provide monitoring, have a DR plan for when a
hurricane requires them to flip datacenters, that basically means
OpenStack heads further down the path of "only for the big folks". I
don't want OpenStack to be only for the big folks, I want OpenStack to
be for all sized folks. I really do want to have all the local small
colleges around here have OpenStack clouds, because it's something that
people believe they can do and manage. I know the people that work in
these places; they all come out to the LUG I run. We've talked about
this. OpenStack is basically seen as too complex for them to use as it
stands, and that pains me a ton.


This is a great point, and one that we definitely have to keep in mind.

It's also worth noting that small organisations get the most 
benefit. Rather than having to stand up a cluster of reliable message 
brokers (large organisations are much more likely to need this kind of 
flexibility anyway) - potentially one cluster per application - they can 
have their IT department deploy e.g. a single Redis cluster and have 
messaging handled for every application in their cloud with all the 
benefits of multitenancy.


Part of the move to the cloud is inevitably going to mean organisational 
changes in a lot of places, where the operations experts will 
increasingly focus on maintaining the cloud itself, rather 
