Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread GHANSHYAM MANN
+1. That will also be helpful for APIs coming up with microversions, like Nova.

On Fri, Aug 29, 2014 at 11:56 PM, Sean Dague  wrote:

> On 08/29/2014 10:19 AM, David Kranz wrote:
> > While reviewing patches for moving response checking to the clients, I
> > noticed that there are places where client methods do not return any
> > value. This is usually, but not always, a delete method. IMO, every rest
> > client method should return at least the response. Some services return
> > just the response for delete methods and others return (resp, body). Does
> > anyone object to cleaning this up by just making all client methods return
> > resp, body? This is mostly a change to the clients. There were only a
> > few places where a non-delete method was returning just a body that was
> > used in test code.
>
> Yair and I were discussing this yesterday. As the response correctness
> checking is happening deeper in the code (and you are seeing more and
> more people assigning the response object to _ ) my feeling is Tempest
> clients should probably return a body obj that's basically:
>
> class ResponseBody(dict):
>     def __init__(self, body={}, resp=None):
>         self.update(body)
>         self.resp = resp
>
> Then all the clients would have single return values, the body would be
> the default thing you were accessing (which is usually what you want),
> and the response object is accessible if needed to examine headers.
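
(An illustrative sketch of how that would read in practice, building on the
class above; the field names and header here are made up, not real Tempest
responses:)

    # what a client method would return instead of a (resp, body) tuple
    server = ResponseBody({'id': '42', 'status': 'ACTIVE'},
                          resp={'x-compute-request-id': 'req-abc123'})

    # tests keep treating the return value as the body dict ...
    assert server['status'] == 'ACTIVE'
    # ... and can still reach the response object when headers matter
    assert server.resp['x-compute-request-id'] == 'req-abc123'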
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks & Regards
Ghanshyam Mann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread Andrea Frittoli
+1

keeping the body as a ~dict will help with all existing asserts comparing
dicts in tests.
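
(For instance, because such a ResponseBody subclasses dict, an existing
assertion like the one below keeps passing unchanged against the new return
value; ResponseBody here is the class from the proposal quoted below, and the
body content is made up:)

    expected = {'id': '42', 'status': 'ACTIVE'}
    body = ResponseBody(expected, resp=None)
    # dict equality ignores the subclass, so existing asserts still hold
    assert body == expected
    # e.g. in a test case: self.assertEqual(expected, body)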

Andrea
On 30 Aug 2014 06:45, "Christopher Yeoh"  wrote:

> On Fri, 29 Aug 2014 11:13:39 -0400
> David Kranz  wrote:
>
> > On 08/29/2014 10:56 AM, Sean Dague wrote:
> > > On 08/29/2014 10:19 AM, David Kranz wrote:
> > >> While reviewing patches for moving response checking to the
> > >> clients, I noticed that there are places where client methods do
> > >> not return any value. This is usually, but not always, a delete
> > >> method. IMO, every rest client method should return at least the
> > >> response. Some services return just the response for delete
> > >> methods and others return (resp, body). Does anyone object to
> > >> cleaning this up by just making all client methods return resp,
> > >> body? This is mostly a change to the clients. There were only a
> > >> few places where a non-delete  method was returning just a body
> > >> that was used in test code.
> > > Yair and I were discussing this yesterday. As the response
> > > correctness checking is happening deeper in the code (and you are
> > > seeing more and more people assigning the response object to _ ) my
> > > feeling is Tempest clients should probably return a body obj that's
> > > basically:
> > >
> > > class ResponseBody(dict):
> > >     def __init__(self, body={}, resp=None):
> > >         self.update(body)
> > >         self.resp = resp
> > >
> > > Then all the clients would have single return values, the body
> > > would be the default thing you were accessing (which is usually
> > > what you want), and the response object is accessible if needed to
> > > examine headers.
> > >
> > > -Sean
> > >
> > Heh. I agree with that and it is along a similar line to what I
> > proposed here https://review.openstack.org/#/c/106916/ but using a
> > dict rather than an attribute dict. I did not propose this since it
> > is such a big change. All the test code would have to be changed to
> > remove the resp or _ that is now receiving the response. But I think
> > we should do this before the client code is moved to tempest-lib.
>
> +1. this would be a nice cleanup.
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-29 Thread laserjetyang
I think Nova VMs are not meant for persistent usage and should be treated as
stateless. However, there are use cases where VMs replace bare-metal
applications, and those require the same coverage, which I think VMware
handles pretty well.
Nova backup is really just a snapshot, so it should be re-implemented to fit
into a real backup solution.
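
(For reference, this is roughly how the existing call is driven from
python-novaclient today; the credentials, server ID and rotation count below
are placeholders, not a recommendation:)

    from novaclient.v1_1 import client

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)
    server = nova.servers.get("8a9f2e6b-0000-0000-0000-000000000000")
    # snapshot-style "backup": a name, a type ('daily'/'weekly') and how many
    # rotations to keep before older backup images are deleted
    nova.servers.backup(server, "nightly-backup", "daily", 7)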


On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister 
wrote:

> The current "backup" APIs in OpenStack do not really make sense (and
> apparently do not work ... which perhaps says something about usage and
> usability). So in that sense, they could be removed.
>
> Wrote out a bit as to what is needed:
>
> http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/
>
> At the same time, to do efficient backup at cloud scale, OpenStack is
> missing a few primitives needed for backup. We need to be able to quiesce
> instances, and collect changed-block lists, across hypervisors and
> filesystems. There is some relevant work in this area - for example:
>
> https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
>
> Switching hats - as a cloud developer, on AWS there is excellent current
> means of backup-through-snapshots, which is very quick and is charged based
> on changed-blocks. (The performance and cost both reflect use of
> changed-block tracking underneath.)
>
> If OpenStack completely lacks any equivalent API, then the platform is
> less competitive.
>
> Are you thinking about backup as performed by the cloud infrastructure
> folk, or as a service used by cloud developers in deployed applications?
> The first might do behind-the-scenes backups. The second needs an API.
>
>
>
>
> On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes  wrote:
>
>> On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
>>
>>> Looking to put a proper implementation of instance backup into
>>> OpenStack. Started by writing a simple set of baseline tests and running
>>> against the stable/icehouse branch. They failed!
>>>
>>> https://github.com/dreadedhill-work/openstack-backup-scripts
>>>
>>> Scripts and configuration are in the above. Simple tests.
>>>
>>> At first I assumed there was a configuration error in my Devstack ...
>>> but at this point I believe the errors are in fact in OpenStack. (Also I
>>> have rather more colorful things to say about what is and is not logged.)
>>>
>>> Try to backup bootable Cinder volumes attached to instances ... and all
>>> fail. Try to backup instances booted from images, and all-but-one fail
>>> (without logged errors, so far as I see).
>>>
>>> Was concerned about preserving existing behaviour (as I am currently
>>> hacking the Nova backup API), but ... if the existing is badly broken,
>>> this may not be a concern. (Makes my job a bit simpler.)
>>>
>>> If someone is using "nova backup" successfully (more than one backup at
>>> a time), I *would* rather like to know!
>>>
>>> Anyone with different experience?
>>>
>>
>> IMO, the create_backup API extension should be removed from the Compute
>> API. It's completely unnecessary and backups should be the purview of
>> external (to Nova) scripts or configuration management modules. This API
>> extension is essentially trying to be a Cloud Cron, which is inappropriate
>> for the Compute API, IMO.
>>
>> -jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread Christopher Yeoh
On Fri, 29 Aug 2014 11:13:39 -0400
David Kranz  wrote:

> On 08/29/2014 10:56 AM, Sean Dague wrote:
> > On 08/29/2014 10:19 AM, David Kranz wrote:
> >> While reviewing patches for moving response checking to the
> >> clients, I noticed that there are places where client methods do
> >> not return any value. This is usually, but not always, a delete
> >> method. IMO, every rest client method should return at least the
> >> response. Some services return just the response for delete
> >> methods and others return (resp, body). Does anyone object to
> >> cleaning this up by just making all client methods return resp,
> >> body? This is mostly a change to the clients. There were only a
> >> few places where a non-delete  method was returning just a body
> >> that was used in test code.
> > Yair and I were discussing this yesterday. As the response
> > correctness checking is happening deeper in the code (and you are
> > seeing more and more people assigning the response object to _ ) my
> > feeling is Tempest clients should probably return a body obj that's
> > basically:
> >
> > class ResponseBody(dict):
> >     def __init__(self, body={}, resp=None):
> >         self.update(body)
> >         self.resp = resp
> >
> > Then all the clients would have single return values, the body
> > would be the default thing you were accessing (which is usually
> > what you want), and the response object is accessible if needed to
> > examine headers.
> >
> > -Sean
> >
> Heh. I agree with that and it is along a similar line to what I
> proposed here https://review.openstack.org/#/c/106916/ but using a
> dict rather than an attribute dict. I did not propose this since it
> is such a big change. All the test code would have to be changed to
> remove the resp or _ that is now receiving the response. But I think
> we should do this before the client code is moved to tempest-lib.

+1. this would be a nice cleanup.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-29 Thread Preston L. Bannister
The current "backup" APIs in OpenStack do not really make sense (and
apparently do not work ... which perhaps says something about usage and
usability). So in that sense, they could be removed.

Wrote out a bit as to what is needed:
http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/

At the same time, to do efficient backup at cloud scale, OpenStack is
missing a few primitives needed for backup. We need to be able to quiesce
instances, and collect changed-block lists, across hypervisors and
filesystems. There is some relevant work in this area - for example:

https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
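
(To make the "quiesce" primitive concrete, this is roughly what it looks like
at the libvirt level; it assumes libvirt >= 1.2.5 with a qemu guest agent
running in the instance, and the domain name is a placeholder:)

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # placeholder domain name
    dom.fsFreeze()   # quiesce guest filesystems via the guest agent
    try:
        pass         # take the disk snapshot / harvest changed blocks here
    finally:
        dom.fsThaw()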

Switching hats - as a cloud developer, on AWS there is excellent current
means of backup-through-snapshots, which is very quick and is charged based
on changed-blocks. (The performance and cost both reflect use of
changed-block tracking underneath.)
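
(For comparison, the AWS call being referred to is essentially a one-liner;
a sketch using boto, with the region and volume ID made up:)

    import boto.ec2

    # EBS snapshots are incremental: only blocks changed since the previous
    # snapshot are stored (and billed).
    conn = boto.ec2.connect_to_region("us-east-1")
    conn.create_snapshot("vol-1a2b3c4d", "nightly application backup")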

If OpenStack completely lacks any equivalent API, then the platform is less
competitive.

Are you thinking about backup as performed by the cloud infrastructure
folk, or as a service used by cloud developers in deployed applications?
The first might do behind-the-scenes backups. The second needs an API.




On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes  wrote:

> On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
>
>> Looking to put a proper implementation of instance backup into
>> OpenStack. Started by writing a simple set of baseline tests and running
>> against the stable/icehouse branch. They failed!
>>
>> https://github.com/dreadedhill-work/openstack-backup-scripts
>>
>> Scripts and configuration are in the above. Simple tests.
>>
>> At first I assumed there was a configuration error in my Devstack ...
>> but at this point I believe the errors are in fact in OpenStack. (Also I
>> have rather more colorful things to say about what is and is not logged.)
>>
>> Try to backup bootable Cinder volumes attached to instances ... and all
>> fail. Try to backup instances booted from images, and all-but-one fail
>> (without logged errors, so far as I see).
>>
>> Was concerned about preserving existing behaviour (as I am currently
>> hacking the Nova backup API), but ... if the existing is badly broken,
>> this may not be a concern. (Makes my job a bit simpler.)
>>
>> If someone is using "nova backup" successfully (more than one backup at
>> a time), I *would* rather like to know!
>>
>> Anyone with different experience?
>>
>
> IMO, the create_backup API extension should be removed from the Compute
> API. It's completely unnecessary and backups should be the purview of
> external (to Nova) scripts or configuration management modules. This API
> extension is essentially trying to be a Cloud Cron, which is inappropriate
> for the Compute API, IMO.
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] how to make Mistral build after keystone-pythonclient

2014-08-29 Thread Dmitri Zimine

Once we got a dependency on python-keystoneclient, Mistral doesn't build for
me on Mac.

Before, I installed a new openssl (1.0.1h) - keystone authentication didn't
work without it, remember?

enykeev suggested returning to the old stock openssl, and that worked.

But having to switch openssl versions back and forth sort of sucks.

Ideas? 


DZ>

/usr/bin/clang -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/primitives/__pycache__/cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_684bb40axf342507b.o
 -o 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_684bb40axf342507b.so

running build_ext

building '_Cryptography_cffi_8f86901cxc1767c5a' extension

/usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch 
x86_64 -g -O2 -DNDEBUG -g -O3 
-I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c 
cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_8f86901cxc1767c5a.c
 -o 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/primitives/__pycache__/cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_8f86901cxc1767c5a.o

/usr/bin/clang -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/primitives/__pycache__/cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_8f86901cxc1767c5a.o
 -o 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/primitives/__pycache__/_Cryptography_cffi_8f86901cxc1767c5a.so

running build_ext

building '_Cryptography_cffi_4ed9e37dx4000d087' extension

creating 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography

creating 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography/hazmat

creating 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography/hazmat/bindings

creating 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography/hazmat/bindings/__pycache__

/usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch 
x86_64 -g -O2 -DNDEBUG -g -O3 
-I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c 
cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_4ed9e37dx4000d087.c 
-o 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_4ed9e37dx4000d087.o

/usr/bin/clang -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_4ed9e37dx4000d087.o
 -lcrypto -lssl -o 
/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_4ed9e37dx4000d087.so

Traceback (most recent call last):

  File "", line 16, in 

  File 
"/Users/dzimine/Dev/openstack/mistral/.tox/py27/build/cryptography/setup.py", 
line 174, in 

"test": PyTest,

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py",
 line 152, in setup

dist.run_commands()

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py",
 line 953, in run_commands

self.run_command(cmd)

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py",
 line 972, in run_command

cmd_obj.run()

  File "", line 14, in replacement_run

  File "build/bdist.macosx-10.6-intel/egg/setuptools/command/egg_info.py", line 
261, in find_sources

  File "build/bdist.macosx-10.6-intel/egg/setuptools/command/egg_info.py", line 
327, in run

  File "build/bdist.macosx-10.6-intel/egg/setuptools/command/egg_info.py", line 
363, in add_defaults

  File "build/bdist.macosx-10.6-intel/egg/setuptools/command/sdist.py", line 
219, in add_defaults

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py",
 line 312, in get_finalized_command

cmd_obj.ensure_finalized()

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py",
 line 109, in ensure_finalized

self.finalize_options()

  File "build/bdist.macosx-10.6-intel/egg/setuptools/command/build_py.py", line 
73, in finalize_options

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_py.py",
 line 46, in finalize_options

('force', 'force'))

  File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py",
 line 298, in set_undefined

Re: [openstack-dev] [vmware] Canonical list of os types

2014-08-29 Thread Steve Gordon
- Original Message -
> From: "Matthew Booth" 
> 
> On 14/08/14 12:41, Steve Gordon wrote:
> > - Original Message -
> >> From: "Matthew Booth" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >>
> >> I've just spent the best part of a day tracking down why instance
> >> creation was failing on a particular setup. The error message from
> >> CreateVM_Task was: 'A specified parameter was not correct'.
> >>
> >> After discounting a great many possibilities, I finally discovered that
> >> the problem was guestId, which was being set to 'CirrosGuest'.
> >> Unusually, the vSphere API docs don't contain a list of valid values for
> >> that field. Given the unhelpfulness of the error message, it might be
> >> worthwhile validating that field (which we get from glance) and
> >> displaying an appropriate warning.
> >>
> >> Does anybody have a canonical list of valid values?
> >>
> >> Thanks,
> >>
> >> Matt
> > 
> > I found a page [1] linked from the Grizzly edition of the compute guide
> > [2] which has since been superseded. The content that would appear to
> > have replaced it in more recent versions of the documentation suite [3]
> > does not appear to contain such a link though. If a link to a more formal
> > list is available it would be great to get this in the documentation.
> 
> I just extracted a list of 126 os types from the ESX 5.5u1 installation
> iso. While this isn't ideal documentation, I'm fairly sure it will be
> accurate :)
> 
> Matt

Hi Matt,

Any chance you can provide this list? Would be good to get into the 
configuration reference.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Joe Gordon
On Aug 29, 2014 10:42 AM, "Dugger, Donald D" 
wrote:
>
> Well, I think that there is a sign of a broken (or at least bent) process
and that's what I'm trying to expose.  Especially given the ongoing
conversations over Gantt it seems wrong that ultimately it was rejected due
to silence.  Maybe rejecting the BP was the right decision but the way the
decision was made was just wrong.
>
> Note that dealing with silence is `really` difficult.  You point out that
maybe silence means people don't agree with the BP but how do I know?
Maybe it means no one has time, maybe no one has an opinion, maybe it got
lost in the shuffle, maybe I'm being too obnoxious - who knows.  A simple
-1 with a one sentence explanation would helped a lot.

How is this:

-1, we already have too many approved blueprints in Juno and it sounds like
there are still concerns about the Gantt split in general. Hopefully after
trunk is open for Kilo we can revisit the Gantt idea. I'm thinking yet
another ML thread outlining why and how to get there.

>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Friday, August 29, 2014 10:43 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> On 08/29/2014 12:25 PM, Zane Bitter wrote:
> > On 28/08/14 17:02, Jay Pipes wrote:
> >> I understand your frustration about the silence, but the silence from
> >> core team members may actually be a loud statement about where their
> >> priorities are.
> >
> > I don't know enough about the Nova review situation to say if the
> > process is broken or not. But I can say that if passive-aggressively
> > ignoring people is considered a primary communication channel,
> > something is definitely broken.
>
> Nobody is ignoring anyone. There have ongoing conversations about the
scheduler and Gantt, and those conversations haven't resulted in all the
decisions that Don would like. That is unfortunate, but it's not a sign of
a broken process.
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Sylvain Bauza
Sorry folks, my daughter was born on Thursday so I'm on PTO until Monday.
Thanks to the people who discussed the blueprint I created and how we can
avoid, for Kilo, the problem raised by Don.



Answers inline.

Le 29/08/2014 19:42, John Garbutt a écrit :

I think this is now more about code reviews, but this is important...

On 29 August 2014 10:30, Daniel P. Berrange  wrote:

On Fri, Aug 29, 2014 at 11:07:33AM +0200, Thierry Carrez wrote:

Joe Gordon wrote:

On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
mailto:alan.kavan...@ericsson.com>> wrote:


 I share Donald's points here, I believe what would help is to
 clearly describe in the Wiki the process and workflow for the BP
 approval process and build in this process how to deal with
 discrepancies/disagreements and build timeframes for each stage and
 process of appeal etc.
 The current process would benefit from some fine tuning and helping
 to build safe guards and time limits/deadlines so folks can expect
 responses within a reasonable time and not be left waiting in the cold.

This is a resource problem, the nova team simply does not have enough
people doing enough reviews to make this possible.

I think Nova lacks core reviewers more than it lacks reviewers, though.
Just looking at the ratio of core developers vs. patchsets proposed,
it's pretty clear that the core team is too small:

Nova: 750 patchsets/month for 21 core = 36
Heat: 230/14 = 16
Swift: 50/16 = 3

Neutron has the same issue (550/14 = 39). I think above 20, you have a
dysfunctional setup. No amount of process, spec, or runway will solve
that fundamental issue.

+1


+1. I can really understand that there is a reviewer bandwidth issue
within Nova which can't just be answered by adding new cores, because it
would create some "parliament" issues (too many deciders).





The problem is, you can't just add core reviewers, they have to actually
understand enough of the code base to be trusted with that +2 power. All
potential candidates are probably already in. In Nova, the code base is
so big it's difficult to find people that know enough of it. In Neutron,
the contributors are often focused on subsections of the code base so
they are not really interested in learning enough of the rest. That
makes the pool of core candidates quite dry.

The other point is keeping the reviews consistent. Making the team
larger makes that harder.

If we did a better job of discussing core disagreements more in the
nova-meeting, maybe that would help keep consistency between a larger
group of people. But it boils down to trusting each other, and a group
bigger than 20, is a lot of people to get to know.


+1 to John. I think adding 5 new cores now would create some problems if
we do that before discussing how the specs model and the Kilo blueprints can
be handled. I don't want to draw a parallel with any particular political
model here, but I think we still need a small number of deciders within a big
project to make it successful (and delegate, as I will explain further below).

I fear the only solution is smaller groups being experts on smaller
codebases. There is less to review, and more candidates that are likely
to be experts in this limited area.

Applied to Nova, that means modularization -- having strong internal
interfaces and trusting subteams to +2 the code they are experts on.
Maybe VMWare driver people should just +2 VMware-related code. We've had
that discussion before, and I know there is a dangerous potential
quality slope there -- I just fail to see any other solution to bring
that 750/21=36 figure down to a bearable level, before we burn out all
of the Nova core team.

This worked really well for Cinder, and I hope Gantt will do the same
kind of thing for Scheduling.

It certainly feels like we really need to split things up, maybe:
* API (talks to compute api to creates tasks and gets objects)
* core task orchestration and persistence (compute api, db objects,
conductor, talks to compute manager api, scheduler api, network api)
* compute manager + "drivers" (gets instance objects)
* Scheduling (models resources, gets )
* nova-network

But clearly, that will make evolving those interfaces much harder, the more
separate they become.

Certainly we feel a few releases away from some of those splits.


I broadly agree - I think that unless Nova moves more towards something
that is closer to the Linux style subsystem maintainer model we are
doomed. I know in Linux, the maintainers actually use separate git trees,
and that isn't what I mean - I think using a single git tree is still
desirable (at least for now). What I mean is that we should place more
trust on the opinion of the people who are experts for a particular
area of code. Let those experts take on a greater burden of the code
review so core team can put more focus on actual merge approval.

I know some of the core team try to do this implicitly - eg we know who
some of the main people involve

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Kyle Mestery
On Fri, Aug 29, 2014 at 11:51 AM, Eichberger, German
 wrote:
> Kyle,
>
> I am confused. So basically you (and Mark) are saying:
>
> 1) We deprecate Neutron LBaaS v1
> 2) We spin out Neutron LBaaS v2 into its own project in stackforge
> 3) Users don't have an OpenStack LBaaS any longer until we graduate from
> OpenStack incubation (as opposed to Neutron incubation)
>
> I am hoping you can clarify how this will be shaping up -
>
I think what is needed is this:

1) We incubate Neutron LBaaS V2 in the incubator.
2) It graduates into a project under the networking program.
3) We deprecate Neutron LBaaS v1.

To deprecate, we need the new API stable and ready, and then once V1
is deprecated it takes 2 cycles for us to remove it.

Hope that helps!

Thanks,
Kyle

> Thanks,
> German
>
>
> -Original Message-
> From: Kyle Mestery [mailto:mest...@mestery.com]
> Sent: Thursday, August 28, 2014 6:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>
> On Thu, Aug 28, 2014 at 5:55 PM, Kevin Benton  wrote:
>> I think we need some clarification here too about the difference
>> between the general OpenStack Incubation and the Neutron incubation.
>> From my understanding, the Neutron incubation isn't the path to a
>> separate project and independence from Neutron. It's a process to get
>> into Neutron. So if you want to keep it as a separate project with its
>> own cores and a PTL, Neutron incubation would not be the way to go.
>
> That's not true, there are 3 ways out of incubation: 1) The project withers
> and dies on its own. 2) The project is spun back into Neutron. 3) The
> project is spun out into its own project.
>
> However, it's worth noting that if the project is spun out into its own
> entity, it would have to go through incubation to become a fully functioning
> OpenStack project of its own.
>
>>
>>
>> On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
>> wrote:
>>>
>>> Just for us to learn about the incubator status, here are some of the
>>> info on incubation:
>>>
>>> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
>>> https://wiki.openstack.org/wiki/Governance/NewProjects
>>>
>>> Susanne
>>>
>>>
>>> On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle
>>> 
>>> wrote:

  I would like to discuss the pros and cons of putting Octavia into
 the Neutron LBaaS incubator project right away. If it is going to be
 the reference implementation for LBaaS v 2 then I believe Octavia
 belong in Neutron LBaaS v2 incubator.

 The Pros:
 * Octavia is in Openstack incubation right away along with the lbaas
 v2 code. We do not have to apply for incubation later on.
 * As an incubated project we have our own core team and should be able to
 commit our code
 * We are starting out as an OpenStack incubated project

 The Cons:
 * Not sure of the velocity of the project
 * Incubation not well defined.

 If Octavia starts as a standalone stackforge project, we are assuming
 that it would be looked on favorably when it is time to move it into
 incubated status.

 Susanne


>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Kevin Benton
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-08-29 Thread Ajay Kalambur (akalambu)
Hi Timur
With this I was able to create networks and attach VMs to those networks. Would
I now be able to ssh to them and run a command? What I am looking for is a
unification of boot-runcommand-delete and this neutron_network context: create
a network, attach it to a router, associate a floating IP, and ssh to the VM.
Ajay
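
(For illustration, the nova-side calls involved look roughly like this in
plain python-novaclient; creating the network and router themselves would go
through neutronclient, and all IDs, names and credentials here are
placeholders:)

    from novaclient.v1_1 import client

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)
    server = nova.servers.create(
        "rally-test-vm", image=IMAGE_ID, flavor=FLAVOR_ID,
        key_name="rally_ssh_key",
        nics=[{"net-id": FIXED_NET_ID}])         # attach to the fixed network
    fip = nova.floating_ips.create("net04_ext")   # allocate from external pool
    nova.servers.add_floating_ip(server, fip.ip)  # make the VM reachable by ssh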


From: Timur Nurlygayanov 
mailto:tnurlygaya...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 at 1:54 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Harshil Shah (harsshah)" mailto:harss...@cisco.com>>
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron
networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? Neutron Network with subnetworks and routers will be 
automatically created for each created tenant and you should have the ability 
to connect to VMs. Please, note, that you should add the following part to your 
task JSON to enable Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is: the instance is created with a call to boot_server, but
no networks are attached to this server instance. The next step checks whether
the fixed network is attached to the instance, and sure enough it fails at the
step highlighted in bold. Also, I cannot see this failure unless I run rally
with the -v -d options; it actually reports benchmark scenario numbers in a
table with no errors when I run with

rally task start boot-and-delete.json

and reports results. First, what am I missing in this case? Note that I am
using Neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the
network, ssh and run command), why bother reporting the results?

Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Heat AWS WaitCondition's count

2014-08-29 Thread Clint Byrum
There are still a few lingering wait conditions. They should probably be
cleaned up from tripleo-heat-templates.

Excerpts from Pavlo Shchelokovskyy's message of 2014-08-28 02:26:16 -0700:
> Hi all,
> 
> the AWS::CloudFormation::WaitCondition resource in Heat allows to update
> the 'count' property, although in real AWS this is prohibited (
> https://bugs.launchpad.net/heat/+bug/1340100).
> 
> My question is does TripleO still depends on this behavior of AWS
> WaitCondition in any way? I want to be sure that fixing the mentioned bug
> will not break TripleO.
> 
> Best regards,
> Pavlo Shchelokovskyy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-29 Thread Ben Nemec
On 08/28/2014 12:34 PM, Doug Hellmann wrote:
> 
> On Aug 28, 2014, at 8:36 AM, Mark McLoughlin  wrote:
> 
>> On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
>>> On 08/27/2014 03:35 PM, Ken Giusti wrote:
 Hi All,

 I believe Juno-3 is our last chance to get this feature [1] included
 into olso.messaging.

 I honestly believe this patch is about as low risk as possible for a
 change that introduces a whole new transport into oslo.messaging.  The
 patch shouldn't affect the existing transports at all, and doesn't
 come into play unless the application specifically turns on the new
 'amqp' transport, which won't be the case for existing applications.
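
(Purely to illustrate what "turns on the new transport" means for an
application; the broker URL is made up, and the 'amqp' scheme name is the one
assumed to be registered by this patch:)

    from oslo.config import cfg
    from oslo import messaging

    # Existing deployments are untouched: the AMQP 1.0 driver is only loaded
    # when the transport URL (or rpc_backend) explicitly selects it.
    transport = messaging.get_transport(
        cfg.CONF, url="amqp://guest:guest@broker.example.com:5672//")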

 The patch includes a set of functional tests which exercise all the
 messaging patterns, timeouts, and even broker failover. These tests do
 not mock out any part of the driver - a simple test broker is included
 which allows the full driver codepath to be executed and verified.

 IFAIK, the only remaining technical block to adding this feature,
 aside from core reviews [2], is sufficient infrastructure test coverage.
 We discussed this a bit at the last design summit.  The root of the
 issue is that this feature is dependent on a platform-specific library
 (proton) that isn't in the base repos for most of the CI platforms.
 But it is available via EPEL, and the Apache QPID team is actively
 working towards getting the packages into Debian (a PPA is available
 in the meantime).

 In the interim I've proposed a non-voting CI check job that will
 sanity check the new driver on EPEL based systems [3].  I'm also
 working towards adding devstack support [4], which won't be done in
 time for Juno but nevertheless I'm making it happen.

 I fear that this feature's inclusion is stuck in a chicken/egg
 deadlock: the driver won't get merged until there is CI support, but
 the CI support won't run correctly (and probably won't get merged)
 until the driver is available.  The driver really has to be merged
 first, before I can continue with CI/devstack development.

 [1] 
 https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
 [2] https://review.openstack.org/#/c/75815/
 [3] https://review.openstack.org/#/c/115752/
 [4] https://review.openstack.org/#/c/109118/
>>>
>>>
>>> Hi Ken,
>>>
>>> Thanks a lot for your hard work here. As I stated in my last comment on
>>> the driver's review, I think we should let this driver land and let
>>> future patches improve it where/when needed.
>>>
>>> I agreed on letting the driver land as-is based on the fact that there
>>> are patches already submitted ready to enable the gates for this driver.
>>
>> I feel bad that the driver has been in a pretty complete state for quite
>> a while but hasn't received a whole lot of reviews. There's a lot of
>> promise to this idea, so it would be ideal if we could unblock it.
>>
>> One thing I've been meaning to do this cycle is add concrete advice for
>> operators on the state of each driver. I think we'd be a lot more
>> comfortable merging this in Juno if we could somehow make it clear to
>> operators that it's experimental right now. My idea was:
>>
>>  - Write up some notes which discusses the state of each driver e.g.
>>
>>  - RabbitMQ - the default, used by the majority of OpenStack 
>>deployments, perhaps list some of the known bugs, particularly 
>>around HA.
>>
>>  - Qpid - suitable for production, but used in a limited number of 
>>deployments. Again, list known issues. Mention that it will 
>>probably be removed with the amqp10 driver matures.
>>
>>  - Proton/AMQP 1.0 - experimental, in active development, will
>>support  multiple brokers and topologies, perhaps a pointer to a
>>wiki page with the current TODO list
>>
>>  - ZeroMQ - unmaintained and deprecated, planned for removal in
>>Kilo
>>
>>  - Propose this addition to the API docs and ask the operators list 
>>for feedback
>>
>>  - Propose a patch which adds a load-time deprecation warning to the 
>>ZeroMQ driver
>>
>>  - Include a load-time experimental warning in the proton driver
>>
>> Thoughts on that?
> 
> By "API docs" do you mean the ones in the oslo.messaging repository? Would it 
> be better to put this information in the operator’s guide?

I was talking to Ken a little about this today and came up with
http://docs.openstack.org/icehouse/config-reference/content/configuring-rpc.html

That seems like a reasonable place to put information like this (in
fact, there's already some there about rabbit being the default).  I
wasn't sure exactly where those docs are generated from, so I suggested
he talk to Anne Gentle about it.

-Ben

> 
> Other than the question of where to put it, I definitely think this is the 
> sort of guidance we should document, incl

Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-08-29 Thread Ben Nemec
On 08/28/2014 11:14 AM, Doug Hellmann wrote:
> Before Juno we set a deprecation policy for graduating libraries that said 
> the incubated versions of the modules would stay in the incubator repository 
> for one full cycle after graduation. This gives projects time to adopt the 
> libraries and still receive bug fixes to the incubated version (see 
> https://wiki.openstack.org/wiki/Oslo#Graduation).
> 
> That policy worked well early on, but has recently introduced some challenges 
> with the low level modules. Other modules in the incubator are still 
> importing the incubated versions of, for example, timeutils, and so tests 
> that rely on mocking out or modifying the behavior of timeutils do not work 
> as expected when different parts of the application code end up calling 
> different versions of timeutils. We had similar issues with the notifiers and 
> RPC code, and I expect to find other cases as we continue with the 
> graduations.
> 
> To deal with this problem, I propose that for Kilo we delete graduating 
> modules as soon as the new library is released, rather than waiting to the 
> end of the cycle. We can update the other incubated modules at the same time, 
> so that the incubator will always use the new libraries and be consistent.

So from a consumer perspective, this means projects will need to sync
from stable/juno until they adopt the new libs and then they need to
sync from master, which will also be using the new libs.

One thing I think is worth noting is the fact that this will require
projects to adopt all of the libs at once (or at least all of the libs
that need to match incubator, but that's not always obvious so probably
safest to just say "all").  It might be possible to sync some modules
from master and some from stable, but that sounds like a mess waiting to
happen. :-)

I guess my concern here is that I don't think projects have been
adopting all of the oslo libs at once, so if, for example, a project was
looking at adopting oslo.i18n and oslo.utils they may have to do both at
the same time since adopting one will require them to start syncing from
master, and then they won't have the ability to use the graduated
modules anymore.

This may be a necessary evil, but it does raise the short-term bar for
adopting any oslo lib, even if the end result will be the same (all of
the released libs adopted).

> 
> We have not had a lot of patches where backports were necessary, but there 
> have been a few important ones, so we need to retain the ability to handle 
> them and allow projects to adopt libraries at a reasonable pace. To handle 
> backports cleanly, we can “freeze” all changes to the master branch version 
> of modules slated for graduation during Kilo (we would need to make a good 
> list very early in the cycle), and use the stable/juno branch for backports.
> 
> The new process would be:
> 
> 1. Declare which modules we expect to graduate during Kilo.
> 2. Changes to those pre-graduation modules could be made in the master branch 
> before their library is released, as long as the change is also backported to 
> the stable/juno branch at the same time (we should enforce this by having 
> both patches submitted before accepting either).
> 3. When graduation for a library starts, freeze those modules in all branches 
> until the library is released.
> 4. Remove modules from the incubator’s master branch after the library is 
> released.
> 5. Land changes in the library first.
> 6. Backport changes, as needed, to stable/juno instead of master.
> 
> It would be better to begin the export/import process as early as possible in 
> Kilo to keep the window where point 2 applies very short.
> 
> If there are objections to using stable/juno, we could introduce a new branch 
> with a name like backports/kilo, but I am afraid having the extra branch to 
> manage would just cause confusion.
> 
> I would like to move ahead with this plan by creating the stable/juno branch 
> and starting to update the incubator as soon as the oslo.log repository is 
> imported (https://review.openstack.org/116934).
> 
> Thoughts?

I think the obvious concern for me is the extra overhead of trying to
keep one more branch in sync with all the others.  With this we will
require two commits for each change to incubator code that isn't
graduating.  Backporting to Havana would require four changes.  I guess
this is no worse than the situation with graduating code (one commit to
the lib and one to incubator), but that's temporary pain for specific
files.  This would continue indefinitely for all files in incubator.

We could probably help this by requiring changes to be linked in their
commit messages so reviewers can vote on both changes at once, but it's
still additional work for everyone so I think it's worth bringing up.

I don't have a better solution to the incubator-lib mismatch issue so
I'm okay with going forward on this, but it will introduce some new
issues that I think we should be aware of 

Re: [openstack-dev] [rally]Rally scenario Issue

2014-08-29 Thread Ajay Kalambur (akalambu)
Issue fixed small syntax mistake in scenario file
Will now look into more details
Thx


From: akalambu mailto:akala...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 at 3:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [rally]Rally scenario Issue

Sorry here is the context


"context": {

"users": {

"tenants": 1,

"users_per_tenant": 1

}

"neutron_network": {

"network_cidr": "10.%s.0.0/16",

}

}

Because I see this error
2014-08-29 14:56:23.769 5495 TRACE rally "expected ',' or '}', but got %r" 
% token.id, token.start_mark)
2014-08-29 14:56:23.769 5495 TRACE rally ParserError: while parsing a flow 
mapping
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 3, column 9:
2014-08-29 14:56:23.769 5495 TRACE rally {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally expected ',' or '}', but got ''
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 29, column 13:
2014-08-29 14:56:23.769 5495 TRACE rally "neutron_network": {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally


Ajay


From: Timur Nurlygayanov 
mailto:tnurlygaya...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 at 1:54 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Harshil Shah (harsshah)" mailto:harss...@cisco.com>>
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron
networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? Neutron Network with subnetworks and routers will be 
automatically created for each created tenant and you should have the ability 
to connect to VMs. Please, note, that you should add the following part to your 
task JSON to enable Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is: the instance is created with a call to boot_server, but
no networks are attached to this server instance. The next step checks whether
the fixed network is attached to the instance, and sure enough it fails at the
step highlighted in bold. Also, I cannot see this failure unless I run rally
with the -v -d options; it actually reports benchmark scenario numbers in a
table with no errors when I run with

rally task start boot-and-delete.json

and reports results. First, what am I missing in this case? Note that I am
using Neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the
network, ssh and run command), why bother reporting the results?

Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally]Rally scenario Issue

2014-08-29 Thread Ajay Kalambur (akalambu)
Sorry here is the context


"context": {

"users": {

"tenants": 1,

"users_per_tenant": 1

}

"neutron_network": {

"network_cidr": "10.%s.0.0/16",

}

}

Because I see this error
2014-08-29 14:56:23.769 5495 TRACE rally "expected ',' or '}', but got %r" 
% token.id, token.start_mark)
2014-08-29 14:56:23.769 5495 TRACE rally ParserError: while parsing a flow 
mapping
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 3, column 9:
2014-08-29 14:56:23.769 5495 TRACE rally {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally expected ',' or '}', but got ''
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 29, column 13:
2014-08-29 14:56:23.769 5495 TRACE rally "neutron_network": {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally


Ajay


From: Timur Nurlygayanov 
mailto:tnurlygaya...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 29, 2014 at 1:54 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Harshil Shah (harsshah)" mailto:harss...@cisco.com>>
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron
networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? Neutron Network with subnetworks and routers will be 
automatically created for each created tenant and you should have the ability 
to connect to VMs. Please, note, that you should add the following part to your 
task JSON to enable Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
mailto:akala...@cisco.com>> wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is: the instance is created with a call to boot_server, but
no networks are attached to this server instance. The next step checks whether
the fixed network is attached to the instance, and sure enough it fails at the
step highlighted in bold. Also, I cannot see this failure unless I run rally
with the -v -d options; it actually reports benchmark scenario numbers in a
table with no errors when I run with

rally task start boot-and-delete.json

and reports results. First, what am I missing in this case? Note that I am
using Neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the
network, ssh and run command), why bother reporting the results?

Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-08-29 Thread James E. Blair
Stefano Maffulli  writes:

> On Fri 29 Aug 2014 12:47:00 PM PDT, Elizabeth K. Joseph wrote:
>> Third-party-request
>>
>> This list is the new place to request the creation or modification of
>> your third party account. Note that old requests sent to the
>> openstack-infra mailing list don't need to be resubmitted, they are
>> already in the queue for creation.
>
> I'm not happy about this decision: creating new lists is expensive, it
> multiplies entry points for newcomers, which need to be explained *and*
> understood. We're multiplying processes, rules, points of contact and
> places to monitor and be aware of... I feel overwhelmed. I wonder how much
> worse that feeling is for people who are not spending 150% of their time
> following discussions online and offline on all OpenStack channels.

I'm thrilled about it.  Creating new lists is cheap, a lot cheaper than
asking people who want to discuss infrastructure tooling to wade through
hundreds of administrative messages about ssh keys, email addresses,
etc.

> Are you sure that a mailing list is the most appropriate way of handling
> requests? Aren't bug trackers more appropriate instead?  And don't we
> have a bug tracker already?

It's the best way we have right now, until we have time to make it more
self-service.  We received one third-party CI request in 2 years, then
we received 88 more in 6 months.  Our current process is built around
the old conditions.  I don't know if the request list will continue
indefinitely, but the announce list will.  We definitely need a
low-volume place to announce changes to third-party CI operators.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally]Rally scenario Issue

2014-08-29 Thread Ajay Kalambur (akalambu)
Does this look right.


{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.small"
                },
                "image": {
                    "name": "Ubuntu Server 14.04"
                },
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "ubuntu"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
            "neutron_network": {
                "network_cidr": "10.%s.0.0/16",
            }
        }
    ]
}

Because I see this error
2014-08-29 14:56:23.769 5495 TRACE rally "expected ',' or '}', but got %r" 
% token.id, token.start_mark)
2014-08-29 14:56:23.769 5495 TRACE rally ParserError: while parsing a flow 
mapping
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 3, column 9:
2014-08-29 14:56:23.769 5495 TRACE rally {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally expected ',' or '}', but got ''
2014-08-29 14:56:23.769 5495 TRACE rally   in "", line 29, column 13:
2014-08-29 14:56:23.769 5495 TRACE rally "neutron_network": {
2014-08-29 14:56:23.769 5495 TRACE rally ^
2014-08-29 14:56:23.769 5495 TRACE rally
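
For reference, a sketch of the same task with the braces and commas sorted out
(a guess at the intended layout, not a verified working task): the parser
error above points at the missing comma after the closing brace of "context",
and, going by the NeutronContext snippet quoted further down from Timur, the
"neutron_network" section presumably belongs inside "context" rather than
next to it:

{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {"name": "m1.small"},
                "image": {"name": "Ubuntu Server 14.04"},
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "ubuntu"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "neutron_network": {
                    "network_cidr": "10.%s.0.0/16"
                }
            }
        }
    ]
}

Whether the neutron_network context actually takes effect still depends on
applying the https://review.openstack.org/#/c/96300 patch mentioned below.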


Ajay


From: Timur Nurlygayanov <tnurlygaya...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Friday, August 29, 2014 at 1:54 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: "Harshil Shah (harsshah)" <harss...@cisco.com>
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

it looks like you need to use the NeutronContext feature to configure Neutron
networks during benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

Could you please apply commit https://review.openstack.org/#/c/96300 and run
your benchmarks? A Neutron network with subnetworks and routers will be
automatically created for each created tenant and you should have the ability
to connect to VMs. Please note that you should add the following part to your
task JSON to enable the Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu)
<akala...@cisco.com> wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is that the instance is created with a call to boot_server,
but no networks are attached to this server instance. The next step checks
whether the fixed network is attached to the instance, and sure enough it fails
at the step highlighted in bold (the check_network call). Also, I cannot see
this failure unless I run rally with the -v -d options. So it actually reports
benchmark scenario numbers in a table with no errors when I run with
rally task start boot-and-delete.json

And it reports results. First, what am I missing in this case? The thing is,
I am using neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the
network, SSH, and running the command), why bother reporting the results?

Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

[http://www.openstacksv.com/]
___
OpenStack-dev mailing list

Re: [openstack-dev] [OpenStack-Infra] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-08-29 Thread Elizabeth K. Joseph
On Fri, Aug 29, 2014 at 1:03 PM, Stefano Maffulli  wrote:
> On Fri 29 Aug 2014 12:47:00 PM PDT, Elizabeth K. Joseph wrote:
>> Third-party-request
>>
>> This list is the new place to request the creation or modification of
>> your third party account. Note that old requests sent to the
>> openstack-infra mailing list don't need to be resubmitted, they are
>> already in the queue for creation.
>
> I'm not happy about this decision: creating new lists is expensive, it
> multiplies entry points for newcomers, which need to be explained *and*
> understood. We've multiplying processes, rules, points of contact and
> places to monitor, be aware of... I feel overwhelmed. I wonder how much
> worse that feeling is for people who are not 150% of their time
> following discussions online and offline on all OpenStack channels.
>
> Are you sure that a mailing list is the most appropriate way of handling
> requests? Aren't bug trackers more appropriate instead?  And don't we
> have a bug tracker already?

I can't speak to all of your points (there are others who are more
qualified and have been involved in this discussion longer) but the
process prior to having these mailing lists was either emailing the
openstack-infra mailing list or submitting a bug in the infra tracker.
This was arguably more confusing as an entry point because we talk
about all kinds of infrastructure stuff that 3rd parties aren't
interested in and many infrastructure people weren't interested in all
the generic 3rd party discussions and account requests. It was also
easy for requests to get lost, kudos to Sergey Lukjanov for diligently
tracking them these past few months!

Our hope is that most 3rd party testing folks will be referencing our
documentation on the topic, so it won't be confusing to know where to
send the request; we made sure this was updated before making the
announcement: 
http://ci.openstack.org/third_party.html#requesting-a-service-account

More discussion that ended up with the decision to create these lists
was at the infra meeting on the 19th:

http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-19-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-08-29 Thread Zane Bitter

On 29/08/14 14:27, Jay Pipes wrote:

On 08/26/2014 10:14 AM, Zane Bitter wrote:

Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional tests
in the package .tests.functional (alongside
.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests run
against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So these
tests run from the outside, just like their ancestors in Tempest do.

There's all kinds of potential confusion here for users and packagers.
None of it is fatal and all of it can be worked around, but if we
refrain from doing the thing that makes zero conceptual sense then there
will be no problem to work around :)

I suspect from reading the previous thread about "In-tree functional
test vision" that we may actually be dealing with three categories of
test here rather than two:

* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be qualitatively
different from the .tests.functional suites that already
exist in a few projects. Perhaps someone from Neutron and/or Swift can
confirm?

I'd like to propose that tests of the third type get their own top-level
package with a name of the form -integrationtests (second
choice: -tempest on the principle that they're essentially
plugins for Tempest). How would people feel about standardising that
across OpenStack?


By its nature, Heat is one of the only projects that would have
integration tests of this nature. For Nova, there are some "functional"
tests in nova/tests/integrated/ (yeah, badly named, I know) that are
tests of the REST API endpoints and running service daemons (the things
that are RPC endpoints), with a bunch of stuff faked out (like RPC
comms, image services, authentication and the hypervisor layer itself).
So, the "integrated" tests in Nova are really not testing integration
with other projects, but rather integration of the subsystems and
processes inside Nova.

I'd support a policy that true integration tests -- tests that test the
interaction between multiple real OpenStack service endpoints -- be left
entirely to Tempest. Functional tests that test interaction between
internal daemons and processes to a project should go into
/$project/tests/functional/.

For Heat, I believe tests that rely on faked-out other OpenStack
services but stress the interaction between internal Heat
daemons/processes should be in /heat/tests/functional/ and any tests that
rely on working, real OpenStack service endpoints should be in Tempest.


Well, the problem with that is that last time I checked there was 
exactly one Heat scenario test in Tempest because tempest-core doesn't 
have the bandwidth to merge all (any?) of the other ones folks submitted.


So we're moving them to openstack/heat for the purely practical reason 
that it's the only way to get test coverage at all, rather than concerns 
about overloading the gate or theories about the best venue for 
cross-project integration testing.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-08-29 Thread Boris Pavlovic
Timur,

Thanks for pointing Ajay.

Ajay,

 Also I cannot see this failure unless I run rally with –v –d object.


Actually rally is storing information about all failures. To get
information about them you can run the following command:

*rally task results --pprint*

It will display all information about all iterations (including exceptions).


Second when most of the steps in the scenario failed like attaching to
> network, ssh and run command why bother reporting the results


Because bad results are better than nothing...


Best regards,
Boris Pavlovic


On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Ajay,
>
> looks like you need to use NeutronContext feature to configure Neutron
> Networks during the benchmarks execution.
> We now working on merge of two different comits with NeutronContext
> implementation:
> https://review.openstack.org/#/c/96300  and
> https://review.openstack.org/#/c/103306
>
> could you please apply commit https://review.openstack.org/#/c/96300 and
> run your benchmarks? Neutron Network with subnetworks and routers will be
> automatically created for each created tenant and you should have the
> ability to connect to VMs. Please, note, that you should add the following
> part to your task JSON to enable Neutron context:
> ...
> "context": {
> ...
> "neutron_network": {
> "network_cidr": "10.%s.0.0/16",
> }
> }
> ...
>
> Hope this will help.
>
>
>
> On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) <
> akala...@cisco.com> wrote:
>
>>  Hi
>> I am trying to run the Rally scenario boot-runcommand-delete. This
>> scenario has the following code
>>   def boot_runcommand_delete(self, image, flavor,
>>script, interpreter, username,
>>fixed_network="private",
>>floating_network="public",
>>ip_version=4, port=22,
>>use_floatingip=True, **kwargs):
>>server = None
>> floating_ip = None
>> try:
>> print "fixed network:%s floating network:%s"
>> %(fixed_network,floating_network)
>> server = self._boot_server(
>> self._generate_random_name("rally_novaserver_"),
>> image, flavor, key_name='rally_ssh_key', **kwargs)
>>
>>  *self.check_network(server, fixed_network)*
>>
>>  The question I have is the instance is created with a call to
>> boot_server but no networks are attached to this server instance. Next step
>> it goes and checks if the fixed network is attached to the instance and
>> sure enough it fails
>> At the step highlighted in bold. Also I cannot see this failure unless I
>> run rally with –v –d object. So it actually reports benchmark scenario
>> numbers in a table with no errors when I run with
>> rally task start boot-and-delete.json
>>
>>  And reports results. First what am I missing in this case. Thing is I
>> am using neutron not nova-network
>> Second when most of the steps in the scenario failed like attaching to
>> network, ssh and run command why bother reporting the results
>>
>>  Ajay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Timur,
> QA Engineer
> OpenStack Projects
> Mirantis Inc
>
> [image: http://www.openstacksv.com/] 
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-08-29 Thread Timur Nurlygayanov
Hi Ajay,

It looks like you need to use the NeutronContext feature to configure Neutron
networks during benchmark execution.
We are now working on merging two different commits with the NeutronContext
implementation:
https://review.openstack.org/#/c/96300  and
https://review.openstack.org/#/c/103306

Could you please apply commit https://review.openstack.org/#/c/96300 and
run your benchmarks? A Neutron network with subnetworks and routers will be
automatically created for each created tenant and you should have the
ability to connect to VMs. Please note that you should add the following
part to your task JSON to enable the Neutron context:
...
"context": {
...
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
}
}
...

Hope this will help.
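
To spell out the nesting (an illustrative sketch only - the user counts here
are just an example, not a requirement): the neutron_network block sits next
to the existing users block, inside context, separated by a comma:

{
    "context": {
        "users": {
            "tenants": 1,
            "users_per_tenant": 1
        },
        "neutron_network": {
            "network_cidr": "10.%s.0.0/16"
        }
    }
}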



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:

>  Hi
> I am trying to run the Rally scenario boot-runcommand-delete. This
> scenario has the following code
>   def boot_runcommand_delete(self, image, flavor,
>script, interpreter, username,
>fixed_network="private",
>floating_network="public",
>ip_version=4, port=22,
>use_floatingip=True, **kwargs):
>server = None
> floating_ip = None
> try:
> print "fixed network:%s floating network:%s"
> %(fixed_network,floating_network)
> server = self._boot_server(
> self._generate_random_name("rally_novaserver_"),
> image, flavor, key_name='rally_ssh_key', **kwargs)
>
>  *self.check_network(server, fixed_network)*
>
>  The question I have is the instance is created with a call to
> boot_server but no networks are attached to this server instance. Next step
> it goes and checks if the fixed network is attached to the instance and
> sure enough it fails
> At the step highlighted in bold. Also I cannot see this failure unless I
> run rally with –v –d object. So it actually reports benchmark scenario
> numbers in a table with no errors when I run with
> rally task start boot-and-delete.json
>
>  And reports results. First what am I missing in this case. Thing is I am
> using neutron not nova-network
> Second when most of the steps in the scenario failed like attaching to
> network, ssh and run command why bother reporting the results
>
>  Ajay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

[image: http://www.openstacksv.com/] 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-08-29 Thread Stefano Maffulli
On Fri 29 Aug 2014 12:47:00 PM PDT, Elizabeth K. Joseph wrote:
> Third-party-request
>
> This list is the new place to request the creation or modification of
> your third party account. Note that old requests sent to the
> openstack-infra mailing list don't need to be resubmitted, they are
> already in the queue for creation.

I'm not happy about this decision: creating new lists is expensive, it
multiplies entry points for newcomers, which need to be explained *and*
understood. We're multiplying processes, rules, points of contact and
places to monitor, be aware of... I feel overwhelmed. I wonder how much
worse that feeling is for people who are not 150% of their time
following discussions online and offline on all OpenStack channels.

Are you sure that a mailing list is the most appropriate way of handling
requests? Aren't bug trackers more appropriate instead?  And don't we
have a bug tracker already?

> It would also be helpful for third party operators to join this
> mailing list as well as the -announce list in order to reply when they
> can to distribute workload and support new participants to thethird
> party community.

What makes you think they will join a list called 'request'? It's a
request: I file a request, get back what I asked for, I say goodbye.
Doesn't sound like a place for discussions.

Also, if the problem with third-party operators is that they don't stick
around, how did you come to the conclusion that two more mailing lists
would solve (or help solve) the problem?


-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-29 Thread Timur Sufiev
Drago,

It sounds like you convinced me to give D3.js a second chance :). I'll
experiment with what can be achieved using the force-directed graph layout
combined with some composable SVG object; hopefully this will save me
from placing objects on the canvas on my own.

I've read the barricade_Spec.js several times and part of
barricade.js. The code is very interesting and allowed me to refresh some
JavaScript knowledge I last used a looong time ago :). The topic I definitely
haven't fully grasped is deferred/deferrable/referenced objects. What
I've understood is that if some schema includes an '@ref' key, then
it tries to use the value returned from the resolver function no matter
what value was provided during schema instantiation. Am I right? Is
the 'needs' section required for the value to be resolved? The examples
in the test file are a bit mind-bending, so I failed to imagine how
it works for real use cases. Also, I'm interested in whether it is
possible to define a schema that allows the value to be provided both
directly and via reference. Among other things that inclined me to
give some feedback are:
* '@type' vs. '@class' - is the only difference between them that
'@type' refers to a primitive type and '@class' refers to a Barricade.js
schema? Perhaps they could be reduced to a single word to make things
simpler?
* '?' vs '*' - it seems they are used in different contexts, '?' is for
Object and '*' for Array - are 2 distinct markers actually needed?
* Is it better for objects with a fixed schema to fail when an unexpected
key is passed to them? Currently they don't.
* Pushing an element of the wrong type into an arraylike schema still
creates an element with an empty default value.
* Is it possible to create schemas with arbitrary default values (the
example from the spec caused me to think that default values cannot be
specified)?
* The 'required' property does not really force such a key to be provided
during schema instantiation - I presume this is going to change when
the real validation arrives?
* What are the conceptual differences between objects instantiated from
a mutable (with '?') and from an immutable (with fixed keys) schema?

Thank you very much for your efforts, I think that Barricade.js could
provide a solid foundation for Merlin!

On Thu, Aug 28, 2014 at 9:31 PM, Drago Rosson
 wrote:
> Timur,
>
> Composable entities can be a real need for Heat if provider templates
> (which allow templates to be used as a resource, with a template’s
> parameters and outputs becoming properties and attributes, respectively)
> are to be included in the app. A provider template resource, since it is a
> template itself, would be composed of resources which would require a
> composable entity. What is great about D3’s force graph is that it’s nodes
> and links can be completely arbitrary - meaning they can be any JavaScript
> object (including an SVG or DOM element). Additionally, the force graph
> simulation updates x and y properties on those elements and calls a
> user-defined “tick” function. The tick function can use the x and y
> properties in any way it wants to do the *actual* update to the position
> of each element. For example, this is how multiple foci can be implemented
> [1]. Lots of other customization is available, including starting and
> stopping the simulation, updating the node and link data, and having
> per-element control of most (all?) properties such as charge or link
> distance.
>
> Composability can be achieved using SVG’s  elements to group multiple
> graphical elements together. The tick function would need to update the
> ’s transform attribute [2]. This is how it is done in my app since my
> nodes and links are composed of icons, labels, backgrounds, etc. I think
> that D3’s force graph is not a limiting factor since it itself does not
> concern itself with graphics at all. Therefore, the question seems to be
> whether D3 can do everything graphically that Merlin needs. D3 is not a
> graphics API, but it does have support for graphical manipulation,
> animations, and events. They have sufficed for me so far. Plus, D3 can do
> these things without having to use its fancy data transformations so it
> can be used as a low-level SVG library where necessary. D3 can do a lot
> [3] so hopefully it could also do what Merlin needs.
>
> You are in luck, because I have just now open-sourced Barricade! Check it
> out [4]. I am working on getting documentation written for it but to see
> some ways it can be used, look at its test suite [5].
>
> [1] http://bl.ocks.org/mbostock/1021953
> [2] node.attr("transform", function (d) {
> return "translate(" + d.x + ", " + d.y + ")";
> });
> [3] http://christopheviau.com/d3list/
> [4] https://github.com/rackerlabs/barricade
>
> [5]
> https://github.com/rackerlabs/barricade/blob/master/test/barricade_Spec.js
>
> On 8/28/14, 10:03 AM, "Timur Sufiev"  wrote:
>
>>Hello, Drago!
>>
>>I'm extremely interested in learning more about your HOT graphical
>>builder. The screenshots you had attached look gorgeous

[openstack-dev] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-08-29 Thread Elizabeth K. Joseph
Hi everyone,

In an effort to move the third party work into its own space, we've
created two new mailing lists:

Third-party-announce

This is where we will send announcements that third party operators
need to know about and when the OpenStack Infrastructure team disables
your account, with the reason for doing so and the action required
from you to get your system re-enabled.

We are requiring all third party operators to subscribe to this list;
the email addresses we have for existing gerrit accounts have been sent an
invitation to subscribe.

http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-announce

Third-party-request

This list is the new place to request the creation or modification of
your third party account. Note that old requests sent to the
openstack-infra mailing list don't need to be resubmitted, they are
already in the queue for creation.

It would also be helpful for third party operators to join this
mailing list as well as the -announce list in order to reply when they
can, to distribute workload and support new participants to the third
party community.

http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-request

Feel free to let us know if you have any questions.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Rally scenario Issue

2014-08-29 Thread Ajay Kalambur (akalambu)
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print "fixed network:%s floating network:%s" \
                % (fixed_network, floating_network)
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is that the instance is created with a call to boot_server,
but no networks are attached to this server instance. The next step checks
whether the fixed network is attached to the instance, and sure enough it fails
at the step highlighted in bold (the check_network call). Also, I cannot see
this failure unless I run rally with the -v -d options. So it actually reports
benchmark scenario numbers in a table with no errors when I run with
rally task start boot-and-delete.json

And it reports results. First, what am I missing in this case? The thing is,
I am using neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the
network, SSH, and running the command), why bother reporting the results?

Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-08-29 Thread Kyle Mestery
On Fri, Aug 29, 2014 at 1:40 PM, Matt Riedemann
 wrote:
>
>
> On 7/29/2014 4:12 PM, Kyle Mestery wrote:
>>
>> On Tue, Jul 29, 2014 at 3:50 PM, Nader Lahouti 
>> wrote:
>>>
>>> Hi Kyle,
>>>
>>> I have a BP listed in
>>> https://blueprints.launchpad.net/python-neutronclient
>>> and looks like it is targeted for 3.0 (it is needed fro juno-3) The code
>>> is
>>> ready and in the review. Can it be a included for 2.3.7 release?
>>>
>> Yes, you can target it there. We'll see about including it in that
>> release, pending review.
>>
>> Thanks!
>> Kyle
>>
>>> Thanks,
>>> Nader.
>>>
>>>
>>>
>>> On Tue, Jul 29, 2014 at 12:28 PM, Kyle Mestery 
>>> wrote:


 All:

 I spent some time today cleaning up python-neutronclient in LP. I
 created a 2.3 series, and created milestones for the 2.3.5 (June 26)
 and 2.3.6 (today) releases. I also targeted bugs which were released
 in those milestones to the appropriate places. My next step is to
 remove the 3.0 series, as I don't believe this is necessary anymore.

 One other note: I've tentatively created a 2.3.7 milestone in LP, so
 we can start targeting client bugs which merge there for the next
 client release.

 If you have any questions, please let me know.

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> What are the thoughts on when a 2.3.7 release is going to happen? I'm
> specifically interested in getting the keystone v3 support [1] into a
> released version of the library.
>
> 9/4 and feature freeze seems like a decent target date.
>
I can make that happen. I'll take a pass through the existing client
reviews to see what's there, and roll another release which would
include the keystone v3 work which is already merged.

Thanks,
Kyle

> [1] https://review.openstack.org/#/c/92390/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-08-29 Thread Matt Riedemann



On 7/29/2014 4:12 PM, Kyle Mestery wrote:

On Tue, Jul 29, 2014 at 3:50 PM, Nader Lahouti  wrote:

Hi Kyle,

I have a BP listed in https://blueprints.launchpad.net/python-neutronclient
and looks like it is targeted for 3.0 (it is needed fro juno-3) The code is
ready and in the review. Can it be a included for 2.3.7 release?


Yes, you can target it there. We'll see about including it in that
release, pending review.

Thanks!
Kyle


Thanks,
Nader.



On Tue, Jul 29, 2014 at 12:28 PM, Kyle Mestery  wrote:


All:

I spent some time today cleaning up python-neutronclient in LP. I
created a 2.3 series, and created milestones for the 2.3.5 (June 26)
and 2.3.6 (today) releases. I also targeted bugs which were released
in those milestones to the appropriate places. My next step is to
remove the 3.0 series, as I don't believe this is necessary anymore.

One other note: I've tentatively created a 2.3.7 milestone in LP, so
we can start targeting client bugs which merge there for the next
client release.

If you have any questions, please let me know.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What are the thoughts on when a 2.3.7 release is going to happen? I'm 
specifically interested in getting the keystone v3 support [1] into a 
released version of the library.


9/4 and feature freeze seems like a decent target date.

[1] https://review.openstack.org/#/c/92390/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VPNaaS pending state handling

2014-08-29 Thread Sridhar Ramaswamy
Thanks Paul for your thoughts. See inline [SridharR] ...


On Fri, Aug 29, 2014 at 4:19 AM, Paul Michali (pcm)  wrote:

> Comments in-line @PCM
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
> On Aug 28, 2014, at 11:57 AM, Sridhar Ramaswamy  wrote:
>
>
> https://bugs.launchpad.net/neutron/+bug/1355360
>
> I'm working on this vpn vendor bug and am looking for guidance on the
> approach. I'm also relatively new to neutron development so bear with some
> newbie gaffs :)
>
> The problem reported in this bug, in a nutshell, is the policies in the
> neutron vpn db and virtual-machine implementing vpn goes out of sync when
> the agent restarts (restart could be either operator driven or due to a
> software error).
>
>
> @PCM To clarify, the bug is an enhancement to VPN to support restart
> handling (which doesn’t currently exist), right?
>
>
[SridharR] Yeah, you can say that. I reported (and am trying to fix) the issue
with VPN functionality in mind, where it fails to remove the VPN tunnel in
some valid operational scenarios.


>
>
> CSR vpn device driver currently doesn't do a sync when it comes up. I'm
> going to add that as part of this bug fix.
>
>
> @PCM Does the reference implementation handle restart? Is the handling
> non-disruptive (no loss to existing VPN connections)? Will this bug fix
> both reference and vendor VPN implementations?
>

[SridharR] Looking at the reference implementation, I don't think it
explicitly handles a restart. However, it looks like if it finds an active
openswan process the agent does a stop/start - so yes, it will disrupt
existing VPN connections.


>
>
> Still it will only partially solve the problem as it will take care of new
> connections created (which goes to PENDING_CREATE state) & updates to
> existing connections while the agent was down but NOT for deletes. For
> deletes the connection entry gets deleted right at vpn_db level.
>
> My proposal is to introduce PENDING_DELETE state for vpn site-to-site
> connection.  Implementing pending_delete will involve,
>
>
> @PCM The PENDING_DELETE state already exists, but is not used currently
> for reference/vendor solutions, right?
>

[SridharR] Yes. I propose we introduce it on the plug-in side first and
incrementally enhance the agents to support it. This way we can ensure the
code changes are relatively small & easier to review.


>
>
>
> 1) Moving the delete operation from vpn_db into service driver
>
>
> @PCM Concerned about my understanding of this, or if it is how I’m
> interpreting the wording. The delete has two parts - database update and
> driver update to actually remove the connection. Are the database
> operations staying in vpn_db.py?
>

[SridharR] My proposal is to have the database delete code in vpn_db as a
utility method. It will get called once the driver 'acks' the delete
operation.


>
>
> 2) Changing the reference ipsec service driver to handle PENDING_DELETE
> state. For now we can just do a simple db delete to preserve the existing
> behavior.
> 3) CSR device driver will make use of PENDING_DELETE to correctly delete
> the entries in the CSR device when the agent comes up.
>
>
> @PCM Would the process be…
>
> 1) delete request puts connection in DELETE_PENDING state (dbase write),
> and notifies service driver
> 2) service driver sends request to device driver
> 3) device driver does actions to delete the connection
> 4) device driver notifies that delete is completed (I think this would be
> asynchronous, as the device driver doesn’t reply to the request)
> 5) database would update and remove the connection entry.
>
> Is that correct?
>

[SridharR] Exactly!


thanks,
Sridhar

*IRC: SridharRamaswamy (irc.freenode.net )*



>
> Regards,
>
> PCM
>
>
>
> Sounds reasonable? Any thoughts?
>
> thanks,
> - Sridhar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-08-29 Thread Jay Pipes

On 08/26/2014 10:14 AM, Zane Bitter wrote:

Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional tests
in the package .tests.functional (alongside
.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests run
against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So these
tests run from the outside, just like their ancestors in Tempest do.

There's all kinds of potential confusion here for users and packagers.
None of it is fatal and all of it can be worked around, but if we
refrain from doing the thing that makes zero conceptual sense then there
will be no problem to work around :)

I suspect from reading the previous thread about "In-tree functional
test vision" that we may actually be dealing with three categories of
test here rather than two:

* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be qualitatively
different from the .tests.functional suites that already
exist in a few projects. Perhaps someone from Neutron and/or Swift can
confirm?

I'd like to propose that tests of the third type get their own top-level
package with a name of the form -integrationtests (second
choice: -tempest on the principle that they're essentially
plugins for Tempest). How would people feel about standardising that
across OpenStack?


By its nature, Heat is one of the only projects that would have 
integration tests of this nature. For Nova, there are some "functional" 
tests in nova/tests/integrated/ (yeah, badly named, I know) that are 
tests of the REST API endpoints and running service daemons (the things 
that are RPC endpoints), with a bunch of stuff faked out (like RPC 
comms, image services, authentication and the hypervisor layer itself). 
So, the "integrated" tests in Nova are really not testing integration 
with other projects, but rather integration of the subsystems and 
processes inside Nova.


I'd support a policy that true integration tests -- tests that test the 
interaction between multiple real OpenStack service endpoints -- be left 
entirely to Tempest. Functional tests that test interaction between 
internal daemons and processes to a project should go into 
/$project/tests/functional/.


For Heat, I believe tests that rely on faked-out other OpenStack 
services but stress the interaction between internal Heat 
daemons/processes should be in /heat/tests/functional/ and any tests that 
rely on working, real OpenStack service endpoints should be in Tempest.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread John Garbutt
On 29 August 2014 18:48, Dugger, Donald D  wrote:
> All good points but I want to add an observation.
>
> IRC seems to be the generic answer to all problems and, personally, I don't 
> think that's a good medium.  Having to depend upon who just might be on IRC 
> at a particular moment seems rather hit or miss.  I much prefer something 
> like email where I have a little more time to compose my thoughts, you don't 
> have to be right there constantly and there's an easy history.
>
> Note, that's just personal preference, given that IRC is the preferred medium 
> for many things I'll just have to change my processes.

After moving to use ZNC, I find IRC works much better for me now, but
I am still learning really.

Email is fine. Both works well too (IRC ping me to read the email).

Personally, I find conversations on IRC more efficient than long email
threads with slow replies. But I certainly want to respect everyone's
communication preferences.

John

> -Original Message-
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: Friday, August 29, 2014 4:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> Going a bit further up the thread where we are still talking about spec 
> reviews and not code reviews...
>
> On 28 August 2014 21:42, Dugger, Donald D  wrote:
>> I would contend that that right there is an indication that there's a 
>> problem with the process.
>
> We got two nova-core reviewer sponsors, to ensure the code would get reviewed 
> before FF.
>
> We probably should have got two nova-driver sponsors for a spec freeze. The 
> cores don't have +2 in spec land.
>
> This is the first release we are doing specs, so there are likely to be holes 
> in the process. I think next time we could try two nova-cores and two 
> nova-drivers (the driver might sign up for the spec review, but not the code 
> review).
>
> Also, the spec only got an exception for one week only. I was very late on 
> adding the -2, apologies. I just spotted it was missed out, when doing a bit 
> of house keeping for juno-3.
>
>> You submit a BP and then you have no idea of what is happening and no way of 
>> addressing any issues.  If the priority is wrong I can explain why I think 
>> the priority should be higher, getting stonewalled leaves me with no idea 
>> what's wrong and no way to address any problems.
>
> Feel free to raise this in the nova-meeting, or ping me or mikal on IRC or 
> via email.
>
>> I think, in general, almost everyone is more than willing to adjust 
>> proposals based upon feedback.  Tell me what you think is wrong and I'll 
>> either explain why the proposal is correct or I'll change it to address the 
>> concerns.
>
> Right. In this case, we just didn't get it reviewed. As mentioned, probably 
> because people didn't see this as important right now.
>
>> Trying to deal with silence is really hard and really frustrating.  
>> Especially given that we're not supposed to spam the mailing it's really 
>> hard to know what to do.
>
> For blueprint process stuff, email or catch me (johnthetubaguy) on IRC, or 
> mikal on IRC, or any of the nova-drivers. We can usually get you an answer. 
> Or generally ask people in #openstack-nova who should be able to point you in 
> the right direction.
>
>>I don't know the solution but we need to do something.  More core team 
>>members would help, maybe something like an automatic timeout where 
>>BPs/patches with no negative scores and no activity for a week get flagged 
>>for special handling.
>
> We are brainstorming ideas for Kilo. But its always a balance. I don't want 
> to add extra red tape for every issue we have.
>
> Right now we rely on people shouting on IRC if we forget really important 
> things, and fixing stuff up as required.
>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-29 Thread Jay Pipes

On 08/29/2014 02:48 AM, Preston L. Bannister wrote:

Looking to put a proper implementation of instance backup into
OpenStack. Started by writing a simple set of baseline tests and running
against the stable/icehouse branch. They failed!

https://github.com/dreadedhill-work/openstack-backup-scripts

Scripts and configuration are in the above. Simple tests.

At first I assumed there was a configuration error in my Devstack ...
but at this point I believe the errors are in fact in OpenStack. (Also I
have rather more colorful things to say about what is and is not logged.)

Try to backup bootable Cinder volumes attached to instances ... and all
fail. Try to backup instances booted from images, and all-but-one fail
(without logged errors, so far as I see).

Was concerned about preserving existing behaviour (as I am currently
hacking the Nova backup API), but ... if the existing is badly broken,
this may not be a concern. (Makes my job a bit simpler.)

If someone is using "nova backup" successfully (more than one backup at
a time), I *would* rather like to know!

Anyone with different experience?


IMO, the create_backup API extension should be removed from the Compute 
API. It's completely unnecessary and backups should be the purview of 
external (to Nova) scripts or configuration management modules. This API 
extension is essentially trying to be a Cloud Cron, which is 
inappropriate for the Compute API, IMO.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Specs Schedule

2014-08-29 Thread John Garbutt
On 28 August 2014 23:53, Joe Gordon  wrote:
> We just finished discussing when to open up Kilo specs at the nova meeting
> today [0], and Kilo specs will open right after we cut Juno RC1 (around Sept
> 25th [1]). Additionally, the spec template will most likely be revised.
>
> We still have a huge amount of work to do for Juno and the nova team is
> mostly concerned with the 50 blueprints we have up for review [2] and the
> 1000 open bugs [3] (186 of which have patches up for review). The RC1
> timeframe is the right fit for when we can start to move our focus out to
> upcoming kilo items.
>
> [0]
> http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-08-28-21.01.log.html
> [1] https://wiki.openstack.org/wiki/Juno_Release_Schedule
> [2] https://blueprints.launchpad.net/nova/juno
> [3] http://54.201.139.117/nova-bugs.html
>

+1 seems like the right balance

It seems right we concentrate our efforts on all the code that needs
reviewing right now.

As a heads up, it's likely that summit sessions will require a spec
(probably a partial spec), to be reviewed in tandem with the summit
proposal. But no need to worry about that till we open specs for
Kilo.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-08-29 Thread Dean Troyer
On Fri, Aug 29, 2014 at 9:02 AM, Sean Dague  wrote:

> If pathspec did the right thing, pulling in the extra dep would be fine,
> but it doesn't seem like it does.
>

After looking at it with fresh eyes, the issue could be resolved by
combining two methods from pathspec and still leveraging the regex
compilation stuff, which is complicated...that part it gets right enough.

I think I got it worked out properly, and it might even be flexible enough
to replace discover_files() with the right patterns in .bashateignore.  If
not we can inject patterns too; I did that for .gitignore and
.bashateignore. ;)

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] refactoring of resize/migrate

2014-08-29 Thread John Garbutt
On 28 August 2014 09:50, Markus Zoeller  wrote:
> Jay Pipes  wrote on 08/27/2014 08:57:08 PM:
>
>> From: Jay Pipes 
>> To: openstack-dev@lists.openstack.org
>> Date: 08/27/2014 08:59 PM
>> Subject: Re: [openstack-dev] [nova] refactoring of resize/migrate
>>
>> On 08/27/2014 06:41 AM, Markus Zoeller wrote:
>> > The review of the spec to blueprint "hot-resize" has several comments
>> > about the need of refactoring the existing code base of "resize" and
>> > "migrate" before the blueprint could be considered (see [1]).
>> > I'm interested in the result of the blueprint therefore I want to
> offer
>> > my support. How can I participate?
>> >
>> > [1] https://review.openstack.org/95054
>>
>> Are you offering support to refactor resize/migrate, or are you offering
>
>> support to work only on the hot-resize functionality?
>
> I'm offering support to refactor resize/migrate (with the goal in
> mind to have a hot resize feature in the future).
>
>> I'm very much interested in refactoring the resize/migrate
>> functionality, and would appreciate any help and insight you might have.
>
>> Unfortunately, such a refactoring:
>>
>> a) Must start in Kilo
>> b) Begins with un-crufting the simply horrible, inconsistent, and
>> duplicative REST API and public behaviour of the resize and migrate
> actions
>
> If you give me some pointers to look at I can make some thoughts
> about them.
>
>> In any case, I'm happy to start the conversation about this going in
>> about a month or so, or whenever Kilo blueprints open up. Until then,
>> we're pretty much working on reviews for already-approved blueprints and
>
>> bug fixing.
>>
>> Best,
>> -jay
>
> Just ping me and I will participate and give as much as I can.

Happy to help with some planning/reviewing of specs etc.

I did have a plan here. It was to move the migrate and live-migrate code
paths to the conductor. The idea was to simplify the code paths,
so the commonality and missing bits could be compared, etc.:
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L470

That has proved hard to finish, probably because it was the wrong
approach. It turns out there isn't much in common.

I did also plan on updating the user API, but kinda decided to wait
for v3 to get sorted, probably incorrectly.

The main pain with the work is the lack of live-migrate testing in the
gate, waiting for the multi-node gate work. It's starting to rot in
there because people are scared of change in there, etc.

Helping fix some live-migrate bugs, and helping out with live-migrate
testing, might be good first steps? But it depends how you like to
work really.

Anyways, happy to see that area get some more love!

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Dugger, Donald D
All good points but I want to add an observation.

IRC seems to be the generic answer to all problems and, personally, I don't 
think that's a good medium.  Having to depend upon who just might be on IRC at 
a particular moment seems rather hit or miss.  I much prefer something like 
email where I have a little more time to compose my thoughts, you don't have to 
be right there constantly and there's an easy history.

Note, that's just personal preference, given that IRC is the preferred medium 
for many things I'll just have to change my processes.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Friday, August 29, 2014 4:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

Going a bit further up the thread where we are still talking about spec reviews 
and not code reviews...

On 28 August 2014 21:42, Dugger, Donald D  wrote:
> I would contend that that right there is an indication that there's a problem 
> with the process.

We got two nova-core reviewer sponsors, to ensure the code would get reviewed 
before FF.

We probably should have got two nova-driver sponsors for a spec freeze. The 
cores don't have +2 in spec land.

This is the first release we are doing specs, so there are likely to be holes 
in the process. I think next time we could try two nova-cores and two 
nova-drivers (the driver might sign up for the spec review, but not the code 
review).

Also, the spec only got an exception for one week only. I was very late on 
adding the -2, apologies. I just spotted it was missed out, when doing a bit of 
house keeping for juno-3.

> You submit a BP and then you have no idea of what is happening and no way of 
> addressing any issues.  If the priority is wrong I can explain why I think 
> the priority should be higher, getting stonewalled leaves me with no idea 
> what's wrong and no way to address any problems.

Feel free to raise this in the nova-meeting, or ping me or mikal on IRC or via 
email.

> I think, in general, almost everyone is more than willing to adjust proposals 
> based upon feedback.  Tell me what you think is wrong and I'll either explain 
> why the proposal is correct or I'll change it to address the concerns.

Right. In this case, we just didn't get it reviewed. As mentioned, probably 
because people didn't see this as important right now.

> Trying to deal with silence is really hard and really frustrating.  
> Especially given that we're not supposed to spam the mailing it's really hard 
> to know what to do.

For blueprint process stuff, email or catch me (johnthetubaguy) on IRC, or 
mikal on IRC, or any of the nova-drivers. We can usually get you an answer. Or 
generally ask people in #openstack-nova who should be able to point you in the 
right direction.

>I don't know the solution but we need to do something.  More core team members 
>would help, maybe something like an automatic timeout where BPs/patches with 
>no negative scores and no activity for a week get flagged for special handling.

We are brainstorming ideas for Kilo. But its always a balance. I don't want to 
add extra red tape for every issue we have.

Right now we rely on people shouting on IRC if we forget really important 
things, and fixing stuff up as required.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Status of Neutron IPv6 dual stack

2014-08-29 Thread Harm Weites
Hi Dane,

Just wondering if you've made some progress on the matter :)

Regards,
Harm

On 19-08-14 19:08, Dane Leblanc (leblancd) wrote:
>
> Hi Harm:
>
>  
>
> Unfortunately I haven't had time to complete the changes yet. Even
> if/when these changes are completed, it's unlikely that this blueprint
> will get approved for Juno, but I'll see what I can do.
>
>  
>
> Thanks,
>
> Dane
>
>  
>
>  
>
> *From:*Harm Weites [mailto:h...@weites.com]
> *Sent:* Tuesday, August 19, 2014 12:53 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] Status of Neutron IPv6 dual stack
>
>  
>
> Thiago,
>
> My old setup was dual-stacked, simply using a flat linuxbridge. It's
> just that I now realy would like to separate multiple tenants using L3
> routers, which should be easy (dual stacked) to achieve once Dane's
> work is completed.
>
> Did you find the time to commit those required changes for that yet Dane?
>
> Regards,
> Harm
>
> On 16-08-14 23:33, Martinx - ? wrote:
>
> Guys,
>
>  
>
> Just for the record, I'm using IceHouse in a Dual-Stacked
> environment (with security groups working) but, Instance's IPv6
> address are static (no upstream SLAAC, arrived in Juno-2, I think)
> and the topology is `VLAN Provider Networks`, no Neutron L3
> Router. Where each VLAN have v4/v6 addrs, same upstream router
> (also dual-stacked - still no radvd enabled).
>
>  
>
> Looking forward to start testing L3 + IPv6 in K...
>
>  
>
> Best,
>
> Thiago
>
>  
>
> On 16 August 2014 16:21, Harm Weites  > wrote:
>
> Hi Dane,
>
> Thanks, that looks promising. Once support for multiple v6
> addresses on
> gateway ports is added I'll be happy to give this a go. Should it work
> just fine with an otherwise Icehouse based deployment?
>
> Regards,
> Harm
>
> On 16-08-14 20:31, Dane Leblanc (leblancd) wrote:
>
> > Hi Harm:
> >
> > Can you take a look at the following, which should address this:
> >   
>  https://blueprints.launchpad.net/neutron/+spec/multiple-ipv6-prefixes
> >
> > There are some diffs out for review for this blueprint:
> >https://review.openstack.org/#/c/113339/
> > but the change to support 1 V4 + multiple V6 addresses on a
> gateway port hasn't been added yet. I should be adding this soon.
> >
> > There was a request for a Juno feature freeze exception for this
> blueprint, but there's been no response, so this may not get
> approved until K release.
> >
> > -Dane
> >
> > -Original Message-
> > From: Harm Weites [mailto:h...@weites.com ]
> > Sent: Saturday, August 16, 2014 2:22 PM
> > To: openstack-dev@lists.openstack.org
> 
> > Subject: [openstack-dev] Status of Neutron IPv6 dual stack
> >
> > Hi,
> >
> > Given the work on [1] has been abandoned, I'm wondering what the
> current status of going dual stack is. Of course, given Neutron
> got something like that on it's roadmap.
> >
> > The initial BP [2] aimed for Havana and Icehouse, and I'm
> unaware of something similar to achieve a dual stack network. What
> are the options, if any? To my knowledge it all comes down to
> supporting multiple exterior interfaces (networks) on a l3-agent,
> which is currently limited to just 1: either IP4 or IP6.
> >
> > [1] https://review.openstack.org/#/c/77471/
> > [2]
> >
> 
> https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port
> >
> > Regards,
> > Harm
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>  
>
>
>
>
> ___
>
> OpenStack-dev mailing list
>
> OpenStack-dev@lists.openstack.org 
> 
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>  
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread John Garbutt
I think this is now more about code reviews, but this is important...

On 29 August 2014 10:30, Daniel P. Berrange  wrote:
> On Fri, Aug 29, 2014 at 11:07:33AM +0200, Thierry Carrez wrote:
>> Joe Gordon wrote:
>> > On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
>> > mailto:alan.kavan...@ericsson.com>> wrote:
>> >
>> >> I share Donald's points here, I believe what would help is to
>> >> clearly describe in the Wiki the process and workflow for the BP
>> >> approval process and build in this process how to deal with
>> >> discrepancies/disagreements and build timeframes for each stage and
>> >> process of appeal etc.
>> >> The current process would benefit from some fine tuning and helping
>> >> to build safe guards and time limits/deadlines so folks can expect
>> >> responses within a reasonable time and not be left waiting in the 
>> >> cold.
>> >
>> > This is a resource problem, the nova team simply does not have enough
>> > people doing enough reviews to make this possible.
>>
>> I think Nova lacks core reviewers more than it lacks reviewers, though.
>> Just looking at the ratio of core developers vs. patchsets proposed,
>> it's pretty clear that the core team is too small:
>>
>> Nova: 750 patchsets/month for 21 core = 36
>> Heat: 230/14 = 16
>> Swift: 50/16 = 3
>>
>> Neutron has the same issue (550/14 = 39). I think above 20, you have a
>> dysfunctional setup. No amount of process, spec, or runway will solve
>> that fundamental issue.

+1

>> The problem is, you can't just add core reviewers, they have to actually
>> understand enough of the code base to be trusted with that +2 power. All
>> potential candidates are probably already in. In Nova, the code base is
>> so big it's difficult to find people that know enough of it. In Neutron,
>> the contributors are often focused on subsections of the code base so
>> they are not really interested in learning enough of the rest. That
>> makes the pool of core candidates quite dry.

The other point is keeping the reviews consistent. Making the team
larger makes that harder.

If we did a better job of discussing core disagreements more in the
nova-meeting, maybe that would help keep consistency between a larger
group of people. But it boils down to trusting each other, and a group
bigger than 20 is a lot of people to get to know.

>> I fear the only solution is smaller groups being experts on smaller
>> codebases. There is less to review, and more candidates that are likely
>> to be experts in this limited area.
>>
>> Applied to Nova, that means modularization -- having strong internal
>> interfaces and trusting subteams to +2 the code they are experts on.
>> Maybe VMWare driver people should just +2 VMware-related code. We've had
>> that discussion before, and I know there is a dangerous potential
>> quality slope there -- I just fail to see any other solution to bring
>> that 750/21=36 figure down to a bearable level, before we burn out all
>> of the Nova core team.

This worked really well for Cinder, and I hope Gantt will do the same
kind of thing for Scheduling.

It certainly feels like we really need to split things up, maybe:
* API (talks to compute api to creates tasks and gets objects)
* core task orchestration and persistence (compute api, db objects,
conductor, talks to compute manager api, scheduler api, network api)
* compute manager + "drivers" (gets instance objects)
* Scheduling (models resources, gets )
* nova-network

But clearly, that will make evolving those interfaces much harder, the
more separate they become.

Certainly we feel we are a few releases away from some of those splits.

> I broadly agree - I think that unless Nova moves more towards something
> that is closer to the Linux style subsystem maintainer model we are
> doomed. I know in Linux, the maintainers actually use separate git trees,
> and that isn't what I mean - I think using a single git tree is still
> desirable (at least for now). What I mean is that we should place more
> trust on the opinion of the people who are experts for a particular
> area of code. Let those experts take on a greater burden of the code
> review so core team can put more focus on actual merge approval.
>
> I know some of the core team try to do this implicitly - eg we know who
> some of the main people involved in hyperv or vmware are, so will tend
> to treat their +1 as an effective +2 from the POV of their driver code,
> but our rules still require two actual +2s from core, so it doesn't
> entirely help us right now. I think we need to do some work in tooling
> to make this more of an explicit process though.

I do prefer a distinction between the core team and sub-core teams,

just because we want multiple sub-core-team members while still making it
easier to spot the core reviewers.

> The problem is that gerrit does not allow us to say person X has +2 for
> code that touches directory path /foo/bar. The +2 is global to the entire
> repository. We could try to deal with this problem outsi

Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Kevin Benton
I think the point is that if there were discussions that led to
uncertainty about the split, they should have resulted in a -1/-2 on the
spec instead of letting it sit there.
On Aug 29, 2014 9:46 AM, "Jay Pipes"  wrote:

> On 08/29/2014 12:25 PM, Zane Bitter wrote:
>
>> On 28/08/14 17:02, Jay Pipes wrote:
>>
>>> I understand your frustration about the silence, but the silence from
>>> core team members may actually be a loud statement about where their
>>> priorities are.
>>>
>>
>> I don't know enough about the Nova review situation to say if the
>> process is broken or not. But I can say that if passive-aggressively
>> ignoring people is considered a primary communication channel, something
>> is definitely broken.
>>
>
> Nobody is ignoring anyone. There have been ongoing conversations about the
> scheduler and Gantt, and those conversations haven't resulted in all the
> decisions that Don would like. That is unfortunate, but it's not a sign of
> a broken process.
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Dugger, Donald D
Well, I think that there is a sign of a broken (or at least bent) process and 
that's what I'm trying to expose.  Especially given the ongoing conversations 
over Gantt it seems wrong that ultimately it was rejected due to silence.  
Maybe rejecting the BP was the right decision but the way the decision was made 
was just wrong.

Note that dealing with silence is `really` difficult.  You point out that maybe 
silence means people don't agree with the BP but how do I know?  Maybe it means 
no one has time, maybe no one has an opinion, maybe it got lost in the shuffle, 
maybe I'm being too obnoxious - who knows.  A simple -1 with a one sentence 
explanation would have helped a lot.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, August 29, 2014 10:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/29/2014 12:25 PM, Zane Bitter wrote:
> On 28/08/14 17:02, Jay Pipes wrote:
>> I understand your frustration about the silence, but the silence from 
>> core team members may actually be a loud statement about where their 
>> priorities are.
>
> I don't know enough about the Nova review situation to say if the 
> process is broken or not. But I can say that if passive-aggressively 
> ignoring people is considered a primary communication channel, 
> something is definitely broken.

Nobody is ignoring anyone. There have been ongoing conversations about the scheduler
and Gantt, and those conversations haven't resulted in all the decisions that 
Don would like. That is unfortunate, but it's not a sign of a broken process.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Eichberger, German
Kyle,

I am confused. So basically you (and Mark) are saying:

1) We deprecate Neutron LBaaS v1
2) We spin out Neutron LBaaS v2 into its own project in stackforge
3) Users don't have an OpenStack LBaaS any longer until we graduate from 
OpenStack incubation (as opposed to Neutron incubation)

I am hoping you can clarify how this will be shaping up - 

Thanks,
German


-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Thursday, August 28, 2014 6:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

On Thu, Aug 28, 2014 at 5:55 PM, Kevin Benton  wrote:
> I think we need some clarification here too about the difference 
> between the general OpenStack Incubation and the Neutron incubation. 
> From my understanding, the Neutron incubation isn't the path to a 
> separate project and independence from Neutron. It's a process to get 
> into Neutron. So if you want to keep it as a separate project with its 
> own cores and a PTL, Neutron incubation would not be the way to go.

That's not true, there are 3 ways out of incubation: 1) The project withers and 
dies on its own. 2) The project is spun back into Neutron. 3) The project is 
spun out into its own project.

However, it's worth noting that if the project is spun out into its own 
entity, it would have to go through incubation to become a fully functioning 
OpenStack project of its own.

>
>
> On Thu, Aug 28, 2014 at 3:04 PM, Susanne Balle 
> wrote:
>>
>> Just for us to learn about the incubator status, here are some of the 
>> info on incubation:
>>
>> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
>> https://wiki.openstack.org/wiki/Governance/NewProjects
>>
>> Susanne
>>
>>
>> On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle 
>> 
>> wrote:
>>>
>>>  I would like to discuss the pros and cons of putting Octavia into 
>>> the Neutron LBaaS incubator project right away. If it is going to be 
>>> the reference implementation for LBaaS v2 then I believe Octavia 
>>> belongs in the Neutron LBaaS v2 incubator.
>>>
>>> The Pros:
>>> * Octavia is in Openstack incubation right away along with the lbaas 
>>> v2 code. We do not have to apply for incubation later on.
>>> * As an incubated project we have our own core and should be able to 
>>> commit our code
>>> * We are starting out as an OpenStack incubated project
>>>
>>> The Cons:
>>> * Not sure of the velocity of the project
>>> * Incubation not well defined.
>>>
>>> If Octavia starts as a standalone stackforge project we are assuming 
>>> that it would be looked on favorably when it is time to move it into 
>>> incubated status.
>>>
>>> Susanne
>>>
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Jay Pipes

On 08/29/2014 12:25 PM, Zane Bitter wrote:

On 28/08/14 17:02, Jay Pipes wrote:

I understand your frustration about the silence, but the silence from
core team members may actually be a loud statement about where their
priorities are.


I don't know enough about the Nova review situation to say if the
process is broken or not. But I can say that if passive-aggressively
ignoring people is considered a primary communication channel, something
is definitely broken.


Nobody is ignoring anyone. There have been ongoing conversations about the 
scheduler and Gantt, and those conversations haven't resulted in all the 
decisions that Don would like. That is unfortunate, but it's not a sign 
of a broken process.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-29 Thread Dmitry Pyzhov
I've updated the spec: https://review.openstack.org/#/c/116874/

Major change in this spec: get rid of the unpacked upgrade tarball and use only
lrzipped archives. It will save disk space and network traffic, it will make the
upgrade process longer, it will make our upgrade tests longer as well, and it
will make things simpler.

Code is already merged, docs are on review, we need to update our system
tests and jenkins jobs. It will be done after merge of the spec.


On Tue, Aug 26, 2014 at 6:07 PM, Aleksandra Fedorova  wrote:

> As an update, please check and review commit [1] to fuel-specs with
> detailed feature description.
>
> According to this feature, we are going to switch our CI system to
> lrzipped tarballs.
>
> [1] https://review.openstack.org/#/c/116874/
>
>
>
> On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov 
> wrote:
>
>> Fuelers,
>>
>> Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size by
>> 2Gb with the lrzip tool (ticket
>> , change in build system
>> , change in docs
>> ), but it will dramatically
>> increase unpacking time. I've run unpack on my virtualbox environment and
>> got this result:
>> [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
>> Decompressing...
>> 100%7637.48 /   7637.48 MB
>> Average DeCompression Speed:  8.014MB/s
>> [OK] - 8008478720 bytes
>> Total time: 00:15:52.93
>>
>> My suggestion is to reject this change, release 5.1 with the big tarball and
>> find another solution in the next release. Any objections?
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Aleksandra Fedorova
> bookwar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
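
To make the trade-off discussed in the thread above concrete, here is a minimal
sketch of how such an archive could be packed and unpacked with lrzip's wrapper
scripts. This is not the actual fuel-upgrade build or test code; the directory and
archive names are placeholders, and it assumes the lrzip package (which ships
lrztar/lrzuntar) is installed.

import subprocess

def pack_upgrade_dir(src_dir="upgrade"):
    # lrztar tars the directory and lrzip-compresses it,
    # producing <src_dir>.tar.lrz next to it.
    subprocess.check_call(["lrztar", src_dir])

def unpack_upgrade_archive(archive="fuel-5.1-upgrade.tar.lrz"):
    # lrzuntar decompresses and untars into the current directory,
    # as in the timing run quoted in the thread.
    subprocess.check_call(["lrzuntar", archive])

if __name__ == "__main__":
    pack_upgrade_dir()
    unpack_upgrade_archive()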


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Zane Bitter

On 28/08/14 17:02, Jay Pipes wrote:

I understand your frustration about the silence, but the silence from
core team members may actually be a loud statement about where their
priorities are.


I don't know enough about the Nova review situation to say if the 
process is broken or not. But I can say that if passive-aggressively 
ignoring people is considered a primary communication channel, something 
is definitely broken.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-29 Thread Salvatore Orlando
I agree with Brandon that it will be difficult to find spaces for Octavia,
and the pod is a valid option.
Nevertheless it is always worth trying.

For the "traditional" load balancing service instead I reckon #1 is a very
good thing to discuss. Problem is that it is also hard to conclude anything
in 40 minutes. Async vs Synchronous has been discussed several times at the
summit; those sessions were really not very productive. Maybe if the
discussion is started early on the mailing list and supported with PoC code
it will be possible to scope the summit session in the right way.

#2 is not a LBaaS problem. It's a Neutron-wide problem. Async or
synchronous communication patterns also have a bearing on it. This is not
the first time this problem comes up. I think it might deserve a Neutron
session, but again starting the discussion on the mailing list will help to
ensure a productive outcome (and who knows we might even not need a summit
session after all!)

As a side note, the format for the next summit has not yet been formalised.
So maybe it's a bit early to talk about sessions. On the other hand, it's
good to dump topics which are worth being discussed at the summit.

Salvatore


On 29 August 2014 06:49, Brandon Logan  wrote:

> Adding correct subject tags because I replied to the original email.  I
> blame you Susanne!
>
> On Thu, 2014-08-28 at 23:47 -0500, Brandon Logan wrote:
> > I'm not sure exactly how many design sessions will be available but it
> > seems like 2 for Neutron LBaaS and 2 for Octavia will be hard to
> > accomplish.  Neutron LBaaS had 2 in Atlanta didn't it?  One broad one
> > for Neutron LBaaS and one more specific to TLS and L7.  I'm totally on
> > board for having 2 for each though.  I just think since Octavia is still
> > just an idea at this point, it'd be hard getting space and time for a
> > design session for it, much less 2.  Doesn't stop us from doing the pods
> > or ad hoc sessions though.
> >
> > As for topics:
> > Neutron LBaaS
> > 1) I've been wanting to try and solve the problem (at least I think it
> > is a problem) of drivers being responsible for managing the status of
> > entities.  In my opinion, Neutron LBaaS should be as consistent as
> > possible not matter what drivers are being used.  This is caused by
> > supporting both Asynchronous and Synchronous drivers.  I've got some
> > ideas on how to solve this.
> > 2) Different status types on entities.  Operating status and
> > Provisioning status.
> >
> > Octavia
> > I hope we have gotten far enough along this to have some really detailed
> > design discussions.  Hopefully we are within reach of a 0.5 milestone.
> > Other than that, too early to tell what exact kind of design talks we
> > will need.
> >
> > Thanks,
> > Brandon
> >
> > On Thu, 2014-08-28 at 10:49 -0400, Susanne Balle wrote:
> > >
> > >
> > > LBaaS team,
> > >
> > >
> > > As we discussed in the Weekly LBaaS meeting this morning we should
> > > make sure we get the design sessions scheduled that we are interested
> > > in.
> > >
> > >
> > > We currently agreed on the following:
> > >
> > >
> > > * Neutron LBaaS. we want to schedule 2 sessions. I am assuming that we
> > > want to go over status and also the whole incubator thingy and how we
> > > will best move forward.
> > >
> > >
> > > * Octavia: We want to schedule 2 sessions.
> > > ---  During one of the sessions I would like to discuss the pros and
> > > cons of putting Octavia into the Neutron LBaaS incubator project right
> > > away. If it is going to be the reference implementation for LBaaS v2
> > > then I believe Octavia belongs in the Neutron LBaaS v2 incubator.
> > >
> > >
> > > * Flavors which should be coordinated with markmcclain and
> > > enikanorov.
> > > --- https://review.openstack.org/#/c/102723/
> > >
> > >
> > > Is this too many sessions given the constraints? I am assuming that we
> > > can also meet at the pods like we did at the last summit.
> > >
> > >
> > > thoughts?
> > >
> > >
> > > Regards Susanne
> > >
> > > Thierry Carrez wrote on Aug 27 to OpenStack-dev:
> > >
> > > Hi everyone,
> > >
> > > I've been thinking about what changes we can bring to
> > > the Design Summit
> > > format to make it more productive. I've heard the feedback from the
> > > mid-cycle meetups and would like to apply some of those ideas for
> > > Paris,
> > > within the constraints we have (already booked space and time). Here
> > > is
> > > something we could do:
> > >
> > > Day 1. Cross-project sessions / incubated projects / other projects
> > >
> > > I think that worked well last time. 3 parallel rooms where we can
> > > address top cross-project questions, discuss the results of the
> > > various
> > > experiments we conducted during juno. Don't hesitate to schedule 2
> > > slots
> > > for discussions, so that we have time to come to the bottom of those
> > > issues. Incubated projects (and maybe "other" projects, if space
> > > 

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Susanne Balle
Stephen



See inline comments.



Susanne



-



Susanne--



I think you are conflating the difference between "OpenStack incubation"
and "Neutron incubator." These are two very different matters and should be
treated separately. So, addressing each one individually:



*"OpenStack Incubation"*

I think this has been the end-goal of Octavia all along and continues to be
the end-goal. Under this scenario, Octavia is its own stand-alone project
with its own PTL and core developer team, its own governance, and should
eventually become part of the integrated OpenStack release. No project ever
starts out as "OpenStack incubated."



[Susanne] I totally agree that the end goal is for Neutron LBaaS to become
its own incubated project. I did miss the nuance that was pointed out by
Mestery in an earlier email that if a Neutron incubator project wants to
become a separate project it will have to apply for incubation again at
that time. It was my understanding that such a Neutron incubated project
would be grandfathered in, but again we do not have many details on the
process yet.



To me Octavia is a driver so it is very hard for me to think of it as a
standalone project. It needs the new Neutron LBaaS v2 to function which is
why I think of them together. This of course can change since we can add
whatever layers we want to Octavia.



*"Neutron Incubator"*

This has only become a serious discussion in the last few weeks and has yet
to land, so there are many assumptions about this which don't pan out
(either because of purposeful design and governance decisions, or because
of how this project actually ends up being implemented from a practical
standpoint). But given the inherent limitations about making statements
with so many unknowns, the following seem fairly clear from what has been
shared so far:

·  Neutron incubator is the on-ramp for projects which should eventually
become a part of Neutron itself.

·  Projects which enter the Neutron incubator on-ramp should be fairly
close to maturity in their final form. I think the intent here is for them
to live in incubator for 1 or 2 cycles before either being merged into
Neutron core, or being ejected (as abandoned, or as a separate project).

·  Neutron incubator projects effectively do not have their own PTL and
core developer team, and do not have their own governance.

[Susanne] Ok I missed the last point. In an earlier discussion Mestery
implied that an incubated project would have at least one or two of its own
cores. Maybe that changed between now and then.

In addition we know the following about Neutron LBaaS and Octavia:

·  It's already (informally?) agreed that the ultimate long-term place for
a LBaaS solution is probably to be spun out into its own project, which
might appropriately live under a yet-to-be-defined master "Networking"
project. (This would make Neutron, LBaaS, VPNaaS, FWaaS, etc. effective
"peer" projects under the Networking umbrella.)  Since this "Networking"
umbrella project has even less defined about it than Neutron incubator,
it's impossible to know whether being a part of Neutron incubator would be
of any benefit to Octavia (or, conversely, to Neutron incubator) at all as
an on-ramp to becoming part of "Networking." Presumably, Octavia *might* fit
well under the "Networking" umbrella-- but, again, with nothing defined
there it's impossible to draw any reasonable conclusions at this time.

[Susanne] We are in agreement here. This was the reason we had the ad-hoc
meeting in Atlanta: to get a feel for how people felt about making Neutron
LBaaS its own project and also how we get an operator-grade, large-scale LBaaS
that fits most of our service provider requirements. I am just worried
because you keep on talking of Octavia as a standalone project. To me it is
an extension of Neutron LBaaS or of a new LBaaS …. I do not see us (== me)
use Octavia in a non OpenStack context. And yes it is a driver that I am
hoping we all expect to become the reference implementation for LBaaS.

·  When the LBaaS component spins out of Neutron, it will more than likely
not be Octavia.  Octavia is *intentionally* less friendly to 3rd party load
balancer vendors both because it's envisioned that Octavia would just be
another implementation which lives along-side said 3rd party vendor
products (plugging into a higher level LBaaS layer via a driver), and
because we don't want to have to compromise certain design features of
Octavia to meet the lowest common denominator 3rd party vendor product.
(3rd party vendors are welcome, but we will not make design compromises to
meet the needs of a proprietary product-- compatibility with available
open-source products and standards trumps this.)

[Susanne] Ok now I am confused… But I agree with you that it needs to focus
on our use cases. I remember us discussing Octavia being the reference
implementation for OpenStack LBaaS (whatever that is). Has that changed
while I was on vacation?

Th

Re: [openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-08-29 Thread Everett Toews
On Aug 28, 2014, at 3:08 AM, Flavio Percoco  wrote:

> Unfortunately, as Nataliia mentioned, we can't just get rid of it in
> v1.1 because that implies a major change in the API, which would require
> a major release. What we can do, though, is start working on a spec for
> the V2 of the API.

+1

Please don’t make breaking changes in minor version releases. v2 would be the 
place for this change.

Thanks,
Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-29 Thread Thierry Carrez
Hayes, Graham wrote:
>>> Yep, I think this works in theory, the tough part will be when all the
>>> incubating projects realize they're sending people for a single day?
>>> Maybe it'll work out differently than I think though. It means fitting
>>> ironic, barbican, designate, manila, marconi in a day? 
>>
>> Actually those projects would get pod space for the rest of the week, so
>> they should stay! Also some of them might have graduated by then :)
> 
> Would the programs for those projects not get design summit time? I
> thought the Programs got Design summit time, not projects... If not, can
> the Programs get design summit time? 

Sure, that's what Anne probably meant. Time for the program behind every
incubated project.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes

2014-08-29 Thread Miguel Lavalle
Yair,

I am very well plugged-in to this project and feeding the necessary
information to the weekly Tempest IRC meeting. In fact, since a few weeks
ago, I've made a point of sharing weekly with the Tempest team what I am
doing with the LBaaS team from the Tempest point of view.

Cheers


On Thu, Aug 28, 2014 at 11:31 PM, Brandon Logan  wrote:

> On Tue, 2014-08-26 at 14:22 +0300, John Schwarz wrote:
> >
> > On 08/25/2014 10:06 PM, Brandon Logan wrote:
> > >>
> > >> 2. Therefore, there should be some configuration to specifically enable
> > >> either version (not both) in case LBaaS is needed. In this case, the
> > >> other version is disabled (ie. a REST query for non-active version
> > >> should return a "not activated" error). Additionally, adding a
> > >> 'lb-version' command to return the version currently active seems
> like a
> > >> good user-facing idea. We should see how this doesn't negatively
> affect
> > >> the db migration process (for example, allowing read-only commands for
> > >> both versions?)
> > >
> > > A /version endpoint can be added for both v1 and v2 extensions and
> > > service plugins.  If it doesn't already exist, it would be nice if
> > > neutron had an endpoint that would return the list of loaded extensions
> > > and their versions.
> > >
> > There is 'neutron ext-list', but I'm not familiar enough with it or with
> > the REST API to say if we can use that.
>
> Looks like this will be sufficient.  No new rest endpoint needed.
>
> > >>
> > >> 3. Another decision that's needed to be made is the syntax for v2. As
> > >> mentioned, the current new syntax is 'neutron
> lbaas--'
> > >> (against the old 'lb--'), keeping in mind that once v1
> > >> is deprecated, a syntax like 'lbv2--' would be
> probably
> > >> unwanted. Is 'lbaas--' okay with everyone?
> > >
> > > That is the reason we went with lbaas because lbv2 looks ugly and we'd
> > > be stuck with it for the lifetime of v2, unless we did another
> migration
> > > back to lb for it.  Which seemed wrong to do, since then we'd have to
> > > accept both lbv2 and lb commands, and then deprecate lbv2.
> > >
> > > I assume this also means you are fine with the prefix in the API
> > > resource of /lbaas as well then?
> > >
> > I don't mind, as long there is a similar mechanism which disables the
> > non-active REST API commands. Does anyone disagree?
> > >>
> > >> 4. If we are going for different API between versions, appropriate
> > >> patches also need to be written for lbaas-related scripts and also
> > >> Tempest, and their maintainers should probably be notified.
> > >
> > > Could you elaborate on this? I don't understand what you mean by
> > > "different API between version."
> > >
> > The intention was that the change of the user-facing API also forces
> > changes on other levels - not only neutronclient needs to be modified
> > accordingly, but also tempest system tests, horizon interface regarding
> > LBaaS...
>
> Oh yes this is in the works.  Miguel is spearheading the tempest tests
> and has made good progress on it.  Horizon integration hasn't begun yet
> though.  Definitely something we want to get in though.  Have to wait
> until more information about the incubator comes out and where these
> patches for other products need to go.
>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
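
To make the version-detection idea in the thread above concrete, here is a rough
sketch of how a script or test could inspect the loaded extensions through
python-neutronclient rather than a new REST endpoint. The credentials and the
extension aliases ('lbaas' for v1, 'lbaasv2' for v2) are assumptions here; verify
them against your deployment with 'neutron ext-list'.

from neutronclient.v2_0 import client

neutron = client.Client(username="admin",
                        password="secret",
                        tenant_name="admin",
                        auth_url="http://controller:5000/v2.0")

# ext-list equivalent: ask Neutron which extensions are loaded.
aliases = [ext["alias"] for ext in neutron.list_extensions()["extensions"]]

if "lbaasv2" in aliases:
    print("LBaaS v2 service plugin appears to be active")
elif "lbaas" in aliases:
    print("LBaaS v1 service plugin appears to be active")
else:
    print("No LBaaS extension loaded")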


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread Miguel Lavalle
Yeah,  Sean's proposal looks great to me


On Fri, Aug 29, 2014 at 10:13 AM, David Kranz  wrote:

> On 08/29/2014 10:56 AM, Sean Dague wrote:
>
>> On 08/29/2014 10:19 AM, David Kranz wrote:
>>
>>> While reviewing patches for moving response checking to the clients, I
>>> noticed that there are places where client methods do not return any
>>> value.
>>> This is usually, but not always, a delete method. IMO, every rest client
>>> method should return at least the response. Some services return just
>>> the response for delete methods and others return (resp, body). Does any
>>> one object to cleaning this up by just making all client methods return
>>> resp, body? This is mostly a change to the clients. There were only a
>>> few places where a non-delete  method was returning just a body that was
>>> used in test code.
>>>
>> Yair and I were discussing this yesterday. As the response correctness
>> checking is happening deeper in the code (and you are seeing more and
>> more people assigning the response object to _ ) my feeling is Tempest
>> clients should probably return a body obj that's basically.
>>
>> class ResponseBody(dict):
>>     def __init__(self, body={}, resp=None):
>>         self.update(body)
>>         self.resp = resp
>>
>> Then all the clients would have single return values, the body would be
>> the default thing you were accessing (which is usually what you want),
>> and the response object is accessible if needed to examine headers.
>>
>> -Sean
>>
>>  Heh. I agree with that and it is along a similar line to what I proposed
> here https://review.openstack.org/#/c/106916/ but using a dict rather
> than an attribute dict. I did not propose this since it is such a big
> change. All the test code would have to be changed to remove the resp or _
> that is now receiving the response. But I think we should do this before
> the client code is moved to tempest-lib.
>
>  -David
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread David Kranz

On 08/29/2014 10:56 AM, Sean Dague wrote:

On 08/29/2014 10:19 AM, David Kranz wrote:

While reviewing patches for moving response checking to the clients, I
noticed that there are places where client methods do not return any value.
This is usually, but not always, a delete method. IMO, every rest client
method should return at least the response. Some services return just
the response for delete methods and others return (resp, body). Does any
one object to cleaning this up by just making all client methods return
resp, body? This is mostly a change to the clients. There were only a
few places where a non-delete  method was returning just a body that was
used in test code.

Yair and I were discussing this yesterday. As the response correctness
checking is happening deeper in the code (and you are seeing more and
more people assigning the response object to _ ) my feeling is Tempest
clients should probably return a body obj that's basically.

class ResponseBody(dict):
    def __init__(self, body={}, resp=None):
        self.update(body)
        self.resp = resp

Then all the clients would have single return values, the body would be
the default thing you were accessing (which is usually what you want),
and the response object is accessible if needed to examine headers.

-Sean

Heh. I agree with that and it is along a similar line to what I proposed 
here https://review.openstack.org/#/c/106916/ but using a dict rather 
than an attribute dict. I did not propose this since it is such a big 
change. All the test code would have to be changed to remove the resp or 
_ that is now receiving the response. But I think we should do this 
before the client code is moved to tempest-lib.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread Sean Dague
On 08/29/2014 10:19 AM, David Kranz wrote:
> While reviewing patches for moving response checking to the clients, I
> noticed that there are places where client methods do not return any value.
> This is usually, but not always, a delete method. IMO, every rest client
> method should return at least the response. Some services return just
> the response for delete methods and others return (resp, body). Does any
> one object to cleaning this up by just making all client methods return
> resp, body? This is mostly a change to the clients. There were only a
> few places where a non-delete  method was returning just a body that was
> used in test code.

Yair and I were discussing this yesterday. As the response correctness
checking is happening deeper in the code (and you are seeing more and
more people assigning the response object to _ ) my feeling is Tempest
clients should probably return a body obj that's basically.

class ResponseBody(dict):
    def __init__(self, body={}, resp=None):
        self.update(body)
        self.resp = resp

Then all the clients would have single return values, the body would be
the default thing you were accessing (which is usually what you want),
and the response object is accessible if needed to examine headers.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
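
To illustrate the proposal in the thread above, here is a hedged sketch of how test
code might consume such a wrapper once clients return a single value. The
show_server() stub and the header contents are invented for illustration only; they
are not an agreed Tempest interface.

class ResponseBody(dict):
    def __init__(self, body=None, resp=None):
        super(ResponseBody, self).__init__()
        self.update(body or {})
        self.resp = resp

def show_server(server_id):
    # Stand-in for a real rest_client call that returns (resp, body).
    resp = {'status': '200', 'content-type': 'application/json'}
    body = {'server': {'id': server_id, 'status': 'ACTIVE'}}
    return ResponseBody(body, resp)

server = show_server('42')
assert server['server']['status'] == 'ACTIVE'   # body is the default access
assert server.resp['status'] == '200'           # headers still reachable if needed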


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread Jay Pipes

On 08/29/2014 10:19 AM, David Kranz wrote:

While reviewing patches for moving response checking to the clients, I
noticed that there are places where client methods do not return any value.
This is usually, but not always, a delete method. IMO, every rest client
method should return at least the response. Some services return just
the response for delete methods and others return (resp, body). Does any
one object to cleaning this up by just making all client methods return
resp, body? This is mostly a change to the clients. There were only a
few places where a non-delete  method was returning just a body that was
used in test code.


Sounds good to me. :)

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-08-29 Thread Denis Makogon
On Fri, Aug 29, 2014 at 4:29 PM, Dmitriy Ukhlov 
wrote:

> Hello Denis,
> Thank you for very useful knowledge sharing.
>
> But I have one more question. As far as I understood if we have
> replication factor 3 it means that our backup may contain three copies of
> the same data. Also it may contain a set of not-yet-compacted SSTables. Do we
> have any ability to compact collected backup data before moving it to
> backup storage?
>

Thanks for fast response, Dmitriy.

With replication factor 3 - yes, this looks like a feature that allows us to
back up only one node instead of all 3 of them. In other cases, we would need to
iterate over each node, as you know.
Correct, it is possible to have uncompacted SSTables. To accomplish
compaction we might need to use the compaction mechanism provided by
nodetool, see
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCompact.html;
we just need to take into account that it's possible that an SSTable was already
compacted, in which case forcing compaction wouldn't give valuable benefits.


Best regards,
Denis Makogon


>
> On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon 
> wrote:
>
>> Hello, stackers. I'd like to start a thread related to the backup procedure
>> for MagnetoDB, to be precise, for the Cassandra backend.
>>
>> In order to accomplish a backup procedure for Cassandra we need to
>> understand how backups work.
>>
>> To perform a backup:
>>
>>1. We need to SSH into each node
>>2. Call ‘nodetool snapshot’ with appropriate parameters
>>3. Collect the backup.
>>4. Send the backup to remote storage.
>>5. Remove the initial snapshot
>>
>>
>>  Let's take a look at how ‘nodetool snapshot’ works. Cassandra backs up
>> data by taking a snapshot of all on-disk data files (SSTable files) stored
>> in the data directory. Each time an SSTable gets flushed and snapshotted it
>> becomes a hard link against initial SSTable pinned to specific timestamp.
>>
>> Snapshots are taken per keyspace or per-CF and while the system is
>> online. However, nodes must be taken offline in order to restore a snapshot.
>>
>> Using a parallel ssh tool (such as pssh), you can flush and then snapshot
>> an entire cluster. This provides an eventually consistent backup.
>> Although no one node is guaranteed to be consistent with its replica nodes
>> at the time a snapshot is taken, a restored snapshot can resume consistency
>> using Cassandra's built-in consistency mechanisms.
>>
>> After a system-wide snapshot has been taken, you can enable incremental
>> backups on each node (disabled by default) to backup data that has changed
>> since the last snapshot was taken. Each time an SSTable is flushed, a hard
>> link is copied into a /backups subdirectory of the data directory.
>>
>> Now let's see how we can deal with a snapshot once it's taken. Below you can
>> see the list of commands that need to be executed to prepare a snapshot:
>>
>> Flushing SSTables for consistency
>>
>> 'nodetool flush'
>>
>> Creating snapshots (for example of all keyspaces)
>>
>> "nodetool snapshot -t %(backup_name)s 1>/dev/null",
>>
>> where
>>
>>- backup_name - the name of the snapshot
>>
>>
>> Once it’s done we would need to collect all hard links into a common
>> directory (with keeping initial file hierarchy):
>>
>> sudo tar cpzfP /tmp/all_ks.tar.gz\
>>
>> $(sudo find %(datadir)s -type d -name %(backup_name)s)"
>>
>> where
>>
>>- backup_name - the name of the snapshot,
>>- datadir - storage location (/var/lib/cassandra/data by default)
>>
>>
>>  Note that this operation can be extended:
>>
>>- if cassandra was launched with more than one data directory (see
>>cassandra.yaml)
>>- if we want to backup only:
>>   - certain keyspaces at the same time
>>   - one keyspace
>>   - a list of CF’s for a given keyspace
>>
>>
>> Useful links
>>
>>
>> http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsNodetool_r.html
>>
>> Best regards,
>> Denis Makogon
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards,
> Dmitriy Ukhlov
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
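
To tie the steps described in the thread above together, here is a hedged sketch of
the per-node procedure in Python. The host names, the plain ssh/scp transport and
the remote storage destination are illustrative assumptions, not MagnetoDB code,
and clearing all snapshots at the end is a simplification.

import subprocess

def run(host, cmd):
    # Run a shell command on a Cassandra node over ssh.
    subprocess.check_call(["ssh", host, cmd])

def backup_node(host, backup_name, datadir="/var/lib/cassandra/data"):
    # 1. Flush memtables so the on-disk SSTables are consistent.
    run(host, "nodetool flush")
    # 2. Snapshot all keyspaces; hard links appear under .../snapshots/<backup_name>.
    run(host, "nodetool snapshot -t %s 1>/dev/null" % backup_name)
    # 3. Collect the hard-linked snapshot directories into one tarball.
    run(host, "sudo tar cpzfP /tmp/%s.tar.gz "
              "$(sudo find %s -type d -name %s)" % (backup_name, datadir, backup_name))
    # 4. Ship the tarball to remote storage (placeholder destination).
    subprocess.check_call(
        ["scp", "%s:/tmp/%s.tar.gz" % (host, backup_name), "backup-storage:/backups/"])
    # 5. Remove the snapshot now that it has been archived.
    run(host, "nodetool clearsnapshot")

if __name__ == "__main__":
    backup_node("cassandra-node-1", "mdb_backup_20140829")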


[openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-29 Thread David Kranz
While reviewing patches for moving response checking to the clients, I 
noticed that there are places where client methods do not return any value.
This is usually, but not always, a delete method. IMO, every rest client 
method should return at least the response. Some services return just 
the response for delete methods and others return (resp, body). Does any 
one object to cleaning this up by just making all client methods return 
resp, body? This is mostly a change to the clients. There were only a 
few places where a non-delete  method was returning just a body that was 
used in test code.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-29 Thread Joe Harrison



On 27/08/14 12:59, Tim Bell wrote:
>> -Original Message- From: Michael Still
>> [mailto:mi...@stillhq.com] Sent: 26 August 2014 22:20 To:
>> OpenStack Development Mailing List (not for usage questions) 
>> Subject: Re: [openstack-dev] [nova][neutron] Migration from
>> nova-network to Neutron for large production clouds
> ...
>> 
>> Mark and I finally got a chance to sit down and write out a basic
>> proposal. It looks like this:
>> 
> 
> Thanks... I've put a few questions inline and I'll ask the experts
> to review the steps when they're back from holidays
> 
>> == neutron step 0 == configure neutron to reverse proxy calls to
>> Nova (part to be written)
>> 
>> == nova-compute restart one == Freeze nova's network state
>> (probably by stopping nova-api, but we could be smarter than that
>> if required) Update all nova-compute nodes to point Neutron and
>> remove nova-net agent for Neutron Nova aware L2 agent Enable
>> Neutron Layer 2 agent on each node, this might have the side
>> effect of causing the network configuration to be rebuilt for
>> some instances API can be unfrozen at this time until ready for
>> step 2
>> 
> 
> - Would it be possible to only update some of the compute nodes ?
> We'd like to stage the upgrade if we can in view of scaling risks.
> Worst case, we'd look to do it cell by cell but those are quite
> large already (200+ hypervisors)

I have a few what-ifs when comes to this:-

- What if the migration fails halfway through? How do we administrate
nova in this situation?

Unfortunately Tim, last time I checked Neutron has no awareness of
Nova's cells (and only "recently" became aware of nova regions) so I
don't see how this would be taken into account for a migration.

> 
>> == neutron restart two == Freeze nova's network state (probably
>> by stopping nova-api, but we could be smarter than that if
>> required) Dump/translate/restore date from Nova-Net to Neutron
>> Configure Neutron to point to its own database Unfreeze Nova API
>> 

I think it's a good idea to be smarter.

> 
> - Linked with the point above, we'd like to do the nova-net to
> neutron in stages if we can

Again, this sounds like a nightmare if it fails. This sounds like it's
meant to be one big transaction, but it is anything but.

For this to be done safely in a production cloud (which is one of the
few reasons to actually do a replacement instead of just swapping out
the component), we need to be able to run Neutron and Nova-net at the
same time or it *does* have to become a transactional migration.

If the migration fails at some stage, you're left in limbo. Does Nova
work? Does Neutron work?

There needs to be some sort of fault tolerance or rollback feature if
you're going down the "all or nothing" approach to stop a cloud being
left in an inconsistent (and impossible to administrate or operate via
APIs) state.

If the two of them (Nova-network and Neutron) could both exist and
operate at the same time in a cloud, it wouldn't have to be a one-shot
migration. If some nodes fail, that's fine as you could just let them
fall back to Nova-net and fix them whilst your cloud still works and
more importantly nova-api is up and running.

> 
>> *** Stopping point for linuxbridge to linuxbridge translation, or
>> continue for rollout of new tech
>> 
>> == nova-compute restart two == Configure OVS or new technology,
>> ensure that proper ML2 driver is installed Restart Layer2 agent
>> on each hypervisor where next gen networking should be enabled
>> 
>> 
>> So, I want to stop using the word "cold" to describe this. Its
>> more of a rolling upgrade than a cold migration. So... Would two
>> shorter nova API outages be acceptable?
>> 
> 
> Two Nova API outages would be OK for us.

I think the Nova API outages are the least concern in comparison to
being left in a "halfway" state in a production environment. Hopefully
these concerns can be addresses.

> 
>> Michael
>> 
>> -- Rackspace Australia

Whilst I wholeheartedly agree that this migration plan seems like a
good idea (and reminds me of an Raiders of the Lost Ark-esque scene),
I'm afraid of what would happen if something went wrong in the middle
of this swap.

It wouldn't be a good idea to stop nova-api to fix this, as users and
services would be able to use it again.

Perhaps we should change the policy on nova-api during this migration
to only allow access to a special "migration" role or the like? This
would prevent services or users from accessing Nova's API when a
special policy is applied for the migration, but allow administrators
to continue monitoring via the API and fix any problems. This seems
like a currently absent must-have.

I like the idea of the migration, but I hope that any and all "what
if?" questions have been addressed and the problems are mitigated.

I wish you and Mark lots of luck with this migration, but please make
sure it's not fragile and ensure it's fault tolerant!

Cheers,
Joe

Re: [openstack-dev] [bashate] .bashateignore

2014-08-29 Thread Sean Dague
On 08/29/2014 08:53 AM, Dean Troyer wrote:
> On Fri, Aug 29, 2014 at 7:42 AM, Sean Dague  > wrote:
> 
> Integrating bashate into something as complicated as devstack, the file
> ignore problem has come up.
> 
> We seem to have 3 approaches out under review right now:
> 
> https://review.openstack.org/#/c/117425 : --exclude-dirs
> https://review.openstack.org/#/c/115794 : --exclude-dirs (different
> implementation)
> https://review.openstack.org/#/c/113892 : removing hidden directories
> 
> I'm actually kind of convinced now that none of these approaches are
> what we need, and that we should instead have a .bashateignore file in
> the root dir for the project, which would be regexes that would
> match files or directories to throw out of the walk.
> 
> I think that would handle the concerns that everyone is having, and
> hopefully provides a more clear set of semantics in integrating.
> 
> Anyone up for taking a stab at this patch?
> 
> 
> I started the other night and ran into the usual semantic problems wrt
> meaning...rather than re-invent this wheel I found the pathspec module
> (another new dependency!) that purports to do .gitignore-style handling,
> only it doesn't.  It's closer to rsync include file syntax.  I managed
> to get it really close only to fail on handling bare directories
> properly.  Example:
> 
> Ignoring a doc directory in .gitignore:
> doc
> 
> Ignoring a doc directory in my trial:
> doc/
> 
> It occurs to me that fixing this too means maybe I started down the
> wrong path.  This matters to be because I want to also leverage the
> existing .gitignore files we have.
> 
> Just to join the party I pushed up the working state
> in https://review.openstack.org/117772.

If pathspec did the right thing, pulling in the extra dep would be fine,
but it doesn't seem like it does.

What if we just used 'glob' instead, find all the glob patterns and
intersect them out? I think there is a little bit of trickiness around
directories, but as glob.glob('topleveldir') matches it, I think that
intersection probably would work out fine as well.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
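
To make the glob suggestion in the thread above concrete, here is a minimal sketch
of the intersect-them-out approach. The .bashateignore format (one glob pattern per
line, '#' comments) and the function names are assumptions for illustration, not
the eventual bashate code.

import glob
import os

def load_ignore_patterns(root):
    path = os.path.join(root, ".bashateignore")
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [l.strip() for l in f if l.strip() and not l.startswith("#")]

def discover_files(root):
    # Expand every ignore pattern relative to the project root...
    ignored = set()
    for pattern in load_ignore_patterns(root):
        for match in glob.glob(os.path.join(root, pattern)):
            ignored.add(os.path.abspath(match))
    # ...then walk the tree, pruning ignored dirs and skipping ignored files.
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames
                       if os.path.abspath(os.path.join(dirpath, d)) not in ignored]
        for name in filenames:
            full = os.path.abspath(os.path.join(dirpath, name))
            if full not in ignored:
                yield full

if __name__ == "__main__":
    for f in discover_files("."):
        print(f)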


Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-08-29 Thread Dmitriy Ukhlov
Hello Denis,
Thank you for very useful knowledge sharing.

But I have one more question. As far as I understood if we have replication
factor 3 it means that our backup may contain three copies of the same
data. Also it may contain a set of not-yet-compacted SSTables. Do we have any
ability to compact collected backup data before moving it to backup storage?


On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon 
wrote:

> Hello, stackers. I'd like to start a thread related to the backup procedure
> for MagnetoDB, to be precise, for the Cassandra backend.
>
> In order to accomplish a backup procedure for Cassandra we need to
> understand how backups work.
>
> To perform a backup:
>
>1. We need to SSH into each node
>2. Call ‘nodetool snapshot’ with appropriate parameters
>3. Collect the backup.
>4. Send the backup to remote storage.
>5. Remove the initial snapshot
>
>
>  Let's take a look at how ‘nodetool snapshot’ works. Cassandra backs up
> data by taking a snapshot of all on-disk data files (SSTable files) stored
> in the data directory. Each time an SSTable gets flushed and snapshotted it
> becomes a hard link against initial SSTable pinned to specific timestamp.
>
> Snapshots are taken per keyspace or per-CF and while the system is online.
> However, nodes must be taken offline in order to restore a snapshot.
>
> Using a parallel ssh tool (such as pssh), you can flush and then snapshot
> an entire cluster. This provides an eventually consistent backup.
> Although no one node is guaranteed to be consistent with its replica nodes
> at the time a snapshot is taken, a restored snapshot can resume consistency
> using Cassandra's built-in consistency mechanisms.
>
> After a system-wide snapshot has been taken, you can enable incremental
> backups on each node (disabled by default) to backup data that has changed
> since the last snapshot was taken. Each time an SSTable is flushed, a hard
> link is copied into a /backups subdirectory of the data directory.
>
> Now let's see how we can deal with a snapshot once it's taken. Below you can
> see the list of commands that need to be executed to prepare a snapshot:
>
> Flushing SSTables for consistency
>
> 'nodetool flush'
>
> Creating snapshots (for example of all keyspaces)
>
> "nodetool snapshot -t %(backup_name)s 1>/dev/null",
>
> where
>
>- backup_name - the name of the snapshot
>
>
> Once it’s done we would need to collect all hard links into a common
> directory (with keeping initial file hierarchy):
>
> sudo tar cpzfP /tmp/all_ks.tar.gz\
>
> $(sudo find %(datadir)s -type d -name %(backup_name)s)"
>
> where
>
>- backup_name - the name of the snapshot,
>- datadir - storage location (/var/lib/cassandra/data by default)
>
>
>  Note that this operation can be extended:
>
>- if cassandra was launched with more than one data directory (see
>cassandra.yaml)
>- if we want to backup only:
>   - certain keyspaces at the same time
>   - one keyspace
>   - a list of CF’s for a given keyspace
>
>
> Useful links
>
>
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsNodetool_r.html
>
> Best regards,
> Denis Makogon
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Specs for K release

2014-08-29 Thread Kyle Mestery
On Thu, Aug 28, 2014 at 11:12 PM, Brandon Logan
 wrote:
> Kyle,
> Does this apply to blueprints that are destined for the incubator as
> well?  I assume the incubator does require a spec process too.
>
Incubator code still requires a spec, yes. For the things which are
incubator candidates, I have not removed the specs from Juno, I've
left them there for now. What I was thinking of doing was creating an
incubator directory in the specs repo and moving them there since they
will be incubated in Juno.

Thanks,
Kyle

> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 08:37 -0500, Kyle Mestery wrote:
>> On Thu, Aug 28, 2014 at 8:30 AM, Michael Still  wrote:
>> > For nova we haven't gotten around to doing this, but it shouldn't be a
>> > big deal. I'll add it to the agenda for today's meeting.
>> >
>> > Michael
>> >
>> For Neutron, I have not gone through and removed specs which merged
>> and haven't made it yet. I'll do that today with a review to
>> neutron-specs, and once we hit FF next week I'll make another pass to
>> remove things which didn't make Juno. Keep in mind if your spec
>> doesn't make Juno you will have to re-propose it for Kilo.
>>
>> Thanks!
>> Kyle
>>
>> > On Thu, Aug 28, 2014 at 2:07 AM, Andreas Scheuring
>> >  wrote:
>> >> Hi,
>> >> is it already possible to submit specs (nova & neutron) for the K
>> >> release? Would be great for getting early feedback and tracking
>> >> comments. Or should I just commit it to the juno folder?
>> >>
>> >> Thanks,
>> >> Andreas
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > --
>> > Rackspace Australia
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]pylint errors with hashlib

2014-08-29 Thread Ivan Kolodyazhny
Already done by xing-yang: https://review.openstack.org/#/c/117685/.

Thanks for raising this topic.

Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc.

On Fri, Aug 29, 2014 at 7:40 AM, John Griffith 
wrote:

>
>
>
> On Mon, Aug 25, 2014 at 8:47 PM, Clark Boylan 
> wrote:
>
>> On Mon, Aug 25, 2014, at 06:45 PM, Murali Balcha wrote:
>> > Pylint on my patch is failing with the following error:
>> >
>> > Module 'hashlib' has no 'sha256'
>> >
>> > Cinder pylint already has following exceptions,
>> >
>> >
>> > pylint_exceptions:["Instance of 'sha1' has no 'update' member", ""]
>> >
>> > pylint_exceptions:["Module 'hashlib' has no 'sha224' member", ""]
>> >
>> >
>> > So I think "hashlib has no 'sha256'" should be added to the exception
>> > list as well. How can I update the exception list?
>> >
>> >
>> > Thanks,
>> >
>> > Murali Balcha
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> I think this may be related to your install of python. Mine does not
>> have this problem.
>>
>> $ python
>> Python 2.7.6 (default, Mar 22 2014, 22:59:56)
>> [GCC 4.8.2] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> import hashlib
>> >>> hashlib.sha256
>> 
>> >>> hashlib.sha224
>> 
>> >>> s = hashlib.sha1()
>> >>> s.update('somestring')
>> >>>
>>
>> You should not need to treat these as acceptable failures.
>>
>> Clark
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> The error pointed out by Murali is actually showing up in the gate [1].  I
> think adding the pylint exception is fine in this case.
>
> [1]:
> http://logs.openstack.org/68/110068/8/check/gate-cinder-pylint/8c6813d/console.html
> 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] [Neutron][stable] How to backport database schema fixes

2014-08-29 Thread Mike Bayer

On Aug 29, 2014, at 7:23 AM, Alan Pevec  wrote:

>> It seems that currently it's hard to backport any database schema fix to
>> Neutron [1] which uses alembic to manage db schema version. Nova has the
>> same issue before
>> and a workaround is to put some placeholder files before each release.
>> So first do we allow db schema fixes to be backport to stable for Neutron ?
> 
> DB schema backports was a topic at StableBranch session last design
> summit [*] and policy did not change: not allowed in general but
> exceptions could always be discussed on stable-maint list.
> 
>> If we do, then how about put some placeholder files similar to Nova at the
>> end of each release cycle? or we have some better solution for alembic.
> 
> AFAIK you can't have placeholders in alembic, there was an action item
> from design session for Mark to summarize his best practices for db
> backports.


Alembic doesn’t need “placeholder” files, if we’re referring to the practice
with migrate of keeping empty migration files present so that new migrations
can be spliced in. Alembic migrations can be spliced anywhere in the series.
The only current limitation, which is on deck to be opened up, is that the
migrations ultimately have to be arranged linearly in some way (e.g. if two
different environments are the product of two branches and need to run the
same series of migrations, but one needs to skip certain files and the other
needs to skip others, only the migrations needed on each would be applied;
SQLAlchemy-migrate certainly has no capability for that either). If this
issue needs to be fast-tracked I can move my efforts there.
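
To make the splicing concrete, here is a minimal sketch (the revision ids,
file name and index are invented for illustration, not taken from a real
Neutron migration). A spliced-in script points its down_revision at the
revision it should follow, and the script that previously followed that
revision gets its pointer updated:

    # hypothetical spliced-in script: 9abc12de34f5_add_agents_index.py
    from alembic import op

    # link this script into the chain after an existing revision
    revision = '9abc12de34f5'
    down_revision = '1f2e3d4c5b6a'   # the revision we splice in after


    def upgrade():
        op.create_index('idx_agents_agent_type_host', 'agents',
                        ['agent_type', 'host'])


    def downgrade():
        op.drop_index('idx_agents_agent_type_host', 'agents')

    # ...and the script that previously had down_revision = '1f2e3d4c5b6a'
    # is simply updated to down_revision = '9abc12de34f5'.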


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-08-29 Thread Dean Troyer
On Fri, Aug 29, 2014 at 7:42 AM, Sean Dague  wrote:

> Integrating bashate into something as complicated as devstack, the file
> ignore problem has come up.
>
> We seem to have 3 approaches out under review right now:
>
> https://review.openstack.org/#/c/117425 : --exclude-dirs
> https://review.openstack.org/#/c/115794 : --exclude-dirs (different
> implementation)
> https://review.openstack.org/#/c/113892 : removing hidden directories
>
> I'm actually kind of convinced now that none of these approaches are
> what we need, and that we should instead have a .bashateignore file in
> the root dir for the project instead, which would be regex that would
> match files or directories to throw out of the walk.
>
> I think that would handle the concerns that everyone is having, and
> hopefully provides a more clear set of semantics in integrating.
>
> Anyone up for taking a stab at this patch?
>

I started the other night and ran into the usual semantic problems wrt
meaning... rather than re-invent this wheel I found the pathspec module
(another new dependency!) that purports to do .gitignore-style handling,
only it doesn't.  It's closer to rsync include-file syntax.  I managed to
get it really close, only to fail on handling bare directories properly.
Example:

Ignoring a doc directory in .gitignore:
doc

Ignoring a doc directory in my trial:
doc/

It occurs to me that fixing this too means maybe I started down the wrong
path.  This matters to me because I want to also leverage the existing
.gitignore files we have.

Just to join the party I pushed up the working state in
https://review.openstack.org/117772.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-29 Thread Hayes, Graham


On Fri, 2014-08-29 at 11:23 +0200, Thierry Carrez wrote:
> Anne Gentle wrote:
> > On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez  > > wrote:
> > 
> > Hi everyone,
> > 
> > I've been thinking about what changes we can bring to the Design Summit
> > format to make it more productive. I've heard the feedback from the
> > mid-cycle meetups and would like to apply some of those ideas for Paris,
> > within the constraints we have (already booked space and time). Here is
> > something we could do:
> > 
> > Day 1. Cross-project sessions / incubated projects / other projects
> > 
> > I think that worked well last time. 3 parallel rooms where we can
> > address top cross-project questions, discuss the results of the various
> > experiments we conducted during juno. Don't hesitate to schedule 2 slots
> > for discussions, so that we have time to come to the bottom of those
> > issues. Incubated projects (and maybe "other" projects, if space allows)
> > occupy the remaining space on day 1, and could occupy "pods" on the
> > other days.
> > 
> > Yep, I think this works in theory, the tough part will be when all the
> > incubating projects realize they're sending people for a single day?
> > Maybe it'll work out differently than I think though. It means fitting
> > ironic, barbican, designate, manila, marconi in a day? 
> 
> Actually those projects would get pod space for the rest of the week, so
> they should stay! Also some of them might have graduated by then :)

Would the programs for those projects not get design summit time? I
thought the Programs got Design summit time, not projects... If not, can
the Programs get design summit time? 

> 
> > Also since QA, Infra, and Docs are cross-project AND Programs, where do
> > they land?
> 
> I think those teams work on different issues. Some issues require a lot
> of communication and input because they are cross-project problems that
> those teams are tasked with solving -- in which case that belongs to the
> cross-project day. Other issues are more implementation details and
> require mostly the team members but not so much external input -- those
> belong to the specific slots or the "contributors meetup". Obviously
> some things will be a bit borderline and we'll have to pick one or the
> other based on available slots.
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bashate] .bashateignore

2014-08-29 Thread Sean Dague
Integrating bashate into something as complicated as devstack, the file
ignore problem has come up.

We seem to have 3 approaches out under review right now:

https://review.openstack.org/#/c/117425 : --exclude-dirs
https://review.openstack.org/#/c/115794 : --exclude-dirs (different
implementation)
https://review.openstack.org/#/c/113892 : removing hidden directories

I'm actually kind of convinced now that none of these approaches are
what we need, and that we should instead have a .bashateignore file in
the root dir for the project instead, which would be regex that would
match files or directories to throw out of the walk.
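
Roughly what I'm picturing is something like the following sketch (just a
strawman; the file format here is one regex per line, with '#' comments
ignored):

    import os
    import re


    def load_ignore_patterns(root):
        path = os.path.join(root, '.bashateignore')
        if not os.path.exists(path):
            return []
        with open(path) as f:
            lines = [line.strip() for line in f]
        return [re.compile(line) for line in lines
                if line and not line.startswith('#')]


    def discover_files(root):
        patterns = load_ignore_patterns(root)

        def ignored(path):
            return any(p.search(path) for p in patterns)

        for dirpath, dirnames, filenames in os.walk(root):
            # prune ignored directories in place so os.walk skips them
            dirnames[:] = [d for d in dirnames
                           if not ignored(os.path.join(dirpath, d))]
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not ignored(path):
                    yield path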

I think that would handle the concerns that everyone is having, and
hopefully provides a more clear set of semantics in integrating.

Anyone up for taking a stab at this patch?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][stable] How to backport database schema fixes

2014-08-29 Thread Russell Bryant
On 08/29/2014 06:54 AM, Salvatore Orlando wrote:
> If you are running version from a stable branch, changes in DB
> migrations should generally be forbidden as the policy states since
> those migrations are not likely to be executed again. Downgrading and
> then upgrading again is extremely risky and I don't think anybody would
> ever do that.
> 
> However, if one is running stable branch X-2 where X is the current
> development branch, back porting migration fixes could make sense for
> upgrading to version X-1 if the migration being fixed is in the path
> between X-2 and X-1.
> Therefore I would forbid every fix to migration earlier than X-2 release
> (there should not be any in theory but neutron has migrations back to
> folsom). For the path between X-2 and  X-1 fixes might be ok. 

I think it's safe to backport to X-1.  The key bit is that the migration
in master and the backported version must be reentrant.  They need to
inspect the schema and only perform the change if it hasn't already been
applied.  This is a good best practice to adopt for *all* migrations to
make the backport option easier.
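
To illustrate what I mean by reentrant, here is a rough sketch of an alembic
migration that checks before acting (the table, index name and columns are
just an example matching the index discussed in this thread, not the actual
Neutron migration):

    from alembic import op
    from sqlalchemy.engine import reflection


    def upgrade():
        inspector = reflection.Inspector.from_engine(op.get_bind())
        index_names = [ix['name'] for ix in inspector.get_indexes('agents')]
        # only apply the change if a backported copy of this migration
        # has not already created the index
        if 'idx_agents_agent_type_host' not in index_names:
            op.create_index('idx_agents_agent_type_host', 'agents',
                            ['agent_type', 'host'])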

> However,
> rather than amending existing migration is always better to add new
> migrations - even if it's a matter of enabling a given change for a
> particular plugin (*). 

Agreed, in general.

It depends on the bug.  If there's an error in the migration that will
prevent the original code from running properly, breaking the migration,
that obviously needs to be fixed.

> As nova does, the best place for doing that is
> always immediately before release.

Doing what, adding placeholders?

Note that we actually add placeholders at the very *beginning* of a
release cycle.  The placeholders have to be put in place as the first
set of migrations in a release.  That way:

1) X-1 has those migration slots unused.

2) X has those slots reserved.

If we did it just *before* release, you can't actually backport into
those positions.  They've already run as no-op.
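
For anyone who hasn't looked at them, the placeholders themselves are
trivial no-op sqlalchemy-migrate scripts, roughly like this (the version
number is only illustrative):

    # nova/db/sqlalchemy/migrate_repo/versions/250_placeholder.py
    # (illustrative number; reserved at the start of the cycle so a
    #  backported migration can later take its slot)


    def upgrade(migrate_engine):
        pass


    def downgrade(migrate_engine):
        pass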

> With alembic, we do not need to add placeholders, but just adjust
> pointers just like you would when inserting an element in a dynamic list.

Good point.

> (*) we are getting rid of this conditional migration logic for juno anyway

Yay!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] [Neutron][stable] How to backport database schema fixes

2014-08-29 Thread Alan Pevec
> It seems that currently it's hard to backport any database schema fix to
> Neutron [1] which uses alembic to manage db schema version. Nova has the
> same issue before
> and a workaround is to put some placeholder files before each release.
> So first do we allow db schema fixes to be backport to stable for Neutron ?

DB schema backports was a topic at StableBranch session last design
summit [*] and policy did not change: not allowed in general but
exceptions could always be discussed on stable-maint list.

> If we do, then how about put some placeholder files similar to Nova at the
> end of each release cycle? or we have some better solution for alembic.

AFAIK you can't have placeholders in alembic, there was an action item
from design session for Mark to summarize his best practices for db
backports.
Mark, do you have that published somewhere?

>  From the stable maintainer side, we have a policy for stable backport
> https://wiki.openstack.org/wiki/StableBranch
> DB schema changes is forbidden
> If we allow db schema backports for more than one project, I think we need
> to update the wiki.

Again, the policy stays, but we can use this thread as an exception request for [1].
My thoughts: adding an index on (agent_type, host) is safe for backports
as it doesn't affect code, but we need to do it properly:
e.g. it must not break Icehouse->Juno upgrades and must have clear
instructions on how to apply it in the stable release notes, e.g. [2] for a
similar case in Keystone Havana.
Also it would be good to describe the impact and the "why" part in the
commit message and/or the bug 1350326 description; IIUC that would be
"prevent race condition in L2 plugin"?

Cheers,
Alan

> [1] https://review.openstack.org/#/c/110642/

[*] https://etherpad.openstack.org/p/StableIcehouse
[2] https://wiki.openstack.org/wiki/ReleaseNotes/2013.2.2#Keystone

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Complex resource_metadata could fail to store in MongoDB

2014-08-29 Thread Igor Degtiarov
Hi, folks.

I have been looking into the problem of storing samples that contain
complex resource_metadata in the MongoDB database [1].

If the data is a dict that has key(s) containing dots (.), dollar signs ($),
or null characters, it won't be stored. This happens because these characters
are not allowed in field names in MongoDB [2], but so far there is no
verification of the metadata in Ceilometer's MongoDB driver, so as a result
we lose the data.

The solution to this problem seemed rather simple: before storing the data
we check the keys in resource_metadata, if it is a dict, and "quote" keys
with restricted characters in a similar way as was done in the change
request that redesigned the column separators in HBase [3]. After that we
store the metering data.
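
Just to illustrate the quoting step, here is a rough sketch of what I have
in mind (the exact escape sequences are only an example and are up for
discussion):

    def quote_keys(value):
        # recursively "quote" dict keys so they are legal MongoDB field names
        if isinstance(value, dict):
            quoted = {}
            for key, item in value.items():
                key = (key.replace('%', '%25')
                          .replace('.', '%2E')
                          .replace('$', '%24')
                          .replace('\x00', '%00'))
                quoted[key] = quote_keys(item)
            return quoted
        if isinstance(value, list):
            return [quote_keys(item) for item in value]
        return value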

But other unexpected difficulties appear at the step of getting the data
back. To get stored data we construct a meta query, and the structure of
that query was chosen to be identical to the native MongoDB query syntax,
so dots are used as the separator between the tree nodes of the stored data.

For example, if we need to check the value in the field "Foo":

{metadata:
    {Zoo:
        {Foo: "value"}}}

the query would be: "metadata.Zoo.Foo"

We don't know how deep the dict in the metadata is, so it is impossible to
correctly parse the query in order to "quote" field names containing dots.

I see two ways to improve this. The first is rather complex and based on
redesigning the structure of the metadata query in Ceilometer. I don't know
if that is even possible.

The second is based on removing the "bad" resource_metadata from the samples.
In this case we still lose the metadata, but we save the other metering data.
Of course, queries for metadata that was not saved will return nothing, so it
is not a complete solution, but some kind of workaround.

What do you think about that?
Any thoughts and propositions are kindly welcome.

[1] https://bugs.launchpad.net/mos/+bug/1360240
[2] http://docs.mongodb.org/manual/reference/limits/
[3] https://review.openstack.org/#/c/106376/

-- Igor Degtiarov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VPNaaS pending state handling

2014-08-29 Thread Paul Michali (pcm)
Comments in-line @PCM


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Aug 28, 2014, at 11:57 AM, Sridhar Ramaswamy  wrote:

> 
> https://bugs.launchpad.net/neutron/+bug/1355360
> 
> I'm working on this vpn vendor bug and am looking for guidance on the 
> approach. I'm also relatively new to neutron development so bear with some 
> newbie gaffs :)
> 
> The problem reported in this bug, in a nutshell, is the policies in the 
> neutron vpn db and virtual-machine implementing vpn goes out of sync when the 
> agent restarts (restart could be either operator driven or due to a software 
> error). 

@PCM To clarify, the bug is an enhancement to VPN to support restart handling 
(which doesn’t currently exist), right?


> 
> CSR vpn device driver currently doesn't do a sync when it comes up. I'm going 
> to add that as part of this bug fix.

@PCM Does the reference implementation handle restart? Is the handling 
non-disruptive (no loss to existing VPN connections)? Will this bug fix both 
reference and vendor VPN implementations?


> Still it will only partially solve the problem as it will take care of new 
> connections created (which goes to PENDING_CREATE state) & updates to 
> existing connections while the agent was down but NOT for deletes. For 
> deletes the connection entry gets deleted right at vpn_db level. 
> 
> My proposal is to introduce PENDING_DELETE state for vpn site-to-site 
> connection.  Implementing pending_delete will involve,

@PCM The PENDING_DELETE state already exists, but is not used currently for 
reference/vendor solutions, right?


> 
> 1) Moving the delete operation from vpn_db into service driver

@PCM Concerned about my understanding of this, or if it is how I’m interpreting 
the wording. The delete has two parts - database update and driver update to 
actually remove the connection. Are the database operations staying in 
vpn_db.py?


> 2) Changing the reference ipsec service driver to handle PENDING_DELETE 
> state. For now we can just do a simple db delete to preserve the existing 
> behavior.
> 3) CSR device driver will make use of PENDING_DELETE to correctly delete the 
> entries in the CSR device when the agent comes up.

@PCM Would the process be…

1) delete request puts connection in DELETE_PENDING state (dbase write), and 
notifies service driver
2) service driver sends request to device driver
3) device driver does actions to delete the connection
4) device driver notifies that delete is completed (I think this would be 
asynchronous, as the device driver doesn’t reply to the request)
5) database would update and remove the connection entry.

Is that correct?

Regards,

PCM


> 
> Sounds reasonable? Any thoughts?
> 
> thanks,
> - Sridhar
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-08-29 Thread Denis Makogon
Hello, stackers. I'd like to start a thread related to the backup procedure
for MagnetoDB, to be precise, for the Cassandra backend.

In order to implement a backup procedure for Cassandra we need to
understand how backups work.

To perform a backup:

   1. SSH into each node
   2. Call ‘nodetool snapshot’ with appropriate parameters
   3. Collect the backup
   4. Send the backup to remote storage
   5. Remove the initial snapshot


Let's take a look at how ‘nodetool snapshot’ works. Cassandra backs up
data by taking a snapshot of all on-disk data files (SSTable files) stored
in the data directory. Each time an SSTable gets flushed and snapshotted,
the snapshot is a hard link against the initial SSTable, pinned to a
specific timestamp.

Snapshots are taken per keyspace or per-CF and while the system is online.
However, nodes must be taken offline in order to restore a snapshot.

Using a parallel ssh tool (such as pssh), you can flush and then snapshot
an entire cluster. This provides an eventually consistent backup. Although
no one node is guaranteed to be consistent with its replica nodes at the
time a snapshot is taken, a restored snapshot can resume consistency using
Cassandra's built-in consistency mechanisms.

After a system-wide snapshot has been taken, you can enable incremental
backups on each node (disabled by default) to backup data that has changed
since the last snapshot was taken. Each time an SSTable is flushed, a hard
link is copied into a /backups subdirectory of the data directory.

Now let's see how we can deal with a snapshot once it's taken. Below you can
see the list of commands that need to be executed to prepare a snapshot:

Flushing SSTables for consistency

'nodetool flush'

Creating snapshots (for example of all keyspaces)

"nodetool snapshot -t %(backup_name)s 1>/dev/null",

where

   - backup_name - the name of the snapshot


Once it’s done we would need to collect all hard links into a common
directory (keeping the initial file hierarchy):

"sudo tar cpzfP /tmp/all_ks.tar.gz \
    $(sudo find %(datadir)s -type d -name %(backup_name)s)"

where

   - backup_name - the name of the snapshot,
   - datadir - storage location (/var/lib/cassandra/data by default)


Note that this operation can be extended:

   - if Cassandra was launched with more than one data directory (see
     cassandra.yaml)
   - if we want to back up only:
      - certain keyspaces at the same time
      - one keyspace
      - a list of CF’s for a given keyspace
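
To tie the commands above together, here is a rough per-node sketch in
Python (the remote storage upload is only a stub placeholder, and error
handling is omitted):

    import subprocess


    def upload_to_remote_storage(archive):
        # placeholder: push the archive to whatever remote storage is chosen
        pass


    def backup_node(backup_name, datadir='/var/lib/cassandra/data'):
        # 1. flush memtables so the snapshot captures everything on disk
        subprocess.check_call(['nodetool', 'flush'])
        # 2. snapshot all keyspaces under the given tag
        subprocess.check_call(['nodetool', 'snapshot', '-t', backup_name])
        # 3. collect the snapshot hard links into one archive
        snapshot_dirs = subprocess.check_output(
            ['find', datadir, '-type', 'd', '-name', backup_name]).split()
        archive = '/tmp/%s.tar.gz' % backup_name
        subprocess.check_call(['tar', 'cpzfP', archive] + snapshot_dirs)
        # 4. ship the archive off the node
        upload_to_remote_storage(archive)
        # 5. drop the local snapshot hard links
        subprocess.check_call(['nodetool', 'clearsnapshot'])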


Useful links

http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsNodetool_r.html

Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][stable] How to backport database schema fixes

2014-08-29 Thread Salvatore Orlando
If you are running version from a stable branch, changes in DB migrations
should generally be forbidden as the policy states since those migrations
are not likely to be executed again. Downgrading and then upgrading again
is extremely risky and I don't think anybody would ever do that.

However, if one is running stable branch X-2 where X is the current
development branch, back porting migration fixes could make sense for
upgrading to version X-1 if the migration being fixed is in the path
between X-2 and X-1.
Therefore I would forbid every fix to migration earlier than X-2 release
(there should not be any in theory but neutron has migrations back to
folsom). For the path between X-2 and  X-1 fixes might be ok. However,
rather than amending existing migration is always better to add new
migrations - even if it's a matter of enabling a given change for a
particular plugin (*). As nova does, the best place for doing that is
always immediately before release.

With alembic, we do not need to add placeholders, but just adjust pointers
just like you would when inserting an element in a dynamic list.

Salvatore

(*) we are getting rid of this conditional migration logic for juno anyway


On 29 August 2014 11:38, Yaguang Tang  wrote:

> Hi, all
>
> It seems that currently it's hard to backport any database schema fix to
> Neutron [1] which uses alembic to manage db schema version. Nova has the
> same issue before
> and a workaround is to put some placeholder files before each release.
> So first do we allow db schema fixes to be backport to stable for Neutron
> ?
> If we do, then how about put some placeholder files similar to Nova at the
> end of each release cycle? or we have some better solution for alembic.
>
>  From the stable maintainer side, we have a policy for stable backport
> https://wiki.openstack.org/wiki/StableBranch
>
>- DB schema changes is forbidden
>
> If we allow db schema backports for more than one project, I think we need
> to update the wiki.
>
> [1] https://review.openstack.org/#/c/110642/
> 
>
> --
> Tang Yaguang
>
> Canonical Ltd. | www.ubuntu.com | www.canonical.com
> gpg key: 0x187F664F
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread John Garbutt
On 28 August 2014 21:58, Chris Friesen  wrote:
> On 08/28/2014 02:25 PM, Jay Pipes wrote:
>> On 08/28/2014 04:05 PM, Chris Friesen wrote:
>>> The overall "scheduler-lib" Blueprint is marked with a "high" priority
>>> at "http://status.openstack.org/release/";.  Hopefully that would apply
>>> to sub-blueprints as well.
>>
>> a) There are no sub-blueprints to that scheduler-lib blueprint
>
> I guess my terminology was wrong.  The original email referred to
> "https://review.openstack.org/#/c/89893/"; as the "crucial BP that needs to
> be implemented".  That is part of
> "https://review.openstack.org/#/q/topic:bp/isolate-scheduler-db,n,z";, which
> is listed as a Gerrit topic in the "scheduler-lib" blueprint that I pointed
> out.

Yeah, it's confusing. Those patches are meant for a different blueprint, I assume.

>> b) If there were sub-blueprints, that does not mean that they would
>> necessarily take the same priority as their parent blueprint
>
> I'm not sure how that would work.  If we have a high-priority blueprint
> depending on work that is considered low-priority, that would seem to set up
> a classic priority inversion scenario.

What we do is this...

If something high priority depends on something low priority, the low
becomes high.
Or more often, they both become medium.

>> c) There's no reason priorities can't be revisited when necessary
>
> Sure, but in that case it might be a good idea to make the updated priority
> explicit.

If something looks like it has the wrong priority, just ping me, or
one of the other nova-drivers, and we can discuss it. We did a bit of
that at the mid-cylce meet up.

Sometimes I just messed up, sometimes we didn't realise how important
it is, sometimes we need to explain why the other things are
considered more important right now.

Maybe this is what I said before, but if you see a problem, please shout
ASAP, and let's get it sorted sooner, when it's usually easier to sort
out.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Some thoughts about Horizon's test suite

2014-08-29 Thread Richard Jones
Thanks for your thoughts Radomir. The nova api in question is memoized so
it'll only be called once per request. Caching it for longer would be a
very good idea, but that then brings into play deeper knowledge than I have
about how long to cache things like nova extension configuration. Also, I
looked into this to see whether we could use a nicer existing memoizing system
(one with a timeout, that doesn't use weakref and will clean out stale
entries), but none of them will handle the existence of the varying request
parameter, so more work would be required to build our own solution. It's
still something I'd like to see done.
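
For the record, the sort of thing I was imagining is a small timeout-based
cache that simply leaves the request argument out of the key; this is only a
rough sketch, and the cache period and invalidation would need real thought:

    import functools
    import time


    def memoized_with_timeout(ttl=300):
        """Cache a function's result for ttl seconds, ignoring 'request'."""
        def decorator(func):
            cache = {}

            @functools.wraps(func)
            def wrapper(request, *args, **kwargs):
                key = (args, tuple(sorted(kwargs.items())))
                entry = cache.get(key)
                if entry and time.time() - entry[1] < ttl:
                    return entry[0]
                result = func(request, *args, **kwargs)
                cache[key] = (result, time.time())
                return result
            return wrapper
        return decorator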

But that's not really the point of this email, as you note :)



On 29 August 2014 19:46, Radomir Dopieralski  wrote:

> On 29/08/14 04:22, Richard Jones wrote:
>
>
> > Very recently I attempted to fix a simple bug in which a Panel was being
> > displayed when it shouldn't have been. The resultant 5-line fix ended up
> > breaking 498 of the 1048 unit tests in the suite. I estimated that it
> > would take about a week's effort to address all the failing tests. For
> > more information see
> > 
>
> Having read that, I can't help but comment that maybe, just maybe,
> making an API call on each and every request to Horizon is not such a
> great idea after all, and should be very well thought out, as it is
> costly. In particular, it should be investigated whether the call could be
> made only on some requests. That would have the side effect of breaking
> far fewer tests.
>
> But I agree that mox is horrible in that it effectively freezes the
> implementation details of the tested unit, instead of testing its behavior.
>
> --
> Radomir Dopieralski
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread John Garbutt
Going a bit further up the thread where we are still talking about
spec reviews and not code reviews...

On 28 August 2014 21:42, Dugger, Donald D  wrote:
> I would contend that that right there is an indication that there's a problem 
> with the process.

We got two nova-core reviewer sponsors, to ensure the code would get
reviewed before FF.

We probably should have got two nova-driver sponsors for a spec
freeze. The cores don't have +2 in spec land.

This is the first release we are doing specs, so there are likely to
be holes in the process. I think next time we could try two nova-cores
and two nova-drivers (the driver might sign up for the spec review,
but not the code review).

Also, the spec got an exception for one week only. I was very
late adding the -2, apologies. I just spotted it was missed out
while doing a bit of housekeeping for juno-3.

> You submit a BP and then you have no idea of what is happening and no way of 
> addressing any issues.  If the priority is wrong I can explain why I think 
> the priority should be higher, getting stonewalled leaves me with no idea 
> what's wrong and no way to address any problems.

Feel free to raise this in the nova-meeting, or ping me or mikal on
IRC or via email.

> I think, in general, almost everyone is more than willing to adjust proposals 
> based upon feedback.  Tell me what you think is wrong and I'll either explain 
> why the proposal is correct or I'll change it to address the concerns.

Right. In this case, we just didn't get it reviewed. As mentioned,
probably because people didn't see this as important right now.

> Trying to deal with silence is really hard and really frustrating.  
> Especially given that we're not supposed to spam the mailing it's really hard 
> to know what to do.

For blueprint process stuff, email or catch me (johnthetubaguy) on
IRC, or mikal on IRC, or any of the nova-drivers. We can usually get
you an answer. Or generally ask people in #openstack-nova who should
be able to point you in the right direction.

>I don't know the solution but we need to do something.  More core team members 
>would help, maybe something like an automatic timeout where BPs/patches with 
>no negative scores and no activity for a week get flagged for special handling.

We are brainstorming ideas for Kilo. But it's always a balance. I don't
want to add extra red tape for every issue we have.

Right now we rely on people shouting on IRC if we forget really
important things, and fixing stuff up as required.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Review change to nova api pretty please?

2014-08-29 Thread Flavio Percoco
On 08/29/2014 07:52 AM, Alex Leonhardt wrote:
> Hi All,
> 
> Could someone please do the honor
> :) https://review.openstack.org/#/c/116472/ ? 
> PEP8 failed, but thats not my fault ;) hehe 
> 

Please abstain from sending review requests to the mailing list.

Thanks!

http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

> Thanks!
> Alex
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-08-29 Thread Flavio Percoco
On 08/28/2014 06:14 PM, Doug Hellmann wrote:
> Before Juno we set a deprecation policy for graduating libraries that said 
> the incubated versions of the modules would stay in the incubator repository 
> for one full cycle after graduation. This gives projects time to adopt the 
> libraries and still receive bug fixes to the incubated version (see 
> https://wiki.openstack.org/wiki/Oslo#Graduation).
> 
> That policy worked well early on, but has recently introduced some challenges 
> with the low level modules. Other modules in the incubator are still 
> importing the incubated versions of, for example, timeutils, and so tests 
> that rely on mocking out or modifying the behavior of timeutils do not work 
> as expected when different parts of the application code end up calling 
> different versions of timeutils. We had similar issues with the notifiers and 
> RPC code, and I expect to find other cases as we continue with the 
> graduations.
> 
> To deal with this problem, I propose that for Kilo we delete graduating 
> modules as soon as the new library is released, rather than waiting to the 
> end of the cycle. We can update the other incubated modules at the same time, 
> so that the incubator will always use the new libraries and be consistent.
> 
> We have not had a lot of patches where backports were necessary, but there 
> have been a few important ones, so we need to retain the ability to handle 
> them and allow projects to adopt libraries at a reasonable pace. To handle 
> backports cleanly, we can “freeze” all changes to the master branch version 
> of modules slated for graduation during Kilo (we would need to make a good 
> list very early in the cycle), and use the stable/juno branch for backports.
> 
> The new process would be:
> 
> 1. Declare which modules we expect to graduate during Kilo.
> 2. Changes to those pre-graduation modules could be made in the master branch 
> before their library is released, as long as the change is also backported to 
> the stable/juno branch at the same time (we should enforce this by having 
> both patches submitted before accepting either).
> 3. When graduation for a library starts, freeze those modules in all branches 
> until the library is released.
> 4. Remove modules from the incubator’s master branch after the library is 
> released.
> 5. Land changes in the library first.
> 6. Backport changes, as needed, to stable/juno instead of master.
> 
> It would be better to begin the export/import process as early as possible in 
> Kilo to keep the window where point 2 applies very short.
> 
> If there are objections to using stable/juno, we could introduce a new branch 
> with a name like backports/kilo, but I am afraid having the extra branch to 
> manage would just cause confusion.
> 
> I would like to move ahead with this plan by creating the stable/juno branch 
> and starting to update the incubator as soon as the oslo.log repository is 
> imported (https://review.openstack.org/116934).
> 
> Thoughts?

I like the plan. Being more aggressive in the way we deprecate
graduated modules from oslo-incubator helps make sure the projects are
all aligned.

One thing we may want to think about is to graduate fewer modules in
order to give liaisons enough time to migrate the projects they're
taking care of. The more libs we graduate, the more work we're putting
on liaisons, which means they'll need more time (besides the time
they're dedicating to other projects) to do that work.

One more thing, we need to add to the list of ports to do during Kilo
the backlog of ports that haven't happened yet. For example, I haven't
ported glance to oslo.utils yet. I expect to do it before the end of the
cycle but Murphy :)

Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Some thoughts about Horizon's test suite

2014-08-29 Thread Radomir Dopieralski
On 29/08/14 04:22, Richard Jones wrote:


> Very recently I attempted to fix a simple bug in which a Panel was being
> displayed when it shouldn't have been. The resultant 5-line fix ended up
> breaking 498 of the 1048 unit tests in the suite. I estimated that it
> would take about a week's effort to address all the failing tests. For
> more information see
> 

Having read that, I can't help but comment that maybe, just maybe,
making an API call on each and every request to Horizon is not such a
great idea after all, and should be very well thought out, as it is
costly. In particular, it should be investigated whether the call could be
made only on some requests. That would have the side effect of breaking
far fewer tests.

But I agree that mox is horrible in that it effectively freezes the
implementation details of the tested unit, instead of testing its behavior.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-08-29 Thread Thierry Carrez
That all makes sense to me.

Doug Hellmann wrote:
> Before Juno we set a deprecation policy for graduating libraries that said 
> the incubated versions of the modules would stay in the incubator repository 
> for one full cycle after graduation. This gives projects time to adopt the 
> libraries and still receive bug fixes to the incubated version (see 
> https://wiki.openstack.org/wiki/Oslo#Graduation).
> 
> That policy worked well early on, but has recently introduced some challenges 
> with the low level modules. Other modules in the incubator are still 
> importing the incubated versions of, for example, timeutils, and so tests 
> that rely on mocking out or modifying the behavior of timeutils do not work 
> as expected when different parts of the application code end up calling 
> different versions of timeutils. We had similar issues with the notifiers and 
> RPC code, and I expect to find other cases as we continue with the 
> graduations.
> 
> To deal with this problem, I propose that for Kilo we delete graduating 
> modules as soon as the new library is released, rather than waiting to the 
> end of the cycle. We can update the other incubated modules at the same time, 
> so that the incubator will always use the new libraries and be consistent.
> 
> We have not had a lot of patches where backports were necessary, but there 
> have been a few important ones, so we need to retain the ability to handle 
> them and allow projects to adopt libraries at a reasonable pace. To handle 
> backports cleanly, we can “freeze” all changes to the master branch version 
> of modules slated for graduation during Kilo (we would need to make a good 
> list very early in the cycle), and use the stable/juno branch for backports.
> 
> The new process would be:
> 
> 1. Declare which modules we expect to graduate during Kilo.
> 2. Changes to those pre-graduation modules could be made in the master branch 
> before their library is released, as long as the change is also backported to 
> the stable/juno branch at the same time (we should enforce this by having 
> both patches submitted before accepting either).
> 3. When graduation for a library starts, freeze those modules in all branches 
> until the library is released.
> 4. Remove modules from the incubator’s master branch after the library is 
> released.
> 5. Land changes in the library first.
> 6. Backport changes, as needed, to stable/juno instead of master.
> 
> It would be better to begin the export/import process as early as possible in 
> Kilo to keep the window where point 2 applies very short.
> 
> If there are objections to using stable/juno, we could introduce a new branch 
> with a name like backports/kilo, but I am afraid having the extra branch to 
> manage would just cause confusion.
> 
> I would like to move ahead with this plan by creating the stable/juno branch 
> and starting to update the incubator as soon as the oslo.log repository is 
> imported (https://review.openstack.org/116934).
> 
> Thoughts?
> 
> Doug
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][stable] How to backport database schema fixes

2014-08-29 Thread Yaguang Tang
Hi, all

It seems that currently it's hard to backport any database schema fix to
Neutron [1] which uses alembic to manage db schema version. Nova has the
same issue before
and a workaround is to put some placeholder files before each release.
So first do we allow db schema fixes to be backport to stable for Neutron ?
If we do, then how about put some placeholder files similar to Nova at the
end of each release cycle? or we have some better solution for alembic.

 From the stable maintainer side, we have a policy for stable backport
https://wiki.openstack.org/wiki/StableBranch

   - DB schema changes is forbidden

If we allow db schema backports for more than one project, I think we need
to update the wiki.

[1] https://review.openstack.org/#/c/110642/


-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
gpg key: 0x187F664F
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-29 Thread Thierry Carrez
James Polley wrote:
> 
> > However, Thierry pointed
> > to https://wiki.openstack.org/wiki/Governance/Foundation/Structure
> which
> > still refers to Project Technical Leads and says explicitly that they
> > lead individual projects, not programs. I actually have edit access to
> > that page, so I could at least update that with a simple
> > "s/Project/Program/", if I was sure that was the right thing to do.
> 
> Don't underestimate how stale wiki pages can become! Yes, fix it.
> 
> I don't know if I've fixed it, but I've certainly replaced all users of
> the word Project with Program.
> 
> Whether or not it now matches reality, I'm not sure.
> 
> I alsp removed (what I assume is) a stale reference to the PPB and added
> a new heading for the TC.

It looks correct to me, thanks!

> > http://www.openstack.org/ has a link in the bottom nav that says
> > "Projects"; it points to http://www.openstack.org/projects/ which
> > redirects to http://www.openstack.org/software/ which has a list of
> > things like "Compute" and "Storage" - which as far as I know are
> > Programs, not Projects. I don't know how to update that link in
> the nav
> > panel.
> 
> That's because the same word ("compute") is used for two different
> things: a program name ("Compute") and an "official OpenStack name" for
> a project ("OpenStack Compute a.k.a. Nova"). Basically official
> OpenStack names reduce confusion for newcomers ("What is Nova ?"), but
> they confuse old-timers at some point ("so the Compute program produces
> Nova a.k.a. OpenStack Compute ?").
> 
> 
> That's confusing to me. I had thought that part of the reason for the
> separation was to enable a level of indirection - if the Compute program
> team decide that a new project called (for example) SuperNova should be
> the main project, that just means that Openstack Compute is now a
> pointer to a different project, supported by the same program team.
> 
> It sounds like that isn't the intent though?

That's more of a side-effect than the intent, IMHO. The indirection we
created is between teams and code repositories.

> > I wasn't around when the original Programs/Projects discussion was
> > happening - which, I suspect, has a lot to do with why I'm confused
> > today - it seems as though people who were around at the time
> understand
> > the difference, but people who have joined since then are relying on
> > multiple conflicting verbal definitions. I believe, though,
> > that
> http://lists.openstack.org/pipermail/openstack-dev/2013-June/010821.html
> > was one of the earliest starting points of the discussion. That page
> > points at https://wiki.openstack.org/wiki/Projects, which today
> contains
> > a list of Programs. That page does have a definition of what a Program
> > is, but doesn't explain what a Project is or how they relate to
> > Programs. This page seems to be locked down, so I can't edit it.
> 
> https://wiki.openstack.org/wiki/Projects was renamed to
> https://wiki.openstack.org/wiki/Programs with the wiki helpfully leaving
> a redirect behind. So the content you are seeing here is the "Programs"
> wiki page, which is why it doesn't define "projects".
> 
> We don't really use the word "project" that much anymore, we prefer to
> talk about code repositories. Programs are teams working on a set of
> code repositories. Some of those code repositories may appear in the
> integrated release.
> 
> This explanation of the difference between projects and programs sounds
> like it would be useful to add to /Programs - but I can't edit that page. 

This page reflects the official list of programs, which is why it's
protected. it's supposed to be replaced by an automatic publication from
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
which is the ultimate source of truth on that topic.

> [1] https://wiki.openstack.org/wiki/ProjectTypes
> 
> I *can* edit that page; I'd like to bring it up-to-date. It seems like a
> good basis for explaining the difference between Programs and Projects
> and the historical reasons for the split. I'll aim to take a stab at
> this next week.

Please feel free to do so, however that page is really an artifact of
the old way we were structured, and is therefore useful as an historic
leftover :) It's not linked from anywhere these days. Maybe you should
create a new page, like
https://wiki.openstack.org/wiki/Projects_vs_Programs ? What you want to
talk about is not really about "Project Types" anyway.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Daniel P. Berrange
On Fri, Aug 29, 2014 at 11:07:33AM +0200, Thierry Carrez wrote:
> Joe Gordon wrote:
> > On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
> > mailto:alan.kavan...@ericsson.com>> wrote:
> > 
> >> I share Donald's points here, I believe what would help is to
> >> clearly describe in the Wiki the process and workflow for the BP
> >> approval process and build in this process how to deal with
> >> discrepancies/disagreements and build timeframes for each stage and
> >> process of appeal etc.
> >> The current process would benefit from some fine tuning and helping
> >> to build safe guards and time limits/deadlines so folks can expect
> >> responses within a reasonable time and not be left waiting in the cold.
> > 
> > This is a resource problem, the nova team simply does not have enough
> > people doing enough reviews to make this possible. 
> 
> I think Nova lacks core reviewers more than it lacks reviewers, though.
> Just looking at the ratio of core developers vs. patchsets proposed,
> it's pretty clear that the core team is too small:
> 
> Nova: 750 patchsets/month for 21 core = 36
> Heat: 230/14 = 16
> Swift: 50/16 = 3
> 
> Neutron has the same issue (550/14 = 39). I think above 20, you have a
> dysfunctional setup. No amount of process, spec, or runway will solve
> that fundamental issue.
> 
> The problem is, you can't just add core reviewers, they have to actually
> understand enough of the code base to be trusted with that +2 power. All
> potential candidates are probably already in. In Nova, the code base is
> so big it's difficult to find people that know enough of it. In Neutron,
> the contributors are often focused on subsections of the code base so
> they are not really interested in learning enough of the rest. That
> makes the pool of core candidates quite dry.
> 
> I fear the only solution is smaller groups being experts on smaller
> codebases. There is less to review, and more candidates that are likely
> to be experts in this limited area.
> 
> Applied to Nova, that means modularization -- having strong internal
> interfaces and trusting subteams to +2 the code they are experts on.
> Maybe VMWare driver people should just +2 VMware-related code. We've had
> that discussion before, and I know there is a dangerous potential
> quality slope there -- I just fail to see any other solution to bring
> that 750/21=36 figure down to a bearable level, before we burn out all
> of the Nova core team.

I broadly agree - I think that unless Nova moves more towards something
that is closer to the Linux style subsystem maintainer model we are 
doomed. I know in Linux, the maintainers actually use separate git trees,
and that isn't what I mean - I think using a single git tree is still
desirable (at least for now). What I mean is that we should place more
trust on the opinion of the people who are experts for a particular
area of code. Let those experts take on a greater burden of the code
review so core team can put more focus on actual merge approval.

I know some of the core team try to do this implicitly - eg we know who
some of the main people involved in hyperv or vmware are, so will tend
to treat their +1 as an effective +2 from the POV of their driver code,
but our rules still require two actual +2s from core, so it doesn't
entirely help us right now. I think we need to do some work in tooling
to make this more of an explicit process though.

The problem is that gerrit does not allow us to say person X has +2 for
code that touches directory path /foo/bar. The +2 is global to the entire
repository. We could try to deal with this problem outside of gerrit
though. As a starting point, each virt driver (or major functional area
of nova codebase) should have an explicit list of people who are considered
to be the "core team" for that area of code.  From such a list, tools like
gerrymander (or anything else that can query gerrit), could see when a
person in those lists +1'd a change touching their area of responsibility
and change that to be presented as a "+1.5".
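
As a trivial sketch of the client-side logic (the maintainers data layout
here is entirely invented):

    # maintainers.yaml (invented layout):
    #   nova/virt/libvirt/:   [alice, bob]
    #   nova/virt/vmwareapi/: [carol, dave]


    def effective_score(reviewer, score, touched_files, maintainers):
        """Present a +1 as +1.5 when the reviewer maintains every touched path."""
        if score != 1:
            return score
        for path in touched_files:
            if not any(path.startswith(prefix) and reviewer in owners
                       for prefix, owners in maintainers.items()):
                return score
        return 1.5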

This would make it very explicit to the reviewers that they should consider
the change to be (almost) equivalent to a +2.  We could potentially then
relax the rule of

 "+A requires two +2s"

to be 

 "+A requires (two +2s) or (one +2 and one +1.5)"

I think this would significantly improve our review throughput. I think
it could also help us get people to gain greater responsibility. The
jump from regular contributor to core team member is quite a high bar.
If we had the intermediate step of subsystem team member that would ease
the progression. It would also give the subsystem teams a greater sense
of engagement & value in the nova community

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entang

Re: [openstack-dev] [all] Design Summit reloaded

2014-08-29 Thread Thierry Carrez
Anne Gentle wrote:
> On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez  > wrote:
> 
> Hi everyone,
> 
> I've been thinking about what changes we can bring to the Design Summit
> format to make it more productive. I've heard the feedback from the
> mid-cycle meetups and would like to apply some of those ideas for Paris,
> within the constraints we have (already booked space and time). Here is
> something we could do:
> 
> Day 1. Cross-project sessions / incubated projects / other projects
> 
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the various
> experiments we conducted during juno. Don't hesitate to schedule 2 slots
> for discussions, so that we have time to come to the bottom of those
> issues. Incubated projects (and maybe "other" projects, if space allows)
> occupy the remaining space on day 1, and could occupy "pods" on the
> other days.
> 
> Yep, I think this works in theory, the tough part will be when all the
> incubating projects realize they're sending people for a single day?
> Maybe it'll work out differently than I think though. It means fitting
> ironic, barbican, designate, manila, marconi in a day? 

Actually those projects would get pod space for the rest of the week, so
they should stay! Also some of them might have graduated by then :)

> Also since QA, Infra, and Docs are cross-project AND Programs, where do
> they land?

I think those teams work on different issues. Some issues require a lot
of communication and input because they are cross-project problems that
those teams are tasked with solving -- in which case that belongs to the
cross-project day. Other issues are more implementation details and
require mostly the team members but not so much external input -- those
belong to the specific slots or the "contributors meetup". Obviously
some things will be a bit borderline and we'll have to pick one or the
other based on available slots.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-29 Thread Thierry Carrez
Sean Dague wrote:
> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>>>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez
>>>>> wrote:
>>>>>> Day 1. Cross-project sessions / incubated projects / other
>>>>>> projects
>>>>>>
>>>>>> I think that worked well last time. 3 parallel rooms where we can
>>>>>> address top cross-project questions, discuss the results of the
>>>>>> various experiments we conducted during juno. Don't hesitate to
>>>>>> schedule 2 slots for discussions, so that we have time to come to
>>>>>> the bottom of those issues. Incubated projects (and maybe "other"
>>>>>> projects, if space allows) occupy the remaining space on day 1, and
>>>>>> could occupy "pods" on the other days.
>>>>>
>>>>> If anything, I’d like to have fewer cross-project tracks running
>>>>> simultaneously. Depending on which are proposed, maybe we can make
>>>>> that happen. On the other hand, cross-project issues is a big theme
>>>>> right now so maybe we should consider devoting more than a day to
>>>>> dealing with them.
>>>>
>>>> I agree with Doug here. I'd almost say having a single cross-project
>>>> room, with serialized content would be better than 3 separate
>>>> cross-project tracks. By nature, the cross-project sessions will attract
>>>> developers that work or are interested in a set of projects that looks
>>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>>> would increase the likelihood that developers would once more have to
>>>> choose among simultaneous sessions that they have equal interest in. For
>>>> Infra and QA folks, this likelihood is even greater...
>>>>
>>>> I think I'd prefer a single cross-project track on the first day.
>>>
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> doubling up that gave 19 sessions. (about 30 - 35 proposals for that
>>> slot set).
>>
>> I'm not sure that would be a bad thing :)
>>
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>>
>> Just my two cents, though..
> 
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross projects sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.

There is a trade-off here. Parallel cross-project tracks let us address
more issues in the limited time we have, and they also let us split the
audience, so that we don't end up with 500 people in the same room and
nothing gets done in 40 minutes. It's true that sometimes you wish you
could be in
two different places at the same time, but we generally prevent the most
blatant collisions during scheduling, and sometimes forcing people to
choose what they really care about is not that bad.

The feedback I got from Atlanta was that the 3-parallel-room setup went
well, and there weren't that many conflicts.

Maybe having *2* cross-project topics running at the same time (instead
of 3 or 1) would be the right trade-off. We would still need to be more
picky in selecting which issues we want to address, we would split the
audience into two rooms, and we would reduce the likelihood of conflict
significantly.

> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?

The 3 or 4 other rooms would give incubated projects (and "other"
projects) some scheduled time. It also runs at the same time as the
conference.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Daniel P. Berrange
On Thu, Aug 28, 2014 at 04:27:59PM -0600, Chris Friesen wrote:
> On 08/28/2014 04:01 PM, Joe Gordon wrote:
> >
> >
> >
> >On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
> ><alan.kavan...@ericsson.com> wrote:
> >
> >I share Donald's points here. I believe what would help is to
> >clearly describe in the wiki the process and workflow for BP
> >approval, and to build into this process how to deal with
> >discrepancies/disagreements, with timeframes for each stage and a
> >process of appeal, etc.
> >The current process would benefit from some fine tuning, adding
> >safeguards and time limits/deadlines so folks can expect
> >responses within a reasonable time and are not left waiting in the cold.
> >
> >
> >This is a resource problem; the nova team simply does not have enough
> >people doing enough reviews to make this possible.
> 
> All the more reason to make it obvious which reviews are not being addressed
> in a timely fashion.  (I'm thinking something akin to the order screen at a
> fast food restaurant that starts blinking in red and beeping if an order
> hasn't been filled in a certain amount of time.)

This information can easily be queried from gerrit. I proposed last week that
core team members should especially look for reviews which already have one +2,
since those are potentially approvable and thus should not be left to
lie idle for a long time:

  http://lists.openstack.org/pipermail/openstack-dev/2014-August/043657.html

There are a variety of other ways/criteria for querying lists of changes, which
I outline with the gerrymander tool:

  http://lists.openstack.org/pipermail/openstack-dev/2014-August/043085.html
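
(A rough sketch, not from the original mail, of the kind of gerrit query
described above, using the Gerrit REST changes endpoint; the host, project
name and result limit are illustrative assumptions:)

    import json
    import requests

    GERRIT = "https://review.openstack.org"
    # Open nova changes that already carry a +2 and have no -1/-2 code reviews.
    query = ("status:open project:openstack/nova "
             "label:Code-Review=2 -label:Code-Review<=-1")

    resp = requests.get(GERRIT + "/changes/", params={"q": query, "n": 50})
    # Gerrit prefixes its JSON output with ")]}'" to defeat XSSI; skip that line.
    changes = json.loads(resp.text.split("\n", 1)[1])

    for change in changes:
        print("%s  %s" % (change["_number"], change["subject"]))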

> Perhaps by making it clear that reviews are a bottleneck this will actually
> help to address the problem.

We have the tools / capabilities for this already. The problems are whether
people effectively use the tools to prioritize their work, and whether we
even have enough review bandwidth at all.

I'm certain though that this scheduler review should not have got lost
the way it did.

I'd like to see the core team set a firm target that a review with one +2
and no -1s should not be allowed to languish for more than 1 week without
getting either a second +2 or a -1 (unless it is blocked by a change
it depends on).

Regards,
Daniel
-- 
|: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Daniel P. Berrange
On Thu, Aug 28, 2014 at 03:44:25PM -0400, Jay Pipes wrote:
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> >I’ll try and not whine about my pet project but I do think there is a
> >problem here.  For the Gantt project to split out the scheduler there is
> >a crucial BP that needs to be implemented (
> >https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has
> >been rejected and we’ll have to try again for Kilo.  My question is did
> >we do something wrong or is the process broken?
> >
> >Note that we originally proposed the BP on 4/23/14, went through 10
> >iterations to the final version on 7/25/14 and the final version got
> >three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
> >specific people, we didn’t get the second +2, hence the rejection.
> >
> >I understand that reviews are a burden and very hard but it seems wrong
> >that a BP with multiple positive reviews and no negative reviews is
> >dropped because of what looks like indifference.
> 
> I would posit that this is not actually indifference. The reason that there
> may not have been >1 +2 from a core team member may very well have been that
> the core team members did not feel that the blueprint's priority was high
> enough to put before other work, or that the core team members did not have the
> time to comment on the spec (due to them not feeling the blueprint had the
> priority to justify the time to do a full review).

That is fine from the POV of a general blueprint. In this case though
we explicitly approved an exception to the freeze for this blueprint.
This (w|sh)ould only have been done if we considered it high enough
priority and with a commitment to actually review it.  ie we should
not approve exceptions to freeze dates for things we don't care about.

> Note that I'm not a core drivers team member.

Which I think is an issue in itself. With all the problems we have
with review bandwidth, the idea that we should pick an even smaller
subset of 'nova core' to form a 'nova drivers' group is broken. I
was rather surprised myself when I first learnt that 'nova drivers'
even existed (by finding I could not +2 specs). I was lucky that
Mikal proposed to add me to nova drivers, so it ultimately didn't
impact me. Looking at nova core though, I really don't see why some
members of nova core should be privileged over others in reviewing &
approving specs. IMHO, the idea of the smaller nova drivers group
should die and everyone in nova core should share that responsibility.

Regards,
Daniel
-- 
|: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-08-29 Thread Thierry Carrez
Joe Gordon wrote:
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
> <alan.kavan...@ericsson.com> wrote:
> 
>> I share Donald's points here. I believe what would help is to
>> clearly describe in the wiki the process and workflow for BP
>> approval, and to build into this process how to deal with
>> discrepancies/disagreements, with timeframes for each stage and a
>> process of appeal, etc.
>> The current process would benefit from some fine tuning, adding
>> safeguards and time limits/deadlines so folks can expect
>> responses within a reasonable time and are not left waiting in the cold.
> 
> This is a resource problem; the nova team simply does not have enough
> people doing enough reviews to make this possible.

I think Nova lacks core reviewers more than it lacks reviewers, though.
Just looking at the ratio of patchsets proposed vs. core developers,
it's pretty clear that the core team is too small:

Nova: 750 patchsets/month for 21 core = 36
Heat: 230/14 = 16
Swift: 50/16 = 3

Neutron has the same issue (550/14 = 39). I think above 20 patchsets per
core per month, you have a dysfunctional setup. No amount of process,
spec, or runway will solve that fundamental issue.
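
(A trivial, purely illustrative sketch of the arithmetic above, using the
per-project figures quoted; not part of the original mail:)

    # Patchsets proposed per core reviewer per month, from the numbers above.
    projects = {"nova": (750, 21), "neutron": (550, 14),
                "heat": (230, 14), "swift": (50, 16)}
    for name, (patchsets, cores) in sorted(projects.items()):
        print("%-8s %d/%d = %d" % (name, patchsets, cores,
                                   round(float(patchsets) / cores)))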

The problem is, you can't just add core reviewers, they have to actually
understand enough of the code base to be trusted with that +2 power. All
potential candidates are probably already in. In Nova, the code base is
so big it's difficult to find people that know enough of it. In Neutron,
the contributors are often focused on subsections of the code base so
they are not really interested in learning enough of the rest. That
makes the pool of core candidates quite dry.

I fear the only solution is smaller groups being experts on smaller
codebases. There is less to review, and more candidates that are likely
to be experts in this limited area.

Applied to Nova, that means modularization -- having strong internal
interfaces and trusting subteams to +2 the code they are experts on.
Maybe VMWare driver people should just +2 VMware-related code. We've had
that discussion before, and I know there is a dangerous potential
quality slope there -- I just fail to see any other solution to bring
that 750/21=36 figure down to a bearable level, before we burn out all
of the Nova core team.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev