[openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-07 Thread Lu, Lianhao
Hi guys,

This afternoon I suddenly find that there are quite a lot of nova py27 unit 
test failures on Jenkins, like 
http://logs.openstack.org/15/62815/5/gate/gate-nova-python27/82d5d52/console.html.

It seems to me that the registerCloseCallback method is no longer available 
in the virConnect class. Could this be caused by a new version of the 
libvirt Python binding?

Any comments?

-Lianhao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Availability of external testing logs

2014-01-07 Thread Torbjorn Tornkvist

Hi,

Sorry for the problems.
I've missed some direct mails to me (I'm drowning in OpenStack mail...)
I will make sure our Jenkins setup won't be left unattended in the future.

How can I remove those '-1' votes?

It seems that, starting from Jan 2, 2014 5:46:26 PM, after change:
https://review.openstack.org/#/c/64696/

something happened that makes my tox run crash with a traceback.
I'll include the traceback below in case someone can offer some help.
(I'm afraid I don't know anything about Python...)
---
vagrant@quantal64:~/neutron$ sudo tox -e py27 -r -- 
neutron.tests.unit.ml2.test_mechanism_ncs

GLOB sdist-make: /home/vagrant/neutron/setup.py
py27 create: /home/vagrant/neutron/.tox/py27
ERROR: invocation failed, logfile: 
/home/vagrant/neutron/.tox/py27/log/py27-0.log

ERROR: actionid=py27
msg=getenv
cmdargs=['/usr/bin/python2.7', 
'/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv.py', 
'--setuptools', '--python', '/usr/bin/python2.7', 'py27']
env={'LC_NUMERIC': 'sv_SE.UTF-8', 'LOGNAME': 'root', 'USER': 'root', 
'HOME': '/home/vagrant', 'LC_PAPER': 'sv_SE.UTF-8', 'PATH': 
'/home/vagrant/neutron/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 
'DISPLAY': 'localhost:10.0', 'LANG': 'en_US.utf8', 'TERM': 
'xterm-256color', 'SHELL': '/bin/bash', 'LANGUAGE': 'en_US:', 
'LC_MEASUREMENT': 'sv_SE.UTF-8', 'SUDO_USER': 'vagrant', 'USERNAME': 
'root', 'LC_IDENTIFICATION': 'sv_SE.UTF-8', 'LC_ADDRESS': 'sv_SE.UTF-8', 
'SUDO_UID': '1000', 'VIRTUAL_ENV': '/home/vagrant/neutron/.tox/py27', 
'SUDO_COMMAND': '/usr/local/bin/tox -e py27 -r -- 
neutron.tests.unit.ml2.test_mechanism_ncs', 'SUDO_GID': '1000', 
'LC_TELEPHONE': 'sv_SE.UTF-8', 'LC_MONETARY': 'sv_SE.UTF-8', 'LC_NAME': 
'sv_SE.UTF-8', 'MAIL': '/var/mail/root', 'LC_TIME': 'sv_SE.UTF-8', 
'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}

Already using interpreter /usr/bin/python2.7
New python executable in py27/bin/python2.7
Also creating executable in py27/bin/python
Installing setuptools, pip...
  Complete output from command 
/home/vagrant/neutron/.tox/py27/bin/python2.7 -c import sys, pip; 
pip...ll\] + sys.argv[1:]) setuptools pip:

  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1722, in find_in_zip
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1298, in has_metadata
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1614, in _has
  File 

Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-07 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
Sent: Monday, January 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer


On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - 
FI/Espoo) vijayakumar.kodam@nsn.commailto:vijayakumar.kodam@nsn.com 
wrote:
In this case, simply changing the meter properties in a configuration file 
should be enough. An inotify watch could notify ceilometer of changes to 
the config file, and ceilometer would then update the meters automatically 
without restarting.

Why can't this be something the admin configures with the inotifywait(1) command?

Or this could be an API call for enabling/disabling meters, which might be 
more useful than having to change the config files.

Chmouel.

I haven't tried inotifywait(1) in this implementation; I need to check 
whether it would be useful here.
Yes, an API call could be more useful than changing the config files manually.
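The reload-on-change idea discussed above (watch the meter config file, apply changes without restarting ceilometer) can be sketched minimally as follows. This is an illustrative, portable mtime-check version, not ceilometer code; a Linux deployment would more likely use inotify (or inotifywait) to avoid polling, and all names here are made up for the example.

```python
import os


class ConfigWatcher(object):
    """Illustrative sketch: remember a config file's mtime and fire a
    callback when it has changed since the last check. A real service
    would drive check() from a periodic task or an inotify event."""

    def __init__(self, path, on_change):
        self.path = path
        self.on_change = on_change
        self._mtime = os.stat(path).st_mtime

    def check(self):
        """Return True (and invoke the callback) if the file changed."""
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            self.on_change(self.path)
            return True
        return False
```

The callback would be the place to re-read the meter definitions and update the running pipeline.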

Thanks,
VijayKumar


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-07 Thread Thierry Carrez
Matt Riedemann wrote:
 There is discussion in this thread about "wouldn't it be nice to have a
 tag on commits for changes that impact upgrades?".  There is.
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/016619.html
 
 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
 
 Here is an example of a patch going through the gate now with
 UpgradeImpact:
 
 https://review.openstack.org/#/c/62815/

The good thing about UpgradeImpact is that it's less subjective than
OpsImpact, and I think it catches what matters: backward-incompatible
changes, upgrades needing manual intervention (or smart workarounds in
packaging), etc.

Additional benefit is that it's relevant for more than just the ops
population: packagers and the release notes writers also need to track
those.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Status of nova-network deprecation

2014-01-07 Thread Thierry Carrez
Tom Fifield wrote:
 It's a couple of weeks out from the slated decision milestone
 (icehouse-2) to potentially deprecate nova-network. Since I guess
 there's still time to affect this outcome, but I haven't seen much
 communication recently, here's a thread!
 [...]
 = Does anyone have any time to inform about these points, or any other
 salient ones?

Thanks for raising the thread.

I added this topic to the Project/Release status meeting agenda (Tuesday
21:00 UTC in #openstack-meeting).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [DevStack] Nominate Chmouel Boudjnah for core team

2014-01-07 Thread Gary Kotton
+1

On 1/6/14 7:55 PM, Florent Flament florent.flament-...@cloudwatt.com
wrote:

+1

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, January 6, 2014 5:30:09 PM
Subject: Re: [openstack-dev] [DevStack] Nominate Chmouel Boudjnah for
core team

On 01/06/2014 11:26 AM, Dean Troyer wrote:
 With the new year comes a long-overdue cleanup to the devstack-core
 membership and the desire to expand the team a bit.  I propose to add
 Chmouel Boudjnah as he has been a steady contributor for some time,
 doing much of the Swift implementation.
 
 dt
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com
 
 
 

+1

   -Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net




Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-07 Thread Day, Phil
It would be nice in this specific example, though, if the actual upgrade 
impact were explicitly called out in the commit message.

From the DocImpact it looks as if some Neutron config options are changing 
names - in which case the impact would seem to be that running systems have 
until the end of this cycle to change the names in their config files. 

(Is that the point at which the change would need to be made - i.e. if someone 
is planning an upgrade from H to I they need to make sure they have the new 
config names in place before the update ?)

Looking at the changes highlighted in nova.conf.sample it looks as if a lot 
more has changed - but I'm guessing this is an artifact of the way the file is 
generated rather than actual wholesale changes to config options.

Either way, I'm not sure anyone trying to plan around the upgrade impact 
should be expected to dig into the diffs of the changed files to work out 
what they need to do and what time period they have to do it in.

So it looks as if UpgradeImpact is really a warning of some change that needs 
to be considered at some point, but that doesn't break a running system just 
by being incorporated (since the deprecated names are still supported). The 
subsequent change that eventually removes the deprecated names is the actual 
upgrade impact, in that once that change is incorporated the system will be 
broken if some extra action isn't taken.
Would both of those changes be tagged as UpgradeImpact? Should we make some 
distinction between these two cases?
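The deprecation mechanism Phil describes (old config option names kept working for a cycle before removal) can be illustrated with a toy lookup. This is not oslo.config's real API - oslo.config handles this with `deprecated_name` on option definitions - just a minimal sketch of the fallback behaviour, with illustrative option names:

```python
import warnings


def get_opt(conf, name, deprecated_name=None):
    """Toy illustration of deprecated-option fallback: prefer the new
    name, fall back to the old one with a warning during the deprecation
    period, and only fail once the deprecated name is finally removed."""
    if name in conf:
        return conf[name]
    if deprecated_name is not None and deprecated_name in conf:
        warnings.warn("option '%s' is deprecated; use '%s'"
                      % (deprecated_name, name), DeprecationWarning)
        return conf[deprecated_name]
    raise KeyError(name)
```

The "actual upgrade impact" lands at the point where the fallback branch is deleted: configs still using the old name stop resolving.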

Phil


From: Thierry Carrez [thie...@openstack.org]
Sent: 07 January 2014 10:04
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] minimum review period for functional 
changes that break backwards compatibility

Matt Riedemann wrote:
 There is discussion in this thread about "wouldn't it be nice to have a
 tag on commits for changes that impact upgrades?".  There is.

 http://lists.openstack.org/pipermail/openstack-dev/2013-October/016619.html

 https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

 Here is an example of a patch going through the gate now with
 UpgradeImpact:

 https://review.openstack.org/#/c/62815/

The good thing about UpgradeImpact is that it's less subjective than
OpsImpact, and I think it catches what matters: backward-incompatible
changes, upgrades needing manual intervention (or smart workarounds in
packaging), etc.

Additional benefit is that it's relevant for more than just the ops
population: packagers and the release notes writers also need to track
those.

--
Thierry Carrez (ttx)



Re: [openstack-dev] oslo.config error on running Devstack

2014-01-07 Thread Michael Kerrin
I have been seeing this problem also. 

My problem is actually with oslo.sphinx. I ran sudo pip install -r 
test-requirements.txt in 
cinder so that I could run the tests there, which installed oslo.sphinx.

The strange thing is that oslo.sphinx installed a directory called oslo in 
/usr/local/lib/python2.7/dist-packages with no __init__.py file. With the 
package installed like that, I get the same error you get with oslo.config.

I don't need oslo.sphinx, so I just manually deleted the oslo directory and 
the oslo.sphinx* files in /usr/local/lib/python2.7/dist-packages. Everything 
worked fine after that.

Not sure what to do about this, but that is my story
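A quick diagnostic for the situation Michael describes - a package directory that may or may not be importable, and may be missing its __init__.py - can be sketched like this (an illustrative helper, not part of any OpenStack tool):

```python
import importlib
import os


def diagnose_package(name):
    """Report whether a module imports, and if it does, whether its
    containing directory has an __init__.py (the stray 'oslo' directory
    described above did not)."""
    try:
        mod = importlib.import_module(name)
    except ImportError as exc:
        return "import failed: %s" % exc
    path = getattr(mod, "__file__", None)
    if path is None:
        return "imports, but has no __file__ (namespace package?)"
    pkg_dir = os.path.dirname(path)
    has_init = os.path.exists(os.path.join(pkg_dir, "__init__.py"))
    return "imports from %s (__init__.py present: %s)" % (pkg_dir, has_init)
```

Running it against `oslo` and `oslo.config` would show whether the namespace package is intact before resorting to deleting directories by hand.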

Michael



On Mon 23 Dec 2013 14:18:11 Sean Dague wrote:
 On 12/23/2013 11:52 AM, Ben Nemec wrote:
  On 2013-12-18 09:26, Sayali Lunkad wrote:
  Hello,
  
  I get the following error when I run stack.sh on Devstack
  
   Traceback (most recent call last):
     File "/usr/local/bin/ceilometer-dbsync", line 6, in <module>
       from ceilometer.storage import dbsync
     File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23, in <module>
       from oslo.config import cfg
   ImportError: No module named config
  ++ failed
  ++ local r=1
  +++ jobs -p
  ++ kill
  ++ set +o xtrace
  
  A search shows oslo.config is installed. Please let me know of any
  solution.
  
  Devstack pulls oslo.config from git, so if you have it installed on the
  system through pip or something it could cause problems.  If you can
  verify that it's only in /opt/stack/oslo.config, you might try deleting
  that directory and rerunning devstack to pull down a fresh copy.  I
  don't know for sure what the problem is, but those are a couple of
  things to try.
 
 We actually try to resolve that here:
 
 https://github.com/openstack-dev/devstack/blob/master/lib/oslo#L43
 
 However, have I mentioned recently how terrible Python packaging is?
 Basically you can very easily get yourself in a situation where *just
 enough* of the distro package is left behind that pip thinks it's there,
 so it won't install it, but the Python loader doesn't, so imports won't work.
 
 Then much sadness.
 
 If anyone has a more foolproof way to fix this, suggestions appreciated.
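The mismatch Sean describes - pip's metadata saying a distribution is installed while the import system can't actually load it - can be checked from the import side with a tiny helper (illustrative only; comparing its answer against `pip list` output is the manual step that exposes the inconsistency):

```python
import importlib


def importable(module_name):
    """Return True only if the import machinery can actually load the
    module, regardless of what packaging metadata claims."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False
```

If pip reports a package as installed but this returns False, you are in the "just enough left behind" state and the leftover files need to be removed by hand.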
 
   -Sean



Re: [openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-07 Thread Jay Lau
A bug was filed: https://bugs.launchpad.net/nova/+bug/1266711

Thanks,

Jay


2014/1/7 Lu, Lianhao lianhao...@intel.com

 Hi guys,

 This afternoon I suddenly find that there are quite a lot of nova py27
 unit test failures on Jenkins, like
 http://logs.openstack.org/15/62815/5/gate/gate-nova-python27/82d5d52/console.html
 .

 It seems to me that the registerCloseCallback method is not available any
 more in virConnect class. I'm not sure whether this is caused by a new
 version of libvirt python binding?

 Any comments?

 -Lianhao



Re: [openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-07 Thread Sean Dague
This looks like it's a 100% failure bug at this point. Given the timing, I 
expect it's caused by a change in the base image from a nodepool rebuild.
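One way to cope with this kind of binding difference - a sketch, not necessarily the fix nova eventually took - is to feature-detect the method rather than assume the libvirt-python binding provides it. The dummy connection classes below stand in for the two binding versions:

```python
def supports_close_callback(conn):
    """Feature-detect registerCloseCallback on a libvirt connection
    object instead of assuming the binding provides it."""
    return callable(getattr(conn, "registerCloseCallback", None))


class OldConn(object):
    """Stand-in for a binding without the method."""
    pass


class NewConn(object):
    """Stand-in for a binding that has it."""
    def registerCloseCallback(self, cb, opaque):
        self._cb = cb
```

Code that needs the close callback can then degrade gracefully on bindings that lack it, instead of failing at import or call time.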


-Sean

On 01/07/2014 06:56 AM, Jay Lau wrote:

A bug was filed: https://bugs.launchpad.net/nova/+bug/1266711

Thanks,

Jay


2014/1/7 Lu, Lianhao lianhao...@intel.com mailto:lianhao...@intel.com

Hi guys,

This afternoon I suddenly find that there are quite a lot of nova
py27 unit test failures on Jenkins, like

http://logs.openstack.org/15/62815/5/gate/gate-nova-python27/82d5d52/console.html.

It seems to me that the registerCloseCallback method is not
available any more in virConnect class. I'm not sure whether this is
caused by a new version of libvirt python binding?

Any comments?

-Lianhao





--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



Re: [openstack-dev] [Neutron] Availability of external testing logs

2014-01-07 Thread Torbjorn Tornkvist

My problem seems to be the same as the one reported here:

https://bitbucket.org/pypa/setuptools/issue/129/assertionerror-egg-info-pkg-info-is-not-a

I'm not quite sure, however, how to bring the fix into my setup.

Cheers, Tobbe

On 2014-01-07 10:38, Torbjorn Tornkvist wrote:

Hi,

Sorry for the problems.
I've missed some direct mails to me (I'm drowning in OpenStack mail...)
I will make sure our Jenkins setup won't be left unattended in the future.

How can I remove those '-1' votes?

It seems that, starting from Jan 2, 2014 5:46:26 PM, after change:
https://review.openstack.org/#/c/64696/

something happened that makes my tox run crash with a traceback.
I'll include the traceback below in case someone can offer some help.
(I'm afraid I don't know anything about Python...)
---
vagrant@quantal64:~/neutron$ sudo tox -e py27 -r -- 
neutron.tests.unit.ml2.test_mechanism_ncs

GLOB sdist-make: /home/vagrant/neutron/setup.py
py27 create: /home/vagrant/neutron/.tox/py27
ERROR: invocation failed, logfile: 
/home/vagrant/neutron/.tox/py27/log/py27-0.log

ERROR: actionid=py27
msg=getenv
cmdargs=['/usr/bin/python2.7', 
'/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv.py', 
'--setuptools', '--python', '/usr/bin/python2.7', 'py27']
env={'LC_NUMERIC': 'sv_SE.UTF-8', 'LOGNAME': 'root', 'USER': 'root', 
'HOME': '/home/vagrant', 'LC_PAPER': 'sv_SE.UTF-8', 'PATH': 
'/home/vagrant/neutron/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 
'DISPLAY': 'localhost:10.0', 'LANG': 'en_US.utf8', 'TERM': 
'xterm-256color', 'SHELL': '/bin/bash', 'LANGUAGE': 'en_US:', 
'LC_MEASUREMENT': 'sv_SE.UTF-8', 'SUDO_USER': 'vagrant', 'USERNAME': 
'root', 'LC_IDENTIFICATION': 'sv_SE.UTF-8', 'LC_ADDRESS': 
'sv_SE.UTF-8', 'SUDO_UID': '1000', 'VIRTUAL_ENV': 
'/home/vagrant/neutron/.tox/py27', 'SUDO_COMMAND': '/usr/local/bin/tox 
-e py27 -r -- neutron.tests.unit.ml2.test_mechanism_ncs', 'SUDO_GID': 
'1000', 'LC_TELEPHONE': 'sv_SE.UTF-8', 'LC_MONETARY': 'sv_SE.UTF-8', 
'LC_NAME': 'sv_SE.UTF-8', 'MAIL': '/var/mail/root', 'LC_TIME': 
'sv_SE.UTF-8', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01 
;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}

Already using interpreter /usr/bin/python2.7
New python executable in py27/bin/python2.7
Also creating executable in py27/bin/python
Installing setuptools, pip...
  Complete output from command 
/home/vagrant/neutron/.tox/py27/bin/python2.7 -c import sys, pip; 
pip...ll\] + sys.argv[1:]) setuptools pip:

  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module>
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry
    File "/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", 

Re: [openstack-dev] [Nova][BluePrint Register]Shrink the volume when file in the instance was deleted.

2014-01-07 Thread Qixiaozhen
 On 25 December 2013 05:14, Qixiaozhen qixiaoz...@huawei.com wrote:
  Hi,all
 
  A blueprint has been registered about shrinking volumes under thin
  provisioning.
 
 Have you got the link?

The address is 
https://blueprints.launchpad.net/nova/+spec/shrink-volume-in-thin-provisoning 

 
  Thin provisioning means disk space is allocated only when the instance
  first writes data to an area of the volume.
 
  However, if files in the instance are deleted, thin provisioning cannot
  handle this situation: the space those files occupied is not released.
 
  So it is necessary to shrink the volume when the files are deleted in
  the instance.
 
 In this case the user will probably need to zero out the free space of your
 filesystem too, in some cases, unless nova does that for them, which sounds a
 bit dodgy.

It seems better to fill the free space of the instance's filesystem with 
zeros while in offline mode.

  The operation of shrinking can be manually executed by the user with
  the web portal or CLI command or periodically in the background.
 
 I wondered about an optimise disk call.

Agree with this. This shrinking operation can be an option.

 
 A few thoughts:
 * I am not sure it can always be done online for all drivers, may need an
 offline mode

Online shrinking: shrinking the volume online would need an additional driver. 
If the guest OS of the instance is Linux, an agent named 'qemu-guest-agent' 
can be installed inside the guest, and the command 'fstrim' is called.

More information :  
http://dustymabe.com/2013/06/11/recover-space-from-vm-disk-images-by-using-discardfstrim/

 * Similar operations have ways of confirming and reverting to protect against
 dataloss
 * Ideally keep all operations on the virtual disk, and no operations on its
 content
 * With chains of disks, you may want to simplify the chain too (where it makes
 sense)
 
 John
 


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-07 Thread Doug Hellmann
On Mon, Jan 6, 2014 at 6:21 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-01-06 17:23:31 -0500 (-0500), Doug Hellmann wrote:
 [...]
  The global requirements syncing seems to have fixed the issue for
  apps, although it just occurred to me that I'm not sure we check
  that the requirements lists are the same when we cut a release.
  Do we do that already?

 Not yet in any automated fashion but keep in mind that we've only
 gotten the requirements update proposal job working reliably this
 cycle, so it could still take some time for the various projects to
 decide how to finish syncing up.


Makes sense. I was just thinking about some of the assumptions we're making
elsewhere in this thread w.r.t. syncing requirements.

Doug



 --
 Jeremy Stanley



Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-07 Thread Doug Hellmann
On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Doug,

 Thank you for pointing to this code. As I see, you use the OpenStack policy
 framework but not Pecan's security features. How do you implement
 fine-grained access control, e.g. users who are allowed to read only vs.
 writers and admins? Can you block some API methods for a specific user,
 such as access to create methods for a specific user role?


The policy enforcement isn't simple on/off switching in ceilometer, so
we're using the policy framework calls in a couple of places within our API
code (look through v2.py for examples). As a result, we didn't need to
build much on top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators
to combine with pecan's hook framework to enforce the policy, which might
be less complex than trying to match the operating model of the policy
system to pecan's security framework.
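The decorator approach suggested here can be sketched generically. The names below (the rule string, the enforcer callable, the credentials getter) are all illustrative, not oslo's or pecan's real API - in practice the enforcer would be oslo's policy module and the credentials would come from the keystone middleware's request context:

```python
import functools


class PolicyNotAuthorized(Exception):
    """Raised when the policy check denies the request."""


def enforce(rule, enforcer, get_credentials):
    """Check a policy rule against the request's credentials before
    running the controller method, instead of relying on pecan's
    SecureController."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            creds = get_credentials()
            if not enforcer(rule, creds):
                raise PolicyNotAuthorized(rule)
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

A pecan hook (or WSGI middleware) could translate `PolicyNotAuthorized` into an HTTP 403, keeping the controllers free of error-handling boilerplate.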

This is the sort of thing that should probably go through Oslo and be
shared, so please consider contributing to the incubator when you have
something working.

Doug




 Thanks
 Georgy


 On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann doug.hellm...@dreamhost.com
  wrote:




 On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 Hi,

 In Solum project we will need to implement security and ACL for Solum
 API. Currently we use Pecan framework for API. Pecan has its own security
 model based on SecureController class. At the same time OpenStack widely
 uses policy mechanism which uses json files to control access to specific
 API methods.

 I wonder if someone has any experience with implementing security and
 ACL stuff with using Pecan framework. What is the right way to provide
 security for API?


  In ceilometer we are using the keystone middleware and the policy
 framework to manage arguments that constrain the queries handled by the
 storage layer.


 http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

 and


 http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

 Doug




 Thanks
 Georgy





 --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284



[openstack-dev] [nova][neutron][ipv6]Hairpinning in libvirt, once more

2014-01-07 Thread Ian Wells
See Sean Collins' review https://review.openstack.org/#/c/56381 which
disables hairpinning when Neutron is in use.  tl;dr - please upvote the
review.  Long form reasoning follows...

There's a solid logical reason for enabling hairpinning, but it only
applies to nova-network.  Hairpinning is used in nova-network so that
packets from a machine and destined for that same machine's floating IP
address are returned to it.  They then pass through the rewrite rules
(within the libvirt filters on the instance's tap interface) that do the
static NAT mapping to translate floating IP to fixed IP.

Whoever implemented this assumed that hairpinning in other situations is
harmless.  However, this same feature also prevents IPv6 from working -
returned neighbor discovery packets panic VMs into thinking they're using a
duplicate address on the network.  So we'd like to turn it off.  Accepting
that nova-network will change behaviour comprehensively if we just remove
the code, we've elected to turn it off only when Neutron is being used and
leave nova-network broken for ipv6.

Obviously, this presents an issue, because we're changing the way that
OpenStack behaves in a user-visible way - hairpinning may not be necessary
or desirable for Neutron, but it's still detectable when it's on or off if
you try hard enough - so the review comments to date have been
conservatively suggesting that we avoid the functional change as much as
possible, and there's a downvote to that end.  But having done more
investigation I don't think there's sufficient justification to keep the
status quo.

We've also talked about leaving hairpinning off if and only if the Neutron
plugin explicitly says that it doesn't want to use hairpinning.  We can
certainly do this, and I've looked into it, but in practice it's not worth
the code and interface changes:

 - Neutron (not 'some drivers' - this is consistent across all of them)
does NAT rewriting in the routers now, not on the ports, so hairpinning
doesn't serve its intended purpose; what it actually does is waste CPU and
bandwidth by reflecting a copy of every outgoing packet back to the
instance, and precious little else. The instance doesn't expect these
packets and always ignores them, but it receives them anyway. It's a
pointless no-op, though there exists the theoretical possibility that
someone is relying on it for their application.
 - it's *only* libvirt that ever turns hairpinning on in the first place -
none of the other drivers do it
 - libvirt only turns it on sometimes - for hybrid VIFs it's enabled, and if
generic VIFs are configured and linuxbridge is in use it's enabled, but if
generic VIFs are configured and OVS is in use then the enable function fails
silently (and, indeed, has been designed to fail silently, it seems).
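For readers unfamiliar with the mechanism, the decision the patch makes, and the kernel knob involved, can be sketched roughly like this. This is a Python sketch with made-up helper names - it mirrors the logic, it is not nova's actual code; the sysfs path is the kernel's per-bridge-port hairpin attribute that libvirt's vif driver writes to:

```python
# Hypothetical helpers mirroring the patch's decision; not nova's real API.

def hairpin_sysfs_path(bridge, tap):
    # Writing "1" here makes the bridge reflect a port's own frames back
    # out of the same port; "0" disables that reflection.
    return "/sys/class/net/%s/brif/%s/hairpin_mode" % (bridge, tap)

def should_enable_hairpin(using_neutron):
    # nova-network needs the reflection so a VM's traffic to its own
    # floating IP re-traverses the NAT rules on its tap device; Neutron
    # does NAT in the router, so the reflection is pure waste.
    return not using_neutron

assert should_enable_hairpin(using_neutron=False) is True   # nova-network
assert should_enable_hairpin(using_neutron=True) is False   # Neutron
print(hairpin_sysfs_path("br100", "tap0"))
```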

Given these details, there seems little point in making the code more
complex to support a feature that isn't universal and isn't needed; better
that we just disable it for Neutron and be done.  So (and test failures
aside) could I ask that the core devs check and approve the patch review?
-- 
Ian.


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2014-01-07 Thread Mathieu Rohon
I think that Isaku is talking about a more intensive usage of
defer_apply_on/off as it is done in the patch of gongysh [1].

Isaku, I don't see any reason why this could not be done in
process_network_ports, if needed. Moreover the patch from edouard [2]
resolves multithreading issues while processing defer_apply_off.


[1]https://review.openstack.org/#/c/61341/
[2]https://review.openstack.org/#/c/63917/

On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando sorla...@nicira.com wrote:
 This thread is starting to get a bit confusing, at least for people with a
 single-pipeline brain like me!

 I am not entirely sure if I understand correctly Isaku's proposal concerning
 deferring the application of flow changes.
 I think it's worth discussing in a separate thread, and a supporting patch
 will help as well; I think that in order to avoid unexpected behaviours,
 vlan tagging on the port and flow setup should always be performed at the
 same time; if we get much better performance using a mechanism similar to
 iptables' defer_apply, then we should use it.

 Regarding rootwrap. This 6x slowdown, while proving that rootwrap imposes a
 non-negligible overhead, should not be used as proof that rootwrap makes
 things 6 times worse! What I've been seeing on the gate and in my tests are
 ALRM_CLOCK errors raised by ovs commands, so rootwrap has little to do with
 it.

 Still, I think we can say that rootwrap adds about 50ms to each command,
 which is particularly penalising for 'fast' commands.
 I think the best thing to do, as Joe advises, is a test with rootwrap
 disabled on the gate - and I will take care of that.

 On the other hand, I would invite community members to pick up some of the
 bugs we've registered for 'less frequent' failures observed during parallel
 testing - especially if you're coming to Montreal next week.

 Salvatore



 On 6 January 2014 20:31, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-01-06 at 11:17 -0800, Joe Gordon wrote:
 
 
 
  On Mon, Jan 6, 2014 at 10:35 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Mon, 2014-01-06 at 09:56 -0800, Joe Gordon wrote:
 
   What about it? Also those numbers are pretty old at this
  point. I was
   thinking disable rootwrap and run full parallel tempest
  against it.
 
 
  I think that is a little overkill for what we're trying to do
  here. We
  are specifically talking about combining many utils.execute()
  calls into
  a single one. I think it's pretty obvious that the latter will
  be better
  performing than the first, unless you think that rootwrap has
  no
  performance overhead at all?
 
 
   mocking out rootwrap with straight sudo is a very quick way to
   approximate the performance benefit of combining many utils.execute()
   calls together (at least rootwrap wise). Also it would tell us how
   much of the problem is rootwrap induced and how much is other.

 Yes, I understand that, which is what the article I linked earlier
 showed?

  % time sudo ip link > /dev/null
  sudo ip link > /dev/null  0.00s user 0.00s system 43% cpu 0.009 total
  % sudo time quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null
  quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null  0.04s
  user 0.02s system 87% cpu 0.059 total

 A very tiny, non-scientific simple indication that rootwrap is around 6
 times slower than a simple sudo call.
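 A crude way to reproduce this kind of per-call overhead measurement without
 needing sudo or rootwrap at all is to time process spawns directly. In the
 sketch below the second command's module imports merely stand in for
 rootwrap's interpreter start-up cost - it is an illustration of the
 measurement technique, not the real quantum-rootwrap binary:

```python
import subprocess
import sys
import time

def avg_seconds(argv, runs=5):
    """Average wall-clock time of spawning argv; crude, but per-call
    overhead is exactly what a wrapper like rootwrap multiplies."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(argv, check=True, stdout=subprocess.DEVNULL)
    return (time.perf_counter() - start) / runs

# A bare no-op interpreter vs. one that also pays some import cost,
# standing in for a wrapper's start-up work.
bare = avg_seconds([sys.executable, "-c", "pass"])
wrapped = avg_seconds([sys.executable, "-c", "import logging, re, json"])
print("bare: %.4fs  wrapped: %.4fs per call" % (bare, wrapped))
```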

 Best,
 -jay









Re: [openstack-dev] olso.config error on running Devstack

2014-01-07 Thread Doug Hellmann
On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin michael.ker...@hp.com wrote:

  I have been seeing this problem also.



 My problem is actually with oslo.sphinx. I ran sudo pip install -r
 test-requirements.txt in cinder so that I could run the tests there, which
 installed oslo.sphinx.



 Strange thing is that oslo.sphinx installed a directory called oslo in
 /usr/local/lib/python2.7/dist-packages with no __init__.py file. With this
 package installed like so, I get the same error you get with oslo.config.


The oslo libraries use python namespace packages, which manifest
themselves as a directory in site-packages (or dist-packages) with
sub-packages but no __init__.py(c). That way oslo.sphinx and oslo.config
can be packaged separately, but still installed under the oslo directory
and imported as oslo.sphinx and oslo.config.

My guess is that installing oslo.sphinx globally (with sudo) set up two
copies of the namespace package (one in the global dist-packages and
presumably one in the virtualenv being used for the tests).
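The namespace-package behaviour described here can be reproduced in miniature. The sketch below uses Python 3's implicit namespace packages (PEP 420) and a made-up top-level name, demo_ns, so it doesn't collide with a real oslo install; the 2014-era oslo libraries used the older pkg_resources-style namespaces, but the visible effect - one logical package spread across several directories, none of them owning __init__.py - is the same:

```python
import pathlib
import sys
import tempfile

# Build two separate install roots, each shipping one sub-package under a
# shared top-level "demo_ns" directory that has no __init__.py.
roots = []
for sub in ("config", "sphinx"):
    root = pathlib.Path(tempfile.mkdtemp())
    pkg = root / "demo_ns" / sub
    pkg.mkdir(parents=True)              # note: no demo_ns/__init__.py
    (pkg / "__init__.py").write_text("NAME = %r\n" % sub)
    roots.append(str(root))

sys.path[:0] = roots

import demo_ns.config                    # resolves from the first root
import demo_ns.sphinx                    # resolves from the second root

assert demo_ns.config.NAME == "config"
assert demo_ns.sphinx.NAME == "sphinx"
# One logical namespace, merged from two directories on sys.path:
assert len(list(demo_ns.__path__)) == 2
```

Which also illustrates the failure mode from this thread: if one copy of the shared directory is broken or duplicated in another sys.path entry, the sub-packages can stop resolving even though "the package is installed".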

Doug





 I don't need oslo.sphinx so I just went and manually deleted the oslo
 directory and the oslo.sphinx* files in
 /usr/local/lib/python2.7/dist-packages. Everything worked fine after that.



 Not sure what to do about this, but that is my story



 Michael







 On Mon 23 Dec 2013 14:18:11 Sean Dague wrote:

  On 12/23/2013 11:52 AM, Ben Nemec wrote:

   On 2013-12-18 09:26, Sayali Lunkad wrote:
    Hello,

    I get the following error when I run stack.sh on Devstack

    Traceback (most recent call last):
      File /usr/local/bin/ceilometer-dbsync, line 6, in <module>
        from ceilometer.storage import dbsync
      File /opt/stack/ceilometer/ceilometer/storage/__init__.py, line
        23, in <module>
        from oslo.config import cfg
    ImportError: No module named config
    ++ failed
    ++ local r=1
    +++ jobs -p
    ++ kill
    ++ set +o xtrace

    Search gives me olso.config is installed. Please let me know of any
    solution.

   Devstack pulls oslo.config from git, so if you have it installed on the
   system through pip or something it could cause problems. If you can
   verify that it's only in /opt/stack/oslo.config, you might try deleting
   that directory and rerunning devstack to pull down a fresh copy. I
   don't know for sure what the problem is, but those are a couple of
   things to try.

 

  We actually try to resolve that here:

  https://github.com/openstack-dev/devstack/blob/master/lib/oslo#L43

  However, have I said how terrible python packaging is recently?
  Basically you can very easily get yourself in a situation where *just
  enough* of the distro package is left behind that pip thinks it's there,
  so it won't install it, but the python loader doesn't, so it won't work.

  Then much sadness.

  If anyone has a more fool proof way to fix this, suggestions appreciated.

  -Sean
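One crude sanity check along these lines is to compare what pip's metadata claims with what the import system can actually load; a half-removed package is exactly a mismatch between the two. This is an illustrative Python 3.8+ sketch using importlib.metadata, not a proposed devstack fix:

```python
import importlib
import importlib.metadata

def pip_vs_loader(dist, module=None):
    """Return (metadata_version, importable) for a distribution.

    "pip thinks it's there but the loader can't import it" shows up as
    (some_version, False); the inverse leftover shows up as (None, True).
    """
    try:
        version = importlib.metadata.version(dist)
    except importlib.metadata.PackageNotFoundError:
        version = None
    try:
        importlib.import_module(module or dist)
        importable = True
    except ImportError:
        importable = False
    return version, importable

# Nothing installed and nothing importable:
assert pip_vs_loader("surely-not-a-real-dist-xyz") == (None, False)
# Importable module with no pip metadata (the inverse mismatch):
assert pip_vs_loader("surely-not-a-real-dist-xyz", "json") == (None, True)
```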







[openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Gary Kotton
Hi,
Anyone aware of the following:


2014-01-07 11:59:47.428 | Requirement already satisfied (use --upgrade to upgrade): markupsafe in ./.tox/py27/lib/python2.7/site-packages (from Jinja2=2.3-sphinx=1.1.2,1.2)
2014-01-07 11:59:47.429 | Cleaning up...
2014-01-07 12:01:32.134 | Unimplemented block at ../../relaxng.c:3824
2014-01-07 12:01:33.893 | Unimplemented block at ../../relaxng.c:3824
2014-01-07 12:10:25.292 | libvirt:  error : internal error: could not initialize domain event timer
2014-01-07 12:11:32.783 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2014-01-07 12:11:32.783 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2014-01-07 12:11:32.783 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
2014-01-07 12:11:32.784 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --list
2014-01-07 12:11:32.784 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2014-01-07 12:11:32.784 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2014-01-07 12:11:32.784 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
2014-01-07 12:11:32.784 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list /tmp/tmp.pS70eSqBIL/tmpZV93Uv
2014-01-07 12:11:32.784 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2014-01-07 12:11:32.785 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2014-01-07 12:11:32.785 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
2014-01-07 12:11:32.785 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list /tmp/tmp.pS70eSqBIL/tmpC2pLuK
2014-01-07 12:11:32.785 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2014-01-07 12:11:32.785 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2014-01-07 12:11:32.785 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
2014-01-07 12:11:32.785 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list /tmp/tmp.pS70eSqBIL/tmpd1ZnJj
2014-01-07 12:11:32.785 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2014-01-07 12:11:32.786 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2014-01-07 12:11:32.786 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
2014-01-07 12:11:32.786 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list /tmp/tmp.pS70eSqBIL/tmpRB7C09

(Full console log: http://logs.openstack.org/10/60010/2/check/gate-nova-python27/ebd53ea/console.html)

Re: [openstack-dev] [Neutron][qa] Parallel testing update

2014-01-07 Thread Salvatore Orlando
Thanks Mathieu!

I think we should first merge Edouard's patch, which appears to be a
prerequisite.
I think we could benefit a lot by applying this mechanism to
process_network_ports.

However, I am not sure whether there could be drawbacks arising from the
fact that the agent would assign the local VLAN tag to the port (either the
lvm id or the DEAD_VLAN tag) and only then, at the end of the iteration,
apply the flow modifications, such as the drop-all rule.
This would probably create a short window in which we might see unexpected
behaviours (such as VMs on the dead VLAN being able to communicate with
each other, for instance).

I think we can generalize this discussion and use deferred application for
ovs-vsctl as well.
Would you agree with that?

Thanks,
Salvatore
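As a rough illustration of the deferred-apply idea under discussion - queue the flow modifications and flush them to the switch in one batch at the end of the iteration - here is a toy Python sketch. The class and method names are hypothetical; the real agent wraps ovs_lib and subprocess calls, not an in-memory list:

```python
from contextlib import contextmanager

class DeferredOVSBridge:
    """Toy model: queue flow mods and flush them in one batch,
    instead of paying one subprocess (and rootwrap) call per flow."""

    def __init__(self):
        self.applied = []            # stands in for flows actually in OVS
        self._pending = None

    @contextmanager
    def deferred(self):
        self._pending = []
        try:
            yield self
        finally:
            batch, self._pending = self._pending, None
            self.applied.extend(batch)   # one flush == one external call

    def add_flow(self, flow):
        if self._pending is not None:
            self._pending.append(flow)   # deferred mode: just queue it
        else:
            self.applied.append(flow)    # immediate mode

br = DeferredOVSBridge()
with br.deferred():
    br.add_flow("priority=2,in_port=1,actions=normal")
    br.add_flow("priority=0,actions=drop")
    assert br.applied == []              # nothing hits the switch mid-batch
assert len(br.applied) == 2              # single flush at the end
```

Note that inside the batch nothing has reached the switch yet, which is precisely the window of unexpected behaviour raised in the thread: a port could already carry its VLAN tag while the drop rule is still queued.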


On 7 January 2014 14:08, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 I think that Isaku is talking about a more intensive usage of
 defer_apply_on/off as it is done in the patch of gongysh [1].

 Isaku, I don't see any reason why this could not be done in
 process_network_ports, if needed. Moreover the patch from edouard [2]
 resolves multithreading issues while processing defer_apply_off.


 [1]https://review.openstack.org/#/c/61341/
 [2]https://review.openstack.org/#/c/63917/

 On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando sorla...@nicira.com
 wrote:
  This thread is starting to get a bit confusing, at least for people with
 a
  single-pipeline brain like me!
 
  I am not entirely sure if I understand correctly Isaku's proposal
 concerning
  deferring the application of flow changes.
  I think it's worth discussing in a separate thread, and a supporting
 patch
  will help as well; I think that in order to avoid unexpected behaviours,
  vlan tagging on the port and flow setup should always be performed at the
  same time; if we get a much better performance using a mechanism similar
 to
  iptables' defer_apply, then we should use it.
 
  Regarding rootwrap. This 6x slowdown, while proving that rootwrap
 imposes a
  non-negligible overhead, it should not be used as a sort of proof that
  rootwrap makes things 6 times worse! What I've been seeing on the gate
 and
  in my tests are ALRM_CLOCK errors raised by ovs commands, so rootwrap has
  little to do with it.
 
  Still, I think we can say that rootwrap adds about 50ms to each command,
  becoming particularly penalising especially for 'fast' commands.
  I think the best thing to do, as Joe advises, is a test with rootwrap
 disabled
  on the gate - and I will take care of that.
 
  On the other hand, I would invite community members to pick up some of
 the
  bugs we've registered for 'less frequent' failures observed during
 parallel
  testing; especially if you're coming to Montreal next week.
 
  Salvatore
 
 
 
  On 6 January 2014 20:31, Jay Pipes jaypi...@gmail.com wrote:
 
  On Mon, 2014-01-06 at 11:17 -0800, Joe Gordon wrote:
  
  
  
   On Mon, Jan 6, 2014 at 10:35 AM, Jay Pipes jaypi...@gmail.com
 wrote:
   On Mon, 2014-01-06 at 09:56 -0800, Joe Gordon wrote:
  
What about it? Also those numbers are pretty old at this
   point. I was
thinking disable rootwrap and run full parallel tempest
   against it.
  
  
   I think that is a little overkill for what we're trying to do
   here. We
   are specifically talking about combining many utils.execute()
   calls into
   a single one. I think it's pretty obvious that the latter will
   be better
   performing than the first, unless you think that rootwrap has
   no
   performance overhead at all?
  
  
    mocking out rootwrap with straight sudo is a very quick way to
    approximate the performance benefit of combining many utils.execute()
    calls together (at least rootwrap wise). Also it would tell us how
    much of the problem is rootwrap induced and how much is other.
 
  Yes, I understand that, which is what the article I linked earlier
  showed?
 
  % time sudo ip link /dev/null
  sudo ip link  /dev/null  0.00s user 0.00s system 43% cpu 0.009 total
  % sudo time quantum-rootwrap /etc/quantum/rootwrap.conf ip link
   /dev/null
  quantum-rootwrap /etc/quantum/rootwrap.conf ip link   /dev/null  0.04s
  user 0.02s system 87% cpu 0.059 total
 
  A very tiny, non-scientific simple indication that rootwrap is around 6
  times slower than a simple sudo call.
 
  Best,
  -jay
 
 
 
 
 
 



Re: [openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Sean Dague
Yes, please see the email thread "nova py27 unit test failures in
libvirt" from 4 hrs ago. :)

-Sean

On 01/07/2014 08:28 AM, Gary Kotton wrote:
 Hi,
 Anyone aware of the following:
 
 2014-01-07 12:10:25.292 | libvirt:  error : internal error: could not 
 initialize domain event timer
 [...]

Re: [openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Kenichi Oomichi

Hi Gary,

This problem is https://bugs.launchpad.net/nova/+bug/1266711 .
The thread is 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023575.html .


Thanks
Ken'ichi Ohmichi
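One defensive pattern for this class of breakage is to guard the call instead of assuming the binding exposes it, since registerCloseCallback is simply absent from virConnect in some libvirt-python versions. The following is a minimal sketch of that guard, not nova's actual fix; FakeVirConnect stands in for libvirt's virConnect class:

```python
class FakeVirConnect:
    """Stand-in for an older libvirt.virConnect: note the missing
    registerCloseCallback attribute, the failure seen in the gate."""

class FakeVirConnectNew:
    """Stand-in for a binding that does expose the method."""
    def registerCloseCallback(self, cb, opaque):
        self.cb = cb

def register_close_callback(conn, cb):
    """Register a close callback only when the binding supports it,
    degrading gracefully instead of raising AttributeError."""
    register = getattr(conn, "registerCloseCallback", None)
    if register is None:
        return False                       # binding lacks the method
    register(cb, None)
    return True

assert register_close_callback(FakeVirConnect(), lambda *a: None) is False
assert register_close_callback(FakeVirConnectNew(), lambda *a: None) is True
```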

---

 -Original Message-
 From: Gary Kotton [mailto:gkot...@vmware.com]
 Sent: Tuesday, January 07, 2014 10:29 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova] libvirt unit test errors
 
 Hi,
 Anyone aware of the following:
 
  2014-01-07 12:10:25.292 | libvirt:  error : internal error: could not 
  initialize domain event timer
  [...]
 2014-01-07 12:11:32.786 | 
 ==
 2014-01-07 12:11:32.786 | FAIL:
 nova.tests.virt.libvirt.test_libvirt.LibvirtNonblockingTestCase.test_connection_to_primitive
 2014-01-07 12:11:32.786 | tags: worker-2
 2014-01-07 12:11:32.786 | 
 --
 2014-01-07 12:11:32.787 | Empty attachments:
 2014-01-07 12:11:32.787 |   stderr
 2014-01-07 12:11:32.787 |   stdout
 2014-01-07 12:11:32.787 |
 2014-01-07 12:11:32.787 | pythonlogging:'': {{{WARNING 
 [nova.virt.libvirt.driver] URI test:///default does not support
 events: internal error: could not initialize domain event timer}}}
 2014-01-07 12:11:32.787 |
 2014-01-07 12:11:32.787 | Traceback (most recent call last):
 2014-01-07 12:11:32.788 |   File nova/tests/virt/libvirt/test_libvirt.py, 
 line 7570, in test_connection_to_primitive
 2014-01-07 12:11:32.788 | jsonutils.to_primitive(connection._conn, 
 convert_instances=True)
 2014-01-07 12:11:32.788 |   File nova/virt/libvirt/driver.py, line 666, in 
 _get_connection
 2014-01-07 12:11:32.788 | wrapped_conn = self._get_new_connection()
 2014-01-07 12:11:32.788 |   File nova/virt/libvirt/driver.py, line 652, in 
 _get_new_connection
 2014-01-07 12:11:32.788 | wrapped_conn.registerCloseCallback(
 2014-01-07 12:11:32.788 |   File
 /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/tpool.py,
  line 172,
 in __getattr__
 2014-01-07 12:11:32.789 | f = getattr(self._obj,attr_name)
 2014-01-07 12:11:32.789 | AttributeError: virConnect instance has no 
 attribute 'registerCloseCallback'
 
 Thanks
 Gary



Re: [openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Jay Lau
Gary,

Please search for the email with the title "[openstack-dev] [nova][infra]
nova py27 unit test failures in libvirt"; a bug has been filed for this:
https://bugs.launchpad.net/nova/+bug/1266711

Thanks,

Jay



2014/1/7 Gary Kotton gkot...@vmware.com

 Hi,
 Anyone aware of the following:

 2014-01-07 12:10:25.292 | libvirt:  error : internal error: could not 
 initialize domain event timer
 [...]

Re: [openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Chmouel Boudjnah
On Tue, Jan 7, 2014 at 2:28 PM, Gary Kotton gkot...@vmware.com wrote:

 2014-01-07 11:59:47.428 http://logs.openstack.org/10/60010/2/check/gate-nova-python27/ebd53ea/console.html#_2014-01-07_11_59_47_428 |
  Requirement already satisfied (use --upgrade to upgrade): markupsafe in
  ./.tox/py27/lib/python2.7/site-packages (from Jinja2>=2.3->sphinx>=1.1.2,<1.2)


This is being worked on https://bugs.launchpad.net/nova/+bug/1266711


Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] where to expose network quota

2014-01-07 Thread Christopher Yeoh
On Mon, Jan 6, 2014 at 4:47 PM, Yaguang Tang yaguang.t...@canonical.comwrote:

 Hi all,

 Now Neutron has its own quota management API for network related
 items (floating IPs, security groups, etc.) which are also managed by Nova.
 When using nova with neutron as the network service, the network related quota
 items are stored in two different databases and managed by different APIs.

 I'd like your suggestions on which of the following is best to fix the
 issue.

 1, let nova proxy all network related quota info operations (update,
 list, delete) through the neutron API.

 2, filter network related quota info from nova when using neutron as
 network service, and change
 novaclient to get quota info from nova and neutron quota API.


For the V3 API clients should access neutron directly for quota
information. The V3 API will no longer proxy quota related information for
neutron. Also novaclient will not get the quota information from neutron,
but users should use neutronclient or python-openstackclient instead.

The V3 API mode for novaclient will only be accessing Nova - with one big
exception for querying glance
so images can be specified by name. Longer term, I think we need to
think about how we share client code amongst clients, because there will
be more cases where it's useful to access other servers so that things can
be specified by name rather than UUID, but we don't want to duplicate code
in the clients.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
I got a -1 on a review for a standards violation that isn't caught by the 
automated checks, so I was wondering why the automated check doesn't catch it.  
The violation was:

from X import Y, Z

According to the coding standards page on the openstack wiki, the coding 
standards are PEP8 (they just link to the PEP8 docs): 
https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says this 
format is allowed.

It was pointed out that there's an additional wiki page I had missed, 
http://docs.openstack.org/developer/hacking/ which specifies this rule.  So now 
that I see it is a rule, it comes back to my original question, why is it not 
enforced by the checker?  Apparently there's not a flake8 rule for this either 
https://flake8.readthedocs.org/en/2.0/warnings.html

So, two questions:

1. Is this really the rule or just a vaguely worded repeat of the PEP8 rule 
about import X, Y?
2. If it is the rule, what's involved in getting the pep8 tests to check for it?

My own personal frustration aside, this would be helpful for other newcomers, I 
imagine.  We have some pretty rigid and extensive coding standards, so it's not 
reasonable to expect new contributors to remember them all.  It's also much 
nicer to have an automated tool tell you you've violated a coding standard than 
to think you were OK and then have your code rejected two weeks later because of 
it.

Thanks,
Greg

P.S. I can fix the wiki to point to the right page after the discussion.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-07 Thread Thierry Carrez
Day, Phil wrote:
 Would be nice in this specific example though if the actual upgrade impact 
 was explicitly called out in the commit message.

Yes, UpgradeImpact should definitely also elaborate on the exact impact,
rather than expect the reviewer to deduce it from the patch.

 [...]
 So it looks as if UpgradeImpact is really a warning of some change that needs 
 to be considered at some point, but doesn't break a running system just by 
 incorporating this change (since the deprecated names are still supported) - 
 but the subsequent change that will eventually remove the deprecated names is 
 the thing that is the actual upgrade impact (in that that once that change is 
 incorporated the system will be broken if some extra action isn't taken).
 Would both of those changes be tagged as UpgradeImpact ?  Should we make some 
 distinction between these two cases ? 

This is a bit of a corner case (deprecated options), but I feel that
UpgradeImpact is warranted in that case. It's good for people who care
about upgrades to be warned of deprecation *and* removal, even if
deprecation technically does not trigger an upgrade issue (yet).

I don't think we need to distinguish between the two cases. Both are of
interest to the same population.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Several topics for tomorrow's meeting

2014-01-07 Thread Collins, Sean
I will make sure to add these items to the agenda.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] coding standards question

2014-01-07 Thread Sean Dague

On 01/07/2014 09:26 AM, Greg Hill wrote:

I got a -1 on a review for a standards violation that isn't caught by
the automated checks, so I was wondering why the automated check doesn't
catch it.  The violation was:

from X import Y, Z

According to the coding standards page on the openstack wiki, the coding
standards are PEP8 (they just link to the PEP8 docs):
https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says
this format is allowed.

It was pointed out that there's an additional wiki page I had missed,
http://docs.openstack.org/developer/hacking/ which specifies this rule.
  So now that I see it is a rule, it comes back to my original question,
why is it not enforced by the checker?  Apparently there's not a flake8
rule for this either https://flake8.readthedocs.org/en/2.0/warnings.html

So, two questions:

1. Is this really the rule or just a vaguely worded repeat of the PEP8
rule about import X, Y?
2. If it is the rule, what's involved in getting the pep8 tests to check
for it?


Writing the hacking test to support it - 
https://github.com/openstack-dev/hacking


The policy leads the automatic enforcement scripts, so there are plenty 
of rules in the policy that are not in automatic enforcement. 
Contributions to fill in the gaps are welcomed.
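For reference, a hacking check is just a generator that flake8 calls for each logical line, yielding (offset, message) pairs. A minimal sketch of a check for this import rule might look like the following — the H301 code matches hacking's numbering for "one import per line", but the function name and regex here are illustrative, not hacking's actual implementation:

```python
import re

# Matches "from X import Y, Z" (a comma after the import list).
MULTI_IMPORT = re.compile(r"^\s*from\s+\S+\s+import\s+.*,")


def check_one_import_per_line(logical_line):
    """H301: from X import Y, Z is disallowed; import each name separately."""
    if MULTI_IMPORT.match(logical_line):
        yield 0, "H301: one import per line"


# flake8 iterates such checks over every logical line and turns each
# yielded (offset, message) pair into a reported warning.
for warning in check_one_import_per_line("from os import path, sep"):
    print(warning)
```

Registering the check with flake8 (entry points in setup.cfg, in hacking's case) is what makes it run as part of the normal pep8/tox job.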



My own personal frustration aside, this would be helpful for other
newcomers I imagine.  We have some pretty rigid and extensive coding
standards, so its not reasonable to expect new contributors to remember
them all.  It's also much nicer to have an automated tool tell you you
violated some coding standard than to think you were ok and then have
your code rejected 2 weeks later because of it.

Thanks,
Greg

P.S. I can fix the wiki to point to the right page after the discussion.


Agreed, it's all about bandwidth. Contributors on hacking to help fill 
it out are appreciated. Right now it's mostly just Joe with a few others 
throwing in when they can.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] NVP patch breaks Neutron install on Windows

2014-01-07 Thread Alessandro Pilotti
Hi guys,

This patch breaks “setup.py install” on Windows due to the usage of symbolic 
links: https://review.openstack.org/#/c/64747/

error: can't copy 'etc\neutron\plugins\nicira\nvp.ini': doesn't exist or not a 
regular file

I filed up a bug here: https://bugs.launchpad.net/neutron/+bug/1266794

This is a blocking issue on Windows.


Thanks!

Alessandro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
Thanks Sean.  I'll work on adding this one to the hacking repo.  That brings up 
another question, though: what are the implications of suddenly enforcing a 
rule that wasn't previously enforced?  I know there are at least 30 other 
violations of this rule just within trove, and I imagine larger projects 
probably have more.  I'd hate to be the target of all the ire that sudden 
rejections of every commit would cause.  Do we have a way to make it off by 
default for some period to let the projects clean themselves up, then turn 
it on by default after that?

Or we could just loosen the coding standards, but that's just crazy talk :D

Greg

On Jan 7, 2014, at 8:46 AM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 09:26 AM, Greg Hill wrote:
 I got a -1 on a review for a standards violation that isn't caught by
 the automated checks, so I was wondering why the automated check doesn't
 catch it.  The violation was:
 
 from X import Y, Z
 
 According to the coding standards page on the openstack wiki, the coding
 standards are PEP8 (they just link to the PEP8 docs):
 https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says
 this format is allowed.
 
 It was pointed out that there's an additional wiki page I had missed,
 http://docs.openstack.org/developer/hacking/ which specifies this rule.
  So now that I see it is a rule, it comes back to my original question,
 why is it not enforced by the checker?  Apparently there's not a flake8
 rule for this either https://flake8.readthedocs.org/en/2.0/warnings.html
 
 So, two questions:
 
 1. Is this really the rule or just a vaguely worded repeat of the PEP8
 rule about import X, Y?
 2. If it is the rule, what's involved in getting the pep8 tests to check
 for it?
 
 Writing the hacking test to support it - 
 https://github.com/openstack-dev/hacking
 
 The policy leads the automatic enforcement scripts, so there are plenty of 
 rules in the policy that are not in automatic enforcement. Contributions to 
 fill in the gaps are welcomed.
 
 My own personal frustration aside, this would be helpful for other
 newcomers I imagine.  We have some pretty rigid and extensive coding
 standards, so its not reasonable to expect new contributors to remember
 them all.  It's also much nicer to have an automated tool tell you you
 violated some coding standard than to think you were ok and then have
 your code rejected 2 weeks later because of it.
 
 Thanks,
 Greg
 
 P.S. I can fix the wiki to point to the right page after the discussion.
 
 Agreed, it's all about bandwidth. Contributors on hacking to help fill it out 
 are appreciated. Right now it's mostly just Joe with a few others throwing in 
 when they can.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] coding standards question

2014-01-07 Thread Huang Zhiteng
Yes, you can turn it off.  Once the change to enforce this rule is merged
into hacking, other projects can start refreshing their hacking
dependency (e.g. upgrading to the latest version).  The patch that updates
the dependency has to turn the newly added check off; then
subsequent patches can fix all violations in that project and
turn the rule back on.

On Tue, Jan 7, 2014 at 11:19 PM, Greg Hill greg.h...@rackspace.com wrote:
 Thanks Sean.  I'll work on adding this one to the hacking repo.  That brings 
 up another question, though, what are the implications of suddenly enforcing 
 a rule that wasn't previously enforced?  I know there are at least 30 other 
 violations of this rule just within trove, and I imagine larger projects 
 probably have more.  I'd hate to be the target of all the ire that sudden 
 rejections of every commit would cause.  Do we have a way to make it off by 
 default for some period to let the projects all clean themselves up then turn 
 it on by default after that?

 Or we could just loosen the coding standards, but that's just crazy talk :D

 Greg

 On Jan 7, 2014, at 8:46 AM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 09:26 AM, Greg Hill wrote:
 I got a -1 on a review for a standards violation that isn't caught by
 the automated checks, so I was wondering why the automated check doesn't
 catch it.  The violation was:

 from X import Y, Z

 According to the coding standards page on the openstack wiki, the coding
 standards are PEP8 (they just link to the PEP8 docs):
 https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says
 this format is allowed.

 It was pointed out that there's an additional wiki page I had missed,
 http://docs.openstack.org/developer/hacking/ which specifies this rule.
  So now that I see it is a rule, it comes back to my original question,
 why is it not enforced by the checker?  Apparently there's not a flake8
 rule for this either https://flake8.readthedocs.org/en/2.0/warnings.html

 So, two questions:

 1. Is this really the rule or just a vaguely worded repeat of the PEP8
 rule about import X, Y?
 2. If it is the rule, what's involved in getting the pep8 tests to check
 for it?

 Writing the hacking test to support it - 
 https://github.com/openstack-dev/hacking

 The policy leads the automatic enforcement scripts, so there are plenty of 
 rules in the policy that are not in automatic enforcement. Contributions to 
 fill in the gaps are welcomed.

 My own personal frustration aside, this would be helpful for other
 newcomers I imagine.  We have some pretty rigid and extensive coding
 standards, so its not reasonable to expect new contributors to remember
 them all.  It's also much nicer to have an automated tool tell you you
 violated some coding standard than to think you were ok and then have
 your code rejected 2 weeks later because of it.

 Thanks,
 Greg

 P.S. I can fix the wiki to point to the right page after the discussion.

 Agreed, it's all about bandwidth. Contributors on hacking to help fill it 
 out are appreciated. Right now it's mostly just Joe with a few others 
 throwing in when they can.

   -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards
Huang Zhiteng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] coding standards question

2014-01-07 Thread Sean Dague
On 01/07/2014 10:19 AM, Greg Hill wrote:
 Thanks Sean.  I'll work on adding this one to the hacking repo.  That brings 
 up another question, though, what are the implications of suddenly enforcing 
 a rule that wasn't previously enforced?  I know there are at least 30 other 
 violations of this rule just within trove, and I imagine larger projects 
 probably have more.  I'd hate to be the target of all the ire that sudden 
 rejections of every commit would cause.  Do we have a way to make it off by 
 default for some period to let the projects all clean themselves up then turn 
 it on by default after that?

New rules only get released as part of new semver bumps on hacking, and
all the projects are pinned on their upper bound on hacking. i.e.

hacking>=0.8.0,<0.9

So new rules would be going into the 0.9.x release stream at this point.
Once 0.9.0 is released, we'll up the global requirements. Then projects
should update their pins, and either address the issues, or add an
ignore for the rules they do not want to enforce (either by policy, or
because now is not a good time to fix them).

So it is minimally disruptive.
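Concretely, the pin-then-ignore flow described above might look something like this (versions and rule codes are illustrative, not a real project's config):

```ini
# requirements.txt: stay pinned to the 0.8.x rule set until the
# project is ready to adopt the 0.9.x checks
hacking>=0.8.0,<0.9

# tox.ini: after bumping the pin to 0.9.x, temporarily silence any
# new rules until the violations are cleaned up
[flake8]
ignore = H301,H302
```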

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Meeting time - now Tuesdays at 1400 UTC in #openstack-meeting

2014-01-07 Thread Collins, Sean
The 1500 UTC time conflicts with Marconi, although the time was not
listed on the Meetings wikipage.


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Error when compiling nailgun

2014-01-07 Thread Andrew Woodward
Hi Kevein

the development setup docs are at
http://docs.mirantis.com/fuel-dev/develop/env.html (although they could be
slightly stale). The gist of it is that for nailgun, you need to install
all of the packages in nailgun/requirements.txt (pip install -r
requirements.txt) this should set you up package wise. The manage.py syncdb
command will also require that you have a working postgress setup aswell.

Andrew
Mirantis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
So it turns out that trove just has this rule disabled.  At least I now know 
more about how this stuff works, I guess.  Sorry for the confusion.

Greg

On Jan 7, 2014, at 9:54 AM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 10:19 AM, Greg Hill wrote:
 Thanks Sean.  I'll work on adding this one to the hacking repo.  That brings 
 up another question, though, what are the implications of suddenly enforcing 
 a rule that wasn't previously enforced?  I know there are at least 30 other 
 violations of this rule just within trove, and I imagine larger projects 
 probably have more.  I'd hate to be the target of all the ire that sudden 
 rejections of every commit would cause.  Do we have a way to make it off by 
 default for some period to let the projects all clean themselves up then 
 turn it on by default after that?
 
 New rules only get released as part of new semver bumps on hacking, and
 all the projects are pinned on their upper bound on hacking. i.e.
 
 hacking>=0.8.0,<0.9
 
 So new rules would be going into the 0.9.x release stream at this point.
 Once 0.9.0 is released, we'll up the global requirements. Then projects
 should update their pins, and either address the issues, or add an
 ignore for the rules they do not want to enforce (either by policy, or
 because now is not a good time to fix them).
 
 So it is minimally disruptive.
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-07 Thread Jay Pipes
On Tue, 2014-01-07 at 11:04 +0100, Thierry Carrez wrote:
 Matt Riedemann wrote:
  There is discussion in this thread about "wouldn't it be nice to have a
  tag on commits for changes that impact upgrades?".  There is.
  
  http://lists.openstack.org/pipermail/openstack-dev/2013-October/016619.html
  
  https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
  
  Here is an example of a patch going through the gate now with
  UpgradeImpact:
  
  https://review.openstack.org/#/c/62815/
 
 The good thing about UpgradeImpact is that it's less subjective than
 OpsImpact, and I think it catches what matters: backward-incompatible
 changes, upgrades needing manual intervention (or smart workarounds in
 packaging), etc.
 
 Additional benefit is that it's relevant for more than just the ops
 population: packagers and the release notes writers also need to track
 those.

+1 for UpgradeImpact

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Hashing MD5 to SHA256

2014-01-07 Thread Adam Young

On 01/06/2014 01:10 PM, Jeremy Stanley wrote:

On 2014-01-06 10:19:39 -0500 (-0500), Adam Young wrote:

If it were as  easy as just replaceing hteh hash algorithm, we
would have done it a year + ago. I'm guessing you figured that by
now.

[...]

With the lack of In-Reply-To header and not finding any previous
messages to the list in the past few months with a similar subject
line, I'm lacking some context (so forgive me if I'm off the mark).

If the goal is to thwart offline brute-forcing of the hashed data,
shouldn't we be talking about switching away from a plain hash to a
key derivation function anyway (PBKDF2, bcrypt, scrypt, et cetera)?
MD5 is still resistant to preimage and second preimage attacks as
far as I've seen, and SHA256 doesn't take too many orders of
magnitude more operations to calculate than MD5.



Sorry to all for the cryptic (ha) nature of this mail.  It was in 
response to an expired code review:


https://review.openstack.org/#/c/61445/

But I thought it would benefit from a larger audience.

Note that the hashes in question are not vulnerable to many of these 
attacks, as they are used primarily on very strict sets of data (the 
keystone tokens) and only between the keystone clients and servers.   It 
is not possible to create an arbitrary token, hash it, and have that be at 
all usable in Keystone, which is why MD5 has not been deemed a 
problem for us.


I like the idea of prefixing the hashes with the algorithm, but we still 
need a way to integrate that in.   A specific Keystone server will only 
use one hash algorithm, since it is using the hash as the unique ID 
for a database lookup.  Thus, in order for the clients and server to be 
in sync, they need a way to agree on the hash algorithm.  Identifying it 
in the hash is probably too late for most uses.
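As a sketch of what the prefixing idea could look like — the `{algorithm}hexdigest` format and the helper names here are hypothetical, not anything Keystone actually ships:

```python
import hashlib


def make_hash(data, algorithm="sha256"):
    # Hypothetical self-describing format: "{algorithm}hexdigest".
    digest = hashlib.new(algorithm, data).hexdigest()
    return "{%s}%s" % (algorithm, digest)


def parse_hash(value):
    # Split "{sha256}abc..." back into ("sha256", "abc...").
    algorithm, _, digest = value[1:].partition("}")
    return algorithm, digest


print(parse_hash(make_hash(b"token-body"))[0])  # sha256
```

The catch raised above still applies: if the hash is the database key, the server cannot look a token up until it already knows which algorithm the client used, so the prefix helps auditing more than negotiation.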





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Hashing MD5 to SHA256

2014-01-07 Thread Dolph Mathews
On Tue, Jan 7, 2014 at 11:01 AM, Adam Young ayo...@redhat.com wrote:

 On 01/06/2014 01:10 PM, Jeremy Stanley wrote:

 On 2014-01-06 10:19:39 -0500 (-0500), Adam Young wrote:

 If it were as  easy as just replaceing hteh hash algorithm, we
 would have done it a year + ago. I'm guessing you figured that by
 now.

 [...]

 With the lack of In-Reply-To header and not finding any previous
 messages to the list in the past few months with a similar subject
 line, I'm lacking some context (so forgive me if I'm off the mark).

 If the goal is to thwart offline brute-forcing of the hashed data,
 shouldn't we be talking about switching away from a plain hash to a
 key derivation function anyway (PBKDF2, bcrypt, scrypt, et cetera)?
 MD5 is still resistant to preimage and second preimage attacks as
 far as I've seen, and SHA256 doesn't take too many orders of
 magnitude more operations to calculate than MD5.



 Sorry to all for the cryptic (ha) nature of this mail.  It was in response
 to an expired code review:

 https://review.openstack.org/#/c/61445/

 But I thought would benefit from a larger audience.

 Note that the Hashes in question are not vulnerable to many of the
 attecks, as they are used primarily on very strict sets of data (the
 keystone tokens) and only between the keystone clients and servers.   It is
 not possible to create an arbitrary token, hash it, and have that at all
 usable in  Keystone.  Which is why MD5 has not been deemed to be a problem
 for us.

 I like the Idea of prefixing the hashes with the algorithm, but we still
 need a way to integrate that in.   A specific Keystone server will only use
 one Hash alrgorithm, since it is useing the Hash as the unique ID for a
 database lookup.  Thus, in order for the clients and server to be in sync,
 they need a way to agree on the hash algorithm.  Identifying it in the hash
 is probably too late for most uses.


++ the current architecture requires *clients* to perform the hash, and
make requests against the server using the hashed token. So, the client
needs to know which hash to use, not just communicate the hash it chose to
the service (or have the service publish hashed tokens?). Ideally, the
service would only have to index on a single hash, so it should be able to
communicate which algorithm it expects back to clients in order to provide
an upgrade path from MD5.
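The client-side flow being described could be as simple as the sketch below; how the server advertises its algorithm is exactly the open question, so the discovery step here is an assumption:

```python
import hashlib


def hash_token(token_body, algorithm):
    # Hash the long PKI token blob down to the short ID the server
    # indexes on; the algorithm must match what the server expects.
    return hashlib.new(algorithm, token_body.encode("utf-8")).hexdigest()


# Assumed: the client reads this from some server discovery document
# rather than hard-coding MD5 as today.
advertised = "sha256"
short_id = hash_token("very-long-pki-token-blob", advertised)
```

With that in place, upgrading from MD5 becomes a server-side config change that clients pick up automatically, instead of a lockstep change on both sides.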







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-07 Thread Tim Bell
+1 from me too UpgradeImpact is a much better term.

Tim

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 07 January 2014 17:53
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] minimum review period for functional 
 changes that break backwards compatibility
 
 On Tue, 2014-01-07 at 11:04 +0100, Thierry Carrez wrote:
  Matt Riedemann wrote:
   There is discussion in this thread about wouldn't it be nice to
   have a tag on commits for changes that impact upgrades?.  There is.
  
   http://lists.openstack.org/pipermail/openstack-dev/2013-October/0166
   19.html
  
   https://wiki.openstack.org/wiki/GitCommitMessages#Including_external
   _references
  
   Here is an example of a patch going through the gate now with
   UpgradeImpact:
  
   https://review.openstack.org/#/c/62815/
 
  The good thing about UpgradeImpact is that it's less subjective than
  OpsImpact, and I think it catches what matters:
  backward-incompatible changes, upgrades needing manual intervention
  (or smart workarounds in packaging), etc.
 
  Additional benefit is that it's relevant for more than just the ops
  population: packagers and the release notes writers also need to track
  those.
 
 +1 for UpgradeImpact
 
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-07 Thread Ben Nemec
 

On 2014-01-07 07:16, Doug Hellmann wrote: 

 On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin michael.ker...@hp.com wrote:
 
 I have been seeing this problem also. 
 
 My problem is actually with oslo.sphinx. I ran sudo pip install -r 
 test-requirements.txt in cinder so that I could run the tests there, which 
 installed oslo.sphinx. 
 
 The strange thing is that oslo.sphinx installed a directory called oslo in 
 /usr/local/lib/python2.7/dist-packages with no __init__.py file. With the 
 package installed like this, I get the same error you get with oslo.config.
 
 The oslo libraries use python namespace packages, which manifest themselves 
 as a directory in site-packages (or dist-packages) with sub-packages but no 
 __init__.py(c). That way oslo.sphinx and oslo.config can be packaged 
 separately, but still installed under the oslo directory and imported as 
 oslo.sphinx and oslo.config. 
 
 My guess is that installing oslo.sphinx globally (with sudo), set up 2 copies 
 of the namespace package (one in the global dist-packages and presumably one 
 in the virtualenv being used for the tests).
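The merge behaviour Doug describes can be demonstrated without installing anything: two fake "distributions" that share one namespace. This sketch uses the pkgutil-style declaration for portability — the oslo libraries of this era used setuptools' equivalent declare_namespace, but the effect is the same:

```python
import os
import sys
import tempfile

# Fake two separately installed distributions that both ship an "oslo"
# directory with no regular package contents of their own.
root = tempfile.mkdtemp()
for dist, sub in [("dist_a", "cfg_demo"), ("dist_b", "sphinx_demo")]:
    pkg = os.path.join(root, dist, "oslo")
    os.makedirs(os.path.join(pkg, sub))
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # Namespace declaration: merge every "oslo" dir found on sys.path
        # into a single importable package.
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, sub, "__init__.py"), "w") as f:
        f.write("NAME = %r\n" % sub)
    sys.path.insert(0, os.path.join(root, dist))

# Both sub-packages import, even though they live in different directories.
from oslo import cfg_demo, sphinx_demo
print(cfg_demo.NAME, sphinx_demo.NAME)
```

This is also why the venv problem bites: if only one of the install locations carries the namespace declaration (or one declaration shadows the other), the merge never happens and imports from the "other half" of the namespace fail.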

Actually I think it may be the opposite problem, at least where I'm
currently running into this. oslo.sphinx is only installed in the venv
and it creates a namespace package there. Then if you try to load
oslo.config in the venv it looks in the namespace package, doesn't find
it, and bails with a missing module error. 

I'm personally running into this in tempest - I can't even run pep8 out
of the box because the sample config check fails due to missing
oslo.config. Here's what I'm seeing: 

In the tox venv: 
(pep8)[fedora@devstack site-packages]$ ls oslo*
oslo.sphinx-1.1-py2.7-nspkg.pth

oslo:
sphinx

oslo.sphinx-1.1-py2.7.egg-info:
dependency_links.txt namespace_packages.txt PKG-INFO top_level.txt
installed-files.txt not-zip-safe SOURCES.txt 

And in the system site-packages: 
[fedora@devstack site-packages]$ ls oslo*
oslo.config.egg-link oslo.messaging.egg-link 

Since I don't actually care about oslo.sphinx in this case, I also found
that deleting it from the venv fixes the problem, but obviously that's
just a hacky workaround. My initial thought is to install oslo.sphinx in
devstack the same way as oslo.config and oslo.messaging, but I assume
there's a reason we didn't do it that way in the first place so I'm not
sure if that will work. 

So I don't know what the proper fix is, but I thought I'd share what
I've found so far. Also, I'm not sure if this even relates to the
ceilometer issue since I wouldn't expect that to be running in a venv,
but it may have a similar issue. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-07 Thread Matt Riedemann



On 12/30/2013 6:21 AM, Michael Still wrote:

Hi.

The purpose of this email is to apologise for some incorrect -1 review
scores which turbo hipster sent out today. I think it's important when
a third party testing tool is new to not have flaky results as people
learn to trust the tool, so I want to explain what happened here.

Turbo hipster is a system which takes nova code reviews, and runs
database upgrades against them to ensure that we can still upgrade for
users in the wild. It uses real user datasets, and also times
migrations and warns when they are too slow for large deployments. It
started voting on gerrit in the last week.

Turbo hipster uses zuul to learn about reviews in gerrit that it
should test. We run our own zuul instance, which talks to the
openstack.org zuul instance. This then hands out work to our pool of
testing workers. Another thing zuul does is it handles maintaining a
git repository for the workers to clone from.

This is where things went wrong today. For reasons I can't currently
explain, the git repo on our zuul instance ended up in a bad state (it
had a patch merged to master which wasn't in fact merged upstream
yet). As this code is stock zuul from openstack-infra, I have a
concern this might be a bug that other zuul users will see as well.

I've corrected the problem for now, and kicked off a recheck of any
patch with a -1 review score from turbo hipster in the last 24 hours.
I'll talk to the zuul maintainers tomorrow about the git problem and
see what we can learn.
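For what it's worth, the bad state described (a local master containing a commit that isn't upstream) is detectable with a generic git check. The sketch below simulates the state and detects it — it is not zuul's actual tooling, just plain git:

```shell
#!/bin/sh
# Simulate a merger repo whose checkout is ahead of upstream, then detect it.
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q upstream
git -C upstream -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m "base"

git clone -q upstream merger
git -C merger -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m "stray merge"    # the bad state

git -C merger fetch -q origin
extra=$(git -C merger rev-list --count origin/HEAD..HEAD)
echo "commits ahead of upstream: $extra"
```

A count above zero on a repo that should only mirror upstream is the red flag.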

Thanks heaps for your patience.

Michael



How do I interpret the warning and -1 from turbo-hipster on my patch 
here [1] with the logs here [2]?


I'm inclined to just do 'recheck migrations' on this since this patch 
doesn't have anything to do with this -1 as far as I can tell.


[1] https://review.openstack.org/#/c/64725/4/
[2] 
https://ssl.rcbops.com/turbo_hipster/logviewer/?q=/turbo_hipster/results/64/64725/4/check/gate-real-db-upgrade_nova_mysql_user_001/5186e53/user_001.log


--

Thanks,

Matt Riedemann




Re: [openstack-dev] olso.config error on running Devstack

2014-01-07 Thread Noorul Islam Kamal Malmiyoda
On Tue, Jan 7, 2014 at 4:54 PM, Michael Kerrin michael.ker...@hp.com wrote:
 I have been seeing this problem also.

 My problem is actually with oslo.sphinx. I ran sudo pip install -r
 test-requirements.txt in cinder so that I could run the tests there, which
 installed oslo.sphinx.

 Strange thing is that the oslo.sphinx installed a directory called oslo in
 /usr/local/lib/python2.7/dist-packages with no __init__.py file. With this
 package installed like so I get the same error you get with oslo.config.

 I don't need oslo.sphinx so I just went and manually deleted the oslo
 directory and the oslo.sphinx* files in
 /usr/local/lib/python2.7/dist-packages. Everything worked fine after that.

 Not sure what to do about this, but that is my story

In solum, we are trying to use a devstack job for functional testing. We
are installing test packages [1] from test-requirements.txt, hence we
are also facing a similar issue. See [2] and [3]

Regards,
Noorul

[1] 
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_20_26_838
[2] 
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_11_500
[3] 
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_15_996


 Michael







 On Mon 23 Dec 2013 14:18:11 Sean Dague wrote:

 On 12/23/2013 11:52 AM, Ben Nemec wrote:
  On 2013-12-18 09:26, Sayali Lunkad wrote:
  Hello,

  I get the following error when I run stack.sh on Devstack

  Traceback (most recent call last):
  File /usr/local/bin/ceilometer-dbsync, line 6, in module
  from ceilometer.storage import dbsync
  File /opt/stack/ceilometer/ceilometer/storage/__init__.py, line 23, in module
  from oslo.config import cfg
  ImportError: No module named config
  ++ failed
  ++ local r=1
  +++ jobs -p
  ++ kill
  ++ set +o xtrace

  Search gives me olso.config is installed. Please let me know of any
  solution.

  Devstack pulls oslo.config from git, so if you have it installed on the
  system through pip or something it could cause problems. If you can
  verify that it's only in /opt/stack/oslo.config, you might try deleting
  that directory and rerunning devstack to pull down a fresh copy. I
  don't know for sure what the problem is, but those are a couple of
  things to try.

 We actually try to resolve that here:

 https://github.com/openstack-dev/devstack/blob/master/lib/oslo#L43

 However, have I said how terrible python packaging is recently?
 Basically you can very easily get yourself in a situation where *just
 enough* of the distro package is left behind that pip thinks it's there,
 so won't install it, but the python loader doesn't, so won't work.

 Then much sadness.

 If anyone has a more foolproof way to fix this, suggestions appreciated.

 -Sean
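A small diagnostic for the situation Sean describes — package metadata present (so pip skips the install) while the loader can't actually find the module — might look like the sketch below. The helper names are hypothetical, and it fabricates the broken state with a stub .dist-info directory rather than a real half-removed distro package:

```python
import importlib.metadata as md
import importlib.util
import os
import sys
import tempfile

def loader_sees(module_name):
    """Can the import system actually find this module?"""
    return importlib.util.find_spec(module_name) is not None

def metadata_claims(dist_name):
    """Does installed-package metadata (what pip consults) say it's there?"""
    try:
        md.version(dist_name)
        return True
    except md.PackageNotFoundError:
        return False

# Fabricate the half-installed state: metadata on sys.path, no module to import.
site = tempfile.mkdtemp()
info = os.path.join(site, "ghost_pkg-1.0.dist-info")
os.makedirs(info)
with open(os.path.join(info, "METADATA"), "w") as f:
    f.write("Metadata-Version: 2.1\nName: ghost_pkg\nVersion: 1.0\n")
sys.path.insert(0, site)

claimed = metadata_claims("ghost_pkg")   # pip would think it's installed
importable = loader_sees("ghost_pkg")    # but the loader can't find it
print("claimed:", claimed, "importable:", importable)
```

Any distribution where the two answers disagree is in the "pip thinks it's there, imports fail" state.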








Re: [openstack-dev] olso.config error on running Devstack

2014-01-07 Thread Ben Nemec

On 2014-01-07 12:11, Noorul Islam Kamal Malmiyoda wrote:
On Tue, Jan 7, 2014 at 4:54 PM, Michael Kerrin michael.ker...@hp.com 
wrote:

I have been seeing this problem also.



My problem is actually with oslo.sphinx. I ran sudo pip install -r
test-requirements.txt in cinder so that I could run the tests there, 
which

installed oslo.sphinx.



Strange thing is that the oslo.sphinx installed a directory called 
oslo in
/usr/local/lib/python2.7/dist-packages with no __init__.py file. With 
this
package installed like so I get the same error you get with 
oslo.config.




I don't need oslo.sphinx so I just went and manually deleted the oslo
directory and the oslo.sphinx* files in
/usr/local/lib/python2.7/dist-packages. Everything worked fine after 
that.




Not sure what to do about this, but that is my story





In solum, we are trying to use devstack job for functional testing. We
are installing test packages [1] from test-requirements.txt, hence we
are also facing similar issue. See [2] and [3]

Regards,
Noorul

[1]
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_20_26_838
[2]
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_11_500
[3]
http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_15_996


I went ahead and proposed a change to install oslo.sphinx from git in 
devstack: https://review.openstack.org/#/c/65336  That fixes my problems 
in tempest anyway.


If anyone knows of a reason we shouldn't do that, speak now or forever 
hold your peace. :-)


-Ben



Re: [openstack-dev] olso.config error on running Devstack

2014-01-07 Thread Sean Dague
On 01/07/2014 01:44 PM, Ben Nemec wrote:
 On 2014-01-07 12:11, Noorul Islam Kamal Malmiyoda wrote:
 On Tue, Jan 7, 2014 at 4:54 PM, Michael Kerrin michael.ker...@hp.com
 wrote:
 I have been seeing this problem also.



 My problem is actually with oslo.sphinx. I ran sudo pip install -r
 test-requirements.txt in cinder so that I could run the tests there,
 which
 installed oslo.sphinx.



 Strange thing is that the oslo.sphinx installed a directory called
 oslo in
 /usr/local/lib/python2.7/dist-packages with no __init__.py file. With
 this
 package installed like so I get the same error you get with oslo.config.



 I don't need oslo.sphinx so I just went and manually deleted the oslo
 directory and the oslo.sphinx* files in
 /usr/local/lib/python2.7/dist-packages. Everything worked fine after
 that.



 Not sure what to do about this, but that is my story




 In solum, we are trying to use devstack job for functional testing. We
 are installing test packages [1] from test-requirements.txt, hence we
 are also facing similar issue. See [2] and [3]

 Regards,
 Noorul

 [1]
 http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_20_26_838

 [2]
 http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_11_500

 [3]
 http://logs.openstack.org/59/64059/8/check/gate-solum-devstack-dsvm/a7522b8/console.html#_2014-01-07_05_26_15_996

 
 I went ahead and proposed a change to install oslo.sphinx from git in
 devstack: https://review.openstack.org/#/c/65336  That fixes my problems
 in tempest anyway.
 
 If anyone knows of a reason we shouldn't do that, speak now or forever
 hold your peace. :-)

Yes, we shouldn't do that. That's there for installing things that need
to be self gating with the rest of openstack.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-07 Thread Eric Windisch
On Tue, Jan 7, 2014 at 1:16 AM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 Thanks Eric.

 I had already tried the solution presented on ask.openstack.org.


It was worth a shot.

I also found a bug [1] and applied code changes in [2], but to no success.


Ah. I hadn't seen that change before. I agree with Sean's comment, but we
can fix up your change.

I was just curious to know if anyone else is working on this or can provide
 some pointers from development front.


I'm in the process of taking over active development and maintenance of
this driver from Sam Alba.

I'll try and reproduce this myself.

Regards,
Eric Windisch


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2014-01-07 Thread Nachi Ueno
Hi Anita

Let me join this session also.

Nachi Ueno
NTT i3

2014/1/5 Anita Kuno ante...@anteaya.info:
 On 01/05/2014 03:42 AM, Sukhdev Kapur wrote:
 Folks,

 I finally got over my fear of weather and booked my flight and hotel for
 this sprint.

 I am relatively new to OpenStack community with a strong desire to learn
 and contribute.
 Having a strong desire to participate effectively is great.

 The difficulty for us is that we have already indicated that we need
 experienced participants at the code sprint. [0]

 Having long periods of silence and then simply announcing you have
 booked your flights makes things difficult since we have been in
 conversation with others about this for some time. I'm not saying don't
 come, I am saying that this now puts myself and Mark in a position of
 having to explain ourselves to others regarding consistency.

 Mark and I will address this, though I will need to discuss this with
 Mark to hear his thoughts and I am unable right now since I am at a
 conference all week. [1]

 Going forward, having regular conversations about items of this nature
 (irc is a great tool for this) is something I would like to see happen
 more often.

 You may have seen that Arista Testing has come alive and is voting on the
 newly submitted neutron patches. I have been busy putting together the
 framework, and learning the Jenkins/Gerrit interface.
 Yes. Can you respond on the Remove voting until your testing structure
 works thread please? This enables people who wish to respond to you on
 this point a place to conduct the conversation. It also preserves a
 history of the topic so that those searching the archives have all the
 relevant information in one place.

 Now, I have shifted
 my focus on Neutron/networking tempest tests. I notice some of these tests
 are failing on my setup. I have started to dig into these with the intent
 to understand them.
 That is a great place to begin. Thank you for taking interest and
 pursuing test bugs.


 In terms of this upcoming sprint, if you folks can give some pointers that
 will help me get better prepared and productive, that will be appreciated.
 We need folks attending the sprint who are familiar with offering
 patches to tempest. Seeing your name in this list would be a great
 indicator that you are at least able to offer a patch. [3]

 If you are able to focus this week on getting up to speed on Tempest and
 the Neutron Tempest process, then your attendance at the conference may
 possibly be effective both for yourself and for the rest of the
 participants.

 This wiki page is probably a good place to begin. [4]

 The etherpad tracking the Neutron Tempest team's progress is here. [5]

 Familiarizing yourself with the status of the conversation during the
 meetings will help as well, though it isn't as important in terms of
 being useful at the sprint as offering a tempest patch. Neutron meeting
 logs can be found here. [6]

 Also being available in channel will go a long way to fostering the kind
 of interactions which are constructive now and in the future. I don't
 see you in channel much, it would be great to see you more.

 Looking forward to meeting and working with you.
 And I you. Let's consider this an opportunity for greater participation
 with Neutron and having more conversations in irc is a great way to begin.

 Though I am not available in -neutron this week others are, so please
 announce when you are ready to work and hopefully someone will be
 keeping an eye out for you and offer you a hand.

 regards..
 -Sukhdev

 Thanks Sukhdev,
 Anita.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022918.html
 [1]
 http://eavesdrop.openstack.org/meetings/networking/2013/networking.2013-12-16-21.02.log.html
 timestamp 21:57:46
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/023228.html
 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z
 [4] https://wiki.openstack.org/wiki/Neutron/TempestAPITests
 [5] https://etherpad.openstack.org/p/icehouse-summit-qa-neutron
 [6] http://eavesdrop.openstack.org/meetings/networking/2013/





 On Fri, Dec 27, 2013 at 9:00 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/18/2013 04:17 PM, Anita Kuno wrote:
 Okay time for a recap.

 What: Neutron Tempest code sprint
 Where: Montreal, QC, Canada
 When: January 15, 16, 17 2014
 Location: I am about to sign the contract for Salle du Parc at 3625 Parc
 avenue, a room in a residence of McGill University.
 Time: 9am - 5pm

 I am expecting to see the following people in Montreal in January:
 Mark McClain
 Salvatore Orlando
 Sean Dague
 Matt Trenish
 Jay Pipes
 Sukhdev Kapur
 Miguel Lavelle
 Oleg Bondarev
 Rossella Sblendido
 Emilien Macchi
 Sylvain Afchain
 Nicolas Planel
 Kyle Mestery
 Dane Leblanc
 Sumit Naiksatam
 Henry Gessau
 Don Kehn
 Carl Baldwin
 Justin Hammond
 Anita Kuno

 If you are on the above list and can't attend, please email me so I have
 an 

[openstack-dev] [keystone] Changes to keystone-core!

2014-01-07 Thread Dolph Mathews
Hello everyone!

We've been talking about this for a long while, and we finally have a bunch of
changes to make to keystone-core all at once. A few people have moved on,
the project has grown a bit, and our review queue grows ever longer. As
ayoung phrased it in today's keystone meeting, with entirely selfish
motivations, we'd like to welcome the following new Guardians of the Gate
to keystone-core:

+ Steve Martinelli (stevemar)
+ Jamie Lennox (jamielennox)
+ David Stanek (dstanek)

And I'll be removing the following inactive members from keystone-core
(which hopefully won't come as a surprise to anyone!):

- Andy Smith (termie)
- Devin Carlen (devcamcar)
- Gabriel Hurley (gabrielhurley)
- Joe Heck (heckj)

I'll be making these changes today. Thanks to EVERYONE for your
contributions!

Happy code reviewing,

-Dolph


Re: [openstack-dev] [heat] Sofware Config progress

2014-01-07 Thread Susaant Kondapaneni
We work with images provided by vendors over which we do not always have
control. So we are considering the cases where vendor image does not come
installed with cloud-init. Is there a way to support heat software config
in such scenarios?

Thanks
Susaant


On Mon, Jan 6, 2014 at 4:47 PM, Steve Baker sba...@redhat.com wrote:

  On 07/01/14 06:25, Susaant Kondapaneni wrote:

  Hi Steve,

  I am trying to understand the software config implementation. Can you
 clarify the following:

  i. To use Software config and deploy in a template, instance resource
 MUST always be accompanied by user_data. User_data should specify how to
 bootstrap CM tool and signal it. Is that correct?

   Yes, currently the user_data contains cfn-init formatted metadata which
 tells os-collect-config how to poll for config changes. What happens when
 new config is fetched depends on the os-apply-config templates and
 os-refresh-config scripts which are already on that image (or set up with
 cloud-init).

  ii. Supposing we were to use images which do not have cloud-init
 packaged in them, (and a custom CM tool that won't require bootstrapping on
 the instance itself), can we still use software config and deploy resources
 to deploy software on such instances?

   Currently os-collect-config is more of a requirement than cloud-init,
 but as Clint said cloud-init does a good job of boot config so you'll need
 to elaborate on why you don't want to use it.

  iii. If ii. were possible who would signal the deployment resource to
 indicate that the instance is ready for the deployment?

 os-collect-config polls for the deployment data, and triggers the
 resulting deployment/config changes. One day this may be performed by a
 different agent like the unified agent that has been discussed. Currently
 os-collect-collect polls via a heat-api-cfn metadata call. This too may be
 done in any number of ways in the future such as messaging or long-polling.
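As a rough illustration, the poll-compare-trigger loop described above can be sketched like this (the names and structure are illustrative, not os-collect-config's actual code):

```python
import time

def poll_for_config(fetch_metadata, on_change, interval=0.01, rounds=3):
    """Poll a metadata source; invoke on_change only when the data changes."""
    last = None
    for _ in range(rounds):
        current = fetch_metadata()
        if current != last:
            on_change(current)  # e.g. hand off to refresh/apply scripts
            last = current
        time.sleep(interval)

seen = []
responses = iter([{"rev": 1}, {"rev": 1}, {"rev": 2}])
poll_for_config(lambda: next(responses), seen.append)
print(seen)  # only the two distinct revisions trigger a change
```

The transport behind fetch_metadata (CFN metadata call, messaging, long-polling) can vary without changing this loop, which is the point made above.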

 So you *could* consume the supplied user_data to know what to poll for
 subsequent config changes without cloud-init or os-collect-config, but you
 would have to describe what you're doing in detail for us to know if that
 sounds like a good idea.



  Thanks
 Susaant


 On Fri, Dec 13, 2013 at 3:46 PM, Steve Baker sba...@redhat.com wrote:

  I've been working on a POC in heat for resources which perform software
 configuration, with the aim of implementing this spec
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec

 The code to date is here:
 https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z

 What would be helpful now is reviews which give the architectural
 approach enough of a blessing to justify fleshing this POC out into a ready
 to merge changeset.

 Currently it is possible to:
 - create templates containing OS::Heat::SoftwareConfig and
 OS::Heat::SoftwareDeployment resources
 - deploy configs to OS::Nova::Server, where the deployment resource
 remains in an IN_PROGRESS state until it is signalled with the output values
 - write configs which execute shell scripts and report back with output
 values that other resources can have access to.

 What follows is an overview of the architecture and implementation to
 help with your reviews.

 REST API
 
 Like many heat resources, OS::Heat::SoftwareConfig and
 OS::Heat::SoftwareDeployment are backed by real resources that are
 invoked via a REST API. However in this case, the API that is called is
 heat itself.

 The REST API for these resources really just act as structured storage
 for config and deployments, and the entities are managed via the REST paths
 /{tenant_id}/software_configs and /{tenant_id}/software_deployments:

 https://review.openstack.org/#/c/58878/7/heat/api/openstack/v1/__init__.py
 https://review.openstack.org/#/c/58878/
 RPC layer of REST API:
 https://review.openstack.org/#/c/58877/
 DB layer of REST API:
 https://review.openstack.org/#/c/58876
 heatclient lib access to REST API:
 https://review.openstack.org/#/c/58885

 This data could be stored in a less structured datastore like swift, but
 this API has a couple of important implementation details which I think
 justify it existing:
 - SoftwareConfig resources are immutable once created. There is no update
 API to modify an existing config. This gives confidence that a config can
 have a long lifecycle without changing, and a certainty of what exactly is
 deployed on a server with a given config.
 - Fetching all the deployments and configs for a given server is an
 operation done repeatedly throughout the lifecycle of the stack, so is
 optimized to be able to do in a single operation. This is called by using
 the deployments index API call,
 /{tenant_id}/software_deployments?server_id=server_id. The resulting list
  of deployments includes their associated config data [1].
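The two design points — immutable configs and a single call listing a server's deployments with config data inlined — can be modeled in a few lines. This is a toy in-memory model, not the real REST/RPC/DB layers:

```python
import uuid

class SoftwareConfigStore:
    """Toy model: immutable configs, one-call per-server deployment listing."""
    def __init__(self):
        self._configs = {}
        self._deployments = []

    def create_config(self, definition):
        # Immutable once created: store a private copy, expose no update API.
        config_id = str(uuid.uuid4())
        self._configs[config_id] = dict(definition)
        return config_id

    def create_deployment(self, server_id, config_id):
        self._deployments.append({"server_id": server_id,
                                  "config_id": config_id})

    def deployments_for_server(self, server_id):
        # One operation returns deployments WITH their config data inlined.
        return [dict(d, config=self._configs[d["config_id"]])
                for d in self._deployments if d["server_id"] == server_id]

store = SoftwareConfigStore()
cid = store.create_config({"group": "script", "config": "echo hello"})
store.create_deployment("server-1", cid)
print(store.deployments_for_server("server-1")[0]["config"]["group"])
```

The absence of any update method is what gives the "certainty of what exactly is deployed" property described above.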

 OS::Heat::SoftwareConfig resource
 =
 OS::Heat::SoftwareConfig can be used directly in a template, but it may
 

Re: [openstack-dev] [keystone] Changes to keystone-core!

2014-01-07 Thread Gabriel Hurley
Sounds like a good decision all around.

Cleaning out -core lists seems to be the hip thing to do lately. ;-)

All the best,


-  Gabriel

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Tuesday, January 07, 2014 11:16 AM
To: OpenStack Development Mailing List
Cc: Steve Martinelli; Jamie Lennox; David Stanek; Andy Smith; Gabriel Hurley; 
Joe Heck; Devin Carlen
Subject: [keystone] Changes to keystone-core!

Hello everyone!

We've been talking about this for a long while, and we finally have a bunch of 
changes to make to keystone-core all at once. A few people have moved on, the 
project has grown a bit, and our review queue grows ever longer. As ayoung 
phrased it in today's keystone meeting, with entirely selfish motivations, we'd 
like to welcome the following new Guardians of the Gate to keystone-core:

+ Steve Martinelli (stevemar)
+ Jamie Lennox (jamielennox)
+ David Stanek (dstanek)

And I'll be removing the following inactive members from keystone-core (which 
hopefully won't come as a surprise to anyone!):

- Andy Smith (termie)
- Devin Carlen (devcamcar)
- Gabriel Hurley (gabrielhurley)
- Joe Heck (heckj)

I'll be making these changes today. Thanks to EVERYONE for your contributions!

Happy code reviewing,

-Dolph


Re: [openstack-dev] [heat] Sofware Config progress

2014-01-07 Thread Clint Byrum
I'd say it isn't so much cloud-init that you need, but some kind
of bootstrapper. The point of hot-software-config is to help with
in-instance orchestration. That's not going to happen without some way
to push the desired configuration into the instance.

Excerpts from Susaant Kondapaneni's message of 2014-01-07 11:16:16 -0800:
 We work with images provided by vendors over which we do not always have
 control. So we are considering the cases where vendor image does not come
 installed with cloud-init. Is there a way to support heat software config
 in such scenarios?
 
 Thanks
 Susaant
 
 On Mon, Jan 6, 2014 at 4:47 PM, Steve Baker sba...@redhat.com wrote:
 
   On 07/01/14 06:25, Susaant Kondapaneni wrote:
 
   Hi Steve,
 
   I am trying to understand the software config implementation. Can you
  clarify the following:
 
   i. To use Software config and deploy in a template, instance resource
  MUST always be accompanied by user_data. User_data should specify how to
  bootstrap CM tool and signal it. Is that correct?
 
Yes, currently the user_data contains cfn-init formatted metadata which
  tells os-collect-config how to poll for config changes. What happens when
  new config is fetched depends on the os-apply-config templates and
  os-refresh-config scripts which are already on that image (or set up with
  cloud-init).
 
   ii. Supposing we were to use images which do not have cloud-init
  packaged in them, (and a custom CM tool that won't require bootstrapping on
  the instance itself), can we still use software config and deploy resources
  to deploy software on such instances?
 
Currently os-collect-config is more of a requirement than cloud-init,
  but as Clint said cloud-init does a good job of boot config so you'll need
  to elaborate on why you don't want to use it.
 
   iii. If ii. were possible who would signal the deployment resource to
  indicate that the instance is ready for the deployment?
 
  os-collect-config polls for the deployment data, and triggers the
  resulting deployment/config changes. One day this may be performed by a
  different agent like the unified agent that has been discussed. Currently
  os-collect-collect polls via a heat-api-cfn metadata call. This too may be
  done in any number of ways in the future such as messaging or long-polling.
 
  So you *could* consume the supplied user_data to know what to poll for
  subsequent config changes without cloud-init or os-collect-config, but you
  would have to describe what you're doing in detail for us to know if that
  sounds like a good idea.
 
 
 
   Thanks
  Susaant
 
 
  On Fri, Dec 13, 2013 at 3:46 PM, Steve Baker sba...@redhat.com wrote:
 
   I've been working on a POC in heat for resources which perform software
  configuration, with the aim of implementing this spec
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
 
  The code to date is here:
  https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z
 
  What would be helpful now is reviews which give the architectural
  approach enough of a blessing to justify fleshing this POC out into a ready
  to merge changeset.
 
  Currently it is possible to:
  - create templates containing OS::Heat::SoftwareConfig and
  OS::Heat::SoftwareDeployment resources
  - deploy configs to OS::Nova::Server, where the deployment resource
  remains in an IN_PROGRESS state until it is signalled with the output 
  values
  - write configs which execute shell scripts and report back with output
  values that other resources can have access to.
 
  What follows is an overview of the architecture and implementation to
  help with your reviews.
 
  REST API
  
  Like many heat resources, OS::Heat::SoftwareConfig and
  OS::Heat::SoftwareDeployment are backed by real resources that are
  invoked via a REST API. However in this case, the API that is called is
  heat itself.
 
  The REST API for these resources really just act as structured storage
  for config and deployments, and the entities are managed via the REST paths
  /{tenant_id}/software_configs and /{tenant_id}/software_deployments:
 
  https://review.openstack.org/#/c/58878/7/heat/api/openstack/v1/__init__.py
  https://review.openstack.org/#/c/58878/
  RPC layer of REST API:
  https://review.openstack.org/#/c/58877/
  DB layer of REST API:
  https://review.openstack.org/#/c/58876
  heatclient lib access to REST API:
  https://review.openstack.org/#/c/58885
 
  This data could be stored in a less structured datastore like swift, but
  this API has a couple of important implementation details which I think
  justify it existing:
  - SoftwareConfig resources are immutable once created. There is no update
  API to modify an existing config. This gives confidence that a config can
  have a long lifecycle without changing, and a certainty of what exactly is
  deployed on a server with a given config.
  - Fetching all the deployments and configs for a given server is an
  operation done 

Re: [openstack-dev] [Ceilometer] Sharing the load test result

2014-01-07 Thread Jay Pipes
On Mon, 2014-01-06 at 00:14 +, Deok-June Yi wrote:
 Hi, Ceilometer team.
 
 I'm writing to share my load test result and ask you for advice about
 Ceilometer.
 
 Before starting, for those who don't know Synaps [1]: Synaps is a
 'monitoring as a service' project that provides an AWS CloudWatch
 compatible API. It was discussed to be merged with the Ceilometer project
 at Grizzly design phase, but Synaps developers could not join the
 project for it at that time. And now Ceilometer has its own alarming
 feature.
 
 A few days ago, I installed Ceilometer and Synaps on my test
 environment and ran load test for over 2 days to evaluate their
 alarming feature in the aspect of real-time requirement. Here I attached
 test environment diagram and test result. The load model was as below.
 1.  Create 5,000 alarms
 2.  [Every 1 minute] Create 5,000 samples
 
 As a result, alarm evaluation time of Ceilometer was not predictable,
 whereas Synaps evaluated all alarms within 2 seconds every minute.
 
 This comes from two different design decisions for alarm evaluation
 between Ceilometer and Synaps. Synaps does not read database but read
 in-memory and in-stream data for alarming while Ceilometer involves
 database read operations with REST API call.
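To make the design difference concrete, here is a toy in-stream evaluator of the kind described — hypothetical names, not Synaps code: alarms are held in memory and each sample is checked as it arrives, so no database read sits on the evaluation path.

```python
class StreamAlarmEvaluator:
    """Toy in-stream evaluator: alarms in memory, samples checked on arrival."""
    def __init__(self):
        self.alarms = {}   # name -> (metric, threshold)
        self.fired = []

    def create_alarm(self, name, metric, threshold):
        self.alarms[name] = (metric, threshold)

    def on_sample(self, metric, value):
        # Evaluation happens inline with sample ingestion; no DB read here.
        for name, (alarm_metric, threshold) in self.alarms.items():
            if alarm_metric == metric and value > threshold:
                self.fired.append(name)

ev = StreamAlarmEvaluator()
ev.create_alarm("cpu-high", "cpu_util", 90.0)
ev.on_sample("cpu_util", 50.0)   # below threshold, nothing fires
ev.on_sample("cpu_util", 95.0)   # fires as the sample passes through
print(ev.fired)
```

A DB-backed evaluator would instead periodically query stored samples per alarm, which is where the unpredictable evaluation latency reported above comes from.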

So you are saying that the Synaps server is storing 14,400,000 samples
in memory (2 days of 5000 samples per minute)? Or are you saying that
Synaps is storing just the 5000 alarm records in memory and then
processing (determining if the alarm condition was met) the samples as
they pass through to a backend data store? I think it is the latter but
I just want to make sure :)

Best,
-jay

 I think Ceilometer is better to allow creating alarms with more complex
 query on metrics. However Synaps is better if we have real-time
 requirement with alarm evaluation.
 
 So, how about re-opening the blueprint, cw-publish [2]?  It was
 discussed and designed [3] at the start of Grizzly development cycle,
 but it has not been implemented. And now I would like to work for it. Or
 is there any good idea to fulfill the real-time requirement with
 Ceilometer?
 
 Please don't hesitate to contact me.
 
 Thank you, 
 June Yi
 
 [1] https://wiki.openstack.org/Synaps
 [2] https://blueprints.launchpad.net/ceilometer/+spec/cw-publish
 [3] https://wiki.openstack.org/wiki/Ceilometer/blueprints/multi-publisher





[openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
Hi,

I'd like to discuss some possible ways we could install the OpenStack
components from packages in tripleo-image-elements.  As most folks are
probably aware, there is a fork of tripleo-image-elements called
tripleo-puppet-elements which does install using packages, but it does
so using Puppet to do the installation and for managing the
configuration of the installed components.  I'd like to kind of set
that aside for a moment and just discuss how we might support
installing from packages using tripleo-image-elements directly and not
using Puppet.

One idea would be to add support for a new type (or likely 2 new
types: rpm and dpkg) to the source-repositories element.
source-repositories already knows about the git, tar, and file types,
so it seems somewhat natural to have additional types for rpm and
dpkg.

A complication with that approach is that the existing elements assume
they're setting up everything from source.  So, if we take a look at
the nova element, and specifically install.d/74-nova, that script does
stuff like install a nova service, adds a nova user, creates needed
directories, etc.  All of that wouldn't need to be done if we were
installing from rpm or dpkg, b/c presumably the package would take
care of all that.

We could fix that by making the install.d scripts only run if you're
installing a component from source.  In that sense, it might make
sense to add a new hook, source-install.d and only run those scripts
if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
from the packages type.   The install.d hook could still exist to do
things that might be common to the 2 methods.

Thoughts on that approach or other ideas?

I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
are other ideas or major opposition to that approach.
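
[Editorial note: for illustration only, the hook gating proposed above might
look roughly like the following Python sketch. The hook names install.d,
source-install.d and package-install.d come from the proposal; the function
and type sets are assumptions, not diskimage-builder code.]

```python
# Hypothetical sketch of the proposed hook selection: source-install.d
# runs only for source types, package-install.d only for package types,
# and install.d runs for both installation methods.
SOURCE_TYPES = {"git", "tar", "file"}
PACKAGE_TYPES = {"rpm", "dpkg"}

def hook_dirs(install_type):
    dirs = ["install.d"]  # common to both methods
    if install_type in SOURCE_TYPES:
        dirs.append("source-install.d")
    elif install_type in PACKAGE_TYPES:
        dirs.append("package-install.d")
    else:
        raise ValueError("unknown install type: %s" % install_type)
    return dirs

print(hook_dirs("git"))   # a source-installed element
print(hook_dirs("rpm"))   # a package-installed element
```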

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Thoughts on next steps

2014-01-07 Thread Joe Gordon
Everything sounds good!


On Mon, Jan 6, 2014 at 6:52 PM, Sean Dague s...@dague.net wrote:

 On 01/06/2014 07:04 PM, Joe Gordon wrote:

 Overall this looks really good, and very spot on.


 On Thu, Jan 2, 2014 at 6:29 PM, Sean Dague s...@dague.net wrote:

 A lot of elastic recheck this fall has been based on the ad hoc
 needs of the moment, in between diving down into the race bugs that
 were uncovered by it. This week away from it all helped provide a
 little perspective on what I think we need to do to call it *done*
 (i.e. something akin to a 1.0 even though we are CDing it).

 Here is my current thinking on the next major things that should
 happen. Opinions welcomed.

 (These are roughly in implementation order based on urgency)

 = Split of web UI =

 The elastic recheck page is becoming a mismash of what was needed at
 the time. I think what we really have emerging is:
   * Overall Gate Health
   * Known (to ER) Bugs
   * Unknown (to ER) Bugs - more below

 I think the landing page should be Known Bugs, as that's where we
 want both bug hunters to go to prioritize things, as well as where
 people looking for known bugs should start.

 I think the overall Gate Health graphs should move to the zuul
 status page. Possibly as part of the collection of graphs at the
 bottom.

 We should have a secondary page (maybe column?) of the
 un-fingerprinted recheck bugs, largely to use as candidates for
 fingerprinting. This will let us eventually take over /recheck.


 I think it would be cool to collect the list of unclassified failures
 (not by recheck bug), so we can see how many (and what percentage) need
 to be classified. This isn't gate health but more of e-r health or
 something like that.


 Agreed. I've got the percentage in check_success today, but I agree that
 every gate job that fails that we don't have a fingerprint should be listed
 somewhere we can work through them.


 = Data Analysis / Graphs =

 I spent a bunch of time playing with pandas over break
 (http://dague.net/2013/12/30/ipython-notebook-experiments/), it's

 kind of awesome. It also made me rethink our approach to handling
 the data.

 I think the rolling average approach we were taking is more precise
 than accurate. As these are statistical events they really need
 error bars. Because when we have a quiet night, and 1 job fails at
 6am in the morning, the 100% failure rate it reflects in grenade
 needs to be quantified that it was 1 of 1, not 50 of 50.


 So my feeling is we should move away from the point graphs we have,
 and present these as weekly and daily failure rates (with graphs and
 error bars). And slice those per job. My suggestion is that we do
 the actual visualization with matplotlib because it's super easy to
 output that from pandas data sets.
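
[Editorial note: a rough sketch of the kind of pandas computation described
above, with invented data. The Wilson interval is one way (an assumption, not
necessarily what ER ended up using) to put error bars on a small-n failure
rate, so a 1-of-1 failure reads as 100% with a wide error bar rather than a
bare 100%.]

```python
import numpy as np
import pandas as pd

# Invented gate-job results: one row per run, True means the job passed.
runs = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2014-01-06 06:00", "2014-01-06 12:00", "2014-01-06 18:00",
         "2014-01-07 06:00"]),
    "passed": [True, True, False, False],
})

daily = runs.set_index("timestamp").resample("D")["passed"]
n = daily.count()          # runs per day
fails = n - daily.sum()    # failures per day (True sums as 1)
rate = fails / n
# Wilson score interval half-width (~95%): stays nonzero even when
# rate is exactly 0 or 1, unlike the naive sqrt(p(1-p)/n).
z = 1.96
half = (z / (1 + z**2 / n)) * np.sqrt(
    rate * (1 - rate) / n + z**2 / (4 * n**2))
print(pd.DataFrame({"runs": n, "failures": fails,
                    "rate": rate, "err": half}))
```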


 The one thing that the current graph does, that weekly and daily failure
 rates don't show, is a sudden spike in one of the lines.  If you stare
 at the current graphs for long enough and can read through the noise,
 you can see when the gate collectively crashes or if just the neutron
 related gates start failing. So I think one more graph is needed.


 The point of the visualizations is to make sense to people that don't
 understand all the data, especially core members of various teams that are
 trying to figure out if I attack 1 bug right now, what's the biggest bang
 for my buck.


Yes, that is one of the big uses for a visualization. The one I had in
mind was being able to see if a new unclassified bug appeared.



  Basically we'll be mining Elastic Search - Pandas TimeSeries -
 transforms and analysis - output tables and graphs. This is
 different enough from our current jquery graphing that I want to get
 ACKs before doing a bunch of work here and finding out people don't
 like it in reviews.

 Also in this process upgrade the metadata that we provide for each
 of those bugs so it's a little more clear what you are looking at.


 For example?


 We should always be listing the bug title, not just the number. We should
 also list what projects it's filed against. I've stared at these bugs as
 much as anyone, and I still need to click through the top 4 to figure out
 which one is the ssh bug. :)


  = Take over of /recheck =

 There is still a bunch of useful data coming in on recheck bug
 data which hasn't been curated into ER queries. I think the
 right thing to do is treat these as a work queue of bugs we should
 be building patterns out of (or completely invalidating). I've got a
 preliminary gerrit bulk query piece of code that does this, which
 would remove the need of the daemon the way that's currently
 happening. The gerrit queries are a little long right now, 

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Fox, Kevin M
Sounds very useful. Would there be a diskimage-builder flag then to say you 
prefer packages over source? Would it fall back to source if you specified 
packages and there were only source-install.d for a given element?

Thanks,
Kevin

From: James Slagle [james.sla...@gmail.com]
Sent: Tuesday, January 07, 2014 12:01 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

Hi,

I'd like to discuss some possible ways we could install the OpenStack
components from packages in tripleo-image-elements.  As most folks are
probably aware, there is a fork of tripleo-image-elements called
tripleo-puppet-elements which does install using packages, but it does
so using Puppet to do the installation and for managing the
configuration of the installed components.  I'd like to kind of set
that aside for a moment and just discuss how we might support
installing from packages using tripleo-image-elements directly and not
using Puppet.

One idea would be to add support for a new type (or likely 2 new
types: rpm and dpkg) to the source-repositories element.
source-repositories already knows about the git, tar, and file types,
so it seems somewhat natural to have additional types for rpm and
dpkg.

A complication with that approach is that the existing elements assume
they're setting up everything from source.  So, if we take a look at
the nova element, and specifically install.d/74-nova, that script does
stuff like install a nova service, adds a nova user, creates needed
directories, etc.  All of that wouldn't need to be done if we were
installing from rpm or dpkg, b/c presumably the package would take
care of all that.

We could fix that by making the install.d scripts only run if you're
installing a component from source.  In that sense, it might make
sense to add a new hook, source-install.d and only run those scripts
if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
from the packages type.   The install.d hook could still exist to do
things that might be common to the 2 methods.

Thoughts on that approach or other ideas?

I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
are other ideas or major opposition to that approach.

--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
What would be the benefit of using packages?

We've specifically avoided packages because they complect[1] configuration
and system state management with software delivery. The recent friction
we've seen with MySQL is an example where the packages are not actually
helping us, they're hurting us because they encode too much configuration
instead of just delivering binaries.

Perhaps those of us who have been involved a bit longer haven't done
a good job of communicating our reasons. I for one believe in the idea
that image based updates eliminate a lot of the entropy that comes along
with a package based updating system. For that reason alone I tend to
look at any packages that deliver configurable software as potentially
dangerous (note that I think they're wonderful for libraries, utilities,
and kernels. :)

[1] http://www.infoq.com/presentations/Simple-Made-Easy

Excerpts from James Slagle's message of 2014-01-07 12:01:07 -0800:
 Hi,
 
 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.
 
 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.
 
 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.
 
 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
  We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.
 
 Thoughts on that approach or other ideas?
 
 I'm currently working on a patchset I can submit to help prove it out.
  But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Change I005e752c: Whitelist external netaddr requirement, for bug 1266513, ineffective for me

2014-01-07 Thread Matt Riedemann



On 1/6/2014 8:55 PM, Sean Dague wrote:

On 01/06/2014 09:33 PM, Mike Spreitzer wrote:

I am suffering from bug 1266513, when trying to work on nova.  For
example, on MacOS 10.8.5, I clone nova and then (following the
instructions at https://wiki.openstack.org/wiki/DependsOnOSX) run `cd
nova; python tools/install_venv.py`.  It fails due to PyPI lacking a
sufficiently advanced netaddr.  So I applied patch
https://review.openstack.org/#/c/65019/ to my nova/tox.ini, delete my
nova/.venv, and try again.  It fails again, in just the same way
(including the message Some externally hosted files were ignored (use
--allow-external to allow)).  Why is this patch not solving my problem?


Because it only fixes it for tox.

tox -epy27

run_tests.sh and install_venv need a whole other set of fixes that need
to go through oslo.

 -Sean



This [1] is the fix for oslo-incubator and run_tests.sh which will be 
synced to nova later.


[1] https://review.openstack.org/#/c/65151/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:22 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 Sounds very useful. Would there be a diskimage-builder flag then to say you 
 prefer packages over source? Would it fall back to source if you specified 
 packages and there were only source-install.d for a given element?

Yes, you could pick which you wanted via environment variables.
Similar to the way you can pick if you want git head, a specific
gerrit review, or a released tarball today via $DIB_REPOTYPE_name,
etc.  See: 
https://github.com/openstack/diskimage-builder/blob/master/elements/source-repositories/README.md
for more info about that.

If you specified something that didn't exist, it should probably fail
with an error.  The default behavior would still be installing from
git master source if you specified nothing though.
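
[Editorial note: a sketch of the resolution behavior described above, modeled
on the existing $DIB_REPOTYPE_<name> convention. The function name and the
fail-with-an-error behavior are assumptions for illustration, not dib code.]

```python
import os

def resolve_repotype(element, available_types, default="git"):
    # Per-element override via environment, falling back to the default
    # (git master source) when nothing is specified.
    requested = os.environ.get("DIB_REPOTYPE_%s" % element, default)
    if requested not in available_types:
        # Fail loudly rather than silently falling back to source.
        raise RuntimeError(
            "element %s has no %s install method" % (element, requested))
    return requested

os.environ["DIB_REPOTYPE_nova"] = "rpm"
print(resolve_repotype("nova", {"git", "tar", "rpm"}))
print(resolve_repotype("glance", {"git", "tar"}))
```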



 Thanks,
 Kevin
 
 From: James Slagle [james.sla...@gmail.com]
 Sent: Tuesday, January 07, 2014 12:01 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [TripleO] Installing from packages in  
 tripleo-image-elements

 Hi,

 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.

 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.

 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.

 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
  We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.

 Thoughts on that approach or other ideas?

 I'm currently working on a patchset I can submit to help prove it out.
  But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.

 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

My guess for the easiest answer to that: distro vendor support.

Cheers,
--
Chris Jones

 On 7 Jan 2014, at 20:23, Clint Byrum cl...@fewbar.com wrote:
 
 What would be the benefit of using packages?
 
 We've specifically avoided packages because they complect[1] configuration
 and system state management with software delivery. The recent friction
 we've seen with MySQL is an example where the packages are not actually
 helping us, they're hurting us because they encode too much configuration
 instead of just delivering binaries.
 
 Perhaps those of us who have been involved a bit longer haven't done
 a good job of communicating our reasons. I for one believe in the idea
 that image based updates eliminate a lot of the entropy that comes along
 with a package based updating system. For that reason alone I tend to
 look at any packages that deliver configurable software as potentially
 dangerous (note that I think they're wonderful for libraries, utilities,
 and kernels. :)
 
 [1] http://www.infoq.com/presentations/Simple-Made-Easy
 
 Excerpts from James Slagle's message of 2014-01-07 12:01:07 -0800:
 Hi,
 
 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.
 
 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.
 
 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.
 
 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.
 
 Thoughts on that approach or other ideas?
 
 I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

Assuming we want to do this, but not necessarily agreeing that we do want to, I 
would suggest:

1) I think it would be nice if we could avoid separate dpkg/rpm types by having 
a package type and reusing the package map facility.

2) Clear up the source-repositories inconsistency by making it clear that 
multiple repositories of the same type do not work in source-repositories-nova 
(this would be a behaviour change, but would mesh more closely with the docs, 
and would require refactoring the 4 elements we ship atm with multiple git 
repos listed)

3) extend arg_to_element to parse element names like nova/package, 
nova/tar, nova/file and nova/source (defaulting to source), storing the 
choice for later.

4) When processing the nova element, apply only the appropriate entry in 
source-repositories-nova

5) Keep install.d as-is and make the scripts be aware of the previously stored 
choice of element origin in the elements (as they add support for a package 
origin)

6) Probably rename source-repositories to something more appropriate.
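
[Editorial note: an illustrative parse of the element-name syntax from step 3
above. The real arg_to_element lives in shell inside diskimage-builder; this
Python version only sketches the intended behavior.]

```python
# Origins taken from step 3's examples; "source" is the stated default.
KNOWN_ORIGINS = {"package", "tar", "file", "source"}

def parse_element_arg(arg):
    name, sep, origin = arg.partition("/")
    if not sep:
        origin = "source"  # default when no origin suffix is given
    if origin not in KNOWN_ORIGINS:
        raise ValueError("unknown origin %r for element %r" % (origin, name))
    return name, origin

print(parse_element_arg("nova/package"))  # ('nova', 'package')
print(parse_element_arg("nova"))          # ('nova', 'source')
```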

As for whether we should do this or not... like Clint I want to say no, but I'm 
also worried about people forking t-i-e and not pushing their 
fixes/improvements and new elements back up to us because we're too diverged.

If this is a real customer need, I would come down in favour of doing it if the 
cost of the above implementation (or an alternate one) isn't too high.

Cheers,
--
Chris Jones

 On 7 Jan 2014, at 20:01, James Slagle james.sla...@gmail.com wrote:
 
 Hi,
 
 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.
 
 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.
 
 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.
 
 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.
 
 Thoughts on that approach or other ideas?
 
 I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.
 
 -- 
 -- James Slagle
 --
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest][Solum] Writing functional tests in tempest style

2014-01-07 Thread Jay Pipes
On Mon, 2014-01-06 at 11:46 -0800, Georgy Okrokvertskhov wrote:

 Thank you for your input. Right now this approach allows to run
 integration tests with and without tempest. I think this is valuable
 for the project as anyone can run integration tests on their laptop
 having only keystone available.
 
 It will be great to have some input from Tempest team. Can we extract
 some core tempest component to create a testing framework for projects
 on stackforge? Having common integration test framework in tempest
 style will help further project integration to OpenStack ecosystem
 during incubation.

Hi Georgy,

I created a blueprint for tracking this work:

https://blueprints.launchpad.net/tempest/+spec/split-out-reusable-tempest-library

If I have some time this week, I'll look into estimating various
breakout work items for the blueprint.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:
 What would be the benefit of using packages?

We're building images on top of different distributions of Linux.
Those distributions themselves offer packaged and supported OpenStack
components.  So, one benefit is that you'd be using what's blessed
by your distro if you chose to.  I think that's a fairly common way
people are going to be used to installing components. The OpenStack
Installation guide says to install from packages, fwiw.

 We've specifically avoided packages because they complect[1] configuration
 and system state management with software delivery. The recent friction
 we've seen with MySQL is an example where the packages are not actually
 helping us, they're hurting us because they encode too much configuration
 instead of just delivering binaries.

We're trying to do something fairly specific with the read only /
partition.  You're right, most packages aren't going to handle that
well.  So, yes you have a point from that perspective.

However, there are many examples of when packages help you.
Dependency resolution, version compatibility, methods of verification,
knowing what's installed, etc.  I don't think that's really an
argument or discussion worth having, because you either want to use
packages or you want to build it all from source.  There are
advantages/disadvantages to both methods, and what I'm proposing is
that we support both methods, and not require everyone to only be able
to install from source.

 Perhaps those of us who have been involved a bit longer haven't done
 a good job of communicating our reasons. I for one believe in the idea
 that image based updates eliminate a lot of the entropy that comes along
 with a package based updating system. For that reason alone I tend to
 look at any packages that deliver configurable software as potentially
 dangerous (note that I think they're wonderful for libraries, utilities,
 and kernels. :)

Using packages wouldn't prevent you from using the image based update
mechanism.  Anecdotally, I think image based updates could be a bit
heavy handed for something like picking up a quick security or bug fix
or the like.  That would be a scenario where a package update could
really be handy.  Especially if someone else (e.g., your distro) is
maintaining the package for you.

For this proposal though, I was only talking about installation of the
components at image build time.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

(FWIW I suggested using the element arguments like nova/package to avoid a
huge and crazy environment by using DIB_REPO foo for every element)

Cheers,
--
Chris Jones

 On 7 Jan 2014, at 20:32, James Slagle james.sla...@gmail.com wrote:
 
 On Tue, Jan 7, 2014 at 3:22 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 Sounds very useful. Would there be a diskimage-builder flag then to say you 
 prefer packages over source? Would it fall back to source if you specified 
 packages and there were only source-install.d for a given element?
 
 Yes, you could pick which you wanted via environment variables.
 Similar to the way you can pick if you want git head, a specific
 gerrit review, or a released tarball today via $DIB_REPOTYPE_name,
 etc.  See: 
 https://github.com/openstack/diskimage-builder/blob/master/elements/source-repositories/README.md
 for more info about that.
 
 If you specified something that didn't exist, it should probably fail
 with an error.  The default behavior would still be installing from
 git master source if you specified nothing though.
 
 
 
 Thanks,
 Kevin
 
 From: James Slagle [james.sla...@gmail.com]
 Sent: Tuesday, January 07, 2014 12:01 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [TripleO] Installing from packages in  
 tripleo-image-elements
 
 Hi,
 
 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.
 
 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.
 
 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.
 
 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.
 
 Thoughts on that approach or other ideas?
 
 I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.
 
 --
 -- James Slagle
 --
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 -- James Slagle
 --
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Fox, Kevin M
I was going to stay silent on this one, but since you asked...

/me puts his customer hat on

We source OpenStack from RDO for the packages and additional integration 
testing that comes from the project instead of using OpenStack directly. I was 
a little turned off from Triple-O when I saw it was source only. The feeling 
being that it is too green for our tastes. Which may be inaccurate. While I 
might be convinced to use source, it's a much harder sell to us currently than 
using packages.

/me takes off his customer hat.

Thanks again for all the hard work on Triple-O. It's looking great, and I hope I 
get the chance to use it soon.

Thanks,
Kevin


From: Chris Jones [c...@tenshu.net]
Sent: Tuesday, January 07, 2014 12:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

Hi

Assuming we want to do this, but not necessarily agreeing that we do want to, I 
would suggest:

1) I think it would be nice if we could avoid separate dpkg/rpm types by having 
a package type and reusing the package map facility.

2) Clear up the source-repositories inconsistency by making it clear that 
multiple repositories of the same type do not work in source-repositories-nova 
(this would be a behaviour change, but would mesh more closely with the docs, 
and would require refactoring the 4 elements we ship atm with multiple git 
repos listed)

3) extend arg_to_element to parse element names like nova/package, 
nova/tar, nova/file and nova/source (defaulting to source), storing the 
choice for later.

4) When processing the nova element, apply only the appropriate entry in 
source-repositories-nova

5) Keep install.d as-is and make the scripts be aware of the previously stored 
choice of element origin in the elements (as they add support for a package 
origin)

6) Probably rename source-repositories to something more appropriate.
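
A rough sketch of what (3) might look like (a guess at the shape, not the real
arg_to_element code):

```python
def parse_element_arg(arg, valid_origins=('source', 'package', 'tar', 'file')):
    """Split an element argument like 'nova/package' into
    (element_name, origin), defaulting to 'source' when no origin
    is given. Illustrative only; the real arg_to_element differs.
    """
    name, _, origin = arg.partition('/')
    origin = origin or 'source'
    if origin not in valid_origins:
        raise ValueError('unknown origin %r for element %r' % (origin, name))
    return name, origin
```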

As for whether we should do this or not... like Clint I want to say no, but I'm 
also worried about people forking t-i-e and not pushing their 
fixes/improvements and new elements back up to us because we're too diverged.

If this is a real customer need, I would come down in favour of doing it if the 
cost of the above implementation (or an alternate one) isn't too high.

Cheers,
--
Chris Jones

 On 7 Jan 2014, at 20:01, James Slagle james.sla...@gmail.com wrote:

 Hi,

 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.

 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.

 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 things like installing a nova service, adding a nova user, creating needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.

 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.

 Thoughts on that approach or other ideas?

 I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.

 --
 -- James Slagle
 --




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:48 PM, Chris Jones c...@tenshu.net wrote:
 Hi

 Assuming we want to do this, but not necessarily agreeing that we do want to, 
 I would suggest:

 1) I think it would be nice if we could avoid separate dpkg/rpm types by 
 having a package type and reusing the package map facility.

Indeed, I'd like to see one package type as well. I think we could
start with that route, and only split it out if there was a proven
technical need.

 2) Clear up the source-repositories inconsistency by making it clear that 
 multiple repositories of the same type do not work in 
 source-repositories-nova (this would be a behaviour change, but would mesh 
 more closely with the docs, and would require refactoring the 4 elements we 
 ship atm with multiple git repos listed)

Could you expand on this a bit?  I'm not sure what inconsistency
you're referring to.

 3) extend arg_to_element to parse element names like nova/package, 
 nova/tar, nova/file and nova/source (defaulting to source), storing the 
 choice for later.

 4) When processing the nova element, apply only the appropriate entry in 
 source-repositories-nova

 5) Keep install.d as-is and make the scripts be aware of the previously 
 stored choice of element origin in the elements (as they add support for a 
 package origin)

 6) Probably rename source-repositories to something more appropriate.

All good ideas.  I like the mechanism to specify the type as well.  I
wonder if we could have a global build option as well that said to use
packages or source, or whatever, for all components that support that
type.  That way you wouldn't have to specify each individually.

 As for whether we should do this or not... like Clint I want to say no, but 
 I'm also worried about people forking t-i-e and not pushing their 
 fixes/improvements and new elements back up to us because we're too diverged.

I feel that not offering a choice will only turn people off from using
t-i-e. Only offering an install from source option is not likely to
cause large groups of people to suddenly decide that only installing
from source is the way to go and then start using t-i-e exclusively.
So, that's why I'd really like to see support for packages in the main
repo itself.

 If this is a real customer need, I would come down in favour of doing it if 
 the cost of the above implementation (or an alternate one) isn't too high.

+1.  Installing from source (master) would still be the default.  And
any implementations that allowed something different would have to not
disrupt that.  Similar to how we've added new install options in the
past (source-repositories, tar, etc.) and have kept disruptions to a
minimum.



-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-07 12:53:57 -0800:
 On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:
  What would be the benefit of using packages?
 
 We're building images on top of different distributions of Linux.
 Those distributions themselves offer packaged and supported OpenStack
 components.  So, one benefit is that you'd be using what's blessed
 by your distro if you chose to.  I think that's a fairly common way
 people are going to be used to installing components. The OpenStack
 Installation guide says to install from packages, fwiw.
 

Indeed, this is how many people deploy traditional applications.

However, what we're doing is intended to be a real, consumable deployment
of OpenStack. Specifically one that is in the gate and scales out to
any reasonable production load necessary.

One problem with scaling out to many nodes is that the traditional
application deployment patterns introduce too much entropy. This is really
hard to patch out later. We are designing the software distribution system
to avoid those problems from the beginning. Packages do the opposite,
and encourage entropy by promising to try and update software with
minimal invasion, thus enabling users to introduce one-off machines.

  We've specifically avoided packages because they complect[1] configuration
  and system state management with software delivery. The recent friction
  we've seen with MySQL is an example where the packages are not actually
  helping us, they're hurting us because they encode too much configuration
  instead of just delivering binaries.
 
 We're trying to do something fairly specific with the read only /
 partition.  You're right, most packages aren't going to handle that
 well.  So, yes you have a point from that perspective.


Readonly / is a really important feature of the deployment we're aiming
at. Doing it with packages is quite possible. My point in asking why
bother with packages is that when you have an entire image that has been
verified and is known to work, what advantage does having a package for
everything actually bring.

 However, there are many examples of when packages help you.
 Dependency resolution, version compatibility, methods of verification,
 knowing what's installed, etc.  I don't think that's really an
 argument or discussion worth having, because you either want to use
 packages or you want to build it all from source.  There are
 advantages/disadvantages to both methods, and what I'm proposing is
 that we support both methods, and not require everyone to only be able
 to install from source.


Install from source is probably not the right way to put this. We're
installing the virtualenvs from tarballs downloaded from pypi. We're
also installing 99.9% python, so we're not really going from source,
we're just going from git.

But anyway, I see your point and will capitulate that it is less weird
for people and thus may make the pill a little easier to swallow. But if
I could have it my way, I'd suggest that the packages be built to mirror
the structure of the element end-products as much as possible so that
they can be used with minimal change to elements.

  Perhaps those of us who have been involved a bit longer haven't done
  a good job of communicating our reasons. I for one believe in the idea
  that image based updates eliminate a lot of the entropy that comes along
  with a package based updating system. For that reason alone I tend to
  look at any packages that deliver configurable software as potentially
  dangerous (note that I think they're wonderful for libraries, utilities,
  and kernels. :)
 
 Using packages wouldn't prevent you from using the image based update
 mechanism.  Anecdotally, I think image based updates could be a bit
 heavy handed for something like picking up a quick security or bug fix
 or the like.  That would be a scenario where a package update could
 really be handy.  Especially if someone else (e.g., your distro) is
 maintaining the package for you.
 
 For this proposal though, I was only talking about installation of the
 components at image build time.
 

The entire point of image based updates is that they are heavy handed.
The problem we're trying to solve is that you have a data center of (n)
machines and you don't want (n) unique sets of software,  where each
machine might have some hot fixes and not others. At 1000 machines it
becomes critical. At 10,000 machines, the entropy matrix starts to get
scary.



Re: [openstack-dev] [Tempest][Solum] Writing functional tests in tempest style

2014-01-07 Thread Georgy Okrokvertskhov
Hi Jay,

Thank you very much for working on that!

Thanks
Georgy


On Tue, Jan 7, 2014 at 12:50 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-01-06 at 11:46 -0800, Georgy Okrokvertskhov wrote:

  Thank you for your input. Right now this approach allows to run
  integration tests with and without tempest. I think this is valuable
  for the project as anyone can run integration tests on their laptop
  having only keystone available.
 
  It will be great to have some input from Tempest team. Can we extract
  some core tempest component to create a testing framework for projects
  on stackforge? Having common integration test framework in tempest
  style will help further project integration to OpenStack ecosystem
  during incubation.

 Hi Georgy,

 I created a blueprint for tracking this work:


 https://blueprints.launchpad.net/tempest/+spec/split-out-reusable-tempest-library

 If I have some time this week, I'll look into estimating various
 breakout work items for the blueprint.

 Best,
 -jay







-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-07 Thread Kurt Griffiths
You might also consider doing this in WSGI middleware:

Pros:

  *   Consolidates policy code in one place, making it easier to audit and 
maintain
  *   Simple to turn policy on/off – just don’t insert the middleware when off!
  *   Does not preclude the use of oslo.policy for rule checking
  *   Blocks unauthorized requests before they have a chance to touch the web 
framework or app. This reduces your attack surface and can improve performance 
(since the web framework has yet to parse the request).

Cons:

  *   Doesn't work for policies that require knowledge that isn’t available 
this early in the pipeline (without having to duplicate a lot of code)
  *   You have to parse the WSGI environ dict yourself (this may not be a big 
deal, depending on how much knowledge you need to glean in order to enforce the 
policy).
  *   You have to keep your HTTP path matching in sync with your route 
definitions in the code. If you have full test coverage, you will know when you 
get out of sync. That being said, API routes tend to be quite stable in 
relation to other parts of the code implementation once you have settled on 
your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best 
judgement whether this option makes sense in Solum’s case.
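
To make that concrete, here is a minimal sketch of such middleware (the
check_policy callable and the X-Roles header handling are assumptions — e.g.
something backed by oslo.policy sitting behind the keystone auth middleware):

```python
class PolicyMiddleware(object):
    """Minimal WSGI middleware sketch that rejects unauthorized
    requests before they ever reach the web framework or app."""

    def __init__(self, app, check_policy):
        self.app = app
        # check_policy(target, roles) -> bool is a stand-in for a real
        # rule engine such as oslo.policy.
        self.check_policy = check_policy

    def __call__(self, environ, start_response):
        # Glean what we can from the raw WSGI environ: the method/path
        # pair and the roles set upstream (e.g. by keystone middleware).
        target = (environ['REQUEST_METHOD'], environ.get('PATH_INFO', '/'))
        roles = [r for r in environ.get('HTTP_X_ROLES', '').split(',') if r]
        if not self.check_policy(target, roles):
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'Forbidden by policy']
        return self.app(environ, start_response)
```

Dropping the middleware out of the pipeline turns the policy off; no app code
changes needed.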

From: Doug Hellmann doug.hellm...@dreamhost.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:
Hi Doug,

Thank you for pointing to this code. As I see it, you use the OpenStack policy 
framework but not Pecan's security features. How do you implement fine-grained 
access control, like users allowed to read only vs. writers and admins? Can you 
block part of the API methods for a specific user role, like access to the 
create methods?

The policy enforcement isn't simple on/off switching in ceilometer, so we're 
using the policy framework calls in a couple of places within our API code 
(look through v2.py for examples). As a result, we didn't need to build much on 
top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators to 
combine with pecan's hook framework to enforce the policy, which might be less 
complex than trying to match the operating model of the policy system to 
pecan's security framework.
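
For example, a toy sketch of such a decorator (the rule table and explicit
role-passing are stand-ins; a real version would pull the request context from
Pecan and delegate the check to the policy engine):

```python
import functools

# Toy rule table mapping policy rules to the roles that satisfy them.
# (An assumption for illustration; a real version would call into the
# oslo policy engine with rules loaded from a policy.json file.)
POLICY = {
    'solum:create_assembly': {'admin', 'developer'},
    'solum:delete_assembly': {'admin'},
}

class PolicyNotAuthorized(Exception):
    pass

def enforce(rule):
    """Check `rule` against the caller's roles before running the
    controller method."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, roles, *args, **kwargs):
            if not POLICY.get(rule, set()) & set(roles):
                raise PolicyNotAuthorized(rule)
            return func(self, roles, *args, **kwargs)
        return wrapper
    return decorator

class AssemblyController(object):
    @enforce('solum:create_assembly')
    def post(self, roles, name):
        return 'created %s' % name
```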

This is the sort of thing that should probably go through Oslo and be shared, 
so please consider contributing to the incubator when you have something 
working.

Doug



Thanks
Georgy


On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
doug.hellm...@dreamhost.com wrote:



On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:
Hi,

In the Solum project we will need to implement security and ACLs for the Solum 
API. Currently we use the Pecan framework for the API. Pecan has its own 
security model based on the SecureController class. At the same time, OpenStack 
widely uses a policy mechanism which uses JSON files to control access to 
specific API methods.

I wonder if someone has experience implementing security and ACL support using 
the Pecan framework. What is the right way to provide security for the API?

In ceilometer we are using the keystone middleware and the policy framework to 
manage arguments that constrain the queries handled by the storage layer.

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

and

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

Doug



Thanks
Georgy








--
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

 On 7 Jan 2014, at 21:17, James Slagle james.sla...@gmail.com wrote:
 
 Could you expand on this a bit?  I'm not sure what inconsistency
 you're referring to.

That multiple repos work, but the docs don't say so, and the DIB_REPO foo 
doesn't support multiple repos. 

 I wonder if we could have a global build option as well that said to use
 packages or source

Definitely. Maybe DIB_PREFER_ORIGIN?

 I feel that not offering a choice will only turn people off from using
 t-i-e.

+1 (even if I wish that wasn't the case!)

Cheers,
Chris


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

 On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:
 Packages do the opposite,
 and encourage entropy by promising to try and update software 

Building with packages doesn't require updating running systems with packages 
any more than building with git requires updating running systems with git pull.
One can simply build (and test!) a new image with updated packages and 
rebuild/takeover nodes.

Cheers,
Chris


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-07 Thread Neal, Phil
For multi-node deployments, implementing something like inotify would allow 
administrators to push configuration changes out to multiple targets using 
puppet/chef/etc. and have the daemons pick them up without a restart. Thumbs up 
to that.
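
As a minimal sketch of the reload trigger (polling file mtimes as a stand-in
for inotify; purely illustrative, not Ceilometer code):

```python
import os

class MeterConfigWatcher(object):
    """Polling stand-in for inotify: check() returns True when the
    meter configuration file has changed on disk since last seen,
    signalling the daemon to reload its meters without a restart."""

    def __init__(self, path):
        self.path = path
        self._mtime = os.stat(path).st_mtime

    def check(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            return True
        return False
```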

As Tim Bell suggested, API-based enabling/disabling would allow users to update 
meters via script, but then there's the question of how to work out the global 
vs. per-project tenant settings...right now we collect specified meters for all 
available projects, and the API returns whatever data is stored minus filtered 
values. Maybe I'm missing something in the suggestion, but turning off 
collection for an individual project seems like it'd require some deep changes.

Vijay, I'll repeat dhellmann's request: do you have more detail in another doc? 
:-)

-   Phil

 -Original Message-
 From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
 [mailto:vijayakumar.kodam@nsn.com]
 Sent: Tuesday, January 07, 2014 2:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: chmo...@enovance.com
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
 Sent: Monday, January 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 
 
 
 On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata Consultancy
 Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote:
 
 In this case, simply changing the meter properties in a configuration file
 should be enough. There should be an inotify signal which shall notify
 ceilometer of the changes in the config file. Then ceilometer should
 automatically update the meters without restarting.
 
 
 
 Why it cannot be something configured by the admin with inotifywait(1)
 command?
 
 
 
 Or this can be an API call for enabling/disabling meters which could be more
 useful without having to change the config files.
 
 
 
 Chmouel.
 
 
 
 I haven't tried inotifywait() in this implementation. I need to check if it
 will be useful for the current implementation.
 
 Yes. API call could be more useful than changing the config files manually.
 
 
 
 Thanks,
 
 VijayKumar



Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-07 Thread Michael Still
Hi. Thanks for reaching out about this.

It seems this patch has now passed turbo hipster, so I am going to
treat this as a more theoretical question than perhaps you intended. I
should note though that Joshua Hesketh and I have been trying to read
/ triage every turbo hipster failure, but that has been hard this week
because we're both at a conference.

The problem this patch faced is that we are having trouble defining
what is a reasonable amount of time for a database migration to run
for. Specifically:

2014-01-07 14:59:32,012 [output] 205 - 206...
2014-01-07 14:59:32,848 [heartbeat]
2014-01-07 15:00:02,848 [heartbeat]
2014-01-07 15:00:32,849 [heartbeat]
2014-01-07 15:00:39,197 [output] done

So applying migration 206 took slightly over a minute (67 seconds).
Our historical data (mean + 2 standard deviations) says that this
migration should take no more than 63 seconds. So this only just
failed the test.
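
In other words, the check amounts to something like this (illustrative, not
turbo hipster's actual code):

```python
import statistics

def migration_too_slow(history_seconds, observed_seconds):
    """Flag a migration whose runtime exceeds the historical
    mean + 2 standard deviations, per the methodology above."""
    limit = (statistics.mean(history_seconds)
             + 2 * statistics.stdev(history_seconds))
    return observed_seconds > limit
```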

However, we know there are issues with our methodology -- we've tried
normalizing for disk IO bandwidth and it hasn't worked out as well as
we'd hoped. This week's plan is to try to use mysql performance schema
instead, but we have to learn more about how it works first.

I apologise for this mis-vote.

Michael

On Wed, Jan 8, 2014 at 1:44 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 12/30/2013 6:21 AM, Michael Still wrote:

 Hi.

 The purpose of this email is to apologise for some incorrect -1 review
 scores which turbo hipster sent out today. I think it's important when
 a third party testing tool is new to not have flakey results as people
 learn to trust the tool, so I want to explain what happened here.

 Turbo hipster is a system which takes nova code reviews, and runs
 database upgrades against them to ensure that we can still upgrade for
 users in the wild. It uses real user datasets, and also times
 migrations and warns when they are too slow for large deployments. It
 started voting on gerrit in the last week.

 Turbo hipster uses zuul to learn about reviews in gerrit that it
 should test. We run our own zuul instance, which talks to the
 openstack.org zuul instance. This then hands out work to our pool of
 testing workers. Another thing zuul does is it handles maintaining a
 git repository for the workers to clone from.

 This is where things went wrong today. For reasons I can't currently
 explain, the git repo on our zuul instance ended up in a bad state (it
 had a patch merged to master which wasn't in fact merged upstream
 yet). As this code is stock zuul from openstack-infra, I have a
 concern this might be a bug that other zuul users will see as well.

 I've corrected the problem for now, and kicked off a recheck of any
 patch with a -1 review score from turbo hipster in the last 24 hours.
 I'll talk to the zuul maintainers tomorrow about the git problem and
 see what we can learn.

 Thanks heaps for your patience.

 Michael


 How do I interpret the warning and -1 from turbo-hipster on my patch here
 [1] with the logs here [2]?

 I'm inclined to just do 'recheck migrations' on this since this patch
 doesn't have anything to do with this -1 as far as I can tell.

 [1] https://review.openstack.org/#/c/64725/4/
 [2]
 https://ssl.rcbops.com/turbo_hipster/logviewer/?q=/turbo_hipster/results/64/64725/4/check/gate-real-db-upgrade_nova_mysql_user_001/5186e53/user_001.log

 --

 Thanks,

 Matt Riedemann





-- 
Rackspace Australia



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 5:18 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from James Slagle's message of 2014-01-07 12:53:57 -0800:
 On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:
  What would be the benefit of using packages?

 We're building images on top of different distributions of Linux.
 Those distributions themselves offer packaged and supported OpenStack
 components.  So, one benefit is that you'd be using what's blessed
 by your distro if you chose to.  I think that's a fairly common way
 people are going to be used to installing components. The OpenStack
 Installation guide says to install from packages, fwiw.


 Indeed, this is how many people deploy traditional applications.

 However, what we're doing is intended to be a real, consumable deployment
 of OpenStack. Specifically one that is in the gate and scales out to
 any reasonable production load necessary.

 One problem with scaling out to many nodes is that the traditional
 application deployment patterns introduce too much entropy. This is really
 hard to patch out later. We are designing the software distribution system
 to avoid those problems from the beginning. Packages do the opposite,
 and encourage entropy by promising to try and update software with
 minimal invasion, thus enabling users to introduce one-off machines.

This sounds more like an argument for a systems management approach
vs. installation.  I realize there's a big relation there.  However, I
don't think just using an image based system makes the entropy problem
go away.  At scale of 10,000 nodes (or more), you could easily have a
proliferation of images both that you've built and that you've
deployed.  You're not going to update everything at once.  Nor do I
think you would only ever have two images running, for your latest
version N and N-1.  You're going to have several different deployed
images running to account for hardware differences, updates that have
not yet been applied, migrations, etc.  My point is, the entropy
problem does not go away.  Ergo, it's not introduced by packages.

Certainly it could be made worse by managing packages and their
updates by hand across 10,000 nodes.   But again, I don't think anyone
does that, they use a systems management tool that exists to
discourage drift and help with such entropy.

Likewise, you're not going to be calling nova rebuild 10,000 times
manually when you want to do image based updates.  You'd likely have
some tool (tuskar, something else, etc) that is going to manage that
for you, and help keep any drift in what images you actually have
deployed in check.


  We've specifically avoided packages because they complect[1] configuration
  and system state management with software delivery. The recent friction
  we've seen with MySQL is an example where the packages are not actually
  helping us, they're hurting us because they encode too much configuration
  instead of just delivering binaries.

 We're trying to do something fairly specific with the read only /
 partition.  You're right, most packages aren't going to handle that
 well.  So, yes you have a point from that perspective.


 Readonly / is a really important feature of the deployment we're aiming
 at. Doing it with packages is quite possible. My point in asking why
 bother with packages is that when you have an entire image that has been
 verified and is known to work, what advantage does having a package for
 everything actually bring.

Because distro packages are known to work, and thus you get higher
confidence from any image constructed from said packages.  At least, I
would, as opposed to installing from source (or from git as you say
below :).  It's the same reason I want to use a packaged kernel
instead of compiling it from source.  The benefit of the package is
not just in the compiling.  It's in the known good version and
compatibility with other known good versions I want to use.

Am I going to implicitly trust any packages blindly or completely?  Of
course not. But, there is some confidence there in that the distro has
done some testing and said these versions are compatible, etc.


 However, there are many examples of when packages help you.
 Dependency resolution, version compatibility, methods of verification,
 knowing what's installed, etc.  I don't think that's really an
 argument or discussion worth having, because you either want to use
 packages or you want to build it all from source.  There are
 advantages/disadvantages to both methods, and what I'm proposing is
 that we support both methods, and not require everyone to only be able
 to install from source.


 Install from source is probably not the right way to put this. We're
 installing the virtualenvs from tarballs downloaded from pypi. We're
 also installing 99.9% python, so we're not really going from source,
 we're just going from git.

Yes, from source basically means from git.  But, I fail to see the
distinction you're making in this context. Yes, 

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
Excerpts from Chris Jones's message of 2014-01-07 14:43:31 -0800:
 Hi
 
  On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:
  Packages do the opposite,
  and encourage entropy by promising to try and update software 
 
 Building with packages doesn't require updating running systems with packages 
 any more than building with git requires updating running systems with git 
 pull.
 One can simply build (and test!) a new image with updated packages and 
 rebuild/takeover nodes.
 

Indeed, however one can _more_ simply build an image without package
tooling...  and they will be more similar across multiple platforms.

My question still stands, what are the real advantages? So far the only
one that matters to me is "makes it easier for people to think about
using it".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2014-01-07 13:11:13 -0800:
 I was going to stay silent on this one, but since you asked...
 
 /me puts his customer hat on
 
 We source OpenStack from RDO for the packages and additional integration 
 testing that comes from the project instead of using OpenStack directly. I 
 was a little turned off from Triple-O when I saw it was source only. The 
 feeling being that it is too green for our tastes. Which may be inaccurate. 
 While I might be convinced to use source, it's a much harder sell to us 
 currently than using packages.
 

Kevin, thanks for sharing. I think I understand that it is just a new
way of thinking and that makes it that much harder to consume.

We have good reasons for not using packages. And we're not just making
this up as a new crazy idea, we're copying what companies like Netflix
have published about running at scale. We need to do a better job of
explaining why we're doing some of the things we're doing.



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 6:04 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Chris Jones's message of 2014-01-07 14:43:31 -0800:
 Hi

  On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:
  Packages do the opposite,
  and encourage entropy by promising to try and update software

 Building with packages doesn't require updating running systems with 
 packages any more than building with git requires updating running systems 
 with git pull.
 One can simply build (and test!) a new image with updated packages and 
 rebuild/takeover nodes.


 Indeed, however one can _more_ simply build an image without package
 tooling...  and they will be more similar across multiple platforms.

 My question still stands, what are the real advantages? So far the only
 one that matters to me is "makes it easier for people to think about
 using it".

I'm reminded of when I first started looking at TripleO there were a
few issues with installing from git (I'll say that from now on :)
related to the python distribute -> setuptools migration.  Things
like if your base cloud image had the wrong version of pip you
couldn't migrate to setuptools cleanly.  Then you had to run the
setuptools update twice, once to get the distribute legacy wrapper and
then again to latest setuptools.  If I recall there were other
problems with virtualenv incompatibilities as well.

Arguably, installing from packages would have made that easier and less complex.

Sure, the crux of the problem was likely that versions in the distro
were too old and they needed to be updated.  But unless we take on
building the whole OS from source/git/whatever every time, we're
always going to have that issue.  So, an additional benefit of
packages is that you can install a known good version of an OpenStack
component that is known to work with the versions of dependent
software you already have installed.

-- 
-- James Slagle
--



Re: [openstack-dev] [elastic-recheck] Thoughts on next steps

2014-01-07 Thread Matt Riedemann



On 1/2/2014 8:29 PM, Sean Dague wrote:

A lot of elastic recheck this fall has been based on the ad hoc needs of
the moment, in between diving down into the race bugs that were
uncovered by it. This week away from it all helped provide a little
perspective on what I think we need to do to call it *done* (i.e.
something akin to a 1.0 even though we are CDing it).

Here is my current thinking on the next major things that should happen.
Opinions welcomed.

(These are roughly in implementation order based on urgency)

= Split of web UI =

The elastic recheck page is becoming a mishmash of what was needed at the
time. I think what we really have emerging is:
  * Overall Gate Health
  * Known (to ER) Bugs
  * Unknown (to ER) Bugs - more below

I think the landing page should be Known Bugs, as that's where we want
both bug hunters to go to prioritize things, as well as where people
looking for known bugs should start.

I think the overall Gate Health graphs should move to the zuul status
page. Possibly as part of the collection of graphs at the bottom.

We should have a secondary page (maybe column?) of the un-fingerprinted
recheck bugs, largely to use as candidates for fingerprinting. This will
let us eventually take over /recheck.

= Data Analysis / Graphs =

I spent a bunch of time playing with pandas over break
(http://dague.net/2013/12/30/ipython-notebook-experiments/), it's kind
of awesome. It also made me rethink our approach to handling the data.

I think the rolling average approach we were taking is more precise than
accurate. As these are statistical events they really need error bars.
Because when we have a quiet night, and 1 job fails at 6am in the
morning, the 100% failure rate it reflects in grenade needs to be
quantified that it was 1 of 1, not 50 of 50.

So my feeling is we should move away from the point graphs we have, and
present these as weekly and daily failure rates (with graphs and error
bars). And slice those per job. My suggestion is that we do the actual
visualization with matplotlib because it's super easy to output that
from pandas data sets.
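The approach described above can be prototyped in a few lines of pandas. This is a hedged illustration, not elastic-recheck code: the run data below is synthetic, and the Wilson score interval is one reasonable choice of error bar, not necessarily the one that would end up in the tooling.

```python
# Sketch: daily failure rates with binomial error bars, as proposed above.
# The job run data is synthetic; in practice it would be mined from
# Elasticsearch into a pandas structure.
import numpy as np
import pandas as pd

runs = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2014-01-06 06:00", "2014-01-06 09:00",
        "2014-01-07 06:00", "2014-01-07 07:00", "2014-01-07 08:00",
        "2014-01-08 06:00",
    ]),
    "failed": [1, 0, 1, 0, 0, 1],
})

daily = runs.set_index("timestamp").resample("D")["failed"].agg(["sum", "count"])
daily["rate"] = daily["sum"] / daily["count"]

# Wilson score interval: unlike a bare proportion, it keeps a 1-of-1
# failure honest -- the interval for that day spans roughly 0.21 to 1.0
# instead of claiming a flat 100% failure rate.
z = 1.96  # ~95% confidence
p, n = daily["rate"], daily["count"]
half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
daily["lo"] = (p + z**2 / (2 * n) - half) / (1 + z**2 / n)
daily["hi"] = (p + z**2 / (2 * n) + half) / (1 + z**2 / n)

# daily[["count", "rate", "lo", "hi"]] is exactly what matplotlib's
# errorbar() would plot, sliced per job and per day/week.
print(daily[["count", "rate", "lo", "hi"]])
```

The `count` column makes the 1-of-1 vs 50-of-50 distinction visible, and the interval width carries it through to the graphs.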

Basically we'll be mining Elastic Search -> Pandas TimeSeries ->
transforms and analysis -> output tables and graphs. This is different
enough from our current jquery graphing that I want to get ACKs before
doing a bunch of work here and finding out people don't like it in reviews.

Also in this process upgrade the metadata that we provide for each of
those bugs so it's a little more clear what you are looking at.

= Take over of /recheck =

There is still a bunch of useful data coming in on recheck bug 
data which hasn't been curated into ER queries. I think the right thing
to do is treat these as a work queue of bugs we should be building
patterns out of (or completely invalidating). I've got a preliminary
gerrit bulk query piece of code that does this, which would remove the
need of the daemon the way that's currently happening. The gerrit
queries are a little long right now, but I think if we are only doing
this on hourly cron, the additional load will be negligible.

This would get us into a single view, which I think would be more
informative than the one we currently have.

= Categorize all the jobs =

We need a bit of refactoring to let us comment on all the jobs (not just
tempest ones). Basically we assumed pep8 and docs don't fail in the gate
at the beginning. Turns out they do, and are good indicators of infra /
external factor bugs. They are a part of the story so we should put them
in.

= Multi Line Fingerprints =

We've definitely found bugs where we never had a really satisfying
single line match, but we had some great matches if we could do multi line.

We could do that in ER, however it will mean giving up logstash as our
UI, because those queries can't be done in logstash. So in order to do
this we'll really need to implement some tools - cli minimum, which will
let us easily test a bug. A custom web UI might be in order as well,
though that's going to be its own chunk of work, that we'll need more
volunteers for.

This would put us in a place where we should have all the infrastructure
to track 90% of the race conditions, and talk about them in certainty as
1%, 5%, 0.1% bugs.

 -Sean



Let's add regexp query support to elastic-recheck so that I could have 
fixed this better:


https://review.openstack.org/#/c/65303/

Then I could have just filtered the build_name with this:

build_name:/(check|gate)-(tempest|grenade)-[a-z\-]+/
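For anyone wanting to sanity-check that fingerprint locally, the pattern can be exercised with Python's re module before it goes into a query. The job names below are hypothetical, and since Elasticsearch regexp queries match the whole term, fullmatch() is the closest local equivalent.

```python
# Local prototype of the Elasticsearch regexp filter above.
import re

pattern = re.compile(r"(check|gate)-(tempest|grenade)-[a-z\-]+")

build_names = [
    "gate-tempest-dsvm-full",   # hypothetical job name
    "check-grenade-dsvm",       # hypothetical job name
    "gate-nova-python27",       # not a tempest/grenade job, should not match
]
matched = [name for name in build_names if pattern.fullmatch(name)]
print(matched)  # only the tempest/grenade check and gate jobs
```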

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Fox, Kevin M
One of the major features using a distro over upstream gives is integration.
rhel6 behaves differently than ubuntu 13.10. Sometimes it takes a while to fix 
upstream for a given distro, and even then it may not even be accepted because 
the distro's "too old, go away". Packages allow a distro to make sure all the 
pieces can work together properly. And quickly patch things when needed. It's 
kind of a fork but not quite the same thing. The distro integrates not just 
OpenStack, but all its dependencies and their dependencies all the way up. For 
example there can be subtle issues if neutron, open vswitch and the kernel are 
out of sync. It is the integration that folks like about distros. I can trust 
that since it came from X, all the pieces should work together, or I know who 
to call.

The only way I can think of to get the same stability out of just source is for 
Triple-O to provide a whole source distro and test it itself. More work than 
it probably wants to do. Or pick just one distro and support source only on 
that. Though if you pick the wrong distro, then you get into trust issues and 
religious wars. :/

Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, January 07, 2014 3:04 PM
To: openstack-dev
Subject: Re: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

Excerpts from Chris Jones's message of 2014-01-07 14:43:31 -0800:
 Hi

  On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:
  Packages do the opposite,
  and encourage entropy by promising to try and update software

 Building with packages doesn't require updating running systems with packages 
 any more than building with git requires updating running systems with git 
 pull.
 One can simply build (and test!) a new image with updated packages and 
 rebuild/takeover nodes.


Indeed, however one can _more_ simply build an image without package
tooling...  and they will be more similar across multiple platforms.

My question still stands, what are the real advantages? So far the only
one that matters to me is "makes it easier for people to think about
using it".



Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2014-01-07 Thread Anita Kuno
On 01/08/2014 03:10 AM, Nachi Ueno wrote:
 Hi Anita
 
 Let's me join this session also.
 
 Nachi Ueno
 NTT i3
Wonderful Nachi. We have spoken on irc to ensure you have your questions
answered.

It will be great to have another neutron-core at the code sprint.

Thank you,
Anita.
 
 2014/1/5 Anita Kuno ante...@anteaya.info:
 On 01/05/2014 03:42 AM, Sukhdev Kapur wrote:
 Folks,

 I finally got over my fear of weather and booked my flight and hotel for
 this sprint.

 I am relatively new to OpenStack community with a strong desire to learn
 and contribute.
 Having a strong desire to participate effectively is great.

 The difficulty for us is that we have already indicated that we need
 experienced participants at the code sprint. [0]

 Having long periods of silence and then simply announcing you have
 booked your flights makes things difficult since we have been in
 conversation with others about this for some time. I'm not saying don't
 come, I am saying that this now puts myself and Mark in a position of
 having to explain ourselves to others regarding consistency.

 Mark and I will address this, though I will need to discuss this with
 Mark to hear his thoughts and I am unable right now since I am at a
 conference all week. [1]

 Going forward, having regular conversations about items of this nature
 (irc is a great tool for this) is something I would like to see happen
 more often.

 You may have seen that Arista Testing has come alive and is voting on the
 newly submitted neutron patches. I have been busy putting together the
 framework, and learning the Jenkins/Gerrit interface.
 Yes. Can you respond on the "Remove voting until your testing structure
 works" thread please? This enables people who wish to respond to you on
 this point a place to conduct the conversation. It also preserves a
 history of the topic so that those searching the archives have all the
 relevant information in one place.

 Now, I have shifted
 my focus on Neutron/networking tempest tests. I notice some of these tests
 are failing on my setup. I have started to dig into these with the intent
 to understand them.
 That is a great place to begin. Thank you for taking interest and
 pursuing test bugs.


 In terms of this upcoming sprint, if you folks can give some pointers that
 will help me get better prepared and productive, that will be appreciated.
 We need folks attending the sprint who are familiar with offering
 patches to tempest. Seeing your name in this list would be a great
 indicator that you are at least able to offer a patch. [3]

 If you are able to focus this week on getting up to speed on Tempest and
 the Neutron Tempest process, then your attendance at the conference may
 possibly be effective both for yourself and for the rest of the
 participants.

 This wiki page is probably a good place to begin. [4]

 The etherpad tracking the Neutron Tempest team's progress is here. [5]

 Familiarizing yourself with the status of the conversation during the
 meetings will help as well, though it isn't as important in terms of
 being useful at the sprint as offering a tempest patch. Neutron meeting
 logs can be found here. [6]

 Also being available in channel will go a long way to fostering the kind
 of interactions which are constructive now and in the future. I don't
 see you in channel much, it would be great to see you more.

 Looking forward to meeting and working with you.
 And I you. Let's consider this an opportunity for greater participation
 with Neutron and having more conversations in irc is a great way to begin.

 Though I am not available in -neutron this week others are, so please
 announce when you are ready to work and hopefully someone will be
 keeping an eye out for you and offer you a hand.

 regards..
 -Sukhdev

 Thanks Sukhdev,
 Anita.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022918.html
 [1]
 http://eavesdrop.openstack.org/meetings/networking/2013/networking.2013-12-16-21.02.log.html
 timestamp 21:57:46
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/023228.html
 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z
 [4] https://wiki.openstack.org/wiki/Neutron/TempestAPITests
 [5] https://etherpad.openstack.org/p/icehouse-summit-qa-neutron
 [6] http://eavesdrop.openstack.org/meetings/networking/2013/





 On Fri, Dec 27, 2013 at 9:00 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/18/2013 04:17 PM, Anita Kuno wrote:
 Okay time for a recap.

 What: Neutron Tempest code sprint
 Where: Montreal, QC, Canada
 When: January 15, 16, 17 2014
 Location: I am about to sign the contract for Salle du Parc at 3625 Parc
 avenue, a room in a residence of McGill University.
 Time: 9am - 5pm

 I am expecting to see the following people in Montreal in January:
 Mark McClain
 Salvatore Orlando
 Sean Dague
 Matt Treinish
 Jay Pipes
 Sukhdev Kapur
 Miguel Lavelle
 Oleg Bondarev
 Rossella Sblendido
 Emilien Macchi
 Sylvain 

Re: [openstack-dev] [elastic-recheck] Thoughts on next steps

2014-01-07 Thread Matt Riedemann



On 1/7/2014 5:26 PM, Sean Dague wrote:

On 01/07/2014 06:20 PM, Matt Riedemann wrote:



On 1/2/2014 8:29 PM, Sean Dague wrote:

A lot of elastic recheck this fall has been based on the ad hoc needs of
the moment, in between diving down into the race bugs that were
uncovered by it. This week away from it all helped provide a little
perspective on what I think we need to do to call it *done* (i.e.
something akin to a 1.0 even though we are CDing it).

Here is my current thinking on the next major things that should happen.
Opinions welcomed.

(These are roughly in implementation order based on urgency)

= Split of web UI =

The elastic recheck page is becoming a mishmash of what was needed at the
time. I think what we really have emerging is:
  * Overall Gate Health
  * Known (to ER) Bugs
  * Unknown (to ER) Bugs - more below

I think the landing page should be Known Bugs, as that's where we want
both bug hunters to go to prioritize things, as well as where people
looking for known bugs should start.

I think the overall Gate Health graphs should move to the zuul status
page. Possibly as part of the collection of graphs at the bottom.

We should have a secondary page (maybe column?) of the un-fingerprinted
recheck bugs, largely to use as candidates for fingerprinting. This will
let us eventually take over /recheck.

= Data Analysis / Graphs =

I spent a bunch of time playing with pandas over break
(http://dague.net/2013/12/30/ipython-notebook-experiments/), it's kind
of awesome. It also made me rethink our approach to handling the data.

I think the rolling average approach we were taking is more precise than
accurate. As these are statistical events they really need error bars.
Because when we have a quiet night, and 1 job fails at 6am in the
morning, the 100% failure rate it reflects in grenade needs to be
quantified that it was 1 of 1, not 50 of 50.

So my feeling is we should move away from the point graphs we have, and
present these as weekly and daily failure rates (with graphs and error
bars). And slice those per job. My suggestion is that we do the actual
visualization with matplotlib because it's super easy to output that
from pandas data sets.

Basically we'll be mining Elastic Search -> Pandas TimeSeries ->
transforms and analysis -> output tables and graphs. This is different
enough from our current jquery graphing that I want to get ACKs before
doing a bunch of work here and finding out people don't like it in
reviews.

Also in this process upgrade the metadata that we provide for each of
those bugs so it's a little more clear what you are looking at.

= Take over of /recheck =

There is still a bunch of useful data coming in on recheck bug 
data which hasn't been curated into ER queries. I think the right thing
to do is treat these as a work queue of bugs we should be building
patterns out of (or completely invalidating). I've got a preliminary
gerrit bulk query piece of code that does this, which would remove the
need of the daemon the way that's currently happening. The gerrit
queries are a little long right now, but I think if we are only doing
this on hourly cron, the additional load will be negligible.

This would get us into a single view, which I think would be more
informative than the one we currently have.

= Categorize all the jobs =

We need a bit of refactoring to let us comment on all the jobs (not just
tempest ones). Basically we assumed pep8 and docs don't fail in the gate
at the beginning. Turns out they do, and are good indicators of infra /
external factor bugs. They are a part of the story so we should put them
in.

= Multi Line Fingerprints =

We've definitely found bugs where we never had a really satisfying
single line match, but we had some great matches if we could do multi
line.

We could do that in ER, however it will mean giving up logstash as our
UI, because those queries can't be done in logstash. So in order to do
this we'll really need to implement some tools - cli minimum, which will
let us easily test a bug. A custom web UI might be in order as well,
though that's going to be its own chunk of work, that we'll need more
volunteers for.

This would put us in a place where we should have all the infrastructure
to track 90% of the race conditions, and talk about them in certainty as
1%, 5%, 0.1% bugs.

 -Sean



Let's add regexp query support to elastic-recheck so that I could have
fixed this better:

https://review.openstack.org/#/c/65303/

Then I could have just filtered the build_name with this:

build_name:/(check|gate)-(tempest|grenade)-[a-z\-]+/


If you want to extend the query files with:

regex:
- build_name: /(check|gate)-(tempest|grenade)-[a-z\-]+/
- some_other_field: /some other regex/

And make it work with the query builder, I think we should consider it.
It would be good to know how much more expensive those queries get
though, because our ES is under decent load as it is.
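A rough sketch of what "make it work with the query builder" could look like follows. The surrounding query shape and field names are assumptions for illustration, not the actual elastic-recheck query builder; this uses the ES 1.x-era filtered/regexp syntax that logstash-backed clusters of the time supported.

```python
# Sketch: translate the proposed "regex:" section of a query file into an
# Elasticsearch bool query with regexp filters (hypothetical builder code).
query_config = {
    "query": 'message:"some fingerprint text"',
    "regex": [
        {"build_name": "(check|gate)-(tempest|grenade)-[a-z\\-]+"},
        {"some_other_field": "some other regex"},
    ],
}

def build_es_query(config):
    """Combine the free-text query with regexp filters on specific fields."""
    filters = [
        {"regexp": {field: regex}}
        for clause in config.get("regex", [])
        for field, regex in clause.items()
    ]
    return {
        "query": {
            "filtered": {
                "query": {"query_string": {"query": config["query"]}},
                "filter": {"and": filters},
            }
        }
    }

es_query = build_es_query(query_config)
print(es_query["query"]["filtered"]["filter"]["and"])
```

Regexp filters are not cached the way term filters are, which is one concrete way the "how much more expensive" question above could be measured.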

 -Sean





Yeah, alternatively we could turn on 

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-07 15:03:33 -0800:
 On Tue, Jan 7, 2014 at 5:18 PM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from James Slagle's message of 2014-01-07 12:53:57 -0800:
  On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:
   What would be the benefit of using packages?
 
  We're building images on top of different distributions of Linux.
  Those distributions themselves offer packaged and supported OpenStack
  components.  So, one benefit is that you'd be using what's blessed
  by your distro if you chose to.  I think that's a fairly common way
  people are going to be used to installing components. The OpenStack
  Installation guide says to install from packages, fwiw.
 
 
  Indeed, this is how many people deploy traditional applications.
 
  However, what we're doing is intended to be a real, consumable deployment
  of OpenStack. Specifically one that is in the gate and scales out to
  any reasonable production load necessary.
 
  One problem with scaling out to many nodes is that the traditional
  application deployment patterns introduce too much entropy. This is really
  hard to patch out later. We are designing the software distribution system
  to avoid those problems from the beginning. Packages do the opposite,
  and encourage entropy by promising to try and update software with
  minimal invasion, thus enabling users to introduce one-off machines.
 
 This sounds more like an argument for a systems management approach
 vs. installation.  I realize there's a big relation there.  However, I
 don't think just using an image based system makes the entropy problem
 go away.  At scale of 10,000 nodes (or more), you could easily have a
 proliferation of images both that you've built and that you've
 deployed.  You're not going to update everything at once.  Nor, do I
 think you would only ever have 2 images running, for your latest
 version N, and N-1..  You're going to have several different deployed
 images running to account for hardware differences, updates that have
 not yet been applied, migrations, etc.  My point is, the entropy
 problem does not go away.  Ergo, it's not introduced by packages.
 
 Certainly it could be made worse by managing packages and their
 updates by hand across 10,000 nodes.   But again, I don't think anyone
 does that, they use a systems management tool that exists to
 discourage drift and help with such entropy.
 
 Likewise, you're not going to be calling nova rebuild 10,000 times
 manually when you want to do image based updates.  You'd likely have
 some tool (tuskar, something else, etc) that is going to manage that
 for you, and help keep any drift in what images you actually have
 deployed in check.
 

Image proliferation is far easier to measure than package proliferation.
But, that is not really the point.

The point is that we have a tool for software distribution that fits into
our system management approach, and the distro packages do not take that
system management approach into account.

So if you can't use the distro packages as-is, I'm questioning what the
actual benefit of using them at all is.

 
   We've specifically avoided packages because they complect[1] 
   configuration
   and system state management with software delivery. The recent friction
   we've seen with MySQL is an example where the packages are not actually
   helping us, they're hurting us because they encode too much configuration
   instead of just delivering binaries.
 
  We're trying to do something fairly specific with the read only /
  partition.  You're right, most packages aren't going to handle that
  well.  So, yes you have a point from that perspective.
 
 
  Readonly / is a really important feature of the deployment we're aiming
  at. Doing it with packages is quite possible. My point in asking why
  bother with packages is that when you have an entire image that has been
  verified and is known to work, what advantage does having a package for
  everything actually bring.
 
 Because distro packages are known to work, and thus you get higher
 confidence from any image constructed from said packages.  At least, I
 would, as opposed to installing from source (or from git as you say
 below :).  It's the same reason I want to use a packaged kernel
 instead of compiling it from source.  The benefit of the package is
 not just in the compiling.  It's in the known good version and
 compatibility with other known good versions I want to use.
 

I would disagree with known to work. They are known to have been
tested at some level. But IMO known to work requires testing _with
your workload_.

Since you have to test your workload, why bother with the distro packages
when you can get the upstream software and testing suite directly.

 Am I going to implicitly trust any packages blindly or completely?  Of
 course not. But, there is some confidence there in that the distro has
 done some testing and said these versions are compatible, etc.

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 6:12 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Fox, Kevin M's message of 2014-01-07 13:11:13 -0800:
 I was going to stay silent on this one, but since you asked...

 /me puts his customer hat on

 We source OpenStack from RDO for the packages and additional integration 
 testing that comes from the project instead of using OpenStack directly. I 
 was a little turned off from Triple-O when I saw it was source only. The 
 feeling being that it is too green for our tastes. Which may be 
  inaccurate. While I might be convinced to use source, it's a much harder 
  sell to us currently than using packages.


 Kevin, thanks for sharing. I think I understand that it is just a new
 way of thinking and that makes it that much harder to consume.

 We have good reasons for not using packages. And we're not just making
 this up as a new crazy idea, we're copying what companies like Netflix
 have published about running at scale. We need to do a better job of
 explaining why we're doing some of the things we're doing.

Do you have a link for the publication handy? I know they use a
blessed AMI approach.  But I'm curious about the not using packages
part, and the advantages they get from that.  All I could find from
googling is people trying to install netflix from packages to watch
movies :).

Their AMI build tool seems to indicate they package their apps:
https://github.com/Netflix/aminator

As does this presentation:
http://www.slideshare.net/garethbowles/building-netflixstreamingwithjenkins-juc




-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Dan Prince


- Original Message -
 From: James Slagle james.sla...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Tuesday, January 7, 2014 3:01:07 PM
 Subject: [openstack-dev] [TripleO] Installing from packages in
 tripleo-image-elements
 
 Hi,
 
 I'd like to discuss some possible ways we could install the OpenStack
 components from packages in tripleo-image-elements.  As most folks are
 probably aware, there is a fork of tripleo-image-elements called
 tripleo-puppet-elements which does install using packages, but it does
 so using Puppet to do the installation and for managing the
 configuration of the installed components.  I'd like to kind of set
 that aside for a moment and just discuss how we might support
 installing from packages using tripleo-image-elements directly and not
 using Puppet.

I very much support having the option to use real packages within the 
tripleo-image-elements. Given we already use packages for some elements like 
Rabbit/MySQL/etc... supporting the option to use standard distribution packages 
for the OpenStack elements makes a lot of sense to me. 

 
 One idea would be to add support for a new type (or likely 2 new
 types: rpm and dpkg) to the source-repositories element.
 source-repositories already knows about the git, tar, and file types,
 so it seems somewhat natural to have additional types for rpm and
 dpkg.
 
 A complication with that approach is that the existing elements assume
 they're setting up everything from source.  So, if we take a look at
 the nova element, and specifically install.d/74-nova, that script does
 stuff like install a nova service, adds a nova user, creates needed
 directories, etc.  All of that wouldn't need to be done if we were
 installing from rpm or dpkg, b/c presumably the package would take
 care of all that.
 
 We could fix that by making the install.d scripts only run if you're
 installing a component from source.  In that sense, it might make
 sense to add a new hook, source-install.d and only run those scripts
 if the type is a source type in the source-repositories configuration.
  We could then have a package-install.d to handle the installation
 from the packages type.   The install.d hook could still exist to do
 things that might be common to the 2 methods.
 
 Thoughts on that approach or other ideas?
 
 I'm currently working on a patchset I can submit to help prove it out.
  But, I'd like to start discussion on the approach now to see if there
 are other ideas or major opposition to that approach.
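The source-install.d / package-install.d split proposed above amounts to dispatching hook directories on the source-repositories type. As a minimal sketch (hook names from the mail; the source-repositories line format and the entries shown are illustrative assumptions, not real element configs):

```python
# Sketch of the proposed hook dispatch: run source-install.d only for
# source types (git/tar/file), package-install.d for the new rpm/dpkg
# types, and keep install.d for steps common to both methods.
SOURCE_TYPES = {"git", "tar", "file"}
PACKAGE_TYPES = {"rpm", "dpkg"}

def hooks_for(repo_type):
    """Return the install hook directories that apply to a repo type."""
    if repo_type in SOURCE_TYPES:
        return ["source-install.d", "install.d"]
    if repo_type in PACKAGE_TYPES:
        return ["package-install.d", "install.d"]
    raise ValueError("unknown source-repositories type: %s" % repo_type)

# Hypothetical source-repositories entries: "<name> <type> <location...>"
entries = [
    "nova git /opt/stack/nova https://git.openstack.org/openstack/nova",
    "nova rpm openstack-nova",
]
for line in entries:
    name, repo_type = line.split()[:2]
    print(name, repo_type, hooks_for(repo_type))
```

The nova element's 74-nova-style scripts (service setup, user creation, directories) would then live under source-install.d, since the package already performs those steps.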

This seems like a reasonable first stab at it. Can't wait to see what you come 
up with.


 
 --
 -- James Slagle
 --
 
 



Re: [openstack-dev] [elastic-recheck] Thoughts on next steps

2014-01-07 Thread Sean Dague

On 01/07/2014 06:44 PM, Matt Riedemann wrote:



On 1/7/2014 5:26 PM, Sean Dague wrote:

On 01/07/2014 06:20 PM, Matt Riedemann wrote:



On 1/2/2014 8:29 PM, Sean Dague wrote:

A lot of elastic recheck this fall has been based on the ad hoc needs
of the moment, in between diving down into the race bugs that were
uncovered by it. This week away from it all helped provide a little
perspective on what I think we need to do to call it *done* (i.e.
something akin to a 1.0 even though we are CDing it).

Here is my current thinking on the next major things that should happen.
Opinions welcomed.

(These are roughly in implementation order based on urgency)

= Split of web UI =

The elastic recheck page is becoming a mishmash of what was needed at
the time. I think what we really have emerging is:
  * Overall Gate Health
  * Known (to ER) Bugs
  * Unknown (to ER) Bugs - more below

I think the landing page should be Known Bugs, as that's where we want
both bug hunters to go to prioritize things, as well as where people
looking for known bugs should start.

I think the overall Gate Health graphs should move to the zuul status
page. Possibly as part of the collection of graphs at the bottom.

We should have a secondary page (maybe column?) of the un-fingerprinted
We should have a secondary page (maybe column?) of the un-fingerprinted
recheck bugs, largely to use as candidates for fingerprinting. This will
let us eventually take over /recheck.

= Data Analysis / Graphs =

I spent a bunch of time playing with pandas over break
(http://dague.net/2013/12/30/ipython-notebook-experiments/), it's kind
of awesome. It also made me rethink our approach to handling the data.

I think the rolling average approach we were taking is more precise than
accurate. As these are statistical events they really need error bars.
Because when we have a quiet night, and 1 job fails at 6am in the
morning, the 100% failure rate it reflects in grenade needs to be
quantified that it was 1 of 1, not 50 of 50.

So my feeling is we should move away from the point graphs we have, and
present these as weekly and daily failure rates (with graphs and error
bars). And slice those per job. My suggestion is that we do the actual
visualization with matplotlib because it's super easy to output that
from pandas data sets.

Basically we'll be mining Elastic Search -> Pandas TimeSeries ->
transforms and analysis -> output tables and graphs. This is different
enough from our current jquery graphing that I want to get ACKs before
doing a bunch of work here and finding out people don't like it in
reviews.

Also in this process upgrade the metadata that we provide for each of
those bugs so it's a little more clear what you are looking at.
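To sketch the proposed pipeline: daily failure rates with error bars can be computed from per-run results in a few lines of pandas. The data below is made up, and a Wilson interval would behave better than this simple binomial standard error at 0% and 100% rates:

```python
import numpy as np
import pandas as pd

# Made-up per-run results as they might be mined from Elasticsearch:
# 1 means the job failed, 0 means it passed.
runs = pd.DataFrame(
    {"failed": [1, 0, 0, 1]},
    index=pd.to_datetime(["2014-01-06 06:00", "2014-01-06 18:00",
                          "2014-01-07 06:00", "2014-01-07 07:00"]))

# Daily failure counts and totals, then the rate per day.
daily = runs["failed"].resample("D").agg(["sum", "count"])
daily["rate"] = daily["sum"] / daily["count"]
# Binomial standard error: a 1-of-1 failure gets a huge error bar,
# a 50-of-50 sample a tiny one.
daily["err"] = np.sqrt(daily["rate"] * (1 - daily["rate"]) / daily["count"])
```

From there, plotting the rate column with its error column as error bars is a one-liner in matplotlib, per the suggestion above.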

= Take over of /recheck =

There is still a bunch of useful data coming in on recheck bug 
data which hasn't been curated into ER queries. I think the right thing
to do is treat these as a work queue of bugs we should be building
patterns out of (or completely invalidating). I've got a preliminary
gerrit bulk query piece of code that does this, which would remove the
need of the daemon the way that's currently happening. The gerrit
queries are a little long right now, but I think if we are only doing
this on hourly cron, the additional load will be negligible.

This would get us into a single view, which I think would be more
informative than the one we currently have.

= Categorize all the jobs =

We need a bit of refactoring to let us comment on all the jobs (not just
tempest ones). Basically we assumed pep8 and docs don't fail in the gate
at the beginning. Turns out they do, and are good indicators of infra /
external factor bugs. They are a part of the story so we should put them
in.

= Multi Line Fingerprints =

We've definitely found bugs where we never had a really satisfying
single line match, but we had some great matches if we could do multi
line.

We could do that in ER, however it will mean giving up logstash as our
UI, because those queries can't be done in logstash. So in order to do
this we'll really need to implement some tools - a CLI at minimum, which
will let us easily test a bug. A custom web UI might be in order as well,
though that's going to be its own chunk of work that we'll need more
volunteers for.
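As a toy illustration of the multi-line idea (the data shape here is an assumption, not a real ER tool): a build only matches a fingerprint if every pattern appears somewhere in that build's logs, which is exactly the kind of query logstash's UI can't express:

```python
from collections import defaultdict

def multiline_matches(hits, patterns):
    """Return build ids whose logs contain *every* pattern.

    hits: iterable of (build_uuid, log_line) pairs, e.g. as mined
    from Elasticsearch.
    """
    lines_by_build = defaultdict(list)
    for build, line in hits:
        lines_by_build[build].append(line)
    # A build matches only if each pattern is found in some line.
    return sorted(build for build, lines in lines_by_build.items()
                  if all(any(p in line for line in lines)
                         for p in patterns))

hits = [("b1", "Timeout while waiting on RPC response"),
        ("b1", "Connection reset by peer"),
        ("b2", "Timeout while waiting on RPC response")]
# Only build b1 has both lines of a two-line fingerprint.
```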

This would put us in a place where we should have all the infrastructure
to track 90% of the race conditions, and talk about them with certainty
as 1%, 5%, 0.1% bugs.

 -Sean



Let's add regexp query support to elastic-recheck so that I could have
fixed this better:

https://review.openstack.org/#/c/65303/

Then I could have just filtered the build_name with this:

build_name:/(check|gate)-(tempest|grenade)-[a-z\-]+/


If you want to extend the query files with:

regex:
- build_name: /(check|gate)-(tempest|grenade)-[a-z\-]+/
- some_other_field: /some other regex/

And make it work with the query builder, I think we should consider it.
It would be good to know how much more expensive those queries get
though, because our ES is under decent load as it is.
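A sketch of how such a regex section might translate into the raw Elasticsearch DSL; the exact layout elastic-recheck's query builder would emit is an assumption here:

```python
def build_query(query_string, regexps):
    """Combine a Lucene query_string with ES regexp clauses.

    regexps: mapping of field name -> regex pattern, as in the
    proposed query-file 'regex:' section above.
    """
    must = [{"query_string": {"query": query_string}}]
    must.extend({"regexp": {field: pattern}}
                for field, pattern in sorted(regexps.items()))
    return {"query": {"bool": {"must": must}}}

q = build_query(
    'message:"registerCloseCallback"',
    {"build_name": "(check|gate)-(tempest|grenade)-[a-z\\-]+"})
```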

 -Sean




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Dan Prince


- Original Message -
 From: James Slagle james.sla...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, January 7, 2014 3:53:57 PM
 Subject: Re: [openstack-dev] [TripleO] Installing from packages in
 tripleo-image-elements
 
 On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:
  What would be the benefit of using packages?
 
 We're building images on top of different distributions of Linux.
 Those distributions themselves offer packaged and supported OpenStack
 components.  So, one benefit is that you'd be using what's blessed
 by your distro if you choose to.  I think that's a fairly common way
 people are going to be used to installing components. The OpenStack
 Installation guide says to install from packages, fwiw.
 
  We've specifically avoided packages because they complect[1] configuration
  and system state management with software delivery. The recent friction
  we've seen with MySQL is an example where the packages are not actually
  helping us, they're hurting us because they encode too much configuration
  instead of just delivering binaries.
 
 We're trying to do something fairly specific with the read only /
 partition.  You're right, most packages aren't going to handle that
 well.  So, yes you have a point from that perspective.
 
 However, there are many examples of when packages help you.
 Dependency resolution, version compatibility, methods of verification,
 knowing what's installed, etc.  I don't think that's really an
 argument or discussion worth having, because you either want to use
 packages or you want to build it all from source.  There are
 advantages/disadvantages to both methods, and what I'm proposing is
 that we support both methods, and not require everyone to only be able
 to install from source.

I think James gives a nice summary here. The fact is some people do want to use 
packages with the OpenStack image elements and we should support them. And from 
the sounds of it the implementation details aren't going to cost that much. The 
benefit we get from this is that we as a community can focus in on making a 
single set of tripleo-image-elements rock solid for both packages and source 
installs.

 
  Perhaps those of us who have been involved a bit longer haven't done
  a good job of communicating our reasons. I for one believe in the idea
  that image based updates eliminate a lot of the entropy that comes along
  with a package based updating system. For that reason alone I tend to
  look at any packages that deliver configurable software as potentially
  dangerous (note that I think they're wonderful for libraries, utilities,
  and kernels. :)
 
 Using packages wouldn't prevent you from using the image based update
 mechanism.  Anecdotally, I think image based updates could be a bit
 heavy handed for something like picking up a quick security or bug fix
 or the like.  That would be a scenario where a package update could
 really be handy.  Especially if someone else (e.g., your distro) is
 maintaining the package for you.
 
 For this proposal though, I was only talking about installation of the
 components at image build time.
 
 --
 -- James Slagle
 --
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Chris Jones
Hi

 On 7 Jan 2014, at 23:04, Clint Byrum cl...@fewbar.com wrote:
 
 My question still stands, what are the real advantages? So far the only
 one that matters to me is makes it easier for people to think about
 using it.

If I were to put on my former sysadmin hat, I would always strongly prefer to 
use packages for things, so I have the weight of the distro vendor behind me 
(particularly if I only have a few hundred servers and want a 6 month upgrade 
cadence with easy access to security fixes between upgrades).

I'm not necessarily advocating for or against tripleo supporting packages as a 
source of openstack, but I do think it is likely that some/many users will have 
their reasons for wanting to leverage our tools without following all of our 
preferred processes.

What I am advocating though, is that if there is a need and someone gives us a 
patch that satisfies it without hurting our preferred processes, we look 
favourably on it. I would rather have users on the road to doing things our 
way, than be immediately turned away or forced into dramatically forking us.

Cheers,
--
Chris Jones
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Dan Prince


- Original Message -
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Sent: Tuesday, January 7, 2014 3:23:24 PM
 Subject: Re: [openstack-dev] [TripleO] Installing from packages in
 tripleo-image-elements
 
 What would be the benefit of using packages?
 
 We've specifically avoided packages because they complect[1] configuration
 and system state management with software delivery. The recent friction
 we've seen with MySQL is an example where the packages are not actually
 helping us, they're hurting us because they encode too much configuration
 instead of just delivering binaries.
 
 Perhaps those of us who have been involved a bit longer haven't done
 a good job of communicating our reasons. I for one believe in the idea
 that image based updates eliminate a lot of the entropy that comes along
 with a package based updating system. For that reason alone I tend to
 look at any packages that deliver configurable software as potentially
 dangerous (note that I think they're wonderful for libraries, utilities,
 and kernels. :)

To be clear James is talking initially about using packages to build images 
(not update them at runtime).

In any case for much the same reason I look at what we do today with the source 
based pip installs as an entropy bomb:

-multiple venvs in the same image
-each venv contains its own copies of libraries
-everything is compiled on the fly
-due to the above ^^ image builds often fail!!!

I'd expect that anyone using real packages for the OpenStack elements has much less 
entropy on many fronts:

-more control over what is installed
-faster image build times
-less duplication

The cost when using real packages is maintaining them. But once you have that 
(which we will)... you've got a lot more control over things.


 
 [1] http://www.infoq.com/presentations/Simple-Made-Easy
 
 Excerpts from James Slagle's message of 2014-01-07 12:01:07 -0800:
  Hi,
  
  I'd like to discuss some possible ways we could install the OpenStack
  components from packages in tripleo-image-elements.  As most folks are
  probably aware, there is a fork of tripleo-image-elements called
  tripleo-puppet-elements which does install using packages, but it does
  so using Puppet to do the installation and for managing the
  configuration of the installed components.  I'd like to kind of set
  that aside for a moment and just discuss how we might support
  installing from packages using tripleo-image-elements directly and not
  using Puppet.
  
  One idea would be to add support for a new type (or likely 2 new
  types: rpm and dpkg) to the source-repositories element.
  source-repositories already knows about the git, tar, and file types,
  so it seems somewhat natural to have additional types for rpm and
  dpkg.
  
  A complication with that approach is that the existing elements assume
  they're setting up everything from source.  So, if we take a look at
  the nova element, and specifically install.d/74-nova, that script does
  stuff like installing a nova service, adding a nova user, creating needed
  directories, etc.  All of that wouldn't need to be done if we were
  installing from rpm or dpkg, b/c presumably the package would take
  care of all that.
  
  We could fix that by making the install.d scripts only run if you're
  installing a component from source.  In that sense, it might make
  sense to add a new hook, source-install.d and only run those scripts
  if the type is a source type in the source-repositories configuration.
   We could then have a package-install.d to handle the installation
  from the packages type.   The install.d hook could still exist to do
  things that might be common to the 2 methods.
  
  Thoughts on that approach or other ideas?
  
  I'm currently working on a patchset I can submit to help prove it out.
   But, I'd like to start discussion on the approach now to see if there
  are other ideas or major opposition to that approach.
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Fox, Kevin M
Another piece to the conversation I think is update philosophy. If you are 
always going to require a new image and no customization after build ever, 
ever, the messiness that source installs usually cause in the file system image really 
doesn't matter. The package system allows you to easily update, add, and remove 
package bits at runtime cleanly. In our experimenting with OpenStack, it's 
becoming hard to determine which philosophy is better. Golden Images for some 
things make a lot of sense. For other random services, the maintenance of the 
Golden Image seems to be too much to bother with and just installing a few 
packages after image start is preferable. I think both approaches are valuable. 
This may not directly relate to what is best for Triple-O elements, but since 
we are talking philosophy anyway...

Again though, I think if you wish to make the argument that packages are 
undesirable, then ALL packages are probably undesirable for the same reasons. 
Right? Why not make elements for all dependencies, instead of using distro 
packages to get you 90% of the way there and then using source just for 
OpenStack bits. If you always want the newest, latest greatest Neutron, don't 
you want the newest VSwitch too? I'd argue though there is a point of 
diminishing returns with source that packages fill. Then the argument is where 
is the point. Some folks think the point is all the way over to just use 
packages for everything.

I still think using distro packages buys you a lot, even if you are just using 
them to create a static golden image.

Thanks,
Kevin


From: James Slagle [james.sla...@gmail.com]
Sent: Tuesday, January 07, 2014 3:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

On Tue, Jan 7, 2014 at 6:12 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Fox, Kevin M's message of 2014-01-07 13:11:13 -0800:
 I was going to stay silent on this one, but since you asked...

 /me puts his customer hat on

 We source OpenStack from RDO for the packages and additional integration 
 testing that comes from the project instead of using OpenStack directly. I 
 was a little turned off from Triple-O when I saw it was source only. The 
 feeling being that it is too green for our tastes. Which may be 
 inaccurate. While I might be convinced to use source, it's a much harder sell 
 to us currently than using packages.


 Kevin, thanks for sharing. I think I understand that it is just a new
 way of thinking and that makes it that much harder to consume.

 We have good reasons for not using packages. And we're not just making
 this up as a new crazy idea, we're copying what companies like Netflix
 have published about running at scale. We need to do a better job of
 explaining why we're doing some of the things we're doing.

Do you have a link for the publication handy? I know they use a
blessed AMI approach.  But I'm curious about the not using packages
part, and the advantages they get from that.  All I could find from
googling is people trying to install netflix from packages to watch
movies :).

Their AMI build tool seems to indicate they package their apps:
https://github.com/Netflix/aminator

As does this presentation:
http://www.slideshare.net/garethbowles/building-netflixstreamingwithjenkins-juc




--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa]API testing update

2014-01-07 Thread Sukhdev Kapur
Hi Miguel,

As I am using neutron API tempest tests, I notice that in the create_port
tests, the port context is set partially - i.e. only network Id is
available.
ML2 drivers expect more information in the port context in order to test
the API on the back-ends.

I noticed such an enhancement is not listed in the etherpad.
This is really not a new test, but, enhancement of the test coverage to
allow third party ML2 drivers to perform end-to-end API testing.

If you like, I will be happy to update the etherpad to include this
information.

regards..
-Sukhdev




On Mon, Jan 6, 2014 at 10:37 AM, Miguel Lavalle mig...@mlavalle.com wrote:

 As described in a previous message, the community is focusing efforts in
 developing a comprehensive set of API tests in Tempest for Neutron. We are
 keeping track of this effort in the API tests gap analysis section at
 https://etherpad.openstack.org/p/icehouse-summit-qa-neutron

 These are recent developments in this regard:

 1) The gap analysis is complete as of January 5th. The analysis takes into
 consideration what already exists in Tempest and what is in the Gerrit
 review process
 2) Soon there is going to be a generative (i.e. non-manual) tool to
 create negative tests in Tempest. As a consequence, all negative tests
 specifications were removed from the gap analysis described in the previous
 point

 If you are interested in helping in this effort, please go to the etherpad
 indicated above and select from the API tests gap analysis section the
 tests you want to contribute. Please put your name and email address next
 to the selected tests. Also, when your code merges, please come back to the
 etherpad and update it indicating that your test is done.

 If you are new to OpenStack, Neutron or Tempest, implementing tests is an
 excellent way to learn an API. We have put together the following guide to
 help you get started
 https://wiki.openstack.org/wiki/Neutron/TempestAPITests



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 6:45 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from James Slagle's message of 2014-01-07 15:03:33 -0800:
 On Tue, Jan 7, 2014 at 5:18 PM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from James Slagle's message of 2014-01-07 12:53:57 -0800:
  On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum cl...@fewbar.com wrote:

 Image proliferation is far easier to measure than package proliferation.
 But, that is not really the point.

If you're managing it right, you don't have package proliferation, any
more than you do image proliferation.  But, if that's no longer the
point, then we can move on :).

 The point is that we have a tool for software distribution that fits into
 our system management approach, and the distro packages do not take that
 system management approach into account.

 So if you can't use the distro packages as-is, I'm questioning what the
 actual benefit of using them at all is.

Well, I can't argue that if we install from packages, and then have to
hack up the installed files to make them work with our approach, that
is not ideal and not really sane. Personally, it would be great if the
distro packages had support for the image based approach.  And truly,
if OpenStack is adopting the image based installation and deployment
mechanism as the way, then the Installation guide isn't even going
to say install from packages anymore on top of your base OS.  It's
just going to be all TripleO.  And, I think the distros would have to
adapt to that.

I don't honestly know how much you could use the current distro
packages as-is.  But, I'm sure I'll find out soon enough.


 
   We've specifically avoided packages because they complect[1] 
   configuration
   and system state management with software delivery. The recent friction
   we've seen with MySQL is an example where the packages are not actually
   helping us, they're hurting us because they encode too much 
   configuration
   instead of just delivering binaries.
 
  We're trying to do something fairly specific with the read only /
  partition.  You're right, most packages aren't going to handle that
  well.  So, yes you have a point from that perspective.
 
 
  Readonly / is a really important feature of the deployment we're aiming
  at. Doing it with packages is quite possible. My point in asking why
  bother with packages is that when you have an entire image that has been
  verified and is known to work, what advantage does having a package for
  everything actually bring.

 Because distro packages are known to work, and thus you get higher
 confidence from any image constructed from said packages.  At least, I
 would, as opposed to installing from source (or from git as you say
 below :).  It's the same reason I want to use a packaged kernel
 instead of compiling it from source.  The benefit of the package is
 not just in the compiling.  It's in the known good version and
 compatibility with other known good versions I want to use.


 I would disagree with known to work. They are known to have been
 tested at some level. But IMO known to work requires testing _with
 your workload_.

So this is just semantics around known to work.  Definitely you have
to test in your own environment before deploying anything.  I more
meant known to be compatible, known to be supported, known to
work on certain hardware.  Or advertised to.  Things of that
nature. But, none of those imply you don't test.

Also, the inverse can be equally valuable.  These versions do *not*
work on this hardware, etc.

 Since you have to test your workload, why bother with the distro packages
 when you can get the upstream software and testing suite directly.

 Am I going to implicitly trust any packages blindly or completely?  Of
 course not. But, there is some confidence there in that the distro has
 done some testing and said these versions are compatible, etc.


 I think that confidence is misplaced and unnecessary.

Certainly if you set the expectation that that nothing is known to
work, then no one is going to have any confidence that it does.
OpenStack does releases of projects that are in some form of a known
good state.  I would think most people would reasonably expect that
downloading that release would likely work better vs. grabbing from
git 2 weeks into a new development cycle.

Meaning, I have some confidence that OpenStack as a community has done
some testing.  We have qa, gates, unit and functional tests, etc.  I
don't think that confidence is misplaced or unnecessary.

If a distro says a set of OpenStack packages work with a given version
of their OS, then that confidence is not misplaced either IMO.

 We provide test
 suites to users and we will encourage users to test their own things. I
 imagine some will also ship packaged products based on TripleO that will
 also be tested as a whole, not as individual packages.

This is a new and rather orthogonal point.  I'm not talking about
testing individual packages.  You're right, that 

[openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-07 Thread Ray Sun
Stackers,
I tried to create a new VM using the VMwareVCDriver, but I found it's
very slow; for example, a 7GB Windows image took 3 hours.

Then I tried to use curl to upload a iso to vcenter directly.

curl -H "Expect:" -v --insecure --upload-file windows2012_server_cn_x64.iso \

"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2"


The average speed is 0.8 MB/s.

Finally, I tried to use the vSphere web client to upload it; it's only 250 KB/s.

I am not sure if there are any special configurations for the web
interface of vCenter. Please help.

Best Regards
-- Ray
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-07 15:18:00 -0800:
 On Tue, Jan 7, 2014 at 6:04 PM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from Chris Jones's message of 2014-01-07 14:43:31 -0800:
  Hi
 
   On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:
   Packages do the opposite,
   and encourage entropy by promising to try and update software
 
  Building with packages doesn't require updating running systems with 
  packages any more than building with git requires updating running systems 
  with git pull.
  One can simply build (and test!) a new image with updated packages and 
  rebuild/takeover nodes.
 
 
  Indeed, however one can _more_ simply build an image without package
  tooling...  and they will be more similar across multiple platforms.
 
  My question still stands, what are the real advantages? So far the only
  one that matters to me is makes it easier for people to think about
  using it.
 
 I'm reminded of when I first started looking at TripleO there were a
 few issues with installing from git (I'll say that from now on :)
 related to all the python distribute - setuptools migration.  Things
  like if your base cloud image had the wrong version of pip you
 couldn't migrate to setuptools cleanly.  Then you had to run the
 setuptools update twice, once to get the distribute legacy wrapper and
 then again to latest setuptools.  If I recall there were other
 problems with virtualenv incompatibilities as well.
 
 Arguably, installing from packages would have made that easier and less 
 complex.

No argument, it would have been easier. But it would not have been less
complex. The complexity would have been obscured.

It really would have just deferred the problem to the distro. That may
be a good thing, as the distro knows how to solve its own problems. But
then Fedora solves it, and Ubuntu solves it, and Debian solves it...

Or, OpenStack solves it, once, and OpenStack's users roll on no matter
what they choose for distro.

 
 Sure, the crux of the problem was likely that versions in the distro
 were too old and they needed to be updated.  But unless we take on
 building the whole OS from source/git/whatever every time, we're
 always going to have that issue.  So, an additional benefit of
 packages is that you can install a known good version of an OpenStack
 component that is known to work with the versions of dependent
 software you already have installed.
 

I disagree on the all or nothing argument. We can have a pre-packaged OS
that implements stable interfaces like POSIX, python-2.7 and even git,
and we can have a fast moving application like OpenStack riding on top
of that in a self contained application container like a virtualenv or
even a Docker container.

The Linux distro model is _really_ good at providing an OS, tools and
libraries. I am not so convinced that the distro model is actually
any good for providing applications. I say that as one of the current
maintainers of MySQL, which mixes libraries and applications, in Debian.

The pain that the MySQL packaging team goes through just to have the thing
work similar to postgresql and apache is a little bit crazy-inducing,
so please excuse me when I shake up the bottle and spray crazy all over
the list. :)

Also, known good is incredibly subjective, as good implies a set
of expectations that are being met. But whatever that good means is
fairly poorly defined and probably not automated. :P

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2014-01-07 Thread Isaku Yamahata
Mathieu, Thank you for clarification.
I'll take a look at the patches.

On Tue, Jan 07, 2014 at 02:34:24PM +0100,
Salvatore Orlando sorla...@nicira.com wrote:

 Thanks Mathieu!
 
 I think we should first merge Edouard's patch, which appears to be a
 prerequisite.
 I think we could benefit a lot by applying this mechanism to
 process_network_ports.
 
 However, I am not sure if there could be drawbacks arising from the fact
 that the agent would assign the local VLAN port (either the lvm id or the
 DEAD_VLAN tag) and then at the end of the iteration the flow modifications,
 such as the drop all rule, will be applied.
 This will probably create a short interval of time in which we might have
  unexpected behaviours (such as VMs on the DEAD VLAN being able to
  communicate with each other, for instance).

Agree that more careful ordered update is necessary with deferred
application.

Thanks,
Isaku Yamahata


 I think we can generalize this discussion and use deferred application for
 ovs-vsctl as well.
 Would you agree with that?

 Thanks,
 Salvatore
 
 
 On 7 January 2014 14:08, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 
   I think that Isaku is talking about a more intensive usage of
   defer_apply_on/off as it is done in the patch of gongysh [1].
 
   Isaku, I don't see any reason why this could not be done in
   process_network_ports, if needed. Moreover the patch from edouard [2]
   resolves multithreading issues while processing defer_apply_off.
 
 
  [1]https://review.openstack.org/#/c/61341/
  [2]https://review.openstack.org/#/c/63917/
 
  On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando sorla...@nicira.com
  wrote:
    This thread is starting to get a bit confusing, at least for people
    with a single-pipeline brain like me!
  
    I am not entirely sure if I understand correctly Isaku's proposal
    concerning deferring the application of flow changes.
   I think it's worth discussing in a separate thread, and a supporting
  patch
   will help as well; I think that in order to avoid unexpected behaviours,
   vlan tagging on the port and flow setup should always be performed at the
   same time; if we get a much better performance using a mechanism similar
  to
   iptables' defer_apply, then we should it.
  
   Regarding rootwrap. This 6x slowdown, while proving that rootwrap
  imposes a
   non-negligible overhead, it should not be used as a sort of proof that
   rootwrap makes things 6 times worse! What I've been seeing on the gate
  and
   in my tests are ALRM_CLOCK errors raised by ovs commands, so rootwrap has
   little to do with it.
  
   Still, I think we can say that rootwrap adds about 50ms to each command,
   becoming particularly penalising especially for 'fast' commands.
   I think the best things to do, as Joe advices, a test with rootwrap
  disabled
   on the gate - and I will take care of that.
  
   On the other hand, I would invite community members picking up some of
  the
   bugs we've registered for 'less frequent' failures observed during
  parallel
   testing; especially if you're coming to Montreal next week.
  
   Salvatore
  
  
  
   On 6 January 2014 20:31, Jay Pipes jaypi...@gmail.com wrote:
  
   On Mon, 2014-01-06 at 11:17 -0800, Joe Gordon wrote:
   
   
   
On Mon, Jan 6, 2014 at 10:35 AM, Jay Pipes jaypi...@gmail.com
  wrote:
On Mon, 2014-01-06 at 09:56 -0800, Joe Gordon wrote:
   
 What about it? Also those numbers are pretty old at this
point. I was
 thinking disable rootwrap and run full parallel tempest
against it.
   
   
I think that is a little overkill for what we're trying to do
here. We
are specifically talking about combining many utils.execute()
calls into
a single one. I think it's pretty obvious that the latter will
be better
performing than the first, unless you think that rootwrap has
no
performance overhead at all?
   
   
     mocking out rootwrap with straight sudo is a very quick way to
     approximate the performance benefit of combining many utils.execute()
     calls together (at least rootwrap-wise).  Also it would tell us how
     much of the problem is rootwrap-induced and how much is other.
  
   Yes, I understand that, which is what the article I linked earlier
   showed?
  
    % time sudo ip link > /dev/null
    sudo ip link > /dev/null  0.00s user 0.00s system 43% cpu 0.009 total
    % sudo time quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null
    quantum-rootwrap /etc/quantum/rootwrap.conf ip link > /dev/null  0.04s
    user 0.02s system 87% cpu 0.059 total
  
    A very tiny, non-scientific indication that rootwrap is around six
    times slower than a simple sudo call.
  
   Best,
   -jay
  
  
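To make the batching argument above concrete, here is a back-of-the-envelope
model: the ~50ms per-invocation rootwrap overhead comes from the figures in
this thread, while the 10ms of real work per command is an assumption made
up for illustration.

```python
# Illustrative model only: compare N separate utils.execute() calls,
# each paying the rootwrap spawn cost, against one batched invocation
# (e.g. 'ip -batch' style) that pays the spawn cost once.

ROOTWRAP_OVERHEAD = 0.050  # seconds per rootwrap spawn (from the thread)
COMMAND_COST = 0.010       # assumed seconds of real work per command

def one_call_per_command(n):
    # n separate invocations, each with its own spawn overhead
    return n * (ROOTWRAP_OVERHEAD + COMMAND_COST)

def single_batched_call(n):
    # one spawn, then all n commands run inside that one process
    return ROOTWRAP_OVERHEAD + n * COMMAND_COST

for n in (10, 100):
    print('%3d commands: %5.2fs unbatched, %5.2fs batched'
          % (n, one_call_per_command(n), single_batched_call(n)))
```

Under these assumed numbers, 100 commands drop from roughly six seconds to
about one, which is the kind of win combining utils.execute() calls is after.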

[openstack-dev] [Solum] Devstack gate is failing

2014-01-07 Thread Noorul Islam Kamal Malmiyoda
Hi team,

After merging [1] devstack gate started failing. There is already a
thread [2] related to this in mailing list. Until this gets fixed
shall we make this job non-voting?

Regards,
Noorul

[1] https://review.openstack.org/64226
[2] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12440.html



Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-07 Thread Murali Allada
I'm ok with making this non-voting for Solum until this gets fixed.

-Murali


On Jan 7, 2014, at 8:53 PM, Noorul Islam Kamal Malmiyoda noo...@noorul.com 
wrote:

 Hi team,
 
 After merging [1] devstack gate started failing. There is already a
 thread [2] related to this in mailing list. Until this gets fixed
 shall we make this job non-voting?
 
 Regards,
 Noorul
 
 [1] https://review.openstack.org/64226
 [2] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12440.html
 




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Robert Collins
On 8 January 2014 12:26, Fox, Kevin M kevin@pnnl.gov wrote:
 One of the major features using a distro over upstream gives you is
 integration. RHEL 6 behaves differently than Ubuntu 13.10. Sometimes it
 takes a while to fix upstream for a given distro, and even then it may not
 even be accepted because the distro's too old, go away. Packages allow a
 distro to make sure all the pieces work together properly, and to quickly
 patch things when needed. It's kind of a fork, but not quite the same
 thing. The distro integrates not just OpenStack, but all its dependencies
 and their dependencies all the way up. For example, there can be subtle
 issues if neutron, Open vSwitch and the kernel are out of sync. It is the
 integration that folks like about distros. I can trust that since it came
 from X, all the pieces should work together, or I know who to call.

 The only way I can think of to get the same stability out of just source
 is for TripleO to provide a whole source distro and test it itself. More
 work than it probably wants to do. Or pick just one distro and support
 source only on that. Though if you pick the wrong distro, then you get
 into trust issues and religious wars. :/

It is precisely that integration tested confidence that inspired the
TripleO design: we don't know if a given combination of components
work unless we've tested it. So TripleO is about the lifecycle:

code -> image -> test -> deploy.

What is deployed is what was tested.

Where we source what was tested - packages or pypi or git - is
irrelevant to the test results.

I think we need to support everything that someone wants to offer up
patches to make work (and ongoing support for whatever approach that
is) because there are many use cases for users: if they get OpenStack
using TripleO from e.g. Mirantis, or RedHat, or upstream, they will
want to be using the install mechanism preferred by their service
provider - and for TripleO to be widely applicable, we need to permit
service providers to dial their own story.

We're at the top of a waterfall of
pull-and-build-and-test-and-distribute - we don't get to pick Chef vs
Puppet vs nothing, or packages vs git vs pypi, or RHEL vs Ubuntu or
Xen vs Kvm or Ceph vs GlusterFS.

We do need to start with one thing and then add more backends -
balancing generalising everything against the cost of adding plug
points after things are in use.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-07 Thread Georgy Okrokvertskhov
Should we rather revert the patch to make the gate work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 7:19 PM, Murali Allada
murali.all...@rackspace.com wrote:

 I'm ok with making this non-voting for Solum until this gets fixed.

 -Murali


 On Jan 7, 2014, at 8:53 PM, Noorul Islam Kamal Malmiyoda 
 noo...@noorul.com wrote:

  Hi team,
 
  After merging [1] devstack gate started failing. There is already a
  thread [2] related to this in mailing list. Until this gets fixed
  shall we make this job non-voting?
 
  Regards,
  Noorul
 
  [1] https://review.openstack.org/64226
  [2]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12440.html
 






-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-07 Thread Swapnil Kulkarni
Let me know in case I can be of any help getting this resolved.

Best Regards,
Swapnil


On Wed, Jan 8, 2014 at 12:38 AM, Eric Windisch ewindi...@docker.com wrote:

 On Tue, Jan 7, 2014 at 1:16 AM, Swapnil Kulkarni 
 swapnilkulkarni2...@gmail.com wrote:

 Thanks Eric.

 I had already tried the solution presented on ask.openstack.org.


 It was worth a shot.

  I also found a bug [1] and applied the code changes in [2], but without success.


 Ah. I hadn't seen that change before. I agree with Sean's comment, but we
 can fix up your change.

 I was just curious to know if anyone else is working on this or can
 provide some pointers from development front.


 I'm in the process of taking over active development and maintenance of
 this driver from Sam Alba.

 I'll try and reproduce this myself.

 Regards,
 Eric Windisch





Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Robert Collins
On 8 January 2014 09:23, Clint Byrum cl...@fewbar.com wrote:
 What would be the benefit of using packages?

- Defense in depth - tests at the package build time + tests of the image.
- Enumeration of installed software using the admin's expected tooling
(dpkg / rpm etc)
- Familiarity

 We've specifically avoided packages because they complect[1] configuration
 and system state management with software delivery. The recent friction
 we've seen with MySQL is an example where the packages are not actually
 helping us, they're hurting us because they encode too much configuration
 instead of just delivering binaries.

That's not why I'd say we're avoiding packages upstream :) I would say
that running [professional quality] repositories is non-trivial, not
directly related to our goals (because we have code -> image -> test
-> deploy) and thus not something we've had incentive to do. Add in
the numerous distros we support now and it becomes a significant
burden for little benefit to our direct goals, and little benefit to
our users. But if someone *has* packages, I see no harm (from the
code -> image -> test -> deploy cycle) in them being used, and it
certainly helps vendors who are invested in packages to share effort
between teams installing OpenStack in more traditional ways and those
getting onto the TripleO pipeline.

I'm not disputing the possible downsides intrinsic to packages,
particularly the not-at-scale temptation to create non-automated
snowflakes :)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread Robert Collins
On 8 January 2014 12:18, James Slagle james.sla...@gmail.com wrote:

 I'm reminded of when I first started looking at TripleO: there were a
 few issues with installing from git (I'll say that from now on :)
 related to the python distribute -> setuptools migration.  Things
 like: if your base cloud image had the wrong version of pip, you
 couldn't migrate to setuptools cleanly.  Then you had to run the
 setuptools update twice, once to get the distribute legacy wrapper and
 then again to get the latest setuptools.  If I recall, there were other
 problems with virtualenv incompatibilities as well.

 Arguably, installing from packages would have made that easier and less 
 complex.

We should have that argument with a beverage and plenty of time ;).
Certainly it was an automated fail - but automation detected the
issues, and *if* we were in the gate, the changes that were done in
OpenStack to trigger [most] of those issues would not have landed at
all.

 Sure, the crux of the problem was likely that versions in the distro
 were too old and they needed to be updated.  But unless we take on
 building the whole OS from source/git/whatever every time, we're
 always going to have that issue.  So, an additional benefit of
 packages is that you can install a known-good version of an OpenStack
 component that works with the versions of dependent software you
 already have installed.

The problem is that OpenStack is building against newer stuff than is
in distros, so folks building on a packaging toolchain are often going
to be in catch-up mode - I think we need to anticipate package-based
environments running against releases rather than CD.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-07 Thread Noorul Islam Kamal Malmiyoda
On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
 Should we rather revert the patch to make the gate work?


I think it is always good to have test packages reside in
test-requirements.txt. So -1 on reverting that patch.

Here [1] is a temporary solution.

Regards,
Noorul

[1] https://review.openstack.org/65414


 On Tue, Jan 7, 2014 at 7:19 PM, Murali Allada murali.all...@rackspace.com
 wrote:

 I'm ok with making this non-voting for Solum until this gets fixed.

 -Murali


 On Jan 7, 2014, at 8:53 PM, Noorul Islam Kamal Malmiyoda
 noo...@noorul.com wrote:

  Hi team,
 
  After merging [1] devstack gate started failing. There is already a
  thread [2] related to this in mailing list. Until this gets fixed
  shall we make this job non-voting?
 
  Regards,
  Noorul
 
  [1] https://review.openstack.org/64226
  [2]
  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12440.html
 






 --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284





  1   2   >