Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Angus Salkeld
On Thu, Apr 30, 2015 at 9:25 PM, Jastrzebski, Michal 
michal.jastrzeb...@intel.com wrote:

 Hello,

 After discussions, we've spotted a possible gap in versioned objects:
 backporting of too-new versions in RPC.
 Nova handles that via the conductor, but not every service has something like
 that. I want to propose another approach:

 1. Milestone pinning - we need to make a single reference to the versions of
 various objects - for example, heat in version 15.1 will mean stack in
 version 1.1 and resource in version 1.5.
 2. Compatibility mode - this will add a flag to the service,
 --compatibility=15.1, meaning that every outgoing RPC communication
 will be backported, before sending, to the object versions bound to this
 milestone.

 With these 2 things landed we'll achieve a rolling upgrade like this:
 1. We have N nodes in version V
 2. We take down 1 node and upgrade its code to version V+1
 3. Run the code in version V+1 with --compatibility=V
 4. Repeat 2 and 3 until every node has version V+1
 5. Restart each service without the compatibility flag

 This approach has one big disadvantage - 2 restarts are required - but it should
 solve the problem of backporting too-new versions.
 Any ideas? Alternatives?
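
For readers following along, a rough, illustrative sketch of what the proposed milestone pin plus --compatibility flag could look like on the sender side, assuming oslo.versionedobjects' obj_to_primitive(target_version=...) is used for the downgrade; the pin table and helper name are made up for this example:

# Illustrative sketch only -- the pin table and helper are made up;
# obj_to_primitive(target_version=...) is the oslo.versionedobjects call
# that downgrades a payload to an older schema.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('compatibility')])  # e.g. --compatibility=15.1

# Milestone pinning: one place mapping a release to object versions.
MILESTONE_PINS = {
    '15.1': {'Stack': '1.1', 'Resource': '1.5'},
}

def serialize_for_rpc(obj):
    """Backport an outgoing object to the pinned milestone before sending."""
    pin = CONF.compatibility
    if pin:
        target = MILESTONE_PINS[pin].get(obj.obj_name())
        if target:
            return obj.obj_to_primitive(target_version=target)
    return obj.obj_to_primitive()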


AFAIK if nova gets a message that is too new, it just forwards it on (and a
newer server will handle it).

With that this *should* work, shouldn't it?
1. rolling upgrade of heat-engine
2. db sync
3. rolling upgrade of heat-api

-Angus



 Regards,
 Michał

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][tempest] Data-driven testing (DDT) samples

2015-05-04 Thread Salvatore Orlando
Among the OpenStack projects of which I have some knowledge, none of them
uses any DDT library.
If you think there might be a library from which lbaas, neutron, or any
other OpenStack project could take advantage, we should consider it.

Salvatore

On 14 April 2015 at 20:33, Madhusudhan Kandadai 
madhusudhan.openst...@gmail.com wrote:

 Hi,

 I would like to start a thread for the tempest DDT in the neutron-lbaas tree.
 The problem comes up when we have test cases for both admin and non-admin users.
 (For example, there is an ongoing patch activity:
 https://review.openstack.org/#/c/171832/). Of course it has duplication,
 and we want to adhere to the tempest guidelines. Just wondering whether
 we are using a DDT library in other projects; if so, can someone please
 point me to the sample code currently being used. It can speed up
 this DDT activity for neutron-lbaas.

 In the meantime, I am also gathering/researching about that. Should I have
 any update, I shall keep you posted on the same.

 Thanks,
 Madhusudhan

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-05-04 Thread liuxinguo
I’m just trying to have an analysis of it; maybe I can begin with the “wrapper 
around the python-cinderclient” as George Peristerakis suggested.


From: Erlon Cruz [mailto:sombra...@gmail.com]
Sent: 27 April 2015 20:07
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luozhen; Fanyaohong
Subject: Re: [openstack-dev] [cinder] Is there any way to put the driver backend 
error message to the horizon

Alex,

Any scratch of the solution you plan to propose?

On Mon, Apr 27, 2015 at 5:57 AM, liuxinguo 
liuxin...@huawei.com wrote:
Thanks for your suggestion, George. But when I looked into python-cinderclient 
(not very deeply), I could not find the “wrapper around the python-cinderclient” 
you mentioned.
Could you please give me a little more of a hint to find the “wrapper”?

Thanks,
Liu


From: George Peristerakis [mailto:gperi...@redhat.com]
Sent: 13 April 2015 23:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Is there any way to put the driver backend 
error message to the horizon

Hi Liu,

I'm not familiar with the error you are trying to show, but here's how Horizon 
typically works. In the case of cinder, we have a wrapper around 
python-cinderclient; if the client raises an exception with a valid message, 
by default Horizon will display the exception message. The message can also be 
overridden in the translation file. So a good start is to look in 
python-cinderclient and see if you could produce a more meaningful message.
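
For illustration, a hedged sketch of the driver-side half of that: raise an exception whose message is meaningful enough for the API (and a Horizon wrapper around python-cinderclient) to pass through. The exception class is assumed from cinder.exception; the helper name is made up.

# Hedged sketch, not actual cinder code: a backend error raised with a
# human-readable message that can surface all the way up to the user.
from cinder import exception

def report_backend_error(volume_name, reason):
    raise exception.VolumeBackendAPIException(
        data='Backend refused volume %s: %s' % (volume_name, reason))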


Cheers.
George
On 10/04/15 06:16 AM, liuxinguo wrote:

Hi,

When we create a volume in Horizon, some errors may occur at the driver 
backend, and in Horizon we just see an error in the volume status.

So is there any way to put the error information into Horizon so users can 
know what happened exactly, just from Horizon?

Thanks,

Liu





__

OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Jastrzebski, Michal
On 5/4/2015 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30, 2015 at 9:25 
PM, Jastrzebski, Michal 
 michal.jastrzeb...@intel.com wrote:
 
 Hello,
 
 After discussions, we've spotted possible gap in versioned objects:
 backporting of too-new versions in RPC.
 Nova does that by conductor, but not every service has something
 like that. I want to propose another approach:
 
 1. Milestone pinning - we need to make single reference to versions
 of various objects - for example heat in version 15.1 will mean
 stack in version 1.1 and resource in version 1.5.
 2. Compatibility mode - this will add flag to service
 --compatibility=15.1, that will mean that every outgoing RPC
 communication will be backported before sending to object versions
 bound to this milestone.
 
 With this 2 things landed we'll achieve rolling upgrade like that:
 1. We have N nodes in version V
 2. We take down 1 node and upgrade code to version V+1
 3. Run code in ver V+1 with --compatibility=V
 4. Repeat 2 and 3 until every node will have version V+1
 5. Restart each service without compatibility flag
 
 This approach has one big disadvantage - 2 restarts required, but
 should solve problem of backporting of too-new versions.
 Any ideas? Alternatives?
 
 
 AFAIK if nova gets a message that is too new, it just forwards it on 
 (and a newer server will handle it).
 
 With that this *should* work, shouldn't it?
 1. rolling upgrade of heat-engine

That will be the hard part. When we have only one engine of a given version, we 
lose HA. Also, since we never know where a given task lands, we might end up with 
one task bouncing from old version to old version, making a call indefinitely 
long. Of course, with each upgraded engine we lessen the chance of that happening, 
but I think we should aim for the lowest possible downtime. That being said, it 
might be a good idea to solve this problem not-too-cleanly, but quickly.

 2. db sync
 3. rolling upgrade of heat-api
 
 -Angus
 
 
 Regards,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Port Nova to Python 3

2015-05-04 Thread Victor Stinner
Hi,

 I wrote a spec to port Nova to Python 3:

   https://review.openstack.org/#/c/176868/

I updated my spec to take into account all comments. Examples of changes:

- Explicitly say that Python 2 support is kept (e.g. change the title to Adding 
Python 3.4 support to Nova)
- Clarify what the Nova tests are
- Shorter spec

I prefer to exclude Tempest tests and restrict the scope of the spec to Nova 
unit and functional tests. Most Python 3 issues should be caught by Nova unit 
and functional tests anyway.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tempest] Question on time precision in auth providers

2015-05-04 Thread Andrea Frittoli
Hi Daryl,

the original reason for having a strict check in there was that the auth
providers are specialised to the identity provider in use
(KeystoneV2AuthProvider, KeystoneV3AuthProvider), and the timestamp format
embedded in there is what's provided by keystone's identity v2 and identity
v3 token APIs.

But I agree with you that we could relax those checks to be ISO8601, and
possibly leave it to keystone tests to validate a specific format of the
timestamp.
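
For illustration, a rough sketch of the kind of relaxation being discussed - accept any ISO 8601 granularity rather than one exact strptime format. The format list and helper name are made up, not tempest-lib's actual code.

# Illustrative only: accept several ISO 8601 granularities instead of a
# single hard-coded format string.
from datetime import datetime

_ISO8601_FORMATS = (
    '%Y-%m-%dT%H:%M:%SZ',       # whole seconds (keystone v2 style)
    '%Y-%m-%dT%H:%M:%S.%fZ',    # sub-second precision (keystone v3 style)
    '%Y-%m-%dT%H:%M:%S',        # no explicit UTC designator
)

def parse_expiry(timestamp):
    for fmt in _ISO8601_FORMATS:
        try:
            return datetime.strptime(timestamp, fmt)
        except ValueError:
            continue
    raise ValueError('Not a recognised ISO 8601 timestamp: %s' % timestamp)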

andrea

On Mon, May 4, 2015 at 4:40 AM Daryl Walleck daryl.wall...@rackspace.com
wrote:

  Hi folks,



 While running Tempest I ran into a bit of interesting behavior. As part of
 the Tempest auth client, checks are performed to assure the token in use is
 still valid. When parsing the expiry timestamp from the auth response, a
 very specific datetime format is expected for Keystone v2 and v3
 respectively:




 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/auth.py#L232


 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/auth.py#L313



 Each of these format strings is valid under ISO 8601, but a one-to-one
 mapping is being made between Keystone versions and their timestamp
 formats. This can be problematic if the timestamps generated by your
 identity provider are valid timestamps but vary in granularity (second
 vs. millisecond vs. microsecond). In my case, I cannot execute any Tempest
 tests, as this check occurs early in the fixture setup, which halts any test
 from running.



 The guidance I've observed from the API working group is that any ISO 8601
 timestamp format should be sufficient (
 https://review.openstack.org/#/c/159892/11/guidelines/time.rst). Since
 this parsing occurs in client code as opposed to a test, would it be
 sufficient to verify that the timestamp is of a valid ISO 8601 format
 before parsing it and leave explicit checking to Keystone tests if
 necessary? I didn't want to open this as a bug because I realized there
 might be some historical context to this decision. I'd be grateful for any
 feedback on this point.



 Thanks,



 Daryl
  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] gate-nova-python27 failure

2015-05-04 Thread Deepak Shetty
Hi All,
  I am seeing the below failure for one of my patches (which I believe is not
related to the changes I made in my patch) - correct me if I am wrong :)

2015-05-04 07:22:06.116 | {3} nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read [0.026257s] ... FAILED
2015-05-04 07:22:06.116 |
2015-05-04 07:22:06.116 | Captured traceback:
2015-05-04 07:22:06.116 | ~~~
2015-05-04 07:22:06.116 | Traceback (most recent call last):
2015-05-04 07:22:06.116 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 1201, in patched
2015-05-04 07:22:06.116 |     return func(*args, **keywargs)
2015-05-04 07:22:06.116 |   File nova/tests/unit/virt/vmwareapi/test_read_write_util.py, line 49, in test_ipv6_host_read
2015-05-04 07:22:06.117 |     verify=False)
2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 846, in assert_called_once_with
2015-05-04 07:22:06.117 |     return self.assert_called_with(*args, **kwargs)
2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 835, in assert_called_with
2015-05-04 07:22:06.117 |     raise AssertionError(msg)
2015-05-04 07:22:06.117 | AssertionError: Expected call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)
2015-05-04 07:22:06.117 | Actual call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, params=None, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)


I ran it locally on my setup with my patch present and the test passes. See
below:

stack@devstack-f21 nova]$ git log --pretty=oneline -1
df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative' flag for
online snapshot's commit/rebase operations

[stack@devstack-f21 nova]$  ./run_tests.sh -N
nova.tests.unit.virt.vmwareapi.test_read_write_util
Running ` python setup.py testr --testr-args='--subunit --concurrency 0
nova.tests.unit.virt.vmwareapi.test_read_write_util'`
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./
${OS_TEST_PATH:-./nova/tests} --list
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./
${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpueNq4A
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase
test_ipv6_host_read   OK
0.06


Ran 1 test in 10.977s

OK
=

Addnl Details:

My patch @ https://review.openstack.org/#/c/168805/
Complete failure log @
http://logs.openstack.org/05/168805/8/check/gate-nova-python27/584359c/console.html#_2015-05-04_07_23_32_995

thanx,
deepak

Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-04 Thread Victor Stinner
Hi,

Mike Bayer wrote:
 It is not feasible to use MySQLclient in Python 2 because it uses the
 same module name as Python-MySQL, and would wreak havoc with distro
 packaging and many other things.

IMO mysqlclient is just the new upstream for MySQL-Python, since MySQL-Python 
is no longer maintained.

Why would Linux distributions not package mysqlclient if it provides Python 3 
support, contains bugfixes, and adds more features?

It's quite common to have two packages in conflict because they provide the 
same function, same library, same program, etc.

I would even suggest that packagers use mysqlclient as the new source without 
modifying their package.


 It is also imprudent to switch
 production openstack applications to a driver that is new and untested
 (even though it is a port), nor is it necessary.

Why do you consider that mysqlclient is untested, or less tested than 
mysql-python? What kind of regression do you expect in mysqlclient?

As mysql-python, mysqlclient Github project is connected to Travis:
https://travis-ci.org/PyMySQL/mysqlclient-python
(tests pass)

I trust more a project which is actively developed.


 There should be no
 reason Openstack applications are hardcoded to one database driver.
 The approach should be simply that in Python 3, the mysqlclient library
 is installed instead of mysql-python.

Technically, it's now possible to have different dependencies on Python 2 and 
Python 3. But in practice, there are some annoying corner cases. It's more 
convenient to have the same dependencies on Python 2 and Python 3.
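
For what it's worth, a hedged sketch of how a project could express per-Python-version dependencies with setuptools environment markers (an illustrative fragment only; marker support depends on the setuptools/pip versions in use):

# Illustrative setup.py fragment -- not a change being proposed here.
from setuptools import setup

setup(
    name='example',
    extras_require={
        ':python_version<"3.0"': ['MySQL-python'],
        ':python_version>="3.0"': ['mysqlclient'],
    },
)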

Using mysqlclient on both Python 2 and Python 3 would avoid having bugs specific 
to Python 2 (bugs already fixed in mysqlclient) and features only available on 
Python 3.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Angus Salkeld
On Mon, May 4, 2015 at 6:33 PM, Jastrzebski, Michal 
michal.jastrzeb...@intel.com wrote:

 On 5/4/2015 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30, 2015 at
 9:25 PM, Jastrzebski, Michal
  michal.jastrzeb...@intel.com
 wrote:
 
  Hello,
 
  After discussions, we've spotted possible gap in versioned objects:
  backporting of too-new versions in RPC.
  Nova does that by conductor, but not every service has something
  like that. I want to propose another approach:
 
  1. Milestone pinning - we need to make single reference to versions
  of various objects - for example heat in version 15.1 will mean
  stack in version 1.1 and resource in version 1.5.
  2. Compatibility mode - this will add flag to service
  --compatibility=15.1, that will mean that every outgoing RPC
  communication will be backported before sending to object versions
  bound to this milestone.
 
  With this 2 things landed we'll achieve rolling upgrade like that:
  1. We have N nodes in version V
  2. We take down 1 node and upgrade code to version V+1
  3. Run code in ver V+1 with --compatibility=V
  4. Repeat 2 and 3 until every node will have version V+1
  5. Restart each service without compatibility flag
 
  This approach has one big disadvantage - 2 restarts required, but
  should solve problem of backporting of too-new versions.
  Any ideas? Alternatives?
 
 
  AFAIK if nova gets a message that is too new, it just forwards it on
  (and a newer server will handle it).
 
  With that this *should* work, shouldn't it?
  1. rolling upgrade of heat-engine

 That will be hard part. When we'll have only one engine from given
 version, we lose HA. Also, since we never know where given task lands, we
 might end up with one task bouncing from old version to old version, making
 call indefinitely long. Ofc with each upgraded engine we'll lessen change
 for that to happen, but I think we should aim for lowest possible downtime.
 That being said, that might be good idea to solve this problem
 not-too-clean, but quickly.


I don't think losing HA in the time it takes some heat-engines to stop,
install new software and restart the heat-engines is a big deal (IMHO).

-Angus



  2. db sync
  3. rolling upgrade of heat-api
 
  -Angus
 
 
  Regards,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-04 Thread Victor Stinner
 I propose to replace mysql-python with mysqlclient in OpenStack applications
 to get Python 3 support, bug fixes and some new features (support MariaDB's
 libmysqlclient.so, support microsecond in TIME column).

I just proposed a change to add mysqlclient dependency to global requirements:

   https://review.openstack.org/#/c/179745/

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][tempest] Data-driven testing (DDT) samples

2015-05-04 Thread Tom Barron
On 5/4/15 3:05 AM, Salvatore Orlando wrote:
 Among the OpenStack project of which I have some knowledge, none of them
 uses any DDT library.

FYI, manila uses DDT for unit tests.

 If you think there might be a library from which lbaas, neutron, or any
 other openstack project might take advantage, we should consider it.
 
 Salvatore
 
 On 14 April 2015 at 20:33, Madhusudhan Kandadai
 madhusudhan.openst...@gmail.com wrote:
 
 Hi,
 
 I would like to start a thread for the tempest DDT in neutron-lbaas
 tree. The problem comes in when we have testcases for both
 admin/non-admin user. (For example, there is an ongoing patch
 activity: https://review.openstack.org/#/c/171832/). Ofcourse it has
 duplication and want to adhere as per the tempest guidelines. Just
 wondering, whether we are using DDT library in other projects, if it
 is so, can someone please point me the sample code that are being
 used currently. It can speed up this DDT activity for neutron-lbaas.


  $ grep -R '@ddt' manila/tests/ | wc -l
198
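
For anyone who hasn't seen the pattern, a minimal sketch of how those tests use the ddt library (the test names here are made up, not manila's):

import ddt
import unittest

@ddt.ddt
class ShareAccessTestCase(unittest.TestCase):

    # ddt generates one test per datum, so admin/non-admin (or any other
    # variation) does not need a copy-pasted test body.
    @ddt.data('admin', 'member')
    def test_create_share_as(self, role):
        self.assertIn(role, ('admin', 'member'))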

 In the meantime, I am also gathering/researching about that. Should
 I have any update, I shall keep you posted on the same.
 
 Thanks,
 Madhusudhan
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Regards,

-- Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Thierry Carrez
Monty Taylor wrote:
 On 04/30/2015 08:06 PM, John Dickinson wrote:
 What advantages does a compiled-language object server bring,
 and do they outweigh the costs of using a different language?

 Of course, there are a ton of things we need to explore on this
  topic, but I'm happy that we'll be doing it in the context of 
 the open community instead of behind closed doors. We will
 have a fishbowl session in Vancouver on this topic. I'm
 looking forward to the discussion.
 
 I'm excited to see where this discussion goes.
 
 If we decide that a portion of swift being in Go (or C++ or Rust or
 nim) is a good idea, (just as we've decided that devstack being in
 shell and portions of horizon and tuskar being in Javascript is a good
 idea) I'd like to caution people from thinking that must necessarily
 mean that our general policy of python is dead. The stance has
 always been python unless there is a compelling reason otherwise. It
 sounds like there may be a compelling reason otherwise here.
 
 Also:
 
 http://mcfunley.com/choose-boring-technology

I'm pretty much with Monty on this one. There were (and still are)
community benefits in sharing the same language and development culture.
One of the reasons that people who worked on one OpenStack project
continue to work on OpenStack (but on another project) is that we
share so much (language, values, CI...) between projects.

Now it's always been a trade-off -- unless there is a compelling reason
otherwise. JavaScript is for example already heavily used in OpenStack
GUI development. We just need to make sure the trade-off is worth it.
That the technical benefit is compelling enough to outweigh the
community / network drawbacks or the fragmentation risks.

That said, of all the languages we could add, I think Go is one that
makes the most sense community-wise (due to its extensive use in the
container world).

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] gate-nova-python27 failure

2015-05-04 Thread Robert Collins
Appears to be a failure due to insufficient isolation from requests.
See 
https://review.openstack.org/#/c/179746/2/nova/tests/unit/virt/vmwareapi/test_read_write_util.py
which should fix it.
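
For anyone curious about the mechanics, here's a hedged, self-contained illustration of that failure mode (not the nova code itself): once the call goes through a layer that forwards extra default keyword arguments - as requests does with params=None - assert_called_once_with no longer matches the test's expectation.

# Illustration only: a stand-in "layer" that adds a default kwarg, the way
# an insufficiently isolated requests path does in the failing gate test.
import mock

def download(session, url):
    return session.request('get', url, params=None, stream=True)

session = mock.Mock()
download(session, 'https://example.test/file')
try:
    # The test only expected these arguments...
    session.request.assert_called_once_with(
        'get', 'https://example.test/file', stream=True)
except AssertionError as exc:
    # ...but the recorded call also carried params=None, hence the mismatch.
    print('mismatch, as in the gate failure: %s' % exc)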

On 4 May 2015 at 21:32, Deepak Shetty dpkshe...@gmail.com wrote:
 Hi All,
   I am seeing the below failure for one of my patch (which i believe is not
 related to the changes i did in my patch) - Correct me if i am wrong :)

 2015-05-04 07:22:06.116 | {3}
 nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read
 [0.026257s] ... FAILED
 2015-05-04 07:22:06.116 |
 2015-05-04 07:22:06.116 | Captured traceback:
 2015-05-04 07:22:06.116 | ~~~
 2015-05-04 07:22:06.116 | Traceback (most recent call last):
 2015-05-04 07:22:06.116 |   File
 /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
 2015-05-04 07:22:06.116 | return func(*args, **keywargs)
 2015-05-04 07:22:06.116 |   File
 nova/tests/unit/virt/vmwareapi/test_read_write_util.py, line 49, in
 test_ipv6_host_read
 2015-05-04 07:22:06.117 | verify=False)
 2015-05-04 07:22:06.117 |   File
 /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 846, in assert_called_once_with
 2015-05-04 07:22:06.117 | return self.assert_called_with(*args,
 **kwargs)
 2015-05-04 07:22:06.117 |   File
 /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 835, in assert_called_with
 2015-05-04 07:22:06.117 | raise AssertionError(msg)
 2015-05-04 07:22:06.117 | AssertionError: Expected call: request('get',
 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds',
 verify=False, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True,
 allow_redirects=True)
 2015-05-04 07:22:06.117 | Actual call: request('get',
 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds',
 verify=False, params=None, headers={'User-Agent': 'OpenStack-ESX-Adapter'},
 stream=True, allow_redirects=True)


 I ran it locally on my setup with my patch present and the test passes. See
 below:

 stack@devstack-f21 nova]$ git log --pretty=oneline -1
 df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative' flag for
 online snapshot's commit/rebase operations

 [stack@devstack-f21 nova]$  ./run_tests.sh -N
 nova.tests.unit.virt.vmwareapi.test_read_write_util
 Running ` python setup.py testr --testr-args='--subunit --concurrency 0
 nova.tests.unit.virt.vmwareapi.test_read_write_util'`
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests} --list
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpueNq4A
 nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase
 test_ipv6_host_read   OK
 0.06


 Ran 1 test in 10.977s

 OK
 =

 Addnl Details:

 My patch @ https://review.openstack.org/#/c/168805/
 Complete failure log @
 http://logs.openstack.org/05/168805/8/check/gate-nova-python27/584359c/console.html#_2015-05-04_07_23_32_995

 thanx,
 deepak


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [SOLVED] [Nova] gate-nova-python27 failure

2015-05-04 Thread Deepak Shetty
https://bugs.launchpad.net/nova/+bug/1451389 and the associated bugfix @
https://review.openstack.org/179746 should solve this.

thanks garyk!

thanx,
deepak


On Mon, May 4, 2015 at 3:02 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi All,
   I am seeing the below failure for one of my patch (which i believe is
 not related to the changes i did in my patch) - Correct me if i am wrong :)

 2015-05-04 07:22:06.116 | {3} nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read [0.026257s] ... FAILED
 2015-05-04 07:22:06.116 |
 2015-05-04 07:22:06.116 | Captured traceback:
 2015-05-04 07:22:06.116 | ~~~
 2015-05-04 07:22:06.116 | Traceback (most recent call last):
 2015-05-04 07:22:06.116 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 1201, in patched
 2015-05-04 07:22:06.116 |     return func(*args, **keywargs)
 2015-05-04 07:22:06.116 |   File nova/tests/unit/virt/vmwareapi/test_read_write_util.py, line 49, in test_ipv6_host_read
 2015-05-04 07:22:06.117 |     verify=False)
 2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 846, in assert_called_once_with
 2015-05-04 07:22:06.117 |     return self.assert_called_with(*args, **kwargs)
 2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 835, in assert_called_with
 2015-05-04 07:22:06.117 |     raise AssertionError(msg)
 2015-05-04 07:22:06.117 | AssertionError: Expected call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)
 2015-05-04 07:22:06.117 | Actual call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, params=None, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)


 I ran it locally on my setup with my patch present and the test passes.
 See below:

 stack@devstack-f21 nova]$ git log --pretty=oneline -1
 df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative' flag for
 online snapshot's commit/rebase operations

 [stack@devstack-f21 nova]$  ./run_tests.sh -N
 nova.tests.unit.virt.vmwareapi.test_read_write_util
 Running ` python setup.py testr --testr-args='--subunit --concurrency 0
 nova.tests.unit.virt.vmwareapi.test_read_write_util'`
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests} --list
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpueNq4A
 nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase
 test_ipv6_host_read   OK
 

Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Thierry Carrez
Morgan Fainberg wrote:
 On Friday, May 1, 2015, Russell Bryant rbry...@redhat.com wrote:
 
 On 05/01/2015 02:22 PM, Tim Bell wrote:
 
  The spec review process has made it much easier for operators to see
  what is being proposed and give input.
 
  Recognition is a different topic. It also comes into who would be the
  operator/user electorate ? ATC is simple to define where the
 equivalent
  operator/user definition is less clear.
 
 I think spec review participation is a great example of where it would
 make sense to grant extra ATC status.  If someone provides valuable spec
 input, but hasn't made any commits that get ATC status, I'd vote to
 approve their ATC status if proposed.
 
 
 This is exactly the case for David Chadwick (U of Kent) if anyone is
 looking for prior examples of someone who has contributed to the spec
 process but has not landed code and has received ATC for the contributions. 
 
 This is a great way to confer ATC for spec participation. 

I think we are still bound by the Foundation bylaws and should not
completely merge the User Committee and Technical Committee
mandates. That said, I think operators' contributions need to be
recognized as such. So we can probably follow a strategy in three
directions:

* Continue to encourage operators to participate in spec review, code
tryouts etc.

* Encourage developers to recognize significant input from operators as
co-authorship of a feature (like Keystone did with David) -- which would
lead to more operators being ATC

* Develop the User Committee -- go beyond organizing the user survey
and really be the representative body of operators. That may involve
finding a way to identify operators so that they can participate in
elections there (and therefore feel represented).

My point being... operating OpenStack is different from contributing to
OpenStack development. Both activities are valuable and necessary, but
they are separate activities, represented by separate committees. Some
people do both, by providing essential operator feedback during feature
design (let's call them contributing operators) -- those people are
awesome and should definitely be recognized on *both* sides.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Jastrzebski, Michal
On 5/4/2015 11:50 AM, Angus Salkeld wrote: On Mon, May 4, 2015 at 6:33 
PM, Jastrzebski, Michal 
 michal.jastrzeb...@intel.com wrote:
 
 On 5/4/2015 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30,
 2015 at 9:25 PM, Jastrzebski, Michal
   michal.jastrzeb...@intel.com wrote:
  
   Hello,
  
   After discussions, we've spotted possible gap in versioned
 objects:
   backporting of too-new versions in RPC.
   Nova does that by conductor, but not every service has something
   like that. I want to propose another approach:
  
   1. Milestone pinning - we need to make single reference to
 versions
   of various objects - for example heat in version 15.1 will mean
   stack in version 1.1 and resource in version 1.5.
   2. Compatibility mode - this will add flag to service
   --compatibility=15.1, that will mean that every outgoing RPC
   communication will be backported before sending to object
 versions
   bound to this milestone.
  
   With this 2 things landed we'll achieve rolling upgrade like
 that:
   1. We have N nodes in version V
   2. We take down 1 node and upgrade code to version V+1
   3. Run code in ver V+1 with --compatibility=V
   4. Repeat 2 and 3 until every node will have version V+1
   5. Restart each service without compatibility flag
  
   This approach has one big disadvantage - 2 restarts required, but
   should solve problem of backporting of too-new versions.
   Any ideas? Alternatives?
  
  
   AFAIK if nova gets a message that is too new, it just forwards it on
   (and a newer server will handle it).
  
   With that this *should* work, shouldn't it?
   1. rolling upgrade of heat-engine
 
 That will be hard part. When we'll have only one engine from given
 version, we lose HA. Also, since we never know where given task
 lands, we might end up with one task bouncing from old version to
 old version, making call indefinitely long. Ofc with each upgraded
 engine we'll lessen change for that to happen, but I think we should
 aim for lowest possible downtime. That being said, that might be
 good idea to solve this problem not-too-clean, but quickly.
 
 
 I don't think losing HA in the time it takes some heat-engines to stop, 
 install new software and restart the heat-engines is a big deal (IMHO).
 
 -Angus

We will also lose the guarantee that an RPC call will complete in any given 
time. It can bounce from incompatible node to incompatible node until there are 
no incompatible nodes left. Especially if there are no other tasks on the queue: 
when a service returns the call to the queue and takes a call right afterwards, 
there is a good chance that it will take this particular one, and we'll get a 
loop there.

 
 
  2. db sync
  3. rolling upgrade of heat-api
 
  -Angus
 
 
  Regards,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-05-04 Thread Pavel Bondar
Hi Kevin,

Thanks for your answer, that is what I was looking for!
I'll check with you on IRC to decide which workaround is better (a rough
sketch of option 1 is below):
1. Mocking NeutronDbSubnet fetch_subnet for the opencontrail tests.
2. Using session.query() directly in NeutronDbSubnet fetch_subnet.
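
# Rough sketch of option 1 only; the dotted path, model import and
# fetch_subnet signature are assumptions to be checked against the patch.
import mock

from neutron.db import models_v2

def _fake_fetch_subnet(context, subnet_id):
    # Read the subnet within the caller's session/transaction instead of
    # bouncing through the FakeServer's HTTP round-trip.
    return context.session.query(models_v2.Subnet).filter_by(
        id=subnet_id).one()

fetch_subnet_patch = mock.patch(
    'neutron.ipam.drivers.neutrondb_ipam.driver.NeutronDbSubnet.fetch_subnet',
    side_effect=_fake_fetch_subnet)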

- Pavel Bondar

On 30.04.2015 22:46, Kevin Benton wrote:
 The OpenContrail plugin itself doesn't even use the Neutron DB. I
 believe what you are observing is a side effect of the fake server they
 have for their tests, which does inherit the neutron DB.
 
 When you call a method on the core plugin in the contrail unit test
 case, it will go through their request logic and will be piped into the
 fake server. During this time, the db session that was associated with
 the original context passed to the core plugin will be lost do to its
 conversion to a dict.[1, 2]
 
 So I believe what you're seeing is this. 
 
 1. The FakeServer gets create_port called and starts its transactions. 
 2. It now hits the ipam driver which calls out to the neutron manager to
 get the core plugin handle, which is actually the contrail plugin and
 not the FakeServer.
 3. IPAM calls _get_subnet on the contrail plugin, which serializes the
 context[1] and sends it to the FakeServer.
 4. The FakeServer code receives the request and deserializes the
 context[2], which no longer has the db session.
 5. The FakeServer then ends up starting a new session to read the
 subnet, which will interfere with the transaction you created the port
 under since they are from the same engine.
 
 This is why you can query the DB directly rather than calling the core
 plugin. The good news is that you don't have to worry because the actual
 contrail plugin won't be using any of this logic so you're not actually
 breaking anything.
 
 I think what you'll want to do is add a mock.patch for the
 NeutronDbSubnet fetch_subnet method to monkey patch in a reference to
 their FakeServer's _get_subnet method. Ping me on IRC (kevinbenton) if
 you need help.
 
 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/opencontrail/contrail_plugin.py#L111
 2.
 https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py#L121
 
 On Thu, Apr 30, 2015 at 6:37 AM, Pavel Bondar pbon...@infoblox.com wrote:
 
 Hi,
 
 I am debugging issue observed in OpenContrail tests[1] and so far it
 does not look obvious.
 
 Issue:
 
 In create_port[2] new transaction is started.
 Port gets created, but disappears right after reading subnet from plugin
 in reference ipam driver[3]:
 
 plugin = manager.NeutronManager.get_plugin()
 return plugin._get_subnet(context, id)
 
 Port no longer seen in transaction, like it never existed before
 (magic?). As a result, inserting the IPAllocation fails with a foreign key
 constraint error:
 
 DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
 u'INSERT INTO ipallocations (port_id, ip_address, subnet_id, network_id)
 VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
 u'10.0.0.2', u'be7bb05b-d501-4cf3-a29a-3861b3b54950',
 u'169f6a61-b5d0-493a-b7fa-74fd5b445c84')
 }}}
 
 Only OpenContrail tests fail with that error (116 failures[1]). Tests
 for other plugins pass fine. As I see it, OpenContrail is different from
 other plugins: each call to the plugin is wrapped in an http request, so
 getting subnet happens in another transaction. In tests requests.post()
 is mocked and http call gets translated into self.get_subnet(...).
 Stack trace from plugin._get_subnet() to db_base get_subnet() in open
 contrail tests looks next[4].
 
 Also single test failure with full db debug was uploaded for
 investigation[5]:
 - Port is inserted at 362.
 - Subnet is read by plugin at 384.
 - IPAllocation was tried to be inserted at 407.
 Between Port and IPAllocation insert no COMMIT/ROLLBACK or delete
 statement were issued, so can't find explanation why port no longer
 exists on IPAllocation insert step.
 Am I missing something obvious?
 
 For now I have several workarounds, which are basically do not use
 plugin._get_subnet(). Direct session.query() works without such side
 effects.
 But this issue bothers me much since I can't explain why it even happens
 in OpenContrail tests.
 Any ideas are welcome!
 
 My best theory for now: OpenContrail silently wipes currently running
 transaction in tests (in this case it doesn't sound good).
 
 Anyone can checkout and debug patch set 50 (where issue is observed)
 from review page[6].
 
 Thank you in advance.
 
 - Pavel Bondar
 
 [1]
 
 http://logs.openstack.org/36/153236/50/check/gate-neutron-python27/dd83d43/testr_results.html.gz
 [2]
 https://review.openstack.org/#/c/153236/50/neutron/db/db_base_plugin_v2.py
 line 1578 / line 1857
 [3]
 
 

Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Flavio Percoco

On 30/04/15 10:54 -0700, Clint Byrum wrote:

* +1 for compiled languages that are less-daunting than say C++, but
 still help find problems early. Anybody up for a Rust version too? ;-)


/me is always up for some Rust

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Flavio Percoco

On 04/05/15 12:17 +0200, Thierry Carrez wrote:

Monty Taylor wrote:

On 04/30/2015 08:06 PM, John Dickinson wrote:

What advantages does a compiled-language object server bring,
and do they outweigh the costs of using a different language?

Of course, there are a ton of things we need to explore on this
 topic, but I'm happy that we'll be doing it in the context of
the open community instead of behind closed doors. We will
have a fishbowl session in Vancouver on this topic. I'm
looking forward to the discussion.


I'm excited to see where this discussion goes.

If we decide that a portion of swift being in Go (or C++ or Rust or
nim) is a good idea, (just as we've decided that devstack being in
shell and portions of horizon and tuskar being in Javascript is a good
idea) I'd like to caution people from thinking that must necessarily
mean that our general policy of python is dead. The stance has
always been python unless there is a compelling reason otherwise. It
sounds like there may be a compelling reason otherwise here.

Also:

http://mcfunley.com/choose-boring-technology


I'm pretty much with Monty on this one. There was (and still is)
community benefits in sharing the same language and development culture.
One of the reasons that people that worked on one OpenStack project
continue to work on OpenStack (but on another project) is because we
share so much (language, values, CI...) between projects.

Now it's always been a trade-off -- unless there is a compelling reason
otherwise. JavaScript is for example already heavily used in OpenStack
GUI development. We just need to make sure the trade-off is worth it.
That the technical benefit is compelling enough to outweigh the
community / network drawbacks or the fragmentation risks.


TBH, I'm a bit torn. I'm always cheering for innovation, for using the
right tool, etc, but I also agree with Monty, the linked post and some
of the arguments that have been made in this thread.

To some extent, I believe it'd be fair to say that as long as all the
other aspects are maintained by the project itself, it should be fine
for projects to do this. To be more precise, I don't think our infra
team should reply to the request of having a Go/Rust/Nim CI unless
there are enough cases that would make this worth it for them to offer
this service. This means swift needs to run its own CI for the Go
code, provide tools for deploying it, etc.

One question that arises (naturally?) is whether Swift will end up
being completely rewritten in Go. I wouldn't discard that option.


That said, of all the languages we could add, I think Go is one that
makes the most sense community-wise (due to its extensive use in the
container world).


Not going to get into language wars but the above is also arguable.
I'm not against Go itself, it's just that choosing a language to use
for a task is hardly that simple.

Cheers,
Flavio


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Peter V. Saveliev

…

Hello.

I would like to discuss the possibility of replacing the external ip monitor 
in the neutron code [1] with native Python code [2].


The issues of the current implementation:
* an external process management
* text output parsing (possibly buffered)

The proposed library:
* pure Python code
* threadless (by default) socket-like objects to work with netlink
* optional eventlet optimization
* netlink messages as native python objects
* compatible license

If it's ok, I would prepare a patchset this week.

[1] neutron/agent/linux/ip_monitor.py
[2] https://github.com/svinota/pyroute2
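
To give a flavour of it, a rough sketch of the kind of threadless monitoring loop this would allow (exact pyroute2 calls and message fields to be confirmed in the actual patchset):

# Rough sketch only -- exact calls/fields to be confirmed.
from pyroute2 import IPRoute

ipr = IPRoute()
ipr.bind()  # subscribe to netlink broadcast groups (link/address/route events)
try:
    while True:
        for msg in ipr.get():  # blocking read of parsed netlink messages
            # messages are native Python objects, no text parsing involved
            print(msg.get('event'), msg.get_attr('IFA_ADDRESS'))
finally:
    ipr.close()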

--
Peter V. Saveliev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Weekly meeting #34

2015-05-04 Thread Emilien Macchi
Hi,

Tomorrow is our weekly meeting.
Please look at the agenda [1].

Feel free to bring new topics and reviews/bugs if needed.
Also, if you had any action, make sure you can give a status during the
meeting or in the etherpad directly.

See you tomorrow,

[1]
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150505
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [SOLVED] [Nova] gate-nova-python27 failure

2015-05-04 Thread Sean Dague
We actually isolated from requests a little more in this review instead
- https://review.openstack.org/#/c/179757/

-Sean

On 05/04/2015 06:23 AM, Deepak Shetty wrote:
 https://bugs.launchpad.net/nova/+bug/1451389 and the associated bugfix @
 https://review.openstack.org/179746 should solve this.
 
 thanks garyk!
 
 thanx,
 deepak
 
 
 On Mon, May 4, 2015 at 3:02 PM, Deepak Shetty dpkshe...@gmail.com
 mailto:dpkshe...@gmail.com wrote:
 
 Hi All,
   I am seeing the below failure for one of my patch (which i believe
 is not related to the changes i did in my patch) - Correct me if i
 am wrong :)
 
  2015-05-04 07:22:06.116 | {3} nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read [0.026257s] ... FAILED
  2015-05-04 07:22:06.116 |
  2015-05-04 07:22:06.116 | Captured traceback:
  2015-05-04 07:22:06.116 | ~~~
  2015-05-04 07:22:06.116 | Traceback (most recent call last):
  2015-05-04 07:22:06.116 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 1201, in patched
  2015-05-04 07:22:06.116 |     return func(*args, **keywargs)
  2015-05-04 07:22:06.116 |   File nova/tests/unit/virt/vmwareapi/test_read_write_util.py, line 49, in test_ipv6_host_read
  2015-05-04 07:22:06.117 |     verify=False)
  2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 846, in assert_called_once_with
  2015-05-04 07:22:06.117 |     return self.assert_called_with(*args, **kwargs)
  2015-05-04 07:22:06.117 |   File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py, line 835, in assert_called_with
  2015-05-04 07:22:06.117 |     raise AssertionError(msg)
  2015-05-04 07:22:06.117 | AssertionError: Expected call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)
  2015-05-04 07:22:06.117 | Actual call: request('get', 'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dcdsName=fake_ds', verify=False, params=None, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True, allow_redirects=True)
 
 
 I ran it locally on my setup with my patch present and the test
 passes. See below:
 
 stack@devstack-f21 nova]$ git log --pretty=oneline -1
 df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative'
 flag for online snapshot's commit/rebase operations
 
 [stack@devstack-f21 nova]$  ./run_tests.sh -N
 nova.tests.unit.virt.vmwareapi.test_read_write_util
 Running ` python setup.py testr --testr-args='--subunit
 --concurrency 0  nova.tests.unit.virt.vmwareapi.test_read_write_util'`
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests} --list
 

[openstack-dev] [all] Sign up for oslo liaisons for liberty cycle

2015-05-04 Thread Davanum Srinivas
[ re-sending an almost identical email from Doug sent just before kilo
cycle :) ]

The Oslo team is responsible for managing code shared between projects. There
are a LOT more projects than Oslo team members, so we created the liaison
program at the beginning of the Juno cycle, asking each team that uses Oslo
libraries to provide one volunteer liaison. Our liaisons facilitate
communication and work with us to make the application code changes needed as
code moves out of the incubator and into libraries. With this extra help in
place, we were able to successfully graduate 7 new libraries and begin having
them adopted across OpenStack.

With the change-over to the new release cycle, it’s time to ask for volunteers
to sign up to be liaisons again. If you are interested in acting as a liaison
for your project, please sign up on the wiki page [1]. It would be very helpful
to have a full roster before the summit, so we can make sure liaisons are
invited to participate in any relevant discussions there. If you are
curious about
the current state of planning for Liberty, please peek at [2] and [3].

Thanks,
Dims

[1] https://wiki.openstack.org/wiki/Oslo/ProjectLiaisons
[2] https://etherpad.openstack.org/p/liberty-oslo-summit-planning
[3] https://libertydesignsummit.sched.org/overview/type/design+summit/Oslo


-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Assaf Muller
.

- Original Message -
 …
 
 Hello.
 
 I would like to discuss the possibility of replacing the external IP monitor
 in the neutron code [1] with native Python code [2].
 
 The issues with the current implementation:
 * external process management
 * text output parsing (possibly buffered)
 
 The proposed library:
 * pure Python code
 * threadless (by default) socket-like objects to work with netlink
 * optional eventlet optimization
 * netlink messages as native python objects
 * compatible license
 

How's packaging looking on all supported platforms?

On a related note, ip_monitor.py is 87 lines of code, I'd be wary of getting
rid of it and using a full blown library instead. Then again, using pyroute2,
we might want to replace other pieces of code (Such as parts of ip_lib).

 If it's ok, I would prepare a patchset this week.
 
 [1] neutron/agent/linux/ip_monitor.py
 [2] https://github.com/svinota/pyroute2
 
 --
 Peter V. Saveliev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Miguel Ángel Ajo
Does the library require root privileges to work
for the operations you’re planning to do?

That would be a stopper, since all the agents run unprivileged, and all the
operations are filtered by the oslo root wrap daemon or cmdline tool.

Best,
Miguel Ángel.


Miguel Ángel Ajo


On Monday, 4 de May de 2015 at 14:58, Assaf Muller wrote:

 .
  
 - Original Message -
  …
   
  Hello.
   
  I would like to discuss the possibility to replace external ip monitor
  in the neutron code [1] with an internal native Python code [2]
   
  The issues of the current implementation:
  * an external process management
  * text output parsing (possibly buffered)
   
  The proposed library:
  * pure Python code
  * threadless (by default) socket-like objects to work with netlink
  * optional eventlet optimization
  * netlink messages as native python objects
  * compatible license
   
  
  
 How's packaging looking on all supported platforms?
  
 On a related note, ip_monitor.py is 87 lines of code, I'd be wary of getting
 rid of it and using a full blown library instead. Then again, using pyroute2,
 we might want to replace other pieces of code (Such as parts of ip_lib).
  
  If it's ok, I would prepare a patchset this week.
   
  [1] neutron/agent/linux/ip_monitor.py
  [2] https://github.com/svinota/pyroute2
   
  --
  Peter V. Saveliev
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Peter V. Saveliev



On 05/04/2015 03:25 PM, Miguel Ángel Ajo wrote:

Does the library require root privileges to work
for the operations you’re planning to do?


Nope.

Only network stack changes need CAP_NET_ADMIN (add/del address, 
interface, route, traffic queue, etc.), and netns operations need root 
access.


Just monitoring doesn't require any special permissions and can run 
under «nobody».
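
For illustration, a minimal monitoring sketch (assuming pyroute2's
IPRoute.bind()/get() interface; exact message handling may differ between
versions):

    # Minimal sketch: subscribe to kernel netlink broadcasts and print the
    # parsed messages. No CAP_NET_ADMIN or root is needed for read-only
    # monitoring.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    ipr.bind()                      # subscribe to netlink broadcast groups
    try:
        while True:
            for msg in ipr.get():   # blocking read of pending messages
                print(msg)          # e.g. RTM_NEWADDR / RTM_DELADDR events
    finally:
        ipr.close()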




That would be a stopper, since all the agents run unprivileged, and all the
operations are filtered by the oslo root wrap daemon or cmdline tool.


Offtopic: btw, here I would like to ping Angus with his patchset [1].

skip /

[1] https://review.openstack.org/#/c/155631/


--
Peter V. Saveliev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-ansible-deplpoyment has released Kilo

2015-05-04 Thread Kevin Carter
Hey Dani,

Are you looking for support for Ironic for baremetal provisioning or for 
deployments on baremetal without the use of LXC containers?

—

Kevin

 On May 3, 2015, at 06:45, Daniel Comnea comnea.d...@gmail.com wrote:
 
 Great job Kevin  co !!
 
 Are there any plans to support configuring bare metal as well?
 
 Dani
 
 On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
 gene@alcatel-lucent.com wrote:
 cool!
 
 From: Kevin Carter [kevin.car...@rackspace.com]
 Sent: Thursday, April 30, 2015 4:36 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] os-ansible-deplpoyment has released Kilo
 
 Hello Stackers,
 
 The OpenStack Ansible Deployment (OSAD) project is happy to announce our 
 stable Kilo release, version 11.0.0. The project has come a very long way 
 from initial inception and taken a lot of work to excise our original vendor 
 logic from the stack and transform it into a community-driven architecture 
 and deployment process. If you haven’t yet looked at the 
 `os-ansible-deployment` project on StackForge, we'd love for you to take a 
 look now [ https://github.com/stackforge/os-ansible-deployment ]. We offer an 
 OpenStack solution orchestrated by Ansible and powered by upstream OpenStack 
 source. OSAD is a batteries included OpenStack deployment solution that 
 delivers OpenStack as the developers intended it: no modifications to nor 
 secret sauce in the services it deploys. This release includes 436 commits 
 that brought the project from Rackspace Private Cloud technical debt to an 
 OpenStack community deployment solution. I'd like to recognize the following 
 people (from Git logs) for all of their hard work in making the OSAD project 
 successful:
 
 Andy McCrae
 Matt Thompson
 Jesse Pretorius
 Hugh Saunders
 Darren Birkett
 Nolan Brubaker
 Christopher H. Laco
 Ian Cordasco
 Miguel Grinberg
 Matthew Kassawara
 Steve Lewis
 Matthew Oliver
 git-harry
 Justin Shepherd
 Dave Wilde
 Tom Cameron
 Charles Farquhar
 BjoernT
 Dolph Mathews
 Evan Callicoat
 Jacob Wagner
 James W Thorne
 Sudarshan Acharya
 Jesse P
 Julian Montez
 Sam Yaple
 paul
 Jeremy Stanley
 Jimmy McCrory
 Miguel Alex Cantu
 elextro
 
 
 While Rackspace remains the main proprietor of the project in terms of 
 community members and contributions, we're looking forward to more community 
 participation especially after our stable Kilo release with a community 
 focus. Thank you to everyone that contributed on the project so far and we 
 look forward to working with more of you as we march on.
 
 —
 
 Kevin Carter
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help: help resolve Depends-On for review

2015-05-04 Thread Yu Xing YX Wang

Hi,

I have a review (https://review.openstack.org/#/c/178561/) which depends
on another review (https://review.openstack.org/#/c/178546/).
This Depends-On makes the first review's build fail. Who can help me
resolve it?

fixes Bug1448217
Closes-Bug: 1448217
Depends-On: I86bf157f1bdb44f5fc579dc5317784fe31df8521
Change-Id: I57649f5aac9b1abe1a9961d4b35479372ebee519


Regards,


YuxingWang( 王宇行 )
Software Engineer, GTS Offerings Development
IBM China Development Laboratory (CDL)
- Ring Bldg, #28 ZhongGuanCun Software Park, No.8 Dong Bei Wang West Road,
Haidian District Beijing, P.R.China 100193
86-10-82450791, yuxi...@cn.ibm.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Peter V. Saveliev



On 05/04/2015 02:58 PM, Assaf Muller wrote:
skip /

How's packaging looking on all supported platforms?


It is packaged already for Fedora, RHEL, Debian, Ubuntu, Gentoo as soon 
as I know. Maybe for some other platforms. Pypi is provided as well.




On a related note, ip_monitor.py is 87 lines of code, I'd be wary of getting
rid of it and using a full blown library instead


Strictly speaking, the monitoring code using native netlink will not be 
more complicated. In the current state Popen + iproute2 are under the 
hood. Though well-tested and widely used, that's true. In the proposed 
way it will be pyroute2, and it also has more than 200 hundreds of 
functional tests in the regression testing cycle.



Then again, using pyroute2, we might want to replace other pieces of

 code (Such as parts of ip_lib).

Probably. It is capable of managing most interface types (incl. 
transparent support of userspace-managed ones like OVS and teamd) and 
provides more or less comprehensive support for RTNL, nfnetlink (ipset), 
netns (via ioctl), etc.
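
For example, a rough sketch of the kind of ip_lib-style operation it covers
(assuming the documented IPRoute API; changes like this do need
CAP_NET_ADMIN):

    # Rough sketch only -- roughly `ip addr add 192.0.2.10/24 dev eth0`.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    idx = ipr.link_lookup(ifname='eth0')[0]   # resolve the interface index
    ipr.addr('add', index=idx,
             address='192.0.2.10', mask=24)
    print(ipr.get_addr())                     # dump addresses as netlink messages
    ipr.close()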


skip/

--
Peter V. Saveliev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Anita Kuno
I'd like to go back to the beginning to clarify something.

On 04/29/2015 02:34 PM, Adam Lawson wrote:
 So I started replying to Doug's email in a different thread but didn't want
 to hi-jack that so I figured I'd present my question as a more general
 question about how voting is handled for the TC.
 
 Anyway, I find it curious that the TC is elected by those within the
 developer community but TC candidates talk about representing the operator
 community

In my statements I talked about acknowledging the operator community not
representing them. When I speak, I represent myself and my best
understanding of a certain situation, if others find value in the
position I hold, they will let me know.

In my view of what comprises OpenStack, the TC is one point of a
triangle and the operators are an entirely different point. Trying to
get two points of a triangle to be the same thing compromises the
integrity of the structure. Each needs to play its part, not try to be
something it is not.

There have been many helpful comments on how those operators who wish to
contribute to reviews, patches and specs as well as receive ATC status
may do so, for those operators who wish to be acknowledged as
contributors as well as being operators.

Operators have a very useful, very valuable, very necessary perspective
that is not a developer's perspective that needs to be heard and
communicated.

Thierry has made the suggestion that a strong User Committee
representing the voice of the operator would be a good direction here. I
support this suggestion. Tim Bell is working on an etherpad here:
https://etherpad.openstack.org/p/YVR-ops-user-committee

Thank you Adam,
Anita.


 who are not allowed to vote. Operators meaning Admins,
 Architects, etc. It sounds like this is something most TC candidates want
 which most would agree is a good thing. At least I think so. ; )
 
 Is it feasible to start allowing the operator community to also cast
 votes for TC candidates? Is the TC *only* addressing technical concerns
 that are relevant to the development community? Since the TC candidates are
 embracing the idea of representing more than just the developer community,
 it would /seem/ the voters electing the TC members should include the
 communities being represented. If the TC only addresses developer concerns,
 it would seem they become at risk of losing touch with the
 operator/architecture/user concerns because the operator community voice is
 never heard in the voting booth.
 
 Perhaps this bumps into how it used to be versus how it should be. I don't
 know. Just struck me as incongruent with the platform of almost every
 candidate - broadening representation while the current rules prohibit that
 level of co-participation.
 
 Thoughts?
 
 
 *Adam Lawson*
 
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Peter V. Saveliev



On 05/04/2015 02:58 PM, Assaf Muller wrote:
skip /

errata, should be read as:
… also has more than 200 functional tests in the regression testing cycle.

pardonnez-moi, multiple editions of the mail sometimes lead to mistakes.

skip/

--
Peter V. Saveliev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-05-04 Thread Alex Meade
Hey Erlon,

The summit etherpad is here:
https://etherpad.openstack.org/p/liberty-cinder-async-reporting

It links to what we discussed in Paris. I will be filling it out this week.
Also note, I have submitted this topic for a cross-project session:
https://docs.google.com/spreadsheets/d/1vCTZBJKCMZ2xBhglnuK3ciKo3E8UMFo5S5lmIAYMCSE/edit#gid=827503418

-Alex

On Mon, May 4, 2015 at 3:30 AM, liuxinguo liuxin...@huawei.com wrote:

  I’m just trying to do an analysis of it; maybe I can begin
 with the “wrapper around the python-cinderclient” as George Peristerakis
 suggested.





 *From:* Erlon Cruz [mailto:sombra...@gmail.com]
 *Sent:* 27 April 2015 20:07
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Luozhen; Fanyaohong
 *Subject:* Re: [openstack-dev] [cinder] Is there any way to put the driver
 backend error message to the horizon



 Alex,



 Any scratch of the solution you plan to propose?



 On Mon, Apr 27, 2015 at 5:57 AM, liuxinguo liuxin...@huawei.com wrote:

 Thanks for your suggestion, George. But when I looked into
 python-cinderclient (not very deeply), I could not find the “wrapper around the
 python-cinderclient” you mentioned.

 Could you please give me a bit more of a hint to find the “wrapper”?



 Thanks,

 Liu





 *From:* George Peristerakis [mailto:gperi...@redhat.com]
 *Sent:* 13 April 2015 23:22
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [cinder] Is there any way to put the driver
 backend error message to the horizon



 Hi Liu,

 I'm not familiar with the error you are trying to show, but here's how
 Horizon typically works. In the case of cinder, we have a wrapper around
 python-cinderclient; if the client raises an exception with a valid
 message, by default Horizon will display that exception message. The message
 can also be overridden in the translation file. So a good start is to look
 in python-cinderclient and see if you could produce a more meaningful
 message.
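
 (For illustration, the general pattern being described looks roughly like
 the following -- a hypothetical handler, not the actual Horizon source; the
 handle()/api_call names are assumptions:)

     # Hypothetical Horizon-side sketch: the dashboard's API wrapper call is
     # guarded by horizon.exceptions.handle(), which reports the failure to
     # the user as an error notification.
     from django.utils.translation import ugettext_lazy as _
     from horizon import exceptions

     def handle(self, request, data):
         try:
             # call into the openstack_dashboard.api.cinder wrapper here
             return self.api_call(request, data)   # hypothetical wrapper call
         except Exception:
             exceptions.handle(request, _('Unable to create the volume.'))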


 Cheers.
 George

 On 10/04/15 06:16 AM, liuxinguo wrote:

 Hi,



 When we create a volume in Horizon, some errors may occur at the driver
 backend, and in Horizon we just see an error in the volume status.



 So is there any way to pass the error information to Horizon so users can 
 know exactly what happened just from Horizon?

 Thanks,

 Liu





  __

 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Maish Saidel-Keesing

On 05/04/15 17:07, Anita Kuno wrote:

I'd like to go back to the beginning to clarify something.

On 04/29/2015 02:34 PM, Adam Lawson wrote:

So I started replying to Doug's email in a different thread but didn't want
to hi-jack that so I figured I'd present my question as a more general
question about how voting is handled for the TC.

Anyway, I find it curious that the TC is elected by those within the
developer community but TC candidates talk about representing the operator
community

In my statements I talked about acknowledging the operator community not
representing them. When I speak, I represent myself and my best
understanding of a certain situation, if others find value in the
position I hold, they will let me know.

In my view of what comprises OpenStack, the TC is one point of a
triangle and the operators are an entirely different point. Trying to
get two points of a triangle to be the same thing compromises the
integrity of the structure. Each needs to play its part, not try to be
something it is not.
A three point triangle. I like the idea! Anita I assume that you are 
talking about the TC[3], the board [1] and the user committee [2].


I honestly do not see this at the moment as an equally weighted triangle.
Should they be? Perhaps not, maybe yes.

It could be that my view of things is skew, but here it is.

The way to get something into OpenStack is through code.
Who submits the code? Developers.
Who approves code? Reviewers and core
On top of that you have the PTL
Above the PTL - you have the TC. They decide what is added into 
OpenStack and (are supposed) drive overall direction.


These are the people that have actionable influence into what goes into 
the products.


AFAIK neither the Foundation - nor the User committee have any 
actionable influence into what goes into the products, what items are 
prioritized and what is dropped.


If each of the three points of the triangle had proper (actionable) 
influence and (actionable) say in what goes on and happens within the 
OpenStack then that would be ideal. Does the representation have to be 
equal? I don't think so. But it should be there somehow.


One of the points of the User Committee mission is:
Consolidate user requirements and present these to the management board 
and technical committee


There is no mention that I could find on any of the other missions[3][1] 
that says that the TC or the board have to do anything with user 
requirements presented to them.


I do not know if this has ever been addressed before, but it should be 
defined: a process where the TC collects requirements from the User 
Committee or Board, and through which those requirements trickle down 
into the teams and projects.


My 0.02 Shekels.


There have been many helpful comments on how those operators who wish to
contribute to reviews, patches and specs as well as receive ATC status
may do so, for those operators who wish to be acknowledged as
contributors as well as being operators.

Operators have a very useful, very valuable, very necessary perspective
that is not a developer's perspective that needs to be heard and
communicated.

Thierry has made the suggestion that a strong User Committee
representing the voice of the operator would be a good direction here. I
support this suggestion. Tim Bell is working on an etherpad here:
https://etherpad.openstack.org/p/YVR-ops-user-committee

Thank you Adam,
Anita.



who are not allowed to vote. Operators meaning Admins,
Architects, etc. It sounds like this is something most TC candidates want
which most would agree is a good thing. At least I think so. ; )

Is it be feasible to start allowing the operator community to also cast
votes for TC candidates? Is the TC *only* addressing technical concerns
that are relevant to the development community? Since the TC candidates are
embracing the idea of representing more than just the developer community,
it would /seem/ the voters electing the TC members should include the
communities being represented. If the TC only addresses developer concerns,
it would seem they become at risk of losing touch with the
operator/architecture/user concerns because the operator community voice is
never heard in the voting booth.

Perhaps this bumps into how it used to be versus how it should be. I don't
know. Just struck me as incongruent with the platform of almost every
candidate - broadening representation while the current rules prohibit that
level of co-participation.

Thoughts?


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



[1] https://wiki.openstack.org/wiki/Governance/Foundation/Mission
[2] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee
[3] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
--
Best Regards,
Maish Saidel-Keesing


Re: [openstack-dev] Mellanox request for permission for Nova CI

2015-05-04 Thread Matt Riedemann



On 5/3/2015 10:32 AM, Lenny Verkhovsky wrote:

Hi Dan and the team,

 Here you can see the full logs and tempest.conf:
 http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_PORT_20150503_1854/
 (the suspend-resume test is skipped and we are investigating this issue)

 Besides running the Tempest API tests on a Mellanox-flavor VM, we are also running
 tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps
 with port_vnic_type = direct configured.

We will add more tests in the future.

Thanks in advance.
Lenny Verkhovsky
SW Engineer,  Mellanox Technologies
www.mellanox.com

Office:+972 74 712 9244
Mobile:  +972 54 554 0233
Fax:+972 72 257 9400

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com]
Sent: Friday, April 24, 2015 7:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI

Hi Lenny,


Is there anything missing for us to start 'non-voting' Nova CI ?


Sorry for the slow response from the team.

The results that you've posted look good to me. A quick scan of the tempest 
results don't seem to indicate any new tests that are specifically testing 
SRIOV things. I assume this is mostly implied because of the flavor you're 
configuring for testing, right?

Could you also persist the tempest.conf just so it's easy to see?

Regardless of the above, I think that the results look clean enough to start 
commenting on patches, IMHO. So, count me as +1.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1 for non-voting on nova changes from me.  Looks like it's running 
tests from the tempest repo and not a private third party repo which is 
good.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread John Dickinson

 On May 4, 2015, at 5:26 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 04/05/15 12:17 +0200, Thierry Carrez wrote:
 Monty Taylor wrote:
 On 04/30/2015 08:06 PM, John Dickinson wrote:
 What advantages does a compiled-language object server bring,
 and do they outweigh the costs of using a different language?
 
 Of course, there are a ton of things we need to explore on this
 topic, but I'm happy that we'll be doing it in the context of
 the open community instead of behind closed doors. We will
 have a fishbowl session in Vancouver on this topic. I'm
 looking forward to the discussion.
 
 I'm excited to see where this discussion goes.
 
 If we decide that a portion of swift being in Go (or C++ or Rust or
 nim) is a good idea, (just as we've decided that devstack being in
 shell and portions of horizon and tuskar being in Javascript is a good
 idea) I'd like to caution people from thinking that must necessarily
 mean that our general policy of python is dead. The stance has
 always been python unless there is a compelling reason otherwise. It
 sounds like there may be a compelling reason otherwise here.
 
 Also:
 
 http://mcfunley.com/choose-boring-technology
 
 I'm pretty much with Monty on this one. There was (and still is)
 community benefits in sharing the same language and development culture.
 One of the reasons that people that worked on one OpenStack project
 continue to work on OpenStack (but on another project) is because we
 share so much (language, values, CI...) between projects.
 
 Now it's always been a trade-off -- unless there is a compelling reason
 otherwise. JavaScript is for example already heavily used in OpenStack
 GUI development. We just need to make sure the trade-off is worth it.
 That the technical benefit is compelling enough to outweigh the
 community / network drawbacks or the fragmentation risks.
 
 TBH, I'm a bit torn. I'm always cheering for innovation, for using the
 right tool, etc, but I also agree with Monty, the linked post and some
 of the arguments that have been made in this thread.
 
 To some extent, I believe it'd be fair to say that as long as all the
 other aspects are maintained by the project itself, it should be fine
 for projects to do this. To be more precise, I don't think our infra
 team should reply to the request of having a Go/Rust/Nim CI unless
 there are enough cases that would make this worth it for them to offer
 this service. This means, swift needs to run their own CI for the Go
 code, provide tools for deplying it, etc.
 
 One question that arises (naturally?) is whether Swift will end up
 being completely rewritten in Go? I wouldn't discard this option.


Just as a point of clarity (since I've seen it mentioned a few times now in 
this thread and elsewhere):

At the current time, we are NOT planning to rewrite everything in Go. We are 
exploring one specific question: is a compiled language object server worth 
it?. Or restated with more words: Is there a specific part of Swift that we 
can make more efficient by implementing in a different language such that the 
benefits gained outweigh the costs of using another language? And additionally, 
how can we as a community explore this question in the open rather than pushing 
this work to be done apart from the community and behind closed doors?

There's already an ecosystem out there of people who have written stuff for and 
around Swift (ie middleware and DiskFile implementations). One of the first 
questions that comes up in the what if it's not Python discussions is what 
about WSGI middleware. That right there is one reason that, for now, there are 
no plans to rewrite everything in Go, and should *anything* be written in 
something other than Python in the master branch and in a release, it will be 
done only after much careful consideration.


--John




 
 That said, of all the languages we could add, I think Go is one that
 makes the most sense community-wise (due to its extensive use in the
 container world).
 
 Not going to get into language wars but the above is also arguable.
 I'm not against Go itself, it's just that choosing a language to use
 for a task is hardly that simple.
 
 Cheers,
 Flavio
 
 
 --
 @flaper87
 Flavio Percoco
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mellanox request for permission for Nova CI

2015-05-04 Thread Lenny Verkhovsky
Thanks Matt,
We will start Nova non-voting commenting.


Lenny Verkhovsky
SW Engineer,  Mellanox Technologies
www.mellanox.com 

Office:+972 74 712 9244
Mobile:  +972 54 554 0233
Fax:+972 72 257 9400

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Monday, May 04, 2015 6:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI



On 5/3/2015 10:32 AM, Lenny Verkhovsky wrote:
 Hi Dan and the team,

 Here you can see full logs and tempest.conf  
 http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_POR
 T_20150503_1854/ ( suspend-resume test is skipped and we are checking 
 this issue )

 Except of running Tempest API on Mellanox flavor VM we are also 
 running 
 tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedS
 erverOps With configured port_vnic_type = direct

 We will add more tests in the future.

 Thanks in advance.
 Lenny Verkhovsky
 SW Engineer,  Mellanox Technologies
 www.mellanox.com

 Office:+972 74 712 9244
 Mobile:  +972 54 554 0233
 Fax:+972 72 257 9400

 -Original Message-
 From: Dan Smith [mailto:d...@danplanet.com]
 Sent: Friday, April 24, 2015 7:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Mellanox request for permission for Nova 
 CI

 Hi Lenny,

 Is there anything missing for us to start 'non-voting' Nova CI ?

 Sorry for the slow response from the team.

 The results that you've posted look good to me. A quick scan of the tempest 
 results don't seem to indicate any new tests that are specifically testing 
 SRIOV things. I assume this is mostly implied because of the flavor you're 
 configuring for testing, right?

 Could you also persist the tempest.conf just so it's easy to see?

 Regardless of the above, I think that the results look clean enough to start 
 commenting on patches, IMHO. So, count me as +1.

 --Dan

 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1 for non-voting on nova changes from me.  Looks like it's running
tests from the tempest repo and not a private third party repo which is good.

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Anita Kuno
On 05/04/2015 10:46 AM, Maish Saidel-Keesing wrote:
 On 05/04/15 17:07, Anita Kuno wrote:
 I'd like to go back to the beginning to clarify something.

 On 04/29/2015 02:34 PM, Adam Lawson wrote:
 So I started replying to Doug's email in a different thread but
 didn't want
 to hi-jack that so I figured I'd present my question as a more general
 question about how voting is handled for the TC.

 Anyway, I find it curious that the TC is elected by those within the
 developer community but TC candidates talk about representing the
 operator
 community
 In my statements I talked about acknowledging the operator community not
 representing them. When I speak, I represent myself and my best
 understanding of a certain situation, if others find value in the
 position I hold, they will let me know.

 In my view of what comprises OpenStack, the TC is one point of a
 triangle and the operators are an entirely different point. Trying to
 get two points of a triangle to be the same thing compromises the
 integrity of the structure. Each needs to play its part, not try to be
 something it is not.
 A three point triangle. I like the idea! Anita I assume that you are
 talking about the TC[3], the board [1] and the user committee [2].

No that wasn't what I meant. You seem to be making a point so I won't
detract from your point, except to clarify that was not my meaning.

Thanks,
Anita.

 
 I honestly do not see this at the moment as an equally weighted triangle.
 Should they be? Perhaps not, maybe yes.
 
 It could be that my view of things is skew, but here it is.
 
 The way to get something into OpenStack is through code.
 Who submits the code? Developers.
 Who approves code? Reviewers and core
 On top of that you have the PTL
 Above the PTL - you have the TC. They decide what is added into
 OpenStack and (are supposed) drive overall direction.
 
 These are the people that have actionable influence into what goes into
 the products.
 
 AFAIK neither the Foundation - nor the User committee have any
 actionable influence into what goes into the products, what items are
 prioritized and what is dropped.
 
 If each of the three point of the triangle had proper (actionable)
 influence and (actionable) say in what goes on and happens within the
 OpenStack then that would be ideal. Does the representation have to be
 equal? I don't think so. But it should be there somehow.
 
 One of the points of the User Committee mission is:
 Consolidate user requirements and present these to the management board
 and technical committee
 
 There is no mention that I could find on any of the other missions[3][1]
 that says that the TC or the board have to do anything with user
 requirements presented to them.
 
 I do not know if this has ever been addressed before, but it should be
 defined. A process with where the TC and collects requirements from the
 User Committee or Board and with a defined process this trickles down
 into the teams and projects.
 
 My 0.02 Shekels.
 
 There have been many helpful comments on how those operators who wish to
 contribute to reviews, patches and specs as well as receive ATC status
 may do so, for those operators who wish to be acknowledged as
 contributors as well as being operators.

 Operators have a very useful, very valuable, very necessary perspective
 that is not a developer's perspective that needs to be heard and
 communicated.

 Thierry has made the suggestion that a strong User Committee
 representing the voice of the operator would be a good direction here. I
 support this suggestion. Tim Bell is working on an etherpad here:
 https://etherpad.openstack.org/p/YVR-ops-user-committee

 Thank you Adam,
 Anita.


 who are not allowed to vote. Operators meaning Admins,
 Architects, etc. It sounds like this is something most TC candidates
 want
 which most would agree is a good thing. At least I think so. ; )

 Is it be feasible to start allowing the operator community to also cast
 votes for TC candidates? Is the TC *only* addressing technical concerns
 that are relevant to the development community? Since the TC
 candidates are
 embracing the idea of representing more than just the developer
 community,
 it would /seem/ the voters electing the TC members should include the
 communities being represented. If the TC only addresses developer
 concerns,
 it would seem they become at risk of losing touch with the
 operator/architecture/user concerns because the operator community
 voice is
 never heard in the voting booth.

 Perhaps this bumps into how it used to be versus how it should be. I
 don't
 know. Just struck me as incongruent with the platform of almost every
 candidate - broadening representation while the current rules
 prohibit that
 level of co-participation.

 Thoughts?


 *Adam Lawson*

 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072


 [1] 

[openstack-dev] [nova] What happened with the Hyper-V generation 2 VMs spec?

2015-05-04 Thread Matt Riedemann

This spec was never approved [1] but the code was merged in Kilo [2].

The blueprint is marked complete in launchpad [3] and it's referenced as 
a new feature in the hyper-v driver in the kilo release notes [4], but 
there is no spec published for consumers that detail the feature [5]. 
Also, the spec mentioned doc impacts which I have to assume weren't 
made, and there were abandoned patches [6] tied to the blueprint, so is 
this half-baked or not?  Are we missing information in the kilo release 
notes?


How do we retroactively approve a spec so it's published to 
specs.openstack.org for posterity when obviously our review process 
broke down?


[1] https://review.openstack.org/#/c/103945/
[2] 
https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z

[3] https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms
[4] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Hyper-V
[5] http://specs.openstack.org/openstack/nova-specs/specs/kilo/
[6] 
https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] On-demand VPNaaS IRC meeting

2015-05-04 Thread Paul Michali
Since it has been a while, I decided to plan for a VPN meeting in our time
slot of Tuesday, 1600 UTC, for tomorrow the 5th of May.

Please join in!

Paul Michali (pc_m)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] On-demand VPNaaS IRC meeting

2015-05-04 Thread Paul Michali
Re: https://wiki.openstack.org/wiki/Meetings/VPNaaS


On Mon, May 4, 2015 at 12:03 PM Paul Michali p...@michali.net wrote:

 Since it has been a while, I decided to plan for a VPN meeting in our time
 slot of Tuesday, 1600 UTC, for tomorrow the 5th of May.

 Please join in!

 Paul Michali (pc_m)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What happened with the Hyper-V generation 2 VMs spec?

2015-05-04 Thread Alessandro Pilotti
Hi Matt,

We originally proposed a Juno spec for this blueprint, but it got postponed to 
Kilo, where it was approved without a spec together with other hypervisor-specific 
blueprints (the so-called “trivial” case).

The BP itself is completed and marked accordingly on launchpad. 

Patches referenced in the BP:

https://review.openstack.org/#/c/103945/
Abandoned: Juno specs. 

https://review.openstack.org/#/c/107177/
Merged

https://review.openstack.org/#/c/107185/
Merged

https://review.openstack.org/#/c/137429/
Abandoned: According to the previous discussions on IRC, this commit is no 
longer necessary.

https://review.openstack.org/#/c/137429/
Abandoned: According to the previous discussions on IRC, this commit is no 
longer necessary.

https://review.openstack.org/#/c/145268/
Abandoned, due to sqlalchemy model limitations



 On 04 May 2015, at 18:41, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 This spec was never approved [1] but the code was merged in Kilo [2].
 
 The blueprint is marked complete in launchpad [3] and it's referenced as a 
 new feature in the hyper-v driver in the kilo release notes [4], but there is 
 no spec published for consumers that detail the feature [5]. Also, the spec 
 mentioned doc impacts which I have to assume weren't made, and there were 
 abandoned patches [6] tied to the blueprint, so is this half-baked or not?  
 Are we missing information in the kilo release notes?
 
 How do we retroactively approve a spec so it's published to 
 specs.openstack.org for posterity when obviously our review process broke 
 down?
 
 [1] https://review.openstack.org/#/c/103945/
 [2] 
 https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z
 [3] https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms
 [4] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Hyper-V
 [5] http://specs.openstack.org/openstack/nova-specs/specs/kilo/
 [6] 
 https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Keystone] Rehashing the Pecan/Falcon/other WSGI debate

2015-05-04 Thread Flavio Percoco

On 02/05/15 12:02 -0700, Morgan Fainberg wrote:




On May 2, 2015, at 10:28, Monty Taylor mord...@inaugust.com wrote:


On 05/01/2015 09:16 PM, Jamie Lennox wrote:
Hi all,

At around the time Barbican was applying for incubation there was a
discussion about supported WSGI frameworks. From memory the decision
at the time was that Pecan was to be the only supported framework and
that for incubation Barbican had to convert to Pecan (from Falcon).

Keystone is looking to ditch our crusty old, home-grown wsgi layer for
an external framework and both Pecan and Falcon are in global
requirements.

In the experimenting I've done Pecan provides a lot of stuff we don't
need and some that just gets in the way. To call out a few:
* the rendering engine really doesn't make sense for us, for APIs, and
where we are often returning different data (not just different views or
data) based on Content-Type.
* The security enforcement within Pecan does not really mesh with how
we enforce policy, nor does the way we build controller objects per
resource. It seems we will have to build this for ourselves on top of
pecan

and there are just various other niggles.

THIS IS NOT SUPPOSED TO START A DEBATE ON THE VIRTUES OF EACH FRAMEWORK.

Everything I've found can be dealt with and pecan will be a vast
improvement over what we use now. I have also not written a POC with
Falcon to know that it will suit any better.

My question is: Does the ruling that Pecan is the only WSGI framework
for OpenStack stand? I don't want to have 100s of frameworks in the
global requirements, but, given falcon is already there, iff a POC
determines that Falcon is a better fit for keystone, can we use it?


a) Just to be clear - I don't actually care


Just to be super clear, I don't care either. :)



That said:

falcon is a wsgi framework written by kgriffs who was PTL of marconi who
has since stopped being involved with OpenStack. My main perception of it has
always been as a set of people annoyed by openstack doing their own
thing. That's fine - but I don't have much of a use for that myself.


ok, I'll bite.

We didn't pick Falcon because Kurt was Marconi's PTL and Falcon's
maintainer. The main reason it was picked was related to performance
first[0] and time (We didn't/don't have enough resources to even think
of porting the API) and at this point, I believe it's not even going
to be considered anymore in the short future.

There were lots of discussions around this, there were POCs and team
work. I think it's fair to say that the team didn't blindly *ignore*
what was recommended as the community framework but picked what
worked best for the service.

[0] https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation



pecan is a wsgi framework written by Dreamhost that eventually moved
itself into stackforge to better enable collaboration with our community
after we settled on it as the API for things moving forward.

Since the decision that new REST apis should be written in Pecan, the
following projects have adopted it:

openstack:
barbican
ceilometer
designate
gnocchi
ironic
ironic-python-agent
kite
magnum
storyboard
tuskar

stackforge:
anchor
blazar
cerberus
cloudkitty
cue
fuel-ostf
fuel-provision
graffiti
libra
magnetodb
monasca-api
mistral
octavia
poppy
radar
refstack
solum
storyboard
surveil
terracotta

On the other hand, the following use falcon:

stachtach-quincy
zaqar



To me this is a strong indicator that pecan will see more eyes and possibly be 
more open to improvement to meet the general need.


+1


That means that for all of the moaning and complaining, there is
essentially one thing that uses it - the project that was started by the
person who wrote it and has since quit.

I'm sure it's not perfect - but the code is in stackforge - I'm sure we
can improve it if there is something missing. OTOH - if we're going to
go back down this road, I'd think it would be more useful to maybe look
at flask or something else that has a large following in the python
community at large to try to reduce the amount of special we are.



+1


Please, lets not go back down this road, not yet at least. :)




But honestly - I think it matters almost not at all, which is why I keep
telling people to just use pecan ... basically, the argument is not
worth it.


+1, go with Pecan if your requirements are not like Zaqar's.
Contribute to Pecan and make it better.

Flavio

--
@flaper87
Flavio Percoco


pgp7Mf9zCWPWn.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What happened with the Hyper-V generation 2 VMs spec?

2015-05-04 Thread Matt Riedemann



On 5/4/2015 11:12 AM, Alessandro Pilotti wrote:

Hi Matt,

We originally proposed a Juno spec for this blueprint, but it got postponed to 
Kilo where it has been approved without a spec together with other hypervisor 
specific blueprints (the so called “trivial” case).

The BP itself is completed and marked accordingly on launchpad.

Patches referenced in the BP:

https://review.openstack.org/#/c/103945/
Abandoned: Juno specs.

https://review.openstack.org/#/c/107177/
Merged

https://review.openstack.org/#/c/107185/
Merged

https://review.openstack.org/#/c/137429/
Abandoned: Acording to the previous discussions on IRC, this commit is no 
longer necessary.

https://review.openstack.org/#/c/137429/
Abandoned: Acording to the previous discussions on IRC, this commit is no 
longer necessary.

https://review.openstack.org/#/c/145268/
Abandoned, due to sqlalchemy model limitations




On 04 May 2015, at 18:41, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

This spec was never approved [1] but the code was merged in Kilo [2].

The blueprint is marked complete in launchpad [3] and it's referenced as a new 
feature in the hyper-v driver in the kilo release notes [4], but there is no 
spec published for consumers that detail the feature [5]. Also, the spec 
mentioned doc impacts which I have to assume weren't made, and there were 
abandoned patches [6] tied to the blueprint, so is this half-baked or not?  Are 
we missing information in the kilo release notes?

How do we retroactively approve a spec so it's published to specs.openstack.org 
for posterity when obviously our review process broke down?

[1] https://review.openstack.org/#/c/103945/
[2] 
https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z
[3] https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms
[4] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Hyper-V
[5] http://specs.openstack.org/openstack/nova-specs/specs/kilo/
[6] 
https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, but this doesn't answer all of the questions.

1. Are there doc impacts from the spec that need to be in the kilo 
release notes?  For example, the spec says:


The Nova driver documentation should include an entry about this topic
including when to use and when not to use generation 2 VMs. A note on 
the relevant Glance image property should be added as well.


I don't see any of that in the kilo release notes.

2. If we have a feature merged, we should have something in 
specs.openstack.org for operators to go back to reference rather than 
dig through ugly launchpad whiteboards or incomplete gerrit reviews 
where what was merged might differ from what was originally proposed in 
the spec in Juno.


3. Is the Hyper-V CI now testing with gen-2 images?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Thierry Carrez
Maish Saidel-Keesing wrote:
 A three point triangle. I like the idea! Anita I assume that you are
 talking about the TC[3], the board [1] and the user committee [2].
 
 I honestly do not see this at the moment as an equally weighted triangle.
 Should they be? Perhaps not, maybe yes.
 
 It could be that my view of things is skew, but here it is.
 
 The way to get something into OpenStack is through code.
 Who submits the code? Developers.
 Who approves code? Reviewers and core
 On top of that you have the PTL
 Above the PTL - you have the TC. They decide what is added into
 OpenStack and (are supposed) drive overall direction.
 
 These are the people that have actionable influence into what goes into
 the products.
 
 AFAIK neither the Foundation - nor the User committee have any
 actionable influence into what goes into the products, what items are
 prioritized and what is dropped.

That's simply acknowledging the mechanics of an open source / open
innovation project like OpenStack. Having the Board or the User
committee decide what goes into the products, what items are
prioritized and what is dropped won't make it magically happen. At the
end of the day, you need a contributor willing to write, review,
prioritize that code.

The contributors to an open source project ultimately make things go in
the open source project. They can be (and should be) influenced by
outside input, especially users of the project. Companies can influence
what is being worked on by funding developers to work on specific
things. But in the end, it all boils down to contributors that get the
work done and therefore make it going in one direction or another.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Ed Leafe
On May 4, 2015, at 7:26 AM, Flavio Percoco fla...@redhat.com wrote:

 TBH, I'm a bit torn. I'm always cheering for innovation, for using the
 right tool, etc, but I also agree with Monty, the linked post and some
 of the arguments that have been made in this thread.

Using something different isn't innovating.

Finding a way to do something significantly better is innovating.


-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-04 Thread Scott Drennan
VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't
see any work on VLAN-aware VMs for Liberty.  There is a blueprint[1] and a
spec[2] which were deferred from Kilo - is this something anyone is looking
at as a Liberty candidate?  I looked but didn't find any recent work - is
there somewhere else where work on this is happening?  No-one has listed it on
the liberty summit topics[3] etherpad, which could mean it's
uncontroversial, but given history on this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Flavio Percoco

On 04/05/15 11:33 -0500, Ed Leafe wrote:

On May 4, 2015, at 7:26 AM, Flavio Percoco fla...@redhat.com wrote:


TBH, I'm a bit torn. I'm always cheering for innovation, for using the
right tool, etc, but I also agree with Monty, the linked post and some
of the arguments that have been made in this thread.


Using something different isn't innovating.

Finding a way to do something significantly better is innovating.


FWIW, I didn't say using something different is innovating. It's what
you do with that tool that makes it so. However, you can also use
innovative tools for common tasks and I could get really philosophical
about this so I'll stop here :)

Flavio



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][tempest] Data-driven testing (DDT) samples

2015-05-04 Thread Madhusudhan Kandadai
Thanks Salvatore and Tom for your response.

Yes, we were able to write data-driven tests in two ways:

(1) using the ddt package - it needs to be installed separately and the
module invoked when writing the tests
(2) using the testscenarios package - this package is already installed and
in use across the neutron projects. Except for one project, the rest are
using testscenarios.

Hence, the community has decided to stick with option 2 in the neutron-lbaas
tree, to follow the same process.
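
For anyone who hasn't used testscenarios before, a minimal sketch of the
pattern (the class, scenario and attribute names below are made up for
illustration, not actual neutron-lbaas tests):

import testscenarios
import testtools

# testscenarios hooks into the standard unittest load_tests protocol and
# multiplies every test in this module by each scenario listed below.
load_tests = testscenarios.load_tests_apply_scenarios


class TestListenerCreate(testtools.TestCase):

    scenarios = [
        ('admin', {'credential_type': 'admin'}),
        ('non_admin', {'credential_type': 'primary'}),
    ]

    def test_create_listener(self):
        # credential_type is injected per scenario; a real test would pick
        # the matching client and assert on the API response instead.
        self.assertIn(self.credential_type, ('admin', 'primary'))

Each test then shows up once per scenario in the run, with the scenario name
appended to the test id.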

Regards,
Madhusudhan

On Mon, May 4, 2015 at 3:08 AM, Tom Barron t...@dyncloud.net wrote:

 On 5/4/15 3:05 AM, Salvatore Orlando wrote:
  Among the OpenStack project of which I have some knowledge, none of them
  uses any DDT library.

 FYI, manila uses DDT for unit tests.

  If you think there might be a library from which lbaas, neutron, or any
  other openstack project might take advantage, we should consider it.
 
  Salvatore
 
  On 14 April 2015 at 20:33, Madhusudhan Kandadai
  madhusudhan.openst...@gmail.com
  mailto:madhusudhan.openst...@gmail.com wrote:
 
  Hi,
 
  I would like to start a thread for the tempest DDT in neutron-lbaas
  tree. The problem comes in when we have testcases for both
  admin/non-admin user. (For example, there is an ongoing patch
  activity: https://review.openstack.org/#/c/171832/). Ofcourse it has
  duplication and want to adhere as per the tempest guidelines. Just
  wondering, whether we are using DDT library in other projects, if it
  is so, can someone please point me the sample code that are being
  used currently. It can speed up this DDT activity for neutron-lbaas.
 

   $ grep -R '@ddt' manila/tests/ | wc -l
 198

  In the meantime, I am also gathering/researching about that. Should
  I have any update, I shall keep you posted on the same.
 
  Thanks,
  Madhusudhan
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 Regards,

 -- Tom



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Clarification required

2015-05-04 Thread Madhusudhan Kandadai
Thanks Brandon for clearing the confusion.
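
For reference, the difference Brandon describes below can be seen with two
plain REST calls against the LBaaS v2 API (a rough sketch; the endpoint,
token and id values are illustrative, not from a real deployment):

import requests

NEUTRON_URL = 'http://controller:9696'        # illustrative endpoint
HEADERS = {'X-Auth-Token': 'a-valid-token'}   # illustrative token
LB_ID = 'the-loadbalancer-id'

# Returns the loadbalancer itself plus only minimal child info (listener ids).
lb = requests.get(
    '%s/v2.0/lbaas/loadbalancers/%s' % (NEUTRON_URL, LB_ID),
    headers=HEADERS).json()

# Returns the full status tree: listeners, pools, members, health monitors.
tree = requests.get(
    '%s/v2.0/lbaas/loadbalancers/%s/statuses' % (NEUTRON_URL, LB_ID),
    headers=HEADERS).json()

The first call matches what Brandon describes (parent plus immediate
children only); the second is the statuses tree mentioned in point 4 below.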

On Mon, May 4, 2015 at 9:52 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

  Hi Madhu,​

 You won't see any pool details from a GET of a loadbalancer.  For every
 entity, you'll only be shown minor information from their parent (if one
 exists) and their children (if any exist).  We may one day have a call to
 just show everything under a loadbalancer, but that's not there yet.


  Thanks,

 Brandon
  --
 *From:* Madhusudhan Kandadai madhusudhan.openst...@gmail.com
 *Sent:* Friday, May 1, 2015 9:43 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [Neutron][LBaaS] Clarification required

Hello,

  I am playing around with Neutron LBaaS API calls as per Neutron/LBaaS/API
 2.0 docs. I have noticed below behavior against working devstack. I would
 like to clarify on it:

  1. I create a loadbalancer using RESTAPI with the attributes -
 'vip_subnet_id' and 'admin_state_up' as 'False'. It is getting created
 successfully

  when I do a GET on the loadbalancer, I could see their relevant
 information.

  2. I create a listener with the loadbalancer_id from step 1 and
 'admin_state_up' as 'False' and able to create listener.

  when I do a GET on loadbalancer again, I could see listener details
 associated with the loadbalancer as expected

  Now the question comes in:-

  3. I create a pool with listener _id and 'admin_state_up' as 'False' and
 able to create pool accordingly

  But, when I do a GET on loadbalancer, I could not see pool details under
 listener associated with the loadbalancer.

  Just curious why I could not see something like this when I do a GET on
 the loadbalancer:

 {
   loadbalancer {
     listener {
       pools
     }
   }
 }


  4. I could see all the details including pool correctly when I do GET on
 loadbalancers/lb_id/statuses

  Thanks,
  Madhusudhan

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Bhandaru, Malini K
In the discussion of N nodes in version V needing to get upgraded to V+1,
I do not see the issue of loss of HA.
These N nodes are the servers; the clients are the ones still at version V.
Does it not make sense to upgrade all the servers to V+1 first
(cross-checking against the database that all servers have upgraded),
then start on the clients, and only turn off compatibility mode once all
clients have upgraded (again cross-checked against the database)?
Also, it would not be a server reboot, just a restart of a single service on
the servers?

Regards
Malini

-Original Message-
From: Jastrzebski, Michal [mailto:michal.jastrzeb...@intel.com] 
Sent: Monday, May 04, 2015 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][heat] Versioned objects backporting

On 5/4/2015 11:50 AM, Angus Salkeld wrote: On Mon, May 4, 2015 at 6:33 
PM, Jastrzebski, Michal 
 michal.jastrzeb...@intel.com mailto:michal.jastrzeb...@intel.com wrote:
 
 On 5/4/2015 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30,
 2015 at 9:25 PM, Jastrzebski, Michal
   michal.jastrzeb...@intel.com
 mailto:michal.jastrzeb...@intel.com
 mailto:michal.jastrzeb...@intel.com
 mailto:michal.jastrzeb...@intel.com wrote:
  
   Hello,
  
   After discussions, we've spotted possible gap in versioned
 objects:
   backporting of too-new versions in RPC.
   Nova does that by conductor, but not every service has something
   like that. I want to propose another approach:
  
   1. Milestone pinning - we need to make single reference to
 versions
   of various objects - for example heat in version 15.1 will mean
   stack in version 1.1 and resource in version 1.5.
   2. Compatibility mode - this will add flag to service
   --compatibility=15.1, that will mean that every outgoing RPC
   communication will be backported before sending to object
 versions
   bound to this milestone.
  
   With this 2 things landed we'll achieve rolling upgrade like
 that:
   1. We have N nodes in version V
   2. We take down 1 node and upgrade code to version V+1
   3. Run code in ver V+1 with --compatibility=V
   4. Repeat 2 and 3 until every node will have version V+1
   5. Restart each service without compatibility flag
  
   This approach has one big disadvantage - 2 restarts required, but
   should solve problem of backporting of too-new versions.
   Any ideas? Alternatives?
  
  
   AFAIK if nova gets a message that is too new, it just forwards it on
   (and a newer server will handle it).
  
   With that this *should* work, shouldn't it?
   1. rolling upgrade of heat-engine
 
 That will be hard part. When we'll have only one engine from given
 version, we lose HA. Also, since we never know where given task
 lands, we might end up with one task bouncing from old version to
 old version, making call indefinitely long. Ofc with each upgraded
 engine we'll lessen change for that to happen, but I think we should
 aim for lowest possible downtime. That being said, that might be
 good idea to solve this problem not-too-clean, but quickly.
 
 
 I don't think losing HA in the time it takes some heat-engines to 
 stop, install new software and restart the heat-engines is a big deal (IMHO).
 
 -Angus

We will also lose the guarantee that an RPC call will be completed in any given 
time. It can bounce from incompatible node to incompatible node until no 
incompatible nodes are left. Especially if there are no other tasks on the 
queue: when a service returns the call to the queue and takes a call right 
afterwards, there is a good chance it will take that same one, and we end up 
with a loop.
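
For concreteness, a rough sketch of what the proposed milestone pin plus
--compatibility flag could look like on the sending side, using
oslo.versionedobjects' obj_to_primitive(target_version=...). The mapping and
function below are assumptions for illustration only, not existing heat code:

# Hypothetical milestone pin: one release tag mapped to per-object versions.
MILESTONE_PINS = {
    '15.1': {'Stack': '1.1', 'Resource': '1.5'},
}


def backport_for_milestone(obj, milestone):
    """Serialize an oslo.versionedobjects object, downgraded to the version
    pinned for `milestone`, before it goes out over RPC.

    obj_to_primitive(target_version=...) runs obj_make_compatible() on the
    object, dropping fields that are too new for the target version.
    """
    target = MILESTONE_PINS[milestone].get(obj.obj_name())
    return obj.obj_to_primitive(target_version=target)

A receiving service still on the older milestone then never sees fields
newer than its pin.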

 
 
  2. db sync
  3. rolling upgrade of heat-api
 
  -Angus
 
 
  Regards,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Doug Hellmann
Excerpts from Maish Saidel-Keesing's message of 2015-05-04 17:46:21 +0300:
 On 05/04/15 17:07, Anita Kuno wrote:
  I'd like to go back to the beginning to clarify something.
 
  On 04/29/2015 02:34 PM, Adam Lawson wrote:
  So I started replying to Doug's email in a different thread but didn't want
  to hi-jack that so I figured I'd present my question as a more general
  question about how voting is handled for the TC.
 
  Anyway, I find it curious that the TC is elected by those within the
  developer community but TC candidates talk about representing the operator
  community
  In my statements I talked about acknowledging the operator community not
  representing them. When I speak, I represent myself and my best
  understanding of a certain situation, if others find value in the
  position I hold, they will let me know.
 
  In my view of what comprises OpenStack, the TC is one point of a
  triangle and the operators are an entirely different point. Trying to
  get two points of a triangle to be the same thing compromises the
  integrity of the structure. Each needs to play its part, not try to be
  something it is not.
 A three point triangle. I like the idea! Anita I assume that you are 
 talking about the TC[3], the board [1] and the user committee [2].
 
 I honestly do not see this at the moment as an equally weighted triangle.
 Should they be? Perhaps not, maybe yes.
 
 It could be that my view of things is skew, but here it is.
 
 The way to get something into OpenStack is through code.
 Who submits the code? Developers.
 Who approves code? Reviewers and core
 On top of that you have the PTL
 Above the PTL - you have the TC. They decide what is added into 
 OpenStack and (are supposed) drive overall direction.
 
 These are the people that have actionable influence into what goes into 
 the products.
 
 AFAIK neither the Foundation - nor the User committee have any 
 actionable influence into what goes into the products, what items are 
 prioritized and what is dropped.

 
 If each of the three point of the triangle had proper (actionable) 
 influence and (actionable) say in what goes on and happens within the 
 OpenStack then that would be ideal. Does the representation have to be 
 equal? I don't think so. But it should be there somehow.
 
 One of the points of the User Committee mission is:
 Consolidate user requirements and present these to the management board 
 and technical committee
 
 There is no mention that I could find on any of the other missions[3][1] 
 that says that the TC or the board have to do anything with user 
 requirements presented to them.
 
 I do not know if this has ever been addressed before, but it should be 
 defined: a process where the TC collects requirements from the User 
 Committee or Board and, through a defined process, these trickle down 
 into the teams and projects.

You're describing these relationships in a much more hierarchical manner
than I think reflects their reality.

Decisions about the future of OpenStack are made by the people who
show up and contribute.  We try to identify common goals and
priorities, and where there's little overlap we support each other
in ways that we perceive improve the project. That process uses
input from many sources, including product managers from contributing
companies and operator/user feedback. As Thierry pointed out, there's
no community group dictating what anyone works on or what the
priorities are.

Again, I'm curious about the specific issues driving this discussion.
Are there bugs or blueprints that you feel need more attention?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-04 Thread Ed Leafe

On 05/04/2015 11:42 AM, Flavio Percoco wrote:

 Using something different isn't innovating.
 
 Finding a way to do something significantly better is
 innovating.
 
 FWIW, I didn't say using something different is innovating. It's
 what you do with that tool that makes it so. However, you can also
 use innovative tools for common tasks and I could get really
 philosophical about this so I'll stop here :)

Oh, I know you know. I was just commenting on the common reaction to
Oooh, shiny! as a persuasive technical argument. :)

-- 

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Adam Lawson
So, Thierry, I agree: developers are required to make it happen. I would say,
however, that acknowledging the importance of developer contributions and
selecting leadership from the development community is really half the
battle, as it's pretty rare to see project teams led and governed by
developers alone. I think addressing the inclusion of
architects/operators/admins within this committee is a hugely positive
development.

I also liked your suggestions earlier about potential ways to implement.
On May 4, 2015 9:31 AM, Thierry Carrez thie...@openstack.org wrote:

 Maish Saidel-Keesing wrote:
  A three point triangle. I like the idea! Anita I assume that you are
  talking about the TC[3], the board [1] and the user committee [2].
 
  I honestly do not see this at the moment as an equally weighted triangle.
  Should they be? Perhaps not, maybe yes.
 
  It could be that my view of things is skew, but here it is.
 
  The way to get something into OpenStack is through code.
  Who submits the code? Developers.
  Who approves code? Reviewers and core
  On top of that you have the PTL
  Above the PTL - you have the TC. They decide what is added into
  OpenStack and (are supposed) drive overall direction.
 
  These are the people that have actionable influence into what goes into
  the products.
 
  AFAIK neither the Foundation - nor the User committee have any
  actionable influence into what goes into the products, what items are
  prioritized and what is dropped.

 That's simply acknowledging the mechanics of an open source / open
 innovation project like OpenStack. Having the Board or the User
 committee decide what goes into the products, what items are
 prioritized and what is dropped won't make it magically happen. At the
 end of the day, you need a contributor willing to write, review,
 prioritize that code.

 The contributors to an open source project ultimately make things go in
 the open source project. They can be (and should be) influenced by
 outside input, especially users of the project. Companies can influence
 what is being worked on by funding developers to work on specific
 things. But in the end, it all boils down to contributors that get the
 work done and therefore make it going in one direction or another.

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-04 Thread Pavel Bondar
Hi,

While fixing failures in db_base_plugin_v2.py with the new IPAM[1] I ran
into check-grenade-dsvm-neutron failures[2].
check-grenade-dsvm-neutron installs stable/kilo, creates
networks/subnets and upgrades to the patched master.
So it validates that the migrations pass and that the installation still
works afterwards.

This is where the failure occurs.
Earlier there was an agreement about using pluggable IPAM only for
greenfield installations, so the migration script from built-in IPAM to
pluggable IPAM was postponed.
But check-grenade-dsvm-neutron validates the upgrade (brownfield) scenario.
So do we want to update this agreement and implement migration scripts
from built-in IPAM to pluggable IPAM now?

Details about the failures.
Subnets created before the patch was applied do not have a corresponding
IPAM subnet,
so a lot of failures like this are observed in [2]:
Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found
Currently the config option in the patch is modified to use pluggable_ipam
by default (to catch all possible UT/tempest failures).
But before the merge the patch will be switched back to the non-ipam
implementation by default.

I would prefer to implement the migration script as a separate review,
since [1] is already quite big and hard to review.

[1] https://review.openstack.org/#/c/153236
[2]
http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz

- Pavel Bondar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Sean Dague
On 05/04/2015 01:11 PM, Doug Hellmann wrote:
 Excerpts from Maish Saidel-Keesing's message of 2015-05-04 17:46:21 +0300:
 On 05/04/15 17:07, Anita Kuno wrote:
 I'd like to go back to the beginning to clarify something.

 On 04/29/2015 02:34 PM, Adam Lawson wrote:
 So I started replying to Doug's email in a different thread but didn't want
 to hi-jack that so I figured I'd present my question as a more general
 question about how voting is handled for the TC.

 Anyway, I find it curious that the TC is elected by those within the
 developer community but TC candidates talk about representing the operator
 community
 In my statements I talked about acknowledging the operator community not
 representing them. When I speak, I represent myself and my best
 understanding of a certain situation, if others find value in the
 position I hold, they will let me know.

 In my view of what comprises OpenStack, the TC is one point of a
 triangle and the operators are an entirely different point. Trying to
 get two points of a triangle to be the same thing compromises the
 integrity of the structure. Each needs to play its part, not try to be
 something it is not.
 A three point triangle. I like the idea! Anita I assume that you are 
 talking about the TC[3], the board [1] and the user committee [2].

 I honestly do not see this at the moment as an equally weighted triangle.
 Should they be? Perhaps not, maybe yes.

 It could be that my view of things is skew, but here it is.

 The way to get something into OpenStack is through code.
 Who submits the code? Developers.
 Who approves code? Reviewers and core
 On top of that you have the PTL
 Above the PTL - you have the TC. They decide what is added into 
 OpenStack and (are supposed) drive overall direction.

 These are the people that have actionable influence into what goes into 
 the products.

 AFAIK neither the Foundation - nor the User committee have any 
 actionable influence into what goes into the products, what items are 
 prioritized and what is dropped.


 If each of the three point of the triangle had proper (actionable) 
 influence and (actionable) say in what goes on and happens within the 
 OpenStack then that would be ideal. Does the representation have to be 
 equal? I don't think so. But it should be there somehow.

 One of the points of the User Committee mission is:
 Consolidate user requirements and present these to the management board 
 and technical committee

 There is no mention that I could find on any of the other missions[3][1] 
 that says that the TC or the board have to do anything with user 
 requirements presented to them.

 I do not know if this has ever been addressed before, but it should be 
 defined: a process where the TC collects requirements from the User 
 Committee or Board and, through a defined process, these trickle down 
 into the teams and projects.
 
 You're describing these relationships in a much more hierarchical manner
 than I think reflects their reality.
 
 Decisions about the future of OpenStack are made by the people who
 show up and contribute.  We try to identify common goals and
 priorities, and where there's little overlap we support each other
 in ways that we perceive improve the project. That process uses
 input from many sources, including product managers from contributing
 companies and operator/user feedback. As Thierry pointed out, there's
 no community group dictating what anyone works on or what the
 priorities are.

I think that's the dead on point. You get other people to help with
features / fixes not because they are told to, but because they also
believe them to be important / exciting.

Being a PTL or on the TC gives you a slightly larger soapbox, however
I'd argue that typically the individual earned the larger soapbox first,
and becoming PTL / TC was the effect of having built credibility and
influence, not the cause.

This is the nature of collaborative open development.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group (gantt) meeting 5/5

2015-05-04 Thread Dugger, Donald D
I will not be able to make the meeting tomorrow but the IRC channel will be 
available for anyone who wants to get together.  For an agenda I would suggest:


1) Liberty specs (tracking page - https://wiki.openstack.org/wiki/Gantt/liberty 
)

2) Vancouver design summit - more thoughts?

3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][release] oslo.middleware 1.2.0

2015-05-04 Thread Doug Hellmann
We are overjoyed to announce the release of:

oslo.middleware 1.2.0: Oslo Middleware library

For more details, please see the git log history below and:

http://launchpad.net/oslo.middleware/+milestone/1.2.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

Changes in oslo.middleware 1.1.0..1.2.0
---

bff184a Imported Translations from Transifex
09ffae3 Update CORS tests to use config fixture's load_raw_values
ee1fc55 Updated from global requirements

Diffstat (except docs and test files)
-

.../de/LC_MESSAGES/oslo.middleware-log-error.po|  1 -
.../locale/de/LC_MESSAGES/oslo.middleware.po   |  7 +--
.../en_GB/LC_MESSAGES/oslo.middleware-log-error.po |  1 -
.../locale/en_GB/LC_MESSAGES/oslo.middleware.po|  7 +--
.../fr/LC_MESSAGES/oslo.middleware-log-error.po|  1 -
.../locale/fr/LC_MESSAGES/oslo.middleware.po   |  7 +--
requirements.txt   |  2 +-
8 files changed, 38 insertions(+), 60 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index b488bbd..4547208 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Babel>=1.3
-oslo.config>=1.9.3  # Apache-2.0
+oslo.config>=1.11.0  # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help: help resolve Depends-On for review

2015-05-04 Thread Matthew Treinish
On Mon, May 04, 2015 at 10:05:03PM +0800, Yu Xing YX Wang wrote:
 
 Hi,
 
 I have a review (https://review.openstack.org/#/c/178561/) which depends
 on another review (https://review.openstack.org/#/c/178546/).
 This Depends-On makes the first review's build fail. Who can help me
 resolve it?

So I think the issue you're hitting is that none of the check jobs running on
178561 install tempest-lib from source; instead they use the latest released
version (in this case 0.5.0), which will not include your proposed
tempest-lib patch.  You won't be able to land a change in tempest to use a new
api in tempest-lib until a new release is pushed which includes the new feature
you're adding.

If you want to verify your tempest-lib patch works with your tempest patch
you'll need to run a job that installs tempest-lib from source instead of the
latest release. There is a job configured to do this included in tempest's
experimental queue. So just leave a review comment 'check experimental' in your
tempest commit (with the depends-on still in the commit msg) and look for the
results from the gate-tempest-dsvm-neutron-src-tempest-lib job.

-Matt Treinish

 fixes Bug1448217
 Closes-Bug: 1448217
 Depends-On: I86bf157f1bdb44f5fc579dc5317784fe31df8521
 Change-Id: I57649f5aac9b1abe1a9961d4b35479372ebee519
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Joe Gordon
Before going any further, I am proposing something to make it easier for
the developer community to keep track of what other projects are working
on. I am not proposing anything to directly help operators or users, that
is a separate problem space.



In Mark McClain's TC candidacy email he brought up the issue of cross
project communication[0]:

  Our codebase has grown significantly over the years and a
contributor must invest significant time to understand and follow
every project; however many contributors have limited time must choose
a subset of projects to direct their focus.  As a result, it becomes
important to employ cross project communication to explain major
technical decisions, share solutions to project challenges that might
be widely applicable, and leverage our collective experience.  The TC
should seek new ways to facilitate cross project communication that
will enable the community to craft improvements to the interfaces
between projects as there will be greater familiarity between across
the boundary.

 Better cross project communication will make it easier to share technical
solutions and promote a more unified experience across projects.  It seems
like just about every time I talk to people from different projects I learn
about something interesting and relevant that they are working on.

While I usually track discussions on the mailing list, it is a poor way of
keeping track of which big issues each project is working on. Stefano's
'OpenStack Community Weekly Newsletter' does a good job of highlighting
many things, including important mailing list conversations, but it doesn't
really answer the question of what X (Ironic, Nova, Neutron, Cinder,
Keystone, Heat etc.) is up to.

To tackle this I would like to propose the idea of a periodic,
developer-oriented newsletter, and if we agree to go forward with this,
hopefully the foundation can help us find someone to write it.

Now on to the details.

I am not sure what the right cadence for this newsletter would be, but I
think weekly would be too frequent and once per six-month cycle would be
too infrequent.

The  big questions I would like to see answered are:

* What are the big challenges each project is currently working on?
* What can we learn from each other?
* Where are individual projects trying to solve the same problem
independently?

To answer these questions one needs to look at a lot of sources, including:

* Weekly meeting logs, or hopefully just the notes assuming we get better
at taking detailed notes
* approved specs
* periodically talk to the PTL of each project to see if any big
discussions were discussed else where
* Topics selected for discussion at summits

Off the top of my head here are a few topics that would make good
candidates for this newsletter:

* What are different projects doing with microversioned APIs, I know that
at least two projects are tackling this
* How has the specs process evolved in each project, we all started out
from a common point but seem to have all gone in slightly different
directions
* What will each projects priorities be in Liberty? Do any of them overlap?
* Any process changes that projects have tried that worked or didn't work
* How is functional testing evolving in each project


Would this help with cross project communication? Is this feasible? Other
thoughts?

best,
Joe




[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062361.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Next weekly meeting cancelled.

2015-05-04 Thread Nikhil Komawar
Hi all,

In the event of a conflict with Glance virtual mini-summit-first-session [1] 
this Thursday May 7th UTC 1400 onwards, we are cancelling the weekly meeting. 
Please look for updates on the next meeting in the weekly meeting etherpad [2].

[1] https://etherpad.openstack.org/p/liberty-glance-virtual-mini-summit
[2] https://etherpad.openstack.org/p/glance-team-meeting-agenda

Thanks,
 -Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Nikhil Komawar

This is a really nice idea. I feel the same: we can offload some of the
work from the liaisons and reduce the number of syncs that need to happen.
This can be a good source of asynchronous communication; however, I still
feel that we need to keep a good balance of both. Also, I like the proposed
scope of the newsletter.

Thanks,
 -Nikhil


From: Joe Gordon joe.gord...@gmail.com
Sent: Monday, May 4, 2015 3:03 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [all] cross project communication: periodic developer 
newsletter?
  


   
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Robert Collins
On 5 May 2015 at 07:03, Joe Gordon joe.gord...@gmail.com wrote:
 Before going any further, I am proposing something to make it easier for the
 developer community to keep track of what other projects are working on. I
 am not proposing anything to directly help operators or users, that is a
 separate problem space.

I like the thrust of your proposal.

Any reason not to ask the existing newsletter to be a bit richer? Much
of the same effort is required to do the existing one and the content
you propose IMO.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson

On 05/04/2015 06:03 PM, Mathieu Gagné wrote:

On 2015-05-04 7:35 PM, Rich Megginson wrote:

The way authentication works with the Icehouse branch is that
puppet-keystone reads the admin_token and admin_endpoint from
/etc/keystone/keystone.conf and passes these to the keystone command via
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
respectively.

This will not work on a node where Keystone is not installed (unless you
copy /etc/keystone/keystone.conf to all of your nodes).

I am assuming there are admins/operators that have actually deployed
OpenStack using puppet on nodes where Keystone is not installed?

We are provisioning keystone resources from a privileged keystone node
which accepts the admin_token. All other keystone servers has the
admin_token_auth middleware removed for obvious security reasons.



If so, how?  How do you specify the authentication credentials?  Do you
use environment variables?  If so, how are they specified?

When provisioning resources other than Keystones ones, we use custom
puppet resources and the credentials are passed as env variables to the
exec command. (they are mainly based on exec resources)


I'm talking about the case where you are installing an OpenStack service 
other than Keystone using puppet, and that puppet code for that module 
needs to create some sort of Keystone resource.


For example, install Glance on a node other than the Keystone node. 
puppet-glance is going to call class glance::keystone::auth, which will 
call keystone::resource::service_identity, which will call keystone_user 
{ $name }.  The openstack provider used by keystone_user is going to 
need Keystone admin credentials in order to create the user.  How are 
you passing those credentials?  As env. vars?  How?





I'm starting to think about moving away from env variables and use a
configuration file instead. I'm not sure yet about the implementation
details but that's the main idea.


Is there a standard openrc location?  Could openrc be extended to hold 
parameters such as the default domain to use for Keystone resources?  
I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which 
are used for _authentication_, not resource creation.






For Keystone v3, in order to use v3 for authentication, and in order to
use the v3 identity api, there must be some way to specify the various
domains to use - the domain for the user, the domain for the project, or
the domain to get a domain scoped token.

If I understand correctly, you have to scope the user to a domain and
scope the project to a domain: user1@domain1 wishes to get a token
scoped to project1@domain2 to manage resources within the project?


Correct.  So you need to have some way to specify the domain for the 
user and the domain for the project (or the domain for a domain scoped 
token which allows you to manage resources within a domain). These 
correspond to the openstack command line parameters:

http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/
./myapp --os-auth-plugin v3password --help
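
In client terms that maps to something like the following with the
keystoneauth1 library (a minimal sketch; the URL, user, project and password
values are illustrative):

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url='http://keystone.example.com:5000/v3',
    username='glance',
    password='secret',
    user_domain_name='Default',      # domain the user lives in
    project_name='services',
    project_domain_name='Default',   # domain the project lives in
)
sess = session.Session(auth=auth)
print(sess.get_token())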





There is a similar issue when creating domain scoped resources like
users and projects.  As opposed to editing dozens of manifests to add
domain parameters to every user and project (and the classes that call
keystone_user/tenant, and the classes that call those classes, etc.), is
there some mechanism to specify a default domain to use?  If not, what
about using the same mechanism used today to specify the Keystone
credentials?

I see there is support for a default domain in keystone.conf. You will
find it defined by the identity/default_domain_id=default config value.

Is this value not usable?


It is usable, and will be used, _only on Keystone nodes_. If you are on 
a node without Keystone, where will the default id come from?



And is it reasonable to assume the domain
default will always be present?


Yes, but that may not be the default domain.  Consider the case where 
you may want to separate user accounts from service pseudo accounts, 
by having them in separate domains.




Or is the question more related to the need to somehow override this
value in Puppet?


If there is a standard Puppet mechanism for being able to provide global 
parameters, other than something like rc files or environment variables, 
then yes.






The goal is that all keystone domain scoped resources will eventually
require specifying a domain, but that will take quite a while and I
would like to provide an incremental upgrade path.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara][oslo]Error in Log.debug

2015-05-04 Thread Li, Chen
Hi all,

I just upgraded my devstack and re-installed.

My sahara keeps reporting errors:

2015-05-05 10:42:00.453 DEBUG sahara.openstack.common.loopingcall [-] Dynamic 
looping call bound method SaharaPeriodicTasks.run_periodic_tasks of 
sahara.service.periodic.SaharaPeriodicTasks object at 0x7f2ccef987d0 
sleeping for 35.91 seconds from (pid=5361) _inner 
/opt/stack/sahara/sahara/openstack/common/loopingcall.py:132
2015-05-05 10:42:36.397 DEBUG sahara.openstack.common.periodic_task [-] Running 
periodic task SaharaPeriodicTasks.terminate_unneeded_transient_clusters from 
(pid=5361) run_periodic_tasks 
/opt/stack/sahara/sahara/openstack/common/periodic_task.py:219
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_name'
Logged from file periodic.py, line 137
2015-05-05 10:42:36.434 DEBUG sahara.openstack.common.loopingcall [-] Dynamic 
looping call bound method SaharaPeriodicTasks.run_periodic_tasks of 
sahara.service.periodic.SaharaPeriodicTasks object at 0x7f2ccef987d0 
sleeping for 9.96 seconds from (pid=5361) _inner 
/opt/stack/sahara/sahara/openstack/common/loopingcall.py:132
2015-05-05 10:42:46.408 DEBUG sahara.openstack.common.periodic_task [-] Running 
periodic task SaharaPeriodicTasks.update_job_statuses from (pid=5361) 
run_periodic_tasks 
/opt/stack/sahara/sahara/openstack/common/periodic_task.py:219
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_name'
Logged from file periodic.py, line 131


Anyone know why this happens ???
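
As far as I can tell, the KeyError itself is plain stdlib logging behaviour:
the configured format string references a 'user_name' key that this log
record's dict does not carry. A minimal, sahara-free reproduction of the
same failure:

import logging

# The context format string asks for %(user_name)s ...
formatter = logging.Formatter('%(user_name)s %(message)s')
# ... but this record, like the one from the periodic task, has no such key.
record = logging.LogRecord(
    'demo', logging.DEBUG, __file__, 1, 'hello', None, None)

try:
    formatter.format(record)
except (KeyError, ValueError) as exc:  # KeyError: 'user_name' on Python 2.7
    print(exc)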


Thanks.
-chen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Emilien Macchi


On 05/04/2015 10:37 PM, Rich Megginson wrote:
 On 05/04/2015 07:52 PM, Mathieu Gagné wrote:
 On 2015-05-04 9:15 PM, Rich Megginson wrote:
 On 05/04/2015 06:03 PM, Mathieu Gagné wrote:
 On 2015-05-04 7:35 PM, Rich Megginson wrote:
 The way authentication works with the Icehouse branch is that
 puppet-keystone reads the admin_token and admin_endpoint from
 /etc/keystone/keystone.conf and passes these to the keystone
 command via
 the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
 respectively.

 This will not work on a node where Keystone is not installed
 (unless you
 copy /etc/keystone/keystone.conf to all of your nodes).

 I am assuming there are admins/operators that have actually deployed
 OpenStack using puppet on nodes where Keystone is not installed?
 We are provisioning keystone resources from a privileged keystone node
 which accepts the admin_token. All other keystone servers has the
 admin_token_auth middleware removed for obvious security reasons.


 If so, how?  How do you specify the authentication credentials?  Do
 you
 use environment variables?  If so, how are they specified?
 When provisioning resources other than Keystones ones, we use custom
 puppet resources and the credentials are passed as env variables to the
 exec command. (they are mainly based on exec resources)
 I'm talking about the case where you are installing an OpenStack service
 other than Keystone using puppet, and that puppet code for that module
 needs to create some sort of Keystone resource.

 For example, install Glance on a node other than the Keystone node.
 puppet-glance is going to call class glance::keystone::auth, which will
 call keystone::resource::service_identity, which will call keystone_user
 { $name }.  The openstack provider used by keystone_user is going to
 need Keystone admin credentials in order to create the user.
 We fixed that part by not provisioning Keystone resources from Glance
 nodes but from Keystone nodes instead.

 We do not allow our users to create users/groups/projects, only a user
 with the admin role can do it. So why would you want to store/use admin
 credentials on an unprivileged nodes such as Glance? IMO, the glance
 user shouldn't be able to create/edit/delete users/projects/endpoints,
 that's the keystone nodes' job.
 
 Ok.  You don't need the Keystone superuser admin credentials on the
 Glance node.
 
 Is the puppet-glance code completely separable so that you can call only
 glance::keystone::auth (or other classes that use Keystone resources)
 from the Keystone node, and all of the other puppet-glance code on the
 Glance node?  Does the same apply to all of the other puppet modules?
 

 If you do not wish to explicitly define Keystone resources for Glance on
 Keystone nodes but instead let Glance nodes manage their own resources,
 you could always use exported resources.

 You let Glance nodes export their keystone resources and then you ask
 Keystone nodes to realize them where admin credentials are available. (I
 know some people don't really like exported resources for various
 reasons)
 
 I'm not familiar with exported resources.  Is this a viable option that
 has less impact than just requiring Keystone resources to be realized on
 the Keystone node?

I'm not in favor of having exported resources because it requires
PuppetDB, and a lot of people try to avoid that.
For now, we've been able to set up all of OpenStack without PuppetDB in
TripleO and in some other installers; we might want to keep this benefit.



 How are you passing those credentials?  As env. vars?  How?
 As stated, we use custom Puppet resources (defined types) which are
 mainly wrapper around exec. You can pass environment variable to exec
 through the environment parameter. I don't like it but that's how I did
 it ~2 years ago. I haven't changed it due to lack of need to change it.
 This might change soon with Keystone v3.
 
 Ok.
 


 I'm starting to think about moving away from env variables and use a
 configuration file instead. I'm not sure yet about the implementation
 details but that's the main idea.
 Is there a standard openrc location?  Could openrc be extended to hold
 parameters such as the default domain to use for Keystone resources?
 I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which
 are used for _authentication_, not resource creation.
 I'm not aware of any standard openrc location other than ~/.openrc
 which needs to be sourced before running any OpenStack client commands.

 I however understand what you mean. I do not have any idea on how I
 would implement it. I'm still hoping someday to be enlightened by a
 great solution.

 I'm starting to think about some sort of credentials vault. You store
 credentials in it and you tell your resource to use that specific
 credentials. You then no longer need to pass around 6-7
 variables/parameters.
 
 I'm sure Adam Young has some ideas about this . . .
 


 There is a similar issue when creating domain scoped 

Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-05-04 Thread Robert Collins
On 7 April 2015 at 10:43, Robert Collins robe...@robertcollins.net wrote:

 $ time openstack -h
 <snip>
 real    0m2.491s
 user    0m2.378s
 sys     0m0.111s


 pbr should be snappy - taking 100ms to get the version is wrong.

I've now tested this.
With an egg-info present in a git tree:
python -m timeit -n 1 -r 1 -s "import pbr.version" \
    "pbr.version.VersionInfo('testtools').semantic_version()"
1 loops, best of 1: 166 usec per loop

Without an egg-info present in a git tree:
python -m timeit -n 1 -r 1 -s "import pbr.version" \
    "pbr.version.VersionInfo('testtools').semantic_version()"
1 loops, best of 1: 254 msec per loop

Installed:
python -m timeit -n 1 -r 1 -s "import pbr.version" \
    "pbr.version.VersionInfo('testtools').semantic_version()"
1 loops, best of 1: 189 usec per loop

So: the 200s case occurs when:
 - you're running out of git
 - have not built an egg_info

This is precisely the case where pkg_resources lookups cannot work,
and we are falling back to git. Its also the case where not using pbr
would result in no version being available and an error or $whatnot.

From this I conclude that the tests testing performance are not
representative of end user experience - because we expect end users to
be running installed trees (either via pip install -e . [which also
creates an egg-info directory] or pip install $projectname or
apt-get/yum/etc install $projectname). I don't know what other
things may be wrong with the measurement environment, but we should
fix them so that we can be confident what we change matters.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Mathieu Gagné
On 2015-05-04 9:15 PM, Rich Megginson wrote:
 On 05/04/2015 06:03 PM, Mathieu Gagné wrote:
 On 2015-05-04 7:35 PM, Rich Megginson wrote:
 The way authentication works with the Icehouse branch is that
 puppet-keystone reads the admin_token and admin_endpoint from
 /etc/keystone/keystone.conf and passes these to the keystone command via
 the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
 respectively.

 This will not work on a node where Keystone is not installed (unless you
 copy /etc/keystone/keystone.conf to all of your nodes).

 I am assuming there are admins/operators that have actually deployed
 OpenStack using puppet on nodes where Keystone is not installed?
 We are provisioning keystone resources from a privileged keystone node
 which accepts the admin_token. All other keystone servers has the
 admin_token_auth middleware removed for obvious security reasons.


 If so, how?  How do you specify the authentication credentials?  Do you
 use environment variables?  If so, how are they specified?
 When provisioning resources other than Keystones ones, we use custom
 puppet resources and the credentials are passed as env variables to the
 exec command. (they are mainly based on exec resources)
 
 I'm talking about the case where you are installing an OpenStack service
 other than Keystone using puppet, and that puppet code for that module
 needs to create some sort of Keystone resource.
 
 For example, install Glance on a node other than the Keystone node.
 puppet-glance is going to call class glance::keystone::auth, which will
 call keystone::resource::service_identity, which will call keystone_user
 { $name }.  The openstack provider used by keystone_user is going to
 need Keystone admin credentials in order to create the user.

We fixed that part by not provisioning Keystone resources from Glance
nodes but from Keystone nodes instead.

We do not allow our users to create users/groups/projects, only a user
with the admin role can do it. So why would you want to store/use admin
credentials on an unprivileged nodes such as Glance? IMO, the glance
user shouldn't be able to create/edit/delete users/projects/endpoints,
that's the keystone nodes' job.

If you do not wish to explicitly define Keystone resources for Glance on
Keystone nodes but instead let Glance nodes manage their own resources,
you could always use exported resources.

You let Glance nodes export their keystone resources and then you ask
Keystone nodes to realize them where admin credentials are available. (I
know some people don't really like exported resources for various reasons)
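
A minimal sketch of what that could look like - assuming PuppetDB is
available, and with keystone_user parameters that are only illustrative,
not the exact puppet-keystone interface:

# On the Glance node: declare the resource as exported (@@) so it is only
# stored in PuppetDB and not applied locally.
@@keystone_user { 'glance':
  ensure   => present,
  password => 'a_big_secret',    # illustrative value
  email    => 'glance@localhost',
}

# On the Keystone node: collect and apply every exported keystone_user,
# where the admin credentials are available.
Keystone_user <<| |>>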


 How are you passing those credentials?  As env. vars?  How?

As stated, we use custom Puppet resources (defined types) which are
mainly wrappers around exec. You can pass environment variables to exec
through the environment parameter. I don't like it but that's how I did
it ~2 years ago. I haven't changed it due to a lack of need to change it.
This might change soon with Keystone v3.
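
Roughly, those wrappers look like the following - a simplified,
hypothetical defined type; the command, names and credentials are made up
for the example:

define openstack_wrapper::flavor ($flavor_name = $title) {
  # Wrap the openstack CLI in an exec and pass the credentials through
  # the environment parameter instead of storing them on disk.
  exec { "create-flavor-${flavor_name}":
    command     => "openstack flavor create ${flavor_name}",
    path        => ['/usr/bin', '/bin'],
    unless      => "openstack flavor show ${flavor_name}",
    environment => [
      'OS_AUTH_URL=http://keystone.example.com:5000/v2.0',
      'OS_USERNAME=admin',
      'OS_PASSWORD=secrete',
      'OS_TENANT_NAME=admin',
    ],
  }
}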


 I'm starting to think about moving away from env variables and use a
 configuration file instead. I'm not sure yet about the implementation
 details but that's the main idea.
 
 Is there a standard openrc location?  Could openrc be extended to hold
 parameters such as the default domain to use for Keystone resources? 
 I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which
 are used for _authentication_, not resource creation.

I'm not aware of any standard openrc location other than ~/.openrc
which needs to be sourced before running any OpenStack client commands.

I however understand what you mean. I do not have any idea how I
would implement it. I'm still hoping someday to be enlightened by a
great solution.

I'm starting to think about some sort of credentials vault. You store
credentials in it and you tell your resource to use that specific set of
credentials. You then no longer need to pass around 6-7
variables/parameters.


 There is a similar issue when creating domain scoped resources like
 users and projects.  As opposed to editing dozens of manifests to add
 domain parameters to every user and project (and the classes that call
 keystone_user/tenant, and the classes that call those classes, etc.), is
 there some mechanism to specify a default domain to use?  If not, what
 about using the same mechanism used today to specify the Keystone
 credentials?
 I see there is support for a default domain in keystone.conf. You will
 find it defined by the identity/default_domain_id=default config value.

 Is this value not usable?
 
 It is usable, and will be used, _only on Keystone nodes_. If you are on
 a node without Keystone, where will the default id come from?

As you probably know already, Puppet can't guess those default values,
nor could Glance.

I'm suggesting to not provision keystone resources from nodes other than
keystone themselves. It solves (or avoids) a lot of problems.

I think we have to change the way we think about Keystone resources

Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson

On 05/04/2015 07:52 PM, Mathieu Gagné wrote:

On 2015-05-04 9:15 PM, Rich Megginson wrote:

On 05/04/2015 06:03 PM, Mathieu Gagné wrote:

On 2015-05-04 7:35 PM, Rich Megginson wrote:

The way authentication works with the Icehouse branch is that
puppet-keystone reads the admin_token and admin_endpoint from
/etc/keystone/keystone.conf and passes these to the keystone command via
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
respectively.

This will not work on a node where Keystone is not installed (unless you
copy /etc/keystone/keystone.conf to all of your nodes).

I am assuming there are admins/operators that have actually deployed
OpenStack using puppet on nodes where Keystone is not installed?

We are provisioning keystone resources from a privileged keystone node
 which accepts the admin_token. All other keystone servers have the
 admin_token_auth middleware removed for obvious security reasons.



If so, how?  How do you specify the authentication credentials?  Do you
use environment variables?  If so, how are they specified?

When provisioning resources other than Keystones ones, we use custom
puppet resources and the credentials are passed as env variables to the
exec command. (they are mainly based on exec resources)

I'm talking about the case where you are installing an OpenStack service
other than Keystone using puppet, and that puppet code for that module
needs to create some sort of Keystone resource.

For example, install Glance on a node other than the Keystone node.
puppet-glance is going to call class glance::keystone::auth, which will
call keystone::resource::service_identity, which will call keystone_user
{ $name }.  The openstack provider used by keystone_user is going to
need Keystone admin credentials in order to create the user.

We fixed that part by not provisioning Keystone resources from Glance
nodes but from Keystone nodes instead.

We do not allow our users to create users/groups/projects, only a user
with the admin role can do it. So why would you want to store/use admin
credentials on an unprivileged nodes such as Glance? IMO, the glance
user shouldn't be able to create/edit/delete users/projects/endpoints,
that's the keystone nodes' job.


Ok.  You don't need the Keystone superuser admin credentials on the 
Glance node.


Is the puppet-glance code completely separable so that you can call only 
glance::keystone::auth (or other classes that use Keystone resources) 
from the Keystone node, and all of the other puppet-glance code on the 
Glance node?  Does the same apply to all of the other puppet modules?
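
To make the question concrete, the kind of split I have in mind would look
roughly like this (class names as used in this thread; node names and
parameters are invented, and most parameters are elided):

# On the Keystone node, where the admin credentials live, declare the
# Keystone-side resources for Glance:
node 'keystone.example.com' {
  include ::keystone
  class { '::glance::keystone::auth':
    password => 'a_big_secret',
  }
}

# On the Glance node, only the Glance services themselves:
node 'glance01.example.com' {
  class { '::glance::api':
    keystone_password => 'a_big_secret',
  }
  class { '::glance::registry':
    keystone_password => 'a_big_secret',
  }
}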




If you do not wish to explicitly define Keystone resources for Glance on
Keystone nodes but instead let Glance nodes manage their own resources,
you could always use exported resources.

You let Glance nodes export their keystone resources and then you ask
Keystone nodes to realize them where admin credentials are available. (I
know some people don't really like exported resources for various reasons)


I'm not familiar with exported resources.  Is this a viable option that 
has less impact than just requiring Keystone resources to be realized on 
the Keystone node?






How are you passing those credentials?  As env. vars?  How?

As stated, we use custom Puppet resources (defined types) which are
mainly wrapper around exec. You can pass environment variable to exec
through the environment parameter. I don't like it but that's how I did
it ~2 years ago. I haven't changed it due to lack of need to change it.
This might change soon with Keystone v3.


Ok.





I'm starting to think about moving away from env variables and use a
configuration file instead. I'm not sure yet about the implementation
details but that's the main idea.

Is there a standard openrc location?  Could openrc be extended to hold
parameters such as the default domain to use for Keystone resources?
I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which
are used for _authentication_, not resource creation.

I'm not aware of any standard openrc location other than ~/.openrc
which needs to be sourced before running any OpenStack client commands.

I however understand what you mean. I do not have any idea on how I
would implement it. I'm still hoping someday to be enlightened by a
great solution.

I'm starting to think about some sort of credentials vault. You store
credentials in it and you tell your resource to use that specific
credentials. You then no longer need to pass around 6-7
variables/parameters.


I'm sure Adam Young has some ideas about this . . .





There is a similar issue when creating domain scoped resources like
users and projects.  As opposed to editing dozens of manifests to add
domain parameters to every user and project (and the classes that call
keystone_user/tenant, and the classes that call those classes, etc.), is
there some mechanism to specify a default domain to use?  If not, what
about using the same mechanism used today to specify the Keystone

Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-04 Thread Zhou Zheng Sheng / 周征晟
Thank you Andrew.

on 2015/05/05 08:03, Andrew Beekhof wrote:
 On 28 Apr 2015, at 11:15 pm, Bogdan Dobrelya bdobre...@mirantis.com wrote:

 Hello,
 Hello, Zhou

 I am using Fuel 6.0.1 and find that RabbitMQ recovery time is long after
 a power failure. I have a running HA environment, then I reset power of
 all the machines at the same time. I observe that after reboot it
 usually takes 10 minutes for the RabbitMQ cluster to appear running in
 master-slave mode in pacemaker. If I power off all the 3 controllers and
 only start 2 of them, the downtime sometimes can be as long as 20 minutes.
 Yes, this is a known issue [0]. Note, there were many bugfixes, like
 [1],[2],[3], merged for MQ OCF script, so you may want to try to
 backport them as well by the following guide [4]

 [0] https://bugs.launchpad.net/fuel/+bug/1432603
 [1] https://review.openstack.org/#/c/175460/
 [2] https://review.openstack.org/#/c/175457/
 [3] https://review.openstack.org/#/c/175371/
 [4] https://review.openstack.org/#/c/170476/
 Is there a reason you’re using a custom OCF script instead of the upstream[a] 
 one?
 Please have a chat with David (the maintainer, in CC) if there is something 
 you believe is wrong with it.

 [a] 
 https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

I'm using the OCF script from the Fuel project, specifically from the
6.0 stable branch [alpha].

Comparing with the upstream OCF code, the main difference is that the Fuel
RabbitMQ OCF is a master-slave resource. The Fuel RabbitMQ OCF does more
bookkeeping, for example, blocking client access when the RabbitMQ cluster
is not ready. I believe the upstream OCF should be OK to use as well
after I read the code, but it might not fit into the Fuel project. As
far as I have tested, the Fuel OCF script is good, except that sometimes
the full reassembly time is long, and as I found out, it is mostly because
the Fuel MySQL Galera OCF script keeps pacemaker from promoting the
RabbitMQ resource, as I mentioned in the previous emails.

Maybe Vladimir and Sergey can give us more insight on why Fuel needs a
master-slave RabbitMQ. I see Vladimir and Sergey worked on the original
Fuel blueprint RabbitMQ cluster [beta].

[alpha]
https://github.com/stackforge/fuel-library/blob/stable/6.0/deployment/puppet/nova/files/ocf/rabbitmq
[beta]
https://blueprints.launchpad.net/fuel/+spec/rabbitmq-cluster-controlled-by-pacemaker

 I have a little investigation and find out there are some possible causes.

 1. MySQL Recovery Takes Too Long [1] and Blocking RabbitMQ Clustering in
 Pacemaker

 The pacemaker resource p_mysql start timeout is set to 475s. Sometimes
 MySQL-wss fails to start after power failure, and pacemaker would wait
 475s before retry starting it. The problem is that pacemaker divides
 resource state transitions into batches. Since RabbitMQ is master-slave
 resource, I assume that starting all the slaves and promoting master are
 put into two different batches. If unfortunately starting all RabbitMQ
 slaves are put in the same batch as MySQL starting, even if RabbitMQ
 slaves and all other resources are ready, pacemaker will not continue
 but just wait for MySQL timeout.
 Could you please elaborate on what the same/different batches are for MQ
 and DB? Note, there are MQ clustering logic flow charts available here
 [5] and we're planning to release a dedicated technical bulletin for this.

 [5] http://goo.gl/PPNrw7

 I can re-produce this by hard powering off all the controllers and start
 them again. It's more likely to trigger MySQL failure in this way. Then
 I observe that if there is one cloned mysql instance not starting, the
 whole pacemaker cluster gets stuck and does not emit any log. On the
 host of the failed instance, I can see a mysql resource agent process
 calling the sleep command. If I kill that process, the pacemaker comes
 back alive and RabbitMQ master gets promoted. In fact this long timeout
 is blocking every resource from state transition in pacemaker.

 This maybe a known problem of pacemaker and there are some discussions
 in Linux-HA mailing list [2]. It might not be fixed in the near future.
 It seems in generally it's bad to have long timeout in state transition
 actions (start/stop/promote/demote). There maybe another way to
 implement MySQL-wss resource agent to use a short start timeout and
 monitor the wss cluster state using monitor action.
 This is very interesting, thank you! I believe all commands for the MySQL RA
 OCF script should be wrapped with timeout -SIGTERM or -SIGKILL as well,
 as we did for the MQ RA OCF. And there should not be any sleep calls. I
 created a bug for this [6].

 [6] https://bugs.launchpad.net/fuel/+bug/1449542

 I also find a fix to improve MySQL start timeout [3]. It shortens the
 timeout to 300s. At the time I sending this email, I can not find it in
 stable/6.0 branch. Maybe the maintainer needs to cherry-pick it to
 stable/6.0 ?

 [1] https://bugs.launchpad.net/fuel/+bug/1441885
 [2] 

Re: [openstack-dev] [PKG-Openstack-devel][horizon][xstatic] XStatic-Angular-Bootstrap in violation of the MIT/Expat license (forwarded from: python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes REJECTED)

2015-05-04 Thread Robert Collins
On 5 May 2015 at 11:13, Thomas Goirand z...@debian.org wrote:
 On 05/05/2015 12:15 AM, Ian Cordasco wrote:

 For what it’s worth Thomas and Maxime, removing the old versions from PyPI
 is likely to be a bad idea.


 Probably, but it's legally wrong (ie: worst case, you can be sued) to leave
 a package which is in direct violation of the license of things it contains.

So, we shouldn't use angular at all then, because as a JS framework it's
distributed to users when they use the website, but the license file
isn't included in that distribution.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-04 Thread Zhou Zheng Sheng / 周征晟
Thank you Bogdan for clearing up the pacemaker promotion process for me.

on 2015/05/05 10:32, Andrew Beekhof wrote:
 On 29 Apr 2015, at 5:38 pm, Zhou Zheng Sheng / 周征晟 zhengsh...@awcloud.com 
 wrote:
 [snip]

 Batch is a pacemaker concept I found when I was reading its
 documentation and code. There is a batch-limit: 30 in the output of
 pcs property list --all. The pacemaker official documentation
 explanation is that it's The number of jobs that the TE is allowed to
 execute in parallel. From my understanding, pacemaker maintains cluster
 states, and when we start/stop/promote/demote a resource, it triggers a
 state transition. Pacemaker puts as many as possible transition jobs
 into a batch, and process them in parallel.
 Technically it calculates an ordered graph of actions that need to be 
 performed for a set of related resources.
 You can see an example of the kinds of graphs it produces at:


 http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/s-config-testing-changes.html

 There is a more complex one which includes promotion and demotion on the next 
 page.

 The number of actions that can run at any one time is therefore limited by
 - the value of batch-limit (the total number of in-flight actions)
 - the number of resources that do not have ordering constraints between them 
 (eg. rsc{1,2,3} in the above example)  

 So in the above example, if batch-limit = 3, the monitor_0 actions will 
 still all execute in parallel.
 If batch-limit == 2, one of them will be deferred until the others complete.

 Processing of the graph stops the moment any action returns a value that was 
 not expected.
 If that happens, we wait for currently in-flight actions to complete, 
 re-calculate a new graph based on the new information and start again.
So can I infer the following statement? In a big cluster with many
resources, chances are some resource agent actions return unexpected
values, and if any in-flight action's timeout is long, it would
block pacemaker from re-calculating a new transition graph? I see the
current batch-limit is 30 and I tried to increase it to 100, but it did not
help. I'm sure that the cloned MySQL Galera resource is not related to the
master-slave RabbitMQ resource. I don't find any dependency, order or
rule connecting them in the cluster deployed by Fuel [1].

Is there anything I can do to make sure all the resource actions return
expected values during a full reassembly? Is it because node-1 and node-2
happen to boot faster than node-3 and form a cluster, so that when node-3
joins, it triggers a new state transition? Or maybe because some resources
are already started, so pacemaker needs to stop them first? Does setting
default-resource-stickiness to 1 help?

I also tried crm history XXX commands in a live and correct cluster,
but didn't find much information. I can see there are many log entries
like run_graph: Transition 7108 ... Next I'll inspect the pacemaker
log to see which resource action returns the unexpected value or what
triggers a new state transition.

[1] http://paste.openstack.org/show/214919/

 The problem is that pacemaker can only promote a resource after it
 detects the resource is started.
 First we do a non-recurring monitor (*_monitor_0) to check what state the 
 resource is in.
 We can’t assume it's off because a) we might have crashed, b) the admin might
 have accidentally configured it to start at boot or c) the admin may have 
 asked us to re-check everything.

 During a full reassemble, in the first
 transition batch, pacemaker starts all the resources including MySQL and
 RabbitMQ. Pacemaker issues resource agent start invocation in parallel
 and reaps the results.

 For a multi-state resource agent like RabbitMQ, pacemaker needs the
 start result reported in the first batch, then transition engine and
 policy engine decide if it has to retry starting or promote, and put
 this new transition job into a new batch.
 Also important to know, the order of actions is:

 1. any necessary demotions
 2. any necessary stops
 3. any necessary starts
 4. any necessary promotions



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best wishes!
Zhou Zheng Sheng / 周征晟  Software Engineer
Beijing AWcloud Software Co., Ltd.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Thanks a bunch Doug, very clear & helpful info.

So with that said, those who run Icehouse or Juno are (more or less :) )
dead in the water, as the only option is v1... hmm

Dani

On Mon, May 4, 2015 at 10:21 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 lbaas v1:

 This is the original Neutron LBaaS, and what you see in Horizon or in the
 neutron CLI as “lb-*”.  It has an haproxy backend, and a few vendors
 supporting it. Feature-wise, it’s basically a byte pump.

 lbaas v2:

 This is the “new” Neutron LBaaS, and is in the neutron CLI as “lbaas-*”
 (it’s not yet in Horizon.)  It first shipped in Kilo. It re-organizes the
 objects, and adds TLS termination support, and has L7 plus other new
 goodies planned in Liberty. It similarly has an haproxy reference backend
 with a few vendors supporting it.

 octavia:

 Think of this as a service vm framework that is specific to lbaas, to
 implement lbaas via nova VMs instead of “lbaas agents”. It is expected to
 be the reference backend implementation for neutron lbaasv2 in liberty. It
 could also be used as its own front-end, and/or given drivers to be a load
 balancing framework completely outside neutron/nova, though that is not the
 present direction of development.

 Thanks,
 doug




  On May 4, 2015, at 1:57 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm trying to gather more info about the differences between
 
  Neutron LBaaS v1
  Neutron LBaaS v2
  Octavia
 
  I know Octavia is still not marked production but on the other hand i
 keep hearing inside my organization that Neutron LBaaS is missing few
 critical pieces so i'd very much appreciate if anyone can provide
 detailed info about the differences above.
 
  Thanks,
  Dani
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Monty Taylor

On 05/04/2015 08:47 PM, Emilien Macchi wrote:
 
 
 On 05/04/2015 10:37 PM, Rich Megginson wrote:
 On 05/04/2015 07:52 PM, Mathieu Gagné wrote:
 On 2015-05-04 9:15 PM, Rich Megginson wrote:
 On 05/04/2015 06:03 PM, Mathieu Gagné wrote:
 On 2015-05-04 7:35 PM, Rich Megginson wrote:
 The way authentication works with the Icehouse branch is
 that puppet-keystone reads the admin_token and
 admin_endpoint from /etc/keystone/keystone.conf and
 passes these to the keystone command via the
 OS_SERVICE_TOKEN env. var. and the --os-endpoint
 argument, respectively.
 
 This will not work on a node where Keystone is not
 installed (unless you copy /etc/keystone/keystone.conf to
 all of your nodes).
 
 I am assuming there are admins/operators that have
 actually deployed OpenStack using puppet on nodes where
 Keystone is not installed?
 We are provisioning keystone resources from a privileged
 keystone node which accepts the admin_token. All other
 keystone servers have the admin_token_auth middleware
 removed for obvious security reasons.
 
 
 If so, how?  How do you specify the authentication
 credentials?  Do you use environment variables?  If so,
 how are they specified?
 When provisioning resources other than Keystones ones, we
 use custom puppet resources and the credentials are passed
 as env variables to the exec command. (they are mainly
 based on exec resources)
 I'm talking about the case where you are installing an
 OpenStack service other than Keystone using puppet, and that
 puppet code for that module needs to create some sort of
 Keystone resource.
 
 For example, install Glance on a node other than the Keystone
 node. puppet-glance is going to call class
 glance::keystone::auth, which will call
 keystone::resource::service_identity, which will call
 keystone_user { $name }.  The openstack provider used by
 keystone_user is going to need Keystone admin credentials in
 order to create the user.
 We fixed that part by not provisioning Keystone resources
 from Glance nodes but from Keystone nodes instead.
 
 We do not allow our users to create users/groups/projects, only
 a user with the admin role can do it. So why would you want to
 store/use admin credentials on an unprivileged nodes such as
 Glance? IMO, the glance user shouldn't be able to
 create/edit/delete users/projects/endpoints, that's the
 keystone nodes' job.
 
 Ok.  You don't need the Keystone superuser admin credentials on
 the Glance node.
 
 Is the puppet-glance code completely separable so that you can
 call only glance::keystone::auth (or other classes that use
 Keystone resources) from the Keystone node, and all of the other
 puppet-glance code on the Glance node?  Does the same apply to
 all of the other puppet modules?
 
 
 If you do not wish to explicitly define Keystone resources for
 Glance on Keystone nodes but instead let Glance nodes manage
 their own resources, you could always use exported resources.
 
 You let Glance nodes export their keystone resources and then
 you ask Keystone nodes to realize them where admin credentials
 are available. (I know some people don't really like exported
 resources for various reasons)
 
 I'm not familiar with exported resources.  Is this a viable
 option that has less impact than just requiring Keystone
 resources to be realized on the Keystone node?
 
 I'm not in favor of having exported resources because it requires
 PuppetDB, and a lot of people try to avoid that. For now, we've
 been able to set up all of OpenStack without PuppetDB in TripleO and in
 some other installers; we might want to keep this benefit.

+100

We're looking at using these puppet modules in a bit, but we're also a
few steps away from getting rid of our puppetmaster and moving to a
completely puppet apply based workflow. I would be double-plus
sad-panda if we were not able to use the openstack puppet modules to
do openstack because they'd been done in such a way as to require a
puppetmaster or puppetdb.

 
 
 How are you passing those credentials?  As env. vars?  How?
 As stated, we use custom Puppet resources (defined types) which
 are mainly wrapper around exec. You can pass environment
 variable to exec through the environment parameter. I don't
 like it but that's how I did it ~2 years ago. I haven't changed
 it due to lack of need to change it. This might change soon
 with Keystone v3.
 
 Ok.
 
 
 
 I'm starting to think about moving away from env variables
 and use a configuration file instead. I'm not sure yet
 about the implementation details but that's the main idea.
 Is there a standard openrc location?  Could openrc be
 extended to hold parameters such as the default domain to use
 for Keystone resources? I'm not talking about OS_DOMAIN_NAME,
 OS_USER_DOMAIN_NAME, etc. which are used for
 _authentication_, not resource creation.
 I'm not aware of any standard openrc location other than
 ~/.openrc which needs to be sourced before running any
 OpenStack client 

Re: [openstack-dev] [PKG-Openstack-devel][horizon][xstatic] XStatic-Angular-Bootstrap in violation of the MIT/Expat license (forwarded from: python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes REJECTED)

2015-05-04 Thread Ian Cordasco


On 5/3/15, 11:46, Thomas Goirand z...@debian.org wrote:

Hi,

According to Paul Tagliamonte, who is from the Debian FTP master team
(which peer-reviews NEW packages in Debian before they reach the
archive) python-xstatic-angular-bootstrap cannot be uploaded as-is to
Debian because it doesn't include an Expat LICENSE file, which is in
direct violation of the license itself (ie: anything which is shipped
using the MIT / Expat license *must* include the said license). Below is
a copy of reply to me, after the package was rejected.

Maxime, since you're the maintainer of this xstatic package, could you
please include the Expat (aka: MIT) license inside
xstatic-angular-bootstrap, then retag and re-release the package?

Also, when this is done, I would strongly suggest fixing the
global-requirements.txt to force using the correct package, then remove
license infringing version from PyPi. This wont change anything for me
as long as there's a new package which fixes the licensing issue, but
legally, I don't think it's right to leave downloadable what has already
been released.

 Forwarded Message 
Subject: Re: [PKG-Openstack-devel]
python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes REJECTED
Date: Sat, 2 May 2015 17:21:10 -0400
From: Paul Tagliamonte paul...@debian.org
Reply-To: Tracking bugs and development for OpenStack
openstack-de...@lists.alioth.debian.org
To: Thomas Goirand tho...@goirand.fr
CC: Paul Richards Tagliamonte ftpmas...@ftp-master.debian.org, PKG
OpenStack openstack-de...@lists.alioth.debian.org

On Sat, May 02, 2015 at 11:07:51PM +0200, Thomas Goirand wrote:
 Hi Paul!

 First of all, thanks a lot for all the package review. This is simply
 awesome, and helps me really a lot in my work!

np :)

 Well, for all XStatic projects, the habit is to use the same licensing as
 for the javascript that is packaged as a Python module. So in this file:

 xstatic/pkg/angular_bootstrap/__init__.py

 you can see:

 LICENSE = '(same as %s)' % DISPLAY_NAME

 then in xstatic/pkg/angular_bootstrap/data/angular-bootstrap.js, in the
 header of the file, you may see:

  * angular-ui-bootstrap
  * http://angular-ui.github.io/bootstrap/

  * Version: 0.11.0 - 2014-05-01
  * License: MIT

 So, python-xstatic-angular-bootstrap uses the same Expat license.

 Is this enough?

So, I trust this *is* MIT/Expat licensed, but if you look at the terms
they're granting us::

| Permission is hereby granted, free of charge, to any person obtaining a copy
| of this software and associated documentation files (the "Software"), to deal
| in the Software without restriction, including without limitation the rights
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
| copies of the Software, and to permit persons to whom the Software is
| furnished to do so, subject to the following conditions:
|
| The above copyright notice and this permission notice shall be included in
| all copies or substantial portions of the Software.
|
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
| THE SOFTWARE.

The critical bit here --

| The above copyright notice and this permission notice shall be included in
| all copies or substantial portions of the Software.

The source distribution is non-compliant. They can do that since they
can't infringe on themselves. We would be infringing by distributing the
source tarball.

Just do a DFSG repack and include the license in it. That'll be great
and enough.

 Can I upload the package again? Or should I ask for a clearer
 statement from upstream (whom, by the way, I have met face to face,
 and I know how to ping him on Freenode...)?

Cheers,
   Paul

-- 
  .''`.  Paul Tagliamonte paul...@debian.org  |   Proud Debian Developer
: :'  : 4096R / 8F04 9AD8 2C92 066C 7352  D28A 7B58 5B30 807C 2A87
`. `'`  http://people.debian.org/~paultag
  `- http://people.debian.org/~paultag/conduct-statement.txt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

For what it’s worth Thomas and Maxime, removing the old versions from PyPI
is likely to be a bad idea. An increasing number of deployers have stopped
relying on system packages and install either from source or from PyPI. If
they’re creating frozen lists of dependencies, you *will* break them.
While I agree that those distributions are violating the license, I think
it is a mistake that no one 

[openstack-dev] [Infra] Meeting Tuesday May 5th at 19:00 UTC

2015-05-04 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday May 5th[0], at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-28-19.04.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-28-19.04.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-28-19.04.log.html

[0] Cinco de Mayo cervezas optional, but there may be a piñata

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson
I'm currently working on Keystone v3 support in the openstack puppet 
modules.


The way authentication works with the Icehouse branch is that 
puppet-keystone reads the admin_token and admin_endpoint from 
/etc/keystone/keystone.conf and passes these to the keystone command via 
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument, respectively.


This will not work on a node where Keystone is not installed (unless you 
copy /etc/keystone/keystone.conf to all of your nodes).


I am assuming there are admins/operators that have actually deployed 
OpenStack using puppet on nodes where Keystone is not installed?


If so, how?  How do you specify the authentication credentials?  Do you 
use environment variables?  If so, how are they specified?


For Keystone v3, in order to use v3 for authentication, and in order to 
use the v3 identity api, there must be some way to specify the various 
domains to use - the domain for the user, the domain for the project, or 
the domain to get a domain scoped token.


There is a similar issue when creating domain scoped resources like 
users and projects.  As opposed to editing dozens of manifests to add 
domain parameters to every user and project (and the classes that call 
keystone_user/tenant, and the classes that call those classes, etc.), is 
there some mechanism to specify a default domain to use?  If not, what 
about using the same mechanism used today to specify the Keystone 
credentials?
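
To illustrate what "adding domain parameters to every user and project"
would mean, the brute-force alternative is touching every declaration along
these lines (the domain parameter shown here is hypothetical, just to
sketch the idea):

# Hypothetical sketch only: explicit domains sprinkled on each resource.
keystone_user { 'glance':
  ensure => present,
  domain => 'services_domain',   # hypothetical parameter
}
keystone_tenant { 'services':
  ensure => present,
  domain => 'services_domain',   # hypothetical parameter
}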


The goal is that all keystone domain scoped resources will eventually 
require specifying a domain, but that will take quite a while and I 
would like to provide an incremental upgrade path.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Jeremy Stanley
On 2015-05-05 07:53:20 +1200 (+1200), Robert Collins wrote:
[...]
 release weekly
[...]

I'm fine with releasing weekly when there's something to release,
but as PBR is somewhat stabilized and relatively tightly scoped I
_hope_ that we get to the point where we don't have bugs or new
features in PBR on a weekly basis.

Cool with the rest of the proposal too.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Dave Walker
On 4 May 2015 at 23:01, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-05-05 07:53:20 +1200 (+1200), Robert Collins wrote:
 [...]
 release weekly
 [...]

 I'm fine with releasing weekly when there's something to release,
 but as PBR is somewhat stabilized and relatively tightly scoped I
 _hope_ that we get to the point where we don't have bugs or new
 features in PBR on a weekly basis.

 Cool with the rest of the proposal too.

Hey,

As someone who did track PBR master for internal cross-project
builds during the SemanticVersioning ping-pong, I have to agree that
having a core tool that should be pretty static in feature deliverables
be a regular blocker and instigator of build failures is a real pain.

I am not sure that weekly builds provide much in the way of value,
depending on the consumer of the library.  The release cadence would
be too short to really get value out of time-based releases.  Is it
expected to assist openstack-infra in being able to plan upgrading?  I
can't see it helping distros or other vendors building derived
versions of OpenStack.  Would this mean that OpenStack projects would
have to start caring about PBR, or can they expect the core
pipelines to continue to work without knowledge of PBR's releases?
What version(s) would stable/* be expected to use?

For sake of argument, what does weekly provide that monthly doesn't?
Or in the opposite direction - why would each commit not be treated as
a release?

As a consumer of PBR, I stopped tracking master because I was
frustrated rebasing, and I had low confidence the next rebase wouldn't
break my entire pipeline in hidden or subtle ways.

The last change I made to PBR took 4 months to get approved, just
idling as unreviewed.  There is *nothing* more demotivating than having a
changeset blocked in this status, particularly when it is a simplistic
change that is forcing you to use a derived version for internal usage
causing additional cost of rebasing.  So, what is happening with the
project now to make reviews happen soon enough to make frequent
time-based release useful?

Perhaps it would be useful to spell out some of the API breaking
changes you are planning?  It feels to me that PBR should be pretty
static in the near term... I am not convinced that frequent releases
make API breaking changes easier, as I am not sure a core library like
PBR can just support 1.0 and n-1 - so would each release keep support
for pbr's major and minor?

(PS. I really like PBR)

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Mathieu Gagné
On 2015-05-04 7:35 PM, Rich Megginson wrote:
 
 The way authentication works with the Icehouse branch is that
 puppet-keystone reads the admin_token and admin_endpoint from
 /etc/keystone/keystone.conf and passes these to the keystone command via
 the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
 respectively.
 
 This will not work on a node where Keystone is not installed (unless you
 copy /etc/keystone/keystone.conf to all of your nodes).
 
 I am assuming there are admins/operators that have actually deployed
 OpenStack using puppet on nodes where Keystone is not installed?

We are provisioning keystone resources from a privileged keystone node
which accepts the admin_token. All other keystone servers have the
admin_token_auth middleware removed for obvious security reasons.


 If so, how?  How do you specify the authentication credentials?  Do you
 use environment variables?  If so, how are they specified?

When provisioning resources other than the Keystone ones, we use custom
puppet resources and the credentials are passed as env variables to the
exec command. (they are mainly based on exec resources)

I'm starting to think about moving away from env variables and use a
configuration file instead. I'm not sure yet about the implementation
details but that's the main idea.


 For Keystone v3, in order to use v3 for authentication, and in order to
 use the v3 identity api, there must be some way to specify the various
 domains to use - the domain for the user, the domain for the project, or
 the domain to get a domain scoped token.

If I understand correctly, you have to scope the user to a domain and
scope the project to a domain: user1@domain1 wishes to get a token
scoped to project1@domain2 to manage resources within the project?


 There is a similar issue when creating domain scoped resources like
 users and projects.  As opposed to editing dozens of manifests to add
 domain parameters to every user and project (and the classes that call
 keystone_user/tenant, and the classes that call those classes, etc.), is
 there some mechanism to specify a default domain to use?  If not, what
 about using the same mechanism used today to specify the Keystone
 credentials?

I see there is support for a default domain in keystone.conf. You will
find it defined by the identity/default_domain_id=default config value.
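
Something along these lines (illustrative only, using the keystone_config
ini-setting type shipped with puppet-keystone) is how that value can be
pinned from Puppet:

# Illustrative sketch: manage identity/default_domain_id in keystone.conf.
keystone_config { 'identity/default_domain_id':
  value => 'default',
}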

Is this value not usable? And is it reasonable to assume the domain
default will always be present?

Or is the question more related to the need to somehow override this
value in Puppet?


 The goal is that all keystone domain scoped resources will eventually
 require specifying a domain, but that will take quite a while and I
 would like to provide an incremental upgrade path.


-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [pbr] pbr-release includes oslo-core?

2015-05-04 Thread Robert Collins
Clark and I spotted this when releasing 0.11 - pbr-release, the group
of folk that can cut a release, includes oslo-release (expected) and
also unexpectedly oslo-core.
https://review.openstack.org/#/admin/groups/387,members

This means that folk who are oslo-core, but can't release regular
libraries, can release the one special-case library in oslo (pbr,
because setup_requires) - this doesn't make sense to me or Clark :).

Anyone with insight, please shout out now; if we don't hear anything
back justifying this, I'll remove oslo-core from pbr-release later
this week.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-04 Thread Andrew Beekhof

 On 28 Apr 2015, at 11:15 pm, Bogdan Dobrelya bdobre...@mirantis.com wrote:
 
 Hello,
 
 Hello, Zhou
 
 
 I am using Fuel 6.0.1 and find that RabbitMQ recovery time is long after
 a power failure. I have a running HA environment, then I reset power of
 all the machines at the same time. I observe that after reboot it
 usually takes 10 minutes for the RabbitMQ cluster to appear running in
 master-slave mode in pacemaker. If I power off all the 3 controllers and
 only start 2 of them, the downtime sometimes can be as long as 20 minutes.
 
 Yes, this is a known issue [0]. Note, there were many bugfixes, like
 [1],[2],[3], merged for MQ OCF script, so you may want to try to
 backport them as well by the following guide [4]
 
 [0] https://bugs.launchpad.net/fuel/+bug/1432603
 [1] https://review.openstack.org/#/c/175460/
 [2] https://review.openstack.org/#/c/175457/
 [3] https://review.openstack.org/#/c/175371/
 [4] https://review.openstack.org/#/c/170476/

Is there a reason you’re using a custom OCF script instead of the upstream[a] 
one?
Please have a chat with David (the maintainer, in CC) if there is something you 
believe is wrong with it.

[a] 
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

 
 
 I have a little investigation and find out there are some possible causes.
 
 1. MySQL Recovery Takes Too Long [1] and Blocking RabbitMQ Clustering in
 Pacemaker
 
 The pacemaker resource p_mysql start timeout is set to 475s. Sometimes
 MySQL-wss fails to start after power failure, and pacemaker would wait
 475s before retry starting it. The problem is that pacemaker divides
 resource state transitions into batches. Since RabbitMQ is master-slave
 resource, I assume that starting all the slaves and promoting master are
 put into two different batches. If unfortunately starting all RabbitMQ
 slaves are put in the same batch as MySQL starting, even if RabbitMQ
 slaves and all other resources are ready, pacemaker will not continue
 but just wait for MySQL timeout.
 
 Could you please elaborate on what the same/different batches are for MQ
 and DB? Note, there are MQ clustering logic flow charts available here
 [5] and we're planning to release a dedicated technical bulletin for this.
 
 [5] http://goo.gl/PPNrw7
 
 
 I can re-produce this by hard powering off all the controllers and start
 them again. It's more likely to trigger MySQL failure in this way. Then
 I observe that if there is one cloned mysql instance not starting, the
 whole pacemaker cluster gets stuck and does not emit any log. On the
 host of the failed instance, I can see a mysql resource agent process
 calling the sleep command. If I kill that process, the pacemaker comes
 back alive and RabbitMQ master gets promoted. In fact this long timeout
 is blocking every resource from state transition in pacemaker.
 
 This maybe a known problem of pacemaker and there are some discussions
 in Linux-HA mailing list [2]. It might not be fixed in the near future.
 It seems in generally it's bad to have long timeout in state transition
 actions (start/stop/promote/demote). There maybe another way to
 implement MySQL-wss resource agent to use a short start timeout and
 monitor the wss cluster state using monitor action.
 
 This is very interesting, thank you! I believe all commands for the MySQL RA
 OCF script should be wrapped with timeout -SIGTERM or -SIGKILL as well,
 as we did for the MQ RA OCF. And there should not be any sleep calls. I
 created a bug for this [6].
 
 [6] https://bugs.launchpad.net/fuel/+bug/1449542
 
 
 I also find a fix to improve MySQL start timeout [3]. It shortens the
 timeout to 300s. At the time I sending this email, I can not find it in
 stable/6.0 branch. Maybe the maintainer needs to cherry-pick it to
 stable/6.0 ?
 
 [1] https://bugs.launchpad.net/fuel/+bug/1441885
 [2] http://lists.linux-ha.org/pipermail/linux-ha/2014-March/047989.html
 [3] https://review.openstack.org/#/c/171333/
 
 
 2. RabbitMQ Resource Agent Breaks Existing Cluster
 
 Read the code of the RabbitMQ resource agent, I find it does the
 following to start RabbitMQ master-slave cluster.
 On all the controllers:
 (1) Start Erlang beam process
 (2) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (3) Stop RabbitMQ App but do not stop the beam process
 
 Then in pacemaker, all the RabbitMQ instances are in slave state. After
 pacemaker determines the master, it does the following.
 On the to-be-master host:
 (4) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 On the slaves hosts:
 (5) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (6) Join RabbitMQ cluster of the master host
 
 
 Yes, something like that. As I mentioned, there were several bug fixes
 in the 6.1 dev, and you can also check the MQ clustering flow charts.
 
 As far as I can understand, this process is to make sure the master
 determined by pacemaker is the same as the master determined in RabbitMQ
 cluster. If there is no existing 

Re: [openstack-dev] [PKG-Openstack-devel][horizon][xstatic] XStatic-Angular-Bootstrap in violation of the MIT/Expat license (forwarded from: python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes REJECTED)

2015-05-04 Thread Thomas Goirand

On 05/05/2015 12:15 AM, Ian Cordasco wrote:

For what it’s worth Thomas and Maxime, removing the old versions from PyPI
is likely to be a bad idea.


Probably, but it's legally wrong (ie: worst case, you can be sued) to 
leave a package which is in direct violation of the license of things it 
contains.



An increasing number of deployers have stopped
relying on system packages and install either from source or from PyPI. If
they’re creating frozen lists of dependencies, you *will* break them.


I don't think we have a choice here. Or do you want to push Maxime to 
take the legal risks? I wouldn't do that...


Anyway, here, we're talking about xstatic-angular-bootstrap, and I think it's
safe to say that nothing else but horizon depends on it. So we should be
fine.



While I agree that those distributions are violating the license, I think
it is a mistake that no one believes is malicious and which no one will
actually chase after you for.


Are you a lawyer? Do you have a special connection with people from 
bootstrap and angular, and they told you so?



If you’re very concerned about it, you can
create updated releases of all of those packages (for PyPI).


Even if you aren't concerned, please do create an updated release on 
PyPi so that it can be uploaded to Debian.



If you have
version 1.2.3, you can release version 1.2.3.post1 to indicate that the
source code itself didn’t exactly change but some metadata was added or
fixed. Pip should, then if I recall correctly, select 1.2.3.post1 over
1.2.3.


There's no need to do this; there are already 4 digits in XStatic
packages. Just increasing the ultra-micro (ie: the last digit) in the
version number is fine. I fail to see why one would need to
over-engineer this with a .post1 suffix.


Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Stefano Maffulli
Thanks Joe for bringing this up. I have always tried to find topics
worth covering in the weekly newsletter. I assemble that newsletter
thinking of developers and operators as the main targets; I'd like both
audiences to have one place to look at weekly and skim rapidly to see if
they missed something interesting.

Over the years I have tried to change it based on feedback I received,
so this conversation is great to have.

On 05/04/2015 12:03 PM, Joe Gordon wrote:
 The  big questions I would like to see answered are:
 
 * What are the big challenges each project is currently working on?
 * What can we learn from each other?
 * Where are individual projects trying to solve the same problem
 independently?

These are all important and interesting questions. When there were fewer
projects it wasn't too hard to keep things together. Nowadays there is a
lot more going on, and much of it is in gerrit, which requires a bit more
upfront investment to be useful.

 To answer these questions one needs to look at a lot of sources, including:
 
 * Weekly meeting logs, or hopefully just the notes assuming we get
 better at taking detailed notes

I counted over 80 meetings each week: if they all took excellent notes,
with clear #info lines for the relevant stuff, one person would probably
be able to parse them all in a couple of hours every week to identify
items worth reporting.

My experience is that IRC meeting logs don't convey anything useful to
outsiders. Pick any project log from
http://eavesdrop.openstack.org/meetings/ and you'll see what I mean.
Even the best ones don't really mean much to those not into the project
itself.

I am very skeptical that we can educate all meeting participants to take
notes that can be meaningful to outsiders. I am also not sure that the
IRC meeting notes are the right place for this.

Maybe it would make more sense to educate PTLs and liaisons to nudge me
with a brief email, or to log a quick snippet of text in some sort of
notification bucket, to share with the rest of the contributors.

 * approved specs

More than the approved ones, which are easy to spot on
specs.openstack.org, I think the new ones proposed are more interesting.

Ideally I would find a way to publish draft specs on
specs.openstack.org/drafts/ or somehow provide a way for uneducated (to
gerrit) readers to more easily discover what's coming.

Until a better technical solution exists, I can pull regularly from all
status:open changesets from *-specs repositories and put them in a
section of the weekly newsletter.

 * periodically talk to the PTL of each project to see if any big
 discussions were discussed else where

I think this already happens in the xproject meeting, doesn't it?

 * Topics selected for discussion at summits

I'm confused about this: aren't these visible already as part of the
schedule?

 Off the top of my head here are a few topics that would make good
 candidates for this newsletter:
 
 * What are different projects doing with microversioned APIs, I know
 that at least two projects are tackling this
 * How has the specs process evolved in each project, we all started out
 from a common point but seem to have all gone in slightly different
 directions
 * What will each projects priorities be in Liberty? Do any of them overlap?
 * Any process changes that projects have tried that worked or didn't work
 * How is functional testing evolving in each project

Great to have precise examples to work with. It's a useful exercise to
start from the end and trace back to where the answer will be. How would
the answers to these questions look?

 Would this help with cross project communication? Is this feasible?
 Other thoughts?

I think it would help, and it is feasible. Let's keep the ideas rolling :)

/stef

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-04 Thread Thomas Goirand



On 04/30/2015 07:48 PM, Mike Bayer wrote:



On 4/30/15 11:00 AM, Victor Stinner wrote:

Hi,

I propose to replace mysql-python with mysqlclient in OpenStack
applications to get Python 3 support, bug fixes and some new features
(support MariaDB's libmysqlclient.so, support microsecond in TIME
column).


It is not feasible to use MySQLclient in Python 2 because it uses the
same module name as Python-MySQL, and would wreak havoc with distro
packaging and many other things.


I don't see what it would break. If I do:

Package: python-mysqlclient
Breaks: python-mysqldb
Replaces: python-mysqldb
Provides: python-mysqldb

everything is fine, and python-mysqlclient becomes another 
implementation of the same thing. Then I believe it'd be a good idea to 
simply remove python-mysqldb from Debian, since it's not maintained 
upstream anymore.



It is also imprudent to switch
production openstack applications to a driver that is new and untested
(even though it is a port), nor is it necessary.


Supporting Python 3 is necessary, as we are going to remove Python 2
from Debian starting with Buster.



There should be no
reason Openstack applications are hardcoded to one database driver.


If they share the same import mysqldb, and if they are API compatible, 
how is this a problem?



The
approach should be simply that in Python 3, the mysqlclient library is
installed instead of mysql-python.


So, in Python 3, we'd have some bugfixes, and not in Python 2? This 
seems a very weird approach to me, which *will* lead to lots of issues.



MySQLclient installs under the same
name, so in this case there isn't even any change to the SQLAlchemy URL
required.


Nor should there be in anything else, if they are completely API compatible.


PyMySQL is monkeypatchable, so as long as we are using eventlet, it is
*insane* that we are using MySQL-Python at all, because it is actively
making openstack applications perform much much more poorly than if we
just removed eventlet.  So as long as eventlet is running, PyMySQL
wins the performance argument hands down (as described at the link
http://www.diamondtin.com/2014/sqlalchemy-gevent-mysql-python-drivers-comparison/
which is in the third paragraph of that wiki page).  And it's Py3k
compatible.


Ok, so you are for switching to pymysql. Good. But is this realistic?
Are you going to provide all the patches yourself for absolutely all
projects of OpenStack that are using python-mysqldb?



1. keep Mysql-python on Py2K, use mysqlclient on py3k, changing the
implementation of the MySQLdb module on Py2K, server-wide, would be
very disruptive


I'm sorry to say it this way, because I respect you a lot and you did a
lot of very good things. But Mike, this is a very silly idea. We are
already having difficulties pushing support for Py3, and in some cases,
it's hard to deal with the differences. Now, you want to add even more
sources of problems, with bugs specific to the Py2 or Py3 implementation?
Why should we make our lives even more miserable? I completely fail to
understand what we would be trying to achieve by doing this.



2. if we actually care about performance, we either A. dump eventlet or
B. use pymysql.All other performance arguments are moot right now as
we are in the basement.


Eventlet has to die, we all know it. Not only for performance reasons. 
But this is completely orthogonal to the discussion we're having about 
having Python 3 support. Please don't stand in the way of doing it, just 
because we have other (unrelated) issues with Eventlet + MySQL.


Switching to mysqlclient is basically almost free (by that, I mean 
effortless), if I understand what Victor wrote. The same thing can't be 
said of removing Eventlet or switching to pymysql, even if both 
may be needed. So why add the latter as a blocker for the former?


Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Robert Collins
Hi Dave :)

On 5 May 2015 at 10:37, Dave Walker em...@daviey.com wrote:
...
 Hey,

 As someone that did track PBR master for internal cross-project
 builds during the SemanticVersioning ping-pong, I have to agree that
 having a core tool that should be pretty static in feature deliverables
 be a regular blocker and instigator of build failures is a real pain.

Yup.

 I am not sure that weekly builds provide much in the way of value,
 depending on the consumer of the library.  The release cadence would
 be too short to really get value out of time base releases.  Is it
 expected to assist openstack-infra in being able to plan upgrading?  I
 can't see it helping distros or other vendors building derived
 versions of OpenStack?  Would this mean that OpenStack projects would
 have to start caring about PBR, or can they expected the core
 pipelines to continue to work without knowledge of PBR's releases?
 What version(s) would stable/* be expected to use?

The point isn't weekly builds, it's not keeping inventory for extended
periods. pbr is only consumed via releases, and fixes to it tend to be
of the 'unbreak my workflow' sort - so fairly important to get them
out to users. As per the discussion about the inability to use
versioned dependencies, *all branches of OpenStack will use the latest
release of pbr always*. We simply cannot do stable release branches or
anything like that for pbr, because of setup_requires - see my first
email in this thread for the details. That's a constraint, not a goal,
and it will be at least 18 months before that constraint *can* be
lifted. Whether we should lift it isn't worth thinking about in the
meantime IMO - it will eventually get lifted because bugs will be
fixed, and we can discuss then whether it makes sense to start using
versioned dependencies on pbr.
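To make that concrete, here is roughly what the setup_requires side looks
like (a sketch; the pinned range in the comment is only illustrative):

# setup.py for a pbr-using project: pbr must be listed in setup_requires
# so that setuptools can build egg-info (and hence read install_requires)
# at all, and the constraint has to stay wide open.
import setuptools

setuptools.setup(
    setup_requires=['pbr'],  # wide open: every released pbr must keep working
    pbr=True,
)

# What we cannot do today: something like setup_requires=['pbr>=0.11,<1.0'].
# As soon as anything in the environment has pulled in a pbr outside that
# range, setuptools hits the conflict and simply gives up
# (https://github.com/pypa/pip/issues/2666).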

 For sake of argument, what does weekly provide that monthly doesn't?
 Or in the opposite direction - why would each commit not be treated as
 a release?

As I said in my reply to Monty, we need some time for reviewers to
catch things that got in by mistake. The set of reviewers for pbr
includes all of oslo, not all of whom are familiar with the
intricacies of the Python packaging ecosystem and the constraints that
places on things. From time to time things merge that are fine python
code but not fine in the larger picture. So, that takes care of 'why
not make each commit a release'. As for monthly - what value does
holding an improvement back for 3 more weeks offer, assuming a week is
enough time for interested reviewers to cast an eye over recent
commits? It's obviously a dial, and I think the place to set it is
where the pbr reviewers are comfortable, which based on the thread so
far seems to be 'a week would be ok' - the consequence of a given
setting is that anyone interested in catching the occasional mistake
needs to review trunk changes no less often than $setting.

 As a consumer of PBR, I stopped tracking master because I was
 frustrated rebasing, and I had low confidence the next rebase wouldn't
 break my entire pipeline in hidden or subtle ways.

 The last change I made to PBR took 4 months to get approved, just
 idling as unreviewed.  There is *nothing* more demotivating than having a
 changeset blocked in this status, particularly when it is a simplistic
 change that is forcing you to use a derived version for internal usage
 causing additional cost of rebasing.  So, what is happening with the
 project now to make reviews happen soon enough to make frequent
 time-based release useful?

So there are a couple of related things here. Firstly, I feel your
pain. It sucks to have that happen. We don't have a good systemic
answer in OpenStack for this issue yet, and it affects nearly every
project. There are 10 or so reviewers with +A access to pbr changes. I
don't know why your change sat idle so long -
https://review.openstack.org/#/c/142144/ is the review I believe. Only
two of those 10 reviewers reviewed your patch, and indeed most of the
time it was sitting idle.

As to what's happening, we've finally gotten the year-long semver stuff
out the door, which removes the ambiguity over master/0.10, and at
least at the moment I see a bunch of important feature work being done
around the ecosystem (in pip, and soon in setuptools and wheel and
pbr) to get what we need to address the CI fragility issues in a
systemic way. I find having a regular cadence for releases - we do
weekly releases in tripleo too - helps ensure that someone is looking
at the project each week. So - examining pbr to see if a release is
needed on a weekly basis should keep the gap between reviews
clamped at a week, which is a good thing for patches like that patch
of yours.

 Perhaps it would be useful to spell out some of the API breaking
 changes you are planning?

There are none planned.

  It feels to me that PBR should be pretty
 static in the near term... I am not convinced that frequent releases
 make API breaking changes easier, as I am not sure a 

Re: [openstack-dev] [pbr] pbr-release includes oslo-core?

2015-05-04 Thread Davanum Srinivas
Not sure about the history, but +1 to remove oslo-core from that group.

-- dims

On Mon, May 4, 2015 at 6:59 PM, Robert Collins
robe...@robertcollins.net wrote:
 Clark and I spotted this when releasing 0.11 - pbr-release, the group
 of folk that can cut a release, includes oslo-release (expected) and
 also unexpectedly oslo-core.
 https://review.openstack.org/#/admin/groups/387,members

 This means that folk that are oslo-core, but can't release regular
 libraries can release the one special-case library in oslo (pbr,
 because setup_requires) - this doesn't make sense to me or Clark :).

 Anyone with insight please shout out now, if we don't hear anything
 back justifying this, I'll remove oslo-core from pbr-release later
 this week.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] pbr-release includes oslo-core?

2015-05-04 Thread Robert Collins
On 5 May 2015 at 11:48, Davanum Srinivas dava...@gmail.com wrote:
 Not sure about the history, but +1 to remove oslo-core from that group.

Done.

If it's a terrible mistake, we can add it back easily enough.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Davanum Srinivas
+1 to call the current master as 1.0
+1 to more frequent releases (not sure if it should be every monday though!)

-- dims

On Mon, May 4, 2015 at 3:53 PM, Robert Collins
robe...@robertcollins.net wrote:
 Hi, I'd like to talk about how often we can and should release pbr,
 and what criteria we should use for 1.0.

 tl;dr: release weekly [outside of organisation-wide-freezes], do a 1.0
 immediately.

 pbr, like all our libraries affects everything when its released, but
 unlike everything else in oslo, it is an open ended setup_requires
 dependency. It's like this because of
 https://github.com/pypa/pip/issues/2666 - when setuptools encounters a
 setup_requires constraint that conflicts with an already installed
 version, it just gives up.

 Until thats fixed, if we express pbr constraints like pbr  1.0
 we'll cause everything that has previously released to hard-fail to
 install as soon as anything in the environment has pulled in a pbr
 that doesn't match the constraint. This will get better once we have
 pip handle setup_requires with more scaffolding... we can in principle
 get to the point where we can version the pbr setup_requires
 dependencies. However - thats future, and indefinite at this point.

 So, for pbr we need to have wide open constraints in setup_requires,
 and it must be in setup_requires (otherwise pip can't build egg info
 at all and thus can't probe the install_requires dependencies).

 The consequence of this is that pbr has to be ultra conservative -
 we're not allowed any deliberate API breaks for the indefinite future,
 and even once the tooling supports it we'd have to wait for all the
 current releases of things that couldn't be capped to semantic
 versioning limits, to be unsupported. So - we're at least 18 months
 away from any possible future where API breaks - a 2.0 - are possible
 without widespread headaches.

 In light of this, I'd like to make two somewhat related proposals.

 Firstly, I'd like to just call the current master 1.0: its stable,
 we're supporting it, its not going anywhere rash, it has its core
 feature set. Those are the characteristics of 1.0 in most projects :).
 Its not a big splashy 1.0 but who cares..., and there's more we need,
 but thats what 1.x is for.

 Secondly, I'd like to release every Monday (assuming no pending
 reverts): I'd like to acknowledge the reality that we have
 approximately zero real world testing of master - we're heavily
 dependent on our functional tests. The only two reasons to wait for
 releasing are a) to get more testing, and we don't get that, and b) to
 let -core notice mistakes and back things out. Waiting to release once
 an improvement is in master just delays giving the benefits to our
 users.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Robert Collins
On 5 May 2015 at 08:04, Clark Boylan cboy...@sapwetik.org wrote:


 On Mon, May 4, 2015, at 12:53 PM, Robert Collins wrote:

 I don't understand what you mean by zero real world testing of master.
 We test that pbr can make sdists and install packages on every change to
 master before merging those changes. That is the majority of what pbr
 does for us. We can definitely add more testing of the other bits
 (running tests, building docs, etc), but I don't think it is fair to say
 we have approximately zero real world testing of master.

We do exhaustive tests that work across a large set of openstack projects.

That's not at all the same as real world tests. We don't know that it
works on Arch Linux, for instance. Or Windows. We don't know that it
works when someone has a GSM modem and flaky internet. As we identify
common failure modes or things we want to support we add them, and
that's entirely appropriate, but it's never going to be the same degree
of pathology that end users can bring.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Monty Taylor
On 05/04/2015 03:53 PM, Robert Collins wrote:
 Hi, I'd like to talk about how often we can and should release pbr,
 and what criteria we should use for 1.0.
 
 tl;dr: release weekly [outside of organisation-wide-freezes], do a 1.0
 immediately.
 
 pbr, like all our libraries affects everything when its released, but
 unlike everything else in oslo, it is an open ended setup_requires
 dependency. It's like this because of
 https://github.com/pypa/pip/issues/2666 - when setuptools encounters a
 setup_requires constraint that conflicts with an already installed
 version, it just gives up.
 
 Until thats fixed, if we express pbr constraints like pbr  1.0
 we'll cause everything that has previously released to hard-fail to
 install as soon as anything in the environment has pulled in a pbr
 that doesn't match the constraint. This will get better once we have
 pip handle setup_requires with more scaffolding... we can in principle
 get to the point where we can version the pbr setup_requires
 dependencies. However - thats future, and indefinite at this point.
 
 So, for pbr we need to have wide open constraints in setup_requires,
 and it must be in setup_requires (otherwise pip can't build egg info
 at all and thus can't probe the install_requires dependencies).
 
 The consequence of this is that pbr has to be ultra conservative -
 we're not allowed any deliberate API breaks for the indefinite future,
 and even once the tooling supports it we'd have to wait for all the
 current releases of things that couldn't be capped to semantic
 versioning limits, to be unsupported. So - we're at least 18 months
 away from any possible future where API breaks - a 2.0 - are possible
 without widespread headaches.
 
 In light of this, I'd like to make two somewhat related proposals.
 
 Firstly, I'd like to just call the current master 1.0: its stable,
 we're supporting it, its not going anywhere rash, it has its core
 feature set. Those are the characteristics of 1.0 in most projects :).
 Its not a big splashy 1.0 but who cares..., and there's more we need,
 but thats what 1.x is for.

WFM

 Secondly, I'd like to release every Monday (assuming no pending
 reverts): I'd like to acknowledge the reality that we have
 approximately zero real world testing of master - we're heavily
 dependent on our functional tests. The only two reasons to wait for
 releasing are a) to get more testing, and we don't get that, and b) to
 let -core notice mistakes and back things out. Waiting to release once
 an improvement is in master just delays giving the benefits to our
 users.

I'm fine with that in principle - I tend to release personal libraries
pretty much as soon as something interesting hits them. I have no
personal fear of high release counts.

I'm not sure I 100% agree with every Monday ... I think we should be
flexible enough at this point to just release actively. But I don't feel
strongly about it.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Joe Gordon
On Mon, May 4, 2015 at 12:27 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 5 May 2015 at 07:03, Joe Gordon joe.gord...@gmail.com wrote:
  Before going any further, I am proposing something to make it easier for
 the
  developer community to keep track of what other projects are working on.
 I
  am not proposing anything to directly help operators or users, that is a
  separate problem space.

 I like the thrust of your proposal.

 Any reason not to ask the existing newsletter to be a bit richer? Much
 of the same effort is required to do the existing one and the content
 you propose IMO.


Short answer: Maybe.  To make the existing newsletter 'a bit richer' would
require a fairly significant amount of additional work. Also I think
sending out this more detailed newsletter on a weekly basis would be
way too frequent. Lastly, there are sections in the current newsletter that
are very useful but don't belong in the one I am proposing; sections
such as Upcoming events, Tips n' tricks, and The road to Vancouver.


 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Monty Taylor
On 05/04/2015 04:15 PM, Robert Collins wrote:
 On 5 May 2015 at 08:12, Monty Taylor mord...@inaugust.com wrote:
 On 05/04/2015 03:53 PM, Robert Collins wrote:
 
 I'm fine with that in principle - I tend to release personal libraries
 pretty much as soon as something interesting hits them. I have no
 personal fear of high release counts.

 I'm not sure I 100% agree with every Monday ... I think we should be
 flexible enough at this point to just release actively. But I don't feel
 strongly about it.
 
 Perhaps:
  - no more frequently than weekly in the absence of brown-bag-fixups
- to allow pbr-core to look for 'oh WAT no we're not doing that'
 things slipping in due to review team de-sync.
  - Early in the week rather than late
- to avoid surprises on weekends

++


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-05-04 Thread Erlon Cruz
Thanks Alex!

On Mon, May 4, 2015 at 11:38 AM, Alex Meade mr.alex.me...@gmail.com wrote:

 Hey Erlon,

 The summit etherpad is here:
 https://etherpad.openstack.org/p/liberty-cinder-async-reporting

 It links to what we discussed in Paris. I will be filling it out this
 week. Also note, I have submitted this topic for a cross-project session:
 https://docs.google.com/spreadsheets/d/1vCTZBJKCMZ2xBhglnuK3ciKo3E8UMFo5S5lmIAYMCSE/edit#gid=827503418

 -Alex

 On Mon, May 4, 2015 at 3:30 AM, liuxinguo liuxin...@huawei.com wrote:

 I’m just trying to do an analysis of it; maybe I can begin
 with the “wrapper around the python-cinderclient” that George Peristerakis
 suggested.





 *发件人:* Erlon Cruz [mailto:sombra...@gmail.com]
 *发送时间:* 2015年4月27日 20:07
 *收件人:* OpenStack Development Mailing List (not for usage questions)
 *抄送:* Luozhen; Fanyaohong
 *主题:* Re: [openstack-dev] [cinder] Is there any way to put the driver
 backend error message to the horizon



 Alex,



 Any scratch of the solution you plan to propose?



 On Mon, Apr 27, 2015 at 5:57 AM, liuxinguo liuxin...@huawei.com wrote:

 Thanks for your suggestion, George. But when I looked into
 python-cinderclient (not very deeply), I cannot find the “wrapper around the
 python-cinderclient” you mentioned.

 Could you please give me a bit more of a hint to find the “wrapper”?



 Thanks,

 Liu





 *发件人:* George Peristerakis [mailto:gperi...@redhat.com]
 *发送时间:* 2015年4月13日 23:22
 *收件人:* OpenStack Development Mailing List (not for usage questions)
 *主题:* Re: [openstack-dev] [cinder] Is there any way to put the driver
 backend error message to the horizon



 Hi Liu,

 I'm not familiar with the error you are trying to show, but here's how
 Horizon typically works. In the case of cinder, we have a wrapper around
 python-cinderclient: if the client raises an exception with a valid
 message, by default Horizon will display that exception message. The message
 can also be overridden in the translation file. So a good start is to look
 in python-cinderclient and see if you could produce a more meaningful
 message.


 Cheers.
 George
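(For illustration, a rough sketch of the kind of wrapper George describes
above; this is not the actual Horizon code and the helper names are
assumptions, but it shows where a driver's message would surface:)

# Hypothetical sketch of a Horizon-style wrapper around python-cinderclient.
# If the client raises an exception carrying a useful message, that message
# is what the user sees; a bare "error" volume status tells them nothing.
from cinderclient import exceptions as cinder_exceptions
from horizon import exceptions as horizon_exceptions


def volume_create(request, cinder_client, size, name=None):
    try:
        return cinder_client.volumes.create(size, name=name)
    except cinder_exceptions.ClientException as exc:
        # exc.message is whatever the API (and ultimately the backend
        # driver) chose to report, so producing a meaningful message in
        # cinder/python-cinderclient is what makes this display useful.
        horizon_exceptions.handle(request, exc.message)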

 On 10/04/15 06:16 AM, liuxinguo wrote:

 Hi,



 When we create a volume in Horizon, some errors may occur at the driver
 backend, and in Horizon we just see an error in the volume status.



 So is there any way to pass the error information to Horizon so users can
 know exactly what happened just from Horizon?

 Thanks,

 Liu





  __

 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-04 Thread Jeremy Stanley
On 2015-05-04 10:25:10 -0700 (-0700), Adam Lawson wrote:
[...]
 it's pretty rare to see project teams led and governed by only
 developers.
[...]

Not sure what other free software projects you've worked on/with
before but not only is it not rare, it's the vast majority of them.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Lightning Talks for the Design Summit

2015-05-04 Thread Kyle Mestery
As most of you know, the Neutron team used one of its design summit slots
in Paris for Lightning Talks. The format was so successful we decided to
replicate this again in Vancouver. Thanks to those who submitted ideas. We
ended up with seven Lightning Talk submissions, which means all seven were
accepted! You can see the schedule here [1], but I'll post it below as well:

Testing do's and don'ts: Unit, functional, full-stack, API and Tempest
(amuller)
Neutron cascading to address scalability (joehuang)
ML2 MechanismDrivers for bare-metal deployments (Sukhdev)
Stateful OpenFlow Firewall Driver (ajo)
distributed data-plane performance testing with Shaker (obondarev)
multi-node deployment in your laptop with open source tools (emagana)
How not to get your patch merged in Neutron (kevinbenton)

For those who submitted talks, thank you and we look forward to seeing you
in Vancouver.

Kyle

[1] http://sched.co/3BNR
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Clark Boylan


On Mon, May 4, 2015, at 12:53 PM, Robert Collins wrote:
 Hi, I'd like to talk about how often we can and should release pbr,
 and what criteria we should use for 1.0.
 
 tl;dr: release weekly [outside of organisation-wide-freezes], do a 1.0
 immediately.
 
 pbr, like all our libraries affects everything when its released, but
 unlike everything else in oslo, it is an open ended setup_requires
 dependency. It's like this because of
 https://github.com/pypa/pip/issues/2666 - when setuptools encounters a
 setup_requires constraint that conflicts with an already installed
 version, it just gives up.
 
 Until thats fixed, if we express pbr constraints like pbr  1.0
 we'll cause everything that has previously released to hard-fail to
 install as soon as anything in the environment has pulled in a pbr
 that doesn't match the constraint. This will get better once we have
 pip handle setup_requires with more scaffolding... we can in principle
 get to the point where we can version the pbr setup_requires
 dependencies. However - thats future, and indefinite at this point.
 
 So, for pbr we need to have wide open constraints in setup_requires,
 and it must be in setup_requires (otherwise pip can't build egg info
 at all and thus can't probe the install_requires dependencies).
 
 The consequence of this is that pbr has to be ultra conservative -
 we're not allowed any deliberate API breaks for the indefinite future,
 and even once the tooling supports it we'd have to wait for all the
 current releases of things that couldn't be capped to semantic
 versioning limits, to be unsupported. So - we're at least 18 months
 away from any possible future where API breaks - a 2.0 - are possible
 without widespread headaches.
 
 In light of this, I'd like to make two somewhat related proposals.
 
 Firstly, I'd like to just call the current master 1.0: its stable,
 we're supporting it, its not going anywhere rash, it has its core
 feature set. Those are the characteristics of 1.0 in most projects :).
 Its not a big splashy 1.0 but who cares..., and there's more we need,
 but thats what 1.x is for.
Sounds good.
 
 Secondly, I'd like to release every Monday (assuming no pending
 reverts): I'd like to acknowledge the reality that we have
 approximately zero real world testing of master - we're heavily
 dependent on our functional tests. The only two reasons to wait for
 releasing are a) to get more testing, and we don't get that, and b) to
 let -core notice mistakes and back things out. Waiting to release once
 an improvement is in master just delays giving the benefits to our
 users.
I don't understand what you mean by zero real world testing of master.
We test that pbr can make sdists and install packages on every change to
master before merging those changes. That is the majority of what pbr
does for us. We can definitely add more testing of the other bits
(running tests, building docs, etc), but I don't think it is fair to say
we have approximately zero real world testing of master.

Clark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] regular releases, and path to 1.0

2015-05-04 Thread Robert Collins
On 5 May 2015 at 08:12, Monty Taylor mord...@inaugust.com wrote:
 On 05/04/2015 03:53 PM, Robert Collins wrote:

 I'm fine with that in principle - I tend to release personal libraries
 pretty much as soon as something interesting hits them. I have no
 personal fear of high release counts.

 I'm not sure I 100% agree with every Monday ... I think we should be
 flexible enough at this point to just release actively. But I don't feel
 strongly about it.

Perhaps:
 - no more frequently than weekly in the absence of brown-bag-fixups
   - to allow pbr-core to look for 'oh WAT no we're not doing that'
things slipping in due to review team de-sync.
 - Early in the week rather than late
   - to avoid surprises on weekends

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-ansible-deplpoyment has released Kilo

2015-05-04 Thread Daniel Comnea
Hey Kevin,

Let me add more info:

1) trying to understand if there is any support for baremetal provisioning
(e.g. setting up the UCS Manager if using UCS blades, dumping the OS on it). I
don't care if it is Ironic, PXE/Kickstart, Foreman, etc.
2) deployments on baremetal without the use of LXC containers (sticking with
default VM instances)

Dani

On Mon, May 4, 2015 at 3:02 PM, Kevin Carter kevin.car...@rackspace.com
wrote:

 Hey Dani,

 Are you looking for support for Ironic for baremetal provisioning or for
 deployments on baremetal without the use of LXC containers?

 —

 Kevin

  On May 3, 2015, at 06:45, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Great job Kevin  co !!
 
  Are there any plans to support configuring the baremetal as well?
 
  Dani
 
  On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
 gene@alcatel-lucent.com wrote:
  cool!
  
  From: Kevin Carter [kevin.car...@rackspace.com]
  Sent: Thursday, April 30, 2015 4:36 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] os-ansible-deplpoyment has released Kilo
 
  Hello Stackers,
 
  The OpenStack Ansible Deployment (OSAD) project is happy to announce our
 stable Kilo release, version 11.0.0. The project has come a very long way
 from initial inception and taken a lot of work to excise our original
 vendor logic from the stack and transform it into a community-driven
 architecture and deployment process. If you haven’t yet looked at the
 `os-ansible-deployment` project on StackForge, we'd love for you to take a
 look now [ https://github.com/stackforge/os-ansible-deployment ]. We
 offer an OpenStack solution orchestrated by Ansible and powered by upstream
 OpenStack source. OSAD is a batteries included OpenStack deployment
 solution that delivers OpenStack as the developers intended it: no
 modifications to nor secret sauce in the services it deploys. This release
 includes 436 commits that brought the project from Rackspace Private Cloud
 technical debt to an OpenStack community deployment solution. I'd like to
 recognize the following people (from Git logs) for all of their hard work
 in making the OSAD project successful:
 
  Andy McCrae
  Matt Thompson
  Jesse Pretorius
  Hugh Saunders
  Darren Birkett
  Nolan Brubaker
  Christopher H. Laco
  Ian Cordasco
  Miguel Grinberg
  Matthew Kassawara
  Steve Lewis
  Matthew Oliver
  git-harry
  Justin Shepherd
  Dave Wilde
  Tom Cameron
  Charles Farquhar
  BjoernT
  Dolph Mathews
  Evan Callicoat
  Jacob Wagner
  James W Thorne
  Sudarshan Acharya
  Jesse P
  Julian Montez
  Sam Yaple
  paul
  Jeremy Stanley
  Jimmy McCrory
  Miguel Alex Cantu
  elextro
 
 
  While Rackspace remains the main proprietor of the project in terms of
 community members and contributions, we're looking forward to more
 community participation especially after our stable Kilo release with a
 community focus. Thank you to everyone that contributed on the project so
 far and we look forward to working with more of you as we march on.
 
  —
 
  Kevin Carter
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-05-04 Thread Salvatore Orlando
I think the first workaround is the solution we're looking for, as it better
reflects the fact that opencontrail is a db-less plugin.
I hope it will be the easier one too, but you can never be too sure with
neutron unit tests.

Salvatore
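(For concreteness, a rough sketch of what that first workaround could look
like; the test class name, the fake_server attribute and the module paths
below are assumptions and would need to be checked against the tree:)

# Hypothetical sketch: patch the reference ipam driver's subnet lookup in
# the opencontrail test case so it reads from the FakeServer directly,
# instead of going back through the core plugin (which is what opens a
# second session and clashes with the transaction create_port is using).
import mock

from neutron.tests.unit.plugins.opencontrail import test_contrail_plugin


class ContrailIpamTestCase(test_contrail_plugin.ContrailPluginTestCase):

    def setUp(self):
        super(ContrailIpamTestCase, self).setUp()
        # Patch target is an assumption; point it at wherever
        # NeutronDbSubnet's fetch_subnet lives in the tree.
        patcher = mock.patch(
            'neutron.ipam.drivers.neutrondb_ipam.driver.'
            'NeutronDbSubnet.fetch_subnet',
            side_effect=self._fake_get_subnet)
        patcher.start()
        self.addCleanup(patcher.stop)

    def _fake_get_subnet(self, context, subnet_id):
        # Read the subnet via the FakeServer instead of the core plugin,
        # so everything stays inside the same db session.
        return self.fake_server._get_subnet(context, subnet_id)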

On 4 May 2015 at 12:56, Pavel Bondar pbon...@infoblox.com wrote:

 Hi Kevin,

 Thanks for your answer, that is what I was looking for!
 I'll check with you in irc to decide which workaround is better:
 1. Mocking NeutronDbSubnet fetch_subnet for opencontrail tests.
 2. Using session.query() directly in NeutronDbSubnet fetch_subnet.

 - Pavel Bondar

 On 30.04.2015 22:46, Kevin Benton wrote:
  The OpenContrail plugin itself doesn't even use the Neutron DB. I
  believe what you are observing is a side effect of the fake server they
  have for their tests, which does inherit the neutron DB.
 
  When you call a method on the core plugin in the contrail unit test
  case, it will go through their request logic and will be piped into the
  fake server. During this time, the db session that was associated with
  the original context passed to the core plugin will be lost do to its
  conversion to a dict.[1, 2]
 
  So I believe what you're seeing is this.
 
  1. The FakeServer gets create_port called and starts its transactions.
  2. It now hits the ipam driver which calls out to the neutron manager to
  get the core plugin handle, which is actually the contrail plugin and
  not the FakeServer.
  3. IPAM calls _get_subnet on the contrail plugin, which serializes the
  context[1] and sends it to the FakeServer.
  4. The FakeServer code receives the request and deserializes the
  context[2], which no longer has the db session.
  5. The FakeServer then ends up starting a new session to read the
  subnet, which will interfere with the transaction you created the port
  under since they are from the same engine.
 
  This is why you can query the DB directly rather than calling the core
  plugin. The good news is that you don't have to worry because the actual
  contrail plugin won't be using any of this logic so you're not actually
  breaking anything.
 
  I think what you'll want to do is add a mock.patch for the
  NeutronDbSubnet fetch_subnet method to monkey patch in a reference to
  their FakeServer's _get_subnet method. Ping me on IRC (kevinbenton) if
  you need help.
 
  1.
 
 https://github.com/openstack/neutron/blob/master/neutron/plugins/opencontrail/contrail_plugin.py#L111
  2.
 
 https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py#L121
 
  On Thu, Apr 30, 2015 at 6:37 AM, Pavel Bondar pbon...@infoblox.com
  mailto:pbon...@infoblox.com wrote:
 
  Hi,
 
  I am debugging issue observed in OpenContrail tests[1] and so far it
  does not look obvious.
 
  Issue:
 
  In create_port[2] a new transaction is started.
  The port gets created, but disappears right after reading the subnet from
  the plugin in the reference ipam driver[3]:
 
  plugin = manager.NeutronManager.get_plugin()
  return plugin._get_subnet(context, id)
 
  The port is no longer seen in the transaction, like it never existed before
  (magic?). As a result inserting the IPAllocation fails with a foreign key
  constraint error:
 
  DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
  u'INSERT INTO ipallocations (port_id, ip_address, subnet_id,
 network_id)
  VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
  u'10.0.0.2', u'be7bb05b-d501-4cf3-a29a-3861b3b54950',
  u'169f6a61-b5d0-493a-b7fa-74fd5b445c84')
  }}}
 
  Only OpenContrail tests fail with that error (116 failures[1]). Tests
  for other plugins pass fine. As I see it, OpenContrail is different from
  other plugins: each call to the plugin is wrapped into an http request, so
  getting the subnet happens in another transaction. In tests requests.post()
  is mocked and the http call gets translated into self.get_subnet(...).
  The stack trace from plugin._get_subnet() to db_base get_subnet() in the
  opencontrail tests looks like this[4].
 
  Also, a single test failure with full db debug was uploaded for
  investigation[5]:
  - Port is inserted at 362.
  - Subnet is read by the plugin at 384.
  - IPAllocation insert is attempted at 407.
  Between the Port and IPAllocation inserts no COMMIT/ROLLBACK or delete
  statements were issued, so I can't find an explanation for why the port no
  longer exists at the IPAllocation insert step.
  Am I missing something obvious?

  For now I have several workarounds, which basically amount to not using
  plugin._get_subnet(). A direct session.query() works without such side
  effects.
  But this issue bothers me a lot since I can't explain why it even happens
  in the OpenContrail tests.
  Any ideas are welcome!
 
  My best theory for now: OpenContrail silently wipes the currently running
  transaction in tests (and if so, that doesn't sound good).
 
  Anyone can 

[openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Hi all,

I'm trying to gather more info about the differences between

Neutron LBaaS v1
Neutron LBaaS v2
Octavia

I know Octavia is still not marked as production-ready, but on the other hand
I keep hearing inside my organization that Neutron LBaaS is missing a few
critical pieces, so I'd very much appreciate it if anyone could provide
detailed info about the differences above.

Thanks,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

