Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-29 Thread Doug Hellmann


On Thu, Jan 29, 2015, at 11:03 AM, Thomas Goirand wrote:
 On 01/24/2015 02:01 AM, Doug Hellmann wrote:
  
  
  On Fri, Jan 23, 2015, at 07:48 PM, Thomas Goirand wrote:
  Hi,
 
 I've just noticed that oslo.log made it to global-requirements.txt 9
 days ago. How come we are still adding namespaced oslo libs?
 Wasn't the outcome of the discussion in Paris that we shouldn't do that
 anymore, and that we should be using oslo-log instead of oslo.log?
 
 Is there something that I am missing here?
 
  Cheers,
 
  Thomas Goirand (zigo)
  
  The naming is described in the spec:
  http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html
  
  tl;dr - We did it this way to make life easier for the packagers.
  
  Doug
 
 Hi Doug,
 
 Sorry for the late reply.
 
 Well, you're not making life easier for *package maintainers*; I'm
 afraid it's quite the opposite.
 
 The Debian policy is that Python module packages should be named after
 the import statement in a source file. Meaning that if we do:
 
 import oslo_db
 
 then the package should be called python-oslo-db. This means that I will
 have to rename all the Debian packages to remove the dot and put a dash
 instead. But by doing so, if OpenStack upstream is keeping the old
 naming convention, then all the requirements.txt will be wrong (by
 wrong, I mean from my perspective as a package maintainer), and the
 automated dependency calculation of dh_python2 will put package names
 with dots instead of dashes.
 
 So, what is going to happen, is that I'll have to, for each and every
 package, build a dictionary of translations in debian/pydist-overrides.

That's unfortunate, but you're the only packager who seems to have this
issue.

I've already spent 2 months more time working on this transition than I
planned, so I'm not planning to do anything else disruptive with it this
cycle. If it remains a problem, or some of the other packagers support
renaming the packages, we can discuss it at the L summit, with the
rename to be done during the L cycle.

Doug

 For example:
 
 # cat debian/pydist-overrides
 oslo.db python-oslo-db
 
 This is very error prone, and I may miss lots of dependencies this way,
 leaving packages with incorrect dependencies. I have a way to avoid
 the issue, which would be to add a Provides: python-oslo.db in the
 python-oslo-db package, but this should only be considered a
 transitional measure.
 
 Also, as a side note, but it may be interesting for some: the package
 python-oslo-db should have Breaks: python-oslo.db (<< OLD_VERSION) and
 Replaces: python-oslo.db (<< OLD_VERSION), as otherwise upgrades will simply fail
 (because 2 different packages can't contain the same files on the
 filesystem).
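A transitional stanza along those lines might look like this in debian/control (the package names follow the message above; OLD_VERSION is the placeholder used there, and the stanza is illustrative only):

```
Package: python-oslo-db
Provides: python-oslo.db
Breaks: python-oslo.db (<< OLD_VERSION)
Replaces: python-oslo.db (<< OLD_VERSION)
```

Provides satisfies dependencies still expressed against the old name, while Breaks/Replaces lets dpkg hand the shared files over to the renamed package during upgrade.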
 
 So if you really want to make our lives easier, please do the full
 migration and move completely away from dots.
 
 Also, I'd like to tell you that I feel very sorry that I couldn't attend
 the session about the oslo namespace in Paris. I was taken by my company
 to a culture-building session for the whole afternoon. After reading the
 above, I feel sorry that I didn't attend the namespace session instead.
 :(
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Eugeniya Kudryashova
Hi, all


OpenStack APIs interact with each other and with external systems partly by
passing HTTP errors. The only meaningful difference between types of
exceptions is the HTTP code, but the current codes are generic, so an
external system can't distinguish what actually happened.

As an example, the two failures below differ only in their error messages:

request:

POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

Host: 192.168.122.195:8774

X-Auth-Project-Id: demo

Accept-Encoding: gzip, deflate, compress

Content-Length: 189

Accept: application/json

User-Agent: python-novaclient

X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf

Content-Type: application/json

{"server": {"name": "demo", "imageRef":
"171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
"42", "max_count": 1, "min_count": 1, "security_groups": [{"name": "bar"}]}}

response:

HTTP/1.1 400 Bad Request

Content-Length: 118

Content-Type: application/json; charset=UTF-8

X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0

Date: Fri, 23 Jan 2015 10:43:33 GMT

{"badRequest": {"message": "Security group bar not found for project
790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}

and

request:

POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

Host: 192.168.122.195:8774

X-Auth-Project-Id: demo

Accept-Encoding: gzip, deflate, compress

Content-Length: 192

Accept: application/json

User-Agent: python-novaclient

X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71

Content-Type: application/json

{"server": {"name": "demo", "imageRef":
"171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
"42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
"default"}]}}

response:

HTTP/1.1 400 Bad Request

Content-Length: 70

Content-Type: application/json; charset=UTF-8

X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5

Date: Fri, 23 Jan 2015 10:39:43 GMT

{"badRequest": {"message": "Invalid key_name provided.", "code": 400}}

The former specifies an incorrect security group name, and the latter an
incorrect keypair name. The problem is that, looking only at the
response body and the HTTP status code, an external system can't tell
what exactly went wrong. Parsing error messages is not the way we'd
like to solve this problem.

One example of a solution to this problem is the AWS EC2 error codes [1].

So if we have some service based on OpenStack projects, it would be useful
to have concrete error codes (textual or numeric) that make it possible to
determine what actually went wrong and then process the resulting
exception correctly. These codes should be predefined for each exception,
have a documented structure, and allow the exception to be parsed
correctly at each step of exception handling.
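Such codes could be sketched like this (the errorCode field, its values, and the body shape are hypothetical illustrations, not an existing OpenStack format):

```python
import json

# Hypothetical machine-readable error codes, one per distinct failure.
SECURITY_GROUP_NOT_FOUND = "compute.security_group.not_found"
INVALID_KEY_NAME = "compute.key_name.invalid"

def make_error(http_status, error_code, message):
    """Build an error body carrying both the HTTP status and a stable code."""
    return json.dumps({
        "badRequest": {
            "code": http_status,      # generic HTTP status (e.g. 400)
            "errorCode": error_code,  # specific, documented identifier
            "message": message,       # human-readable text only
        }
    })

def classify(body):
    """An external system dispatches on errorCode, never on the message."""
    return json.loads(body)["badRequest"]["errorCode"]

resp = make_error(400, SECURITY_GROUP_NOT_FOUND,
                  "Security group bar not found for project ...")
print(classify(resp))  # -> compute.security_group.not_found
```

With such a field, the two 400 responses shown above would become distinguishable without any message parsing.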

So I’d like to discuss implementing such codes and their usage in OpenStack
projects.

[1] -
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html


[openstack-dev] oslo.middleware 0.4.0 released

2015-01-29 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.middleware 0.4.0: Oslo Middleware library

This release adds a new middleware class for adding a health-check
endpoint to a REST service.
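The concept can be sketched as ordinary WSGI middleware (a simplified illustration only, not the actual oslo.middleware class or its API):

```python
class Healthcheck:
    """Answer a health-check path directly; delegate everything else."""

    def __init__(self, app, path="/healthcheck"):
        self.app = app
        self.path = path

    def __call__(self, environ, start_response):
        if environ.get("PATH_INFO") == self.path:
            # Respond without touching the wrapped application, so load
            # balancers can probe liveness cheaply.
            body = b"OK"
            start_response("200 OK", [
                ("Content-Type", "text/plain"),
                ("Content-Length", str(len(body))),
            ])
            return [body]
        return self.app(environ, start_response)


def service(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"real service"]

app = Healthcheck(service)
```

See the oslo.middleware docs for the real class, which additionally supports pluggable backends such as disable-by-file.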

For more details, please see the git log history below and:

http://launchpad.net/oslo.middleware/+milestone/0.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware
Changes in /home/dhellmann/repos/openstack/oslo.middleware 0.3.0..0.4.0
---

3687523 Fixes the healthcheck factory method and docs
d345a4b add shortcut to healthcheck middleware
88c69a0 Updated from global requirements
3bc703b Move i18n module to a private name
c46b2c6 Update Oslo imports to remove namespace package
edabbd1 Add healthcheck middleware
2e6685d Updated from global requirements
020fd92 Fix bug tracker link in readme

Diffstat (except docs and test files)
-

README.rst |   2 +-
oslo_middleware/__init__.py|   2 +
oslo_middleware/_i18n.py   |  35 
oslo_middleware/catch_errors.py|   2 +-
oslo_middleware/healthcheck/__init__.py| 116 +
oslo_middleware/healthcheck/disable_by_file.py |  54 
oslo_middleware/healthcheck/pluginbase.py  |  35 
oslo_middleware/i18n.py|  35 
oslo_middleware/sizelimit.py   |   6 +-
requirements.txt   |   5 +-
setup.cfg  |   3 +
tox.ini|   2 +-
16 files changed, 347 insertions(+), 44 deletions(-)

Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1b66bf0..00cbf4f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Babel>=1.3
-oslo.config>=1.4.0  # Apache-2.0
+oslo.config>=1.6.0  # Apache-2.0
@@ -9 +9 @@ oslo.context>=0.1.0 # Apache-2.0
-oslo.i18n>=1.0.0  # Apache-2.0
+oslo.i18n>=1.3.0  # Apache-2.0
@@ -10,0 +11 @@ six>=1.7.0
+stevedore>=1.1.0  # Apache-2.0


Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-29 Thread Julien Danjou
On Thu, Jan 29 2015, Thomas Goirand wrote:

Hi Thomas,

 The Debian policy is that Python module packages should be named after
 the import statement in a source file. Meaning that if we do:

 import oslo_db

 then the package should be called python-oslo-db.
 
 This means that I will have to rename all the Debian packages to
 remove the dot and put a dash instead. But by doing so, if OpenStack
 upstream is keeping the old naming convention, then all the
 requirements.txt will be wrong (by wrong, I mean from my perspective
 as a package maintainer), and the automated dependency calculation of
 dh_python2 will put package names with dots instead of dashes.

So that's a mistake in the Debian policy and/or tools.

The import statement is unrelated to the package name as it is published
on PyPI. A package can provide several Python modules; for example,
oslo_foo and oslo_bar could both be provided by the same package, named
e.g. oslo.foobar or oslo_foobar.

What's in requirements.txt are package names, not module names. So if
you, or Debian packaging in general, rely on that to build the list of
dependencies and package names, you should somehow fix dh_python2.

What we decided to do is to name our packages (as published on PyPI)
oslo.something and provide one and only one Python module, called
oslo_something. That is totally valid.
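For instance, a pbr-style setup.cfg can declare a dotted distribution name while shipping an underscore module (the names here are illustrative):

```
[metadata]
name = oslo.something

[files]
packages =
    oslo_something
```

pip and requirements.txt see "oslo.something"; Python code imports "oslo_something".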

I understand that this is hard for you, likely because of dh_python2,
but it's the Debian tooling or policy that should be fixed.

I'm not doing much Debian stuff nowadays, but I'll be happy to help you
out if you need to amend the policy or clear things up with dh_python2.

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




[openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-29 Thread Zane Bitter
I got a question today about creating keystone users/roles/tenants in 
Heat templates. We currently support creating users via the 
AWS::IAM::User resource, but we don't have a native equivalent.


IIUC keystone now allows you to add users to a domain that is otherwise 
backed by a read-only backend (i.e. LDAP). If this means that it's now 
possible to configure a cloud so that one need not be an admin to create 
users then I think it would be a really useful thing to expose in Heat. 
Does anyone know if that's the case?
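If it does become possible, a hypothetical native resource might read like this in a template (OS::Keystone::User is purely a sketch of the idea; only AWS::IAM::User exists today):

```
resources:
  a_user:
    type: OS::Keystone::User   # hypothetical resource type
    properties:
      name: example-user
      domain: example-domain
```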


I think roles and tenants are likely to remain admin-only, but we have 
precedent for including resources like that in /contrib... this seems 
like it would be comparably useful.


Thoughts?

cheers,
Zane.



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Jeremy Stanley
On 2015-01-29 18:35:20 +0200 (+0200), Roman Podoliaka wrote:
[...]
 Otherwise, PyMySQL would be much slower than MySQL-Python for the
 typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
[...]

Is this assertion based on representative empirical testing (for
example profiling devstack+tempest, or perhaps comparing rally
benchmarks), or merely an assumption which still needs validating?
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Hi all,

Mike, thanks for summarizing this in
https://wiki.openstack.org/wiki/PyMySQL_evaluation !

On PyMySQL: this is something we need to enable testing of oslo.db on
Python 3.x and PyPy. Though, I doubt we want to make PyMySQL the
default DB API driver for OpenStack services for Python 2.x. At least,
not until PyMySQL provides C-speedups for hot spots in the driver code
(I assume this can be done in eventlet/PyPy friendly way using cffi).
Otherwise, PyMySQL would be much slower than MySQL-Python for the
typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
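The comparison could be made concrete with a small DBAPI-level harness along these lines (the driver names and connection details in the comment are assumptions; any DBAPI-compatible driver can be plugged in):

```python
import time

def bench(connect, query="SELECT * FROM test_data", runs=3):
    """Average wall-clock time to execute and fetch a query, per run."""
    conn = connect()
    cur = conn.cursor()
    start = time.perf_counter()
    for _ in range(runs):
        cur.execute(query)
        cur.fetchall()  # fetching is where pure-Python row decoding costs
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed / runs

# e.g. compare (both connection setups hypothetical):
#   bench(lambda: pymysql.connect(host=..., db=...))
#   bench(lambda: MySQLdb.connect(host=..., db=...))
```

A synthetic harness like this only shows driver overhead on large result sets; as noted later in the thread, tempest/rally workloads are the more representative test.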

 On native threads vs green threads: I very much like the Keystone
 approach, which allows the service to be run using either eventlet or
 Apache. It would be great if we could do that for other services as
 well.

Thanks,
Roman

On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:

 On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:

 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:

 Yeah, it was never threads vs. greenthreads. There was a lot of pushback
 to relying on Twisted, which many people found confusing to use, and
 more importantly, to follow when reading code. Whatever the performance
 difference may be, eventlet code is a lot easier to follow, as it more
 closely resembles single-threaded linear execution.

 --
 Ed Leafe



Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Thierry Carrez
Sean Dague wrote:
 On 01/27/2015 05:21 PM, Sean Dague wrote:
 On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
 Hi openstack-dev,

 The barbican team would like to announce the release of
 python-barbicanclient 3.0.2.  This is a minor release that fixes a bug
 in the pbr versioning that was preventing the client from working correctly.

 The release is available on PyPI

 https://pypi.python.org/pypi/python-barbicanclient/3.0.2

 Which just broke everything, because it creates incompatible
 requirements in stable/juno with cinder. :(
 
 Here is the footnote -
 http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429

This seems to have been caused by this requirements sync:

http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3

but then the same requirements sync happened in all other clients:

http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c

Does that mean that all the clients are time bombs that will break
stable/juno when their next release is tagged?

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Mike Bayer


Roman Podoliaka rpodoly...@mirantis.com wrote:

 
 On native threads vs green threads: I very much like the Keystone
 approach, which allows the service to be run using either eventlet or
 Apache. It would be great if we could do that for other services as
 well.

but why do the two approaches need to be explicit at all? Basically, if you
write a WSGI application, you are normally writing non-threaded code with a
shared-nothing approach. Whether the WSGI app runs in a threaded Apache
container or a gevent-style uWSGI container is a deployment option. This
shouldn't be exposed in the code.
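A minimal sketch of the point: the application code below is container-agnostic, and choosing Apache/mod_wsgi, uWSGI, or an eventlet WSGI server is a deployment decision made entirely outside it (the deployment commands in the comments are illustrative):

```python
def application(environ, start_response):
    """Shared-nothing WSGI app: no threading model is visible here."""
    body = b"hello"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Deployment options, none of which appear in the code above:
#   mod_wsgi:  point WSGIScriptAlias at this module
#   uWSGI:     uwsgi --http :8080 --wsgi-file thismodule.py
#   eventlet:  eventlet.wsgi.server(eventlet.listen(("", 8080)), application)
```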




 
 Thanks,
 Roman
 
 On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:
 
 On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:
 
 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:
 
 Yeah, it was never threads vs. greenthreads. There was a lot of pushback
 to relying on Twisted, which many people found confusing to use, and
 more importantly, to follow when reading code. Whatever the performance
 difference may be, eventlet code is a lot easier to follow, as it more
 closely resembles single-threaded linear execution.
 
  --
  Ed Leafe
 


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-29 Thread Kyle Mestery
On Thu, Jan 29, 2015 at 2:55 AM, Kevin Benton blak...@gmail.com wrote:

 Why would users want to change an active port's IP address anyway?

 Re-addressing. It's not common, but the entire reason I brought this up is
 because a user was moving an instance to another subnet on the same network
 and stranded one of their VMs.

  I worry about setting a default config value to handle a very unusual
 use case.

 Changing a static lease is something that works on normal networks so I
 don't think we should break it in Neutron without a really good reason.

 Right now, the big reason to keep a high lease time that I agree with is
 that it buys operators lots of dnsmasq downtime without affecting running
 clients. To get the best of both worlds we can set DHCP option 58 (a.k.a
 dhcp-renewal-time or T1) to 240 seconds. Then the lease time can be left to
 be something large like 10 days to allow for tons of DHCP server downtime
 without affecting running clients.

 There are two issues with this approach. First, some simple dhcp clients
 don't honor that dhcp option (e.g. the one with Cirros), but it works with
 dhclient so it should work on CentOS, Fedora, etc (I verified it works on
 Ubuntu). This isn't a big deal because the worst case is what we have
 already (half of the lease time). The second issue is that dnsmasq
 hardcodes that option, so a patch would be required to allow it to be
 specified in the options file. I am happy to submit the patch required
 there so that isn't a big deal either.

 I'll defer to distributions here, but they would have to consume this
patch and release it before it would become prevalent in distributions
deployed with Neutron. Just something to note here. That said, I think
submitting a patch to remove hard coding this is a good idea, and ideally
you would submit that patch quickly while we hash out the details here.



 If we implement that fix, the remaining issue is Brian's other comment
 about too much DHCP traffic. I've been doing some packet captures and the
 standard request/reply for a renewal is 2 unicast packets totaling about
 725 bytes. Assuming 10,000 VMs renewing every 240 seconds, there will be an
 average of 242 kbps background traffic across the entire network. Even at a
 density of 50 VMs, that's only 1.2 kbps per compute node. If that's still
 too much, then the deployer can adjust the value upwards, but that's hardly
 a reason to have a high default.
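The arithmetic above can be reproduced directly (725-byte renewal exchange and 240-second renewal interval, both figures from this message):

```python
vms = 10000
renew_interval_s = 240.0   # proposed DHCP option 58 (T1) value, seconds
bytes_per_renewal = 725    # unicast request/reply pair, from packet capture

# Average background traffic across the entire network, in kbps.
network_kbps = vms / renew_interval_s * bytes_per_renewal * 8 / 1000.0
print(round(network_kbps))      # -> 242

# Share attributable to one compute node hosting 50 VMs.
per_node_kbps = network_kbps * 50 / vms
print(round(per_node_kbps, 1))  # -> 1.2
```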

 That just leaves the logging problem. Since we require a change to dnsmasq
 anyway, perhaps we could also request an option to suppress logs from
 renewals? If that's not adequate, I think 2 log entries per vm every 240
 seconds is really only a concern for operators with large clouds and they
 should have the knowledge required to change a config file anyway. ;-)


 On Wed, Jan 28, 2015 at 3:59 PM, Chuck Carlino chuckjcarl...@gmail.com
 wrote:

  On 01/28/2015 12:51 PM, Kevin Benton wrote:

 If we are going to ignore the IP address changing use-case, can we just
 make the default infinity? Then nobody ever has to worry about control
 plane outages for existing client. 24 hours is way too long to be useful
 anyway.


 Why would users want to change an active port's IP address anyway?  I can
 see possible use in changing an inactive port's IP address, but that
 wouldn't cause the dhcp issues mentioned here.  I worry about setting a
 default config value to handle a very unusual use case.

 Chuck



  On Jan 28, 2015 12:44 PM, Salvatore Orlando sorla...@nicira.com
 wrote:



 On 28 January 2015 at 20:19, Brian Haley brian.ha...@hp.com wrote:

 Hi Kevin,

 On 01/28/2015 03:50 AM, Kevin Benton wrote:
  Hi,
 
  Approximately a year and a half ago, the default DHCP lease time in
 Neutron was
  increased from 120 seconds to 86400 seconds.[1] This was done with
 the goal of
  reducing DHCP traffic with very little discussion (based on what I
 can see in
  the review and bug report). While it it does indeed reduce DHCP
 traffic, I don't
  think any bug reports were filed showing that a 120 second lease time
 resulted
  in too much traffic or that a jump all of the way to 86400 seconds
 was required
  instead of a value in the same order of magnitude.
 
  Why does this matter?
 
  Neutron ports can be updated with a new IP address from the same
 subnet or
  another subnet on the same network. The port update will result in
 anti-spoofing
  iptables rule changes that immediately stop the old IP address from
 working on
  the host. This means the host is unreachable for 0-12 hours based on
 the current
  default lease time without manual intervention[2] (assuming
 half-lease length
  DHCP renewal attempts).

 So I'll first comment on the problem.  You're essentially pulling the
 rug out
 from under these VMs by changing their IP (and that of their router and
 DHCP/DNS
 server), but you expect they should fail quickly and come right back
 online.  In
 a non-Neutron environment wouldn't the IT person that did this need
 

Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Anne Gentle
On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova 
ekudryash...@mirantis.com wrote:

 Hi, all


 Openstack APIs interact with each other and external systems partially by
 passing of HTTP errors. The only valuable difference between types of
 exceptions is HTTP-codes, but current codes are generalized, so external
 system can’t distinguish what actually happened.

 As an example two different failures below differs only by error message:

 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 189

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf

 Content-Type: application/json

 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name": "bar"}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 118

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0

 Date: Fri, 23 Jan 2015 10:43:33 GMT

 {"badRequest": {"message": "Security group bar not found for project
 790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}

 and

 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 192

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71

 Content-Type: application/json

 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
 "default"}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 70

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5

 Date: Fri, 23 Jan 2015 10:39:43 GMT

 {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}

 The former specifies an incorrect security group name, and the latter an
 incorrect keypair name. And the problem is, that just looking at the
 response body and HTTP response code an external system can’t understand
 what exactly went wrong. And parsing of error messages here is not the way
 we’d like to solve this problem.


For the Compute API v2 we have the shortened error codes in the
documentation at
http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses

such as:

Error response codes
computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
itemNotFound (404), buildInProgress (409)
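Read as data, that list is already a coarse error-code table, and it shows the ambiguity under discussion (statuses copied from the list above):

```python
# Compute API v2 fault names -> HTTP statuses, as listed in the docs.
FAULTS = {
    "computeFault": (400, 500),
    "serviceUnavailable": (503,),
    "badRequest": (400,),
    "unauthorized": (401,),
    "forbidden": (403,),
    "badMethod": (405,),
    "overLimit": (413,),
    "itemNotFound": (404,),
    "buildInProgress": (409,),
}

# Invert the mapping: the gap discussed in the thread is that a single
# status can map back to several fault names, so the status alone is
# not enough for a client to know what happened.
by_status = {}
for name, codes in FAULTS.items():
    for code in codes:
        by_status.setdefault(code, []).append(name)

print(sorted(by_status[400]))  # -> ['badRequest', 'computeFault']
```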

Thanks to a recent update (well, last fall) to our build tool for docs.

What we don't have is a table in the docs giving computeFault a longer
description -- is that what you are asking for, for all OpenStack
APIs?

Tell me more.

Anne




 Another example for solving this problem is AWS EC2 exception codes [1]

 So if we have some service based on Openstack projects it would be useful
 to have some concrete error codes(textual or numeric), which could allow to
 define what actually goes wrong and later correctly process obtained
 exception. These codes should be predefined for each exception, have
 documented structure and allow to parse exception correctly in each step of
 exception handling.

 So I’d like to discuss implementing such codes and its usage in openstack
 projects.

 [1] -
 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html





-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Jeremy,

I don't have exact numbers, so yeah, it's just an assumption based on
looking at the nova-api/scheduler logs with connection_debug set to
100.

But that's a good point you are making here: it will be interesting to
see what difference enabling PyMySQL makes for tempest/rally
workloads, rather than just running synthetic tests. I'm going to give
it a try on my devstack installation.

Thanks,
Roman

On Thu, Jan 29, 2015 at 6:42 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-01-29 18:35:20 +0200 (+0200), Roman Podoliaka wrote:
 [...]
 Otherwise, PyMySQL would be much slower than MySQL-Python for the
 typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
 [...]

 Is this assertion based on representative empirical testing (for
 example profiling devstack+tempest, or perhaps comparing rally
 benchmarks), or merely an assumption which still needs validating?
 --
 Jeremy Stanley



Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Morgan Fainberg
Good question! I was planning a keystone client release very soon, but will
hold off if it will break everything.

--Morgan

On Thursday, January 29, 2015, Thierry Carrez thie...@openstack.org wrote:

 Sean Dague wrote:
  On 01/27/2015 05:21 PM, Sean Dague wrote:
  On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
  Hi openstack-dev,
 
  The barbican team would like to announce the release of
  python-barbicanclient 3.0.2.  This is a minor release that fixes a bug
  in the pbr versioning that was preventing the client from working
 correctly.
 
  The release is available on PyPI
 
  https://pypi.python.org/pypi/python-barbicanclient/3.0.2
 
  Which just broke everything, because it creates incompatible
  requirements in stable/juno with cinder. :(
 
  Here is the footnote -
 
 http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429

 This seems to have been caused by this requirements sync:


 http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3

 but then the same requirements sync happened in all other clients:


 http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c

 Does that mean that all the clients are time bombs that will break
 stable/juno when their next release is tagged ?

 --
 Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Morgan Fainberg
On Thursday, January 29, 2015, Roman Podoliaka rpodoly...@mirantis.com
wrote:

 Hi all,

 Mike, thanks for summarizing this in
 https://wiki.openstack.org/wiki/PyMySQL_evaluation !

 On PyMySQL: this is something we need to enable testing of oslo.db on
 Python 3.x and PyPy. Though, I doubt we want to make PyMySQL the
 default DB API driver for OpenStack services for Python 2.x. At least,
 not until PyMySQL provides C-speedups for hot spots in the driver code
 (I assume this can be done in eventlet/PyPy friendly way using cffi).
 Otherwise, PyMySQL would be much slower than MySQL-Python for the
 typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
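To make Roman's point about hot spots concrete, here is a small stdlib-only micro-benchmark sketch. It is illustrative only and is *not* PyMySQL's actual code: it contrasts a character-by-character pure-Python transformation with `str.translate`, whose inner loop runs in C, which is the kind of difference a C/cffi speedup would target when decoding or escaping a lot of row data.

```python
import timeit

# Illustrative only -- not PyMySQL's real escaping code. The point is the
# cost of a pure-Python per-character inner loop vs the same work done in C.
ESCAPES = {"'": "\\'", "\\": "\\\\", "\n": "\\n"}

def escape_python(s):
    # pure-Python inner loop: one dict lookup per character
    return "".join(ESCAPES.get(ch, ch) for ch in s)

TABLE = str.maketrans(ESCAPES)

def escape_translate(s):
    # identical transformation, but the loop runs inside the interpreter's C code
    return s.translate(TABLE)

if __name__ == "__main__":
    row = "O'Reilly\n" * 100
    assert escape_python(row) == escape_translate(row)
    t_py = timeit.timeit(lambda: escape_python(row), number=2000)
    t_c = timeit.timeit(lambda: escape_translate(row), number=2000)
    print("pure python: %.3fs, translate: %.3fs" % (t_py, t_c))
```

On a typical CPython build the translate version is several times faster, which is the gap C-speedups for a pure-Python driver would be closing.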

 On native threads vs green threads: I very much like the Keystone
 approach, which allows running the service using either eventlet or
 Apache. It would be great if we could do that for other services as
 well.


I have heard that a few of the very large deployments out there use Apache
or nginx (with mod_wsgi or its nginx equivalent) for all of the services. I
personally think this deployment model is the way I'd like to see all
OpenStack API services go. For cases like conductor or nova-compute it may
not be as good a deployment option (they are not exactly traditional WSGI apps).

--Morgan


Thanks,
 Roman

 On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:
 
  Not sure if it helps more than this explanation, but there was a
  blueprint and accompanying wiki page that explains the move from twisted
  to eventlet:
 
  Yeah, it was never threads vs. greenthreads. There was a lot of pushback
  to relying on Twisted, which many people found confusing to use, and
  more importantly, to follow when reading code. Whatever the performance
  difference may be, eventlet code is a lot easier to follow, as it more
  closely resembles single-threaded linear execution.
 
  - --
 
  - -- Ed Leafe
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v2.0.14 (GNU/Linux)
 
  iQIcBAEBAgAGBQJUyle8AAoJEKMgtcocwZqLRX4P/j3LhEhubBOJmWepfO6KrG9V
  KxltAshKRvZ9OAKlezprZh8N5XUcxRTDxLhxkYP6Qsaf1pxHacAIOhLRobnV5Y+A
  oF6E6eRORs13pUhLkL+EzZ07Kon+SjSmvDDZiIo+xe8OTbgfMpo5j7zMPeLJpcr0
  RUSS+knJ/ewNCMaX4gTwTY3sDYFZTbVuGHFUtjSgeoJP0T5aP05UR73xeT7/AsbW
  O8dOL4tX+6GcxIHyX4XdFH9hng1P2vBZJ5l8yV6BxB6U8xsiQjlsCpwrb8orYJ0r
  f7+YW0D0FHOvY/TV4dzsLC/2NGc2AwMszWL3kB/AzbUuDyMMDEGpbAS/VHDcyhZg
  l7zFKEQy+9UybVzWjw764hpzcUT/ICPbTBiX/QuN4qY9YTUNTlCNrRAslgM+cr2y
  x0/nb6cd+Qq21RPIygJ9HavRqOm8npF6HpUrE55Dn+3/OvfAftlWNPcdlXAjtDOt
  4WUFUoZjUTsNUjLlEiiTzgfJg7+eQqbR/HFubCpILFQgOlOAuZIDcr3g8a3yw7ED
  wt5UZz/89LDQqpF2TZX8lKFWxeKk1CnxWEWO208+E/700JS4xKHpnVi4tj18udsY
  AHnihUwGO4d9Q0i+TqbomnzyqOW6SM+gDUcahfJ92IJj9e13bqpbuoN/YMbpD/o8
  evOz8G3OeC/KaOgG5F/1
  =1M8U
  -END PGP SIGNATURE-
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Anne Gentle
On Thu, Jan 29, 2015 at 4:10 AM, Thierry Carrez thie...@openstack.org
wrote:

 Everett Toews wrote:
  A couple of important topics came up as a result of attending the
  Cross Project Meeting. I’ve added both to the agenda for the next
  meeting on Thursday 2015/01/29 at 16:00 UTC.
 
  https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
  The first is the suggestion from ttx to consider using
  openstack-specs [1] for the API guidelines.

 Precision: my suggestion was to keep the api-wg repository for the
 various drafting stages, and move to openstack-specs when it's ready to
 be recommended and request wider community comments. Think Draft and
 RFC stages in the IETF process :)


Oh, thanks for clarifying, I hadn't understood it that way.

To me, it seems more efficient to get votes and iterate in one repo rather
than going through two iterations and two review groups. What do others
think?
Thanks,
Anne



 --
 Thierry Carrez (ttx)





-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Sean Dague
On 01/29/2015 07:17 AM, Anne Gentle wrote:
 
 
 On Thu, Jan 29, 2015 at 4:10 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Everett Toews wrote:
  A couple of important topics came up as a result of attending the
  Cross Project Meeting. I’ve added both to the agenda for the next
  meeting on Thursday 2015/01/29 at 16:00 UTC.
 
  https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
  The first is the suggestion from ttx to consider using
  openstack-specs [1] for the API guidelines.
 
 Precision: my suggestion was to keep the api-wg repository for the
 various drafting stages, and move to openstack-specs when it's ready to
 be recommended and request wider community comments. Think Draft and
 RFC stages in the IETF process :)
 
 
 Oh, thanks for clarifying, I hadn't understood it that way. 
 
 To me, it seems more efficient to get votes and iterate in one repo
 rather than going through two iterations and two review groups. What do
 others think?
 Thanks,
 Anne

Honestly, I'm more a fan of the one-repository approach. Jumping
repositories means the history gets lost, and you have to restart a
bunch of conversations. That happened in the logging jump from nova to
openstack-specs, which probably created 3-4 months of additional delay in
the process.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-29 Thread Steven Hardy
On Thu, Jan 29, 2015 at 11:41:36AM -0500, Zane Bitter wrote:
 I got a question today about creating keystone users/roles/tenants in Heat
 templates. We currently support creating users via the AWS::IAM::User
 resource, but we don't have a native equivalent.

Note that AWS::IAM::User actually creates a stack domain user[1], e.g. a
special user inside the heat domain, intended to be used primarily for
association with an ec2-keypair (e.g. AWS::IAM::AccessKey) so we can do
signed requests via the CFN API from inside instances etc.

So it's not really creating normal keystone users, but Angus and I have
already discussed the idea of a native equivalent to this, due to the fact
that all StackUser (engine/stack_user.py) subclasses effectively create a
hidden resource which is a keystone user in the heat domain.

I'd definitely support adding a native OS::Heat::StackUser resource,
effectively exposing the stack_user.py code as a resource, and adding
optional user properties to all existing StackUser subclasses (e.g all
SignalResponders like ScalingPolicy and SoftwareDeployment).

This would make the creation of the user for signal auth more transparent,
and enable template authors to choose if they want a single user associated
with multiple resources (vs now when we force them to have different users
for every SignalResponder).

[1] 
http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-2-stack.html

 IIUC keystone now allows you to add users to a domain that is otherwise
 backed by a read-only backend (i.e. LDAP). If this means that it's now
 possible to configure a cloud so that one need not be an admin to create
 users then I think it would be a really useful thing to expose in Heat. Does
 anyone know if that's the case?

I've not heard of that feature, but it's definitely now possible to
configure per-domain backends, so for example you could have the heat
domain backed by SQL and other domains containing real human users backed
by a read-only directory.

 I think roles and tenants are likely to remain admin-only, but we have
 precedent for including resources like that in /contrib... this seems like
 it would be comparably useful.

If the requirement is more to enable admins to create any
users/roles/projects in templates, rather than the heat domain specifically,
I'd personally have no problem with e.g. OS::Keystone::User, provided it was
in contrib (as it's going to be admin-only with the default keystone policies).

Steve
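For illustration, template usage of such a resource might look like the sketch below. Note that OS::Keystone::User does not exist today; the resource type and all property names are hypothetical, invented to show what exposing keystone users in a HOT template could feel like:

```yaml
# Hypothetical sketch -- OS::Keystone::User is not an existing Heat
# resource; the type and property names below are illustrative only.
heat_template_version: 2014-10-16

resources:
  app_user:
    type: OS::Keystone::User
    properties:
      name: app-service-user
      domain: heat               # a per-domain SQL backend, per the thread
      default_project: { get_param: project }
      roles: [ member ]
```

With the default keystone policies this would only ever succeed for admins, which is the argument for keeping it in /contrib.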

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Mike,

I can't agree more: as far as we are concerned, every service is yet
another WSGI app, and it should be left up to the operator how to deploy
it.

So 'green thread awareness' (i.e. patching of the world) should go into a
separate keystone|*-eventlet binary, while everyone else will still be
able to use it as a general WSGI app.

Thanks,
Roman
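A minimal sketch of that split might look like the following. The module and function names are illustrative, not any project's actual layout: the WSGI callable itself knows nothing about eventlet, and the monkey-patching lives only in a separate launcher entry point.

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # plain, shared-nothing WSGI app: no eventlet imports, no green-thread
    # awareness -- deployable under Apache/mod_wsgi, uWSGI, or an eventlet
    # server without changes
    body = b'{"status": "ok"}'
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

def main_eventlet():
    # hypothetical separate "keystone-eventlet"-style entry point: patching
    # the world happens here, and only here
    import eventlet
    eventlet.monkey_patch()
    import eventlet.wsgi
    eventlet.wsgi.server(eventlet.listen(("", 8080)), application)

def serve_stdlib():
    # e.g. for local testing; mod_wsgi would just import `application`
    make_server("", 8080, application).serve_forever()
```

The deployment choice then lives entirely in which launcher is invoked, which is exactly Mike's point that it shouldn't be exposed in the application code.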

On Thu, Jan 29, 2015 at 6:55 PM, Mike Bayer mba...@redhat.com wrote:


 Roman Podoliaka rpodoly...@mirantis.com wrote:


 On native threads vs green threads: I very much like the Keystone
 approach, which allows to run the service using either eventlet or
 Apache. It would be great, if we could do that for other services as
 well.

 but why do we need the two approaches to be explicit at all?   Basically, if you 
 write a WSGI application, you normally are writing non-threaded code with a 
 shared-nothing approach.  Whether the WSGI app is used in a threaded apache 
 container or a gevent-style uWSGI container is a deployment option.  This 
 shouldn't be exposed in the code.





 Thanks,
 Roman

 On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:

 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:

 Yeah, it was never threads vs. greenthreads. There was a lot of pushback
 to relying on Twisted, which many people found confusing to use, and
 more importantly, to follow when reading code. Whatever the performance
 difference may be, eventlet code is a lot easier to follow, as it more
 closely resembles single-threaded linear execution.

 - --

 - -- Ed Leafe




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-29 Thread Anita Kuno
On 01/28/2015 07:24 PM, Adam Lawson wrote:
 I'm short on time so I apologize for my candor since I need to get straight
 to the point.
 
 I love reading the various opinions and my team is immensely excited with
 how OpenStack is maturing. But this is lunacy.
 
 I looked at the patch being worked [1] to change how things are done and
 have more questions than I can count.
 
 So I'll start with the obvious ones:
 
- Are you proposing this change as a Foundation Individual Board
Director tasked with representing the interests of all Individual Members
of the OpenStack community or as a member of the TC? Context matters
because your two hats are presenting a conflict of interest in my opinion.
One cannot propose a change that gives them greater influence while
suggesting they're doing it for everyone's benefit.
How can Jim be proposing a change as a Foundation Individual Board
Director? He isn't a member of the Board.

http://www.openstack.org/foundation/board-of-directors/

He is a member of the Technical Committee.

http://www.openstack.org/foundation/tech-committee/

Keep in mind that the repository that he offered the change to, the
openstack/governance repository, welcomes patches from anyone who takes
the time to learn our developer workflow and offers a patch to the
repository using Gerrit.

http://docs.openstack.org/infra/manual/developers.html

Thanks,
Anita.
- How is fun remotely relevant when discussing process improvement?
I'm really hoping we aren't developing processes based on how fun a process
is or isn't.
- Why is this discussion being limited to the development community
only? Where's the openness in that?
- What exactly is the problem we're attempting to fix?
- Does the current process not work?
- Is there group of individuals being disenfranchised with our current
process somehow that suggests the process should limit participation
differently?
 
 And some questions around the participation proposals:
 
- Why is the election process change proposing to limit participation to
ATC members only?
There are numerous enthusiasts within our community that don't fall
within the ATC category such as marketing (as some have brought up),
corporate sponsors (where I live) and I'm sure there are many more.
- Is taking back the process a hint that the current process is being
mishandled or restores a sense of process control?
- Is the presumption that the election process belongs to someone or
some group?
That strikes me as an incredibly subjective assertion to make.
 
 <opinion>This is one reason I feel so strongly folks should not be allowed
 to hold more than one position of leadership within the OpenStack project.
 Obfuscated context coupled with increased influence rarely produces
 excellence on either front. But that's me.</opinion>
 
 Mahalo,
 Adam
 
 [1] https://review.openstack.org/#/c/150604/
 
 
 *Adam Lawson*
 
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072
 
 
 On Wed, Jan 28, 2015 at 10:23 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 01/28/2015 11:36 AM, Thierry Carrez wrote:
 Monty Taylor wrote:
 What if, to reduce stress on you, we make this 100% mechanical:

 - Anyone can propose a name
 - Election officials verify that the name matches the criteria
 -  * note: how do we approve additive exceptions without tons of effort

 Devil is in the details, as reading some of my hatemail would tell you.
 For example in the past I rejected Foo which was proposed because
 there was a Foo Bar landmark in the vicinity. The rules would have to
 be pretty detailed to be entirely objective.
 Naming isn't objective. That is both the value and the hardship.

 - Marketing team provides feedback to the election officials on names
 they find image-wise problematic
 - The poll is created with the roster of all foundation members
 containing all of the choices, but with the marketing issues clearly
 labeled, like this:

 * Love
 * Lumber
 Ohh, it gives me a thrill to see a name that means something even
 remotely Canadian. (not advocating it be added to this round)
 * Lettuce
 * Lemming - marketing issues identified

 - post poll - foundation staff run trademarks checks on the winners in
 order until a legally acceptable winner is found

 This way nobody is excluded, it's not a burden on you, it's about as
 transparent as it could be - and there are no special privileges needed
 for anyone to volunteer to be an election official.

 I'm going to continue to advocate that we use condorcet instead of a
 launchpad poll because we need the ability to rank things for post-vote
 trademark checks to not get weird. (also, we're working on getting off
 of launchpad, so let's not re-add another connection)
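The core of Monty's point about ranking can be illustrated with a tiny sketch: a Condorcet winner is simply a candidate that beats every other head-to-head, and keeping the full ranking means the next-preferred name is available if the winner fails a trademark check. This is a toy illustration only, not the actual CIVS/condorcet tooling the community uses:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate that beats every other head-to-head, or None.

    ballots: list of rankings, best first. A minimal sketch of why ranked
    polls suit naming: the full ordering survives for post-vote trademark
    checks on the winners "in order".
    """
    def prefers(ballot, a, b):
        return ballot.index(a) < ballot.index(b)

    for cand in candidates:
        if all(sum(prefers(b, cand, other) for b in ballots) > len(ballots) / 2
               for other in candidates if other != cand):
            return cand
    return None  # cycles are possible; real methods need a tie-break rule

ballots = [["Love", "Lettuce", "Lemming"],
           ["Lettuce", "Love", "Lemming"],
           ["Love", "Lemming", "Lettuce"]]
print(condorcet_winner(ballots, ["Love", "Lettuce", "Lemming"]))  # Love
```

A plain approval poll discards exactly the ordering information this function depends on, which is why falling back to the second choice "doesn't get weird" with a ranked ballot.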

 It's been some time since we last used a Launchpad poll. I recently used
 an open 

Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Kyle Mestery
Maybe we should defer all client releases until we know for sure whether each
of them is a ticking time bomb.

On Thu, Jan 29, 2015 at 11:00 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

 Good question! I was planning a keystone client release very soon, but will
 hold off if it will break everything.

 --Morgan


 On Thursday, January 29, 2015, Thierry Carrez thie...@openstack.org
 wrote:

 Sean Dague wrote:
  On 01/27/2015 05:21 PM, Sean Dague wrote:
  On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
  Hi openstack-dev,
 
  The barbican team would like to announce the release of
  python-barbicanclient 3.0.2.  This is a minor release that fixes a bug
  in the pbr versioning that was preventing the client from working
 correctly.
 
  The release is available on PyPI
 
  https://pypi.python.org/pypi/python-barbicanclient/3.0.2
 
  Which just broke everything, because it creates incompatible
  requirements in stable/juno with cinder. :(
 
  Here is the footnote -
 
 http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429

 This seems to have been caused by this requirements sync:


 http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3

 but then the same requirements sync happened in all other clients:


 http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c

 Does that mean that all the clients are time bombs that will break
 stable/juno when their next release is tagged ?

 --
 Thierry Carrez (ttx)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Roman Podoliaka
Hi Anne,

I think Eugeniya refers to the problem that we can't really distinguish
between two different badRequest (400) errors (e.g. a wrong security
group name vs. a wrong key pair name when starting an instance), unless
we parse the error description, which might be error-prone.

Thanks,
Roman
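To make the distinction concrete, here is a sketch of what an EC2-style machine-readable code would buy a client. The payload shape and the `errorCode` values are hypothetical, invented for illustration in the spirit of the proposal below; only the message text and HTTP code come from today's responses.

```python
import json

# Hypothetical payload: same HTTP 400, same human-readable message, but
# with a stable machine-readable "errorCode" field added.
response_body = json.dumps({
    "badRequest": {
        "code": 400,
        "errorCode": "SecurityGroupNotFound",  # invented for illustration
        "message": "Security group bar not found for project 790f...",
    }
})

def classify(body):
    # dispatch on the code field, never on the prose of the message
    error = json.loads(body)["badRequest"]
    handlers = {
        "SecurityGroupNotFound": "create the group, then retry",
        "InvalidKeyName": "fall back to a known keypair",
    }
    return handlers.get(error["errorCode"], "surface the raw message")

print(classify(response_body))  # create the group, then retry
```

Without the extra field, both failures in Eugeniya's examples collapse into `badRequest`/400 and the client is forced into fragile message parsing.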

On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
annegen...@justwriteclick.com wrote:


 On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
 ekudryash...@mirantis.com wrote:

 Hi, all


 OpenStack APIs interact with each other and with external systems partly by
 passing HTTP errors. The only meaningful difference between types of
 exceptions is the HTTP code, but the current codes are generalized, so an
 external system can't distinguish what actually happened.


 As an example, the two different failures below differ only by error message:


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 189

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf

 Content-Type: application/json


 {server: {name: demo, imageRef:
 171c9d7d-3912-4547-b2a5-ea157eb08622, key_name: test, flavorRef:
 42, max_count: 1, min_count: 1, security_groups: [{name: bar}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 118

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0

 Date: Fri, 23 Jan 2015 10:43:33 GMT


 {badRequest: {message: Security group bar not found for project
 790f5693e97a40d38c4d5bfdc45acb09., code: 400}}


 and


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 192

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71

 Content-Type: application/json


 {server: {name: demo, imageRef:
 171c9d7d-3912-4547-b2a5-ea157eb08622, key_name: foo, flavorRef:
 42, max_count: 1, min_count: 1, security_groups: [{name:
 default}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 70

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5

 Date: Fri, 23 Jan 2015 10:39:43 GMT


 {badRequest: {message: Invalid key_name provided., code: 400}}


 The former specifies an incorrect security group name, and the latter an
 incorrect keypair name. The problem is that, looking only at the
 response body and HTTP response code, an external system can't understand
 what exactly went wrong, and parsing error messages is not the way
 we'd like to solve this problem.


 For the Compute API v 2 we have the shortened Error Code in the
 documentation at
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses

 such as:

 Error response codes
 computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
 unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
 itemNotFound (404), buildInProgress (409)

 Thanks to a recent update (well, last fall) to our build tool for docs.

 What we don't have is a table in the docs saying computeFault has this
 longer Description -- is that what you are asking for, for all OpenStack
 APIs?

 Tell me more.

 Anne




 Another example for solving this problem is AWS EC2 exception codes [1]


 So if we have some service based on OpenStack projects, it would be useful
 to have concrete error codes (textual or numeric) that make it possible to
 determine what actually went wrong and to process the resulting exception
 correctly. These codes should be predefined for each exception, have a
 documented structure, and allow the exception to be parsed correctly at
 each step of exception handling.


 So I'd like to discuss implementing such codes and their usage in OpenStack
 projects.


 [1] -
 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html





 --
 Anne Gentle
 annegen...@justwriteclick.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-29 Thread Thierry Carrez
Julien Danjou wrote:
 On Thu, Jan 29 2015, Thomas Goirand wrote:
 
 Hi Thomas,
 
 The Debian policy is that Python module packages should be named after
 the import statement in a source file. Meaning that if we do:

 import oslo_db

 then the package should be called python-oslo-db.

 This means that I will have to rename all the Debian packages to
 remove the dot and put a dash instead. But by doing so, if OpenStack
 upstream is keeping the old naming convention, then all the
 requirements.txt will be wrong (by wrong, I mean from my perspective
 as a package maintainer), and the automated dependency calculation of
 dh_python2 will put package names with dots instead of dashes.
 
 So that's a mistake from the Debian policy and/or tools.
 
 The import statement is unrelated to the package name as it is published
 on PyPI. A package could provide several Python modules; for example, you
 could have oslo_foo and oslo_bar provided by the same package, e.g.
 oslo.foobar or oslo_foobar.
 
 What's in requirements.txt are package names, not module names. So if
 you, or Debian packages in general, rely on that to build the list of
 dependencies and package names, you should somehow fix dh_python2.
 
 What we decided to do is to name our packages (published on PyPI)
 oslo.something and provide one and only one Python module, called
 oslo_something. That is totally valid.
 
 I understand that it's hard for you, likely because of dh_python2, but
 it's the Debian tooling or policy that should be fixed.
 
 I'm not doing much Debian stuff nowadays, but I'll be happy to help you
 out if you need to amend the policy or clear things up with dh_python2.

The policy is actually not that strict, if I understand it correctly:


The binary package for module foo should preferably be named python-foo,
if the module name allows, but this is not required if the binary
package ships multiple modules. In the latter case the maintainer
chooses the name of the module which represents the package the most.
For subpackages such as foo.bar, the recommendation is to name the
binary packages python-foo.bar and python3-foo.bar.


That feels like a recommendation to me.
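For context, the "dictionary of translations" Thomas mentions elsewhere in the thread might look something like the fragment below. The entries are illustrative only; the actual mapping depends on how each oslo library is packaged in Debian:

```
# debian/pydist-overrides -- maps upstream requirement names (as they
# appear in requirements.txt) to the Debian binary package to depend on.
# Example entries only, sketching the oslo.* -> oslo-* rename.
oslo.db python-oslo-db
oslo.log python-oslo-log
oslo.config python-oslo-config
```

dh_python2 consults this file when computing dependencies, so the override keeps the generated Depends lines pointing at the dash-named packages even while requirements.txt keeps the dotted names.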

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-29 Thread Evgeniy L
Dmitry,

 But why to add another interface when there is one already (rest api)?

I'm ok if we decide to use the REST API, but of course there are problems
we should solve, like versioning, which is much harder to support than
versioning in core serializers. Also, do you have any ideas how it can be
implemented? Do you run some code which gets the information from the API
on the master node and then sets the information in tasks? Or are you going
to run this code on the OpenStack nodes? As you mentioned in the case of
tokens, you should get the token right before you really need it, because
of the expiry problem, but in that case you don't need any serializers:
get the required token right in the task.
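The "get the token right in the task" idea can be sketched as follows. Both function names (`get_keystone_token`, `run_task`) are hypothetical stand-ins, not Fuel or Astute APIs; the point is only that the token is fetched at execution time rather than baked into serialized deployment data, where it could expire before use.

```python
import time

def get_keystone_token(now=time.time):
    # Hypothetical stand-in for a keystone token request; a real task
    # would call the identity API with the node's credentials here.
    return {"id": "tok-123", "expires_at": now() + 3600}

def run_task(task_fn):
    # Fetch the token immediately before it is needed, so serialized
    # deployment data never carries a token that may have expired.
    token = get_keystone_token()
    return task_fn(token)

result = run_task(lambda tok: "used %s" % tok["id"])
print(result)  # used tok-123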

 What is your opinion about serializing additional information in plugins
code? How it can be done, without exposing db schema?

With exposing the data in more abstract way the way it's done right now
for the current deployment logic.

Thanks,

On Thu, Jan 29, 2015 at 12:06 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 1. as I mentioned above, we should have an interface, and if interface
 doesn't
 provide required information, you will have to fix it in two places,
 in Nailgun and in external-serializers, instead of a single place
 i.e. in Nailgun,
 another thing if astute.yaml is a bad interface and we should provide
 another
 versioned interface, or add more data into deployment serializer.

 But why add another interface when there is one already (the REST API)? A
 plugin developer may query whatever they want (detailed information about
 volumes, interfaces, master node settings).
 It is the fullest source of information in Fuel, and it already needs to
 be protected from incompatible changes.

 If our API is not enough for general use, of course we will need to
 fix it, but I don't quite understand what you mean by "fix it in two
 places". The API provides general information that can be consumed by
 serializers (or any other service/human, actually), and if there are
 issues with that information, the API should be fixed. Serializers expect
 that information in a specific format and make additional transformations
 or computations based on that info.

 What is your opinion about serializing additional information in plugins
 code? How it can be done, without exposing db schema?

 2. it can be handled in python or any other code (which can be wrapped
 into tasks),
 why should we implement here another entity (a.k.a external
 serializers)?

 Yep, I guess this is true. I thought that we may not want to deliver
 credentials to the target nodes, only a token that can be used for a
 limited time, but...



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-29 Thread Zane Bitter

On 29/01/15 12:03, Steven Hardy wrote:

On Thu, Jan 29, 2015 at 11:41:36AM -0500, Zane Bitter wrote:

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat. Does
anyone know if that's the case?


I've not heard of that feature, but it's definitely now possible to
configure per-domain backends, so for example you could have the heat
domain backed by SQL and other domains containing real human users backed
by a read-only directory.


http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-29 Thread Nikola Đipanov
On 01/27/2015 11:41 PM, Michael Still wrote:
 Greetings,
 
 I would like to nominate Melanie Witt for the python-novaclient-core team.
 
 (What is python-novaclient-core? It's a new group which will contain
 all of nova-core as well as anyone else we think should have core
 reviewer powers on just the python-novaclient code).
 
 Melanie has been involved with nova for a long time now. She does
 solid reviews in python-novaclient, and at least two current
 nova-cores have suggested her as ready for core review powers on that
 repository.
 
 Please respond with +1s or any concerns.
 
 References:
 
 
 https://review.openstack.org/#/q/project:openstack/python-novaclient+reviewer:%22melanie+witt+%253Cmelwitt%2540yahoo-inc.com%253E%22,n,z
 
 As a reminder, we use the voting process outlined at
 https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
 core team.
 
 Thanks,
 Michael
 

+1

N.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-29 Thread Vladimir Kuklin
Guys, could you point out where I suggested using Python for testing
puppet manifests?

On Thu, Jan 29, 2015 at 1:28 AM, Sergii Golovatiuk sgolovat...@mirantis.com
 wrote:

 We need to write tests the way the Puppet community writes them. Though if a
 user uses Salt in one stage, it's fine to write tests in Python.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Jan 28, 2015 at 11:15 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Guys, is it a crazy idea to write tests for the deployment state on a node in
 Python?
 It can even be done in unit-test fashion.

 I mean there is no strict dependency on a tool from the puppet world; what is
 needed is access to the OS and shell, maybe some utils.
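That idea might look something like this -- plain unittest cases that need only OS/shell access on the node. The systemd check and the commented-out service name are illustrative assumptions, not Fuel code:

```python
# Minimal sketch of "deployment-state tests on the node, in Python":
# plain unittest cases with no dependency on Puppet-world tooling.
import shutil
import subprocess
import unittest

def service_active(name):
    """Return True if systemd reports the unit as active (assumes systemd)."""
    out = subprocess.run(["systemctl", "is-active", name],
                         capture_output=True, text=True)
    return out.stdout.strip() == "active"

class TestDeployStep(unittest.TestCase):
    def test_shell_is_available(self):
        # every deploy step sketched in the thread assumes a working shell
        self.assertIsNotNone(shutil.which("sh"))

    # def test_rabbitmq_running(self):   # example for a real controller node
    #     self.assertTrue(service_active("rabbitmq-server"))

if __name__ == "__main__":
    unittest.main(exit=False, argv=["deploy-tests"], verbosity=0)
```

Such a file could be dropped on the node by the orchestrator and run as a post-deploy verification task.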

  What plans have Fuel Nailgun team for testing the results of deploy steps
 aka tasks?
 From the Nailgun/orchestration point of view, verification of a deployment
 should be done as another task, or included in the original one.

 On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Moreover, I would suggest using Serverspec, as Beaker already duplicates
 part of our infrastructure automation.

 On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, I suggest that we create a blueprint on how to integrate Beaker with
 our existing infrastructure to increase test coverage. My optimistic
 estimate is that we can see its implementation in 7.0.

 On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com
 wrote:

 My understanding is serverspec is not going to work well / is not going to be
 supported. I think it was discussed on IRC (as I can't find it in my
 email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
 as it's more functional and actively developed.

 On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Puppet OpenStack community uses Beaker for acceptance testing. I
 would
  consider it as option [2]
 
  [2] https://github.com/puppetlabs/beaker
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com
  wrote:
 
  Hello.
 
  We are working on the modularization of Openstack deployment by
 puppet
  manifests in Fuel library [0].
 
  Each deploy step should be post-verified with some testing
 framework as
  well.
 
  I believe the framework should:
  * be shipped as a part of Fuel library for puppet manifests instead
 of
  orchestration or Nailgun backend logic;
  * allow the deployer to verify results right in-place, at the node
 being
  deployed, for example, with a rake tool;
  * be compatible / easy to integrate with the existing orchestration
 in
  Fuel and Mistral as an option?
 
  It looks like test resources provided by Serverspec [1] are a good
  option, what do you think?
 
  What plans does the Fuel Nailgun team have for testing the results of deploy
  steps aka tasks? The spec for the blueprint gives no clear answer.
 
  [0]
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  [1] http://serverspec.org/resource_types.html
 
  --
  Best regards,
  Bogdan Dobrelya,
  Skype #bogdando_at_yahoo.com
  Irc #bogdando
 
 
 



 --
 Andrew
 Mirantis
 Ceph community






 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

[openstack-dev] CLI support in ML2 driver

2015-01-29 Thread Naresh Kumar
Hi,

I have more expertise in OpenDaylight than in OpenStack. I have created a CLI
application in OpenDaylight which uses the AdventNet CLI library on the
southbound side to create/delete services in my non-OpenFlow carrier Ethernet
switch through RESTCONF (it's working!). I want this app to be called from the
Neutron server of OpenStack, with the REST call routed to my ODL northbound
API so that my controller takes care of the operation. Does anyone have ideas
on how this can be implemented?
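Not an authoritative answer, but the usual shape of a solution is an ML2 mechanism driver (or service plugin) that forwards Neutron events to the controller's northbound REST API. A very rough, self-contained sketch -- the endpoint, resource path, and payload below are made up for illustration, and nothing is actually sent:

```python
# Sketch of a forwarder an ML2 mechanism driver could use to translate
# Neutron-side events into RESTCONF calls against an ODL-style NB API.
import json
import urllib.request

class OdlForwarder:
    """Builds RESTCONF requests for a controller's northbound REST API."""

    def __init__(self, base_url):
        self.base_url = base_url

    def build_put(self, path, payload):
        # build (but do not send) a PUT creating/updating a resource
        return urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )

fwd = OdlForwarder("http://controller:8181/restconf")  # illustrative endpoint
req = fwd.build_put("/config/example-svc:services/service/1",
                    {"service": {"id": "1", "type": "carrier-ethernet"}})
print(req.get_method(), req.full_url)
```

In a real driver, methods like create_port_postcommit would call such a forwarder, and the controller would map the NB call onto the southbound CLI app.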


Thanks,
Naresh.

LT Technology Services Ltd

www.LntTechservices.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Plugins repositories moved to stackforge

2015-01-29 Thread Alexander Kislitsky
Hi, guys

All our plugin repositories have been moved to stackforge. Each plugin has a
separate repo and can now be maintained independently of the core product
cycle and without core-team involvement.

Here is the migration map (old location -> new repository):

https://github.com/stackforge/fuel-plugins/tree/master/external_glusterfs ->
https://github.com/stackforge/fuel-plugin-external-glusterfs

https://github.com/stackforge/fuel-plugins/tree/master/external_nfs ->
https://github.com/stackforge/fuel-plugin-external-nfs

https://github.com/stackforge/fuel-plugins/tree/master/ha_fencing ->
https://github.com/stackforge/fuel-plugin-ha-fencing

https://github.com/stackforge/fuel-plugins/tree/master/lbaas ->
https://github.com/stackforge/fuel-plugin-neutron-lbaas

https://github.com/Mirantis/fuel-plugins/tree/master/cinder_netapp ->
https://github.com/stackforge/fuel-plugin-cinder-netapp

https://github.com/Mirantis/fuel-plugins/tree/master/vpnaas ->
https://github.com/stackforge/fuel-plugin-neutron-vpnaas


Now external_glusterfs, external_nfs, ha_fencing, lbaas can be removed from
the https://github.com/stackforge/fuel-plugins:
https://review.openstack.org/#/c/151143/

Repository https://github.com/Mirantis/fuel-plugins/ is not required and
can be removed or made private.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Swift GUI (free or open source)?

2015-01-29 Thread Nicolas Trangez
On Wed, 2015-01-28 at 17:35 -0800, Adam Lawson wrote:
 Okay, cool beans. Incidentally, are there any efforts out there going
 through the motions that you know about (even abandoned)? I'd be willing to
 prop them up a bit if they're low on development resources.

I don't know of any such project(s), but I'd be very interested in this
as well.

Nicolas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Dmitry, Evgeniy

This is exactly what I was talking about when I mentioned serializers for
tasks - taking data from 3rd party sources if user wants. In this case user
will be able to generate some data somewhere and fetch it using this code
that we import.

On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Thank you guys for the quick response.
 Then, if there is no better option, we will follow the second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should use an approach where the task executor reads
 some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But it can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 The 3rd option is about using the rsyncd that we run under xinetd on the
 primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs) then it will fit better than the other options. If we implement the 3rd
 option then we will reinvent the wheel with SSL in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and other services.

 Right now we are generating keys on the master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know, we are in the process of
 describing this process as a
 task.

 There are a couple of options:
 1. Expose keys via an rsync server on the master, in the folder /etc/fuel/keys,
 and then copy them with an rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on the master to /var/lib/astute on the
 target nodes. It will require an additional
 hook in Astute, something like copy_file, which will copy data from a file on
 the master and put it on the node.

 Also there is a 3rd option: generate keys right on the primary controller
 and then distribute them to all other nodes; I guess it will be the
 responsibility of the controller to store the current keys that are valid for
 the cluster. Alex, please provide more details about the 3rd approach.

 Maybe there are more options?
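The copy_file idea in option 2 can be sketched roughly like this; the task layout is illustrative, not the real Astute hook format:

```python
# Hypothetical sketch: read a key generated on the master (e.g. under
# /etc/fuel/keys) and wrap it in a "copy_file"-style task for Astute to
# apply on target nodes.
import os
import tempfile

def make_copy_file_task(src_path, dst_path, node_uids):
    with open(src_path) as f:
        data = f.read()
    return {
        "type": "copy_file",
        "uids": node_uids,
        "parameters": {"path": dst_path, "data": data, "permissions": "0600"},
    }

# demo with a throwaway "key" standing in for /etc/fuel/keys/ceph.key
with tempfile.NamedTemporaryFile("w", suffix=".key", delete=False) as f:
    f.write("FAKE-CEPH-KEY")
    src = f.name

task = make_copy_file_task(src, "/var/lib/astute/ceph.key", ["1", "2"])
print(task["type"], task["parameters"]["path"])
os.unlink(src)
```

Embedding the file content in the task payload avoids exposing the key over rsync, at the cost of pushing it through the orchestration transport.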








-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-29 Thread Kevin Benton
Why would users want to change an active port's IP address anyway?

Re-addressing. It's not common, but the entire reason I brought this up is
because a user was moving an instance to another subnet on the same network
and stranded one of their VMs.

 I worry about setting a default config value to handle a very unusual use
case.

Changing a static lease is something that works on normal networks so I
don't think we should break it in Neutron without a really good reason.

Right now, the big reason to keep a high lease time that I agree with is
that it buys operators lots of dnsmasq downtime without affecting running
clients. To get the best of both worlds we can set DHCP option 58 (a.k.a
dhcp-renewal-time or T1) to 240 seconds. Then the lease time can be left to
be something large like 10 days to allow for tons of DHCP server downtime
without affecting running clients.

There are two issues with this approach. First, some simple dhcp clients
don't honor that dhcp option (e.g. the one with Cirros), but it works with
dhclient so it should work on CentOS, Fedora, etc (I verified it works on
Ubuntu). This isn't a big deal because the worst case is what we have
already (half of the lease time). The second issue is that dnsmasq
hardcodes that option, so a patch would be required to allow it to be
specified in the options file. I am happy to submit the patch required
there so that isn't a big deal either.


If we implement that fix, the remaining issue is Brian's other comment
about too much DHCP traffic. I've been doing some packet captures and the
standard request/reply for a renewal is 2 unicast packets totaling about
725 bytes. Assuming 10,000 VMs renewing every 240 seconds, there will be an
average of 242 kbps background traffic across the entire network. Even at a
density of 50 VMs, that's only 1.2 kbps per compute node. If that's still
too much, then the deployer can adjust the value upwards, but that's hardly
a reason to have a high default.
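Those figures check out; as a quick sanity check of the arithmetic, using the numbers from the paragraph above:

```python
# Sanity check of the renewal-traffic numbers quoted above.
BYTES_PER_RENEWAL = 725        # unicast request + reply
RENEW_INTERVAL_S = 240         # proposed DHCP option 58 (T1) value
TOTAL_VMS = 10_000
VMS_PER_NODE = 50

renewals_per_s = TOTAL_VMS / RENEW_INTERVAL_S
total_kbps = renewals_per_s * BYTES_PER_RENEWAL * 8 / 1000
per_node_kbps = total_kbps * VMS_PER_NODE / TOTAL_VMS

print(f"network-wide: {total_kbps:.0f} kbps")         # ~242 kbps
print(f"per compute node: {per_node_kbps:.1f} kbps")  # ~1.2 kbps
```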

That just leaves the logging problem. Since we require a change to dnsmasq
anyway, perhaps we could also request an option to suppress logs from
renewals? If that's not adequate, I think 2 log entries per vm every 240
seconds is really only a concern for operators with large clouds and they
should have the knowledge required to change a config file anyway. ;-)


On Wed, Jan 28, 2015 at 3:59 PM, Chuck Carlino chuckjcarl...@gmail.com
wrote:

  On 01/28/2015 12:51 PM, Kevin Benton wrote:

 If we are going to ignore the IP address changing use-case, can we just
 make the default infinity? Then nobody ever has to worry about control
 plane outages for existing client. 24 hours is way too long to be useful
 anyway.


 Why would users want to change an active port's IP address anyway?  I can
 see possible use in changing an inactive port's IP address, but that
 wouldn't cause the dhcp issues mentioned here.  I worry about setting a
 default config value to handle a very unusual use case.

 Chuck



  On Jan 28, 2015 12:44 PM, Salvatore Orlando sorla...@nicira.com
 wrote:



 On 28 January 2015 at 20:19, Brian Haley brian.ha...@hp.com wrote:

 Hi Kevin,

 On 01/28/2015 03:50 AM, Kevin Benton wrote:
  Hi,
 
  Approximately a year and a half ago, the default DHCP lease time in
 Neutron was
  increased from 120 seconds to 86400 seconds.[1] This was done with the
 goal of
  reducing DHCP traffic with very little discussion (based on what I can
 see in
  the review and bug report). While it it does indeed reduce DHCP
 traffic, I don't
  think any bug reports were filed showing that a 120 second lease time
 resulted
  in too much traffic or that a jump all of the way to 86400 seconds was
 required
  instead of a value in the same order of magnitude.
 
  Why does this matter?
 
  Neutron ports can be updated with a new IP address from the same
 subnet or
  another subnet on the same network. The port update will result in
 anti-spoofing
  iptables rule changes that immediately stop the old IP address from
 working on
  the host. This means the host is unreachable for 0-12 hours based on
 the current
  default lease time without manual intervention[2] (assuming half-lease
 length
  DHCP renewal attempts).

 So I'll first comment on the problem.  You're essentially pulling the
 rug out
 from under these VMs by changing their IP (and that of their router and
 DHCP/DNS
 server), but you expect they should fail quickly and come right back
 online.  In
 a non-Neutron environment wouldn't the IT person that did this need some
 pretty
 good heat-resistant pants for all the flames from pissed-off users?
 Sure, the
 guy on his laptop will just bounce the connection, but servers (aka VMs)
 should
 stay pretty static.  VMs are servers (and cows according to some).


  I actually expect this kind of operation not to be one Neutron users will
  do very often, mostly because regardless of whether you're in the cloud or
  not, you'd still need to wear those heat-resistant pants.



 The correct 

Re: [openstack-dev] [Keystone] Deprecation of LDAP Assignment (Only Affects Project/Tenant/Role/Assignment info in LDAP)

2015-01-29 Thread Tim Bell
 -Original Message-
 From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
 Sent: 28 January 2015 21:29
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Keystone] Deprecation of LDAP Assignment (Only
 Affects Project/Tenant/Role/Assignment info in LDAP)
 
 To make it perfectly clear: We are NOT removing nor plan to remove the ability
 to use LDAP for users and groups in Keystone.
 
 NOTE: Please be sure to read the whole email AND FAQ before worrying about
 the impact of this deprecation.
 
 
 LDAP is used in Keystone as a backend for both the Identity (Users and groups)
 and assignments (assigning roles to users) backend.
 
 Where did the LDAP Assignment backend come from? We originally had a single
 backend for Identity (users, groups, etc) and Assignment (Projects/Tenants,
 Domains, Roles, and everything else not-users-and-groups). When we did the
 split of Identity and Assignment we needed to support the organizations that
 deployed everything in the LDAP backend. This required both a driver for 
 Identity
 and Assignment.
 
  We are planning on keeping support for identity while deprecating support for
 assignment.  There is only one known organization that this will impact (CERN)
 and they have a transition plan in place already.
 

To confirm, CERN are fine with the plans and will move the project data out of 
LDAP while keeping users and groups in LDAP. We're aiming for this as part of 
our Juno migration.

 Now before anyone starts worrying about this, please read the whole email and
 FAQ at the end. Let me be perfectly clear: LDAP assignment is *not* referring to
 using LDAP for users and groups. That highly popular feature remains in
 Keystone. This change should have no impact on other users of LDAP in
 Keystone.
 
 
 The Problem
 ——
 The SQL Assignment backend has become significantly more feature rich and
 due to the limitations of the basic LDAP schemas available (most LDAP admins
 won't let someone load custom schemas), the LDAP assignment backend has
 languished and fallen further and further behind. It turns out almost no
 deployments use LDAP to house projects/tenants, domains, roles, etc. A lot of
 deployments use LDAP for users and groups.
 
 We explored many options on this front and it boiled down to three:
 
 1. Try and figure out how to wedge all the new features into a sub-optimal
 data store (basic/standard LDAP schemas).
 2. Create a custom schema for LDAP Assignment. This would require convincing
 LDAP admins (or Active Directory admins) to load a custom schema. This also
 was a very large amount of work for a very small deployment base.
 3. Deprecate the LDAP Assignment backend and work with the community to
 support (if desired) an out-of-tree LDAP driver (supported by those who
 need it).
 
 
 Based upon interest, workload, and general maintainability issues, we have
 opted to deprecate the LDAP Assignment backend. What does this mean?
 
 1. This means effective as of Kilo, the LDAP assignment backend is deprecated
 and Frozen.
 1.a. No new code/features will be added to the LDAP Assignment backend.
 1.b. Only exception to 1.a is security-related fixes.
 
 2. The LDAP Assignment backend (the [assignment]/driver config option set to
 keystone.assignment.backends.ldap.Assignment or a subclass) will remain
 in-tree with plans to be removed in the "M"-release.
 2.a. This is subject to support beyond the “M”-release based upon what the
 keystone development team and community require.
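As a hypothetical keystone.conf excerpt of what the deprecation means for deployers (the SQL driver path is my assumption about the usual replacement, not something stated in this email):

```ini
# keystone.conf (illustrative)
[assignment]
# deprecated as of Kilo, frozen, planned for removal in the "M"-release:
# driver = keystone.assignment.backends.ldap.Assignment
driver = keystone.assignment.backends.sql.Assignment

[identity]
# unaffected: users and groups may stay in LDAP
driver = keystone.identity.backends.ldap.Identity
```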
 
 
 FAQ
 ——
 
 Q: Will Keystone still support Users and Groups in LDAP?
 A: Absolutely! There are no plans to deprecate utilizing LDAP (or Active
 Directory) to store users and groups. The Keystone team is committed to
 maintaining and improving the LDAP Identity driver.
 
 Q: Will there be a migration from LDAP Assignment to SQL Assignment for the
 deployers that are still using LDAP Assignment backend?
 A: Each deployment is highly specific to the LDAP data store used and schema
 defined by the organization. The Keystone team has spoken with the deployers
 that have stated they are using LDAP Assignment (and plan to move to SQL
 assignment). Most deployers using LDAP Assignment already have plans on how
 to Migrate. The Keystone team will be happy to provide advice (come chat with
 us in #openstack-keystone on Freenode) but we do not expect to provide a
 canned script to make the migration happen.
 

We can share our scripts if there are others interested but they would be CERN 
specific. However, if there is someone else using this, they could act as a 
base for their migration.

 Q: Why not just keep Assignment in LDAP as an option, but freeze it like the 
 V2
 API?
 A: We explored this option, but with all of the new functionality (including
 identity federation), code fixes, and maintenance issues, it just doesn’t make
 sense from a cloud-interoperability standpoint to maintain a second-class [at
 best barely implementing feature 

Re: [openstack-dev] [Fuel] Plugins repositories moved to stackforge

2015-01-29 Thread Aleksandra Fedorova
Do we have a plan to enable CI for these repositories? Tests to run on
commits, nightly builds, integration with nightly system tests?

Should it be part of Fuel CI or completely independent?


On Thu, Jan 29, 2015 at 11:55 AM, Alexander Kislitsky
akislit...@mirantis.com wrote:
 Hi, guys

 All our plugins repositories are moved to stackforge. Each plugin has
 separate repo and now can be maintained independently from core product
 cycle and without core-team attraction.

 Here is a map of migration:

 https://github.com/stackforge/fuel-plugins/tree/master/external_glusterfs -
 https://github.com/stackforge/fuel-plugin-external-glusterfs

 https://github.com/stackforge/fuel-plugins/tree/master/external_nfs -
 https://github.com/stackforge/fuel-plugin-external-nfs

 https://github.com/stackforge/fuel-plugins/tree/master/ha_fencing -
 https://github.com/stackforge/fuel-plugin-ha-fencing

 https://github.com/stackforge/fuel-plugins/tree/master/lbaas -
 https://github.com/stackforge/fuel-plugin-neutron-lbaas

 https://github.com/Mirantis/fuel-plugins/tree/master/cinder_netapp -
 https://github.com/stackforge/fuel-plugin-cinder-netapp

 https://github.com/Mirantis/fuel-plugins/tree/master/vpnaas -
 https://github.com/stackforge/fuel-plugin-neutron-vpnaas


 Now external_glusterfs, external_nfs, ha_fencing, lbaas can be removed from
 the https://github.com/stackforge/fuel-plugins:
 https://review.openstack.org/#/c/151143/

 Repository https://github.com/Mirantis/fuel-plugins/ is not required and can
 be removed or made private.





-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-29 Thread Vladimir Kuklin
Guys

I would just like to point out that we will certainly need to consume
resources from 3rd-party sources as well. We also want to remove any specific
data manipulation from puppet code. Please consider these use cases too.

On Thu, Jan 29, 2015 at 12:06 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 1. as I mentioned above, we should have an interface, and if interface
 doesn't
 provide required information, you will have to fix it in two places,
 in Nailgun and in external-serializers, instead of a single place
 i.e. in Nailgun,
 another thing if astute.yaml is a bad interface and we should provide
 another
 versioned interface, or add more data into deployment serializer.

 But why add another interface when there is one already (the REST API)? A
 plugin developer
 may query whatever he wants (detailed information about volumes,
 interfaces, master node settings).
 It is the most complete source of information in Fuel and it already needs to
 be protected from incompatible changes.

 If our API is not enough for general use - of course we will need to
 fix it, but I don't quite understand what
 you mean by "fix it in two places". The API provides general information that
 can be consumed by serializers (or any other service/human, actually),
 and if there are issues with that information, the API should be fixed.
 Serializers expect that information in a specific format and make
 additional transformations or computations based on it.

 What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 2. it can be handled in python or any other code (which can be wrapped
 into tasks),
 why should we implement here another entity (a.k.a external
 serializers)?

 Yep, I guess this is true. I thought that we may not want to deliver
 credentials to the target nodes, only a token that can be used
 for a limited time, but...





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins repositories moved to stackforge

2015-01-29 Thread Alexander Kislitsky
CI is a requirement only for certified plugins.

Whether a plugin's CI is part of Fuel CI depends on the plugin. In some cases
we just can't run plugin CI in the Fuel CI environment.

On Thu, Jan 29, 2015 at 1:03 PM, Aleksandra Fedorova afedor...@mirantis.com
 wrote:

 Do we have a plan to enable CI for this repositories? Tests to run on
 commits, nightly builds, integration with nightly system tests?

 Should it be part of Fuel CI or completely independent?


 On Thu, Jan 29, 2015 at 11:55 AM, Alexander Kislitsky
 akislit...@mirantis.com wrote:
  Hi, guys
 
  All our plugins repositories are moved to stackforge. Each plugin has
  separate repo and now can be maintained independently from core product
  cycle and without core-team attraction.
 
  Here is a map of migration:
 
 
 https://github.com/stackforge/fuel-plugins/tree/master/external_glusterfs
 -
  https://github.com/stackforge/fuel-plugin-external-glusterfs
 
  https://github.com/stackforge/fuel-plugins/tree/master/external_nfs -
  https://github.com/stackforge/fuel-plugin-external-nfs
 
  https://github.com/stackforge/fuel-plugins/tree/master/ha_fencing -
  https://github.com/stackforge/fuel-plugin-ha-fencing
 
  https://github.com/stackforge/fuel-plugins/tree/master/lbaas -
  https://github.com/stackforge/fuel-plugin-neutron-lbaas
 
  https://github.com/Mirantis/fuel-plugins/tree/master/cinder_netapp -
  https://github.com/stackforge/fuel-plugin-cinder-netapp
 
  https://github.com/Mirantis/fuel-plugins/tree/master/vpnaas -
  https://github.com/stackforge/fuel-plugin-neutron-vpnaas
 
 
  Now external_glusterfs, external_nfs, ha_fencing, lbaas can be removed
 from
  the https://github.com/stackforge/fuel-plugins:
  https://review.openstack.org/#/c/151143/
 
  Repository https://github.com/Mirantis/fuel-plugins/ is not required
 and can
  be removed or made private.
 
 
 



 --
 Aleksandra Fedorova
 Fuel Devops Engineer
 bookwar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] oslo namespace package releases are done

2015-01-29 Thread Ihar Hrachyshka

On 01/29/2015 01:00 AM, Doug Hellmann wrote:

You will all, I am sure, be relieved to know that the oslo.vmware release today 
was the last library that needed to be released with namespace package changes. 
There are a few more patches to land to the requirements list to update the 
minimum required version of the oslo libs, and of course the work to update 
projects to actually use the new package name is still ongoing.


Those requirements updates are:
- https://review.openstack.org/150529
- https://review.openstack.org/150556

I hope those get attention and get merged in a reasonable time.
/Ihar



Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Evgeniy

This is not about layers - it is about how we get data, and we need to
separate data sources from the way we manipulate them. Sources may be:
1) the Nailgun DB, 2) a user's inventory system, 3) open data, like a list
of Google DNS servers. All this data is then aggregated and transformed
somehow, and after that it is shipped to the deployment layer. That's how I see it.

On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how it's going to help. You can generate keys with one
 task and then upload them with another task, so why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned serializers for
 tasks - taking data from 3rd party sources if user wants. In this case user
 will be able to generate some data somewhere and fetch it using this code
 that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Thank you guys for quick response.
 Then, if there is no better option, we will follow the second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should use the approach where the task executor reads
 some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs), then it will fit better than the other options. If we implement the 3rd
 option, we will reinvent the SSL wheel in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and other services.

 Right now we are generating keys on the master itself and then
 distributing them via mcollective transport to all nodes. As you may
 know, we are in the process of describing this process as a task.

 There are a couple of options:
 1. Expose the keys via an rsync server on the master, in the folder
 /etc/fuel/keys, and then copy them with an rsync task (but it feels
 not very secure).
 2. Copy the keys from /etc/fuel/keys on the master to /var/lib/astute
 on the target nodes. This will require an additional hook in astute,
 something like copy_file, which will copy data from a file on the
 master and put it on the node.

 There is also a 3rd option: generate the keys right on the
 primary-controller and then distribute them to all other nodes, and I
 guess it will be the controller's responsibility to store the current
 keys that are valid for the cluster. Alex, please provide more details
 about the 3rd approach.

 Maybe there are more options?
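To make option 2 concrete, here is a rough sketch of the data side of such a hook. The `copy_file` name and the /etc/fuel/keys and /var/lib/astute paths come from the proposal above; the function and the per-service file layout are purely illustrative assumptions, not the real Astute API:

```ruby
# Illustrative sketch only: builds the (source, destination) pairs a
# hypothetical copy_file hook would ship from the master to a node.
# The per-service "<service>/<service>.key" layout is an assumption.
def key_copy_plan(keys_dir, node_dir, services)
  services.map do |svc|
    [File.join(keys_dir, "#{svc}.key"),
     File.join(node_dir, svc, "#{svc}.key")]
  end
end

plan = key_copy_plan('/etc/fuel/keys', '/var/lib/astute', %w[nova ceph mongo])
plan.each { |src, dst| puts "#{src} -> #{dst}" }
```

The actual transfer (and permissions handling) would live in the hook itself; the point is only that the hook needs a master-side source path and a node-side destination per service.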





















 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




 

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-29 Thread Alexandre Levine

Thomas,

I'm the lead of the team working on it.
The project is in a release-candidate state and the EC2 (non-VPC) part 
is just being finished, so there are no tags or branches yet. Also, we 
were not sure what we should do with it, since we were told that 
it would have a chance of going in as part of nova eventually. So we 
created a spec and blueprint, and only now has the discussion started. 
Whatever the decisions, we're ready to follow. If the first thing to get 
it closer to customers is to create a package (right now it can only be 
installed from sources, obviously) and a tag is required for it, then 
that's what we should do.


So, bottom line: we're not sure ourselves what the best way to move is. Do 
we put a tag (in what format? 1.0? m1? 2015.1.rc1?), or do we create a 
branch?

My thinking now is to just put a tag - something like 1.0.rc1.
What do you think?

Best regards,
  Alex Levine

On 1/29/15 2:13 AM, Thomas Goirand wrote:

On 01/28/2015 08:56 PM, Sean Dague wrote:

There is a new stackforge project which is getting some activity now -
https://github.com/stackforge/ec2-api. The intent and hope is that this is
the path forward for the portion of the community that wants this
feature, and that efforts will be focused there.

I'd be happy to provide a Debian package for this; however, there's not
even a single git tag there. That's not so nice for tracking issues.
Who's working on it?

Also, is this supposed to be branch-less? Or will it follow juno/kilo/l... ?

Cheers,

Thomas Goirand (zigo)







Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Evgeniy L
Vladimir,

It's not clear how it's going to help. You can generate keys with one
task and then upload them with another task, so why do we need
another layer/entity here?

Thanks,

On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned serializers for
 tasks - taking data from 3rd party sources if user wants. In this case user
 will be able to generate some data somewhere and fetch it using this code
 that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Thank you guys for quick response.
 Than, if there is no better option we will follow with second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should user approach when task executor reads
 some data from the file system, ideally Nailgun should push
 all of the required data to Astute.
 But it can be tricky to implement, so I vote for 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I'm vote for second option, cause if we will want to implement some
 unified hierarchy (like Fuel as CA for keys on controllers for different
 env's) then it will fit better than other options. If we implement 3rd
 option then we will reinvent the wheel with SSL in future. Bare rsync as
 storage for private keys sounds pretty uncomfortable for me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know we are in the process of
 making this process described as
 task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys,
 and then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on
 target nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file
 on master and put it on the node.

 Also there is 3rd option to generate keys right on primary-controller
 and then distribute them on all other nodes, and i guess it will be
 responsibility of controller to store current keys that are valid for
 cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?




















 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com





Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Thierry Carrez
Everett Toews wrote:
 A couple of important topics came up as a result of attending the
 Cross Project Meeting. I’ve added both to the agenda for the next
 meeting on Thursday 2015/01/29 at 16:00 UTC.
 
 https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
 The first is the suggestion from ttx to consider using
 openstack-specs [1] for the API guidelines.

To be precise: my suggestion was to keep the api-wg repository for the
various drafting stages, and move to openstack-specs when a guideline is
ready to be recommended and to request wider community comments. Think of
the Draft and RFC stages in the IETF process :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-29 Thread Tomasz Napierala
I hope we don't even consider using Python for that. Let's stay as close as 
possible to the community and use rspec for manifests.

Regards,

 On 29 Jan 2015, at 09:50, Vladimir Kuklin vkuk...@mirantis.com wrote:
 
 Guys, could you point out where I suggested to use python for testing puppet 
 manifests?
 
 On Thu, Jan 29, 2015 at 1:28 AM, Sergii Golovatiuk sgolovat...@mirantis.com 
 wrote:
 We need to write tests the way the Puppet community writes them. Though if a 
 user uses Salt in one stage, it's fine to write tests in Python.
 
 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser
 
 On Wed, Jan 28, 2015 at 11:15 PM, Dmitriy Shulyak dshul...@mirantis.com 
 wrote:
 Guys, is it a crazy idea to write tests for the deployment state on a node in Python?
 It can even be done in unit-test fashion.
 
 I mean, there is no strict dependency on a tool from the Puppet world; what is 
 needed is access to the OS and shell, maybe some utils.
 
  What plans have Fuel Nailgun team for testing the results of deploy steps 
  aka tasks?
 From the nailgun/orchestration point of view, verification of a deployment should 
 be done as another task, or included in the original one.
 
 On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com wrote:
 Moreover, I would suggest using Serverspec, as Beaker already duplicates 
 part of our infrastructure automation.
 
 On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com wrote:
 Guys, I suggest that we create a blueprint on how to integrate Beaker with our 
 existing infrastructure to increase test coverage. My optimistic estimate is 
 that we can see its implementation in 7.0.
 
 On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com wrote:
 My understanding is that serverspec is not going to work well / be
 supported. I think it was discussed on IRC (as I can't find it in my
 email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
 as it's more functional and actively developed.
 
 On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Puppet OpenStack community uses Beaker for acceptance testing. I would
  consider it as option [2]
 
  [2] https://github.com/puppetlabs/beaker
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya bdobre...@mirantis.com
  wrote:
 
  Hello.
 
  We are working on the modularization of Openstack deployment by puppet
  manifests in Fuel library [0].
 
  Each deploy step should be post-verified with some testing framework as
  well.
 
  I believe the framework should:
  * be shipped as a part of Fuel library for puppet manifests instead of
  orchestration or Nailgun backend logic;
  * allow the deployer to verify results right in-place, at the node being
  deployed, for example, with a rake tool;
  * be compatible / easy to integrate with the existing orchestration in
  Fuel and Mistral as an option?
 
  It looks like test resources provided by Serverspec [1] are a good
  option, what do you think?
 
   What plans does the Fuel Nailgun team have for testing the results of deploy
   steps, aka tasks? The spec for the blueprint gives no clear answer.
 
  [0]
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  [1] http://serverspec.org/resource_types.html
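  As a taste of what such in-place checks boil down to, here is a plain-Ruby
  sketch (no Serverspec or Beaker dependency; the port numbers are purely
  illustrative) that parses `ss -ltn`-style output to verify a deploy step
  left its services listening:

```ruby
# Extract the listening ports from `ss -ltn`-style output, so a post-deploy
# check can assert that e.g. an AMQP or DB service actually came up.
def listening_ports(ss_output)
  ss_output.lines
           .select { |l| l.include?('LISTEN') }    # keep only LISTEN lines
           .map    { |l| l[/:(\d+)\s/, 1] }        # first "addr:port" field
           .compact
           .map(&:to_i)
end

# In a real check this would be `ss -ltn` output captured on the node.
sample = <<~OUT
  State  Recv-Q Send-Q Local Address:Port Peer Address:Port
  LISTEN 0      128    0.0.0.0:5672       0.0.0.0:*
  LISTEN 0      128    127.0.0.1:4369     0.0.0.0:*
OUT
puts listening_ports(sample).inspect  # => [5672, 4369]
```

  Serverspec wraps exactly this kind of shell probing in rspec matchers
  (`describe port(5672) { it { should be_listening } }`), which is why it can
  run in-place on the node with only a rake tool.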
 
  --
  Best regards,
  Bogdan Dobrelya,
  Skype #bogdando_at_yahoo.com
  Irc #bogdando
 
 
 
 
 
 
 
 
 --
 Andrew
 Mirantis
 Ceph community
 
 
 
 
 -- 
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com
 
 
 
 -- 
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com
 

Re: [openstack-dev] [Keystone] Deprecation of LDAP Assignment (Only Affects Project/Tenant/Role/Assignment info in LDAP)

2015-01-29 Thread Yuriy Taraday
Hello.

On Wed Jan 28 2015 at 11:30:43 PM Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 LDAP is used in Keystone as a backend for both the Identity (Users and
 groups) and assignments (assigning roles to users) backend.

 Where did the LDAP Assignment backend come from? We originally had a
 single backend for Identity (users, groups, etc) and Assignment
 (Projects/Tenants, Domains, Roles, and everything else
 not-users-and-groups). When we did the split of Identity and Assignment we
 needed to support the organizations that deployed everything in the LDAP
 backend. This required both a driver for Identity and Assignment.

  We are planning on keeping support for identity while deprecating support
 for assignment.  There is only one known organization that this will impact
 (CERN) and they have a transition plan in place already.


I can name (though I actually can't do it here) quite a few of our customers
who do use the LDAP assignment backend. The issue this solves for them is data
replication across data centers. What would be the proposed solution for
them? MySQL multi-master replication (Galera) is feared to perform badly
across DCs.

The Problem
 ——
 The SQL Assignment backend has become significantly more feature rich and
 due to the limitations of the basic LDAP schemas available (most LDAP
 admins wont let someone load custom schemas), the LDAP assignment backend
 has languished and fallen further and further behind. It turns out almost
 no deployments use LDAP to house projects/tenants, domains, roles, etc. A
 lot of deployments use LDAP for users and groups.

 We explored many options on this front and it boiled down to three:

 1. Try and figure out how to wedge all the new features into a sub-optimal
 data store (basic/standard LDAP schemas)
 2. Create a custom schema for LDAP Assignment. This would require
 convincing LDAP admins (or Active Directory admins) to load a custom
 schema. This also was a very large amount of work for a very small
 deployment base.
 3. Deprecate the LDAP Assignment backend and work with the community to
 support (if desired) an out-of-tree LDAP driver (supported by those who
 need it).


I'd like to note that it is in fact possible to make the LDAP backend work even
with the native AD schema, without modifications. The only issue that has been
hanging over the LDAP schema since the very beginning of the LDAP driver is the
usage of groupOfNames for projects and the nesting of other objects under it.
With some fixes we managed to make it work with the stock AD schema, with no
modifications, for Havana, and ported that to Icehouse.

Based upon interest, workload, and general maintainability issues, we have
 opted to deprecate the LDAP Assignment backend. What does this mean?


 1. This means that, effective as of Kilo, the LDAP assignment backend is
 deprecated and frozen.
 1.a. No new code/features will be added to the LDAP Assignment backend.
 1.b. The only exception to 1.a is security-related fixes.

 2. The LDAP Assignment backend (the “[assignment]/driver” config option set to
 “keystone.assignment.backends.ldap.Assignment” or a subclass) will remain
 in-tree with plans to be removed in the “M”-release.
 2.a. This is subject to support beyond the “M”-release based upon what the
 keystone development team and community require.


Is there a possibility that this decision will be amended if someone steps
up to properly maintain the LDAP backend? Developing such a driver out of the
main tree would be really hard, mostly catching up with mainline work.


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-29 Thread Evgeniy L
Hi,

Ok, looks like everybody agrees that we should implement a similar
approach for plugins.
But I'm not sure we should implicitly assume that primary is included
if only 'controller' is given; in that case we won't be able to run some
tasks on the secondary controllers only.

Thanks,

On Thu, Jan 29, 2015 at 1:05 AM, Andrew Woodward xar...@gmail.com wrote:

 On Wed, Jan 28, 2015 at 3:06 AM, Evgeniy L e...@mirantis.com wrote:
  Hi,
 
  +1 for having primary-controller role in terms of deployment.

 Yes, we need to continue to be able to differentiate the difference
 between the first node in a set of roles, and all the others.

 For controllers we have logic around how the services start, and if we
 attempt to create resources. This allows the deployment to run more
 smoothly.
 For mongo the logic is used to setup the primary vs backup data nodes.
 For plugins I would expect to continue to see this kind of need and
 would need to be able to expose a similar logic when adding roles /
 tasks

 I'm however not sure that we need to do this with some kind of role;
 this could simply be some parameter that we then use to set the
 conditional that we already use to apply the primary logic. Alternately,
 this could cause the inclusion of 'primary' or 'first node' tasks that
 would do this specific work without the presence of the conditional
 to run this testing.

  In our tasks, the user should be able to run a specific task on the
 primary-controller.
  But I agree that it can be tricky, because after the cluster is deployed we
  cannot say who is really primary. Is there a case when it's important to
  know who is really primary after deployment is done?

 For mongo, it's important to find out who is currently the primary
 prior to deployment starting (which may not have been the primary that
 the deployment started with). So it may be special in its case.

 For controller, it's irrelevant as long as it's not set to a newly
 added node (a node with a lower node.id will cause this and create
 problems).

  Also, I would like to mention that in plugins a user can currently write
  'roles': ['controller'],
  which means that the task will be applied on 'controller' and
  'primary-controller' nodes.
  A plugin developer can get this information from the astute.yaml file. But
  I'm curious whether we should change this behaviour for plugins (with
  backward compatibility, of course)?
 

 writing roles: ['controller'] should apply to all controllers as
 expected, with the addition of roles: ['primary-controller'] only
 applying to the primary controller.
  Thanks,
 
 
  On Wed, Jan 28, 2015 at 1:07 PM, Aleksandr Didenko 
 adide...@mirantis.com
  wrote:
 
  Hi,
 
  we definitely need such separation on orchestration layer.
 
   Is it possible to have significantly different sets of tasks for
   controller and primary-controller?
 
  Right now we already do different things on primary and secondary
  controllers, but it's all conducted in the same manifest and controlled
 by
  conditionals inside the manifest. So when we split our tasks into
 smaller
  ones, we may want/need to separate them for primary and secondary
  controllers.
 
   I wouldn't differentiate tasks for primary and other controllers.
   Primary-controller logic should be controlled by task itself. That
 will
   allow to have elegant and tiny task framework
 
  Sergii, we still need this separation on the orchestration layer and, as
  you know, our deployment process is based on it. Currently we already
 have
  separate task groups for primary and secondary controller roles. So it
 will
  be up to the task developer how to handle some particular task for
 different
  roles: developer can write 2 different tasks (one for
 'primary-controller'
  and the other one for 'controller'), or he can write the same task for
 both
  groups and handle differences inside the task.
 
  --
  Regards,
  Aleksandr Didenko
 
 
  On Wed, Jan 28, 2015 at 11:25 AM, Dmitriy Shulyak 
 dshul...@mirantis.com
  wrote:
 
  But without this separation on the orchestration layer, we are unable to
  differentiate between nodes.
  What I mean is: we need to run a subset of tasks on the primary first and
  then on all others, and we are using the role as a mapper to node
  identities (and this mechanism has been hardcoded in nailgun for a
  long time).
 
  Lets say we have task A that is mapped to primary-controller and B that
  is mapped to secondary controller, task B requires task A.
  If there is no primary in mapping - we will execute task A on all
  controllers and then task B on all controllers.
 
  And how in such case deployment code will know that it should not
 execute
  commands in task A for secondary controllers and
  in task B on primary ?
 
  On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
 
  Hi,
 
  But with introduction of plugins and granular deployment, in my
 opinion,
  we need to be able
  to specify that task should run specifically on primary, or on
  secondaries. Alternative to this 

Re: [openstack-dev] [Fuel] removing single mode

2015-01-29 Thread Aleksandr Didenko
Mike,

 Any objections / additional suggestions?

no objections from me, and it's already covered by LP 1415116 bug [1]

[1] https://bugs.launchpad.net/fuel/+bug/1415116

On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Folks,
 one of the things we should not forget about - is out Fuel CI gating
 jobs/tests. [1], [2].

 One of them actually runs simple mode. Unfortunately, I don't see
 details about the tests run for [1], [2], but I'm pretty sure it's the same
 set as [3], [4].

 I suggest changing the tests. First of all, we need to get rid of the simple
 runs (since we are deprecating that mode), and second, I'd like us to run
 Ubuntu HA + Neutron VLAN for one of the tests.

 Any objections / additional suggestions?

 [1]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/

 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko svasile...@mirantis.com
  wrote:

 +1 to replace simple to HA with one controller

 /sv





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-29 Thread Vladimir Kuklin
Guys, I just suggested using serverspec, as beaker is a kind of overkill
that duplicates our fuel-devops framework; this way we do not need to mess
with beaker's way of creating environments.

On Thu, Jan 29, 2015 at 2:08 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 I hope, we don’t even consider using python for that. Let’s be as close as
 possible to community and use rspec for manifests.

 Regards,

  On 29 Jan 2015, at 09:50, Vladimir Kuklin vkuk...@mirantis.com wrote:
 
  Guys, could you point out where I suggested to use python for testing
 puppet manifests?
 
  On Thu, Jan 29, 2015 at 1:28 AM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:
  We need to write tests in way how Puppet community writes. Though if
 user uses salt in one stage, it's fine to use tests on python.
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Wed, Jan 28, 2015 at 11:15 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:
  Guys, is it crazy idea to write tests for deployment state on node in
 python?
  It even can be done in unit tests fashion..
 
  I mean there is no strict dependency on tool from puppet world, what is
 needed is access to os and shell, maybe some utils.
 
   What plans have Fuel Nailgun team for testing the results of deploy
 steps aka tasks?
  From nailgun/orchestration point of view - verification of deployment
 should be done as another task, or included in original.
 
  On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Moreover I would suggest to use server spec as beaker is already
 duplicating part of our infrastructure automatization.
 
  On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Guys, I suggest that we create a blueprint how to integrate beaker with
 our existing infrastructure to increase test coverage. My optimistic
 estimate is that we can see its implementation in 7.0.
 
  On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com
 wrote:
  My understanding is serverspec is not going to work well / going to be
  supported. I think it was discusssed on IRC (as i cant find it in my
  email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
  as its more functional and actively developed.
 
  On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
   Hi,
  
   Puppet OpenStack community uses Beaker for acceptance testing. I would
   consider it as option [2]
  
   [2] https://github.com/puppetlabs/beaker
  
   --
   Best regards,
   Sergii Golovatiuk,
   Skype #golserge
   IRC #holser
  
   On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com
   wrote:
  
   Hello.
  
   We are working on the modularization of Openstack deployment by puppet
   manifests in Fuel library [0].
  
   Each deploy step should be post-verified with some testing framework
 as
   well.
  
   I believe the framework should:
   * be shipped as a part of Fuel library for puppet manifests instead of
   orchestration or Nailgun backend logic;
   * allow the deployer to verify results right in-place, at the node
 being
   deployed, for example, with a rake tool;
   * be compatible / easy to integrate with the existing orchestration in
   Fuel and Mistral as an option?
  
   It looks like test resources provided by Serverspec [1] are a good
   option, what do you think?
  
   What plans have Fuel Nailgun team for testing the results of deploy
   steps aka tasks? The spec for blueprint gives no a clear answer.
  
   [0]
  
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
   [1] http://serverspec.org/resource_types.html
  
   --
   Best regards,
   Bogdan Dobrelya,
   Skype #bogdando_at_yahoo.com
   Irc #bogdando
  
  
  
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
  --
  Andrew
  Mirantis
  Ceph community
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
 

Re: [openstack-dev] [nova] The libvirt.cpu_mode and libvirt.cpu_model

2015-01-29 Thread Daniel P. Berrange
On Wed, Jan 28, 2015 at 10:10:29PM +, Jiang, Yunhong wrote:
 Hi, Daniel
   I recently tried the libvirt.cpu_mode and libvirt.cpu_model
 when I was working on cpu_info related code and found bug
 https://bugs.launchpad.net/nova/+bug/1412994 . The reason is that
 with these two flags, all guests launched on the host will use them,
 while when hosts report back their compute capability, they report the
 real hardware capability instead of the capabilities
 masked by these two configs.
 
 I think the key thing is, these two flags are per-instance properties
 instead of per-host properties.

No, these are intended to be per host properties. The idea is that all
hosts should be configured with a consistent CPU model so you can live
migrate between all hosts without hitting compatibility problems. There
is however currently a bug in the live migration CPU compat checking
but I have a fix for that in progress.
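
For illustration, a per-host configuration consistent with what Daniel describes might look like the following in nova.conf. The model name is only an example; the idea is to pick the lowest common denominator of the fleet:

```ini
[libvirt]
# Pin every host in the cloud to the same named model so that guests
# can live-migrate anywhere without CPU-compatibility failures.
# "SandyBridge" is an illustrative value; choose the oldest CPU
# generation actually present in your fleet.
cpu_mode = custom
cpu_model = SandyBridge
```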

   How about remove these two config items? And I don't think we
 should present cpu_mode/model option to end user, instead, we should
 only expose the feature request like disable/force some cpu_features,
 and the libvirt driver select the cpu_mode/model based on user's
 feature requirement.

I don't see any reason to remove these config items

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-01-29 Thread Evgeniy L
Hi,

+1 to Dmitriy's comment.
We can spend several months polishing the spec; will that help us
release the feature in time? I don't think so.
Also, with your suggestion we'll get a lot of patches over two thousand
lines of code after the spec is merged. Huge patches reduce quality,
because they're too hard to review, and such patches are also much harder
to get merged.
I think the spec should be a synchronization point, where different
teams can discuss details and make sure that everything is correct.
The spec should represent the current state of the code which is
merged and which is going to be merged.

Thanks,

On Thu, Jan 29, 2015 at 1:03 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Andrew,
 What should be sorted out? It is unavoidable that people will comment and
 ask questions during the development cycle.
 I am not sure that merging the spec as early as possible, and then adding
 comments and different fixes, is a good strategy.
 On the other hand, we need to eliminate risks... but how does merging the
 spec help?

 On Wed, Jan 28, 2015 at 8:49 PM, Andrew Woodward xar...@gmail.com wrote:

 Vova,

 Its great to see so much progress on this, however it appears that we
 have started merging code prior to the spec landing [0] lets get it
 sorted ASAP.

 [0] https://review.openstack.org/#/c/113491/

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for granular
 deployment
  feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and operations
  architecture as well as it is going to significantly improve our
 testing and
  engineering process.
 
  Starting from now we can start merging code for:
 
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task - only
 the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for granular
 deployment
  feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and operations
  architecture as well as it is going to significantly improve our
 testing and
  engineering process.
 
  Starting from now we can start merging code for:
 
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task - only
 the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 

Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2015-01-29 Thread Tomasz Napierala
Guys,

We have requests for this improvement. It will help with huge environments; we 
are talking about 5 GiB of logs.
Is it on the agenda?

Regards,


 On 22 Dec 2014, at 07:28, Bartlomiej Piotrowski bpiotrow...@mirantis.com 
 wrote:
 
 FYI, xz with multithreading support (5.2 release) has been marked as stable 
 yesterday.
 
 Regards,
 Bartłomiej Piotrowski
 
 On Mon, Nov 24, 2014 at 12:32 PM, Bartłomiej Piotrowski 
 bpiotrow...@mirantis.com wrote:
 On 24 Nov 2014, at 12:25, Matthew Mosesohn mmoses...@mirantis.com wrote:
  I did this exercise over many iterations during Docker container
  packing and found that as long as the data is under 1gb, it's going to
  compress really well with xz. Over 1gb and lrzip looks more attractive
  (but only on high memory systems). In reality, we're looking at log
  footprints from OpenStack environments on the order of 500mb to 2gb.
 
  xz is very slow on single-core systems with 1.5gb of memory, but it's
  quite a bit faster if you run it on a more powerful system. I've found
  level 4 compression to be the best compromise that works well enough
  that it's still far better than gzip. If increasing compression time
  by 3-5x is too much for you guys, why not just go to bzip? You'll
  still improve compression but be able to cut back on time.
 
  Best Regards,
  Matthew Mosesohn
 
 Alpha release of xz supports multithreading via -T (or —threads) parameter.
 We could also use pbzip2 instead of regular bzip to cut some time on 
 multi-core
 systems.
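
As a rough sketch of the size trade-off discussed in this thread, Python's standard library can compare gzip, bzip2, and xz (LZMA) on the same data. The input below is synthetic, so the exact ratios will differ from real diagnostic-snapshot logs:

```python
import bz2
import gzip
import lzma

# Synthetic "log" data: repetitive lines, roughly like real OpenStack logs.
data = (b"2015-01-29 12:00:00.000 INFO nova.compute.manager "
        b"[req-1] instance launched successfully\n") * 5000

# Compare compressed sizes at mid-level presets; xz level 4 is the
# compromise Matthew mentions above.
sizes = {
    "gzip-6": len(gzip.compress(data, compresslevel=6)),
    "bzip2-6": len(bz2.compress(data, compresslevel=6)),
    "xz-4": len(lzma.compress(data, preset=4)),
}
for name, size in sorted(sizes.items(), key=lambda kv: kv[1]):
    print(name, size, "bytes")
```

On real multi-gigabyte snapshots the ratios and run times will differ; the multithreaded `xz -T` and `pbzip2` mentioned above trade memory for wall-clock time.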
 
 Regards,
 Bartłomiej Piotrowski
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Ryan Brown
On 01/29/2015 10:20 AM, Sean Dague wrote:
 On 01/29/2015 07:17 AM, Anne Gentle wrote:


 On Thu, Jan 29, 2015 at 4:10 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:

 Everett Toews wrote:
  A couple of important topics came up as a result of attending the
  Cross Project Meeting. I’ve added both to the agenda for the next
  meeting on Thursday 2015/01/29 at 16:00 UTC.
 
  https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
  The first is the suggestion from ttx to consider using
  openstack-specs [1] for the API guidelines.

 Precision: my suggestion was to keep the api-wg repository for the
 various drafting stages, and move to openstack-specs when it's ready to
 be recommended and request wider community comments. Think Draft and
 RFC stages in the IETF process :)


 Oh, thanks for clarifying, I hadn't understood it that way. 

 To me, it seems more efficient to get votes and iterate in one repo
 rather than going through two iterations and two review groups. What do
 others think?
 Thanks,
 Anne
 
 Honestly, I'm more a fan of the one repository approach. Jumping
 repositories means the history gets lost, and you have to restart a
 bunch of conversations. That happened in the logging jump from nova ->
 openstack-specs, which probably created 3-4 months additional delay in
 the process.
 
   -Sean
 

+1 for single repo, commits aren't as important for specs as unifying
discussion around them.

I think a spec being merged/unmerged is a good enough distinction for
the draft status of a spec and wouldn't be improved by having a
draft/final repo split.

___
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Weekly status report (Thu, Jan 22)

2015-01-29 Thread Matthias Runge
Hello,

a bit early, but here's my status report for the past week

RED:
- none

AMBER:
- still preparing my workshop for devconf in Brno
- prepared a patch to add OVA format to horizon
https://review.openstack.org/150751
- continued JavaScript fun
- cleaned up blueprints upstream
- bugzilla housekeeping

GREEN:
- lots of reviews upstream
- Feature for Fedora 22 was approved:
https://fedoraproject.org/wiki/Changes/Django18
- debugged Cisco's Dashboard unavailable issue
- changed Django package due to several guideline changes
  (bash completion change, python-sphinx-latex was added and broke
python-django builds)
- packaged and got python-oslo-concurrency approved

Will be traveling tomorrow to FOSDEM.

See you there and/or have a great weekend,
Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Doug Hellmann


On Wed, Jan 28, 2015, at 07:57 PM, Johannes Erdfelt wrote:
 On Wed, Jan 28, 2015, Vishvananda Ishaya vishvana...@gmail.com wrote:
  On Jan 28, 2015, at 4:03 PM, Doug Hellmann d...@doughellmann.com wrote:
   I hope someone who was around at the time will chime in with more detail
   about why green threads were deemed better than regular threads, and I
   look forward to seeing your analysis of a change. There is already a
   thread-based executor in oslo.messaging, which *should* be usable in the
   applications when you remove eventlet.
  
  Threading was never really considered. The initial version tried to get a
  working api server up as quickly as possible and it used tornado. This was
  quickly replaced with twisted since tornado was really new at the time and
  had bugs. We then switched to eventlet when swift joined the party so we
  didn’t have multiple concurrency stacks.
  
  By the time someone came up with the idea of using different concurrency
  models for the api server and the backend services, we were already pretty
  far down the greenthread path.
 
 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:
 
 https://blueprints.launchpad.net/nova/+spec/unified-service-architecture
 
 https://wiki.openstack.org/wiki/UnifiedServiceArchitecture

Thanks Vish & Johannes, that's good background. There are a lot of
factors to consider, and having a full picture of the history of how we
got to where we are will help us make informed decisions about any
proposed changes.

Doug

 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Weekly status report (Thu, Jan 22)

2015-01-29 Thread Matthias Runge
On 29/01/15 13:20, Matthias Runge wrote:
 Hello,
 
 a bit early, but here's my status report for the past week
Yupp, please ignore the noise here.

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] CLI support in ML2 driver

2015-01-29 Thread Mathieu Rohon
Hi,

you can develop your own service plugin which extends the current Neutron
API and transforms Neutron API calls into ODL NB API calls.

You can take example from the GBP service plugin to understand how to route
Neutron API calls to an independent service plugin.

regards,

Mathieu


On Thu, Jan 29, 2015 at 9:42 AM, Naresh Kumar 
naresh.saa...@lnttechservices.com wrote:

  Hi,

  I have more expertise in OpenDaylight than OpenStack. I have created a CLI 
  application in OpenDaylight which uses the AdventNetCLI library in the SB that will 
  create/delete services in my non-OpenFlow carrier ethernet switch through 
  RESTCONF (it's working!). I want this app to be called from the Neutron server 
  of OpenStack, and that REST call should be routed to my ODL NB so that my 
  controller can take care of the operation. Does anyone have ideas on how this 
  can be implemented?

 Thanks,
 Naresh.

  *LT Technology Services Ltd*

 www.LntTechservices.com http://www.lnttechservices.com/

 This Email may contain confidential or privileged information for the
 intended recipient (s). If you are not the intended recipient, please do
 not use or disseminate the information, notify the sender and delete it
 from your system.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-29 Thread Adam Lawson
Hi Anne; this was more or less directed in Monty's direction and/or those
in agreement with his position. Sorry for the confusion, I probably should
have been a bit more clear. ; )

Mahalo,
Adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Thu, Jan 29, 2015 at 9:05 AM, Anita Kuno ante...@anteaya.info wrote:

 On 01/28/2015 07:24 PM, Adam Lawson wrote:
  I'm short on time so I apologize for my candor since I need to get
 straight
  to the point.
 
  I love reading the various opinions, and my team is immensely excited with
  how OpenStack is maturing. But this is lunacy.
 
  I looked at the patch being worked [1] to change how things are done and
  have more questions than I can count.
 
  So I'll start with the obvious ones:
 
 - Are you proposing this change as a Foundation Individual Board
 Director tasked with representing the interests of all Individual
 Members
 of the OpenStack community or as a member of the TC? Context matters
 because your two hats are presenting a conflict of interest in my
 opinion.
 One cannot propose a change that gives them greater influence while
 suggesting they're doing it for everyone's benefit.
 How can Jim be proposing a change as a Foundation Individual Board
 Director? He isn't a member of the Board.

 http://www.openstack.org/foundation/board-of-directors/

 He is a member of the Technical Committee.

 http://www.openstack.org/foundation/tech-committee/

 Keep in mind that the repository that he offered the change to, the
 openstack/governance repository, welcomes patches from anyone who takes
 the time to learn our developer workflow and offers a patch to the
 repository using Gerrit.

 http://docs.openstack.org/infra/manual/developers.html

 Thanks,
 Anita.
 - How is fun remotely relevant when discussing process improvement?
 I'm really hoping we aren't developing processes based on how fun a
 process
 is or isn't.
 - Why is this discussion being limited to the development community
 only? Where's the openness in that?
 - What exactly is the problem we're attempting to fix?
 - Does the current process not work?
 - Is there group of individuals being disenfranchised with our current
 process somehow that suggests the process should limit participation
 differently?
 
  And some questions around the participation proposals:
 
 - Why is the election process change proposing to limit participation
 to
 ATC members only?
 There are numerous enthusiasts within our community that don't fall
 within the ATC category such as marketing (as some have brought up),
 corporate sponsors (where I live) and I'm sure there are many more.
 - Is taking back the process a hint that the current process is being
 mishandled or restores a sense of process control?
 - Is the presumption that the election process belongs to someone or
 some group?
 That strikes me as an incredibly subjective assertion to make.
 
  opinionThis is one reason I feel so strongly folks should not be
 allowed
  to hold more than one position of leadership within the OpenStack
 project.
  Obfuscated context coupled with increased influence rarely produces
  excellence on either front. But that's me./opinion
 
  Mahalo,
  Adam
 
  [1] https://review.openstack.org/#/c/150604/
 
 
  *Adam Lawson*
 
  AQORN, Inc.
  427 North Tatnall Street
  Ste. 58461
  Wilmington, Delaware 19801-2230
  Toll-free: (844) 4-AQORN-NOW ext. 101
  International: +1 302-387-4660
  Direct: +1 916-246-2072
 
 
  On Wed, Jan 28, 2015 at 10:23 AM, Anita Kuno ante...@anteaya.info
 wrote:
 
  On 01/28/2015 11:36 AM, Thierry Carrez wrote:
  Monty Taylor wrote:
  What if, to reduce stress on you, we make this 100% mechanical:
 
  - Anyone can propose a name
  - Election officials verify that the name matches the criteria
  -  * note: how do we approve additive exceptions without tons of
 effort
 
  Devil is in the details, as reading some of my hatemail would tell you.
  For example in the past I rejected Foo which was proposed because
  there was a Foo Bar landmark in the vicinity. The rules would have to
  be pretty detailed to be entirely objective.
  Naming isn't objective. That is both the value and the hardship.
 
  - Marketing team provides feedback to the election officials on names
  they find image-wise problematic
  - The poll is created with the roster of all foundation members
  containing all of the choices, but with the marketing issues clearly
  labeled, like this:
 
  * Love
  * Lumber
  Ohh, it gives me a thrill to see a name that means something even
  remotely Canadian. (not advocating it be added to this round)
  * Lettuce
  * Lemming - marketing issues identified
 
  - post poll - foundation staff run trademarks checks on the winners in
  order until a legally acceptable winner is 

Re: [openstack-dev] [nova] The libvirt.cpu_mode and libvirt.cpu_model

2015-01-29 Thread Jiang, Yunhong

 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: Thursday, January 29, 2015 2:34 AM
 To: Jiang, Yunhong
 Cc: openstack-dev@lists.openstack.org
 Subject: Re: [nova] The libvirt.cpu_mode and libvirt.cpu_model
 
 On Wed, Jan 28, 2015 at 10:10:29PM +, Jiang, Yunhong wrote:
  Hi, Daniel
  I recently tried the libvirt.cpu_mode and libvirt.cpu_model
  when I was working on cpu_info related code and found bug
  https://bugs.launchpad.net/nova/+bug/1412994 . The reason is that
  with these two flags, all guests launched on the host will use them,
  while when hosts report back their compute capability, they report the
  real hardware capability instead of the capabilities
  masked by these two configs.
 
  I think the key thing is, these two flags are per-instance properties
  instead of per-host properties.
 
 No, these are intended to be per host properties. The idea is that all
 hosts should be configured with a consistent CPU model so you can live
  migrate between all hosts without hitting compatibility problems. There
 is however currently a bug in the live migration CPU compat checking
 but I have a fix for that in progress.

Although configuring all hosts with a consistent CPU model avoids the nova live 
migration issue, doesn't it also mean the cloud can only present features based 
on the oldest machine in the cloud, so that newer CPU features like SSE4.1 etc. 
can't be utilized? 

Also, if we want to expose host features to guests (please check 
https://bugs.launchpad.net/nova/+bug/1412930 for a related issue), we have to 
use per-instance cpu_model configuration, which will anyway break 
all-host live migration.

For your live migration fix, is it https://review.openstack.org/#/c/53746/ ? In 
your patch, check_can_live_migrate_destination() will use the guest CPU 
info to compare with the target host CPU info, instead of comparing the 
source/target host cpu_model. 

Thanks
--jyh

 
  How about remove these two config items? And I don't think we
  should present cpu_mode/model option to end user, instead, we should
  only expose the feature request like disable/force some cpu_features,
  and the libvirt driver select the cpu_mode/model based on user's
  feature requirement.
 
 I don't see any reason to remove these config items
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Sean Dague
Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.

Today there is a giant hodgepodge - see:

https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424

https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492

Especially blocks like this:

if 'cloudServersFault' in resp_body:
    message = resp_body['cloudServersFault']['message']
elif 'computeFault' in resp_body:
    message = resp_body['computeFault']['message']
elif 'error' in resp_body:
    message = resp_body['error']['message']
elif 'message' in resp_body:
    message = resp_body['message']

Standardization here from the API WG would be really great.
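
To make the contrast concrete, here is a small, hypothetical Python sketch. The field names in the "standardized" payload follow the example above but are an assumption, not an agreed format; the legacy parser mirrors the tempest-lib hodgepodge:

```python
import json

# Hypothetical standardized payload (field names assumed from the example
# above; this is not an agreed OpenStack format).
def parse_fault(body):
    """Return (fault, message) from a standardized error body."""
    payload = json.loads(body)
    return payload["fault"], payload["message"]

# Today's hodgepodge: the key holding the message differs per service,
# mirroring the tempest-lib code linked above.
def parse_legacy(body):
    resp_body = json.loads(body)
    for key in ("cloudServersFault", "computeFault", "error"):
        if key in resp_body:
            return resp_body[key]["message"]
    return resp_body.get("message")

standard = json.dumps({
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this instance type.",
})
legacy = json.dumps({"computeFault": {"message": "Feature unsupported.", "code": 400}})

fault, message = parse_fault(standard)
print(fault)
print(parse_legacy(legacy))
```

With a standard payload the client can dispatch on `fault` programmatically; with the legacy shape it can only guess at keys and then parse prose.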

-Sean

On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
 Hi Anne,
 
 I think Eugeniya refers to a problem, that we can't really distinguish
 between two different  badRequest (400) errors (e.g. wrong security
 group name vs wrong key pair name when starting an instance), unless
 we parse the error description, which might be error prone.
 
 Thanks,
 Roman
 
 On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
 annegen...@justwriteclick.com wrote:


 On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
 ekudryash...@mirantis.com wrote:

 Hi, all


 OpenStack APIs interact with each other and with external systems partly by
 passing HTTP errors. The only valuable difference between types of
 exceptions is the HTTP code, but current codes are generalized, so an external
 system can’t distinguish what actually happened.


 As an example two different failures below differs only by error message:


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 189

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf

 Content-Type: application/json


 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name": "bar"}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 118

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0

 Date: Fri, 23 Jan 2015 10:43:33 GMT


 {"badRequest": {"message": "Security group bar not found for project
 790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}


 and


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 192

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71

 Content-Type: application/json


 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
 "default"}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 70

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5

 Date: Fri, 23 Jan 2015 10:39:43 GMT


 {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}


 The former specifies an incorrect security group name, and the latter an
 incorrect keypair name. The problem is that just looking at the
 response body and HTTP response code, an external system can’t understand
 what exactly went wrong. And parsing error messages here is not the way
 we’d like to solve this problem.


 For the Compute API v 2 we have the shortened Error Code in the
 documentation at
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses

 such as:

 Error response codes
 computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
 unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
 itemNotFound (404), buildInProgress (409)

 Thanks to a recent update (well, last fall) to our build tool for docs.

 What we don't have is a table in the docs saying computeFault has this
 longer Description -- is that what you are asking for, for all OpenStack
 APIs?

 

Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-29 Thread Joshua Harlow

Just something to note; anvil has never had issues with this (afaik).

It seems fairly easy to have a mapping/algorithm/other... that is used 
imho (and this is just what anvil has); I don't recall that the number 
of problems with weird names has been any real issue when building 
rpms... It's sometimes an annoyance but meh, make better tooling in that 
case?


Some of the mapping + other tweaks @ 
https://github.com/stackforge/anvil/blob/master/conf/distros/redhat.yaml#L9


Also the code at:

https://github.com/stackforge/anvil/blob/master/tools/py2rpm#L161 (which 
does the brute work of most of the openstack *non-core* project package 
building). The *core* (nova, glance...) project spec files are treated 
specially @ 
https://github.com/stackforge/anvil/tree/master/conf/templates/packaging/specs 
(these are cheetah templates that are expanded; allowing for 
conditionals like '#if $newer_than_eq('2014.2')' to work...)


Food for thought,

-Josh

Thomas Goirand wrote:

On 01/24/2015 02:01 AM, Doug Hellmann wrote:


On Fri, Jan 23, 2015, at 07:48 PM, Thomas Goirand wrote:

Hi,

I've just noticed that oslo.log made it to global-requirements.txt 9
days ago. How come are we still adding some name.spaced oslo libs?
Wasn't the outcome of the discussion in Paris that we shouldn't do that
anymore, and that we should be using oslo-log instead of oslo.log?

Is there something that I am missing here?

Cheers,

Thomas Goirand (zigo)

The naming is described in the spec:
http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html

tl;dr - We did it this way to make life easier for the packagers.

Doug


Hi Doug,

Sorry for the late reply.

Well, you're not making the life of *package maintainers* easier;
in fact it's the opposite, I'm afraid.

The Debian policy is that Python module packages should be named after
the import statement in a source file. Meaning that if we do:

import oslo_db

then the package should be called python-oslo-db. This means that I will
have to rename all the Debian packages to remove the dot and put a dash
instead. But by doing so, if OpenStack upstream is keeping the old
naming convention, then all the requirements.txt will be wrong (by
wrong, I mean from my perspective as a package maintainer), and the
automated dependency calculation of dh_python2 will put package names
with dots instead of dashes.

So, what is going to happen, is that I'll have to, for each and every
package, build a dictionary of translations in debian/pydist-overrides.
For example:

# cat debian/pydist-overrides
oslo.db python-oslo-db

This is very error prone, and I may miss lots of dependencies this way,
leading to the packages having wrong dependencies. I have a way to avoid
the issue, which would be to add a Provides: python-oslo.db in the
python-oslo-db package, but this should only be considered as a
transition thing.

Also, as a side note, but it may be interesting for some: the package
python-oslo-db should have Breaks: python-oslo.db (<< OLD_VERSION) and
Replaces: python-oslo.db (<< OLD_VERSION), as otherwise upgrades will simply fail
(because 2 different packages can't contain the same files on the
filesystem).

So if you really want to make our lives easier, please do the full
migration, and move completely away from dots.

Also, I'd like to tell you that I feel very sorry that I couldn't attend
the session about the oslo namespace in Paris. I was taken by my company
to a culture-building session for the whole afternoon. After reading the
above, I feel sorry that I didn't attend the namespace session instead. :(

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Ihar Hrachyshka

On 01/29/2015 05:57 PM, Roman Podoliaka wrote:

Jeremy,

I don't have exact numbers, so yeah, it's just an assumption based on
looking at the nova-api/scheduler logs with connection_debug set to
100.

But that's a good point you are making here: it will be interesting to
see what difference enabling of PyMySQL will make for tempest/rally
workloads, rather than just running synthetic tests. I'm going to give
it a try on my devstack installation.


Yeah, also realistic testing would mean a decent level of parallelism in 
requests. If you compare serial scenarios, then yes, of course 
mysqldb-python will be quicker.


/Ihar



Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Joe Gordon
On Thu, Jan 29, 2015 at 9:52 AM, Sean Dague s...@dague.net wrote:

 So, honestly, yes.

 For a library to release safely it must:

 * have stable-compat jobs running (this was the issue with barbican client)
 * if it has a stable/juno branch it must be pinned in stable/juno (this
 was the issue on most of the oslo libs)


We use the clients for two very different things.

1. As a Python library to communicate between services.
2. As a command-line tool that can talk to all supported versions of our APIs
(stable/icehouse, stable/juno, and master).

With our current testing setup, we try using the latest release on stable
branches for both 1 and 2. That means clients need overlapping
dependencies with the stable branches. I don't think this is a reasonable
requirement, and I am not sure what we gain from it.

Instead I propose we pin the client version for 1 and use the latest
release for 2.  We should be able to do this easily by putting all command
line tools inside of venvs using pipsi[0]. This way we continue to test the
clients API compatibility with stable branches without requiring dependency
compatibility (I roped Dean Troyer into working on this at the nova mid
cycle).

Once this is done, we should be able to go ahead and pin all explicit
dependencies on stable branches [1].

[0] https://pypi.python.org/pypi/pipsi/0.8
[1] https://review.openstack.org/#/c/147451/



 -Sean

 On 01/29/2015 12:07 PM, Kyle Mestery wrote:
  Maybe we should defer all client releases until we know for sure if each
  of them are ticking timebombs.
 
  On Thu, Jan 29, 2015 at 11:00 AM, Morgan Fainberg
  morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote:
 
  Good question! I was planning a keystone client release very soon,
  but will hold off if it will break everything.
 
  --Morgan
 
 
  On Thursday, January 29, 2015, Thierry Carrez thie...@openstack.org
  mailto:thie...@openstack.org wrote:
 
  Sean Dague wrote:
   On 01/27/2015 05:21 PM, Sean Dague wrote:
   On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
   Hi openstack-dev,
  
   The barbican team would like to announce the release of
   python-barbicanclient 3.0.2.  This is a minor release that
  fixes a bug
   in the pbr versioning that was preventing the client from
  working correctly.
  
   The release is available on PyPI
  
   https://pypi.python.org/pypi/python-barbicanclient/3.0.2
  
   Which just broke everything, because it creates incompatible
   requirements in stable/juno with cinder. :(
  
   Here is the footnote -
  
 
 http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429
 
  This seems to have been caused by this requirements sync:
 
 
 http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3
 
  but then the same requirements sync happened in all other
 clients:
 
 
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c
 
  Does that mean that all the clients are time bombs that will
 break
  stable/juno when their next release is tagged ?
 
  --
  Thierry Carrez (ttx)
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-29 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-01-29 08:41:36 -0800:
 I got a question today about creating keystone users/roles/tenants in 
 Heat templates. We currently support creating users via the 
 AWS::IAM::User resource, but we don't have a native equivalent.
 
 IIUC keystone now allows you to add users to a domain that is otherwise 
 backed by a read-only backend (i.e. LDAP). If this means that it's now 
 possible to configure a cloud so that one need not be an admin to create 
 users then I think it would be a really useful thing to expose in Heat. 
 Does anyone know if that's the case?
 

I think you got that a little backward. Keystone lets you have domains
that are read/write, and domains that are read-only. So you can have
the real users in LDAP and then give a different class of user their
own keystone-only domain that they can control.

That is a bit orthogonal to the real functionality gap, which I think
is a corner case but worth exploring. Being able to create a user in a
domain that the user provides credentials for is a useful thing. A user
may want to deploy their own instance control mechanism (like standalone
Heat!) for instance, and having a limited-access user for this created
by a domain admin with credentials that are only ever stored in Heat
seems like a win. Some care is needed to make sure the role can't just
'stack show' on Heat and grab the admin creds, but that seems like
something that would go in a deployer guide, something like "Make sure
domain admins know not to give delegated users the 'heat-user' role."

 I think roles and tenants are likely to remain admin-only, but we have 
 precedent for including resources like that in /contrib... this seems 
 like it would be comparably useful.
 

I feel like admin-only things will matter as soon as real multi-cloud
support exists in Heat. What I really want is to have a Heat in my
management cloud that reaches into my managed cloud when necessary.
Right now in TripleO we have to keep anything admin-only out of the
Heat templates and run the utilities from os-cloud-config somewhere
because we don't want admin credentials (even just trusts) in Heat. But
if we could use the deployment (under) cloud's Heat to reach into the
user (over) cloud to add users, roles, networks, etc., then that would
maintain the separation our security auditors desire.



Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Sean Dague
So, honestly, yes.

For a library to release safely it must:

* have stable-compat jobs running (this was the issue with barbican client)
* if it has a stable/juno branch it must be pinned in stable/juno (this
was the issue on most of the oslo libs)

-Sean

On 01/29/2015 12:07 PM, Kyle Mestery wrote:
 Maybe we should defer all client releases until we know for sure if each
 of them are ticking timebombs.
 
 On Thu, Jan 29, 2015 at 11:00 AM, Morgan Fainberg
 morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote:
 
 Good question! I was planning a keystone client release very soon,
 but will hold off if it will break everything.
 
 --Morgan
 
 
 On Thursday, January 29, 2015, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
 Sean Dague wrote:
  On 01/27/2015 05:21 PM, Sean Dague wrote:
  On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
  Hi openstack-dev,
 
  The barbican team would like to announce the release of
  python-barbicanclient 3.0.2.  This is a minor release that
 fixes a bug
  in the pbr versioning that was preventing the client from
 working correctly.
 
  The release is available on PyPI
 
  https://pypi.python.org/pypi/python-barbicanclient/3.0.2
 
  Which just broke everything, because it creates incompatible
  requirements in stable/juno with cinder. :(
 
  Here is the footnote -
 
 
 http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429
 
 This seems to have been caused by this requirements sync:
 
 
 http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3
 
 but then the same requirements sync happened in all other clients:
 
 
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c
 
 Does that mean that all the clients are time bombs that will break
 stable/juno when their next release is tagged ?
 
 --
 Thierry Carrez (ttx)
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread John Dickinson
If you're going to make up your own extensions[1] to HTTP, why don't you use 
ones that are already used?

http://support.microsoft.com/kb/943891



[1] ok, what's proposed isn't technically an extension, it's response body 
context for the response code. But response bodies are hard to modify when 
you're dealing with APIs that aren't control-plane APIs.

--John



 On Jan 29, 2015, at 9:41 AM, Sean Dague s...@dague.net wrote:
 
 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.
 
 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.
 
 Having a standard json error payload would be really nice.
 
 {
     "fault": "ComputeFeatureUnsupportedOnInstanceType",
     "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }
 
 That would let us surface more specific errors.
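For instance, a client could then dispatch on the fault field instead of pattern-matching prose. A minimal sketch, with the caveat that the fault names and retry policy below are made-up examples, not an agreed OpenStack error catalog:

```python
import json

# Sketch of client-side dispatch on a structured "fault" field. The fault
# names and the retry set are hypothetical examples only.
RETRYABLE = {"ServiceBusy", "QuotaTemporarilyExceeded"}

def classify(body):
    """Turn a standardized error payload into a client action."""
    payload = json.loads(body)
    fault = payload.get("fault", "Unknown")
    if fault in RETRYABLE:
        return "retry"
    if fault == "ComputeFeatureUnsupportedOnInstanceType":
        return "change-flavor"
    return "fail"

resp = '{"fault": "ComputeFeatureUnsupportedOnInstanceType", "message": "..."}'
print(classify(resp))  # change-flavor
```

No string parsing of human-readable messages, and the message text stays free to change for operators and end users.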
 
 Today there is a giant hodgepodge - see:
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492
 
 Especially blocks like this:
 
         if 'cloudServersFault' in resp_body:
             message = resp_body['cloudServersFault']['message']
         elif 'computeFault' in resp_body:
             message = resp_body['computeFault']['message']
         elif 'error' in resp_body:
             message = resp_body['error']['message']
         elif 'message' in resp_body:
             message = resp_body['message']
 
 Standardization here from the API WG would be really great.
 
   -Sean
 
 On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
 Hi Anne,
 
 I think Eugeniya refers to a problem, that we can't really distinguish
 between two different  badRequest (400) errors (e.g. wrong security
 group name vs wrong key pair name when starting an instance), unless
 we parse the error description, which might be error prone.
 
 Thanks,
 Roman
 
 On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
 annegen...@justwriteclick.com wrote:
 
 
 On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
 ekudryash...@mirantis.com wrote:
 
 Hi, all
 
 
 OpenStack APIs interact with each other and with external systems partly by
 passing HTTP errors. The only valuable difference between types of
 exceptions is the HTTP code, but the current codes are generalized, so an
 external system can't distinguish what actually happened.
 
 
 As an example, the two failures below differ only by error message:
 
 
 request:
 
 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
 Host: 192.168.122.195:8774
 
 X-Auth-Project-Id: demo
 
 Accept-Encoding: gzip, deflate, compress
 
 Content-Length: 189
 
 Accept: application/json
 
 User-Agent: python-novaclient
 
 X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
 
 Content-Type: application/json
 
 
 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
 "bar"}]}}
 
 response:
 
HTTP/1.1 400 Bad Request
 
 Content-Length: 118
 
 Content-Type: application/json; charset=UTF-8
 
 X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
 
 Date: Fri, 23 Jan 2015 10:43:33 GMT
 
 
 {"badRequest": {"message": "Security group bar not found for project
 790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
 
 
 and
 
 
 request:
 
 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
 Host: 192.168.122.195:8774
 
 X-Auth-Project-Id: demo
 
 Accept-Encoding: gzip, deflate, compress
 
 Content-Length: 192
 
 Accept: application/json
 
 User-Agent: python-novaclient
 
 X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
 
 Content-Type: application/json
 
 
 {"server": {"name": "demo", "imageRef":
 "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
 "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
 "default"}]}}
 
 response:
 
 HTTP/1.1 400 Bad Request
 
 Content-Length: 70
 
 Content-Type: application/json; charset=UTF-8
 
 X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5
 
 Date: Fri, 23 Jan 2015 10:39:43 GMT
 
 
 {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
 
 
 The former specifies an incorrect security group name, and the latter an
 incorrect keypair name. The problem is that, just looking at the
 response body and the HTTP response code, an external system can't understand
 what exactly went wrong, and parsing the error messages is not the way
 we'd like to solve this problem.
 
 
 For the Compute API v 2 we have the shortened Error Code in the
 documentation at
 

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Evgeniy L
Vladimir,

 1) Nailgun DB

Just a small note: we should not provide access to the database; this
approach has serious issues. What we can do is provide this information,
for example, via a REST API.

What you are saying is already implemented in deployment tools; for
example, let's take a look at Ansible [1].

What you can do there is create a task which stores the result of an
executed shell command in some variable, and then reuse it in any other
task. I think we should use this approach.

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables
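The registered-variables pattern described above (run a command once, stash its result, let later tasks consume it) can be shown in Python terms. This is a toy sketch, not Ansible or Nailgun code; the task names and command are invented for illustration:

```python
# Toy sketch of the "registered variable" pattern: one task records the
# result of a shell command, later tasks interpolate it. Not Ansible or
# Nailgun code; purely illustrative.
import subprocess

registry = {}

def run_task(name, cmd):
    """Run a shell command and register its stdout under `name`."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    registry[name] = result.stdout.strip()  # like "register: name" in Ansible

def use_task(template):
    # A later task pulls the registered value in, like {{ name.stdout }}.
    return template.format(**registry)

run_task("hostname_out", "echo controller-1")
print(use_task("deploying keys generated on {hostname_out}"))
# deploying keys generated on controller-1
```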

On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Evgeniy

 This is not about layers - it is about how we get data. And we need to
 separate data sources from the way we manipulate it. Thus, sources may be:
 1) Nailgun DB 2) Users inventory system 3) Opendata like, list of Google
 DNS Servers. Then all this data is aggregated and transformed somehow.
 After that it is shipped to the deployment layer. That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how it's going to help. You can generate keys with one
 task and then upload them with another task, so why do we need
 another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned serializers
 for tasks - taking data from 3rd party sources if user wants. In this case
 user will be able to generate some data somewhere and fetch it using this
 code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Thank you guys for quick response.
 Than, if there is no better option we will follow with second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should use an approach where the task executor reads
 some data from the file system; ideally Nailgun should push
 all of the required data to Astute.
 But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs) then it will fit better than the other options. If we implement the 3rd
 option then we will reinvent the wheel with SSL in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on the master itself, and then
 distributing them by mcollective transport to all nodes. As you may
 know, we are in the process of describing this process as a task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys,
 and then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on
 target nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file
 on master and put it on the node.

 Also there is 3rd option to generate keys right on
 primary-controller and then distribute them on all other nodes, and i 
 guess
 it will be responsibility of controller to store current keys that are
 valid for cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Jeremy Stanley
On 2015-01-29 09:00:42 -0800 (-0800), Morgan Fainberg wrote:
 Good question! I was planning a keystone client release very soon,
 but will hold off if it will break everything.

I believe part of the problem is that python-barbicanclient doesn't
yet have backward-compatibility jobs running on its master branch
changes, hopefully solved once https://review.openstack.org/150645
merges.
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Vishvananda Ishaya

On Jan 29, 2015, at 8:57 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Jeremy,
 
 I don't have exact numbers, so yeah, it's just an assumption based on
 looking at the nova-api/scheduler logs with connection_debug set to
 100.
 
 But that's a good point you are making here: it will be interesting to
 see what difference enabling of PyMySQL will make for tempest/rally
 workloads, rather than just running synthetic tests. I'm going to give
 it a try on my devstack installation.


FWIW I tested this a while ago on some perf tests on nova and cinder that we
run internally and I found pymysql to be slower by about 10%. It appears that
we were cpu bound in python more often than we were blocking talking to the
db. I do recall someone doing a similar test in neutron saw some speedup,
however. On our side we also exposed a few race conditions which made it less
stable. We hit a few hard deadlocks in volume create IIRC. 

I don’t think switching is going to give us much benefit right away. We will
need a few optimizations and bugfixes in other areas (particularly in our
sqlalchemy usage) before we will derive any benefit from the switch.

Vish

 
 Thanks,
 Roman
 
 On Thu, Jan 29, 2015 at 6:42 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-01-29 18:35:20 +0200 (+0200), Roman Podoliaka wrote:
 [...]
 Otherwise, PyMySQL would be much slower than MySQL-Python for the
 typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
 [...]
 
 Is this assertion based on representative empirical testing (for
 example profiling devstack+tempest, or perhaps comparing rally
 benchmarks), or merely an assumption which still needs validating?
 --
 Jeremy Stanley
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Clint Byrum
Excerpts from Vishvananda Ishaya's message of 2015-01-29 10:21:58 -0800:
 
 On Jan 29, 2015, at 8:57 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:
 
  Jeremy,
  
  I don't have exact numbers, so yeah, it's just an assumption based on
  looking at the nova-api/scheduler logs with connection_debug set to
  100.
  
  But that's a good point you are making here: it will be interesting to
  see what difference enabling of PyMySQL will make for tempest/rally
  workloads, rather than just running synthetic tests. I'm going to give
  it a try on my devstack installation.
 
 
 FWIW I tested this a while ago on some perf tests on nova and cinder that we
 run internally and I found pymysql to be slower by about 10%. It appears that
 we were cpu bound in python more often than we were blocking talking to the
 db. I do recall someone doing a similar test in neutron saw some speedup,
 however. On our side we also exposed a few race conditions which made it less
 stable. We hit a few hard deadlocks in volume create IIRC. 
 
 I don’t think switching is going to give us much benefit right away. We will
 need a few optimizations and bugfixes in other areas (particularly in our
 sqlalchemy usage) before we will derive any benefit from the switch.
 

No magic bullets, right? I think we can all resolve this statement in
our heads though: "fast and never concurrent" will eventually lose to
"concurrent and potentially fast with optimizations".

The question is, how long does the hare (python-mysqldb) have to sleep
before the tortoise (PyMySQL) wins? Right now it's still a close race, but
if the tortoise even gains 10% speed, it likely becomes no contest at all.
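The concurrency argument is easy to demonstrate with a toy model: a driver that yields during network waits (as pure-Python PyMySQL can under eventlet) overlaps its round trips, while one that holds the GIL for the entire call (as the C MySQL-python driver does) effectively serializes them. The sketch below just uses time.sleep as a stand-in for a query's network wait; it is not a real database benchmark:

```python
import threading
import time

def fake_query(latency):
    # Stand-in for a DB round trip: time.sleep releases the GIL, the way
    # a pure-Python driver yields during network I/O. A C driver holding
    # the GIL for the whole call behaves like the serial loop below.
    time.sleep(latency)

N, LAT = 8, 0.05

# "Blocking driver": each query stalls the whole process.
start = time.monotonic()
for _ in range(N):
    fake_query(LAT)
serial = time.monotonic() - start

# "Yielding driver": waits overlap across concurrent requests.
start = time.monotonic()
workers = [threading.Thread(target=fake_query, args=(LAT,)) for _ in range(N)]
for w in workers:
    w.start()
for w in workers:
    w.join()
concurrent = time.monotonic() - start

print("serial %.2fs, concurrent %.2fs" % (serial, concurrent))
```

Under serial load the blocking driver's lower per-query overhead wins, which matches the synthetic-test results in this thread; the crossover only appears once requests actually overlap.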



Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Ian Cordasco


On 1/29/15, 04:10, Thierry Carrez thie...@openstack.org wrote:

Everett Toews wrote:
 A couple of important topics came up as a result of attending the
 Cross Project Meeting. I’ve added both to the agenda for the next
 meeting on Thursday 2015/01/29 at 16:00 UTC.
 
 https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
 The first is the suggestion from ttx to consider using
 openstack-specs [1] for the API guidelines.

Precision: my suggestion was to keep the api-wg repository for the
various drafting stages, and move to openstack-specs when it's ready to
be recommended and request wider community comments. Think "Draft" and
"RFC" stages in the IETF process :)

-- 
Thierry Carrez (ttx)

Thanks for that clarification Thierry!



Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-29 Thread Thomas Goirand
On 01/24/2015 02:01 AM, Doug Hellmann wrote:
 
 
 On Fri, Jan 23, 2015, at 07:48 PM, Thomas Goirand wrote:
 Hi,

 I've just noticed that oslo.log made it to global-requirements.txt 9
 days ago. How come are we still adding some name.spaced oslo libs?
 Wasn't the outcome of the discussion in Paris that we shouldn't do that
 anymore, and that we should be using oslo-log instead of oslo.log?

 Is there something that I am missing here?

 Cheers,

 Thomas Goirand (zigo)
 
 The naming is described in the spec:
 http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html
 
 tl;dr - We did it this way to make life easier for the packagers.
 
 Doug

Hi Doug,

Sorry for the late reply.

Well, you're not making life easier for *package maintainers*; I'm
afraid it's in fact quite the opposite.

The Debian policy is that Python module packages should be named after
the import statement in a source file. Meaning that if we do:

import oslo_db

then the package should be called python-oslo-db. This means that I will
have to rename all the Debian packages to remove the dot and put a dash
instead. But by doing so, if OpenStack upstream is keeping the old
naming convention, then all the requirements.txt will be wrong (by
wrong, I mean from my perspective as a package maintainer), and the
automated dependency calculation of dh_python2 will put package names
with dots instead of dashes.

So, what is going to happen, is that I'll have to, for each and every
package, build a dictionary of translations in debian/pydist-overrides.
For example:

# cat debian/pydist-overrides
oslo.db python-oslo-db

This is very error prone, and I may miss lots of dependencies this way,
leading to the packages having wrong dependencies. I have a way to avoid
the issue, which would be to add a Provides: python-oslo.db in the
python-oslo-db package, but this should only be considered as a
transition thing.
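One way to reduce the chance of missed entries would be to generate the override table mechanically from each package's requirements.txt. A rough sketch of the idea; the helper below is illustrative and not part of dh_python2 or any packaging tool:

```python
# Sketch: derive debian/pydist-overrides entries from requirements.txt
# lines, so dotted names (oslo.db) map to dash-style Debian package
# names. Illustrative only; not part of dh_python2.
import re

def override_line(req):
    # Strip comments, environment markers, extras and version specifiers.
    name = re.split(r"[;#\[<>=!]", req, maxsplit=1)[0].strip()
    if not name or "." not in name:
        return None  # plain names need no override
    return "%s python-%s" % (name, name.lower().replace(".", "-"))

reqs = ["oslo.db>=1.4.1", "oslo.config>=1.6.0", "six>=1.9.0", "# comment"]
for line in filter(None, map(override_line, reqs)):
    print(line)
# oslo.db python-oslo-db
# oslo.config python-oslo-config
```

Running something like this over every package's requirements.txt would keep the translation dictionary complete as upstream dependencies change, instead of maintaining it by hand.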

Also, as a side note, but it may be interesting for some: the package
python-oslo-db should have Breaks: python-oslo.db (<< OLD_VERSION) and
Replaces: python-oslo.db (<< OLD_VERSION), as otherwise upgrades will simply fail
(because 2 different packages can't contain the same files on the
filesystem).

So if you really want to make our lives easier, please do the full
migration, and move completely away from dots.

Also, I'd like to tell you that I feel very sorry that I couldn't attend
the session about the oslo namespace in Paris. I was taken by my company
to a culture-building session for the whole afternoon. After reading the
above, I feel sorry that I didn't attend the namespace session instead. :(

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Ed Leafe
On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:

 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:

Yeah, it was never threads vs. greenthreads. There was a lot of pushback
to relying on Twisted, which many people found confusing to use, and
more importantly, to follow when reading code. Whatever the performance
difference may be, eventlet code is a lot easier to follow, as it more
closely resembles single-threaded linear execution.

-- 
Ed Leafe



Re: [openstack-dev] problems with instance consoles and novnc

2015-01-29 Thread David Moreau Simard
Is SSL involved anywhere from the client browser to the server ?
Is there any load balancing ? Which load balancer software ?

--
David Moreau Simard


On 2015-01-28, 11:57 PM, Chris Friesen chris.frie...@windriver.com
wrote:

On 01/28/2015 10:33 PM, Mathieu Gagné wrote:
 On 2015-01-28 11:13 PM, Chris Friesen wrote:

 Anyone have any suggestions on where to start digging?


 We have a similar issue which has yet to be properly diagnosed on our
side.

 One workaround which looks to be working for us is enabling the
private mode
 in the browser. If it doesn't work, try deleting your cookies.

 Can you see if those workarounds work for you?

Neither of those seems to work for me.  I still get a multi-second delay and
then the red bar with "Connect timeout".

I suspect it's something related to websockify, but I can't figure out
what.
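
One way to narrow that down is to poke the websockify endpoint directly
with a bare WebSocket upgrade request and look at the status line; a
healthy proxy should answer 101. A diagnostic sketch only (the host,
port, and path are assumptions; point them at your noVNC proxy):

```python
# Quick sanity check for the websockify endpoint: send a bare
# WebSocket upgrade request and return the HTTP status line.
# Not a full WebSocket client -- just the opening handshake.
import base64
import os
import socket


def check_websocket(host, port, path="/"):
    key = base64.b64encode(os.urandom(16)).decode()
    request = (
        "GET {path} HTTP/1.1\r\n"
        "Host: {host}:{port}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n\r\n"
    ).format(path=path, host=host, port=port, key=key)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request.encode())
        # A healthy websockify answers "HTTP/1.1 101 Switching Protocols";
        # anything else (or a hang) points at the proxy or what's in front
        # of it (SSL terminator, load balancer) rather than the browser.
        return sock.recv(1024).decode(errors="replace").splitlines()[0]
```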

Chris



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Jeremy Stanley
On 2015-01-29 10:21:58 -0800 (-0800), Vishvananda Ishaya wrote:
[...]
 I don’t think switching is going to give us much benefit right
 away. We will need a few optimizations and bugfixes in other areas
 (particularly in our sqlalchemy usage) before we will derive any
 benefit from the switch.

There are other benefits which we can potentially leverage right
away though. Off the top of my head:

1. It removes yet another system library header dependency on our
toolchain, simplifying testing and development environments.

2. It removes yet another blocker to running/testing under Python 3.
-- 
Jeremy Stanley



[openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Morgan Fainberg
As a quick preface, today there is the assumption you can upgrade and downgrade 
your SQL Schema. For the most part we do our best to test all of this in our 
unit tests (do upgrades and downgrades land us in the same schema). What isn’t 
clearly addressed is that the concept of downgrade might be inherently flawed, 
especially when you’re talking about the data associated with the migration. 
The concept that there is a utility that can (and in many cases willfully) 
cause permanent, and in some cases irrevocable, data loss from a simple command 
line interface sounds crazy when I try and explain it to someone.

The more I work with the data stored in SQL, the more I think we should 
really recommend the tried-and-true best practice when trying to revert a 
migration: restore your DB to a known good state.

* If a migration fails in some spectacular way (that was not handled 
gracefully) is it possible to use the downgrade mechanic to “fix” it? More 
importantly, would you trust the data after that downgrade?
* Once an upgrade has happened and new code is run (potentially making use of 
the new data structures), is it really safe to downgrade and lose that data 
without stepping back to a known consistent state?

The other side of this coin is that it prevents us from collapsing data types 
in the stable store without hints to reverse the migration.
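
To make the collapsing-data-types point concrete, here is a toy sketch
with sqlite of an upgrade that merges two columns (the table and column
names are illustrative only, not a real OpenStack schema):

```python
# Toy illustration: an "upgrade" that collapses two columns into one
# destroys information that no "downgrade" can mechanically restore.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, first TEXT, last TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Mary Ann', 'Evans')")

# Upgrade: merge first/last into a single display_name column, then
# drop the originals (older sqlite lacks DROP COLUMN, so recreate).
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")
conn.execute("UPDATE users SET display_name = first || ' ' || last")
conn.execute("CREATE TABLE users_v2 (id INTEGER, display_name TEXT)")
conn.execute("INSERT INTO users_v2 SELECT id, display_name FROM users")
conn.execute("DROP TABLE users")
conn.execute("ALTER TABLE users_v2 RENAME TO users")

# A "downgrade" could re-add first/last, but splitting
# 'Mary Ann Evans' back into its original parts is guesswork:
# the authoritative data is gone. Only a backup still has it.
row = conn.execute("SELECT display_name FROM users").fetchone()
print(row[0])
```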

I get the feeling that the reason we provide downward migrations today is 
because we have in the past. Due to the existence of these downward migrations 
the expectation is that they will work, and so we’re in a weird feedback-loop.

I’d like to propose we stop setting the expectation that a downwards migration 
is a “good idea” or even something we should really support. Offering 
upwards-only migrations would also simplify the migrations in general. This 
downward migration path is also somewhat broken by the migration collapses 
performed in a number of projects (to limit the number of migrations that need 
to be updated when we change a key component such as oslo.db or SQL-Alchemy 
Migrate to Alembic).

Are downward migrations really a good idea for us to support? Is this downward 
migration path a sane expectation? In the real world, would any one really 
trust the data after migrating downwards?

Thanks for reading!
—Morgan


[openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-29 Thread Ruby Loo
Hi,

Want to contribute even more to the Ironic community? Here's your
opportunity!

Anita Kuno (anteaya) would like someone to be the Ironic representative for
third party CIs. What would you have to do? In her own words: "mostly I
need to know who they are so that when someone has questions I can work
with that person to learn the answers so that they can learn to answer the
questions."

There are regular third party meetings [1] and it would be great if you
would attend them, but that isn't necessary.

Let us know if you're interested. No resumes need to be submitted. In case
there is a lot of interest, hmm..., the PTL, Devananda, will decide.
(That's what he gets for not being around now. ;))

Thanks in advance for all your interest,
--ruby

[1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting


Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Doug Hellmann


On Thu, Jan 29, 2015, at 01:31 PM, Joe Gordon wrote:
 On Thu, Jan 29, 2015 at 9:52 AM, Sean Dague s...@dague.net wrote:
 
  So, honestly, yes.
 
  For a library to release safely it must:
 
  * have stable-compat jobs running (this was the issue with barbican client)
  * if it has a stable/juno branch it must be pinned in stable/juno (this
  was the issue on most of the oslo libs)
 
 
 We use the clients for two very different things.
 
 1. As a python library to communicate between services
 2. A command line tool that can talk to all supported versions of our
 APIs
 (stable/icehouse, stable/juno and master).
 
 With our current testing setup, we try using the latest release on stable
 branches for both 1. and 2.  That means clients need overlapping
 dependencies with the stable branches.   I don't think this is a
 reasonable
 requirement, and am not sure what we gain from it.
 
 Instead I propose we pin the client version for 1 and use the latest
 release for 2.  We should be able to do this easily by putting all
 command
 line tools inside of venvs using pipsi[0]. This way we continue to test
 the
 clients API compatibility with stable branches without requiring
 dependency
 compatibility (I roped Dean Troyer into working on this at the nova mid
 cycle).
 
 Once this is done, we should be able to go ahead and pin all explicit
 dependencies on stable branches [1].

++

Thanks to both of you for working on this!

Doug

 
 [0] https://pypi.python.org/pypi/pipsi/0.8
 [1] https://review.openstack.org/#/c/147451/
 
 
 
  -Sean
 
  On 01/29/2015 12:07 PM, Kyle Mestery wrote:
   Maybe we should defer all client releases until we know for sure if each
   of them are ticking timebombs.
  
   On Thu, Jan 29, 2015 at 11:00 AM, Morgan Fainberg
   morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote:
  
    Good question! I was planning a keystoneclient release very soon,
    but will hold off if it will break everything.
  
   --Morgan
  
  
   On Thursday, January 29, 2015, Thierry Carrez thie...@openstack.org
   mailto:thie...@openstack.org wrote:
  
   Sean Dague wrote:
On 01/27/2015 05:21 PM, Sean Dague wrote:
On 01/27/2015 03:55 PM, Douglas Mendizabal wrote:
Hi openstack-dev,
   
The barbican team would like to announce the release of
python-barbicanclient 3.0.2.  This is a minor release that
   fixes a bug
in the pbr versioning that was preventing the client from
   working correctly.
   
The release is available on PyPI
   
https://pypi.python.org/pypi/python-barbicanclient/3.0.2
   
Which just broke everything, because it creates incompatible
requirements in stable/juno with cinder. :(
   
Here is the footnote -
   
  
  http://logs.openstack.org/18/150618/1/check/check-grenade-dsvm/c727602/logs/grenade.sh.txt.gz#_2015-01-28_00_04_54_429
  
   This seems to have been caused by this requirements sync:
  
  
  http://git.openstack.org/cgit/openstack/python-barbicanclient/commit/requirements.txt?id=054d81fb63053c3ce5f1c87736f832750f6311b3
  
   but then the same requirements sync happened in all other
  clients:
  
  
  http://git.openstack.org/cgit/openstack/python-novaclient/commit/requirements.txt?id=17367002609f011710014aef12a898e9f16db81c
  
   Does that mean that all the clients are time bombs that will
  break
   stable/juno when their next release is tagged ?
  
   --
   Thierry Carrez (ttx)
  
  
  
  
 
 
  --
  Sean Dague
  http://dague.net
 
 


[openstack-dev] django-openstack-auth and stable/icehouse

2015-01-29 Thread Ryan Hsu
Hi All,

There was a change [1] 2 days ago in django-openstack-auth that introduces a 
new requirement, oslo.config>=1.6.0, to the project, which is now present in the 
1.1.9 release of django-openstack-auth. While this change is in sync with 
master requirements, oslo.config>=1.6.0, it does not jibe with stable/icehouse 
requirements, which are >=1.2.0,<1.5. Because stable/icehouse horizon does not 
have an upper-bound version requirement for django-openstack-auth, it currently 
takes this 1.1.9 release of django-openstack-auth with the conflicting 
oslo.config requirement. I have a bug open for this situation here [2].

My first thought was to create a patch [3] to cap the django-openstack-auth 
version in stable/icehouse requirements, however, a reviewer pointed out that 
django-openstack-auth 1.1.8 has a security fix that would be desired. My other 
thought was to decrease the minimum required version in django-openstack-auth 
to equal that of stable/icehouse requirements but this would then conflict with 
master requirements. Does anyone have thoughts on how to best resolve this?
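
To spell out why no single pin can satisfy both sides, here is a small
pure-stdlib sketch; the version ranges come from the requirements above,
while the comparison helper itself is just for illustration (real
resolution is done by pip/pbr against the requirements files):

```python
# stable/icehouse caps oslo.config at >=1.2.0,<1.5 while
# django-openstack-auth 1.1.9 wants >=1.6.0 -- the ranges do not overlap.

def ver(s):
    # Parse "1.4.1" -> (1, 4, 1) so tuples compare numerically.
    return tuple(int(part) for part in s.split("."))

def in_range(candidate, minimum=None, below=None):
    v = ver(candidate)
    if minimum is not None and v < ver(minimum):
        return False
    if below is not None and v >= ver(below):
        return False
    return True

candidates = ["1.2.0", "1.4.1", "1.5.0", "1.6.0", "1.7.0"]
compatible = [
    c for c in candidates
    if in_range(c, minimum="1.2.0", below="1.5")  # stable/icehouse cap
    and in_range(c, minimum="1.6.0")              # django-openstack-auth 1.1.9
]
print(compatible)  # [] -- no version satisfies both requirement ranges
```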

Thank you,
Ryan

[1] 
https://github.com/openstack/django_openstack_auth/commit/2b10c7b51081306b4c675046fd7dfe9df375943d
[2] https://bugs.launchpad.net/horizon/+bug/1415243
[3] https://review.openstack.org/#/c/150612/


Re: [openstack-dev] [sahara] team meeting Jan 29 1400 UTC

2015-01-29 Thread Sergey Lukjanov
Log:
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-01-29-14.04.html
Minutes:
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-01-29-14.04.log.html

On Wed, Jan 28, 2015 at 6:46 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting in #openstack-meeting-3 channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20150129T14

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [api] API Definition Formats

2015-01-29 Thread michael mccune

On 01/28/2015 12:56 PM, Max Lincoln wrote:

tl;dr: I wanted to be able to see what OpenStack APIs might look like in
Swagger and starting experimenting with Swagger in projects for things
like stubbing services, API test coverage, and code generation. In order
to do that I created wadl2swagger [1]. I've published copies [2] of what
the converted documents look like. If you follow the Open on
Swagger-Editor links at
http://rackerlabs.github.io/wadl2swagger/openstack.html you can open the
Swagger-equivalent of any WADL file and see both the source and a
preview of what Swagger-generated documentation would look like.


this is awesome, very cool to look at the converted WADL.

in a similar vein, i started to work on marking up the sahara and 
barbican code bases to produce swagger. for sahara this was a little 
easier as flask makes it simple to query the paths. for barbican i 
started a pecan-swagger[1] project to aid in marking up the code. it's 
still in infancy but i have a few ideas.


also, i've collected my efforts so far here[2].

mike

[1]: https://github.com/elmiko/pecan-swagger
[2]: https://github.com/elmiko/os-swagger-docs




Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Monty Taylor
On 01/29/2015 11:06 AM, Morgan Fainberg wrote:
 As a quick preface, today there is the assumption you can upgrade and 
 downgrade your SQL Schema. For the most part we do our best to test all of 
 this in our unit tests (do upgrades and downgrades land us in the same 
 schema). What isn’t clearly addressed is that the concept of downgrade might 
 be inherently flawed, especially when you’re talking about the data 
 associated with the migration. The concept that there is a utility that can 
 (and in many cases willfully) cause permanent, and in some cases irrevocable, 
 data loss from a simple command line interface sounds crazy when I try and 
 explain it to someone.
 
 The more I work with the data stored in SQL, the more I think we should 
 really recommend the tried-and-true best practice when trying to revert 
 a migration: restore your DB to a known good state.
 
 * If a migration fails in some spectacular way (that was not handled 
 gracefully) is it possible to use the downgrade mechanic to “fix” it? More 
 importantly, would you trust the data after that downgrade?
 * Once an upgrade has happened and new code is run (potentially making use of 
 the new data structures), is it really safe to downgrade and lose that data 
 without stepping back to a known consistent state?
 
 The other side of this coin is that it prevents us from collapsing data types 
 in the stable store without hints to reverse the migration.
 
 I get the feeling that the reason we provide downward migrations today is 
 because we have in the past. Due to the existence of these downward 
 migrations the expectation is that they will work, and so we’re in a weird 
 feedback-loop.
 
 I’d like to propose we stop setting the expectation that a downwards 
 migration is a “good idea” or even something we should really support. 
 Offering upwards-only migrations would also simplify the migrations in 
 general. This downward migration path is also somewhat broken by the 
 migration collapses performed in a number of projects (to limit the number of 
 migrations that need to be updated when we change a key component such as 
 oslo.db or SQL-Alchemy Migrate to Alembic).
 
 Are downward migrations really a good idea for us to support? Is this 
 downward migration path a sane expectation? In the real world, would any one 
 really trust the data after migrating downwards?

I do not think downward migrations are a good idea. I think they are a
spectacularly bad idea that is essentially designed to get users into a
state where they are massively broken.

Operators should fail forward or restore from backup. Giving them
downgrade scripts will imply that they work, which they probably will
not once actual data is involved.

Monty




Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Kevin Benton
I don't understand what you are suggesting here. All of these codes will be
for specific openstack service error conditions. The codes in the link you
provided don't provide any useful information to callers other than more
details about how a web-server is (mis)configured. I didn't see anything
related to Neutron port-binding failures due to mechanism driver errors in
that list. :)
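
To illustrate what a machine-readable error body would buy callers over
HTTP-status-plus-prose, here is a rough client-side sketch; the fault
names are hypothetical, not an agreed OpenStack vocabulary:

```python
# Rough sketch of client-side handling if services returned a
# standardized, machine-readable fault name alongside the message.
# The fault names below are hypothetical, not real OpenStack codes.
import json

# Hypothetical mapping from fault name to a client-side action.
ACTIONS = {
    "SecurityGroupNotFound": "create or fix the security group, then retry",
    "KeypairNotFound": "import the key pair, then retry",
    "PortBindingFailed": "report to operator; retrying will not help",
}

def classify(body):
    # Branch on the fault name, not on the human-readable message text.
    fault = json.loads(body).get("fault", "")
    return ACTIONS.get(fault, "unknown fault; surface message to the user")

resp = json.dumps({
    "fault": "SecurityGroupNotFound",
    "message": "Security group bar not found for project ...",
})
print(classify(resp))
```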


On Thu, Jan 29, 2015 at 9:56 AM, John Dickinson m...@not.mn wrote:

 If you're going to make up your own extensions[1] to HTTP, why don't you
 use ones that are already used?

 http://support.microsoft.com/kb/943891



 [1] ok, what's proposed isn't technically an extension, it's response
 body context for the response code. But response bodies are hard to modify
 when you're dealing with APIs that aren't control-plane APIs.

 --John



  On Jan 29, 2015, at 9:41 AM, Sean Dague s...@dague.net wrote:
 
  Correct. This actually came up at the Nova mid cycle in a side
  conversation with Ironic and Neutron folks.
 
  HTTP error codes are not sufficiently granular to describe what happens
  when a REST service goes wrong, especially if it goes wrong in a way
  that would let the client do something other than blindly try the same
  request, or fail.
 
  Having a standard json error payload would be really nice.
 
   {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
   instance type. If you need this feature please use a different instance
   type. See your cloud provider for options."
   }
 
  That would let us surface more specific errors.
 
  Today there is a giant hodgepodge - see:
 
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424
 
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492
 
  Especially blocks like this:
 
 if 'cloudServersFault' in resp_body:
 message =
  resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
 message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
 message = resp_body['error']['message']
 elif 'message' in resp_body:
 message = resp_body['message']
 
  Standardization here from the API WG would be really great.
 
-Sean
 
  On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.
 
  Thanks,
  Roman
 
  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  Openstack APIs interact with each other and external systems
 partially by
  passing of HTTP errors. The only valuable difference between types of
  exceptions is HTTP-codes, but current codes are generalized, so
 external
  system can’t distinguish what actually happened.
 
 
  As an example two different failures below differs only by error
 message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "bar"}]}}
 
  response:
 
 HTTP/1.1 400 Bad Request
 
  Content-Length: 118
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
 
  Date: Fri, 23 Jan 2015 10:43:33 GMT
 
 
   {"badRequest": {"message": "Security group bar not found for project
   790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
 
 
  and
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 192
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "default"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 70
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: 

Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Brant Knudson
On Thu, Jan 29, 2015 at 11:41 AM, Sean Dague s...@dague.net wrote:

 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

  {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
  instance type. If you need this feature please use a different instance
  type. See your cloud provider for options."
  }

 That would let us surface more specific errors.

 Today there is a giant hodgepodge - see:


 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424


 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492

 Especially blocks like this:

 if 'cloudServersFault' in resp_body:
 message =
 resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
 message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
 message = resp_body['error']['message']
 elif 'message' in resp_body:
 message = resp_body['message']

 Standardization here from the API WG would be really great.

 -Sean

 On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.
 
  Thanks,
  Roman
 
  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  Openstack APIs interact with each other and external systems partially
 by
  passing of HTTP errors. The only valuable difference between types of
  exceptions is HTTP-codes, but current codes are generalized, so
 external
  system can’t distinguish what actually happened.
 
 
  As an example two different failures below differs only by error
 message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "bar"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 118
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
 
  Date: Fri, 23 Jan 2015 10:43:33 GMT
 
 
   {"badRequest": {"message": "Security group bar not found for project
   790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
 
 
  and
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 192
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "default"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 70
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5
 
  Date: Fri, 23 Jan 2015 10:39:43 GMT
 
 
   {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
 
 
  The former specifies an incorrect security group name, and the latter
 an
  incorrect keypair name. And the problem is, that just looking at the
  response body and HTTP response code an external system can’t
 understand
  what exactly went wrong. And parsing of error messages here is not the
 way
  we’d like to solve this problem.
 
 
  For the Compute API v 2 we have the shortened Error Code in the
  documentation at
 
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses
 
  such as:
 
  Error response codes
  computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
  unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
  itemNotFound (404), 

Re: [openstack-dev] [tc] Take back the naming process

2015-01-29 Thread Anita Kuno
On 01/29/2015 11:56 AM, Adam Lawson wrote:
 Hi Anne; this was more or less directed in Monty's direction and/or those
 in agreement with his position. Sorry for the confusion, I probably should
 have been a bit more clear. ; )
 
 Mahalo,
 Adam
Okay, thanks Adam.

My name is Anita.

Thanks,
Anita.
 
 
 *Adam Lawson*
 
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072
 
 
 On Thu, Jan 29, 2015 at 9:05 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 01/28/2015 07:24 PM, Adam Lawson wrote:
 I'm short on time so I apologize for my candor since I need to get
 straight
 to the point.

 I love reading the various opinions and my team is immensely excited with
 OpenStack is maturing. But this is lunacy.

 I looked at the patch being worked [1] to change how things are done and
 have more questions than I can count.

 So I'll start with the obvious ones:

- Are you proposing this change as a Foundation Individual Board
Director tasked with representing the interests of all Individual
 Members
of the OpenStack community or as a member of the TC? Context matters
because your two hats are presenting a conflict of interest in my
 opinion.
One cannot propose a change that gives them greater influence while
suggesting they're doing it for everyone's benefit.
 How can Jim be proposing a change as a Foundation Individual Board
 Director? He isn't a member of the Board.

 http://www.openstack.org/foundation/board-of-directors/

 He is a member of the Technical Committee.

 http://www.openstack.org/foundation/tech-committee/

 Keep in mind that the repository that he offered the change to, the
 openstack/governance repository, welcomes patches from anyone who takes
 the time to learn our developer workflow and offers a patch to the
 repository using Gerrit.

 http://docs.openstack.org/infra/manual/developers.html

 Thanks,
 Anita.
- How is fun remotely relevant when discussing process improvement?
I'm really hoping we aren't developing processes based on how fun a
 process
is or isn't.
- Why is this discussion being limited to the development community
only? Where's the openness in that?
- What exactly is the problem we're attempting to fix?
- Does the current process not work?
- Is there group of individuals being disenfranchised with our current
process somehow that suggests the process should limit participation
differently?

 And some questions around the participation proposals:

- Why is the election process change proposing to limit participation
 to
ATC members only?
There are numerous enthusiasts within our community that don't fall
within the ATC category such as marketing (as some have brought up),
corporate sponsors (where I live) and I'm sure there are many more.
- Is taking back the process a hint that the current process is being
mishandled or restores a sense of process control?
- Is the presumption that the election process belongs to someone or
some group?
That strikes me as an incredibly subjective assertion to make.

 <opinion>This is one reason I feel so strongly folks should not be
 allowed to hold more than one position of leadership within the
 OpenStack project. Obfuscated context coupled with increased influence
 rarely produces excellence on either front. But that's me.</opinion>

 Mahalo,
 Adam

 [1] https://review.openstack.org/#/c/150604/


 *Adam Lawson*

 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072


 On Wed, Jan 28, 2015 at 10:23 AM, Anita Kuno ante...@anteaya.info
 wrote:

 On 01/28/2015 11:36 AM, Thierry Carrez wrote:
 Monty Taylor wrote:
 What if, to reduce stress on you, we make this 100% mechanical:

 - Anyone can propose a name
 - Election officials verify that the name matches the criteria
 -  * note: how do we approve additive exceptions without tons of
 effort

 Devil is in the details, as reading some of my hatemail would tell you.
  For example in the past I rejected "Foo", which was proposed because
  there was a "Foo Bar" landmark in the vicinity. The rules would have to
  be pretty detailed to be entirely objective.
 Naming isn't objective. That is both the value and the hardship.

 - Marketing team provides feedback to the election officials on names
 they find image-wise problematic
 - The poll is created with the roster of all foundation members
 containing all of the choices, but with the marketing issues clearly
 labeled, like this:

 * Love
 * Lumber
 Ohh, it gives me a thrill to see a name that means something even
 remotely Canadian. (not advocating it be added to this round)
 * Lettuce
 * Lemming - marketing issues identified

 - post poll - foundation staff run trademarks checks on the winners in
 order until a 

Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Donald Stufft

 On Jan 29, 2015, at 2:27 PM, Monty Taylor mord...@inaugust.com wrote:
 
 On 01/29/2015 11:06 AM, Morgan Fainberg wrote:
 As a quick preface, today there is the assumption you can upgrade and 
 downgrade your SQL Schema. For the most part we do our best to test all of 
 this in our unit tests (do upgrades and downgrades land us in the same 
 schema). What isn’t clearly addressed is that the concept of downgrade might 
 be inherently flawed, especially when you’re talking about the data 
 associated with the migration. The concept that there is a utility that can 
 (and in many cases willfully) cause permanent, and in some cases 
 irrevocable, data loss from a simple command line interface sounds crazy 
 when I try and explain it to someone.
 
  The more I work with the data stored in SQL, the more I think we should 
  really recommend the tried-and-true best practice when trying to revert 
  a migration: restore your DB to a known good state.
 
 * If a migration fails in some spectacular way (that was not handled 
 gracefully) is it possible to use the downgrade mechanic to “fix” it? More 
 importantly, would you trust the data after that downgrade?
 * Once an upgrade has happened and new code is run (potentially making use 
 of the new data structures), is it really safe to downgrade and lose that 
 data without stepping back to a known consistent state?
 
 The other side of this coin is that it prevents us from collapsing data 
 types in the stable store without hints to reverse the migration.
 
 I get the feeling that the reason we provide downward migrations today is 
 because we have in the past. Due to the existence of these downward 
 migrations the expectation is that they will work, and so we’re in a weird 
 feedback-loop.
 
 I’d like to propose we stop setting the expectation that a downwards 
 migration is a “good idea” or even something we should really support. 
 Offering upwards-only migrations would also simplify the migrations in 
 general. This downward migration path is also somewhat broken by the 
 migration collapses performed in a number of projects (to limit the number 
 of migrations that need to be updated when we change a key component such as 
 oslo.db or SQL-Alchemy Migrate to Alembic).
 
 Are downward migrations really a good idea for us to support? Is this 
 downward migration path a sane expectation? In the real world, would any one 
 really trust the data after migrating downwards?
 
 I do not think downward migrations are a good idea. I think they are a
 spectacularly bad idea that is essentially designed to get users into a
 state where they are massively broken.
 
 Operators should fail forward or restore from backup. Giving them
 downgrade scripts will imply that they work, which they probably will
 not once actual data is involved.

+1 on disabling downgrades.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread John Dickinson
I think there are two points. First, the original requirement (in the first 
email on this thread) is not what's wanted:

...looking at the response body and HTTP response code an external system 
can’t understand what exactly went wrong. And parsing of error messages here is 
not the way we’d like to solve this problem.

So adding a response body to parse doesn't solve the problem. The request as I 
read it is to have a set of well-defined error codes to know what happens.

Second, my response is a little tongue-in-cheek, because I think the IIS 
response codes are a perfect example of extending a common, well-known protocol 
with custom extensions that breaks existing clients. I would hate to see us do 
that.

So if we can't subtly break http, and we can't have error response documents, 
then we're left with custom error codes in the particular response-code class, 
e.g. 461 SecurityGroupNotFound or 462 InvalidKeyName (from the original examples)
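If a machine-readable fault name did exist in the error document, a client could branch on it instead of parsing message text. A rough sketch of what that client side could look like — the fault names and the `fault` field are purely illustrative, not an actual OpenStack format:

```python
import json

# Hypothetical fault names -- a real registry would be defined by the API WG.
KNOWN_FAULTS = {
    "SecurityGroupNotFound": "retry with a valid security group name",
    "InvalidKeyName": "retry with a registered keypair name",
}

def classify_error(status_code, body):
    """Return (fault, hint) from a machine-readable error document,
    falling back to the bare HTTP status when no fault name is present."""
    try:
        fault = json.loads(body).get("fault")
    except (ValueError, AttributeError):
        fault = None
    if fault in KNOWN_FAULTS:
        return fault, KNOWN_FAULTS[fault]
    return "UnknownFault", "HTTP %d: fall back to the message text" % status_code
```

The point is that the dispatch key is a stable identifier, not a human-readable sentence that may be reworded between releases.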


--John






 On Jan 29, 2015, at 11:39 AM, Brant Knudson b...@acm.org wrote:
 
 
 
 On Thu, Jan 29, 2015 at 11:41 AM, Sean Dague s...@dague.net wrote:
 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.
 
 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.
 
 Having a standard json error payload would be really nice.
 
 {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }
 
 That would let us surface more specific errors.
 
 Today there is a giant hodgepodge - see:
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492
 
 Especially blocks like this:
 
 if 'cloudServersFault' in resp_body:
     message = resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
     message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
     message = resp_body['error']['message']
 elif 'message' in resp_body:
     message = resp_body['message']
 
 Standardization here from the API WG would be really great.
 
 -Sean
 
 On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
  Hi Anne,
 
  I think Eugeniya refers to a problem: we can't really distinguish
  between two different badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.
 
  Thanks,
  Roman
 
  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  OpenStack APIs interact with each other and with external systems partly by
  passing HTTP errors. The only valuable difference between types of
  exceptions is the HTTP code, but current codes are generalized, so an
  external system can’t distinguish what actually happened.
 
 
  As an example, the two different failures below differ only by error message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
 
  Content-Type: application/json
 
 
  {"server": {"name": "demo", "imageRef":
  "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
  "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
  "bar"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 118
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
 
  Date: Fri, 23 Jan 2015 10:43:33 GMT
 
 
  {"badRequest": {"message": "Security group bar not found for project
  790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
 
 
  and
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 192
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
 
  Content-Type: application/json
 
 
  {"server": {"name": "demo", "imageRef":
  "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
  "42", "max_count": 1, "min_count": 1,

Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread gordon chung



  Are downward migrations really a good idea for us to support? Is this 
  downward migration path a sane expectation? In the real world, would any 
  one really trust the data after migrating downwards?
are downward migrations actually mandated? i always assumed it was just a 
pain that no one bothered trying to break. i'd be all for ending the downward 
path -- i would assume/hope most people upgrading have some sort of fallback 
(backup) already.
cheers,
gord


Re: [openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-29 Thread Anita Kuno
On 01/29/2015 02:32 PM, Adam Lawson wrote:
 Hi ruby, I'd be interested in this. Let me know next steps when ready?
 
 Thanks!
Hi Adam:

It requires someone who knows the code base really well. While core
review permissions are not required, the person fulfilling this role
needs to have the confidence of the cores for support of decisions they
make.

Since folks with these abilities spend much of their time in irc and
read backscroll I had brought the subject up in channel. I hadn't
expected a post to the mailing list as folks in the larger community may
not have the skill set that would make them effective in this role.

Which is not to say they can't learn the role. The starting place would
be to contribute to the code base as a contributor
(http://docs.openstack.org/infra/manual/developers.html) and earn the
trust of the program's cores through participation in channel and in
reviews.

To be honest, I had thought someone already had said they would do this
but since Ironic doesn't have much third party ci activity, I have
forgotten who said they would. Mostly I was asking if anyone else
remembered who this was.

Thanks Adam,
Anita.
 On Jan 29, 2015 11:14 AM, Ruby Loo rlooya...@gmail.com wrote:
 
 Hi,

 Want to contribute even more to the Ironic community? Here's your
 opportunity!

 Anita Kuno (anteaya) would like someone to be the Ironic representative
 for third party CIs. What would you have to do? In her own words: "mostly
 I need to know who they are so that when someone has questions I can work
 with that person to learn the answers so that they can learn to answer the
 questions".

 There are regular third party meetings [1] and it would be great if you
 would attend them, but that isn't necessary.

 Let us know if you're interested. No resumes need to be submitted. In case
 there is a lot of interest, hmm..., the PTL, Devananda, will decide.
 (That's what he gets for not being around now. ;))

 Thanks in advance for all your interest,
 --ruby

 [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting



 
 
 
 




Re: [openstack-dev] design question : green thread model

2015-01-29 Thread Belmiro Moreira
Hi,
nova-conductor starts multiple processes as well.

Belmiro


On Wednesday, January 28, 2015, Johannes Erdfelt johan...@erdfelt.com
wrote:

 On Wed, Jan 28, 2015, murali reddy muralimmre...@gmail.com
 wrote:
  On hosts with multi-core processors, it does not seem optimal to run a
  single service instance with just green thread. I understand that on
  controller node, we can run one or more nova services but still it does
 not
  seem to utilize multi-core processors.
 
  Is it not a nova scaling concern?

 It certainly depends on the service.

 nova-compute isn't CPU limited for instance, so utilizing multiple cores
 isn't necessary.

 nova-scheduler generally isn't CPU limited either in our usage (at
 Rackspace), but we use cells and as a result, we run multiple independent
 nova-scheduler services.

 If you've seen some scaling problems, I know the Nova team is open to
 reports. In some cases, patches wouldn't be hard to develop to start
 multiple processes, but no one has ever reported a need beyond nova-api.

 JE
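Where more cores are needed, the usual fix is the pre-fork pattern: run one green-thread service per child process so the group as a whole covers several cores. A minimal sketch of the idea, assuming nothing about Nova's actual launcher code (the stub just reports pids instead of serving):

```python
import multiprocessing
import os

def serve(conn):
    # A real service would eventlet.monkey_patch() here and enter its
    # green-thread loop; this stub just reports which process it runs in.
    conn.send(os.getpid())
    conn.close()

def launch(workers):
    """Fork one child per worker; each child runs its own service copy."""
    pids = []
    for _ in range(workers):
        parent_end, child_end = multiprocessing.Pipe()
        proc = multiprocessing.Process(target=serve, args=(child_end,))
        proc.start()
        pids.append(parent_end.recv())
        proc.join()
    return pids

if __name__ == "__main__":
    print(launch(2))
```

Each child keeps the simple single-threaded green model; the parent only supervises, which is roughly how multi-worker nova-api behaves.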





Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-29 Thread Kevin Benton
How is Neutron breaking this?  If I move a port on my physical switch to a
different subnet, can you still communicate with the host sitting on it?
Probably not since it has a view of the world (next-hop router) that no
longer
exists, and the network won't route packets for its old IP address to the
new
location.  It has to wait for its current DHCP lease to tick down to the
point
where it will use broadcast to get a new one, after which point it will
work.

That's not just moving to a different subnet. That's moving to a different
broadcast domain. Neutron supports multiple subnets per network (broadcast
domain). An address on either subnet will work. The router has two
interfaces into the network, one on each subnet.[2]


Does it work on Windows VMs too?  People run those in clouds too.  The
point is
that if we don't know if all the DHCP clients will support it then it's a
non-starter since there's no way to tell from the server side.

It appears they do.[1] Even for clients that don't, the worst case scenario
is just that they are stuck where we are now.

... then the deployer can adjust the value upwards..., hmm, can they
adjust it
downwards as well?  :)

Yes, but most people doing initial openstack deployments don't and wouldn't
think to without understanding the intricacies of the security groups
filtering in Neutron.

I'm glad you're willing to boil the ocean to try and get the default
changed,
but is all this really worth it when all you have to do is edit the config
file
in your deployment?  That's why the value is there in the first place.

The default value is basically incompatible with port IP changes. We
shouldn't be shipping defaults that lead to half-broken functionality. What
I'm understanding is that the current default value is to workaround
shortcomings in dnsmasq. This is an example of implementation details
leaking out and leading to bad UX.

If we had an option to configure how often iptables rules were refreshed to
match their security group, there is no way we would have a default of 12
hours. This is essentially the same level of connectivity interruption, it
just happens to be a narrow use case so it hasn't been getting any
attention.

To flip your question around, why do you care if the default is lower? You
already adjust it beyond the 1 day default in your deployment, so how would
a different default impact you?


1. http://support.microsoft.com/kb/121005
2. Similar to using the secondary keyword on Cisco devices. Or just the
ip addr add command on linux.

On Thu, Jan 29, 2015 at 1:34 PM, Brian Haley brian.ha...@hp.com wrote:

 On 01/29/2015 03:55 AM, Kevin Benton wrote:
 Why would users want to change an active port's IP address anyway?
 
  Re-addressing. It's not common, but the entire reason I brought this up
 is
  because a user was moving an instance to another subnet on the same
 network and
  stranded one of their VMs.
 
  I worry about setting a default config value to handle a very unusual
 use case.
 
  Changing a static lease is something that works on normal networks so I
 don't
  think we should break it in Neutron without a really good reason.

 How is Neutron breaking this?  If I move a port on my physical switch to a
 different subnet, can you still communicate with the host sitting on it?
 Probably not since it has a view of the world (next-hop router) that no
 longer
  exists, and the network won't route packets for its old IP address to the
  new
  location.  It has to wait for its current DHCP lease to tick down to the
 point
 where it will use broadcast to get a new one, after which point it will
 work.

  Right now, the big reason to keep a high lease time that I agree with is
 that it
  buys operators lots of dnsmasq downtime without affecting running
 clients. To
  get the best of both worlds we can set DHCP option 58 (a.k.a
 dhcp-renewal-time
  or T1) to 240 seconds. Then the lease time can be left to be something
 large
  like 10 days to allow for tons of DHCP server downtime without affecting
 running
  clients.
 
  There are two issues with this approach. First, some simple dhcp clients
 don't
  honor that dhcp option (e.g. the one with Cirros), but it works with
 dhclient so
  it should work on CentOS, Fedora, etc (I verified it works on Ubuntu).
 This
  isn't a big deal because the worst case is what we have already (half of
 the
  lease time). The second issue is that dnsmasq hardcodes that option, so
 a patch
  would be required to allow it to be specified in the options file. I am
 happy to
  submit the patch required there so that isn't a big deal either.

 Does it work on Windows VMs too?  People run those in clouds too.  The
 point is
 that if we don't know if all the DHCP clients will support it then it's a
 non-starter since there's no way to tell from the server side.

  If we implement that fix, the remaining issue is Brian's other comment
 about too
  much DHCP traffic. I've been doing some packet captures and the standard
  

Re: [openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-29 Thread Adam Lawson
Hi ruby, I'd be interested in this. Let me know next steps when ready?

Thanks!
On Jan 29, 2015 11:14 AM, Ruby Loo rlooya...@gmail.com wrote:

 Hi,

 Want to contribute even more to the Ironic community? Here's your
 opportunity!

 Anita Kuno (anteaya) would like someone to be the Ironic representative
 for third party CIs. What would you have to do? In her own words: "mostly
 I need to know who they are so that when someone has questions I can work
 with that person to learn the answers so that they can learn to answer the
 questions".

 There are regular third party meetings [1] and it would be great if you
 would attend them, but that isn't necessary.

 Let us know if you're interested. No resumes need to be submitted. In case
 there is a lot of interest, hmm..., the PTL, Devananda, will decide.
 (That's what he gets for not being around now. ;))

 Thanks in advance for all your interest,
 --ruby

 [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting





Re: [openstack-dev] [Fuel] removing single mode

2015-01-29 Thread Igor Belikov
Folks,

Changes in CI jobs have been made: for the master branch of fuel-library we are 
running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN.
The job naming schema has also been changed, so now it includes the actual test 
group. Current links for master branch CI jobs are [1] and [2]; all other jobs 
can be found here [3] or will show up in your gerrit reviews.
ISO and environments have been updated to the latest ones.

[1] https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
[2] https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
[3] https://fuel-jenkins.mirantis.com/
--
Igor Belikov
Fuel DevOps
ibeli...@mirantis.com





 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com wrote:
 
 Mike,
 
  Any objections / additional suggestions?
 
 no objections from me, and it's already covered by LP 1415116 bug [1]
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415116
 
  On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov mscherba...@mirantis.com wrote:
 Folks,
  one of the things we should not forget about is our Fuel CI gating 
  jobs/tests [1], [2].
  
  One of them actually runs simple mode. Unfortunately, I don't see details 
  about the tests run for [1], [2], but I'm pretty sure it's the same set as [3], [4].
 
 I suggest to change tests. First of all, we need to get rid of simple runs 
 (since we are deprecating it), and second - I'd like us to run Ubuntu HA + 
 Neutron VLAN for one of the tests.
 
 Any objections / additional suggestions?
 
  [1] https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
  [2] https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
  [3] https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
  [4] https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/
 
  On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko svasile...@mirantis.com wrote:
 +1 to replace simple to HA with one controller
 
 /sv
 
 
 
 
 
 -- 
 Mike Scherbakov
 #mihgen
 
 
 
 



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Brant Knudson
On Thu, Jan 29, 2015 at 11:05 AM, Roman Podoliaka rpodoly...@mirantis.com
wrote:

 Mike,

  I can't agree more: as far as we are concerned, every service is yet
  another WSGI app. And it should be left up to the operator how to
  deploy it.

  So 'green thread awareness' (i.e. patching of the world) should go into a
  separate keystone|*-eventlet binary,


This is keystone-all:
http://git.openstack.org/cgit/openstack/keystone/tree/bin/keystone-all


 while everyone else will still be
 able to use it as a general WSGI app.


This is httpd/keystone:
http://git.openstack.org/cgit/openstack/keystone/tree/httpd/keystone.py
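The split between the two entry points can be reduced to a tiny sketch — illustrative layout only, not keystone's actual code: the service proper is just a WSGI callable, and only the eventlet entry point patches the world.

```python
def application(environ, start_response):
    # The service proper: a plain WSGI callable, deployable anywhere.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

def main_eventlet():
    # keystone-all-style entry point: patch the world *before* anything
    # else is imported, then serve with green threads.
    import eventlet
    eventlet.monkey_patch()
    from eventlet import wsgi
    wsgi.server(eventlet.listen(("0.0.0.0", 5000)), application)

def main_plain():
    # Any ordinary WSGI container (mod_wsgi, uwsgi, wsgiref) serves the
    # very same callable with no eventlet involvement.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 5000, application).serve_forever()
```

Keeping the monkey-patching confined to one optional launcher is what lets the same app run unmodified under Apache.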


 Thanks,
 Roman



- Brant


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Boris Bobrov
On Thursday 29 January 2015 22:06:25 Morgan Fainberg wrote:
 I’d like to propose we stop setting the expectation that a downwards
 migration is a “good idea” or even something we should really support.
 Offering upwards-only migrations would also simplify the migrations in
 general. This downward migration path is also somewhat broken by the
 migration collapses performed in a number of projects (to limit the number
 of migrations that need to be updated when we change a key component such
 as oslo.db or SQL-Alchemy Migrate to Alembic).
 
 Are downward migrations really a good idea for us to support? Is this
 downward migration path a sane expectation? In the real world, would any
 one really trust the data after migrating downwards?

Frankly, I don't see a case where a downgrade from n to (n - 1) in development 
cannot be replaced with a set of fixtures and an upgrade from 0 to (n - 1).
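That idea can be sketched with a toy upgrade-only migration chain (illustrative, not any real project's migrations): to test against revision n - 1, build a fresh database, replay the chain up to n - 1, and load fixtures — no downgrade step involved.

```python
import sqlite3

# Toy upgrade-only migration chain.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
    "ALTER TABLE users ADD COLUMN enabled INTEGER DEFAULT 1",
]

def schema_at(n):
    """Build a fresh DB upgraded to revision n -- the upgrade-only
    alternative to downgrading a newer schema back to n."""
    db = sqlite3.connect(":memory:")
    for step in MIGRATIONS[:n]:
        db.execute(step)
    return db

# Testing against revision n - 1: fresh DB plus fixtures.
db = schema_at(len(MIGRATIONS) - 1)
db.execute("INSERT INTO users (name, email) VALUES ('demo', 'd@example.org')")
```

Since every revision is reachable from an empty database, the downgrade path adds nothing for development that a rebuild cannot provide.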

If we assume that an upgrade can possibly break something in production, we 
should not rely on fixing it by downgrading the schema, because a) the code 
depends on the latest schema and b) the breakage can be different and 
unrecoverable.

IMO downward migrations should be disabled. We could make a survey though; 
maybe someone has a story of using them in the field.

-- 
Best regards,
Boris



Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Mike Bayer


Morgan Fainberg morgan.fainb...@gmail.com wrote:

 
 Are downward migrations really a good idea for us to support? Is this 
 downward migration path a sane expectation? In the real world, would any one 
 really trust the data after migrating downwards?

It’s a good idea for a migration script to include a rudimentary downgrade 
operation to complement the upgrade operation, if feasible.  From a practical 
standpoint, such a downgrade is helpful when locally testing a specific, 
typically small series of migrations.

A downgrade however typically only applies to schema objects, and not so much 
data.   It is often impossible to provide downgrades of data changes as it is 
likely that a data upgrade operation was destructive of some data.  Therefore, 
when dealing with a full series of real world migrations that include data 
migrations within them, downgrades are typically impossible.   I’m getting the 
impression that our migration scripts have data migrations galore in them.   

So I am +1 on establishing a policy that the deployer of the application would 
not have access to any “downgrade” migrations, and -1 on removing “downgrade” 
entirely from individual migrations.   Specific migration scripts may return 
NotImplemented for their downgrade if its really not feasible, but for things 
like table and column changes where autogenerate has already rendered the 
downgrade, it’s handy to keep at least the smaller ones working.
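A hypothetical migration pair along those lines — plain functions rather than a real Alembic revision, with a simplified schema: the mechanical schema half could be reversed, while the destructive data half refuses to pretend it can be.

```python
import sqlite3

def upgrade(db):
    # Schema change plus a destructive data migration: two status-like
    # columns are collapsed into one.
    db.execute("ALTER TABLE instances ADD COLUMN state TEXT")
    db.execute("UPDATE instances SET state = COALESCE(status, task_state)")
    db.execute("UPDATE instances SET status = NULL, task_state = NULL")

def downgrade(db):
    # The ADD COLUMN could be reversed mechanically, but the collapsed
    # data cannot be split back apart -- refuse rather than guess.
    raise NotImplementedError(
        "data migration is not reversible; restore from backup instead")
```

Returning NotImplemented from specific scripts keeps the small, purely structural downgrades working for local testing while making the irreversible ones fail loudly instead of silently losing data.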







Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Evgeniy,

I am not suggesting going to the Nailgun DB directly. There obviously should be
some layer between a serializer and the DB.

On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

  Just a small note: we should not provide direct access to the database, as
  that approach has serious issues. What we can do is provide this
  information, for example, via a REST API.
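The core of such a REST layer reduces to one handler function — a hedged sketch, where the key-directory path and function names are hypothetical, not Fuel's actual layout: read a named key from the master's key directory while refusing path escapes, instead of letting consumers touch the DB or filesystem directly.

```python
import os

KEYS_DIR = "/etc/fuel/keys"  # hypothetical location of generated keys

def read_key(name, keys_dir=KEYS_DIR):
    """Fetch one named key for an API handler to return; reject names
    that would escape the keys directory."""
    path = os.path.realpath(os.path.join(keys_dir, name))
    if not path.startswith(os.path.realpath(keys_dir) + os.sep):
        raise PermissionError("key name escapes the keys directory")
    with open(path, "rb") as f:
        return f.read()
```

An HTTP endpoint would simply wrap this with authentication; the important property is that all access is mediated and auditable rather than raw.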

  What you are saying is already implemented in many deployment tools; for
  example, let's take a look at Ansible [1].

  What you can do there is create a task which stores the result of an
  executed shell command in some variable, and you can reuse it in any
  other task. I think we should use this approach.

 [1] http://docs.ansible.com/playbooks_variables.html#registered-variables

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy

 This is not about layers - it is about how we get data. And we need to
 separate data sources from the way we manipulate it. Thus, sources may be:
 1) Nailgun DB 2) Users inventory system 3) Opendata like, list of Google
 DNS Servers. Then all this data is aggregated and transformed somehow.
 After that it is shipped to the deployment layer. That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  It's not clear how it's going to help. You can generate keys with one
  task and then upload them with another task; why do we need
  another layer/entity here?

 Thanks,

 On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Dmitry, Evgeniy

 This is exactly what I was talking about when I mentioned serializers
 for tasks - taking data from 3rd party sources if user wants. In this case
 user will be able to generate some data somewhere and fetch it using this
 code that we import.

 On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Thank you guys for quick response.
 Than, if there is no better option we will follow with second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

  I'm not sure we should use an approach where the task executor reads
  some data from the file system; ideally Nailgun should push
  all of the required data to Astute.
  But that can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 The 3rd option is about using the rsyncd that we run under xinetd on the
 primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs) it will fit better than the other options. If we implement the 3rd
 option we will reinvent the wheel with SSL in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on the master itself, and then distributing
 them by mcollective transport to all nodes. As you may know, we are in the
 process of describing this process as a task.

 There are a couple of options:
 1. Expose the keys via an rsync server on the master, in the folder
 /etc/fuel/keys, and then copy them with an rsync task (but it feels not very
 secure).
 2. Copy keys from /etc/fuel/keys on the master to /var/lib/astute on
 target nodes. This will require an additional hook in Astute, something like
 copy_file, which will copy data from a file on the master and put it on the
 node.

 Also there is a 3rd option: generate keys right on the primary controller
 and then distribute them to all other nodes, and I guess it will be the
 responsibility of the controller to store the current keys that are valid
 for the cluster. Alex, please provide more details about the 3rd approach.

 Maybe there are more options?
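 As a rough sketch of what option 2 could look like (the name copy_file follows the "smth like copy_file" wording above; this is not Astute's real code, and the transport to the node is stubbed out behind an upload callable):

```python
import os

# Source and destination directories named in the proposal above.
MASTER_KEY_DIR = '/etc/fuel/keys'    # on the master
NODE_DEST_DIR = '/var/lib/astute'    # on the target node


def copy_file_hook(key_name, upload, src_dir=MASTER_KEY_DIR,
                   dest_dir=NODE_DEST_DIR):
    """Read key data on the master and hand it to an upload callable.

    In a real deployment tool the upload callable would push the bytes
    over mcollective/ssh; here it is injected so the data flow is visible.
    """
    with open(os.path.join(src_dir, key_name), 'rb') as f:
        data = f.read()
    dest = os.path.join(dest_dir, key_name)
    upload(dest, data)
    return dest
```

 The point of the hook is that the target node never needs read access to the master's file system: the master reads the key and pushes it out.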




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev













 

Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-29 Thread Douglas Mendizabal

 On Jan 29, 2015, at 1:19 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Thu, Jan 29, 2015, at 01:31 PM, Joe Gordon wrote:
 On Thu, Jan 29, 2015 at 9:52 AM, Sean Dague s...@dague.net wrote:
 
 So, honestly, yes.
 
 For a library to release safely it must:
 
 * have stable-compat jobs running (this was the issue with barbican client)
 * if it has a stable/juno branch it must be pinned in stable/juno (this
 was the issue on most of the oslo libs)
 
 

[snip]

 Thanks to both of you for working on this!
 
 Doug
 

[snip]

+1  I definitely appreciate all the help sorting this out.  I think it’s good 
that we’ll have a stable-compat job in our gate now, but it’s still not clear 
to me how this would have prevented the problem.  Sure, we could have noticed 
that the global-requirements-sync CR was a breaking change in cinder/juno, but 
by that point the offending oslo library was already in global-requirements.

I’m still not sure what the barbican team needs to do going forward.  It seems 
that pinning an oslo lib pretty much forces every client using it to fork a 
stable branch.  If so, should we plan on supporting a stable/juno branch in 
python-barbicanclient which forks at 3.0.1 (the last juno-compatible release?) 
and then backport fixes as 3.0.1.x?

Thanks,
-Doug Mendizabal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-29 Thread Brian Haley
On 01/29/2015 03:55 AM, Kevin Benton wrote:
Why would users want to change an active port's IP address anyway?
 
 Re-addressing. It's not common, but the entire reason I brought this up is
 because a user was moving an instance to another subnet on the same network 
 and
 stranded one of their VMs.
 
 I worry about setting a default config value to handle a very unusual use 
 case.
 
 Changing a static lease is something that works on normal networks so I don't
 think we should break it in Neutron without a really good reason.

How is Neutron breaking this?  If I move a port on my physical switch to a
different subnet, can you still communicate with the host sitting on it?
Probably not since it has a view of the world (next-hop router) that no longer
exists, and the network won't route packets for its old IP address to the new
location.  It has to wait for its current DHCP lease to tick down to the point
where it will use broadcast to get a new one, after which point it will work.

 Right now, the big reason to keep a high lease time that I agree with is that 
 it
 buys operators lots of dnsmasq downtime without affecting running clients. To
 get the best of both worlds we can set DHCP option 58 (a.k.a dhcp-renewal-time
 or T1) to 240 seconds. Then the lease time can be left to be something large
 like 10 days to allow for tons of DHCP server downtime without affecting 
 running
 clients.
 
 There are two issues with this approach. First, some simple dhcp clients don't
 honor that dhcp option (e.g. the one with Cirros), but it works with dhclient 
 so
 it should work on CentOS, Fedora, etc (I verified it works on Ubuntu). This
 isn't a big deal because the worst case is what we have already (half of the
 lease time). The second issue is that dnsmasq hardcodes that option, so a 
 patch
 would be required to allow it to be specified in the options file. I am happy 
 to
 submit the patch required there so that isn't a big deal either.

Does it work on Windows VMs too?  People run those in clouds too.  The point is
that if we don't know if all the DHCP clients will support it then it's a
non-starter since there's no way to tell from the server side.

 If we implement that fix, the remaining issue is Brian's other comment about 
 too
 much DHCP traffic. I've been doing some packet captures and the standard
 request/reply for a renewal is 2 unicast packets totaling about 725 bytes.
 Assuming 10,000 VMs renewing every 240 seconds, there will be an average of 
 242
 kbps background traffic across the entire network. Even at a density of 50 
 VMs,
 that's only 1.2 kbps per compute node. If that's still too much, then the
 deployer can adjust the value upwards, but that's hardly a reason to have a 
 high
 default.

"... then the deployer can adjust the value upwards...", hmm, can they adjust it
downwards as well?  :)

 That just leaves the logging problem. Since we require a change to dnsmasq
 anyway, perhaps we could also request an option to suppress logs from 
 renewals?
 If that's not adequate, I think 2 log entries per vm every 240 seconds is 
 really
 only a concern for operators with large clouds and they should have the
 knowledge required to change a config file anyway. ;-)

I'm glad you're willing to boil the ocean to try and get the default changed,
but is all this really worth it when all you have to do is edit the config file
in your deployment?  That's why the value is there in the first place.

Sorry, I'm still unconvinced we need to do anything more than document this.

-Brian
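For what it's worth, the traffic estimate quoted above does check out; a quick back-of-the-envelope calculation:

```python
# Sanity check of the renewal-traffic numbers from the thread: a renewal is
# 2 unicast packets totaling ~725 bytes, and clients renew every 240 seconds
# (the proposed DHCP option 58 / T1 value).
BYTES_PER_RENEWAL = 725
RENEW_INTERVAL_S = 240


def renewal_kbps(num_vms):
    """Average background DHCP traffic, in kilobits per second."""
    return num_vms * BYTES_PER_RENEWAL * 8 / RENEW_INTERVAL_S / 1000.0


cloud_wide = renewal_kbps(10000)  # ~242 kbps across the whole network
per_node = renewal_kbps(50)       # ~1.2 kbps for one node at 50 VMs
```

Raising the renewal interval scales this down linearly, which is why the deployer knob matters.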





Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Kevin Benton
Oh, I understood it a little differently. I took "parsing of error messages
here is not the way we’d like to solve this problem" as meaning that
parsing them in their current ad-hoc, project-specific format is not the
way we want to solve this (e.g. the way tempest does it). But if we had a
structured way like the EC2 errors, it would be a much easier problem to
solve.

So either way we are still parsing the body, the only difference is that
the parser no longer has to understand how to parse Neutron errors vs. Nova
errors. It just needs to parse the standard OpenStack error format that
we come up with.



On Thu, Jan 29, 2015 at 12:04 PM, John Dickinson m...@not.mn wrote:

 I think there are two points. First, the original requirement (in the
 first email on this thread) is not what's wanted:

 ...looking at the response body and HTTP response code an external system
 can’t understand what exactly went wrong. And parsing of error messages
 here is not the way we’d like to solve this problem.

 So adding a response body to parse doesn't solve the problem. The request
 as I read it is to have a set of well-defined error codes to know what
 happens.

 Second, my response is a little tongue-in-cheek, because I think the IIS
 response codes are a perfect example of extending a common, well-known
 protocol with custom extensions that breaks existing clients. I would hate
 to see us do that.

 So if we can't subtly break http, and we can't have error response
 documents, then we're left with custom error codes in the particular
 response-code class. eg 461 SecurityGroupNotFound or 462 InvalidKeyName
 (from the original examples)


 --John






  On Jan 29, 2015, at 11:39 AM, Brant Knudson b...@acm.org wrote:
 
 
 
  On Thu, Jan 29, 2015 at 11:41 AM, Sean Dague s...@dague.net wrote:
  Correct. This actually came up at the Nova mid cycle in a side
  conversation with Ironic and Neutron folks.
 
  HTTP error codes are not sufficiently granular to describe what happens
  when a REST service goes wrong, especially if it goes wrong in a way
  that would let the client do something other than blindly try the same
  request, or fail.
 
  Having a standard json error payload would be really nice.
 
  {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
  instance type. If you need this feature please use a different instance
  type. See your cloud provider for options."
  }
 
  That would let us surface more specific errors.
 
  Today there is a giant hodgepodge - see:
 
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424
 
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492
 
  Especially blocks like this:
 
   if 'cloudServersFault' in resp_body:
       message = resp_body['cloudServersFault']['message']
   elif 'computeFault' in resp_body:
       message = resp_body['computeFault']['message']
   elif 'error' in resp_body:
       message = resp_body['error']['message']
   elif 'message' in resp_body:
       message = resp_body['message']
 
  Standardization here from the API WG would be really great.
 
  -Sean
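  For illustration only -- assuming a single agreed-upon payload using the
  example field names from the message above ("fault", "message"), which are
  not an adopted standard -- the per-project elif chain quoted above collapses
  to one parser:

```python
import json

# Hypothetical parser for a single standardized OpenStack error body.
# The field names are taken from the example payload in this thread,
# not from any approved API WG guideline.


def parse_error(body):
    """Extract (fault, message) from a standardized JSON error body."""
    resp = json.loads(body)
    return resp.get('fault', 'unknown'), resp.get('message', '')
```

  Clients would then switch on the machine-readable fault name instead of
  pattern-matching prose.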
 
  On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
   Hi Anne,
  
   I think Eugeniya refers to a problem, that we can't really distinguish
   between two different  badRequest (400) errors (e.g. wrong security
   group name vs wrong key pair name when starting an instance), unless
   we parse the error description, which might be error prone.
  
   Thanks,
   Roman
  
   On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
   annegen...@justwriteclick.com wrote:
  
  
   On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
   ekudryash...@mirantis.com wrote:
  
   Hi, all
  
  
   Openstack APIs interact with each other and external systems
 partially by
   passing of HTTP errors. The only valuable difference between types of
   exceptions is HTTP-codes, but current codes are generalized, so
 external
   system can’t distinguish what actually happened.
  
  
   As an example, the two different failures below differ only by the error
 message:
  
  
   request:
  
   POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
  
   Host: 192.168.122.195:8774
  
   X-Auth-Project-Id: demo
  
   Accept-Encoding: gzip, deflate, compress
  
   Content-Length: 189
  
   Accept: application/json
  
   User-Agent: python-novaclient
  
   X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
  
   Content-Type: application/json
  
  
    {"server": {"name": "demo", "imageRef":
    "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
    "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
    "bar"}]}}
  
   response:
  
   HTTP/1.1 400 Bad Request
  
   Content-Length: 118
  

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-29 Thread Boris Pavlovic
Michael,

Seems like Wataru from CERN is working on testing EC2.
He is adding Rally scenarios related to EC2:
https://review.openstack.org/#/c/147550/

So at least EC2 will have good functional/perf test coverage.

Best regards,
Boris Pavlovic

On Fri, Jan 30, 2015 at 3:11 AM, matt m...@nycresistor.com wrote:

 Is there a blueprint or some set of bugs tagged in some way to tackle?

 -matt

 On Thu, Jan 29, 2015 at 7:01 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
 implementation is in a pretty sad state. I won't repeat all of those
 details here -- you can read the thread on openstack-dev for detail.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

 So, does the Foundation have a role here? The Nova team has failed to
 find someone to help us resolve these issues. Can the board perhaps
 find resources as the representatives of some of the largest
 contributors to OpenStack? Could the Foundation employ someone to help
 us out here?

 I suspect the correct plan is to work on getting the stackforge
 replacement finished, and ensuring that it is feature compatible with
 the Nova implementation. However, I don't want to preempt the design
 process -- there might be other ways forward here.

 I feel that a continued discussion which just repeats the last 18
 months won't actually fix the situation -- it's time to break out of
 that mode and find other ways to try and get someone working on this
 problem.

 Thoughts welcome.

 Michael

 --
 Rackspace Australia

 ___
 Foundation mailing list
 foundat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation







Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Kenichi Oomichi
 -Original Message-
 From: Roman Podoliaka [mailto:rpodoly...@mirantis.com]
 Sent: Friday, January 30, 2015 2:12 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [api][nova] Openstack HTTP error codes
 
 Hi Anne,
 
 I think Eugeniya refers to a problem, that we can't really distinguish
 between two different  badRequest (400) errors (e.g. wrong security
 group name vs wrong key pair name when starting an instance), unless
 we parse the error description, which might be error prone.

Yeah, the current Nova v2 API (not the v2.1 API) returns inconsistent messages
in badRequest responses, because these messages are implemented in many
places. But the Nova v2.1 API can return consistent messages in most cases
because its input validation framework generates messages automatically [1].

Thanks
Ken'ichi Ohmichi

---
[1]: 
https://github.com/openstack/nova/blob/master/nova/api/validation/validators.py#L104
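As a minimal illustration of the idea -- this is not Nova's actual validation
framework, and the schema and wording below only mimic jsonschema-style
messages -- a declarative schema lets a single piece of code generate every
validation message, so all APIs word their badRequest errors the same way:

```python
# Toy schema: field name -> expected Python type. The field names mirror
# the server-create request examples quoted in this thread.
SERVER_SCHEMA = {'name': str, 'flavorRef': str, 'min_count': int}


def validate(payload, schema):
    """Return uniformly worded validation error messages (empty if valid)."""
    errors = []
    for field, expected in sorted(schema.items()):
        if field not in payload:
            errors.append("Invalid input: '%s' is a required property" % field)
        elif not isinstance(payload[field], expected):
            errors.append("Invalid input: '%s' is not of type '%s'"
                          % (field, expected.__name__))
    return errors
```

Because the messages come from one generator rather than from each handler,
external systems see a consistent format regardless of which API rejected the
request.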

 On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
 annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  Openstack APIs interact with each other and external systems partially by
  passing of HTTP errors. The only valuable difference between types of
  exceptions is HTTP-codes, but current codes are generalized, so external
  system can’t distinguish what actually happened.
 
 
  As an example, the two different failures below differ only by the error message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "bar"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 118
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
 
  Date: Fri, 23 Jan 2015 10:43:33 GMT
 
 
   {"badRequest": {"message": "Security group bar not found for project
   790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
 
 
  and
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 192
 
  Accept: application/json
 
  User-Agent: python-novaclient
 
  X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
 
  Content-Type: application/json
 
 
   {"server": {"name": "demo", "imageRef":
   "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo", "flavorRef":
   "42", "max_count": 1, "min_count": 1, "security_groups": [{"name":
   "default"}]}}
 
  response:
 
  HTTP/1.1 400 Bad Request
 
  Content-Length: 70
 
  Content-Type: application/json; charset=UTF-8
 
  X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5
 
  Date: Fri, 23 Jan 2015 10:39:43 GMT
 
 
   {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
 
 
  The former specifies an incorrect security group name, and the latter an
  incorrect keypair name. And the problem is, that just looking at the
  response body and HTTP response code an external system can’t understand
  what exactly went wrong. And parsing of error messages here is not the way
  we’d like to solve this problem.
 
 
  For the Compute API v 2 we have the shortened Error Code in the
  documentation at
  http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses
 
  such as:
 
  Error response codes
  computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
  unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
  itemNotFound (404), buildInProgress (409)
 
  Thanks to a recent update (well, last fall) to our build tool for docs.
 
  What we don't have is a table in the docs saying computeFault has this
  longer Description -- is that what you are asking for, for all OpenStack
  APIs?
 
  Tell me more.
 
  Anne
 
 
 
 
  Another example for solving this problem is AWS EC2 exception codes [1]
 
 
  So if we have some service based on Openstack projects it would be useful
  to have some concrete error codes(textual or numeric), which could allow to
  define what actually goes wrong and later correctly process obtained
  exception. These codes should be predefined for each exception, have
  documented structure and allow to parse exception correctly in each step of
  exception handling.
 
 
  So I’d like to discuss implementing such codes and its usage in openstack
  projects.
 
 
  [1] -
  http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html
 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-29 Thread Michael Still
There is an ec2 bug tag in launchpad. I would link to it except I am
writing this offline. I will fix that later today.

However I think what we've shown is that moving this code out of nova is
the future. I would like to see someone come up with a plan to transition
users to the stackforge project. That seems the best way forward at this
point.

Thanks,
Michael
On 30 Jan 2015 11:11 am, matt m...@nycresistor.com wrote:

 Is there a blueprint or some set of bugs tagged in some way to tackle?

 -matt

 On Thu, Jan 29, 2015 at 7:01 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
 implementation is in a pretty sad state. I won't repeat all of those
 details here -- you can read the thread on openstack-dev for detail.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

 So, does the Foundation have a role here? The Nova team has failed to
 find someone to help us resolve these issues. Can the board perhaps
 find resources as the representatives of some of the largest
 contributors to OpenStack? Could the Foundation employ someone to help
 us out here?

 I suspect the correct plan is to work on getting the stackforge
 replacement finished, and ensuring that it is feature compatible with
 the Nova implementation. However, I don't want to preempt the design
 process -- there might be other ways forward here.

 I feel that a continued discussion which just repeats the last 18
 months won't actually fix the situation -- it's time to break out of
 that mode and find other ways to try and get someone working on this
 problem.

 Thoughts welcome.

 Michael

 --
 Rackspace Australia






Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Jay Faulkner

On Jan 29, 2015, at 2:52 PM, Kevin Benton 
blak...@gmail.com wrote:

Oh, I understood it a little differently. I took "parsing of error messages 
here is not the way we’d like to solve this problem" as meaning that parsing 
them in their current ad-hoc, project-specific format is not the way we want to 
solve this (e.g. the way tempest does it). But if we had a structured way like 
the EC2 errors, it would be a much easier problem to solve.

So either way we are still parsing the body, the only difference is that the 
parser no longer has to understand how to parse Neutron errors vs. Nova errors. 
It just needs to parse the standard OpenStack error format that we come up 
with.


This would be especially helpful for things like haproxy or other load 
balancers, as you could then have them put up a static, openstack-formatted 
JSON error page for their own errors and trust that clients could parse them 
properly.

-Jay



On Thu, Jan 29, 2015 at 12:04 PM, John Dickinson 
m...@not.mn wrote:
I think there are two points. First, the original requirement (in the first 
email on this thread) is not what's wanted:

...looking at the response body and HTTP response code an external system 
can’t understand what exactly went wrong. And parsing of error messages here is 
not the way we’d like to solve this problem.

So adding a response body to parse doesn't solve the problem. The request as I 
read it is to have a set of well-defined error codes to know what happens.

Second, my response is a little tongue-in-cheek, because I think the IIS 
response codes are a perfect example of extending a common, well-known protocol 
with custom extensions that breaks existing clients. I would hate to see us do 
that.

So if we can't subtly break http, and we can't have error response documents, 
then we're left with custom error codes in the particular response-code class. 
eg 461 SecurityGroupNotFound or 462 InvalidKeyName (from the original examples)


--John






 On Jan 29, 2015, at 11:39 AM, Brant Knudson 
  b...@acm.org wrote:



 On Thu, Jan 29, 2015 at 11:41 AM, Sean Dague 
  s...@dague.net wrote:
 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

 {
  "fault": "ComputeFeatureUnsupportedOnInstanceType",
  "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }

 That would let us surface more specific errors.

 Today there is a giant hodgepodge - see:

 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424

 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492

 Especially blocks like this:

 if 'cloudServersFault' in resp_body:
     message = resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
     message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
     message = resp_body['error']['message']
 elif 'message' in resp_body:
     message = resp_body['message']

 Standardization here from the API WG would be really great.

 -Sean

 On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.
 
  Thanks,
  Roman
 
  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  Openstack APIs interact with each other and external systems partially by
  passing of HTTP errors. The only valuable difference between types of
  exceptions is HTTP-codes, but current codes are generalized, so external
  system can’t distinguish what actually happened.
 
 
  As an example, the two different failures below differ only by the error message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: 
