Re: [openstack-dev] [gerrit] Gerrit review problem

2014-12-01 Thread Thierry Carrez
Jay Lau wrote:
 
 When I review a patch for OpenStack, after the review is finished, I want to
 check more patches for this project. But when I click the Project link
 for this patch, it does **not** jump to the list of patches, only to the
 project description. I think that is not convenient for a reviewer who wants
 to review more patches for this project.

I usually click on the name of the branch (master) just below to
work around that UI issue. That gives me the list of patches to review in
the same branch and project, which is generally a good approximation of
what I want.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] image create mysql error

2014-12-01 Thread Vineet Menon
Hi,

Looks like the password supplied wasn't correct.

Have you changed the password given in local.conf after a devstack
installation? If yes, you need to purge the MySQL credentials stored on your
machine.
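
For example, something along these lines (a rough sketch, assuming an
Ubuntu-based devstack node; adjust package names and paths for your distro)
clears the stale credentials before redeploying:

    ./unstack.sh
    sudo apt-get purge mysql-server mysql-common   # remove the old server and its config
    sudo rm -rf /var/lib/mysql                     # drop the data dir holding the stale grants
    ./stack.sh                                     # redeploy with the password from local.conf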

Regards,

Vineet Menon


On 1 December 2014 at 01:33, liuxinguo liuxin...@huawei.com wrote:

  When our CI runs devstack, an error occurs when running “image create
 mysql”. The log is pasted below:



   22186 2014-11-29 21:11:48.611 | ++ basename
 /opt/stack/new/devstack/files/mysql.qcow2 .qcow2

   22187 2014-11-29 21:11:48.623 | + image_name=mysql

   22188 2014-11-29 21:11:48.624 | + disk_format=qcow2

   22189 2014-11-29 21:11:48.624 | + container_format=bare

   22190 2014-11-29 21:11:48.624 | + is_arch ppc64

   22191 2014-11-29 21:11:48.628 | ++ uname -m

   22192 2014-11-29 21:11:48.710 | + [[ i686 == \p\p\c\6\4 ]]

   22193 2014-11-29 21:11:48.710 | + '[' bare = bare ']'

   22194 2014-11-29 21:11:48.710 | + '[' '' = zcat ']'

   22195 2014-11-29 21:11:48.710 | + openstack --os-token
 5387fe9c6f6d4182b09461fe232501db --os-url http://127.0.0.1:9292 image
 create mysql --public --container-format=bare --disk-format qcow2

   22196 2014-11-29 21:11:57.275 | ERROR: openstack <html>

   22197 2014-11-29 21:11:57.275 |  <head>

   22198 2014-11-29 21:11:57.275 |   <title>401 Unauthorized</title>

   22199 2014-11-29 21:11:57.275 |  </head>

   22200 2014-11-29 21:11:57.275 |  <body>

   22201 2014-11-29 21:11:57.275 |   <h1>401 Unauthorized</h1>

   22202 2014-11-29 21:11:57.275 |   This server could not verify that you
 are authorized to access the document you requested. Either you supplied
 the wrong credentials (e.g., bad password), or your browser does not
 understand how to supply the credentials required.<br /><br />

   22203 2014-11-29 21:11:57.275 |

   22204 2014-11-29 21:11:57.276 |  </body>

   22205 2014-11-29 21:11:57.276 | </html> (HTTP 401)

   22206 2014-11-29 21:11:57.344 | + exit_trap



 · Can anyone give me a hint?

 · Thanks.





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fw: #Personal# Ref: L3 service integration with service framework

2014-12-01 Thread Priyanka Chopra
Hi Kyle,

Gentle reminder.
Please suggest a good time when we can discuss the blueprint "L3 router
Service Type Framework".

I am looking for an L3 driver and plugin to enable neutron L3 calls from
OpenStack to ODL.


Best Regards
Priyanka 
- Forwarded by Priyanka Chopra/TVM/TCS on 12/01/2014 03:44 PM -

From:   Priyanka Chopra/TVM/TCS
To: mest...@mestery.com
Cc: kwat...@juniper.net, Partha Datta/DEL/TCS@TCS, Deepankar 
Gupta/DEL/TCS@TCS
Date:   11/27/2014 11:36 AM
Subject:Re: [openstack-dev] #Personal# Ref: L3 service integration 
with service framework


Hi Kyle,


Can we setup a call to understand the current state and future 
developments in detail?
Please suggest a good time. Will share webex/bridge details.


Best Regards
Priyanka 



From:   Kyle Mestery mest...@mestery.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   11/27/2014 01:06 AM
Subject:Re: [openstack-dev] #Personal# Ref: L3 service integration 
with service framework



There is already an out-of-tree L3 plugin, and as part of the plugin
decomposition work, I'm planning to use this as the base for the new
ODL driver in Kilo. Before you file specs and BPs, we should talk a
bit more.

Thanks,
Kyle

[1] https://github.com/dave-tucker/odl-neutron-drivers

On Wed, Nov 26, 2014 at 12:53 PM, Kevin Benton blak...@gmail.com wrote:
 +1. In the ODL case you would just want a completely separate L3 plugin.

 On Wed, Nov 26, 2014 at 7:29 AM, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi,

 You can still add your own service plugin, as a mixin of
 L3RouterPlugin (have a look at Brocade's code).
 AFAIU the service framework would manage the coexistence of several
 implementations of a single service plugin.

 This is currently not prioritized by neutron. This kind of work might
 restart in the advanced_services project.

 On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra
 priyanka.cho...@tcs.com wrote:
  Hi Gary, All,
 
 
  This is with reference to the blueprint "L3 router Service Type Framework"
  and the corresponding development at the github repo.
 
  I noticed that the patch was abandoned due to inactivity. I wanted to know
  if there is a specific reason for which the development was put on hold.
 
  I am working on a use case to enable neutron calls (L2 and L3) from
  OpenStack to OpenDaylight neutron. However, presently ML2 forwards the L2
  calls to ODL neutron, but not the L3 calls (router and FIP).
  With this blueprint submission the L3 service framework (which includes the
  L3 driver, agent and plugin) will be completed, and hence L3 calls from
  OpenStack can be redirected to any controller platform. Please suggest
  in case anyone else is working on the same, or if we can do the enhancements
  required and submit the code to enable such a use case.
 
 
  Best Regards
  Priyanka
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-12-01 Thread Eoghan Glynn


 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid.
 I think the grammar would allow for application/json; detail=concise. See
 the last line in the definition of the media-range nonterminal in the
 grammar (copied below for convenience):
 Accept         = "Accept" ":"
                  #( media-range [ accept-params ] )
 media-range    = ( "*/*"
                  | ( type "/" "*" )
                  | ( type "/" subtype )
                  ) *( ";" parameter )
 accept-params  = ";" "q" "=" qvalue *( accept-extension )
 accept-extension = ";" token [ "=" ( token | quoted-string ) ]
 The grammar does not define the parameter nonterminal but there is an
 example in the same section that seems to suggest what it could look like:
 Accept: text/*, text/html, text/html;level=1, */*
 Shaunak
 On Nov 26, 2014, at 2:03 PM, Everett Toews  everett.to...@rackspace.com 
 wrote:
 
 
 
 
 On Nov 20, 2014, at 4:06 PM, Eoghan Glynn  egl...@redhat.com  wrote:
 
 
 
 How about allowing the caller to specify what level of detail
 they require via the Accept header?
 
 ▶ GET /prefix/resource_name
 Accept: application/json; detail=concise
 
 “The Accept request-header field can be used to specify certain media types
 which are acceptable for the response.” [1]
 
 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid. It’s not appropriate for the Accept header.

Well it's not a media type for sure, as it's intended to be an
accept-extension.

(which is allowed by the spec to be specified in the Accept header,
 in addition to media types & q-values)
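
For illustration, a request carrying that extension (hypothetical endpoint;
the explicit q-value makes "detail" parse as an accept-extension under the
grammar quoted above) could look like:

    curl -H 'Accept: application/json; q=1.0; detail=concise' \
         https://api.example.com/prefix/resource_name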

Cheers,
Eoghan
 
 Everett
 
 [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate

2014-12-01 Thread Miguel Ángel Ajo

My proposal here is: _let’s not deprecate this setting_, as it’s a valid
gateway configuration use case, and let’s provide it in the reference
implementation.

TL;DR

I was looking at this yesterday, during a test deployment
on a site where they provide external connectivity with the
gateway outside the subnet.

And I needed to switch it off to actually be able to have any external
connectivity.

https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121

This is handled by providing an on-link route to the gateway first,
and then adding the default gateway.  
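
In shell terms this amounts to something like the following (addresses and
the interface name are just illustrative):

    # gateway 203.0.113.1 lies outside the subnet configured on the router port
    ip route add 203.0.113.1 dev qg-xxxx        # on-link/device route so the gateway is reachable
    ip route add default via 203.0.113.1 dev qg-xxxx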

It looks very interesting to me (not only because it’s the only way to work on
that specific site [2][3][4]): you can dynamically wire RIPE blocks to
your server without needing to use a specific IP for external routing or
broadcast purposes, and instead use the full block in OpenStack.


I have a tiny patch to support this in the neutron l3-agent [1]. I still need to
add the logic to check “gateway outside subnet” and then add the “onlink” route.


[1]

diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py
index 538527b..5a9f186 100644
--- a/neutron/agent/linux/interface.py
+++ b/neutron/agent/linux/interface.py
@@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
                             namespace=namespace,
                             ip=ip_cidr)
 
-        if gateway:
-            device.route.add_gateway(gateway)
-
         new_onlink_routes = set(s['cidr'] for s in extra_subnets)
+        if gateway:
+            new_onlink_routes.update([gateway])
         existing_onlink_routes = set(device.route.list_onlink_routes())
         for route in new_onlink_routes - existing_onlink_routes:
             device.route.add_onlink_route(route)
         for route in existing_onlink_routes - new_onlink_routes:
             device.route.delete_onlink_route(route)
+        if gateway:
+            device.route.add_gateway(gateway)
 
     def delete_conntrack_state(self, root_helper, namespace, ip):
         """Delete conntrack state associated with an IP address.


[2] http://www.soyoustart.com/ (http://www.soyoustart.com/en/essential-servers/)
[3] http://www.ovh.co.uk/ (http://www.ovh.co.uk/dedicated_servers/)
[4] http://www.kimsufi.com/ (http://www.kimsufi.com/uk/)



Miguel Ángel Ajo



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Need reviews for Deploy GlusterFS server patch

2014-12-01 Thread Deepak Shetty
Just correcting the tag and subject line in $subject, so that it gets the
attention of the right group  of folks (from devstack).

thanx,
deepak

On Mon, Dec 1, 2014 at 11:51 AM, Bharat Kumar bharat.kobag...@redhat.com
wrote:

 Hi All,

 Regarding the patch Deploy GlusterFS Server (
 https://review.openstack.org/#/c/133102/):
 I submitted this patch a while back, and it has already received Code Review +2.

 I think it is waiting for Workflow approval. Another task is dependent on
 this patch.
 Please review (Workflow) this patch and help me merge it.

 --
 Thanks & Regards,
 Bharat Kumar K


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by Mellanox for v5.1.1

2014-12-01 Thread Gil Meir
I've mistakenly put issue #1 link in issue #3,
the correct link for the floating IPs issue is: 
https://bugs.launchpad.net/fuel/+bug/1397907

Gil

From: Gil Meir [mailto:gilm...@mellanox.com]
Sent: Monday, December 01, 2014 12:19
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by 
Mellanox for v5.1.1

We have found 3 critical bugs for Fuel v5.1.1 here in Mellanox:


1.   https://bugs.launchpad.net/fuel/+bug/1397891

This is related to https://bugs.launchpad.net/fuel/+bug/1396020

The kernel fix there is working, but there is a missing OVS service restart,
since restarting the Mellanox driver (openibd) requires an OVS restart.
I will push a puppet fix.


2.   https://bugs.launchpad.net/fuel/+bug/1397895

On our side it looks like MOS cinder added a patch to the cinder package which has
a mistake in the ISER part.

We investigated it here and found the cause; the solution is a small fix in the
ISERTgtAdm class, which affects only Mellanox. A patch was attached to the LP
bug.


3.   https://bugs.launchpad.net/fuel/+bug/1397891

This was reproduced twice. It looks like it's not related specifically to the Mellanox
flow, but is a general MOS issue: mismatching tenant owners for the floating IP and
the port (services/admin).



Regards,

Gil Meir
SW Cloud Solutions
Mellanox Technologies

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][all] Gate error

2014-12-01 Thread Sergey Lukjanov
I've made a temp fix for it by removing logilab-common 0.63.2 from our
mirror and starting a full re-sync of the mirrors; hopefully it'll fix everything.
Thanks to lifeless for help.

On Mon, Dec 1, 2014 at 12:31 PM, Andreas Jaeger a...@suse.com wrote:

 On 12/01/2014 05:45 AM, Eli Qiao wrote:
  Got a gate error today.
  How can we ping the infrastructure team to fix it?
 
  HTTP error 404 while getting
 
 http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e
  
 http://pypi.iad.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e
 (from
  http://pypi.IAD.openstack.org/simple/logilab-common/
  http://pypi.iad.openstack.org/simple/logilab-common/)
 
  [1]
 http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html

 This looks like a problem with our pypi mirror that affects other
 projects as well. ;(

 Sergey just checked and tried to fix it but couldn't. I hope that
 Jeremy or another infra admin can fix it once they are awake (US time).

 For now, please do not issue rechecks - let's wait until the issue is
 fixed,

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-12-01 Thread Przemyslaw Kaminski


On 11/28/2014 05:15 PM, Ivan Kliuk wrote:

Hi, team!

Let me please present ideas collected during the unit tests 
improvement meeting:

1) Rename class ``Environment`` to something more descriptive
2) Remove hardcoded self.clusters[0], e.t.c from ``Environment``. 
Let's use parameters instead
3) run_tests.sh should invoke alternate syncdb() for cases where we 
don't need to test migration procedure, i.e. create_db_schema()
4) Consider usage of custom fixture provider. The main functionality 
should combine loading from YAML/JSON source and support fixture 
inheritance

5) The project needs a document (policy) which describes:
- The test creation technique;
- Test categorization (integration/unit) and approaches to testing
different parts of the code base

-
6) Review the tests and refactor unit tests as described in the test 
policy

7) Mimic Nailgun module structure in unit tests
8) Explore Swagger tool http://swagger.io/


Swagger is a great tool; we used it in my previous job. We used Tornado and
attached some hand-crafted code to the RequestHandler class so that it
inspected all its subclasses (i.e. the different endpoints with REST
methods), generated a swagger file and presented the Swagger UI
(https://github.com/swagger-api/swagger-ui) under some /docs/ URL.
What this gave us is that we could just add a YAML specification directly
to the docstring of the handler method and it would automatically appear
in the UI (see the sketch below). It's worth noting that the UI provides an
interactive form for sending requests to the API so that tinkering with the
API is easy [1].
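
Roughly the shape of it, as a from-memory sketch rather than the actual code
we had (handler and field names are made up for illustration):

import textwrap
import yaml
import tornado.web


class ClusterHandler(tornado.web.RequestHandler):
    def get(self, cluster_id):
        """
        summary: Get a single cluster
        parameters:
          - name: cluster_id
            in: path
            type: string
        responses:
          200:
            description: Cluster object
        """
        self.write({"id": cluster_id})


def build_swagger_paths():
    # Walk the (direct) RequestHandler subclasses and turn the YAML carried
    # in each overridden HTTP-verb docstring into a swagger-style "paths" map.
    paths = {}
    for handler in tornado.web.RequestHandler.__subclasses__():
        spec = {}
        for verb in ("get", "post", "put", "delete"):
            method = getattr(handler, verb, None)
            if method and method.__doc__:
                spec[verb] = yaml.safe_load(textwrap.dedent(method.__doc__))
        if spec:
            paths[handler.__name__] = spec
    return paths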


[1] 
https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0


P.


--
Sincerely yours,
Ivan Kliuk


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] logilab-common 404 in jobs

2014-12-01 Thread Sergey Lukjanov
Hey,

there was a pypi mirrors issue with downloading logilab-common 0.63.2 from
our mirrors:

HTTP error 404
  while getting
http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e
(from http://pypi.IAD.openstack.org/simple/logilab-common/)

Example:
http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html

I've fixed it by removing the 0.63.2 version from the index and starting the
full re-sync of the mirror, which should download the new version.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-12-01 Thread Sean Dague
On 12/01/2014 05:36 AM, Eoghan Glynn wrote:
 
 
 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid.
 I think the grammar would allow for application/json; detail=concise. See
 the last line in the definition of the media-range nonterminal in the
 grammar (copied below for convenience):
 Accept         = "Accept" ":"
                  #( media-range [ accept-params ] )
 media-range    = ( "*/*"
                  | ( type "/" "*" )
                  | ( type "/" subtype )
                  ) *( ";" parameter )
 accept-params  = ";" "q" "=" qvalue *( accept-extension )
 accept-extension = ";" token [ "=" ( token | quoted-string ) ]
 The grammar does not define the parameter nonterminal but there is an
 example in the same section that seems to suggest what it could look like:
 Accept: text/*, text/html, text/html;level=1, */*
 Shaunak
 On Nov 26, 2014, at 2:03 PM, Everett Toews  everett.to...@rackspace.com 
 wrote:




 On Nov 20, 2014, at 4:06 PM, Eoghan Glynn  egl...@redhat.com  wrote:



 How about allowing the caller to specify what level of detail
 they require via the Accept header?

 ▶ GET /prefix/resource_name
 Accept: application/json; detail=concise

 “The Accept request-header field can be used to specify certain media types
 which are acceptable for the response.” [1]

 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid. It’s not appropriate for the Accept header.
 
 Well it's not a media type for sure, as it's intended to be an
 accept-extension.
 
 (which is allowed by the spec to be specified in the Accept header,
  in addition to media types & q-values)

Please let's not use the Accept field for this. It doesn't really fit
into the philosophy of content negotiation.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] logilab-common 404 in jobs

2014-12-01 Thread Sergey Lukjanov
Launchpad issue - https://bugs.launchpad.net/openstack-ci/+bug/1397931
Elastic search query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJ0YWdzOlwiY29uc29sZVwiIEFORCBtZXNzYWdlOlwiSFRUUCBlcnJvciA0MDQgd2hpbGUgZ2V0dGluZ1wiIEFORCBtZXNzYWdlOiBcImxvZ2lsYWJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE3NDMzMjc0MDY1fQ==
Elastic-recheck CR: https://review.openstack.org/#/c/138040/

On Mon, Dec 1, 2014 at 2:15 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hey,

 there was a pypi mirrors issue with downloading logilab-common 0.63.2 from
 our mirrors:

 HTTP error 404
   while getting
 http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e
 (from http://pypi.IAD.openstack.org/simple/logilab-common/)

 Example:
 http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html

 I've fixed it by removing the 0.63.2 version from the index and start the
 full re-sync of mirror that should download the new version.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate

2014-12-01 Thread Assaf Muller


- Original Message -
 
 My proposal here, is, _let’s not deprecate this setting_, as it’s a valid use
 case of a gateway configuration, and let’s provide it on the reference
 implementation.

I agree. As long as the reference implementation works with the setting off
there's no need to deprecate it. I still think the default should be set to True
though.

Keep in mind that the DHCP agent will need changes as well.

 
 TL;DR
 
 I’ve been looking at this yesterday, during a test deployment
 on a site where they provide external connectivity with the
 gateway outside subnet.
 
 And I needed to switch it of, to actually be able to have any external
 connectivity.
 
 https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121
 
 This is handled by providing an on-link route to the gateway first,
 and then adding the default gateway.
 
 It looks to me very interesting (not only because it’s the only way to work
 on that specific site [2][3][4]), because you can dynamically wire RIPE
 blocks to your server, without needing to use an specific IP for external
 routing or broadcast purposes, and instead use the full block in openstack.
 
 
 I have a tiny patch to support this on the neutron l3-agent [1] I yet need to
 add the logic to check “gateway outside subnet”, then add the “onlink”
 route.
 
 
 [1]
 
 diff --git a/neutron/agent/linux/interface.py
 b/neutron/agent/linux/interface.py
 index 538527b..5a9f186 100644
 --- a/neutron/agent/linux/interface.py
 +++ b/neutron/agent/linux/interface.py
 @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
                              namespace=namespace,
                              ip=ip_cidr)
  
 -        if gateway:
 -            device.route.add_gateway(gateway)
 -
          new_onlink_routes = set(s['cidr'] for s in extra_subnets)
 +        if gateway:
 +            new_onlink_routes.update([gateway])
          existing_onlink_routes = set(device.route.list_onlink_routes())
          for route in new_onlink_routes - existing_onlink_routes:
              device.route.add_onlink_route(route)
          for route in existing_onlink_routes - new_onlink_routes:
              device.route.delete_onlink_route(route)
 +        if gateway:
 +            device.route.add_gateway(gateway)
  
      def delete_conntrack_state(self, root_helper, namespace, ip):
          """Delete conntrack state associated with an IP address.
 
 [2] http://www.soyoustart.com/
 [3] http://www.ovh.co.uk/
 [4] http://www.kimsufi.com/
 
 
 Miguel Ángel Ajo
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Integration with Ceph

2014-12-01 Thread Roman Podoliaka
Hi Sergey,

AFAIU, the problem is that when Nova was designed initially, it had no
notion of shared storage (e.g. Ceph), so all the resources were
considered to be local to compute nodes. In that case each total value
was a sum of values per node. But as we see now, that doesn't work
well with Ceph, when the storage is actually shared and doesn't belong
to any particular node.

It seems we've got two different, but related problems here:

1) resource tracking is incorrect, as nodes shouldn't report info
about storage when shared storage is used (fixing this by reporting
e.g. 0 values would require changes to nova-scheduler)

2) total storage is calculated incorrectly as we just sum the values
reported by each node

From my point of view, in order to fix both, it might make sense for
nova-api/nova-scheduler to actually know whether shared storage is used
and to access Ceph directly (otherwise it's not clear which compute
node we should ask for this data, and what exactly we should ask, as
we don't actually know if the storage is shared in the context of the
nova-api/nova-scheduler processes).
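
For the "ask Ceph directly" part, a minimal sketch (assuming the python-rados
bindings and a readable /etc/ceph/ceph.conf on the node running the query;
this is not an actual Nova patch) would be something like:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # get_cluster_stats() returns cluster-wide 'kb', 'kb_used', 'kb_avail'
    stats = cluster.get_cluster_stats()
    total_gb = stats['kb'] // (1024 * 1024)
    used_gb = stats['kb_used'] // (1024 * 1024)
    print('total: %s GB, used: %s GB' % (total_gb, used_gb))
finally:
    cluster.shutdown()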

Thanks,
Roman

On Mon, Nov 24, 2014 at 3:45 PM, Sergey Nikitin sniki...@mirantis.com wrote:
 Hi,
 As you know we can use Ceph as ephemeral storage in nova. But we have some
 problems with its integration. First of all, total storage of compute nodes
 is calculated incorrectly. (more details here
 https://bugs.launchpad.net/nova/+bug/1387812). I want to fix this problem.
 Now the total storage size is only a sum of the storage of all compute nodes, and
 the information about the total storage is taken directly from the DB
 (https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L663-L691).
 To fix the problem we should check the type of storage in use. If the storage type
 is RBD, we should get the total storage information directly from the Ceph
 cluster.
 I proposed a patch (https://review.openstack.org/#/c/132084/) which should
 fix this problem, but I got the fair comment that we shouldn't check type of
 storage on the API layer.

 The other problem is that the information about the size of each compute node is
 incorrect too. Now the size of each node equals the size of the whole Ceph cluster.

 On the one hand it is good not to check the storage type on the API layer; on
 the other hand there are some reasons to check it on the API layer:
 1. It would be useful for live migration because now a user has to send
 information about storage with API request.
 2. It helps to fix problem with total storage.
 3. It helps to fix problem with size of compute nodes.

 So I want to ask you: is it a good idea to get information about the type of
 storage on the API layer? If not, are there any ideas on how to get correct
 information about Ceph storage?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Mathieu Rohon
Hi,


On Sun, Nov 30, 2014 at 8:35 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 27 November 2014 at 12:11, Mohammad Hanif mha...@brocade.com wrote:

 Folks,

 Recently, as part of the L2 gateway thread, there was some discussion on
 BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron
 network.  Just to update everyone in the community, Ian and I have
 separately submitted specs which make an attempt to address the cloud edge
 connectivity.  Below are the links describing it:

 Edge-Id: https://review.openstack.org/#/c/136555/
 Edge-VPN: https://review.openstack.org/#/c/136929 .  This is a resubmit of
 https://review.openstack.org/#/c/101043/ for the kilo release under the
 “Edge VPN” title.  “Inter-datacenter connectivity orchestration” was just
 too long and just too generic of a title to continue discussing about :-(


 Per the summit discussions, the difference is one of approach.

 The Edge-VPN case addresses MPLS attachments via a set of APIs to be added
 to the core of Neutron.  Those APIs are all new objects and don't really
 change the existing API so much as extend it.  There's talk of making it a
 'service plugin' but if it were me I would simply argue for a new service
 endpoint.  Keystone's good at service discovery, endpoints are pretty easy
 to create and I don't see why you need to fold it in.

 The edge-id case says 'Neutron doesn't really care about what happens
 outside of the cloud at this point in time, there are loads of different
 edge termination types, and so the best solution would be one where the
 description of the actual edge datamodel does not make its way into core
 Neutron'.  This avoids us folding in the information about edges in the same
 way that we folded in the information about services and later regretted it.
 The notable downside is that this method would work with an external network
 controller such as ODL, but probably will never make its way into the
 inbuilt OVS/ML2 network controller if it's implemented as described
 (explicitly *because* it's designed in such a way as to keep the
 functionality out of core Neutron).  Basically, it's not completely
 incompatible with the datamodel that the Edge-VPN change describes, but
 pushes that datamodel out to an independent service which would have its own
 service endpoint to avoid complicating the Neutron API with information
 that, likely, Neutron itself could probably only ever validate, store and
 pass on to an external controller.

This is not entirely true, since a reference implementation
based on existing Neutron components (L2 agent/L3 agent...) can still exist.
But even if it were true, this could at least give a standardized API
to operators that want to connect their Neutron networks to external
VPNs, without coupling their cloud solution to a particular SDN
controller. And to me, this is the main issue that we want to solve by
proposing some neutron specs.

 Also, the Edge-VPN case is specified for only MPLS VPNs, and doesn't
 consider other edge cases such as Kevin's switch-based edges in
 https://review.openstack.org/#/c/87825/ .  The edge-ID one is agnostic of
 termination types (since it absolves Neutron of all of that responsibility)
 and would leave the edge type description to the determination of an
 external service.

 Obviously, I'm biased, having written the competing spec; but I prefer the
 simple change that pushes complexity out of the core to the larger but
 comprehensive change that keeps it as a part of Neutron.  And in fact if you
 look at the two specs with that in mind, they do go together; the Edge-VPN
 model is almost precisely what you need to describe an endpoint that you
 could then associate with an Edge-ID to attach it to Neutron.
 --
 Ian.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-12-01 Thread Sandy Walsh

From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: Sunday, November 30, 2014 5:40 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Where should Schema files live?

Duncan Thomas
On Nov 27, 2014 10:32 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 We were thinking each service API would expose their schema via a new 
 /schema resource (or something). Nova would expose its schema. Glance its 
 own. etc. This would also work well for installations still using older 
 deployments.
This feels like externally exposing info that need not be external (since the
notifications are not external to the deploy), and it sounds like it will
potentially leak fine-grained version and maybe deployment config details
that you don't want to make public, either for commercial reasons or to make
targeted attacks harder.


Yep, good point. Makes a good case for standing up our own service or just
relying on the tarballs being in a well-known place.

Thanks for the feedback.


-S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] proper syncing of cinder volume state

2014-12-01 Thread Duncan Thomas
John:

States that the driver can/should do some cleanup work during the
transition:

attaching - available or error
detaching - available or error
error - available or error
deleting - deleted or error_deleting

Also possibly wanted in the future, but much harder:
backing_up - available or error (need to make sure the backup service
copes)
restoring - error (need to make sure the backup service copes)

I haven't looked at the entire state space yet; these are the obvious ones
off the top of my head. (A purely illustrative sketch of what such a driver
cleanup hook could look like follows below.)
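
To make that concrete, a purely hypothetical sketch of such a per-transition
cleanup hook (none of these names exist in Cinder today; the stubs stand in
for real backend calls):

class ExampleDriver(object):
    """Illustrative only: a cleanup hook a driver could expose for reset-state."""

    def cleanup_for_reset(self, volume, target_status):
        # e.g. attaching/detaching -> available: make sure no exports remain
        # on the backend before the DB status is flipped back to 'available'.
        if target_status == 'available':
            for attachment in self._backend_attachments(volume['id']):
                self._remove_attachment(volume['id'], attachment)

    def _backend_attachments(self, volume_id):
        return []   # stub: query the array for active exports of this volume

    def _remove_attachment(self, volume_id, attachment):
        pass        # stub: tear down the export/initiator group on the array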


On 1 December 2014 at 06:30, John Griffith john.griffi...@gmail.com wrote:

 On Fri, Nov 28, 2014 at 11:25 AM, D'Angelo, Scott scott.dang...@hp.com
 wrote:
  A Cinder blueprint has been submitted to allow the python-cinderclient to
  involve the back end storage driver in resetting the state of a cinder
  volume:
 
  https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
 
  and the spec:
 
  https://review.openstack.org/#/c/134366
 
 
 
  This blueprint contains various use cases for a volume that may be listed in
  the Cinder database in state detaching|attaching|creating|deleting.
 
  The Proposed solution involves augmenting the python-cinderclient command
  ‘reset-state’, but other options are listed, including those that
 
  involve Nova, since the state of a volume in the Nova XML found in
  /etc/libvirt/qemu/instance_id.xml may also be out-of-sync with the
 
  Cinder DB or storage back end.
 
 
 
  A related proposal for adding a new non-admin API for changing volume
 status
  from ‘attaching’ to ‘error’ has also been proposed:
 
  https://review.openstack.org/#/c/137503/
 
 
 
  Some questions have arisen:
 
  1) Should the ‘reset-state’ command be changed at all, since it was originally
  just to modify the Cinder DB?
 
  2) Should ‘reset-state’ be fixed to prevent the naïve admin from changing
  the Cinder DB to be out-of-sync with the back end storage?
 
  3) Should ‘reset-state’ be kept the same, but augmented with new options?
 
  4) Should a new command be implemented, with possibly a new admin API to
  properly sync state?
 
  5) Should Nova be involved? If so, should this be done as a separate body of
  work?
 
 
 
  This has proven to be a complex issue and there seems to be a good bit of
  interest. Please provide feedback, comments, and suggestions.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 Hey Scott,

 Thanks for posting this to the ML, I stated my opinion on the spec,
 but for completeness:
 My feeling is that reset-state has morphed into something entirely
 different than originally intended.  That's actually great, nothing
 wrong there at all.  I strongly disagree with the statements that
 setting the status in the DB only is almost always the wrong thing to
 do.  The whole point was to allow the state to be changed in the DB
 so the item could in most cases be deleted.  There was never an intent
 (that I'm aware of) to make this some sort of uber resync and heal API
 call.

 All of that history aside, I think it would be great to add some
 driver interaction here.  I am however very unclear on what that would
 actually include.  For example, would you let a Volume's state be
 changed from Error-Attaching to In-Use and just run through the
 process of retrying an attach?  To me that seems like a bad idea.  I'm
 much happier with the current state of changing the state form Error
 to Available (and NOTHING else) so that an operation can be retried,
 or the resource can be deleted.  If you start allowing any state
 transition (which sadly we've started to do) you're almost never going
 to get things correct.  This also covers almost every situation even
 though it means you have to explicitly retry operations or steps (I
 don't think that's a bad thing) and make the code significantly more
 robust IMO (we have some issues lately with things being robust).

 My proposal would be to go back to limiting the things you can do with
 reset-state (basically make it so you can only release the resource back
 to available) and add the driver interaction to clean up any mess if
 possible.  This could be a simple driver call added like
 make_volume_available whereby the driver just ensures that there are
 no attachments and well; honestly nothing else comes to mind as
 being something the driver cares about here. The final option then
 being to add some more power to force-delete.

 Is there anything other than attach that matters from a driver?  If
 people are talking error-recovery that to me is a whole different
 topic and frankly I think we need to spend more time preventing errors
 as opposed to trying to recover from them via new API calls.

 Curious to see if any other folks have input here?

 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [mistral] Team meeting - 12/01/2014

2014-12-01 Thread Renat Akhmerov
Hi,

This is a reminder about the team meeting that we’ll have today in
#openstack-meeting at 16.00 UTC.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Release Kilo-1 progress
Open discussion

(see [0] for the agenda as well as the meeting archive)

[0] https://wiki.openstack.org/wiki/Meetings/MistralAgenda

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps

2014-12-01 Thread Doug Hellmann
As part of setting up version caps for Oslo and client libraries in the stable 
branches, we discovered that the fact that we do not always create stable 
branches of the Oslo libraries means the check-requirements-integration-dsvm 
job in stable branches is actually looking at the requirements in master 
branches of the libraries, and failing in a lot of cases. With the move away 
from alpha version numbers toward version caps and patch releases, we’re going 
to change the Oslo release processes so that at the end of a cycle we always 
create a stable branch from the final version of each library released in the 
cycle. 

Thanks to Clark and Thierry for helping by creating stable/icehouse and 
stable/juno branches for most of the libraries, though we’ve discovered one or 
two that we missed so we’re still working on a few cases.

For Kilo, we will branch all of the library repositories at the end of the 
cycle, probably following the same process as is used for the other projects 
(though the details remain to be worked out).

Thanks,
Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services

2014-12-01 Thread Doug Hellmann

On Nov 30, 2014, at 8:51 PM, Jamie Lennox jamielen...@redhat.com wrote:

 TL;DR: I think we can handle most of oslo.context with some additions to
 auth_token middleware and simplify policy enforcement (from a service
 perspective) at the same time.
 
 There is currently a push to release oslo.context as a
 library, for reference:
 https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py
 
 Whilst I love the intent to standardize this
 functionality I think that many of the requirements in there
 are incorrect and don't apply to all services. It is my
 understanding for example that read_only, show_deleted are
 essentially nova requirements, and the use of is_admin needs
 to be killed off, not standardized.
 
 Currently each service builds a context based on headers
 made available from auth_token middleware and some
 additional interpretations based on that user
 authentication. Each service does this slightly differently
 based on its needs/when it copied it from nova.
 
 I propose that auth_token middleware essentially handle the
 creation and management of an authentication object that
 will be passed and used by all services. This will
 standardize so much of the oslo.context library that I'm not
 sure it will be still needed. I bring this up now as I am
 wanting to push this way and don't want to change things
 after everyone has adopted oslo.context.

We put the context class in its own library because both oslo.messaging and 
oslo.log [1] need to have input into its API. If the middleware wants to get 
involved in adding to the API that’s fine, but context is not only used for 
authentication so the middleware can’t own it.

[1] https://review.openstack.org/132551


 
 The current release of auth_token middleware creates and
 passes to services (via env['keystone.token_auth']) an auth
 plugin that can be passed to clients to use the current user
 authentication. My intention here is to expand that object
 to expose all of the authentication information required for
 the services to operate.
 
 There are two components to context that I can see:
 
 - The current authentication information that is retrieved
   from auth_token middleware.
 - service specific context added based on that user
   information eg read_only, show_deleted, is_admin,
   resource_id
 
 Regarding the first point of current authentication there
 are three places I can see this used:
 
 - communicating with other services as that user
 - associating resources with a user/project
 - policy enforcement

One of the specific logging requests we’ve had from operators is to have the 
logs show the authentication context clearly and consistently (i.e., the same 
format whether domains are used in the deployment or not). That’s an aspect of 
the spec linked above.

 
 Addressing each of the 'current authentication' needs:
 
 - As mentioned for service to service communication
   auth_token middleware already provides an auth_plugin
   that can be used with (at this point most) of the
   clients. This greatly simplifies reusing an existing
   token and correctly using the service catalog as each
   client would do this differently. In future this plugin
   will be extended to provide support for concepts such as
   filling in the X-Service-Token [1] on behalf of the
   service, managing the request id, and generally
   standardizing service-service communication without
   requiring explicit support from every project and client.
 
 - Given that this authentication plugin is built within
   auth_token middleware it is a fairly trivial step to
   provide public properties on this object to give access
   to the current user_id, project_id and other relevant
   authentication data that the services can access. This is
   fairly well handled today but it means it is done without
   the service having to fetch all these objects from
   headers.

That sounds like a good source of data to populate the context object.

 
 - With upcoming changes to policy to handle features such
   as the X-Service-Token the existing context will need to
   gain a bunch of new entries. With the keystone team
   looking to wrap policy enforcement into its own
   standalone library it makes more sense to provide this
   authentication object directly to policy enforcement.
   This will allow the keystone team to manipulate policy
   data from both auth_token and the enforcement side,
   letting us introduce new features to policy transparent
   to the services. It will also standardize the naming of
   variables within these policy files.
 
 What is left for a context object after this is managing
 serialization and deserialization of this auth object and
 any additional fields (read_only etc) that are generally
 calculated at context creation time. This would be a very
 small library.

That’s not all it will do, but it will be small. As I mentioned above, we 
isolated it in its own library to control dependencies because several aspects 

Re: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps

2014-12-01 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Are we going to have stable releases for those branches?

On 01/12/14 15:19, Doug Hellmann wrote:
 As part of setting up version caps for Oslo and client libraries in
 the stable branches, we discovered that the fact that we do not
 always create stable branches of the Oslo libraries means the
 check-requirements-integration-dsvm job in stable branches is
 actually looking at the requirements in master branches of the
 libraries, and failing in a lot of cases. With the move away from
 alpha version numbers toward version caps and patch releases, we’re
 going to change the Oslo release processes so that at the end of a
 cycle we always create a stable branch from the final version of
 each library released in the cycle.
 
 Thanks to Clark and Thierry for helping by creating stable/icehouse
 and stable/juno branches for most of the libraries, though we’ve
 discovered one or two that we missed so we’re still working on a
 few cases.
 
 For Kilo, we will branch all of the library repositories at the end
 of the cycle, probably following the same process as is used for
 the other projects (though the details remain to be worked out).
 
 Thanks, Doug ___ 
 OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUfHuyAAoJEC5aWaUY1u57yVAH/Al27yWpaEFSuRMuni+ItdTW
+avoRoVgpeYLR8kJqo/P2YBdht12ddVjU/JZh1VBvcN7KHClvd6gyBBpAXlq66aQ
lRG2uNSy6+ufcaE7UTyt/beEmyNpZvW/yyaknvwmAZaU1+h/9ZnFByf6WgtuwsYr
o3N02GzUJkUF0MNGj14eWKGTTTO1M/xbj20ZttKLn1fifPp+pjg2dLrnXYqXdEVW
rzenlcCifPLWhHRh4PPJmPrgJYjFgLp5FYEbxMAEhxOdPHR+UiLDmifrYYLzMcMy
hDBjp38ej35LyfHzxxHUkm7Km4P/p/Cuib4Zw77FIVnspTBORNnPovOh/Lf/yJ8=
=GJIn
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps

2014-12-01 Thread Doug Hellmann

On Dec 1, 2014, at 9:31 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Signed PGP part
 Are we going to have stable releases for those branches?

We will possibly have point or patch releases based on those branches, as fixes 
end up needing to be backported. We have already done this in a few cases, 
which is why some libraries had stable branches already.

 
 On 01/12/14 15:19, Doug Hellmann wrote:
  As part of setting up version caps for Oslo and client libraries in
  the stable branches, we discovered that the fact that we do not
  always create stable branches of the Oslo libraries means the
  check-requirements-integration-dsvm job in stable branches is
  actually looking at the requirements in master branches of the
  libraries, and failing in a lot of cases. With the move away from
  alpha version numbers toward version caps and patch releases, we’re
  going to change the Oslo release processes so that at the end of a
  cycle we always create a stable branch from the final version of
  each library released in the cycle.
 
  Thanks to Clark and Thierry for helping by creating stable/icehouse
  and stable/juno branches for most of the libraries, though we’ve
  discovered one or two that we missed so we’re still working on a
  few cases.
 
  For Kilo, we will branch all of the library repositories at the end
  of the cycle, probably following the same process as is used for
  the other projects (though the details remain to be worked out).
 
  Thanks, Doug ___
  OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled'

2014-12-01 Thread Louis Taylor
Hi all,

In order to enable or disable osprofiler in Glance, we currently have an
option:

[profiler]
# If False fully disable profiling feature.
enabled = False

However, all other services with osprofiler integration use a similar option
named profiler_enabled.

For consistency, I'm proposing we deprecate this option's name in favour of
profiler_enabled. This should make it easier for someone to configure
osprofiler across projects with less confusion. Does anyone have any thoughts
or concerns about this?
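
For what it's worth, oslo.config already supports this kind of rename, so the
change could be as small as something like the following sketch (the help text
matches the current option; the module layout is illustrative, not the actual
glance code):

from oslo.config import cfg

profiler_opts = [
    cfg.BoolOpt('profiler_enabled',
                default=False,
                # keep reading the old name so existing configs don't break
                deprecated_name='enabled',
                deprecated_group='profiler',
                help='If False fully disable profiling feature.'),
]

cfg.CONF.register_opts(profiler_opts, group='profiler')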

Thanks,
Louis


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps

2014-12-01 Thread Thierry Carrez
Doug Hellmann wrote:
 On Dec 1, 2014, at 9:31 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 Are we going to have stable releases for those branches?
 
 We will possibly have point or patch releases based on those branches, as 
 fixes end up needing to be backported. We have already done this in a few 
 cases, which is why some libraries had stable branches already.

We won't do stable coordinated point releases though, since those
libraries are on their own semver versioning scheme.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] rpm packages versions

2014-12-01 Thread Dmitry Pyzhov
Just FYI. We have updated versions of nailgun-related packages in 6.0:
https://review.openstack.org/#/c/137886/
https://review.openstack.org/#/c/137887/
https://review.openstack.org/#/c/137888/
https://review.openstack.org/#/c/137889/

We need it in order to support package updates both in the current version and
in stable releases.

I've updated our HCF template, so we will not forget to update it in the next
releases.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Ian Wells
On 1 December 2014 at 04:43, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 This is not entirely true, as soon as a reference implementation,
 based on existing Neutron components (L2agent/L3agent...) can exist.


The specific thing I was saying is that that's harder with an edge-id
mechanism than one incorporated into Neutron, because the point of the
edge-id proposal is to make tunnelling explicitly *not* a responsibility of
Neutron.  So how do you get the agents to terminate tunnels when Neutron
doesn't know anything about tunnels and the agents are a part of Neutron?
Conversely, you can add a mechanism to the OVS subsystem so that you can
tap an L2 bridge into a network, which would probably be more
straightforward.

But even if it were true, this could at least give a standardized API
 to Operators that want to connect their Neutron networks to external
 VPNs, without coupling their cloud solution with whatever SDN
 controller. And to me, this is the main issue that we want to solve by
 proposing some neutron specs.


So the issue I worry about here is that if we start down the path of adding
the MPLS datamodels to Neutron we have to add Kevin's switch control work.
And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN.  And whatever
else comes along.  And we get back to 'that's a lot of big changes that
aren't interesting to 90% of Neutron users' - difficult to get in and a lot
of overhead to maintain for the majority of Neutron developers who don't
want or need it.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Gate error

2014-12-01 Thread Jeremy Stanley
On 2014-12-01 12:45:04 +0800 (+0800), Eli Qiao wrote:
[...]
 HTTP error 404 while getting http://pypi.IAD.openstack.org/
 packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz
[...]

This was reported as https://launchpad.net/bugs/1397931 and handled
a few hours ago.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-12-01 Thread Jeremy Stanley
On 2014-12-01 17:04:23 +0800 (+0800), Jay Lau wrote:
 Cool, Thierry! I see. This is really what I want ;-)

Also note that we're not running a modified version of Gerrit, so if
a behavior change is desired it likely needs to be reported at
http://code.google.com/p/gerrit/issues instead.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-Project meeting tomorrow, Tue December 2nd, 21:00 UTC

2014-12-01 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting tomorrow at 21:00 UTC, with the
following agenda:

* Convergence on specs process (johnthetubaguy)
* Incompatible rework of client libraries (notmyname, morganfainberg)
  * New work in openstack-sdk, keep python client library for
compatibility and CLI
* 2014.1.2 point release status
* Open discussion & announcements

See you there !


NB: This meeting replaces the previous Project/Release meeting with a
more obviously cross-project agenda. For more details, please see:

https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A mascot for Ironic

2014-12-01 Thread Lucas Alvares Gomes
Hi all,

I'm sorry for the long delay on this; I've been dragged into some other
stuff :) But anyway, now it's time.

I've asked the core Ironic team to narrow down the name options (we had too
many, thanks to everyone that contributed); the list of finalists is in the
poll right here: http://doodle.com/9h4ncgx4etkyfgdw. So please vote and
help us choose the best name for the new mascot!

Cheers,
Lucas

On Tue, Nov 18, 2014 at 7:44 PM, Nathan Kinder nkin...@redhat.com wrote:



 On 11/16/2014 10:51 AM, David Shrewsbury wrote:
 
  On Nov 16, 2014, at 8:57 AM, Chris K nobody...@gmail.com
  mailto:nobody...@gmail.com wrote:
 
  How cute.
 
  maybe we could call him bear-thoven.
 
  Chris
 
 
  I like Blaze Bearly, lead singer for Ironic Maiden.  :)
 
  https://en.wikipedia.org/wiki/Blaze_Bayley

 Good call!  I never thought I'd see a Blaze Bayley reference on this
 list. :) Just watch out for imposters...

 http://en.wikipedia.org/wiki/Slow_Riot_for_New_Zer%C3%B8_Kanada#BBF3

 
 
 
  On Sun, Nov 16, 2014 at 5:14 AM, Lucas Alvares Gomes
  lucasago...@gmail.com wrote:
 
  Hi Ironickers,
 
  I was thinking this weekend: All the cool projects does have a
 mascot
  so I thought that we could have one for Ironic too.
 
  The idea about what the mascot would be was easy because the RAX
 guys
  put bear metal their presentation[1] and that totally rocks! So I
  drew a bear. It also needed an instrument, at first I thought about
 a
  guitar, but drums is actually my favorite instrument so I drew a
 pair
  of drumsticks instead.
 
  The drawing thing wasn't that hard, the problem was to digitalize
 it.
  So I scanned the thing and went to youtube to watch some tutorials
  about gimp and inkspace to learn how to vectorize it. Magic, it
  worked!
 
  Attached in the email there's the original draw, the vectorized
  version without colors and the final version of it (with colors).
 
  Of course, I know some people does have better skills than I do, so
 I
  also attached the inkspace file of the final version in case people
  want to tweak it :)
 
  So, what you guys think about making this little drummer bear the
  mascot of the Ironic project?
 
  Ahh he also needs a name. So please send some suggestions and we can
  vote on the best name for him.
 
  [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90
 
  Lucas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A mascot for Ironic

2014-12-01 Thread Lucas Alvares Gomes
Ah forgot to say,

Please add your launchpad ID on the Name Field. And I will close the poll
on Wednesday at 18:00 UTC (I think it's enough time to everyone take a look
at it)

Cheers,
Lucas

On Mon, Dec 1, 2014 at 4:44 PM, Lucas Alvares Gomes lucasago...@gmail.com
wrote:

 Hi all,

 I'm sorry for the long delay on this I've been dragged into some other
 stuff :) But anyway, now it's time

 I've asked the core Ironic team to narrow down the name options (we had
 too many, thanks to everyone that contributed) the list of finalists is in
 the poll right here: http://doodle.com/9h4ncgx4etkyfgdw. So please vote
 and help us choose the best name for the new mascot!

 Cheers,
 Lucas

 On Tue, Nov 18, 2014 at 7:44 PM, Nathan Kinder nkin...@redhat.com wrote:



 On 11/16/2014 10:51 AM, David Shrewsbury wrote:
 
  On Nov 16, 2014, at 8:57 AM, Chris K nobody...@gmail.com wrote:
 
  How cute.
 
  maybe we could call him bear-thoven.
 
  Chris
 
 
  I like Blaze Bearly, lead singer for Ironic Maiden.  :)
 
  https://en.wikipedia.org/wiki/Blaze_Bayley

 Good call!  I never thought I'd see a Blaze Bayley reference on this
 list. :) Just watch out for imposters...

 http://en.wikipedia.org/wiki/Slow_Riot_for_New_Zer%C3%B8_Kanada#BBF3

 
 
 
  On Sun, Nov 16, 2014 at 5:14 AM, Lucas Alvares Gomes
  lucasago...@gmail.com wrote:
 
  Hi Ironickers,
 
  I was thinking this weekend: All the cool projects does have a
 mascot
  so I thought that we could have one for Ironic too.
 
  The idea about what the mascot would be was easy because the RAX
 guys
  put bear metal their presentation[1] and that totally rocks! So I
  drew a bear. It also needed an instrument, at first I thought
 about a
  guitar, but drums is actually my favorite instrument so I drew a
 pair
  of drumsticks instead.
 
  The drawing thing wasn't that hard, the problem was to digitalize
 it.
  So I scanned the thing and went to youtube to watch some tutorials
  about gimp and inkspace to learn how to vectorize it. Magic, it
  worked!
 
  Attached in the email there's the original draw, the vectorized
  version without colors and the final version of it (with colors).
 
  Of course, I know some people does have better skills than I do,
 so I
  also attached the inkspace file of the final version in case people
  want to tweak it :)
 
  So, what you guys think about making this little drummer bear the
  mascot of the Ironic project?
 
  Ahh he also needs a name. So please send some suggestions and we
 can
  vote on the best name for him.
 
  [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90
 
  Lucas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Mathieu Rohon
On Mon, Dec 1, 2014 at 4:46 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 1 December 2014 at 04:43, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 This is not entirely true, as soon as a reference implementation,
 based on existing Neutron components (L2agent/L3agent...) can exist.


 The specific thing I was saying is that that's harder with an edge-id
 mechanism than one incorporated into Neutron, because the point of the
 edge-id proposal is to make tunnelling explicitly *not* a responsibility of
 Neutron.  So how do you get the agents to terminate tunnels when Neutron
 doesn't know anything about tunnels and the agents are a part of Neutron?

by having modular agents that can drive the dataplane with pluggable
components that would be part of any advanced service. This is a way
to move forward on splitting out advanced services.

 Conversely, you can add a mechanism to the OVS subsystem so that you can tap
 an L2 bridge into a network, which would probably be more straightforward.

This is an alternative that would say: you want an advanced service
for your VM, please stretch your l2 network to this external
component, that is driven by an external controller, and make your
traffic go through this component to benefit from this advanced
service. This is a valid alternative of course, but distributing the
service directly to each compute node is much more valuable, as soon
as it is doable.

 But even if it were true, this could at least give a standardized API
 to Operators that want to connect their Neutron networks to external
 VPNs, without coupling their cloud solution with whatever SDN
 controller. And to me, this is the main issue that we want to solve by
 proposing some neutron specs.


 So the issue I worry about here is that if we start down the path of adding
 the MPLS datamodels to Neutron we have to add Kevin's switch control work.
 And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN.  And whatever
 else comes along.  And we get back to 'that's a lot of big changes that
 aren't interesting to 90% of Neutron users' - difficult to get in and a lot
 of overhead to maintain for the majority of Neutron developers who don't
 want or need it.

This shouldn't be a lot of big changes once the interfaces between
advanced services and neutron core services are cleaner. The
description of the interconnection has to be done somewhere, and
neutron and its advanced services are a good candidate for that.

 --
 Ian.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 12/01/2014

2014-12-01 Thread Renat Akhmerov
Thanks for joining our team meeting today!

Meeting minutes: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-01-16.00.html
Meeting log: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-01-16.00.log.html

The next meeting is scheduled for Dec 8 at the same time.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-01 Thread Ben Nemec
Okay, boiling my thoughts down further:

James's (valid, IMHO) concerns aside, I want to see one of two things
before I'm anything but -1 on this change:

1) A specific reason SHELLOPTS can't be used.  Nobody has given me one
besides hand-wavy it might not work stuff.  FTR, as I noted in my
previous message, the set -e thing can be easily addressed if we think
it necessary so I don't consider that a valid answer here.

Also, http://stackoverflow.com/questions/4325444/bash-recursive-xtrace

2) A specific use case that can only be addressed via this
implementation.  I don't personally have one, but if someone does then
I'd like to hear it.

I'm all for improving in this area, but before we make an intrusive
change with an ongoing cost that won't work with anything not explicitly
enabled for it, I want to make sure it's the right thing to do.  As yet
I'm not convinced.

-Ben

On 11/27/2014 12:29 PM, Sullivan, Jon Paul wrote:
 -Original Message-
 From: Ben Nemec [mailto:openst...@nemebean.com]
 Sent: 26 November 2014 17:03
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for
 scripts (119023)

 On 11/25/2014 10:58 PM, Ian Wienand wrote:
 Hi,

 My change [1] to enable a consistent tracing mechanism for the many
 scripts diskimage-builder runs during its build seems to have hit a
 stalemate.

 I hope we can agree that the current situation is not good.  When
 trying to develop with diskimage-builder, I find myself constantly
 going and fiddling with set -x in various scripts, requiring me
 re-running things needlessly as I try and trace what's happening.
 Conversley some scripts set -x all the time and give output when you
 don't want it.

 Now nodepool is using d-i-b more, it would be even nicer to have
 consistency in the tracing so relevant info is captured in the image
 build logs.

 The crux of the issue seems to be some disagreement between reviewers
 over having a single trace everything flag or a more fine-grained
 approach, as currently implemented after it was asked for in reviews.

 I must be honest, I feel a bit silly calling out essentially a
 four-line patch here.

 My objections are documented in the review, but basically boil down to
 the fact that it's not a four line patch, it's a 500+ line patch that
 does essentially the same thing as:

 set +e
 set -x
 export SHELLOPTS
 
 I don't think this is true, as there are many more things in SHELLOPTS than 
 just xtrace.  I think it is wrong to call the two approaches equivalent.
 

 in disk-image-create.  You do lose set -e in disk-image-create itself on
 debug runs because that's not something we can safely propagate,
 although we could work around that by unsetting it before calling hooks.
  FWIW I've used this method locally and it worked fine.
 
 So this does say that your alternative implementation has a difference from 
 the proposed one.  And that the difference has a negative impact.
 

 The only drawback is it doesn't allow the granularity of an if block in
 every script, but I don't personally see that as a particularly useful
 feature anyway.  I would like to hear from someone who requested that
 functionality as to what their use case is and how they would define the
 different debug levels before we merge an intrusive patch that would
 need to be added to every single new script in dib or tripleo going
 forward.
 
 So currently we have boilerplate to be added to all new elements, and that 
 boilerplate is:
 
 set -eux
 set -o pipefail
 
 This patch would change that boilerplate to:
 
 if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
 set -x
 fi
 set -eu
 set -o pipefail
 
 So it's adding 3 lines.  It doesn't seem onerous, especially as most people 
 creating a new element will either copy an existing one or copy/paste the 
 header anyway.
 
 I think that giving control over what is effectively debug or non-debug 
 output is a desirable feature.
 
 We have a patch that implements that desirable feature.
 
 I don't see a compelling technical reason to reject that patch.
 

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Thanks, 
 Jon-Paul Sullivan ☺ Cloud Services - @hpcloud
 
 



Re: [openstack-dev] How to debug test using pdb

2014-12-01 Thread Ben Nemec
I don't personally use the debugger much, but there is a helper script
that is supposed to allow debugging:
https://github.com/openstack/oslotest/blob/master/tools/oslo_debug_helper
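
For what it's worth, here is a minimal sketch of where a breakpoint would go
(the test module and names are made up); running the test through
oslo_debug_helper (or a tox "debug" environment wired to it) avoids the
output capture that normally swallows the pdb prompt:

    # Hypothetical test module; the breakpoint is only usable when output
    # capture is disabled, which is what the debug helper takes care of.
    import pdb

    import testtools


    class TestExample(testtools.TestCase):
        def test_something(self):
            value = 1 + 1
            pdb.set_trace()  # drops into the debugger right here
            self.assertEqual(2, value)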

On 11/30/2014 05:08 AM, Saju M wrote:
 Hi,
 
 How to debug test using pdb
 
 I want to debug tests and tried following methods, but didn't work
 (could not see pdb console).
 I could see only the message Tests running... and command got stuck.
 
 I tried this with python-neutronclient, that does not have run_test.sh
 
 Method-1:
 #source .tox/py27/bin/activate
 #.tox/py27/bin/python -m testtools.run
 neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
 
 Method-2:
 #testr list-tests '(CLITestV20NetworkJSON.test_create_network)'  my-list
 #python -m testtools.run discover --load-list my-list
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to debug test using pdb

2014-12-01 Thread Steve Martinelli
Link to the docs for the debugger script:
  
http://docs.openstack.org/developer/oslotest/features.html#debugging-with-oslo-debug-helper

We have support for it in some of the Keystone related projects and folks
seem to find it useful. It may not fit your needs, but as Ben suggested,
give it a whirl, it should help.

Steve

Ben Nemec openst...@nemebean.com wrote on 12/01/2014 01:19:17 PM:

 From: Ben Nemec openst...@nemebean.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 12/01/2014 01:25 PM
 Subject: Re: [openstack-dev] How to debug test using pdb
 
 I don't personally use the debugger much, but there is a helper script
 that is supposed to allow debugging:
 
https://github.com/openstack/oslotest/blob/master/tools/oslo_debug_helper
 
 On 11/30/2014 05:08 AM, Saju M wrote:
  Hi,
  
  How to debug test using pdb
  
  I want to debug tests and tried following methods, but didn't work
  (could not see pdb console).
  I could see only the message Tests running... and command got stuck.
  
  I tried this with python-neutronclient, that does not have run_test.sh
  
  Method-1:
  #source .tox/py27/bin/activate
  #.tox/py27/bin/python -m testtools.run
  
 
neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
  
  Method-2:
  #testr list-tests '(CLITestV20NetworkJSON.test_create_network)'  
my-list
  #python -m testtools.run discover --load-list my-list
  
  
  
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-12-01 Thread Dmitriy Shulyak
Swagger is not related to test improvement, but we started to discuss it
here so..

@Przemyslaw, how hard would it be to integrate it with the nailgun REST API
(web.py and the handlers hierarchy)?
Also, is there any way to use auth with swagger?

On Mon, Dec 1, 2014 at 1:14 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:


 On 11/28/2014 05:15 PM, Ivan Kliuk wrote:

 Hi, team!

 Let me please present ideas collected during the unit tests improvement
 meeting:
 1) Rename class ``Environment`` to something more descriptive
 2) Remove hardcoded self.clusters[0], e.t.c from ``Environment``. Let's
 use parameters instead
 3) run_tests.sh should invoke alternate syncdb() for cases where we don't
 need to test migration procedure, i.e. create_db_schema()
 4) Consider usage of custom fixture provider. The main functionality
 should combine loading from YAML/JSON source and support fixture inheritance
 5) The project needs a document (policy) which describes:
 - Tests creation technique;
 - Test categorization (integration/unit) and approaches of testing
 different code base
 -
 6) Review the tests and refactor unit tests as described in the test policy
 7) Mimic Nailgun module structure in unit tests
 8) Explore Swagger tool http://swagger.io/


 Swagger is a great tool; we used it in my previous job. We used Tornado,
 attached some hand-crafted code to the RequestHandler class so that it
 inspected all its subclasses (i.e. the different endpoints with REST methods),
 generated the swagger file and presented the Swagger UI (
 https://github.com/swagger-api/swagger-ui) under some /docs/ URL.
 What this gave us is that we could just add a YAML specification directly to
 the docstring of the handler method and it would automatically appear in
 the UI. It's worth noting that the UI provides an interactive form for
 sending requests to the API, so that tinkering with the API is easy [1].

 [1]
 https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0
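
Roughly, that pattern looks like the following (a hypothetical Tornado
handler; the YAML block in the docstring is what the generator code would
parse and fold into the produced swagger file):

    # Hypothetical sketch of a Tornado handler carrying its swagger spec as
    # YAML in the docstring; separate inspection code would walk the
    # RequestHandler subclasses and emit swagger.json from these docstrings.
    import tornado.web


    class NodeHandler(tornado.web.RequestHandler):
        def get(self, node_id):
            """Get a single node.
            ---
            parameters:
              - name: node_id
                in: path
                type: string
                required: true
            responses:
              200:
                description: The node object.
            """
            self.write({"id": node_id, "status": "ready"})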

 P.

  --
 Sincerely yours,
 Ivan Kliuk



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday December 2nd at 19:00 UTC

2014-12-01 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 2nd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it, meeting log and minutes from the last
meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] alembic 0.7.1 will break neutron's heal feature which assumes a fixed set of potential autogenerate types

2014-12-01 Thread Mike Bayer
hey neutron -

Just an FYI, I’ve added https://review.openstack.org/#/c/137989/ / 
https://launchpad.net/bugs/1397796 to refer to an issue in neutron’s “heal” 
script that is going to start failing when I put out Alembic 0.7.1, which is 
potentially later today / this week.

The issue is pretty straightforward,  Alembic 0.7.1 is adding foreign key 
autogenerate (and really, could add more types of autogenerate at any time), 
and as these new commands are revealed within the execute_alembic_command(), 
they are not accounted for, so it fails.   I’d recommend folks try to push this 
one through or otherwise decide how this issue (which should be expected to 
occur many more times) should be handled.

Just a heads up in case you start seeing builds failing!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Please enable everyone to see patches and reviews on http://review.fuel-infra.org

2014-12-01 Thread Jay Pipes

Hi Fuel Devs,

I'm not entirely sure why we are running our own infrastructure Gerrit 
for Fuel, as opposed to using the main review.openstack.org site that 
all OpenStack and Stackforge projects use (including Fuel repositories 
on stackforge...). Could someone please advise on why we are doing that?


In the meantime, can we please have access to view review.fuel-infra.org 
code reviews and patches? I went today to track down a bug [1] and 
clicked on a link in the bug report [2] and after signing in with my 
Launchpad SSO account, got a permission denied page.


Please advise,
-jay

[1] https://bugs.launchpad.net/mos/+bug/1378081
[2] https://review.fuel-infra.org/#/c/940/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate call for reviews

2014-12-01 Thread Jeremy Stanley
On 2014-12-01 13:24:43 -0500 (-0500), Mike Bayer wrote:
[...]
 for users on the outside of immediate Openstack use cases I’d
 prefer if they can continue working towards moving to Alembic; the
 major features I’ve introduced in Alembic including the SQLite
 support are intended to make transition much more feasible.

Agreed, perhaps the documentation for sqlalchemy-migrate needs to
make that statement visible (if it doesn't already). That's also a
reasonable message for someone to pass along to those other
non-OpenStack users discussing it in external forums... this is
deprecated, on limited life support until OpenStack can complete its
transition to Alembic, and other developers using it should look
hard at performing similar updates to their software.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party]Time for Additional Meeting for third-party

2014-12-01 Thread Anita Kuno
One of the actions from the Kilo Third-Party CI summit session was to
start up an additional meeting for CI operators to participate from
non-North American time zones.

Please reply to this email with times/days that would work for you. The
current third party meeting is on Mondays at 1800 utc which works well
since Infra meetings are on Tuesdays. If we could find a time that works
for Europe and APAC that is also on Monday that would be ideal.

Josh Hesketh has said he will try to be available for these meetings, he
is in Australia.

Let's get a sense of what days and timeframes work for those interested
and then we can narrow it down and pick a channel.

Thanks everyone,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-01 Thread Clint Byrum
Excerpts from Anant Patil's message of 2014-11-30 23:02:29 -0800:
 On 27-Nov-14 18:03, Murugan, Visnusaran wrote:
  Hi Zane,
  
   
  
  At this stage our implementation (as mentioned in wiki
  https://wiki.openstack.org/wiki/Heat/ConvergenceDesign) achieves your
  design goals.
  
   
  
  1.   In case of a parallel update, our implementation adjusts graph
  according to new template and waits for dispatched resource tasks to
  complete.
  
  2.   Reason for basing our PoC on Heat code:
  
  a.   To solve contention processing parent resource by all dependent
  resources in parallel.
  
  b.  To avoid porting issue from PoC to HeatBase. (just to be aware
  of potential issues asap)
  
  3.   Resource timeout would be helpful, but I guess its resource
  specific and has to come from template and default values from plugins.
  
  4.   We see resource notification aggregation and processing next
  level of resources without contention and with minimal DB usage as the
  problem area. We are working on the following approaches in *parallel.*
  
  a.   Use a Queue per stack to serialize notification.
  
  b.  Get parent ProcessLog (ResourceID, EngineID) and initiate
  convergence upon first child notification. Subsequent children who fail
  to get parent resource lock will directly send message to waiting parent
  task (topic=stack_id.parent_resource_id)
  
  Based on performance/feedback we can select either or a mashed version.
  
   
  
  Advantages:
  
  1.   Failed Resource tasks can be re-initiated after ProcessLog
  table lookup.
  
  2.   One worker == one resource.
  
  3.   Supports concurrent updates
  
  4.   Delete == update with empty stack
  
  5.   Rollback == update to previous know good/completed stack.
  
   
  
  Disadvantages:
  
  1.   Still holds stackLock (WIP to remove with ProcessLog)
  
   
  
  Completely understand your concern on reviewing our code, since commits
  are numerous and there is change of course at places.  Our start commit
  is [c1b3eb22f7ab6ea60b095f88982247dd249139bf] though this might not help :)
  
   
  
  Your Thoughts.
  
   
  
  Happy Thanksgiving.
  
  Vishnu.
  
   
  
  *From:*Angus Salkeld [mailto:asalk...@mirantis.com]
  *Sent:* Thursday, November 27, 2014 9:46 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown
  
   
  
  On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter zbit...@redhat.com wrote:
  
  A bunch of us have spent the last few weeks working independently on
  proof of concept designs for the convergence architecture. I think
  those efforts have now reached a sufficient level of maturity that
  we should start working together on synthesising them into a plan
  that everyone can forge ahead with. As a starting point I'm going to
  summarise my take on the three efforts; hopefully the authors of the
  other two will weigh in to give us their perspective.
  
  
  Zane's Proposal
  ===
  
  
  https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph
  
  I implemented this as a simulator of the algorithm rather than using
  the Heat codebase itself in order to be able to iterate rapidly on
  the design, and indeed I have changed my mind many, many times in
  the process of implementing it. Its notable departure from a
  realistic simulation is that it runs only one operation at a time -
  essentially giving up the ability to detect race conditions in
  exchange for a completely deterministic test framework. You just
  have to imagine where the locks need to be. Incidentally, the test
  framework is designed so that it can easily be ported to the actual
  Heat code base as functional tests so that the same scenarios could
  be used without modification, allowing us to have confidence that
  the eventual implementation is a faithful replication of the
  simulation (which can be rapidly experimented on, adjusted and
  tested when we inevitably run into implementation issues).
  
  This is a complete implementation of Phase 1 (i.e. using existing
  resource plugins), including update-during-update, resource
  clean-up, replace on update and rollback; with tests.
  
  Some of the design goals which were successfully incorporated:
  - Minimise changes to Heat (it's essentially a distributed version
  of the existing algorithm), and in particular to the database
  - Work with the existing plugin API
  - Limit total DB access for Resource/Stack to O(n) in the number of
  resources
  - Limit overall DB access to O(m) in the number of edges
  - Limit lock contention to only those operations actually contending
  (i.e. no global locks)
  - Each worker task deals with only one resource
  - Only read 

Re: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate

2014-12-01 Thread Kyle Mestery
On Mon, Dec 1, 2014 at 6:12 AM, Assaf Muller amul...@redhat.com wrote:


 - Original Message -

 My proposal here, is, _let’s not deprecate this setting_, as it’s a valid use
 case of a gateway configuration, and let’s provide it on the reference
 implementation.

 I agree. As long as the reference implementation works with the setting off
 there's no need to deprecate it. I still think the default should be set to 
 True
 though.

 Keep in mind that the DHCP agent will need changes as well.

++ to both suggestions Assaf. Thanks for bringing this up Miguel!

Kyle


 TL;DR

 I’ve been looking at this yesterday, during a test deployment
 on a site where they provide external connectivity with the
 gateway outside subnet.

 And I needed to switch it off, to actually be able to have any external
 connectivity.

 https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121

 This is handled by providing an on-link route to the gateway first,
 and then adding the default gateway.

 It looks very interesting to me (not only because it’s the only way to work
 on that specific site [2][3][4]), because you can dynamically wire RIPE
 blocks to your server, without needing to use a specific IP for external
 routing or broadcast purposes, and instead use the full block in openstack.


 I have a tiny patch to support this on the neutron l3-agent [1]. I still need to
 add the logic to check “gateway outside subnet”, then add the “onlink”
 route.


 [1]

 diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py
 index 538527b..5a9f186 100644
 --- a/neutron/agent/linux/interface.py
 +++ b/neutron/agent/linux/interface.py
 @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
                                              namespace=namespace,
                                              ip=ip_cidr)
 
 -        if gateway:
 -            device.route.add_gateway(gateway)
 -
          new_onlink_routes = set(s['cidr'] for s in extra_subnets)
 +        if gateway:
 +            new_onlink_routes.update([gateway])
          existing_onlink_routes = set(device.route.list_onlink_routes())
          for route in new_onlink_routes - existing_onlink_routes:
              device.route.add_onlink_route(route)
          for route in existing_onlink_routes - new_onlink_routes:
              device.route.delete_onlink_route(route)
 +        if gateway:
 +            device.route.add_gateway(gateway)
 
      def delete_conntrack_state(self, root_helper, namespace, ip):
          """Delete conntrack state associated with an IP address.

 [2] http://www.soyoustart.com/
 [3] http://www.ovh.co.uk/
 [4] http://www.kimsufi.com/


 Miguel Ángel Ajo




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-12-01 Thread Zane Bitter

On 13/11/14 13:59, Clint Byrum wrote:

I'm not sure we have the same understanding of AMQP, so hopefully we can
clarify here. This stackoverflow answer echoes my understanding:

http://stackoverflow.com/questions/17841843/rabbitmq-does-one-consumer-block-the-other-consumers-of-the-same-queue

Not ack'ing just means they might get retransmitted if we never ack. It
doesn't block other consumers. And as the link above quotes from the
AMQP spec, when there are multiple consumers, FIFO is not guaranteed.
Other consumers get other messages.


Thanks, obviously my recollection of how AMQP works was coloured too 
much by oslo.messaging.



So just add the ability for a consumer to read, work, ack to
oslo.messaging, and this is mostly handled via AMQP. Of course that
also likely means no zeromq for Heat without accepting that messages
may be lost if workers die.

Basically we need to add something that is not RPC but instead
jobqueue that mimics this:

http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n131

I've always been suspicious of this bit of code, as it basically means
that if anything fails between that call, and the one below it, we have
lost contact, but as long as clients are written to re-send when there
is a lack of reply, there shouldn't be a problem. But, for a job queue,
there is no reply, and so the worker would dispatch, and then
acknowledge after the dispatched call had returned (including having
completed the step where new messages are added to the queue for any
newly-possible children).
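
As an aside, the consume/work/ack pattern being described boils down to
something like this minimal kombu sketch (the queue name, payload handling
and the two helper functions are purely illustrative, and this is not how
oslo.messaging is wired today):

    from kombu import Connection


    def do_work(job):
        # illustrative stand-in for converging one resource
        print('working on %s' % job)


    def enqueue_children(queue, job):
        # illustrative stand-in for adding any newly-possible child jobs
        pass


    with Connection('amqp://guest:guest@localhost//') as conn:
        queue = conn.SimpleQueue('jobs')
        message = queue.get(block=True, timeout=60)
        do_work(message.payload)
        enqueue_children(queue, message.payload)
        message.ack()  # only acknowledged after the work has completed
        queue.close()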


I'm curious how people are deploying Rabbit at the moment. Are they 
setting up multiple brokers and writing messages to disk before 
accepting them? I assume yes on the former but no on the latter, since 
there's no particular point in having e.g. 5 nines durability in the 
queue when the overall system is as weak as your flakiest node.


OTOH if we were to add what you're proposing, then we would need folks 
to deploy Rabbit that way (at least for Heat), since waiting for Acks on 
receipt is insufficient to make messaging reliable if the broker can 
easily outright lose the message.


I think all of the proposed approaches would benefit from this feature, 
but I'm concerned about any increased burden on deployers too.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.serialization 1.1.0 released

2014-12-01 Thread Ben Nemec
The Oslo team is pleased to announce the release of oslo.serialization
1.1.0.  This is primarily a bug fix and requirements update release.
Further details of the changes included are below.

For more details, please see the git log history below and
https://launchpad.net/oslo.serialization/+milestone/1.1.0

 Please report issues through launchpad:
https://launchpad.net/oslo.serialization

openstack/oslo.serialization  1.0.0..HEAD

a7bade1 Add pbr to installation requirements
9701670 Updated from global requirements
f3aa93c Fix pep8, docs, requirements issues in jsonutils and tests
d4e3609 Remove extraneous vim editor configuration comments
ce89925 Support building wheels (PEP-427)
472e6c9 Fix coverage testing
ddde5a5 Updated from global requirements
0929bde Support 'built-in' datetime module
9498865 Add history/changelog to docs

  diffstat (except docs and test files):

 oslo/__init__.py|  2 --
 oslo/serialization/jsonutils.py | 20 +++-
 requirements.txt|  4 +++-
 setup.cfg   |  3 +++
 test-requirements.txt   | 11 ++-
 tests/test_jsonutils.py |  5 +++--
 8 files changed, 27 insertions(+), 20 deletions(-)

  Requirements updates:

 diff --git a/requirements.txt b/requirements.txt
 index 2dc5dea..176ce3c 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -3,0 +4,2 @@
 +
 +pbr>=0.6,!=0.7,<1.0
 @@ -9 +11 @@ iso8601>=0.1.9
 -oslo.utils>=0.3.0   # Apache-2.0
 +oslo.utils>=1.0.0   # Apache-2.0
 diff --git a/test-requirements.txt b/test-requirements.txt
 index a0ed1c5..f4c82b9 100644
 --- a/test-requirements.txt
 +++ b/test-requirements.txt
 @@ -4 +4 @@
 -hacking>=0.5.6,<0.8
 +hacking>=0.9.2,<0.10
 @@ -9,2 +9,2 @@ netaddr>=0.7.12
 -sphinx>=1.1.2,!=1.2.0,<1.3
 -oslosphinx>=2.2.0.0a2
 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 +oslosphinx>=2.2.0  # Apache-2.0
 @@ -12 +12 @@ oslosphinx>=2.2.0.0a2
 -oslotest>=1.1.0.0a2
 +oslotest>=1.2.0  # Apache-2.0
 @@ -14 +14,2 @@ simplejson>=2.2.0
 -oslo.i18n>=0.3.0  # Apache-2.0
 +oslo.i18n>=1.0.0  # Apache-2.0
 +coverage>=3.6

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate call for reviews

2014-12-01 Thread Thomas Goirand
On 12/01/2014 06:19 PM, Ihar Hrachyshka wrote:
 Indeed, the review queue is non-responsive. There are other patches in
 the queue that bit rot there:
 
 https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z

I did +2 some of the patches which I thought were totally harmless, but
none passed the gate. Now, it looks like the Python 3.3 gate for it is
broken... :( Where are those "module not found" issues coming from?

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] alembic 0.7.1 will break neutron's heal feature which assumes a fixed set of potential autogenerate types

2014-12-01 Thread Salvatore Orlando
Thanks Mike!

I've left some comments on the patch.
Just out of curiosity, since alembic can now autogenerate foreign keys, will
we be able to remove the logic for identifying foreign keys to add/remove
[1]?

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/heal_script.py#n205


On 1 December 2014 at 20:35, Mike Bayer mba...@redhat.com wrote:

 hey neutron -

 Just an FYI, I’ve added https://review.openstack.org/#/c/137989/ /
 https://launchpad.net/bugs/1397796 to refer to an issue in neutron’s
 “heal” script that is going to start failing when I put out Alembic 0.7.1,
 which is potentially later today / this week.

 The issue is pretty straightforward,  Alembic 0.7.1 is adding foreign key
 autogenerate (and really, could add more types of autogenerate at any
 time), and as these new commands are revealed within the
 execute_alembic_command(), they are not accounted for, so it fails.   I’d
 recommend folks try to push this one through or otherwise decide how this
 issue (which should be expected to occur many more times) should be handled.

 Just a heads up in case you start seeing builds failing!

 - mike



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-12-01 Thread Angus Salkeld
On Tue, Dec 2, 2014 at 8:15 AM, Zane Bitter zbit...@redhat.com wrote:

 On 28/11/14 02:33, Qiming Teng wrote:

 Dear all,

 Auto-Scaling is an important feature supported by Heat and needed by
 many users we talked to.  There are two flavors of AutoScalingGroup
 resources in Heat today: the AWS-based one and the Heat native one.  As
 more requests come in, the team has proposed to separate auto-scaling
 support into a separate service so that people who are interested in it
 can jump onto it.  At the same time, Heat engine (especially the resource
 type code) will be drastically simplified.  The separated AS service
 could move forward more rapidly and efficiently.

 This work was proposed a while ago with the following wiki and
 blueprints (mostly approved during Havana cycle), but the progress is
 slow.  A group of developers now volunteer to take over this work and
 move it forward.


 Thank you!


  wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
 BPs:
   - https://blueprints.launchpad.net/heat/+spec/as-lib-db
   - https://blueprints.launchpad.net/heat/+spec/as-lib
   - https://blueprints.launchpad.net/heat/+spec/as-engine-db
   - https://blueprints.launchpad.net/heat/+spec/as-engine
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
   - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
   - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
   - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-
 trigger-resource
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources

 Once this whole thing lands, Heat engine will talk to the AS engine in
 terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
 how auto-scaling is implemented although the AS engine may in turn ask
 Heat to create/update stacks for scaling's purpose.  In theory, AS
 engine can create/destroy resources by directly invoking other OpenStack
 services.  This new AutoScaling service may eventually have its own DB,
 engine, API, api-client.  We can definitely aim high while work hard on
 real code.

 After reviewing the BPs/Wiki and some communication, we get two options
 to push forward this.  I'm writing this to solicit ideas and comments
 from the community.

 Option A: Top-Down Quick Split
 --

 This means we will follow a roadmap shown below, which is not 100%
 accurate yet and very rough:

1) Get the separated REST service in place and working
2) Switch Heat resources to use the new REST service

 Pros:
- Separate code base means faster review/commit cycle
- Less code churn in Heat
 Cons:
- A new service need to be installed/configured/launched
- Need commitments from dedicated, experienced developers from very
  beginning


 Anything that involves a kind of flag-day switchover like this
 (maintaining the implementation in two different places) will be very hard
 to land, and if by some miracle it does will likely cause a lot of
 user-breaking bugs.


Well we can use the environment to provide the two options for a cycle
(like the cloud watch lite) and the operator can switch when they feel
comfortable.
The reason I'd like to keep the door somewhat open to this is the huge
burden of work we will put on Qiming and his
team for option B (and the load on the core team). As you know this has
been thought of before and fizzled out, I don't want that to happen again.
If we can make this more manageable for the team doing this, then I think
that is a good thing. We could implement the guts of the AS in
a library and import it from both places (to prevent duplicate
implementations).



  Option B: Bottom-Up Slow Growth
 ---

 The roadmap is more conservative, with many (yes, many) incremental
 patches to migrate things carefully.

1) Separate some of the autoscaling logic into libraries in Heat
2) Augment heat-engine with new AS RPCs
3) Switch AS related resource types to use the new RPCs
4) Add new REST service that also talks to the same RPC
   (create new GIT repo, API endpoint and client lib...)

 Pros:
- Less risk breaking user lands with each revision well tested
- More smooth transition for users in terms of upgrades


I think this is only true up until 4), at that point it's the same pain
as option A
(the operator needs a new REST endpoint, daemons to run, etc) - so delayed
pain.


 Cons:
- A lot of churn within Heat code base, which means long review cycles
- Still need commitments from cores to supervise the whole process


 I vote for option B (surprise!), and I will sign up right now to as many
 nagging emails as you care to send when you need reviews if you will take
 on this work :)

  There could be option C, D... but the two above are what we came up with
 during the discussion.


I'd suggest a combination between A and B.

1) Separate 

Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-12-01 Thread Angus Lees
On Mon Dec 01 2014 at 2:06:18 PM henry hly henry4...@gmail.com wrote:

 My suggestion is that starting with LB and VPN as a trial, which can
 never be distributed.


.. Sure they can!   Loadbalancing in particular _should_ be distributed if
both the clients and backends are in the same cluster...

(I agree with your suggestion to start with LB+VPN btw, just not your
reasoning ;)

 - Gus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-01 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-11-28 11:27:20 -0800:
 On Thu, Nov 27, 2014 at 1:29 PM, Sullivan, Jon Paul
 jonpaul.sulli...@hp.com wrote:
  -Original Message-
  From: Ben Nemec [mailto:openst...@nemebean.com]
  Sent: 26 November 2014 17:03
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for
  scripts (119023)
 
  On 11/25/2014 10:58 PM, Ian Wienand wrote:
   Hi,
  
   My change [1] to enable a consistent tracing mechanism for the many
   scripts diskimage-builder runs during its build seems to have hit a
   stalemate.
  
   I hope we can agree that the current situation is not good.  When
   trying to develop with diskimage-builder, I find myself constantly
   going and fiddling with set -x in various scripts, requiring me
   re-running things needlessly as I try and trace what's happening.
   Conversley some scripts set -x all the time and give output when you
   don't want it.
  
   Now nodepool is using d-i-b more, it would be even nicer to have
   consistency in the tracing so relevant info is captured in the image
   build logs.
  
   The crux of the issue seems to be some disagreement between reviewers
   over having a single trace everything flag or a more fine-grained
   approach, as currently implemented after it was asked for in reviews.
  
   I must be honest, I feel a bit silly calling out essentially a
   four-line patch here.
 
  My objections are documented in the review, but basically boil down to
  the fact that it's not a four line patch, it's a 500+ line patch that
  does essentially the same thing as:
 
  set +e
  set -x
  export SHELLOPTS
 
  I don't think this is true, as there are many more things in SHELLOPTS than 
  just xtrace.  I think it is wrong to call the two approaches equivalent.
 
 
  in disk-image-create.  You do lose set -e in disk-image-create itself on
  debug runs because that's not something we can safely propagate,
  although we could work around that by unsetting it before calling hooks.
   FWIW I've used this method locally and it worked fine.
 
  So this does say that your alternative implementation has a difference from 
  the proposed one.  And that the difference has a negative impact.
 
 
  The only drawback is it doesn't allow the granularity of an if block in
  every script, but I don't personally see that as a particularly useful
  feature anyway.  I would like to hear from someone who requested that
  functionality as to what their use case is and how they would define the
  different debug levels before we merge an intrusive patch that would
  need to be added to every single new script in dib or tripleo going
  forward.
 
  So currently we have boilerplate to be added to all new elements, and that 
  boilerplate is:
 
  set -eux
  set -o pipefail
 
  This patch would change that boilerplate to:
 
  if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
  set -x
  fi
  set -eu
  set -o pipefail
 
  So it's adding 3 lines.  It doesn't seem onerous, especially as most people 
  creating a new element will either copy an existing one or copy/paste the 
  header anyway.
 
  I think that giving control over what is effectively debug or non-debug 
  output is a desirable feature.
 
 I don't think it's debug vs non-debug. I think script writers that
 have explicitly used set -x previously have then operated under the
 assumption that they don't need to add any useful logging since it's
 running -x. In that case, this patch is actually harmful.
 

I believe James has hit the nail squarely on the head with the paragraph
above.

I propose a way forward for this:

1) Conform all o-r-c scripts to the logging standards we have in
OpenStack, or write new standards for diskimage-builder and conform
them to those standards. Abolish non-conditional xtrace in any script
conforming to the standards.

2) Once that is done, implement optional -x. I rather prefer the explicit
conditional set -x implementation over SHELLOPTS. As somebody else
pointed out, it feels like asking for unintended side-effects. But the
how is far less important than the what in this case, which step 1
will better define.

Anyone else have a better plan?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Older Developer documentation Icehouse? Havana?

2014-12-01 Thread Russell Sim
Hi,

From what I can see it seems like the developer documentation available
on the OpenStack website is generated from the git repositories.

http://docs.openstack.org/developer/openstack-projects.html

Are older versions of this documentation currently generated and hosted
somewhere?  Or is it possible to generate versions of this developer
documentation for each release and host it on the same website?

-- 
Cheers,
Russell

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-01 Thread W Chan
Renat,

Alternatively, what do you think about having mistral just post the events to
given exchange(s) on the same transport backend and letting the subscribers
decide how to consume the events (i.e. post to a webhook, etc.) from these
exchanges?  This will simplify the implementation somewhat.  The engine can
just take care of publishing the events to the exchanges and call it done.
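
From the engine side that would look roughly like the following kombu sketch
(the exchange and routing-key names are illustrative only):

    from kombu import Connection, Exchange

    events = Exchange('mistral_events', type='topic', durable=True)

    with Connection('amqp://guest:guest@localhost//') as conn:
        producer = conn.Producer(serializer='json')
        producer.publish(
            {'workflow': 'wf1', 'task': 'task1', 'state': 'SUCCESS'},
            exchange=events,
            routing_key='workflow.task.success',
            declare=[events],  # make sure the exchange exists
        )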

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Mohammad Hanif
I hope we all understand how edge VPN works and what interactions are 
introduced as part of this spec.  I see references to neutron-network mapping 
to the tunnel which is not at all case and the edge-VPN spec doesn’t propose 
it.  At a very high level, there are two main concepts:

  1.  Creation of a per tenant VPN “service” on a PE (physical router) which 
has connectivity to other PEs using some tunnel (not known to the tenant or 
tenant-facing).  An attachment circuit for this VPN service is also created 
which carries a “list of tenant networks” (the list is initially empty).
  2.  Tenant “updates” the list of tenant networks in the attachment circuit 
which essentially allows the VPN “service” to add or remove the network from 
being part of that VPN.

A service plugin implements what is described in (1) and provides an API which 
is called by what is described in (2).  The Neutron driver only “updates” the 
attachment circuit using an API (attachment circuit is also part of the service 
plugin’ data model).   I don’t see where we are introducing large data model 
changes to Neutron?  How else one introduces a network service in OpenStack if 
it is not through a service plugin?  As we can see, tenant needs to communicate 
(explicit or otherwise) to add/remove its networks to/from the VPN.  There has 
to be a channel and the APIs to achieve this.

Thanks,
—Hanif.

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, December 1, 2014 at 4:35 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

On 1 December 2014 at 09:01, Mathieu Rohon 
mathieu.ro...@gmail.com wrote:
This is an alternative that would say : you want an advanced service
for your VM, please stretch your l2 network to this external
component, that is driven by an external controller, and make your
traffic goes to this component to take benefit of this advanced
service. This is a valid alternative of course, but distributing the
service directly to each compute node is much more valuable, ASA it is
doable.

Right, so a lot rides on the interpretation of 'advanced service' here, and 
also 'attachment'.

Firstly, the difference between this and the 'advanced services' (including the 
L3 functionality, though it's not generally considered an 'advanced service') 
is that advanced services that exist today attach via an addressed port.  This 
bridges in.  That's quite a significant difference, which is to an extent why 
I've avoided lumping the two together and haven't called this an advanced 
service itself, although it's clearly similar.

Secondly, 'attachment' has historically meant a connection to that port.  But 
in DVRs, it can be a multipoint connection to the network - manifested on 
several hosts - all through the auspices of a single port.  In the edge-id 
proposal you'll note that I've carefully avoided defining what an attachment 
is, largely because I have a natural tendency to want to see the interface at 
the API level before I worry about the backend, I admit.  Your point about 
distributed services is well taken, and I think would be addressed by one of 
these distributed attachment types.

 So the issue I worry about here is that if we start down the path of adding
 the MPLS datamodels to Neutron we have to add Kevin's switch control work.
 And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN.  And whatever
 else comes along.  And we get back to 'that's a lot of big changes that
 aren't interesting to 90% of Neutron users' - difficult to get in and a lot
 of overhead to maintain for the majority of Neutron developers who don't
 want or need it.

This shouldn't be a lot of big changes, once interfaces between
advanced services and neutron core services will be cleaner.

Well, incorporating a lot of models into Neutron *is*, clearly, quite a bit of 
change, for starters.

The edge-id concept says 'the data models live outside neutron in a separate 
system' and there, yes, absolutely, this proposes a clean model for 
edge/Neutron separation in the way you're alluding to with advanced services.  
I think your primary complaint is that it doesn't define that interface for an 
OVS driver based system.

The edge-vpn concept says 'the data models exists within neutron in an 
integrated fashion' and, if you agree that separation is the way to go, this 
seems to me to be exactly the wrong approach to be using.  It's the way 
advanced services are working - for now - but that's because we believe it 
would be hard to pull them out because the interfaces between service and 
Neutron don't currently exist.  The argument for this seems to be 'we should 
incorporate it so that we can 

Re: [openstack-dev] [horizon] REST and Django

2014-12-01 Thread Richard Jones
On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran tqt...@us.ibm.com wrote:

 I agree that keeping the API layer thin would be ideal. I should add that
 having discrete API calls would allow dynamic population of table. However,
 I will make a case where it *might* be necessary to add additional APIs.
 Consider that you want to delete 3 items in a given table.

 If you do this on the client side, you would need to perform: n * (1 API
 request + 1 AJAX request)
 If you have some logic on the server side that batches delete actions: n *
 (1 API request) + 1 AJAX request

 Consider the following:
 n = 1, client = 2 trips, server = 2 trips
 n = 3, client = 6 trips, server = 4 trips
 n = 10, client = 20 trips, server = 11 trips
 n = 100, client = 200 trips, server 101 trips

 As you can see, this does not scale very well; something to consider...

Yep, though in the above cases the client is still going to be hanging,
waiting for those server-backend calls, with no feedback until it's all
done. I would hope that the client-server call overhead is minimal, but I
guess that's probably wishful thinking when in the land of random Internet
users hitting some provider's Horizon :)

So yeah, having mulled it over myself I agree that it's useful to have
batch operations implemented in the POST handler, the most common operation
being DELETE.
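
To make that concrete, here is a rough sketch of what such a batch-delete
POST handler could look like. The view name and the {"ids": [...]} payload
are made up for illustration and this is not the REST framework actually
under review; api.nova.server_delete is Horizon's existing API wrapper:

    import json

    from django.http import HttpResponse
    from django.views.generic import View

    from openstack_dashboard import api


    class BatchInstanceDelete(View):
        """Hypothetical endpoint: POST {"ids": [...]} deletes each instance."""

        def post(self, request):
            ids = json.loads(request.body).get('ids', [])
            errors = []
            for instance_id in ids:
                try:
                    api.nova.server_delete(request, instance_id)
                except Exception as exc:
                    # Keep going so one failure doesn't abort the whole batch.
                    errors.append({'id': instance_id, 'reason': str(exc)})
            body = json.dumps({'requested': len(ids), 'failed': errors})
            return HttpResponse(body, content_type='application/json')

However many rows are selected, the browser pays a single round trip and the
n service-API calls happen server side.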

Maybe one day we could transition to a batch call with user feedback using
a websocket connection.


 Richard


 From: Richard Jones r1chardj0...@gmail.com
 To: Tripp, Travis S travis.tr...@hp.com, OpenStack List 
 openstack-dev@lists.openstack.org
 Date: 11/27/2014 05:38 PM
 Subject: Re: [openstack-dev] [horizon] REST and Django
 --




 On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S travis.tr...@hp.com wrote:

Hi Richard,

You are right, we should put this out on the main ML, so I'm copying the
thread out to there.  ML: FYI, this started after some impromptu IRC
discussions about a specific patch, which led into an impromptu Google Hangout
discussion with all the people on the thread below.


 Thanks Travis!



As I mentioned in the review[1], Thai and I were mainly discussing the
possible performance implications of network hops from client to horizon
server and whether or not any aggregation should occur server side.   In
other words, some views require several APIs to be queried before any data
can be displayed, and it would eliminate some extra network requests from
client to server if some of the data was first collected on the server side
across service APIs.  For example, the launch instance wizard will need to
collect data from quite a few APIs before even the first step is displayed
(I’ve listed those out in the blueprint [2]).

The flip side to that (as you also pointed out) is that if we keep the
API’s fine grained then the wizard will be able to optimize in one place
the calls for data as it is needed. For example, the first step may only
need half of the API calls. It could also lead to perceived performance
increases just due to the wizard making calls for different data
independently and displaying it as soon as it can.


 Indeed, looking at the current launch wizard code it seems like you
 wouldn't need to load all that data for the wizard to be displayed, since
 only some subset of it would be necessary to display any given panel of the
 wizard.



I tend to lean towards your POV of starting with discrete API calls
and letting the client optimize calls.  If there are performance problems
or other reasons then doing data aggregation on the server side could be
considered at that point.


 I'm glad to hear it. I'm a fan of optimising when necessary, and not
 beforehand :)



Of course if anybody is able to do some performance testing between
the two approaches then that could affect the direction taken.


 I would certainly like to see us take some measurements when performance
 issues pop up. Optimising without solid metrics is a bad idea :)


 Richard




[1] https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
[2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign

-Travis

From: Richard Jones r1chardj0...@gmail.com
Date: Wednesday, November 26, 2014 at 11:55 PM
To: Travis Tripp travis.tr...@hp.com, Thai Q Tran/Silicon Valley/IBM tqt...@us.ibm.com,

Re: [openstack-dev] SRIOV failures error-

2014-12-01 Thread Itzik Brown

Hi,
It seems like you don't have any devices available for allocation.

What's the output of:
# echo 'use nova; select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root
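
If pci_stats comes back empty or NULL for your compute node, the whitelist is
usually the problem. Purely as an illustrative sketch (the device name and the
vendor/product IDs below are placeholders; see the wiki page for the
authoritative steps), the Juno-era settings look roughly like this:

    # /etc/nova/nova.conf on the compute node: whitelist the SR-IOV NIC
    [DEFAULT]
    pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet1"}

    # /etc/nova/nova.conf on the controller: make sure the PCI filter runs
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter

    # /etc/neutron/plugins/ml2/ml2_conf_sriov.ini on the controller
    [ml2_sriov]
    supported_pci_vendor_devs = 8086:10ed   # vendor_id:product_id of the VFs

Restart nova-compute and nova-scheduler after changing these, then re-check
pci_stats.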


Itzik

On 12/02/2014 08:21 AM, Murali B wrote:

Hi

We are trying to bring up SR-IOV on our setup.

We are facing the below error when we try to create a VM.

The ERROR still appears during instance creation:
PciDeviceRequestFailed: PCI device request ({'requests': 
[InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], 
'code': 500}equests)s failed


We followed the steps from
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking


Please help us get rid of this error. Let us know if any hardware 
configuration is required for this to work properly.


Thanks
-Murali



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-01 Thread Renat Akhmerov
Hi Winson,

 On 02 Dec 2014, at 09:10, W Chan m4d.co...@gmail.com wrote:
 
 To clarify on the shortcut solution, are you saying 1) we add an ad hoc event 
 subscription to the workflow spec OR 2) we add a one-time event subscription to 
 the workflow execution OR both?

Not sure what you mean by “ad hoc” here. What I meant is that we should have 2 
options:

1. Have an individual REST endpoint to be able to subscribe to certain types of 
events at any time. For example, “notifications about all workflow events for 
workflow name ‘my_workflow’” or “notifications about switching to state ‘ERROR’ 
for workflow ‘my_workflow’”. Using this endpoint we can also unsubscribe from 
these events.

2. When we start a workflow (“mistral execution-create” in the CLI and the 
start_workflow() method in the engine) we can configure the same subscription and 
pass it along with the “start workflow” request. For such purposes, the engine 
method “start_workflow” has a keyword parameter “**params” that can take any kind 
of additional parameters needed for workflow start (for example, when we start a 
reverse workflow we pass “task_name”). This way we can start a workflow and 
configure our subscription with a single request. In the first approach we 
would have to make two requests.
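
Just to illustrate the second option, a minimal sketch (the subscription
payload and the "event_subscription" keyword are hypothetical and not part of
the current API; the only assumption is the start_workflow(name, input,
**params) signature described above):

    def start_with_subscription(engine):
        # Hypothetical: the subscription simply rides along in **params,
        # the same way "task_name" does for reverse workflows today.
        subscription = {
            'workflow_name': 'my_workflow',
            'events': ['WORKFLOW_ERROR'],   # e.g. notify on switch to ERROR
            'webhook': 'http://consumer.example.org/mistral-events',
        }
        return engine.start_workflow(
            'my_workflow',
            {'vm_name': 'test_vm'},
            event_subscription=subscription,
        )

With the first option, the same dictionary would instead be POSTed to the
separate subscription endpoint before (or after) starting the execution.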

 I propose a separate worker/executor to process the events so as not to disrupt 
 the workflow executions.  What if there are a lot of subscribers?  What if 
 one or more subscribers are offline?  Do we retry, and how many times?  These 
 activities will likely disrupt the throughput of the workflows, and I would rather 
 handle these activities separately.


Yeah, I now tend to agree with you here. Although it still bothers me because 
of that performance overhead that we’ll have most of the time. But generally, 
yes, you’re right. Do you think we should use the same executors to process these 
notifications or introduce a new type of entity for that?
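
For illustration only, a dedicated notifier along the lines Winson describes
might look roughly like this (a hypothetical sketch, not Mistral code): it
drains events and retries delivery a bounded number of times, so slow or
offline subscribers never block the engine or the executors running tasks.

    import json
    import time

    import requests

    MAX_RETRIES = 3

    def deliver(event, subscriber_urls):
        for url in subscriber_urls:
            for attempt in range(MAX_RETRIES):
                try:
                    requests.post(url, data=json.dumps(event),
                                  headers={'Content-Type': 'application/json'},
                                  timeout=5)
                    break
                except requests.RequestException:
                    # Back off and retry; give up quietly after MAX_RETRIES.
                    time.sleep(2 ** attempt)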

Thanks

Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev