Re: [openstack-dev] [Openstack-docs] Networking Docs Swarm - Brisbane 9 August

2014-08-05 Thread Lana Brindley
Just to clear up any confusion: there is other work going on for the new 
Networking Guide prior to this swarm. The swarm intends to continue that 
work, not create a new project ;)


Lana

On 05/08/14 13:05, Lana Brindley wrote:

Hi everyone,

I just wanted to let you all know about the OpenStack Networking Docs
Swarm being held in Brisbane on 9 August.

Currently, there is no OpenStack Networking Guide, so the focus of this
swarm is to combine the existing networking content into a single doc so
that it can be updated, reviewed, and hopefully completed for the Juno
release.

We need both tech writers and OpenStack admins for the event to be a
success. Even if you can only make it for half an hour, your presence
would be greatly appreciated!

RSVP here:
http://www.meetup.com/Australian-OpenStack-User-Group/events/198867972/?gj=rcs.ba=co2.b_grprv=rcs.b


More information here: http://openstack-swarm.rhcloud.com/

See you on Saturday!

Lana




--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-05 Thread Lana Brindley
Yeah, a bug really is the best way to go about feedback and suggestions. 
I'll be sure to comb through Launchpad on the day :)


L

On 05/08/14 14:29, Tom Fifield wrote:

How about writing up something in a bug report:

https://bugs.launchpad.net/openstack-manuals/+filebug

or a mailing list post about what you'd like to see?


Regards,


Tom

On 05/08/14 12:22, Stuart Fox wrote:

Can't make it to Brisbane but this doc is so needed. Any chance you could
put round a questionnaire or something similar to get input from those who
can't make it?


--

BR,

Stuart



On 14-08-04 8:05 PM Lana Brindley wrote:

Hi everyone,

I just wanted to let you all know about the OpenStack Networking Docs
Swarm being held in Brisbane on 9 August.

Currently, there is no OpenStack Networking Guide, so the focus of this
swarm is to combine the existing networking content into a single doc so
that it can be updated, reviewed, and hopefully completed for the Juno
release.

We need both tech writers and OpenStack admins for the event to be a
success. Even if you can only make it for half an hour, your presence
would be greatly appreciated!

RSVP here:
http://www.meetup.com/Australian-OpenStack-User-Group/events/198867972/?gj=rcs.ba=co2.b_grprv=rcs.b


More information here:
http://openstack-swarm.rhcloud.com/


See you on Saturday!

Lana

--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com














--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-05 Thread marc

Hello Boris,

see below.

Quoting Boris Pavlovic bo...@pavlovic.me:


Jay,

Thanks for the review of the proposal. Some of my comments are below.


I think this is one of the roots of the problem that folks like David

and Sean keep coming around to. If Rally were less monolithic, it
would be easier to say OK, bring this piece into Tempest, have this
piece be a separate library and live in the QA program, and have the
service endpoint that allows operators to store and periodically measure
SLA performance indicators against their cloud.


Actually Rally was designed to be a glue service (and CLI tool) that will
bind everything together and present a service endpoint for operators. I
really do not understand what can be split out, put into Tempest, and
why. Could you elaborate, pointing at the current Rally code? Maybe
there is some misunderstanding here. I think this should be discussed in more
detail.



A good example for that is Rally's Tempest configuration module. Currently
Rally has all the logic to configure Tempest and for that you have your
own way to build the tempest conf out of a template [1]. If the QA
team decides to rework the configuration, Rally is broken.

[1]: https://github.com/stackforge/rally/blob/master/rally/verification/verifiers/tempest/config.ini



[snip]


I found the Scalr incubation discussion:
http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-14-20.03.log.html

The reasons for rejection were:
*) OpenStack shouldn't put PaaS in OpenStack core # rally is not PaaS
*) Duplication of functionality (actually dashboard)  # Rally doesn't
duplicate anything


IMHO Rally duplicates at least some pieces: you can find parts of the
Tempest scenario tests in the benchmarks area, the Tempest stress tests,
and the Tempest config.


Regards
Marc




*) Development is done behind closed doors
# Not about Rally
http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally

It seems like Rally is quite a different case, and this comparison is
misleading and irrelevant here.




, that is why I think Rally should be a separate program (i.e.

Rally's scope is just different from QA's scope). As well, it's not clear
to me why collaboration is possible only within one program. In
my opinion, collaboration and programs are unrelated things.



Sure, it's certainly possible for collaboration to happen across
programs. I think what Sean is alluding to is the fact that the Tempest
and Rally communities have done little collaboration to date, and that
is worrying to him.



Could you please explain this paragraph. What do you mean by "have done
little collaboration"?

We integrated Tempest in Rally:
http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/

We are working on a spec in Tempest about tempest.conf generation:
https://review.openstack.org/#/c/94473/ # probably not so fast as we would
like

We had design session:
http://junodesignsummit.sched.org/event/2815ca60f70466197d3a81d62e1ee7e4#.U9_ugYCSz1g

I am going to work on integrating OSprofiler into Tempest, as soon as I get
it into the core projects.

By the way, I am really not sure how being in one program will help us
collaborate. What does it actually change?




About collaboration between the Rally and Tempest teams... The major goal of

integrating Tempest into Rally is to make it simpler to use Tempest on
production clouds via the OpenStack API.




Plenty of folks run Tempest without Rally against production clouds as

an acceptance test platform. I see no real benefit to arguing that Rally
is for running against production clouds and Tempest is for
non-production clouds. There just isn't much of a difference there.



Hm, I didn't say anything about Tempest being for non-production clouds...
I said that the Rally team is working on making it simpler to use on
production clouds.



The problem I see is that Rally is not *yet* exposing the REST service

endpoint that would make it a full-fledged Operator Tool outside the
scope of its current QA focus. Once Rally does indeed publish a REST API
that exposes resource endpoints for an operator to store a set of KPIs
associated with an SLA, and allows the operator to store the run
schedule that Rally would use to go and test such metrics, *then* would
be the appropriate time to suggest that Rally be the pilot project in
this new Operator Tools program, IMO.



It's really almost done.. It is all about 2 weeks of work...



I'm sure all of those things would be welcome additions to Tempest. At the

same time, Rally contributors would do well to work on an initial REST API
endpoint that would expose the resources I denoted above.



As I said before it's almost finished..


Best regards,
Boris Pavlovic



On Mon, Aug 4, 2014 at 8:25 PM, Jay Pipes jaypi...@gmail.com wrote:


On 08/04/2014 11:21 AM, Boris Pavlovic wrote:


Rally is quite monolithic and can't be split



I think this is one of the roots of the problem that folks like David
and Sean keep coming around to. If Rally 

Re: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...

2014-08-05 Thread Thomas Goirand
On 08/04/2014 05:05 PM, Romain Hardouin wrote:
 Hi,
 
 Note that Django 1.7 requires Python 2.7 or above[1] while Juno still 
 requires to be compatible with Python 2.6 (Suse ES 11 uses 2.6 if my memory 
 serves me).
 
 [1] https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility
 
 Best,
 
 Romain

Hi,

I'm not asking for *switching* to Django 1.7, but just to support it. :)

A bit of update here...

Here's the list of fixes that I propose, thanks to the awesome help of
Raphael Hertzog:

https://review.openstack.org/111561 --- TEMPLATE_DIRS fix
https://review.openstack.org/111930 --- Rename conflicting add_error()
https://review.openstack.org/111932 --- Fix summation code
https://review.openstack.org/111934 --- SecurityGroup type error
https://review.openstack.org/111936 --- Fix _detail_overview.html

I know we're not supposed to ask the list for code review, but I'll do
it this time still! :)

I believe this one, which I reported previously, can be ignored:

 /home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/test/helpers.py,
 line 184, in module
 class JasmineTests(SeleniumTestCase):
 TypeError: Error when calling the metaclass bases
 function() argument 1 must be code, not str

After cleaning my build env, it didn't happen again, so it must be fine.

Now, there's still this one which isn't fixed, and which Raphael and I
haven't yet figured out how to fix:

FAIL: test_update_project_when_default_role_does_not_exist
(openstack_dashboard.dashboards.admin.projects.tests.UpdateProjectWorkflowTests)
--
Traceback (most recent call last):
  File
/home/rhertzog/tmp/django17/horizon/openstack_dashboard/test/helpers.py,
line 83, in instance_stub_out
return fn(self, *args, **kwargs)
  File
/home/rhertzog/tmp/django17/horizon/openstack_dashboard/dashboards/admin/projects/tests.py,
line 1458, in test_update_project_when_default_role_does_not_exist
self.client.get(url)
AssertionError: NotFound not raised

And there's also test_change_password_shows_message_on_login_page which
fails. Here's the end of the stack dump:

  File /usr/lib/python2.7/dist-packages/requests/adapters.py, line
375, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='public.nova.example.com',
port=8774): Max retries exceeded with url: /v2/extensions (Caused by
class 'socket.gaierror': [Errno -2] Name or service not known)

Help fixing the above 2 remaining unit test errors would be greatly
appreciated!

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Route cannot be deleted

2014-08-05 Thread Sayali Lunkad
Hi,

The issue was resolved by running the commands below.

neutron port-update port-id --device_owner clear
neutron port-delete port-id
neutron router-delete router-id
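For anyone hitting the same errors later, the resolution above can be sketched as a short script. The `neutron` function below is only a stub that echoes each command, since no live cloud is assumed here; drop it to run against a real deployment. Note that clearing `device_owner` to bypass the port API's L3PortInUse check is a workaround, not an officially supported path.

```shell
# Stub for the real neutron CLI so the sequence can be shown safely;
# remove this function to execute against an actual deployment.
neutron() { echo "would run: neutron $*"; }

# IDs taken from the thread above.
PORT_ID=ec1aac66-481d-488e-860b-53b88d950ac7
ROUTER_ID=2f16d846-b6aa-43a3-adbe-7f91a1389b7f

# 1. Clear device_owner so the port API stops refusing the delete
#    (it was network:router_interface, which triggers L3PortInUse).
neutron port-update "$PORT_ID" --device_owner clear
# 2. Delete the now-unowned port.
neutron port-delete "$PORT_ID"
# 3. With no active ports left, the router delete succeeds.
neutron router-delete "$ROUTER_ID"
```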

Thanks,
Sayali.




On Sat, Aug 2, 2014 at 3:49 PM, Sayali Lunkad sayali.92...@gmail.com
wrote:

 Hi Zzelle,

 Thanks for the prompt response.
 As you mentioned I cleared all the routes using

 *neutron router-update 2f16d846-b6aa-43a3-adbe-7f91a1389b7f --routes
 action=clear *
 So to see the router status I run this:

  neutron router-show 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
  +-----------------------+--------------------------------------+
  | Field                 | Value                                |
  +-----------------------+--------------------------------------+
  | admin_state_up        | True                                 |
  | external_gateway_info |                                      |
  | id                    | 2f16d846-b6aa-43a3-adbe-7f91a1389b7f |
  | name                  | router0                              |
  | routes                |                                      |
  | status                | ACTIVE                               |
  | tenant_id             | 315561c9a19e4794ac4f4364c842254f     |
  +-----------------------+--------------------------------------+
 After that I try to unbind the subnet using the command below but the same
 problem persists.


 *neutron router-interface-delete* 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 962b8364-a2b4-46cc-95be-28cbab62b8c2
 409-{u'NeutronError': {u'message': u'Router interface for subnet
 962b8364-a2b4-46cc-95be-28cbab62b8c2 on router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f cannot be deleted, as it is required
 by one or more routes.', u'type': u'RouterInterfaceInUseByRoute',
 u'detail': u''}}

 I am confused at this point, as all the routes have been cleared but it is
 still throwing an error saying the interface is required by one or more routes.

 Thanks,
 Sayali.


 On Sat, Aug 2, 2014 at 3:25 PM, ZZelle zze...@gmail.com wrote:

 First command is of course:
 *   neutron router-show **2f16d846-b6aa-43a3-adbe-*
 *7f91a1389b7f *
 not
 *   neutron router-update **2f16d846-b6aa-43a3-adbe-**7f91a1389b7f*


 On Sat, Aug 2, 2014 at 11:54 AM, ZZelle zze...@gmail.com wrote:

 Hi,

 According to the first error message, the subnet you are trying to unbind is
 used in router routes; you can see the current router routes using:

*neutron router-update **2f16d846-b6aa-43a3-adbe-**7f91a1389b7f*

 you need to update them before unbind:

 *   neutron router-update **2f16d846-b6aa-43a3-adbe-*
 *7f91a1389b7f --routes type=dict list=true destination=...,nexthop=...
 [destination=...,nexthop=...[...]] *
 or clear router routes:



 *   neutron router-update 2f16d846-b6aa-43a3-adbe-7f91a1389b7f --routes
 action=clear *
 And finally retry to unbind the subnet.



 Cédric,
 ZZelle@IRC


 On Sat, Aug 2, 2014 at 9:56 AM, Sayali Lunkad sayali.92...@gmail.com
 wrote:

 Hi,

 I am facing trouble deleting a router from my OpenStack deployment.

 I have tried the following commands on the *controller node* and
 pasted the output.

 *neutron router-interface-delete*
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f  962b8364-a2b4-46cc-95be-28cbab62b8c2
 409-{u'NeutronError': {u'message': u'Router interface for subnet
 962b8364-a2b4-46cc-95be-28cbab62b8c2 on router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f cannot be deleted, as it is required
 by one or more routes.', u'type': u'RouterInterfaceInUseByRoute',
 u'detail': u''}}

  *neutron port-delete*  ec1aac66-481d-488e-860b-53b88d950ac7
 409-{u'NeutronError': {u'message': u'Port
 ec1aac66-481d-488e-860b-53b88d950ac7 has owner network:router_interface and
 therefore cannot be deleted directly via the port API.', u'type':
 u'L3PortInUse', u'detail': u''}}

  *neutron router-delete* 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 409-{u'NeutronError': {u'message': u'Router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f still has active ports', u'type':
 u'RouterInUse', u'detail': u''}}

 *neutron l3-agent-router-remove* 7a977e23-767f-418e-8429-651c4232548c
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 This removes the router from the l3 agent and attaches it to another
 one as seen below.

  *neutron l3-agent-list-hosting-router*
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f

 +--+--++---+
 | id   | host |
 admin_state_up | alive |

 +--+--++---+
 | 96e1371a-be03-42ed-8141-3b0027d3a82f | alln01-1-csx-net-004 |
 True   | :-)   |

 +--+--++---+


 I have also run the commands below on the *network node* which worked
 fine.

 *ip netns delete* qrouter-2f16d846-b6aa-43a3-adbe-7f91a1389b7f

 *ovs-vsctl del-port* br-int qr-ec1aac66-48

 Could someone please tell me what can be done to delete the router, or
 is this some bug I am hitting?

 Thanks,
 Sayali.



 

[openstack-dev] [taskflow] How to use the logbook

2014-08-05 Thread Roman Klesel
Hello,

I'm currently evaluating taskflow. I read through the examples and
extracted bits and pieces to write a little app in order to see how it
works. Right now I'm most interested in the jobboard and the flows.

In my code I post a job to a jobboard, pick it up, and execute a flow
(similar to the build_car example). Sometimes the flow completes
successfully; sometimes I make a task raise an exception. In the case a
task fails, the whole flow is reverted and the job is back on the board
with state UNCLAIMED. So it seems everything works as expected.

Now I would like to scan the jobboard and examine the jobs in order to
see whether they have been claimed in the past, if yes, who claimed
them, how often, when, what has happened to them, where did they fail,
etc.

I thought the logbook was the facility to look into, but it never seems
to have any information. No matter how often a job has failed, and also
if it was successful, I always get this:

print job.book.to_dict()

{'updated_at': None, 'created_at': datetime.datetime(2014, 8, 4, 8,
33, 31, 218007), 'meta': {}, 'name': u'romans_book', 'uuid':
u'f5eb6f75-06d7-4f52-a72e-f85f69ba67a0'}

What am I doing wrong?


Regards Roman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][third-party] Protocol for bringing up CI for a new driver?

2014-08-05 Thread Luke Gorrie
Howdy,

Could somebody please clarify the protocol for bringing up a CI for a new
Neutron driver?

Specifically, how does one resolve the chicken-and-egg situation of:

1. CI should be enabled before the driver is merged.

2. CI should test the refspecs given by Gerrit, which will not include the
code for the new driver itself until after it has been merged.

So what is the common practice for using the new driver in tests during the
time window prior to its merge?

Cheers!
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-05 Thread Thierry Carrez
Boris Pavlovic wrote:
 By the way, I am really not sure how being in one program will help us
 collaborate. What does it actually change?

Being in one program means you're in the same team, the same meetings,
with ultimate decisions taken by the same one PTL. It obviously makes it
easier to avoid duplication of effort and make stronger architectural or
placement decisions.

Being in two separate programs means we need to arbitrate conflicts
between the two programs at the TC level, or accept some amount of
duplication of effort or technical debt increase. This is why the TC is
so focused on non-overlapping scopes when considering new programs, and
is willing to defer blessing applications until teams work in a
complementary fashion.

Here it seems that we are combining several things: an instrumentation
library (which sounds more like Oslo or QA), a performance testing
system (which sounds more like QA), and future operator tools like an SLA
management platform, LogaaS (which sounds more like their own program,
but those tools are not really there yet). The combination is what
creates the tension.

I'm not saying there is no solution to this puzzle, just explaining
where the tension comes from.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...

2014-08-05 Thread Julie Pichon
On 05/08/14 08:11, Thomas Goirand wrote:
 And there's also test_change_password_shows_message_on_login_page which
 fails. Here's the end of the stack dump:
 
   File /usr/lib/python2.7/dist-packages/requests/adapters.py, line
 375, in send
 raise ConnectionError(e, request=request)
 ConnectionError: HTTPConnectionPool(host='public.nova.example.com',
 port=8774): Max retries exceeded with url: /v2/extensions (Caused by
 class 'socket.gaierror': [Errno -2] Name or service not known)

This particular test is currently being skipped [1] due to issues with
the test itself. It's going to be replaced by an integration test.

Julie

[1] https://review.openstack.org/#/c/101857/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-05 Thread Boris Pavlovic
Marc,

A good example for that is Rally's Tempest configuration module. Currently
 Rally has all the logic to configure Tempest and for that you have your
 own way to build the tempest conf out of a template [1]. If the QA
 team decides to rework the configuration Rally is broken.


Absolutely agree with this point. That is why we are working on
https://review.openstack.org/#/c/94473/ which adds this feature to Tempest;
when it is implemented, we will remove the Tempest configuration from
Rally.


IMHO rally duplicates at least some pieces. So you can find parts of
 Tempest scenarios tests in the benchmarks area, Tempest stress tests
 and Tempest config.


Yep, agreed, there is similar load-generation functionality in Rally and
Tempest:
1) tempest: https://github.com/openstack/tempest/tree/master/tempest/stress
2) rally:
https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios/tempest

But it seems like Rally has a more general load generator that can use both
Tempest and Rally scenarios.
Maybe it makes sense to keep only one solution for this?



Best regards,
Boris Pavlovic





On Tue, Aug 5, 2014 at 10:38 AM, m...@koderer.com wrote:

 Hello Boris,

 see below.

 Quoting Boris Pavlovic bo...@pavlovic.me:


  Jay,

 Thanks for review of proposal. Some my comments below..


 I think this is one of the roots of the problem that folks like David

 and Sean keep coming around to. If Rally were less monolithic, it
 would be easier to say OK, bring this piece into Tempest, have this
 piece be a separate library and live in the QA program, and have the
 service endpoint that allows operators to store and periodically measure
 SLA performance indicators against their cloud.


 Actually Rally was designed to be a glue service (and cli tool) that will
 bind everything together and present service endpoint for Operators. I
 really do not understand what can be split? and put to tempest? and
 actually why? Could you elaborate pointing on current Rally code, maybe
 there is some misleading here. I think this should be discussed in more
 details..


 A good example for that is Rally's Tempest configuration module. Currently
 Rally has all the logic to configure Tempest and for that you have your
 own way to build the tempest conf out of a template [1]. If the QA
 team decides to rework the configuration Rally is broken.

 [1]: https://github.com/stackforge/rally/blob/master/rally/verification/verifiers/tempest/config.ini


 [snip]


  I found the Scalr incubation discussion:
 http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-14-20.03.log.html

 The reasons of reject were next:
 *) OpenStack shouldn't put PaaS in OpenStack core # rally is not PaaS
 *) Duplication of functionality (actually dashboard)  # Rally doesn't
 duplicate anything


 IMHO rally duplicates at least some pieces. So you can find parts of
 Tempest scenarios tests in the benchmarks area, Tempest stress tests
 and Tempest config.


 Regards
 Marc




  *) Development is done behind closed doors
 # Not about Rally
 http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally

 Seems like Rally is quite different case and this comparison is misleading
  irrelevant to current case.



  , that is why I think Rally should be a separated program (i.e.

 Rally scope is just different from QA scope). As well, It's not clear
 for me, why collaboration is possible only in case of one program? In
 my opinion collaboration  programs are irrelevant things.



 Sure, it's certainly possible for collaboration to happen across
 programs. I think what Sean is alluding to is the fact that the Tempest
 and Rally communities have done little collaboration to date, and that
 is worrying to him.



 Could you please explain this paragraph. What do you mean by have done
 little collaboration

 We integrated Tempest in Rally:
 http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/

 We are working on spec in Tempest about tempest conf generation:
 https://review.openstack.org/#/c/94473/ # probably not so fast as we
 would
 like

 We had design session:
 http://junodesignsummit.sched.org/event/2815ca60f70466197d3a81d62e1ee7e4#.U9_ugYCSz1g

 I am going to work on integration OSprofiler in tempest, as soon as I get
 it in core projects.

 By the way, I am really not sure how being one Program will help us to
 collaborate? What it actually changes?



  About collaboration between Rally  Tempest teams... Major goal of

 integration Tempest in Rally is to make it simpler to use tempest on
 production clouds via OpenStack API.



  Plenty of folks run Tempest without Rally against production clouds as

 an acceptance test platform. I see no real benefit to arguing that Rally
 is for running against production clouds and Tempest is for
 non-production clouds. There just isn't much of a difference there.



 Hm, I didn't say anything about Tempest being for non-production clouds...
 I 

Re: [openstack-dev] [neutron][third-party] Protocol for bringing up CI for a new driver?

2014-08-05 Thread Salvatore Orlando
Hi Luke,

Once in place, the CI system should be able to pick up the patches from the
new plugin or driver on gerrit.
In my opinion, successful CI runs against those patches should constitute a
sufficient proof of the validity of the CI system.

Salvatore
Il 05/ago/2014 09:57 Luke Gorrie l...@snabb.co ha scritto:

 Howdy,

 Could somebody please clarify the protocol for bringing up a CI for a new
 Neutron driver?

 Specifically, how does one resolve the chicken-and-egg situation of:

 1. CI should be enabled before the driver is merged.

 2. CI should test the refspecs given by Gerrit, which will not include the
 code for the new driver itself until after it has been merged.

 So what is the common practice for using the new driver in tests during
 the time window prior to its merge?

 Cheers!
 -Luke





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Route cannot be deleted

2014-08-05 Thread Yongsheng Gong
Try checking whether there are floating IPs on the VMs on that subnet:
962b8364-a2b4-46cc-95be-28cbab62b8c2


On Tue, Aug 5, 2014 at 3:24 PM, Sayali Lunkad sayali.92...@gmail.com
wrote:

 Hi,

 The issue was resolved by following the commands below.

 neutron port-update port-id --device_owner clear
 neutron port-delete port-id
 neutron router-delete router-id

 Thanks,
 Sayali.




 On Sat, Aug 2, 2014 at 3:49 PM, Sayali Lunkad sayali.92...@gmail.com
 wrote:

 Hi Zzelle,

 Thanks for the prompt response.
 As you mentioned I cleared all the routes using

 *neutron router-update 2f16d846-b6aa-43a3-adbe-7f91a1389b7f --routes
 action=clear *
 So to see the router status I run this:

  neutron router-show 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
  +-----------------------+--------------------------------------+
  | Field                 | Value                                |
  +-----------------------+--------------------------------------+
  | admin_state_up        | True                                 |
  | external_gateway_info |                                      |
  | id                    | 2f16d846-b6aa-43a3-adbe-7f91a1389b7f |
  | name                  | router0                              |
  | routes                |                                      |
  | status                | ACTIVE                               |
  | tenant_id             | 315561c9a19e4794ac4f4364c842254f     |
  +-----------------------+--------------------------------------+
 After that I try to unbind the subnet using the command below but the
 same problem persists.


 *neutron router-interface-delete* 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 962b8364-a2b4-46cc-95be-28cbab62b8c2
 409-{u'NeutronError': {u'message': u'Router interface for subnet
 962b8364-a2b4-46cc-95be-28cbab62b8c2 on router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f cannot be deleted, as it is required
 by one or more routes.', u'type': u'RouterInterfaceInUseByRoute',
 u'detail': u''}}

 I am confused at this point as all the routes have been cleared but still
 it is throwing an error saying the router is required by multiple routes.

 Thanks,
 Sayali.


 On Sat, Aug 2, 2014 at 3:25 PM, ZZelle zze...@gmail.com wrote:

 First command is of course:
 *   neutron router-show **2f16d846-b6aa-43a3-adbe-*
 *7f91a1389b7f *
 not
 *   neutron router-update **2f16d846-b6aa-43a3-adbe-**7f91a1389b7f*


 On Sat, Aug 2, 2014 at 11:54 AM, ZZelle zze...@gmail.com wrote:

 Hi,

 According to the first error message, the subnet you try to unbind is
 used in router routes, you can see current router routes using:

*neutron router-update **2f16d846-b6aa-43a3-adbe-**7f91a1389b7f*

 you need to update them before unbind:

 *   neutron router-update **2f16d846-b6aa-43a3-adbe-*
 *7f91a1389b7f --routes type=dict list=true destination=...,nexthop=...
 [destination=...,nexthop=...[...]] *
 or clear router routes:



 *   neutron router-update 2f16d846-b6aa-43a3-adbe-7f91a1389b7f --routes
 action=clear *
 And finally retry to unbind the subnet.



 Cédric,
 ZZelle@IRC


 On Sat, Aug 2, 2014 at 9:56 AM, Sayali Lunkad sayali.92...@gmail.com
 wrote:

 Hi,

 I am facing trouble deleting a router from my openstack deployment.

 I have tried the following commands on the *controller node* and
 pasted the output.

 *neutron router-interface-delete*
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f  962b8364-a2b4-46cc-95be-28cbab62b8c2
 409-{u'NeutronError': {u'message': u'Router interface for subnet
 962b8364-a2b4-46cc-95be-28cbab62b8c2 on router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f cannot be deleted, as it is required
 by one or more routes.', u'type': u'RouterInterfaceInUseByRoute',
 u'detail': u''}}

  *neutron port-delete*  ec1aac66-481d-488e-860b-53b88d950ac7
 409-{u'NeutronError': {u'message': u'Port
 ec1aac66-481d-488e-860b-53b88d950ac7 has owner network:router_interface 
 and
 therefore cannot be deleted directly via the port API.', u'type':
 u'L3PortInUse', u'detail': u''}}

  *neutron router-delete* 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 409-{u'NeutronError': {u'message': u'Router
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f still has active ports', u'type':
 u'RouterInUse', u'detail': u''}}

 *neutron l3-agent-router-remove* 7a977e23-767f-418e-8429-651c4232548c
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f
 This removes the router from the l3 agent and attaches it to another
 one as seen below.

  *neutron l3-agent-list-hosting-router*
 2f16d846-b6aa-43a3-adbe-7f91a1389b7f

 +--+--++---+
 | id   | host |
 admin_state_up | alive |

 +--+--++---+
 | 96e1371a-be03-42ed-8141-3b0027d3a82f | alln01-1-csx-net-004 |
 True   | :-)   |

 +--+--++---+


 I have also run the commands below on the *network node* which worked
 fine.

 *ip netns delete* 

Re: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...

2014-08-05 Thread Thomas Goirand
On 08/05/2014 04:06 PM, Julie Pichon wrote:
 On 05/08/14 08:11, Thomas Goirand wrote:
 And there's also test_change_password_shows_message_on_login_page which
 fails. Here's the end of the stack dump:

   File /usr/lib/python2.7/dist-packages/requests/adapters.py, line
 375, in send
 raise ConnectionError(e, request=request)
 ConnectionError: HTTPConnectionPool(host='public.nova.example.com',
 port=8774): Max retries exceeded with url: /v2/extensions (Caused by
 class 'socket.gaierror': [Errno -2] Name or service not known)
 
 This particular test is currently being skipped [1] due to issues with
 the test itself. It's going to be replaced by an integration test.
 
 Julie
 
 [1] https://review.openstack.org/#/c/101857/

Thanks Julie! I then disabled the tests in the Debian package too.

I'm now down to a single unresolved error:

==
FAIL: test_update_project_when_default_role_does_not_exist
(openstack_dashboard.dashboards.admin.projects.tests.UpdateProjectWorkflowTests)
--
Traceback (most recent call last):
  File
"/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/openstack_dashboard/test/helpers.py",
line 83, in instance_stub_out
return fn(self, *args, **kwargs)
  File
"/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/openstack_dashboard/dashboards/admin/projects/tests.py",
line 1458, in test_update_project_when_default_role_does_not_exist
self.client.get(url)
AssertionError: NotFound not raised

Any idea?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] backport fixes to old branches

2014-08-05 Thread Osanai, Hisashi

Hi,

I would like to have the following fix in the Icehouse branch, because
the problem happens there but the fix was committed in Juno-2 only.
Is there a process for backporting fixes to old branches?

https://bugs.launchpad.net/ceilometer/+bug/1326250

Best Regards,
Hisashi Osanai




[openstack-dev] [Neutron][LBaaS] status in entities

2014-08-05 Thread Vijay Venkatachalam
Hi:
   I think we had some discussions around the 'status' attribute
earlier, but I don't recollect the conclusion.
Does it reflect the deployment status?
   Meaning, if the status of an entity is ACTIVE, the user has to 
infer that the entity is deployed successfully in the backend/loadbalancer.
Thanks,
Vijay V.


Re: [openstack-dev] backport fixes to old branches

2014-08-05 Thread Ihar Hrachyshka

On 05/08/14 11:22, Osanai, Hisashi wrote:
 
 Hi,
 
 I would like to have the following fix for IceHouse branch because
  the problem happens on it but the fix was committed on Juno-2
 only. Is there any process to backport fixes to old branches?

https://wiki.openstack.org/wiki/StableBranch

 
 https://bugs.launchpad.net/ceilometer/+bug/1326250
 
 Best Regards, Hisashi Osanai
 
 
 



Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-05 Thread Tomas Sedovic
On 04/08/14 00:50, Steve Baker wrote:
 On 01/08/14 12:19, Steve Baker wrote:
 The changes to port tripleo-heat-templates to HOT have been rebased to
 the current state and are ready to review. They are the next steps in
 blueprint tripleo-juno-remove-mergepy.

 However there is coordination needed to merge since every existing
 tripleo-heat-templates change will need to be rebased and changed
 after the port lands (lucky you!).

 Here is a summary of the important changes in the series:

 https://review.openstack.org/#/c/105327/
 Low risk and plenty of +2s, just needs enough validation from CI for
 an approve

 Merged
 https://review.openstack.org/#/c/105328/
 Scripted conversion to HOT. Converts everything except Fn::Select

 This is now:
 - rebased against 82c50c1 Fix swift memcache and device properties
 - switched to heat_template_version: 2014-10-16 to get list_join
 - is now passing CI
 
 https://review.openstack.org/#/c/105347/
 Manual conversion of Fn::Select to extended get_attr

All three patches are merged now and I've removed the t-h-t -2s.

Sorry for the inconvenience everybody.


 I'd like to suggest the following approach for getting these to land:
 * Any changes which really should land before the above 3 get listed
 in this mail thread (vlan?)
 * Reviews of the above 3 changes, and local testing of change 105347
 * All other tripleo-heat-templates need to be rebased/reworked to be
 after 105347 (and maybe -2 until they are?)

 I'm available for any questions on porting your changes to HOT.
 
 
 
 
 




Re: [openstack-dev] [Heat][TripleO] Heat can't retrieve stack list

2014-08-05 Thread mar...@redhat.com
On 05/08/14 08:43, Peeyush Gupta wrote:
 Hi all,
 
 I have been trying to set up tripleo using instack.
 When I try to deploy overcloud, I get a heat related 
 error. Here it is:
 
 [stack@localhost ~]$ heat stack-list
 ERROR: Timeout while waiting on RPC response - topic: engine, RPC method: 
 list_stacks info: unknown
 
 Now, heat-engine is running:
 
 
 [stack@localhost ~]$ ps ax | grep heat-engine
 15765 pts/0S+ 0:00 grep --color=auto heat-engine
 25671 ?Ss 0:27 /usr/bin/python /usr/bin/heat-engine --logfile 
 /var/log/heat/engine.log
 
 Here is the heat-engine log:
 
 2014-08-04 07:57:26.321 25671 ERROR heat.engine.resource [-] CREATE : Server 
 SwiftStorage0 [b78e4c74-f446-4941-8402-56cf46401013] Stack overcloud 
 [9bdc71f5-ce31-4a9c-8d72-3adda0a2c66e]
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Traceback (most 
 recent call last):
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in 
 _do_action
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource while not 
 check(handle_data):
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 545, 
 in check_create_complete
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource return 
 self._check_active(server)
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 561, 
 in _check_active
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource raise exc
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Error: Creation of 
 server overcloud-SwiftStorage0-fnl43ebtcsom failed.
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource 
 2014-08-04 07:57:27.152 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:27.494 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:27.998 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:28.312 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:28.799 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:29.452 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:30.106 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:30.516 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:31.499 25671 WARNING heat.engine.service [-] Stack create 
 failed, status FAILED
 
 Any idea how to figure this error out?

FYI, I am seeing the same; it seems Heat didn't start properly after
the undercloud install. I'm going to go back and try again
today in case I missed something, and hopefully find out more.

marios

  
 Thanks,
 Peeyush Gupta
 
 
 
 




Re: [openstack-dev] [Neutron] Status of A/A HA for neutron-metadata-agent?

2014-08-05 Thread Gary Kotton


On 8/4/14, 5:39 PM, mar...@redhat.com mandr...@redhat.com wrote:

On 03/08/14 13:07, Gary Kotton wrote:
 Hi,
 Happy you asked about this. This is an idea that we have:
 
 Below is a suggestion on how we can improve the metadata service. This can
 be done by leveraging a load balancer's support for X-Forwarded-For. The
 following link has two diagrams. The first shows the existing support (I may
 be a little rusty here, so please feel free to correct) and the second shows
 the proposal.
 
 https://docs.google.com/drawings/d/19JCirhj2NVVFZ0Vbnsxhyxrm1jjzEAS3ZAMzfBRkC-0E/edit?usp=sharing
 
 Metadata proxy support: the proxy will receive the HTTP request. It will
 then query the Neutron service (1) to retrieve the tenant id
 and the instance id. A proxy request will then be sent
 to Nova for the metadata details (2).
 
 Proposed support:
 
 1. There will be a load balancer VIP - 169.254.169.254 (this can be
 reached either via the L3 agent or via the DHCP namespace).
 2. The LB will have a server farm of all of the Nova APIs (this makes
 the service highly available). For each request it will:
  1. Replace the destination IP and port with the Nova metadata IP and
 port
  2. Replace the source IP with the interface IP
  3. Insert the header X-Forwarded-For (this will carry the original
 source IP of the VM)
 
 
 
 1. When the Nova metadata service receives the request, according to a
 configuration variable
 
 (https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L134),
 will interface with the neutron service to get the instance_id
and
 the tenant id. This will be done by using a new extension. With the
 details provided by Neutron Nova will provide the correct metadata for
the
 instance
  2. A new extension will be added to Neutron that will enable a port
  lookup. The port lookup will have two input values and will return the
  port - which has the instance id and the tenant id.
  1. LB source IP - this is the LB source IP that interfaces with the Nova
  API. When we create the edge router for the virtual network we will have a
  mapping of edge LB IP - network id. This will enable us to get the
  virtual network for the port
  2. Fixed port IP - this, with the virtual network, will enable us to get
  the specific port.
 
 Hopefully in the coming days a spec will be posted that will provide
more
 details
 

thanks for that info Gary, the diagram in particular forced me to go
read a bit about the metadata agent (i was mostly just proxying for the
original question). I obviously miss a lot of the details (will be
easier once the spec is out) but it seems like you're proposing an
addition (port-lookup) that will change the way the metadata agent is
called; in fact does it make the neutron metadata proxy obsolete? I will
keep a look out for the spec,

At the moment there is already a port lookup. This is done by the metadata
proxy.
The proposed solution will have fewer hops and fewer elements that can
fail. Hopefully we can get the spec posted in the near future. Sadly this
will not be approved for Juno.
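The proposal hinges on step 2.3 above: the load balancer injects X-Forwarded-For so Nova can recover the VM's source IP. A minimal sketch of that header handling (a hypothetical helper, not Nova's actual code):

```python
def original_vm_ip(headers):
    """Recover the instance's fixed IP from a proxied metadata request.

    Hypothetical helper: the real Nova handler does the equivalent with
    more validation; this only illustrates the X-Forwarded-For mechanics.
    """
    xff = headers.get("X-Forwarded-For", "")
    # The first address in the comma-separated list is the original
    # client -- here, the VM whose metadata is being requested.
    first = xff.split(",")[0].strip()
    return first or None

# VM 192.168.10.4 -> LB (172.16.0.2) -> Nova metadata API
print(original_vm_ip({"X-Forwarded-For": "192.168.10.4, 172.16.0.2"}))
```

Nova would then hand that fixed IP (plus the network identity) to Neutron's proposed port-lookup extension to resolve the instance and tenant ids.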


thanks, marios



 Thanks
 Gary
 
 
 
 On 8/1/14, 6:11 PM, mar...@redhat.com mandr...@redhat.com wrote:
 
 Hi all,

 I have been asked by a colleague about the status of A/A HA for
 neutron-* processes. From the 'HA guide' [1], l3-agent and
 metadata-agent are the only neutron components that can't be deployed
in
 A/A HA (corosync/pacemaker for a/p is documented as available 'out of
 the box' for both).

 The l3-agent work is approved for J3 [4] but I am unaware of any work
on
 the metadata-agent and can't see any mention in [2][3]. Is this someone
 has looked at, or is planning to (though ultimately K would be the
 earliest right?)?

 thanks! marios

 [1] 
http://docs.openstack.org/high-availability-guide/content/index.html
 [2] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
 [3] 
 
 https://launchpad.net/neutron/+milestone/juno-3
 [4]
 
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/l3-high-availability.rst

 
 
 

[openstack-dev] [neutron][third-party] A question about building third-party CI?

2014-08-05 Thread Yangxurong
Hi folks,

Recently I have been working on building CI for our ml2 driver. Since our code
has not been merged, the devstack-vm-gate script checks out upstream Neutron and
our code is missing. Without it, devstack fails because it is configured to use
our own ml2 driver.

Any suggestions to solve this problem?

Thanks,
XuRong Yang


Re: [openstack-dev] [neutron][third-party] A question about building third-party CI?

2014-08-05 Thread trinath.soman...@freescale.com
Hi-

In your CI, before running devstack, configure neutron code base with your code 
(yet to be approved) and run stack.sh.



--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Yangxurong [mailto:yangxur...@huawei.com]
Sent: Tuesday, August 05, 2014 4:06 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [neutron][third-party] A question about building 
third-party CI?

Hi folks,

Recently I am working on building CI for our ml2 driver. Since our code has not 
been merged, when running the devstack-vm-gate script, the code in neutron 
project will be updated so our code is missing. Without the code, devstack will 
fail since it is configured to use our own ml2 driver.

Any suggestions to solve this problem?

Thanks,
XuRong Yang


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Ryan Brown


On 08/04/2014 07:18 PM, Yuriy Taraday wrote:
 Hello, git-review users!
 
 snip
 0. create new local branch;
 
 master: M--
  \
 feature:  *
 
 1. start hacking, doing small local meaningful (to you) commits;
 
 master: M--
  \
 feature:  A-B-...-C
 
 2. since hacking takes tremendous amount of time (you're doing a Cool
 Feature (tm), nothing less) you need to update some code from master, so
 you're just merging master in to your branch (i.e. using Git as you'd
 use it normally);
 
 master: M---N-O-...
  \\\
 feature:  A-B-...-C-D-...
 
 3. and now you get the first version that deserves to be seen by
 community, so you run 'git review', it asks you for desired commit
 message, and poof, magic-magic all changes from your branch is
 uploaded to Gerrit as _one_ change request;
 
 master: M---N-O-...
  \\\E* = uploaded
 feature:  A-B-...-C-D-...-E
 
 snip

+1, this is definitely a feature I'd want to see.

Currently I run two branches bug/LPBUG#-local and bug/LPBUG# where
the local is my full history of the change and the other branch is the
squashed version I send out to Gerrit.
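That two-branch workflow can be sketched end-to-end in a throwaway repository (branch names and commit messages are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com && git config user.name "You"
echo M > f && git add f && git commit -qm "M"
base=$(git symbolic-ref --short HEAD)

# Full local history lives on the -local branch.
git checkout -qb bug/1234-local
for c in A B C; do echo "$c" >> f; git commit -qam "$c"; done

# The branch actually sent to Gerrit carries one squashed commit.
git checkout -q "$base"
git checkout -qb bug/1234
git merge -q --squash bug/1234-local
git commit -qm "Implement feature (squash of A..C)"
git log --oneline          # base commit plus a single squashed commit
# The real workflow would finish with: git review
```

The proposed git-review feature would automate exactly this squash step.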

Cheers,
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Neutron] l2pop problems

2014-08-05 Thread Zang MingJie
Hi Mathieu:

We have deployed the new l2pop design described in the previous mail in our
environment, and it works pretty well. It solved the timing problem and
also reduces the number of l2pop RPC calls. I'm going to file a blueprint to
propose the changes.

On Fri, Jul 18, 2014 at 10:26 PM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 Hi Zang,

 On Wed, Jul 16, 2014 at 4:43 PM, Zang MingJie zealot0...@gmail.com wrote:
 Hi, all:

 While resolving ovs restart rebuild br-tun flows[1], we have found
 several l2pop problems:

 1. L2pop is depending on agent_boot_time to decide whether send all
 port information or not, but the agent_boot_time is unreliable, for
 example if the service receives port up message before agent status
 report, the agent won't receive any port on other agents forever.

 you're right, there a race condition here, if the agent has more than
 1 port on the same network and if the agent sends its
 update_device_up() on every port before it sends its report_state(),
 it won't receive fdb concerning these network. Is it the race you are
 mentionning above?
 Since the report_state is done in a dedicated greenthread, and is
 launched before the greenthread that manages ovsdb_monitor, the state
 of the agent should be updated before the agent gets aware of its
 ports and sends get_device_details()/update_device_up(), am I wrong?
 So, after a restart of an agent, the agent_uptime() should be less
 than the agent_boot_time configured by default in the conf when the
 agent sent its first update_device_up(), the l2pop MD will be aware of
 this restart and trigger the cast of all fdb entries to the restarted
 agent.

 But I agree that it might relies on enventlet thread managment and on
 agent_boot_time that can be misconfigured by the provider.

 2. If the openvswitch restarted, all flows will be lost, including all
 l2pop flows, the agent is unable to fetch or recreate the l2pop flows.

 To resolve the problems, I'm suggesting some changes:

 1. Because the agent_boot_time is unreliable, the service can't decide
 whether to send flooding entry or not. But the agent can build up the
 flooding entries from unicast entries, it has already been
 implemented[2]

 2. Create a rpc from agent to service which fetch all fdb entries, the
 agent calls the rpc in `provision_local_vlan`, before setting up any
 port.[3]

 After these changes, the l2pop service part becomes simpler and more
 robust, mainly 2 function: first, returns all fdb entries at once when
 requested; second, broadcast fdb single entry when a port is up/down.

 That's an implementation that we have been thinking about during the
 l2pop implementation.
 Our purpose was to minimize RPC calls. But if this implementation is
 buggy due to uncontrolled thread order and/or bad usage of the
 agent_boot_time parameter, it's worth investigating your proposal [3].
 However, I don't get why [3] depends on [2]. couldn't we have a
 network_sync() sent by the agent during provision_local_vlan() which
 will reconfigure ovs when the agent and/or the ovs restart?

Actually, [3] doesn't strictly depend on [2]. We have encountered l2pop
problems several times where the unicast entries are correct but the
broadcast entries fail, so we decided to completely ignore the broadcast
entries in RPC, deal only with unicast entries, and use the unicast entries
to build the broadcast rules.
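Deriving broadcast entries from unicast ones ([2]) is essentially an aggregation over tunnel endpoints; a simplified sketch (data shapes are illustrative, not Neutron's actual fdb message format):

```python
from collections import defaultdict

# Unicast entries: network -> [(remote_tunnel_ip, mac_address, fixed_ip), ...]
unicast = {
    "net-1": [("10.0.0.2", "fa:16:3e:aa:aa:aa", "192.168.0.5"),
              ("10.0.0.3", "fa:16:3e:bb:bb:bb", "192.168.0.6")],
}

def build_flooding_entries(unicast_fdb):
    """Derive the broadcast (flooding) set from unicast entries: every
    tunnel endpoint hosting a port on a network must receive that
    network's flooded traffic."""
    flood = defaultdict(set)
    for net, entries in unicast_fdb.items():
        for tunnel_ip, _mac, _ip in entries:
            flood[net].add(tunnel_ip)
    return dict(flood)

print(build_flooding_entries(unicast))
```

Since the flooding set is fully determined by the unicast entries, the broadcast entries never need to travel over RPC at all, which is the simplification proposed above.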



 [1] https://bugs.launchpad.net/neutron/+bug/1332450
 [2] https://review.openstack.org/#/c/101581/
 [3] https://review.openstack.org/#/c/107409/



Re: [openstack-dev] backport fixes to old branches

2014-08-05 Thread Osanai, Hisashi

Thank you for your quick response.

I don't have enough rights for nominating the bug so 
I put the tag icehouse-backport-potential instead.

https://bugs.launchpad.net/ceilometer/+bug/1326250

On Tuesday, August 05, 2014 6:35 PM, Ihar Hrachyshka wrote:
 https://wiki.openstack.org/wiki/StableBranch

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [neutron][third-party] Protocol for bringing up CI for a new driver?

2014-08-05 Thread Luke Gorrie
Hi Salvatore,

On 5 August 2014 10:34, Salvatore Orlando sorla...@nicira.com wrote:

 Once in place, the CI system should be able to pick up the patches from
 the new plugin or driver on gerrit.

 In my opinion, successful CI runs against those patches should constitute
 a sufficient proof of the validity of the CI system.

That sounds good to me.

Is there already an easy way to pick up the patches with devstack? (would
one create a local repo, do some git-fu to merge the driver, then point
devstack's NEUTRON_REPO there?)
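One common approach is to fetch the unmerged change into the Neutron checkout and cherry-pick it before devstack runs. Sketched here against a purely local stand-in for Gerrit, with a made-up change ref:

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Local stand-in for the upstream neutron repository on Gerrit.
git init -q upstream && cd upstream
git config user.email ci@example.com && git config user.name "CI"
echo core > driver.py && git add driver.py && git commit -qm "neutron master"
base=$(git symbolic-ref --short HEAD)

# Publish an unmerged change under a Gerrit-style ref (number is made up).
git checkout -qb change
echo "our ml2 driver" >> driver.py && git commit -qam "Add ml2 driver"
git update-ref refs/changes/09/107409/1 HEAD
git checkout -q "$base"

# What the CI job would do to its checkout before running stack.sh:
cd .. && git clone -q upstream neutron && cd neutron
git fetch -q origin refs/changes/09/107409/1
git cherry-pick FETCH_HEAD
grep "ml2 driver" driver.py
```

Against real Gerrit, the fetch URL would be the review server and the ref would be the change's actual refs/changes path; pointing devstack's NEUTRON_REPO at the prepared checkout then works as Luke suggests.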


Re: [openstack-dev] backport fixes to old branches

2014-08-05 Thread Ihar Hrachyshka

On 05/08/14 13:30, Osanai, Hisashi wrote:
 
 Thank you for your quick response.
 
 I don't have enough rights for nominating the bug so I put the tag
 icehouse-backport-potential instead.
 
 https://bugs.launchpad.net/ceilometer/+bug/1326250

Thanks. To facilitate quicker backport, you may also propose the patch
for review yourself. It may take time before stable maintainers or
other interested parties get to the bug and do cherry-pick.

/Ihar
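For concreteness, the cherry-pick Ihar describes looks like this; everything below runs in a throwaway repository (file names and the bug number are only illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com && git config user.name "You"

# One commit on the development branch, then branch off stable/icehouse.
echo base > pollster.py && git add pollster.py && git commit -qm "initial"
git branch stable/icehouse

# The fix lands on the development branch first.
echo fix >> pollster.py && git commit -qam "Fix polling error (bug 1326250)"
fix_sha=$(git rev-parse HEAD)

# The backport: cherry-pick with -x so the message records the original
# SHA, as stable-branch policy expects.
git checkout -q stable/icehouse
git cherry-pick -x "$fix_sha"
git log -1 --pretty=%b | grep "cherry picked from commit"
```

The real backport would then be proposed with `git review` against the stable branch.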



Re: [openstack-dev] [neutron][third-party] Protocol for bringing up CI for a new driver?

2014-08-05 Thread trinath.soman...@freescale.com
Hope these links help you:

http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/

http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Tuesday, August 05, 2014 5:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][third-party] Protocol for bringing up CI 
for a new driver?

Hi Salvatore,

On 5 August 2014 10:34, Salvatore Orlando 
sorla...@nicira.commailto:sorla...@nicira.com wrote:

Once in place, the CI system should be able to pick up the patches from the new 
plugin or driver on gerrit.

In my opinion, successful CI runs against those patches should constitute a 
sufficient proof of the validity of the CI system.
That sounds good to me.

Is there already an easy way to pick up the patches with devstack? (would one 
create a local repo, do some git-fu to merge the driver, then point devstack's 
NEUTRON_REPO there?)




Re: [openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-05 Thread Anne Gentle
Thanks Lana!


On Mon, Aug 4, 2014 at 11:32 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Lana Brindley openst...@lanabrindley.com wrote on 08/04/2014 11:05:24
 PM:

  I just wanted to let you all know about the OpenStack Networking Docs
  Swarm being held in Brisbane on 9 August.
  ...

 +++ on this.

 I can not contribute answers, but have lots of questions.

 Let me suggest that documentation is needed both for cloud providers doing
 general deployment and also for developers using DevStack.  Not all of us
 developers are Neutron experts, so we need decent documentation.  And
 developers sometimes need to use host machines with fewer than the ideal
 number of NICs.  Sometimes those host machines are virtual, leading to
 nested virtualization (of network as well as compute).


Mike, we realize the need, but the doc program's mission simply doesn't
include developer documentation.

This is a focused effort to meet the needs of our stated first priority
deliverables.

Anne



 Thanks!
 Mike




[openstack-dev] oslo.vmware 0.5.0 released

2014-08-05 Thread Doug Hellmann
The Oslo team is pleased to announce that oslo.vmware 0.5.0, the latest version 
of the OpenStack/VMware integration library, has been released.

This version includes:

* _trunc_id to check if the session_id is not None
* Enabled hacking check H305
* Imported Translations from Transifex
* Add constant for ESX datacenter path (HTTP access)
* Store PBM wsdl in the oslo.vmware git repository
* Bump hacking to version 0.9.2
* Add support for using extensions
* The 'result' variable in RetryDecorator may be undefined
* Imported Translations from Transifex
* Fix docstrings of constructors
* Do not log the full session ID
* Refactor the PBM support
* Fix wrong usage of assertRaises
* Translations: make use of _LE, _LI and _LW

Please report issues using the Oslo bug tracker
(https://bugs.launchpad.net/oslo) and tag the ticket with "vmware".

Doug




Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Sylvain Bauza


On 05/08/2014 13:06, Ryan Brown wrote:


On 08/04/2014 07:18 PM, Yuriy Taraday wrote:

Hello, git-review users!

snip
0. create new local branch;

master: M--
  \
feature:  *

1. start hacking, doing small local meaningful (to you) commits;

master: M--
  \
feature:  A-B-...-C

2. since hacking takes tremendous amount of time (you're doing a Cool
Feature (tm), nothing less) you need to update some code from master, so
you're just merging master in to your branch (i.e. using Git as you'd
use it normally);

master: M---N-O-...
  \\\
feature:  A-B-...-C-D-...

3. and now you get the first version that deserves to be seen by
community, so you run 'git review', it asks you for desired commit
message, and poof, magic-magic all changes from your branch is
uploaded to Gerrit as _one_ change request;

master: M---N-O-...
  \\\E* = uploaded
feature:  A-B-...-C-D-...-E

snip

+1, this is definitely a feature I'd want to see.

Currently I run two branches bug/LPBUG#-local and bug/LPBUG# where
the local is my full history of the change and the other branch is the
squashed version I send out to Gerrit.


-1 to this as git-review default behaviour. Ideally, branches should be 
identical in between Gerrit and local Git.


I can understand some exceptions where developers want to work on 
intermediate commits and squash them before updating Gerrit, but in that 
case, I can't see why it needs to be kept locally. If a new patchset has 
to be done on patch A, then the local branch can be rebased 
interactively on last master, edit patch A by doing an intermediate 
patch, then squash the change, and pick the later patches (B to E)


That said, I can also understand that developers work their own way and so 
could dislike squashing commits, hence my proposal to have a --no-squash 
option when uploading, to be used with caution (for a single branch, how 
many dependencies are outdated in Gerrit because developers work on 
separate branches for each single commit when they could work locally 
on a single branch? I can't imagine how often errors could happen if 
we didn't force squashing commits by default before sending them to Gerrit).


-Sylvain


Cheers,





Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-05 Thread chayma ghribi
Hi !

Yes, using scripts is easier for users!
In this guide, our objective was to detail all the installation steps.
We will use scripts in our next guide ;)
Thank you for the suggestion :)

Regards,

Chaima Ghribi


2014-08-05 3:46 GMT+02:00 Shake Chen shake.c...@gmail.com:

 Hi

 maybe you can consider using a script to create the databases and
 endpoints, like
 https://github.com/EmilienM/openstack-folsom-guide/tree/master/scripts

 This would be easier for users.


 On Tue, Aug 5, 2014 at 12:38 AM, chayma ghribi chaym...@gmail.com wrote:

 Hi !

 Thank you for the comment Qiming !

 The script stack.sh is used to configure Devstack and  to assign the
 heat_stack_owner role to users.
 Also, I think that Heat is configured by default on Devstack for
 icehouse.

 http://docs.openstack.org/developer/heat/getting_started/on_devstack.html#configure-devstack-to-enable-heat

 In our guide we are not installing using devstack.
 We are creating and managing stacks with Heat and we have no errors!

 If you have examples of tests (or scenarios) that would help us
 identify errors and improve the guide, please don't hesitate to contact us
 ;)
 All your contributions are welcome :)

 Regards,

 Chaima Ghribi





 2014-08-04 8:13 GMT+02:00 Qiming Teng teng...@linux.vnet.ibm.com:

 Thanks for the efforts.  Just want to add some comments on installing
 and configuring Heat, since an incomplete setup may cause bizarre
 problems later on when users start experiments.

 Please refer to devstack script below for proper configuration of Heat:

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L68

 and the function create_heat_accounts at the link below which helps
 create the required Heat accounts.

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L214

 Regards,
   Qiming

 On Sun, Aug 03, 2014 at 12:49:22PM +0200, chayma ghribi wrote:
  Dear All,
 
  I want to share with you our OpenStack Icehouse Installation Guide for
  Ubuntu 14.04.
 
 
 https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst
 
  An additional  guide for Heat service installation is also available ;)
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
  Hope these manuals will be helpful and simple!
  Your contributions are welcome, as are questions and suggestions :)
 
  Regards,
  Chaima Ghribi



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 --
 Shake Chen




[openstack-dev] [OpenStack][Summit] Please vote if you are interested

2014-08-05 Thread Jay Lau
Hi,

We submitted three simple but very interesting topics for the Paris Summit;
please check if you are interested.

1) Holistic Resource Scheduling:
https://www.openstack.org/vote-paris/Presentation/schedule-multiple-tiers-enterprise-application-in-openstack-environment-prs-a-holistic-scheduler-for-both-application-orchestrator-and-infrastructure

2) China OpenStack Meetup Summary:
https://www.openstack.org/vote-paris/Presentation/organizing-openstack-meet-ups-in-china

3) How does one China Customer use OpenStack:
https://www.openstack.org/vote-paris/Presentation/an-application-driven-approach-to-openstack-another-way-to-engage-enterprises?sthash.sNSBxEVS.mjjo

-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Summit] Please vote if you are interested

2014-08-05 Thread Sylvain Bauza


On 05/08/2014 15:56, Jay Lau wrote:

Hi,

We submitted three simple but very interesting topics for Paris 
Summit, please check if you are interested.


1) Holistic Resource Scheduling: 
https://www.openstack.org/vote-paris/Presentation/schedule-multiple-tiers-enterprise-application-in-openstack-environment-prs-a-holistic-scheduler-for-both-application-orchestrator-and-infrastructure


2) China OpenStack Meetup Summary: 
https://www.openstack.org/vote-paris/Presentation/organizing-openstack-meet-ups-in-china


3) How does one China Customer use OpenStack: 
https://www.openstack.org/vote-paris/Presentation/an-application-driven-approach-to-openstack-another-way-to-engage-enterprises?sthash.sNSBxEVS.mjjo





Could we please keep the mailing list only for technical discussions?
Thanks,

-Sylvain


--
Thanks,

Jay




Re: [openstack-dev] [taskflow] How to use the logbook

2014-08-05 Thread Joshua Harlow
Hi there!

I'll be back next week but the feature u are wanting currently doesn't exist 
(so that would be why u aren't seeing such data). If u want it then a blueprint 
or spec seems appropriate for this kind of historical information.

Sound good?

Btw, the logbook is more for state and flow persistence so that's why there 
currently isn't any job information in there.

A job history though seems very useful for this kind of analysis and a few 
others that have been brought up. Let's make it happen :)
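To make the idea concrete, here is a rough sketch of the kind of job-history record such a blueprint might propose. Nothing like this exists in taskflow today; every name here (JobHistory, record_claim, and so on) is invented for illustration.

```python
# Hypothetical job-history record: who claimed a job, when, and what
# happened to each claim. NOT part of taskflow -- this only illustrates
# the data the thread is asking for.
import datetime

class JobHistory:
    """Accumulates claim/outcome events for a single job."""

    def __init__(self, job_uuid):
        self.job_uuid = job_uuid
        self.events = []

    def record_claim(self, owner):
        # One event per claim; the outcome is filled in later.
        self.events.append({
            "at": datetime.datetime.now(datetime.timezone.utc),
            "owner": owner,
            "outcome": None,
            "failed_task": None,
        })

    def record_outcome(self, outcome, failed_task=None):
        # Attach the result (e.g. "COMPLETE" or "REVERTED") to the
        # most recent claim.
        self.events[-1]["outcome"] = outcome
        self.events[-1]["failed_task"] = failed_task

    def claim_count(self):
        return len(self.events)

# A job claimed twice: reverted once, then completed.
history = JobHistory("f5eb6f75-06d7-4f52-a72e-f85f69ba67a0")
history.record_claim(owner="conductor-1")
history.record_outcome("REVERTED", failed_task="build_wheels")
history.record_claim(owner="conductor-2")
history.record_outcome("COMPLETE")
print(history.claim_count())  # 2
```

Persisting something shaped like this alongside the logbook would answer Roman's questions (how often claimed, by whom, where it failed) without overloading the state-persistence role the logbook already has.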

Sent from my really tiny device...

 On Aug 5, 2014, at 3:52 AM, Roman Klesel roman.kle...@gmail.com wrote:
 
 Hello,
 
 I'm currently evaluating taskflow. I read through the examples and
 extracted bits and pieces to write a little app in order to see how it
 works. Right now I'm most interested in the jobboard and the flows.
 
 In my code I post a job to a jobboard, pick it up, and execute a flow
 (similar to the build_car example). Sometimes the flow completes
 successfully; sometimes I make a task raise an exception. In the case
 where a task fails, the whole flow is reverted and the job is back on
 the board with state UNCLAIMED. So it seems everything works as expected.
 
 Now I would like to scan the jobboard and examine the jobs in order to
 see whether they have been claimed in the past, if yes, who claimed
 them, how often, when, what has happened to them, where did they fail,
 etc.
 
 I thought the logbook is the facility to look into, but it never seems
 to have any information. No matter how often a job has failed, and also
 if it was successful, I always get this:
 
 print job.book.to_dict()
 
 {'updated_at': None, 'created_at': datetime.datetime(2014, 8, 4, 8,
 33, 31, 218007), 'meta': {}, 'name': u'romans_book', 'uuid':
 u'f5eb6f75-06d7-4f52-a72e-f85f69ba67a0'}
 
 What am I doing wrong?
 
 
 Regards Roman
 


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura

On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team that 
has been implementing this feature for Juno does not see this work as an 
experiment to gather data, but rather as an important innovative feature 
to put in the hands of early adopters in Juno and into widespread 
deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical need 
for a more usable, declarative, intent-based interface for cloud 
application developers and deployers, that can co-exist with Neutron's 
current networking-hardware-oriented API and work nicely with all 
existing core plugins. Additionally, we believe that this declarative 
approach is what is needed to properly integrate advanced services into 
Neutron, and will go a long way towards resolving the difficulties so 
far trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current 
Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
stable, and hence would be labeled experimental in Juno. This does 
not mean that it is an experiment where to fail fast is an acceptable 
outcome. The sub-team's goal is to stabilize the group policy API as 
quickly as possible,  making any needed changes based on early user and 
operator experience.


The L and M cycles that Mark suggests below to revisit the status are 
a completely different time frame. By the L or M cycle, we should be 
working on a new V3 Neutron API that pulls these APIs together into a 
more cohesive core API. We will not be in a position to do this properly 
without the experience of using the proposed group policy extension with 
the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were being 
identified with the patches, some delay might make sense. But, other 
than Mark's -2 blocking the initial patches from merging, we are on 
track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward.  I 
recognize that this discussion could create frustration for those who 
have invested significant time and energy, but the reality is we need to
ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user experimentation can produce 
valuable results.  A good experiment should be designed to fail fast 
to enable further trials via rapid iteration.


Merging large changes into the master branch is the exact opposite of 
failing fast.


The master branch deliberately favors small iterative changes over 
time.  Releasing a new version of the proposed API every six months 
limits our ability to learn and make adjustments.


In the past, we've released LBaaS, FWaaS, and VPNaaS as experimental 
APIs.  The results have been very mixed as operators either shy away 
from testing/offering the API or embrace the API with the expectation 
that the community will provide full API support and migration.  In 
both cases, the experiment fails because we either could not get the 
data we need or are unable to make significant changes without 
accepting a non-trivial amount of technical debt via migrations or 
draft API support.


Next Steps
--
Previously, the GBP subteam used a GitHub account to host the 
development, but the workflows and tooling do not align with 
OpenStack's development model. I'd like to see us create a group based 
policy project in StackForge.  StackForge will host the code and 
enable us to follow the same open review and QA processes we use in 
the main project while we are developing and testing the API. The 
infrastructure there will benefit us as we will have a separate review 
velocity and can frequently publish libraries to PyPI.  From a 
technical perspective, the 13 new entities in GBP [1] do not require 
any changes to internal Neutron data structures.  The docs[2] also 
suggest that an external plugin or service would work to make it 
easier to speed development.


End State
-
APIs require time to fully bake and right now it is too early to know 
the final outcome.  Using StackForge will allow the team to retain all 
of its options including: merging the code into Neutron, adopting the 
repository as sub-project of the Network Program, leaving the project 
in StackForge project or 

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Ryan Brown


On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
 On 05/08/2014 13:06, Ryan Brown wrote:
 -1 to this as git-review default behaviour. Ideally, branches should be
 identical in between Gerrit and local Git.

Probably not as default behaviour (people who don't want that workflow
would be driven mad!), but I think enough folks would want it that it
should be available as an option.

 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has
 to be done on patch A, then the local branch can be rebased
 interactively on last master, edit patch A by doing an intermediate
 patch, then squash the change, and pick the later patches (B to E)
 
 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how
 many dependencies are outdated in Gerrit because developers work on
 separate branches for each single commit while they could work locally
 on a single branch? I can't imagine how often errors could happen if
 we don't force by default to squash commits before sending them to Gerrit)
 
 -Sylvain
 
 Cheers,
 
 

I am well aware this may be straying into feature creep territory, and
it wouldn't be terrible if this weren't implemented.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Gary Kotton
Hi,
Is there any description of how this will be consumed by Nova? My concern is 
this code landing there.
Thanks
Gary

From: Robert Kukura kuk...@noironetworks.commailto:kuk...@noironetworks.com
Reply-To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

On 8/4/14, 4:27 PM, Mark McClain wrote:
All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be attempting.
* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team that has 
been implementing this feature for Juno does not see this work as an experiment 
to gather data, but rather as an important innovative feature to put in the 
hands of early adopters in Juno and into widespread deployment with a stable 
API as early as Kilo.

The group-based policy BP approved for Juno addresses the critical need for a 
more usable, declarative, intent-based interface for cloud application 
developers and deployers, that can co-exist with Neutron's current 
networking-hardware-oriented API and work nicely with all existing core 
plugins. Additionally, we believe that this declarative approach is what is 
needed to properly integrate advanced services into Neutron, and will go a long 
way towards resolving the difficulties so far trying to integrate LBaaS, FWaaS, 
and VPNaaS APIs into the current Neutron model.

Like any new service API in Neutron, the initial group policy API release will 
be subject to incompatible changes before being declared stable, and hence 
would be labeled experimental in Juno. This does not mean that it is an 
experiment where to fail fast is an acceptable outcome. The sub-team's goal 
is to stabilize the group policy API as quickly as possible,  making any needed 
changes based on early user and operator experience.

The L and M cycles that Mark suggests below to revisit the status are a 
completely different time frame. By the L or M cycle, we should be working on a 
new V3 Neutron API that pulls these APIs together into a more cohesive core 
API. We will not be in a position to do this properly without the experience of 
using the proposed group policy extension with the V2 Neutron API in production.

If we were failing miserably, or if serious technical issues were being 
identified with the patches, some delay might make sense. But, other than 
Mark's -2 blocking the initial patches from merging, we are on track to 
complete the planned work in Juno.

-Bob


Why this email?
---
Our community has been discussing and working on Group Based Policy (GBP) for 
many months.  I think the discussion has reached a point where we need to 
openly discuss a few issues before moving forward.  I recognize that this 
discussion could create frustration for those who have invested significant 
time and energy, but the reality is we need to ensure we are making decisions that 
benefit all members of our community (users, operators, developers and vendors).

Experimentation

I like that as a community we are exploring alternate APIs.  The process of 
exploring via real user experimentation can produce valuable results.  A good 
experiment should be designed to fail fast to enable further trials via rapid 
iteration.

Merging large changes into the master branch is the exact opposite of failing 
fast.

The master branch deliberately favors small iterative changes over time.  
Releasing a new version of the proposed API every six months limits our ability 
to learn and make adjustments.

In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental APIs.  The 
results have been very mixed as operators either shy away from testing/offering 
the API or embrace the API with the expectation that the community will provide 
full API support and migration.  In both cases, the experiment fails because we 
either could not get the data we need or are unable to make significant changes 
without accepting a non-trivial amount of technical debt via migrations or 
draft API support.

Next Steps
--
Previously, the GBP subteam used a GitHub account to host the development, but 
the workflows and tooling do not align with OpenStack's development model. I’d 
like to see us create a group based policy project in StackForge.  StackForge 
will host the code and enable us to follow the same open review and QA 
processes we use in the main project while we are developing and testing the 
API. The infrastructure there will benefit us as we will have a separate review 
velocity and can frequently publish libraries to PyPI.  From a technical 
perspective, the 13 new entities in GBP [1] do not require any changes to 

Re: [openstack-dev] [Neutron][LBaaS] status in entities

2014-08-05 Thread Brandon Logan

Hello Vijay!

Well this is a hold over from v1, but the status is a provisioning
status.  So yes, when something is deployed successfully it should be
ACTIVE.  The exception to this is the member status, in that its status
can be INACTIVE if a health check fails.  Now this will probably cause
edge cases when health checks and updates are happening to the same
member.  It's been talked about before, but we need to really have two
types of status fields, provisioning and operational.  IMHO, that should
be something we try to get into K.
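A rough sketch of what that split could look like. The field and value names here are illustrative only; no such split exists in the API yet.

```python
# Provisioning status answers "did the backend deploy the object?";
# operational status answers "is it passing health checks?". Keeping
# them separate means a failed health check no longer masks a
# successful deployment. Names are illustrative, not an API proposal.

PROVISIONING = ("PENDING_CREATE", "PENDING_UPDATE", "ACTIVE", "ERROR")
OPERATIONAL = ("ONLINE", "OFFLINE")

class Member:
    def __init__(self):
        self.provisioning_status = "PENDING_CREATE"
        self.operating_status = "OFFLINE"

    def deployed(self):
        # The backend finished configuring the member.
        self.provisioning_status = "ACTIVE"

    def health_check(self, passed):
        # Health monitoring flips only the operational side.
        self.operating_status = "ONLINE" if passed else "OFFLINE"

m = Member()
m.deployed()
m.health_check(passed=False)
print(m.provisioning_status, m.operating_status)  # ACTIVE OFFLINE
```

With one combined status field, the final state of this member would have to be a single INACTIVE, losing the fact that provisioning actually succeeded.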

Thanks,
Brandon

On Tue, 2014-08-05 at 09:28 +, Vijay Venkatachalam wrote:
 Hi:
 
I think we had some discussions around ‘status’
 attribute earlier, I don’t recollect the conclusion.
 
 Does it reflect the deployment status?
 
Meaning, if the status of an entity is ACTIVE, the user
 has to infer that the entity is deployed successfully in the
 backend/loadbalancer.
 
 Thanks,
 
 Vijay V.
 
 


Re: [openstack-dev] [Neutron][LBaaS] status in entities

2014-08-05 Thread Eichberger, German
There was also talk about a third administrative status like ON/OFF...

We really need a deeper status discussion - likely high bandwidth to work all of 
that out.

German

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, August 05, 2014 8:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] status in entities


Hello Vijay!

Well this is a hold over from v1, but the status is a provisioning status.  So 
yes, when something is deployed successfully it should be ACTIVE.  The 
exception to this is the member status, in that its status can be INACTIVE if 
a health check fails.  Now this will probably cause edge cases when health 
checks and updates are happening to the same member.  It's been talked about 
before, but we need to really have two types of status fields, provisioning and 
operational.  IMHO, that should be something we try to get into K.

Thanks,
Brandon

On Tue, 2014-08-05 at 09:28 +, Vijay Venkatachalam wrote:
 Hi:
 
I think we had some discussions around ‘status’
 attribute earlier, I don’t recollect the conclusion.
 
 Does it reflect the deployment status?
 
Meaning, if the status of an entity is ACTIVE, the user 
 has to infer that the entity is deployed successfully in the 
 backend/loadbalancer.
 
 Thanks,
 
 Vijay V.
 
 


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread ZZelle
Hi,


I like the idea... with a complex change, it could be useful for
understanding to split it into smaller changes during development.


Do we need to expose such a feature under git review? We could define a new
subcommand, git reviewflow?


Cédric,
ZZelle@IRC



On Tue, Aug 5, 2014 at 4:49 PM, Ryan Brown rybr...@redhat.com wrote:



 On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
  On 05/08/2014 13:06, Ryan Brown wrote:
  -1 to this as git-review default behaviour. Ideally, branches should be
  identical in between Gerrit and local Git.

 Probably not as default behaviour (people who don't want that workflow
 would be driven mad!), but I think enough folks would want it that it
 should be available as an option.

  I can understand some exceptions where developers want to work on
  intermediate commits and squash them before updating Gerrit, but in that
  case, I can't see why it needs to be kept locally. If a new patchset has
  to be done on patch A, then the local branch can be rebased
  interactively on last master, edit patch A by doing an intermediate
  patch, then squash the change, and pick the later patches (B to E)
 
  That said, I can also understand that developers work their way, and so
  could dislike squashing commits, hence my proposal to have a --no-squash
  option when uploading, but use with caution (for a single branch, how
  many dependencies are outdated in Gerrit because developers work on
  separate branches for each single commit while they could work locally
  on a single branch? I can't imagine how often errors could happen if
  we don't force by default to squash commits before sending them to
 Gerrit)
 
  -Sylvain
 
  Cheers,
 
 

 I am well aware this may be straying into feature creep territory, and
 it wouldn't be terrible if this weren't implemented.

 --
 Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



[openstack-dev] [all] The future of the integrated release

2014-08-05 Thread Thierry Carrez
Hi everyone,

With the incredible growth of OpenStack, our development community is
facing complex challenges. How we handle those might determine the
ultimate success or failure of OpenStack.

With this cycle we hit new limits in our processes, tools and cultural
setup. This resulted in new limiting factors on our overall velocity,
which is frustrating for developers. This resulted in the burnout of key
firefighting resources. This resulted in tension between people who try
to get specific work done and people who try to keep a handle on the big
picture.

It all boils down to an imbalance between strategic and tactical
contributions. At the beginning of this project, we had a strong inner
group of people dedicated to fixing all loose ends. Then a lot of
companies got interested in OpenStack and there was a surge in tactical,
short-term contributions. We put on a call for more resources to be
dedicated to strategic contributions like critical bugfixing,
vulnerability management, QA, infrastructure... and that call was
answered by a lot of companies that are now key members of the OpenStack
Foundation, and all was fine again. But OpenStack contributors kept on
growing, and we grew the narrowly-focused population way faster than the
cross-project population.

At the same time, we kept on adding new projects to incubation and to
the integrated release, which is great... but the new developers you get
on board with this are much more likely to be tactical than strategic
contributors. This also contributed to the imbalance. The penalty for
that imbalance is twofold: we don't have enough resources available to
solve old, known OpenStack-wide issues; but we also don't have enough
resources to identify and fix new issues.

We have several efforts under way, like calling for new strategic
contributors, driving towards in-project functional testing, making
solving rare issues a more attractive endeavor, or hiring resources
directly at the Foundation level to help address those. But there is a
topic we haven't raised yet: should we concentrate on fixing what is
currently in the integrated release rather than adding new projects ?

We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay afloat of what's
happening. For such projects, is it time for a pause ? Is it time to
define key cycle goals and defer everything else ?

On the integrated release side, more projects means stretching our
limited strategic resources more. Is it time for the Technical Committee
to more aggressively define what is in and what is out ? If we go
through such a redefinition, shall we push currently-integrated projects
that fail to match that definition out of the integrated release inner
circle ?

The TC discussion on what the integrated release should or should not
include has always been informally going on. Some people would like to
strictly limit to end-user-facing projects. Some others suggest that
OpenStack should just be about integrating/exposing/scaling smart
functionality that lives in specialized external projects, rather than
trying to outsmart those by writing our own implementation. Some others
are advocates of carefully moving up the stack, and to resist from
further addressing IaaS+ services until we complete the pure IaaS
space in a satisfactory manner. Some others would like to build a
roadmap based on AWS services. Some others would just add anything that
fits the incubation/integration requirements.

On one side this is a long-term discussion, but on the other we also
need to make quick decisions. With 4 incubated projects, and 2 new ones
currently being proposed, there are a lot of people knocking at the door.

Thanks for reading this braindump this far. I hope this will trigger the
open discussions we need to have, as an open source project, to reach
the next level.

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][LBaaS] status in entities

2014-08-05 Thread Brandon Logan
Isn't that what admin_state_up is for?

But yes we do need a deeper discussion on this and many other things.

On Tue, 2014-08-05 at 15:42 +, Eichberger, German wrote:
 There was also talk about a third administrative status like ON/OFF...
 
 We really need a deeper status discussion - likely high bandwidth to work all 
 of that out.
 
 German
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Tuesday, August 05, 2014 8:27 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] status in entities
 
 
 Hello Vijay!
 
 Well this is a hold over from v1, but the status is a provisioning status.  
 So yes, when something is deployed successfully it should be ACTIVE.  The 
  exception to this is the member status, in that its status can be INACTIVE 
 if a health check fails.  Now this will probably cause edge cases when health 
 checks and updates are happening to the same member.  It's been talked about 
 before, but we need to really have two types of status fields, provisioning 
 and operational.  IMHO, that should be something we try to get into K.
 
 Thanks,
 Brandon
 
 On Tue, 2014-08-05 at 09:28 +, Vijay Venkatachalam wrote:
  Hi:
  
 I think we had some discussions around ‘status’
  attribute earlier, I don’t recollect the conclusion.
  
  Does it reflect the deployment status?
  
 Meaning, if the status of an entity is ACTIVE, the user 
  has to infer that the entity is deployed successfully in the 
  backend/loadbalancer.
  
  Thanks,
  
  Vijay V.
  
  


[openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Jay Pipes

Hello stackers, TC, Neutron contributors,

At the Nova mid-cycle meetup last week in Oregon, during the discussion 
about the future of nova-network, the topic of nova-network - Neutron 
migration came up.


For some reason, I had been clueless about the details of one of the 
items in the gap analysis the TC had requested [1]. Namely, the 5th 
item, about nova-network - Neutron migration, which is detailed in the 
following specification:


https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.rst

The above specification outlines a plan to allow migration of *running* 
instances from an OpenStack deployment using nova-network (both with and 
without multi-host mode) to an OpenStack deployment using Neutron, with 
little to no downtime using live migration techniques and an array of 
post-vm-migrate strategies to wire up the new VIFs to the Neutron ports.


I personally believe that this requirement to support a live migration 
with no downtime of running instances between a nova-network and a 
Neutron deployment *is neither realistic, nor worth the extensive time 
and technical debt needed to make this happen*.


I suggest that it would be better to instead provide good instructions 
for doing cold migration (snapshot VMs in old nova-network deployment, 
store in Swift or something, then launch VM from a snapshot in new 
Neutron deployment) -- which should cover the majority of deployments -- 
and then write some instructions for what to look out for when doing a 
custom migration for environments that simply cannot afford any downtime 
and *really* want to migrate to Neutron. For these deployments, it's 
almost guaranteed that they will need to mangle their existing databases 
and do manual data migration anyway -- like RAX did when moving from 
nova-network to Neutron. The variables are too many to list here, and 
the number of deployments actually *needing* this work seems to me to be 
very limited. Someone suggested Metacloud *might* be the only deployment 
that might meet the needs for a live nova-network - Neutron migration. 
Metacloud folks, please do respond here!


In short, I don't think the live migration requirement for nova-network 
to Neutron is either realistic or valuable, and suggest relaxing it to 
be good instructions for cold migration of instances from an older 
deployment to a newer deployment. There are other more valuable things 
that Neutron contributors could focus on, IMO -- such as the DVR 
functionality that brings parity to Neutron with nova-network's 
multi-host mode.


Thoughts?

-jay

[1] 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage




Re: [openstack-dev] [OpenStack-Infra] [git-review] Supporting development in local branches

2014-08-05 Thread Varnau, Steve (Trafodion)
Yuriy,

It looks like this would automate a standard workflow that my group often uses: 
multiple commits, create “delivery” branch, git merge --squash, git review.  
That looks really useful.

Having it be repeatable is a bonus.
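For reference, that squash-and-deliver workflow can be reproduced in a throwaway repository; the final 'git review' step is left as a comment since it requires a configured Gerrit remote.

```shell
set -e
# Throwaway-repo demo of the workflow: several WIP commits on a work
# branch, squashed into a single commit on a "delivery" branch.
repo=$(mktemp -d); cd "$repo"
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
git init -q
g commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)

git checkout -q -b work
for i in 1 2 3; do
    echo "$i" >> hack.txt
    git add hack.txt
    g commit -q -m "wip $i"
done

# Create the delivery branch from the base and squash the work into it.
git checkout -q -b delivery "$base"
git merge -q --squash work
g commit -q -m "Add feature X"
# git review   # would now upload the single squashed commit to Gerrit

git log --oneline "$base"..delivery | wc -l   # prints 1
```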

Per the last bullet of the implementation, I would not require that the current 
index/HEAD be left unmodified. A checkout back to the working branch can be done 
at the end, right?

-Steve

From: Yuriy Taraday [mailto:yorik@gmail.com]
Sent: Monday, August 04, 2014 16:18
To: openstack-dev; openstack-infra
Subject: [OpenStack-Infra] [git-review] Supporting development in local branches

Hello, git-review users!

I'd like to gather feedback on a feature I want to implement that might turn 
out useful for you.

I like using Git for development. It allows me to keep track of current 
development process, it remembers everything I ever did with the code (and 
more).
I also really like using Gerrit for code review. It provides clean interfaces, 
forces clean histories (who needs to know that I changed one line of code at 
3am on Monday?) and allows productive collaboration.
What I really hate is having to throw away my (local, precious for me) history 
for all change requests because I need to upload a change to Gerrit.

That's why I want to propose making git-review support the workflow that 
will make me happy. Imagine you could do something like this:

0. create new local branch;

master: M--
 \
feature:  *

1. start hacking, doing small local meaningful (to you) commits;

master: M--
 \
feature:  A-B-...-C

2. since hacking takes a tremendous amount of time (you're doing a Cool Feature 
(tm), nothing less) you need to update some code from master, so you're just 
merging master in to your branch (i.e. using Git as you'd use it normally);

master: M---N---O---...
         \   \   \
feature:  A-B-...-C-D-...

3. and now you get the first version that deserves to be seen by the community, so 
you run 'git review', it asks you for the desired commit message, and poof, 
magic-magic, all changes from your branch are uploaded to Gerrit as _one_ change 
request;

master: M---N---O---...
         \   \   \      E* = uploaded
feature:  A-B-...-C-D-...-E

4. you repeat steps 1 and 2 as much as you like;
5. and all consecutive calls to 'git review' will show you the last commit message 
you used for upload and use it to upload the new state of your local branch to 
Gerrit, as one change request.

Note that during this process git-review will never run rebase or merge 
operations. All such operations are done by the user in the local branch instead.
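To make steps 0-2 concrete, here is a throwaway-repo sketch of the user-visible part of the workflow: a local feature branch with small commits, periodically merging master back in, with no rebases involved. Branch names are arbitrary.

```shell
set -e
# Throwaway-repo sketch of steps 0-2: a local feature branch with small
# commits, periodically merging master back in (no rebases involved).
repo=$(mktemp -d); cd "$repo"
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
git init -q
g commit -q --allow-empty -m "M"
master=$(git symbolic-ref --short HEAD)

# Steps 0-1: create the branch, make small meaningful commits.
git checkout -q -b feature
echo a > f.txt; git add f.txt; g commit -q -m "A"
echo b >> f.txt; g commit -q -am "B"

# Step 2: master moves on; merge it into the branch, Git-as-usual.
git checkout -q "$master"
echo n > other.txt; git add other.txt; g commit -q -m "N"
git checkout -q feature
g merge -q --no-edit "$master"

# The local history -- merge commits and all -- stays untouched:
git log --oneline --graph
```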

Now, to the dirty implementations details.

- Since suggested feature changes default behavior of git-review, it'll have to 
be explicitly turned on in config (review.shadow_branches? 
review.local_branches?). It should also be implicitly disabled on master branch 
(or whatever is in .gitreview config).
- Last uploaded commit for branch branch-name will be kept in 
refs/review-branches/branch-name.
- For every call of 'git review' it will find the latest commit in gerrit/master 
(or the remote and branch from .gitreview), and create a new one that will have that 
commit as its parent and the tree of the current commit from the local branch as its tree.
- While creating new commit, it'll open an editor to fix commit message for 
that new commit, taking its initial contents from 
refs/review-branches/branch-name if it exists.
- Creating this new commit might involve generating a temporary bare repo 
(maybe even with shared objects dir) to prevent changes to current index and 
HEAD while using bare 'git commit' to do most of the work instead of loads of 
plumbing commands.
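The core trick described in these bullets can be sketched with plain Git plumbing. This is only an illustration of the idea, not git-review code: a local master branch stands in for gerrit/master, and the identities are dummies.

```shell
set -e
# Plumbing sketch of the upload step: build one new commit whose parent
# is the tip of "master" (standing in for gerrit/master) and whose tree
# is the feature branch's tree -- without touching HEAD or the index.
repo=$(mktemp -d); cd "$repo"
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
git init -q
echo base > f.txt; git add f.txt; g commit -q -m "M"
master=$(git symbolic-ref --short HEAD)
git checkout -q -b feature
echo work >> f.txt; g commit -q -am "A"
echo more >> f.txt; g commit -q -am "B"

tree=$(git rev-parse 'feature^{tree}')
parent=$(git rev-parse "$master")
review=$(GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
         GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com \
         git commit-tree "$tree" -p "$parent" -m "Cool Feature (tm)")
git update-ref refs/review-branches/feature "$review"

# One squashed commit, with the branch's full content:
git show --stat --oneline "$review"
```

Because commit-tree and update-ref never look at the index or HEAD, the temporary bare repo mentioned above may not even be necessary for the single-branch case.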

Note that such an approach won't work for uploading multiple change requests 
without some complex tweaks, but I imagine later we can improve it and support 
uploading several interdependent change requests from several local branches. 
We can resolve dependencies between them by tracking latest merges (if branch 
myfeature-a has been merged to myfeature-b then change request from myfeature-b 
will depend on change request from myfeature-a):

master:       M---N---O---...
               \   \   \
myfeature-a:    A-B-...-C-D-...-E      E* = uploaded
                 \           \
myfeature-b:      F-...-G-...-I-J      J* = uploaded

This improvement would be implemented later if needed.
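The "has a been merged into b" dependency rule could, for instance, be checked with 'git merge-base --is-ancestor'. A rough sketch, using the branch names from the example above; this is one possible mechanism, not necessarily how git-review would do it.

```shell
set -e
# Sketch of the dependency rule: if myfeature-a has been merged into
# myfeature-b, then b's change request depends on a's.
repo=$(mktemp -d); cd "$repo"
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
git init -q
g commit -q --allow-empty -m "M"
git checkout -q -b myfeature-a
echo a > a.txt; git add a.txt; g commit -q -m "A"
git checkout -q -b myfeature-b
echo f > f.txt; git add f.txt; g commit -q -m "F"
git checkout -q myfeature-a
echo c >> a.txt; g commit -q -am "C"
git checkout -q myfeature-b
g merge -q --no-edit myfeature-a   # b now contains a's tip

if git merge-base --is-ancestor myfeature-a myfeature-b; then
    echo "myfeature-b depends on myfeature-a"
fi
```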

I hope such a feature seems useful not just for me, and I'm looking forward 
to some comments on it.

--
Kind regards, Yuriy.


Re: [openstack-dev] [TripleO] devtest environment for virtual or true bare metal

2014-08-05 Thread LeslieWang
Hi Ben,
Thanks for your reply. 
Actually I'm a little confused by the virtual environment. I guess what it 
means is as below:
- 1 Seed VM as the deployment starting point.
- Both undercloud and overcloud images are loaded into Glance on the Seed VM.
- 15 VMs are created: 1 for the undercloud, 1 for the overcloud controller, 
  and the remaining 13 for overcloud compute.
- 1 host machine acts as a container for all 15 VMs. It can be separate from 
  the Seed VM.
- The Seed VM communicates with the host machine to create the 15 VMs and 
  install the corresponding images.
Is this correct? Or can you roughly introduce the topology of the devtest 
virtual environment?
Best Regards,
Leslie Wang


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Monty Taylor

On 08/05/2014 09:18 AM, Jay Pipes wrote:

Hello stackers, TC, Neutron contributors,

At the Nova mid-cycle meetup last week in Oregon, during the discussion
about the future of nova-network, the topic of nova-network - Neutron
migration came up.

For some reason, I had been clueless about the details of one of the
items in the gap analysis the TC had requested [1]. Namely, the 5th
item, about nova-network - Neutron migration, which is detailed in the
following specification:

https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.rst

The above specification outlines a plan to allow migration of *running*
instances from an OpenStack deployment using nova-network (both with and
without multi-host mode) to an OpenStack deployment using Neutron, with
little to no downtime using live migration techniques and an array of
post-vm-migrate strategies to wire up the new VIFs to the Neutron ports.

I personally believe that this requirement to support a live migration
with no downtime of running instances between a nova-network and a
Neutron deployment *is neither realistic, nor worth the extensive time
and technical debt needed to make this happen*.

I suggest that it would be better to instead provide good instructions
for doing cold migration (snapshot VMs in old nova-network deployment,
store in Swift or something, then launch VM from a snapshot in new
Neutron deployment) -- which should cover the majority of deployments --
and then write some instructions for what to look out for when doing a
custom migration for environments that simply cannot afford any downtime
and *really* want to migrate to Neutron. For these deployments, it's
almost guaranteed that they will need to mangle their existing databases
and do manual data migration anyway -- like RAX did when moving from
nova-network to Neutron. The variables are too many to list here, and
the number of deployments actually *needing* this work seems to me to be
very limited. Someone suggested Metacloud *might* be the only deployment
that might meet the needs for a live nova-network - Neutron migration.
Metacloud folks, please do respond here!

In short, I don't think the live migration requirement for nova-network
to Neutron is either realistic or valuable, and suggest relaxing it to
be good instructions for cold migration of instances from an older
deployment to a newer deployment. There are other more valuable things
that Neutron contributors could focus on, IMO -- such as the DVR
functionality that brings parity to Neutron with nova-network's
multi-host mode.

Thoughts?


I agree 100%. Although I understand the motivation, I think it's an unreasonably 
high burden in an area where there are many, many other real pressing 
issues that need to be solved.





Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Mike Spreitzer
Monty Taylor mord...@inaugust.com wrote on 08/05/2014 12:27:14 PM:

 On 08/05/2014 09:18 AM, Jay Pipes wrote:
  Hello stackers, TC, Neutron contributors,
 
  At the Nova mid-cycle meetup last week in Oregon, during the 
discussion
  about the future of nova-network, the topic of nova-network - Neutron
  migration came up.
 
  For some reason, I had been clueless about the details of one of the
  items in the gap analysis the TC had requested [1]. Namely, the 5th
  item, about nova-network - Neutron migration, which is detailed in 
the
  following specification:
 
  
https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.rst

 
  ...
 
  I personally believe that this requirement to support a live migration
  with no downtime of running instances between a nova-network and a
  Neutron deployment *is neither realistic, nor worth the extensive time
  and technical debt needed to make this happen*.
 
  I suggest that it would be better to instead provide good instructions
  for doing cold migration (snapshot VMs in old nova-network deployment,
  store in Swift or something, then launch VM from a snapshot in new
  Neutron deployment) -- which should cover the majority of deployments 
--
  and then write some instructions for what to look out for when doing a
  custom migration for environments that simply cannot afford any 
downtime
  and *really* want to migrate to Neutron. For these deployments, it's
  almost guaranteed that they will need to mangle their existing 
databases
  and do manual data migration anyway -- like RAX did when moving from
  nova-network to Neutron. The variables are too many to list here, and
  the number of deployments actually *needing* this work seems to me to 
be
  very limited. Someone suggested Metacloud *might* be the only 
deployment
  that might meet the needs for a live nova-network - Neutron 
migration.
  Metacloud folks, please do respond here!
 
  ...
 
 I agree 100%. Although I understand the motivation, I think it's an unreasonably 
 high burden in an area where there are many, many other real pressing 
 issues that need to be solved.

I will go a little further.  My focus is on workloads that are composed of 
scaling groups (one strict way of saying "cattle, not pets").  In this case 
I do not need to migrate individual Compute instances, just shut down 
obsolete ones and start shiny new ones.

Regards,
Mike



Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Edgar Magana
Jay,

I do agree with you on the focus areas. I believe Neutron should focus on
nova parity (DVR) and DB migrations more than ever, instead of
raising the priority of new APIs such as GBP. Actually, yesterday's
Neutron IRC meeting showed the need for more focused work instead of
picking at too many "N" different areas.

The part where I disagree is the focus on the nova-network -> Neutron
migration. I feel this activity is under control, and even if it will not
deliver on the "no-downtime" expectation, it will offer an alternative for
migrating instances for those operators that could be interested.

Now, if Metacloud has a process that will work, please share it and let's
document it. The more the merrier; it will be up to the operators to choose
the best approach for their own clouds.

Edgar

On 8/5/14, 9:27 AM, Monty Taylor mord...@inaugust.com wrote:

On 08/05/2014 09:18 AM, Jay Pipes wrote:
 Hello stackers, TC, Neutron contributors,

 At the Nova mid-cycle meetup last week in Oregon, during the discussion
 about the future of nova-network, the topic of nova-network - Neutron
 migration came up.

 For some reason, I had been clueless about the details of one of the
 items in the gap analysis the TC had requested [1]. Namely, the 5th
 item, about nova-network - Neutron migration, which is detailed in the
 following specification:

 
https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.r
st

 The above specification outlines a plan to allow migration of *running*
 instances from an OpenStack deployment using nova-network (both with and
 without multi-host mode) to an OpenStack deployment using Neutron, with
 little to no downtime using live migration techniques and an array of
 post-vm-migrate strategies to wire up the new VIFs to the Neutron ports.

 I personally believe that this requirement to support a live migration
 with no downtime of running instances between a nova-network and a
 Neutron deployment *is neither realistic, nor worth the extensive time
 and technical debt needed to make this happen*.

 I suggest that it would be better to instead provide good instructions
 for doing cold migration (snapshot VMs in old nova-network deployment,
 store in Swift or something, then launch VM from a snapshot in new
 Neutron deployment) -- which should cover the majority of deployments --
 and then write some instructions for what to look out for when doing a
 custom migration for environments that simply cannot afford any downtime
 and *really* want to migrate to Neutron. For these deployments, it's
 almost guaranteed that they will need to mangle their existing databases
 and do manual data migration anyway -- like RAX did when moving from
 nova-network to Neutron. The variables are too many to list here, and
 the number of deployments actually *needing* this work seems to me to be
 very limited. Someone suggested Metacloud *might* be the only deployment
 that might meet the needs for a live nova-network - Neutron migration.
 Metacloud folks, please do respond here!

 In short, I don't think the live migration requirement for nova-network
 to Neutron is either realistic or valuable, and suggest relaxing it to
 be good instructions for cold migration of instances from an older
 deployment to a newer deployment. There are other more valuable things
 that Neutron contributors could focus on, IMO -- such as the DVR
 functionality that brings parity to Neutron with nova-network's
 multi-host mode.

 Thoughts?

I agree 100%. Although I understand the motivation, I think it's an unreasonably
high burden in an area where there are many, many other real pressing
issues that need to be solved.




Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-05 Thread Carl Baldwin
I have a spec proposal in play that crosses the Nova/Neutron boundary.
I split it into two specs: a Nova spec [1] and a Neutron spec [2].
There is a little duplication between the two at a high level but not
in the details.  Each of the specs references the other at various
spots in the text and in the references section.

This isn't the optimal way to write a cross-project spec.  There is
difficulty involved in keeping the two consistent.  Also, reviewers
from one program often don't bother to read the spec from the other.
This is unfortunate.

However, given the constraints of the current process, I believe that
it was necessary to split the spec into two so that the cores
responsible for each program can review and accept the design for the
proposed changes in their realm.

Would it make more sense to submit the exact same monolithic
specification to both?  At the time, I chose against it because I
thought it would make it more difficult to read in the context of a
single program.

I'm open and looking forward to hearing others' thoughts on this.

Carl

[1] https://review.openstack.org/#/c/90150/
[2] https://review.openstack.org/#/c/88623/

On Mon, Aug 4, 2014 at 8:33 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-08-05 01:26:49 + (+), joehuang wrote:
 I would like to know how to submit cross project spec? Is there a
 repository for cross project cross project spec.

 Specs repositories are about formalizing/streamlining the design
 process within a program, and generally the core reviewers of those
 programs decide when a spec is in a suitable condition for approval.
 In the case of a cross-program spec (which I assume is what you mean
 by cross-project), who would decide what needs to be in the spec
 proposal and who would approve it? What sort of design proposal do
 you have in mind which you think would need to be a single spec
 applying to projects in more than one program?
 --
 Jeremy Stanley



Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Monty Taylor

On 08/05/2014 09:34 AM, Mike Spreitzer wrote:

Monty Taylor mord...@inaugust.com wrote on 08/05/2014 12:27:14 PM:


On 08/05/2014 09:18 AM, Jay Pipes wrote:

Hello stackers, TC, Neutron contributors,

At the Nova mid-cycle meetup last week in Oregon, during the

discussion

about the future of nova-network, the topic of nova-network - Neutron
migration came up.

For some reason, I had been clueless about the details of one of the
items in the gap analysis the TC had requested [1]. Namely, the 5th
item, about nova-network - Neutron migration, which is detailed in

the

following specification:



https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.rst



...

I personally believe that this requirement to support a live migration
with no downtime of running instances between a nova-network and a
Neutron deployment *is neither realistic, nor worth the extensive time
and technical debt needed to make this happen*.

I suggest that it would be better to instead provide good instructions
for doing cold migration (snapshot VMs in old nova-network deployment,
store in Swift or something, then launch VM from a snapshot in new
Neutron deployment) -- which should cover the majority of deployments

--

and then write some instructions for what to look out for when doing a
custom migration for environments that simply cannot afford any

downtime

and *really* want to migrate to Neutron. For these deployments, it's
almost guaranteed that they will need to mangle their existing

databases

and do manual data migration anyway -- like RAX did when moving from
nova-network to Neutron. The variables are too many to list here, and
the number of deployments actually *needing* this work seems to me to

be

very limited. Someone suggested Metacloud *might* be the only

deployment

that might meet the needs for a live nova-network - Neutron

migration.

Metacloud folks, please do respond here!

...


I agree 100%. Although I understand the motivation, I think it's an unreasonably
high burden in an area where there are many, many other real pressing
issues that need to be solved.


I will go a little further.  My focus is on workloads that are composed of
scaling groups (one strict way of saying "cattle, not pets").  In this case
I do not need to migrate individual Compute instances, just shut down
obsolete ones and start shiny new ones.


To be complete - I feel the urge to communicate that I run a very large 
production infrastructure (that you all use) that is comprised of 
_several_ precious pets. I reject the notion that cloud is only for 
ephemeral things or that you can't do old-style workloads. It works great!


So, if I was a user of a cloud that told me I needed to do a downtime to 
migrate, it would be a bad user experience. Oh wait - it WAS a bad user 
experience when Rackspace migrated us from Rackspace Classic to 
Rackspace Nova. Guess what? We got over it - and thus far it's been the 
only time that's happened - so 4 years in to the OpenStack project, our 
control plane is still running on Rackspace.


Which is to say that there are people who will have pets and not cattle, 
and not having a magical seamless upgrade path from nova-network to 
neutron will annoy them. However, I think the cost to providing that 
path far outweighs the benefit in the face of other things on our plate.




[openstack-dev] [Ironic] python-ironicclient 0.2.0 released

2014-08-05 Thread Devananda van der Veen
On the tail of several weeks of travel and the Juno-2 milestone, I have
pushed up the 0.2.0 (*) version of the python-ironicclient library. This
includes all the feature changes from Juno-2 development, and some bug
fixes from Juno-1.

* node-show now includes the new instance_info field
* add node set-provision-state command
* update help string for node-create to distinguish nodes' physical
  properties from instances' logical properties, which should be set using
  node-update.
* add bash completion support for the CLI
* ironicclient module now exposes 'auth_ref' object
* list commands support pagination (--marker, --limit)
* CLI supports vendor-passthru method
* add driver-properties command to expose what driver_info attributes
  each driver expects
* client support for the get and set boot device calls on the new management
  interface
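A hypothetical usage tour based on the feature list above might look like the following. This is illustrative only: command spellings and flags may differ between python-ironicclient versions, and the node/driver names are placeholders, so the function is defined but never called.

```shell
# Hypothetical tour of commands named in the release notes above.
# Flags/spellings may differ between ironicclient versions; $NODE_UUID
# and $LAST_UUID are placeholders, so the function is never executed.
ironic_0_2_0_tour() {
    ironic node-show "$NODE_UUID"                      # now shows instance_info
    ironic node-set-provision-state "$NODE_UUID" active
    ironic node-list --marker "$LAST_UUID" --limit 50  # paginated listing
    ironic driver-properties pxe_ipmitool              # expected driver_info keys
}
```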

Please file bugs on https://bugs.launchpad.net/python-ironicclient if you
encounter any.

Regards,
Devananda


(*) I actually pushed up 0.1.5 first, then realized this needed a MINOR
version change and tagged 0.2.0


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-05 Thread Devananda van der Veen
This is great, thanks to everyone who helped make it happen!

-D


On Sat, Aug 2, 2014 at 10:16 AM, Andreas Jaeger a...@suse.com wrote:

 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.

 The index page contains the published repos as of yesterday and it will
 be enhanced as more are setup (current patch:
 https://review.openstack.org/111476).

 For now, you can reach a repo directly via
 http://specs.openstack.org/$ORGANIZATION/$project-specs, for example:
 http://specs.openstack.org/openstack/qa-specs/

 Thanks to Steve Martinelli and to the infra team (especially Clark,
 James, Jeremy and Sergey) for getting this done!

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] [all] The future of the integrated release

2014-08-05 Thread Monty Taylor

On 08/05/2014 09:03 AM, Thierry Carrez wrote:

Hi everyone,

With the incredible growth of OpenStack, our development community is
facing complex challenges. How we handle those might determine the
ultimate success or failure of OpenStack.

With this cycle we hit new limits in our processes, tools and cultural
setup. This resulted in new limiting factors on our overall velocity,
which is frustrating for developers. This resulted in the burnout of key
firefighting resources. This resulted in tension between people who try
to get specific work done and people who try to keep a handle on the big
picture.

It all boils down to an imbalance between strategic and tactical
contributions. At the beginning of this project, we had a strong inner
group of people dedicated to fixing all loose ends. Then a lot of
companies got interested in OpenStack and there was a surge in tactical,
short-term contributions. We put on a call for more resources to be
dedicated to strategic contributions like critical bugfixing,
vulnerability management, QA, infrastructure... and that call was
answered by a lot of companies that are now key members of the OpenStack
Foundation, and all was fine again. But OpenStack contributors kept on
growing, and we grew the narrowly-focused population way faster than the
cross-project population.

At the same time, we kept on adding new projects to incubation and to
the integrated release, which is great... but the new developers you get
on board with this are much more likely to be tactical than strategic
contributors. This also contributed to the imbalance. The penalty for
that imbalance is twofold: we don't have enough resources available to
solve old, known OpenStack-wide issues; but we also don't have enough
resources to identify and fix new issues.

We have several efforts under way, like calling for new strategic
contributors, driving towards in-project functional testing, making
solving rare issues a more attractive endeavor, or hiring resources
directly at the Foundation level to help address those. But there is a
topic we haven't raised yet: should we concentrate on fixing what is
currently in the integrated release rather than adding new projects?

We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay on top of what's
happening. For such projects, is it time for a pause? Is it time to
define key cycle goals and defer everything else?

On the integrated release side, more projects means stretching our
limited strategic resources more. Is it time for the Technical Committee
to more aggressively define what is in and what is out? If we go
through such a redefinition, shall we push currently-integrated projects
that fail to match that definition out of the integrated release inner
circle?

The TC discussion on what the integrated release should or should not
include has always been informally going on. Some people would like to
strictly limit to end-user-facing projects. Some others suggest that
OpenStack should just be about integrating/exposing/scaling smart
functionality that lives in specialized external projects, rather than
trying to outsmart those by writing our own implementation. Some others
are advocates of carefully moving up the stack, and of resisting
further addressing of "IaaS+" services until we complete the "pure IaaS"
space in a satisfactory manner. Some others would like to build a
roadmap based on AWS services. Some others would just add anything that
fits the incubation/integration requirements.

On one side this is a long-term discussion, but on the other we also
need to make quick decisions. With 4 incubated projects, and 2 new ones
currently being proposed, there are a lot of people knocking at the door.

Thanks for reading this braindump this far. I hope this will trigger the
open discussions we need to have, as an open source project, to reach
the next level.


Yes.

Additionally, and I think we've been getting better at this in the 2 
cycles that we've had an all-elected TC, I think we need to learn how to 
say "no" on technical merit - and we need to learn how to say "thank you 
for your effort, but this isn't working out." Breaking up with someone is 
hard to do, but sometimes it's best for everyone involved.


I'm wary of explicit answers in the form of policy at this point - I 
think we've spent far too long using policy as a shield from hard 
questions. The questions of "Is OpenStack IaaS, or should it also be 
PaaS?" I think are useless questions that exist only to further the lives 
of analysts, journalists and bloggers. Instead, I'd love to focus more 
on "what is software that solves problems" and "what is software that 
solves problems that we're in a good position to solve." So if Savanna 
solves problems for users well in a way that makes sense, I'd hate to 
exclude it because it didn't meet a policy's pre-conceived notion that 
Hadoop is 

Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Russell Bryant
On 08/05/2014 12:18 PM, Jay Pipes wrote:
 Hello stackers, TC, Neutron contributors,
 
 At the Nova mid-cycle meetup last week in Oregon, during the discussion
 about the future of nova-network, the topic of nova-network - Neutron
 migration came up.
 
 For some reason, I had been clueless about the details of one of the
 items in the gap analysis the TC had requested [1]. Namely, the 5th
 item, about nova-network - Neutron migration, which is detailed in the
 following specification:
 
 https://review.openstack.org/#/c/101921/12/specs/juno/neutron-migration.rst
 
 The above specification outlines a plan to allow migration of *running*
 instances from an OpenStack deployment using nova-network (both with and
 without multi-host mode) to an OpenStack deployment using Neutron, with
 little to no downtime using live migration techniques and an array of
 post-vm-migrate strategies to wire up the new VIFs to the Neutron ports.
 
 I personally believe that this requirement to support a live migration
 with no downtime of running instances between a nova-network and a
 Neutron deployment *is neither realistic, nor worth the extensive time
 and technical debt needed to make this happen*.
 
 I suggest that it would be better to instead provide good instructions
 for doing cold migration (snapshot VMs in old nova-network deployment,
 store in Swift or something, then launch VM from a snapshot in new
 Neutron deployment) -- which should cover the majority of deployments --
 and then write some instructions for what to look out for when doing a
 custom migration for environments that simply cannot afford any downtime
 and *really* want to migrate to Neutron. For these deployments, it's
 almost guaranteed that they will need to mangle their existing databases
 and do manual data migration anyway -- like RAX did when moving from
 nova-network to Neutron. The variables are too many to list here, and
 the number of deployments actually *needing* this work seems to me to be
 very limited. Someone suggested Metacloud *might* be the only deployment
 that might meet the needs for a live nova-network - Neutron migration.
 Metacloud folks, please do respond here!
 
 In short, I don't think the live migration requirement for nova-network
 to Neutron is either realistic or valuable, and suggest relaxing it to
 be good instructions for cold migration of instances from an older
 deployment to a newer deployment. There are other more valuable things
 that Neutron contributors could focus on, IMO -- such as the DVR
 functionality that brings parity to Neutron with nova-network's
 multi-host mode.
 
 Thoughts?
 
 -jay
 
 [1]
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage

Yes, I agree with what you're suggesting here.  This was the approach I
was advocating for a cycle or two ago.  In a design summit session,
there were folks that seemed to really want to go off and investigate
live migration options.  Given what has (or hasn't) been done so far, I
maintain the same opinion as you've presented here, which is that it's
really not a worthwhile investment overall.  We should just provide some
good documentation on how a cold migration would work.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura


On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by Nova. My 
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using nova boot ... 
--nic port-id=port-uuid ..., requiring no changes to Nova. Later, 
slight enhancements to Nova would allow using commands such as nova 
boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic 
epg-id=endpoint-group-uuid 


-Bob

Thanks
Gary

From: Robert Kukura kuk...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org

Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward


On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team 
that has been implementing this feature for Juno does not see this 
work as an experiment to gather data, but rather as an important 
innovative feature to put in the hands of early adopters in Juno and 
into widespread deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical 
need for a more usable, declarative, intent-based interface for cloud 
application developers and deployers, that can co-exist with Neutron's 
current networking-hardware-oriented API and work nicely with all 
existing core plugins. Additionally, we believe that this declarative 
approach is what is needed to properly integrate advanced services 
into Neutron, and will go a long way towards resolving the 
difficulties so far trying to integrate LBaaS, FWaaS, and VPNaaS APIs 
into the current Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
stable, and hence would be labeled experimental in Juno. This does 
not mean that it is an experiment where to fail fast is an 
acceptable outcome. The sub-team's goal is to stabilize the group 
policy API as quickly as possible,  making any needed changes based on 
early user and operator experience.


The L and M cycles that Mark suggests below to revisit the status 
are a completely different time frame. By the L or M cycle, we should 
be working on a new V3 Neutron API that pulls these APIs together into 
a more cohesive core API. We will not be in a position to do this 
properly without the experience of using the proposed group policy 
extension with the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were 
being identified with the patches, some delay might make sense. But, 
other than Mark's -2 blocking the initial patches from merging, we are 
on track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward. 
 I recognize that this discussion could create frustration for those 
who have invested significant time and energy, but the reality is we 
need to ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user experimentation can produce 
valuable results.  A good experiment should be designed to fail fast 
to enable further trials via rapid iteration.


Merging large changes into the master branch is the exact opposite of 
failing fast.


The master branch deliberately favors small iterative changes over 
time.  Releasing a new version of the proposed API every six months 
limits our ability to learn and make adjustments.


In the past, we've released LBaaS, FWaaS, and VPNaaS as experimental 
APIs.  The results have been very mixed as operators either shy away 
from testing/offering the API or embrace the API with the expectation 
that the community will provide full API support and migration.  In 
both cases, the experiment fails because we either could not get the 
data we need or are unable to make significant changes without 
accepting a non-trivial amount of technical debt via migrations or 
draft API support.


Next Steps
--
Previously, the GPB subteam used a Github account to host the 
development, but the workflows and tooling do not align with 
OpenStack's development model. I'd like to see us create a group 
based policy project in StackForge. 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Gary Kotton
Ok, thanks for the clarification. This means that it will not be done 
automagically as it is today – the tenant will need to create a Neutron port 
and then pass that through.
Thanks
Gary

From: Robert Kukura kuk...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Tuesday, August 5, 2014 at 8:13 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward


On 8/5/14, 11:04 AM, Gary Kotton wrote:
Hi,
Is there any description of how this will be consumed by Nova. My concern is 
this code landing there.
Hi Gary,

Initially, an endpoint's port_id is passed to Nova using nova boot ... --nic 
port-id=port-uuid ..., requiring no changes to Nova. Later, slight 
enhancements to Nova would allow using commands such as nova boot ... --nic 
ep-id=endpoint-uuid ... or nova boot ... --nic epg-id=endpoint-group-uuid 


-Bob
Thanks
Gary

From: Robert Kukura kuk...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

On 8/4/14, 4:27 PM, Mark McClain wrote:
All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be attempting.
* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team that has 
been implementing this feature for Juno does not see this work as an experiment 
to gather data, but rather as an important innovative feature to put in the 
hands of early adopters in Juno and into widespread deployment with a stable 
API as early as Kilo.

The group-based policy BP approved for Juno addresses the critical need for a 
more usable, declarative, intent-based interface for cloud application 
developers and deployers, that can co-exist with Neutron's current 
networking-hardware-oriented API and work nicely with all existing core 
plugins. Additionally, we believe that this declarative approach is what is 
needed to properly integrate advanced services into Neutron, and will go a long 
way towards resolving the difficulties so far trying to integrate LBaaS, FWaaS, 
and VPNaaS APIs into the current Neutron model.

Like any new service API in Neutron, the initial group policy API release will 
be subject to incompatible changes before being declared stable, and hence 
would be labeled experimental in Juno. This does not mean that it is an 
experiment where to fail fast is an acceptable outcome. The sub-team's goal 
is to stabilize the group policy API as quickly as possible,  making any needed 
changes based on early user and operator experience.

The L and M cycles that Mark suggests below to revisit the status are a 
completely different time frame. By the L or M cycle, we should be working on a 
new V3 Neutron API that pulls these APIs together into a more cohesive core 
API. We will not be in a position to do this properly without the experience of 
using the proposed group policy extension with the V2 Neutron API in production.

If we were failing miserably, or if serious technical issues were being 
identified with the patches, some delay might make sense. But, other than 
Mark's -2 blocking the initial patches from merging, we are on track to 
complete the planned work in Juno.

-Bob


Why this email?
---
Our community has been discussing and working on Group Based Policy (GBP) for 
many months.  I think the discussion has reached a point where we need to 
openly discuss a few issues before moving forward.  I recognize that this 
discussion could create frustration for those who have invested significant 
time and energy, but the reality is we need to ensure we are making decisions 
that 
benefit all members of our community (users, operators, developers and vendors).

Experimentation

I like that as a community we are exploring alternate APIs.  The process of 
exploring via real user experimentation can produce valuable results.  A good 
experiment should be designed to fail fast to enable further trials via rapid 
iteration.

Merging large changes into the master branch is the exact opposite of failing 
fast.

The master branch deliberately favors small iterative changes over time.  
Releasing a new version of the proposed API every six months limits our ability 
to learn and make adjustments.

In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental APIs.  The 
results have been very mixed as operators either shy away from testing/offering 
the API or embrace the API with the 

Re: [openstack-dev] DevStack program change

2014-08-05 Thread Dean Troyer
Thanks for the feedback everyone, I've proposed the change in
https://review.openstack.org/112090.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-05 Thread Solly Ross
Just to add my two cents, while I get that people need to run on older versions 
of software,
at a certain point you have to bump the minimum version.  Even libvirt 0.9.11 
is from April 3rd 2012.
That's two and a third years old at this point.  I think at a certain point we 
need to say if you want
to run OpenStack on an older platform, then you'll need to run an older 
OpenStack or backport the required
packages.

Best Regards,
Solly Ross

- Original Message -
 From: Joe Gordon joe.gord...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, July 30, 2014 7:07:13 PM
 Subject: Re: [openstack-dev] [nova] so what do i do about libvirt-python if 
 i'm on precise?
 
 
 
 
 On Jul 30, 2014 3:36 PM, Clark Boylan  cboy...@sapwetik.org  wrote:
  
  On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
   On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
While forcing people to move to a newer version of libvirt is
doable on most environments, do we want to do that now? What is
the benefit of doing so?
   [...]
   
   The only dog I have in this fight is that using the split-out
   libvirt-python on PyPI means we finally get to run Nova unit tests
   in virtualenvs which aren't built with system-site-packages enabled.
   It's been a long-running headache which I'd like to see eradicated
   everywhere we can. I understand though if we have to go about it
   more slowly, I'm just excited to see it finally within our grasp.
   --
   Jeremy Stanley
   
  We aren't quite forcing people to move to newer versions. Only those
  installing nova test-requirements need newer libvirt. This does not
  include people using eg devstack. I think it is reasonable to expect
  people testing tip of nova master to have a reasonably newish test bed
  to test it (its not like the Infra team moves at a really fast pace :)
  ).
 
 Based on
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html
 this patch is breaking people, which is the basis for my concerns. Perhaps
 we should get some further details from Salvatore.
 
  
  Avoiding system site packages in virtualenvs is a huge win particularly
  for consistency of test results. It avoids pollution of site packages
  that can happen differently across test machines. This particular type
  of inconsistency has been the cause of the previously mentioned
  headaches.
 
 I agree this is a huge win, but I am just concerned we don't have any
 deprecation cycle and just roll out a new requirement without a heads up.
 
  
  Clark
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura


On 8/5/14, 1:23 PM, Gary Kotton wrote:
Ok, thanks for the clarification. This means that it will not be done 
automagically as it is today -- the tenant will need to create a 
Neutron port and then pass that through.
Not quite. Using the group policy API, the port will be created 
implicitly when the endpoint is created (unless an existing port_id is 
passed explicitly). All the user will need to do is obtain the port_id 
value from the endpoint and pass this to nova.


The goal is to make passing --nic epg-id=endpoint-group-id just as 
automatic as passing --nic net-id=network-uuid. Code in Nova's 
Neutron integration would handle the epg-id by passing it to 
create_endpoint, and then using the port_id that is returned in the result.
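A rough sketch of how that Nova-side handling could look — hypothetical,
since neither the Nova change nor a create_endpoint client call exists
today; the method and field names are taken from the description above,
not from any real API:

```python
def resolve_nic_to_port(neutron, nic):
    """Turn a requested --nic spec into the Neutron port_id Nova needs.

    neutron is a hypothetical group-policy-aware client; nic is a dict
    of the parsed --nic key/value pairs.
    """
    if "port-id" in nic:
        # Explicit port: works with Nova today, no changes needed.
        return nic["port-id"]
    if "epg-id" in nic:
        # Implicit port: creating an endpoint in the group creates the
        # port, and the returned endpoint carries its port_id.
        endpoint = neutron.create_endpoint(
            endpoint_group_id=nic["epg-id"])
        return endpoint["port_id"]
    raise ValueError("unsupported NIC spec: %r" % (nic,))
```

From there the boot path proceeds exactly as it does for an explicit
--nic port-id today.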


-Bob

Thanks
Gary

From: Robert Kukura kuk...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org

Date: Tuesday, August 5, 2014 at 8:13 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward



On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by Nova. My 
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using nova boot 
... --nic port-id=port-uuid ..., requiring no changes to Nova. 
Later, slight enhancements to Nova would allow using commands such as 
nova boot ... --nic ep-id=endpoint-uuid ... or nova boot ... 
--nic epg-id=endpoint-group-uuid 


-Bob

Thanks
Gary

From: Robert Kukura kuk...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org

Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward


On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be 
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team 
that has been implementing this feature for Juno does not see this 
work as an experiment to gather data, but rather as an important 
innovative feature to put in the hands of early adopters in Juno and 
into widespread deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical 
need for a more usable, declarative, intent-based interface for cloud 
application developers and deployers, that can co-exist with 
Neutron's current networking-hardware-oriented API and work nicely 
with all existing core plugins. Additionally, we believe that this 
declarative approach is what is needed to properly integrate advanced 
services into Neutron, and will go a long way towards resolving the 
difficulties so far trying to integrate LBaaS, FWaaS, and VPNaaS APIs 
into the current Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
stable, and hence would be labeled experimental in Juno. This 
does not mean that it is an experiment where to fail fast is an 
acceptable outcome. The sub-team's goal is to stabilize the group 
policy API as quickly as possible,  making any needed changes based 
on early user and operator experience.


The L and M cycles that Mark suggests below to revisit the status 
are a completely different time frame. By the L or M cycle, we should 
be working on a new V3 Neutron API that pulls these APIs together 
into a more cohesive core API. We will not be in a position to do 
this properly without the experience of using the proposed group 
policy extension with the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were 
being identified with the patches, some delay might make sense. But, 
other than Mark's -2 blocking the initial patches from merging, we 
are on track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward. 
 I recognize that this discussion could create frustration for those 
who have invested significant time and energy, but the reality is we 
need to ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user 

Re: [openstack-dev] [nova] libvirtError: XML error: Missing CPU model name on 2nd level vm

2014-08-05 Thread Solly Ross
Hi Kevin,
Running devstack in a VM is perfectly doable.  Many developers use
devstack inside a VM (I run mine inside a VM launched using libvirt
on KVM).  I can't comment on the issue that you're encountering,
but perhaps something wasn't configured correctly when you launched the
VM?

Best Regards,
Solly Ross

- Original Message -
 From: Chen CH Ji jiche...@cn.ibm.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, August 1, 2014 5:04:16 AM
 Subject: [openstack-dev] [nova] libvirtError: XML error: Missing CPU model 
 name on 2nd level vm
 
 
 
 Hi
 I don't have a real PC, so I created a test env: a 2nd-level env
 (create a KVM virtual machine on top of a physical host, then run
 devstack on the VM).
 I am not sure whether this is doable, because I saw the following error
 when starting the nova-compute service. Is it a bug, or do I need to
 update my configuration instead? Thanks.
 
 
 2014-08-01 17:04:51.532 DEBUG nova.virt.libvirt.config [-] Generated XML
 ('<cpu>\n  <arch>x86_64</arch>\n  <topology sockets="1" cores="1"
 threads="1"/>\n</cpu>\n',) from (pid=16956) to_xml
 /opt/stack/nova/nova/virt/libvirt/config.py:79
 Traceback (most recent call last):
 File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 346, in
 fire_timers
 timer()
 File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 56, in
 __call__
 cb(*args, **kw)
 File /usr/lib/python2.7/dist-packages/eventlet/event.py, line 163, in
 _do_send
 waiter.switch(result)
 File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in
 main
 result = function(*args, **kwargs)
 File /opt/stack/nova/nova/openstack/common/service.py, line 490, in
 run_service
 service.start()
 File /opt/stack/nova/nova/service.py, line 164, in start
 self.manager.init_host()
 File /opt/stack/nova/nova/compute/manager.py, line 1055, in init_host
 self.driver.init_host(host=self.host)
 File /opt/stack/nova/nova/virt/libvirt/driver.py, line 633, in init_host
 self._do_quality_warnings()
 File /opt/stack/nova/nova/virt/libvirt/driver.py, line 616, in
 _do_quality_warnings
 caps = self._get_host_capabilities()
 File /opt/stack/nova/nova/virt/libvirt/driver.py, line 2942, in
 _get_host_capabilities
 libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
 File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in doit
 result = proxy_call(self._autowrap, f, *args, **kwargs)
 File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in
 proxy_call
 rv = execute(f,*args,**kwargs)
 File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in
 tworker
 rv = meth(*args,**kwargs)
 File /usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in baselineCPU
 if ret is None: raise libvirtError ('virConnectBaselineCPU() failed',
 conn=self)
 libvirtError: XML error: Missing CPU model name
 
 Best Regards!
 
 Kevin (Chen) Ji 纪 晨
 
 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirtError: XML error: Missing CPU model name on 2nd level vm

2014-08-05 Thread Rafael Folco
Run cat /proc/cpuinfo and make sure the CPU model and version are listed
in /usr/share/libvirt/cpu_map.xml.

Hope this helps.
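As a concrete sketch of that check — shown here against a small inline
sample, since the real file is /usr/share/libvirt/cpu_map.xml and its
contents vary by libvirt version, and the model name is hard-coded where
on a real host it would come from /proc/cpuinfo:

```shell
# Build a tiny sample of libvirt's cpu_map.xml to grep against.
cat > cpu_map_sample.xml <<'EOF'
<cpus>
  <arch name='x86'>
    <model name='Westmere'/>
    <model name='SandyBridge'/>
  </arch>
</cpus>
EOF

# On a real host: model comes from `grep 'model name' /proc/cpuinfo`
# and the grep target is /usr/share/libvirt/cpu_map.xml.
model=SandyBridge
if grep -q "model name='${model}'" cpu_map_sample.xml; then
    echo "${model} is known to libvirt"
else
    echo "${model} is missing from the cpu map"
fi
```

If the model is missing (common on nested KVM), one frequently
suggested workaround is setting cpu_mode=none in the [libvirt] section
of nova.conf, so Nova stops asking libvirt to baseline the host CPU.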




On Tue, Aug 5, 2014 at 2:49 PM, Solly Ross sr...@redhat.com wrote:

 Hi Kevin,
 Running devstack in a VM is perfectly doable.  Many developers use
 devstack inside a VM (I run mine inside a VM launched using libvirt
 on KVM).  I can't comment on the issue that you're encountering,
 but perhaps something wasn't configured correctly when you launched the
 VM?

 Best Regards,
 Solly Ross

 - Original Message -
  From: Chen CH Ji jiche...@cn.ibm.com
  To: openstack-dev@lists.openstack.org
  Sent: Friday, August 1, 2014 5:04:16 AM
  Subject: [openstack-dev] [nova] libvirtError: XML error: Missing CPU
 model name on 2nd level vm
 
 
 
  Hi
   I don't have a real PC, so I created a test env: a 2nd-level env
   (create a KVM virtual machine on top of a physical host, then run
   devstack on the VM).
   I am not sure whether this is doable, because I saw the following
   error when starting the nova-compute service. Is it a bug, or do I
   need to update my configuration instead? Thanks.
 
 
  2014-08-01 17:04:51.532 DEBUG nova.virt.libvirt.config [-] Generated XML
  ('<cpu>\n  <arch>x86_64</arch>\n  <topology sockets="1" cores="1"
  threads="1"/>\n</cpu>\n',) from (pid=16956) to_xml
  /opt/stack/nova/nova/virt/libvirt/config.py:79
  Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 346,
 in
  fire_timers
  timer()
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 56,
 in
  __call__
  cb(*args, **kw)
  File /usr/lib/python2.7/dist-packages/eventlet/event.py, line 163, in
  _do_send
  waiter.switch(result)
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line
 194, in
  main
  result = function(*args, **kwargs)
  File /opt/stack/nova/nova/openstack/common/service.py, line 490, in
  run_service
  service.start()
  File /opt/stack/nova/nova/service.py, line 164, in start
  self.manager.init_host()
  File /opt/stack/nova/nova/compute/manager.py, line 1055, in init_host
  self.driver.init_host(host=self.host)
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 633, in
 init_host
  self._do_quality_warnings()
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 616, in
  _do_quality_warnings
  caps = self._get_host_capabilities()
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 2942, in
  _get_host_capabilities
  libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in
 doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in
  proxy_call
  rv = execute(f,*args,**kwargs)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in
  tworker
  rv = meth(*args,**kwargs)
  File /usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in
 baselineCPU
  if ret is None: raise libvirtError ('virConnectBaselineCPU() failed',
  conn=self)
  libvirtError: XML error: Missing CPU model name
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
  Beijing 100193, PRC
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Russell Bryant
On 08/05/2014 01:23 PM, Gary Kotton wrote:
 Ok, thanks for the clarification. This means that it will not be done
 automagically as it is today – the tenant will need to create a Neutron
 port and then pass that through.

FWIW, that's the direction we've wanted to move in Nova anyway.  We'd
like to get rid of automatic port creation, but can't do that in the
current stable API.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][TripleO] Heat can't retrieve stack list

2014-08-05 Thread Ben Nemec
We should take this discussion off the dev list.  It's really a usage
question for instack from what I see.  Can you possibly hop on #tripleo
on freenode to discuss?

For reference, I believe I've seen the same problem using instack for
icehouse with a qpid version that's too new.  Not sure if that's
relevant here, but it could be.

-Ben

On 08/05/2014 12:43 AM, Peeyush Gupta wrote:
 Hi all,
 
 I have been trying to set up tripleo using instack.
 When I try to deploy overcloud, I get a heat related 
 error. Here it is:
 
 [stack@localhost ~]$ heat stack-list
 ERROR: Timeout while waiting on RPC response - topic: engine, RPC method: 
 list_stacks info: unknown
 
 Now, heat-engine is running:
 
 
 [stack@localhost ~]$ ps ax | grep heat-engine
 15765 pts/0S+ 0:00 grep --color=auto heat-engine
 25671 ?Ss 0:27 /usr/bin/python /usr/bin/heat-engine --logfile 
 /var/log/heat/engine.log
 
 Here is the heat-engine log:
 
 2014-08-04 07:57:26.321 25671 ERROR heat.engine.resource [-] CREATE : Server 
 SwiftStorage0 [b78e4c74-f446-4941-8402-56cf46401013] Stack overcloud 
 [9bdc71f5-ce31-4a9c-8d72-3adda0a2c66e]
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Traceback (most 
 recent call last):
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 /usr/lib/python2.7/site-packages/heat/engine/resource.py, line 420, in 
 _do_action
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource while not 
 check(handle_data):
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 /usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 545, 
 in check_create_complete
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource return 
 self._check_active(server)
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
 /usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 561, 
 in _check_active
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource raise exc
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Error: Creation of 
 server overcloud-SwiftStorage0-fnl43ebtcsom failed.
 2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource 
 2014-08-04 07:57:27.152 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:27.494 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:27.998 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:28.312 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:28.799 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:29.452 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:30.106 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:30.516 25671 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-08-04 07:57:31.499 25671 WARNING heat.engine.service [-] Stack create 
 failed, status FAILED
 
 Any idea how to figure this error out?
  
 Thanks,
 Peeyush Gupta
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Ben Nemec
On 08/05/2014 10:51 AM, ZZelle wrote:
 Hi,
 
 
 I like the idea ... with a complex change, it could be useful for
 understanding to split it into smaller changes during development.

I don't understand this.  If it's a complex change that you need
multiple commits to keep track of locally, why wouldn't reviewers want
the same thing?  Squashing a bunch of commits together solely so you
have one review for Gerrit isn't a good thing.  Is it just the warning
message that git-review prints when you try to push multiple commits
that is the problem here?
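For what it's worth, the squash-before-push workflow under discussion
needs no new git-review option at all; here is a minimal sketch in a
throwaway repo (branch names, file names and messages are just examples):

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo" && cd "$repo"
git config user.email dev@example.com
git config user.name Dev

# One base commit, then messy WIP commits on a feature branch.
echo base > f && git add f && git commit -qm base
base=$(git rev-parse HEAD)
git checkout -qb feature
echo one >> f && git commit -qam "wip 1"
echo two >> f && git commit -qam "wip 2"

# Squash the feature work into a single commit on a review branch,
# keeping the WIP history intact on 'feature'.
git checkout -qb review "$base"
git merge -q --squash feature
git commit -qm "feature, squashed for review"

# 'git review' run from 'review' would now push exactly one change.
git rev-list --count "$base"..review    # -> 1
```

The local WIP history survives on the feature branch, and Gerrit only
ever sees the single squashed commit.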

 
 
 Do we need to expose such a feature under git review? We could define a new
 subcommand: git reviewflow?
 
 
 Cédric,
 ZZelle@IRC
 
 
 
 On Tue, Aug 5, 2014 at 4:49 PM, Ryan Brown rybr...@redhat.com wrote:
 


 On 08/05/2014 09:27 AM, Sylvain Bauza wrote:

 Le 05/08/2014 13:06, Ryan Brown a écrit :
 -1 to this as git-review default behaviour. Ideally, branches should be
 identical in between Gerrit and local Git.

 Probably not as default behaviour (people who don't want that workflow
 would be driven mad!), but I think enough folks would want it that it
 should be available as an option.

 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has
 to be done on patch A, then the local branch can be rebased
 interactively on last master, edit patch A by doing an intermediate
 patch, then squash the change, and pick the later patches (B to E)

 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how
 many dependencies are outdated in Gerrit because developers work on
 separate branches for each single commit while they could work locally
  on a single branch? I can't imagine how often errors could happen if
 we don't force by default to squash commits before sending them to
 Gerrit)

 -Sylvain
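The rebase-edit-squash flow described above can be sketched end-to-end in a throwaway repo. This is a demo under assumptions, not git-review itself: a fixup commit plus `--autosquash` stands in for hand-editing the rebase todo list, and `GIT_SEQUENCE_EDITOR=:` accepts the generated todo list unchanged so no editor pops up.

```shell
# Demo of "edit patch A, squash, keep later patches" in a throwaway repo.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd "$(mktemp -d)" && git init -q
git commit -q --allow-empty -m "M" && git tag base   # stands in for master
git checkout -q -b feature
echo a  > a.txt && git add a.txt && git commit -q -m "patch A"
echo b  > b.txt && git add b.txt && git commit -q -m "patch B"
# "Edit patch A": record the change as a fixup commit, then fold it in.
echo a2 >> a.txt && git add a.txt && git commit -q --fixup=':/patch A'
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash base
git log --format=%s base..feature    # prints: patch B, patch A
```

The end state is the same as an interactive rebase where the fixup line was moved under "patch A" and marked squash, but it is scriptable.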

 Cheers,


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 I am well aware this may be straying into feature creep territory, and
 it wouldn't be terrible if this weren't implemented.

 --
 Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt][baremetal] Nova Baremetal's Usage of Components from Libvirt

2014-08-05 Thread Dan Smith
 The second option would be to make a copy of the old ImageCacheManager
 in the Baremetal directory, and have the Baremetal driver
 use that.  This seems to me to be the better option, since it means
 that when the Baremetal driver is removed, the old ImageCacheManager
 code goes with it, without someone having to manually remove it.
 
 I might get shot in the head, but I think option 2 makes the most sense.
 There is no need to do _new_ work in support of a dead codebase.

Agreed, making a copy isn't the end of the world, and we know we're
going to delete it soonish anyway. We've asked the ironic folks to do a
lot to make the baremetal transition easy and I see no reason to add a
refactor dependency to the list so it can be deleted in six months :)

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Jay Pipes

On 08/05/2014 01:13 PM, Robert Kukura wrote:


On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by Nova. My
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using nova boot ...
--nic port-id=port-uuid ..., requiring no changes to Nova. Later,
slight enhancements to Nova would allow using commands such as nova
boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic
epg-id=endpoint-group-uuid 


Hi Bob,

How exactly is the above a friendlier API for the main user of Neutron, 
which is Nova? I thought one of the main ideas behind the GBP stuff was 
to create a more declarative and intuitive API for users of Neutron -- 
i.e. Nova -- to use in constructing needed networking objects. The above 
just seems to me to be exchanging one low-level object (port) with 
another low-level object (endpoint or endpoint group)?


Perhaps the disconnect is due to the term endpoint being used, which, 
everywhere else in the OpenStack universe, means something entirely 
different from GBP.


I guess, based on my understanding of the *intent* of the GBP API, I 
would have expected an API more like:


 nova boot ... --networking-template UUID

where --networking-template would refer to a network, subnet topology, 
IP assignment policy, collection of security groups and firewall 
policies that the tenant had established prior to booting an instance... 
thereby making the API more intuitive and less cluttered.


Or is it that I just don't understand this new endpoint terminology?

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Collins, Sean
On Tue, Aug 05, 2014 at 12:50:45PM EDT, Monty Taylor wrote:
 However, I think the cost to providing that path far outweighs
 the benefit in the face of other things on our plate.

Perhaps those large operators that are hoping for a
Nova-Network-Neutron zero-downtime live migration, could dedicate
resources to this requirement? It is my direct experience that features
that are important to a large organization will require resources
from that very organization to be completed.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Kevin Benton
Specifying an endpoint group would achieve the --networking-template
effects you described. The endpoint group would have all of the security
policies, IP allocation policies, connectivity policies, etc. already setup.


On Tue, Aug 5, 2014 at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/05/2014 01:13 PM, Robert Kukura wrote:


 On 8/5/14, 11:04 AM, Gary Kotton wrote:

 Hi,
 Is there any description of how this will be consumed by Nova. My
 concern is this code landing there.

 Hi Gary,

 Initially, an endpoint's port_id is passed to Nova using nova boot ...
 --nic port-id=port-uuid ..., requiring no changes to Nova. Later,
 slight enhancements to Nova would allow using commands such as nova
 boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic
 epg-id=endpoint-group-uuid 


 Hi Bob,

 How exactly is the above a friendlier API for the main user of Neutron,
 which is Nova? I thought one of the main ideas behind the GBP stuff was to
 create a more declarative and intuitive API for users of Neutron -- i.e.
 Nova -- to use in constructing needed networking objects. The above just
 seems to me to be exchanging one low-level object (port) with another
 low-level object (endpoint or endpoint group)?

 Perhaps the disconnect is due to the term endpoint being used, which,
 everywhere else in the OpenStack universe, means something entirely
 different from GBP.

 I guess, based on my understanding of the *intent* of the GBP API, I would
 have expected an API more like:

  nova boot ... --networking-template UUID

 where --networking-template would refer to a network, subnet topology, IP
 assignment policy, collection of security groups and firewall policies that
 the tenant had established prior to booting an instance... thereby making
 the API more intuitive and less cluttered.

 Or is it that I just don't understand this new endpoint terminology?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Well-tested guides for OpenStack Icehouse installation and Instance creation with Neutron

2014-08-05 Thread chayma ghribi
Hi all,


I want to share with you our well tested OpenStack Icehouse Installation
Guide for Ubuntu 14.04.

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

If you want to create your first instance with Neutron, follow the
instructions in our VM creation guide available here:

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/Create-your-first-instance-with-Neutron.rst

Hope this will be helpful!
Your questions and suggestions are welcome :)


Regards,

Chaima Ghribi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt][baremetal] Nova Baremetal's Usage of Components from Libvirt

2014-08-05 Thread Russell Bryant
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/05/2014 02:49 PM, Dan Smith wrote:
 The second option would be to make a copy of the old
 ImageCacheManager in the Baremetal directory, and have the
 Baremetal driver use that.  This seems to me to be the better
 option, since it means that when the Baremetal driver is
 removed, the old ImageCacheManager code goes with it, without
 someone having to manually remove it.
 
 I might get shot in the head, but I think option 2 makes the most
 sense. There is no need to do _new_ work in support of a dead
 codebase.
 
 Agreed, making a copy isn't the end of the world, and we know
 we're going to delete it soonish anyway. We've asked the ironic
 folks to do a lot to make the baremetal transition easy and I see
 no reason to add a refactor dependency to the list so it can be
 deleted in six months :)

+1.  Just copy it.  More work seems like wasted effort.

- -- 
Russell Bryant
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlPhMYwACgkQFg9ft4s9SAb2OgCcDiyXhV55P9++SBcM9iCouw8L
nroAnRkPDFPkLRlsqa/dEr5HUaBbIAeF
=1h0p
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-05 Thread Devananda van der Veen
Hi all!

The following idea came out of last week's midcycle for how to improve our
spec process and tracking on launchpad. I think most of us liked it, but of
course, not everyone was there, so I'll attempt to write out what I recall.

This would apply to new specs proposed for Kilo (since the new spec
proposal deadline has already passed for Juno).


First, create a blueprint in launchpad and populate it with your spec's
heading. Then, propose a spec with just the heading (containing a link to
the BP), Problem Description, and first paragraph outlining your Proposed
change.

This will be given an initial, high-level review to determine whether it is
in scope and in alignment with project direction, which will be reflected
on the review comments, and, if affirmed, by setting the blueprint's
Direction field to Approved.

At this point, if affirmed, you should proceed with filling out the entire
spec, and the remainder of the process will continue as it was during Juno.
Once the spec is approved, update launchpad to set the specification URL to
the spec's location on https://specs.openstack.org/openstack/ironic-specs/
and a member of the team (probably me) will update the release target,
priority, and status.


I believe this provides two benefits. First, it should give quicker initial
feedback to the proposer on whether their change is in or out of scope, which
can save considerable time if the proposal is out of scope. Second, it
allows us to track well-aligned specs on Launchpad before they are
completely approved. We observed that several specs were approved at nearly
the same time as the code was approved. Due to the way we were using LP
this cycle, it meant that LP did not reflect the project's direction in
advance of landing code, which is not what we intended. This may have been
confusing, and I think this will help next cycle. FWIW, several other
projects have observed a similar problem with spec-launchpad interaction,
and are adopting similar practices for Kilo.


Comments/discussion welcome!

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:15 AM, Angus Salkeld angus.salk...@rackspace.com
wrote:

 On Tue, 2014-08-05 at 03:18 +0400, Yuriy Taraday wrote:
  Hello, git-review users!
 
 
  I'd like to gather feedback on a feature I want to implement that
  might turn out useful for you.
 
 
  I like using Git for development. It allows me to keep track of
  current development process, it remembers everything I ever did with
  the code (and more).
  I also really like using Gerrit for code review. It provides clean
  interfaces, forces clean histories (who needs to know that I changed
   one line of code at 3am on Monday?) and allows productive
  collaboration.
  What I really hate is having to throw away my (local, precious for me)
  history for all change requests because I need to upload a change to
  Gerrit.

 I just create a short-term branch to record this.


I tend to use branches that are squashed down to one commit after the first
upload and that's it. I'd love to keep all history during feature
development, not just the tip of it.


 
  That's why I want to propose making git-review to support the workflow
  that will make me happy. Imagine you could do smth like this:
 
 
  0. create new local branch;
 
 
  master: M--
   \
  feature:  *
 
 
  1. start hacking, doing small local meaningful (to you) commits;
 
 
  master: M--
   \
  feature:  A-B-...-C
 
 
  2. since hacking takes tremendous amount of time (you're doing a Cool
  Feature (tm), nothing less) you need to update some code from master,
   so you're just merging master into your branch (i.e. using Git as
  you'd use it normally);
 
  master: M---N-O-...
           \   \  \
  feature:  A-B-...-C-D-...
 
 
  3. and now you get the first version that deserves to be seen by
  community, so you run 'git review', it asks you for desired commit
  message, and poof, magic-magic all changes from your branch is
  uploaded to Gerrit as _one_ change request;
 
  master: M---N-O-...
           \   \  \        E* = uploaded
  feature:  A-B-...-C-D-...-E
 
 
  4. you repeat steps 1 and 2 as much as you like;
  5. and all consecutive calls to 'git review' will show you last commit
  message you used for upload and use it to upload new state of your
  local branch to Gerrit, as one change request.
 
 
  Note that during this process git-review will never run rebase or
  merge operations. All such operations are done by user in local branch
  instead.
 
 
  Now, to the dirty implementations details.
 
 
  - Since suggested feature changes default behavior of git-review,
  it'll have to be explicitly turned on in config
  (review.shadow_branches? review.local_branches?). It should also be
  implicitly disabled on master branch (or whatever is in .gitreview
  config).
  - Last uploaded commit for branch branch-name will be kept in
  refs/review-branches/branch-name.
  - For every call of 'git review' it will find latest commit in
  gerrit/master (or remote and branch from .gitreview), create a new one
  that will have that commit as its parent and a tree of current commit
  from local branch as its tree.
  - While creating new commit, it'll open an editor to fix commit
  message for that new commit taking it's initial contents from
  refs/review-branches/branch-name if it exists.
  - Creating this new commit might involve generating a temporary bare
  repo (maybe even with shared objects dir) to prevent changes to
  current index and HEAD while using bare 'git commit' to do most of the
  work instead of loads of plumbing commands.
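The core trick described above — creating a commit whose parent is the Gerrit target branch tip but whose tree is the current branch tip, without touching HEAD or the local branch — can be sketched with plumbing commands. This is a throwaway-repo demo of the idea, not git-review code; the ref name follows the proposal above.

```shell
# Build a single "review commit" from a messy local branch using plumbing.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd "$(mktemp -d)" && git init -q
git commit -q --allow-empty -m "master tip" && git tag gerrit-master
git checkout -q -b feature
echo one  > f && git add f && git commit -q -m "wip 1"
echo two >> f && git add f && git commit -q -m "wip 2"
TREE=$(git rev-parse 'HEAD^{tree}')                  # tree of branch tip
NEW=$(git commit-tree "$TREE" -p gerrit-master -m "Add cool feature")
git update-ref refs/review-branches/feature "$NEW"   # remember last upload
git rev-list --count gerrit-master..feature    # 2 -- messy local history
git rev-list --count gerrit-master.."$NEW"     # 1 -- single review commit
git diff --quiet feature "$NEW" && echo "same contents"
```

Since `git commit-tree` never moves HEAD or the index, the local branch history is left completely intact, which is exactly the property the proposal wants.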
 
 
  Note that such approach won't work for uploading multiple change
   requests without some complex tweaks, but I imagine later we can
  improve it and support uploading several interdependent change
  requests from several local branches. We can resolve dependencies
  between them by tracking latest merges (if branch myfeature-a has been
  merged to myfeature-b then change request from myfeature-b will depend
  on change request from myfeature-a):
 
   master:       M---N-O-...
                  \   \  \          E* = uploaded
   myfeature-a:    A-B-...-C-D-...-E
                      \           \     J* = uploaded
   myfeature-b:        F-...-G-I---J
 
 
  This improvement would be implemented later if needed.
 
 
   I hope such a feature seems useful not just to me, and I'm looking
  forward to some comments on it.

 Hi Yuriy,

 I like my local history matching what is up for review and
 don't value the interim messy commits (I make a short term branch to
 save the history so I can go back to it - if I mess up a merge).


You'll still get this history in those special refs. But in your branch
you'll have your own history.



 Tho' others might love this idea.

 -Angus



-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-05 Thread Charles Carlino
Hi all,

I need some help regarding a bug [1] I'm working on.

The bug is basically a request to make the mac address of a port updatable.  
The use case is a baremetal (Ironic) node that has a bad NIC which must be 
replaced, resulting in a new mac address.  The bad NIC has an associated 
neutron port which of course holds the NIC's IP address.  The reason to make 
mac_address updatable (as opposed to having the user create a new port and 
delete the old one) is that during the recovery process the IP address must be 
retained and assigned to the new NIC/port, which is not guaranteed in the above 
work-around.

I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge 
plugins but I'm not sure how to handle the the other plugins since I don't know 
if the associated backends are prepared to handle such updates.  My first 
thought is to disallow the update in the other plugins, but I would really 
appreciate your advice.

Kind regards,
Chuck Carlino

[1] https://bugs.launchpad.net/neutron/+bug/1341268
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 3:06 PM, Ryan Brown rybr...@redhat.com wrote:

  On 08/04/2014 07:18 PM, Yuriy Taraday wrote:
  snip

 +1, this is definitely a feature I'd want to see.

 Currently I run two branches bug/LPBUG#-local and bug/LPBUG# where
 the local is my full history of the change and the other branch is the
 squashed version I send out to Gerrit.


And I'm too lazy to keep switching between these branches :)
Great, you're the first to support this feature!

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-05 Thread Kevin Benton
How are you implementing the change? It would be good to get to see some
code in a review to get an idea of what needs to be updated.

If it's just a change in the DB base plugin, just let those changes
propagate to the plugins that haven't overridden the inherited behavior.


On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino chuckjcarl...@gmail.com
wrote:

 Hi all,

 I need some help regarding a bug [1] I'm working on.

 The bug is basically a request to make the mac address of a port
 updatable.  The use case is a baremetal (Ironic) node that has a bad NIC
 which must be replaced, resulting in a new mac address.  The bad NIC has an
 associated neutron port which of course holds the NIC's IP address.  The
 reason to make mac_address updatable (as opposed to having the user create
 a new port and delete the old one) is that during the recovery process the
 IP address must be retained and assigned to the new NIC/port, which is not
 guaranteed in the above work-around.

 I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge
 plugins but I'm not sure how to handle the the other plugins since I don't
 know if the associated backends are prepared to handle such updates.  My
 first thought is to disallow the update in the other plugins, but I would
 really appreciate your advice.

 Kind regards,
 Chuck Carlino

 [1] https://bugs.launchpad.net/neutron/+bug/1341268

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-05 Thread Amir Sadoughi
I agree with Kevin here. Just a note, don't bother with openvswitch and 
linuxbridge plugins as they are marked for deletion this cycle, imminently 
(already deprecated)[0].

Amir

[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-04-21.02.html
 Announcements 2e.

From: Kevin Benton [blak...@gmail.com]
Sent: Tuesday, August 05, 2014 2:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] make mac address updatable: which 
plugins?

How are you implementing the change? It would be good to get to see some code 
in a review to get an idea of what needs to be updated.

If it's just a change in the DB base plugin, just let those changes propagate 
to the plugins that haven't overridden the inherited behavior.


On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino 
chuckjcarl...@gmail.com wrote:
Hi all,

I need some help regarding a bug [1] I'm working on.

The bug is basically a request to make the mac address of a port updatable.  
The use case is a baremetal (Ironic) node that has a bad NIC which must be 
replaced, resulting in a new mac address.  The bad NIC has an associated 
neutron port which of course holds the NIC's IP address.  The reason to make 
mac_address updatable (as opposed to having the user create a new port and 
delete the old one) is that during the recovery process the IP address must be 
retained and assigned to the new NIC/port, which is not guaranteed in the above 
work-around.

I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge 
plugins but I'm not sure how to handle the the other plugins since I don't know 
if the associated backends are prepared to handle such updates.  My first 
thought is to disallow the update in the other plugins, but I would really 
appreciate your advice.

Kind regards,
Chuck Carlino

[1] https://bugs.launchpad.net/neutron/+bug/1341268

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Jay Pipes

On 08/05/2014 03:24 PM, Kevin Benton wrote:

Specifying an endpoint group would achieve the --networking-template
effects you described. The endpoint group would have all of the security
policies, IP allocation policies, connectivity policies, etc. already setup.


OK. Is there any reason it was called an endpoint group then? Perhaps 
I am missing something, but the term endpoint is well-used and 
understood to mean something entirely different in the OpenStack 
ecosystem...


Best,
-jay


On Tue, Aug 5, 2014 at 1:04 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 08/05/2014 01:13 PM, Robert Kukura wrote:


On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by
Nova. My
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using nova
boot ...
--nic port-id=port-uuid ..., requiring no changes to Nova. Later,
slight enhancements to Nova would allow using commands such as nova
boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic
epg-id=endpoint-group-uuid 


Hi Bob,

How exactly is the above a friendlier API for the main user of
Neutron, which is Nova? I thought one of the main ideas behind the
GBP stuff was to create a more declarative and intuitive API for
users of Neutron -- i.e. Nova -- to use in constructing needed
networking objects. The above just seems to me to be exchanging one
low-level object (port) with another low-level object (endpoint or
endpoint group)?

Perhaps the disconnect is due to the term endpoint being used,
which, everywhere else in the OpenStack universe, means something
entirely different from GBP.

I guess, based on my understanding of the *intent* of the GBP API, I
would have expected an API more like:

  nova boot ... --networking-template UUID

where --networking-template would refer to a network, subnet
topology, IP assignment policy, collection of security groups and
firewall policies that the tenant had established prior to booting
an instance... thereby making the API more intuitive and less cluttered.

Or is it that I just don't understand this new endpoint terminology?

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:27 PM, Sylvain Bauza sba...@redhat.com wrote:

 -1 to this as git-review default behaviour.


I don't suggest making it the default behavior. As I wrote, there will
definitely be a config option to turn it on.


 Ideally, branches should be identical in between Gerrit and local Git.


The thing is that there are no feature branches in Gerrit, just some number
of independent commits (patchsets). And you'll even get a log of those
locally in special refs!


 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has to
 be done on patch A, then the local branch can be rebased interactively on
 last master, edit patch A by doing an intermediate patch, then squash the
 change, and pick the later patches (B to E)


And that works up to the point when your change request evolves over several
months and there's no easy way to dig up why you changed that default or how
this algorithm ended up in its current shape. You can't simply run bisect to
find what you broke 10 patchsets ago. Git was designed to make keeping
branches easy, most of them local, and we can't properly use them.


 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how many
 dependencies are outdated in Gerrit because developers work on separate
 branches for each single commit while they could work locally on a single
 branch ? I can't iimagine how often errors could happen if we don't force
 by default to squash commits before sending them to Gerrit)


I don't quite get the reason for a --no-squash option. With current
git-review there's no squashing at all: you either upload all outstanding
commits or you go and change something yourself. With my suggested approach
you don't squash (in terms of rebasing) anything; you just create a new
commit with the very same contents as in your branch.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Sumit Naiksatam
That's right Kevin, EPG (and its association to the L2/3_Policy)
capture the attributes which would represent the network-template
being referenced here.

Jay, what Bob mentioned here was an option to use the endpoint as a
one-to-one replacement for the option of using a Neutron port. This is
more so in the context of providing an evolutionary path (from the way
Nova currently does it using a pre-defined port). However, if it makes
sense to make Nova aware of the EPG right at the outset, then that is
even better.

I have also noted your suggestion on clarifying the endpoint
terminology. This was already done in one of the patches you had
reviewed earlier, and I will do that in the first patch as well (where
you pointed it out now).

Thanks,
~Sumit.

On Tue, Aug 5, 2014 at 12:24 PM, Kevin Benton blak...@gmail.com wrote:
 Specifying an endpoint group would achieve the --networking-template effects
 you described. The endpoint group would have all of the security policies,
 IP allocation policies, connectivity policies, etc. already setup.


 On Tue, Aug 5, 2014 at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/05/2014 01:13 PM, Robert Kukura wrote:


 On 8/5/14, 11:04 AM, Gary Kotton wrote:

 Hi,
 Is there any description of how this will be consumed by Nova. My
 concern is this code landing there.

 Hi Gary,

 Initially, an endpoint's port_id is passed to Nova using nova boot ...
 --nic port-id=port-uuid ..., requiring no changes to Nova. Later,
 slight enhancements to Nova would allow using commands such as nova
 boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic
 epg-id=endpoint-group-uuid 


 Hi Bob,

 How exactly is the above a friendlier API for the main user of Neutron,
 which is Nova? I thought one of the main ideas behind the GBP stuff was to
 create a more declarative and intuitive API for users of Neutron -- i.e.
 Nova -- to use in constructing needed networking objects. The above just
 seems to me to be exchanging one low-level object (port) with another
 low-level object (endpoint or endpoint group)?

 Perhaps the disconnect is due to the term endpoint being used, which,
 everywhere else in the OpenStack universe, means something entirely
 different from GBP.

 I guess, based on my understanding of the *intent* of the GBP API, I would
 have expected an API more like:

  nova boot ... --networking-template UUID

 where --networking-template would refer to a network, subnet topology, IP
 assignment policy, collection of security groups and firewall policies that
 the tenant had established prior to booting an instance... thereby making
 the API more intuitive and less cluttered.

 Or is it that I just don't understand this new endpoint terminology?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton



Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-05 Thread Roman Prykhodchenko
Hi!

I think this is a nice idea indeed. Do you plan to use this process starting 
from Juno or as soon as possible?


- Roman


On Aug 5, 2014, at 22:33, Devananda van der Veen devananda@gmail.com 
wrote:

 Hi all!
 
 The following idea came out of last week's midcycle for how to improve our 
 spec process and tracking on launchpad. I think most of us liked it, but of 
 course, not everyone was there, so I'll attempt to write out what I recall.
 
 This would apply to new specs proposed for Kilo (since the new spec proposal 
 deadline has already passed for Juno).
 
 
 First, create a blueprint in launchpad and populate it with your spec's 
 heading. Then, propose a spec with just the heading (containing a link to the 
 BP), Problem Description, and first paragraph outlining your Proposed change. 
 
 This will be given an initial, high-level review to determine whether it is 
 in scope and in alignment with project direction, which will be reflected on 
 the review comments, and, if affirmed, by setting the blueprint's Direction 
 field to Approved.
 
 At this point, if affirmed, you should proceed with filling out the entire 
 spec, and the remainder of the process will continue as it was during Juno. 
 Once the spec is approved, update launchpad to set the specification URL to 
 the spec's location on https://specs.openstack.org/openstack/ironic-specs/ 
 and a member of the team (probably me) will update the release target, 
 priority, and status.
 
 
 I believe this provides two benefits. First, it should give the proposer 
 quicker initial feedback on whether their change is in or out of scope, which 
 can save considerable time if the proposal is out of scope. Second, it allows 
 us to track well-aligned specs on Launchpad before they are completely 
 approved. We observed that several specs were approved at nearly the same 
 time as the code was approved. Due to the way we were using LP this cycle, it 
 meant that LP did not reflect the project's direction in advance of landing 
 code, which is not what we intended. This may have been confusing, and I 
 think this will help next cycle. FWIW, several other projects have observed a 
 similar problem with spec-launchpad interaction, and are adopting similar 
 practices for Kilo.
 
 
 Comments/discussion welcome!
 
 -Deva





Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 6:49 PM, Ryan Brown rybr...@redhat.com wrote:

 On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
  On 05/08/2014 13:06, Ryan Brown wrote:
  -1 to this as git-review default behaviour. Ideally, branches should be
  identical between Gerrit and local Git.

 Probably not as default behaviour (people who don't want that workflow
 would be driven mad!), but I think enough folks would want it that it
 should be available as an option.


This would definitely be a feature that only some users would turn on in
their config files.


 I am well aware this may be straying into feature creep territory, and
 it wouldn't be terrible if this weren't implemented.


I'm not sure I understand what you mean by this...

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-05 Thread Jay Pipes

On 08/05/2014 03:23 PM, Collins, Sean wrote:

On Tue, Aug 05, 2014 at 12:50:45PM EDT, Monty Taylor wrote:

However, I think the cost to providing that path far outweighs
the benefit in the face of other things on our plate.


Perhaps those large operators that are hoping for a
Nova-Network-Neutron zero-downtime live migration, could dedicate
resources to this requirement? It is my direct experience that features
that are important to a large organization will require resources
from that very organization to be completed.


Indeed, that's partly why I called out Metacloud in the original post, 
as they were brought up as a deployer with this potential need. Please, 
if there are any other shops that:


* Currently deploy nova-network
* Need to move to Neutron
* Their tenants cannot tolerate any downtime due to a cold migration

Please do comment on this thread and speak up.

Best,
-jay



[openstack-dev] [python-openstacksdk] Meeting minutes for 2014-08-05

2014-08-05 Thread Brian Curtin
The following logs are from today's python-openstacksdk meeting

Minutes:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-08-05-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-08-05-19.00.txt
Log:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-08-05-19.00.log.html

The next meeting is scheduled for 2014-08-12 at 1900 UTC


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 7:51 PM, ZZelle zze...@gmail.com wrote:

 Hi,


  I like the idea ... with a complex change, it could be useful for
  understanding to split it into smaller changes during development.


  Do we need to expose such a feature under git review? We could define a new
  subcommand, say git reviewflow?


Yes. I think we should definitely make it an enhancement to the 'git review'
command because it's essentially the same 'git review' control flow with an
extra preparation step and a slightly shifted upload source. git-review is a
magic command that does what you need, finishing with a change request
upload. And this is exactly what I want here.

-- 

Kind regards, Yuriy.


[openstack-dev] [Neutron][oslo] Problem installing oslo.config-1.4.0.0a3 from .whl files

2014-08-05 Thread Carl Baldwin
Hi,

I noticed this yesterday afternoon.  I tried to run pep8 and unit
tests on a patch I was going to submit.  It failed with an error that
no package satisfying oslo.config could be found [1].  I went to pypi
and saw that the version appears to be available [2] but still
couldn't install it.

I tried to activate the .tox/pep8 virtual environment and install the
version explicitly.  Interestingly, that worked in one gerrit repo for
Neutron [3] but not the other [4].  These two virtual envs are on the
same machine.  I ran git clean -fdx to start over and now neither
virtualenv can install it.

Anyone have any idea what is going on?  It seems to be related to the
fact that oslo.config is now uploaded as .whl files, whatever those
are.  Why is it that my system cannot handle these?  I noticed that
oslo.config is now available only as .whl in the 1.4.0.0aN versions
but used to be available as .tar.gz files.

Carl

[1] http://paste.openstack.org/show/90651/
[2] https://pypi.python.org/pypi/oslo.config
[3] http://paste.openstack.org/show/90674/
[4] http://paste.openstack.org/show/90675/



Re: [openstack-dev] [Infra] Meeting Tuesday August 5th at 19:00 UTC

2014-08-05 Thread Elizabeth K. Joseph
On Mon, Aug 4, 2014 at 10:12 AM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday August 5th, at 19:00 UTC in #openstack-meeting

Thanks to everyone who attended, meeting minutes and log are now available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-05-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-05-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-05-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [OpenStack-Infra] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 8:20 PM, Varnau, Steve (Trafodion) 
steve.var...@hp.com wrote:

  Yuriy,



 It looks like this would automate a standard workflow that my group often
 uses: multiple commits, create “delivery” branch, git merge --squash, git
 review.  That looks really useful.



 Having it be repeatable is a bonus.
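
A self-contained sketch of the squash-and-review workflow described above
(repository, branch and commit names are made up for illustration; the final
upload to Gerrit is left as a comment since it needs a configured remote):

```shell
set -e
# Throwaway repo so the sketch runs standalone.
workdir=$(mktemp -d) && cd "$workdir"
git init -q . && git config user.email dev@example.com && git config user.name dev
echo base > file && git add file && git commit -qm "base"
main=$(git rev-parse --abbrev-ref HEAD)

# Messy local history lives on a feature branch.
git checkout -qb feature
echo try1 >> file && git commit -qam "wip: first attempt"
echo try2 >> file && git commit -qam "wip: scratch that, try again"

# Clean "delivery" branch: squash all the feature work into one commit.
git checkout -qb delivery "$main"
git merge --squash -q feature
git commit -qm "Add the feature as a single reviewable change"
git log --oneline
# git review   # would upload the single squashed commit
```

The delivery branch ends up exactly two commits long (base plus the squashed
change), while the messy history remains intact on the feature branch.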


That's great! I'm glad to hear that there are more and more supporters for
it.


  Per the last bullet of the implementation, I would not require that the
 current index/HEAD be left unmodified. A checkout back to the working branch
 can be done at the end, right?


To make this magic commit we'll have to backtrack HEAD to the latest commit
in master, then load the tree from the latest commit in the feature branch
into the index, and then do the commit. To do this properly without hurting
the worktree, messing up the index, or losing HEAD, I think it'd be safer to
create a very small clone. As a bonus you won't have to stash your local
changes or current index to run 'git review'.
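
As an illustration only (not the actual implementation), the "magic commit"
described above can be built with plain Git plumbing, without ever moving
HEAD or touching the index and worktree of the feature branch:

```shell
set -e
# Throwaway repo so the sketch runs standalone.
workdir=$(mktemp -d) && cd "$workdir"
git init -q . && git config user.email dev@example.com && git config user.name dev
echo base > file && git add file && git commit -qm "base"
main=$(git rev-parse --abbrev-ref HEAD)
git checkout -qb feature
echo one >> file && git commit -qam "wip 1"
echo two >> file && git commit -qam "wip 2"

# Take the feature branch's final tree and commit it directly on top of
# the mainline tip -- HEAD, index and worktree are never moved.
tree=$(git rev-parse feature^{tree})
squashed=$(git commit-tree "$tree" -p "refs/heads/$main" -m "feature, squashed for review")
echo "squashed commit: $squashed"
# git push gerrit "$squashed:refs/for/$main"   # roughly what 'git review' would do
```

The new commit has the mainline tip as its sole parent and carries the exact
tree of the feature branch, while we are still standing on `feature`.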

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Trove] Datastore/Versions API improvements

2014-08-05 Thread Craig Vyvial
On Wed, Jul 30, 2014 at 10:10 AM, Denis Makogon dmako...@mirantis.com
wrote:

 Hello, Stackers.



 I’d like to gather Trove team around question related to
 Datastores/Version API responses (request/response payloads and HTTP codes).

 Small INFO

 When deployer creates datastore and versions for it Troves` backend
 receives request to store DBDatastore and DBDatastoreVersion objects with
 certain parameters. The most interesting attribute of DBDatastoreVersion is
 “packages” - it’s being stored as String object (and it’s totally fine).
 But when we’re trying to query given datastore version through the
 Datastores API attribute “packages” is being returned as String object too.
 And it seems that it breaks response pattern - “If given attribute
 represents complex attribute, such as: list, dict, tuple - it should be
 returned as is.

 So, the first question is - are we able to change it in terms of V1?

If it does not break the public API then I do not think there is an issue
with making a change.
I made a change not long ago around making the packages a list that's sent
to the guest. I'm a bit confused about what you are wanting to change here.
Are you suggesting changing the data that is stored for packages (string to
a json.dumps list or something)?
Or making the model parse the string into a list when you request the
packages for a datastore version?
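
If the second option is what's meant, a hypothetical model-side helper
(names are made up for illustration, not Trove's actual code) could look
like this:

```python
def packages_to_list(packages):
    """Parse the stored comma-separated packages string into a list.

    Hypothetical sketch: the DB keeps "packages" as a plain string,
    and the API layer turns it into a proper complex attribute.
    """
    if not packages:
        return []
    return [pkg.strip() for pkg in packages.split(",") if pkg.strip()]


print(packages_to_list("mysql-server-5.5, mysql-client"))
# -> ['mysql-server-5.5', 'mysql-client']
```

Keeping the string in the DB and parsing at the API/model boundary would
avoid a data migration while still fixing the response pattern.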



 The second question is about admin_context decorator (see [1]). This
 method executes methods of given controller and verifies that user is able
 to execute certain procedure.

  Taking into account RFC 2616, this method should raise HTTP Forbidden
  (code 403) if a user tries to execute a request that they are not allowed to.

  But the given method returns HTTP Unauthorized (code 401), which seems
  weird since the user is authorized.

I think this is a valid bug for the error code although the message makes
it clear why you get the 401.
https://github.com/openstack/trove/blob/master/trove/common/auth.py#L85
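
A minimal sketch of the fix being discussed (class, attribute, and request
shapes are hypothetical, not Trove's actual code): an admin-only decorator
that rejects an authenticated non-admin user with 403 rather than 401:

```python
class Forbidden(Exception):
    """Maps to HTTP 403: the user is authenticated but not allowed."""
    status_code = 403


def admin_context(func):
    """Allow the wrapped controller method only for admin contexts."""
    def wrapper(self, req, *args, **kwargs):
        # Hypothetical lookup; real code would read the request context
        # from req.environ set by the auth middleware.
        context = req["context"]
        if not getattr(context, "is_admin", False):
            raise Forbidden("User does not have admin privileges.")
        return func(self, req, *args, **kwargs)
    return wrapper


class Controller:
    @admin_context
    def show(self, req, datastore_id):
        return {"datastore": datastore_id}
```

The middleware would then translate `Forbidden` into a 403 response instead
of the current 401.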



 This is definitely a bug. And it comes from [2].


 [1]
 https://github.com/openstack/trove/blob/master/trove/common/auth.py#L72-L87

 [2]
 https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318



 Best regards,

 Denis Makogon



Re: [openstack-dev] [Neutron][oslo] Problem installing oslo.config-1.4.0.0a3 from .whl files

2014-08-05 Thread Alexei Kornienko

Hello Carl,

You should try to update your virtualenv (pip install -U virtualenv).
It fixed this problem for me.

Regards,
Alexei
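
For anyone hitting the same thing: .whl support arrived in pip 1.4, and a
virtualenv created with an older bundled pip cannot install wheel-only
releases like these oslo.config pre-releases, which is why updating
virtualenv helps. A quick check (assuming pip is importable in the affected
environment):

```python
# Check whether the active environment's pip is new enough to install
# wheel (.whl) distributions; support was added in pip 1.4.
import pip

major, minor = (int(part) for part in pip.__version__.split(".")[:2])
supports_wheels = (major, minor) >= (1, 4)
print("pip", pip.__version__, "- wheel support:", supports_wheels)
```

If this prints False, recreating the virtualenv after `pip install -U
virtualenv` (as above) gets you a wheel-capable pip.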

On 05/08/14 23:00, Carl Baldwin wrote:

Hi,

I noticed this yesterday afternoon.  I tried to run pep8 and unit
tests on a patch I was going to submit.  It failed with an error that
no package satisfying oslo.config could be found [1].  I went to pypi
and saw that the version appears to be available [2] but still
couldn't install it.

I tried to activate the .tox/pep8 virtual environment and install the
version explicitly.  Interestingly, that worked in one gerrit repo for
Neutron [3] but not the other [4].  These two virtual envs are on the
same machine.  I ran git clean -fdx to start over and now neither
virtualenv can install it.

Anyone have any idea what is going on?  It seems to be related to the
fact that oslo.config is now uploaded as .whl files, whatever those
are.  Why is it that my system cannot handle these?  I noticed that
oslo.config is now available only as .whl in the 1.4.0.0aN versions
but used to be available as .tar.gz files.

Carl

[1] http://paste.openstack.org/show/90651/
[2] https://pypi.python.org/pypi/oslo.config
[3] http://paste.openstack.org/show/90674/
[4] http://paste.openstack.org/show/90675/



[openstack-dev] [Manila] File-storage for Manila service image

2014-08-05 Thread Valeriy Ponomaryov
Hello everyone,

The image currently used for Manila, ubuntu_1204_nfs_cifs.qcow2, is hosted on
Dropbox (https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2),
and Dropbox has a traffic limit, see https://www.dropbox.com/help/4204

Due to excessive traffic, the public link was banned and the image could not
be downloaded (error code 509). It is unbanned now, until the limit is
exceeded again.

A traffic limit should not threaten the ability to use the project, so we
need to find a stable file store with permanent public links and no traffic
limit.

Does anyone have any suggestions for more suitable file storage to use?

-- 
Kind Regards
Valeriy Ponomaryov


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.

 I don't understand this.  If it's a complex change that you need
 multiple commits to keep track of locally, why wouldn't reviewers want
 the same thing?  Squashing a bunch of commits together solely so you
 have one review for Gerrit isn't a good thing.  Is it just the warning
 message that git-review prints when you try to push multiple commits
 that is the problem here?


When you're developing some big change you'll end up trying dozens of
different approaches and making thousands of mistakes. For reviewers this is
just unnecessary noise (commit title Scratch my last CR, that was
bullshit) while for you it's a precious history that can provide a basis for
future research or bug-hunting.

Merges are one of the strong sides of Git itself (and keeping them very
easy is one of the founding principles behind it). With the current workflow
we don't use them at all. Has master moved too far ahead? You have to rebase,
which screws up all your local history, and most likely squash everything
anyway because you don't want to fix commits with known bugs in them. With
the proposed feature you can just do the merge once and let 'git review' add
some magic without ever hurting your code.

And speaking of breaking down change requests, don't forget the support for
change request chains that this feature would lead to. How do you deal with
5 consecutive change requests that are up for review for half a year? The
only way I could suggest to my colleague at the time was Erm... Learn
Git and dance with rebases, detached heads and reflogs! My proposal might
take care of that too.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-05 Thread Carlino, Chuck (OpenStack TripleO, Neutron)
Thanks for the quick responses.

Here's the WIP review:

https://review.openstack.org/112129.

The base plugin doesn't contribute to the notification decision right now, so 
I've modified the actual plugin code.

Chuck


On Aug 5, 2014, at 12:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote:

I agree with Kevin here. Just a note, don't bother with openvswitch and 
linuxbridge plugins as they are marked for deletion this cycle, imminently 
(already deprecated)[0].

Amir

[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-04-21.02.html
 Announcements 2e.

From: Kevin Benton [blak...@gmail.com]
Sent: Tuesday, August 05, 2014 2:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] make mac address updatable: which 
plugins?

How are you implementing the change? It would be good to get to see some code 
in a review to get an idea of what needs to be updated.

If it's just a change in the DB base plugin, just let those changes propagate 
to the plugins that haven't overridden the inherited behavior.


On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino chuckjcarl...@gmail.com wrote:
Hi all,

I need some help regarding a bug [1] I'm working on.

The bug is basically a request to make the mac address of a port updatable.  
The use case is a baremetal (Ironic) node that has a bad NIC which must be 
replaced, resulting in a new mac address.  The bad NIC has an associated 
neutron port which of course holds the NIC's IP address.  The reason to make 
mac_address updatable (as opposed to having the user create a new port and 
delete the old one) is that during the recovery process the IP address must be 
retained and assigned to the new NIC/port, which is not guaranteed in the above 
work-around.

I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge 
plugins but I'm not sure how to handle the the other plugins since I don't know 
if the associated backends are prepared to handle such updates.  My first 
thought is to disallow the update in the other plugins, but I would really 
appreciate your advice.
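
For context, "making mac_address updatable" largely comes down to flipping
the attribute's allow_put flag in the resource attribute map. A simplified
sketch following Neutron's conventions (illustrative only, not the actual
patch):

```python
# Simplified excerpt of a Neutron-style resource attribute map.
# Setting allow_put to True is what lets PUT /ports/{id} accept a new
# mac_address; individual plugins must still be able to propagate the
# change to their backends (or reject the update if they cannot).
PORT_ATTRIBUTE_MAP = {
    "mac_address": {
        "allow_post": True,
        "allow_put": True,   # was False before the proposed change
        "validate": {"type:mac_address": None},
        "is_visible": True,
    },
}

print(PORT_ATTRIBUTE_MAP["mac_address"]["allow_put"])
```

The open question in the thread is exactly the second half of that comment:
which plugin backends can actually honor such an update.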

Kind regards,
Chuck Carlino

[1] https://bugs.launchpad.net/neutron/+bug/1341268





--
Kevin Benton


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Stephen Wong
Agreed with Kevin and Sumit here. As a subgroup we talked about Nova
integration, and the preliminary idea, as Bob alluded to, is to add
endpoint as an option in place of Neutron port. But if we can make Nova
EPG-aware, it would be great.


On Tue, Aug 5, 2014 at 12:54 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

  That's right Kevin, the EPG (and its association to the L2/3_Policy)
  captures the attributes which would represent the network-template
  being referenced here.

 Jay, what Bob mentioned here was an option to use the endpoint as a
 one-to-one replacement for the option of using a Neutron port. This is
 more so in the context of providing an evolutionary path (from the way
 Nova currently does it using a pre-defined port). However, if it makes
 sense to make Nova aware of the EPG right at the outset, then that is
 even better.

 I have also noted your suggestion on clarifying the endpoint
 terminology. This was already done in one of the patches you had
 reviewed earlier, and will do that in the first patch as well (where
 you pointed it out now).

 Thanks,
 ~Sumit.

 On Tue, Aug 5, 2014 at 12:24 PM, Kevin Benton blak...@gmail.com wrote:
  Specifying an endpoint group would achieve the --networking-template
 effects
  you described. The endpoint group would have all of the security
 policies,
  IP allocation policies, connectivity policies, etc. already setup.
 
 
  On Tue, Aug 5, 2014 at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 08/05/2014 01:13 PM, Robert Kukura wrote:
 
 
  On 8/5/14, 11:04 AM, Gary Kotton wrote:
 
  Hi,
  Is there any description of how this will be consumed by Nova. My
  concern is this code landing there.
 
  Hi Gary,
 
  Initially, an endpoint's port_id is passed to Nova using nova boot ...
  --nic port-id=port-uuid ..., requiring no changes to Nova. Later,
  slight enhancements to Nova would allow using commands such as nova
  boot ... --nic ep-id=endpoint-uuid ... or nova boot ... --nic
  epg-id=endpoint-group-uuid 
 
 
  Hi Bob,
 
  How exactly is the above a friendlier API for the main user of Neutron,
  which is Nova? I thought one of the main ideas behind the GBP stuff was
 to
  create a more declarative and intuitive API for users of Neutron -- i.e.
  Nova -- to use in constructing needed networking objects. The above just
  seems to me to be exchanging one low-level object (port) with another
  low-level object (endpoint or endpoint group)?
 
  Perhaps the disconnect is due to the term endpoint being used, which,
  everywhere else in the OpenStack universe, means something entirely
  different from GBP.
 
  I guess, based on my understanding of the *intent* of the GBP API, I
 would
  have expected an API more like:
 
   nova boot ... --networking-template UUID
 
  where --networking-template would refer to a network, subnet topology,
 IP
  assignment policy, collection of security groups and firewall policies
 that
  the tenant had established prior to booting an instance... thereby
 making
  the API more intuitive and less cluttered.
 
  Or is it that I just don't understand this new endpoint terminology?
 
  Best,
  -jay
 
 
 
 
 
 
  --
  Kevin Benton
 


Re: [openstack-dev] [Neutron][oslo] Problem installing oslo.config-1.4.0.0a3 from .whl files

2014-08-05 Thread Carl Baldwin
Alexei,

Thanks, that is what I was missing.

I hit one more error in case anyone is interested.  lxml couldn't
install because the compiler couldn't find libxml/xmlversion.h.  For
lack of interest on my part to figure out why this happened, I did the
following which allowed me to get past the problem.  (Ubuntu 12.04.1
LTS)

$ cd /usr/include  sudo ln -s libxml2/libxml

I seem to be back in operation now.  Thanks.

Carl

On Tue, Aug 5, 2014 at 2:08 PM, Alexei Kornienko
alexei.kornie...@gmail.com wrote:
 Hello Carl,

 You should try to update your virtualenv (pip install -U virtualenv).
 It fixed this problem for me.

 Regards,
 Alexei


 On 05/08/14 23:00, Carl Baldwin wrote:

 Hi,

 I noticed this yesterday afternoon.  I tried to run pep8 and unit
 tests on a patch I was going to submit.  It failed with an error that
 no package satisfying oslo.config could be found [1].  I went to pypi
 and saw that the version appears to be available [2] but still
 couldn't install it.

 I tried to activate the .tox/pep8 virtual environment and install the
 version explicitly.  Interestingly, that worked in one gerrit repo for
 Neutron [3] but not the other [4].  These two virtual envs are on the
 same machine.  I ran git clean -fdx to start over and now neither
 virtualenv can install it.

 Anyone have any idea what is going on?  It seems to be related to the
 fact that oslo.config is now uploaded as .whl files, whatever those
 are.  Why is it that my system cannot handle these?  I noticed that
 oslo.config is now available only as .whl in the 1.4.0.0aN versions
 but used to be available as .tar.gz files.

 Carl

 [1] http://paste.openstack.org/show/90651/
 [2] https://pypi.python.org/pypi/oslo.config
 [3] http://paste.openstack.org/show/90674/
 [4] http://paste.openstack.org/show/90675/



Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-08-05 Thread Matt Riedemann



On 7/16/2014 10:44 AM, Adrian Otto wrote:

Additional Update:

Two important additions:

1) No Formal Thursday Meetings.

We are eliminating our plans to meet formally on the 31st. You are
still welcome to meet informally. We want to keep these discussions
as productive as possible, and want to avoid attendee burnout. My
deepest apologies to those who have made travel plans around this.
See me if there are financial considerations to resolve.


2) Containers Team Registration

To better manage attendance expectations, register for the event
that you will attend as a primary. For those attending primarily for
Containers, register here:


https://www.eventbrite.com/e/openstack-containers-team-juno-mid-cycle-developer-meetup-tickets-12304951441


If you are registering for Nova, use this link:


https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

If you are already registered for the Nova Meetup, but will be
attending the Containers Team Meetup as the primary, you can
return your tickets for Nova as long as you have a Containers Team
Meetup ticket. That will allow for a more accurate count, and make
sure that all the Nova devs who need to attend can.


Logistics details:

https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint


Event Etherpad:

https://etherpad.openstack.org/p/juno-containers-sprint


Thanks,

Adrian


On Jul 11, 2014, at 3:31 PM, Adrian Otto adrian.o...@rackspace.com wrote:


CORRECTION: This event happens *July* 28-31. Sorry for any confusion!
Corrected Announcement:

Containers Team,

We have decided to hold our Mid-Cycle meetup along with the Nova
Meetup in Beaverton, Oregon on *July* 28-31. The Nova Meetup is
scheduled for *July* 28-30.

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

Those of us interested in Containers topic will use one of the
breakout rooms generously offered by Intel. We will also stay on
Thursday to focus on implementation plans and to engage with those
members of the Nova Team who will be otherwise occupied on *July*
28-30, and will have a chance to focus entirely on Containers on the 31st.

Please take a moment now to register using the link above, and I look
forward to seeing you there.

Thanks,

Adrian Otto








Adrian,

Can you share a summary of notes that came out of the containers meetup, 
specifically related to the integration with nova, i.e. the slides you 
shared in one of the nova sessions?  Wondering what the plans/details 
are for Kilo.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Jay Pipes

On 08/05/2014 04:26 PM, Stephen Wong wrote:

Agreed with Kevin and Sumit here. As a subgroup we talked about Nova
integration, and the preliminary idea, as Bob alluded to, is to add
endpoint as an option in place of Neutron port. But if we can make
Nova EPG-aware, it would be great.


Is anyone listening to what I'm saying? The term endpoint is obtuse 
and completely disregards the existing denotation of the word endpoint 
in use in OpenStack today.


So, we've gone ahead and replaced the term port in the caller 
interface -- which, besides being entirely too low-level, actually 
did describe what the object was -- with the term endpoint, which 
doesn't describe even remotely what the thing is (a template for a 
collection of networking-related policies and objects) and which already 
has a well-known definition in the OpenStack ecosystem.


That is my point. That is why I brought up the comment on the original 
patch in the series that some docstrings would be helpful for those not 
entirely subscribed to the Tenets of National Dvorkinism.


These interfaces should speak plain old concepts, not networking guru 
arcanum.


Best,
-jay


On Tue, Aug 5, 2014 at 12:54 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

That's right Kevin, EPG (and its association to the L2/3_Policy)
capture the attributes which would represent the network-template
being referenced here.

Jay, what Bob mentioned here was an option to use the endpoint as a
one-to-one replacement for the option of using a Neutron port. This is
more so in the context of providing an evolutionary path (from the way
Nova currently does it using a pre-defined port). However, if it makes
sense to make Nova aware of the EPG right at the outset, then that is
even better.

I have also noted your suggestion on clarifying the endpoint
terminology. This was already done in one of the patches you had
reviewed earlier, and will do that in the first patch as well (where
you pointed it out now).

Thanks,
~Sumit.

On Tue, Aug 5, 2014 at 12:24 PM, Kevin Benton blak...@gmail.com wrote:
  Specifying an endpoint group would achieve the
--networking-template effects
  you described. The endpoint group would have all of the security
policies,
  IP allocation policies, connectivity policies, etc. already setup.
 
 
  On Tue, Aug 5, 2014 at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 08/05/2014 01:13 PM, Robert Kukura wrote:
 
 
  On 8/5/14, 11:04 AM, Gary Kotton wrote:
 
  Hi,
  Is there any description of how this will be consumed by Nova. My
  concern is this code landing there.
 
  Hi Gary,
 
  Initially, an endpoint's port_id is passed to Nova using "nova boot ...
  --nic port-id=port-uuid ...", requiring no changes to Nova. Later,
  slight enhancements to Nova would allow using commands such as "nova
  boot ... --nic ep-id=endpoint-uuid ..." or "nova boot ... --nic
  epg-id=endpoint-group-uuid ...".
 
 
  Hi Bob,
 
  How exactly is the above a friendlier API for the main user of Neutron,
  which is Nova? I thought one of the main ideas behind the GBP stuff was
  to create a more declarative and intuitive API for users of Neutron --
  i.e. Nova -- to use in constructing needed networking objects. The
  above just seems to me to be exchanging one low-level object (port) for
  another low-level object (endpoint or endpoint group)?
 
  Perhaps the disconnect is due to the term "endpoint" being used, which,
  everywhere else in the OpenStack universe, means something entirely
  different from GBP.

  I guess, based on my understanding of the *intent* of the GBP API, I
  would have expected an API more like:

   nova boot ... --networking-template UUID

  where --networking-template would refer to a network, subnet topology,
  IP assignment policy, collection of security groups and firewall
  policies that the tenant had established prior to booting an
  instance... thereby making the API more intuitive and less cluttered.

  Or is it that I just don't understand this new "endpoint" terminology?
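To make the two alternatives concrete, here is a minimal Python sketch of how Nova might normalize the --nic specs being debated. Only the 'port-id' key exists in Nova's --nic handling today (other real options like net-id are omitted for brevity); 'ep-id', 'epg-id', and the helper itself are hypothetical, mirroring the proposed commands quoted above:

```python
# Hypothetical sketch -- NOT actual Nova code -- contrasting the existing
# 'port-id' --nic spec with the proposed GBP 'ep-id'/'epg-id' specs.

def normalize_nic(nic):
    """Map a --nic spec dict to the Neutron object it should resolve to."""
    if 'port-id' in nic:      # existing path: a pre-created Neutron port
        return ('port', nic['port-id'])
    if 'ep-id' in nic:        # proposed: a GBP endpoint
        return ('endpoint', nic['ep-id'])
    if 'epg-id' in nic:       # proposed: a GBP endpoint group
        return ('endpoint_group', nic['epg-id'])
    raise ValueError('unsupported --nic spec: %r' % (nic,))

print(normalize_nic({'port-id': 'port-uuid'}))  # ('port', 'port-uuid')
```

Whichever spelling is adopted, the shape of the call stays the same; the disagreement above is about what kind of object the identifier resolves to and what it should be named.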
 
  Best,
  -jay
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Kevin Benton
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [Manila] File-storage for Manila service image

2014-08-05 Thread Swartzlander, Ben
On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote:
 Hello everyone,
 
 
 The image currently used for Manila (ubuntu_1204_nfs_cifs.qcow2) is
 hosted on Dropbox, and Dropbox has a traffic limit, see
 https://www.dropbox.com/help/4204
 
 
 Because of the excessive traffic generated, the public link was banned
 and the image could not be downloaded (error code 509). It is unbanned
 now, until the limit is exceeded again.
 
 
 A traffic limit should not threaten the ability to use the project, so
 we need to find stable file storage with permanent public links and no
 traffic limit.
 
 
 Does anyone have any suggestions for more suitable file storage to
 use?

Let's try creating a github repo and sharing it there. For hopefully
obvious reasons, let's NOT put this into the manila repos directly --
let's keep it separate.


 -- 
 Kind Regards
 Valeriy Ponomaryov
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-05 Thread Joe Gordon
On Aug 6, 2014 2:49 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 I have a spec proposal in play that crosses the Nova/Neutron boundary.
 I split it into two specs: a Nova spec [1] and a Neutron spec [2].
 There is a little duplication between the two at a high level but not
 in the details.  Each of the specs references the other at various
 spots in the text and in the references section.

 This isn't the optimal way to write a cross-project spec.  There is
 difficulty involved in keeping the two consistent.  Also, reviewers
 from one program often don't bother to read the spec from the other.
 This is unfortunate.

 However, given the constraints of the current process, I believe that
 it was necessary to split the spec into two so that the cores
 responsible for each program review and accept the design for the
 proposed changes in their realm.

 Would it make more sense to submit the exact same monolithic
 specification to both?  At the time, I chose against it because I
 thought it would make it more difficult to read in the context of a
 single program.

 I'm open and looking forward to hearing others' thoughts on this.


While I think we need to flesh out how to do cross-project specs, and
better cross-project communication in general, as this will become more
important in the future, this email thread doesn't seem like the
appropriate place to do it. This sounds like a new topic that deserves a
new thread.

 Carl

 [1] https://review.openstack.org/#/c/90150/
 [2] https://review.openstack.org/#/c/88623/

 On Mon, Aug 4, 2014 at 8:33 PM, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2014-08-05 01:26:49 + (+), joehuang wrote:
  I would like to know how to submit a cross-project spec? Is there a
  repository for cross-project specs.
 
  Specs repositories are about formalizing/streamlining the design
  process within a program, and generally the core reviewers of those
  programs decide when a spec is in a suitable condition for approval.
  In the case of a cross-program spec (which I assume is what you mean
  by cross-project), who would decide what needs to be in the spec
  proposal and who would approve it? What sort of design proposal do
  you have in mind which you think would need to be a single spec
  applying to projects in more than one program?
  --
  Jeremy Stanley
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Datastore/Versions API improvements

2014-08-05 Thread Denis Makogon
On Tue, Aug 5, 2014 at 11:06 PM, Craig Vyvial cp16...@gmail.com wrote:




 On Wed, Jul 30, 2014 at 10:10 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Hello, Stackers.



 I’d like to gather the Trove team around a question related to the
 Datastores/Versions API responses (request/response payloads and HTTP
 codes).

 Small INFO

 When a deployer creates a datastore and versions for it, Trove's backend
 receives a request to store DBDatastore and DBDatastoreVersion objects
 with certain parameters. The most interesting attribute of
 DBDatastoreVersion is “packages” - it’s stored as a String object (and
 that’s totally fine). But when we query a given datastore version
 through the Datastores API, the “packages” attribute is returned as a
 String object too. And it seems that this breaks the response pattern -
 “If a given attribute represents a complex attribute, such as a list,
 dict, or tuple, it should be returned as is.”

 So, the first question is - are we able to change it in terms of V1?

 If it does not break the public API then I do not think there is an
 issue making the change.


If the modification means breaking the API, then yes, it is an issue. I
would say the type of the 'packages' attribute should be changed to a
more appropriate type, such as a list of strings. But it seems that this
modification would only be possible in a hypothetical V2.


 I made a change not long ago around making the packages a list that's
 sent to the guest. I'm a bit confused about what you are wanting to
 change here. Are you suggesting changing the data that is stored for
 packages (string to a json.dumps list or something)? Or making the model
 parse the string into a list when you request the packages for a
 datastore version?

I guess the latter. If I want to iterate over packages, I would need to
manually split the string and build the appropriate data type.
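A minimal sketch of the manual handling Denis describes, assuming the stored "packages" value is a comma-separated string; the helper name is made up for illustration, not actual Trove code:

```python
# Hypothetical helper: turn the "packages" String attribute into the
# list the API would return if the attribute were a complex type.

def parse_packages(packages):
    """Split a comma-separated packages string into a list of names."""
    if not packages:
        return []
    return [p.strip() for p in packages.split(',') if p.strip()]

print(parse_packages('mysql-server-5.5, mysql-client'))
# ['mysql-server-5.5', 'mysql-client']
```

The point of the question is that every API consumer currently has to carry a helper like this, instead of the service returning a list in the first place.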




 The second question is about the admin_context decorator (see [1]). This
 method wraps methods of a given controller and verifies that the user is
 allowed to execute a certain procedure.

 Taking into account RFC 2616, this method should raise HTTP Forbidden
 (code 403) if the user tries to execute a request that he’s not allowed
 to.

 But the given method returns HTTP Unauthorized (code 401), which seems
 weird since the user is authorized.

 I think this is a valid bug for the error code although the message makes
 it clear why you get the 401.
 https://github.com/openstack/trove/blob/master/trove/common/auth.py#L85


The problem is that the user is authorized but doesn't have certain
permissions. Unauthorized means that the user passed wrong credentials;
Forbidden (in terms of REST) means authorized but not permitted.
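The distinction can be sketched in a few lines of Python (illustrative only, not actual Trove code); per RFC 2616, 401 is for missing or bad credentials, 403 for an authenticated user who lacks permission:

```python
# Sketch of the status-code choice the admin_context decorator should
# make. Function and parameter names are hypothetical.

def status_for(authenticated, is_admin, needs_admin=True):
    """Pick the HTTP status for an admin-only endpoint."""
    if not authenticated:
        return 401  # Unauthorized: credentials missing or wrong
    if needs_admin and not is_admin:
        return 403  # Forbidden: authenticated, but not permitted
    return 200

print(status_for(authenticated=True, is_admin=False))   # 403, not 401
print(status_for(authenticated=False, is_admin=False))  # 401
```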

Craig, after digging into the problem I found out where the current code
is broken, see
https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318




 This is definitely a bug. And it comes from [2].


 [1]
 https://github.com/openstack/trove/blob/master/trove/common/auth.py#L72-L87

 [2]
 https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318



 Best regards,

 Denis Makogon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] File-storage for Manila service image

2014-08-05 Thread Valeriy Ponomaryov
GitHub has a file size limit of 100 MB, see
https://help.github.com/articles/what-is-my-disk-quota

Our current image is about 300 MB.
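One possible workaround, sketched below, is to commit the image in chunks under the per-file limit and reassemble it after download. The chunk size and file naming are assumptions; `split -b 90m` and `cat` would do the same from a shell:

```python
# Sketch: split a large image into <100 MB chunks for a GitHub repo,
# and reassemble them on the consumer's side.

CHUNK = 90 * 1024 * 1024  # stay safely under the 100 MB limit

def split_file(path, chunk=CHUNK):
    """Write path's contents to path.part000, path.part001, ..."""
    parts = []
    with open(path, 'rb') as src:
        i = 0
        while True:
            data = src.read(chunk)
            if not data:
                break
            part = '%s.part%03d' % (path, i)
            with open(part, 'wb') as dst:
                dst.write(data)
            parts.append(part)
            i += 1
    return parts

def join_files(parts, out_path):
    """Concatenate the chunk files back into the original image."""
    with open(out_path, 'wb') as dst:
        for part in parts:
            with open(part, 'rb') as src:
                dst.write(src.read())
```

This only dodges the per-file limit, of course; a dedicated artifact host with no traffic cap would still be the cleaner fix.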


On Tue, Aug 5, 2014 at 11:43 PM, Swartzlander, Ben 
ben.swartzlan...@netapp.com wrote:

 On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote:
  Hello everyone,
 
 
  The image currently used for Manila (ubuntu_1204_nfs_cifs.qcow2) is
  hosted on Dropbox, and Dropbox has a traffic limit, see
  https://www.dropbox.com/help/4204
 
 
  Because of the excessive traffic generated, the public link was banned
  and the image could not be downloaded (error code 509). It is unbanned
  now, until the limit is exceeded again.
 
 
  A traffic limit should not threaten the ability to use the project, so
  we need to find stable file storage with permanent public links and no
  traffic limit.
 
 
  Does anyone have any suggestions for more suitable file storage to
  use?

 Let's try creating a github repo and sharing it there. For hopefully
 obvious reasons, let's NOT put this into the manila repos directly --
 let's keep it separate.


  --
  Kind Regards
  Valeriy Ponomaryov
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards
Valeriy Ponomaryov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

