[openstack-dev] [Ironic][Nova] Functional testing of Nova Ironic driver

2014-03-24 Thread Rohan Kanade
Hi,

I have successfully setup latest devstack.

I am aware that the Nova Ironic driver temporarily lives in
ironic.nova.virt.ironic.


I am not sure how to enable this driver in Nova and then create flavors or
nodes in Nova that will call Ironic.

I am a bit confused about the workflow for creating an instance in Nova
using the Nova Ironic driver.



Regards,
Rohan Kanade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Serg Melikyan
Timur, I don't know about plans to support different languages for the
Murano Engine. I think Murano PL may be valuable as a standalone library,
so I think we should extract the Murano PL code into a separate package;
then, if we ever need it as a library, it will be easy to extract.


On Mon, Mar 24, 2014 at 12:48 AM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Hi Serg,

 This idea sounds good. I suggest using the name 'murano.engine.murano_pl'
 (not just a common name like 'language' or 'dsl', but a name based on
 'MuranoPL').

 Do we plan to support the ability to define different languages for the
 Murano Engine?


 Thank you!


 On Sun, Mar 23, 2014 at 1:05 PM, Serg Melikyan smelik...@mirantis.com wrote:

 There is an idea to separate the core of the Murano PL implementation from
 engine-specific code, as was done in the PoC. When these two things are
 separated into different packages, we will be able to keep our language
 core as clean of engine-specific code as possible. This will also give us
 the ability to easily split our language implementation out into a library.

 The question is: under what name should we place the core of Murano PL?

 1) muranoapi.engine.language;
 2) muranoapi.engine.dsl;
 3) suggestions?

 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Timur Sufiev
+1 for muranoapi.engine.murano_pl, because 'dsl'/'language' terms are too broad.

On Mon, Mar 24, 2014 at 12:48 AM, Timur Nurlygayanov
tnurlygaya...@mirantis.com wrote:
 [...]




-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Boris Pavlovic
Hi stackers,


I would like to propose Kun Huang to the Rally core team.
As you have already seen, he is doing a lot of good reviews and catching a
lot of nits and bugs, so he will be a good core reviewer.

Here are the detailed statistics for the last 30 days:
http://stackalytics.com/report/contribution/rally/30

Thoughts?


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Serg Melikyan
because 'dsl'/'language' terms are too broad.
Too broad in general, but here we are choosing a name for a sub-package,
and in Murano the term 'language' means Murano PL.

+1 for language


On Mon, Mar 24, 2014 at 11:26 AM, Timur Sufiev tsuf...@mirantis.com wrote:

 +1 for muranoapi.engine.murano_pl, because 'dsl'/'language' terms are too
 broad.

 [...]




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Oleg Gelbukh
What does PL stand for, anyway?

--
Best regards,
Oleg Gelbukh


On Mon, Mar 24, 2014 at 11:39 AM, Serg Melikyan smelik...@mirantis.com wrote:

 because 'dsl'/'language' terms are too broad.
 Too broad in general, but here we are choosing a name for a sub-package,
 and in Murano the term 'language' means Murano PL.

 +1 for language


 [...]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Serg Melikyan
Programming Language, AFAIK


On Mon, Mar 24, 2014 at 11:46 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 What does PL stand for, anyway?

 --
 Best regards,
 Oleg Gelbukh


 [...]




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Miguel Angel Ajo



Is it the first call starting the daemon / loading config files, etc.?

Maybe that first sample should be discarded from the mean for all
processes (it's an outlier value).




On 03/21/2014 05:32 PM, Yuriy Taraday wrote:

On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez thie...@openstack.org
mailto:thie...@openstack.org wrote:

Yuriy Taraday wrote:
  Benchmark included showed on my machine these numbers (average
over 100
  iterations):
 
  Running 'ip a':
ip a :   4.565ms
   sudo ip a :  13.744ms
 sudo rootwrap conf ip a : 102.571ms
  daemon.run('ip a') :   8.973ms
  Running 'ip netns exec bench_ns ip a':
sudo ip netns exec bench_ns ip a : 162.098ms
  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
   daemon.run('ip netns exec bench_ns ip a') : 129.876ms
 
  So it looks like running daemon is actually faster than running
sudo.

That's pretty good! However I fear that the extremely simplistic filter
rule file you fed on the benchmark is affecting numbers. Could you post
results from a realistic setup (like same command, but with all the
filter files normally found on a devstack host ?)


I don't have a devstack host at hands but I gathered all filters from
Nova, Cinder and Neutron and got this:
 method  :min   avg   max   dev
ip a :   3.741ms   4.443ms   7.356ms 500.660us
   sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
  daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
 method  :min   avg   max   dev
ip a :   4.176ms   4.976ms  22.910ms   1.821ms
   sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
  daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference but it looks like it's because of config files
parsing, not applying filters themselves.
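
For context, numbers like these can be reproduced with a small harness
along the following lines (a sketch only, not the benchmark actually
used here):

    import subprocess
    import time

    def bench(cmd, n=100):
        # Time n runs of cmd; return (min, avg, max) wall-clock time in ms.
        samples = []
        for _ in range(n):
            start = time.time()
            subprocess.call(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            samples.append((time.time() - start) * 1000.0)
        # Optionally drop samples[0]: the first call can be an outlier
        # (daemon start / config load), as noted earlier in the thread.
        return min(samples), sum(samples) / len(samples), max(samples)

    print(bench('ip a'))
    print(bench('sudo ip a'))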

--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] [QA] unification of timestamp related parameters in query fields

2014-03-24 Thread 刘胜 (Liu Sheng)
Hi stackers:


I have a question about the unification of timestamp-related parameters in
query fields.


the related bug is:


https://bugs.launchpad.net/ceilometer/+bug/1295100


start_timestamp/end_timestamp, start/end, and timestamp are not unified in
query fields.


For example:


the valid keys of alarm-history query fields are:
set(['start_timestamp', 'type', 'project', 'alarm_id', 'user', 
'start_timestamp_op', 'end_timestamp_op', 'end_timestamp'])


and, the valid keys of statistics query fields are:
(['end', 'start', 'metaquery', 'meter', 'project', 'source', 'user', 
'start_timestamp_op', 'resource', 'end_timestamp_op', 'message_id'])


and, the valid keys of sample-list query fields are:
(['end', 'start', 'metaquery', 'meter', 'project', 'source', 'user', 
'start_timestamp_op', 'resource', 'end_timestamp_op', 'message_id'])






Please pay attention to the method
ceilometer.api.controllers.v2:_query_to_kwargs().

In query fields this method transforms 'timestamp' into 'end_timestamp' when
'op' is in ('lt', 'le'), and into 'start_timestamp' when 'op' is in
('gt', 'ge'); meanwhile, start_timestamp_op and end_timestamp_op are
generated from 'op'.
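
For illustration, a hedged sketch of that folding behavior (this is not
the actual ceilometer code, just the mapping as described above;
fold_timestamp_field is a hypothetical name):

    # Hypothetical sketch: 'timestamp' plus a comparison operator is
    # folded into start_timestamp/end_timestamp and matching *_op keys.
    def fold_timestamp_field(field, op, value, kwargs):
        if field == 'timestamp' and op in ('lt', 'le'):
            kwargs['end_timestamp'] = value
            kwargs['end_timestamp_op'] = op
        elif field == 'timestamp' and op in ('gt', 'ge'):
            kwargs['start_timestamp'] = value
            kwargs['start_timestamp_op'] = op
        else:
            kwargs[field] = value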


So I think we should unify the timestamp parameters in query fields; one
option is to use 'timestamp' with operators instead of the others.

But changes to the API should be treated with caution, so I would like to
get your opinions :)


Before closing these bugs, I want to get some advice:
https://bugs.launchpad.net/ceilometer/+bug/1270394
https://bugs.launchpad.net/ceilometer/+bug/1295104
https://bugs.launchpad.net/ceilometer/+bug/1291171




My Best


liu sheng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-delete in amqp reply_* queues in OpenStack

2014-03-24 Thread Dmitry Mescheryakov
Chris,

In oslo.messaging a single reply queue is used to gather results from
all the calls. It is created lazily on the first call and is used
until the process is killed. I took a quick look at oslo.rpc from
oslo-incubator and it seems like it uses the same pattern, which is
not surprising since oslo.messaging descends from oslo.rpc. So if you
restart some process which does rpc calls (nova-api, I guess), you
should see one reply queue gone and another one created instead after
some time.
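
A toy sketch of that lazy pattern (illustrative only, not the actual
oslo.messaging code; FakeRpcClient is a made-up name):

    import uuid

    class FakeRpcClient(object):
        def __init__(self):
            self._reply_q = None  # no queue until the first call

        def call(self, method):
            if self._reply_q is None:
                # A real client would declare a non-durable, auto_delete
                # queue on the broker here.
                self._reply_q = 'reply_%s' % uuid.uuid4().hex
            return (method, self._reply_q)

    c = FakeRpcClient()
    # The same reply queue is reused for every subsequent call:
    assert c.call('ping')[1] == c.call('pong')[1]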

Dmitry

2014-03-24 7:55 GMT+04:00 Chris Friesen chris.frie...@windriver.com:
 Hi,

 If I run rabbitmqadmin list queues on my controller node I see 28 queues
 with names of the form reply_uuid.

 From what I've been reading, these queues are supposed to be used for the
 replies to rpc calls, they're not 'durable', and they all have auto_delete
 set to True.

 Given the above, I would have expected that queues with names of that form
 would only exist for in-flight rpc operations, and that subsequent listings
 of the queues would show mostly different ones, but these 28 seem to be
 fairly persistent.

 Is this expected or do I have something unusual going on?

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-24 Thread Gary Kotton
Regarding the spawn there are a number of patches up for review at the
moment - they are all mutually exclusive and hopefully will make the
process a lot smoother.
https://review.openstack.org/#/q/topic:bp/vmware-spawn-refactor,n,z
In addition to this we have a patch up for review with the OSLO
integration - https://review.openstack.org/#/c/70175/ (ideally it would be
best that this gets in first)
Thanks
Gary

On 3/22/14 8:03 PM, Chris Behrens cbehr...@codestud.com wrote:

I'd like to get spawn broken up sooner rather than later, personally. It
has additional benefits of being able to do better orchestration of
builds from conductor, etc.

On Mar 14, 2014, at 3:58 PM, Dan Smith d...@danplanet.com wrote:

 Just to answer this point, despite the review latency, please don't be
 tempted to think one big change will get in quicker than a series of
 little, easy to review, changes. All changes are not equal. A large
 change often scares me away to easier to review patches.
 
 Seems like, for Juno-1, it would be worth cancelling all non-urgent
 bug fixes, and doing the refactoring we need.
 
 I think the aim here should be better (and easier to understand) unit
 test coverage. That's a great way to drive good code structure.
 
 Review latency will be directly affected by how well the refactoring
 changes are staged. If they are small, on-topic and easy to validate,
 they will go quickly. They should be linearized unless there are some
 places where multiple sequences of changes make sense (i.e. refactoring
 a single file that results in no changes required to others).
 
 As John says, if it's just a big change-everything patch, or a ton of
 smaller ones that don't fit a plan or process, then it will be slow and
 painful (for everyone).
 
 +1 sounds like a good first step is to move to oslo.vmware
 
 I'm not sure whether I think that refactoring spawn would be better done
 first or second. My gut tells me that doing spawn first would mean that
 we could more easily validate the oslo refactors because (a) spawn is
 impossible to follow right now and (b) refactoring it to smaller methods
 should be fairly easy. The tests for spawn are equally hard to follow
 and refactoring it first would yield a bunch of more unit-y tests that
 would help us follow the oslo refactoring.
 
 However, it sounds like the osloification has maybe already started and
 that refactoring spawn will have to take a backseat to that.
 
 --Dan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Nova] Functional testing of Nova Ironic driver

2014-03-24 Thread Alexander Gordeev
Hi,

https://etherpad.openstack.org/p/IronicAndDevstackAgain might help you.
It should work flawlessly on Ubuntu 12.04. The latest devstack creates the
baremetal flavor by itself, so you don't need to create it manually.

Let me know if you need any additional help, or it may be better to ask
in IRC at #openstack-ironic.
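
For reference, a minimal sketch of the wiring involved (the exact driver
class path is my assumption from where the driver currently lives in the
Ironic tree, so double-check it against your checkout):

    # nova.conf excerpt (hypothetical)
    [DEFAULT]
    compute_driver = ironic.nova.virt.ironic.driver.IronicDriver

    # then boot against the devstack-created flavor
    nova boot --flavor baremetal --image <image-uuid> test-node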


Many thanks, Alex


On Mon, Mar 24, 2014 at 10:36 AM, Rohan Kanade openst...@rohankanade.com wrote:

 Hi,

 I have successfully setup latest devstack.

 I am aware that the Nova Ironic driver temporarily lives in
 ironic.nova.virt.ironic.


 I am not sure how to enable this driver in Nova and then create flavors or
 nodes in Nova that will call Ironic.

 I am a bit confused about the workflow for creating an instance in Nova
 using the Nova Ironic driver.



 Regards,
 Rohan Kanade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Hugh Saunders
+1
Kun has been very active on Gerrit and has contributed good patches, so he
will be a great addition to the core team.


--
Hugh Saunders


On 24 March 2014 07:31, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi stackers,


 I would like to propose Kun Huang to the Rally core team.
 As you have already seen, he is doing a lot of good reviews and catching a
 lot of nits and bugs, so he will be a good core reviewer.

 Here are the detailed statistics for the last 30 days:
 http://stackalytics.com/report/contribution/rally/30

 Thoughts?


 Best regards,
 Boris Pavlovic

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Thierry Carrez
Joe Gordon wrote:
 There are still two outstanding trove dependencies that are currently
 used in trove but not in global requirements. It would be nice to get
 this sorted out before the freeze so we can
 turn https://review.openstack.org/#/c/80690/ on.
 
 mockito https://review.openstack.org/#/c/80850/

This one was abandoned. Trove team is looking to move away from mockito
to mock. Timeline is in the next 4-5 days.

 wsgi_intercept https://review.openstack.org/#/c/80851/

This one was merged.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][QA] Does turbo-hipster fail in Nova?

2014-03-24 Thread wu jiang
Hi all,

Is turbo-hipster failing in Nova? I rechecked and recommitted my
patches[1][2], but the gate jobs always fail.

I checked all the Nova patches on Gerrit[3] just now, and found lots of
Nova patches failing because of turbo-hipster.

Is it having some problems?

Thanks.


wingwj

-
[1] https://review.openstack.org/#/c/72554/
[2] https://review.openstack.org/#/c/79529/

[3] https://review.openstack.org/#/q/status:open+project:openstack/nova,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Alexander Tivelkov
Hi Serg,

Are you proposing to have a standalone git repository / StackForge project
for that? Or just a separate package inside our primary murano repo?

--
Regards,
Alexander Tivelkov


On Mon, Mar 24, 2014 at 12:00 PM, Serg Melikyan smelik...@mirantis.com wrote:

 Programming Language, AFAIK

 [...]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Thierry Carrez
Russell Bryant wrote:
 On 03/21/2014 02:55 PM, Stefano Maffulli wrote:
 While I'm here, and for the records, I think that creating a new
 workflow 'temporarily' only until we have Storyboard usable, is a *huge*
 mistake. It seems to me that you're ignoring or at least underestimating
 the number of *people* that will need to be retrained, and the amount of
 documentation that needs to be fixed/adjusted. And the confusion that
 this will create among the 'long tail' developers.

 A change like this, done with a couple of announcements on a mailing
 list and a few mentions on IRC is not enough to steer the ~400
 developers who may be affected by this change. And then we'll have to
 manage the change again when we switch to Storyboard. If I were you, I'd
 focus on getting storyboard ready to use asap, instead.

 There, I said it, and I'm now going back to my cave.
 
 I think the current process and system are *so* broken that we can't
 afford to wait.  Further, after talking to Thierry, it seems quite
 likely that we would continue using this exact process, even with
 Storyboard.  Storyboard isn't a review tool and won't solve all of the
 project's problems.

Indeed. Storyboard is primarily designed as a task tracker. It lets you
group tasks affecting various repositories but ultimately related to
the same story.

So feature stories in StoryBoard can definitely have, as their first
task, a design task that will be linked to a change to a -specs
repository. Then when the design is approved you can add more tasks to
that same story, corresponding to implementation steps.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][QA] Does turbo-hipster fail in Nova?

2014-03-24 Thread wu jiang
The information provided in patch[1] is like this:

 - gate-real-db-upgrade_nova_mysql_devstack_131007: SUCCESS in 9m 08s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_mysql_devstack_131007/ddedd5a/131007_devstack_export.log
 - gate-real-db-upgrade_nova_mysql_user_001: WARNING - Migration 215-216
   changed too many rows (264484) in 24m 45s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_mysql_user_001/0ea217a/user_001.log
 - gate-real-db-upgrade_nova_percona_devstack_131007: SUCCESS in 8m 34s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_percona_devstack_131007/423d4d7/131007_devstack_export.log
 - gate-real-db-upgrade_nova_percona_user_001: WARNING - Migration 215-216
   changed too many rows (264484) in 24m 10s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_percona_user_001/acb427c/user_001.log
 - gate-real-db-upgrade_nova_mysql_user_002: SUCCESS in 12m 28s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_mysql_user_002/91951bb/user_002.log
 - gate-real-db-upgrade_nova_percona_user_002: SUCCESS in 12m 40s
   http://www.rcbops.com/turbo_hipster/results/72/72554/8/check/gate-real-db-upgrade_nova_percona_user_002/2fffa29/user_002.log

I checked the code in 'nova/db/sqlalchemy/migrate_repo/versions/'; the
first db migration file there is '216_havana.py'.

Thanks.

---
[1] https://review.openstack.org/#/c/72554/


On Mon, Mar 24, 2014 at 5:42 PM, wu jiang win...@gmail.com wrote:

 Hi all,

  Is turbo-hipster failing in Nova? I rechecked and recommitted my
  patches[1][2], but the gate jobs always fail.

  I checked all the Nova patches on Gerrit[3] just now, and found lots of
  Nova patches failing because of turbo-hipster.

  Is it having some problems?

 Thanks.


 wingwj

 -
 [1] https://review.openstack.org/#/c/72554/
 [2] https://review.openstack.org/#/c/79529/

 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/nova,n,z


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Li, Chen
+1 !


Thanks.
-chen

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Monday, March 24, 2014 3:32 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [rally] Proposing Kun to Rally Core.

Hi stackers,


I would like to propose Kun Huang to the Rally core team.
As you have already seen, he is doing a lot of good reviews and catching a
lot of nits and bugs, so he will be a good core reviewer.

Here are the detailed statistics for the last 30 days:
http://stackalytics.com/report/contribution/rally/30

Thoughts?


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Sergey Skripnick

+1

 I would like to propose Kun Huang to the Rally core team.
 As you have already seen, he is doing a lot of good reviews and catching a
 lot of nits and bugs, so he will be a good core reviewer.

 Here are the detailed statistics for the last 30 days:
 http://stackalytics.com/report/contribution/rally/30

 Thoughts?

 Best regards,
 Boris Pavlovic

--
Regards,
Sergey Skripnick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Gareth
nice to join you guys ; )

On Monday, March 24, 2014, Hugh Saunders h...@wherenow.org wrote:

 +1
 Kun has been very active on gerrit, and contributed good patches, so will
 be a great addition to the core team.


 --
 Hugh Saunders


 On 24 March 2014 07:31, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi stackers,


  I would like to propose Kun Huang to the Rally core team.
  As you have already seen, he is doing a lot of good reviews and catching
  a lot of nits and bugs, so he will be a good core reviewer.

  Here are the detailed statistics for the last 30 days:
 http://stackalytics.com/report/contribution/rally/30

 Thoughts?


 Best regards,
 Boris Pavlovic

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Proposing Kun to Rally Core.

2014-03-24 Thread Boris Pavlovic
Kun,

Welcome to Rally core team! =)

Best regards,
Boris Pavlovic


On Mon, Mar 24, 2014 at 2:12 PM, Gareth academicgar...@gmail.com wrote:

 nice to join you guys ; )


 [...]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Serg Melikyan
Alexander, just a simple sub-package in muranoapi.engine, inside the muranoapi repo.


On Mon, Mar 24, 2014 at 1:43 PM, Alexander Tivelkov
ativel...@mirantis.com wrote:

 Hi Serg,

 Are you proposing to have a standalone git repository / StackForge
 project for that? Or just a separate package inside our primary murano repo?

 [...]




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Stan Lagun
I like 'dsl' most because it is:
a. Short. This is especially good when you have that awesome 79-char
limit.
b. It leaves a lot of room for changes. MuranoPL can change its name; DSL
cannot :)


On Mon, Mar 24, 2014 at 1:43 PM, Alexander Tivelkov
ativel...@mirantis.com wrote:

 Hi Serg,

 Are you proposing to have a standalone git repository / StackForge
 project for that? Or just a separate package inside our primary murano repo?

 [...]




-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-24 Thread Sean Dague
Here are some preliminary views (they currently ignore the ceilometer
logs; I haven't had a chance to dive in there yet).

It actually looks like a huge part of the issue is oslo.messaging; the
bulk of screen-n-cond is oslo.messaging debug output. It seems that in
debug mode oslo.messaging is basically in 100% trace mode, which includes
logging every time a UUID is created and every payload.

I'm not convinced that's useful. We don't log every SQL statement
we run (with full payload).

The recent integration of oslo.messaging would also explain the new
growth of logs.

Other issues include other oslo utils that have really verbose debug
modes. Like lockutils emitting 4 DEBUG messages for every lock acquired.

Part of the challenge is that turning off DEBUG is currently embedded in
code in oslo log, which makes it kind of awkward to set sane log levels
for included libraries, because it requires an oslo round trip with code
changes to all the projects.
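
For comparison, with the stdlib logging API alone this is all it takes to
pin a chatty library (logger names here are illustrative, not necessarily
the ones oslo registers):

    import logging

    # Keep the application at DEBUG while pinning noisy libraries to INFO.
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('oslo.messaging').setLevel(logging.INFO)
    logging.getLogger('openstack.common.lockutils').setLevel(logging.INFO)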

-Sean

On 03/21/2014 07:23 PM, Clark Boylan wrote:
 Hello everyone,
 
 Back at the Portland summit the Infra team committed to archiving six months
 of test logs for Openstack. Since then we have managed to do just that.
 However, more recently we have seen the growth rate on those logs continue
 to grow beyond what is a currently sustainable level.
 
 For reasons, we currently store logs on a filesystem backed by cinder
 volumes. Rackspace limits the size and number of volumes attached to a
 single host meaning the upper bound on the log archive filesystem is ~12TB
 and we are almost there. You can see real numbers and pretty graphs on our
 cacti server [0].
 
 Long term we are trying to move to putting all of the logs in swift, but it
 turns out there are some use case issues we need to sort out around that
 before we can do so (but this is being worked on so should happen). Until
 that day arrives we need to work on logging more smartly, and if we can't do
 that we will have to reduce the log retention period.
 
 So what can you do? Well it appears that our log files may need a diet. I
 have listed the worst offenders below (after a small sampling, there may be
 more) and it would be great if we could go through those with a comb and
 figure out if we are logging actually useful data. The great thing about
 doing this is it will make lives better for deployers of Openstack too.
 
 Some initial checking indicates a lot of this noise may be related to
 ceilometer. It looks like it is logging AMQP stuff frequently and inflating
 the logs of individual services as it polls them.
 
 Offending files from tempest tests:
 screen-n-cond.txt.gz 7.3M
 screen-ceilometer-collector.txt.gz 6.0M
 screen-n-api.txt.gz 3.7M
 screen-n-cpu.txt.gz 3.6M
 tempest.txt.gz 2.7M
 screen-ceilometer-anotification.txt.gz 1.9M
 subunit_log.txt.gz 1.5M
 screen-g-api.txt.gz 1.4M
 screen-ceilometer-acentral.txt.gz 1.4M
 screen-n-net.txt.gz 1.4M
 from: 
 http://logs.openstack.org/52/81252/2/gate/gate-tempest-dsvm-full/488bc4e/logs/?C=S;O=D
 
 Unittest offenders:
 Nova subunit_log.txt.gz 14M
 Neutron subunit_log.txt.gz 7.8M
 Keystone subunit_log.txt.gz 4.8M
 
 Note all of the above files are compressed with gzip -9 and the filesizes
 above reflect compressed file sizes.
 
 Debug logs are important to you guys when dealing with Jenkins results. We
 want your feedback on how we can make this better for everyone.
 
 [0] 
 http://cacti.openstack.org/cacti/graph.php?action=viewlocal_graph_id=717rra_id=all
 
 Thank you,
 Clark Boylan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Glossary

2014-03-24 Thread Eugene Nikanorov
Hi,

Here's the wiki page with a list of terms we usually operate with when
discussing the lbaas object model:
https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary

Feel free to add/modify/ask questions.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] admin deployement/maintenance commands.

2014-03-24 Thread Yves-Gwenaël Bourhis
Hi all,

I drafted 2 blueprints (and pushed two patches) to allow administrators to:

- deploy an apache configuration:
https://blueprints.launchpad.net/horizon/+spec/web-conf-generation-script

- maintain local_settings.py and migrate it easily to new
features (via a migration script) when horizon is updated:
    https://blueprints.launchpad.net/horizon/+spec/settings-migration-script

Although I pushed 2 patches, I'm aware they can be greatly improved.
The goal is to add management commands to ease the life of people who
deploy and maintain horizon servers.

Concerning the web deployment command, it also needs nginx, gunicorn and
uwsgi templates if we want to cover more deployment methods.
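
To make the intent concrete, a hypothetical invocation (command names and
flags are illustrative only, not the final interface of the patches):

    # generate a web server config for the chosen frontend
    python manage.py make_web_conf --apache > horizon-vhost.conf

    # merge new default settings into an existing local_settings.py
    python manage.py migrate_settings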

Concerning the migration command, I didn't find a satisfying Python
'patch' implementation: the ones I found either require
local_settings.py to be unchanged (which is of no use, because for
migration purposes we want to apply a diff to a file which has changed),
or are not easily installable via easy_install -Ur requirements.txt.

Let me know your thoughts on modifying/adding management commands
designed to help the people who deploy and maintain
a production server.

-- 
Yves-Gwenaël Bourhis


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] API/data formats version change and compatibility

2014-03-24 Thread Timur Sufiev
Hi all!

While adapting Murano's Dynamic UI to the new MuranoPL data input
format, I've encountered the need to add some new fields to it, which
meant that a 'Version' field also had to be added to Dynamic UI. So, a
Dynamic UI definition without a Version field is assumed to be v.1,
while the upcoming Murano 0.5 release will support Dynamic UI v.2
processing (which produces data suitable for the MuranoPL in 0.5).
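
As a sketch, a versioned Dynamic UI definition might start like this (the
field layout below is illustrative; only the Version semantics follow the
scheme described above):

    Version: 2        # if absent, the definition is treated as v.1
    Forms:
      - appConfiguration:
          fields: []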

But then the next questions arose. What if the Murano 0.5 Dynamic UI
processor gets a definition in the v.1 format? Should it be able to process
it as well? If it should, then to which murano-api endpoint should it
pass the resulting object model?

I suspect that we have to deal with a slightly broader scope: to what
extent should Murano components of each new version support the data
formats and APIs of previous versions? I suggest discussing this
question at the upcoming Murano community meeting.

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-24 Thread Thomas Herve
Hi Stan,

Comments inline.

 Zane,
 
 I appreciate your explanations on Heat/HOT. This really makes sense.
 I didn't mean to say that MuranoPL is better for Heat. Actually HOT is good
 for Heat's mission. I completely acknowledge it.
 I've tried to avoid comparison between languages and I'm sorry if it felt
 that way. This is not productive, as I'm not proposing that you replace HOT
 with MuranoPL (although I believe that certain elements of MuranoPL syntax
 could be contributed to HOT and would be a valuable addition there). Also,
 people tend to protect what they have developed and invested in, and to be
 fair this is what we did in this thread to a great extent.
 
 What I'm trying to achieve is that you and the rest of the Heat team
 understand why it was designed the way it is. I don't feel that Murano can
 become a full-fledged member of the OpenStack ecosystem without a blessing
 from the Heat team. And it would be even better if we agree on a certain
 design, join our efforts and contribute to each other for the sake of the
 Orchestration program.

Note that I feel that most people outside of the Murano project are against
the idea of using a DSL. You should be aware that it could block the
integration into OpenStack.

 I'm sorry for long mail texts written in not-so-good English and appreciate
 you patience reading and answering them.
 
 Having said that let me step backward and explain our design decisions.
 
 Cloud administrators are usually technical guys capable of learning HOT
 and writing YAML templates. They know the exact configuration of their
 cloud (what services are available, which version of OpenStack the cloud
 is running) and generally understand how OpenStack works. They also know
 the software they intend to install. If such a guy wants to install
 Drupal, he knows exactly that he needs a HOT template describing a Fedora
 VM with Apache + PHP + MySQL + Drupal itself. It is not a problem for him
 to write such a HOT template.
 
 Note that such a template would be designed for one very particular
 configuration. There are hundreds of combinations that may be used to
 install that Drupal - use RHEL/Windows/etc instead of Fedora, use
 nginx/IIS/etc instead of Apache, use FastCGI instead of mod_php, PostgreSQL
 instead of MySQL. You may choose to have all software on a single VM or
 have one VM for the database and another for Drupal. There are also
 constraints on those combinations. For example you cannot have Fedora + IIS
 on the same VM. You cannot have Apache and Drupal on different VMs.
 
 So the HOT template represents a fixed combination of those software
 components. HOT may have input parameters like username or dbImageName,
 but the overall structure of the template is fixed. You cannot have a
 template that chooses whether to use Windows or Linux based on a parameter
 value. You cannot write a HOT template that accepts the number of
 instances it is allowed to create and then decides what gets installed on
 each of them. This is just not needed by Heat users.
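
 For instance, a minimal HOT-style illustration of that point (names and
 values here are only illustrative) - the resource set is fixed and only
 the values vary:

     heat_template_version: 2013-05-23
     parameters:
       db_image:
         type: string
         default: fedora-20
     resources:
       db_server:
         type: OS::Nova::Server
         properties:
           image: { get_param: db_image }
           flavor: m1.small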
 
 With Murano the picture is the opposite. The typical Murano user is a guy
 who bought an account from a cloud hosting vendor (cloud operator) and
 wants to run some software in the cloud. He may not even be aware that it
 is OpenStack. He knows nothing about programming in general and Heat in
 particular. He doesn't want to write YAML. He may not know how exactly
 Drupal is installed and what components it consists of.

So that's where I want to make a first stop. If your primary user is not a 
developer, there is no reason to introduce a DSL for security reasons. The 
provider can trust the code he writes, and there is no need to create a 
dedicated language.

 So what he does is go to his cloud (Murano) dashboard, browse through the
 application catalog, find Drupal and drag it onto his environment board
 (think of a Visio-style designer). He can stop at this point, click the
 deploy button, and the system will deploy Drupal. In other words the system
 (or maybe better to say the cloud operator or application developer)
 decides what set of components is going to be installed (like 1 Fedora VM
 for MySQL and 1 CentOS VM for Apache-PHP-Drupal). But the user may decide
 he wants to customize his environment. He digs down and sees that Drupal
 requires a database instance and the default is MySQL. He clicks a button
 to see what other options are available for that role.
 
 In Heat the HOT developer is the user. But in Murano those are completely
 different roles. There are developers who write application definitions
 (that is, DSL code) and there are end users who compose environments from
 those applications (components). Application developers may have nothing
 to do with the particular cloud their application is deployed on. As for
 the Drupal application, the developer knows that Drupal can be run with
 MySQL or PostgreSQL. But there may be many compatible implementations of
 those DBMSes - Galera MySQL, TroveMySQL, MMM MySQL etc. So to get a list
 of what components can be placed in the database role Murano needs to look at 

Re: [openstack-dev] [Murano] API/data formats version change and compatibility

2014-03-24 Thread Stan Lagun
Let's discuss this at the community meeting.

I would suggest dropping support for older versions at least until we release
Murano 1.0. As soon as we start to guarantee backward compatibility we will
introduce MinimalMuranoVersion instead of Version, because the format would
not change but some particular template may require a feature that was
introduced after the specified version.
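
For illustration, roughly what I have in mind (a sketch only - the manifest
layout and all names below are hypothetical, not actual Murano code):

    from distutils.version import StrictVersion

    ENGINE_VERSION = StrictVersion('1.0')  # hypothetical engine version

    def check_compatibility(manifest):
        # MinimalMuranoVersion names the oldest engine that has every
        # feature the template uses; the format itself stays compatible.
        required = StrictVersion(
            manifest.get('MinimalMuranoVersion', '1.0'))
        if required > ENGINE_VERSION:
            raise ValueError(
                'template requires Murano >= %s, this engine is %s'
                % (required, ENGINE_VERSION))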


On Mon, Mar 24, 2014 at 3:42 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Hi all!

 While adapting Murano's Dynamic UI to the new MuranoPL data input
 format, I've encountered the need to add some new fields to it, which
 meant that the 'Version' field also had to be added to Dynamic UI. So,
 Dynamic UI definition without Version field is supposed to be of v.1
 while upcoming Murano 0.5 release will support Dynamic UI v.2
 processing (which produces data suitable for the MuranoPL in 0.5).

 But then the next questions arose. What if Murano 0.5 Dynamic UI
 processor gets definition in v.1 format? Should it be able to process
 it as well? If it should, then to which murano-api endpoint should it
 pass the resulting object model?

 I suspect that we have to deal with a slightly broader scope: to what
 extent should Murano components of each new version support the data
 formats and APIs from the previous versions? I suggest discussing this
 question at the upcoming Murano community meeting.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Glossary

2014-03-24 Thread Susanne Balle
Looks good, thanks! Susanne


On Mon, Mar 24, 2014 at 6:55 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi,

 Here's the wiki page with a list of terms we usually operate with when
 discussing the LBaaS object model:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary

 Feel free to add/modify/ask questions.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-24 Thread Alexander Tivelkov
Hi,

 
 So that's where I want to make a first stop. If your primary user is not a
 developer, there is no reason to introduce a DSL for security reasons. The
 provider can trust the code he writes, and there is no need to create a
 dedicated language.


I think this needs to be clarified.
The provider does not write code: the provider just hosts the cloud, acting
as a moderator.
The code is written by another category of end-users, called Application
Publishers. This category is untrusted - that's the nature of an Application
Catalog: anybody can upload anything.

The publishers (who write DSL) should not be confused with users (who
define object models using it). These are different roles.





--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-24 Thread Stan Lagun
On Mon, Mar 24, 2014 at 4:13 PM, Thomas Herve thomas.he...@enovance.com wrote:

What I can say is that I'm not convinced. The only use-case for a DSL would
 be if you have to upload user-written code, but what you mentioned is a Web
 interface, where the user doesn't use the DSL, and the cloud provider is
 the developer. There is no reason in this case to have a secure environment
 for the code.


I didn't say that. There are at least 2 different roles: application
developers/publishers and application users. An application developer is not
necessarily the cloud provider. The whole point of AppCatalog is to support
the scenario where anyone can create and package some application, and that
package can be uploaded by the user alone. Think Apple AppStore or Google
Play. Some cloud providers may configure ACLs so that users are only allowed
to consume applications the provider has approved, while others may permit
uploading applications to some configurable scope (e.g. apps that would be
visible to all cloud users, to a particular tenant, or be private to the
user). We are also thinking about peer relations so that an application
uploaded to one catalog would automatically become available in all
connected catalogs.

This is similar to how Linux software repos work - AppCatalog is the repo, a
Murano package is what DEB/RPMs are to the repo, and the DSL is what DEB/RPM
manifests are to packages. Except that it runs in the cloud and is designed
to handle complex multi-node apps as well as trivial ones, in which case it
may narrow down to the actual installation of a DEB/RPM.



-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Extend operation for NFS driver

2014-03-24 Thread Kerr, Andrew
Hi cinder,

Just noticed we have competing solutions to implement extend_volume in the
generic NFS driver [1] and [2].  I understand these are not targeted until
after RC1, but I also didn't want the duplicate effort lost in the
shuffle.  Are there any thoughts on which is the more appropriate
implementation?

[1] https://review.openstack.org/82020
[2] https://review.openstack.org/82100

Andrew Kerr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Multiple patches in one review

2014-03-24 Thread John Dennis
When a change is complex good practice is to break the change into a
series of smaller individual patches that show the individual
incremental steps needed to get to the final goal. When partitioned into
small steps each change is easier to review and hopefully illustrates
the progression.

In most cases such a series of patches are interdependent and order
dependent, jenkins cannot run tests on any patch unless the previous
patch has been applied.

I was under the impression gerrit review supported multiple commits. In
fact you can submit multiple commits with a single git review command.
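
For example, as far as I understand the tooling, a sequence like

    git checkout -b my-feature origin/master
    # ...edit, commit step 1, edit, commit step 2...
    git review

pushes both commits at once and produces two Gerrit reviews, one per
commit.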

But from that point forward it appears as if each commit is handled
independently rather than being an ordered list of commits that are
grouped together sharing a single review where their relationship is
explicit. Also the jenkins tests either need to apply all the commits
in sequence and run the tests, or run the tests after applying each
successive commit in the sequence.

Can someone provide some explanation on how to handle this situation?

Or perhaps I'm just not understanding how the tools work when multiple
commits are submitted.

-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread Julien Danjou
On Mon, Mar 24 2014, John Dennis wrote:

 But from that point forward it appears as if each commit is handled
 independently rather than being an ordered list of commits that are
 grouped together sharing a single review where their relationship is
 explicit. Also the jenkins tests either needs to apply all the commits
 in sequence and run the test or it needs to run the test after applying
 the next commit in the sequence.

 Can someone provide some explanation on how to handle this situation?

 Or perhaps I'm just not understanding how the tools work when multiple
 commits are submitted.

If a patch depends on another one, Jenkins is run with all the needed
patches applied.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread Russell Bryant
On 03/24/2014 10:31 AM, John Dennis wrote:
 When a change is complex good practice is to break the change into a
 series of smaller individual patches that show the individual
 incremental steps needed to get to the final goal. When partitioned into
 small steps each change is easier to review and hopefully illustrates
 the progression.
 
 In most cases such a series of patches are interdependent and order
 dependent, jenkins cannot run tests on any patch unless the previous
 patch has been applied.
 
 I was under the impression gerrit review supported multiple commits. In
 fact you can submit multiple commits with a single git review command.
 
 But from that point forward it appears as if each commit is handled
 independently rather than being an ordered list of commits that are
 grouped together sharing a single review where their relationship is
 explicit. Also the jenkins tests either need to apply all the commits
 in sequence and run the tests, or run the tests after applying each
 successive commit in the sequence.
 
 Can someone provide some explanation on how to handle this situation?
 
 Or perhaps I'm just not understanding how the tools work when multiple
 commits are submitted.

Gerrit support for a patch series could certainly be better.

When you push a series of dependent changes, each commit has its own
review, but the dependencies between them are still tracked.

As an example, take a look at this series of patches:

https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/admin-event-callback-api,n,z

Take a look at this patch from the middle:

https://review.openstack.org/#/c/74576/

There is a Dependencies section above the list of patch versions.
That's where you see the patches linked together.  Our other tools that
do testing and merging of changes respect these dependencies.  Changes
will only be tested with the other changes they depend on.  They will
also only be merged once all changes they depend on have been merged.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-24 Thread Cindy
+1! Malini is going to be a great addition!

On 03/21/2014 09:06 PM, Alejandro Cabrera wrote:
 +1.
 
 Malini is dedicated to making Marconi and Openstack a healthier, better
 place. I am very happy to see Malini being proposed for Core. I trust
 that she'll do wonders for the project and will help drive interaction
 with the larger Openstack ecosystem. :)
 
 -Original Message-
 From: Flavio Percoco [mailto:flavio at redhat.com] 
 Sent: Friday, March 21, 2014 11:18 AM
 To: openstack-dev at lists.openstack.org
 Subject: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the 
 marconi-core team

 Greetings,

 I'd like to propose adding Malini Kamalambal to Marconi's core. Malini has 
 been an outstanding contributor for a long time. She's taken care of 
 Marconi's tests, benchmarks, gate integration, tempest support and many 
 other things. She's also actively participated in the mailing list 
 discussions, contributed thoughtful reviews and taken part in the 
 project's meetings since she first joined the project.

 Folks in favor or against please explicitly +1 / -1 the proposal.

 Thanks Malini, it's an honor to have you in the team.

 --
 @flaper87
 Flavio Percoco
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread John Griffith
On Mon, Mar 24, 2014 at 8:31 AM, John Dennis jden...@redhat.com wrote:

 When a change is complex good practice is to break the change into a
 series of smaller individual patches that show the individual
 incremental steps needed to get to the final goal. When partitioned into
 small steps each change is easier to review and hopefully illustrates
 the progression.

Definitely agree, however I've noticed people aren't always very *good*
about breaking these into logical pieces.  In other words it becomes random
changes scattered across multiple patches; in most cases it seems to be
after-thoughts or just whatever the submitter managed to work on at the
time.

Personally I'd love to see these be a bit more thought out and organized,
for my own sake as a reviewer.  While we're at it (I realize this isn't the
case you're talking about), I also would REALLY like to not see 5
individual patches, all dependent on each other, each just changing one or
two lines.  I was seeing this quite a bit this cycle, and the only
explanation I can think of is that developers get some sort of points for
number of commits.


 In most cases such a series of patches are interdependent and order
 dependent, jenkins cannot run tests on any patch unless the previous
 patch has been applied.

 I was under the impression gerrit review supported multiple commits. In
 fact you can submit multiple commits with a single git review command.

 But from that point forward it appears as if each commit is handled
 independently rather than being an ordered list of commits that are
 grouped together sharing a single review where their relationship is
 explicit. Also the jenkins tests either need to apply all the commits
 in sequence and run the tests, or run the tests after applying each
 successive commit in the sequence.

 Can someone provide some explanation on how to handle this situation?



 Or perhaps I'm just not understanding how the tools work when multiple
 commits are submitted.

 --
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread Russell Bryant
On 03/24/2014 11:08 AM, John Griffith wrote:
 
 On Mon, Mar 24, 2014 at 8:31 AM, John Dennis jden...@redhat.com wrote:
 
 When a change is complex good practice is to break the change into a
 series of smaller individual patches that show the individual
 incremental steps needed to get to the final goal. When partitioned into
 small steps each change is easier to review and hopefully illustrates
 the progression.
 
 Definitely agree, however I've noticed people aren't always very *good*
 about breaking these into logical pieces.  In other words it becomes
 random changes scattered across multiple patches; in most cases it seems
 to be after-thoughts or just whatever the submitter managed to work on at
 the time.
 
 Personally I'd love to see these be a bit more thought out and organized,
 for my own sake as a reviewer.  While we're at it (I realize this isn't
 the case you're talking about), I also would REALLY like to not see 5
 individual patches, all dependent on each other, each just changing one
 or two lines.  I was seeing this quite a bit this cycle, and the only
 explanation I can think of is that developers get some sort of points for
 number of commits.
 

Some related good docs on splitting up changes:

https://wiki.openstack.org/wiki/GitCommitMessages

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MuranoPL questions?

2014-03-24 Thread Dmitry
MuranoPL is supposed to provide a solution for the real need to manage
services in a centralized manner and to allow cloud provider customers to
create their own services.
An application catalog similar to AppDirect (www.appdirect.com), natively
supported by OpenStack, is a huge step forward.
Think about Amazon, which provides different services for different needs:
Amazon CloudFormation, Amazon OpsWorks and Amazon Beanstalk.
Following similar logic (which fully makes sense to me), OpenStack should
provide resource reservation and orchestration (Heat and Climate), an
Application Catalog (Murano) and PaaS (Solum).
Every project can live in harmony with the others and contribute to the
completeness of the cloud service provider's offering.
This is my opinion and I would be happy to use Murano in our internal
solution.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Stefano Maffulli
On 03/22/2014 03:14 AM, Sean Dague wrote:
 Honestly, I largely disagree. This is applying some process where there
 clearly wasn't any before.

We have a different perception evidently. I'm assuming you're
exaggerating for the sake of clarity because assuming there was no
process before means that the releases are basically randomly defined.

 Storyboard remains vaporware. I will be enormously happy when it is not.
[...]

That's depressing to read. I'm again assuming you're using exaggerated
words to paint a clearer picture.

At this point I'd like to get a fair assessment of storyboard's status
and timeline: it's clear that Launchpad blueprints need to be abandoned
lightspeed fast and I (and others) have been sold the idea that
Storyboard is a *thing* that will happen *soon*. I also was told that
spec reviews are an integral part of Storyboard use case scenarios, not
just defects.

I guess the reason why I'm puzzled is that if the largest OpenStack
project can afford to migrate out of Launchpad's Blueprints quickly, why
do we even care about Storyboard? And don't tell me this is an
experiment: you don't do experiments of this size with people! We're
talking about hundreds of people who've been reading/told for
months/years that in order to get a new feature in OpenStack you file a
blueprint on Launchpad... There are hundreds of presentations on
slideshare, webinar recordings, documentation *we* don't control, all
mentioning launchpad.

I keep having the impression you are all underestimating the size of the
change you're proposing. To give you an example, even though we moved
the individual CLA from echosign to Gerrit, I keep receiving 4-5 signed
iCLA from echosign per week. That's because some of these companies have
not updated their internal documentation, or because individual developers
trust whatever shows up on Google instead of searching the wiki.

Now, it seems that Russell and Thierry have made up their mind so let's
focus on Storyboard again: is it going to happen during the Juno
timeframe or shall I consider it vaporware too?

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] python-keystoneclient unit tests only if python-memcache is installed

2014-03-24 Thread Dolph Mathews
FWIW, I opened a bug [1] and proposed a fix [2].

[1]: https://bugs.launchpad.net/python-keystoneclient/+bug/1296794
[2]: https://review.openstack.org/#/c/82527/
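
The fix boils down to the usual conditional-skip pattern plus a sanity
check that memcached is actually reachable - roughly like this (a sketch,
not the actual patch):

    import unittest

    try:
        import memcache
    except ImportError:
        memcache = None

    class MemcacheTests(unittest.TestCase):
        def setUp(self):
            super(MemcacheTests, self).setUp()
            if memcache is None:
                self.skipTest('python-memcached is not installed')
            client = memcache.Client(['localhost:11211'])
            # set() returns a falsy value when the daemon is unreachable
            if not client.set('ksc-test-canary', '1'):
                self.skipTest('memcached not running on localhost:11211')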

On Fri, Mar 21, 2014 at 12:38 AM, Thomas Goirand z...@debian.org wrote:

 On 03/20/2014 11:48 PM, Dolph Mathews wrote:
  Yes, those tests are conditionally executed if
  https://pypi.python.org/pypi/python-memcached/ is installed and if so,
  memcached is assumed to be accessible on localhost. Unfortunately the
  test suite doesn't have a sanity check for that following assumption, so
  the test failures aren't particularly helpful.

 Oh, ok, thanks!

 I've added python-memcache *AND* memcached as build-dependency, then
 everything is back to working! :)

 Thanks again for the above help,
 Cheers,

 Thomas Goirand (zigo)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-delete in amqp reply_* queues in OpenStack

2014-03-24 Thread Chris Friesen

On 03/24/2014 02:59 AM, Dmitry Mescheryakov wrote:

Chris,

In oslo.messaging a single reply queue is used to gather results from
all the calls. It is created lazily on the first call and is used
until the process is killed. I did a quick look at oslo.rpc from
oslo-incubator and it seems like it uses the same pattern, which is
not surprising since oslo.messaging descends from oslo.rpc. So if you
restart some process which does rpc calls (nova-api, I guess), you
should see one reply queue gone and another one created instead after
some time.
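
If I understand the pattern, it is roughly this (a toy sketch of my
understanding, stdlib only - not the actual oslo code; 'broker' stands in
for the real AMQP connection):

    import uuid

    class RpcClient(object):
        def __init__(self, broker):
            self.broker = broker
            self.reply_q = None  # created lazily, lives until process exit

        def call(self, topic, payload):
            if self.reply_q is None:
                # one queue per process, e.g. reply_2ae81bd4...
                self.reply_q = 'reply_' + uuid.uuid4().hex
                self.broker.declare_queue(self.reply_q)
            msg_id = uuid.uuid4().hex
            # the reply queue name travels inside every request, which is
            # how the server side knows where to publish the response
            self.broker.publish(topic, {'msg_id': msg_id,
                                        '_reply_q': self.reply_q,
                                        'payload': payload})
            return self.broker.wait_for(self.reply_q, msg_id)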


Okay, that makes a certain amount of sense.

How does it work for queues used by both the controller and the compute 
node?


If I do a controlled switchover from one controller to another (killing 
and restarting rabbit, nova-api, nova-conductor, nova-scheduler, 
neutron, cinder, etc.) I see that the number of reply queues drops from 
28 down to 5, but those 5 are all ones that existed before.


I assume that those 5 queues are (re)created by the services running on 
the compute nodes, but if that's the case then how would the services 
running on the controller node find out about the names of the queues?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services concept

2014-03-24 Thread Susanne Balle
Hi Neutron LBaaS folks,


I have been getting up to speed on the Neutron LBaaS implementation and
have been wondering how to make it fit our needs in HP public cloud as well
as an enterprise-grade load balancer service for HP OpenStack
implementations. We are currently using Libra as our LBaaS implementation
and are interested in moving to the Neutron LBaaS service in the future.


I have been looking at the LBaaS requirements posted by Jorge at:

https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements


When we started looking at existing packages for our LBaaS service we had a
focus on requirements needed to create a managed service where the user
would just interact with the service APIs and not have to deal with
resiliency, HA, monitoring, and reporting functions themselves. Andrew
Hutchings became the HP Tech Lead for the open source Libra project. For
historical reasons around why we decided to contribute to Libra see:

http://openstack.10931.n7.nabble.com/Neutron-Relationship-between-Neutron-LBaaS-and-Libra-td29562.html


We would like to discuss adding the concept of managed services to the
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
proxy. The latter could be a second approach for some of the software
load-balancers e.g. HA proxy since I am not sure that it makes sense to
deploy Libra within Devstack on a single VM.


Currently users would have to deal with HA, resiliency, monitoring and
managing their load-balancers themselves.  As a service provider we are
taking a more managed service approach allowing our customers to consider
the LB as a black box and the service manages the resiliency, HA,
monitoring, etc. for them.


We like where Neutron LBaaS is going with regards to L7 policies and SSL
termination support which Libra is not currently supporting and want to
take advantage of the best in each project.

We have a draft on how we could make Neutron LBaaS take advantage of Libra
in the back-end.

The details are available at:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


While this would allow us to fill a gap short term we would like to discuss
the longer term strategy since we believe that everybody would benefit from
having such managed services artifacts built directly into Neutron LBaaS.


There are blueprints on high-availability for the HA proxy software
load-balancer and we would like to suggest implementations that fit our
needs as services providers.


One example where the managed service approach for the HA proxy load
balancer is different from the current Neutron LBaaS roadmap is around HA
and resiliency. The 2 LB HA setup proposed (
https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
appropriate for service providers in that users would have to pay for the
extra load-balancer even though it is not being actively used.  An
alternative approach is to implement resiliency using a pool of stand-by,
preconfigured load balancers owned by e.g. an LBaaS tenant, and to assign
load balancers from the pool to tenants' environments. We are currently
using this approach in the public cloud with Libra, and it takes
approximately 80 seconds for the service to decide that a load balancer has
failed, swap the floating IP, update the DB, etc. and have a new LB
running.


Regards Susanne


--

Susanne Balle

HP Cloud
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread Ben Nemec
On 2014-03-24 09:31, John Dennis wrote:
 When a change is complex good practice is to break the change into a
 series of smaller individual patches that show the individual
 incremental steps needed to get to the final goal. When partitioned into
 small steps each change is easier to review and hopefully illustrates
 the progression.
 
 In most cases such a series of patches are interdependent and order
 dependent, jenkins cannot run tests on any patch unless the previous
 patch has been applied.
 
 I was under the impression gerrit review supported multiple commits. In
 fact you can submit multiple commits with a single git review command.
 
 But from that point forward it appears as if each commit is handled
 independently rather than being an ordered list of commits that are
 grouped together sharing a single review where their relationship is
 explicit. Also the jenkins tests either need to apply all the commits
 in sequence and run the tests, or run the tests after applying each
 successive commit in the sequence.

I should point out that Jenkins can't apply the next patch in sequence
just to get tests passing.  What happens if the next patch never merges
or has to be reverted?  Each commit needs to be able to pass tests using
only the previous commits in the sequence.  If it relies on a subsequent
commit then either the commits need to be reordered or, if there's a
circular dependency, maybe those commits aren't logically separate and
should just be squashed together.

 
 Can someone provide some explanation on how to handle this situation?
 
 Or perhaps I'm just not understanding how the tools work when multiple
 commits are submitted.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.rootwrap 1.2.0 released

2014-03-24 Thread Thierry Carrez
A new version of the oslo.rootwrap library (1.2.0) was just released:

https://pypi.python.org/pypi/oslo.rootwrap
http://tarballs.openstack.org/oslo.rootwrap/oslo.rootwrap-1.2.0.tar.gz

MD5SUM: 2cd7e0b6e838d2ee492982e99a7834a2

It contains the following improvements and bugfixes:

Add use_syslog_rfc_format config option to honor RFC5424
https://bugs.launchpad.net/oslo/+bug/904307

Fix IpFilter so that it can't be trivially bypassed
https://bugs.launchpad.net/oslo/+bug/1081795

Regards,

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-24 Thread Brant Knudson
There's a proposed change to Keystone to update SQLAlchemy to 0.9[0] which
is failing in the doc build. I'm not sure exactly what it's doing that's
causing it to fail. I proposed changes to work around the issues[1] based
on the error messages. Part of the fix requires a change in oslo-incubator
code[2]. So I think once these are merged (or a better fix is merged...),
Keystone will support SQLAlchemy 0.9.

[0] https://review.openstack.org/#/c/82231/
[1] https://review.openstack.org/#/c/82370/
[2] https://review.openstack.org/#/c/82370/
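
For anyone hitting similar issues elsewhere, the usual pattern for keeping
code working across both series is a small version guard - an illustrative
sketch only, not the actual Keystone/oslo change:

    import sqlalchemy

    # e.g. (0, 8) or (0, 9), parsed from a version string like '0.9.4'
    SQLA_VERSION = tuple(
        int(part) for part in sqlalchemy.__version__.split('.')[:2])

    def is_sqla_09():
        return SQLA_VERSION >= (0, 9)

and then branching on it wherever 0.9 moved or renamed something the code
relied on.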

- Brant



On Fri, Mar 14, 2014 at 12:45 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:

 Hi all,

 I think it's actually not that hard to fix the errors we have when
 using SQLAlchemy 0.9.x releases.

 I uploaded two changes to Nova to fix unit tests:
 - https://review.openstack.org/#/c/80431/ (this one should also fix
 the Tempest test run error)
 - https://review.openstack.org/#/c/80432/

 Thanks,
 Roman

 On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand z...@debian.org wrote:
  On 03/14/2014 02:06 AM, Sean Dague wrote:
  On 03/13/2014 12:31 PM, Thomas Goirand wrote:
  On 03/12/2014 07:07 PM, Sean Dague wrote:
  Because of where we are in the freeze, I think this should wait until
  Juno opens to fix. Icehouse will only be compatible with SQLA 0.8,
 which
  I think is fine. I expect the rest of the issues can be addressed
 during
  Juno 1.
 
  -Sean
 
  Sean,
 
  No, it's not fine for me. I'd like things to be fixed so we can move
  forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
  will be released with SQLA 0.9 and with Icehouse, not Juno.
 
  We're past freeze, and this requires deep changes in Nova DB to work. So
  it's not going to happen. Nova provably does not work with SQLA 0.9, as
  seen in Tempest tests.
 
-Sean
 
  It'd be nice if we considered more the fact that OpenStack, at some
  point, gets deployed on top of distributions... :/
 
  Anyway, if we can't do it because of the freeze, then I will have to
  carry the patch in the Debian package. Never the less, someone will have
  to work and fix it. If you know how to help, it'd be very nice if you
  proposed a patch, even if we don't accept it before Juno opens.
 
  Thomas Goirand (zigo)
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-24 Thread Malini Kamalambal


On 3/21/14 3:49 PM, Rochelle.RochelleGrober rochelle.gro...@huawei.com
wrote:


 From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
snip
 
 We are talking about different levels of testing,
 
 1. Unit tests - which everybody agrees should be in the individual
 project
 itself
 2. System Tests - 'System' referring to (and limited to) all the
 components that make up the project. These are also the functional
 tests for the
 project.
 3. Integration Tests - This is to verify that the OS components
 interact
 well and don't break other components -Keystone being the most obvious
 example. This is where I see getting the maximum mileage out of
 Tempest.
 
 I see value in projects taking ownership of the System Tests - because
 if
 the project is not 'functionally ready', it is not ready to integrate
 with
 other components of Openstack.
 But for this approach to be successful, projects should have diversity
 in
 the team composition - we need more testers who focus on creating these
 tests.
 This will keep the teams honest in their quality standards.

+1000  I love your approach to this.  You are right.  Functional tests
for a project - tests that run in a real environment but exercise the
intricacies of just that project - aren't there for most projects, but
really should be.  And these tests should be exercised against new code
before the code enters the gerrit/Jenkins stream. But, as Malini points
out, it's at most a dream for most projects, as dedicated test developers
just aren't part of the teams.


 As long as individual projects cannot guarantee functional test
 coverage,
 we will need more tests in Tempest.
 But that will shift focus away from Integration Testing, which can be
 done
 ONLY in Tempest.

+1  This is also an important point.  If functional testing belonged to
the projects, then most of these tests would be run before a tempest test
was ever run and would not need to be part of the integrated tests,
except as a subset that demonstrates the functioning integration with
other projects.

 
 Regardless of whatever we end up deciding, it will be good to have
 these
 discussions sooner than later.
 This will help at least the new projects to move in the right
 direction.

Maybe a summit topic?  How do we push functional testing into the project
level development?

--Rocky

That is a good idea.
I just suggested a session for the ATL summit. See
http://summit.openstack.org/cfp/details/134



 
 -Malini
 
 
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread James E. Blair
Hi,

So recently we started this experiment with the compute and qa programs
to try using Gerrit to review blueprints.  Launchpad is deficient in
this area, and while we hope Storyboard will deal with it much better,
it's not ready yet.

As a development organization, OpenStack scales by adopting common tools
and processes, and true to form, we now have a lot of other projects
that would like to join the experiment.  At some point that stops
being an experiment and becomes practice.

However, at this very early point, we haven't settled on answers to some
really basic questions about how this process should work.  Before we
extend it to more projects, I think we need to establish a modicum of
commonality that helps us integrate it with our tooling at scale, and
just as importantly, helps new contributors and people who are working
on multiple projects have a better experience.

I'd like to hold off on creating any new specs repos until we have at
least the following questions answered:

a) Should the specs repos be sphinx documents?
b) Should they follow the Project Testing Interface[1]?
c) Some basic agreement on what information is encoded?
   eg: don't encode implementation status (it should be in launchpad)
   do encode branches (as directories? as ...?)
d) Workflow process -- what are the steps to create a new spec and make
   sure it also exists and is tracked correctly in launchpad?

-Jim

[1] https://wiki.openstack.org/wiki/ProjectTestingInterface

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 03/24/2014

2014-03-24 Thread Dmitri Zimine
Hi folks, thanks for taking part in Mistral meeting on #openstack-meeting

Here are the minutes and the full log.

Minutes 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-24-16.03.html

Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-24-16.03.log.html

Join us next time, March 31, at the same time 

(note that with summer time in effect now it's 9am Pacific)

Cheers, DZ.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread Chris Friesen

We've been stress-testing our system doing controlled switchover of the 
controller.  Normally this works okay, but we've run into a situation that 
seems to show a flaw in the reconnection logic.

On the compute node, nova-compute has managed to get into a state where it 
shows as down in nova service-list, and the nova-compute.log seems to show 
it never managing to reconnect with the AMQP server.

I've included logs below showing what seems to be the beginning of the problem 
and then showing it transitioning to the periodic logs without a successful 
reconnection.  The periodic logs have now been going for roughly seven hours...

Any ideas on what might be going on would be appreciated.

Chris






2014-03-24 09:24:33.566 6620 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-4', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x180, cpulist=[7, 8] pinned, nodelist=[0], node=0 
2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-1', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x60, cpulist=[5, 6] pinned, nodelist=[0], node=0 
2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'load_balancer', vm_state=u'active', task_state=None, vcpus=3, 
cpuset=0x1c00, cpulist=[10, 11, 12] pinned, nodelist=[1], node=1 
2014-03-24 09:24:34.182 6620 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 111290, per-node: [52286, 59304], numa nodes:2
2014-03-24 09:24:34.183 6620 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 29
2014-03-24 09:24:34.183 6620 AUDIT nova.compute.resource_tracker [-] Free 
vcpus: 170, free per-node float vcpus: [48, 112], free per-node pinned vcpus: 
[3, 7]
2014-03-24 09:24:34.183 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
vcpus:20, Free vcpus:170, 16.0x overcommit, per-cpu float cpulist: [3, 4, 9, 
13, 14, 15, 16, 17, 18, 19]
2014-03-24 09:24:34.244 6620 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute-0:compute-0
2014-03-24 09:25:36.564 6620 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-4', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x180, cpulist=[7, 8] pinned, nodelist=[0], node=0 
2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-1', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x60, cpulist=[5, 6] pinned, nodelist=[0], node=0 
2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'load_balancer', vm_state=u'active', task_state=None, vcpus=3, 
cpuset=0x1c00, cpulist=[10, 11, 12] pinned, nodelist=[1], node=1 
2014-03-24 09:25:37.182 6620 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 111290, per-node: [52286, 59304], numa nodes:2
2014-03-24 09:25:37.182 6620 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 29
2014-03-24 09:25:37.183 6620 AUDIT nova.compute.resource_tracker [-] Free 
vcpus: 170, free per-node float vcpus: [48, 112], free per-node pinned vcpus: 
[3, 7]
2014-03-24 09:25:37.183 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
vcpus:20, Free vcpus:170, 16.0x overcommit, per-cpu float cpulist: [3, 4, 9, 
13, 14, 15, 16, 17, 18, 19]
2014-03-24 09:25:37.245 6620 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute-0:compute-0
2014-03-24 09:26:47.324 6620 ERROR root [-] Unexpected exception occurred 1 
time(s)... retrying.
2014-03-24 09:26:47.324 6620 TRACE root Traceback (most recent call last):
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/excutils.py, line 
78, in inner_func
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 745, in _consumer_thread
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 736, in consume
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 663, in iterconsume
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 578, in ensure
2014-03-24 09:26:47.324 6620 TRACE root   File 
./usr/lib64/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 658, in _consume
2014-03-24 09:26:47.324 6620 TRACE root   File 
/usr/lib64/python2.7/site-packages/kombu/connection.py, line 279, in 
drain_events
2014-03-24 09:26:47.324 6620 TRACE root return 
self.transport.drain_events(self.connection, **kwargs)
2014-03-24 09:26:47.324 6620 TRACE root   File 

[openstack-dev] Ports' vif_details after upgrading from havana with ML2

2014-03-24 Thread Jakub Libosvar
Hello,

In Icehouse a new column vif_details was introduced and cap_port_filter
was removed. During the db migration the data in the cap_port_filter
column is lost, and after the upgrade the vif_details column is
legitimately empty. This leads to tempest test failures when checking
port_filter[1].

What would be the impact of running neutron-server without the
port_filter information in vif_details?
Should neutron-server check this on startup and set port_filter
according to the vif_type and mechanism_drivers?
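
Something like this is what I had in mind for a backfill (a sketch; the
per-vif_type defaults are made up - in reality each mechanism driver knows
whether its vif_type supports filtering):

    # hypothetical defaults keyed by vif_type
    PORT_FILTER_BY_VIF_TYPE = {
        'ovs': True,
        'bridge': True,
        'unbound': False,
    }

    def backfill_vif_details(vif_type, vif_details):
        # return vif_details with port_filter filled in when missing
        details = dict(vif_details or {})
        details.setdefault(
            'port_filter', PORT_FILTER_BY_VIF_TYPE.get(vif_type, False))
        return details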

Thanks for the explanation.

Kuba

[1]
http://logs.openstack.org/95/58695/40/check/check-grenade-dsvm-neutron/4dc0dff/logs/testr_results.html.gz

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-24 Thread Dan Prince


- Original Message -
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Sent: Sunday, March 23, 2014 9:02:23 PM
 Subject: Re: [openstack-dev] [TripleO] test environment requirements
 
 Excerpts from Dan Prince's message of 2014-03-21 09:25:42 -0700:
  
  - Original Message -
   From: Robert Collins robe...@robertcollins.net
   To: OpenStack Development Mailing List
   openstack-dev@lists.openstack.org
   Sent: Thursday, March 13, 2014 5:51:30 AM
   Subject: [openstack-dev] [TripleO] test environment requirements
   
   So we already have pretty high requirements - it's basically a 16G
   workstation as a minimum.
   
   Specifically to test the full story:
- a seed VM
- an undercloud VM (bm deploy infra)
- 1 overcloud control VM
- 2 overcloud hypervisor VMs
   
  5 VMs with 2+G RAM each.
   
   To test the overcloud alone against the seed we save 1 VM, to skip the
   overcloud we save 3.
   
   However, as HA matures we're about to add 4 more VMs: we need a HA
   control plane for both the under and overclouds:
- a seed VM
- 3 undercloud VMs (HA bm deploy infra)
- 3 overcloud control VMs (HA)
- 2 overcloud hypervisor VMs
   
  9 VMs with 2+G RAM each == 18GB
   
   What should we do about this?
   
   A few thoughts to kick start discussion:
- use Ironic to test across multiple machines (involves tunnelling
   brbm across machines, fairly easy)
- shrink the VM sizes (causes thrashing)
- tell folk to toughen up and get bigger machines (ahahahahaha, no)
- make the default configuration inline the hypervisors on the
   overcloud with the control plane:
  - a seed VM
  - 3 undercloud VMs (HA bm deploy infra)
  - 3 overcloud all-in-one VMs (HA)
 
7 VMs with 2+G RAM each == 14GB
   
   
   I think its important that we exercise features like HA and live
   migration regularly by developers, so I'm quite keen to have a fairly
   solid systematic answer that will let us catch things like bad
   firewall rules on the control node preventing network tunnelling
   etc...
  
  I'm all for supporting HA development and testing within devtest. I'm
  *against* forcing it on all users as a default.
  
  I can imagine wanting to cut corners and have configurations flexible on
  both ends (undercloud and overcloud). I may for example deploy a single
  all-in-one undercloud when I'm testing overcloud HA. Or vice versa.
  
  I think I'm one of the few (if not the only) developer who uses almost
  exclusive baremetal (besides seed VM) when test/developing TripleO.
  Forcing users who want to do this to have 6-7 real machines is a bit much
  I think. Arguably wasteful even. By requiring more machines to run through
  devtest you actually make it harder for people to test it on real hardware
  which is usually harder to come by. Given deployment on real bare metal is
  sort of the point of TripleO I'd very much like to see more developers
  using it rather than less.
  
  So by all means let's support HA... but let's do it in a way that is
  configurable (i.e. not forcing people to be wasters).
  
  Dan
  
 
 I don't think anybody wants to force it on _users_. But a predominance
 of end users will require HA, and thus we need our developers to be able
 to develop with HA.
 
 This is for the _benefit_ of developers. I imagine we've all been in
 situations where our dev environment is almost nothing like CI, and then
 when CI runs you find that you have missed huge problems.. and now to
 test for those problems you either have to re-do your dev environment,
 or wait.. a lot.. for CI.

All good points. Running an exact copy of the upstream CI environment seems 
to be getting more and more costly though. My goal is that I'd like developers 
to be able to choose what they want to test as much as they can, streamlining 
things where appropriate. Take the overcloud today: I actually like the idea of 
going the other way here and running an all-in-one version of it from time to 
time to save resources.

If I know I need to dev test on an HA cloud then by all means I'll try to do 
that. But in many cases I may not need to go to such lengths. Furthermore, some 
testing is better than no testing at all, which is what we end up with when the 
resource bar is set too high.

Again, all for supporting the HA test and dev path in the devtest scripts. 
Ideally making it as configurable as we can...

Dan


 
 I don't have any really clever answers to this problem. We're testing an
 end-to-end cloud deployment. If we can't run a small, accurate simulation
 of such an environment as developers, then we will end up going very slow.
 The problem is that this small simulation is still massive compared
 to the usual development paradigm which involves at most two distinct
 virtual machines.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [TripleO] test environment requirements

2014-03-24 Thread Dan Prince


- Original Message -
 From: Robert Collins robe...@robertcollins.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, March 23, 2014 11:18:19 PM
 Subject: Re: [openstack-dev] [TripleO] test environment requirements
 
 On 24 March 2014 14:06, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from Robert Collins's message of 2014-03-13 02:51:30 -0700:
  So we already have pretty high requirements - it's basically a 16G
  workstation as a minimum.
 
  Specifically to test the full story:
   - a seed VM
   - an undercloud VM (bm deploy infra)
   - 1 overcloud control VM
   - 2 overcloud hypervisor VMs
  
 5 VMs with 2+G RAM each.
 
  To test the overcloud alone against the seed we save 1 VM, to skip the
  overcloud we save 3.
 
  However, as HA matures we're about to add 4 more VMs: we need a HA
  control plane for both the under and overclouds:
   - a seed VM
   - 3 undercloud VMs (HA bm deploy infra)
   - 3 overcloud control VMs (HA)
   - 2 overcloud hypervisor VMs
  
 9 VMs with 2+G RAM each == 18GB
 
 
  If we switch end-user-vm tests to cirros, and use a flavor that is really
  really tiny, like 128M RAM, then I think we can downsize the development
  environment hypervisors from 2G to 1G. That at least drops us to 16G. If
  we can also turn off the seed as soon as the undercloud boots, that's
  another 2G saved. Finally if we can turn off 1 of the undercloud VMs
  and run degraded, that is another 2G saved.
 
 We can't turn off the seed unless we change the network topology; we
 need to make sure if we do that that we don't invalidate the test
 structure.

This should make changing the network topology in the test envs easier:

  https://review.openstack.org/#/c/82327/

If we use something else (besides the seed VM) as the gateway for the baremetal 
network, I think it should work fine, right?

 
  Small potatoes I know, but it wouldn't be complicated to do any of these
  and it would also help test real scenarios we want to test (seed going
  away, undercloud node going away).
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I like this.
 
 I also like much of the spread of ideas that came up - let's capture
 them in an etherpad and link that from a blueprint so our future selves
 can find it.
 
 In particular I'd like to make being able to use OpenStack a J series
 goal, though it doesn't help with local dev overheads.
 
 -Rob
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread Dan Smith
 Any ideas on what might be going on would be appreciated.

This looks like something that should be filed as a bug. I don't have
any ideas off hand, but I will note that the reconnection logic works
fine for us in the upstream upgrade tests. That scenario includes
starting up a full stack, then taking down everything except compute and
rebuilding a new one on master. After the several minutes it takes to
upgrade the controller services, the compute host reconnects and is
ready to go before tempest runs.

I suspect your case wedged itself somehow other than that, which
definitely looks nasty and is worth tracking in a bug.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread Chris Friesen

On 03/24/2014 10:41 AM, Chris Friesen wrote:


We've been stress-testing our system doing controlled switchover of
the controller.  Normally this works okay, but we've run into a
situation that seems to show a flaw in the reconnection logic.

On the compute node, nova-compute has managed to get into a state
where it shows as down in nova service-list, and the
nova-compute.log seems to show it never managing to reconnect with
the AMQP server.

I've included logs below showing what seems to be the beginning of
the problem and then showing it transitioning to the periodic logs
without a successful reconnection.  The periodic logs have now been
going for roughly seven hours...

Any ideas on what might be going on would be appreciated.



I suppose I should mention that we're running Havana, not the current code.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Russell Bryant
On 03/24/2014 12:34 PM, James E. Blair wrote:
 Hi,
 
 So recently we started this experiment with the compute and qa programs
 to try using Gerrit to review blueprints.  Launchpad is deficient in
 this area, and while we hope Storyboard will deal with it much better,
 but it's not ready yet.

This seems to be a point of confusion.  My view is that Storyboard isn't
intended to implement what gerrit provides.  Given that, it seems like
we'd still be using this whether the tracker is launchpad or storyboard.

 As a development organization, OpenStack scales by adopting common tools
 and processes, and true to form, we now have a lot of other projects
 that would like to join the experiment.  At some point that stops
 being an experiment and becomes practice.
 
 However, at this very early point, we haven't settled on answers to some
 really basic questions about how this process should work.  Before we
 extend it to more projects, I think we need to establish a modicum of
 commonality that helps us integrate it with our tooling at scale, and
 just as importantly, helps new contributors and people who are working
 on multiple projects have a better experience.
 
 I'd like to hold off on creating any new specs repos until we have at
 least the following questions answered:

Sounds good to me.

 a) Should the specs repos be sphinx documents?

Probably.  I see that the qa-specs repo has this up for review.  I'd
like to look at applying this to nova-specs and see how it affects the
workflow we've had in mind so far.

 b) Should the follow the Project Testing Interface[1]?

As its relevant, sure.

 c) Some basic agreement on what information is encoded?

We've been working on a template in nova-specs here:

http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

eg: don't encode implementation status (it should be in launchpad)

Agreed

do encode branches (as directories? as ...?)

IMO, yes.  The reason is that I think approval of a spec should be
limited to a given release.  If it slips, it should be re-reviewed to
make sure it still makes sense given whatever developments have
occurred.  That's why we have a juno/ directory in nova-specs.
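
i.e. a layout roughly like this (paths approximate):

    nova-specs/
        template.rst
        specs/
            juno/
                my-feature.rst   # approved for Juno; re-proposed under
                                 # the next cycle's directory if it slips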

 d) Workflow process -- what are the steps to create a new spec and make
sure it also exists and is tracked correctly in launchpad?

For nova-specs, the first piece of info in the template is a blueprint URL.

On the launchpad side, nothing will be allowed to be targeted to a
milestone without an approved spec attached to it.

 [1] https://wiki.openstack.org/wiki/ProjectTestingInterface


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-24 Thread Susanne Balle
My apologies if you receive this twice. I seems to have problems with my
gmail account.



Hi Neutron LBaaS folks,



I have been getting up to speed on the Neutron LBaaS implementation and
have been wondering how to make it fit our needs in HP public cloud as well
as an enterprise-grade load balancer service for HP Openstack
implementations. We are currently using Libra as our LBaaS implementation
and are interested in moving to the Neutron LBaaS service in the future.



I have been looking at the LBaaS requirements posted by Jorge at:

https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements



When we started looking at existing packages for our LBaaS service we had a
focus on requirements needed to create a managed service where the user
would just interact with the service APIs and not have to deal with
resiliency, HA, monitoring, and reporting functions themselves. Andrew
Hutchings became the HP Tech Lead for the open source Libra project. For
historical reasons around why we decided to contribute to Libra see:

http://openstack.10931.n7.nabble.com/Neutron-Relationship-between-Neutron-LBaaS-and-Libra-td29562.html



We would like to discuss adding the concept of managed services to the
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to
Libra/HAProxy. The latter could be a second approach for some of the software
load-balancers, e.g. HAProxy, since I am not sure that it makes sense to
deploy Libra within Devstack on a single VM.



Currently users would have to deal with HA, resiliency, monitoring and
managing their load-balancers themselves.  As a service provider we are
taking a more managed service approach, allowing our customers to consider
the LB a black box while the service manages the resiliency, HA,
monitoring, etc. for them.



We like where Neutron LBaaS is going with regards to L7 policies and SSL
termination support, which Libra does not currently support, and we want to
take advantage of the best in each project.

We have a draft on how we could make Neutron LBaaS take advantage of Libra
in the back-end.

The details are available at:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft



While this would allow us to fill a gap short term we would like to discuss
the longer term strategy since we believe that everybody would benefit from
having such managed services artifacts built directly into Neutron LBaaS.



There are blueprints on high-availability for the HAProxy software
load-balancer and we would like to suggest implementations that fit our
needs as services providers.



One example where the managed service approach for the HAProxy load
balancer is different from the current Neutron LBaaS roadmap is around HA
and resiliency. The 2 LB HA setup proposed (
https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
appropriate for service providers in that users would have to pay for the
extra load-balancer even though it is not being actively used.  An
alternative approach is to implement resiliency using a pool of standby,
preconfigured load balancers owned by e.g. an LBaaS tenant, and to assign
load-balancers from the pool to tenants' environments. We are currently
using this approach in the public cloud with Libra, and it takes
approximately 80 seconds for the service to decide that a load-balancer has
failed, swap the floating IP, update the DB, etc., and have a new LB
running.



Regards Susanne

---

Susanne M. Balle
Hewlett-Packard
HP Cloud Services

Please consider the environment before printing this email.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Russell Bryant
On 03/24/2014 11:42 AM, Stefano Maffulli wrote:
 On 03/22/2014 03:14 AM, Sean Dague wrote:
 Honestly, I largely disagree. This is applying some process where there
 clearly wasn't any before.
 
 We have a different perception evidently. I'm assuming you're
 exaggerating for the sake of clarity because assuming there was no
 process before means that the releases are basically randomly defined.
 
 Storyboard remains vaporware. I will be enormously happy when it is not.
 [...]
 
 That's depressing to read. I'm again assuming you're using exaggerated
 words to paint a clearer picture.
 
 At this point I'd like to get a fair assessment of storyboard's status
 and timeline: it's clear that Launchpad blueprints need to be abandoned
 lightspeed fast and I (and others) have been sold the idea that
 Storyboard is a *thing* that will happen *soon*. I also was told that
 spec reviews are an integral part of Storyboard use case scenarios, not
 just defects.

Another critical point of clarification ... we are *not* moving out of
blueprints at all.  We're still using them for tracking, just as before.
 We are *adding* the use of gerrit for reviewing the design.

Prior to this, we had no tool to assist with doing the actual review of
the spec.  We used blueprint whiteboards, the mailing list, and
discussions in various other places.  We're *adding* the use of gerrit
as a centralized and organized place to do design reviews.

We will continue to use blueprints for tracking what we plan to go into
each release, as well as its current status.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] test environment requirements

2014-03-24 Thread Sullivan, Jon Paul
 From: Dan Prince [mailto:dpri...@redhat.com]
 Sent: 24 March 2014 16:53
 Subject: Re: [openstack-dev] [TripleO] test environment requirements
 
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org
  Sent: Sunday, March 23, 2014 9:02:23 PM
  Subject: Re: [openstack-dev] [TripleO] test environment requirements
 
  Excerpts from Dan Prince's message of 2014-03-21 09:25:42 -0700:
  
   - Original Message -
From: Robert Collins robe...@robertcollins.net
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Sent: Thursday, March 13, 2014 5:51:30 AM
Subject: [openstack-dev] [TripleO] test environment requirements
   
So we already have pretty high requirements - its basically a 16G
workstation as minimum.
   
Specifically to test the full story:
 - a seed VM
 - an undercloud VM (bm deploy infra)
 - 1 overcloud control VM
 - 2 overcloud hypervisor VMs

   5 VMs with 2+G RAM each.
   
To test the overcloud alone against the seed we save 1 VM, to skip
the overcloud we save 3.
   
    However, as HA matures we're about to add 4 more VMs: we need an HA
control plane for both the under and overclouds:
 - a seed VM
 - 3 undercloud VMs (HA bm deploy infra)
 - 3 overcloud control VMs (HA)
 - 2 overcloud hypervisor VMs

   9 VMs with 2+G RAM each == 18GB
   
What should we do about this?
   
A few thoughts to kick start discussion:
 - use Ironic to test across multiple machines (involves
tunnelling brbm across machines, fairly easy)
 - shrink the VM sizes (causes thrashing)
 - tell folk to toughen up and get bigger machines (ahahahahaha,
no)
 - make the default configuration inline the hypervisors on the
overcloud with the control plane:
   - a seed VM
   - 3 undercloud VMs (HA bm deploy infra)
   - 3 overcloud all-in-one VMs (HA)
  
 7 VMs with 2+G RAM each == 14GB
   
   
    I think it's important that features like HA and live migration are
    exercised regularly by developers, so I'm quite keen to have a
fairly solid systematic answer that will let us catch things like
bad firewall rules on the control node preventing network
tunnelling etc...
  
   I'm all for supporting HA development and testing within devtest.
   I'm
   *against* forcing it on all users as a default.
  
   I can imaging wanting to cut corners and have configurations
   flexible on both ends (undercloud and overcloud). I may for example
   deploy a single all-in-one undercloud when I'm testing overcloud HA.
 Or vice versa.
  
   I think I'm one of the few (if not the only) developer who uses
   almost exclusively baremetal (besides seed VM) when testing/developing
 TripleO.
   Forcing users who want to do this to have 6-7 real machines is a bit
   much I think. Arguably wasteful even. By requiring more machines to
   run through devtest you actually make it harder for people to test
   it on real hardware which is usually harder to come by. Given
   deployment on real bare metal is sort of the point of TripleO I'd
   very much like to see more developers using it rather than less.
  
   So by all means let's support HA... but let's do it in a way that is
   configurable (i.e. not forcing people to be wasters)
  
   Dan
  
 
  I don't think anybody wants to force it on _users_. But a predominance
  of end users will require HA, and thus we need our developers to be
  able to develop with HA.
 
  This is for the _benefit_ of developers. I imagine we've all been in
  situations where our dev environment is almost nothing like CI, and
  then when CI runs you find that you have missed huge problems.. and
  now to test for those problems you either have to re-do your dev
  environment, or wait.. a lot.. for CI.
 
 All good points. Running an exact copy of the upstream CI environment
 seems to be getting more and more costly though. My goal is that I'd
 like developers to be able to choose what they want to test as much as
 they can. Streamline things where appropriate. Take the overcloud today:
 I actually like the idea of going the other way here and running an all-
 in-one version of it from time to time to save resources.
 
 If I know I need to dev test on an HA cloud then by all means I'll try
 to do that. But in many cases I may not need to go to such lengths.
 Furthermore, some testing is better than no testing at all because we've
 set the resources bar too high.
 
 Again, all for supporting the HA test and dev path in the devtest
 scripts. Ideally making it as configurable as we can...

This is where I was suggesting a small number (2-3) of differing standard 
configurations that developers could select from, given what their change is and 
what resources they have available.

Something like overcloud-only, non-ha and ha, or similar.

This seems very much related to 

Re: [openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread Chris Friesen

On 03/24/2014 10:59 AM, Dan Smith wrote:

Any ideas on what might be going on would be appreciated.


This looks like something that should be filed as a bug. I don't have
any ideas off hand, bit I will note that the reconnection logic works
fine for us in the upstream upgrade tests. That scenario includes
starting up a full stack, then taking down everything except compute and
rebuilding a new one on master. After the several minutes it takes to
upgrade the controller services, the compute host reconnects and is
ready to go before tempest runs.

I suspect your case wedged itself somehow other than that, which
definitely looks nasty and is worth tracking in a bug.


We've got an HA controller setup using pacemaker and were stress-testing 
it by doing multiple controlled switchovers while doing other activity. 
 Generally this works okay, but last night we ran into this problem.


I'll file a bug, but in the meantime I've found something that looks a 
bit suspicious.  The "Unexpected exception occurred 61 time(s)... 
retrying." message comes from forever_retry_uncaught_exceptions() in 
excutils.py.  It looks like we're raising


RecoverableConnectionError: connection already closed

down in /usr/lib64/python2.7/site-packages/amqp/abstract_channel.py, but 
nothing handles it.


It looks like the most likely place that should be handling it is 
nova.openstack.common.rpc.impl_kombu.Connection.ensure().



In the current oslo.messaging code the ensure() routine explicitly 
handles connection errors (which RecoverableConnectionError is) and 
socket timeouts--the ensure() routine in Havana doesn't do this.
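
To sketch the kind of handling I mean (illustrative only, not the actual
oslo.messaging fix -- 'connection' is assumed to expose a reconnect() like
the kombu Connection wrapper in impl_kombu does):

import socket
import time

from amqp.exceptions import RecoverableConnectionError


def ensure(connection, method, retry_backoff=1.0):
    # Treat recoverable AMQP connection errors like socket timeouts:
    # re-establish the connection and retry instead of letting the
    # exception escape to forever_retry_uncaught_exceptions().
    while True:
        try:
            return method()
        except (socket.timeout, RecoverableConnectionError):
            time.sleep(retry_backoff)
            connection.reconnect()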



Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Sean Dague
On 03/24/2014 01:07 PM, Russell Bryant wrote:
 On 03/24/2014 12:34 PM, James E. Blair wrote:
 Hi,

 So recently we started this experiment with the compute and qa programs
 to try using Gerrit to review blueprints.  Launchpad is deficient in
 this area, and while we hope Storyboard will deal with it much better,
 it's not ready yet.
 
 This seems to be a point of confusion.  My view is that Storyboard isn't
 intended to implement what gerrit provides.  Given that, it seems like
 we'd still be using this whether the tracker is launchpad or storyboard.
 
 As a development organization, OpenStack scales by adopting common tools
 and processes, and true to form, we now have a lot of other projects
 that would like to join the experiment.  At some point that stops
 being an experiment and becomes practice.

 However, at this very early point, we haven't settled on answers to some
 really basic questions about how this process should work.  Before we
 extend it to more projects, I think we need to establish a modicum of
 commonality that helps us integrate it with our tooling at scale, and
 just as importantly, helps new contributors and people who are working
 on multiple projects have a better experience.

 I'd like to hold off on creating any new specs repos until we have at
 least the following questions answered:
 
 Sounds good to me.
 
 a) Should the specs repos be sphinx documents?
 
 Probably.  I see that the qa-specs repo has this up for review.  I'd
 like to look at applying this to nova-specs and see how it affects the
 workflow we've had in mind so far.
 
 b) Should they follow the Project Testing Interface[1]?
 
 As it's relevant, sure.
 
 c) Some basic agreement on what information is encoded?
 
 We've been working on a template in nova-specs here:
 
 http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

We are working on a template in qa-specs, though intentionally kind of
loose. I think some of the items like problem, solution, and
implementors map well. And we'll probably copy those. A lot of the rest
of it doesn't so much.

Especially if we think about the differences between a spec to implement
a tempest change, one to implement a grenade change, and one to do
something in infrastructure like multi node testing. As qa-specs is
intentionally not tempest-specs, it's for the qa program.

 
eg: don't encode implementation status (it should be in launchpad)
 
 Agreed
 
do encode branches (as directories? as ...?)
 
 IMO, yes.  The reason is that I think approval of a spec should be
 limited to a given release.  If it slips, it should be re-reviewed to
 make sure it still makes sense given whatever developments have
 occurred.  That's why we have a juno/ directory in nova-specs.

So this is an area that I think there are differences for between
programs. Because I don't actually think automatic reset makes sense for
the QA program. Because unlike a server program that is very strongly
release bounded, QA is only loosely release bounded. We don't do a
feature freeze in the same way as the server projects.

 d) Workflow process -- what are the steps to create a new spec and make
sure it also exists and is tracked correctly in launchpad?
 
 For nova-specs, the first piece of info in the template is a blueprint URL.
 
 On the launchpad side, nothing will be allowed to be targeted to a
 milestone without an approved spec attached to it.
 
 [1] https://wiki.openstack.org/wiki/ProjectTestingInterface

https://github.com/openstack/qa-specs - is where we started with the
process, it's pretty explicit. Our template is still evolving, though
we're going to go intentionally pretty loose on it at this point to
figure out what's working and what isn't in the template.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday March 25th at 19:00 UTC

2014-03-24 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday March 25th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Michael Krotscheck
On Sat, Mar 22, 2014 at 3:14 AM, Sean Dague s...@dague.net wrote:

 Storyboard remains vaporware. I will be enormously happy when it is not.


You could, oh, I dunno, maybe contribute to the codebase. I am _certain_
that it would make more people happy than calling it vaporware on a public
list.

Michael
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Sean Dague
On 03/24/2014 12:20 PM, Russell Bryant wrote:
 On 03/24/2014 11:42 AM, Stefano Maffulli wrote:
 On 03/22/2014 03:14 AM, Sean Dague wrote:
 Honestly, I largely disagree. This is applying some process where there
 clearly wasn't any before.

 We have a different perception evidently. I'm assuming you're
 exaggerating for the sake of clarity because assuming there was no
 process before means that the releases are basically randomly defined.

 Storyboard remains vaporware. I will be enormously happy when it is not.
 [...]

 That's depressing to read. I'm again assuming you're using exaggerated
 words to paint a clearer picture.

 At this point I'd like to get a fair assessment of storyboard's status
 and timeline: it's clear that Launchpad blueprints need to be abandoned
 lightspeed fast and I (and others) have been sold the idea that
 Storyboard is a *thing* that will happen *soon*. I also was told that
 spec reviews are an integral part of Storyboard use case scenarios, not
 just defects.
 
 Another critical point of clarification ... we are *not* moving out of
 blueprints at all.  We're still using them for tracking, just as before.
  We are *adding* the use of gerrit for reviewing the design.
 
 Prior to this, we had no tool to assist with doing the actual review of
 the spec.  We used blueprint whiteboards, the mailing list, and
 discussions in various other places.  We're *adding* the use of gerrit
 as a centralized and organized place to do design reviews.

... and random etherpads (which may or may not have gotten vandalized
over time), and random wiki pages, and random google docs (which may or
may not have been visible to everyone), and private email discussions.

 We will continue to use blueprints for tracking what we plan to go into
 each release, as well as its current status.

Agreed. This isn't a tracking tool, this is handling a piece of the
process which has been a train wreck for so long, and that we had no
tool support.

Blueprints today support the idea of a detailed specification at a URL.
This is taking all those divergent URLs and making them all point to one
place, a gerrit repo, where the entire history of the discussion will be
publicly viewable and auditable.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Sean Dague
On 03/24/2014 01:07 PM, Russell Bryant wrote:
 On 03/24/2014 12:34 PM, James E. Blair wrote:
 Hi,

 So recently we started this experiment with the compute and qa programs
 to try using Gerrit to review blueprints.  Launchpad is deficient in
 this area, and while we hope Storyboard will deal with it much better,
 but it's not ready yet.
 
 This seems to be a point of confusion.  My view is that Storyboard isn't
 intended to implement what gerrit provides.  Given that, it seems like
 we'd still be using this whether the tracker is launchpad or storyboard.

Agreed. It seems like Storyboard already has its hands full to get us
off launchpad for status tracking, which I look forward to. The moment
someone says we're ready to take on projects, I'm there.

Gerrit is a very good tool for reviewing things. We've already moved to
it for all the TC workflow for non-code things, which I think is a nice
improvement. And I don't see this as a stop gap; I see this as improved
incremental process in an area where we had none. I don't really see this
part of the process changing once Storyboard is available, except we
could probably get Storyboard to pick up the specs links much more nicely
than we have the ability to do with launchpad.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-24 Thread Carl Baldwin
+1 to all Ben said.

There are reasons to split things up into a logical progression of
changes but each change must stand alone and must pass tests.

Carl

On Mon, Mar 24, 2014 at 10:03 AM, Ben Nemec openst...@nemebean.com wrote:
 I should point out that Jenkins can't apply the next patch in sequence
 just to get tests passing.  What happens if the next patch never merges
 or has to be reverted?  Each commit needs to be able to pass tests using
 only the previous commits in the sequence.  If it relies on a subsequent
 commit then either the commits need to be reordered or, if there's a
 circular dependency, maybe those commits aren't logically separate and
 should just be squashed together.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-24 Thread Ben Nemec
On 2014-03-23 22:18, Robert Collins wrote:
 On 24 March 2014 14:06, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Robert Collins's message of 2014-03-13 02:51:30 -0700:
 So we already have pretty high requirements - its basically a 16G
 workstation as minimum.

 Specifically to test the full story:
  - a seed VM
  - an undercloud VM (bm deploy infra)
  - 1 overcloud control VM
  - 2 overcloud hypervisor VMs
 
5 VMs with 2+G RAM each.

 To test the overcloud alone against the seed we save 1 VM, to skip the
 overcloud we save 3.

 However, as HA matures we're about to add 4 more VMs: we need an HA
 control plane for both the under and overclouds:
  - a seed VM
  - 3 undercloud VMs (HA bm deploy infra)
  - 3 overcloud control VMs (HA)
  - 2 overcloud hypervisor VMs
 
9 VMs with 2+G RAM each == 18GB


 If we switch end-user-vm tests to cirros, and use a flavor that is really
 really tiny, like 128M RAM, then I think we can downsize the development
 environment hypervisors from 2G to 1G. That at least drops us to 16G. If
 we can also turn off the seed as soon as the undercloud boots, that's
 another 2G saved. Finally if we can turn off 1 of the undercloud VMs
 and run degraded, that is another 2G saved.
 
 We can't turn off the seed unless we change the network topology; we
 need to make sure, if we do that, that we don't invalidate the test
 structure.
 
 Small potatoes I know, but it wouldn't be complicated to do any of these
 and it would also help test real scenarios we want to test (seed going
 away, undercloud node going away).

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I like this.
 
 I also like much of the spread of ideas that came up - lets capture
 them to an etherpad and link that from a blueprint for future selves
 finding it.

I created an etherpad here:
https://etherpad.openstack.org/p/devtest-env-reqs

And linked it from the blueprint here:
https://blueprints.launchpad.net/tripleo/+spec/test-environment-requirements

I only added some details about devtest on openstack for now since I
figured everyone else could add their thoughts in their own words.

 
 In particular I'd like to make being able to use OpenStack a J series
 goal, though it doesn't help with local dev overheads.
 
 -Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Carl Baldwin
Don't discard the first number so quickly.

For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid using too much memory with a daemon in
every namespace.  That means we'll pay the startup cost repeatedly but
in a way that amortizes it down.

Even if it is really a one time cost, then if you collect enough
samples then the outlier won't have much effect on the mean anyway.
I'd say keep it in there.
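
As a quick illustration with made-up numbers, on roughly the scale reported
earlier in this thread:

samples = [172.0] + [8.4] * 99       # one slow first run out of 100

mean = sum(samples) / len(samples)   # ~10.0ms: the outlier barely moves it
trimmed = sum(samples[1:]) / 99.0    # 8.4ms without the first run

variance = sum((s - mean) ** 2 for s in samples) / len(samples)
deviation = variance ** 0.5          # ~16.3ms: dominated by that one sample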

Carl

On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo majop...@redhat.com wrote:


 Is it the first call starting the daemon / loading config files, etc.?

 Maybe that first sample should be discarded from the mean for all processes
 (it's an outlier value).




 On 03/21/2014 05:32 PM, Yuriy Taraday wrote:

  On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez thie...@openstack.org wrote:

 Yuriy Taraday wrote:
   Benchmark included showed on my machine these numbers (average
 over 100
   iterations):
  
   Running 'ip a':
 ip a :   4.565ms
sudo ip a :  13.744ms
  sudo rootwrap conf ip a : 102.571ms
   daemon.run('ip a') :   8.973ms
   Running 'ip netns exec bench_ns ip a':
 sudo ip netns exec bench_ns ip a : 162.098ms
   sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
daemon.run('ip netns exec bench_ns ip a') : 129.876ms
  
   So it looks like running daemon is actually faster than running
 sudo.

 That's pretty good! However I fear that the extremely simplistic filter
 rule file you fed to the benchmark is affecting the numbers. Could you post
 results from a realistic setup (like the same command, but with all the
 filter files normally found on a devstack host)?


 I don't have a devstack host at hand but I gathered all filters from
 Nova, Cinder and Neutron and got this:
  method  :min   avg   max   dev
 ip a :   3.741ms   4.443ms   7.356ms 500.660us
sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
 sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
   daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

 Then I switched back to one file and got:
  method  :min   avg   max   dev
 ip a :   4.176ms   4.976ms  22.910ms   1.821ms
sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
 sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
   daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

 There is a difference, but it looks like it's because of config file
 parsing, not the filters themselves.
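
 (For reference, a rough sketch of how timings like these can be gathered --
 this is not Yuriy's actual benchmark script:)

 import subprocess
 import time


 def bench(cmd, runs=100):
     # Time 'cmd' (a list of arguments) 'runs' times; values are in ms.
     samples = []
     with open('/dev/null', 'w') as devnull:
         for _ in range(runs):
             start = time.time()
             subprocess.call(cmd, stdout=devnull, stderr=devnull)
             samples.append((time.time() - start) * 1000.0)
     return min(samples), sum(samples) / len(samples), max(samples)


 print(bench(['ip', 'a']))
 print(bench(['sudo', 'ip', 'a']))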

 --

 Kind regards, Yuriy.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute not re-establishing connectivity after controller switchover

2014-03-24 Thread Chris Friesen

On 03/24/2014 11:31 AM, Chris Friesen wrote:


It looks like we're raising

RecoverableConnectionError: connection already closed

down in /usr/lib64/python2.7/site-packages/amqp/abstract_channel.py, but
nothing handles it.

It looks like the most likely place that should be handling it is
nova.openstack.common.rpc.impl_kombu.Connection.ensure().


In the current oslo.messaging code the ensure() routine explicitly
handles connection errors (which RecoverableConnectionError is) and
socket timeouts--the ensure() routine in Havana doesn't do this.


I misread the code: ensure() in Havana does in fact monitor socket 
timeouts, but it doesn't handle connection errors.


It looks like support for handling connection errors was added to 
oslo.messaging just recently in git commit 0400cbf.  The git commit 
comment talks about clustered rabbit nodes and mirrored queues which 
doesn't apply to our scenario, but I suspect it would probably fix the 
problem that we're seeing as well.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Sean Dague
On 03/24/2014 01:40 PM, Michael Krotscheck wrote:
 On Sat, Mar 22, 2014 at 3:14 AM, Sean Dague s...@dague.net wrote:
 
 Storyboard remains vaporware. I will be enormously happy when it is not.
 
 
 You could, oh, I dunno, maybe contribute to the codebase. I am _certain_
 that it would make more people happy than calling it vaporware on a
 public list.
 
 Michael

If I had the time to contribute to more projects, I would. Sadly, I like
to sleep sometimes. :)

But it's also really important to be realistic about status. Because as
a community we often say "this will all be solved by X". The grass is
always greener on software no one is yet using.

As soon as you say Storyboard is ready to take projects for a release
cycle, I'm there, sign me up. Nothing would make me happier to get a
response from you that we should use it for Tempest starting today.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-24 Thread W Chan
I have the following murano-ci failure for my last patch set.
https://murano-ci.mirantis.com/job/mistral_master_on_commit/194/  Since I
modified the API launch script in mistral, is that the cause of this
failure here?  Do I have to make changes to the tempest test?  Please
advise.  Thanks.


On Fri, Mar 21, 2014 at 3:20 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Alright, thanks Winson!

 Team, please review.

 Renat Akhmerov
 @ Mirantis Inc.



 On 21 Mar 2014, at 06:43, W Chan m4d.co...@gmail.com wrote:

 I submitted a rough draft for review @
 https://review.openstack.org/#/c/81941/.  Instead of using the pecan
 hook, I added a class property for the transport in the abstract engine
 class.  On the pecan app setup, I passed the shared transport to the engine
 on load.  Please provide feedback.  Thanks.


 On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello 
 ryan.petre...@dreamhost.com wrote:

 Changing the configuration object at runtime is not thread-safe.  If you
 want to share objects with controllers, I'd suggest checking out Pecan's
 hook functionality.


 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

 e.g.,

 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )
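
 A controller consuming the hook's context might then look like this
 (sketch -- the controller name and field are illustrative):

 import pecan


 class RootController(object):

     @pecan.expose('json')
     def index(self):
         # 'foo' was stashed by SpecialContextHook.before() above.
         return {'foo': repr(pecan.request.context['foo'])}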

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com

 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Take a look at method get_pecan_config() in mistral/api/app.py. It's
 where you can pass any parameters into pecan app (see a dictionary
 'cfg_dict' initialization). They can be then accessed via pecan.conf as
 described here:
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of
 api.py. Maybe you don't pass it but just instantiate it where you need it?
 Or I may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include new
 oslo messaging options? I forgot to mention it on review on time.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the
 variable to the engine and executor.  It's passing the transport into the
 API layer.  The API layer is a pecan app and I currently don't see a way
 where the transport variable can be passed to it directly.  I'm looking at
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50and
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
  Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 * I can write a method in base test to start local executor.  I
 will do that as a separate bp.
  Ok.
 
 * After the engine is made standalone, the API will communicate
 to the engine and the engine to the executor via the oslo.messaging
 transport.  This means that for the local option, we need to start all
 three components (API, engine, and executor) on the same process.  If the
 long term goal as you stated above is to use separate launchers for these
 components, this means that the API launcher needs to duplicate all the
 logic to launch the engine and the executor. Hence, my proposal here is to
 move the logic to launch the components into a common module and either
 have a single generic launch script that launch specific components based
 on the CLI options or have separate launch scripts that reference the
 appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which
 we could use to run all the components (any subset of them). So for
 those components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.
 
 * The RPC client/server in oslo.messaging do not determine the
 transport.  The transport is determined via oslo.config and then given
 explicitly to the RPC client/server.
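
 (For reference, a minimal sketch of that explicit-transport pattern with
 the oslo.messaging API -- the topic and server names are illustrative, not
 Mistral's actual ones:)

 from oslo.config import cfg
 from oslo import messaging


 class EngineEndpoint(object):
     # Illustrative endpoint; the real engine exposes its own methods.
     def run_task(self, ctxt, task_id):
         return task_id


 transport = messaging.get_transport(cfg.CONF)  # determined via oslo.config
 target = messaging.Target(topic='mistral.engine', server='engine-1')
 server = messaging.get_rpc_server(transport, target, [EngineEndpoint()])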
 

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-24 Thread Joe Gordon
On Fri, Mar 21, 2014 at 9:38 AM, Malini Kamalambal 
malini.kamalam...@rackspace.com wrote:



 On 3/21/14 12:01 PM, David Kranz dkr...@redhat.com wrote:

 On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:
 
  -Original Message-
  From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
  Sent: Thursday, March 20, 2014 12:13 PM
 
  'project specific functional testing' in the Marconi context is treating
  Marconi as a complete system, making Marconi API calls and verifying the
  response - just like an end user would, but without keystone. If one of
  these tests fails, it is because there is a bug in the Marconi code, and
  not because its interaction with Keystone caused it to fail.
 
  That being said there are certain cases where having a project specific
  functional test makes sense. For example swift has a functional test job
  that starts swift in devstack. But, those things are normally handled on a
  per case basis. In general if the project is meant to be part of the larger
  OpenStack ecosystem then Tempest is the place to put functional testing.
  That way you know it works with all of the other components. The thing is
  in openstack what seems like a project isolated functional test almost
  always involves another project in real use cases. (for example keystone
  auth with api requests)
 
  
 
  One of the concerns we heard in the review was 'having the functional
  tests elsewhere (i.e. within the project itself) does not count and they
  have to be in Tempest'.
  This has made us as a team wonder if we should migrate all our
  functional
  tests to Tempest.
  But from Matt's response, I think it is reasonable to continue in our
  current path  have the functional tests in Marconi coexist  along with
  the tests in Tempest.
 
  I think that what is being asked, really is that the functional tests
 could be a single set of tests that would become a part of the tempest
 repository and that these tests would have an ENV variable as part of
 the configuration that would allow either no Keystone or Keystone or
 some such, if that is the only configuration issue that separates
 running the tests isolated vs. integrated.  The functional tests need to
 be as much as possible a single set of tests to reduce duplication and
 remove the likelihood of two sets getting out of sync with each
 other/development.  If they only run in the integrated environment,
 that's ok, but if you want to run them isolated to make debugging
 easier, then it should be a configuration option and a separate test job.
 
  So, if my assumptions are correct, QA only requires functional tests
 for integrated runs, but if the project QAs/Devs want to run isolated
 for dev and devtest purposes, more power to them.  Just keep it a single
 set of functional tests and put them in the Tempest repository so that
 if a failure happens, anyone can find the test and do the debug work
 without digging into a separate project repository.
 
  Hopefully, the tests as designed could easily take a new configuration
 directive and a short bit of work with OS QA will get the integrated FTs
 working as well as the isolated ones.
 
  --Rocky
 This issue has been much debated. There are some active members of our
 community who believe that all the functional tests should live outside
 of tempest in the projects, albeit with the same idea that such tests
 could be run either as part of today's real tempest runs or mocked in
 various ways to allow component isolation or better performance. Maru
 Newby posted a patch with an example of one way to do this but I think
 it expired and I don't have a pointer.
 
 IMO there are valid arguments on both sides, but I hope every one could
 agree that functional tests should not be arbitrarily split between
 projects and tempest as they are now. The Tempest README states a desire
 for complete coverage of the OpenStack API but Tempest is not close to
 that. We have been discussing and then ignoring this issue for some time
 but I think the recent action to say that Tempest will be used to
 determine if something can use the OpenStack trademark will force more
 completeness on tempest (more tests, that is). I think we need to
 resolve this issue but it won't be easy and modifying existing api tests
 to be more flexible will be a lot of work. But at least new projects
 could get on the right path sooner.
 
   -David
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 We are talking about different levels of testing,

 1. Unit tests - which everybody agrees should be in the individual project
 itself
 2. System Tests - 'System' referring to (and limited to) all the components
 that make up the project. These are also the functional tests for the
 project.
 3. Integration Tests - This is to verify that the OS components interact
 well and don't 

[openstack-dev] [Ironic] Nodeless Vendor Passthru API

2014-03-24 Thread Russell Haering
All,

Ironic allows drivers to expose a vendor passthru API on a Node. This
basically serves two purposes:

1. Allows drivers to expose functionality that hasn't yet been standardized
in the Ironic API. For example, the Seamicro driver exposes
attach_volume, set_boot_device and set_node_vlan_id passthru methods.
2. Vendor passthru is also used by the PXE deploy driver as an internal RPC
callback mechanism. The deploy ramdisk makes calls to the passthru API to
signal for a deployment to continue once a server has booted.

For the purposes of this discussion I want to focus on case #2. Case #1 is
certainly worth a separate discussion - we started this in
#openstack-ironic on Friday.

In the new agent we are working on, we want to be able to look up what node
the agent is running on, and eventually to be able to register a new node
automatically. We will perform an inventory of the server and submit that
to Ironic, where it can be used to map the agent to an existing Node or to
create a new one. Once the agent knows what node it is on, it will check in
with a passthru API much like that used by the PXE driver - in some
configurations this might trigger an immediate continue of an ongoing
deploy, in others it might simply register the agent as available for new
deploys in the future.

The point here is that we need a way for the agent driver to expose a
top-level lookup API, which doesn't require a Node UUID in the URL.

I've got a review (https://review.openstack.org/#/c/81919/) up which
explores one possible implementation of this. It basically routes POSTs to
/drivers/driver_name/vendor_passthru/method_name to a new method on the
vendor interface.

Importantly, I don't believe that this is a useful way for vendors to
implement new consumer-facing functionality. If we decide to take this
approach, we should reject drivers that try to do so. It is intended *only* for
internal communications with deploy agents.

Another possibility is that we could create a new API service intended
explicitly to serve use case #2 described above, which doesn't include most
of the existing public paths. In our environment I expect us to allow
agents whitelisted access to only two specific paths (lookup and checkin),
but this might be a better way to achieve that.

Thoughts?

Thanks,
Russell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Joshua Harlow
Seeing that the following repos already exist, maybe there is some need for 
cleanup?

- https://github.com/stackforge/murano-agent
- https://github.com/stackforge/murano-api
- https://github.com/stackforge/murano-common
- https://github.com/stackforge/murano-conductor
- https://github.com/stackforge/murano-dashboard
- https://github.com/stackforge/murano-deployment
- https://github.com/stackforge/murano-docs
- https://github.com/stackforge/murano-metadataclient
- https://github.com/stackforge/murano-repository
- https://github.com/stackforge/murano-tests
…(did I miss others?)

Can we maybe not have more git repositories and instead figure out a way to 
have 1 repository (perhaps with submodules?) ;-)

It appears like murano is already exploding all over stackforge which makes it 
hard to understand why yet another repo is needed. I understand why from a code 
point of view, but it doesn't seem right from a code organization point of view 
to continue adding repos. It seems like murano 
(https://github.com/stackforge/murano) should just have 1 repo, with sub-repos 
(tests, docs, api, agent…) for its own organizational usage instead of X repos 
that expose others to murano's internal organizational details.
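
For example, something along these lines (illustrative only):

git clone https://github.com/stackforge/murano
cd murano
git submodule add https://github.com/stackforge/murano-api api
git submodule add https://github.com/stackforge/murano-agent agent
git commit -m "Track the api and agent repositories as submodules"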

-Josh

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 24, 2014 at 3:27 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Separating our Murano PL core in own package

I like dsl most because it is
a. Short. This is especially good when you have that awesome 79-chars 
limitation
b. It leaves a lot of room for changes. MuranoPL can change name. DSL - not :)


On Mon, Mar 24, 2014 at 1:43 PM, Alexander Tivelkov 
ativel...@mirantis.com wrote:
Hi Serg,

Are you proposing to have a standalone git repository / stackforge project for 
that? Or just a separate package inside our primary murano repo?

--
Regards,
Alexander Tivelkov


On Mon, Mar 24, 2014 at 12:00 PM, Serg Melikyan 
smelik...@mirantis.com wrote:
Programming Language, AFAIK


On Mon, Mar 24, 2014 at 11:46 AM, Oleg Gelbukh 
ogelb...@mirantis.com wrote:
What does PL stand for, anyway?

--
Best regards,
Oleg Gelbukh


On Mon, Mar 24, 2014 at 11:39 AM, Serg Melikyan 
smelik...@mirantis.com wrote:
because 'dsl'/'language' terms are too broad.
Too broad in general, but we are choosing a name for a sub-package, and in 
Murano terms 'language' means MuranoPL.

+1 for language


On Mon, Mar 24, 2014 at 11:26 AM, Timur Sufiev 
tsuf...@mirantis.com wrote:
+1 for muranoapi.engine.murano_pl, because 'dsl'/'language' terms are too broad.

On Mon, Mar 24, 2014 at 12:48 AM, Timur Nurlygayanov
tnurlygaya...@mirantis.com wrote:
 Hi Serg,

 This idea sounds good, I suggest to use name 'murano.engine.murano_pl' (not
 just common name like 'language' or 'dsl', but name, which will be based on
 'MuranoPL')

 Do we plan to support the ability to define different languages for Murano
 Engine?


 Thank you!


 On Sun, Mar 23, 2014 at 1:05 PM, Serg Melikyan 
 smelik...@mirantis.com
 wrote:

 There is a idea to separate core of Murano PL implementation from engine
 specific code, like it was done in PoC. When this two things are separated
 in different packages, we will be able to track and maintain our language
 core as clean as possible from engine specific code. This will give to us an
 ability to easily separate our language implementation to a library.

 Questions is under what name we should place core of Murano PL?

 1) muranoapi.engine.language;
 2) muranoapi.engine.dsl;
 3) suggestions?

 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Serg Melikyan, 

Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Monty Taylor

On 03/24/2014 10:07 AM, Russell Bryant wrote:

On 03/24/2014 12:34 PM, James E. Blair wrote:

Hi,

So recently we started this experiment with the compute and qa programs
to try using Gerrit to review blueprints.  Launchpad is deficient in
this area, and while we hope Storyboard will deal with it much better,
it's not ready yet.


This seems to be a point of confusion.  My view is that Storyboard isn't
intended to implement what gerrit provides.  Given that, it seems like
we'd still be using this whether the tracker is launchpad or storyboard.


As a development organization, OpenStack scales by adopting common tools
and processes, and true to form, we now have a lot of other projects
that would like to join the experiment.  At some point that stops
being an experiment and becomes practice.

However, at this very early point, we haven't settled on answers to some
really basic questions about how this process should work.  Before we
extend it to more projects, I think we need to establish a modicum of
commonality that helps us integrate it with our tooling at scale, and
just as importantly, helps new contributors and people who are working
on multiple projects have a better experience.

I'd like to hold off on creating any new specs repos until we have at
least the following questions answered:


Sounds good to me.


a) Should the specs repos be sphinx documents?


Probably.  I see that the qa-specs repo has this up for review.  I'd
like to look at applying this to nova-specs and see how it affects the
workflow we've had in mind so far.


b) Should they follow the Project Testing Interface[1]?


As it's relevant, sure.


I think the main one here, as in mtreinish's patch, is to make sure 
"tox -evenv python setup.py build_sphinx" works.


Additionally, if we want to do code-style analysis, I'd suggest putting 
it into tox -epep8. I know that sounds LUDICROUS right now, but I really 
want to rename that to tox -elint or something - because it's long since 
stopped being just pep8 anywhere.


Or?


c) Some basic agreement on what information is encoded?


We've been working on a template in nova-specs here:

http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst


eg: don't encode implementation status (it should be in launchpad)


Agreed


do encode branches (as directories? as ...?)


IMO, yes.  The reason is that I think approval of a spec should be
limited to a given release.  If it slips, it should be re-reviewed to
make sure it still makes sense given whatever developments have
occurred.  That's why we have a juno/ directory in nova-specs.


My biggest concern about the directories is where it relates to 
workflow. Essentially, I think we should not _move_ them - because there 
will be blueprints in either launchpad or storyboard with a link to the 
URL of the thing. If we later move the spec because it got re-targeted, 
we'll have a bunch of broken links.


Instead, if we copy the spec to the new location (say, kestral) when 
it's time - OR - we move the spec but leave behind a placeholder doc in 
the old location that says "retargeted to kestral" - then I think we're 
in a good place.


(this is why I think the implemented and approved dirs are bad)

If we can do that, then I can totally buy the argument about having 
$release dirs.



d) Workflow process -- what are the steps to create a new spec and make
sure it also exists and is tracked correctly in launchpad?


For nova-specs, the first piece of info in the template is a blueprint URL.

On the launchpad side, nothing will be allowed to be targeted to a
milestone without an approved spec attached to it.


[1] https://wiki.openstack.org/wiki/ProjectTestingInterface






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Doug Hellmann
On Mon, Mar 24, 2014 at 5:33 AM, Thierry Carrez thie...@openstack.orgwrote:

 Joe Gordon wrote:
  There are still two outstanding trove dependencies that are currently
  used in trove but not in global requirements. It would be nice to get
  this sorted out before the freeze so we can
  turn https://review.openstack.org/#/c/80690/ on.
 
  mockito https://review.openstack.org/#/c/80850/

 This one was abandoned. Trove team is looking to move away from mockito
 to mock. Timeline is in the next 4-5 days.


I was one of several people who objected to that. I would be happy to
change my -1 to a +2 if the Trove team needs more time to change. Perhaps
as a J1 goal?

Doug




  wsgi_intercept https://review.openstack.org/#/c/80851/

 This one was merged.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Yuriy Taraday
On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Don't discard the first number so quickly.

 For example, say we use a timeout mechanism for the daemon running
 inside namespaces to avoid using too much memory with a daemon in
 every namespace.  That means we'll pay the startup cost repeatedly but
 in a way that amortizes it down.

 Even if it is really a one time cost, then if you collect enough
 samples then the outlier won't have much effect on the mean anyway.


It actually affects all numbers but mean (e.g. deviation is gross).


 I'd say keep it in there.

 Carl

 On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo majop...@redhat.com
 wrote:
 
 
  Is it the first call starting the daemon / loading config files, etc.?
 
  Maybe that first sample should be discarded from the mean for all
 processes
  (it's an outlier value).


I thought about cutting max from counting deviation and/or showing
second-max value. But I don't think it matters much and there aren't many
people here who're analyzing deviation. It's pretty clear what happens with
the longest run in this case and I think we can let it be as is. It's the
mean value that matters most here.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Sean Dague
On 03/24/2014 02:05 PM, James E. Blair wrote:
 Russell Bryant rbry...@redhat.com writes:
 
 On 03/24/2014 12:34 PM, James E. Blair wrote:
 Hi,

 So recently we started this experiment with the compute and qa programs
 to try using Gerrit to review blueprints.  Launchpad is deficient in
 this area, and while we hope Storyboard will deal with it much better,
 it's not ready yet.

 This seems to be a point of confusion.  My view is that Storyboard isn't
 intended to implement what gerrit provides.  Given that, it seems like
 we'd still be using this whether the tracker is launchpad or storyboard.
 
 I don't think it's intended to implement what Gerrit provides, however,
 I'm not sure what Gerrit provides is _exactly_ what's needed here.  I do
 agree that Gerrit is a much better tool than launchpad for collaborating
 on some kinds of blueprints.
 
 However, one of the reasons we're creating StoryBoard is so that we have
 a tool that is compatible with our workflow and meets our requirements.
 It's not just about tracking work items, it should be a tool for
 creating, evaluating, and progressing changes to projects (stories),
 across all stages.
 
 I don't envision the end-state for storyboard to be that we end up
 copying data back and forth between it and Gerrit.  Since we're
 designing a system from scratch, we might as well design it to do what
 we want.
 
 One of our early decisions was to say that UX and code stories have
 equally important use cases in StoryBoard.  Collaboration around UX
 style blueprints (especially those with graphical mock-ups) sets a
 fairly high bar for the kind of interaction we will support.
 
 Gerrit is a great tool for reviewing code and other text media.  But
 somehow it is even worse than launchpad for collaborating when visual
 media are involved.  Quite a number of blueprints could benefit from
 better support for that (not just UI mockups but network diagrams, etc).
 We can learn a lot from the experiment of using Gerrit for blueprint
 review, and I think it's going to help make StoryBoard a lot better for
 all of our use cases.

I think that's fine if, long term, this whole thing is optimized. I just
very much worry that StoryBoard keeps undergoing progressive scope creep
before we've managed to ship the base case. That's a dangerous situation to
be in, as it means it's evolving without a feedback loop.

I'd much rather see Storyboard get us off launchpad ASAP across all the
projects, and then work on solving the things launchpad doesn't do.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-24 Thread Shawn Hartsock
I fully support https://review.openstack.org/#/c/70175/ but I fail to
see why the spawn-refactor should depend on that. There are only 13
lines touched that are related. These two tasks could be completed
more or less in parallel.

On Mon, Mar 24, 2014 at 4:57 AM, Gary Kotton gkot...@vmware.com wrote:
 Regarding the spawn there are a number of patches up for review at the
 moment - they are all mutually exclusive and hopefully will make the
 process a lot smoother.
 https://review.openstack.org/#/q/topic:bp/vmware-spawn-refactor,n,z
 In addition to this we have a patch up for review with the OSLO
 integration - https://review.openstack.org/#/c/70175/ (ideally it would be
 best that this gets in first)
 Thanks
 Gary

 On 3/22/14 8:03 PM, Chris Behrens cbehr...@codestud.com wrote:

I'd like to get spawn broken up sooner rather than later, personally. It
has additional benefits of being able to do better orchestration of
builds from conductor, etc.

On Mar 14, 2014, at 3:58 PM, Dan Smith d...@danplanet.com wrote:

 Just to answer this point, despite the review latency, please don't be
 tempted to think one big change will get in quicker than a series of
 little, easy-to-review changes. All changes are not equal. A large
 change often scares me away towards easier-to-review patches.

 Seems like, for Juno-1, it would be worth cancelling all non-urgent
 bug fixes, and doing the refactoring we need.

 I think the aim here should be better (and easier to understand) unit
 test coverage. That's a great way to drive good code structure.

 Review latency will be directly affected by how good the refactoring
 changes are staged. If they are small, on-topic and easy to validate,
 they will go quickly. They should be linearized unless there are some
 places where multiple sequences of changes make sense (i.e. refactoring
 a single file that results in no changes required to others).

 As John says, if it's just a big change everything patch, or a ton of
 smaller ones that don't fit a plan or process, then it will be slow and
 painful (for everyone).

 +1 sounds like a good first step is to move to oslo.vmware

 I'm not sure whether I think that refactoring spawn would be better done
 first or second. My gut tells me that doing spawn first would mean that
 we could more easily validate the oslo refactors because (a) spawn is
 impossible to follow right now and (b) refactoring it to smaller methods
 should be fairly easy. The tests for spawn are equally hard to follow
 and refactoring it first would yield a bunch of more unit-y tests that
 would help us follow the oslo refactoring.

 However, it sounds like the osloification has maybe already started and
 that refactoring spawn will have to take a backseat to that.

 --Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Sean Dague
On 03/24/2014 02:20 PM, Doug Hellmann wrote:
 
 
 
 On Mon, Mar 24, 2014 at 5:33 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
 Joe Gordon wrote:
  There are still two outstanding trove dependencies that are currently
  used in trove but not in global requirements. It would be nice to get
  this sorted out before the freeze so we can
  turn https://review.openstack.org/#/c/80690/ on.
 
  mockito https://review.openstack.org/#/c/80850/
 
 This one was abandoned. Trove team is looking to move away from mockito
 to mock. Timeline is in the next 4-5 days.
 
 
 I was one of several people who objected to that. I would be happy to
 change my -1 to a +2 if the Trove team needs more time to change.
 Perhaps as a J1 goal?

The Trove team said last week they could probably land the removal of
mockito in trove this week (it was on their roadmap anyway). So unless
they feel that's not doable, I think we're good.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-24 Thread Ruslan Kamaldinov
On Mon, Mar 24, 2014 at 10:08 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Seeing that the following repos already exist, maybe there is some need for
 cleanup?

 - https://github.com/stackforge/murano-agent
 - https://github.com/stackforge/murano-api
 - https://github.com/stackforge/murano-common
 - https://github.com/stackforge/murano-conductor
 - https://github.com/stackforge/murano-dashboard
 - https://github.com/stackforge/murano-deployment
 - https://github.com/stackforge/murano-docs
 - https://github.com/stackforge/murano-metadataclient
 - https://github.com/stackforge/murano-repository
 - https://github.com/stackforge/murano-tests
 ...(did I miss others?)

 Can we maybe not have more git repositories and instead figure out a way to
 have 1 repository (perhaps with submodules?) ;-)

 It appears like murano is already exploding all over stackforge which makes
 it hard to understand why yet another repo is needed. I understand why from
 a code point of view, but it doesn't seem right from a code organization
 point of view to continue adding repos. It seems like murano
 (https://github.com/stackforge/murano) should just have 1 repo, with
 sub-repos (tests, docs, api, agent...) for its own organizational usage
 instead of X repos that expose others to murano's internal organizational
 details.

 -Josh


Joshua,

I agree that this huge number of repositories is confusing for newcomers. I've
spent some time understanding the mission of each of these repos. That's why we
already did the cleanup :) [0]

And I personally will do everything to prevent creation of new repo for
Murano.

Here is the list of repositories targeted for the next Murano release (Apr 17):
* murano-api
* murano-agent
* python-muranoclient
* murano-dashboard
* murano-docs

The rest of these repos will be deprecated right after the release. Also, we
will rename murano-api to just murano. murano-api will include all the
Murano services, functional tests for Tempest, Devstack scripts, and developer
docs. I guess we can already update the README files in the deprecated repos to
avoid further confusion.

I wouldn't agree that there should be just one repo. Almost every OpenStack
project has its own repo for its python client. All the user docs are kept in a
separate repo. Guest agent code should live in its own repository to keep the
number of dependencies as low as possible. I'd say there should be a
required/comfortable minimum of repositories per project.


And one more nit:
OpenStack has its own git repository [1]. We should avoid referring to GitHub
since it's just a convenient mirror, while [1] is the official
OpenStack repository.

[0] https://blueprints.launchpad.net/murano/+spec/repository-reorganization
[1] http://git.openstack.org/cgit/



Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Russell Bryant
On 03/24/2014 02:19 PM, Monty Taylor wrote:
 On 03/24/2014 10:07 AM, Russell Bryant wrote:
 On 03/24/2014 12:34 PM, James E. Blair wrote:
 Hi,

 So recently we started this experiment with the compute and qa programs
 to try using Gerrit to review blueprints.  Launchpad is deficient in
 this area, and while we hope Storyboard will deal with it much better,
 it's not ready yet.

 This seems to be a point of confusion.  My view is that Storyboard isn't
 intended to implement what gerrit provides.  Given that, it seems like
 we'd still be using this whether the tracker is launchpad or storyboard.

 As a development organization, OpenStack scales by adopting common tools
 and processes, and true to form, we now have a lot of other projects
 that would like to join the experiment.  At some point that stops
 being an experiment and becomes practice.

 However, at this very early point, we haven't settled on answers to some
 really basic questions about how this process should work.  Before we
 extend it to more projects, I think we need to establish a modicum of
 commonality that helps us integrate it with our tooling at scale, and
 just as importantly, helps new contributors and people who are working
 on multiple projects have a better experience.

 I'd like to hold off on creating any new specs repos until we have at
 least the following questions answered:

 Sounds good to me.

 a) Should the specs repos be sphinx documents?

 Probably.  I see that the qa-specs repo has this up for review.  I'd
 like to look at applying this to nova-specs and see how it affects the
 workflow we've had in mind so far.

 b) Should the follow the Project Testing Interface[1]?

 As its relevant, sure.
 
 I think the main one here, as is in mtrenish's patch, is to make sure
 tox -evenv python setup.py build_sphinx works.

OK, here it is for nova-specs: https://review.openstack.org/#/c/82564/
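
For reference, the tox plumbing this needs is tiny -- roughly the following
(env names are just the usual convention; the review has the authoritative
version):

[tox]
envlist = docs
skipsdist = True

[testenv]
usedevelop = True
deps = -r{toxinidir}/requirements.txt

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

With that, both tox -edocs and tox -evenv python setup.py build_sphinx work.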

 Additionally, if we want to do code-style analysis, I'd suggest putting
 it into tox -epep8. I know that sounds LUDICROUS right now, but I really
 want to rename that to tox -elint or something, because it's long since
 stopped being just pep8 anywhere.
 
 Or?

Yeah, some validation on the structure would be nice.

 c) Some basic agreement on what information is encoded?

 We've been working on a template in nova-specs here:

 http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

 eg: don't encode implementation status (it should be in launchpad)

 Agreed

 do encode branches (as directories? as ...?)

 IMO, yes.  The reason is that I think approval of a spec should be
 limited to a given release.  If it slips, it should be re-reviewed to
 make sure it still makes sense given whatever developments have
 occurred.  That's why we have a juno/ directory in nova-specs.
 
 My biggest concern about the directories is where it relates to
 workflow. Essentially, I think we should not _move_ them - because there
 will be blueprints in either launchpad or storyboard with a link to the
 URL of the thing. If we later move the spec because it got re-targeted,
 we'll have a bunch of broken links.
 
 Instead, if we copy the spec to the new location (say, kestrel) when
 it's time - OR - we move the spec but leave behind a placeholder doc in
 the old location that says retargeted to kestrel - then I think we're
 in a good place.
 
 (this is why I think the implemented and approved dirs are bad)
 
 If we can do that, then I can totally buy the argument about having
 $release dirs.

I think that would work for me.  I think we can do away with
juno/approved and juno/implemented and just have juno/.



-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-24 Thread Joe Gordon
On Fri, Mar 21, 2014 at 4:40 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  We recently discussed the idea of using gerrit to review blueprint
  specifications [1].  There was a lot of support for the idea so we have
  proceeded with putting this together before the start of the Juno
  development cycle.
 
  We now have a new project set up, openstack/nova-specs.  You submit
  changes to it just like any other project in gerrit.  Find the README
  and a template for specifications here:
 
http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst
 
http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

 Adding the documentation team - the above is the template for nova
 blueprints under the new process, at the time of writing the documentation
 impact section reads:

 
 Documentation Impact
 

 What is the impact on the docs team of this change? Some changes might
 require
 donating resources to the docs team to have the documentation updated.
 Don't
 repeat details discussed above, but please reference them here.
 

 Under the current procedure documentation impact is only really directly
 addressed when the code itself is committed, with the DocImpact tag in the
 commit message, and a documentation bug is raised via automation. The above
 addition to the blueprint template offers a good opportunity to start
 thinking about the documentation impact, and articulating it much earlier
 in the process*.

 I'm wondering if we shouldn't provide some more guidance on what a good
 documentation impact assessment would look like though. I know Anne previously
 articulated some thoughts on this here:


 http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/

 TL;DR:

 * Who would use the feature?
 * Why use the feature?
 * What is the exact usage for the feature?
 * Does the feature also have permissions/policies attached?
 * If it is a configuration option, which flag grouping should it go into?

 Do these questions or some approximation of them belong in the template?
 Or can we do better? Interested in your thoughts :). On a separate note a
 specific type of documentation I have often bemoaned not having a field in
 launchpad for is a release note. Is this something separate or does it
 belong in documentation impact? A good release note answers most if not all
 of the above questions but is also short and concise.


The template should answer all the questions highlighted on
http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/ as
we need the same information to review the blueprint itself. If not, that's a
bug.
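
Something along these lines in the template would capture them (wording is
just a strawman):

Documentation Impact
====================

* Who would use the feature, and why?
* What is the exact usage of the feature (CLI/API examples)?
* Does the feature have permissions/policies attached?
* For new configuration options, which flag grouping should they go into?
* Suggested release note text (short and concise).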



 Thanks,

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Doug Hellmann
On Mon, Mar 24, 2014 at 2:28 PM, Sean Dague s...@dague.net wrote:

 On 03/24/2014 02:20 PM, Doug Hellmann wrote:
 
 
 
  On Mon, Mar 24, 2014 at 5:33 AM, Thierry Carrez thie...@openstack.org
  mailto:thie...@openstack.org wrote:
 
  Joe Gordon wrote:
   There are still two outstanding trove dependencies that are
 currently
   used in trove but not in global requirements. It would be nice to
 get
   this sorted out before the freeze so we can
   turn https://review.openstack.org/#/c/80690/ on.
  
   mockito https://review.openstack.org/#/c/80850/
 
  This one was abandoned. Trove team is looking to move away from
 mockito
  to mock. Timeline is in the next 4-5 days.
 
 
  I was one of several people who objected to that. I would be happy to
  change my -1 to a +2 if the Trove team needs more time to change.
  Perhaps as a J1 goal?

 The Trove team said last week they could probably land the removal of
 mockito in trove this week (it was on their roadmap anyway). So unless
 they feel that's not doable, I think we're good.


Sounds good.

Doug




 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Meeting Monday March 24th at 20:00 UTC

2014-03-24 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday March 24, at
20:00 UTC in #openstack-meeting-alt.

Meeting agenda is available here
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcome
to add agenda items.

You can check this link
http://time.is/0800PM_24_Mar_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
if you need to figure out what 20:00 UTC means in your time zone.

-Douglas Mendizabal




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-24 Thread Joshua Harlow
So getting back to this thread.

I'd like to split it up into a few sections to address the HA and
long-running-actions cases, which I believe are 2 separate (but connected)
questions.

=== Long-running actions ===

First, let me describe a little bit of what I believe are the execution models
that taskflow currently targets (although it is not limited to just these).

The first execution model I would call the local execution model. This model
involves forming tasks and flows and then executing them inside an application
that is running for the duration of the workflow (although if it crashes it can
re-establish the tasks and flows it was running and attempt to resume them).
This could also be called the 'conductor' approach, where nova, ironic and
trove have a conductor which manages these long-running actions (the conductor
is alive/running throughout the duration of these workflows, although it may be
restarted while running). The restarting + resuming part is something that
OpenStack hasn't handled so gracefully so far, typically requiring either some
type of cleanup at restart (or by operations); with taskflow using this model,
resumption from the last saved state becomes possible (this connects into the
persistence model that taskflow uses, the state transitions, how execution
itself occurs...).
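
To make that concrete, here is a minimal sketch of the local model (taskflow
API names are from memory, so treat it as illustrative rather than
authoritative):

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class CreateVolume(task.Task):
    # execute() does the actual work; the result is saved under the name
    # given by 'provides' so later tasks (or a resume) can make use of it.
    def execute(self):
        return 'volume-1'


flow = linear_flow.Flow('provision').add(CreateVolume(provides='volume_id'))
result = engines.run(flow)  # blocks until the flow finishes (or fails)

The application that calls engines.run() is the 'conductor' here, and it stays
alive for the duration of the flow.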

The second execution model is an extension of the first, whereby there is still
a type of 'conductor' that is managing the lifetime of the workflow, but
instead of executing tasks locally in the conductor itself, tasks are now
executed on remote workers (see http://tinyurl.com/lf3yqe4). The engine is
still 'alive' for the lifetime of the execution, although the work it is doing
is relatively minimal (since it's not actually executing any task code, but
proxying those requests to other workers). While running, the engine does the
conducting of the remote workers (saving persistence details, doing state
transitions, getting results, sending requests to workers...).

As you have already stated, if a task is going to run for 5+ days (some really
long hadoop job, for example) then these 2 execution models may not be suited
for this type of usage, due to the current requirement that the engine
overseeing the work must be kept alive (since something needs to receive
responses and deal with state transitions and persistence). If the desire is to
have a third execution model, one that can deal with extremely long-running
tasks without needing an active mechanism that is 'alive' during this process,
then I believe that would call for the creation of a new engine type in
taskflow (https://github.com/openstack/taskflow/tree/master/taskflow/engines)
that deals with this use-case. I don't believe it would be hard to create this
engine type, although it would involve more complexity than what exists.
Especially since there needs to be some 'endpoint' that receives responses when
the 5+ day job actually finishes (so in this manner some type of code must be
'always' running to deal with these responses anyway). That means there would
likely need to be a 'watchdog' process, always running, that would itself do
the state transitions and result persistence (and so on); in a way this would
be a 'lazy' version of the first/second execution models above.

=== HA ===

So this is an interesting question, and to me it is strongly connected to how
your engines are executing (and the persistence and state transitions that they
go through while running). Without persistence of state and transitions there
is no good way (a bad way can of course be created, by just redoing all the
work, but that's not always feasible or the best option) to accomplish resuming
in a sane manner, and there is also IMHO no way to accomplish any type of
automated HA of workflows. Since taskflow was conceived to manage the states
and transitions of tasks and flows, it gains the ability to do this resuming,
and it also gains the ability to automatically provide execution HA to its
users.

Let me describe:

When you save the states of a workflow, and any intermediate results of that
workflow, to some database (for example), then the application containing the
engine (see the above models, for example the conductor type) may be prone to
crashes (or just being powered off due to software upgrades...). Since
taskflow's key primitives were made to allow resuming when a crash occurs, it
is relatively simple to allow another application (also running a conductor) to
resume whatever the prior application was doing when it crashed. Now, most
users of taskflow don't want to have to do this resumption manually (although
they can if they want), so it would be expected that the other running
instances of that application would automatically 'know' how to 'take over' the
work of the one that crashed.
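
A rough sketch of what that looks like, reusing the flow from the sketch above
(helper names are from taskflow's examples, from memory -- the shared
persistence backend is the important part):

from taskflow import engines
from taskflow.persistence import backends
from taskflow.utils import persistence_utils as p_utils

# Every state transition and intermediate result lands in this backend,
# so any conductor that can reach it is able to resume a crashed flow.
backend = backends.fetch({'connection': 'mysql://...'})
book = p_utils.temporary_log_book(backend)

engine = engines.load(flow, book=book, backend=backend)
engine.run()

After a crash, another conductor re-creates the same flow, looks the saved
flow detail up in the logbook, loads an engine with it and runs it; tasks
that already completed are skipped and execution continues from the last
saved state.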

[openstack-dev] Rolling upgrades in icehouse

2014-03-24 Thread Meghal Gosalia
Hello folks,

I was reading a blogpost mentioned in the newsletter here - 
http://redhatstackblog.redhat.com/2014/03/11/an-icehouse-sneak-peek-openstack-compute-nova/
A note about rolling upgrades is mentioned - 
The Compute services now allow for a level of rolling upgrade, whereby control 
services can be upgraded to Icehouse while they continue to interact with 
compute services running code from the Havana release. This allows for a more 
gradual approach to upgrading an OpenStack cloud, or logical designated subset 
thereof, than has typically been possible in the past.

Where can I obtain more information about this feature?
Does the above imply that the database is upgraded along with the control
service update as well?

One more question - is there an initiative to make the icehouse database schema
work with havana-based control services?
If the control services were not tightly coupled to the database schema, then I
could move one half of the control services out of rotation and update them
from havana to icehouse, while the other half is still serving traffic. This
would help us achieve zero API downtime during upgrades.
Currently, upgrading the first half from one release to another would update
the database schema, and the other half, still running the old services, would
not work with the new schema.
Thanks,
Meghal



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Sergey Lukjanov
RE Sahara, we'll need one more version bump to remove all the backward-compat
code added for a smooth transition. What's the deadline for doing
it? Personally, I'd like to do it next week. Is that ok?

Thanks.

On Mon, Mar 24, 2014 at 10:41 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:



 On Mon, Mar 24, 2014 at 2:28 PM, Sean Dague s...@dague.net wrote:

 On 03/24/2014 02:20 PM, Doug Hellmann wrote:
 
 
 
  On Mon, Mar 24, 2014 at 5:33 AM, Thierry Carrez thie...@openstack.org
  mailto:thie...@openstack.org wrote:
 
  Joe Gordon wrote:
   There are still two outstanding trove dependencies that are
  currently
   used in trove but not in global requirements. It would be nice to
  get
   this sorted out before the freeze so we can
   turn https://review.openstack.org/#/c/80690/ on.
  
   mockito https://review.openstack.org/#/c/80850/
 
  This one was abandoned. Trove team is looking to move away from
  mockito
  to mock. Timeline is in the next 4-5 days.
 
 
  I was one of several people who objected to that. I would be happy to
  change my -1 to a +2 if the Trove team needs more time to change.
  Perhaps as a J1 goal?

  The Trove team said last week they could probably land the removal of
  mockito in trove this week (it was on their roadmap anyway). So unless
 they feel that's not doable, I think we're good.


 Sounds good.

 Doug




 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-24 Thread Nikhil Manchanda

Sean Dague writes:

 The Trove team said last week they could probably land the removal of
 mockito in trove this week (it was on their roadmap anyway). So unless
 they feel that's not doable, I think we're good.

   -Sean

Yes, this was discussed at the trove IRC meeting last week, and is
something that we identified as a top priority for us. I'm looking to
have it complete over the next 2-3 days. Once the changes are in,
we won't require an exception for mockito, so it shouldn't affect the
dependency freeze. I'll post an update to the thread once this is
complete.

Thanks all for your help getting the trove requirements sorted out.

Cheers,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rolling upgrades in icehouse

2014-03-24 Thread Dan Smith

 Where can I obtain more information about this feature?

From the blog post that I've yet to write :D

 Does the above imply that the database is upgraded along with the control
 service update as well?

Yes, but only for the services that interact directly with the
database. The services that do *not* need to be upgraded atomically
with the schema are: nova-compute and nova-network.

 One more question - is there an initiative to make icehouse
 database schema work with havana based control services ?

It depends on what you mean by control services. For icehouse, the
incremental step that we're making is that all the controller services
must be upgraded atomically with the database schema. That means api,
scheduler, conductor, etc. A havana compute node is sufficiently
isolated from the data schema that it will continue to work with an
icehouse conductor, allowing you to upgrade compute nodes
independently after the controller services are updated.

This was just our first step at providing some capability for this. We
hope to continue to increase the capabilities (and decrease the amount
that must be done atomically) going forward.

Hope that helps!

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-24 Thread Matthew Treinish
On Mon, Mar 24, 2014 at 02:32:35PM -0400, Russell Bryant wrote:
 On 03/24/2014 02:19 PM, Monty Taylor wrote:
  On 03/24/2014 10:07 AM, Russell Bryant wrote:
  On 03/24/2014 12:34 PM, James E. Blair wrote:
  Hi,
 
  So recently we started this experiment with the compute and qa programs
  to try using Gerrit to review blueprints.  Launchpad is deficient in
  this area, and while we hope Storyboard will deal with it much better,
  it's not ready yet.
 
  This seems to be a point of confusion.  My view is that Storyboard isn't
  intended to implement what gerrit provides.  Given that, it seems like
  we'd still be using this whether the tracker is launchpad or storyboard.
 
  As a development organization, OpenStack scales by adopting common tools
  and processes, and true to form, we now have a lot of other projects
  that would like to join the experiment.  At some point that stops
  being an experiment and becomes practice.
 
  However, at this very early point, we haven't settled on answers to some
  really basic questions about how this process should work.  Before we
  extend it to more projects, I think we need to establish a modicum of
  commonality that helps us integrate it with our tooling at scale, and
  just as importantly, helps new contributors and people who are working
  on multiple projects have a better experience.
 
  I'd like to hold off on creating any new specs repos until we have at
  least the following questions answered:
 
  Sounds good to me.
 
  a) Should the specs repos be sphinx documents?
 
  Probably.  I see that the qa-specs repo has this up for review.  I'd
  like to look at applying this to nova-specs and see how it affects the
  workflow we've had in mind so far.
 
  b) Should the follow the Project Testing Interface[1]?
 
  As its relevant, sure.
  
  I think the main one here, as is in mtrenish's patch, is to make sure
  tox -evenv python setup.py build_sphinx works.
 
 OK, here it is for nova-specs: https://review.openstack.org/#/c/82564/
 

The matching qa-specs one that Monty mentioned before is here:

https://review.openstack.org/#/c/82531/

I also took the initiative and took what Russell and I did for the qa-specs and
nova-specs repos and started a cookiecutter template for making new specs repos
and put it on github:

https://github.com/mtreinish/specs-cookiecutter

This way once everything is sorted out from this thread we can have a common
base for adding new specs repos for other projects.

  Additionally, if we want to do code-style analysis, I'd suggest putting
   it into tox -epep8. I know that sounds LUDICROUS right now, but I really
   want to rename that to tox -elint or something, because it's long since
  stopped being just pep8 anywhere.
  
  Or?
 
 Yeah, some validation on the structure would be nice.
 

Sean has a review up for a basic validation tool here:
https://review.openstack.org/#/c/81347/

Like Jim mentioned in the review, it probably makes the most sense to put that
in a common place instead of in-tree, so that all the specs repos can use it
easily.
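
For the curious, the core of such a check is small -- roughly the following
(the required section names are placeholders, not the agreed template):

# Sketch: parse each spec with docutils and require some top-level sections.
import sys

from docutils import nodes
from docutils.core import publish_doctree

REQUIRED = ('Problem description', 'Proposed change', 'Alternatives')


def check(path):
    with open(path) as f:
        titles = [n.astext()
                  for n in publish_doctree(f.read()).traverse(nodes.title)]
    missing = [t for t in REQUIRED if t not in titles]
    if missing:
        print('%s: missing sections: %s' % (path, ', '.join(missing)))
    return 1 if missing else 0


if __name__ == '__main__':
    failed = sum(check(p) for p in sys.argv[1:])
    sys.exit(1 if failed else 0)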


  c) Some basic agreement on what information is encoded?
 
  We've been working on a template in nova-specs here:
 
  http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst
 
  eg: don't encode implementation status (it should be in launchpad)
 
  Agreed
 
  do encode branches (as directories? as ...?)
 
  IMO, yes.  The reason is that I think approval of a spec should be
  limited to a given release.  If it slips, it should be re-reviewed to
  make sure it still makes sense given whatever developments have
  occurred.  That's why we have a juno/ directory in nova-specs.
  
  My biggest concern about the directories is where it relates to
  workflow. Essentially, I think we should not _move_ them - because there
  will be blueprints in either launchpad or storyboard with a link to the
   URL of the thing. If we later move the spec because it got re-targeted,
  we'll have a bunch of broken links.
  
   Instead, if we copy the spec to the new location (say, kestrel) when
   it's time - OR - we move the spec but leave behind a placeholder doc in
   the old location that says retargeted to kestrel - then I think we're
  in a good place.
  
  (this is why I think the implemented and approved dirs are bad)
  
  If we can do that, then I can totally buy the argument about having
  $release dirs.
 
 I think that would work for me.  I think we can do away with
 juno/approved and juno/implemented and just have juno/.
 
 

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-delete in amqp reply_* queues in OpenStack

2014-03-24 Thread Dmitry Mescheryakov
I see two possible explanations for these 5 remaining queues:

 * They were indeed recreated by 'compute' services. I.e. a controller
service sent some command over RPC and then was shut down. Its
reply queue was automatically deleted, since its only consumer was
disconnected. The compute services replied after that and so recreated
the queue. According to the RabbitMQ docs, such a queue will stay alive
indefinitely, since it will never have a consumer.

 * Possibly there are services on compute nodes which initiate RPC
calls themselves. I don't know the OpenStack architecture well enough to say
whether services running on compute nodes do so. In that case these 5 queues
are still used by the compute services.

Do the RabbitMQ management tools (web or CLI) allow you to view the active
consumers of a queue? If yes, then you can find out which of the cases
above you encountered. Or maybe it is some third case I didn't account
for :-)
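
(At least the management plugin's HTTP API can do it: it reports a consumer
count per queue. A quick sketch, default port and guest credentials assumed:

# Sketch: list reply_* queues that currently have no consumers.
import requests

resp = requests.get('http://localhost:15672/api/queues',
                    auth=('guest', 'guest'))
for q in resp.json():
    if q['name'].startswith('reply_') and not q.get('consumers'):
        print(q['name'])

A reply queue showing up in that list matches the first case above.)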

 I assume that those 5 queues are (re)created by the services running on the
 compute nodes, but if that's the case then how would the services running on
 the controller node find out about the names of the queues?

When the process initiating an RPC call is restarted, there is no way for it
to know about the queue it used before for receiving replies. The replies
simply never get back. On the other hand, the restarted process does
not know about the calls it made before the restart, so it is not a big
loss anyway.

For clarity, here is a simplified version of the algorithm the RPC client (the
one initiating the RPC call) uses:

msg_id = uuid.uuid4().hex

if not self.reply_q:
    self.reply_q = 'reply_' + uuid.uuid4().hex

message = {
    'msg_id': msg_id,
    'reply_q': self.reply_q,
    'payload': payload,
}

send(message)

reply = wait_for_reply(queue=self.reply_q, msg_id=msg_id)


Dmitry



2014-03-24 19:52 GMT+04:00 Chris Friesen chris.frie...@windriver.com:
 On 03/24/2014 02:59 AM, Dmitry Mescheryakov wrote:

 Chris,

 In oslo.messaging a single reply queue is used to gather results from
 all the calls. It is created lazily on the first call and is used
 until the process is killed. I did a quick look at oslo.rpc from
 oslo-incubator and it seems like it uses the same pattern, which is
 not surprising since oslo.messaging descends from oslo.rpc. So if you
 restart some process which does rpc calls (nova-api, I guess), you
 should see one reply queue gone and another one created instead after
 some time.


 Okay, that makes a certain amount of sense.

 How does it work for queues used by both the controller and the compute
 node?

 If I do a controlled switchover from one controller to another (killing and
 restarting rabbit, nova-api, nova-conductor, nova-scheduler, neutron,
 cinder, etc.) I see that the number of reply queues drops from 28 down to 5,
 but those 5 are all ones that existed before.

 I assume that those 5 queues are (re)created by the services running on the
 compute nodes, but if that's the case then how would the services running on
 the controller node find out about the names of the queues?


 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rolling upgrades in icehouse

2014-03-24 Thread Tim Bell

How does this interact with cells? Can the cell API instances be upgraded
independently of the cells themselves?

My ideal use case would be:

- It would be possible to upgrade one of the cells (such as a QA environment)
before the cell API nodes
- Cells can be upgraded one-by-one as needed by stability/functionality
- API cells can be upgraded during this process ... i.e. midway, before the
most critical cells are migrated

Is this approach envisaged?

Tim


 -Original Message-
 From: Dan Smith [mailto:d...@danplanet.com]
 Sent: 24 March 2014 20:14
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Rolling upgrades in icehouse
 
 
  Where can I obtain more information about this feature?
 
  From the blog post that I've yet to write :D
 
  Does the above imply that the database is upgraded along with the control
  service update as well?
 
 Yes, but only for the services that interact directly with the database. The 
 services that do *not* need to be upgraded atomically with
 the schema are: nova-compute and nova-network.
 
  One more question - is there an initiative to make the icehouse database
  schema work with havana-based control services?
 
 It depends on what you mean by control services. For icehouse, the 
 incremental step that we're making is that all the controller
 services must be upgraded atomically with the database schema. That means 
 api, scheduler, conductor, etc. A havana compute node
 is sufficiently isolated from the data schema that it will continue to work 
 with an icehouse conductor, allowing you to upgrade
 compute nodes independently after the controller services are updated.
 
 This was just our first step at providing some capability for this. We hope 
 to continue to increase the capabilities (and decrease the
 amount that must be done atomically) going forward.
 
 Hope that helps!
 
  --Dan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Carl Baldwin
I was thinking that we could document the information about sudo and
iproute2 patches with the upcoming release.  How would I go about
doing this?  Is there any section in our documentation about OS-level
tweaks or requirements such as these that could present this
information as part of the release?

Carl

On Wed, Mar 5, 2014 at 9:58 AM, Rick Jones rick.jon...@hp.com wrote:
 On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:


  Hello,

  Recently, I found a serious issue about network-nodes startup time,
 neutron-rootwrap eats a lot of cpu cycles, much more than the processes
 it's wrapping itself.

  On a database with 1 public network, 192 private networks, 192
 routers, and 192 nano VMs, with OVS plugin:


 Network node setup time (rootwrap): 24 minutes
 Network node setup time (sudo): 10 minutes


 I've not been looking at rootwrap, but have been looking at sudo and ip.
 (Using some scripts which create fake routers so I could look without any
 of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 versions of
 each at least will enumerate all the interfaces on the system, even though
 they don't need to.

 There was already an upstream change to 'ip' that eliminates the unnecessary
 enumeration.  In the last few weeks an enhancement went into the upstream
 sudo that allows one to configure sudo to not do the same thing.   Down in
 the low(ish) three figures of interfaces it may not be a Big Deal (tm) but
 as one starts to go beyond that...

 commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
 Author: Stephen Hemminger step...@networkplumber.org
 Date:   Thu Mar 28 15:17:47 2013 -0700

 ip: remove unnecessary ll_init_map

 Don't call ll_init_map on modify operations
 Saves significant overhead with 1000's of devices.

 http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html

 Whether your environment already has the 'ip' change I don't know, but odds
 are probably pretty good it doesn't have the sudo enhancement.


 That's the time since you reboot a network node, until all namespaces
 and services are restored.


 So, that includes the time for the system to go down and reboot, not just
 the time it takes to rebuild once rebuilding starts?


  If you look at appendix 1, this extra 14 min of overhead matches the
  fact that rootwrap needs 0.3 s to start and launch a system command
  (once filtered).

   14 minutes = 840 s
   (840 s / 192 resources) / 0.3 s ~= 15 operations per
  resource (qdhcp+qrouter) (iptables, OVS port creation & tagging, starting
  child processes, etc.)

 The overhead comes from python startup time + rootwrap loading.


 How much of the time is python startup time?  I assume that would be all the
 find this lib, find that lib stuff one sees in a system call trace?  I saw
 a boatload of that at one point but didn't quite feel like wading into that
 at the time.
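
 (A throwaway way to put a number on the per-invocation delta, for anyone
 curious -- config path and iteration count here are just assumptions:

 import subprocess
 import time

 CMDS = {
     'sudo': ['sudo', 'ip', 'link', 'show'],
     'rootwrap': ['sudo', 'neutron-rootwrap',
                  '/etc/neutron/rootwrap.conf', 'ip', 'link', 'show'],
 }

 devnull = open('/dev/null', 'w')
 for name, cmd in CMDS.items():
     start = time.time()
     for _ in range(20):
         subprocess.check_call(cmd, stdout=devnull)
     print('%s: %.3fs per call' % (name, (time.time() - start) / 20))

 The difference between the two numbers is essentially the python startup +
 rootwrap loading cost per call.)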


  I suppose that rootwrap was designed for a smaller number of system
  calls (nova?).


 And/or a smaller environment perhaps.


  And, I understand what rootwrap provides, a level of filtering that
  sudo cannot offer. But it raises some questions:

  1) Is anyone actually using rootwrap in production?

  2) What alternatives can we think about to improve this situation?

  0) already being done: coalescing system calls. But I'm unsure
  that's enough. (if we coalesce 15 calls into 3 on this system we get:
  192 * 3 * 0.3 / 60 ~= 3 minutes overhead on a 10 min operation).


 It may not be sufficient, but it is (IMO) certainly necessary.  It will make
 any work that minimizes or eliminates the overhead of rootwrap look that
 much better.


 a) Rewriting rules into sudo (to the extent that it's possible), and
 live with that.
  b) How secure is neutron against command injection at that point? How
  much user input is filtered on the API calls?
  c) Even if b is OK, I suppose that if the DB gets compromised,
 that could lead to command injection.

 d) Re-writing rootwrap into C (it's 600 python LOCs now).

 e) Doing the command filtering at neutron-side, as a library and
 live with sudo with simple filtering. (we kill the python/rootwrap
 startup overhead).

 3) I also find 10 minutes a long time to setup 192 networks/basic tenant
 structures, I wonder if that time could be reduced by conversion
 of system process calls into system library calls (I know we don't have
 libraries for iproute, iptables?, and many other things... but it's a
 problem that's probably worth looking at.)


 Certainly going back and forth creating short-lived processes is at least
 anti-social and perhaps ever so slightly upsetting to the process scheduler.
 Particularly at scale.  The/a problem is though that the Linux networking
 folks have been somewhat reticent about creating libraries (at least any
 that they would end-up supporting) because they have a concern it will
 lock-in interfaces and reduce their freedom of movement.

 happy benchmarking,

 rick jones
 the fastest procedure call 

[openstack-dev] [TripleO][Tempest]

2014-03-24 Thread Robert Collins
Hey, so folk want to start gluing tempest and tripleo-ci together - yay!

Sadly I was asleep during the IRC conversation, but I just wanted to
note - we've no need to bake tempest into images at this point - the
jenkins slave is appropriately resourced to run tempest against the
deployed cloud, and in fact it's better to run it from there than from
e.g. the seed, because otherwise the jenkins slave is idle, just
wasting all the ram it used to build images.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-24 Thread Robert Collins
On 25 March 2014 06:28, Ben Nemec openst...@nemebean.com wrote:


 I created an etherpad here:
 https://etherpad.openstack.org/p/devtest-env-reqs

 And linked it from the blueprint here:
 https://blueprints.launchpad.net/tripleo/+spec/test-environment-requirements

 I only added some details about devtest on openstack for now since I
 figured everyone else could add their thoughts in their own words.

I think it would be super useful if you could capture all the things
in one place - it will mean we don't need to chase folk up to ensure
their idea is present in the etherpad.

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-24 Thread Eugene Nikanorov
Hi Susanne,

a couple of comments inline:





 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.

As far as I understand these two paragraphs, you're talking about making the
LBaaS API more high-level than it is right now.
I think that was not on our roadmap, because another project (Heat) is
taking care of the more abstracted service.
The LBaaS goal is to provide vendor-agnostic management of load-balancing
capabilities at a quite fine-grained level.
Any higher-level APIs/tools can be built on top of that, but they are out of
LBaaS scope.



 We like where Neutron LBaaS is going with regards to L7 policies and SSL
 termination support which Libra is not currently supporting and want to
 take advantage of the best in each project.

 We have a draft on how we could make Neutron LBaaS take advantage of Libra
 in the back-end.

 The details are available at:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


I looked at the proposal briefly, and it makes sense to me. It also seems to be
the simplest way of integrating LBaaS and Libra: create a Libra driver for
LBaaS.
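
Schematically, such a driver would subclass the LBaaS abstract driver and
forward each operation to Libra. A rough sketch (method names approximate the
current in-tree interface; LibraClient is a hypothetical wrapper around
Libra's REST API):

from neutron.services.loadbalancer.drivers import abstract_driver


class LibraDriver(abstract_driver.LoadBalancerAbstractDriver):

    def __init__(self, plugin):
        self.plugin = plugin
        self.client = LibraClient()  # hypothetical Libra REST client

    def create_vip(self, context, vip):
        # Map the LBaaS vip to a Libra load balancer and create it.
        self.client.create_loadbalancer(vip)

    def delete_vip(self, context, vip):
        self.client.delete_loadbalancer(vip['id'])

    # ... and likewise for pools, members and health monitors ...

All the HA/monitoring machinery you describe would then live behind the Libra
API, invisible to Neutron.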


 While this would allow us to fill a gap short term we would like to
 discuss the longer term strategy since we believe that everybody would
 benefit from having such managed services artifacts built directly into
 Neutron LBaaS.

I'm not sure about building it directly into LBaaS, although we can discuss
it. For instance, HA is definitely on the roadmap, and everybody seems to agree
that HA should not require the user/tenant to do any specific configuration
other than choosing the HA capability of the LBaaS service. So as far as I see
it, the requirements for HA in LBaaS look very similar to the requirements for
HA in Libra.



 There are blueprints on high-availability for the HA proxy software
 load-balancer and we would like to suggest implementations that fit our
 needs as services providers.



 One example where the managed service approach for the HA proxy load
 balancer is different from the current Neutron LBaaS roadmap is around HA
 and resiliency. The 2 LB HA setup proposed (
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
 appropriate for service providers in that users would have to pay for the
 extra load-balancer even though it is not being actively used.

One important idea of the HA work is that its implementation is
vendor-specific, so each vendor or cloud provider can implement it in the way
that suits their needs. So I don't see why a particular HA solution for haproxy
should be considered common among other vendors/providers.

An alternative approach is to implement resiliency using a pool of stand-by,
 preconfigured load balancers owned by e.g. the LBaaS tenant, and assign
 load balancers from the pool to tenants' environments. We are currently
 using this approach in the public cloud with Libra, and it takes
 approximately 80 seconds for the service to decide that a load balancer has
 failed, swap the floating IP, update the DB, etc., and have a new LB
 running.

That can certainly be implemented. I would only recommend implementing such a
management system outside of the Neutron/LBaaS tree, e.g. to only have a
client within the Libra driver that communicates with the management
backend.

Thanks,
Eugene.



 Regards Susanne

 ---

 Susanne M. Balle
 Hewlett-Packard
 HP Cloud Services

 Please consider the environment before printing this email.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rolling upgrades in icehouse

2014-03-24 Thread Chris Behrens

On Mar 24, 2014, at 12:31 PM, Tim Bell tim.b...@cern.ch wrote:

 
 How does this interact with cells ? Can the cell API instances be upgraded 
 independently of the cells themselves ?
 
 My ideal use case would be
 
 - It would be possible to upgrade one of the cells (such as a QA environment) 
 before the cell API nodes
 - Cells can be upgraded one-by-one as needed by stability/functionality
 - API cells can be upgraded during this process ... i.e. mid way before the 
 most critical cells are migrated
 
 Is this approach envisaged ?

That would be my goal long term, but I'm not sure it'll work right now. :)  We
did try to take care in making sure that the cells manager is backwards
compatible. I think all messages going DOWN to the child cell from the API will
work. However, what I could possibly see as broken is messages coming from a
child cell back up to the API cell. I believe we changed instance updates to
pass objects back up…  The objects will fail to deserialize right now in the
API cell, because it could get a newer version and not know how to deal with
it. If we added support to make nova-cells always redirect via conductor, it
could actually down-rev the object, but that has performance implications
because of all of the DB updates the API nova-cells does. There are a number of
things that I think cells doesn't pass as objects yet, either, which could be a
problem.

So, in other words, I think the answer right now is that there really is no
great upgrade plan wrt cells other than just taking the hit and doing
everything at once. I'd love to fix that, as I think it should work as you
describe some day. We have work to do to make sure we're actually passing
objects everywhere… and then we need to think about how we can get the API
cell to be able to deserialize newer object versions.
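
(To illustrate the shape of the problem with a toy -- nova's real machinery
is obj_make_compatible() and friends in nova/objects/base.py; the field and
version numbers here are invented:

class Instance(object):
    VERSION = '1.1'  # pretend 1.1 added 'shiny_new_field'

    def obj_make_compatible(self, primitive, target_version):
        # Down-rev: strip anything a 1.0 receiver won't understand.
        if target_version == '1.0':
            primitive.pop('shiny_new_field', None)
        return primitive

The side doing the deserializing can't strip fields it has never heard of,
which is why the down-rev has to happen somewhere that is running the newer
code.)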

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

