Re: [openstack-dev] [LBaaS] API spec for SSL Support

2014-03-06 Thread Samuel Bercovici
Hi,

The wiki is updated to reflect the APIs.

Regards,
-Sam.



From: Palanisamy, Anand [mailto:apalanis...@paypal.com]
Sent: Thursday, March 06, 2014 3:26 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [LBaaS] API spec for SSL Support

Hi All,

Please let us know if we have the blueprint or the proposal for the LBaaS SSL 
API specification. We see only the workflow documented here 
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL.

Thanks
Anand



Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Nikola Đipanov
On 03/05/2014 07:59 PM, Russell Bryant wrote:
 On 03/05/2014 12:27 PM, Andrew Laski wrote:
 On 03/05/14 at 07:37am, Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE
 (https://review.openstack.org/#/c/56416/)

 This is the last of several patches (already merged) that implement
 image cache cleanup for the vmware driver.  This patch solves a
 significant customer pain point as it removes unused images from their
 datastore.  Without this patch their datastore can become
 unnecessarily full.  In addition to the customer benefit from this
 patch it

 1.  has a turn-off switch
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team

 ndipanov has been good enough to say he will review the patch, so we
 would ask for one additional core sponsor for this FFE.

 Looking over the blueprint and outstanding review it seems that this is
 a fairly low risk change, so I am willing to sponsor this bp as well.
 
 Nikola, can you confirm if you're willing to sponsor (review) this?
 

Yeah - I'll review it!

N.

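For readers unfamiliar with the feature: image-cache aging boils down to an
on/off switch plus a periodic task that deletes cached images not used for
longer than a configured age. A minimal Python sketch of that shape (the
names and knobs are hypothetical, not the actual vmware driver code):

    import os
    import time

    CACHE_AGING_ENABLED = True      # the "turn-off switch"
    MAX_AGE_SECONDS = 24 * 3600     # hypothetical maximum unused age

    def age_image_cache(cache_dir):
        """Delete cached images unused for longer than MAX_AGE_SECONDS."""
        if not CACHE_AGING_ENABLED:
            return
        now = time.time()
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            # Assumes mtime is refreshed whenever an instance is spawned
            # from the cached image, so it approximates "last used".
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                os.remove(path)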


Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-06 Thread Eoghan Glynn

Thanks Thierry,

For completeness, the ceilometer links are below.

tarball:
  http://tarballs.openstack.org/ceilometer/ceilometer-milestone-proposed.tar.gz

milestone-proposed branch:
  https://github.com/openstack/ceilometer/tree/milestone-proposed

Cheers,
Eoghan


- Original Message -
 Hi everyone,
 
 We just hit feature freeze, so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.
 
 This is also string freeze, so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.
 
 Milestone-proposed branches were created for Horizon, Keystone, Glance,
 Nova, Neutron, Cinder, Heat and Trove in preparation for the
 icehouse-3 milestone publication tomorrow.
 
 Ceilometer should follow in an hour.
 
 You can find candidate tarballs at:
 http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
 http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
 http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
 http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
 http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
 http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
 http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
 http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz
 
 You can also access the milestone-proposed branches directly at:
 https://github.com/openstack/horizon/tree/milestone-proposed
 https://github.com/openstack/keystone/tree/milestone-proposed
 https://github.com/openstack/glance/tree/milestone-proposed
 https://github.com/openstack/nova/tree/milestone-proposed
 https://github.com/openstack/neutron/tree/milestone-proposed
 https://github.com/openstack/cinder/tree/milestone-proposed
 https://github.com/openstack/heat/tree/milestone-proposed
 https://github.com/openstack/trove/tree/milestone-proposed
 
 Regards,
 
 --
 Thierry Carrez (ttx)
 



Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread Nikola Đipanov
On 03/05/2014 08:00 PM, Russell Bryant wrote:
 On 03/05/2014 10:34 AM, Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.

 The feature is completed and has been tested extensively internally. The
 feature is very low risk and has huge value for users. In short, a user
 is able to upload an ISO to Glance and then boot from that ISO.

 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov

 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
 wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.
 
 John and Nikola, can you confirm your sponsorship of this one?
 

Yeah - I'll review this.

This one is actually almost ready, except that everyone who reviewed it
hates the code it touches (and makes slightly worse). That said - I
have every reason to believe the VMWare team will coordinate their
efforts around making it better in Juno, and the feature is useful and
low risk.

N.

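For context, the workflow Gary describes (upload an ISO to Glance, then boot
from it) looks roughly like this with the Icehouse-era Python clients. This is
a sketch, not the code under review; the endpoints, credentials, and flavor
are placeholders:

    import glanceclient
    from novaclient.v1_1 import client as nova_client

    # Upload the ISO to Glance (image API v1).
    glance = glanceclient.Client('1', 'http://glance-host:9292', token='TOKEN')
    with open('install.iso', 'rb') as f:
        image = glance.images.create(name='my-iso', disk_format='iso',
                                     container_format='bare', data=f)

    # Boot an instance from the uploaded ISO.
    nova = nova_client.Client('user', 'password', 'tenant',
                              'http://keystone-host:5000/v2.0')
    nova.servers.create('iso-vm', image.id, flavor='1')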


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-03-06 Thread Yair Fried

- Original Message - 

 From: Alexei Kornienko alexei.kornie...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, February 28, 2014 12:43:13 PM
 Subject: Re: [openstack-dev] [QA] The future of nosetests with
 Tempest

 Hi,

 Let me express my concerns on this topic:

  With some recent changes made to Tempest, compatibility with
  nosetests is going away.
 
 I think that we should not drop nosetests support from tempest or any
 other project. The problem with testrepository is that it's not
 providing any debugger support at all (and never will). It
 also has some issues with providing error traces in human-readable
 form, and it can be quite hard to find out what is actually broken.

 Because of this I think we should try to avoid any kind of test
 libraries that break compatibility with conventional test runners.

 Our tests should be able to run correctly with nosetests, testtools,
 or the plain old unittest runner. If for some reason test libraries
 (like testscenarios) don't provide support for this, we should fix
 those libraries or avoid using them.

+1
I have the same concern about debugging. It's an essential tool (for me, at 
least) in creating scenario tests. The more complex the test, the harder it is 
to rely on simple log-prints. 
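
For readers who haven't used testscenarios (one of the nose-incompatible
features mentioned in this thread), here is a minimal example; the load_tests
hook below is exactly what conventional runners like nose don't implement,
and the scenario names and attributes are invented:

    import testscenarios
    import testtools

    # The load_tests protocol hook multiplies each test by its scenarios.
    load_tests = testscenarios.load_tests_apply_scenarios

    class TestAddition(testtools.TestCase):

        # Each scenario injects different attributes into the test case.
        scenarios = [
            ('small', {'a': 1, 'b': 2, 'expected': 3}),
            ('negative', {'a': -1, 'b': 1, 'expected': 0}),
        ]

        def test_add(self):
            self.assertEqual(self.expected, self.a + self.b)

Under testr this reports one result per scenario (test_add(small),
test_add(negative)); nose ignores the load_tests hook, which is why such
tests can't run correctly under it.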

 Regards,
 Alexei Kornienko

 On 02/27/2014 06:36 PM, Frittoli, Andrea (HP Cloud) wrote:

  This is another example of achieving the same result (exclusion from a
  list):
  https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.py
  https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.txt

  andrea

  -Original Message-
  From: Matthew Treinish [ mailto:mtrein...@kortar.org ]
  Sent: 27 February 2014 15:49
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [QA] The future of nosetests with Tempest

  On Tue, Feb 25, 2014 at 07:46:23PM -0600, Matt Riedemann wrote:
   On 2/12/2014 1:57 PM, Matthew Treinish wrote:
    On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:
     On 1/17/2014 8:34 AM, Matthew Treinish wrote:
      On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
       On 01/16/2014 10:56 PM, Matthew Treinish wrote:
        Hi everyone,

        With some recent changes made to Tempest, compatibility with
        nosetests is going away. We've started using newer features that
        nose just doesn't support. One example of this is that we've
        started using testscenarios, and we're planning to do this in
        more places moving forward.

        So at Icehouse-3 I'm planning to push the patch out to remove
        nosetests from the requirements list, and all the workarounds
        and references to nose will be pulled out of the tree. Tempest
        will also start raising an unsupported exception when you try
        to run it with nose so that there isn't any confusion on this
        moving forward. We talked about doing this at summit briefly
        and I've brought it up a couple of times before, but I believe
        it is time to do this now. I feel that for Tempest to move
        forward we need to do this now so that there isn't any
        ambiguity as we add even more features and new types of
        testing.

       I'm with you up to here.

        Now, this will have implications for people running Tempest
        with Python 2.6, since up until now we've used nosetests there.
        There is a workaround for getting Tempest to run with Python
        2.6 and testr, see:
        https://review.openstack.org/#/c/59007/1/README.rst but
        essentially this means that when nose is marked as unsupported
        on Tempest, Python 2.6 will also be unsupported by Tempest.
        (Which honestly it basically has been for a while now; we've
        just gone without making it official.)

Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-06 Thread Manas Kelshikar
I agree, let's rename data to spec and unblock the check-in.

Nikolay - Sorry for the trouble :)
On Mar 5, 2014 10:17 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Alright, good input Manas, appreciate.

 My comments are below...

 On 06 Mar 2014, at 10:47, Manas Kelshikar ma...@stackstorm.com wrote:


- Do we have better ideas on how to work with DSL? A good mental
exercise here would be to imagine that we have more than one DSL, not only
YAML but say XML. How would it change the picture?

 [Manas] As long as we form an abstraction between the DSL format (i.e.
 YAML/XML) and its consumption, we will be able to move between various
 formats. (wishful) My personal preference is to not even have the DSL show up
 anywhere in Mistral code apart from taking it as input and transforming it
 into this first-level specification model - I know this is not the current state.


 Totally agree with your point. That's what we're trying to achieve.


- How can we clearly distinguish between these two models so that it
wouldn't be confusing?
- Do we have a better naming in mind?

 [Manas] I think we all would agree that the best approach is to have
 precise naming.

 I see your point of de-normalizing the dsl data into respective db model
 objects.

 In a previous email I suggested using *Spec. I will try to build on this -
 1. Everything specified via the YAML input is a specification or
 definition or template. Therefore I suggest we suffix all these types with
 Spec/Definition/Template. So TaskSpec/TaskDefinition/TaskTemplate etc. As
 per the latest change these are TaskData ... ActionData.


 After all the time I spent thinking about it I would choose Spec suffix
 since it's short and expresses the intention well enough. In conjunction
 with workbook package name it would look very nice (basically we get
 specification of workbook which is what we're talking about, right?)

 So if you agree then let's change to TaskSpec, ActionSpec etc. Nikolay,
 sorry for making you change this patch again and again :) But it's really
 important and going to have a long-term effect on the entire system.

 2. As per current impl the YAML is stored as a key-value in the DB. This
 is fine since that is front-ended by objects that Nikolay has introduced.
 e.g. TaskData, ActionData etc.


 Yep, right. The only thing I would suggest is to avoid DB fields like
 task_dsl like we have now. The alternative could be task_spec.

 3. As per my thinking a model object that ends up in the DB or a model
 object that is in memory all can reside in the same module. I view
 persistence as an orthogonal concern so no real reason to distinguish the
 module names of the two set of models. If we do choose to distinguish as
 per latest change i.e. mistral/workbook that works too.


 Sorry, I believe I wasn't clear enough on this thing. I think we shouldn't
 have these two models in the same package since what I meant by DB model
 is actually execution and task that carry workflow runtime information
 and refer to a particular execution (we could also call it session). So
 my point is that these are fundamentally different types of models. The
 best analogy that comes to my mind is the relationship class - instance
 where in our case class = Specification (TaskSpec etc.) and instance =
 Execution/Task. Does it make any sense?

 @Nikolay - I am generally ok with the approach. I hope that this helps
 clarify my thinking and perception. Please ask more questions.

 Overall I like the approach of formalizing the 2 models. I am ok with
 current state of the review and have laid out my preferences.


 I like the current state of this patch. The only thing I would do is
 renaming Data to Spec.

 Thank you.

 Renat Akhmerov
 @ Mirantis Inc.





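To make the class/instance analogy above concrete, here is a toy sketch of
the two kinds of models. Names follow the *Spec convention agreed in the
thread, but the code is illustrative, not Mistral's actual implementation:

    # Specification model: an immutable view over the parsed DSL (the "class").
    class TaskSpec(object):
        def __init__(self, data):
            self._data = data              # dict parsed from YAML (or XML, ...)

        @property
        def name(self):
            return self._data['name']

        @property
        def action(self):
            return self._data.get('action')

    # Runtime/DB model: the state of one concrete run (the "instance").
    class Task(object):
        def __init__(self, spec, execution_id):
            self.spec = spec               # refers back to its TaskSpec
            self.execution_id = execution_id
            self.state = 'IDLE'            # workflow runtime information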


[openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Andrew Woodward
I'd like to request an FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements essential functionality that has
been missing in the RBD driver, and this will be the second release
in which merging it has been attempted.

Andrew
Mirantis
Ceph Community



Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-06 Thread Nikolay Makhotkin
Manas, Renat, no problem :)

The commit is sent already - https://review.openstack.org/#/c/75888/


On Thu, Mar 6, 2014 at 12:14 PM, Manas Kelshikar ma...@stackstorm.com wrote:

 I agree, let's rename data to spec and unblock the check-in.

 Nikolay - Sorry for the trouble :)

 [...]




-- 
Best Regards,
Nikolay


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-06 Thread 方祯
Hi Sylvain, Russell, dims

Thanks for your replies and guidance!

I have read the docs linked under the title. In my opinion, it is quite a
good idea to take the storage component and network component into
consideration in the Nova scheduler. I agree that it is quite a large
job for the current GSoC project. From the docs I have read so far,
I think that providing additional filters and weight functions is
quite important for both the current filter_scheduler and a later
cross-service scheduler (SolverScheduler or other).
So my idea is to implement some scheduler filters/weighers that use other
metrics of the host state or other cross-service data.
As a newbie to OpenStack development, my concern is whether this is
substantial enough for a GSoC term, and what kind of work would suit the
current Nova enhancements and the future cross-service scheduler while also
filling a GSoC term. It would be great if somebody could give some advice :)

Thanks to Sylvain for the information about the #openstack-meeting IRC
channel, and to dims and Russell for their suggestions; I will update my
information soon on the GSoC wiki pages.

Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989
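
For the record, a custom Nova scheduler filter of the sort discussed above is
quite small. Below is a sketch against the Icehouse-era filter scheduler
interface; the metric and threshold are invented for the example:

    from nova.scheduler import filters

    class RamPerCoreFilter(filters.BaseHostFilter):
        """Reject hosts whose free RAM per free vCPU is below a floor."""

        min_mb_per_core = 512  # hypothetical threshold

        def host_passes(self, host_state, filter_properties):
            free_cores = host_state.vcpus_total - host_state.vcpus_used
            if free_cores <= 0:
                return False
            return (host_state.free_ram_mb / free_cores) >= self.min_mb_per_core

A weigher follows the same pattern, and either is enabled through the
scheduler configuration options.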



2014-03-05 23:32 GMT+08:00 Davanum Srinivas dava...@gmail.com:

 Hi Fang,

 Agree with Russell. Also please update the wiki with your information
 https://wiki.openstack.org/wiki/GSoC2014 and also information about
 the mentor/ideas as well (if you have not yet done so already). You
 can reach out to folks on #openstack-gsoc and #openstack-nova IRC
 channels as well

 thanks,
 dims

 On Wed, Mar 5, 2014 at 10:12 AM, Russell Bryant rbry...@redhat.com
 wrote:
  On 03/05/2014 09:59 AM, 方祯 wrote:
  Hi:
  I'm Fang Zhen, an M.S. student from China. My current research work is on
  scheduling policy in cloud computing. I have been following
  OpenStack for about 2 years. I always thought of picking a blueprint and
  implementing it with the community's guidance. Luckily, OpenStack
  participates in GSoC this year, so it is possible for me to implement the
  Cross-services Scheduler of the OpenStack-Gantt project. And also, I'm sure
  that I can continue to help OpenStack after GSoC.
 
  Thanks for your interest in OpenStack!
 
  I think the project as you've described it is far too large to be able
  to implement in one GSoC term.  If you're interested in scheduling,
  perhaps we can come up with a specific enhancement to Nova's current
  scheduler that would be more achievable in the time allotted.  I want to
  make sure we're setting you up for success, and I think helping scope
  the project is a big early part of that.
 
  --
  Russell Bryant
 



 --
 Davanum Srinivas :: http://davanum.wordpress.com




Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-06 Thread Renat Akhmerov
Ok, good!

Renat Akhmerov
@ Mirantis Inc.



On 06 Mar 2014, at 15:25, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Manas, Renat, no problem :)

 The commit is sent already - https://review.openstack.org/#/c/75888/

 [...]



Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-06 Thread Samuel Bercovici
Hi,

As an example you can look at 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?usp=sharing
Under the “Logical Model + Provisioning Status + Operation Status + Statistics”
section there are some thoughts on how to implement this.

Regards,
-Sam.


From: John Dewey [mailto:j...@dewey.ws]
Sent: Thursday, March 06, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics 
for complex LB configurations.

On Wednesday, March 5, 2014 at 12:41 PM, Eugene Nikanorov wrote:
Hi community,

More interesting questions were raised during the object model discussion about
how pool statistics and health monitoring should be used in the case of multiple
VIPs sharing one pool.

Right now we can query statistics for the pool, and some data like in/out bytes 
and request count will be returned.
If we had several vips sharing the pool, what kind of statistics would make 
sense for the user?
The options are:

1) aggregated statistics for the pool, e.g. statistics of all requests that have
hit the pool through any VIP
2) per-vip statistics for the pool.
Would it be crazy to offer both?  We can return stats for each pool associated
with the VIP as you described below.  However, we could also offer an aggregated
section for those interested.

IMO, having stats broken out per pool seems more helpful than only aggregated,
while both would be ideal.

John

Depending on the answer, the statistics workflow will be different.

A good option for getting the statistics and health status could be to query
them through the VIP and get them for the whole logical instance, e.g. a call like:
 lb-vip-statistics-get --vip-id vip_id
That would result in JSON that returns statistics for every pool associated with
the VIP, plus the operational status of all members of the pools associated with
that VIP.

Looking forward to your feedback.

Thanks,
Eugene.


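To illustrate the lb-vip-statistics-get idea together with John's
"offer both" suggestion, here is a hypothetical response payload, shown as a
Python literal; all field names are invented for the example:

    # Hypothetical response for: lb-vip-statistics-get --vip-id VIP1
    stats = {
        "vip_id": "VIP1",
        # Aggregated view: all traffic that hit any pool through this VIP.
        "aggregate": {"bytes_in": 1200, "bytes_out": 3400,
                      "total_connections": 56},
        # Per-pool breakdown, including member operational status.
        "pools": [
            {"pool_id": "POOL-A",
             "bytes_in": 700, "bytes_out": 2000, "total_connections": 30,
             "members": [{"member_id": "M1", "status": "ACTIVE"}]},
            {"pool_id": "POOL-B",
             "bytes_in": 500, "bytes_out": 1400, "total_connections": 26,
             "members": [{"member_id": "M2", "status": "INACTIVE"}]},
        ],
    }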


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread zhangyu (AI)
It seems to be an interesting idea. In fact, a China-based public IaaS,
QingCloud, has provided a similar feature for its virtual servers. Within 2
hours after a virtual server is deleted, the server owner can decide whether
or not to cancel the deletion and recycle the deleted virtual server.

People make mistakes, and such a feature helps in urgent cases. Any ideas here?

Thanks!

-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com] 
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

Hi all,

OpenStack currently provides a delete-volume function to the user,
but it seems there is no protection against accidental delete operations.

As we know, the data in a volume may be very important and valuable,
so it would be better to give the user a way to avoid deleting a volume by mistake.

Such as:
We could provide a "safe delete" for volumes.
The user can specify how long the volume's deletion will be delayed (before it
is actually deleted) when deleting the volume.
Before the volume is actually deleted, the user can cancel the delete operation
and recover the volume.
After the specified time, the volume is actually deleted by the system.

Any thoughts? Any advice is welcome.

Best regards to you.


--
zhangleiqiang

Best Regards




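A minimal sketch of the deferred-delete idea being proposed; all names here
are hypothetical and this is not Cinder code (Nova has a conceptually similar
instance-level mechanism built around soft delete and a reclaim interval):

    import time

    class SafeDeleteManager(object):
        """Mark volumes deleted; really delete them after a grace period."""

        def __init__(self, backend, grace_seconds=7200):  # e.g. a 2-hour window
            self.backend = backend          # object that really deletes volumes
            self.grace_seconds = grace_seconds
            self._pending = {}              # volume_id -> deletion deadline

        def delete(self, volume_id):
            # The volume disappears from the user's view but is kept around.
            self._pending[volume_id] = time.time() + self.grace_seconds

        def cancel_delete(self, volume_id):
            # The user changed their mind within the grace period.
            self._pending.pop(volume_id, None)

        def reclaim_expired(self):
            # Run from a periodic task: actually delete expired volumes.
            now = time.time()
            for volume_id, deadline in list(self._pending.items()):
                if now >= deadline:
                    self.backend.delete_volume(volume_id)
                    del self._pending[volume_id]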


Re: [openstack-dev] Climate Incubation Application

2014-03-06 Thread Dina Belova
Thierry, hello.


 Anne Gentle wrote:

  It feels like it should be part of a scheduler or reservation program
  but we don't have one today. We also don't have a workflow, planning, or
  capacity management program, all of which these use cases could fall under.

  (I should know this but) What are the options when a program doesn't
  exist already? Am I actually struggling with a scope expansion beyond
  infrastructure definitions? I'd like some more discussion by next week's
  TC meeting.

 When a project files for incubation and covers a new scope, they also
 file for a new program to go with it.


Yes, we've prepared a 'Resource Reservation' program - but it seems to me
that now we should reexamine it, due to the idea of a common program for Gantt
and Climate, and probably Mistral (as Anne said, We also don't have a
workflow, planning, or capacity management program, all of which these use
cases could fall under.)


 Dina Belova wrote:

  I think your idea is really interesting. I mean, that thought Gantt -
  where to schedule, Climate - when to schedule is quite understandable
  and good looking.

 Would Climate also be usable to support functionality like Spot
 Instances? Schedule when the spot price falls under X?


Really good question. Personally I think that Climate might help implement
this feature, but it's probably not the main thing that would work there.

Here are my concerns about it. Spot instances require a way of computing the
instance price:

* that might be done by *online* computation over free capacity. If so, that's
something that might be managed by a billing service - the price computed from
the current load. In this case I can hardly imagine how a lease service might
help - it would only provide some approximate capacity planning for the future

* there is another way - if every instance in the cloud were reserved via
Climate (so there would be full planning). If so, Climate would know for sure
what will be running and when, and price would become a priority mechanism -
leases that have not started would be rescheduled according to it. For example,
if the capacity load at moment X is Y, and a user offers price Z for some VM,
and Z is more than the minimal price computed for moment X, his VM will be
leased at X. If not, a place for the VM will be found later.

Those were some quick thoughts about this idea; I'm pretty sure there are
other possible approaches.


Thanks

Dina


On Wed, Mar 5, 2014 at 7:35 PM, Thierry Carrez thie...@openstack.org wrote:

 Dina Belova wrote:
  I think your idea is really interesting. I mean, that thought Gantt -
  where to schedule, Climate - when to schedule is quite understandable
  and good looking.

 Would Climate also be usable to support functionality like Spot
 Instances ? Schedule when spot price falls under X ?

 --
 Thierry Carrez (ttx)





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-06 Thread Thierry Carrez
Thierry Carrez wrote:
 We just hit feature freeze, so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.
 
 This is also string freeze, so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.
 [...]

And a shameless plug, for those asking what Feature freeze is about and
how Feature freeze exceptions are granted:

http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-06 Thread Sergey Lukjanov
Savanna milestone cut:

http://tarballs.openstack.org/savanna/savanna-milestone-proposed.tar.gz
https://git.openstack.org/cgit/openstack/savanna/log/?h=milestone-proposed

On Thu, Mar 6, 2014 at 1:19 PM, Thierry Carrez thie...@openstack.org wrote:
 Thierry Carrez wrote:
 We just hit feature freeze, so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.

 This is also string freeze, so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.
 [...]

 And a shameless plug, for those asking what Feature freeze is about and
 how Feature freeze exceptions are granted:

 http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/

 --
 Thierry Carrez (ttx)




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-06 Thread Xuhan Peng
Sean, you are right. It doesn't work at all.

So I think the short-term goal is to get that fixed for ICMP, and the
long-term goal is to write an extension, as Amir pointed out?


On Wed, Mar 5, 2014 at 1:55 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 On Tue, Mar 04, 2014 at 12:01:00PM -0500, Brian Haley wrote:
  On 03/03/2014 11:18 AM, Collins, Sean wrote:
   On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
   Currently, only security group rule direction, protocol, ethertype
 and port
   range are supported by neutron security group rule data structure. To
 allow
  
   If I am not mistaken, I believe that when you use the ICMP protocol
   type, you can use the port range specs to limit the type.
  
  
 https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
  
   http://i.imgur.com/3n858Pf.png
  
   I assume we just have to check and see if it applies to ICMPv6?
 
  I tried using horizon to add an icmp type/code rule, and it didn't work.
 
  Before:
 
  -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
 
  After:
 
  -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
  -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
 
  I'd assume I'll have the same error with v6.
 
  I am curious what's actually being done under the hood here now...

  Looks like _port_arg just returns an empty array when the protocol is
 ICMP?


 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L328

 Called by:


 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L292


 --
 Sean M. Collins

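For reference, the gap Matthew points at is that _port_arg emits no arguments
when the protocol is ICMP, so the type/code information is silently dropped
(hence the duplicate RETURN rules Brian saw). A fix along the lines discussed
would map the rule's port_range_min/max onto the iptables ICMP type/code
options. A rough Python sketch of that idea, not the actual Neutron patch:

    def _port_arg(direction, protocol, port_range_min, port_range_max):
        """Build the iptables argument list for one security group rule."""
        if port_range_min is None:
            return []
        if protocol in ('icmp', 'icmpv6'):
            # Reuse the port-range fields as ICMP type and code,
            # e.g. --icmp-type 8/0 matches an echo request.
            flag = '--icmp-type' if protocol == 'icmp' else '--icmpv6-type'
            value = str(port_range_min)
            if port_range_max is not None:
                value += '/%s' % port_range_max
            return [flag, value]
        if port_range_min == port_range_max:
            return ['--%s' % direction, str(port_range_min)]
        return ['--%s' % direction, '%s:%s' % (port_range_min, port_range_max)]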


Re: [openstack-dev] Incubation Request: Murano

2014-03-06 Thread Thierry Carrez
Steven Dake wrote:
 My general take is workflow would fit in the Orchestration program, but
 not be integrated into the heat repo specifically.  It would be a
 different repo, managed by the same Orchestration program, just as we
 have heat-cfntools and other repositories.  Figuring out who the core
 team responsible for a program's individual repositories should be is
 the most difficult aspect of making such a merge.  For example, I'd not
 want a bunch of folks from Murano to +2/+A Heat-specific repos until
 they understood the code base in detail, or at least the broad
 architecture.  I think the same thing applies in reverse from the
 Murano perspective.  Ideally, folks that are core on a specific program
 would need to learn how to broadly review each repo (meaning the Heat
 devs would have to come up to speed on Murano, and the Murano devs would
 have to come up to speed on Heat).  Learning a new code base is a big
 commitment for an already overtaxed core team.

Being in the same program means you share the same team and PTL, not
necessarily that all projects under the program have the same core
review team. So you could have different core reviewers for both
(although I'd encourage the core team for one to become core for the other,
since it will facilitate behaving like a coherent team). You could also
have a single core team with clear expectations set (do not approve
changes for code you're not familiar with).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)

2014-03-06 Thread Thierry Carrez
Michael Still wrote:
 On Thu, Mar 6, 2014 at 12:26 PM, Tian, Shuangtai
 shuangtai.t...@intel.com wrote:

 Hi,

 I would like to make a request for FFE for one patch in novaclient for PCI
 V3 API : https://review.openstack.org/#/c/75324/
 
 [snip]
 
 BTW the PCI Patches in V2 will defer to Juno.
 
 I'm confused. If this isn't landing in v2 in icehouse I'm not sure we
 should do a FFE for v3. I don't think right at this moment we want to
  be encouraging users to use v3, so why does waiting matter?

Yes, the benefit of having this IN the release (rather than early in
tree in Juno) is not really obvious. We already have a significant
number of FFEs lined up for Nova, so I'd be -1 on this one.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread Thierry Carrez
Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.
 
 The feature is completed and has been tested extensively internally. The
 feature is very low risk and has huge value for users. In short, a user
 is able to upload an ISO to Glance and then boot from that ISO.
 
 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov
 
 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
 wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.

Sounds self-contained enough... but we have a lot piled up for Nova
already. I'm +0 on this one; if the Nova PTL wants it and it lands early
enough to limit the distraction, I guess it's fine.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Thierry Carrez
Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE 
 (https://review.openstack.org/#/c/56416/)
 
 This is the last of several patches (already merged) that implement image 
 cache cleanup for the vmware driver.  This patch solves a significant 
 customer pain point as it removes unused images from their datastore.  
 Without this patch their datastore can become unnecessarily full.  In 
 addition to the customer benefit from this patch it
 
 1.  has a turn-off switch
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team
 
 ndipanov has been good enough to say he will review the patch, so we would 
 ask for one additional core sponsor for this FFE.

This one borders on the bug side, so if it merges early enough, I'm +1
on it.

-- 
Thierry Carrez (ttx)



[openstack-dev] [Cinder][FFE] Cinder switch-over to oslo.messaging

2014-03-06 Thread Flavio Percoco

I'd like to request a FFE for the oslo.messaging migration in Cinder.

Some projects have already switched over to oslo.messaging and others are
still doing so. I think we should switch the remaining projects to
oslo.messaging as soon as possible and keep the RPC library in use
consistent throughout OpenStack.

Cinder's patch has been up for review for a couple of weeks already
and it's been kept updated with master. Besides some of the gate
failures we've had in the last couple of weeks, it seems to work as
expected.

As a final note, most of the work on this patch followed the style and
changes done in Nova, for better or for worse.

The review link is: https://review.openstack.org/#/c/71873/

Cheers,
Flavio

--
@flaper87
Flavio Percoco


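For those who haven't followed the migration, the switch replaces the old
copied-in rpc modules with the oslo.messaging API. A minimal sketch of the
new-style server/client pair (illustrative only; the topic, server name, and
method are made up):

    from oslo.config import cfg
    from oslo import messaging

    class VolumeEndpoint(object):
        def create_volume(self, ctxt, size):
            return 'created a %s GB volume' % size

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='demo-volume', server='host1')

    server = messaging.get_rpc_server(transport, target, [VolumeEndpoint()],
                                      executor='blocking')
    # server.start() would begin consuming calls; a client can then invoke:
    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='demo-volume'))
    # client.call({}, 'create_volume', size=10)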


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Zhi Yan Liu
+1! Given the low risk and the usefulness for real cloud deployment.

zhiyan

On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward xar...@gmail.com wrote:
 I'd like to request an FFE for the remaining patches in the Ephemeral
 RBD image support chain

 [...]



Re: [openstack-dev] Climate Incubation Application

2014-03-06 Thread Thierry Carrez
Dina Belova wrote:
 Would Climate also be usable to support functionality like Spot
 Instances ? Schedule when spot price falls under X ?
 
 Really good question. Personally I think that Climate might help
 implementing this feature, but probably it’s not the main thing that
 will work there.
 
 Here are my concerns about it. Spot instances require way of counting
 instance price:
 [...]

Not necessarily. It's a question of whether Climate would handle only
schedule at (a given date), or more generally schedule when (a
certain event happens, with date just being one event type). You can
depend on some external system setting spot prices, or any other
information, and Climate rules would regularly watch that external
information to decide whether it's time to run resources or not. I don't
think it should be Climate's responsibility to specifically maintain a
spot price; everyone can come up with their own rules.

-- 
Thierry Carrez (ttx)

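Thierry's "schedule when" generalization can be pictured as pluggable trigger
rules, with "date reached" being just one rule type. A toy sketch of that
shape; this is hypothetical, not Climate's API:

    import time

    class AtDate(object):
        """Classic 'schedule at': fire once a given timestamp is reached."""
        def __init__(self, when):
            self.when = when

        def should_start(self):
            return time.time() >= self.when

    class WhenBelow(object):
        """'Schedule when': fire when an external metric drops under X,
        e.g. a spot price read from some other system."""
        def __init__(self, read_metric, threshold):
            self.read_metric = read_metric   # callable polling external data
            self.threshold = threshold

        def should_start(self):
            return self.read_metric() < self.threshold

    def poll(leases, start_resources):
        # Periodic task: start any lease whose trigger condition holds.
        for lease in leases:
            if lease['trigger'].should_start():
                start_resources(lease)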


Re: [openstack-dev] [Cinder][FFE] Cinder switch-over to oslo.messaging

2014-03-06 Thread Thierry Carrez
Flavio Percoco wrote:
 I'd like to request a FFE for the oslo.messaging migration in Cinder.
 
 Some projects have already switched over oslo.messaging and others are
 still doing so. I think we should switch remaining projects to
 oslo.messaging as soon as possible and keep the RPC library in use
 consistent throughout OpenStack.
 
 Cinder's patch has been up for review for a couple of weeks already
 and it's been kept updated with master. Besides some of the gate
 failures we've had in the last couple of weeks, it seems to work as
 expected.
 
 As a final note, most of the work on this patch followed the style and
 changes done in Nova, for better or for worse.
 
 The review link is: https://review.openstack.org/#/c/71873/

So on one hand this is a significant change that looks like it could
wait (little direct feature gain). On the other we have oslo.messaging
being adopted in a lot of projects, and we reduce the maintenance
envelope if we switch most projects to it BEFORE release.

This one really boils down to how early it can be merged. If it's done
before the meeting next Tuesday, it's a net gain. If not, it becomes too
much of a distraction from bugfixes for reviewers and any regression it
creates might get overlooked.

-- 
Thierry Carrez (ttx)





[openstack-dev] [Nova] Proposal to merge blueprints that just missed Icehouse-3 in early Juno-1

2014-03-06 Thread John Garbutt
On 5 March 2014 15:02, Russell Bryant rbry...@redhat.com wrote:
 Nova is now feature frozen for the Icehouse release.  Patches for
 blueprints not already merged will need a feature freeze exception (FFE)
 to be considered for Icehouse.

 In addition to evaluating the request in terms of risks and benefits, I
 would like to require that every FFE be sponsored by two members of
 nova-core.  This is to ensure that there are reviewers willing to review
 the code in a timely manner so that we can exclusively focus on bug
 fixes as soon as possible.

To help avoid adding too many FFEs and not getting enough bug fixing done...

I have a proposal to try and get many of the blueprints that just
missed getting into Icehouse merged in early Juno, ideally before the
Summit.

For the interested, here are blueprints that met the proposal deadline
but didn't make Icehouse-3:
* API (v2) blueprints: 8
* VMware: 7
* Scheduler blueprints: 7  (two were partially completed in Icehouse)
* Others: around another 7

Making an effort to get these merged in Juno-1, and ideally before the
summit, seems a fair thing to do.

Once Juno opens, if submitters get their blueprint patches rebased and
ready to review by two weeks before the summit, I propose we try to
give them (where possible, and where it makes sense) at least medium
priority, at least until after the summit.

If we get too many takers, that might need some refinement. However,
looking at them, they all appear to be features that our users would
really benefit from.

This probably means all non-top-priority items would then get low
priority in Juno-1. Currently, tasks (at least the move-to-conductor
parts), the scheduler split, and objects seem like they will be the
other high-priority items for Juno-1.

This is all very rough, and subject to massive post-summit change, but
looking at the ones with their priority set gives a rough idea of
what Juno-1 might look like:
https://launchpad.net/nova/+milestone/next

Its just an idea. What do you all think?

John



[openstack-dev] [GSoC 2014] Proposal Template

2014-03-06 Thread Masaru Nomura
Dear mentors and students,

Hi,

After a short talk with dims, I created an application template wiki
page [1]. Obviously, this is not a completed version, and I'd like your
opinions on how to improve it. :)

I have:
1) simply added information such as:

   ・Personal Details (e.g. Name, Email, University and so on)

   ・Project Proposal (e.g. Project, idea, implementation issues, and time
scheduling)

   ・Background (e.g. Open source, academic or intern experience, or
language experience)

2) linked this page on GSoC 2014 wiki page[2]
3) created an example of my proposal page [3] (not completed yet!)
4) linked the example to an Oslo project page[4]


Thank you,
Masaru

[1] https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
[2] https://wiki.openstack.org/wiki/GSoC2014#Communication
[3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
[4]
https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals


Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-06 Thread Flavio Percoco


(top-post)

Marconi links below:

Tarball:
http://tarballs.openstack.org/marconi/marconi-milestone-proposed.tar.gz

And milestone-proposed branch:
https://github.com/openstack/marconi/tree/milestone-proposed

Cheers
Flavio

On 05/03/14 21:46 +0100, Thierry Carrez wrote:

Hi everyone,

We just hit feature freeze, so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze, so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Milestone-proposed branches were created for Horizon, Keystone, Glance,
Nova, Neutron, Cinder, Heat and Trove in preparation for the
icehouse-3 milestone publication tomorrow.

Ceilometer should follow in an hour.

You can find candidate tarballs at:
http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/horizon/tree/milestone-proposed
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/glance/tree/milestone-proposed
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed
https://github.com/openstack/cinder/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed
https://github.com/openstack/trove/tree/milestone-proposed

Regards,

--
Thierry Carrez (ttx)



--
@flaper87
Flavio Percoco




[openstack-dev] AUTO: Avishay Traeger is prepared for DELETION (FREEZE) (returning 05/12/2013)

2014-03-06 Thread Avishay Traeger

I am out of the office until 05/12/2013.

Avishay Traeger is prepared for DELETION (FREEZE)


Note: This is an automated response to your message  Re: [openstack-dev]
[Cinder][FFE] Cinder switch-over to oslo.messaging sent on 06/03/2014
12:50:51.

This is the only notification you will receive while this person is away.




Re: [openstack-dev] AUTO: Avishay Traeger is prepared for DELETION (FREEZE) (returning 05/12/2013)

2014-03-06 Thread Thierry Carrez
Avishay Traeger wrote:
 
 Avishay Traeger is prepared for DELETION (FREEZE)

Wow. Scary.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Climate Incubation Application

2014-03-06 Thread Sylvain Bauza
Hi Thierry,


2014-03-06 11:46 GMT+01:00 Thierry Carrez thie...@openstack.org:

 Dina Belova wrote:
  Would Climate also be usable to support functionality like Spot
  Instances ? Schedule when spot price falls under X ?
 
  Really good question. Personally I think that Climate might help
  implementing this feature, but probably it's not the main thing that
  will work there.
 
  Here are my concerns about it. Spot instances require way of counting
  instance price:
  [...]

 Not necessarily. It's a question of whether Climate would handle only
 'schedule at' (a given date), or more generally 'schedule when' (a
 certain event happens, with a date being just one event type). You can
 depend on some external system setting spot prices, or any other
 information, and Climate rules that would regularly watch that external
 information to decide if it's time to run resources or not. I don't
 think it should be Climate's responsibility to specifically maintain
 spot prices; everyone can come up with their own rules.



I can't agree more on this. The goal of Climate is to provide a formal
contract agreement between a user and the Reservation service, ensuring
that the order will be placed and served correctly (with regard to
quotas and capacity). Of course, what we call a 'user' doesn't
necessarily have to be a 'real' user.
About the spot-instances use case, I don't pretend to have designed it,
but I could easily imagine that a call to Nova for booting an instance
would place an order with Climate under a specific type of contract
(what we began to call 'best-effort', which has yet to be implemented),
where the notifications acknowledging the order would come from
Ceilometer (for instance). If no notifications reach Climate, the lease
would not be honored.

See https://wiki.openstack.org/wiki/Climate#Lease_types_.28concepts.29 for
best-effort definition of a lease.
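
Just to illustrate the 'schedule when' idea in code, here is a very rough
sketch of a polling rule (every name below is hypothetical; nothing like
this exists in Climate today):

    import time

    def start_lease(lease):
        # stand-in for whatever really honors the lease
        print("starting lease %s" % lease)

    def get_spot_price():
        # would really query an external pricing system
        return 0.04

    class WhenRule(object):
        """Run a lease once an external condition becomes true."""
        def __init__(self, condition, lease):
            self.condition = condition  # any callable returning a bool
            self.lease = lease
            self.fired = False

        def poll(self):
            # called periodically by the reservation service
            if not self.fired and self.condition():
                self.fired = True
                start_lease(self.lease)

    start_ts = time.time() + 3600
    # a date is then just one kind of condition...
    date_rule = WhenRule(lambda: time.time() >= start_ts, "lease-1")
    # ...and a spot-price rule is another, fed by an external system
    spot_rule = WhenRule(lambda: get_spot_price() < 0.05, "lease-2")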

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Sebastien Han
Big +1 on this.
Missing such support would make the implementation useless.

Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance

On 06 Mar 2014, at 11:44, Zhi Yan Liu lzy@gmail.com wrote:

 +1! Given the low risk and the usefulness for real cloud
 deployments.
 
 zhiyan
 
 On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward xar...@gmail.com wrote:
 I'd like to request an FFE for the remaining patches in the Ephemeral
 RBD image support chain
 
 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/
 
 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.
 
 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.
 
 This is needed as it implements essential functionality that has
 been missing in the RBD driver, and this will be the second release
 in which merging it has been attempted.
 
 Andrew
 Mirantis
 Ceph Community
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to merge blueprints that just missed Icehouse-3 in early Juno-1

2014-03-06 Thread John Garbutt
On 6 March 2014 10:51, John Garbutt j...@johngarbutt.com wrote:
 On 5 March 2014 15:02, Russell Bryant rbry...@redhat.com wrote:
 Nova is now feature frozen for the Icehouse release.  Patches for
 blueprints not already merged will need a feature freeze exception (FFE)
 to be considered for Icehouse.

 In addition to evaluating the request in terms of risks and benefits, I
 would like to require that every FFE be sponsored by two members of
 nova-core.  This is to ensure that there are reviewers willing to review
 the code in a timely manner so that we can exclusively focus on bug
 fixes as soon as possible.

 To help avoid adding too many FFEs and not getting enough bug fixing done...

 I have a proposal to try and get many of the blueprints that just
 missed getting into Icehouse merged in early Juno, ideally before the
 Summit.

 For the interested, here are blueprints that met the proposal deadline
 but didn't make Icehouse-3:
 * API (v2) blueprints: 8
 * VMware: 7
 * Scheduler blueprints: 7  (two were partially completed in Icehouse)
 * Others: around another 7

 Making an effort to get these merged in Juno-1, and ideally before the
 summit, seems a fair thing to do.

 Once Juno opens, if submitters get their blueprint patches rebased and
 ready to review by two weeks before the summit, I propose we try to
 give them (where possible, and where it makes sense) at least medium
 priority, at least until after the summit.

 If we get too many takers, that might need some refinement. However,
 looking at them, they all appear to be features that our users would
 really benefit from.

 This probably means, all non-top priority items would then get low
 priority in Juno-1. Currently tasks (at least the move to conductor
 parts), the scheduler split and objects, seem like they will be the
 other high priority items for Juno-1.

My bad, API work is clearly in that top priority list.

 This is all very rough, and subject to massive post-summit change, but
 looking at the ones with their priority set, gives a rough idea of
 what Juno-1 might look like:
 https://launchpad.net/nova/+milestone/next

 It's just an idea. What do you all think?

 John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-06 Thread Dina Belova
Sylvain, I love your idea. As you said, it still has to be designed, but at
first sight your proposal looks quite nice.


On Thu, Mar 6, 2014 at 3:11 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 Hi Thierry,


 2014-03-06 11:46 GMT+01:00 Thierry Carrez thie...@openstack.org:

 Dina Belova wrote:
  Would Climate also be usable to support functionality like Spot
  Instances? Schedule when the spot price falls under X?
 
  Really good question. Personally I think that Climate might help
  implement this feature, but it's probably not the main thing that
  will work there.
 
  Here are my concerns about it. Spot instances require a way of
  counting the instance price:
  [...]

 Not necessarily. It's a question of whether Climate would handle only
 'schedule at' (a given date), or more generally 'schedule when' (a
 certain event happens, with a date being just one event type). You can
 depend on some external system setting spot prices, or any other
 information, and Climate rules that would regularly watch that external
 information to decide if it's time to run resources or not. I don't
 think it should be Climate's responsibility to specifically maintain
 spot prices; everyone can come up with their own rules.



 I can't agree more on this. The goal of Climate is to provide a formal
 contract agreement between a user and the Reservation service, ensuring
 that the order will be placed and served correctly (with regard to quotas
 and capacity). Of course, what we call a 'user' doesn't necessarily have
 to be a 'real' user.
 About the spot-instances use case, I don't pretend to have designed it,
 but I could easily imagine that a call to Nova for booting an instance
 would place an order with Climate under a specific type of contract (what
 we began to call 'best-effort', which has yet to be implemented), where
 the notifications acknowledging the order would come from Ceilometer (for
 instance). If no notifications reach Climate, the lease would not be
 honored.

 See https://wiki.openstack.org/wiki/Climate#Lease_types_.28concepts.29 for
 best-effort definition of a lease.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-06 Thread Ana Krivokapic

On 03/06/2014 04:47 AM, Jason Rist wrote:
 On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote:
 I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his 
 reviews very insightful and more importantly have come to rely on their 
 quality. He has contributed to several areas in Horizon and he understands 
 the code base well.  Radomir is also very active in tuskar-ui both 
 contributing and reviewing.

 David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 As someone who benefits from his insightful reviews, I second the 
 nomination.
+1

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Regards,

Ana Krivokapic
Associate Software Engineer
OpenStack team
Red Hat Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to merge blueprints that just missed Icehouse-3 in early Juno-1

2014-03-06 Thread Sean Dague
On 03/06/2014 05:51 AM, John Garbutt wrote:
 On 5 March 2014 15:02, Russell Bryant rbry...@redhat.com wrote:
 Nova is now feature frozen for the Icehouse release.  Patches for
 blueprints not already merged will need a feature freeze exception (FFE)
 to be considered for Icehouse.

 In addition to evaluating the request in terms of risks and benefits, I
 would like to require that every FFE be sponsored by two members of
 nova-core.  This is to ensure that there are reviewers willing to review
 the code in a timely manner so that we can exclusively focus on bug
 fixes as soon as possible.
 
 To help avoid adding too many FFEs and not getting enough bug fixing done...
 
 I have a proposal to try and get many of the blueprints that just
 missed getting into Icehouse merged in early Juno, ideally before the
 Summit.
 
 For the interested, here are blueprints that met the proposal deadline
 but didn't make Icehouse-3:
 * API (v2) blueprints: 8
 * VMware: 7
 * Scheduler blueprints: 7  (two were partially completed in Icehouse)
 * Others: around another 7
 
 Making an effort to get these merged in Juno-1, and ideally before the
 summit, seems a fair thing to do.
 
 Once Juno opens, if submitters get their blueprint patches rebased and
 ready to review by two weeks before the summit, I propose we try to
 give them (where possible, and where it makes sense) at least medium
 priority, at least until after the summit.
 
 If we get too many takers, that might need some refinement. However,
 looking at them, they all appear to be features that our users would
 really benefit from.
 
 This probably means, all non-top priority items would then get low
 priority in Juno-1. Currently tasks (at least the move to conductor
 parts), the scheduler split and objects, seem like they will be the
 other high priority items for Juno-1.
 
 This is all very rough, and subject to massive post-summit change, but
 looking at the ones with their priority set, gives a rough idea of
 what Juno-1 might look like:
 https://launchpad.net/nova/+milestone/next
 
 It's just an idea. What do you all think?

I think it's generally a good approach. I'll just caution that most of
the core review team is burnt out hard by the end of the release, and
really does need a breather. There is plenty of prep that needs to
happen to summit, and it provides a pretty solid recharge.

In the past, I think the biggest reason blueprints missed two cycles in a
row is that their authors weren't working on them until milestone 2. So
anything that's open early and ready to go should make it in Juno.

That being said, importance isn't a FIFO. And with limited review time, I
question doing a ton of scheduler blueprints at the same time we're talking
about a scheduler split, because it seems like it makes both things slower
to try them at the same time (as there are a limited number of people
here). So I think at least in this case the scheduler really should be
split then improved, or improved then split, and everyone interested in
either side of that equation needs to be helping on both sides.

Otherwise I think we'll see another do-over on the split again at the
end of the Juno cycle.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-06 Thread John Garbutt
On 5 March 2014 03:44, Christopher Yeoh cbky...@gmail.com wrote:
 But this plan is certainly something I'm happy to support.

+1

On 5 March 2014 03:44, Christopher Yeoh cbky...@gmail.com wrote:
 So I think this a good compromise to keep things moving. Some aspects
 that we'll need to consider:

 - We need more tempest coverage of Nova because it doesn't cover all of
   the Nova API yet. We've been working on increasing this as part of
   the V3 API work anyway (and V2 support is an easyish side effect).
   But more people willing to write tempest tests are always welcome :-)

+1

This seems key to making sure we do any v2 compatibility right.

 - I think in practice this will probably mean that V3 API is
   realistically only a K rather than J thing - just in terms of allowing
   a reasonable timeline to not only implement the v2 compat but get
   feedback from deployers.

+1

One extra thing we need to consider: how do we deal with new APIs while
we go through this transition?

I don't really have any answers to hand, but I want v3 to get released
ASAP, and I want people to easily add API features to Nova. If the
proxy is ready early, maybe we could have people implement only v3
extensions, then optionally add a v2 extension that just wraps the
v2 proxy + v3 extensions.

 - I'm not sure how this affects how we approach the tasks work. Will
   need to think about that more.

+1

It's a thread of its own, but I think improving instance-actions, still
leaving tasks for v3, might be the way forward.

While we need to avoid feature creep, I still think if we add tasks
into v3 in the right way, it could be what makes people move to v3.


One more thing... I wonder if all new extensions should be considered
experimental (or version 0) for at least one cycle. In theory, that
should help us avoid some of the worst mistakes when adding new APIs.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSoC 2014] Proposal Template

2014-03-06 Thread Davanum Srinivas
Thanks Masaru. I'd encourage other applicants to add their own
details and get their potential mentors to review their page.

thanks,
dims

On Thu, Mar 6, 2014 at 5:58 AM, Masaru Nomura massa.nom...@gmail.com wrote:
 Dear mentors and students,

 Hi,

 After a short talk with dims, I created an application template wiki
 page[1]. Obviously, this is not a complete version yet, and I'd like your
 opinions on how to improve it. :)

 I have:
 1) simply added information such as:

・Personal Details (e.g. Name, Email, University and so on)

・Project Proposal (e.g. Project, idea, implementation issues, and time
 scheduling)

・Background (e.g. Open source, academic or intern experience, or language
 experience)


 2) linked this page on GSoC 2014 wiki page[2]
 3) created an example of my proposal page [3] (not completed yet!)
 4) linked the example to an Oslo project page[4]


 Thank you,
 Masaru

 [1] https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
 [2] https://wiki.openstack.org/wiki/GSoC2014#Communication
 [3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
 [4]
 https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to merge blueprints that just missed Icehouse-3 in early Juno-1

2014-03-06 Thread John Garbutt
On 6 March 2014 11:31, Sean Dague s...@dague.net wrote:
 On 03/06/2014 05:51 AM, John Garbutt wrote:
 On 5 March 2014 15:02, Russell Bryant rbry...@redhat.com wrote:
 Nova is now feature frozen for the Icehouse release.  Patches for
 blueprints not already merged will need a feature freeze exception (FFE)
 to be considered for Icehouse.

 In addition to evaluating the request in terms of risks and benefits, I
 would like to require that every FFE be sponsored by two members of
 nova-core.  This is to ensure that there are reviewers willing to review
 the code in a timely manner so that we can exclusively focus on bug
 fixes as soon as possible.

 To help avoid adding too many FFEs and not getting enough bug fixing done...

 I have a proposal to try and get many of the blueprints that just
 missed getting into Icehouse merged in early Juno, ideally before the
 Summit.

 For the interested, here are blueprints that met the proposal deadline
 but didn't make Icehouse-3:
 * API (v2) blueprints: 8
 * VMware: 7
 * Scheduler blueprints: 7  (two were partially completed in Icehouse)
 * Others: around another 7

 Making an effort to get these merged in Juno-1, and ideally before the
 summit, seems a fair thing to do.

 Once Juno opens, if submitters get their blueprint patches rebased and
 ready to review by two weeks before the summit, I propose we try to
 give them (where possible, and where it makes sense) at least medium
 priority, at least until after the summit.

 If we get too many takers, that might need some refinement. However,
 looking at them, they all appear to be features that our users would
 really benefit from.

 This probably means, all non-top priority items would then get low
 priority in Juno-1. Currently tasks (at least the move to conductor
 parts), the scheduler split and objects, seem like they will be the
 other high priority items for Juno-1.

 This is all very rough, and subject to massive post-summit change, but
 looking at the ones with their priority set, gives a rough idea of
 what Juno-1 might look like:
 https://launchpad.net/nova/+milestone/next

 It's just an idea. What do you all think?

 I think it's generally a good approach. I'll just caution that most of
 the core review team is burnt out hard by the end of the release, and
 really does need a breather. There is plenty of prep that needs to
 happen to summit, and it provides a pretty solid recharge.

True. It's probably unrealistic to get them merged before the summit.
But I still think it's worth considering for Juno-1.

 In the past I think the biggest issue you saw blueprints miss two cycles
 in a row is their authors weren't working on them until milestone 2. So
 anything that's open and ready to go in Juno.

True.

Some had to split up their patches, so they were really only ready for
review in March.

But some have been open much longer than that. It's just that they kept
being at the bottom of the queue. I feel we need something to stop the
priority inversion.

 That being said, importance isn't a FIFO.

Agreed, but it seems most of them are quite useful, and a lot have
already been through a few review cycles.

 And with limited review time I
 question a ton of scheduler blueprints at the same time we're talking
 about a scheduler split. Because like it seems like it makes both things
 slower to try them at the same time (as there are a limited number of
 people here). So I think at least in this case scheduler really should
 be split then improve, or improve then split, and everyone interested in
 either side of that equation needs to be helping on both sides.
 Otherwise I think we'll see another do-over on the split again at the
 end of the Juno cycle.

Given the discussion at the mid-cycle meet-up, I got the impression we are
planning on splitting out the code more in Nova, then re-generating the
gantt tree, using the lessons learnt from the current efforts to speed
things up after the next regeneration.

Most of the scheduler blueprints don't look to impact the edges that
are likely to change the most, but I could be totally misjudging that.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread John Garbutt
On 6 March 2014 08:04, Nikola Đipanov ndipa...@redhat.com wrote:
 On 03/05/2014 08:00 PM, Russell Bryant wrote:
 On 03/05/2014 10:34 AM, Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.

 The feature is completed and has been tested extensively internally. The
 feature is very low risk and has huge value for users. In short, a user
 is able to upload an ISO to glance and then boot from that ISO.

 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov

 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
 wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.

 John and Nikola, can you confirm your sponsorship of this one?


 Yeah - I'll review this.

 This one is actually almost ready except that everyone who reviewed it
 hates the code it touches (and makes slightly worse). That said - I
 have every reason to believe the VMWare team will coordinate their
 efforts around making it better in Juno, and the feature is useful and
 low risk.

I am happy to review this, since it was so close.

But I do worry about having too many FFEs, so we get distracted from bugs.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread John Garbutt
On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
 It seems to be an interesting idea. In fact, a China-based public IaaS, 
 QingCloud, has provided a similar feature
 to their virtual servers. Within 2 hours after a virtual server is deleted, 
 the server owner can decide whether
 or not to cancel this deletion and recycle that deleted virtual server.

 People make mistakes, while such a feature helps in urgent cases. Any idea 
 here?

Nova has soft_delete and restore for servers. That sounds similar?
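
From memory, so treat this as a sketch rather than gospel: deferred delete
is off by default and is controlled by the reclaim_instance_interval option
in nova.conf, and the credentials below are just placeholders:

    # nova.conf on the computes: any value > 0 enables deferred delete,
    # i.e. how many seconds soft-deleted instances are kept around:
    #   reclaim_instance_interval = 7200

    from novaclient.v1_1 import client

    USER, PASSWORD, TENANT = "admin", "secret", "demo"
    AUTH_URL = "http://keystone:5000/v2.0"

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)
    server = nova.servers.find(name="my-vm")
    nova.servers.delete(server)    # becomes a soft delete
    nova.servers.restore(server)   # bring it back within the window
    # nova.servers.force_delete(server) would reclaim it immediately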

John


 -Original Message-
 From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
 Sent: Thursday, March 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

 Hi all,

 Currently OpenStack provides a delete volume function to the user.
 But it seems there is no protection against accidental delete operations.

 As we know, the data in a volume may be very important and valuable.
 So it would be better to provide a way for the user to avoid accidental
 volume deletion.

 Such as:
 We can provide a safe delete for the volume.
 The user can specify how long the volume's deletion will be delayed
 (before it is actually deleted) when deleting the volume.
 Before the volume is actually deleted, the user can cancel the delete
 operation and get the volume back.
 After the specified time, the volume will be actually deleted by the system.

 Any thoughts? Any advice is welcome.

 Best regards to you.


 --
 zhangleiqiang

 Best Regards



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-06 Thread John Garbutt
On 3 March 2014 19:33, David Peraza david_per...@persistentsys.com wrote:
 Thanks John,

 What I'm trying to do is run an asynchronous task that pre-organizes the
 target hosts for an image. The scheduler then only needs to read the top of
 the list or priority queue. We have a paper proposed for the summit that
 will explain the approach; hopefully it gets accepted so we can have a
 conversation on this at the summit. I suspect the DB overhead will go away
 if we try our approach. It's still theory though, which is why I want a
 sizeable test environment to get a better feel for the performance.

I attempted something similar as part of the caching scheduler work.

When picking the size of the slot cache, I found I got the best
performance when I turned it off. Small bursts of builds were slightly
quicker, but would get delayed if they came in when the cache was
being populated. Large bursts of requests very quickly depleted the
cache, and filling it back up was quite expensive, and you queue up
other requests while you do that. So choosing the cache size was very
tricky. All the time, you end up making some bad choices because you
are only looking at a subset of the nodes.

I am however very interested in seeing if you have found a balance
that works well. It feels like some combination would help in certain
situations. I just couldn't find either myself.

My current approach is just to cache the lists of hosts you get from
the DB, and update the host state with each decision you make, so
those requests don't race each other.
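
To make that concrete, a stripped-down sketch of the idea (field names
invented, and only RAM considered):

    class HostState(object):
        def __init__(self, host, free_ram_mb):
            self.host = host
            self.free_ram_mb = free_ram_mb

    class CachingScheduler(object):
        def __init__(self, host_ram_pairs):
            # one DB hit builds the cache; refresh it periodically
            self.hosts = [HostState(h, r) for h, r in host_ram_pairs]

        def pick_host(self, flavor_ram_mb):
            fits = [h for h in self.hosts if h.free_ram_mb >= flavor_ram_mb]
            best = max(fits, key=lambda h: h.free_ram_mb)
            # record the decision locally, so parallel requests don't
            # all pile onto the same host between DB refreshes
            best.free_ram_mb -= flavor_ram_mb
            return best.host

    s = CachingScheduler([("node1", 8192), ("node2", 16384)])
    print(s.pick_host(4096))  # node2
    print(s.pick_host(4096))  # node2 again, now down to 8192 MB free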

Some simple optimisations to the filter and weights system seemed to
be a much better route to improving the performance. (I had some
patches up for that, will refresh them when Juno opens).

But until we get the move to conductor work complete (using select
destination instead of run_instance), the DB calls locking all the
eventlet threads seem like the biggest issue.

Anyways, looking forward to a good discussion at the summit.

John


 Regards,
 David Peraza

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Tuesday, February 25, 2014 5:45 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes 
 for scheduler testing

 On 24 February 2014 20:13, David Peraza david_per...@persistentsys.com 
 wrote:
 Thanks John,

 I also think it is a good idea to test the algorithm at unit test level, but 
 I will like to try out over amqp as well, that is, we process and threads 
 I also think it is a good idea to test the algorithm at the unit-test
 level, but I would like to try it out over AMQP as well, that is, with
 processes and threads talking to each other over rabbit or qpid. I'm
 trying to test performance as well.
 Nothing beats testing the thing for real, of course.

 As a heads up, the overheads of DB calls turned out to dwarf any algorithmic 
 improvements I managed. There will clearly be some RPC overhead, but it 
 didn't stand out as much as the DB issue.

 The move to conductor work should certainly stop the scheduler making those 
 pesky DB calls to update the nova instance. And then, improvements like 
 no-db-scheduler and improvements to scheduling algorithms should shine 
 through much more.

 Thanks,
 John


 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, February 24, 2014 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
 nodes for scheduler testing

 On 24 February 2014 16:24, David Peraza david_per...@persistentsys.com 
 wrote:
 Hello all,

 I have been trying some new ideas on the scheduler and I think I'm
 reaching a resource issue. I'm running 6 compute services right on my
 4 CPU / 4 GB VM, and I started to get some memory allocation issues.
 Keystone and Nova are already complaining there is not enough memory.
 The obvious solution to add more candidates is to get another VM and set
 up another 6 fake compute services.
 I could do that, but I think I need to be able to scale more without
 using this many resources. I would like to simulate a cloud of 100,
 maybe 1000, compute nodes that do nothing (Fake driver); this should not
 take this much memory. Does anyone know of a more efficient way to
 simulate many computes? I was thinking of changing the Fake driver to
 report many compute services in different threads instead of having to
 spawn a process per compute service. Any other ideas?

 It depends what you want to test, but I was able to look at tuning the 
 filters and weights using the test at the end of this file:
 https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_ca
 ching_scheduler.py

 Cheers,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread John Griffith
On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com wrote:

 On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public IaaS,
 QingCloud, has provided a similar feature
  to their virtual servers. Within 2 hours after a virtual server is
 deleted, the server owner can decide whether
  or not to cancel this deletion and recycle that deleted virtual
 server.
 
  People make mistakes, while such a feature helps in urgent cases. Any
 idea here?

 Nova has soft_delete and restore for servers. That sounds similar?

 John

 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection
 
  Hi all,
 
  Currently OpenStack provides a delete volume function to the user.
  But it seems there is no protection against accidental delete operations.
 
  As we know, the data in a volume may be very important and valuable.
  So it would be better to provide a way for the user to avoid accidental
 volume deletion.
 
  Such as:
  We can provide a safe delete for the volume.
  The user can specify how long the volume's deletion will be delayed
 (before it is actually deleted) when deleting the volume.
  Before the volume is actually deleted, the user can cancel the delete
 operation and get the volume back.
  After the specified time, the volume will be actually deleted by the
 system.
 
  Any thoughts? Any advice is welcome.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think a soft-delete for Cinder sounds like a neat idea.  You should file
a BP that we can target for Juno.
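
For illustration, a minimal sketch of what delayed deletion could look like
on the volume manager side (all names hypothetical, not existing Cinder
code):

    import time

    class SoftDeleteMixin(object):
        """Hypothetical sketch of delayed volume deletion."""

        def delete_volume(self, volume, delay=7200):
            # mark the volume instead of purging it right away
            volume['status'] = 'soft-deleted'
            volume['purge_at'] = time.time() + delay

        def restore_volume(self, volume):
            if volume['status'] == 'soft-deleted':
                volume['status'] = 'available'
                volume['purge_at'] = None

        def purge_expired(self, volumes):
            # would run as a periodic task on the volume manager
            for vol in volumes:
                if (vol['status'] == 'soft-deleted'
                        and time.time() >= vol['purge_at']):
                    self._really_delete(vol)

        def _really_delete(self, volume):
            volume['status'] = 'deleted'  # stand-in for the real purge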

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-06 Thread Sampath Priyankara
Hi Gordon,

Thanks for the info and reply.

 your interest purely to see status or were you looking to work on it? ;)

My interest lies in how to evaluate a large number of notifications within a
short time.

I thought moving alarms into the pipelines would be a good start.
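
Just to sketch what I mean (names are hypothetical, loosely modelled on the
existing transformer interface):

    class InlineThresholdAlarm(object):
        """Evaluate a threshold rule as samples flow through the
        pipeline, instead of polling the samples API afterwards."""

        def __init__(self, meter, threshold, notify):
            self.meter = meter
            self.threshold = threshold
            self.notify = notify  # callable that fires the alarm action

        def handle_sample(self, context, sample):
            if sample.name == self.meter and sample.volume > self.threshold:
                self.notify(sample)
            return sample  # pass the sample on down the pipeline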


From: Gordon Chung [mailto:chu...@ca.ibm.com] 
Sent: Wednesday, March 05, 2014 6:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements


hi Sampath 

tbh, i actually like the pipeline solution proposed in the blueprint... that
said, there hasn't been any work done relating to this in Icehouse. there
was work on adding alarms on notifications
(https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification)
but that has been pushed. i'd be interested in discussing adding alarms to
the pipeline and its pros/cons vs the current implementation.

  https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
  Is there any further discussion about [Part 4 - Moving Alarms into the
 Pipelines] in the above doc?
is the pipeline alarm design attached to a blueprint? also, is your interest
purely to see status or were you looking to work on it? ;) 

cheers,
gordon chung
openstack, ibm software standards

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [heat] [neutron] - Status of Heat and Neutron tempest blueprints?

2014-03-06 Thread Sean Dague
We're at Freeze, so I want to pick up and understand where we currently
stand with both Neutron and Heat actually getting tested fully in the gate.

First Neutron -
https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full


We know that this is *close* as the full job is running non voting
everywhere, and typically passing. How close are we? Or should we be
deferring this until Juno (which would be unfortunate)?

Second Heat -
https://blueprints.launchpad.net/tempest/+spec/tempest-heat-integration

The Heat tests that are in a normal Tempest job are relatively trivial
surface verification, and in no way actually make sure that Heat is
operating at a real level. This fact is a contributing factor to why
Heat was broken in i2.

The first real verification for Heat is in the Heat slow job (which we
created to give Heat a separate time budget, because doing work that
requires real guests takes time).

The heat slow job looks like it is finally passing much of the time -
http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWhlYXQtc2xvdyIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDEwOTg3NDQ4OH0=

It's seeing a 78% pass rate in check. Can anyone in the Heat team
confirm that the failures in this job are actually real failures on
patches that should have been blocked?

I'd like to get that turned on (and on all the projects) as soon as the
Heat team is confident on it so that Heat actually participates in the
tempest/devstack gate in a material way and we can prevent future issues
where a keystone, nova, neutron or whatever change would break Heat in git.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-06 Thread Christopher Yeoh
On Thu, Mar 6, 2014 at 8:35 AM, Russell Bryant rbry...@redhat.com wrote:

 On 03/05/2014 04:52 PM, Everett Toews wrote:
  On Mar 5, 2014, at 8:16 AM, Russell Bryant rbry...@redhat.com wrote:
 
  I think SDK support is critical for the success of v3 long term.  I
  expect most people are using the APIs through one of the major SDKs, so
  v3 won't take off until that happens.  I think our top priority in Nova
  to help ensure this happens is to provide top notch documentation on the
  v3 API, as well as all of the differences between v2 and v3.
 
  Yes. Thank you.
 
  And the earlier we can see the first parts of this documentation, both
 the differences between v2 and v3 and the final version, the better. If we
 can give you feedback on early versions of the docs, the whole thing will
 go much more smoothly.
 
  You can find us in #openstack-sdks on IRC.

 Sounds good.

 I know there is at least this wiki page started to document the changes,
 but I believe there is still a lot missing.

 https://wiki.openstack.org/wiki/NovaAPIv2tov3


Yes, unfortunately we realised a little late in development just how
important a document like this would be, so it's not as complete as I'd
like. It doesn't, for example, address input validation changes. But it is
something that definitely needs to be 100% complete, and it is getting
better as the tempest testing side fills out.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [heat] [neutron] - Status of Heat and Neutron tempest blueprints?

2014-03-06 Thread Steven Hardy
On Thu, Mar 06, 2014 at 07:53:03AM -0500, Sean Dague wrote:
 We're at Freeze, so I want to pick up and understand where we currently
 stand with both Neutron and Heat actually getting tested fully in the gate.
 
 First Neutron -
 https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full
 
 
 We know that this is *close* as the full job is running non voting
 everywhere, and typically passing. How close are we? Or should we be
 deferring this until Juno (which would be unfortunate)?

Can you please clarify - does FF apply to tempest *tests*?  My assumption
was that we could move from feature development to testing during the FF/RC
phase of the cycle, the natural by-product of which will be additional
tempest testcases...

 Second Heat -
 https://blueprints.launchpad.net/tempest/+spec/tempest-heat-integration
 
 The Heat tests that are in a normal Tempest job are relatively trivial
 surface verification, and in no way actually make sure that Heat is
 operating at a real level. This fact is a contributing factor to why
 Heat was broken in i2.

Yeah, we have a list of wishlist bugs so we can track the additional
less-trivial tests we'd like to implement:

https://bugs.launchpad.net/heat/+bugs?field.tag=tempest

I was hoping we'd get patches at least proposed for most of these before
Icehouse ships, but we'll have to see how it goes... :)

 The first real verification for Heat is in the Heat slow job (which we
 created to give Heat a separate time budget, because doing work that
 requires real guests takes time).
 
 The heat slow job looks like it is finally passing much of the time -
 http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWhlYXQtc2xvdyIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDEwOTg3NDQ4OH0=
 
 It's seeing a 78% pass rate in check. Can anyone in the Heat team
 confirm that the Failures in this job are actually real failures on
 patches that should have been blocked?
 
 I'd like to get that turned on (and on all the projects) as soon as the
 Heat team is confident on it so that Heat actually participates in the
 tempest/devstack gate in a material way and we can prevent future issues
 where a keystone, nova, neutron or whatever change would break Heat in git.

Steve Baker can make the final decision, but I am +2 on turning it on -
during the process of getting my instance-users patches through the gate,
this test consistently found real issues with no false positives (other
than those caused by other known bugs).

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-06 Thread Christopher Yeoh
On Thu, Mar 6, 2014 at 10:24 PM, John Garbutt j...@johngarbutt.com wrote:

 On 5 March 2014 03:44, Christopher Yeoh cbky...@gmail.com wrote:
  But this plan is certainly something I'm happy to support.

 One extra thing we need to consider: how do we deal with new APIs while
 we go through this transition?

 I don't really have any answers to hand, but I want v3 to get released
 ASAP, and I want people to easily add API features to Nova. If the
 proxy is ready early, maybe we could have people implement only v3
 extensions, then optionally add a v2 extension that just wraps the
 v2 proxy + v3 extensions.


So pretty much any extension which is added now (or has gone in during
Icehouse) should offer an API which is exactly the same. There's no excuse
for divergence, so what you suggest is most likely quite doable. We might
not even need a proxy in some cases to make it available in the v2
namespace.
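
To illustrate with a toy example (class names invented, not real Nova
code), the shim for a brand new extension could be as thin as:

    class FooV3Controller(object):
        def index(self, req):
            return {'foos': []}

    class FooV2Controller(object):
        """Expose a v3-only extension in the v2 namespace by
        delegating straight to the v3 controller."""

        def __init__(self):
            self._v3 = FooV3Controller()

        def index(self, req):
            # request/response formats are identical for new
            # extensions, so no translation is needed here
            return self._v3.index(req)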



  - I'm not sure how this affects how we approach the tasks work. Will
need to think about that more.

 +1

 Its a thread of its own, but I think improving instance-actions, still
 leaving tasks for v3, might be the way forward.

 While we need to avoid feature creep, I still think if we add tasks
 into v3 in the right way, it could be what makes people move to v3.


Yea we really need to flesh out what we want from tasks long term.


 One more thing... I wonder if all new extensions should be considered
 experimental (or version 0) for at least one cycle. In theory, that
 should help us avoid some of the worst mistakes when adding new APIs.


Yes, I think this is a similar suggestion to having extensions first drop
into a holding area outside of Nova. The whole freeze-deadline rush is a
recipe for making compromises around the API that we don't want to live
with for the long term, but we make them because we want a feature to
merge soon. But I think whatever approach we take, it needs to come with
the possibility that if an extension is not fixed up in a reasonable time,
it may get removed, so we don't end up with a large pool of experimental
things.

As an aside, I think we need to improve the process we use for API-related
features, because a lot of the problems that get picked up during code
review could have been avoided if we had a better review earlier on that
just focussed on the API design, independent of implementation and Nova
internals.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTL] Absent next week

2014-03-06 Thread Thierry Carrez
Hi PTLs,

I'll be away next week to recharge batteries before the long release +
summit tunnel, so I'll miss our usual 1:1 status update.

There will still be a Release/project meeting Tuesday at 21:00 UTC. Sean
Dague has agreed to run it, and will review the currently-granted
feature freeze exceptions to see which (if any) should be extended.

Any future Feature Freeze Exception request will be processed by the
PTLs and the rest of the release team[1], with Sean being the first
point of contact in my absence. The base advice and rules we follow can
be found in my recent blogpost[2]. Remember that the deeper we go into
the freeze, the less likely it is for *any* FFE to be granted or extended.

[1] https://launchpad.net/~openstack-release/+members
[2] http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/

Thanks everyone !

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [heat] [neutron] - Status of Heat and Neutron tempest blueprints?

2014-03-06 Thread Sean Dague
On 03/06/2014 08:11 AM, Steven Hardy wrote:
 On Thu, Mar 06, 2014 at 07:53:03AM -0500, Sean Dague wrote:
 We're at Freeze, so I want to pick up and understand where we currently
 stand with both Neutron and Heat actually getting tested fully in the gate.

 First Neutron -
 https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full


 We know that this is *close* as the full job is running non voting
 everywhere, and typically passing. How close are we? Or should we be
 deferring this until Juno (which would be unfortunate)?
 
 Can you please clarify - does FF apply to tempest *tests*?  My assumption
 was that we could move from feature development to testing during the FF/RC
 phase of the cycle, the natural by-product of which will be additional
 tempest testcases..

It doesn't. Typically we're adding additional validation up to the
release to catch things we believe are still exposed (especially in
conjunction with bugs that are found). That being said, if the core
teams here are punting, it would be good to know, so we don't waste time
asking the questions over and over again. :)

 Second Heat -
 https://blueprints.launchpad.net/tempest/+spec/tempest-heat-integration

 The Heat tests that are in a normal Tempest job are relatively trivial
 surface verification, and in no way actually make sure that Heat is
 operating at a real level. This fact is a contributing factor to why
 Heat was broken in i2.
 
 Yeah, we have a list of wishlist bugs so we can track the additional
 less-trivial tests we'd like to implement:
 
 https://bugs.launchpad.net/heat/+bugs?field.tag=tempest
 
 I was hoping we'd get patches at least proposed for most of these before
 Icehouse ships, but we'll have to see how it goes.. :)

+1. The more the better.

 The first real verification for Heat is in the Heat slow job (which we
 created to give Heat a separate time budget, because doing work that
 requires real guests takes time).

 The heat slow job looks like it is finally passing much of the time -
 http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWhlYXQtc2xvdyIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDEwOTg3NDQ4OH0=

 It's seeing a 78% pass rate in check. Can anyone in the Heat team
 confirm that the failures in this job are actually real failures on
 patches that should have been blocked?

 I'd like to get that turned on (and on all the projects) as soon as the
 Heat team is confident on it so that Heat actually participates in the
 tempest/devstack gate in a material way and we can prevent future issues
 where a keystone, nova, neutron or whatever change would break Heat in git.
 
 Steve Baker can make the final decision, but I am +2 on turning it on -
 during the process of getting my instance-users patches through the gate,
 this test consistently found real issues with no false positives (other
 than those caused by other known bugs).

Great. Looking forward to turning that on.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-06 Thread Sandy Walsh
DSLs are tricky beasts. On one hand I like giving a tool to
non-developers so they can do their jobs, but I always cringe when the
DSL reinvents the wheel for basic stuff (compound assignment
expressions, conditionals, etc).

YAML isn't really a DSL per se, in the sense that it has no language
constructs. As compared to a Ruby-based DSL (for example) where you
still have Ruby under the hood for the basic stuff and extensions to the
language for the domain-specific stuff.

Honestly, I'd like to see a killer object model for defining these
workflows as a first step. What would a python-based equivalent of that
real-world workflow look like? Then we can ask ourselves, does the DSL
make this better or worse? Would we need to expose things like email
handlers, or leave that to the general python libraries?

$0.02

-S



On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
 Folks, 
 
 I took a crack at using our DSL to build a real-world workflow. 
 Just to see how it feels to write it. And how it compares with
 alternative tools. 
 
 This one automates a page from the OpenStack operations
 guide:
 http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
  
 
 Here it is https://gist.github.com/dzimine/9380941
 or here http://paste.openstack.org/show/72741/
 
 I have a bunch of comments, implicit assumptions, and questions which
 came to mind while writing it. Want your and other people's opinions on it. 
 
 But gist and paste don't let you annotate lines!!! :(
 
 Maybe we can put it on the review board, even with no intention to
 check it in, to use for discussion?
 
 Any interest?
 
 DZ 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to merge blueprints that just missed Icehouse-3 in early Juno-1

2014-03-06 Thread Russell Bryant
On 03/06/2014 07:05 AM, John Garbutt wrote:
 That being said, importance isn't a FIFO.
 
 Agreed, but it seems most of them are quite useful, and a lot have
 already been through a few review cycles.

I think this is a key point.  Regardless of how long something has been
in the queue, importance isn't a FIFO queue.  This is mostly just about
setting expectations.  It's more of a ... priority-based heap?

I think what you're proposing is fine.  I would state it as something like:

 - This primarily applies to blueprints that were close to merging.
   This means that the code already had some core review iterations and
   was getting close.

 - Have your code rebased and ready for review as soon as Juno dev
   opens.  This gives reviewers the best chance to get back to it,
   and since it was close, it has a good chance for juno-1.

Even for stuff that wasn't actually close to merging, this is still the
best practice to give your patches the best chance.

However, if there are more things coming in than review bandwidth available
(the current reality), you're competing for review attention.  It has to
be compelling enough to get priority on its technical merits.  The other
angle is to increase your general karma in the dev community so that
reviewers are compelled to help the *developer*, even if they are
ambivalent about the particular code.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-06 Thread Russell Bryant
On 03/06/2014 08:18 AM, Christopher Yeoh wrote:
 As an aside, I think we need to improve the process we use for
 API-related features, because a lot of the problems that get picked up
 during code review could have been avoided if we had a better review
 earlier on that just focussed on the API design, independent of
 implementation and Nova internals.

Yes, I would absolutely like to get better about doing this as a part of
our blueprint review process.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Russell Bryant
On 03/06/2014 05:33 AM, Thierry Carrez wrote:
 Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE 
 (https://review.openstack.org/#/c/56416/)

 This is the last of several patches (already merged) that implement image 
 cache cleanup for the vmware driver.  This patch solves a significant 
 customer pain point as it removes unused images from their datastore.  
 Without this patch their datastore can become unnecessarily full.  In 
 addition to the customer benefit from this patch it

 1.  has a turn off switch 
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team 

 ndipanov has been good enough to say he will review the patch, so we would 
 ask for one additional core sponsor for this FFE.
 
 This one borders on the bug side, so if it merges early enough, I'm +1
 on it.
 

I'd still want it to land ASAP.  I'm OK with it as long as there are
reviewers signed up.  I'm still waiting for confirmation on sponsorship
of this one.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)

2014-03-06 Thread Russell Bryant
On 03/06/2014 05:31 AM, Thierry Carrez wrote:
 Michael Still wrote:
 On Thu, Mar 6, 2014 at 12:26 PM, Tian, Shuangtai
 shuangtai.t...@intel.com wrote:

 Hi,

 I would like to make a request for FFE for one patch in novaclient for PCI
 V3 API : https://review.openstack.org/#/c/75324/

 [snip]

 BTW the PCI patches in V2 will be deferred to Juno.

 I'm confused. If this isn't landing in v2 in icehouse I'm not sure we
 should do a FFE for v3. I don't think right at this moment we want to
 be encouraging users to use v3, so why does waiting matter?
 
 Yes, the benefit of having this IN the release (rather than early in
 tree in Juno) is not really obvious. We already have a significant
 number of FFEs lined up for Nova, so I'd be -1 on this one.
 

This is actually a novaclient patch anyway, so it's not really relevant
to the feature freeze.  novaclient changes can go in anytime as it's not
part of the integrated release.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread zhangyu (AI)
Got it. Many thanks!

Leiqiang, you can take action now :)

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com wrote:
On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
 It seems to be an interesting idea. In fact, a China-based public IaaS, 
 QingCloud, has provided a similar feature
 to their virtual servers. Within 2 hours after a virtual server is deleted, 
 the server owner can decide whether
 or not to cancel this deletion and recycle that deleted virtual server.

 People make mistakes, while such a feature helps in urgent cases. Any idea 
 here?
Nova has soft_delete and restore for servers. That sounds similar?

John


 -Original Message-
 From: Zhangleiqiang 
 [mailto:zhangleiqi...@huawei.com]
 Sent: Thursday, March 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

 Hi all,

 Current openstack provide the delete volume function to the user.
 But it seems there is no any protection for user's delete operation miss.

 As we know the data in the volume maybe very important and valuable.
 So it's better to provide a method to the user to avoid the volume delete 
 miss.

 Such as:
 We can provide a safe delete for the volume.
 The user can specify how long the volume's deletion will be delayed (before
 it is actually deleted) when deleting the volume.
 Before the volume is actually deleted, the user can cancel the delete
 operation and recover the volume.
 After the specified time, the volume will actually be deleted by the system.

 Any thoughts? Any advice is welcome.

 Best regards to you.


 --
 zhangleiqiang

 Best Regards



 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.
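
For illustration, a deferred delete along those lines might look roughly like
the sketch below -- every name and the reclaim window are hypothetical,
loosely modeled on Nova's soft_delete:

import datetime

RECLAIM_WINDOW = datetime.timedelta(hours=2)  # hypothetical config value

def soft_delete_volume(volume):
    # Mark the volume instead of deleting it; the data stays on the backend.
    volume.status = 'soft-deleted'
    volume.deleted_at = datetime.datetime.utcnow()
    volume.save()

def restore_volume(volume):
    # Cancel a pending delete within the reclaim window.
    volume.status = 'available'
    volume.deleted_at = None
    volume.save()

def reclaim_expired_volumes(volumes, driver):
    # Periodic task: actually delete volumes whose window has passed.
    now = datetime.datetime.utcnow()
    for volume in volumes:
        if (volume.status == 'soft-deleted'
                and now - volume.deleted_at > RECLAIM_WINDOW):
            driver.delete_volume(volume)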

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread Russell Bryant
On 03/06/2014 05:37 AM, Thierry Carrez wrote:
 Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.

 The feature is completed and has been tested extensively internally. The
 feature is very low risk and has huge value for users. In short, a user
 is able to upload an ISO to glance and then boot from that ISO.

 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov
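
For reference, the workflow described above would look roughly like this from
the client side -- a sketch only, with auth details and IDs as placeholders:

from glanceclient import Client as GlanceClient
from novaclient.v1_1 import client as nova_client

# Upload the ISO to glance (endpoint/token are placeholders).
glance = GlanceClient('1', endpoint=GLANCE_URL, token=TOKEN)
with open('ubuntu-12.04.iso', 'rb') as f:
    image = glance.images.create(name='ubuntu-iso',
                                 disk_format='iso',
                                 container_format='bare',
                                 data=f)

# Boot an instance from that ISO.
nova = nova_client.Client(USER, PASSWORD, TENANT, AUTH_URL)
nova.servers.create(name='iso-boot-test', image=image.id, flavor=FLAVOR_ID)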

 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
 wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.
 
 Sounds self-contained enough... but we have a lot piled up for Nova
 already. I'm +0 on this one, if Nova PTL wants it and it lands early
 enough to limit the distraction, I guess it's fine.
 

I'm fine with it as long as the sponsors confirm their sponsorship and
are ready to get it merged this week.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Russell Bryant
On 03/06/2014 03:20 AM, Andrew Woodward wrote:
I'd like to request an FFE for the remaining patches in the Ephemeral
 RBD image support chain
 
 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/
 
 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.
 
 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.
 
 This is needed as it implements essential functionality that has
 been missing from the RBD driver, and this is the second release in
 which we have attempted to merge it.

It's not a trivial change, and it doesn't appear that it was super close
to merging based on review history.

Are there two nova-core members interested and willing to review this to
get it merged ASAP?  If so, could you comment on how close you think it is?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-06 Thread Russell Bryant
On 03/05/2014 03:40 PM, Yathiraj Udupi (yudupi) wrote:
 Hi, 
 
 We would like to make a request for FFE for the Solver Scheduler work.
  A lot of work has gone into it since Sep’13, and the first patch has
 gone through several iteration after some reviews.   The first patch
 - https://review.openstack.org/#/c/46588/ introduces the main solver
 scheduler driver, and a reference solver implementation, and the
 subsequent patches that are already added provide the pluggable solver,
 and individual support for adding constraints, costs, etc. 

I think this one should wait for Juno, so -1 on the FFE.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] FFE for instance-users

2014-03-06 Thread Steven Hardy
Hi all,

We've not quite managed to land the last few patches for instance-users
in time for FF:

https://blueprints.launchpad.net/heat/+spec/instance-users

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bug/1089261,n,z

This is due to a combination of review latency in getting the required changes
into devstack, working up backwards-compatibility hacks for the tripleo
folks, and limited availability of core reviewers to review the patches over
the last month.

The remaining patches are required to provide a solution to bug #1089261,
as this is solved as a result of implementing this blueprint.

If we can go ahead and get these last 4 patches in, I'd appreciate it :)
https://review.openstack.org/#/c/72762/
https://review.openstack.org/#/c/72761/
https://review.openstack.org/#/c/71930/
https://review.openstack.org/#/c/72763/

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [State-Management] Agenda for meeting (tomorrow) at 2000 UTC

2014-03-06 Thread Ivan Melnikov
On 06.03.2014 04:00, Joshua Harlow wrote:
 Hi all,
 
 The [state-management] project team holds a weekly meeting in
 #openstack-meeting on thursdays, 2000 UTC. The next meeting is tomorrow,
 2014-03-06!!! 

[...]

 Any other topics are welcome :-)

I dared to add an item on documentation improvements.

-- 
WBR,
Ivan A. Melnikov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-06 Thread Dugger, Donald D
Fangzhen-

Great to see your interest in Gantt but, as Sylvain said, gantt is really not 
ready for feature enhancements yet.  Our exclusive goal for gantt right now is 
to separate it out from nova proper, a daunting task itself.

Having said that, there is no change in development for the current nova 
scheduler, if you have an idea for an enhancement/cleanup/new feature for the 
nova scheduler you should go ahead and propose working on that.  The plan is 
that any changes that get made to the current nova scheduler will be 
incorporated into gantt when it finally gets separated out.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
Sent: Thursday, March 6, 2014 3:19 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

On 06/03/2014 09:35, 方祯 wrote:
Hi Sylvain, Russell, dims

Thanks for your the replies and guidances!

I have read the docs below the title. In my opinion, it is quite a good idea
to take the storage and network components into consideration in the nova
scheduler, though I agree that it is quite a large job for the current GSoC
project. From the docs I have read before, I think that providing additional
Filter and Weight functions is quite important, both in the current
filter_scheduler and in a later cross-service scheduler (SolverScheduler or
other). So my idea is to implement some scheduler filters/weights using other
metrics of the host state or other cross-service data; see the sketch below.
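
A minimal custom filter following the host-filter API nova uses today might
look like this (the class name and threshold are arbitrary illustrations):

from nova.scheduler import filters

class EnoughFreeRamFilter(filters.BaseHostFilter):
    """Toy example: only pass hosts with some free RAM headroom."""

    def host_passes(self, host_state, filter_properties):
        requested = filter_properties['instance_type']['memory_mb']
        # Keep 512 MB of headroom -- an arbitrary illustrative threshold.
        return host_state.free_ram_mb >= requested + 512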
As a newbie to OpenStack-dev, I have a concern: is this substantial enough for
a GSoC term? And what kind of work would suit both a current nova enhancement
and the future cross-service scheduler, while still filling a GSoC term?
It would be great if somebody could give some advice :)

Thanks to Sylvain for the information about the #openstack-meeting IRC channel,
and to dims and Russell for their suggestions; I will update my information
soon on the GSoC wiki pages.

Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989



I just discovered that Gantt has been proposed as a potential subject of 
interest for GSoC. While I do understand the opportunity for people working on 
Gantt, I don't think, as Russell stated, that Gantt is mature enough for 
helping newcomers to deliver cool features within the given timeline.

IMHO, Gantt should be removed from this wikipage [1], and be replaced by Nova 
proposal for Filters/Weights improvements. The Solver Scheduler recently 
received NACK for a FFE (Feature Freeze Exception) so that means that new 
patches wouldn't be merged until Juno (12th May), so I guess the problem would 
be the same too.

Anyway, people interested in working on the Scheduler should attend weekly 
meetings on Tuesdays, for at least synchronization within the team.

My 2cts,
-Sylvain

[1] https://wiki.openstack.org/wiki/GSoC2014#Common_Scheduler_.28Gantt.29
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-06 Thread Kyle Mestery
On Tue, Mar 4, 2014 at 7:34 AM, Kyle Mestery mest...@noironetworks.comwrote:

 On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague s...@dague.net wrote:

 On 03/03/2014 11:32 PM, Dean Troyer wrote:
  On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
  In all cases today with Open Source plugins, Neutron agents have run
  on the hosts. For OpenDaylight, this is not the case. OpenDaylight
  integrates with Neutron as a ML2 MechanismDriver. But it has no
  Neutron code on the compute hosts. OpenDaylight itself communicates
  directly to those compute hosts to program Open vSwitch.
 
 
 
  devstack doesn't provide a way for me to express this today. On the
  compute hosts in the above scenario, there is no q-* services
  enabled, so the is_neutron_enabled function returns 1, meaning no
  neutron.
 
 
  True and working as designed.
 
 
  And then devstack sets Nova up to use nova-networking, which fails.
 
 
  This only happens if you have enabled nova-network.  Since it is on by
  default you must disable it.
 
 
  The patch I have submitted [1] modifies is_neutron_enabled to
  check for the meta neutron service being enabled, which will then
  configure nova to use Neutron instead of nova-networking on the
  hosts. If this sounds wonky and incorrect, I'm open to suggestions
  on how to make this happen.
 
 
  From the review:
 
  is_neutron_enabled() is doing exactly what it is expected to do, return
  success if it finds any q-* service listed in ENABLED_SERVICES. If no
  neutron services are configured on a compute host, then this must not
  say they are.
 
  Putting 'neutron' in ENABLED_SERVICES does nothing and should do
 nothing.
 
  Since you are not implementing the ODS as a Neutron plugin (as far as
  DevStack is concerned) you should then treat it as a system service and
  configure it that way, adding 'opendaylight' to ENABLED_SERVICES
  whenever you want something to know it is being used.
 
 
 
  Note: I have another patch [2] which enables an OpenDaylight
  service, including configuration of OVS on hosts. But I cannot check
  if the opendaylight service is enabled, because this will only run
  on a single node, and again, not on each compute host.
 
 
  I don't understand this conclusion. in multi-node each node gets its own
  specific ENABLED_SERVICES list, you can check that on each node to
  determine how to configure that node.  That is what I'm trying to
  explain in that last paragraph above, maybe not too clearly.

 So in an Open Daylight environment... what's running on the compute host
 to coordinate host level networking?

 Nothing. OpenDaylight communicates to each host using OpenFlow and OVSDB
 to manage networking on the host. In fact, this is one huge advantage for
 the
 ODL MechanismDriver in Neutron, because it's one less agent running on the
 host.

 Thanks,
 Kyle

As an update here, I've reworked my devstack patch [1] for adding OpenDaylight
support to make OpenDaylight a top-level service, per suggestion from Dean.
You can now enable both odl-server and odl-compute in your local.conf with
my patch. Enabling odl-server will run OpenDaylight under devstack. Enabling
odl-compute will configure the host's OVS to work with OpenDaylight.

Per discussion with Sean, I'd like to look at refactoring some other bits of
the Neutron devstack code in the coming weeks as well.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/69774/


  -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Weekly IRC meeting schedule

2014-03-06 Thread Ilya Sviridov
Hello magnetodb contributors,

I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC.

More technical details later.

Let us vote by replying to this email.

With best regards,
Ilya Sviridov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Weekly IRC meeting schedule

2014-03-06 Thread Illia Khudoshyn
1300UTC is fine for me


On Thu, Mar 6, 2014 at 4:24 PM, Ilya Sviridov isviri...@mirantis.comwrote:

 Hello magnetodb contributors,

 I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC.

 More technical details later.

 Let us vote by replying to this email.

 With best regards,
 Ilya Sviridov


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Ownership and path to schema definitions

2014-03-06 Thread David Kranz

On 03/05/2014 10:25 PM, Christopher Yeoh wrote:

On Tue, 04 Mar 2014 13:31:07 -0500
David Kranz dkr...@redhat.com wrote:

I think it would be a good time to have at least an initial
discussion about the requirements for theses schemas and where they
will live. The next step in tempest around this is to replace the
existing negative test files with auto-gen versions, and most of the
work in doing that is to define the schemas.

The tempest framework needs to know the http method, url part,
expected error codes, and payload description. I believe only the
last is covered by the current nova schema definitions, with the
others being some kind of attribute or data associated with the
method that is doing the validation. Ideally the information being
used to do the validation could be auto-converted to a more general
schema that could be used by tempest. I'm interested in what folks
have to say about this and especially from the folks who are core
members of both nova and tempest. See below for one example (note
that the tempest generator does not yet handle pattern).


So as you've seen, a lot of what is wanted for the tempest framework is
already implicitly known within the method context, which is why it's not
explicitly stated again in the schema. I haven't actually thought about it
a lot, but I suspect the expected-errors decorator is something that would
fit just as well in the validation framework, however.

Some of the other stuff, such as the url part, descriptions, etc., not so
much, as it would be purely duplicate information that would get out of date.
However, for documentation auto-generation it is something that we do also
want to have available in an automated fashion. I did a bit of exploration
early in Icehouse into generating this within the context of the api samples
tests, where we have access to this sort of information, and I think together
we'd have all the info we need; I'm just not sure mashing them together is
the right way to do it.
To be clear, I was advocating for some solution where all the 
information needed to create negative tests, api docs, (part of client 
libraries?), etc. could be derived from the source code. The pieces of 
information could come from explicit schema definitions, scanning the 
code, python introspection. I have done this sort of thing in other 
languages but am not enough of a Python wiz to say exactly how this 
could work. Certainly the schema should not have duplicate information.
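
As a toy sketch of that idea -- all names hypothetical -- a decorator could
record the expected error codes on each API method, and a generator could
read them back via introspection together with the doc string:

def expected_errors(*codes):
    # Attach the expected HTTP error codes to the API method.
    def decorator(func):
        func.expected_errors = codes
        return func
    return decorator

class ServerActionsController(object):
    @expected_errors(404, 409)
    def get_console_output(self, req, server_id, body):
        """Return the console output for a server."""

def describe(controller):
    # Walk the controller and emit (name, errors, description) tuples.
    for name in dir(controller):
        method = getattr(controller, name)
        if hasattr(method, 'expected_errors'):
            yield name, method.expected_errors, method.__doc__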

And from the documentation point of view we need to have a bit of a
think about whether doc strings on methods should be the canonical way
we produce descriptive information about API methods. On one hand it's
appealing; on the other hand they tend to be not very useful, or very
focused on Nova internals. But we could get much better at it.
Agreed. IMO, doc strings on methods have far more benefits (automation, 
not getting out of sync with code) than real costs (non-coders have to 
edit the code to review or improve the docs). This works best when 
developers write at least drafts of doc strings. The REST API case is a 
little more tricky because it does not map as directly to methods.


 -David


Short version - yea I think we want to get to the point where tempest
doesn't generate these manually. But I'm not sure about how we
should do it.

Chris


   -David

  From nova:

get_console_output = {
    'type': 'object',
    'properties': {
        'get_console_output': {
            'type': 'object',
            'properties': {
                'length': {
                    'type': ['integer', 'string'],
                    'minimum': 0,
                    'pattern': '^[0-9]+$',
                },
            },
            'additionalProperties': False,
        },
    },
    'required': ['get_console_output'],
    'additionalProperties': False,
}

  From tempest:

{
    "name": "get-console-output",
    "http-method": "POST",
    "url": "servers/%s/action",
    "resources": [
        {"name": "server", "expected_result": 404}
    ],
    "json-schema": {
        "type": "object",
        "properties": {
            "os-getConsoleOutput": {
                "type": "object",
                "properties": {
                    "length": {
                        "type": ["integer", "string"],
                        "minimum": 0
                    }
                }
            }
        },
        "additionalProperties": false
    }
}
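
As a toy illustration of the generation step, a generator could walk such a
schema and emit payloads that each violate one constraint; a real generator
(e.g. the one planned for tempest) would handle far more JSON-schema keywords:

import copy

SAMPLE_VALUES = {'integer': 42, 'string': 'x', 'object': {}, 'array': []}

def invalid_payloads(valid, schema):
    # Yield (description, payload) pairs, each breaking one property.
    for prop, sub in schema.get('properties', {}).items():
        types = sub.get('type', [])
        if isinstance(types, str):
            types = [types]
        # Substitute a sample value of a type that is not allowed here.
        for type_name, sample in SAMPLE_VALUES.items():
            if type_name not in types:
                bad = copy.deepcopy(valid)
                bad[prop] = sample
                yield ('wrong type for %s' % prop, bad)
                break
        if 'minimum' in sub:
            bad = copy.deepcopy(valid)
            bad[prop] = sub['minimum'] - 1
            yield ('below minimum for %s' % prop, bad)

Each generated payload would then be sent with the http-method and url from
the descriptor, expecting a client error such as 400.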


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread Russell Bryant
On 03/06/2014 08:45 AM, Russell Bryant wrote:
 On 03/06/2014 05:37 AM, Thierry Carrez wrote:
 Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.

 The feature is completed and has been tested extensively internally. The
  feature is very low risk and has huge value for users. In short, a user
  is able to upload an ISO to glance and then boot from that ISO.

 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov

 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
  wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.

 Sounds self-contained enough... but we have a lot piled up for Nova
 already. I'm +0 on this one, if Nova PTL wants it and it lands early
 enough to limit the distraction, I guess it's fine.

 
 I'm fine with it as long as the sponsors confirm their sponsorship and
 are ready to get it merged this week.
 

I was able to confirm this on IRC with Nikola and John.  FFE approved.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-06 Thread Russell Bryant
On 03/06/2014 07:08 AM, John Garbutt wrote:
 But I do worry about having too many FFEs, so we get distracted from bugs.

Fair concern.

To mitigate it we need to have a hard deadline on these.  We should aim
for this week and an absolute hard deadline of Tuesday.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][FFE] Cinder switch-over to oslo.messaging

2014-03-06 Thread Flavio Percoco

On 06/03/14 11:50 +0100, Thierry Carrez wrote:

Flavio Percoco wrote:

I'd like to request a FFE for the oslo.messaging migration in Cinder.

Some projects have already switched over to oslo.messaging and others are
still doing so. I think we should switch the remaining projects to
oslo.messaging as soon as possible and keep the RPC library in use
consistent throughout OpenStack.

Cinder's patch has been up for review for a couple of weeks already
and it's been kept updated with master. Besides some of the gate
failures we've had in the last couple of weeks, it seems to work as
expected.

As a final note, most of the work on this patch followed the style and
changes done in Nova, for better or for worse.

The review link is: https://review.openstack.org/#/c/71873/


So on one hand this is a significant change that looks like it could
wait (little direct feature gain). On the other we have oslo.messaging
being adopted in a lot of projects, and we reduce the maintenance
envelope if we switch most projects to it BEFORE release.

This one really boils down to how early it can be merged. If it's done
before the meeting next Tuesday, it's a net gain. If not, it becomes too
much of a distraction from bugfixes for reviewers and any regression it
creates might get overlooked.


FWIW, I just rebased it earlier today and the patch could be merged
today if it gets enough reviews.

Cheers,
Fla.


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Russell Bryant
On 03/06/2014 03:00 AM, Nikola Đipanov wrote:
 On 03/05/2014 07:59 PM, Russell Bryant wrote:
 On 03/05/2014 12:27 PM, Andrew Laski wrote:
 On 03/05/14 at 07:37am, Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE
 (https://review.openstack.org/#/c/56416/)

 This is the last of several patches (already merged) that implement
 image cache cleanup for the vmware driver.  This patch solves a
 significant customer pain point as it removes unused images from their
 datastore.  Without this patch their datastore can become
 unnecessarily full.  In addition to the customer benefit from this
 patch it

 1.  has a turn off switch
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team

 ndipanov has been good enough to say he will review the patch, so we
 would ask for one additional core sponsor for this FFE.

 Looking over the blueprint and outstanding review it seems that this is
 a fairly low risk change, so I am willing to sponsor this bp as well.

 Nikola, can you confirm if you're willing to sponsor (review) this?

 
 Yeah - I'll review it!

I was also able to confirm on IRC that Daniel Berrange would review this
one.  FFE approved.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Weekly IRC meeting schedule

2014-03-06 Thread Ilya Sviridov
Any other opinions?


On Thu, Mar 6, 2014 at 4:31 PM, Illia Khudoshyn ikhudos...@mirantis.comwrote:

 1300UTC is fine for me


 On Thu, Mar 6, 2014 at 4:24 PM, Ilya Sviridov isviri...@mirantis.comwrote:

 Hello magnetodb contributors,

 I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC.

 More technical details later.

  Let us vote by replying to this email.

 With best regards,
 Ilya Sviridov


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

  www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Weekly IRC meeting schedule

2014-03-06 Thread Maksym Iarmak
+

13.00 UTC is OK.


2014-03-06 17:18 GMT+02:00 Ilya Sviridov isviri...@mirantis.com:

 Any other opinions?


 On Thu, Mar 6, 2014 at 4:31 PM, Illia Khudoshyn 
 ikhudos...@mirantis.comwrote:

 1300UTC is fine for me


 On Thu, Mar 6, 2014 at 4:24 PM, Ilya Sviridov isviri...@mirantis.comwrote:

 Hello magnetodb contributors,

 I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC.

 More technical details later.

  Let us vote by replying to this email.

 With best regards,
 Ilya Sviridov


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

  www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Dmitry Mescheryakov
Hello folks,

A number of OpenStack and related projects have a need to perform
operations inside VMs running on OpenStack. A natural solution would
be an agent running inside the VM and performing tasks.

One of the key questions here is how to communicate with the agent. An
idea which was discussed some time ago is to use oslo.messaging for
that. That is an RPC framework - exactly what is needed. You can use different
transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
connectivity your OpenStack networking can provide. At the same time
there is a number of things to consider, like networking, security,
packaging, etc.
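
To make the idea concrete, a minimal in-VM agent built on the oslo.messaging
RPC server API might look like the sketch below; the transport URL, topic and
method are all placeholders:

from oslo.config import cfg
from oslo import messaging

class GuestEndpoint(object):
    # Methods on the endpoint become RPC calls the agent can serve.
    def ping(self, ctxt, payload):
        return {'pong': payload}

transport = messaging.get_transport(
    cfg.CONF, url='rabbit://agent:secret@broker-host:5672/')
target = messaging.Target(topic='guest_agent', server='instance-0001')
server = messaging.get_rpc_server(transport, target, [GuestEndpoint()],
                                  executor='blocking')
server.start()
server.wait()

The service side would then use messaging.RPCClient with the same target to
call into the agent.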

So, messaging people, what is your opinion on that idea? I've already
raised that question in the list [1], but it seems like not everybody who
has something to say participated. So I am resending with the
different topic. For example, yesterday we started discussing security
of the solution in the openstack-oslo channel. Doug Hellmann at the
start raised two questions: is it possible to separate different
tenants or applications with credentials and ACL so that they use
different queues? My opinion is that it is possible using the RabbitMQ/Qpid
management interface: for each application we can automatically create a
new user with permission to access only her queues (see the sketch below).
Another question raised by Doug is how to mitigate a DOS attack coming from
one tenant so that it does not affect another tenant. The thing is that
though different applications will use different queues, they are all going
to use a single broker.
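
As a sketch of the per-application isolation mentioned above, the RabbitMQ
management HTTP API could be driven like this (host, credentials and the
queue naming scheme are all placeholders):

import json
import requests

MGMT = 'http://broker-host:15672/api'
ADMIN = ('admin', 'secret')  # placeholder admin credentials
HEADERS = {'content-type': 'application/json'}

def provision_application(app, password):
    # Create a broker user for the application...
    requests.put('%s/users/%s' % (MGMT, app), auth=ADMIN, headers=HEADERS,
                 data=json.dumps({'password': password, 'tags': ''}))
    # ...that may only touch queues prefixed with its own name.
    regex = '^%s\\..*' % app
    requests.put('%s/permissions/%%2f/%s' % (MGMT, app), auth=ADMIN,
                 headers=HEADERS,
                 data=json.dumps({'configure': regex,
                                  'write': regex,
                                  'read': regex}))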

Do you share Doug's concerns or maybe you have your own?

Thanks,

Dmitry

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-December/021476.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-06 Thread Joshua Harlow
That sounds a little similar to what taskflow is trying to do (I am of course 
biased).

I agree with letting the native language implement the basics (expressions,
assignment...) and then building the domain on top of that. It just seems more
natural IMHO, and is similar to what LINQ (in C#) has done.

My 3 cents.

Sent from my really tiny device...

 On Mar 6, 2014, at 5:33 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
 DSL's are tricky beasts. On one hand I like giving a tool to
 non-developers so they can do their jobs, but I always cringe when the
 DSL reinvents the wheel for basic stuff (compound assignment
 expressions, conditionals, etc).
 
 YAML isn't really a DSL per se, in the sense that it has no language
 constructs. As compared to a Ruby-based DSL (for example) where you
 still have Ruby under the hood for the basic stuff and extensions to the
 language for the domain-specific stuff.
 
 Honestly, I'd like to see a killer object model for defining these
 workflows as a first step. What would a python-based equivalent of that
 real-world workflow look like? Then we can ask ourselves, does the DSL
 make this better or worse? Would we need to expose things like email
 handlers, or leave that to the general python libraries?
 
 $0.02
 
 -S
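
As a rough stab at Sandy's question above, a python-based equivalent could be
as small as the object model below (entirely hypothetical names):

class Task(object):
    def __init__(self, name, action, on_success=None, on_error=None):
        self.name = name
        self.action = action          # callable doing the real work
        self.on_success = on_success  # name of the next task, or None
        self.on_error = on_error      # task to jump to on failure, or None

class Workflow(object):
    def __init__(self, tasks, start):
        self.tasks = dict((t.name, t) for t in tasks)
        self.start = start

    def run(self, context):
        name = self.start
        while name:
            task = self.tasks[name]
            try:
                task.action(context)
                name = task.on_success
            except Exception:
                name = task.on_error

Email handlers and the like would then just be ordinary callables passed in
as actions.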
 
 
 
 On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
 Folks, 
 
 I took a crack at using our DSL to build a real-world workflow. 
 Just to see how it feels to write it. And how it compares with
 alternative tools. 
 
 This one automates a page from OpenStack operation
 guide: 
 http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
  
 
 Here it is https://gist.github.com/dzimine/9380941
 or here http://paste.openstack.org/show/72741/
 
 I have a bunch of comments, implicit assumptions, and questions which
 came to mind while writing it. Want your and other people's opinions on it. 
 
 But gist and paste don't let you annotate lines!!! :(
 
 Maybe we can put it on the review board, even with no intention to
 check it in, to use for discussion?
 
 Any interest?
 
 DZ 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue

2014-03-06 Thread ZhiQiang Fan
Thank you very much.

It seems I should search Launchpad more carefully.


On Thu, Mar 6, 2014 at 3:37 PM, Mehdi Abaakouk sil...@sileht.net wrote:

 Hi,

 On Thu, Mar 06, 2014 at 10:44:25AM +0800, ZhiQiang Fan wrote:
  I already checked the stable/havana and master branch via devstack, the
  problem is still in havana, but master branch is not affected
 
  I think it is important to fix it for havana too, since some high level
  application may depend on the returned faultstring. Currently, I'm not
  sure whether the master branch fixed it in pecan or the wsme module, or in ceilometer itself
 
  Is there anyone can help with this problem?

 This is a duplicate bug of
 https://bugs.launchpad.net/ceilometer/+bug/1260398

 This one has already been fixed; I have marked havana as affected so we
 think about it if we cut a new havana version.

 Feel free to prepare the backport.

 Regards,

 --
 Mehdi Abaakouk
 mail: sil...@sileht.net
 irc: sileht

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-06 Thread Sylvain Afchain
Hi all,

I would like to request an FFE for the following patches of the L3 HA VRRP BP:

https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

https://review.openstack.org/#/c/64553/
https://review.openstack.org/#/c/66347/
https://review.openstack.org/#/c/68142/
https://review.openstack.org/#/c/70700/

These should be low risk since HA is not enabled by default.
The server side code has been developed as an extension which minimizes risk.
The agent side code introduces a few more changes, but only to decide whether
to apply the new HA behavior.

I think it's a good idea to have this feature in Icehouse, perhaps even marked 
as experimental,
especially considering the demand for HA in real world deployments.

Here is a doc to test it:

https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug

-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Jorge Miramontes
Hi everyone,

I'd like to gauge everyone's interest in a possible mini-summit for Neutron 
LBaaS. If enough people are interested I'd be happy to try and set something 
up. The Designate team just had a productive mini-summit in Austin, TX and it 
was nice to have face-to-face conversations with people in the Openstack 
community. While most of us will meet in Atlanta in May, I feel that a focused 
mini-summit will be more productive since we won't have other Openstack 
distractions around us. Let me know what you all think!

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Weekly IRC meeting schedule

2014-03-06 Thread Dmitriy Ukhlov
+


On Thu, Mar 6, 2014 at 5:25 PM, Maksym Iarmak miar...@mirantis.com wrote:

 +

 13.00 UTC is OK.


 2014-03-06 17:18 GMT+02:00 Ilya Sviridov isviri...@mirantis.com:

 Any other opinions?


 On Thu, Mar 6, 2014 at 4:31 PM, Illia Khudoshyn 
 ikhudos...@mirantis.comwrote:

 1300UTC is fine for me


 On Thu, Mar 6, 2014 at 4:24 PM, Ilya Sviridov isviri...@mirantis.comwrote:

 Hello magnetodb contributors,

 I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC.

 More technical details later.

  Let us vote by replying to this email.

 With best regards,
 Ilya Sviridov


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

  www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Julien Danjou
On Thu, Mar 06 2014, Dmitry Mescheryakov wrote:

 So, messaging people, what is your opinion on that idea? I've already
 raised that question in the list [1], but seems like not everybody who
 has something to say participated. So I am resending with the
 different topic. For example, yesterday we started discussing security
 of the solution in the openstack-oslo channel. Doug Hellmann at the
 start raised two questions: is it possible to separate different
 tenants or applications with credentials and ACL so that they use
 different queues? My opinion that it is possible using RabbitMQ/Qpid
 management interface: for each application we can automatically create
 a new user with permission to access only her queues. Another question
 raised by Doug is how to mitigate a DOS attack coming from one tenant
 so that it does not affect another tenant. The thing is though
 different applications will use different queues, they are going to
 use a single broker.

What about using HTTP and the REST APIs? That's what is supposed to be the
world-facing interface of OpenStack. If you want to receive messages,
it's still possible to use long-polling connections.
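
For instance, an in-VM agent could long-poll a hypothetical tenant-facing
REST endpoint for pending messages (the endpoint and header are made up):

import requests

def poll_for_messages(endpoint, token):
    # Long-poll the endpoint; the server holds the request open until
    # messages arrive or the timeout is reached.
    while True:
        try:
            resp = requests.get(endpoint, headers={'X-Auth-Token': token},
                                timeout=70)
        except requests.Timeout:
            continue  # no messages this round; poll again
        if resp.status_code == 200:
            for msg in resp.json():
                yield msg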

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread James E. Blair
Sean Dague s...@dague.net writes:

 #1) do we believe OFTC is fundamentally better equipped to resist a
 DDOS, or do we just believe they are a smaller target? The ongoing DDOS
 on meetup.com the past 2 weeks is a good indicator that being a smaller
 fish only helps for so long.

After speaking with a Freenode and OFTC staffer, I am informed that OFTC
is generally and currently not the target of DDoS attacks, likely due to
their smaller profile.  If they were subject to such attacks, they would
likely be less prepared to deal with them than Freenode, however, in
that event, they would expect to extend their capabilities to deal with
it, partially borrowing on experience from Freenode.  And finally,
Freenode is attempting to work with sponsors and networks that can help
mitigate the ongoing DDoS attacks.

I agree that this is not a decision to be taken lightly.  I believe that
we can effect the move successfully if we plan it well and execute it
over an appropriate amount of time.  My own primary concern is actually
the loss of network effect.  If you're only on one network, Freenode is
probably the place to be since so many other projects are there.
Nevertheless, I think our project is substantial enough that we can move
with little attrition.

The fact is though that Freenode has had significant service degradation
due to DDoS attacks for quite some time -- the infra team notices this
every time we have to chase down which side of a netsplit our bots ended
up on and try to bring them back.  We also had an entire day recently
(it was a Saturday) where we could not use Freenode at all.

There isn't much we can do about DDoS attacks on Freenode.  If we stay,
we're going to continue to deal with the occasional outage and spend a
significant amount of time chasing bots.  It's clear that Freenode is
better able to deal with attacks than OFTC would be.  However, OFTC
doesn't have to deal with them because they aren't happening; and that's
worth considering.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Jay Pipes
On Thu, 2014-03-06 at 15:32 +, Jorge Miramontes wrote:
 I'd like to gauge everyone's interest in a possible mini-summit for
 Neturon LBaaS. If enough people are interested I'd be happy to try and
 set something up. The Designate team just had a productive mini-summit
 in Austin, TX and it was nice to have face-to-face conversations with
 people in the Openstack community. While most of us will meet in
 Atlanta in May, I feel that a focused mini-summit will be more
 productive since we won't have other Openstack distractions around us.
 Let me know what you all think!

++

I think a few weeks after the design summit would be a good time.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread Alex Meade
Just so everyone is aware. Glance supports 'delayed deletes' where image
data will not actually be deleted at the time of the request. Glance also
has the concept of 'protected images', which allows for setting an image as
protected, preventing it from being deleted until the image is
intentionally set to unprotected. This avoids any actual deletion of prized
images.

Perhaps cinder could emulate that behavior or improve upon it for volumes.

-Alex


On Thu, Mar 6, 2014 at 8:45 AM, zhangyu (AI) zhangy...@huawei.com wrote:

  Got it. Many thanks!



 Leiqiang, you can take action now :)



 From: John Griffith [mailto:john.griff...@solidfire.com]
 Sent: Thursday, March 06, 2014 8:38 PM

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection







 On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com wrote:

 On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
   It seems to be an interesting idea. In fact, a China-based public IaaS,
   QingCloud, has provided a similar feature for their virtual servers.
   Within 2 hours after a virtual server is deleted, the server owner can
   decide whether or not to cancel this deletion and recycle that deleted
   virtual server.

   People make mistakes, and such a feature helps in urgent cases. Any
   ideas here?

 Nova has soft_delete and restore for servers. That sounds similar?

 John


 
  -----Original Message-----
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection
 
  Hi all,
 
   Currently OpenStack provides the delete volume function to the user.
   But it seems there is no protection against a user's accidental delete.

   As we know, the data in a volume may be very important and valuable.
   So it's better to provide the user with a method to avoid accidental
   volume deletion.

   Such as:
   We can provide a safe delete for the volume.
   The user can specify how long the volume's deletion will be delayed
   (before it is actually deleted) when deleting the volume.
   Before the volume is actually deleted, the user can cancel the delete
   operation and recover the volume.
   After the specified time, the volume will actually be deleted by the
   system.

   Any thoughts? Any advice is welcome.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 I think a soft-delete for Cinder sounds like a neat idea.  You should file
 a BP that we can target for Juno.



 Thanks,

 John



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Gary Kotton
Hi,
Anyone know how I can solve the error below:

  Running setup.py install for jsonpatch
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'entry_poimts'
  warnings.warn(msg)
changing mode of build/scripts-2.7/jsondiff from 664 to 775
changing mode of build/scripts-2.7/jsonpatch from 664 to 775

changing mode of /home/gk-dev/nova/.tox/py27/bin/jsonpatch to 775
changing mode of /home/gk-dev/nova/.tox/py27/bin/jsondiff to 775
  Found existing installation: distribute 0.6.24dev-r0
Not uninstalling distribute at /usr/lib/python2.7/dist-packages, outside 
environment /home/gk-dev/nova/.tox/py27
  Running setup.py install for setuptools

Installing easy_install script to /home/gk-dev/nova/.tox/py27/bin
Installing easy_install-2.7 script to /home/gk-dev/nova/.tox/py27/bin
  Running setup.py install for mccabe

  Running setup.py install for cffi
Traceback (most recent call last):
  File string, line 1, in module
  File /home/gk-dev/nova/.tox/py27/build/cffi/setup.py, line 94, in 
module
from setuptools import setup, Feature, Extension
ImportError: cannot import name Feature
Complete output from command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c 
import 
setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec')) install --record 
/tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
--install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7:
Traceback (most recent call last):

  File string, line 1, in module

  File /home/gk-dev/nova/.tox/py27/build/cffi/setup.py, line 94, in module

from setuptools import setup, Feature, Extension

ImportError: cannot import name Feature


Cleaning up...
Command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c import 
setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec')) install --record 
/tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
--install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7 failed 
with error code 1 in /home/gk-dev/nova/.tox/py27/build/cffi
Traceback (most recent call last):
  File .tox/py27/bin/pip, line 9, in module
load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
  File 
/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/__init__.py,
 line 148, in main
parser.print_help()
  File 
/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/basecommand.py,
 line 169, in main
log_file_fp.write(text)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: 
ordinal not in range(128)

ERROR: could not install deps [-r/home/gk-dev/nova/requirements.txt, 
-r/home/gk-dev/nova/test-requirements.txt]

Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Daniel P. Berrange
On Wed, Mar 05, 2014 at 07:37:39AM -0800, Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE 
 (https://review.openstack.org/#/c/56416/)
 
 This is the last of several patches (already merged) that implement image 
 cache cleanup for the vmware driver.  This patch solves a significant 
 customer pain point as it removes unused images from their datastore.  
 Without this patch their datastore can become unnecessarily full.  In 
 addition to the customer benefit from this patch it
 
 1.  has a turn off switch 
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team 
 
 ndipanov has been good enough to say he will review the patch, so we would 
 ask for one additional core sponsor for this FFE.

Consider me signed up


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-06 Thread Akihiro Motoki
Hi Kyle,

I am happy to hear OpenDaylight installation and startup are being restored
to devstack.
It really helps OpenStack integrate with other open source software.

I have a question on file locations for non-OpenStack open source software.
When I refactored the neutron-related devstack code, we placed files related
to such software in the lib/neutron_thirdparty directory.
I would like to know the new policy on file locations for such software.
I understand this is currently limited to neutron, but it may happen with
other projects too.

Thanks,
Akihiro


On Thu, Mar 6, 2014 at 11:19 PM, Kyle Mestery mest...@noironetworks.com wrote:
 On Tue, Mar 4, 2014 at 7:34 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague s...@dague.net wrote:

 On 03/03/2014 11:32 PM, Dean Troyer wrote:
  On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
  In all cases today with Open Source plugins, Neutron agents have
  run
  on the hosts. For OpenDaylight, this is not the case. OpenDaylight
  integrates with Neutron as a ML2 MechanismDriver. But it has no
  Neutron code on the compute hosts. OpenDaylight itself communicates
  directly to those compute hosts to program Open vSwitch.
 
 
 
  devstack doesn't provide a way for me to express this today. On the
  compute hosts in the above scenario, there is no q-* services
  enabled, so the is_neutron_enabled function returns 1, meaning no
  neutron.
 
 
  True and working as designed.
 
 
  And then devstack sets Nova up to use nova-networking, which fails.
 
 
  This only happens if you have enabled nova-network.  Since it is on by
  default you must disable it.
 
 
  The patch I have submitted [1] modifies is_neutron_enabled to
  check for the meta neutron service being enabled, which will then
  configure nova to use Neutron instead of nova-networking on the
  hosts. If this sounds wonky and incorrect, I'm open to suggestions
  on how to make this happen.
 
 
  From the review:
 
  is_neutron_enabled() is doing exactly what it is expected to do, return
  success if it finds any q-* service listed in ENABLED_SERVICES. If no
  neutron services are configured on a compute host, then this must not
  say they are.
 
  Putting 'neutron' in ENABLED_SERVICES does nothing and should do
  nothing.
 
  Since you are not implementing the ODS as a Neutron plugin (as far as
  DevStack is concerned) you should then treat it as a system service and
  configure it that way, adding 'opendaylight' to ENABLED_SERVICES
  whenever you want something to know it is being used.
 
 
 
  Note: I have another patch [2] which enables an OpenDaylight
  service, including configuration of OVS on hosts. But I cannot
  check
  if the opendaylight service is enabled, because this will only
  run
  on a single node, and again, not on each compute host.
 
 
  I don't understand this conclusion. in multi-node each node gets its
  own
  specific ENABLED_SERVICES list, you can check that on each node to
  determine how to configure that node.  That is what I'm trying to
  explain in that last paragraph above, maybe not too clearly.

 So in an Open Daylight environment... what's running on the compute host
 to coordinate host level networking?

 Nothing. OpenDaylight communicates to each host using OpenFlow and OVSDB
 to manage networking on the host. In fact, this is one huge advantage for
 the
 ODL MechanismDriver in Neutron, because it's one less agent running on the
 host.

 Thanks,
 Kyle

  As an update here, I've reworked my devstack patch [1] for adding
  OpenDaylight support to make OpenDaylight a top-level service, per
  suggestion from Dean. You can now enable both odl-server and odl-compute
  in your local.conf with my patch. Enabling odl-server will run OpenDaylight
  under devstack. Enabling odl-compute will configure the host's OVS to work
  with OpenDaylight.

  Per discussion with Sean, I'd like to look at refactoring some other bits of
  the Neutron devstack code in the coming weeks as well.

 Thanks!
 Kyle

 [1] https://review.openstack.org/#/c/69774/


 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread CARVER, PAUL
James E. Blair [mailto:jebl...@openstack.org] wrote:

significant amount of time chasing bots.  It's clear that Freenode is
better able to deal with attacks than OFTC would be.  However, OFTC
doesn't have to deal with them because they aren't happening; and that's
worth considering.

Does anyone have any idea who is being targeted by the attacks?
I assume they're hitting Freenode as a whole, but presumably the motivation
is one or more channels as opposed to just not liking Freenode in principle.

Honestly I tried IRC in the mid-nineties and didn't see the point (I spent all
my free time reading Usenet (and even paid for Agent at one point after
switching from nn on SunOS to Free Agent on Windows)) and never found
any reason to go back to IRC until finding out that OpenStack's world
revolves around Freenode. So I was only distantly aware of the battlefield
of DDoSers trying to cause netsplits in order to get ops on contentious
channels.

Is there any chance that OpenStack is the target of the DDoSers? Or do
you think there's some other target on Freenode and we're just
collateral damage?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Doug Hellmann
On Thu, Mar 6, 2014 at 10:25 AM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 Hello folks,

 A number of OpenStack and related projects have a need to perform
 operations inside VMs running on OpenStack. A natural solution would
 be an agent running inside the VM and performing tasks.

 One of the key questions here is how to communicate with the agent. An
 idea which was discussed some time ago is to use oslo.messaging for
  that. That is an RPC framework - exactly what is needed. You can use different
 transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
 connectivity your OpenStack networking can provide. At the same time
 there is a number of things to consider, like networking, security,
 packaging, etc.

 So, messaging people, what is your opinion on that idea? I've already
  raised that question in the list [1], but it seems like not everybody who
 has something to say participated. So I am resending with the
 different topic. For example, yesterday we started discussing security
 of the solution in the openstack-oslo channel. Doug Hellmann at the
 start raised two questions: is it possible to separate different
 tenants or applications with credentials and ACL so that they use
  different queues? My opinion is that it is possible using the RabbitMQ/Qpid
  management interface: for each application we can automatically create
  a new user with permission to access only her queues. Another question
  raised by Doug is how to mitigate a DOS attack coming from one tenant
  so that it does not affect another tenant. The thing is that though
  different applications will use different queues, they are all going to
  use a single broker.

 Do you share Doug's concerns or maybe you have your own?


I would also like to understand why you don't consider Marconi the right
solution for this. It is supposed to be a message system that's safe to use
from within tenant images.

Doug




 Thanks,

 Dmitry

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/021476.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Trevor McKay
I am having a very similar issue with horizon, just today. I cloned the
repo and started from scratch on master.

tools/install_venv.py is trying to install cffi as a dependency, and
ultimately fails with:

ImportError: cannot import name Feature

This is Fedora 19.  I know some folks on Fedora 20 who are not having
this issue.  I'm guessing it's a version thing...

Trevor

On Thu, 2014-03-06 at 08:14 -0800, Gary Kotton wrote:
 Hi,
 Anyone know how I can solve the error below:
 
 
   Running setup.py install for jsonpatch
 /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown
 distribution option: 'entry_poimts'
   warnings.warn(msg)
 changing mode of build/scripts-2.7/jsondiff from 664 to 775
 changing mode of build/scripts-2.7/jsonpatch from 664 to 775
 
 changing mode of /home/gk-dev/nova/.tox/py27/bin/jsonpatch to 775
 changing mode of /home/gk-dev/nova/.tox/py27/bin/jsondiff to 775
   Found existing installation: distribute 0.6.24dev-r0
 Not uninstalling distribute at /usr/lib/python2.7/dist-packages,
 outside environment /home/gk-dev/nova/.tox/py27
   Running setup.py install for setuptools
 
 Installing easy_install script to /home/gk-dev/nova/.tox/py27/bin
 Installing easy_install-2.7 script
 to /home/gk-dev/nova/.tox/py27/bin
   Running setup.py install for mccabe
 
   Running setup.py install for cffi
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
   File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94,
 in <module>
 from setuptools import setup, Feature, Extension
 ImportError: cannot import name Feature
 Complete output from
 command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c import
 setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
  '\n'), __file__, 'exec')) install --record 
 /tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
 --install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7:
 Traceback (most recent call last):
 
 
    File "<string>", line 1, in <module>
 
 
    File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in
  <module>
 
 
 from setuptools import setup, Feature, Extension
 
 
 ImportError: cannot import name Feature
 
 
 
 Cleaning up...
 Command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c import
 setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
  '\n'), __file__, 'exec')) install --record 
 /tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
 --install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7 failed 
 with error code 1 in /home/gk-dev/nova/.tox/py27/build/cffi
 Traceback (most recent call last):
    File ".tox/py27/bin/pip", line 9, in <module>
 load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
    File
  "/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/__init__.py",
   line 148, in main
 parser.print_help()
    File
  "/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/basecommand.py",
   line 169, in main
 log_file_fp.write(text)
 UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position
 72: ordinal not in range(128)
 
 
 ERROR: could not install deps [-r/home/gk-dev/nova/requirements.txt,
 -r/home/gk-dev/nova/test-requirements.txt]
 
 
 Thanks
 Gary
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [re]: [GSoC 2014] Proposal Template

2014-03-06 Thread saikrishna sripada
Hi Masaru,

I tried creating the project template following your suggestions. That's
really helpful. Only one suggestion:

Under the project description, we can give the link to the actual project
idea. The remaining details, like these, can be removed here since they
would be redundant:

   - What is the goal?
   - How will you achieve your goal?
   - What would be your milestone?
   - At which time will you complete a sub-task of your project?

We will be filling in these details anyway in the project template link,
which will be just below on the page. Please confirm.

Thanks,
--sai krishna.

Dear mentors and students,

Hi,

after a short talk with dims, I created an application template wiki
page[1]. Obviously, this is not a completed version, and I'd like your
opinions to improve it. :)

I have :
1) simply added information such as :

   ・Personal Details (e.g. Name, Email, University and so on)

   ・Project Proposal (e.g. Project, idea, implementation issues, and time
scheduling)

   ・Background (e.g. Open source, academic or intern experience, or
language experience)

2) linked this page on GSoC 2014 wiki page[2]
3) created an example of my proposal page [3] (not completed yet!)
4) linked the example to an Oslo project page[4]


Thank you,
Masaru

[1] 
https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
[2] https://wiki.openstack.org/wiki/GSoC2014#Communication
[3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
[4] https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Georgy Okrokvertskhov
Hi Julien,

I think there are valid reasons why we can consider the MQ approach for communicating
with VM agents. The first obvious reason is scalability and performance.
A user can ask the infrastructure to create 1000 VMs and configure them. With
the HTTP approach this will lead to a corresponding number of connections to a
REST API service. Taking into account that the cloud has multiple clients, the
load on the infrastructure will be pretty significant. You can address this
with introducing Load Balancing for each service, but it will significantly
increase management overhead and complexity of OpenStack infrastructure.

The second issue is connectivity and security. I think that in a typical
production deployment VMs will not have access to OpenStack
infrastructure services.
Nova and Cinder, as they do not work directly with VMs. But it poses a huge
problem for VM-level services like Savanna, Heat, Trove and Murano, which
have to be able to communicate with VMs. The solution here is to put an
intermediary to create a controllable way of communication. In case of HTTP
you will need to have a proxy with QoS and firewalls or policies, to be
able to restrict access to some specific URLs or services, to throttle
the number of connections and bandwidth to protect services from DDoS
attacks from VM sides.
In case of MQ usage you can have a separate MQ broker for communication
between the service and VMs. Typical brokers have a throttling mechanism, so you
can protect the service from DDoS attacks via MQ. Using different queues and
even vhosts you can effectively segregate different tenants.
For example we use this approach in Murano service when it is installed by
Fuel. The default deployment configuration for Murano produced by Fuel is
to have a separate RabbitMQ instance for Murano-VM communications. This
configuration will not expose the OpenStack internals to the VM, so even if
someone broke the Murano RabbitMQ instance, OpenStack itself will be
unaffected and only the Murano part will be broken.
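
To make the segregation idea concrete, here is a minimal sketch of
provisioning a per-tenant vhost plus a restricted agent user through the
RabbitMQ management HTTP API. The broker URL, admin credentials and naming
scheme are illustrative assumptions, not anything Murano or Fuel actually
ships:

    import json

    import requests

    API = "http://broker.example.com:15672/api"  # management plugin, default port
    ADMIN = ("admin", "admin-password")          # hypothetical admin account
    HEADERS = {"content-type": "application/json"}

    def provision_tenant(tenant_id, agent_password):
        vhost = "tenant-%s" % tenant_id
        user = "agent-%s" % tenant_id

        # Dedicated vhost: one tenant's queues are invisible to the others.
        requests.put("%s/vhosts/%s" % (API, vhost),
                     auth=ADMIN, headers=HEADERS, data=json.dumps({}))

        # Plain (non-admin) user whose credentials go to the in-VM agent.
        requests.put("%s/users/%s" % (API, user),
                     auth=ADMIN, headers=HEADERS,
                     data=json.dumps({"password": agent_password, "tags": ""}))

        # ACL: the agent may only configure/write/read its own queues.
        perms = {"configure": "^agent\\..*",
                 "write": "^agent\\..*",
                 "read": "^agent\\..*"}
        requests.put("%s/permissions/%s/%s" % (API, vhost, user),
                     auth=ADMIN, headers=HEADERS, data=json.dumps(perms))

Revoking a tenant then amounts to deleting the vhost and user again.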

Thanks
Georgy


On Thu, Mar 6, 2014 at 7:46 AM, Julien Danjou jul...@danjou.info wrote:

 On Thu, Mar 06 2014, Dmitry Mescheryakov wrote:

  So, messaging people, what is your opinion on that idea? I've already
  raised that question in the list [1], but seems like not everybody who
  has something to say participated. So I am resending with the
  different topic. For example, yesterday we started discussing security
  of the solution in the openstack-oslo channel. Doug Hellmann at the
  start raised two questions: is it possible to separate different
  tenants or applications with credentials and ACL so that they use
  different queues? My opinion is that it is possible using the RabbitMQ/Qpid
  management interface: for each application we can automatically create
  a new user with permission to access only her queues. Another question
  raised by Doug is how to mitigate a DoS attack coming from one tenant
  so that it does not affect another tenant. The thing is, though
  different applications will use different queues, they are going to
  use a single broker.

 What about using HTTP and the REST APIs? That's what is supposed to be the
 world-facing interface of OpenStack. If you want to receive messages,
 it's still possible to use long-polling connections.
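
To illustrate the long-polling option above: an in-VM agent can simply block
on an HTTP request until the service has work for it. A sketch, where the
endpoint, token, query parameter and payload format are all hypothetical:

    import requests

    ENDPOINT = "https://service.example.com/v1/agent/tasks"  # hypothetical
    TOKEN = "per-vm-auth-token"                              # hypothetical

    def poll_forever(handle_task):
        while True:
            try:
                # The server holds the request open (here up to 60 seconds)
                # until a task arrives, or returns 204 No Content on timeout.
                resp = requests.get(ENDPOINT,
                                    headers={"X-Auth-Token": TOKEN},
                                    params={"wait": 60}, timeout=70)
            except requests.RequestException:
                continue  # transient network error: retry
            if resp.status_code == 200:
                handle_task(resp.json())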

 --
 Julien Danjou
 ;; Free Software hacker
 ;; http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Veiga, Anthony

On Thu, 2014-03-06 at 15:32 +, Jorge Miramontes wrote:
 I'd like to gauge everyone's interest in a possible mini-summit for
 Neutron LBaaS. If enough people are interested I'd be happy to try and
 set something up. The Designate team just had a productive mini-summit
 in Austin, TX and it was nice to have face-to-face conversations with
 people in the Openstack community. While most of us will meet in
 Atlanta in May, I feel that a focused mini-summit will be more
 productive since we won't have other Openstack distractions around us.
 Let me know what you all think!

++

++

I think a few weeks after the design summit would be a good time.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Throwing my hat into the ring as well. I think this would be quite useful.
-Anthony


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Kevin L. Mitchell
On Thu, 2014-03-06 at 08:14 -0800, Gary Kotton wrote:
 File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in
 <module>
 
 
 from setuptools import setup, Feature, Extension
 
 
 ImportError: cannot import name Feature

Apparently, quite recently, a new version of setuptools was released
that eliminated the Feature class.  From what I understand, the class
has been deprecated for quite a while, but the removal still seems to
have taken some consumers by surprise; we discovered it when a package
that uses MarkupSafe failed tests with the same error today.  We may
have to consider a short-term pin to the version of setuptools (if
that's even possible) on projects that encounter the problem…
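
A short-term pin of that kind could be expressed in a requirements file
roughly as below; the exact boundary is an assumption, so verify which
setuptools release actually dropped Feature before applying it:

    # temporary workaround; assumes the 3.x line is what removed Feature
    setuptools<3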
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Julien Danjou
On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote:

 I think there are valid reasons why we can consider the MQ approach for communicating
 with VM agents. The first obvious reason is scalability and performance.
 User can ask infrastructure to create 1000 VMs and configure them. With
 HTTP approach it will lead to a corresponding number of connections to a
 REST API service. Taking into account that cloud has multiple clients the
 load on infrastructure will be pretty significant. You can address this
 with introducing Load Balancing for each service, but it will significantly
 increase management overhead and complexity of OpenStack infrastructure.

Uh? I'm having trouble imagining any large OpenStack deployment without
load-balancing services. I don't think we ever designed OpenStack to run
without load-balancers at large scale.

 The second issue is connectivity and security. I think that in typical
 production deployment VMs will not have an access to OpenStack
 infrastructure services.

Why? Should they be different from other VMs? Are you running another
OpenStack cloud to run your OpenStack cloud?

 It is fine for core infrastructure services like
 Nova and Cinder as they do not work directly with VM. But it makes a huge
 problem for VM level services like Savanna, Heat, Trove and Murano which
 have to be able to communicate with VMs. The solution here is to put an
 intermediary to create a controllable way of communication. In case of HTTP
 you will need to have a proxy with QoS and Firewalls or policies, to be
 able to restrict an access to some specific URLS or services, to throttle
 the number of connections and bandwidth to protect services from DDoS
 attacks from VM sides.

These really sound like weak arguments. You probably already need
firewalls, QoS, and throttling for your users if you're deploying a cloud
and want to mitigate any kind of attack.

 In case of MQ usage you can have a separate MQ broker for communication
 between service and VMs. Typical brokers have throttling mechanism, so you
 can protect service from DDoS attacks via MQ.

Yeah and I'm pretty sure a lot of HTTP servers have throttling for
connection rate and/or bandwidth limitation. I'm not really convinced.

 Using different queues and even vhosts you can effectively segregate
 different tenants.

Sounds like you could do the same thing with the HTTP protocol.

 For example we use this approach in Murano service when it is
 installed by Fuel. The default deployment configuration for Murano
 produced by Fuel is to have separate RabbitMQ instance for Murano-VM
 communications. This configuration will not expose the OpenStack
 internals to VM, so even if someone broke the Murano rabbitmq
 instance, the OpenSatck itself will be unaffected and only the Murano
 part will be broken.

It really sounds like you already settled on the solution being
RabbitMQ, so I'm not sure what/why you ask in the first place. :)

Is there any problem with starting VMs on a network that is connected to
your internal network? You just have to do that and connect your
application to the/one internal message bus and that's it.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] development workflows

2014-03-06 Thread Lowery, Mathew
So I submitted this doc
(http://docs-draft.openstack.org/29/70629/3/check/gate-trove-docs/34ceff7/doc/build/html/dev/install_alt.html)
in this patch set (https://review.openstack.org/#/c/70629/), and Dan Nguyen
(thanks Dan) stated that there were some folks using Vagrant. (My workflow uses
git push with a git hook to copy files and trigger restarts; see the sketch
below.) Can anyone point me to any doc regarding Trove using Vagrant? Assuming
my doc is desirable in some form, where is the best place to put it? Thanks.
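
For reference, the git-hook workflow mentioned above can be as small as a
post-receive hook along these lines; every path and service name here is an
assumption to adapt to your own dev VM:

    #!/usr/bin/env python
    # post-receive: copy the pushed tree into place and restart services.
    import subprocess

    WORK_TREE = "/opt/stack/trove"                 # hypothetical deployment dir
    GIT_DIR = "/home/stack/trove.git"              # hypothetical bare repo
    SERVICES = ["trove-api", "trove-taskmanager"]  # hypothetical service names

    # Check the pushed branch out into the running service's directory.
    subprocess.check_call(["git", "--work-tree", WORK_TREE,
                           "--git-dir", GIT_DIR, "checkout", "-f", "master"])

    # Trigger restarts so the new code is picked up.
    for svc in SERVICES:
        subprocess.call(["sudo", "service", svc, "restart"])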
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread John Dewey
I am interested 


On Thursday, March 6, 2014 at 7:32 AM, Jorge Miramontes wrote:

 Hi everyone,
 
 I'd like to gauge everyone's interest in a possible mini-summit for Neutron 
 LBaaS. If enough people are interested I'd be happy to try and set something 
 up. The Designate team just had a productive mini-summit in Austin, TX and it 
 was nice to have face-to-face conversations with people in the Openstack 
 community. While most of us will meet in Atlanta in May, I feel that a 
 focused mini-summit will be more productive since we won't have other 
 Openstack distractions around us. Let me know what you all think! 
 
 Cheers, 
 --Jorge
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev][Horizon] test_launch_instance_post questions

2014-03-06 Thread Abishek Subramanian (absubram)
Hi,

I had a couple of questions regarding this UT and the
JS template that it ends up using.
Hopefully someone can point me in the right direction
and help me understand this a little better.

I see that for this particular UT, we have a total of 3 networks
in the network_list (the second network is supposed to be disabled though).
For the nic argument needed by the nova/server_create API, though, we
only pass the first network's net_id.

I am trying to modify this unit test so as to be able to accept 2
network_ids instead of just one. This should be possible, yes?
We can have two nics in an instance instead of just one?
However, I always see that when the test runs,
the code only finds the first network from the list.

This line of code -

    if netids:
        nics = [{"net-id": netid, "v4-fixed-ip": ""}
                for netid in netids]

There's always just one net-id in this list even though I've added
a new network to the neutron test_data. Can someone please help me
figure out what I might be doing wrong?
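
For what it's worth, the comprehension itself does produce one nic per netid;
a toy reproduction with made-up ids:

    netids = ["net-id-1", "net-id-2"]  # hypothetical ids from the form POST
    nics = [{"net-id": netid, "v4-fixed-ip": ""} for netid in netids]
    assert nics == [{"net-id": "net-id-1", "v4-fixed-ip": ""},
                    {"net-id": "net-id-2", "v4-fixed-ip": ""}]

So if only one dict comes out, the netids sequence itself presumably contains
only one id by the time this code runs, i.e. the POSTed form data or the
mocked network_list is the place to look.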

How does the JS code in the horizon.instances.js file work?
I assume this is where the network list is obtained from?
How does this translate in the unit test environment?



Thanks!
Abishek


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [re]: [GSoC 2014] Proposal Template

2014-03-06 Thread Davanum Srinivas
Sai,

There may be more than one person on a topic, so it would make sense
to have additional questions per person. Yes, link to project idea is
definitely needed.

-- dims

On Thu, Mar 6, 2014 at 11:41 AM, saikrishna sripada
krishna1...@gmail.com wrote:
 Hi Masaru,

 I tried creating the project template following your suggestions. That's
 really helpful. Only one suggestion:

 Under the project description, we can give the link to the actual project
 idea. The remaining details, like these, can be removed here since they
 would be redundant:

 What is the goal?
 How will you achieve your goal?
 What would be your milestone?
 At which time will you complete a sub-task of your project?

 We will be filling in these details anyway in the project template link,
 which will be just below on the page. Please confirm.

 Thanks,
 --sai krishna.

 Dear mentors and students,

 Hi,

 after a short talk with dims, I created an application template wiki
 page[1]. Obviously, this is not a completed version, and I'd like your
 opinions to improve it. :)

 I have :
 1) simply added information such as :

・Personal Details (e.g. Name, Email, University and so on)

・Project Proposal (e.g. Project, idea, implementation issues, and time
 scheduling)

・Background (e.g. Open source, academic or intern experience, or
 language experience)

 2) linked this page on GSoC 2014 wiki page[2]
 3) created an example of my proposal page [3] (not completed yet!)
 4) linked the example to an Oslo project page[4]


 Thank you,
 Masaru

 [1]
 https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
 [2] https://wiki.openstack.org/wiki/GSoC2014#Communication
 [3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
 [4]
 https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Daniel P. Berrange
On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:
 Hello folks,
 
 A number of OpenStack and related projects have a need to perform
 operations inside VMs running on OpenStack. A natural solution would
 be an agent running inside the VM and performing tasks.
 
 One of the key questions here is how to communicate with the agent. An
 idea which was discussed some time ago is to use oslo.messaging for
 that. That is an RPC framework - exactly what is needed. You can use different
 transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
 connectivity your OpenStack networking can provide. At the same time
 there is a number of things to consider, like networking, security,
 packaging, etc.
 
 So, messaging people, what is your opinion on that idea? I've already
 raised that question in the list [1], but seems like not everybody who
 has something to say participated. So I am resending with the
 different topic. For example, yesterday we started discussing security
 of the solution in the openstack-oslo channel. Doug Hellmann at the
 start raised two questions: is it possible to separate different
 tenants or applications with credentials and ACL so that they use
 different queues? My opinion is that it is possible using the RabbitMQ/Qpid
 management interface: for each application we can automatically create
 a new user with permission to access only her queues. Another question
 raised by Doug is how to mitigate a DoS attack coming from one tenant
 so that it does not affect another tenant. The thing is, though
 different applications will use different queues, they are going to
 use a single broker.

Looking at it from the security POV, I'd absolutely not want to
have any tenant VMs connected to the message bus that openstack
is using between its hosts. Even if you have security policies
in place, the inherent architectural risk of such a design is
just far too great. One small bug or misconfiguration and it
opens the door to a guest owning the entire cloud infrastructure.
Any channel between a guest and host should be isolated per guest,
so there's no possibility of guest messages finding their way out
to either the host or to other guests.

If there was still a desire to use oslo.messaging, then at the
very least you'd want a completely isolated message bus for guest
comms, with no connection to the message bus used between hosts.
Ideally the message bus would be separate per guest too, which
means it ceases to be a bus really - just a point-to-point link
between the virt host + guest OS that happens to use the oslo.messaging
wire format.
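
As a sketch of that last option, using the oslo.messaging API of the time
(broker URL, topic, server name and method are all hypothetical):

    from oslo.config import cfg
    from oslo import messaging

    # Transport dedicated to guest communication: a different broker/vhost
    # from the bus the hosts use among themselves.
    guest_transport = messaging.get_transport(
        cfg.CONF, url="rabbit://agent:secret@guest-broker:5672/guest-vhost")

    # One target per guest keeps the link effectively point-to-point.
    target = messaging.Target(topic="guest-agent", server="vm-1234")
    client = messaging.RPCClient(guest_transport, target)

    def run_command(ctxt, cmd):
        client.cast(ctxt, "run_command", cmd=cmd)  # fire-and-forget RPC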

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] graduation review meeting

2014-03-06 Thread Kurt Griffiths
Team, we will be discussing Marconi graduation from incubation in a couple 
weeks at the TC meeting, March 18th at 20:00 UTC
(http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140318T20&p1=1440).

It would be great to have as many people there as possible to help answer 
questions, etc.

Thanks!

Kurt G. | @kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Andrew Woodward
For 59148 patch set 23, we nearly merged and had +2s from Joe Gordon
and Daniel Berrange, so it appears to have been quite close.
For 59149 we might not be so close; Daniel, can you comment further on
whether you see this landing in the next few days?

On Thu, Mar 6, 2014 at 5:56 AM, Russell Bryant rbry...@redhat.com wrote:
 On 03/06/2014 03:20 AM, Andrew Woodward wrote:
 I'd like to request an FFE for the remaining patches in the Ephemeral
 RBD image support chain

 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/

 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.

 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.

 This is needed as it implements an essential functionality that has
 been missing in the RBD driver and this will become the second release
 it's been attempted to be merged into.

 It's not a trivial change, and it doesn't appear that it was super close
 to merging based on review history.

 Are there two nova-core members interested and willing to review this to
 get it merged ASAP?  If so, could you comment on how close you think it is?

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
If google has done it, Google did it right!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-06 Thread Steven Dake

On 03/06/2014 03:15 AM, Thierry Carrez wrote:

Steven Dake wrote:

My general take is workflow would fit in the Orchestration program, but
not be integrated into the heat repo specifically.  It would be a
different repo, managed by the same orchestration program just as we
have heat-cfntools and other repositories.  Figuring out who makes up the
core team responsible for a program's individual repositories is the most
difficult aspect of making such a merge.  For example, I'd not want a
bunch of folks from Murano to +2/+A heat-specific repos until they
understood the code base in detail, or at least the broad architecture.
I think the same thing applies in reverse from the Murano perspective.
Ideally folks that are core on a specific program would need to figure
out how to broadly review each repo (meaning the heat devs would have to
come up to speed on murano and murano devs would have to come up to speed
on heat).  Learning a new code base is a big commitment for an already
overtaxed core team.

Being in the same program means you share the same team and PTL, not
necessarily that all projects under the program have the same core
review team. So you could have different core reviewers for both
(although I'd encourage the core for ones become core for the other,
since it will facilitate behaving like a coherent team). You could also
have a single core team with clear expectations set (do not approve
changes for code you're not familiar with).

This may be possible with jenkins permissions, but what I'd like to see
is a way for people familiar with each specific project (e.g. heat or
workflow) to be graduated to core for that project.  An implicit
expectation of "do not approve" doesn't totally fit, because at some
point we may want to give those folks the ability to approve via a core
nomination (because they have met the core requirements) for either heat
or workflow.  Without a way of nominating someone for core on a specific
project (within a specific program), the poor developer has no way to
know when they have officially been recognized by the core team as an
actual core member.


I agree folks in one program need to behave as a coherent team for the
Orchestration program to be successful, which means a big commitment
from the existing orchestration program core members (currently
heat-core) to come up to speed on, for example, the workflow code base
and community (and vice versa).


I'm a bit confused as well as to how an incubated project would be
differentiated from an integrated project in one program.  This may have
already been discussed by the TC.  For example, Red Hat doesn't
officially support incubated projects, but we officially support (with
our full sales/training/documentation/support plus a whole bunch of
other Red Hat internalisms) integrated projects.  OpenStack vendors need
a way to let customers know (through an upstream page?) what the status
of a project in a specific program is, so we can appropriately set
expectations with the community and customers.


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

