Re: [openstack-dev] [Neutron][ML2]

2014-03-06 Thread Akihiro Motoki
Hi,

I think it is better to continue the discussion here. It is a good log :-)

Eugene and I talked about a related topic (allowing drivers to load
extensions) at the Icehouse Summit,
but I could not find enough time to work on it during Icehouse.
I am still interested in implementing it and will register a blueprint on it.

The etherpad from the Icehouse summit has our baseline thoughts on how to achieve it:
https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
I hope it is a good starting point for the discussion.

Thanks,
Akihiro

On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti  wrote:
> Hi Kyle,
>
> Just wanted to clarify: Should I continue using this mailing list to post my
> question/concerns about ML2? Please advise.
>
> Thanks,
> Nader.
>
>
>
> On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
> wrote:
>>
>> Thanks Edgar, I think this is the appropriate place to continue this
>> discussion.
>>
>>
>> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana  wrote:
>>>
>>> Nader,
>>>
>>> I would encourage you to first discuss the possible extension with the
>>> ML2 team. Robert and Kyle are leading this effort and they have an IRC meeting
>>> every week:
>>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>>
>>> Bring your concerns to this meeting and get the right feedback.
>>>
>>> Thanks,
>>>
>>> Edgar
>>>
>>> From: Nader Lahouti 
>>> Reply-To: OpenStack List 
>>> Date: Thursday, March 6, 2014 12:14 PM
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Neutron][ML2]
>>>
>>> Hi Aaron,
>>>
>>> I appreciate your reply.
>>>
>>> Here are some more details on what I'm trying to do:
>>> I need to add a new attribute to the network resource using extensions
>>> (i.e. a network config profile) and use it in the mechanism driver (in
>>> create_network_precommit/postcommit).
>>> If I use the current implementation of Ml2Plugin, when a call is made to
>>> the mechanism driver's create_network_precommit/postcommit, the new
>>> attribute is not included in the 'mech_context'.
>>> Here is code from Ml2Plugin:
>>> class Ml2Plugin(...):
>>> ...
>>>def create_network(self, context, network):
>>> net_data = network['network']
>>> ...
>>> with session.begin(subtransactions=True):
>>> self._ensure_default_security_group(context, tenant_id)
>>> result = super(Ml2Plugin, self).create_network(context,
>>> network)
>>> network_id = result['id']
>>> ...
>>> mech_context = driver_context.NetworkContext(self, context,
>>> result)
>>> self.mechanism_manager.create_network_precommit(mech_context)
>>>
>>> I also need to include the new extension in _supported_extension_aliases.
>>>
>>> So, to avoid changes in the existing code, I was going to create my own
>>> plugin (which will be very similar to Ml2Plugin) and use it as core_plugin.
>>>
>>> Please advise on the right way to implement that.
>>>
>>> Regards,
>>> Nader.
>>>
>>>
>>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen 
>>> wrote:

 Hi Nader,

 Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
 plugin in another. I'm guessing you probably want to write a driver that
 ML2 can use, though it's hard to tell from the information you've
 provided what you're trying to do.

 Best,

 Aaron


 On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti 
 wrote:
>
> Hi All,
>
> I have a question regarding ML2 plugin in neutron:
> My understanding is that, 'Ml2Plugin' is the default core_plugin for
> neutron ML2. We can use either the default plugin or our own plugin (i.e.
> my_ml2_core_plugin, which inherits from Ml2Plugin) and use it as
> core_plugin.
>
> Is my understanding correct?
>
>
> Regards,
> Nader.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> ___ OpenStack-dev mailing
>>> list OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
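The subclassing approach Nader describes above can be sketched in a few lines. This is a minimal stand-alone sketch, not working Neutron code: the stub `Ml2Plugin` stands in for `neutron.plugins.ml2.plugin.Ml2Plugin`, and the names `MyMl2Plugin`, `config_profile`, and `config-profile` are hypothetical. The point is the two things the thread mentions: advertising the new extension alias, and re-attaching the extension attribute so it reaches the mechanism driver's pre/postcommit context.

```python
class Ml2Plugin(object):
    """Stand-in for neutron.plugins.ml2.plugin.Ml2Plugin (simplified)."""
    _supported_extension_aliases = ["provider", "external-net"]

    def create_network(self, context, network):
        # The core plugin persists only the attributes it knows about,
        # so vendor extension attributes are dropped here.
        return {k: v for k, v in network["network"].items()
                if k in ("name", "admin_state_up")}


class MyMl2Plugin(Ml2Plugin):
    # Advertise the vendor extension alias on top of the inherited ones.
    _supported_extension_aliases = (
        Ml2Plugin._supported_extension_aliases + ["config-profile"])

    def create_network(self, context, network):
        net_data = network["network"]
        profile = net_data.get("config_profile")
        result = super(MyMl2Plugin, self).create_network(context, network)
        # Re-attach the extension attribute so it would be present in the
        # result handed to the mechanism driver's pre/postcommit context.
        result["config_profile"] = profile
        return result
```

In real Neutron code the re-attachment would happen before `NetworkContext` is built and `create_network_precommit` is called, which is exactly why a plain subclass (rather than a mechanism driver alone) is being discussed.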

_

[openstack-dev] openstack-swift put performance

2014-03-06 Thread Ivan Pustovalov
HI!
I have a cluster of 5 nodes with 3 replicas. All of the services (proxy,
account, object, and container) are installed on each server, and I have
5 of these servers.
I send PUT object requests from one testing thread and measure the
client response time from the cluster, and the results did not satisfy me.
While investigating the TCP traffic, I found time lost waiting for HTTP 100
(Continue) from the object servers: 10-15 ms on each, and 10 ms on the proxy
while checking quorum.

In my case, users can put small objects (e.g. 16 kbytes) into the cloud, and
I expect a load of 2000 requests per second. This time loss
significantly reduces cloud performance.
How can I reduce this time loss, and what are the best practices for tuning?
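To see why the per-request waits matter at the target rate, it helps to put numbers on the measured samples. The sketch below is plain arithmetic on hypothetical latency measurements (it does not touch any Swift API); the figures in the example are taken from the post.

```python
def latency_stats(samples_ms):
    """Return (mean, p95) of a list of latency samples in milliseconds."""
    ordered = sorted(samples_ms)
    mean = sum(ordered) / float(len(ordered))
    # Nearest-rank 95th percentile.
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return mean, ordered[idx]

def max_serial_rate(mean_latency_ms):
    """Requests/second achievable by one serial client thread."""
    return 1000.0 / mean_latency_ms

# With ~10 ms waits at three object servers plus ~10 ms at the proxy,
# a single PUT costs roughly 40 ms, so one serial thread tops out at
# about 25 req/s -- far below a 2000 req/s target. That gap has to be
# closed by client concurrency and/or by eliminating the waits.
mean, p95 = latency_stats([38, 40, 42, 45])
rate = max_serial_rate(mean)
```

This is why tuning advice for this workload usually centers on many concurrent client connections and on proxy/object server worker counts, rather than on shaving a single request.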

-- 
Regards, Ivan Pustovalov.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-06 Thread Yuriy Taraday
Hello.


On Fri, Mar 7, 2014 at 10:34 AM, 黎林果  wrote:
>
> 2014-03-07 *11:55:49*  the sleep time = past time + 30
>

With that, eventlet doesn't break its promise of waking your greenthread
after at least 30 seconds. Have you tried doing the same test, but with
moving the clock forwards instead of backwards?

All in all it sounds like an eventlet bug. I'm not sure how it can be dealt
with though.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-06 Thread Nader Lahouti
Hi Kyle,

Just wanted to clarify: Should I continue using this mailing list to post
my question/concerns about ML2? Please advise.

Thanks,
Nader.



On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery wrote:

> Thanks Edgar, I think this is the appropriate place to continue this
> discussion.
>
>
> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana  wrote:
>
>> Nader,
>>
>> I would encourage you to first discuss the possible extension with the
>> ML2 team. Robert and Kyle are leading this effort and they have an IRC
>> meeting every week:
>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>
>> Bring your concerns to this meeting and get the right feedback.
>>
>> Thanks,
>>
>> Edgar
>>
>> From: Nader Lahouti 
>> Reply-To: OpenStack List 
>> Date: Thursday, March 6, 2014 12:14 PM
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron][ML2]
>>
>> Hi Aaron,
>>
>> I appreciate your reply.
>>
>> Here are some more details on what I'm trying to do:
>> I need to add a new attribute to the network resource using extensions
>> (i.e. a network config profile) and use it in the mechanism driver (in
>> create_network_precommit/postcommit).
>> If I use the current implementation of Ml2Plugin, when a call is made to
>> the mechanism driver's create_network_precommit/postcommit, the new
>> attribute is not included in the 'mech_context'.
>> Here is code from Ml2Plugin:
>> class Ml2Plugin(...):
>> ...
>>def create_network(self, context, network):
>> net_data = network['network']
>> ...
>> with session.begin(subtransactions=True):
>> self._ensure_default_security_group(context, tenant_id)
>> result = super(Ml2Plugin, self).create_network(context,
>> network)
>> network_id = result['id']
>> ...
>> mech_context = driver_context.NetworkContext(self, context,
>> result)
>> self.mechanism_manager.create_network_precommit(mech_context)
>>
>> I also need to include the new extension in _supported_extension_aliases.
>>
>> So, to avoid changes in the existing code, I was going to create my own
>> plugin (which will be very similar to Ml2Plugin) and use it as core_plugin.
>>
>> Please advise on the right way to implement that.
>>
>> Regards,
>> Nader.
>>
>>
>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen wrote:
>>
>>> Hi Nader,
>>>
>>> Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
>>> plugin in another. I'm guessing you probably want to write a driver that
>>> ML2 can use, though it's hard to tell from the information you've provided
>>> what you're trying to do.
>>>
>>> Best,
>>>
>>> Aaron
>>>
>>>
>>> On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti 
>>> wrote:
>>>
 Hi All,

 I have a question regarding ML2 plugin in neutron:
 My understanding is that, 'Ml2Plugin' is the default core_plugin for
 neutron ML2. We can use either the default plugin or our own plugin (i.e.
 my_ml2_core_plugin, which inherits from Ml2Plugin) and use it as
 core_plugin.

 Is my understanding correct?


 Regards,
 Nader.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___ OpenStack-dev mailing
>> list OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Chris Behrens

On Mar 6, 2014, at 11:09 AM, Russell Bryant  wrote:
[…]
> I think a dedicated git repo for this makes sense.
> openstack/nova-blueprints or something, or openstack/nova-proposals if
> we want to be a bit less tied to launchpad terminology.

+1 to this whole idea.. and we definitely should have a dedicated repo for 
this. I’m indifferent to its name. :)  Either one of those works for me.

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-06 Thread 黎林果
Hi stackers,

I have done a test like this:

test1.py
import time

def sleep_test():
print time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))
time.sleep(30)
print time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))

sleep_test()

test2.py
import time
import eventlet

def sleep_test():
print time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))
time.sleep(30)
print time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))

eventlet.monkey_patch()
sleep_test()

While the script is sleeping, I set the system time to a past time.
test1.py's result:
2014-03-07 11:56:21
2014-03-07 11:52:17  the sleep time = 30
but the test2.py result:
2014-03-07 11:55:19
2014-03-07 *11:55:49*  the sleep time = past time + 30

How can we deal with this difference?

Thanks!
Lee
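The behavior Lee observes comes from measuring sleeps against the wall clock, which jumps when the system time is changed. One way to make elapsed-time measurements immune to clock changes is a monotonic clock; the sketch below uses `time.monotonic()` (Python 3.3+; in the Python 2 era this discussion dates from, a backport such as the `monotonic` package would have been needed). This illustrates the clock-choice issue only; it does not patch or fix eventlet itself.

```python
import time

# time.time() follows the wall clock and jumps when the system time is
# changed; time.monotonic() can never go backwards, so it is the right
# clock for measuring sleep/elapsed durations.
start_mono = time.monotonic()
time.sleep(0.2)
elapsed_mono = time.monotonic() - start_mono
# elapsed_mono is ~0.2 s regardless of any wall-clock change made
# while the sleep was in progress.
```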
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-06 Thread Dmitri Zimine
I just moved the sample to Git; let's leverage git review for specific comments 
on the syntax. 

https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3

DZ> 

On Mar 6, 2014, at 10:36 PM, Dmitri Zimine  wrote:

> Folks, thanks for the input! 
> 
> @Joe: 
> 
> Hopefully Renat covered the differences.  Yet I am interested in how the same 
> workflow can be expressed as Salt state(s) or Ansible playbooks. Can you (or 
> someone else who knows them well) take a stab? 
> 
> 
> @Joshua
> I am still new to Mistral and learning, but I think it _is_ relevant to 
> taskflow. Should we meet, and you help me catch up? Thanks! 
> 
> @Sandy:
> Aaahr, I used the "D" word?!  :) I keep on arguing that a YAML workflow 
> representation doesn't make a DSL. 
> 
> And YES to the object model first to define the workflow, with 
> YAML/JSON/PythonDSL/what-else as a syntax to build it. We are having these 
> discussions on another thread and reviews. 
> 
>> Basically, in order to make a grammar expressive enough to work across a
>> web interface, we essentially end up writing a crappy language. Instead,
>> we should focus on the callback hooks to something higher level to deal
>> with these issues. Mistral should just say "I'm done this task, what
>> should I do next?" and the callback service can make decisions on where
>> in the graph to go next.
> 
> There must be some misunderstanding. Mistral _does_ follow the AWS / BPEL 
> engines' approach: it is both doing "I'm done this task, what should I do 
> next?" (executor) and the "callback service" (engine that coordinates the 
> flow and keeps the state), like the decider and activity workers in AWS 
> Simple Workflow.
> 
> Engine maintains the state. Executors run tasks. Object model describes 
> workflow as a graph of tasks with transitions, conditions, etc. YAML is one 
> way to define a workflow. Nothing controversial :) 
> 
> @all:
> 
> Whether one writes Python code or uses YAML depends on the user. There are 
> good arguments for YAML. But if it's crappy, it loses. We want to see how it 
> feels to write it. To me, mixed feelings so far, but promising. What do you 
> guys think?
> 
> Comments welcome here: 
> https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3
> 
> 
> DZ> 
> 
> 
> On Mar 6, 2014, at 10:41 AM, Sandy Walsh  wrote:
> 
>> 
>> 
>> On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
>>> IMO, it looks not bad (sorry, I’m biased too) even now. Keep in mind this 
>>> is not the final version, we keep making it more expressive and concise.
>>> 
>>> As for killer object model it’s not 100% clear what you mean. As always, 
>>> devil in the details. This is a web service with all the consequences. I 
>>> assume what you call “object model” here is nothing else but a python 
>>> binding for the web service which we’re also working on. Custom python 
>>> logic you mentioned will also be possible to easily integrate. Like I said, 
>>> it’s still a pilot stage of the project.
>> 
>> Yeah, the REST aspect is where the "tricky" part comes in :)
>> 
>> Basically, in order to make a grammar expressive enough to work across a
>> web interface, we essentially end up writing a crappy language. Instead,
>> we should focus on the callback hooks to something higher level to deal
>> with these issues. Mistral should just say "I'm done this task, what
>> should I do next?" and the callback service can make decisions on where
>> in the graph to go next.
>> 
>> Likewise with things like sending emails from the backend. Mistral
>> should just call a webhook and let the receiver deal with "active
>> states" as they choose.
>> 
>> Which is why modelling this stuff in code is usually better, and
>> why I'd lean towards the TaskFlow approach to the problem. They're
>> tackling this from a library perspective first and then (possibly)
>> turning it into a service. Just seems like a better fit. It's also the
>> approach taken by Amazon Simple Workflow and many BPEL engines.
>> 
>> -S
>> 
>> 
>>> Renat Akhmerov
>>> @ Mirantis Inc.
>>> 
>>> 
>>> 
>>> On 06 Mar 2014, at 22:26, Joshua Harlow  wrote:
>>> 
 That sounds a little similar to what taskflow is trying to do (I am of 
 course biased).
 
 I agree with letting the native language implement the basics 
 (expressions, assignment...) and then building the "domain" on top of that. 
 Just seems more natural IMHO, and is similar to what linq (in c#) has done.
 
 My 3 cents.
 
 Sent from my really tiny device...
 
> On Mar 6, 2014, at 5:33 AM, "Sandy Walsh"  
> wrote:
> 
> DSL's are tricky beasts. On one hand I like giving a tool to
> non-developers so they can do their jobs, but I always cringe when the
> DSL reinvents the wheel for basic stuff (compound assignment
> expressions, conditionals, etc).
> 
> YAML isn't really a DSL per se, in the sense that it has no language
> constructs. As compared to a 

Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-06 Thread Dmitri Zimine
Folks, thanks for the input! 

@Joe: 

Hopefully Renat covered the differences.  Yet I am interested in how the same 
workflow can be expressed as Salt state(s) or Ansible playbooks. Can you (or 
someone else who knows them well) take a stab? 


@Joshua
I am still new to Mistral and learning, but I think it _is_ relevant to 
taskflow. Should we meet, and you help me catch up? Thanks! 

@Sandy:
Aaahr, I used the "D" word?!  :) I keep on arguing that a YAML workflow 
representation doesn't make a DSL. 

And YES to the object model first to define the workflow, with 
YAML/JSON/PythonDSL/what-else as a syntax to build it. We are having these 
discussions on another thread and reviews. 

> Basically, in order to make a grammar expressive enough to work across a
> web interface, we essentially end up writing a crappy language. Instead,
> we should focus on the callback hooks to something higher level to deal
> with these issues. Mistral should just say "I'm done this task, what
> should I do next?" and the callback service can make decisions on where
> in the graph to go next.

There must be some misunderstanding. Mistral _does_ follow the AWS / BPEL 
engines' approach: it is both doing "I'm done this task, what should I do 
next?" (executor) and the "callback service" (engine that coordinates the 
flow and keeps the state), like the decider and activity workers in AWS 
Simple Workflow.

Engine maintains the state. Executors run tasks. Object model describes 
workflow as a graph of tasks with transitions, conditions, etc. YAML is one way 
to define a workflow. Nothing controversial :) 

@all:

Whether one writes Python code or uses YAML depends on the user. There are good 
arguments for YAML. But if it's crappy, it loses. We want to see how it feels 
to write it. To me, mixed feelings so far, but promising. What do you guys 
think?

Comments welcome here: 
https://github.com/dzimine/mistral-workflows/commit/d8c4a8c845e9ca49f6ea94362cef60489f2a46a3


DZ> 


On Mar 6, 2014, at 10:41 AM, Sandy Walsh  wrote:

> 
> 
> On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
>> IMO, it looks not bad (sorry, I’m biased too) even now. Keep in mind this is 
>> not the final version, we keep making it more expressive and concise.
>> 
>> As for killer object model it’s not 100% clear what you mean. As always, 
>> devil in the details. This is a web service with all the consequences. I 
>> assume what you call “object model” here is nothing else but a python 
>> binding for the web service which we’re also working on. Custom python logic 
>> you mentioned will also be possible to easily integrate. Like I said, it’s 
>> still a pilot stage of the project.
> 
> Yeah, the REST aspect is where the "tricky" part comes in :)
> 
> Basically, in order to make a grammar expressive enough to work across a
> web interface, we essentially end up writing a crappy language. Instead,
> we should focus on the callback hooks to something higher level to deal
> with these issues. Mistral should just say "I'm done this task, what
> should I do next?" and the callback service can make decisions on where
> in the graph to go next.
> 
> Likewise with things like sending emails from the backend. Mistral
> should just call a webhook and let the receiver deal with "active
> states" as they choose.
> 
> Which is why modelling this stuff in code is usually better, and
> why I'd lean towards the TaskFlow approach to the problem. They're
> tackling this from a library perspective first and then (possibly)
> turning it into a service. Just seems like a better fit. It's also the
> approach taken by Amazon Simple Workflow and many BPEL engines.
> 
> -S
> 
> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> 
>> 
>> On 06 Mar 2014, at 22:26, Joshua Harlow  wrote:
>> 
>>> That sounds a little similar to what taskflow is trying to do (I am of 
>>> course biased).
>>> 
>>> I agree with letting the native language implement the basics (expressions, 
>>> assignment...) and then building the "domain" on top of that. Just seems 
>>> more natural IMHO, and is similar to what linq (in c#) has done.
>>> 
>>> My 3 cents.
>>> 
>>> Sent from my really tiny device...
>>> 
 On Mar 6, 2014, at 5:33 AM, "Sandy Walsh"  
 wrote:
 
 DSL's are tricky beasts. On one hand I like giving a tool to
 non-developers so they can do their jobs, but I always cringe when the
 DSL reinvents the wheel for basic stuff (compound assignment
 expressions, conditionals, etc).
 
 YAML isn't really a DSL per se, in the sense that it has no language
 constructs. As compared to a Ruby-based DSL (for example) where you
 still have Ruby under the hood for the basic stuff and extensions to the
 language for the domain-specific stuff.
 
 Honestly, I'd like to see a killer object model for defining these
 workflows as a first step. What would a python-based equivalent of that
 real-world workflow look like? Then we can ask ourselves, does the DSL
 make this better?

Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-06 Thread Joe Gordon
On Thu, Mar 6, 2014 at 8:24 PM, Matt Riedemann
 wrote:
>
>
> On 3/6/2014 8:08 PM, Matt Riedemann wrote:
>>
>>
>>
>> On 3/6/2014 3:46 PM, James Carey wrote:
>>>
>>>  Please consider an FFE for i18n Message improvements:
>>> BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages
>>>
>>>  The base enablement for lazy translation has already been sync'd
>>> from oslo.   This patch was to enable lazy translation support in Nova.
>>>   It is titled re-enable lazy translation because this was enabled
>>> during Havana but was pulled due to issues that have since been resolved.
>>>
>>>  In order to enable lazy translation it is necessary to do the
>>> following things:
>>>
>>>(1) Fix a bug in oslo with respect to how keywords are extracted from
>>> the format strings when saving replacement text for use when the message
>>> translation is done.   This is
>>> https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
>>> on a fix for in oslo.  Once that is complete it will need to be sync'd
>>> into nova.
>>>
>>>(2) Remove concatenation (+) of translatable messages.  The current
>>> class that is used to hold the translatable message
>>> (gettextutils.Message) does not support concatenation.  There were a few
>>> cases in Nova where this was done and they are converted to other means
>>> of combining the strings in:
>>> https://review.openstack.org/#/c/78095 - Remove use of concatenation on
>>> messages
>>>
>>>(3) Remove the use of str() on exceptions.  The intent of this is to
>>> return the message contained in the exception, but these messages may
>>> contain unicode, so str cannot be used on them and gettextutils.Message
>>> enforces this.  Thus these need
>>> to either be removed and allow python formatting to do the right thing,
>>> or changed to unicode().  Since unicode() will change to str() in Py3,
>>> the forward compatible six.text_type() is used instead.  This is done in:
>>> https://review.openstack.org/#/c/78096 - Remove use of str() on exceptions
>>>
>>>(4) The addition of the call that enables the use of lazy messages.
>>>   This is in:
>>> https://review.openstack.org/#/c/73706 - Re-enable lazy translation.
>>>
>>>  Lazy translation has been enabled in the other projects so it would
>>> be beneficial to be consistent with the other projects with respect to
>>> message translation.  I have tested that the changes in (2) and (3) work
>>> when lazy translation is not enabled.  Thus if a problem is found, the
>>> two line change in (4) could be removed to get to the previous behavior.
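Step (3) above is worth a concrete illustration: on Python 2, `str()` on an exception whose message contains non-ASCII unicode raises `UnicodeEncodeError`, while `six.text_type` (`unicode` on Python 2, `str` on Python 3) handles it. This is an editor's sketch, not Nova code: `FakeTranslatableError` is a stand-in for an exception carrying a `gettextutils.Message`, and the `try`/`except` fallback is only there so the sketch runs even without `six` installed.

```python
try:
    from six import text_type  # what the patches described above use
except ImportError:
    text_type = str  # Python 3 fallback for this sketch

class FakeTranslatableError(Exception):
    """Stand-in for a Nova exception holding a translatable message."""
    pass

exc = FakeTranslatableError(u"\u78c1\u76d8\u5df2\u6ee1")  # non-ASCII message
msg = text_type(exc)  # safe on both Python 2 and 3; str(exc) is not on Py2
```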
>>>
>>>  I've been talking to Matt Riedemann and Dan Berrange about this.
>>>   Matt has agreed to be a sponsor.
>>>
>>> --Jim Carey
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Jim,
>>
>> Post back here with the link to the oslo-incubator fix for that bug when
>> you have it available, then we can look at this a bit more.
>>
>
> The oslo patch is here [1].  The bug report has a nice analysis of the
> problem and how H501 makes it so locals() doesn't need to be handled
> anymore.
>
> If this could get into oslo quickly it could be synced to nova and the
> i18n-messages patches would be rebased on top of it.
>
> As Jim pointed out, if there was a problem with enabling lazy translation in
> nova it'd be a trivial change to disable it again.
>
> There was concern raised in IRC today about wanting a Tempest scenario test
> to also hit this code, something along the lines of passing zh_CN through
> the request to make sure nothing blows up.  I think that's reasonable but
> we'd probably need some help from the QA team in figuring out exactly what
> needs to be run there.  I don't have much experience with the scenario
> tests, I just know their main purpose is to test inter-service interaction.
>
> [1] https://review.openstack.org/#/c/78806/
>


Given that there are testing requirements for this and the fix hasn't
landed in oslo-incubator yet, I am -1 on this.

>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-06 Thread Joe Gordon
On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
> Hi Joe,
> For example, I used to use a private cloud system, which would calculate
> charges bi-weekly, and its charging formula looked like "Total_charge =
> Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
> Volume_number*C4".  Those Instance/Image/Volume numbers are the counts of
> those objects that the user created within these two weeks. It also has
> quotas to limit total image size and total volume size. The formula is not
> very exact, but you can see that it regards each of my 'create' operations as
> a 'ticket', and will charge all those tickets, plus the instance duration

Charging for VM creation is not very cloud-like.  Cloud
instances should be treated as ephemeral and something that you can
throw away and recreate at any time.  Additionally, a cloud should charge
for resources used (instance CPU hours, network load, etc.), and not API
calls (at least not in any meaningful amount).

> fee. In order to reduce the expense of my department, I am asked not to
> create instances very frequently, and not to create too many images and
> volume. The image quota is not very big. And I would never be permitted to
> exceed the quota, since it requests additional dollars.
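The billing scheme quoted above can be written out directly. The rates `c1`..`c4` and the counts below are hypothetical example values (the post gives no real numbers); the point is that each 'create' operation adds a fixed per-object charge on top of the duration-based charge.

```python
def total_charge(instance_number, total_instance_hours,
                 image_number, volume_number,
                 c1, c2, c3, c4):
    """The formula from the post: per-object 'tickets' plus duration."""
    return (instance_number * c1 + total_instance_hours * c2 +
            image_number * c3 + volume_number * c4)

# One extra instance create costs c1 even if total running time stays the
# same -- which is why this scheme discourages the usual cloud pattern of
# freely deleting and recreating instances.
charge = total_charge(instance_number=3, total_instance_hours=100,
                      image_number=2, volume_number=1,
                      c1=5.0, c2=0.1, c3=2.0, c4=1.0)
```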
>
>
> On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon  wrote:
>>
>> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
>> > Hi Joe,
>> > If we assume the user is willing to create a new instance, the workflow
>> > you describe is exactly correct. However, what I am assuming is that the
>> > user
>> > is NOT willing to create a new instance. If Nova can revert the existing
>> > instance, instead of creating a new one, it will become the alternative
>> > way
>> > utilized by those users who are not allowed to create a new instance.
>> > Both paths lead to the target. I think we can not assume all the people
>> > should walk through path one and should not walk through path two. Maybe
>> > creating new instance or adjusting the quota is very easy in your point
>> > of
>> > view. However, the real use case is often limited by business process.
>> > So I
>> > think we may need to consider that some users cannot, or are not allowed,
>> > to create a new instance under specific circumstances.
>> >
>>
>> What sort of circumstances would prevent someone from deleting and
>> recreating an instance?
>>
>> >
>> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon 
>> > wrote:
>> >>
>> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
>> >> > Hi Joe, my meaning is that cloud users may not want to create new
>> >> > instances
>> >> > or new images, because those actions may require additional approval
>> >> > and
>> >> > additional charging. Or, due to instance/image quota limits, they can
>> >> > not do
>> >> > that. Anyway, from user's perspective, saving and reverting the
>> >> > existing
>> >> > instance will be preferred sometimes. Creating a new instance will be
>> >> > another story.
>> >> >
>> >>
>> >> Are you saying some users may not be able to create an instance at
>> >> all? If so why not just control that via quotas.
>> >>
>> >> Assuming the user has the rights and quota to create one
>> >> instance and one snapshot, your proposed idea is only slightly
>> >> different then the current workflow.
>> >>
>> >> Currently one would:
>> >> 1) Create instance
>> >> 2) Snapshot instance
>> >> 3) Use instance / break instance
>> >> 4) delete instance
>> >> 5) boot new instance from snapshot
>> >> 6) goto step 3
>> >>
>> >> From what I gather you are saying that instead of 4/5 you want the
>> >> user to be able to just reboot the instance. I don't think such a
>> >> subtle change in behavior is worth a whole new API extension.
>> >>
>> >> >
>> >> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon 
>> >> > wrote:
>> >> >>
>> >> >> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
>> >> >> > I think the current snapshot implementation can be a solution
>> >> >> > sometimes,
>> >> >> > but
>> >> >> > it is NOT exact same as user's expectation. For example, a new
>> >> >> > blueprint
>> >> >> > is
>> >> >> > created last week,
>> >> >> >
>> >> >> > https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
>> >> >> > which
>> >> >> > seems a little similar with this discussion. I feel the user is
>> >> >> > requesting
>> >> >> > Nova to create in-place snapshot (not a new image), in order to
>> >> >> > revert
>> >> >> > the
>> >> >> > instance to a certain state. This capability should be very useful
>> >> >> > when
>> >> >> > testing new software or system settings. It seems a short-term
>> >> >> > temporary
>> >> >> > snapshot associated with a running instance for Nova. Creating a
>> >> >> > new
>> >> >> > instance is not that convenient, and may be not feasible for the
>> >> >> > user,
>> >> >> > especially if he or she is using public cloud.
>> >> >> >
>> >> >>
>> >> >> Why isn't it easy to create a new instance from a snapshot?
>> >> >>
>> >> >> >
>> >> >> > On Tue, 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Mark McClain

On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:

> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>> +1
>> 
>> I think if we can have it before the Juno summit, we can take
>> concrete, well thought-out proposals to the community at the summit.
> 
> Unless something has changed starting at the Hong Kong design summit
> (which unfortunately I was not able to attend), the design summits have
> always been a place to gather to *discuss* and *debate* proposed
> blueprints and design specs. It has never been about a gathering to
> rubber-stamp proposals that have already been hashed out in private
> somewhere else.

You are correct that is the goal of the design summit.  While I do think it is 
wise to discuss the next steps with LBaaS at this point in time, I am not a 
proponent of in person mini-design summits.  Many contributors to LBaaS are 
distributed all over the globe, and scheduling a mini summit with short notice 
will exclude valuable contributors to the team.  I’d prefer to see an open 
process with discussions on the mailing list and specially scheduled IRC 
meetings to discuss the ideas.

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [TROVE] Manual Installation Again

2014-03-06 Thread Anne Gentle
On Thu, Mar 6, 2014 at 11:03 PM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:

> I've been looking at setting up Trove manually, and of course the first
> document I stumbled on was:
>
> http://docs.openstack.org/developer/trove/dev/manual_install.html
>
> Now, while this proved to be very handy, there are some points where it is
> wrong, and others where it is errm...lean. So in the spirit of trying to
> improve things here I go.
>
> The wrong concerns the action given to trove-manage in the "Prepare
> Database" section:
>
> $ trove-manage --config-file= image_update mysql
> `nova --os-username trove --os-password trove --os-tenant-name trove
> --os-auth-url http://:5000/v2.0 image-list | awk
> '/trove-image/ {print $2}'`
>
> This should probably be:
>
> $ trove-manage --config-file= datastore_version_update
> mysql mysql-5.5 mysql
> `nova --os-username trove --os-password trove --os-tenant-name trove
> --os-auth-url http://:5000/v2.0 image-list | awk
> '/trove-image/ {print $2}'` 1
>
> ...which is a bit of a mouthful - might be better to break it into 2 steps.
>
>
> The lean area concerns the stuff in "Prepare Image". It seems to me that
> more needs to be done than simply converting to qcow2. After spending a
> while reading stuff in trove-integration/scripts repo I suspect that
> something like following is needed:
>
> 1/ setup relevant os user (e.g trove or stack) for what follows
> 2/ install mysql 5.5 in the image (or arrange it to be installed on 1st
> boot)
> 3/ setup keys so guest can rsync the trove client software (or install it
> in the image to avoid the need)
> 4/ configure the trove guest agent service to start (otherwise db instance
> stays stuck in 'BUILD' state forever)
>
> I note that the trove-integration repo uses diskimage-builder and tripleo
> to do all these mods to the initial base image.
>
> Now I understand that some of this area is gonna be in flux (e.g use of
> first-boot.d in the tripleo elements), but some mention of what
> customizations to the base image are needed would be most excellent.
>

Hi Mark,
Great observations, at the Trove midcycle meetup we identified this as an
area to be documented. I heard at least 2 people want to work on it so I
hope they're drafting furiously and soon! I think it should eventually go
into the Virtual Machine Image Guide, but may just need to be part of their
install guide right away. Thanks for asking, it helps shape the docs.
Anne


>
> regards
>
> Mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Openstack] [TROVE] Manual Installation Again

2014-03-06 Thread Mark Kirkwood

On 07/03/14 18:03, Mark Kirkwood wrote:



The wrong concerns the action given to trove-manage in the "Prepare
Database" section:

$ trove-manage --config-file= image_update mysql
 `nova --os-username trove --os-password trove --os-tenant-name trove
 --os-auth-url http://:5000/v2.0 image-list | awk
'/trove-image/ {print $2}'`

This should probably be:

$ trove-manage --config-file= datastore_version_update
mysql mysql-5.5 mysql
 `nova --os-username trove --os-password trove --os-tenant-name trove
 --os-auth-url http://:5000/v2.0 image-list | awk
'/trove-image/ {print $2}'` 1

...which is a bit of a mouthful - might be better to break it into 2 steps.




...and I got it wrong too - forgot the package arg, sorry:

$ trove-manage --config-file= datastore_version_update 
mysql mysql-5.5 mysql

`nova --os-username trove --os-password trove --os-tenant-name trove
--os-auth-url http://:5000/v2.0 image-list | awk 
'/trove-image/ {print $2}'` mysql-server-5.5 1


Especially in the light of the above I think a less confusing 
presentation would be:


$ nova --os-username trove --os-password trove --os-tenant-name trove
--os-auth-url http://:5000/v2.0 image-list | awk 
'/trove-image/ {print $2}'



$ trove-manage --config-file= datastore_version_update 
mysql mysql-5.5 mysql  mysql-server-5.5 1
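
A concrete sketch of that two-step split, using a captured sample line of
"nova image-list" output so the awk filter can be shown in isolation (the
image ID and the config-file path below are made up for illustration; the
real command pipes the actual "nova ... image-list" output instead):

```shell
# Step 1: capture the Glance image ID (sample table row stands in for
# the real "nova image-list" output here).
SAMPLE_LIST='| 8f2e6f2a-9d5c-4b1e-8c3d-0a1b2c3d4e5f | trove-image | ACTIVE |'
IMAGE_ID=$(printf '%s\n' "$SAMPLE_LIST" | awk '/trove-image/ {print $2}')
echo "$IMAGE_ID"   # prints the captured ID

# Step 2 (not executed here): pass the captured ID to trove-manage:
#   trove-manage --config-file=<path-to-trove.conf> datastore_version_update \
#       mysql mysql-5.5 mysql "$IMAGE_ID" mysql-server-5.5 1
```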





[openstack-dev] [ceilometer] FFE Request: monitoring-network-from-opendaylight

2014-03-06 Thread Yuuichi Fujioka
Hi,

We would like to request FFE for monitoring-network-from-opendaylight.[1][2]
Unfortunately, it was not merged by Icehouse-3.

This is the first driver for bp/monitoring-network (which was merged).[3]
We strongly believe this feature will enhance ceilometer's value.

Many people are interested in SDN, and OpenDaylight is one of the open source
SDN controllers. Information collected from OpenDaylight can be put to
valuable use, e.g. optimizing resource placement and testing routes through
the virtual and physical networks.

Also, this feature is a plugin and does not change core logic,
so we feel it is low risk.

Thus we would like to merge the BP in Icehouse.

Thanks.

[1] https://review.openstack.org/#/c/63890/
[2] 
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-network-from-opendaylight
[3] https://blueprints.launchpad.net/ceilometer/+spec/monitoring-network




Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Anne Gentle
On Thu, Mar 6, 2014 at 12:05 PM, Sean Dague  wrote:

> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
>
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
>
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
>
> 1. create bad blueprint
>

or create a good one with a great template!


> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
>
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
>
> There are plenty of niggly details that would need to be worked out
>
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>

I think this is really worthwhile to try -- and it might offer an
interesting, readable history of decisions made. Funnily enough, it was also
brought up at the Ops Summit. Convergence, cool.

It also goes along with our hope to move API design docs into the repo.

Are other projects up for trying it? The one thing we might still need to
work out is cross-project blueprints and which repo those should live in.
We're doing well on integration; let's be careful about siloing.

Anne


>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
>
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
>
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] stored userdata

2014-03-06 Thread Hiroyuki Eguchi
I'm envisioning a stored userdata feature.
< https://blueprints.launchpad.net/nova/+spec/stored-userdata >

Currently, OpenStack allows a user to execute a script or send a configuration
file when creating an instance by using the --user-data /path/to/filename option.

But in order to use this option, all users must input the userdata every time.
So we need to store the userdata in the database so that users can manage it
more easily.

I'm planning to develop these Nova-APIs.
 - nova userdata-create
 - nova userdata-update
 - nova userdata-delete
 - nova userdata-show
 - nova userdata-list

Users can specify either a userdata_name managed in the Nova DB or a
/path/to/filename in the --user-data option.

 - nova boot --user-data  ...
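
A rough sketch of how the proposed option could resolve its argument (pure
Python, names and semantics are illustrative only, not the proposed Nova
implementation): try the argument as a stored userdata name first, then fall
back to treating it as a file path as today.

```python
# Hypothetical sketch of the proposed lookup, not actual Nova code.
stored_userdata = {}  # stand-in for the proposed Nova DB table

def userdata_create(name, content):
    # corresponds to the proposed "nova userdata-create"
    stored_userdata[name] = content

def resolve_user_data(arg):
    if arg in stored_userdata:        # a userdata_name managed by Nova
        return stored_userdata[arg]
    with open(arg) as f:              # /path/to/filename, as today
        return f.read()

userdata_create("common-init", "#!/bin/sh\necho hello\n")
print(resolve_user_data("common-init"))   # the stored script
```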


If you have any comments or suggestion, please let me know.
And please let me know if there's any discussion about this.


Thanks.
--hiroyuki



[openstack-dev] [Openstack] [TROVE] Manual Installation Again

2014-03-06 Thread Mark Kirkwood
I've been looking at setting up Trove manually, and of course the first 
document I stumbled on was:


http://docs.openstack.org/developer/trove/dev/manual_install.html

Now, while this proved to be very handy, there are some points where it 
is wrong, and others where it is errm...lean. So in the spirit of trying 
to improve things here I go.


The wrong concerns the action given to trove-manage in the "Prepare 
Database" section:


$ trove-manage --config-file= image_update mysql
`nova --os-username trove --os-password trove --os-tenant-name trove
--os-auth-url http://:5000/v2.0 image-list | awk 
'/trove-image/ {print $2}'`


This should probably be:

$ trove-manage --config-file= datastore_version_update 
mysql mysql-5.5 mysql

`nova --os-username trove --os-password trove --os-tenant-name trove
--os-auth-url http://:5000/v2.0 image-list | awk 
'/trove-image/ {print $2}'` 1


...which is a bit of a mouthful - might be better to break it into 2 steps.


The lean area concerns the stuff in "Prepare Image". It seems to me that 
more needs to be done than simply converting to qcow2. After spending a 
while reading stuff in trove-integration/scripts repo I suspect that 
something like following is needed:


1/ setup relevant os user (e.g trove or stack) for what follows
2/ install mysql 5.5 in the image (or arrange it to be installed on 1st 
boot)
3/ setup keys so guest can rsync the trove client software (or install 
it in the image to avoid the need)
4/ configure the trove guest agent service to start (otherwise db 
instance stays stuck in 'BUILD' state forever)
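
The four steps above might consolidate into something like the following
helper. This is a hedged sketch only: the package name, key locations, and
guest-agent service name are assumptions for an Ubuntu-based guest image, and
the real trove-integration elements do considerably more.

```shell
# Hypothetical sketch of steps 1-4; details are assumptions, see above.
prepare_trove_guest_image() {
    useradd -m trove                                  # 1/ os user
    apt-get install -y mysql-server-5.5               # 2/ mysql in the image
    install -d -m 700 /home/trove/.ssh                # 3/ keys so the guest
    cp /root/.ssh/authorized_keys /home/trove/.ssh/   #    can rsync the code
    update-rc.d trove-guestagent defaults             # 4/ agent starts on boot
}
```

(The function is only defined here, not run; it would be executed inside the
image chroot or at first boot.)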


I note that the trove-integration repo uses diskimage-builder and tripleo 
to do all these mods to the initial base image.


Now I understand that some of this area is gonna be in flux (e.g use of 
first-boot.d in the tripleo elements), but some mention of what 
customizations to the base image are needed would be most excellent.


regards

Mark



Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-06 Thread Akihiro Motoki
I wonder why RA needs to be exposed by the security group API.
Does a user need to configure a security group to allow IPv6 RA, or
should it be allowed on the infra side?

In the current implementation DHCP packets are allowed by provider
rule (which is hardcoded in neutron code now).
I think the role of IPv6 RA is similar to DHCP in IPv4. If so, we
don't need to expose RA in security group API.
Am I missing something?

Thanks,
Akihiro

On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng  wrote:
> I created a new blueprint [1] which is triggered by the requirement to allow
> IPv6 Router Advertisement security group rule on compute node in my on-going
> code review [2].
>
> Currently, only security group rule direction, protocol, ethertype and port
> range are supported by neutron security group rule data structure. To allow
> Router Advertisement coming from network node or provider network to VM on
> compute node, we need to specify ICMP type to only allow RA from known hosts
> (network node dnsmasq binded IP or known provider gateway).
>
> To implement this and make the implementation extensible, maybe we can add
> an additional table name "SecurityGroupRuleData" with Key, Value and ID in
> it. For ICMP type RA filter, we can add key="icmp-type" value="134", and
> security group rule to the table. When other ICMP type filters are needed,
> similar records can be stored. This table can also be used for other
> firewall rule key values.
> API change is also needed.
>
> Please let me know your comments about this blueprint.
>
> [1]
> https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
> [2] https://review.openstack.org/#/c/72252/
>
> Thank you!
> Xuhan Peng
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
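
The key/value extension proposed above could behave something like this
sketch (a pure-Python stand-in, not Neutron code; the rule layout and the
matching semantics are assumptions): extra criteria attached to a rule, here
restricting ICMPv6 to Router Advertisement (type 134).

```python
# Stand-in for the proposed SecurityGroupRuleData key/value records.
rule = {
    "direction": "ingress",
    "ethertype": "IPv6",
    "protocol": "icmpv6",
    "data": {"icmp-type": "134"},   # proposed key/value entry
}

def packet_matches(rule, pkt):
    if pkt.get("ethertype") != rule["ethertype"]:
        return False
    if pkt.get("protocol") != rule["protocol"]:
        return False
    # apply the extra key/value criteria, e.g. icmp-type
    return all(pkt.get(k) == v for k, v in rule.get("data", {}).items())

ra = {"ethertype": "IPv6", "protocol": "icmpv6", "icmp-type": "134"}
echo_req = {"ethertype": "IPv6", "protocol": "icmpv6", "icmp-type": "128"}
print(packet_matches(rule, ra), packet_matches(rule, echo_req))  # True False
```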



Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Jay Pipes
On Thu, 2014-03-06 at 19:28 -0800, Josh Durgin wrote:
> On 03/06/2014 05:37 PM, Andrew Woodward wrote:
> > Matt,
> >
> > I'd love to see this too, however I'm not seasoned enough to even know
> > much about how to start implementing that. I'd love some direction,
> > and maybe some support after you guys are done with the pending
> > release.
> 
> We're working on setting up CI with Ceph starting with Cinder.
> Jay Pipes' recent blog posts explaining this process are great:
> 
> http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

Thanks for the shout-out. Andrew, Josh, and anyone else, please feel
free to attend the weekly IRC meeting (#openstack-meeting) at 18:00UTC
on Mondays. We use the hour as a workshop and Q&A session to help those
struggling to set things up.

Best,
-jay




Re: [openstack-dev] [Nova] API weekly meeting

2014-03-06 Thread Jay Pipes
On Fri, 2014-03-07 at 11:15 +1030, Christopher Yeoh wrote:
> Hi,
> 
> I'd like to start a weekly IRC meeting for those interested in
> discussing Nova API issues. I think it would be a useful forum for:
> 
> - People to keep up with what work is going on the API and where its
>   headed. 
> - Cloud providers, SDK maintainers and users of the REST API to provide
>   feedback about the API and what they want out of it.
> - Help coordinate the development work on the API (both v2 and v3)
> 
> If you're interested in attending please respond and include what time
> zone you're in so we can work out the best time to meet.

Very much interested. I'll make room in my schedule to attend most any
time other than 2-6am EST (7-11UTC).

Best,
-jay




Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-06 Thread Matt Riedemann



On 3/6/2014 8:08 PM, Matt Riedemann wrote:



On 3/6/2014 3:46 PM, James Carey wrote:

 Please consider a FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

 The base enablement for lazy translation has already been sync'd
from oslo.   This patch was to enable lazy translation support in Nova.
  It is titled re-enable lazy translation because this was enabled
during Havana but was pulled due to issues that have since been resolved.

 In order to enable lazy translation it is necessary to do the
following things:

   (1) Fix a bug in oslo with respect to how keywords are extracted from
the format strings when saving replacement text for use when the message
translation is done.   This is
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
on a fix for in oslo.  Once that is complete it will need to be sync'd
into nova.

   (2) Remove concatenation (+) of translatable messages.  The current
class that is used to hold the translatable message
(gettextutils.Message) does not support concatenation.  There were a few
cases in Nova where this was done and they are converted to other means
of combining the strings in:
https://review.openstack.org/#/c/78095 - Remove use of concatenation on
messages
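
A minimal stand-in (not oslo's actual class) illustrates why "+" on
translatable messages is disallowed: translation has to act on the whole
format string, not on concatenated fragments, so fragments must be combined
into a single translatable string instead.

```python
# Toy stand-in for gettextutils.Message; oslo's real class is richer.
class Message(str):
    def __add__(self, other):
        raise TypeError("Message objects do not support concatenation")

m = Message("Instance %(id)s not found")
try:
    m + " on host"           # the pattern Nova had to remove
except TypeError:
    # combine into one translatable format string instead
    m = Message("Instance %(id)s not found on host %(host)s")
```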

   (3) Remove the use of str() on exceptions.  The intent of this is to
return the message contained in the exception, but these messages may
contain unicode, so str cannot be used on them and gettextutils.Message
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing,
or changed to unicode().  Since unicode() will change to str() in Py3,
the forward compatible six.text_type() is used instead.  This is done in:
https://review.openstack.org/#/c/78096 - Remove use of str() on exceptions

   (4) The addition of the call that enables the use of lazy messages.
  This is in:
https://review.openstack.org/#/c/73706 - Re-enable lazy translation.

 Lazy translation has been enabled in the other projects so it would
be beneficial to be consistent with the other projects with respect to
message translation.  I have tested that the changes in (2) and (3) work
when lazy translation is not enabled.  Thus if a problem is found, the
two line change in (4) could be removed to get to the previous behavior.

 I've been talking to Matt Riedemann and Dan Berrange about this.
  Matt has agreed to be a sponsor.

--Jim Carey





Jim,

Post back here with the link to the oslo-incubator fix for that bug when
you have it available, then we can look at this a bit more.



The oslo patch is here [1].  The bug report has a nice analysis of the 
problem and how H501 makes it so locals() doesn't need to be handled 
anymore.


If this could get into oslo quickly it could be synced to nova and the 
i18n-messages patches would be rebased on top of it.


As Jim pointed out, if there was a problem with enabling lazy 
translation in nova it'd be a trivial change to disable it again.


There was concern raised in IRC today about wanting a Tempest scenario 
test to also hit this code, something along the lines of passing zh_CN 
through the request to make sure nothing blows up.  I think that's 
reasonable but we'd probably need some help from the QA team in figuring 
out exactly what needs to be run there.  I don't have much experience 
with the scenario tests, I just know their main purpose is to test 
inter-service interaction.


[1] https://review.openstack.org/#/c/78806/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-06 Thread Zhangleiqiang
> get them working. For example, in a devstack VM the only way I can get the
> iSCSI target to show the new size (after an lvextend) is to delete and 
> recreate
> the target, something jgriffiths said he doesn't want to support ;-).

I know a method that can achieve it, though it may need the instance to be
paused first (during step 2 below), but without detaching/reattaching. The
steps are as follows:

1. Extend the LV
2. Refresh the size info in tgtd:
  a) tgtadm --op show --mode target # get the "tid" and "lun_id" properties of 
target related to the lv; the "size" property in output result is still the old 
size before lvextend
  b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id}  # delete 
lun mapping in tgtd
  c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
--backing-store=/dev/cinder-volumes/{lv-name} # re-add lun mapping
  d) tgtadm --op show --mode target #now the "size" property in output result 
is the new size
*PS*:
a) During the procedure, the corresponding device on the compute node won't
disappear. But I am not sure of the result if the instance has I/O on this
volume, so the instance may need to be paused during this procedure.
b) Maybe we can modify tgtadm to support an operation that just "refreshes"
the size of the backing store.

3. Rescan the lun info in compute node: iscsiadm -m node --targetname 
{target_name} -R
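
Step 2 could be wrapped in one helper, assuming the tid/lun_id pair was
already read from "tgtadm --op show --mode target" (step 2a). This is only a
sketch of the delete/re-add sequence above; as noted, pausing instance I/O
around the call may still be necessary.

```shell
# Sketch: refresh a tgtd LUN after lvextend by deleting and re-adding it.
refresh_tgt_lun() {
    local tid=$1 lun_id=$2 backing=$3
    tgtadm --op delete --mode logicalunit --tid="$tid" --lun="$lun_id"
    tgtadm --op new --mode logicalunit --tid="$tid" --lun="$lun_id" \
           --backing-store="$backing"
    tgtadm --op show --mode target    # size should now reflect the lvextend
}
```

(Defined only; it would be invoked as e.g. `refresh_tgt_lun 1 1
/dev/cinder-volumes/<lv-name>` on the storage node.)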

>I also
> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
> etc.).

So far we have focused on volumes based on a *block device*. In this
scenario, we must first extend the volume and then notify the hypervisor;
I think one of the preconditions is to make sure the extend operation will
not affect the I/O in the instance.

However, there is another scenario which may be a little different. For 
*online-extend* virtual disks (qcow2, sparse, etc.) whose backend storage is 
a file system (ext3, nfs, glusterfs, etc.), the current implementation of 
QEMU is as follows:
1. QEMU drains all I/O
2. *QEMU* extends the virtual disk
3. QEMU resumes I/O

The difference is that the *extend* work needs to be done by QEMU rather 
than the cinder driver.
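
From the host side, this QEMU-driven flow is typically triggered through
libvirt's blockresize. A hedged sketch (the domain name, disk target, and
size here are made-up example arguments):

```shell
# Sketch: ask libvirt/QEMU to grow a file-backed disk online.
online_extend_qcow2() {
    local domain=$1 disk=$2 new_size=$3   # e.g. mydomain vda 20G
    virsh blockresize "$domain" "$disk" "$new_size"
}
```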

> Feel free to ping me on IRC (pdmars).

I don't know your time zone, but we can continue the discussion on IRC. :)

--
zhangleiqiang

Best Regards


> -Original Message-
> From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
> Sent: Thursday, March 06, 2014 12:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian)
> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
> online-extend feature to cinder ?
> 
> Hey,
> 
> Sorry I missed this thread a couple of days ago. I am working on a first-pass 
> of
> this and hope to have something soon. So far I've mostly focused on getting
> OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
> with libvirt+kvm+lvm so I'd love some help there if you have ideas about how 
> to
> get them working. For example, in a devstack VM the only way I can get the
> iSCSI target to show the new size (after an lvextend) is to delete and 
> recreate
> the target, something jgriffiths said he doesn't want to support ;-). I also
> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
> etc.). Feel free to ping me on IRC (pdmars).
> 
> Paul
> 
> 
> On Mar 3, 2014, at 8:50 PM, Zhangleiqiang 
> wrote:
> 
> > @john.griffith. Thanks for your information.
> >
> > I have read the BP you mentioned ([1]) and have some rough thoughts about
> it.
> >
> > As far as I know, the corresponding online-extend command for libvirt is
> "blockresize", and for Qemu, the implement differs among disk formats.
> >
> > For the regular qcow2/raw disk file, qemu will take charge of the 
> > drain_all_io
> and truncate_disk actions, but for raw block device, qemu will only check if 
> the
> *Actual* size of the device is larger than current size.
> >
> > I think the former need more consideration, because the extend work is done
> by libvirt, Nova may need to do this first and then notify Cinder. But if we 
> take
> allocation limit of different cinder backend drivers (such as quota,
> nfs_used_ratio, nfs_oversub_ratio, etc) into account, the workflow will be
> more complicated.
> >
> > This scenario is not included by the Item 3 of BP ([1]), as it cannot be 
> > simply
> "just work" or notified by the compute node/libvirt after the volume is
> extended.
> >
> > This regular qcow2/raw disk files are normally stored in file system based
> storage, maybe the Manila project is more appropriate for this scenario?
> >
> >
> > Thanks.
> >
> >
> > [1]:
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
> >
> > --
> > zhangleiqiang
> >
> > Best Regards
> >
> > From: John Griffith [mailto:john.griff...@solidfire.com]
> > Sent: Tuesday, March 04, 2014 1:05 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Cc: Luohao (brian)
> >

Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Josh Durgin

On 03/06/2014 05:37 PM, Andrew Woodward wrote:

Matt,

I'd love to see this too, however I'm not seasoned enough to even know
much about how to start implementing that. I'd love some direction,
and maybe some support after you guys are done with the pending
release.


We're working on setting up CI with Ceph starting with Cinder.
Jay Pipes' recent blog posts explaining this process are great:

http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

Josh


As others have illustrated here, the current RBD support in nova is
effectively useless, and I'd love to see that second sponsor so we Ceph
users don't have to use a hand-patched nova for another release.

On Thu, Mar 6, 2014 at 3:30 PM, Matt Riedemann
 wrote:



On 3/6/2014 2:20 AM, Andrew Woodward wrote:


I'd Like to request A FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements essential functionality that has been
missing from the RBD driver, and this will be the second release in which
we have attempted to merge it.

Andrew
Mirantis
Ceph Community




What would be awesome in Juno is some CI around RBD/Ceph.  I'd feel a lot
more comfortable with this code if we had CI running Tempest against that
type of configuration, just like how we are now requiring 3rd party CI for
virt drivers.

I realize this is tangential but it would make moving these blueprints
through faster so you're not working on it over multiple releases.

Having said that, I'm not signing up for sponsoring this, sorry. :)

--

Thanks,

Matt Riedemann





Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Liuji (Jeremy)
+1

Agree with you. I like this idea a lot.
It makes the blueprint review/discussion better tracked and recorded.
It also makes it convenient for people joining later to learn the design's history.


> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Friday, March 07, 2014 2:05 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review &
> approval
> 
> One of the issues that the Nova team has definitely hit is Blueprint 
> overload. At
> some point there were over 150 blueprints. Many of them were a single
> sentence.
> 
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code shows
> up in the code review. So -1s and -2s on code review are a mix of design and
> code review. A big part of which is that design was never in any way 
> sufficiently
> reviewed before the code started.
> 
> In today's Nova meeting a new thought occurred. We already have Gerrit which
> is good for reviewing things. It gives you detailed commenting abilities, 
> voting,
> and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an etherpad, 
> or
> launchpad + a wiki page) we could do something like follows:
> 
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint 3. iterate in
> gerrit working towards blueprint approval 4. once approved copy back the
> approved text into the blueprint (which should now be sufficiently detailed)
> 
> Basically blueprints would get design review, and we'd be pretty sure we liked
> the approach before the blueprint is approved. This would hopefully reduce the
> late design review in the code reviews that's happening a lot now.
> 
> There are plenty of niggly details that would need to be worked out
> 
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which point, 
> we
> don't review any new items.
> 
> Anyway, plenty of details to be sorted. However we should figure out if the 
> big
> idea has support before we sort out the details on this one.
> 
> Launchpad blueprints will still be used for tracking once things are approved,
> but this will give us a standard way to iterate on that content and get to
> agreement on approach.
> 
>   -Sean
> 
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net



Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Bohai (ricky)
+1

Agree with you. I like this idea a lot.
It makes the blueprint review/discussion better tracked and recorded.
It also makes it convenient for people joining later to learn the design's history.


> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Friday, March 07, 2014 2:05 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review &
> approval
> 
> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
> 
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
> 
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
> 
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
> 
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
> 
> There are plenty of niggly details that would need to be worked out
> 
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
> 
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
> 
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.
> 
>   -Sean
> 
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net



Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread ChangBo Guo
+1. We also need some work to clean up existing blueprints that are not
yet approved. I think it is the responsibility of those blueprints'
drafters to follow the new process.

2014-03-07 2:05 GMT+08:00 Sean Dague :

> One of the issues that the Nova team has definitely hit is Blueprint
> overload. At some point there were over 150 blueprints. Many of them
> were a single sentence.
>
> The results of this have been that design review today is typically not
> happening on Blueprint approval, but is instead happening once the code
> shows up in the code review. So -1s and -2s on code review are a mix of
> design and code review. A big part of which is that design was never in
> any way sufficiently reviewed before the code started.
>
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like follows:
>
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
>
> Basically blueprints would get design review, and we'd be pretty sure we
> liked the approach before the blueprint is approved. This would
> hopefully reduce the late design review in the code reviews that's
> happening a lot now.
>
> There are plenty of niggly details that would need to be worked out
>
>  * what's the basic text / template format of the design to be reviewed
> (probably want a base template for folks to just keep things consistent).
>  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
> Enhancement Proposals), or is it happening in a separate gerrit tree.
>  * are there timelines for blueprint approval in a cycle? after which
> point, we don't review any new items.
>
> Anyway, plenty of details to be sorted. However we should figure out if
> the big idea has support before we sort out the details on this one.
>
> Launchpad blueprints will still be used for tracking once things are
> approved, but this will give us a standard way to iterate on that
> content and get to agreement on approach.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread Zhangleiqiang
Agree with you and thanks for your advice, :)



--
zhangleiqiang

Best Regards

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: Friday, March 07, 2014 12:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection

Just so everyone is aware: Glance supports 'delayed deletes', where image data
is not actually deleted at the time of the request. Glance also has the
concept of 'protected images', which allows setting an image as protected,
preventing it from being deleted until it is intentionally set back to
unprotected. This avoids any actual deletion of prized images.

Perhaps cinder could emulate that behavior or improve upon it for volumes.
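The 'protected images' guard described above is essentially a flag checked at delete time. A minimal sketch of the idea (illustrative names only, not Glance's actual code, which lives in its registry/database layer):

```python
# Minimal sketch of the "protected images" behaviour described above.
# Names are illustrative, not Glance's actual implementation.
class Image(object):
    def __init__(self, image_id, protected=False):
        self.image_id = image_id
        self.protected = protected


class ProtectedImageError(Exception):
    pass


def delete_image(images, image_id):
    image = images[image_id]
    if image.protected:
        # Deletion is refused until the owner flips the flag back off.
        raise ProtectedImageError("image %s is protected" % image_id)
    del images[image_id]


images = {"img-1": Image("img-1", protected=True)}
try:
    delete_image(images, "img-1")
except ProtectedImageError as exc:
    print(exc)                      # -> image img-1 is protected

images["img-1"].protected = False   # intentionally unprotect, then delete
delete_image(images, "img-1")
print("img-1" in images)            # -> False
```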

-Alex

On Thu, Mar 6, 2014 at 8:45 AM, zhangyu (AI) <zhangy...@huawei.com> wrote:
Got it. Many thanks!

Leiqiang, you can take action now :)

From: John Griffith 
[mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 6 March 2014 08:50, zhangyu (AI) <zhangy...@huawei.com> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> to their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and re-cycle that "deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang 
> [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Current OpenStack provides a delete-volume function to the user,
> but it seems there is no protection against accidental deletion.
>
> As we know, the data in a volume may be very important and valuable,
> so it would be better to give users a way to avoid deleting a volume by
> mistake.
>
> For example, we could provide a "safe delete" for volumes:
> The user specifies, when deleting a volume, how long its deletion is
> delayed (i.e. when it is actually deleted).
> Before the volume is actually deleted, the user can cancel the delete
> operation and get the volume back.
> After the specified time, the volume is actually deleted by the system.
>
> Any thoughts? Any advice is welcome.
>
> Best regards.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John
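The delayed-delete scheme proposed in the quoted message could be sketched along these lines (a hypothetical API for illustration, not Cinder's actual code):

```python
import time

# Hypothetical sketch of the delayed ("safe") delete proposed above.
# A delete only records a deadline; the user can cancel until the
# deadline passes, after which a periodic purge really deletes.
class VolumeStore(object):
    def __init__(self):
        self.volumes = {}    # volume_id -> volume data
        self.pending = {}    # volume_id -> deletion deadline (epoch secs)

    def delete(self, volume_id, delay_seconds=7200):
        # Defer the real deletion by delay_seconds (e.g. two hours).
        self.pending[volume_id] = time.time() + delay_seconds

    def cancel_delete(self, volume_id):
        # "Find back" the volume if the deadline has not passed yet.
        return self.pending.pop(volume_id, None) is not None

    def purge_expired(self, now=None):
        # Periodic task: really delete volumes whose grace period ended.
        now = time.time() if now is None else now
        for vid, deadline in list(self.pending.items()):
            if deadline <= now:
                del self.pending[vid]
                del self.volumes[vid]


store = VolumeStore()
store.volumes["vol-1"] = "important data"
store.delete("vol-1", delay_seconds=7200)
print(store.cancel_delete("vol-1"))   # -> True: the volume is safe again
store.purge_expired()
print("vol-1" in store.volumes)       # -> True
```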





Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread Zhangleiqiang
OK. We have proposed a blueprint here.

https://blueprints.launchpad.net/cinder/+spec/volume-delete-protect

Thanks.


--
zhangleiqiang

Best Regards

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 6 March 2014 08:50, zhangyu (AI) <zhangy...@huawei.com> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> to their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and re-cycle that "deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang 
> [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Current OpenStack provides a delete-volume function to the user,
> but it seems there is no protection against accidental deletion.
>
> As we know, the data in a volume may be very important and valuable,
> so it would be better to give users a way to avoid deleting a volume by
> mistake.
>
> For example, we could provide a "safe delete" for volumes:
> The user specifies, when deleting a volume, how long its deletion is
> delayed (i.e. when it is actually deleted).
> Before the volume is actually deleted, the user can cancel the delete
> operation and get the volume back.
> After the specified time, the volume is actually deleted by the system.
>
> Any thoughts? Any advice is welcome.
>
> Best regards.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John



[openstack-dev] [nova] a question about instance snapshot

2014-03-06 Thread Liuji (Jeremy)
Hi, all

Current OpenStack does not seem to support snapshotting an instance
together with its memory and device state.
I searched and found two related blueprints, listed below, but neither
made it into the tree.

[1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
[2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms

In blueprint [1], there is a comment: "
We discussed this pretty extensively on the mailing list and in a design
summit session.
The consensus is that this is not a feature we would like to have in nova.
--russellb "
But I can't find the discussion thread, and I would like to know the
reasoning.
Without memory snapshots, we can't offer users a way to revert an
instance to a checkpoint.

Can anyone who knows the history help me, or give me a hint on how to
find that discussion thread?

I am a newbie to OpenStack, and I apologize if I am missing something
obvious.


Thanks,
Jeremy Liu




Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-06 Thread Deok-June Yi
Hello, Sampth.

> My interest lies in how to evaluate a large number of notifications
> within a short time.
> I thought moving alarms into the pipelines would be a good start.

How about evaluating Synaps for your use case? It evaluates alarms in a Storm
topology (pipeline). Moving alarms into the pipelines would have a big impact
on CM.

see 
http://spcs.github.io/synaps/artifacts/programspec.html#synaps-topology-description

Thank you
June Yi, Samsung SDS.


Re: [openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-06 Thread Matt Riedemann



On 3/6/2014 3:46 PM, James Carey wrote:

 Please consider a FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

 The base enablement for lazy translation has already been sync'd
from oslo.   This patch was to enable lazy translation support in Nova.
  It is titled re-enable lazy translation because this was enabled
during Havana but was pulled due to issues that have since been resolved.

 In order to enable lazy translation it is necessary to do the
following things:

   (1) Fix a bug in oslo with respect to how keywords are extracted from
the format strings when saving replacement text for use when the message
translation is done.   This is
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working
on a fix for in oslo.  Once that is complete it will need to be sync'd
into nova.

   (2) Remove concatenation (+) of translatable messages.  The current
class that is used to hold the translatable message
(gettextutils.Message) does not support concatenation.  There were a few
cases in Nova where this was done, and they are converted to other means
of combining the strings in:
https://review.openstack.org/#/c/78095 (Remove use of concatenation on
messages)

   (3) Remove the use of str() on exceptions.  The intent of this is to
return the message contained in the exception, but these messages may
contain unicode, so str cannot be used on them and gettextutils.Message
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing,
or changed to unicode().  Since unicode() will change to str() in Py3,
the forward-compatible six.text_type() is used instead.  This is done in:
https://review.openstack.org/#/c/78096 (Remove use of str() on exceptions)

   (4) The addition of the call that enables the use of lazy messages.
This is in:
https://review.openstack.org/#/c/73706 (Re-enable lazy translation)

 Lazy translation has been enabled in the other projects so it would
be beneficial to be consistent with the other projects with respect to
message translation.  I have tested that the changes in (2) and (3) work
when lazy translation is not enabled.  Thus if a problem is found, the
two line change in (4) could be removed to get to the previous behavior.

 I've been talking to Matt Riedemann and Dan Berrange about this.
  Matt has agreed to be a sponsor.

--Jim Carey





Jim,

Post back here with the link to the oslo-incubator fix for that bug when 
you have it available, then we can look at this a bit more.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-06 Thread Xuhan Peng
I opened a bug [1] and submitted a patch [2] to solve this in the short term
(hopefully for Icehouse).

[1] https://bugs.launchpad.net/neutron/+bug/1289088
[2] https://review.openstack.org/#/c/78835/

Xuhan


On Thu, Mar 6, 2014 at 5:42 PM, Xuhan Peng  wrote:

> Sean, you are right. It doesn't work at all.
>
> So I think short term goal is to get that fixed for ICMP and long term
> goal is to write an extension as Amir pointed out?
>
>
> On Wed, Mar 5, 2014 at 1:55 AM, Collins, Sean <
> sean_colli...@cable.comcast.com> wrote:
>
>> On Tue, Mar 04, 2014 at 12:01:00PM -0500, Brian Haley wrote:
>> > On 03/03/2014 11:18 AM, Collins, Sean wrote:
>> > > On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
>> > >> Currently, only security group rule direction, protocol, ethertype
>> and port
>> > >> range are supported by neutron security group rule data structure.
>> To allow
>> > >
>> > > If I am not mistaken, I believe that when you use the ICMP protocol
>> > > type, you can use the port range specs to limit the type.
>> > >
>> > >
>> https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
>> > >
>> > > http://i.imgur.com/3n858Pf.png
>> > >
>> > > I assume we just have to check and see if it applies to ICMPv6?
>> >
>> > I tried using horizon to add an icmp type/code rule, and it didn't work.
>> >
>> > Before:
>> >
>> > -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
>> >
>> > After:
>> >
>> > -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
>> > -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
>> >
>> > I'd assume I'll have the same error with v6.
>> >
>> > I am curious what's actually being done under the hood here now...
>>
>> Looks like _port_arg just returns an empty array when the protocol is
>> ICMP?
>>
>>
>> https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L328
>>
>> Called by:
>>
>>
>> https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L292
>>
>>
>> --
>> Sean M. Collins
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
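The empty-argument behaviour discussed above could be fixed roughly as follows. This is an illustrative sketch modelled on the linked `_port_arg` helper, not the actual patch under review:

```python
# Hypothetical sketch of the fix under discussion: translate the
# security-group "port range" fields into an iptables --icmp-type
# argument when the protocol is icmp, instead of returning an empty
# argument list the way _port_arg does today.
def port_arg(direction, protocol, port_range_min, port_range_max):
    if protocol in ('tcp', 'udp'):
        if port_range_min == port_range_max:
            return ['--%s' % direction, str(port_range_min)]
        return ['-m', 'multiport', '--%ss' % direction,
                '%s:%s' % (port_range_min, port_range_max)]
    if protocol == 'icmp' and port_range_min is not None:
        # Neutron overloads port_range_min/max as ICMP type/code.
        icmp_type = str(port_range_min)
        if port_range_max is not None:
            icmp_type += '/%s' % port_range_max
        return ['--icmp-type', icmp_type]
    return []


print(port_arg('dport', 'icmp', 8, 0))   # -> ['--icmp-type', '8/0']
```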


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Andrew Woodward
Matt,

I'd love to see this too; however, I'm not seasoned enough to know how to
start implementing it. I'd love some direction, and maybe some support
after you are done with the pending release.

As others have illustrated here, the current RBD support in Nova is
effectively useless, and I'd love to see that second sponsor so that we
Ceph users don't have to run a hand-patched Nova for another release.

On Thu, Mar 6, 2014 at 3:30 PM, Matt Riedemann
 wrote:
>
>
> On 3/6/2014 2:20 AM, Andrew Woodward wrote:
>>
>> I'd like to request an FFE for the remaining patches in the Ephemeral
>> RBD image support chain
>>
>> https://review.openstack.org/#/c/59148/
>> https://review.openstack.org/#/c/59149/
>>
>> are still open after their dependency
>> https://review.openstack.org/#/c/33409/ was merged.
>>
>> These should be low risk as:
>> 1. We have been testing with this code in place.
>> 2. It's nearly all contained within the RBD driver.
>>
>> This is needed as it implements an essential functionality that has
>> been missing in the RBD driver and this will become the second release
>> it's been attempted to be merged into.
>>
>> Andrew
>> Mirantis
>> Ceph Community
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> What would be awesome in Juno is some CI around RBD/Ceph.  I'd feel a lot
> more comfortable with this code if we had CI running Tempest against that
> type of configuration, just like how we are now requiring 3rd party CI for
> virt drivers.
>
> I realize this is tangential but it would make moving these blueprints
> through faster so you're not working on it over multiple releases.
>
> Having said that, I'm not signing up for sponsoring this, sorry. :)
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
If google has done it, Google did it right!



Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread zhangyu (AI)
After looking into the Nova code base, I found there is indeed a soft_delete()
method in the ComputeDriver class. Furthermore,
Xenapi (and only Xenapi) has implemented this method, which ultimately applies
a hard_shutdown_vm() operation to the instance being deleted.
If I understand it correctly, that means the instance is in fact shut down
rather than deleted. Later, the user can decide whether or not to restore it.

My question is: when and how is the soft-deleted instance truly deleted? A
user needs to trigger a real delete operation on it explicitly, doesn't he?

I am not sure why other drivers, especially libvirt, did not implement such a
feature...

Thanks~

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Thursday, March 06, 2014 8:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection

On 6 March 2014 08:50, zhangyu (AI)  wrote:
> It seems to be an interesting idea. In fact, a China-based public 
> IaaS, QingCloud, has provided a similar feature to their virtual 
> servers. Within 2 hours after a virtual server is deleted, the server owner 
> can decide whether or not to cancel this deletion and re-cycle that "deleted" 
> virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?

Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete 
> protection
>
> Hi all,
>
> Current OpenStack provides a delete-volume function to the user,
> but it seems there is no protection against accidental deletion.
>
> As we know, the data in a volume may be very important and valuable,
> so it would be better to give users a way to avoid deleting a volume by
> mistake.
>
> For example, we could provide a "safe delete" for volumes:
> The user specifies, when deleting a volume, how long its deletion is
> delayed (i.e. when it is actually deleted).
> Before the volume is actually deleted, the user can cancel the delete
> operation and get the volume back.
> After the specified time, the volume is actually deleted by the system.
>
> Any thoughts? Any advice is welcome.
>
> Best regards.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Nova] API weekly meeting

2014-03-06 Thread Christopher Yeoh
Hi,

I'd like to start a weekly IRC meeting for those interested in
discussing Nova API issues. I think it would be a useful forum for:

- People to keep up with what work is going on the API and where its
  headed. 
- Cloud providers, SDK maintainers and users of the REST API to provide
  feedback about the API and what they want out of it.
- Help coordinate the development work on the API (both v2 and v3)

If you're interested in attending please respond and include what time
zone you're in so we can work out the best time to meet.

Chris



[openstack-dev] [barbican] Icehouse-3 development milestone available

2014-03-06 Thread John Wood
Hi everyone,

The third (and last) milestone of the Icehouse development cycle,
"icehouse-3", is now available for Barbican.

You can see the full list of new features and fixed bugs, as well as
tarball downloads, at: https://launchpad.net/barbican/+milestone/icehouse-3

This release includes code changes and bug fixes, including work to satisfy
incubation requirements.

Thanks to all who helped with this release!

Thank you,
John



[openstack-dev] [Ironic] A ramdisk agent

2014-03-06 Thread Devananda van der Veen
All,

The Ironic team has been discussing the need for a "deploy agent" since
well before the last summit -- we even laid out a few blueprints along
those lines. That work was deferred and we have been using the same deploy
ramdisk that nova-baremetal used, and we will continue to use that ramdisk
for the PXE driver in the Icehouse release.

That being the case, at the sprint this week, a team from Rackspace shared
work they have been doing to create a more featureful hardware agent and an
Ironic driver which utilizes that agent. Early drafts of that work can be
found here:

https://github.com/rackerlabs/teeth-agent
https://github.com/rackerlabs/ironic-teeth-driver

I've updated the original blueprint and assigned it to Josh. For reference:

https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk

I believe this agent falls within the scope of the baremetal provisioning
program, and welcome their contributions and collaboration on this. To that
effect, I have suggested that the code be moved to a new OpenStack project
named "openstack/ironic-python-agent". This would follow an independent
release cycle, and reuse some components of tripleo (os-*-config). To keep
the collaborative momentum up, I would like this work to be done now (after
all, it's not part of the Ironic repo or release). The new driver which
will interface with that agent will need to stay on github -- or in a
gerrit feature branch -- until Juno opens, at which point it should be
proposed to Ironic.

The agent architecture we discussed is roughly:
- a pluggable JSON transport layer by which the Ironic driver will pass
information to the ramdisk. Their initial implementation is a REST API.
- a collection of hardware-specific utilities (python modules, bash
scripts, what ever) which take JSON as input and perform specific actions
(whether gathering data about the hardware or applying changes to it).
- and an agent which routes the incoming JSON to the appropriate utility,
and routes the response back via the transport layer.
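The routing behaviour described in the last bullet might look roughly like this (illustrative names only, not the actual teeth-agent code):

```python
import json

# Hypothetical sketch of the agent dispatch described above: hardware
# utilities register themselves, and the agent routes each incoming JSON
# command to the matching utility and returns a JSON response.
UTILITIES = {}


def utility(name):
    def register(fn):
        UTILITIES[name] = fn
        return fn
    return register


@utility("get_disks")
def get_disks(params):
    # A real utility would probe the hardware here.
    return {"disks": ["/dev/sda"]}


def dispatch(raw_request):
    request = json.loads(raw_request)
    handler = UTILITIES.get(request["command"])
    if handler is None:
        return json.dumps({"error": "unknown command"})
    return json.dumps({"result": handler(request.get("params", {}))})


print(dispatch('{"command": "get_disks"}'))
# -> {"result": {"disks": ["/dev/sda"]}}
```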


-Devananda


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread zhangyu (AI)
This is TRULY a constructive suggestion. Time and again people have good
ideas but fail to turn them into high-quality BPs, even when they are
familiar with the code base. The major reason is a lack of design
experience. Such a Gerrit-based design iteration will make the design and
decision process more productive, leading to high-quality design output
in earlier phases of the work.

Beyond the text/template format question, several high-quality BP
examples could be recommended by core members as references to follow,
especially BPs that have a brief and concrete summary and a descriptive,
detailed wiki page, and that were finally approved and merged. Some
analysis and comments could be added to highlight the excellence and
value of those recommended BPs.

Thanks, Sean, for your suggestion! A BIG +1

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Friday, March 07, 2014 2:05 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & 
approval

One of the issues that the Nova team has definitely hit is Blueprint overload. 
At some point there were over 150 blueprints. Many of them were a single 
sentence.

The results of this have been that design review today is typically not 
happening on Blueprint approval, but is instead happening once the code shows 
up in the code review. So -1s and -2s on code review are a mix of design and 
code review. A big part of which is that design was never in any way 
sufficiently reviewed before the code started.

In today's Nova meeting a new thought occurred. We already have Gerrit which is 
good for reviewing things. It gives you detailed commenting abilities, voting, 
and history. Instead of attempting (and usually
failing) on doing blueprint review in launchpad (or launchpad + an etherpad, or 
launchpad + a wiki page) we could do something like follows:

1. create bad blueprint
2. create gerrit review with detailed proposal on the blueprint
3. iterate in gerrit working towards blueprint approval
4. once approved copy back the approved text into the blueprint (which
should now be sufficiently detailed)

Basically blueprints would get design review, and we'd be pretty sure we liked 
the approach before the blueprint is approved. This would hopefully reduce the 
late design review in the code reviews that's happening a lot now.

There are plenty of niggly details that would need to be worked out

 * what's the basic text / template format of the design to be reviewed 
(probably want a base template for folks to just keep things consistent).
 * is this happening in the nova tree (somewhere in docs/ - NEP (Nova 
Enhancement Proposals), or is it happening in a separate gerrit tree.
 * are there timelines for blueprint approval in a cycle? after which point, we 
don't review any new items.
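A hypothetical sketch of what the base template mentioned above might contain (the section names are illustrative, not an agreed format):

```rst
Blueprint: example-feature
==========================

Problem description
-------------------
One or two paragraphs on the problem being solved and who it affects.

Proposed change
---------------
The design itself: data model impact, REST API impact, security and
performance considerations.

Alternatives
------------
Other approaches considered, and why they were rejected.

Implementation
--------------
Assignee(s), work items, dependencies, and testing plan.
```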

Anyway, plenty of details to be sorted. However we should figure out if the big 
idea has support before we sort out the details on this one.

Launchpad blueprints will still be used for tracking once things are approved, 
but this will give us a standard way to iterate on that content and get to 
agreement on approach.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Christopher Yeoh
On Thu, 06 Mar 2014 13:05:15 -0500
Sean Dague  wrote:
> In today's Nova meeting a new thought occurred. We already have Gerrit
> which is good for reviewing things. It gives you detailed commenting
> abilities, voting, and history. Instead of attempting (and usually
> failing) on doing blueprint review in launchpad (or launchpad + an
> etherpad, or launchpad + a wiki page) we could do something like
> follows:
> 
> 1. create bad blueprint
> 2. create gerrit review with detailed proposal on the blueprint
> 3. iterate in gerrit working towards blueprint approval
> 4. once approved copy back the approved text into the blueprint (which
> should now be sufficiently detailed)
> 

+1. I think this could really help avoid wasted work for API related
changes in particular. 

Just wondering if we need step 4 - or if the blueprint text should
always just link to either the unapproved patch for the text in
gerrit, or the text in repository once it's approved. Updates to
proposal would be proposed through the same process.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread Kurt Griffiths
> The fact is though that Freenode has had significant service degradation
>due to DDoS attacks for quite some time

Rather than jumping ship, is there anything we as a community can do to
help Freenode? This would obviously require a commitment of time/money,
but it could be worth it for something we rely on so heavily.

-Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Matt Riedemann



On 3/6/2014 2:20 AM, Andrew Woodward wrote:

I'd Like to request A FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements an essential functionality that has
been missing in the RBD driver and this will become the second release
it's been attempted to be merged into.

Andrew
Mirantis
Ceph Community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What would be awesome in Juno is some CI around RBD/Ceph.  I'd feel a 
lot more comfortable with this code if we had CI running Tempest against 
that type of configuration, just like how we are now requiring 3rd party 
CI for virt drivers.


I realize this is tangential but it would make moving these blueprints 
through faster so you're not working on it over multiple releases.


Having said that, I'm not signing up for sponsoring this, sorry. :)

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Russell Bryant
On 03/06/2014 02:18 PM, Vishvananda Ishaya wrote:
> +1
> 
> I can help review these.

OK, great!

If the risk is limited to users of the rbd backend, I'm OK with it if
we can get one more person to agree to review.  We have a hard
deadline of merging all code for FFEs by this coming Tuesday, though.
 We have a pretty hefty list of exceptions already, so I want to make
sure it doesn't drag out very long.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] renaming: final decision

2014-03-06 Thread Sergey Lukjanov
Hi,

I'm glad to announce that we have selected the new name for the project - 
Sahara. It was checked by the Foundation's lawyers and we voted for it at the 
last team meeting [0]. The first Release Candidate for Savanna in
Icehouse should already be named Sahara.

We're planning to start the renaming this weekend. I'll share a more
detailed doc with the renaming plan later.

[0] 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-03-06-18.02.html

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Feature Freeze and end of the I cycle

2014-03-06 Thread Sergey Lukjanov
Hi Savanna folks,

Feature Freeze (FF) for Savanna is now in effect. Feature Freeze
Exceptions (FFEs) are allowed and can be approved by me as the PTL. So,
for now there are several things that we can still land before RC1:

* project rename;
* unit / integration tests addition;
* docs addition / improvement;
* fixes / improvements for Hadoop 2 support in all three plugins
(Vanilla, HDP and IDH), since these changes are self-contained and
don't include any refactoring of code outside of the plugins.

Re plans for the end of the cycle: we should rename our project before the
first RC. Here is the schedule -
https://wiki.openstack.org/wiki/Icehouse_Release_Schedule. Due to some
potential issues with renaming, we'll probably postpone the first RC
by one week.

P.S. Note for the savanna-core team: please don't approve changes
that don't fit the FFE'd features.
P.P.S. There is an awesome explanation of why and how we do feature freeze -
http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-06 Thread Kyle Mestery
Thanks Edgar, I think this is the appropriate place to continue this
discussion.


On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana  wrote:

> Nader,
>
> I would encourage you to first discuss the possible extension with the ML2
> team. Robert and Kyle are leading this effort and they have an IRC meeting
> every week:
> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>
> Bring your concerns to this meeting and get the right feedback.
>
> Thanks,
>
> Edgar
>
> From: Nader Lahouti 
> Reply-To: OpenStack List 
> Date: Thursday, March 6, 2014 12:14 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron][ML2]
>
> Hi Aaron,
>
> I appreciate your reply.
>
> Here are some more details on what I'm trying to do:
> I need to add a new attribute to the network resource using extensions (i.e.
> network config profile) and use it in the mechanism driver (in the
> create_network_precommit/postcommit).
> If I use current implementation of Ml2Plugin, when a call is made to
> mechanism driver's create_network_precommit/postcommit the new attribute is
> not included in the 'mech_context'
> Here is code from Ml2Plugin:
> class Ml2Plugin(...):
> ...
>def create_network(self, context, network):
> net_data = network['network']
> ...
> with session.begin(subtransactions=True):
> self._ensure_default_security_group(context, tenant_id)
> result = super(Ml2Plugin, self).create_network(context,
> network)
> network_id = result['id']
> ...
> mech_context = driver_context.NetworkContext(self, context,
> result)
> self.mechanism_manager.create_network_precommit(mech_context)
>
> I also need to include the new extension in the _supported_extension_aliases
> list.
>
> So to avoid changes in the existing code, I was going to create my own
> plugin (which will be very similar to Ml2Plugin) and use it as core_plugin.
>
> Please advise on the right solution for implementing that.
>
> Regards,
> Nader.
>
>
> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen wrote:
>
>> Hi Nader,
>>
>> Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
>> plugin in another. I'm guessing you probably want to write a driver that ML2
>> can use, though it's hard to tell from the information you've provided what
>> you're trying to do.
>>
>> Best,
>>
>> Aaron
>>
>>
>> On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti 
>> wrote:
>>
>>> Hi All,
>>>
>>> I have a question regarding ML2 plugin in neutron:
>>> My understanding is that, 'Ml2Plugin' is the default core_plugin for
>>> neutron ML2. We can use either the default plugin or our own plugin (i.e.
>>> my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it as
>>> core_plugin.
>>>
>>> Is my understanding correct?
>>>
>>>
>>> Regards,
>>> Nader.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Youcef Laribi
Jay,

What I meant is that the people who are regularly involved in LBaaS would have 
a space and time to hash out all the arguments and get clarity, and this would 
be open to anybody to attend (hence "mini-summit"). At the summit itself there 
is so much going on that it's hard to find the time and focus to have these 
discussions (from my experience at the last few summits).

My 2 cents :)

Youcef


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Thursday, March 06, 2014 1:31 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
> +1
> 
> I think if we can have it before the Juno summit, we can take 
> concrete, well thought-out proposals to the community at the summit.

Unless something has changed starting at the Hong Kong design summit (which 
unfortunately I was not able to attend), the design summits have always been a 
place to gather to *discuss* and *debate* proposed blueprints and design specs. 
It has never been about a gathering to rubber-stamp proposals that have already 
been hashed out in private somewhere else.

Or, am I missing something? Has this changed in the past year?

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Oslo: i18n Message improvements

2014-03-06 Thread James Carey
Please consider an FFE for i18n Message improvements:
BP: https://blueprints.launchpad.net/nova/+spec/i18n-messages

The base enablement for lazy translation has already been sync'd from
oslo. This patch enables lazy translation support in Nova. It is titled
"re-enable lazy translation" because this was enabled during Havana
but was pulled due to issues that have since been resolved.

In order to enable lazy translation it is necessary to do the 
following things:

  (1) Fix a bug in oslo with respect to how keywords are extracted from 
the format strings when saving replacement text for use when the message 
translation is done.   This is 
https://bugs.launchpad.net/nova/+bug/1288049, which I'm actively working 
on a fix for in oslo.  Once that is complete it will need to be sync'd 
into nova.

  (2) Remove concatenation (+) of translatable messages.  The current 
class that is used to hold the translatable message (gettextutils.Message) 
does not support concatenation.  There were a few cases in Nova where this 
was done, and they are converted to other means of combining the strings in:
https://review.openstack.org/#/c/78095 Remove use of concatenation on 
messages

  (3) Remove the use of str() on exceptions.  The intent of this is to 
return the message contained in the exception, but these messages may 
contain unicode, so str cannot be used on them and gettextutils.Message 
enforces this.  Thus these need
to either be removed and allow python formatting to do the right thing, or 
changed to unicode().  Since unicode() will change to str() in Py3, the 
forward compatible six.text_type() is used instead.  This is done in: 
https://review.openstack.org/#/c/78096 Remove use of str() on exceptions

  (4) The addition of the call that enables the use of lazy messages. This 
is in:
https://review.openstack.org/#/c/73706 Re-enable lazy translation.

Lazy translation has been enabled in the other projects so it would be 
beneficial to be consistent with the other projects with respect to 
message translation.  I have tested that the changes in (2) and (3) work 
when lazy translation is not enabled.  Thus if a problem is found, the two 
line change in (4) could be removed to get to the previous behavior. 
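To make point (2) concrete, here is a self-contained toy sketch of why concatenation had to go. The LazyMessage class below is a simplified stand-in of my own, not the real oslo gettextutils.Message:

```python
# Toy stand-in for gettextutils.Message (simplified; not the oslo code).
# A lazy message keeps the format string and its parameters separate so
# translation can happen later, in the end user's locale.
class LazyMessage(object):
    def __init__(self, msgid, params=None):
        self.msgid = msgid
        self.params = params or {}

    def __add__(self, other):
        # Concatenation is rejected: gluing text onto a format string
        # would make the combined string untranslatable as a unit.
        raise TypeError("Message objects do not support concatenation")

    def translate(self):
        # The real code would look msgid up in a locale catalogue first.
        return self.msgid % self.params


msg = LazyMessage("Instance %(id)s not found", {"id": "abc"})
try:
    combined = msg + " (retrying)"  # the pattern that had to be removed
except TypeError:
    pass

# The fix: fold the extra text into the format string instead.
fixed = LazyMessage("Instance %(id)s not found (retrying)", {"id": "abc"})
print(fixed.translate())  # Instance abc not found (retrying)
```

Keeping the parameters stored separately is also what makes the keyword extraction in point (1) matter: substitution happens after translation.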

I've been talking to Matt Riedemann and Dan Berrange about this.  Matt 
has agreed to be a sponsor.

   --Jim Carey 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Jay Pipes
On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
> +1
> 
> I think if we can have it before the Juno summit, we can take
> concrete, well thought-out proposals to the community at the summit.

Unless something has changed starting at the Hong Kong design summit
(which unfortunately I was not able to attend), the design summits have
always been a place to gather to *discuss* and *debate* proposed
blueprints and design specs. It has never been about a gathering to
rubber-stamp proposals that have already been hashed out in private
somewhere else.

Or, am I missing something? Has this changed in the past year?

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Dmitry Borodaenko
+1 on both accounts:

Yes, this change has low impact outside of the RBD driver that has
been out there since September and I agree that it should be exempted
from feature freeze.

And yes, RBD driver in Nova is severely crippled without this code
(which is why this was originally reported as a bug). Please let me
explain why for the benefit of prospective reviewers.

The primary benefit of using Ceph as the storage backend in an
OpenStack deployment is to keep all bulk data in a single storage pool
and eliminate the need to duplicate and transfer image data every
time you need to launch or snapshot a VM. The way Ceph achieves this
is with copy-on-write object snapshots: when you create a Cinder
volume from a Glance image, all you pass from Ceph to Cinder is an RBD
object URI to a new snapshot of the same object. When you write into
the new volume, only the parts that are changed get new RADOS object
stripes, the rest of the data remains unchanged and unduplicated.
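As a rough mental model of that copy-on-write behavior (purely illustrative: real RBD clones operate on RADOS object stripes, not Python dicts):

```python
# Toy copy-on-write image model: a clone stores only the blocks that
# have diverged from its parent; reads fall through to the parent.
class Image:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent

    def clone(self):
        # Instant: no data is copied when the clone is created.
        return Image(parent=self)

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        return self.parent.read(idx) if self.parent else None

    def write(self, idx, data):
        # Only written blocks are materialized in the clone.
        self.blocks[idx] = data


glance_image = Image({i: b"base" for i in range(1000)})
vm_disk = glance_image.clone()     # launch: no bulk copy at all
vm_disk.write(7, b"etc config")    # one block diverges
print(len(vm_disk.blocks))         # 1 -> only the changed block is stored
```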

Contrast this with the way the current implementation of RBD driver in
Nova works: when you launch an instance from a Glance image backed by
RBD, the whole image is downloaded from Ceph onto a local drive on the
compute node, only to be uploaded back as a new Ceph RBD object. This
wastes both network and disk capacity, not a lot when all you deal
with is a dozen of snowflake VMs, and a deal-breaker if you need
thousands of nearly identical VMs with disk contents differences
limited to configuration files in /etc.

Having this kind of limitation defeats the whole purpose of having an RBD
driver in Nova; you might as well use the local storage on compute
nodes to store ephemeral disks.

Thank you,
-Dmitry Borodaenko

On Thu, Mar 6, 2014 at 3:18 AM, Sebastien Han
 wrote:
> Big +1 on this.
> Missing such support would make the implementation useless.
>
> 
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien@enovance.com
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance
>
> On 06 Mar 2014, at 11:44, Zhi Yan Liu  wrote:
>
>> +1! Given the low risk and the usefulness for real cloud deployment.
>>
>> zhiyan
>>
>> On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward  wrote:
>>> I'd Like to request A FFE for the remaining patches in the Ephemeral
>>> RBD image support chain
>>>
>>> https://review.openstack.org/#/c/59148/
>>> https://review.openstack.org/#/c/59149/
>>>
>>> are still open after their dependency
>>> https://review.openstack.org/#/c/33409/ was merged.
>>>
>>> These should be low risk as:
>>> 1. We have been testing with this code in place.
>>> 2. It's nearly all contained within the RBD driver.
>>>
>>> This is needed as it implements an essential functionality that has
>>> been missing in the RBD driver and this will become the second release
>>> it's been attempted to be merged into.
>>>
>>> Andrew
>>> Mirantis
>>> Ceph Community
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Youcef Laribi
+1

I think if we can have it before the Juno summit, we can take concrete, well 
thought-out proposals to the community at the summit.

Cheers,
Youcef

From: Stephen Wong [mailto:s3w...@midokura.com]
Sent: Thursday, March 06, 2014 11:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

I agree with that, and it should take place before the J-Summit.

Location is key here :-)

On Thu, Mar 6, 2014 at 7:32 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hi everyone,

I'd like to gauge everyone's interest in a possible mini-summit for Neutron 
LBaaS. If enough people are interested I'd be happy to try and set something 
up. The Designate team just had a productive mini-summit in Austin, TX, and it 
was nice to have face-to-face conversations with people in the OpenStack 
community. While most of us will meet in Atlanta in May, I feel that a focused 
mini-summit will be more productive since we won't have other OpenStack 
distractions around us. Let me know what you all think!

Cheers,
--Jorge

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-06 Thread Edgar Magana
Kyle,

Please point me to the wiki with the documentation for testing the
devstack patch!
This work seems very interesting. Yeah!!! I'd love to have one less
agent, but let's have all agents gone once and for all.  :-)

Edgar

On 3/6/14 8:24 AM, "Akihiro Motoki"  wrote:

>Hi Kyle,
>
>I am happy to hear OpenDaylight installation and startup are restored
>to devstack.
>It really helps OpenStack integrate with other open source software.
>
>I have a question on file locations for non-OpenStack open source
>software.
>When I refactored the neutron-related devstack code, we placed files
>related to such software in the lib/neutron_thirdparty directory.
>I would like to know the new policy on file locations for such software.
>I understand it is limited to neutron for now, but it may happen for
>other projects.
>
>Thanks,
>Akihiro
>
>
>On Thu, Mar 6, 2014 at 11:19 PM, Kyle Mestery 
>wrote:
>> On Tue, Mar 4, 2014 at 7:34 AM, Kyle Mestery 
>> wrote:
>>>
>>> On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague  wrote:

 On 03/03/2014 11:32 PM, Dean Troyer wrote:
 > On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery
>>> > > wrote:
 >
 > In all cases today with Open Source plugins, Neutron agents have
 > run
 > on the hosts. For OpenDaylight, this is not the case.
OpenDaylight
 > integrates with Neutron as a ML2 MechanismDriver. But it has no
 > Neutron code on the compute hosts. OpenDaylight itself
communicates
 > directly to those compute hosts to program Open vSwitch.
 >
 >
 >
 > devstack doesn't provide a way for me to express this today. On
the
 > compute hosts in the above scenario, there is no "q-*" services
 > enabled, so the "is_neutron_enabled" function returns 1,
meaning no
 > neutron.
 >
 >
 > True and working as designed.
 >
 >
 > And then devstack sets Nova up to use nova-networking, which
fails.
 >
 >
 > This only happens if you have enabled nova-network.  Since it is on
by
 > default you must disable it.
 >
 >
 > The patch I have submitted [1] modifies "is_neutron_enabled" to
 > check for the meta neutron service being enabled, which will
then
 > configure nova to use Neutron instead of nova-networking on the
 > hosts. If this sounds wonky and incorrect, I'm open to
suggestions
 > on how to make this happen.
 >
 >
 > From the review:
 >
 > is_neutron_enabled() is doing exactly what it is expected to do,
return
 > success if it finds any "q-*" service listed in ENABLED_SERVICES.
If no
 > neutron services are configured on a compute host, then this must
not
 > say they are.
 >
 > Putting 'neutron' in ENABLED_SERVICES does nothing and should do
 > nothing.
 >
 > Since you are not implementing the ODS as a Neutron plugin (as far
as
 > DevStack is concerned) you should then treat it as a system service
and
 > configure it that way, adding 'opendaylight' to ENABLED_SERVICES
 > whenever you want something to know it is being used.
 >
 >
 >
 > Note: I have another patch [2] which enables an OpenDaylight
 > service, including configuration of OVS on hosts. But I cannot
 > check
 > if the "opendaylight" service is enabled, because this will only
 > run
 > on a single node, and again, not on each compute host.
 >
 >
 > I don't understand this conclusion. in multi-node each node gets its
 > own
 > specific ENABLED_SERVICES list, you can check that on each node to
 > determine how to configure that node.  That is what I'm trying to
 > explain in that last paragraph above, maybe not too clearly.

 So in an Open Daylight environment... what's running on the compute
host
 to coordinate host level networking?

>>> Nothing. OpenDaylight communicates to each host using OpenFlow and
>>>OVSDB
>>> to manage networking on the host. In fact, this is one huge advantage
>>>for
>>> the
>>> ODL MechanismDriver in Neutron, because it's one less agent running on
>>>the
>>> host.
>>>
>>> Thanks,
>>> Kyle
>>>
>> As an update here, I've reworked my devstack patch [1]  for adding
>> OpenDaylight
>> support to make OpenDaylight a top-level service, per suggestion from
>>Dean.
>> You
>> can now enable both "odl-server" and "odl-compute" in your local.conf
>>with
>> my patch.
>> Enabling "odl-server" will run OpenDaylight under devstack. Enabling
>> "odl-compute"
>> will configure the host's OVS to work with OpenDaylight.
>>
>> Per discussion with Sean, I'd like to look at refactoring some other
>>bits of
>> the Neutron
>> devstack code in the coming weeks as well.
>>
>> Thanks!
>> Kyle
>>
>> [1] https://review.openstack.org/#/c/69774/
>>

 -Sean

 --
 Sean Dague
 Samsung Research America

[openstack-dev] [savanna] Savanna 2014.1.b3 (Icehouse-3) dev milestone available

2014-03-06 Thread Sergey Lukjanov
Hi folks,

the third development milestone of Icehouse cycle is now available for Savanna.

Here is a list of new features and fixed bug:

https://launchpad.net/savanna/+milestone/icehouse-3

and here you can find tarballs to download it:

http://tarballs.openstack.org/savanna/savanna-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b3.tar.gz

There were 20 blueprints implemented and 45 bugs fixed during the
milestone. It includes the savanna, savanna-dashboard,
savanna-image-elements and savanna-extra sub-projects. In addition,
python-savannaclient 0.5.0, which was released earlier this week, supports
all new features introduced in this savanna release.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [heat] [neutron] - Status of Heat and Neutron tempest blueprints?

2014-03-06 Thread Steve Baker
On 07/03/14 01:53, Sean Dague wrote:
> We're at Freeze, so I want to pick up and understand where we currently
> stand with both Neutron and Heat actually getting tested fully in the gate.
>
> First Neutron -
> https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full
>
>
> We know that this is *close* as the full job is running non voting
> everywhere, and typically passing. How close are we? Or should we be
> defering this until Juno (which would be unfortunate).
>
> Second Heat -
> https://blueprints.launchpad.net/tempest/+spec/tempest-heat-integration
>
> The Heat tests that are in a normal Tempest job are relatively trivial
> surface verification, and in no way actually make sure that Heat is
> operating at a real level. This fact is a contributing factor to why
> Heat was broken in i2.
>
> The first real verification for Heat is in the Heat slow job (which we
> created to give Heat a separate time budget, because doing work that
> requires real guests takes time).
>
> The heat slow job looks like it is finally passing much of the time -
> http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWhlYXQtc2xvdyIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDEwOTg3NDQ4OH0=
>
> It's seeing a 78% pass rate in check. Can anyone in the Heat team
> confirm that the Failures in this job are actually real failures on
> patches that should have been blocked?
>
> I'd like to get that turned on (and on all the projects) as soon as the
> Heat team is confident on it so that Heat actually participates in the
> tempest/devstack gate in a material way and we can prevent future issues
> where a keystone, nova, neutron or whatever change would break Heat in git.
>
>
I've raised https://bugs.launchpad.net/tempest/+bug/1288970 to track the
most common error (NeutronResourcesTestJSON failed to reach
CREATE_COMPLETE status within the required time (300 s)).

tl;dr sometimes booting alone is taking 244s, so I think this test would
be significantly more reliable if the timeout was raised.

I would propose raising the default orchestration build_timeout to 600s
for now, but it may need to go up to 1200s when the autoscaling scenario
is enabled again.
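For reference, the knob being proposed lives in tempest.conf; the change would look roughly like this (section and option names as I understand the tempest config of the time):

```ini
[orchestration]
# Default was 300; slow guests have been seen taking 244 s just to boot.
build_timeout = 600
```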

https://review.openstack.org/#/c/78756/

Once this change is in heat-slow should be reliable enough to make it
voting.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-06 Thread Edgar Magana
Nader,

I would encourage you to first discuss the possible extension with the ML2
team. Robert and Kyle are leading this effort and they have an IRC meeting
every week:
https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting

Bring your concerns to this meeting and get the right feedback.

Thanks,

Edgar

From:  Nader Lahouti 
Reply-To:  OpenStack List 
Date:  Thursday, March 6, 2014 12:14 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron][ML2]

Hi Aaron,

I appreciate your reply.

Here are some more details on what I'm trying to do:
I need to add a new attribute to the network resource using extensions (i.e.
network config profile) and use it in the mechanism driver (in the
create_network_precommit/postcommit).
If I use current implementation of Ml2Plugin, when a call is made to
mechanism driver's create_network_precommit/postcommit the new attribute is
not included in the 'mech_context'.
Here is code from Ml2Plugin:
class Ml2Plugin(...):
...
   def create_network(self, context, network):
net_data = network['network']
...
with session.begin(subtransactions=True):
self._ensure_default_security_group(context, tenant_id)
result = super(Ml2Plugin, self).create_network(context, network)
network_id = result['id']
...
mech_context = driver_context.NetworkContext(self, context,
result)
self.mechanism_manager.create_network_precommit(mech_context)

I also need to include the new extension in the _supported_extension_aliases
list.

So to avoid changes in the existing code, I was going to create my own
plugin (which will be very similar to Ml2Plugin) and use it as core_plugin.
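A self-contained sketch of that subclassing idea (stub classes stand in for the real Neutron ones, and names like "network-profile" / network_profile are placeholders of my own):

```python
# Stub standing in for neutron's Ml2Plugin (illustrative only).
class Ml2PluginStub(object):
    _supported_extension_aliases = ["provider", "external-net"]

    def create_network(self, context, network):
        # The real base plugin persists and returns only known attributes.
        return {"id": "net-1", "name": network["network"]["name"]}


class ProfileAwarePlugin(Ml2PluginStub):
    # Advertise the extra extension on top of the inherited aliases.
    _supported_extension_aliases = (
        Ml2PluginStub._supported_extension_aliases + ["network-profile"])

    def create_network(self, context, network):
        result = super(ProfileAwarePlugin, self).create_network(
            context, network)
        # Copy the extension attribute into 'result' before the
        # NetworkContext is built from it, so mechanism drivers can see
        # it in create_network_precommit/postcommit.
        result["network_profile"] = network["network"].get("network_profile")
        return result


plugin = ProfileAwarePlugin()
result = plugin.create_network(
    None, {"network": {"name": "demo", "network_profile": "gold"}})
print(result["network_profile"])  # gold
```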

Please advise on the right solution for implementing that.

Regards,
Nader.


On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen  wrote:
> Hi Nader, 
> 
> Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one plugin in
> another. I'm guessing you probably want to write a driver that ML2 can use,
> though it's hard to tell from the information you've provided what you're
> trying to do.
> 
> Best, 
> 
> Aaron
> 
> 
> On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti 
> wrote:
>> Hi All,
>> 
>> I have a question regarding ML2 plugin in neutron:
>> My understanding is that, 'Ml2Plugin' is the default core_plugin for neutron
>> ML2. We can use either the default plugin or our own plugin (i.e.
>> my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it as
>> core_plugin.
>> 
>> Is my understanding correct?
>> 
>> 
>> Regards,
>> Nader.
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Need advice - changing DB schema (nova-network)

2014-03-06 Thread Shraddha Pandhe
Hi folks,



I am working on nova-network in Havana. I have a unique use case where I 
need to add duplicate VLANs in nova-network. I am trying to add multiple 
networks in nova-network with the same VLAN ID. The reason is as follows:

The cluster that I have has an L3 backplane. We have been given a limitation 
that, per rack, we have a few networks with unique VLAN tags, and the VLAN tags 
repeat in every rack. So now, when I add networks in nova-network, I need to 
add these networks in the same VLAN.


nova-network currently has a unique constraint on ("vlan", "deleted"). So to 
allow duplicate VLANs in the DB, I am removing that unique constraint. I 
am modifying the migrate scripts to make sure the UC doesn't get re-applied on 
db_sync. I am also modifying the unit tests to reverse their sense (make sure 
that duplicate VLANs are allowed).
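The effect of dropping the constraint can be sketched with an in-memory SQLite table standing in for nova's networks table (the column set here is illustrative, not the real schema):

```python
# Sketch only: an in-memory SQLite stand-in for nova's networks table
# (the real table has many more columns).  It shows what the
# UNIQUE ("vlan", "deleted") constraint blocks, and what removing it allows.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE networks ('
             '  id INTEGER PRIMARY KEY,'
             '  vlan INTEGER,'
             '  deleted INTEGER,'
             '  UNIQUE (vlan, deleted))')
conn.execute('INSERT INTO networks (vlan, deleted) VALUES (100, 0)')
try:
    # A second active network on VLAN 100 is rejected while the UC exists.
    conn.execute('INSERT INTO networks (vlan, deleted) VALUES (100, 0)')
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print('duplicate VLAN allowed with constraint:', duplicate_allowed)  # False

# With the unique constraint dropped (as the modified migrate script
# would do), duplicate VLANs go through.
conn.execute('CREATE TABLE networks_no_uc ('
             '  id INTEGER PRIMARY KEY, vlan INTEGER, deleted INTEGER)')
conn.executemany('INSERT INTO networks_no_uc (vlan, deleted) VALUES (?, ?)',
                 [(100, 0), (100, 0)])
print('rows with duplicate VLAN:',
      conn.execute('SELECT COUNT(*) FROM networks_no_uc').fetchone()[0])  # 2
```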

After making these changes, I have verified the following scenarios:
1. Add networks with duplicate VLANs
2. Update networks with duplicate VLANs
3. db_sync doesn't reinstate the constraint.
4. The VM comes up properly and I can ping it. 

Since this is a DB schema change, I am a bit wary of it, and hence 
looking for expert advice.

1. How risky is it to make a DB schema change?
2. I know that I have to watch out for any new migration scripts that touch that 
UC/index. Is there anything else I need to worry about w.r.t. migration scripts?
3. Are there any more scenarios I should be testing?

Thank you in advance!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Oslo] [Marconi] oslo.messaging on VMs

2014-03-06 Thread Georgy Okrokvertskhov
As a result of this discussion, I think we also need to involve the Marconi
team. (I am sorry for changing the Subject.)

I am not very familiar with the Marconi project's details, but at first
glance it looks like it could help set up a separate MQ infrastructure for
agent <-> service communication.

I don't have any specific design suggestions, and I hope the Marconi team
will help us find the right approach.

It looks like the oslo.messaging option now has lower priority due to
security concerns.

Thanks
Georgy


On Thu, Mar 6, 2014 at 11:33 AM, Steven Dake  wrote:

> On 03/06/2014 10:24 AM, Daniel P. Berrange wrote:
>
>> On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:
>>
>>> Hello folks,
>>>
>>> A number of OpenStack and related projects have a need to perform
>>> operations inside VMs running on OpenStack. A natural solution would
>>> be an agent running inside the VM and performing tasks.
>>>
>>> One of the key questions here is how to communicate with the agent. An
>>> idea which was discussed some time ago is to use oslo.messaging for
>>> that. That is an RPC framework - what is needed. You can use different
>>> transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
>>> connectivity your OpenStack networking can provide. At the same time
>>> there is a number of things to consider, like networking, security,
>>> packaging, etc.
>>>
>>> So, messaging people, what is your opinion on that idea? I've already
>>> raised that question in the list [1], but seems like not everybody who
>>> has something to say participated. So I am resending with the
>>> different topic. For example, yesterday we started discussing security
>>> of the solution in the openstack-oslo channel. Doug Hellmann at the
>>> start raised two questions: is it possible to separate different
>>> tenants or applications with credentials and ACL so that they use
>>> different queues? My opinion that it is possible using RabbitMQ/Qpid
>>> management interface: for each application we can automatically create
>>> a new user with permission to access only her queues. Another question
>>> raised by Doug is how to mitigate a DOS attack coming from one tenant
>>> so that it does not affect another tenant. The thing is though
>>> different applications will use different queues, they are going to
>>> use a single broker.
>>>
>> Looking at it from the security POV, I'd absolutely not want to
>> have any tenant VMs connected to the message bus that openstack
>> is using between its hosts. Even if you have security policies
>> in place, the inherent architectural risk of such a design is
>> just far too great. One small bug or misconfiguration and it
>> opens the door to a guest owning the entire cloud infrastructure.
>> Any channel between a guest and host should be isolated per guest,
>> so there's no possibility of guest messages finding their way out
>> to either the host or to other guests.
>>
>> If there was still a desire to use oslo.messaging, then at the
>> very least you'd want a completely isolated message bus for guest
>> comms, with no connection to the message bus used between hosts.
>> Ideally the message bus would be separate per guest too, which
>> means it ceases to be a bus really - just a point-to-point link
>> between the virt host + guest OS that happens to use the oslo.messaging
>> wire format.
>>
>> Regards,
>> Daniel
>>
> I agree and have raised this in the past.
>
> IMO oslo.messaging is a complete nonstarter for guest communication
> because of security concerns.
>
> We do not want guests communicating on the same message bus as
> infrastructure.  The response to that was "well just have all the guests
> communicate on their own unique messaging server infrastructure".  The
> downside of this is one guests activity could damage a different guest
> because of a lack of isolation and the nature in which message buses work.
>  The only workable solution which ensures security is a unique message bus
> per guest - which means a unique daemon per guest.  Surely there has to be
> a better way.
>
> The idea of isolating guests on a user basis, but allowing them to all
> exchange messages on one topic doesn't make logical sense to me.  I just
> don't think it's possible, unless somehow rpc delivery were changed to
> deliver credentials enforced by the RPC server in addition to calling
> messages.  Then some type of credential management would need to be done
> for each guest in the infrastructure wishing to use the shared message bus.
>
> The requirements of oslo.messaging solution for a shared agent is that the
> agent would only be able to listen and send messages directed towards it
> (point to point) but would be able to publish messages to a topic for
> server consumption (the agent service, which may be integrated into other
> projects).  This way any number of shared agents could communicate to one
> agent service, but those agents would be isolated from one another.
>
> Perhap

Re: [openstack-dev] [heat] FFE for instance-users

2014-03-06 Thread Thierry Carrez
Steven Hardy wrote:
> If we can go ahead and get these last 4 patches in, I'd appreciate it :)
> https://review.openstack.org/#/c/72762/
> https://review.openstack.org/#/c/72761/
> https://review.openstack.org/#/c/71930/
> https://review.openstack.org/#/c/72763/

Discussed those with Steve Baker yesterday. I'm fine with this but they
really need to make it in before Tuesday, since it's a significant
feature and I would like it to see some mileage. So +1 as long as you
can merge them fast.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-06 Thread Nader Lahouti
Hi Aaron,

I appreciate your reply.

Here are some more details on what I'm trying to do:
I need to add a new attribute to the network resource using extensions (i.e., a
network config profile) and use it in the mechanism driver (in
create_network_precommit/postcommit).
If I use the current implementation of Ml2Plugin, when a call is made to the
mechanism driver's create_network_precommit/postcommit, the new attribute is
not included in the 'mech_context'.
Here is the code from Ml2Plugin:

class Ml2Plugin(...):
    ...
    def create_network(self, context, network):
        net_data = network['network']
        ...
        with session.begin(subtransactions=True):
            self._ensure_default_security_group(context, tenant_id)
            result = super(Ml2Plugin, self).create_network(context, network)
            network_id = result['id']
            ...
            mech_context = driver_context.NetworkContext(self, context,
                                                         result)
            self.mechanism_manager.create_network_precommit(mech_context)

I also need to include the new extension in _supported_extension_aliases.

So to avoid changes in the existing code, I was going to create my own plugin
(which would be very similar to Ml2Plugin) and use it as the core_plugin.
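A self-contained sketch of the subclassing approach follows. The Ml2Plugin class below is a minimal stand-in for the real class in the Neutron tree, and the alias names ('provider', 'external-net', 'network-profile') are illustrative only:

```python
# Minimal stand-in for neutron.plugins.ml2.plugin.Ml2Plugin; only the
# alias list matters for this sketch.
class Ml2Plugin(object):
    _supported_extension_aliases = ['provider', 'external-net']


class MyMl2CorePlugin(Ml2Plugin):
    # Extend, rather than replace, the parent's alias list so the
    # extensions ML2 already supports keep working.  'network-profile'
    # stands in for the new network-attribute extension.
    _supported_extension_aliases = (
        Ml2Plugin._supported_extension_aliases + ['network-profile'])


print(MyMl2CorePlugin._supported_extension_aliases)
# ['provider', 'external-net', 'network-profile']
```

The subclass would then be set as core_plugin in neutron.conf in place of Ml2Plugin.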

Please advise on the right way to implement this.

Regards,
Nader.


On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen  wrote:

> Hi Nader,
>
> Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
> plugin in another. I'm guessing  you probably wire a driver that ML2 can
> use though it's hard to tell from the information you've provided what
> you're trying to do.
>
> Best,
>
> Aaron
>
>
> On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti wrote:
>
>> Hi All,
>>
>> I have a question regarding ML2 plugin in neutron:
>> My understanding is that, 'Ml2Plugin' is the default core_plugin for
>> neutron ML2. We can use either the default plugin or our own plugin (i.e.
>> my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it as
>> core_plugin.
>>
>> Is my understanding correct?
>>
>>
>> Regards,
>> Nader.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Ownership and path to schema definitions

2014-03-06 Thread Jay Pipes
On Thu, 2014-03-06 at 13:55 +1030, Christopher Yeoh wrote:
> On Tue, 04 Mar 2014 13:31:07 -0500
> David Kranz  wrote:
> > I think it would be a good time to have at least an initial
> > discussion about the requirements for theses schemas and where they
> > will live. The next step in tempest around this is to replace the
> > existing negative test files with auto-gen versions, and most of the
> > work in doing that is to define the schemas.
> > 
> > The tempest framework needs to know the http method, url part,
> > expected error codes, and payload description. I believe only the
> > last is covered by the current nova schema definitions, with the
> > others being some kind of attribute or data associated with the
> > method that is doing the validation. Ideally the information being
> > used to do the validation could be auto-converted to a more general
> > schema that could be used by tempest. I'm interested in what folks
> > have to say about this and especially from the folks who are core
> > members of both nova and tempest. See below for one example (note
> > that the tempest generator does not yet handle "pattern").
> 
> So as you've seen, a lot of what the tempest framework wants is
> implicitly known already within the method context, which is why it's not
> explicitly stated again in the schema. Not having actually thought
> about it a lot, I suspect the expected-errors decorator is
> something that would fit just as well in the validation framework,
> however.
> 
> Some of the other stuff, such as the url part, descriptions, etc., not so
> much, as it would be purely duplicate information that would get out of
> date. However, for documentation auto-generation it is something we do
> also want to have available in an automated fashion.  I did a bit of
> exploration early in Icehouse into generating this within the context of
> the api samples tests, where we have access to this sort of stuff, and I
> think together we'd have all the info we need; I'm just not sure mashing
> them together is the right way to do it.

JSON-Home is a perfect complement to JSONSchema in this regard. You
would expose the object model that underpins the request payload
validation using JSONSchema. And you would expose the REST API contract
(things like what HTTP methods are allowed, what parameters are allowed
for a method, what content-types are accepted, etc) using JSON-Home.

See Marconi for an example of using JSON-Home for API discovery:

https://github.com/openstack/marconi/blob/master/marconi/queues/transport/wsgi/v1_1/homedoc.py

See Glance for an example of JSONSchema for object model discovery:

https://github.com/openstack/glance/blob/master/glance/api/v2/image_members.py#L264
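As a rough, hypothetical illustration of the JSONSchema side (not Nova's actual validation code), a request body can be checked against the get_console_output schema quoted later in this message using the Python jsonschema library:

```python
import jsonschema

# The get_console_output schema from the quoted message below.
schema = {
    'type': 'object',
    'properties': {
        'get_console_output': {
            'type': 'object',
            'properties': {
                'length': {
                    'type': ['integer', 'string'],
                    'minimum': 0,
                    'pattern': '^[0-9]+$',
                },
            },
            'additionalProperties': False,
        },
    },
    'required': ['get_console_output'],
    'additionalProperties': False,
}

# A well-formed body passes silently.
jsonschema.validate({'get_console_output': {'length': 10}}, schema)

# A negative length violates 'minimum' and raises ValidationError.
try:
    jsonschema.validate({'get_console_output': {'length': -1}}, schema)
except jsonschema.ValidationError as exc:
    print('rejected:', exc.message)
```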

Best,
-jay

> And from the documentation point of view, we need to have a bit of a
> think about whether doc strings on methods should be the canonical way
> we produce descriptive information about API methods. On one hand it's
> appealing; on the other hand they tend to be not very useful, or very
> Nova-internals focussed. But we could get much better at it.
> 
> Short version - yea I think we want to get to the point where tempest
> doesn't generate these manually. But I'm not sure about how we
> should do it.
> 
> Chris
> 
> > 
> >   -David
> > 
> >  From nova:
> > 
> > get_console_output = {
> >  'type': 'object',
> >  'properties': {
> >  'get_console_output': {
> >  'type': 'object',
> >  'properties': {
> >  'length': {
> >  'type': ['integer', 'string'],
> >  'minimum': 0,
> >  'pattern': '^[0-9]+$',
> >  },
> >  },
> >  'additionalProperties': False,
> >  },
> >  },
> >  'required': ['get_console_output'],
> >  'additionalProperties': False,
> > }
> > 
> >  From tempest:
> > 
> > {
> >  "name": "get-console-output",
> >  "http-method": "POST",
> >  "url": "servers/%s/action",
> >  "resources": [
> >  {"name":"server", "expected_result": 404}
> >  ],
> >  "json-schema": {
> >  "type": "object",
> >  "properties": {
> >  "os-getConsoleOutput": {
> >  "type": "object",
> >  "properties": {
> >  "length": {
> >  "type": ["integer", "string"],
> >  "minimum": 0
> >  }
> >  }
> >  }
> >  },
> >  "additionalProperties": false
> >  }
> > }
> > 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Neutron] Debugger issues with service plugins

2014-03-06 Thread Brandon Logan
While learning the code base of neutron and the extensions better, I've been 
attempting to get a debugger working with Neutron with service plugins (such as 
l3router and lbaas).  When running the debugger without service plugins 
everything works well.  When running the debugger with the service plugins, the 
code almost always seems to hang in the __init__ method of the loading service 
plugin class.  However, just by brute force restarting the debugging it will 
sometimes work as expected.  It is very inconsistent in that it does work 
occassionally, and when it hangs it is not always at the same place.  I have 
noticed that it does hang mostly on when either 1) the 
neutron.db.api.register_models() method is called or 2) setting up rpc code is 
executed.

Obviously, this all works when just running code with service plugins but 
without a debugger.

I've tried it with pdb and pydev debugger.  It happens with both.

I changed the eventlet.monkey_patch() line to eventlet.monkey_patch(os=False, 
thread=False) in the neutron.server module.
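That workaround can be wrapped in a small helper so the debug behavior is switchable; this is a sketch only, and NEUTRON_DEBUG is an illustrative environment variable, not something Neutron actually reads:

```python
import os


def monkey_patch_kwargs(debug=None):
    """Return the kwargs to pass to eventlet.monkey_patch().

    When a debugger is attached, leaving the os and thread modules
    unpatched avoids the hangs described above; otherwise patch
    everything, as neutron.server normally does.
    """
    if debug is None:
        # NEUTRON_DEBUG is a hypothetical opt-in switch for this sketch.
        debug = bool(os.environ.get('NEUTRON_DEBUG'))
    return {'os': False, 'thread': False} if debug else {}


# In neutron.server one would then call:
#     eventlet.monkey_patch(**monkey_patch_kwargs())
print(monkey_patch_kwargs(debug=True))   # {'os': False, 'thread': False}
print(monkey_patch_kwargs(debug=False))  # {}
```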

I was wondering if anyone else has tried this and overcome it.  Please let me 
know if so.

Oh, and I'm also doing this on a single host devstack install.

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Stephen Wong
I agree with that, and it should take place before the J-Summit.

Location is key here :-)


On Thu, Mar 6, 2014 at 7:32 AM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   Hi everyone,
>
>  I'd like to gauge everyone's interest in a possible mini-summit for
> Neutron LBaaS. If enough people are interested I'd be happy to try and set
> something up. The Designate team just had a productive mini-summit in
> Austin, TX and it was nice to have face-to-face conversations with people
> in the OpenStack community. While most of us will meet in Atlanta in May, I
> feel that a focused mini-summit will be more productive since we won't have
> other OpenStack distractions around us. Let me know what you all think!
>
>  Cheers,
> --Jorge
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN] Live migration instructions recommend unsecured libvirt remote access

2014-03-06 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Live migration instructions recommend unsecured libvirt remote access
- ---

### Summary ###
When using the KVM hypervisor with libvirt on OpenStack Compute nodes,
live migration of instances from one Compute server to another requires
that the libvirt daemon is configured for remote network connectivity.
The libvirt daemon configuration recommended in the OpenStack
Configuration Reference manual configures libvirtd to listen for
incoming TCP connections on all network interfaces without requiring any
authentication or using any encryption.  This insecure configuration
allows for anyone with network access to the libvirt daemon TCP port on
OpenStack Compute nodes to control the hypervisor through the libvirt
API.

### Affected Services / Software ###
Nova, Compute, KVM, libvirt, Grizzly, Havana, Icehouse

### Discussion ###
The default configuration of the libvirt daemon is to not allow remote
access.  Live migration of running instances between OpenStack Compute
nodes requires libvirt daemon remote access between OpenStack Compute
nodes.

The libvirt daemon should not be configured to allow unauthenticated
remote access.  The libvirt daemon has a choice of four secure options for
remote access over TCP.  These options are:

 - SSH tunnel to libvirtd's UNIX socket
 - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
 - libvirtd TCP socket, with TLS for encryption and x.509 client
   certificates for authentication
 - libvirtd TCP socket, with TLS for encryption and Kerberos for
   authentication

It is not necessary for the libvirt daemon to listen for remote TCP
connections on all interfaces.  Remote network connectivity to the
libvirt daemon should be restricted as much as possible.  Remote
access is only needed between the OpenStack Compute nodes, so the
libvirt daemon only needs to listen for remote TCP connections on the
interface that is used for this communication.  A firewall can be
configured to lock down access to the TCP port that the libvirt daemon
listens on, but this does not sufficiently protect access to the libvirt
API.  Other processes on a remote OpenStack Compute node might have
network access, but should not be authorized to remotely control the
hypervisor on another OpenStack Compute node.
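In addition to a firewall, the daemon itself can be bound to a single interface with the "listen_addr" configuration directive. The address below is a placeholder for whichever interface carries the Compute-node-to-Compute-node traffic:

-  begin example libvirtd.conf snippet 
listen_addr = "192.168.10.5"
-  end example libvirtd.conf snippet 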

### Recommended Actions ###
If you are using the KVM hypervisor with libvirt on OpenStack Compute
nodes, you should review your libvirt daemon configuration to ensure
that it is not allowing unauthenticated remote access.

Remote access to the libvirt daemon via TCP is configured by the
"listen_tls", "listen_tcp", and "auth_tcp" configuration directives.  By
default, these directives are all commented out.  This results in remote
access via TCP being disabled.

If you do not need remote libvirt daemon access, you should ensure that
the following configuration directives are set as follows in the
/etc/libvirt/libvirtd.conf configuration file.  Commenting out these
directives will have the same effect, as these values match the internal
defaults:

-  begin example libvirtd.conf snippet 
listen_tls = 1
listen_tcp = 0
auth_tcp = "sasl"
-  end example libvirtd.conf snippet 

If you need to allow remote access to the libvirt daemon between
OpenStack Compute nodes for live migration, you should ensure that
authentication is required.  Additionally, you should consider enabling
TLS to allow remote connections to be encrypted.

The following libvirt daemon configuration directives will allow for
unencrypted remote connections that use SASL for authentication:

-  begin example libvirtd.conf snippet 
listen_tls = 0
listen_tcp = 1
auth_tcp = "sasl"
-  end example libvirtd.conf snippet 

If you want to require TLS encrypted remote connections, you will have
to obtain X.509 certificates and configure the libvirt daemon to use
them to use TLS.  Details on this configuration are in the libvirt
daemon documentation.  Once the certificates are configured, you should
set the following libvirt daemon configuration directives:

-  begin example libvirtd.conf snippet 
listen_tls = 1
listen_tcp = 0
auth_tls = "none"
-  end example libvirtd.conf snippet 

When using TLS, setting the "auth_tls" configuration directive to "none"
uses X.509 client certificates for authentication.  You can additionally
require SASL authentication by setting the following libvirt daemon
configuration directives:

-  begin example libvirtd.conf snippet 
listen_tls = 1
listen_tcp = 0
auth_tls = "sasl"
-  end example libvirtd.conf snippet 

When using TLS, it is also necessary to configure the OpenStack Compute
nodes to use a non-default URI for live migration.  This is done by
setting the following configuration directive in /etc/nova/nova.conf:

-  begin example nova.conf snippet 
live_migration_uri=qemu+tls://%s/system
-  end example nova.conf snippet 

For more details on libvirt daemon remote URI form

Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Steven Dake

On 03/06/2014 10:24 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:

Hello folks,

A number of OpenStack and related projects have a need to perform
operations inside VMs running on OpenStack. A natural solution would
be an agent running inside the VM and performing tasks.

One of the key questions here is how to communicate with the agent. An
idea which was discussed some time ago is to use oslo.messaging for
that. That is an RPC framework - what is needed. You can use different
transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
connectivity your OpenStack networking can provide. At the same time
there is a number of things to consider, like networking, security,
packaging, etc.

So, messaging people, what is your opinion on that idea? I've already
raised that question in the list [1], but seems like not everybody who
has something to say participated. So I am resending with the
different topic. For example, yesterday we started discussing security
of the solution in the openstack-oslo channel. Doug Hellmann at the
start raised two questions: is it possible to separate different
tenants or applications with credentials and ACL so that they use
different queues? My opinion that it is possible using RabbitMQ/Qpid
management interface: for each application we can automatically create
a new user with permission to access only her queues. Another question
raised by Doug is how to mitigate a DOS attack coming from one tenant
so that it does not affect another tenant. The thing is though
different applications will use different queues, they are going to
use a single broker.

Looking at it from the security POV, I'd absolutely not want to
have any tenant VMs connected to the message bus that openstack
is using between its hosts. Even if you have security policies
in place, the inherent architectural risk of such a design is
just far too great. One small bug or misconfiguration and it
opens the door to a guest owning the entire cloud infrastructure.
Any channel between a guest and host should be isolated per guest,
so there's no possibility of guest messages finding their way out
to either the host or to other guests.

If there was still a desire to use oslo.messaging, then at the
very least you'd want a completely isolated message bus for guest
comms, with no connection to the message bus used between hosts.
Ideally the message bus would be separate per guest too, which
means it ceases to be a bus really - just a point-to-point link
between the virt host + guest OS that happens to use the oslo.messaging
wire format.

Regards,
Daniel

I agree and have raised this in the past.

IMO oslo.messaging is a complete nonstarter for guest communication 
because of security concerns.


We do not want guests communicating on the same message bus as 
infrastructure.  The response to that was "well just have all the guests 
communicate on their own unique messaging server infrastructure".  The 
downside of this is one guests activity could damage a different guest 
because of a lack of isolation and the nature in which message buses 
work.  The only workable solution which ensures security is a unique 
message bus per guest - which means a unique daemon per guest.  Surely 
there has to be a better way.


The idea of isolating guests on a user basis, but allowing them to all 
exchange messages on one topic doesn't make logical sense to me.  I just 
don't think it's possible, unless somehow rpc delivery were changed to 
deliver credentials enforced by the RPC server in addition to calling 
messages.  Then some type of credential management would need to be done 
for each guest in the infrastructure wishing to use the shared message bus.


The requirements of oslo.messaging solution for a shared agent is that 
the agent would only be able to listen and send messages directed 
towards it (point to point) but would be able to publish messages to a 
topic for server consumption (the agent service, which may be integrated 
into other projects).  This way any number of shared agents could 
communicate to one agent service, but those agents would be isolated 
from one another.


Perhaps user credentials could be passed as well in the delivery of each 
RPC message, but that means putting user credentials in the VM to start 
the communication.  Bootstrapping seems like a second obvious problem 
with this model.


I prefer a point to point model, much as the metadata service works 
today.  Although rpc.messaging is a really nice framework (I know, I 
just ported heat to oslo.messaging!) it doesn't fit this problem well 
because of the security implications.


Regards
-steve




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread John Dewey
On Thursday, March 6, 2014 at 11:09 AM, Russell Bryant wrote:
> On 03/06/2014 01:05 PM, Sean Dague wrote:
> > One of the issues that the Nova team has definitely hit is
> > Blueprint overload. At some point there were over 150 blueprints.
> > Many of them were a single sentence.
> > 
> > The results of this have been that design review today is typically
> > not happening on Blueprint approval, but is instead happening once
> > the code shows up in the code review. So -1s and -2s on code review
> > are a mix of design and code review. A big part of which is that
> > design was never in any way sufficiently reviewed before the code
> > started.
> > 
> 
> 
> We certainly did better this cycle. Having a team of people do the
> reviews helped. We have some criteria documented [1]. Trying to do
> reviews in the blueprint whiteboard is just a painful disaster of a workflow.
> 
> > In today's Nova meeting a new thought occurred. We already have
> > Gerrit which is good for reviewing things. It gives you detailed
> > commenting abilities, voting, and history. Instead of attempting
> > (and usually failing) on doing blueprint review in launchpad (or
> > launchpad + an etherpad, or launchpad + a wiki page) we could do
> > something like follows:
> > 
> > 1. create bad blueprint 2. create gerrit review with detailed
> > proposal on the blueprint 3. iterate in gerrit working towards
> > blueprint approval 4. once approved copy back the approved text
> > into the blueprint (which should now be sufficiently detailed)
> > 
> > Basically blueprints would get design review, and we'd be pretty
> > sure we liked the approach before the blueprint is approved. This
> > would hopefully reduce the late design review in the code reviews
> > that's happening a lot now.
> > 
> > There are plenty of niggly details that would be need to be worked
> > out
> > 
> > * what's the basic text / template format of the design to be
> > reviewed (probably want a base template for folks to just keep
> > things consistent). * is this happening in the nova tree (somewhere
> > in docs/ - NEP (Nova Enhancement Proposals), or is it happening in
> > a separate gerrit tree. * are there timelines for blueprint
> > approval in a cycle? after which point, we don't review any new
> > items.
> > 
> > Anyway, plenty of details to be sorted. However we should figure
> > out if the big idea has support before we sort out the details on
> > this one.
> > 
> > Launchpad blueprints will still be used for tracking once things
> > are approved, but this will give us a standard way to iterate on
> > that content and get to agreement on approach.
> > 
> 
> 
> I am a *HUGE* fan of the general idea. It's a tool we already use for
> review and iterating on text. It seems like it would be a huge win.
> I also think it would allow and encourage a lot more people to get
> involved in the reviews.
> 
> I like the idea of iterating in gerrit until it's approved, and then
> using blueprints to track status throughout development. We could
> copy the text back into the blueprint, or just have a link to the
> proper file in the git repo.
> 
> I think a dedicated git repo for this makes sense.
> openstack/nova-blueprints or something, or openstack/nova-proposals if
> we want to be a bit less tied to launchpad terminology.
> 
> If folks are on board with the idea, I'm happy to work on getting a
> repo set up. The base template could be the first review against the
> repo.
> 
> [1] https://wiki.openstack.org/wiki/Blueprints
Funny, we actually had this very recommendation come out of the OpenStack 
Operators mini-summit this week.  There are other people very interested in 
this approach for blueprints.

John
 
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 




Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Vishvananda Ishaya
+1

I can help review these.

Vish

On Mar 6, 2014, at 12:20 AM, Andrew Woodward  wrote:

> I'd Like to request A FFE for the remaining patches in the Ephemeral
> RBD image support chain
> 
> https://review.openstack.org/#/c/59148/
> https://review.openstack.org/#/c/59149/
> 
> are still open after their dependency
> https://review.openstack.org/#/c/33409/ was merged.
> 
> These should be low risk as:
> 1. We have been testing with this code in place.
> 2. It's nearly all contained within the RBD driver.
> 
> This is needed as it implements an essential functionality that has
> been missing in the RBD driver and this will become the second release
> it's been attempted to be merged into.
> 
> Andrew
> Mirantis
> Ceph Community
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Russell Bryant
On 03/06/2014 01:05 PM, Sean Dague wrote:
> One of the issues that the Nova team has definitely hit is
> Blueprint overload. At some point there were over 150 blueprints.
> Many of them were a single sentence.
> 
> The results of this have been that design review today is typically
> not happening on Blueprint approval, but is instead happening once
> the code shows up in the code review. So -1s and -2s on code review
> are a mix of design and code review. A big part of which is that
> design was never in any way sufficiently reviewed before the code
> started.

We certainly did better this cycle.  Having a team of people do the
reviews helped. We have some criteria documented [1].  Trying to do
reviews in the blueprint whiteboard is just a painful disaster of a workflow.

> In today's Nova meeting a new thought occurred. We already have
> Gerrit which is good for reviewing things. It gives you detailed
> commenting abilities, voting, and history. Instead of attempting
> (and usually failing) on doing blueprint review in launchpad (or
> launchpad + an etherpad, or launchpad + a wiki page) we could do
> something like follows:
> 
> 1. create bad blueprint 2. create gerrit review with detailed
> proposal on the blueprint 3. iterate in gerrit working towards
> blueprint approval 4. once approved copy back the approved text
> into the blueprint (which should now be sufficiently detailed)
> 
> Basically blueprints would get design review, and we'd be pretty
> sure we liked the approach before the blueprint is approved. This
> would hopefully reduce the late design review in the code reviews
> that's happening a lot now.
> 
> There are plenty of niggly details that would be need to be worked
> out
> 
> * what's the basic text / template format of the design to be
> reviewed (probably want a base template for folks to just keep
> things consistent). * is this happening in the nova tree (somewhere
> in docs/ - NEP (Nova Enhancement Proposals), or is it happening in
> a separate gerrit tree. * are there timelines for blueprint
> approval in a cycle? after which point, we don't review any new
> items.
> 
> Anyway, plenty of details to be sorted. However we should figure
> out if the big idea has support before we sort out the details on
> this one.
> 
> Launchpad blueprints will still be used for tracking once things
> are approved, but this will give us a standard way to iterate on
> that content and get to agreement on approach.

I am a *HUGE* fan of the general idea.  It's a tool we already use for
review and iterating on text.  It seems like it would be a huge win.
I also think it would allow and encourage a lot more people to get
involved in the reviews.

I like the idea of iterating in gerrit until it's approved, and then
using blueprints to track status throughout development.  We could
copy the text back into the blueprint, or just have a link to the
proper file in the git repo.

I think a dedicated git repo for this makes sense.
openstack/nova-blueprints or something, or openstack/nova-proposals if
we want to be a bit less tied to launchpad terminology.

If folks are on board with the idea, I'm happy to work on getting a
repo set up.  The base template could be the first review against the
repo.

[1] https://wiki.openstack.org/wiki/Blueprints

-- 
Russell Bryant



Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Joe Gordon
On Thu, Mar 6, 2014 at 9:25 AM, Andrew Woodward  wrote:
> For 59148 patch set 23, we nearly merged and had +2 from Joe Gordon

I am not sponsoring any FFE as I want to focus my attention on fixing
bugs etc. This doesn't mean I am for or against a FFE on this feature
in general.


> and Daniel Berrange. And appears to have been quite close.
> For 59149, we might not be so close, Daniel can you comment further if
> you see this landing in the next few days?
>
> On Thu, Mar 6, 2014 at 5:56 AM, Russell Bryant  wrote:
>> On 03/06/2014 03:20 AM, Andrew Woodward wrote:
>>> I'd Like to request A FFE for the remaining patches in the Ephemeral
>>> RBD image support chain
>>>
>>> https://review.openstack.org/#/c/59148/
>>> https://review.openstack.org/#/c/59149/
>>>
>>> are still open after their dependency
>>> https://review.openstack.org/#/c/33409/ was merged.
>>>
>>> These should be low risk as:
>>> 1. We have been testing with this code in place.
>>> 2. It's nearly all contained within the RBD driver.
>>>
>>> This is needed as it implements an essential functionality that has
>>> been missing in the RBD driver and this will become the second release
>>> it's been attempted to be merged into.
>>
>> It's not a trivial change, and it doesn't appear that it was super close
>> to merging based on review history.
>>
>> Are there two nova-core members interested and willing to review this to
>> get it merged ASAP?  If so, could you comment on how close you think it is?
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> If google has done it, Google did it right!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-06 Thread Kyle Mestery
On Thu, Mar 6, 2014 at 10:24 AM, Akihiro Motoki  wrote:

> Hi Kyle,
>
> I am happy to hear OpenDaylight installation and startup are restored
> to devstack.
> It really helps openstack integration with other open source based
> software.
>
> I have a question on a file location for non-OpenStack open source
> software.
> When I refactored the neutron-related devstack code, we placed files for
> such software in the lib/neutron_thirdparty directory.
> I would like to know the new policy on file locations for such software.
> I understand it is currently limited to neutron, but it may happen in other projects too.
>
> Thanks,
> Akihiro
>
So, OpenDaylight is unique in that it only runs on the service node with
devstack, and there is no software running on the compute hosts at all.
The way I have it set up now, it's a top-level service. This was suggested
by Dean and Sean. I think it may make sense to move some of the other
things (like Trema, Ryu and Floodlight) into a similar model.

Thanks,
Kyle


>
> On Thu, Mar 6, 2014 at 11:19 PM, Kyle Mestery 
> wrote:
> > On Tue, Mar 4, 2014 at 7:34 AM, Kyle Mestery 
> > wrote:
> >>
> >> On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague  wrote:
> >>>
> >>> On 03/03/2014 11:32 PM, Dean Troyer wrote:
> >>> > On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery <
> mest...@noironetworks.com
> >>> > > wrote:
> >>> >
> >>> > In all cases today with Open Source plugins, Neutron agents have run
> >>> > on the hosts. For OpenDaylight, this is not the case. OpenDaylight
> >>> > integrates with Neutron as an ML2 MechanismDriver. But it has no
> >>> > Neutron code on the compute hosts. OpenDaylight itself communicates
> >>> > directly to those compute hosts to program Open vSwitch.
> >>> >
> >>> >
> >>> >
> >>> > devstack doesn't provide a way for me to express this today. On the
> >>> > compute hosts in the above scenario, there are no "q-*" services
> >>> > enabled, so the "is_neutron_enabled" function returns 1, meaning no
> >>> > neutron.
> >>> >
> >>> >
> >>> > True and working as designed.
> >>> >
> >>> >
> >>> > And then devstack sets Nova up to use nova-networking, which fails.
> >>> >
> >>> >
> >>> > This only happens if you have enabled nova-network.  Since it is on
> >>> > by default you must disable it.
> >>> >
> >>> >
> >>> > The patch I have submitted [1] modifies "is_neutron_enabled" to
> >>> > check for the meta neutron service being enabled, which will then
> >>> > configure nova to use Neutron instead of nova-networking on the
> >>> > hosts. If this sounds wonky and incorrect, I'm open to suggestions
> >>> > on how to make this happen.
> >>> >
> >>> >
> >>> > From the review:
> >>> >
> >>> > is_neutron_enabled() is doing exactly what it is expected to do: return
> >>> > success if it finds any "q-*" service listed in ENABLED_SERVICES. If no
> >>> > neutron services are configured on a compute host, then this must not
> >>> > say they are.
> >>> >
> >>> > Putting 'neutron' in ENABLED_SERVICES does nothing and should do
> >>> > nothing.
> >>> >
> >>> > Since you are not implementing ODL as a Neutron plugin (as far as
> >>> > DevStack is concerned) you should then treat it as a system service and
> >>> > configure it that way, adding 'opendaylight' to ENABLED_SERVICES
> >>> > whenever you want something to know it is being used.
> >>> >
> >>> >
> >>> >
> >>> > Note: I have another patch [2] which enables an OpenDaylight
> >>> > service, including configuration of OVS on hosts. But I cannot check
> >>> > if the "opendaylight" service is enabled, because this will only run
> >>> > on a single node, and again, not on each compute host.
> >>> >
> >>> >
> >>> > I don't understand this conclusion. In multi-node, each node gets its
> >>> > own specific ENABLED_SERVICES list; you can check that on each node to
> >>> > determine how to configure that node.  That is what I'm trying to
> >>> > explain in that last paragraph above, maybe not too clearly.
> >>>
> >>> So in an OpenDaylight environment... what's running on the compute
> >>> host to coordinate host-level networking?
> >>>
> >> Nothing. OpenDaylight communicates to each host using OpenFlow and OVSDB
> >> to manage networking on the host. In fact, this is one huge advantage for
> >> the ODL MechanismDriver in Neutron, because it's one less agent running
> >> on the host.
> >>
> >> Thanks,
> >> Kyle
> >>
> > As an update here, I've reworked my devstack patch [1] for adding
> > OpenDaylight support to make OpenDaylight a top-level service, per
> > suggestion from Dean. You can now enable both "odl-server" and
> > "odl-compute" in your local.conf with my patch.
> > Enabling "odl-server" will run OpenDaylight under devstack. Enabling
> > "odl-compute" will configure the host's OVS to work with OpenDaylight.
> >
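For concreteness, the setup described above might look roughly like this in local.conf. This is a hypothetical fragment: the "odl-server"/"odl-compute" service names come from this thread, but the exact variables and mechanism-driver setting depend on the final version of the patch.

```ini
[[local|localrc]]
# Service node: run OpenDaylight itself plus Neutron, and disable
# nova-network in favour of Neutron.
enable_service odl-server odl-compute
disable_service n-net
enable_service q-svc q-dhcp q-l3 q-meta
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight

# A compute host would instead enable only odl-compute, so its local
# OVS gets pointed at the OpenDaylight controller.
```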
> > Per discussion with Sean, I'd like to look at ref

Re: [openstack-dev] Climate Incubation Application

2014-03-06 Thread Joe Gordon
On Thu, Mar 6, 2014 at 3:11 AM, Sylvain Bauza  wrote:
> Hi Thierry,
>
>
> 2014-03-06 11:46 GMT+01:00 Thierry Carrez :
>
>> Dina Belova wrote:
>> >> Would Climate also be usable to support functionality like Spot
>> >> Instances ? "Schedule when spot price falls under X" ?
>> >
>> > Really good question. Personally I think that Climate might help
>> > implementing this feature, but probably it's not the main thing that
>> > will work there.
>> >
>> > Here are my concerns about it. Spot instances require way of counting
>> > instance price:
>> > [...]
>>
>> Not necessarily. It's a question of whether Climate would handle only
>> "schedule at" (a given date), or more generally "schedule when" (a
>> certain event happens, with date just being one event type). You can
>> depend on some external system setting spot prices, or any other
>> information, and climate rules that would watch regularly that external
>> information to decide if it's time to run resources or not. I don't
>> think it should be Climate's responsibility to specifically maintain
>> spot price, everyone can come up with their own rules.
>>
>
>
> I can't agree more on this. The goal of Climate is to provide some formal
> contract agreement between a user and the Reservation service, for
> ensuring that the order will be placed and served correctly (with regard to
> quotas and capacity). Of course, what we call a 'user' doesn't necessarily
> have to be a 'real' user.
> About the spot instances use-case, I don't pretend to design it, but I could
> easily imagine that a call to Nova for booting an instance would place an
> order with Climate under a specific type of contract (what we began to call
> 'best-effort', and which has yet to be implemented) where notifications for
> acquitting the order would come from Ceilometer (for instance). If no
> notifications come to Climate, the lease would not be honored.
>
> See https://wiki.openstack.org/wiki/Climate#Lease_types_.28concepts.29 for
> best-effort definition of a lease.
>

"Immediate reservation. Resources are provisioned immediately (like VM
boot or moving host to reserved user aggregate) or not at all. If
request can be fulfilled, lease is created and success status is
returned. Lease should be marked as active or to_be_started. Otherwise
(if request resource cannot be provisioned right now) failure status
for this request should be returned."

Isn't this what nova does today? Why is Climate needed for this?

Also, your concept of 'best-effort reservation' is very different from
spot instances.

Spot instances will terminate when the price goes above a
threshold; I didn't see anything like that here.
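The spot-instance behavior described here could be sketched as a simple watcher loop. This is illustrative pseudologic only, not Climate or Nova code; `price_feed` and `terminate` are hypothetical stand-ins for the provider's price stream and revocation hook.

```python
def run_spot_lease(bid, price_feed, terminate):
    """Watch a spot price feed and revoke the lease once the bid is exceeded.

    The event that ends the lease is a price threshold, not a date --
    the inverse of a 'schedule at' reservation.
    """
    seen = []
    for price in price_feed:        # stream of market prices
        seen.append(price)
        if price > bid:             # market price exceeded the bid
            terminate()             # instance is revoked, lease ends
            break
    return seen

# Toy usage: the price rises past a bid of 0.10 on the third sample.
events = []
seen = run_spot_lease(0.10, [0.05, 0.08, 0.12, 0.20],
                      lambda: events.append("terminated"))
```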


> -Sylvain
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-06 Thread Sandy Walsh


On 03/06/2014 02:16 PM, Renat Akhmerov wrote:
> IMO, it doesn’t look bad (sorry, I’m biased too) even now. Keep in mind this is
> not the final version; we keep making it more expressive and concise.
> 
> As for the killer object model, it’s not 100% clear what you mean. As always,
> the devil is in the details. This is a web service, with all the consequences. I
> assume what you call “object model” here is nothing else but a python binding
> for the web service, which we’re also working on. The custom python logic you
> mentioned will also be possible to integrate easily. Like I said, it’s still
> a pilot stage of the project.

Yeah, the REST aspect is where the "tricky" part comes in :)

Basically, in order to make a grammar expressive enough to work across a
web interface, we essentially end up writing a crappy language. Instead,
we should focus on the callback hooks to something higher level to deal
with these issues. Mistral should just say "I'm done with this task, what
should I do next?" and the callback service can make decisions on where
in the graph to go next.

Likewise with things like sending emails from the backend. Mistral
should just call a webhook and let the receiver deal with "active
states" as they choose.

Which is why modelling this stuff in code is usually better and
why I'd lean towards the TaskFlow approach to the problem. They're
tackling this from a library perspective first and then (possibly)
turning it into a service. Just seems like a better fit. It's also the
approach taken by Amazon Simple Workflow and many BPEL engines.

-S
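The callback model described above can be sketched in a few lines of plain Python. This is a toy illustration of the inversion of control, not Mistral or TaskFlow code; the task and decider names are made up.

```python
def run_workflow(tasks, decide_next, start):
    """Run tasks one at a time, asking an external decider what comes next.

    The engine knows nothing about the graph: after each task it in effect
    says "I'm done with <name>, what should I do next?" and the decider
    (which could sit behind a webhook) returns the next task, or None.
    """
    trace, current = [], start
    while current is not None:
        result = tasks[current]()               # execute the current task
        trace.append((current, result))
        current = decide_next(current, result)  # callback picks the next hop
    return trace

# Toy graph (hypothetical tasks): evacuate a host, then verify it.
tasks = {"evacuate": lambda: "ok", "verify": lambda: "ok"}

def decide_next(name, result):
    return {"evacuate": "verify", "verify": None}[name]

trace = run_workflow(tasks, decide_next, "evacuate")
```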


> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
> On 06 Mar 2014, at 22:26, Joshua Harlow  wrote:
> 
>> That sounds a little similar to what taskflow is trying to do (I am of 
>> course biased).
>>
>> I agree with letting the native language implement the basics (expressions, 
>> assignment...) and then building the "domain" ontop of that. Just seems more 
>> natural IMHO, and is similar to what linq (in c#) has done.
>>
>> My 3 cents.
>>
>> Sent from my really tiny device...
>>
>>> On Mar 6, 2014, at 5:33 AM, "Sandy Walsh"  wrote:
>>>
>>> DSL's are tricky beasts. On one hand I like giving a tool to
>>> non-developers so they can do their jobs, but I always cringe when the
>>> DSL reinvents the wheel for basic stuff (compound assignment
>>> expressions, conditionals, etc).
>>>
>>> YAML isn't really a DSL per se, in the sense that it has no language
>>> constructs. As compared to a Ruby-based DSL (for example) where you
>>> still have Ruby under the hood for the basic stuff and extensions to the
>>> language for the domain-specific stuff.
>>>
>>> Honestly, I'd like to see a killer object model for defining these
>>> workflows as a first step. What would a python-based equivalent of that
>>> real-world workflow look like? Then we can ask ourselves, does the DSL
>>> make this better or worse? Would we need to expose things like email
>>> handlers, or leave that to the general python libraries?
>>>
>>> $0.02
>>>
>>> -S
>>>
>>>
>>>
 On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
 Folks, 

 I took a crack at using our DSL to build a real-world workflow. 
 Just to see how it feels to write it. And how it compares with
 alternative tools. 

 This one automates a page from OpenStack operation
 guide: 
 http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
  

 Here it is https://gist.github.com/dzimine/9380941
 or here http://paste.openstack.org/show/72741/

 I have a bunch of comments, implicit assumptions, and questions which
 came to mind while writing it. Want your and other people's opinions on 
 it. 

 But gist and paste don't let you annotate lines!!! :(

 May be we can put it on the review board, even with no intention to
 check in,  to use for discussion? 

 Any interest?

 DZ> 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Matt Van Winkle
Hey Sean,
The number one item that came out of the Operators' mini-summit on Monday
was better mechanisms to engage Operators in the design and review
process.  Moving Blueprints to Gerrit was something discussed quite a bit.
It's fantastic to hear the same thing is coming from the Nova development
side as well.

It also allows for Operators to get some credit for participating within
the community.  I can't speak for all, but I can say that based on the
discussion in the room there is good support for this from those of us
that have to run OpenStack on a daily basis.  Please let me know how we
can help move this change along.

Thanks!
Matt

On 3/6/14 12:05 PM, "Sean Dague"  wrote:

>One of the issues that the Nova team has definitely hit is Blueprint
>overload. At some point there were over 150 blueprints. Many of them
>were a single sentence.
>
>The results of this have been that design review today is typically not
>happening on Blueprint approval, but is instead happening once the code
>shows up in the code review. So -1s and -2s on code review are a mix of
>design and code review. A big part of which is that design was never in
>any way sufficiently reviewed before the code started.
>
>In today's Nova meeting a new thought occurred. We already have Gerrit
>which is good for reviewing things. It gives you detailed commenting
>abilities, voting, and history. Instead of attempting (and usually
>failing) on doing blueprint review in launchpad (or launchpad + an
>etherpad, or launchpad + a wiki page) we could do something like follows:
>
>1. create bad blueprint
>2. create gerrit review with detailed proposal on the blueprint
>3. iterate in gerrit working towards blueprint approval
>4. once approved copy back the approved text into the blueprint (which
>should now be sufficiently detailed)
>
>Basically blueprints would get design review, and we'd be pretty sure we
>liked the approach before the blueprint is approved. This would
>hopefully reduce the late design review in the code reviews that's
>happening a lot now.
>
>There are plenty of niggly details that would be need to be worked out
>
> * what's the basic text / template format of the design to be reviewed
>(probably want a base template for folks to just keep things consistent).
> * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
>Enhancement Proposals), or is it happening in a separate gerrit tree.
> * are there timelines for blueprint approval in a cycle? after which
>point, we don't review any new items.
>
>Anyway, plenty of details to be sorted. However we should figure out if
>the big idea has support before we sort out the details on this one.
>
>Launchpad blueprints will still be used for tracking once things are
>approved, but this will give us a standard way to iterate on that
>content and get to agreement on approach.
>
>   -Sean
>
>-- 
>Sean Dague
>Samsung Research America
>s...@dague.net / sean.da...@samsung.com
>http://dague.net
>




Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-06 Thread Renat Akhmerov
IMO, it doesn’t look bad (sorry, I’m biased too) even now. Keep in mind this is
not the final version; we keep making it more expressive and concise.

As for the killer object model, it’s not 100% clear what you mean. As always,
the devil is in the details. This is a web service, with all the consequences. I
assume what you call “object model” here is nothing else but a python binding
for the web service, which we’re also working on. The custom python logic you
mentioned will also be possible to integrate easily. Like I said, it’s still
a pilot stage of the project.

Renat Akhmerov
@ Mirantis Inc.



On 06 Mar 2014, at 22:26, Joshua Harlow  wrote:

> That sounds a little similar to what taskflow is trying to do (I am of course 
> biased).
> 
> I agree with letting the native language implement the basics (expressions, 
> assignment...) and then building the "domain" ontop of that. Just seems more 
> natural IMHO, and is similar to what linq (in c#) has done.
> 
> My 3 cents.
> 
> Sent from my really tiny device...
> 
>> On Mar 6, 2014, at 5:33 AM, "Sandy Walsh"  wrote:
>> 
>> DSL's are tricky beasts. On one hand I like giving a tool to
>> non-developers so they can do their jobs, but I always cringe when the
>> DSL reinvents the wheel for basic stuff (compound assignment
>> expressions, conditionals, etc).
>> 
>> YAML isn't really a DSL per se, in the sense that it has no language
>> constructs. As compared to a Ruby-based DSL (for example) where you
>> still have Ruby under the hood for the basic stuff and extensions to the
>> language for the domain-specific stuff.
>> 
>> Honestly, I'd like to see a killer object model for defining these
>> workflows as a first step. What would a python-based equivalent of that
>> real-world workflow look like? Then we can ask ourselves, does the DSL
>> make this better or worse? Would we need to expose things like email
>> handlers, or leave that to the general python libraries?
>> 
>> $0.02
>> 
>> -S
>> 
>> 
>> 
>>> On 03/05/2014 10:50 PM, Dmitri Zimine wrote:
>>> Folks, 
>>> 
>>> I took a crack at using our DSL to build a real-world workflow. 
>>> Just to see how it feels to write it. And how it compares with
>>> alternative tools. 
>>> 
>>> This one automates a page from OpenStack operation
>>> guide: 
>>> http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
>>>  
>>> 
>>> Here it is https://gist.github.com/dzimine/9380941
>>> or here http://paste.openstack.org/show/72741/
>>> 
>>> I have a bunch of comments, implicit assumptions, and questions which
>>> came to mind while writing it. Want your and other people's opinions on it. 
>>> 
>>> But gist and paste don't let you annotate lines!!! :(
>>> 
>>> May be we can put it on the review board, even with no intention to
>>> check in,  to use for discussion? 
>>> 
>>> Any interest?
>>> 
>>> DZ> 
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-06 Thread Julie Pichon
On 05/03/14 22:36, Lyle, David wrote:
> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his
> reviews very insightful and more importantly have come to rely on
> their quality. He has contributed to several areas in Horizon and he
> understands the code base well.  Radomir is also very active in
> tuskar-ui both contributing and reviewing.

+1 from me. I find Radomir's reviews useful, and highly value the deep
knowledge of Python shown in both his patches and reviews. He would make
a great addition to the core team.

Julie

> 
> David
> 
> ___ OpenStack-dev mailing
> list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




[openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-06 Thread Sean Dague
One of the issues that the Nova team has definitely hit is Blueprint
overload. At some point there were over 150 blueprints. Many of them
were a single sentence.

The results of this have been that design review today is typically not
happening on Blueprint approval, but is instead happening once the code
shows up in the code review. So -1s and -2s on code review are a mix of
design and code review. A big part of which is that design was never in
any way sufficiently reviewed before the code started.

In today's Nova meeting a new thought occurred. We already have Gerrit
which is good for reviewing things. It gives you detailed commenting
abilities, voting, and history. Instead of attempting (and usually
failing) on doing blueprint review in launchpad (or launchpad + an
etherpad, or launchpad + a wiki page) we could do something like follows:

1. create bad blueprint
2. create gerrit review with detailed proposal on the blueprint
3. iterate in gerrit working towards blueprint approval
4. once approved copy back the approved text into the blueprint (which
should now be sufficiently detailed)

Basically blueprints would get design review, and we'd be pretty sure we
liked the approach before the blueprint is approved. This would
hopefully reduce the late design review in the code reviews that's
happening a lot now.
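The proposed steps could look like the usual Gerrit change workflow. The sketch below simulates it in a throwaway local repo; the repo name (openstack/nova-proposals), template file and directory layout are hypothetical, and the real step 3 would be `git review` against Gerrit rather than anything local.

```shell
set -e
# Stand-in for `git clone .../openstack/nova-proposals` (name hypothetical).
workdir=$(mktemp -d)
cd "$workdir"
git init -q proposals && cd proposals
git config user.email demo@example.com
git config user.name "Demo User"

# The base template would itself be the first review against the repo.
printf 'Problem description:\nProposed change:\n' > template.rst
git add template.rst
git commit -qm "Add base proposal template"

# Step 2: draft a detailed proposal from the template on a topic branch.
mkdir -p juno
cp template.rst juno/my-feature.rst
git checkout -qb bp/my-feature
git add juno/my-feature.rst
git commit -qm "Propose my-feature blueprint"
# Step 3: iterate via `git review` (new patch sets) until approved, then
# step 4: copy the approved text back into the launchpad blueprint.
```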

There are plenty of niggly details that would be need to be worked out

 * what's the basic text / template format of the design to be reviewed
(probably want a base template for folks to just keep things consistent).
 * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
Enhancement Proposals), or is it happening in a separate gerrit tree.
 * are there timelines for blueprint approval in a cycle? after which
point, we don't review any new items.

Anyway, plenty of details to be sorted. However we should figure out if
the big idea has support before we sort out the details on this one.

Launchpad blueprints will still be used for tracking once things are
approved, but this will give us a standard way to iterate on that
content and get to agreement on approach.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Georgy Okrokvertskhov
On Thu, Mar 6, 2014 at 8:59 AM, Julien Danjou  wrote:

> On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote:
>
> > I think there are valid reasons why we can consider the MQ approach for
> communicating
> > with VM agents. The first obvious reason is scalability and performance.
> > User can ask infrastructure to create 1000 VMs and configure them. With
> > HTTP approach it will lead to a corresponding number of connections to a
> > REST API service. Taking into account that cloud has multiple clients the
> > load on infrastructure will be pretty significant. You can address this
> > with introducing Load Balancing for each service, but it will
> significantly
> > increase management overhead and complexity of OpenStack infrastructure.
>
> Uh? I'm having trouble imagining any large OpenStack deployment without
> load-balancing services. I don't think we ever designed OpenStack to run
> without load-balancers at large scale.
>

Not all services require LoadBalancer instances. It makes sense to use an LB
for API services, but even in Nova there are components which use MQ RPC for
communication, and one doesn't need to put them behind an LB as they scale
naturally just by using MQ concurrently. I believe this change to MQ RPC was
done exactly to address the scalability problems of internal services.
I agree that LBs are expected in a production-grade deployment, but this
solution is not a silver bullet and has a lot of limitations and overall
design implications.
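As an aside, the per-client throttling a broker can apply (mentioned further down in this thread as protection against DDoS from VM agents) is commonly a token bucket. The sketch below is a generic illustration, not RabbitMQ or oslo.messaging code.

```python
import time

class TokenBucket:
    """Per-client token bucket: a broker-side throttle so one misbehaving
    VM agent cannot flood the service.

    rate: tokens refilled per second; capacity: allowed burst size.
    """
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens, self.last = capacity, clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1    # consume one token for this message
            return True
        return False            # bucket empty: drop or defer the message

# Deterministic usage with a fake clock: a burst of 5 messages is allowed,
# the 6th in the same instant is rejected.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=5, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(6)]
```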

> > The second issue is connectivity and security. I think that in a typical
> > production deployment VMs will not have access to OpenStack
> > infrastructure services.
>
> Why? Should they be different than other VM? Are you running another
> OpenStack cloud to run your OpenStack cloud?
>

There are use cases and security requirements that usually mandate
very limited access to OpenStack infrastructure components. As cloud admins
do not control the workloads on VMs, there is a significant security risk of
being attacked from a VM. The common requirement we see in production
deployments is to enable SSL for everything, including MySQL, MQ and the nova
metadata service.
I also would like to highlight that even Nova/Neutron, when working with
cloud-init, enables access to the metadata service only temporarily, by
managing routes on the VM. So for design purposes it is better to assume that
there will be no access to OpenStack services from the VM side, and if you
need it, you will have to configure it properly.


>
> > It is fine for core infrastructure services like
> > Nova and Cinder as they do not work directly with VM. But it makes a huge
> > problem for VM level services like Savanna, Heat, Trove and Murano which
> > have to be able to communicate with VMs. The solution here is to put an
> > intermediary to create a controllable way of communication. In case of
> > HTTP
> > you will need to have a proxy with QoS and Firewalls or policies, to be
> > able to restrict an access to some specific URLS or services, to throttle
> > the number of connections and bandwidth to protect services from DDoS
> > attacks from VM sides.
>
> This really sounds like weak arguments. You probably already do need
> firewall, QoS, and throttling for your users if you're deploying a cloud
> and want to mitigate any kind of attack.
>
I don't argue about the existence of such components in an OpenStack
deployment. I just point out that with an increasing number of services one
will have to manage the complexity of such configuration. Taking into
account the number of possible Neutron configurations, the possibility of
overlapping subnets in virtual networks, and the existence of fully private
networks which are not attached through a router to an external network,
connectivity and access control look like a really complex task which will
be a headache for cloud admins and devops.

>
> > In case of MQ usage you can have a separate MQ broker for communication
> > between service and VMs. Typical brokers have throttling mechanism, so
> > you
> > can protect service from DDoS attacks via MQ.
>
> Yeah and I'm pretty sure a lot of HTTP servers have throttling for
> connection rate and/or bandwidth limitation. I'm not really convinced.
>
Yes, some of them have and you will need to configure them properly.

>
> > Using different queues and even vhosts you can effectively segregate
> > different tenants.
>
> Sounds like you could do the same thing with the HTTP protocol.
>
> > For example we use this approach in Murano service when it is
> > installed by Fuel. The default deployment configuration for Murano
> > produced by Fuel is to have separate RabbitMQ instance for Murano<->VM
> > communications. This configuration will not expose the OpenStack
> > internals to VM, so even if someone broke the Murano rabbitmq
> > instance, OpenStack itself will be unaffected and only the Murano
> > part will be broken.
>
> It really sounds like you already settled on the solution being
> RabbitMQ, so I'm not sure what/why you ask in the first place. :)
>
> Is there any problem with starting VMs on a network that is connected to
> your internal network? You just have to do that and connect your
> application to the/one internal messages bus and that's it.

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Jay Pipes
On Thu, 2014-03-06 at 21:34 +0400, Eugene Nikanorov wrote:
> If this happens, it might make sense to keep it before, not after the
> summit.
> Basically, on the summit we need to come up with a plan/design/roadmap
> that everyone agrees on and just present it to the core team.

It depends. If the LBaaS summit is *after* the design summit, it can be
more of a working meeting where we take advantage of the time together
to make a lot of progress on goals established (and documented) at the
summit. If it is *before* the summit, it is more likely to be less
productive and more of a brainstorming type meeting, which is kind of
what the design summit is for.

Just my two cents,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-06 Thread Jaromir Coufal

On 2014/05/03 23:36, Lyle, David wrote:

I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
very insightful and more importantly have come to rely on their quality. He has 
contributed to several areas in Horizon and he understands the code base well.  
Radomir is also very active in tuskar-ui both contributing and reviewing.


+1

-- Jarda



Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Prashanth Hari
Same here.. will be interested to join.

Thanks,
Prashanth


On Thu, Mar 6, 2014 at 11:51 AM, Veiga, Anthony <
anthony_ve...@cable.comcast.com> wrote:

>
> >On Thu, 2014-03-06 at 15:32 +, Jorge Miramontes wrote:
> >> I'd like to gauge everyone's interest in a possible mini-summit for
>> Neutron LBaaS. If enough people are interested I'd be happy to try and
> >> set something up. The Designate team just had a productive mini-summit
> >> in Austin, TX and it was nice to have face-to-face conversations with
> >> people in the Openstack community. While most of us will meet in
> >> Atlanta in May, I feel that a focused mini-summit will be more
> >> productive since we won't have other Openstack distractions around us.
> >> Let me know what you all think!
> >
> >++
> >
> >++
> >
> >I think a few weeks after the design summit would be a good time.
> >
> >-jay
> >
> >
>
> Throwing my hat into the ring as well. I think this would be quite useful.
> -Anthony
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Carl Perry
I am also interested

On 03/06/2014 11:08 AM, John Dewey wrote:
> I am interested
>
> On Thursday, March 6, 2014 at 7:32 AM, Jorge Miramontes wrote:
>
>> Hi everyone,
>>
>> I'd like to gauge everyone's interest in a possible mini-summit for
>> Neutron LBaaS. If enough people are interested I'd be happy to try
>> and set something up. The Designate team just had a productive
>> mini-summit in Austin, TX and it was nice to have face-to-face
>> conversations with people in the Openstack community. While most of
>> us will meet in Atlanta in May, I feel that a focused mini-summit
>> will be more productive since we won't have other Openstack
>> distractions around us. Let me know what you all think!
>>
>> Cheers,
>> --Jorge
>
>
>





Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Eugene Nikanorov
If this happens, it might make sense to keep it before, not after the
summit.
Basically, on the summit we need to come up with a plan/design/roadmap that
everyone agrees on and just present it to the core team.

Thanks,
Eugene.



On Thu, Mar 6, 2014 at 9:08 PM, John Dewey  wrote:

>  I am interested
>
> On Thursday, March 6, 2014 at 7:32 AM, Jorge Miramontes wrote:
>
>   Hi everyone,
>
>  I'd like to gauge everyone's interest in a possible mini-summit for
> Neutron LBaaS. If enough people are interested I'd be happy to try and set
> something up. The Designate team just had a productive mini-summit in
> Austin, TX and it was nice to have face-to-face conversations with people
> in the Openstack community. While most of us will meet in Atlanta in May, I
> feel that a focused mini-summit will be more productive since we won't have
> other Openstack distractions around us. Let me know what you all think!
>
>  Cheers,
> --Jorge
>
>
>
>
>


Re: [openstack-dev] Incubation Request: Murano

2014-03-06 Thread Steven Dake

On 03/06/2014 03:15 AM, Thierry Carrez wrote:

Steven Dake wrote:

My general take is workflow would fit in the Orchestration program, but
not be integrated into the heat repo specifically.  It would be a
different repo, managed by the same orchestration program just as we
have heat-cfntools and other repositories.  Figuring out how to handle
the who is the core team of people responsible for program's individual
repositories is the most difficult aspect of making such a merge.  For 
example, I'd not desire a bunch of folks from Murano +2/+A heat-specific 
repos until they understood the code base in detail, or at least the 
broad architecture.   I think the same thing applies in reverse from the 
Murano perspective.  Ideally folks that are core on a specific program 
would need to figure out how to learn how to broadly review each repo 
(meaning the heat devs would have to come up to speed on murano and 
murano devs would have to come up to speed on heat).  Learning a new code 
base is a big commitment for an already overtaxed core team.

Being in the same program means you share the same team and PTL, not
necessarily that all projects under the program have the same core
review team. So you could have different core reviewers for both
(although I'd encourage the core for ones become core for the other,
since it will facilitate behaving like a coherent team). You could also
have a single core team with clear expectations set ("do not approve
changes for code you're not familiar with").

This may be possible with jenkins permissions, but what I'd like to see 
is a way for people familiar with each specific project to be 
graduated to core for that project (e.g. heat or workflow).  An implicit 
"do not approve" expectation doesn't totally fit, because at some point 
we may want to give those folks the ability to approve via a core 
nomination (because they have met the core requirements) for either heat 
or workflow.  Without a way of nominating for core for a specific 
project (within a specific program), the poor developer has no way to 
know when they have officially been recognized by the core team as an 
actual core member.


I agree folks in one program need to behave as a coherent team for the 
Orchestration program to be successful, which means a big commitment 
from the existing orchestration program core members (currently 
heat-core) to come up to speed on the example workflow code base and 
community (and vice-versa).


I'm a bit confused as well as to how an incubated project would be 
differentiated from an integrated project in one program.  This may have 
already been discussed by the TC.  For example, Red Hat doesn't 
officially support incubated projects, but we officially support (with 
our full sales/training/documentation/support, plus a whole bunch of 
other Red Hat internalisms) integrated projects.  OpenStack vendors need 
a way to let customers know (through an upstream page?) what the status 
of a project in a specific program is, so we can appropriately set 
expectations with the community and customers.


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Andrew Woodward
For 59148 patch set 23, we nearly merged and had +2s from Joe Gordon
and Daniel Berrange, and it appears to have been quite close.
For 59149 we might not be so close. Daniel, can you comment further if
you see this landing in the next few days?

On Thu, Mar 6, 2014 at 5:56 AM, Russell Bryant  wrote:
> On 03/06/2014 03:20 AM, Andrew Woodward wrote:
>> I'd like to request an FFE for the remaining patches in the Ephemeral
>> RBD image support chain
>>
>> https://review.openstack.org/#/c/59148/
>> https://review.openstack.org/#/c/59149/
>>
>> are still open after their dependency
>> https://review.openstack.org/#/c/33409/ was merged.
>>
>> These should be low risk as:
>> 1. We have been testing with this code in place.
>> 2. It's nearly all contained within the RBD driver.
>>
>> This is needed as it implements an essential functionality that has
>> been missing in the RBD driver and this will become the second release
>> it's been attempted to be merged into.
>
> It's not a trivial change, and it doesn't appear that it was super close
> to merging based on review history.
>
> Are there two nova-core members interested and willing to review this to
> get it merged ASAP?  If so, could you comment on how close you think it is?
>
> --
> Russell Bryant
>



-- 
If google has done it, Google did it right!



[openstack-dev] [marconi] graduation review meeting

2014-03-06 Thread Kurt Griffiths
Team, we will be discussing Marconi graduation from incubation in a couple
of weeks at the TC meeting, March 18th at 20:00 UTC.

It would be great to have as many people there as possible to help answer 
questions, etc.

Thanks!

Kurt G. | @kgriffs


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Daniel P. Berrange
On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:
> Hello folks,
> 
> A number of OpenStack and related projects have a need to perform
> operations inside VMs running on OpenStack. A natural solution would
> be an agent running inside the VM and performing tasks.
> 
> One of the key questions here is how to communicate with the agent. An
> idea which was discussed some time ago is to use oslo.messaging for
> that. That is an RPC framework - which is what is needed. You can use different
> transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference or
> connectivity your OpenStack networking can provide. At the same time
> there is a number of things to consider, like networking, security,
> packaging, etc.
> 
> So, messaging people, what is your opinion on that idea? I've already
> raised that question in the list [1], but seems like not everybody who
> has something to say participated. So I am resending with the
> different topic. For example, yesterday we started discussing security
> of the solution in the openstack-oslo channel. Doug Hellmann at the
> start raised two questions: is it possible to separate different
> tenants or applications with credentials and ACL so that they use
> different queues? My opinion that it is possible using RabbitMQ/Qpid
> management interface: for each application we can automatically create
> a new user with permission to access only her queues. Another question
> raised by Doug is how to mitigate a DOS attack coming from one tenant
> so that it does not affect another tenant. The thing is though
> different applications will use different queues, they are going to
> use a single broker.

Looking at it from the security POV, I'd absolutely not want to
have any tenant VMs connected to the message bus that openstack
is using between its hosts. Even if you have security policies
in place, the inherent architectural risk of such a design is
just far too great. One small bug or misconfiguration and it
opens the door to a guest owning the entire cloud infrastructure.
Any channel between a guest and host should be isolated per guest,
so there's no possibility of guest messages finding their way out
to either the host or to other guests.

If there was still a desire to use oslo.messaging, then at the
very least you'd want a completely isolated message bus for guest
comms, with no connection to the message bus used between hosts.
Ideally the message bus would be separate per guest too, which
means it ceases to be a bus really - just a point-to-point link
between the virt host + guest OS that happens to use the oslo.messaging
wire format.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [re]: [GSoC 2014] Proposal Template

2014-03-06 Thread Davanum Srinivas
Sai,

There may be more than one person on a topic, so it would make sense
to have additional questions per person. Yes, link to project idea is
definitely needed.

-- dims

On Thu, Mar 6, 2014 at 11:41 AM, saikrishna sripada
 wrote:
> Hi Masaru,
>
> I tried creating the project template following your suggestions. That's
> really helpful. Only one suggestion:
>
> Under the project description, we can give the link to the actual project
> idea. The remaining details like these can be removed here since they are
> redundant:
>
> What is the goal?
> How will you achieve your goal?
> What would be your milestone?
> At which time will you complete a sub-task of your project?
>
> We will be filling in these details anyway in the Project template link,
> which will be just below on the page. Please confirm.
>
> Thanks,
> --sai krishna.
>
> Dear mentors and students,
>
> Hi,
>
> after a short talk with dims, I created an application template wiki
> page[1]. Obviously, this is not a completed version, and I'd like your
> opinions to improve it. :)
>
> I have :
> 1) simply added information such as :
>
>・Personal Details (e.g. Name, Email, University and so on)
>
>・Project Proposal (e.g. Project, idea, implementation issues, and time
> scheduling)
>
>・Background (e.g. Open source, academic or intern experience, or
> language experience)
>
> 2) linked this page on GSoC 2014 wiki page[2]
> 3) created an example of my proposal page [3] (not completed yet!)
> 4) linked the example to an Oslo project page[4]
>
>
> Thank you,
> Masaru
>
> [1]
> https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
> [2] https://wiki.openstack.org/wiki/GSoC2014#Communication
> [3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
> [4]
> https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals
>
>
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com



[openstack-dev] [Openstack-dev][Horizon] test_launch_instance_post questions

2014-03-06 Thread Abishek Subramanian (absubram)
Hi,

I had a couple of questions regarding this UT and the
JS template that it ends up using.
Hopefully someone can point me in the right direction
and help me understand this a little better.

I see that for this particular UT, we have a total of 3 networks
in the network_list (the second network is supposed to be disabled though).
For the nic argument needed by the nova/server_create API though we
only pass the first network's net_id.

I am trying to modify this unit test so as to be able to accept 2
network_ids instead of just one. This should be possible, yes?
We can have two nics in an instance instead of just one?
However, I always see that when the test runs, the code only finds
the first network from the list.

This line of code -

    if netids:
        nics = [{"net-id": netid, "v4-fixed-ip": ""}
                for netid in netids]

There's always just one net-id in this dictionary even though I've added
a new network in the neutron test_data. Can someone please help me
figure out what I might be doing wrong?

How does the JS code in horizon.instances.js file work?
I assume this is where the network list is obtained from?
How does this translate in the unit test environment?



Thanks!
Abishek




Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread John Dewey
I am interested 


On Thursday, March 6, 2014 at 7:32 AM, Jorge Miramontes wrote:

> Hi everyone,
> 
> I'd like to gauge everyone's interest in a possible mini-summit for Neutron 
> LBaaS. If enough people are interested I'd be happy to try and set something 
> up. The Designate team just had a productive mini-summit in Austin, TX and it 
> was nice to have face-to-face conversations with people in the Openstack 
> community. While most of us will meet in Atlanta in May, I feel that a 
> focused mini-summit will be more productive since we won't have other 
> Openstack distractions around us. Let me know what you all think! 
> 
> Cheers, 
> --Jorge
> 
> 
> 
> 
> 
> 




[openstack-dev] [Trove] development workflows

2014-03-06 Thread Lowery, Mathew
So I submitted this doc (in this patch set) and Dan Nguyen (thanks Dan)
stated that there were some folks using Vagrant. (My workflow uses git push
with a git hook to copy files and trigger restarts.) Can anyone point me to
any doc regarding Trove using Vagrant? Assuming my doc is desirable in some
form, where is the best place to put it? Thanks.


Re: [openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Kevin L. Mitchell
On Thu, 2014-03-06 at 08:14 -0800, Gary Kotton wrote:
> File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in
> 
> 
> 
> from setuptools import setup, Feature, Extension
> 
> 
> ImportError: cannot import name Feature

Apparently, quite recently, a new version of setuptools was released
that eliminated the Feature class.  From what I understand, the class
has been deprecated for quite a while, but the removal still seems to
have taken some consumers by surprise; we discovered it when a package
that uses MarkupSafe failed tests with the same error today.  We may
have to consider a short-term pin to the version of setuptools (if
that's even possible) on projects that encounter the problem…
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Julien Danjou
On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote:

> I think there are valid reasons why we can consider the MQ approach for communicating
> with VM agents. The first obvious reason is scalability and performance.
> User can ask infrastructure to create 1000 VMs and configure them. With
> HTTP approach it will lead to a corresponding number of connections to a
> REST API service. Taking into account that cloud has multiple clients the
> load on infrastructure will be pretty significant. You can address this
> with introducing Load Balancing for each service, but it will significantly
> increase management overhead and complexity of OpenStack infrastructure.

Uh? I'm having trouble imagining any large OpenStack deployment without
load-balancing services. I don't think we ever designed OpenStack to run
without load-balancers at large scale.

> The second issue is connectivity and security. I think that in typical
> production deployment VMs will not have an access to OpenStack
> infrastructure services.

Why? Should they be different than other VM? Are you running another
OpenStack cloud to run your OpenStack cloud?

> It is fine for core infrastructure services like
> Nova and Cinder as they do not work directly with VM. But it makes a huge
> problem for VM level services like Savanna, Heat, Trove and Murano which
> have to be able to communicate with VMs. The solution here is to put an
> intermediary to create a controllable way of communication. In case of HTTP
> you will need to have a proxy with QoS and Firewalls or policies, to be
> able to restrict an access to some specific URLS or services, to throttle
> the number of connections and bandwidth to protect services from DDoS
> attacks from VM sides.

This really sounds like weak arguments. You probably already do need
firewall, QoS, and throttling for your users if you're deploying a cloud
and want to mitigate any kind of attack.

> In case of MQ usage you can have a separate MQ broker for communication
> between service and VMs. Typical brokers have throttling mechanism, so you
> can protect service from DDoS attacks via MQ.

Yeah and I'm pretty sure a lot of HTTP servers have throttling for
connection rate and/or bandwidth limitation. I'm not really convinced.

> Using different queues and even vhosts you can effectively segregate
> different tenants.

Sounds like you could do the same thing with the HTTP protocol.

> For example we use this approach in Murano service when it is
> installed by Fuel. The default deployment configuration for Murano
> produced by Fuel is to have separate RabbitMQ instance for Murano<->VM
> communications. This configuration will not expose the OpenStack
> internals to VM, so even if someone broke the Murano rabbitmq
> instance, OpenStack itself will be unaffected and only the Murano
> part will be broken.

It really sounds like you already settled on the solution being
RabbitMQ, so I'm not sure what/why you ask in the first place. :)

Is there any problem with starting VMs on a network that is connected to
your internal network? You just have to do that and connect your
application to the/one internal messages bus and that's it.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info




Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-06 Thread Veiga, Anthony

>On Thu, 2014-03-06 at 15:32 +, Jorge Miramontes wrote:
>> I'd like to gauge everyone's interest in a possible mini-summit for
>> Neutron LBaaS. If enough people are interested I'd be happy to try and
>> set something up. The Designate team just had a productive mini-summit
>> in Austin, TX and it was nice to have face-to-face conversations with
>> people in the Openstack community. While most of us will meet in
>> Atlanta in May, I feel that a focused mini-summit will be more
>> productive since we won't have other Openstack distractions around us.
>> Let me know what you all think!
>
>++
>
>++
>
>I think a few weeks after the design summit would be a good time.
>
>-jay
>
>

Throwing my hat into the ring as well. I think this would be quite useful.
-Anthony




Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Georgy Okrokvertskhov
Hi Julien,

I think there are valid reasons why we can consider the MQ approach for communicating
with VM agents. The first obvious reason is scalability and performance.
User can ask infrastructure to create 1000 VMs and configure them. With
HTTP approach it will lead to a corresponding number of connections to a
REST API service. Taking into account that cloud has multiple clients the
load on infrastructure will be pretty significant. You can address this
with introducing Load Balancing for each service, but it will significantly
increase management overhead and complexity of OpenStack infrastructure.

The second issue is connectivity and security. I think that in typical
production deployment VMs will not have an access to OpenStack
infrastructure services. It is fine for core infrastructure services like
Nova and Cinder as they do not work directly with VM. But it makes a huge
problem for VM level services like Savanna, Heat, Trove and Murano which
have to be able to communicate with VMs. The solution here is to put an
intermediary to create a controllable way of communication. In case of HTTP
you will need to have a proxy with QoS and Firewalls or policies, to be
able to restrict an access to some specific URLS or services, to throttle
the number of connections and bandwidth to protect services from DDoS
attacks from VM sides.
In case of MQ usage you can have a separate MQ broker for communication
between service and VMs. Typical brokers have throttling mechanism, so you
can protect service from DDoS attacks via MQ. Using different queues and
even vhosts you can effectively segregate different tenants.
For example we use this approach in Murano service when it is installed by
Fuel. The default deployment configuration for Murano produced by Fuel is
to have separate RabbitMQ instance for Murano<->VM communications. This
configuration will not expose the OpenStack internals to VM, so even if
someone broke the Murano rabbitmq instance, OpenStack itself will be
unaffected and only the Murano part will be broken.
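The per-tenant segregation described above can be taken a step further with
per-tenant credentials and permission patterns. The sketch below shows how a
user restricted to its own queue prefix could be provisioned through
RabbitMQ's management HTTP API; the endpoint, vhost name and tenant ids are
hypothetical, and auth headers and SSL are omitted for brevity:

```python
import json
import urllib.request

MGMT = "http://rabbit.example.com:15672/api"  # hypothetical endpoint

def tenant_grant(tenant_id):
    """Build a RabbitMQ permission document restricting the tenant's user
    to queues whose names start with its own id (e.g. "t1.agent")."""
    pattern = "^%s\\." % tenant_id
    return {"configure": pattern, "write": pattern, "read": pattern}

def provision_tenant(tenant_id, password, vhost="murano"):
    """Create a per-tenant user and attach its restricted permissions.
    Management-API auth is omitted here for brevity."""
    for path, doc in [
        ("/users/%s" % tenant_id, {"password": password, "tags": ""}),
        ("/permissions/%s/%s" % (vhost, tenant_id), tenant_grant(tenant_id)),
    ]:
        req = urllib.request.Request(
            MGMT + path,
            data=json.dumps(doc).encode(),
            headers={"content-type": "application/json"},
            method="PUT")
        urllib.request.urlopen(req)
```

With this shape, a compromised VM only holds credentials scoped to its own
queue prefix on the separate broker, which is the isolation property argued
for above.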

Thanks
Georgy


On Thu, Mar 6, 2014 at 7:46 AM, Julien Danjou  wrote:

> On Thu, Mar 06 2014, Dmitry Mescheryakov wrote:
>
> > So, messaging people, what is your opinion on that idea? I've already
> > raised that question in the list [1], but seems like not everybody who
> > has something to say participated. So I am resending with the
> > different topic. For example, yesterday we started discussing security
> > of the solution in the openstack-oslo channel. Doug Hellmann at the
> > start raised two questions: is it possible to separate different
> > tenants or applications with credentials and ACL so that they use
> > different queues? My opinion that it is possible using RabbitMQ/Qpid
> > management interface: for each application we can automatically create
> > a new user with permission to access only her queues. Another question
> > raised by Doug is how to mitigate a DOS attack coming from one tenant
> > so that it does not affect another tenant. The thing is though
> > different applications will use different queues, they are going to
> > use a single broker.
>
> What about using HTTP and the REST APIs? What's what supposed to be the
> world facing interface of OpenStack. If you want to receive messages,
> it's still possible to use long polling connections.
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; http://julien.danjou.info
>
>
>
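For comparison, the long-polling pattern suggested in the quoted message
would look roughly like this from the agent side. The endpoint, URL scheme
and response shape are invented purely for illustration:

```python
import json
import urllib.error
import urllib.request

# Hypothetical service endpoint for agents to poll.
AGENT_API = "http://service.example.com:8080"

def poll_url(endpoint, wait_seconds=55):
    # The server is assumed to hold the request open for up to wait_seconds
    # and answer as soon as a command is queued for this agent.
    return "%s/v1/commands?wait=%d" % (endpoint, wait_seconds)

def poll_once(endpoint=AGENT_API, wait_seconds=55):
    """Issue one long-poll request; return the decoded command, or None if
    the poll timed out or the network hiccuped (caller loops and retries)."""
    try:
        with urllib.request.urlopen(poll_url(endpoint, wait_seconds),
                                    timeout=wait_seconds + 5) as resp:
            body = resp.read()
            return json.loads(body) if body else None
    except urllib.error.URLError:
        return None
```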


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSoC 2014] Proposal Template

2014-03-06 Thread saikrishna sripada
Hi Masaru,

I tried creating the project template following your suggestions. That's
really helpful. Just one suggestion:

Under the project description, we can give a link to the actual project idea.
The remaining details, like the ones below, can then be removed here, since
they would be redundant:

   - What is the goal?
   - How will you achieve your goal?
   - What would be your milestone?
   - At which time will you complete a sub-task of your project?

We will be filling in these details anyway in the project template link,
which will be just below on the page. Please confirm.

Thanks,
--sai krishna.

Dear mentors and students,

Hi,

After a short talk with dims, I created an application template wiki
page [1]. Obviously, this is not a completed version, and I'd like your
opinions on how to improve it. :)

I have:
1) simply added information such as:

   ・Personal Details (e.g. Name, Email, University and so on)

   ・Project Proposal (e.g. Project, idea, implementation issues, and time
scheduling)

   ・Background (e.g. Open source, academic or intern experience, or
language experience)

2) linked this page on the GSoC 2014 wiki page [2]
3) created an example of my proposal page [3] (not completed yet!)
4) linked the example to an Oslo project page [4]


Thank you,
Masaru

[1] 
https://wiki.openstack.org/wiki/GSoC2014/StudentApplicationTemplate
[2] https://wiki.openstack.org/wiki/GSoC2014#Communication
[3] https://wiki.openstack.org/wiki/GSoC2014/Student/Masaru
[4] https://wiki.openstack.org/wiki/GSoC2014/Incubator/SharedLib#Students.27_proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Trevor McKay
I am having a very similar issue with horizon, just today. I cloned the
repo and started from scratch on master.

tools/install_venv.py tries to install cffi as a dependency, which
ultimately fails with

ImportError: cannot import name Feature

This is Fedora 19.  I know some folks on Fedora 20 who are not having
this issue.  I'm guessing it's a version thing...

Trevor
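
For reference, the failing line in both logs is cffi's setup.py doing
`from setuptools import setup, Feature, Extension`; the import breaks when
the setuptools/distribute shim in the virtualenv does not provide Feature
(the log shows distribute 0.6.24dev-r0). Below is a sketch of the
diagnosis, with the failing import simulated so the example does not
depend on the local setuptools version; the suggested remedy (upgrading
setuptools and recreating the venv) is a common workaround, not a
confirmed fix.

```shell
# check_feature stands in for: python -c "from setuptools import Feature"
# Here it is hard-coded to fail, mimicking the environment in the logs.
check_feature() { return 1; }

if check_feature; then
    msg="setuptools provides Feature; cffi should build"
else
    # typical remedies: upgrade setuptools inside the venv, then
    # recreate it so tox reinstalls everything, e.g.:
    #   .tox/py27/bin/pip install --upgrade setuptools
    #   tox -r -e py27
    msg="setuptools lacks Feature: upgrade it in the venv and rerun tox -r"
fi
echo "$msg"
```

This would also explain why Fedora 20 machines (with a newer setuptools)
do not hit the problem.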

On Thu, 2014-03-06 at 08:14 -0800, Gary Kotton wrote:
> Hi,
> Anyone know how I can solve the error below:
> 
> 
>   Running setup.py install for jsonpatch
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown
> distribution option: 'entry_poimts'
>   warnings.warn(msg)
> changing mode of build/scripts-2.7/jsondiff from 664 to 775
> changing mode of build/scripts-2.7/jsonpatch from 664 to 775
> 
> changing mode of /home/gk-dev/nova/.tox/py27/bin/jsonpatch to 775
> changing mode of /home/gk-dev/nova/.tox/py27/bin/jsondiff to 775
>   Found existing installation: distribute 0.6.24dev-r0
> Not uninstalling distribute at /usr/lib/python2.7/dist-packages,
> outside environment /home/gk-dev/nova/.tox/py27
>   Running setup.py install for setuptools
> 
> Installing easy_install script to /home/gk-dev/nova/.tox/py27/bin
> Installing easy_install-2.7 script
> to /home/gk-dev/nova/.tox/py27/bin
>   Running setup.py install for mccabe
> 
>   Running setup.py install for cffi
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94,
> in 
> from setuptools import setup, Feature, Extension
> ImportError: cannot import name Feature
> Complete output from
> command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c "import
> setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
>  '\n'), __file__, 'exec'))" install --record 
> /tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
> --install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7:
> Traceback (most recent call last):
> 
> 
>   File "", line 1, in 
> 
> 
>   File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in
> 
> 
> 
> from setuptools import setup, Feature, Extension
> 
> 
> ImportError: cannot import name Feature
> 
> 
> 
> Cleaning up...
> Command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c "import
> setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
>  '\n'), __file__, 'exec'))" install --record 
> /tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
> --install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7 failed 
> with error code 1 in /home/gk-dev/nova/.tox/py27/build/cffi
> Traceback (most recent call last):
>   File ".tox/py27/bin/pip", line 9, in 
> load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
>   File
> "/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/__init__.py",
>  line 148, in main
> parser.print_help()
>   File
> "/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/basecommand.py",
>  line 169, in main
> log_file_fp.write(text)
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position
> 72: ordinal not in range(128)
> 
> 
> ERROR: could not install deps [-r/home/gk-dev/nova/requirements.txt,
> -r/home/gk-dev/nova/test-requirements.txt]
> 
> 
> Thanks
> Gary
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] oslo.messaging on VMs

2014-03-06 Thread Doug Hellmann
On Thu, Mar 6, 2014 at 10:25 AM, Dmitry Mescheryakov <
dmescherya...@mirantis.com> wrote:

> Hello folks,
>
> A number of OpenStack and related projects have a need to perform
> operations inside VMs running on OpenStack. A natural solution would
> be an agent running inside the VM and performing tasks.
>
> One of the key questions here is how to communicate with the agent. An
> idea which was discussed some time ago is to use oslo.messaging for
> that. That is an RPC framework - exactly what is needed. You can use
> different transports (RabbitMQ, Qpid, ZeroMQ) depending on your
> preference or the connectivity your OpenStack networking can provide.
> At the same time there are a number of things to consider, like
> networking, security, packaging, etc.
>
> So, messaging people, what is your opinion on that idea? I've already
> raised that question in the list [1], but seems like not everybody who
> has something to say participated. So I am resending with the
> different topic. For example, yesterday we started discussing security
> of the solution in the openstack-oslo channel. Doug Hellmann at the
> start raised two questions: is it possible to separate different
> tenants or applications with credentials and ACL so that they use
> different queues? My opinion is that it is possible using the
> RabbitMQ/Qpid management interfaces: for each application we can
> automatically create a new user with permission to access only her
> queues. Another question raised by Doug is how to mitigate a DOS attack
> coming from one tenant so that it does not affect another tenant. The
> thing is, though different applications will use different queues, they
> are still going to share a single broker.
>
> Do you share Doug's concerns or maybe you have your own?
>

I would also like to understand why you don't consider Marconi the right
solution for this. It is supposed to be a message system that's safe to use
from within tenant images.

Doug
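
The per-application isolation Dmitry describes can be expressed with stock
rabbitmqctl, whose set_permissions subcommand takes configure/write/read
regexes. The sketch below only prints the commands it would run, since no
broker is assumed present; the application name and the queue-prefix
convention are illustrative, not anything oslo.messaging defines.

```shell
APP="tenant42-agent1"      # illustrative per-application identity
PREFIX="^${APP}\\."        # convention: an app may only touch its own queues

# print rather than execute, since no broker is assumed in this sketch
emit() { echo "$@"; }

emit rabbitmqctl add_user "$APP" "<generated-password>"
# configure / write / read permissions, all restricted to the app's prefix
emit rabbitmqctl set_permissions -p / "$APP" \
    "${PREFIX}.*" "${PREFIX}.*" "${PREFIX}.*"
```

This addresses the first of Doug's questions (separation); the second (a
noisy tenant DOSing the shared broker) is not solved by ACLs alone.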



>
> Thanks,
>
> Dmitry
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2013-December/021476.html
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread CARVER, PAUL
James E. Blair [mailto:jebl...@openstack.org] wrote:

>significant amount of time chasing bots.  It's clear that Freenode is
>better able to deal with attacks than OFTC would be.  However, OFTC
>doesn't have to deal with them because they aren't happening; and that's
>worth considering.

Does anyone have any idea who is being targeted by the attacks?
I assume they're hitting Freenode as a whole, but presumably the motivation
is one or more channels as opposed to just not liking Freenode in principle.

Honestly I tried IRC in the mid-nineties and didn't see the point (I spent all
my free time reading Usenet (and even paid for Agent at one point after
switching from nn on SunOS to Free Agent on Windows)) and never found
any reason to go back to IRC until finding out that OpenStack's world
revolves around Freenode. So I was only distantly aware of the battlefield
of DDoSers trying to cause netsplits in order to "get ops" on contentious
channels.

Is there any chance that OpenStack is the target of the DDoSers? Or do
you think there's some other target on Freenode and we're just
collateral damage?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-06 Thread Akihiro Motoki
Hi Kyle,

I am happy to hear OpenDaylight installation and startup are restored
to devstack.
It really helps OpenStack integration with other open source software.

I have a question on file locations for non-OpenStack open source software.
When I refactored the Neutron-related devstack code, we placed files for
such software in the lib/neutron_thirdparty directory.
I would like to know the new policy on file locations for such software.
I understand the question is not limited to Neutron and may come up for
other projects too.

Thanks,
Akihiro


On Thu, Mar 6, 2014 at 11:19 PM, Kyle Mestery  wrote:
> On Tue, Mar 4, 2014 at 7:34 AM, Kyle Mestery 
> wrote:
>>
>> On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague  wrote:
>>>
>>> On 03/03/2014 11:32 PM, Dean Troyer wrote:
>>> > On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery >> > > wrote:
>>> >
>>> > In all cases today with Open Source plugins, Neutron agents have
>>> > run
>>> > on the hosts. For OpenDaylight, this is not the case. OpenDaylight
>>> > integrates with Neutron as a ML2 MechanismDriver. But it has no
>>> > Neutron code on the compute hosts. OpenDaylight itself communicates
>>> > directly to those compute hosts to program Open vSwitch.
>>> >
>>> >
>>> >
>>> > devstack doesn't provide a way for me to express this today. On the
>>> > compute hosts in the above scenario, there is no "q-*" services
>>> > enabled, so the "is_neutron_enabled" function returns 1, meaning no
>>> > neutron.
>>> >
>>> >
>>> > True and working as designed.
>>> >
>>> >
>>> > And then devstack sets Nova up to use nova-networking, which fails.
>>> >
>>> >
>>> > This only happens if you have enabled nova-network.  Since it is on by
>>> > default you must disable it.
>>> >
>>> >
>>> > The patch I have submitted [1] modifies "is_neutron_enabled" to
>>> > check for the meta neutron service being enabled, which will then
>>> > configure nova to use Neutron instead of nova-networking on the
>>> > hosts. If this sounds wonky and incorrect, I'm open to suggestions
>>> > on how to make this happen.
>>> >
>>> >
>>> > From the review:
>>> >
>>> > is_neutron_enabled() is doing exactly what it is expected to do, return
>>> > success if it finds any "q-*" service listed in ENABLED_SERVICES. If no
>>> > neutron services are configured on a compute host, then this must not
>>> > say they are.
>>> >
>>> > Putting 'neutron' in ENABLED_SERVICES does nothing and should do
>>> > nothing.
>>> >
>>> > Since you are not implementing the ODL as a Neutron plugin (as far as
>>> > DevStack is concerned) you should then treat it as a system service and
>>> > configure it that way, adding 'opendaylight' to ENABLED_SERVICES
>>> > whenever you want something to know it is being used.
>>> >
>>> >
>>> >
>>> > Note: I have another patch [2] which enables an OpenDaylight
>>> > service, including configuration of OVS on hosts. But I cannot
>>> > check
>>> > if the "opendaylight" service is enabled, because this will only
>>> > run
>>> > on a single node, and again, not on each compute host.
>>> >
>>> >
>>> > I don't understand this conclusion. in multi-node each node gets its
>>> > own
>>> > specific ENABLED_SERVICES list, you can check that on each node to
>>> > determine how to configure that node.  That is what I'm trying to
>>> > explain in that last paragraph above, maybe not too clearly.
>>>
>>> So in an Open Daylight environment... what's running on the compute host
>>> to coordinate host level networking?
>>>
>> Nothing. OpenDaylight communicates to each host using OpenFlow and OVSDB
>> to manage networking on the host. In fact, this is one huge advantage for
>> the
>> ODL MechanismDriver in Neutron, because it's one less agent running on the
>> host.
>>
>> Thanks,
>> Kyle
>>
> As an update here, I've reworked my devstack patch [1] for adding
> OpenDaylight support to make OpenDaylight a top-level service, per
> Dean's suggestion. You can now enable both "odl-server" and
> "odl-compute" in your local.conf with my patch. Enabling "odl-server"
> will run OpenDaylight under devstack. Enabling "odl-compute" will
> configure the host's OVS to work with OpenDaylight.
>
> Per discussion with Sean, I'd like to look at refactoring some other bits of
> the Neutron
> devstack code in the coming weeks as well.
>
> Thanks!
> Kyle
>
> [1] https://review.openstack.org/#/c/69774/
>
>>>
>>> -Sean
>>>
>>> --
>>> Sean Dague
>>> Samsung Research America
>>> s...@dague.net / sean.da...@samsung.com
>>> http://dague.net
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
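
The per-node pattern Dean describes - each node consults its own
ENABLED_SERVICES list to decide how to configure itself - can be sketched
roughly as follows. This is a simplified stand-in for devstack's real
is_service_enabled helper (which also handles aliases and wildcards); the
"odl-compute" service name is taken from Kyle's patch, the rest is
illustrative.

```shell
# What a compute node's local.conf might yield: ODL compute setup plus
# nova-compute, and no q-* Neutron agents at all.
ENABLED_SERVICES="odl-compute,n-cpu"

# Simplified version of devstack's helper: is the named service in the
# comma-separated ENABLED_SERVICES list?
is_service_enabled() {
    case ",${ENABLED_SERVICES}," in
        *,"$1",*) return 0 ;;
        *)        return 1 ;;
    esac
}

if is_service_enabled odl-compute; then
    action="configure OVS to point at the OpenDaylight controller"
else
    action="skip OpenDaylight compute setup"
fi
echo "$action"
```

Because each node evaluates only its own list, the controller can enable
"odl-server" while compute hosts enable "odl-compute", with no q-* agent
services needed anywhere.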


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-06 Thread Daniel P. Berrange
On Wed, Mar 05, 2014 at 07:37:39AM -0800, Tracy Jones wrote:
> Hi - Please consider the image cache aging BP for FFE 
> (https://review.openstack.org/#/c/56416/)
> 
> This is the last of several patches (already merged) that implement image 
> cache cleanup for the vmware driver.  This patch solves a significant 
> customer pain point as it removes unused images from their datastore.  
> Without this patch their datastore can become unnecessarily full.  In 
> addition to the customer benefit from this patch it
> 
> 1.  has a turn-off switch
> 2.  is fully contained within the VMware driver
> 3.  has gone through functional testing with our internal QA team
> 
> ndipanov has been good enough to say he will review the patch, so we would 
> ask for one additional core sponsor for this FFE.

Consider me signed up


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Tox issues on a clean environment

2014-03-06 Thread Gary Kotton
Hi,
Anyone know how I can solve the error below:

  Running setup.py install for jsonpatch
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'entry_poimts'
  warnings.warn(msg)
changing mode of build/scripts-2.7/jsondiff from 664 to 775
changing mode of build/scripts-2.7/jsonpatch from 664 to 775

changing mode of /home/gk-dev/nova/.tox/py27/bin/jsonpatch to 775
changing mode of /home/gk-dev/nova/.tox/py27/bin/jsondiff to 775
  Found existing installation: distribute 0.6.24dev-r0
Not uninstalling distribute at /usr/lib/python2.7/dist-packages, outside 
environment /home/gk-dev/nova/.tox/py27
  Running setup.py install for setuptools

Installing easy_install script to /home/gk-dev/nova/.tox/py27/bin
Installing easy_install-2.7 script to /home/gk-dev/nova/.tox/py27/bin
  Running setup.py install for mccabe

  Running setup.py install for cffi
Traceback (most recent call last):
  File "", line 1, in 
  File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in 

from setuptools import setup, Feature, Extension
ImportError: cannot import name Feature
Complete output from command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c 
"import 
setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec'))" install --record 
/tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
--install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7:
Traceback (most recent call last):

  File "", line 1, in 

  File "/home/gk-dev/nova/.tox/py27/build/cffi/setup.py", line 94, in 

from setuptools import setup, Feature, Extension

ImportError: cannot import name Feature


Cleaning up...
Command /home/gk-dev/nova/.tox/py27/bin/python2.7 -c "import 
setuptools;__file__='/home/gk-dev/nova/.tox/py27/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec'))" install --record 
/tmp/pip-2sWKRK-record/install-record.txt --single-version-externally-managed 
--install-headers /home/gk-dev/nova/.tox/py27/include/site/python2.7 failed 
with error code 1 in /home/gk-dev/nova/.tox/py27/build/cffi
Traceback (most recent call last):
  File ".tox/py27/bin/pip", line 9, in 
load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
  File 
"/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/__init__.py",
 line 148, in main
parser.print_help()
  File 
"/home/gk-dev/nova/.tox/py27/local/lib/python2.7/site-packages/pip/basecommand.py",
 line 169, in main
log_file_fp.write(text)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: 
ordinal not in range(128)

ERROR: could not install deps [-r/home/gk-dev/nova/requirements.txt, 
-r/home/gk-dev/nova/test-requirements.txt]

Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

