Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-26 Thread Renat Akhmerov
Winson, nice job!

Now it totally makes sense to me. You're good to go with this unless others
have objections.

Just one dummy technical question (sorry, I'm not yet familiar with
oslo.messaging): in your picture you have "Transport"; what specifically can
it be besides RabbitMQ?

Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 14:30, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Looks good. Thanks, Winson! 
 
 Renat, What do you think?
 
 
 On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:
 The following link is the google doc of the proposed engine/executor message 
 flow architecture.  
 https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing 
  
 
 The diagram on the right is the scalable engine, where one or more engines 
 send requests over a transport to one or more executors.  The executor 
 client, transport, and executor server follow the RPC client/server design 
 pattern in oslo.messaging.
 
 The other diagram represents the local engine.  In reality, it follows the same 
 RPC client/server design pattern.  The only difference is that it will be 
 configured to use a fake RPC backend driver.  The fake driver uses in-process 
 queues shared between a pair of engine and executor.
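The fake-driver idea can be sketched roughly as follows. This is a toy illustration of the client/server-over-a-shared-in-process-queue pattern in plain Python, not the actual oslo.messaging fake driver; all class and method names here are made up:

```python
import queue
import threading

# Toy sketch of the fake-driver idea: the usual RPC client/server pair,
# but the "transport" is just an in-process queue shared between one
# engine (client) and one executor (server). Illustrative names only.

class InProcessTransport:
    """The queue shared by an engine/executor pair."""
    def __init__(self):
        self.requests = queue.Queue()

class ExecutorServer:
    """Consumes requests from the shared queue and runs task operations."""
    def __init__(self, transport):
        self._transport = transport
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            method, kwargs, reply_q = self._transport.requests.get()
            result = getattr(self, method)(**kwargs)
            if reply_q is not None:   # only a synchronous 'call' expects a reply
                reply_q.put(result)

    def handle_task(self, task_id):
        return "done:%s" % task_id

class ExecutorClient:
    """What the engine would hold instead of talking to rabbit via pika."""
    def __init__(self, transport):
        self._transport = transport

    def cast(self, method, **kwargs):
        # async: fire and forget
        self._transport.requests.put((method, kwargs, None))

    def call(self, method, **kwargs):
        # sync: wait for the reply on a private queue
        reply_q = queue.Queue()
        self._transport.requests.put((method, kwargs, reply_q))
        return reply_q.get(timeout=5)

transport = InProcessTransport()
ExecutorServer(transport)
client = ExecutorClient(transport)
assert client.call("handle_task", task_id="42") == "done:42"
```

The point of the pattern is that the engine code is identical in both deployments; only the transport behind the client changes.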
 
 The following are the stepwise changes I will make.
 1) Keep the local and scalable engine structure intact.  Create the Executor 
 Client at ./mistral/engine/scalable/executor/client.py.  Create the Executor 
 Server at ./mistral/engine/scalable/executor/service.py, implementing the 
 task operations that currently live in ./mistral/engine/scalable/executor/executor.py.  
 Delete ./mistral/engine/scalable/executor/executor.py.  Modify the launcher 
 ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py 
 to use the Executor Client instead of sending the message directly to rabbit 
 via pika.  Together these form an atomic change that keeps the existing 
 structure without breaking the code.
 2) Remove the local engine. 
 https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
 3) Implement versioning for the engine.  
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
 4) Port abstract engine to use oslo.messaging and implement the engine 
 client, engine server, and modify the API layer to consume the engine client. 
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process.
 
 Winson
 
 
 On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:
 
 Renat,
 
 Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
 don't think the port to oslo.messaging is just a swap from pika to 
 oslo.messaging.  OpenStack services, as I understand it, are usually implemented 
 as an RPC client/server over a messaging transport.  Sync vs. async calls are 
 done via the RPC client's call and cast respectively.  The messaging transport 
 is abstracted, and concrete implementations are provided via drivers/plugins.  So 
 the architecture of the executor, if ported to oslo.messaging, needs to 
 include a client, a server, and a transport.  The consumer (in this case the 
 mistral engine) instantiates an instance of the client for the executor and 
 makes the method call to handle a task; the client then sends the request over 
 the transport to the server.  The server picks up the request from the 
 exchange and processes it.  If cast (async), the client side 
 returns immediately.  If call (sync), the client side waits for a response 
 from the server over a reply_q (a unique queue for the session in the 
 transport).  Also, oslo.messaging allows versioning in the message: a major 
 version change indicates API contract changes, while a minor version change 
 indicates backend changes that preserve API compatibility.
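The major/minor rule in that last sentence amounts to a simple compatibility check. Here is a hypothetical sketch of the rule (illustrative Python only, not oslo.messaging's actual implementation):

```python
# Hypothetical sketch of the major/minor RPC versioning rule described
# above (not oslo.messaging's code): a client's requested version is
# compatible with a server when the major versions match and the
# requested minor version is not newer than the one the server supports.

def is_compatible(requested, supported):
    req_major, req_minor = (int(p) for p in requested.split("."))
    sup_major, sup_minor = (int(p) for p in supported.split("."))
    return req_major == sup_major and req_minor <= sup_minor

assert is_compatible("1.0", "1.3")        # minor bump keeps API compatibility
assert not is_compatible("1.4", "1.3")    # server too old for this minor
assert not is_compatible("2.0", "1.3")    # major bump = API contract change
```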
 
 My main concern about this patch is not related with messaging 
 infrastructure. I believe you know better than me how it should look like. 
 I’m mostly concerned with the way of making changes you chose. From my 
 perspective, it’s much better to make atomic changes where every changes 
 doesn’t affect too much in existing architecture. So the first step could be 
 to change pika to oslo.messaging with minimal structural changes without 
 introducing versioning (could be just TODO comment saying that the framework 
 allows it and we may want to use it in the future, to be decide), without 
 getting rid of the current engine structure (local, scalable). Some of the 
 things in the file structure and architecture came from the decisions made by 
 many people and we need to be careful about changing them.
 
 
 So, where I'm headed with this change...  I'm implementing the basic 
 structure/scaffolding for the new executor service using oslo.messaging 
 (default transport with rabbit).  Since the whole change will take a few 
 rounds, I don't want to disrupt any changes that the team is making at the 
 moment and so I'm building 

Re: [openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question

2014-02-26 Thread James Polley
I'm not sure how well it would work here, but I've used Pagerduty.com for 
something similar before.

The big up side of pagerduty is that it is pretty good at contacting people who 
aren't at their computers. 

It supports email notifications and webhooks for people who want lots of 
control over what to do with alerts; push notifications to iOS or Android; and 
SMS or phone calls as a last resort. Each person can configure their own alerts 
to suit them.

It handles escalating unhandled alerts, including looping back to the start if 
it can't reach anyone.

It allows incidents to be handed over to arbitrary people regardless of who the 
roster says is on call, and for schedule overrides when a shift has to be 
reassigned.

Incidents can be created via REST, email, or (I think) webhooks, so it's easy 
for users or for automated systems to raise an alarm.

It has some drawbacks: it would force us to define a rotation (or several 
rotations, one for each region, if we want to follow the sun), and someone 
needs to pay for it.

I think it handles most of what we want, though. It gives infra admins a 
bat-signal to request urgent help, and it gives us a way to ping other team 
members when we need to hand over. It isn't very good for $randoms raising 
low-priority issues, though: it treats every incident as equally urgent.

I haven't used it, but opsgenie seems to have a similar set of features (more, 
if https://www.opsgenie.com/pagerduty is to be believed) 


 On 26 Feb 2014, at 9:30 am, Robert Collins robe...@robertcollins.net wrote:
 
 In the tripleo meeting today we re-affirmed that the tripleo-cd-admins
 team is aimed at delivering production-availability clouds - that's how
 we know the tripleo program is succeeding (or not!).
 
 So if you're a member of that team, you're on the hook - effectively
 on call, where production issues will take precedence over development
 / bug fixing etc.
 
 We have the following clouds today:
 cd-undercloud (baremetal, one per region)
 cd-overcloud (KVM in the HP region, not sure yet for the RH region) -
 multi region.
 ci-overcloud (same as cd-overcloud, and will go away when cd-overcloud
 is robust enough).
 
 And we have two users:
 - TripleO ATCs, all of whom are eligible for accounts on *-overcloud
 - TripleO reviewers, indirectly via openstack-infra who provide 99%
 of the load on the clouds
 
 Right now when there is a problem, there's no clearly defined 'get
 hold of someone' mechanism other than IRC in #tripleo.
 
 And that's pretty good, since most of the admins are on IRC most of the time.
 
 But.
 
 There are two holes: a) what if it's Sunday evening :) and b) what if
 someone (for instance Derek) has been troubleshooting a problem but
 needs to go do personal stuff, or, you know, sleep. There's no reliable,
 defined handoff mechanism.
 
 So - I think we need to define two things:
  - a stock way for $randoms to ask for support w/ these clouds that
 will be fairly low latency and reliable.
  - a way for us to escalate to each other *even if folk happen to be
 away from the keyboard at the time*.
 And possibly a third:
  - a way for openstack-infra admins to escalate to us in the event of
 OMG things happening. Like, we send 1000 VMs all at once at their git
 mirrors or something.
 
 And with that lets open the door for ideas!
 
 -Rob
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-26 Thread Timur Nurlygayanov
Hi Renat,

For me, the following syntax is just unclear:
$.image_id
What is '$' in this case? It would be clearer if we could replace '$' with
something readable, like global.image_id or context.image_id.
It looks like '$' can mean different things in different namespaces.

It will be unclear for new users; of course it becomes easy once you have
experience with the DSL.





On Wed, Feb 26, 2014 at 11:34 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Hi team,

 I'm currently working on the first version of Data Flow and I would like
 to make sure we all clearly understand how to interpret parameters for
 tasks and actions when we declare them in Mistral DSL. I feel like I'm
 getting lost here a little bit. The problem is that we still don't have a
 solid DSL spec since we keep changing our vision (especially after new
 members joined the team). But that may be fine, it's life.

 I also have a couple of suggestions that I'd like to discuss with you.
 Sorry if that seems verbose, I'll try to be as concise as possible.

 I took a couple of snippets from [1] and put them in here.

 # Snippet 1.
 Services:
   Nova:
     type: REST_API
     parameters:
       baseUrl: $.novaURL
     actions:
       createVM:
         parameters:
           url: /servers/{$.vm_id}
           method: POST
         output:
           select: $.server.id
           store-as: vm_id

 # Snippet 2.
 Workflow:
   tasks:
     createVM:
       action: Nova:createVM
       on-success: waitForIP
       on-error: sendCreateVMError


 $. - handle to workflow execution storage (what we call 'context' now)
 where we keep workflow variables.

 Let's say our workflow input is JSON like this:
 {
   "novaURL": "http://localhost:123",
   "image_id": "123"
 }

 *Questions*

 So the things that I don't like or am not sure about:

 *1*. Task createVM needs to use image_id but it doesn't have any
 information about it in its declaration.
 According to the current vision it should be like

 createVM:
   action: Nova:createVM
   parameters:
     image_id: $.image_id

 And at runtime image_id should be resolved to 123, passed to the action, 
 and, in fact, become kind of a third parameter along with url and method. 
 This is specifically interesting because we have different types of 
 parameters: on one hand, url and method for a REST_API action define the 
 nature of the action itself; on the other hand, image_id is dynamic data 
 coming eventually from user input.
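To make that runtime behavior concrete, here is a hypothetical sketch of how a "$.xxx" expression could be resolved against the execution context, and how "store-as" could write an action's output back into it. This is illustrative Python only, not Mistral's actual implementation:

```python
# A minimal sketch (not Mistral code) of resolving "$.a.b" expressions
# against the workflow execution context, and of "store-as" writing an
# action's output back into that context.

def resolve(expr, context):
    """Resolve '$.a.b' to context['a']['b']; pass non-expressions through."""
    if not isinstance(expr, str) or not expr.startswith("$."):
        return expr
    value = context
    for key in expr[2:].split("."):
        value = value[key]
    return value

context = {"novaURL": "http://localhost:123", "image_id": "123"}

# Task parameters as declared in the DSL, resolved at runtime:
parameters = {"image_id": "$.image_id", "method": "POST"}
resolved = {k: resolve(v, context) for k, v in parameters.items()}
assert resolved == {"image_id": "123", "method": "POST"}

# 'select: $.server.id' plus 'store-as: vm_id' applied to an action output:
action_output = {"server": {"id": "vm-1"}}
context["vm_id"] = resolve("$.server.id", action_output)
assert context["vm_id"] == "vm-1"
```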

 *So the question is: do we need to differentiate between these types of 
 parameters explicitly and make them a part of the specification?*

 We also had a notion of task-parameters for action declarations which is 
 supposed to be used to declare this second type of parameters (dynamic) but 
 do we really need it? I guess if we clearly declare input and output at task 
 level then actions should be able to use them according to their nature.

 *2*. Action declaration createVM has a 'response' section, which may not be OK 
 in terms of its level of abstraction.

 My current vision is that actions should not declare how we store the
 result (output) in the execution. Ideally, looking at the tasks alone should
 give us a comprehensive understanding of how the workflow data flows. So I
 would move store-as to the task level.

 *Suggestions*

 *1*. Define input and output at task level like this:

  createVM:
    input:
      image_id: $.image_id
    output: vm_id

 Where output: vm_id is basically a replacement for store-as: vm_id at
 action level, i.e. it's a hint to Mistral to store the output of this task
 under vm_id key in execution context. Again, the idea is to define task
 and action responsibilities more strictly:

- *Task is a high-level workflow building block which defines workflow
logical step and how it modifies workflow data. Task doesn't contain
technical details on how it's implemented.*
- *Action is an implementor of the workflow logical step defined by a
task. Action defines specific algorithm of how task is implemented.*


 *2*. Use parameters only for actions, to specify their additional
 properties influencing their nature (like method for HTTP actions).


 Please let me know your thoughts. We can make required adjustments right
 now.


 [1] https://etherpad.openstack.org/p/mistral-poc

 Renat Akhmerov
 @ Mirantis Inc.








-- 

Timur,
QA Lead
OpenStack Murano Project
Mirantis Inc


Re: [openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-26 Thread Nikolay Makhotkin
Hi, Renat!



*Suggestions*
 *1*. Define input and output at task level like this:
  createVM:
    input:
      image_id: $.image_id
    output: vm_id
 Where output: vm_id is basically a replacement for store-as: vm_id at
 action level, i.e. it's a hint to Mistral to store the output of this task
 under vm_id key in execution context. Again, the idea is to define task
 and action responsibilities more strictly:

- *Task is a high-level workflow building block which defines workflow
logical step and how it modifies workflow data. Task doesn't contain
technical details on how it's implemented.*


- *Action is an implementor of the workflow logical step defined by a
task. Action defines specific algorithm of how task is implemented.*


 *2*. User parameters only for actions to specify their additional
 properties influencing their nature (like method for HTTP actions).


Just to clarify your thoughts:
 - All static keys and parameters should be in actions
 - All dynamic keys and parameters should be in tasks block
 - We may use our context to define some parameters in action or task

And, yes, I think it is a good idea to differentiate these; it becomes
easier.


Re: [openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-26 Thread Nikolay Makhotkin
Timur, '$' here means a reference to the context at the stage at which it is
evaluated.

'$.image_id' takes 'image_id' from the current workflow execution context.


On Wed, Feb 26, 2014 at 12:30 PM, Nikolay Makhotkin nmakhot...@mirantis.com
 wrote:

 Hi, Renat!



 *Suggestions*
 *1*. Define input and output at task level like this:
   createVM:
     input:
       image_id: $.image_id
     output: vm_id
 Where output: vm_id is basically a replacement for store-as: vm_id at
 action level, i.e. it's a hint to Mistral to store the output of this task
 under vm_id key in execution context. Again, the idea is to define task
 and action responsibilities more strictly:

- *Task is a high-level workflow building block which defines
workflow logical step and how it modifies workflow data. Task doesn't
contain technical details on how it's implemented.*


- *Action is an implementor of the workflow logical step defined by a
task. Action defines specific algorithm of how task is implemented.*


 *2*. User parameters only for actions to specify their additional
 properties influencing their nature (like method for HTTP actions).


 Just clarify your thoughts:
  - All static keys and parameters should be in actions
  - All dynamic keys and parameters should be in tasks block
  - We may use our context to define some parameters in action or task

 And, yes, I think it is a good idea to differentiate this. It is become
 easier




-- 
Best Regards,
Nikolay


Re: [openstack-dev] Devstack Error

2014-02-26 Thread trinath.soman...@freescale.com
Hi Ben-

Can you take a guess at the problem?

I feel there might be some error with devstack-gate, which brings the things 
down from git and configures them on the system.

The first error started with "openstack role add: error: argument --user: expected 
one argument".

Do I need to check or verify any other configuration for this issue?

Kindly help me resolve the same.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Ben Nemec [mailto:openst...@nemebean.com]
Sent: Tuesday, February 25, 2014 9:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Devstack Error


On 2014-02-25 08:19, trinath.soman...@freescale.com wrote:
Hi Stackers-
When I configured Jenkins to run the Sandbox tempest testing, while devstack was 
running, I saw the error
"ERROR: Invalid Openstack Nova credentials"
and another error
"ERROR: HTTPConnectionPool(host='127.0.0.1', port=8774): Max retries exceeded 
with url: /v2/91dd... (caused by class 'socket.error': [Errno 111] Connection 
refused)"
I understand that devstack automates the OpenStack environment.
Kindly guide me in resolving the issue.
Thanks in advance.
--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048
Those are both symptoms of an underlying problem.  It sounds like a service 
didn't start or wasn't configured correctly, but it's impossible to say for 
sure what went wrong based on this information.
-Ben



Re: [openstack-dev] [Mistral] Renaming action types

2014-02-26 Thread Nikolay Makhotkin
Agreed. I don't see anything that makes sense about the words 'REST' and 'API'
here either.


On Wed, Feb 26, 2014 at 11:38 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Folks,

 I'm proposing to rename these two action types REST_API and
 MISTRAL_REST_API to HTTP and MISTRAL_HTTP. Words REST and API don't
 look correct to me, if you look at

 Services:
   Nova:
     type: REST_API
     parameters:
       baseUrl: {$.novaURL}
     actions:
       createVM:
         parameters:
           url: /servers/{$.vm_id}
           method: POST

 There's no information about REST or API here. It's just a spec for how to
 form an HTTP request.

 Thoughts?

 Renat Akhmerov
 @ Mirantis Inc.








-- 
Best Regards,
Nikolay


[openstack-dev] set QOS dynamically

2014-02-26 Thread Chenliang (L)
Hi stackers,

I am very interested in setting QoS dynamically (such as CPU/memory), which was 
noted previously in [1].

In some cases we need to set the QoS of a running instance, to avoid interrupting 
the instance, instead of using flavor extra specs.

I found it is under discussion. I just want to know what everyone thinks about 
this feature.



Regards!

Liang Chen


[1]
https://blueprints.launchpad.net/nova/+spec/admin-set-resource-quota-dynamically



Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-26 Thread Mathieu Rohon
Hi,

you can get inspired by the L2-population MD, which call new functions
in the agents (like add_fdb_entries) through AMQP.
Does your work relate to an existing blueprint?



On Tue, Feb 25, 2014 at 9:23 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello,

 Trinath, I had seen this presentation before you sent it to me. There is a nice
 explanation of what methods are (and should be) in a type driver and a mech driver,
 but I needed exactly the information that Assaf sent me. Thanks to both of you for
 your help :)

 --
 Best regards
 Sławek Kapłoński
 Dnia wtorek, 25 lutego 2014 12:18:50 Assaf Muller pisze:

 - Original Message -

  Hi
 
  Hope this helps
 
  http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
 
  ___
 
  Trinath Somanchi
 
  _
  From: Sławek Kapłoński [sla...@kaplonski.pl]
  Sent: Tuesday, February 25, 2014 9:24 PM
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] Neutron ML2 and openvswitch agent
 
  Hello,
 
  I have question to You guys. Can someone explain me (or send to link
  with such explanation) how exactly ML2 plugin which is working on
  neutron server is communicating with compute hosts with openvswitch
  agents?

 Maybe this will set you on your way:
 ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
 uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
 the topic stated in that file.

 When the message is received by the OVS agent, it calls:
 neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
 update.
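That notify path can be caricatured as topic-based dispatch. The sketch below uses plain Python in place of the AMQP machinery; the class names and the topic string are illustrative, not Neutron's real ones:

```python
# Hypothetical sketch of the topic-based notification pattern described
# above (real Neutron goes through AMQP, not this code): the plugin casts
# a "port_update" message on a topic, and every agent subscribed to that
# topic dispatches it to a method of the same name.

class FakeBus:
    def __init__(self):
        self._subscribers = {}   # topic -> list of endpoint objects

    def subscribe(self, topic, endpoint):
        self._subscribers.setdefault(topic, []).append(endpoint)

    def cast(self, topic, method, **kwargs):
        # Deliver to every subscriber on the topic, fire-and-forget.
        for endpoint in self._subscribers.get(topic, []):
            getattr(endpoint, method)(**kwargs)

class FakeOVSAgent:
    """Stands in for OVSNeutronAgent; records which ports were updated."""
    def __init__(self):
        self.updated_ports = []

    def port_update(self, port):
        self.updated_ports.append(port["id"])

bus = FakeBus()
agent = FakeOVSAgent()
bus.subscribe("port-update-topic", agent)

# Roughly what AgentNotifierApi.port_update does on the plugin side:
bus.cast("port-update-topic", "port_update", port={"id": "p1"})
assert agent.updated_ports == ["p1"]
```

A new mechanism driver that needs its own agent-side function would, by this pattern, add a new method on the agent endpoint and cast to it on an agreed topic.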
  I suppose that this is working with rabbitmq queues but I need
  to add own function which will be called in this agent and I don't know
  how to do that. It would be perfect if such think will be possible with
  writing for example new mechanical driver in ML2 plugin (but how?).
  Thanks in advance for any help from You :)
 
  --
  Best regards
  Slawek Kaplonski
  sla...@kaplonski.pl
 


Re: [openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-26 Thread Dmitri Zimine
My understanding, correct me if I'm missing the intention: 

Actions are defined as code, e.g. REST_API or SEND_EMAIL. 
These are base actions; they are analogous to function definitions. 

A task is a set of parameters for an action.

Actions can also be defined declaratively, under Services, based on base 
actions. 
These are service, or declared, actions; they are analogous to partials. 
In the simplest form, they 1) set some of the parameters of the base functions 
and 2) define outputs.

But I guess what Renat is trying to capture here is to also define new input 
parameters (analogous to changing a function signature). If so, they become 
adapters, and they need to a) define the new parameters and b) transform the 
new parameters (input) into the original parameters of the base action. 
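In plain Python, the partial/adapter distinction reads like this. This is an illustrative analogy only; none of these names are Mistral code, and the base action is a stub:

```python
import functools

# Illustrative analogy for the distinction above (not Mistral code).
# A "base action" is a function; a declared action that only pins some
# parameters is a partial; one that introduces new parameters and maps
# them onto the original signature is an adapter.

def rest_api(base_url, url, method):
    """Stub base action: analogous to the REST_API action type."""
    return "%s %s%s" % (method, base_url, url)

# Declared action as a *partial*: pins baseUrl and method.
nova_post = functools.partial(rest_api, "http://nova:8774", method="POST")

# Declared action as an *adapter*: defines a new input (service_id) and
# transforms it into the base action's original 'url' parameter.
def create_vm(service_id):
    return nova_post(url="/servers/%s" % service_id)

assert create_vm("vm-1") == "POST http://nova:8774/servers/vm-1"
```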

Is this how we think about it? If yes, the syntax to express it might be 
something like this

# Snippet 1.
Services:
  Nova:
    type: REST_API
    parameters:
      baseUrl: $.novaURL
    actions:
      createVM:
        parameters: # partial set of the parameters of the base action
          method: POST
          url: /servers/{{service-id}} # sets up the initial parameter, using the new input
        input:
          service-id: # defines a new input
        output:
          select: $.server.id
          store-as: vm_id

# Snippet 2.
Workflow:
  tasks:
    createVM:
      action: Nova:createVM
      parameters:
        service-id: {{$.vm_id}}
      on-success: waitForIP
      on-error: sendCreateVMError

Better ideas for referencing input than {{service-id}} are welcome. 

The point here is that we don't refer to context variables in Service 
definitions; there is only transformation of input, with some static text. The 
service actions now look symmetric on input/output. 

As for output: I would keep it as is. It defines the transformation of the 
results of the base action: a) defining new output variables and b) providing 
their values by transforming the output of the base task. Very symmetric to 
'input'. Currently we store into the global context, but it doesn't have to be 
so; I think we eventually (soon) need to have a task-specific section in the 
context (more on this later).

Thoughts? 


DZ.

PS. For the record, I am split on declarative action definitions. I like them, 
but only as long as they stay simple. Very soon it becomes easier to express it 
in code than in YAML. 


On Feb 25, 2014, at 11:34 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Hi team,
 
 I’m currently working on the first version of Data Flow and I would like to 
 make sure we all clearly understand how to interpret “parameters for tasks 
 and actions when we declare them in Mistral DSL. I feel like I’m getting lost 
 here a little bit. The problem is that we still don’t have a solid DSL spec 
 since we keep changing our vision (especially after new members joined the 
 team). But that may be fine, it’s life.
 
 I also have a couple of suggestions that I’d like to discuss with you. Sorry 
 if that seems verbose, I’ll try to be as concise as possible.
 
 I took a couple of snippets from [1] and put them in here.
 
  # Snippet 1.
  Services:
    Nova:
      type: REST_API
      parameters:
        baseUrl: $.novaURL
      actions:
        createVM:
          parameters:
            url: /servers/{$.vm_id}
            method: POST
          output:
            select: $.server.id
            store-as: vm_id
  
  # Snippet 2.
  Workflow:
    tasks:
      createVM:
        action: Nova:createVM
        on-success: waitForIP
        on-error: sendCreateVMError
 
 “$.” - handle to workflow execution storage (what we call ‘context’ now) 
 where we keep workflow variables.
 
 Let’s say our workflow input is JSON like this:
 {
   “novaURL”: “http://localhost:123”,
   “image_id”: “123
 }
 
 Questions
 
 So the things that I don’t like or am not sure about:
 
 1. Task “createVM” needs to use “image_id” but it doesn’t have any 
 information about it its declaration.
 According to the current vision it should be like 
 createVM:
   action: Nova:createVM
   parameters:
 image_id: $.image_id
 And at runtime “image_id should be resolved to “123” get passed to action 
 and, in fact, be kind of the third parameter along with “url” and “method”. 
 This is specifically interesting because on one hand we have different types 
 of parameters: “url” and “method” for REST_API action define the nature of 
 the action itself. But “image_id” on the other hand is a dynamic data coming 
 eventually from user input.
 So the question is: do we need to differentiate between these types of 
 parameters explicitly and make a part of the specification?
 We also had a notion of “task-parameters” for action declarations which is 
 supposed to be used to declare this second type of parameters (dynamic) but 
 do we really need it? I guess if we clearly declare input and output at task 
 level then actions should be able to use 

Re: [openstack-dev] [Mistral] Renaming action types

2014-02-26 Thread Dmitri Zimine
+1

Should we use a VERB_NOUN pattern? Or relax it for some obvious cases? I can't 
figure out a good VERB_NOUN for HTTP; REQUEST_HTTP is dull. 

BTW, it's interesting that we assume that a service can contain only actions of 
the same type. 

DZ 

On Feb 26, 2014, at 1:10 AM, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Agree, I don't see any thing that makes sense with words 'REST' and 'API' too.
 
 
 On Wed, Feb 26, 2014 at 11:38 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 Folks,
 
 I’m proposing to rename these two action types REST_API and MISTRAL_REST_API 
 to HTTP and MISTRAL_HTTP. Words “REST” and “API” don’t look correct to me, if 
 you look at
 
  Services:
    Nova:
      type: REST_API
      parameters:
        baseUrl: {$.novaURL}
      actions:
        createVM:
          parameters:
            url: /servers/{$.vm_id}
            method: POST

 There’s no information about “REST” or “API” here. It’s just a spec how to 
 form an HTTP request.
 
 Thoughts?
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 
 
 
 
 
 -- 
 Best Regards,
 Nikolay


[openstack-dev] [Neutron] DOWN and INACTIVE status in FWaaS and LBaaS

2014-02-26 Thread Xuhan Peng
Hello,

This email is triggered by the comments I received on my patch [1] while
trying to fix bug [2].

The problem I was trying to fix is that currently a firewall remains in status
ACTIVE after its admin state is changed to DOWN. My plan is to change the
status of the firewall from ACTIVE to DOWN when the admin state is down, as
other network resources do currently.

But I noticed that besides the DOWN state, an INACTIVE state is also used in
FWaaS and LBaaS. So I hope someone can help me understand the background of this.
If this is not particularly by design and is inconsistent with other network
resources, I can open a bug to fix this in FWaaS and LBaaS.

Thanks,
Xu Han

[1]: https://review.openstack.org/#/c/73944/
[2]: https://launchpad.net/bugs/1279213


Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-26 Thread Roman Podoliaka
Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
Works for me.

Thanks,
Roman

On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for parallel
 running of tests on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, openstack_citest user can only create tables in the
 created databases, but not to perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database, becomes the owner of
 the database (and can do everything within this database), and in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think, we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single use slaves for all unittesting so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark



Re: [openstack-dev] [Mistral] Renaming action types

2014-02-26 Thread Renat Akhmerov
Hm.. I see your point. Generally, I like short names that are expressive enough 
to understand what they mean. VERB_NOUN would be good, but nothing decent comes 
to my mind regarding HTTP :)

If you guys have any suggestions you’re welcome.

Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 16:33, Dmitri Zimine d...@stackstorm.com wrote:

 +1
 
 Should we use a VERB_NOUN pattern? Or relax it for some obvious cases? I can't 
 figure out a good VERB_NOUN for HTTP. REQUEST_HTTP is dull. 
 
 BTW it's interesting that we assume that a service can contain only actions 
 of the same type. 
 
 DZ 
 
 On Feb 26, 2014, at 1:10 AM, Nikolay Makhotkin nmakhot...@mirantis.com 
 wrote:
 
 Agree, I don't see anything that makes sense with the words 'REST' and 'API' 
 either.
 
 
 On Wed, Feb 26, 2014 at 11:38 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 Folks,
 
 I’m proposing to rename these two action types REST_API and MISTRAL_REST_API 
to HTTP and MISTRAL_HTTP. The words “REST” and “API” don’t look correct to me 
if you look at:
 
  Services:
    Nova:
      type: REST_API
      parameters:
        baseUrl: {$.novaURL}
      actions:
        createVM:
          parameters:
            url: /servers/{$.vm_id}
            method: POST

 There’s no information about “REST” or “API” here. It’s just a spec for how to 
 form an HTTP request.
 
 Thoughts?
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 
 
 
 
 
 -- 
 Best Regards,
 Nikolay


Re: [openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-26 Thread Renat Akhmerov

On 26 Feb 2014, at 15:18, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote:

 for me, the following syntax is just unclear:
 $.image_id
 what is $ in this case? It would be clearer if we could replace $ with 
 something with a readable name, like global.image_id or 
 context.image_id.
 it looks like $ can be different in different namespaces.
 
 it will be unclear for new users, though of course it becomes easy once we 
 have experience with the DSL.

Yes, “$” actually comes from YAQL, which is a simple expression language with 
Python-like syntax. We were planning to use it here. However, we got an idea 
to make it possible to use other languages by having a simple abstraction, 
ExpressionEvaluator, in the system. It already exists.
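A rough sketch of what such an abstraction could look like (the class names and the naive "$"-path walker here are purely illustrative; Mistral's real evaluator delegates to YAQL and may have a different interface):

```python
import abc


class ExpressionEvaluator(abc.ABC):
    """Pluggable evaluator; YAQL would be one concrete implementation."""

    @abc.abstractmethod
    def evaluate(self, expression, context):
        """Evaluate `expression` against the task `context` dict."""


class NaiveDollarEvaluator(ExpressionEvaluator):
    """Toy stand-in for YAQL: '$' denotes the context itself,
    and '$.a.b' walks nested dictionary keys."""

    def evaluate(self, expression, context):
        if expression == "$":
            return context
        if not expression.startswith("$."):
            raise ValueError(f"not a $-expression: {expression!r}")
        value = context
        for key in expression[2:].split("."):
            value = value[key]
        return value


evaluator = NaiveDollarEvaluator()
ctx = {"image_id": "ubuntu-12.04", "vm": {"name": "web-1"}}
print(evaluator.evaluate("$.image_id", ctx))  # ubuntu-12.04
print(evaluator.evaluate("$.vm.name", ctx))   # web-1
```

Swapping in a different expression language then means writing another subclass, without touching the workflow engine.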


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-26 Thread Marco Fargetta
Hi Morgan,

On Tue, Feb 25, 2014 at 11:47:43AM -0800, Morgan Fainberg wrote:
 For purposes of supporting multiple backends for Identity (multiple LDAP, mix
 of LDAP and SQL, federation, etc) Keystone is planning to increase the maximum
 size of the USER_ID field from an upper limit of 64 to an upper limit of 255.
 This change would not impact any currently assigned USER_IDs (they would 
 remain
 in the old simple UUID format), however, new USER_IDs would be increased to
 include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 
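For illustration, composing and splitting such IDs could look like the following (a sketch based only on the "@@" convention quoted above; Keystone's actual implementation may differ):

```python
SEP = "@@"
MAX_LEN = 255  # proposed upper limit


def compose_user_id(local_id, idp_id=None):
    """Build a globally unique user ID; plain legacy IDs stay unchanged."""
    if idp_id is None:
        return local_id  # legacy 64-char UUID-style ID
    composed = local_id + SEP + idp_id
    if len(composed) > MAX_LEN:
        raise ValueError("user ID exceeds the 255-char upper limit")
    return composed


def split_user_id(user_id):
    """Return (local_id, idp_id); idp_id is None for legacy IDs."""
    if SEP in user_id:
        local_id, idp_id = user_id.split(SEP, 1)
        return local_id, idp_id
    return user_id, None


print(compose_user_id("abc123", "my_idp"))  # abc123@@my_idp
print(split_user_id("abc123"))              # ('abc123', None)
```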
 

in this case, if a user accesses with different systems (e.g. SAML with the
portal, LDAP with the CLI), they are mapped to two different identities inside
Keystone. Is this correct? If so, is there any way to map an individual person
with two identities sharing resources?

Cheers,
Marco


 There is the obvious concern that projects are utilizing (and storing) the
 user_id in a field that cannot accommodate the increased upper limit. Before
 this change is merged in, it is important for the Keystone team to understand
 if there are any places that would be overflowed by the increased size.
 
 The review that would implement this change in size is https://
 review.openstack.org/#/c/74214 and is actively being worked on/reviewed.
 
 I have already spoken with the Nova team, and a single instance has been
 identified that would require a migration (that will have a fix proposed for
 the I3 timeline). 
 
 If there are any other known locations that would have issues with an 
 increased
 USER_ID size, or any concerns with this change to USER_ID format, please
 respond so that the issues/concerns can be addressed.  Again, the plan is not
 to change current USER_IDs but that new ones could be up to 255 characters in
 length.
 
 Cheers,
 Morgan Fainberg
 
 —
 Morgan Fainberg
 Principal Software Engineer
 Core Developer, Keystone
 m...@metacloud.com
 



-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it






Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Thierry Carrez
Kenichi Oomichi wrote:
 From: Christopher Yeoh [mailto:cbky...@gmail.com]
 So the problem here is that what we consider a bug becomes a feature from
 a user of the API's point of view. E.g. they really shouldn't be passing
 some data in a request, but it's ignored and doesn't cause any issues
 and the request ends up doing what they expect.
 
 In addition, current v2 API behavior is not consistent when receiving
 unexpected API parameters. Most v2 APIs ignore unexpected API parameters,
 but some v2 APIs return a BadRequest response. For example, update host
 API does it in this case by 
 https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/hosts.py#L185
 
 Through v3 API development, we are making all v3 APIs return a BadRequest
 in this case. I think we cannot apply this kind of strict validation to
 running v2 API.

We may need to differentiate between breaking the API and breaking
corner-case behavior. In one case you force everyone in the ecosystem to
adapt (the libraries, the end user code). In the other you only
(potentially) affect those that were not following the API correctly.

So there may be a middle ground between "stick with dirty V2 forever"
and "go to V3 and accept a long V2 deprecation":

We could make a V3 that doesn't break the API, only breaks behavior in
error cases due to its stronger input validation. A V3 that shouldn't
break code that was following the API, nor require heavy library
changes. It's still a major API bump because behavior may change and
some end users will be screwed in the process, but damage is more
limited, so V2 could go away after a shorter deprecation period.
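The lenient-v2 versus strict-v3 behavior described in this thread can be sketched as follows (a hypothetical helper, not Nova's actual validation code):

```python
def validate_request(body, allowed, strict):
    """Return the parameters a handler should act on.

    Lenient (v2-style) mode silently drops unexpected parameters;
    strict (v3-style) mode rejects the whole request instead.
    """
    unexpected = set(body) - set(allowed)
    if unexpected and strict:
        # Modeled on returning a 400 Bad Request to the caller.
        raise ValueError(f"unexpected parameters: {sorted(unexpected)}")
    return {k: v for k, v in body.items() if k in allowed}


req = {"name": "vm-1", "flavorRef": "m1.small", "typo_field": 1}
print(validate_request(req, allowed={"name", "flavorRef"}, strict=False))
# {'name': 'vm-1', 'flavorRef': 'm1.small'}
```

With `strict=True`, the same request raises instead of silently ignoring `typo_field`, which is exactly the behavior change that can break end users who were not following the API correctly.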

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron] DOWN and INACTIVE status in FWaaS and LBaaS

2014-02-26 Thread Oleg Bondarev
Hi,

For LBaaS the background is simple: it uses statuses from
neutron/plugins/common/constants.py and INACTIVE was there initially while
DOWN
appeared later (with VPNaaS first commit). So LBaaS doesn't use DOWN at all.
As for INACTIVE, it is currently used only for members that stop responding
to health checks.
Also there is a patch on review (https://review.openstack.org/#/c/55032)
which sets INACTIVE
status for resources with admin state down.

My personal opinion is that we can easily fix that for LBaaS and replace
INACTIVE with DOWN
to be consistent with other network resources.

Thanks,
Oleg


On Wed, Feb 26, 2014 at 1:50 PM, Xuhan Peng pengxu...@gmail.com wrote:

 Hello,

 This email is triggered by the comments I received in my patch [1] when
 trying to fix bug [2].

  The problem I was trying to fix is that currently a firewall remains in status
  ACTIVE after its admin state is changed to DOWN. My plan is to change the
  status of the firewall from ACTIVE to DOWN when the admin state is down, as
  other network resources currently do.

  But I noticed that besides the DOWN state, an INACTIVE state is also used in
  FWaaS and LBaaS. So I hope someone can help me understand the background of
  this. If this is not by design and is inconsistent with other network
  resources, I can open a bug to fix this in FWaaS and LBaaS.

 Thanks,
 Xu Han

 [1]: https://review.openstack.org/#/c/73944/
 [2]: https://launchpad.net/bugs/1279213





Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Russell Bryant
On 02/25/2014 05:47 PM, Dan Smith wrote:
 Yeah, so objects is the big one here.
 
 Objects, and everything else. With no-db-compute we did it for a couple
 cycles, then objects, next it will be retooling flows to conductor, then
 dealing with tasks, talking to gantt, etc. It's not going to end any
 time soon.
 
 So what kind of reaction are the Keystone people getting to that?  Do
 they plan on removing their V2 API at some point?  Or just maintain it
 with bug fixes forever?
 
 Yep, that would be good data. We also need to factor in the relative
 deployment scale of nova installations vs. keystone installations in the
 world (AFAIK, RAX doesn't use keystone for example).

The Keystone API is also much smaller.

The path from glance v1 to v2 is another interesting data point.
Despite being a much smaller API surface, Glance has had both v1 and v2
for quite some time.  I'm not sure of the history or expected timeline
for v1 to go away, though.

-- 
Russell Bryant



Re: [openstack-dev] [OpenStack-Infra] [third-party-ci] Proposing a regular workshop/meeting to help folks set up CI environments

2014-02-26 Thread trinath.soman...@freescale.com
Hi Jay-

Rather than March 3rd, can you kindly make it on Feb 28th, if possible?

I'm interested in attending the meeting.

I have some questions to get clarified regarding the setup and configuration 
of the CI system.




--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, February 26, 2014 4:38 AM
To: Arx Cruz
Cc: OpenStack Development Mailing List; openstack-infra; John Griffith
Subject: Re: [OpenStack-Infra] [third-party-ci] Proposing a regular 
workshop/meeting to help folks set up CI environments

On Tue, 2014-02-25 at 20:02 -0300, Arx Cruz wrote:
 Hello,
 
 Great Idea, I'm very interested!
 
 I wasn't able to see the Google Hangout event; is the URL correct?

Hi Arx!

We changed from Google Hangout to using IRC. See here for more info:

http://lists.openstack.org/pipermail/openstack-dev/2014-February/028124.html

Best,
-jay



___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra





Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Eugene Nikanorov
A couple of notes:


On Wed, Feb 26, 2014 at 12:24 AM, Jay Pipes jaypi...@gmail.com wrote:



 neutron l7-policy-create --type=uri-regex-matching \
  --attr=URIRegex=static\.example\.com.*

 Presume above returns an ID for the policy $L7_POLICY_ID. We could then
 assign that policy to operate on the front-end of the load balancer and
 spreading load to the nginx nodes by doing:

 neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
  --subnet-cidr=192.168.1.0/24

 We could then indicate to the balancer that all other traffic should be
 sent to only the Apache nodes:

 neutron l7-policy-create --type=uri-regex-matching \
  --attr=URIRegex=static\.example\.com.* \
  --attr=RegexMatchReverse=true

 neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
  --subnet-cidr=192.168.2.0/24
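For illustration, the matching semantics of the URIRegex attribute (including the hypothetical RegexMatchReverse flag from the example) might look like this in Python's `re` terms; how a backend actually applies the pattern is implementation-specific:

```python
import re

uri_regex = re.compile(r"static\.example\.com.*")


def route_to_static(uri, reverse=False):
    """True when traffic should go to the static pool; with
    RegexMatchReverse semantics, non-matching traffic is selected."""
    matched = uri_regex.match(uri) is not None
    return matched != reverse


print(route_to_static("static.example.com/img/logo.png"))    # True
print(route_to_static("www.example.com/app", reverse=True))  # True
```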

That's cheating! :)
Once you have both static and webapp servers on one subnet, you'll have to
introduce the notion of 'node groups',
e.g. pools, and somehow refer them within single $BALANCER_ID.

I think notions from the world of load balancing are unavoidable in the API and
we should not try to get rid of them.


 The biggest advantage to this proposed API and CLI is that we are not
 introducing any terminology into the Neutron LBaaS API that is not
 necessary when existing terms in the main Neutron API already exist to
 describe such things.

But is there much point in this? We're introducing quite a lot even within
this proposal: loadbalancer, l7-policy, healthchecks, etc.

You will note that I do not use the term pool
 above, since the concept of a subnet (and its associated CIDR) are
 already well-established objects in the Neutron API and can serve the
 exact same purpose for Neutron LBaaS API.

The subnet is just not flexible enough. Not to say that some
implementations may not support having nodes on different subnets, while
they may support L7 rules.



  As far as hiding implementation details from the user:  To a certain
  degree I agree with this, and to a certain degree I do not: OpenStack
  is a cloud OS fulfilling the needs of supplying IaaS. It is not a
  PaaS. As such, the objects that users deal with largely are analogous
  to physical pieces of hardware that make up a cluster, albeit these
  are virtualized or conceptualized. Users can then use these conceptual
  components of a cluster to build the (virtual) infrastructure they
  need to support whatever application they want. These objects have
  attributes and are expected to act in a certain way, which again, are
  usually analogous to actual hardware.

 I disagree. A cloud API should strive to shield users of the cloud from
 having to understand underlying hardware APIs or object models.


I think Stephen's suggestion is not about the underlying hardware API, but
about the set of building blocks.
Across all services (Libra/Atlas, ELB, LBaaS) those blocks are the same no
matter how we name them.

Thanks,
Eugene.


Re: [openstack-dev] [OpenStack-Infra] [Neutron][third-party-testing] Third Party Test setup and details

2014-02-26 Thread trinath.soman...@freescale.com
Hi Sukhdev-

Really a good document to follow.

In the ‘System flow’ section of the document, where does this testRunner script 
come from?

The actions below are specified to be run with the testRunner script. 
Can you elaborate on that part?

Also, I’m looking at the document as a guide to setting up a CI. Will it help 
me in this regard?

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Wednesday, February 26, 2014 3:39 AM
To: openstack-in...@lists.openstack.org; OpenStack Development Mailing List 
(not for usage questions)
Subject: [OpenStack-Infra] [Neutron][third-party-testing] Third Party Test 
setup and details

Fellow developers,

I just put together a wiki describing the Arista Third Party Setup.
In the attached document we provide a link to the modified Gerrit Plugin that 
handles the regex matching for the "Comment Added" event so that "recheck no 
bug" / "reverify no bug" comments can be handled.

https://wiki.openstack.org/wiki/Arista-third-party-testing
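The regex matching such a plugin needs amounts to something like the following (illustrative Python; the actual Gerrit plugin is Java and its exact pattern may differ):

```python
import re

# Match a retrigger command on its own line within a review comment.
RECHECK_RE = re.compile(r"^(recheck|reverify)( no bug| bug \d+)?\s*$",
                        re.MULTILINE)


def is_recheck_comment(comment):
    """True when a Gerrit 'Comment Added' event should retrigger CI."""
    return RECHECK_RE.search(comment) is not None


print(is_recheck_comment("recheck no bug"))          # True
print(is_recheck_comment("Patch looks good to me"))  # False
```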

Have a look. Your feedback/comments will be appreciated.

regards..
-Sukhdev



Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-26 Thread Eugene Nikanorov
Hi,



 I assume then the validation in horizon to force the VIP ip from this pool
 subnet is incorrect. i.e VIP address can be from a different subnet.

Right, the existing horizon code is tied to the reference LBaaS implementation.

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Jay Pipes
On Wed, 2014-02-26 at 16:11 +0400, Eugene Nikanorov wrote:

 On Wed, Feb 26, 2014 at 12:24 AM, Jay Pipes jaypi...@gmail.com
 wrote:

  neutron l7-policy-create --type=uri-regex-matching \
   --attr=URIRegex=static\.example\.com.*

  Presume above returns an ID for the policy $L7_POLICY_ID. We could then
  assign that policy to operate on the front-end of the load balancer and
  spreading load to the nginx nodes by doing:

  neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
   --subnet-cidr=192.168.1.0/24

  We could then indicate to the balancer that all other traffic should be
  sent to only the Apache nodes:

  neutron l7-policy-create --type=uri-regex-matching \
   --attr=URIRegex=static\.example\.com.* \
   --attr=RegexMatchReverse=true

  neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
   --subnet-cidr=192.168.2.0/24

 That's cheating! :)

:)

 Once you have both static and webapp servers on one subnet, you'll
 have to introduce the notion of 'node groups', 
 e.g. pools, and somehow refer them within single $BALANCER_ID.

Agreed. In fact, I had a hangout with Stephen yesterday evening to chat
about just this thing.

I admit that the notion of a named pool of instances would be necessary
in these cases.

That said, what it all boils down to is generating a list of backend IP
addresses. Whether we use a subnet_cidr or a named pool ID, all that is
happening is allowing the user to specify a group of nodes together.

So, I'd love it if both options were possible (i.e. allow subnet_id,
subnet_cidr, pool_id and pool_name when specifying groups of nodes with
balancer-apply-policy) 
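Whichever grouping object is used, the end product is a list of backend addresses; expanding a subnet CIDR into candidates is straightforward (a sketch using the modern Python stdlib purely to illustrate the equivalence — this is not Neutron code, and 2014-era code would more likely have used netaddr):

```python
import ipaddress


def backend_ips(cidr, limit=4):
    """Expand a subnet CIDR into candidate backend addresses
    (excluding the network and broadcast addresses)."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()][:limit]


print(backend_ips("192.168.1.0/30"))  # ['192.168.1.1', '192.168.1.2']
```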
 
 I think notions from the world of load balancing are unavoidable in the
 API and we should not try to get rid of them.

  The biggest advantage to this proposed API and CLI is that we are not
  introducing any terminology into the Neutron LBaaS API that is not
  necessary when existing terms in the main Neutron API already exist to
  describe such things.

 But is there much point in this? We're introducing quite a lot even
 within this proposal: loadbalancer, l7-policy, healthchecks, etc.

Fair point. Was just brainstorming :)
 
  You will note that I do not use the term pool
  above, since the concept of a subnet (and its associated CIDR) are
  already well-established objects in the Neutron API and can serve the
  exact same purpose for the Neutron LBaaS API.

 The subnet is just not flexible enough. Not to say that some
 implementations may not support having nodes on different subnets,
 while they may support L7 rules.

Agreed. Would just like it to be an option instead of forcing the user
to create a pool if they don't need to (i.e. the subnet would work just
fine...)

   As far as hiding implementation details from the user:  To a certain
   degree I agree with this, and to a certain degree I do not: OpenStack
   is a cloud OS fulfilling the needs of supplying IaaS. It is not a
   PaaS. As such, the objects that users deal with largely are analogous
   to physical pieces of hardware that make up a cluster, albeit these
   are virtualized or conceptualized. Users can then use these conceptual
   components of a cluster to build the (virtual) infrastructure they
   need to support whatever application they want. These objects have
   attributes and are expected to act in a certain way, which again, are
   usually analogous to actual hardware.

  I disagree. A cloud API should strive to shield users of the cloud from
  having to understand underlying hardware APIs or object models.
  
 I think Stephen's suggestion is not about underlying hardware API, but
 about the set of building blocks.
 Across all services, Libra/Atlas, ELB, LBaaS those blocks are the same
 no matter how we name them.

Sure, understood. Just trying to brainstorm a bit on how to keep
flexibility in the LBaaS API while also simplifying it as much as
possible.

Best,
-jay





Re: [openstack-dev] git-review patch: Fix parsing of SCP-style URLs

2014-02-26 Thread Alexander Jones
Hi 

I uploaded a minor fix recently, so it needs re-reviewing. Much obliged, 
thanks. :) 

https://review.openstack.org/#/c/72751/ 

Alex 

- Original Message -

 From: Alexander Jones a...@dneg.com
 To: OpenStack-dev@lists.openstack.org
 Sent: Thursday, 20 February, 2014 4:12:16 PM
 Subject: git-review patch: Fix parsing of SCP-style URLs

 Really don't want to have to resolve conflicts again or battle through making
 the test suite succeed... Please can someone merge this?

 https://review.openstack.org/#/c/72751/
 https://bugs.launchpad.net/git-review/+bug/1279016

 Thanks!

 Alexander Jones
 Double Negative R&D
 www.dneg.com


Re: [openstack-dev] [Keystone] Tenant expiration dates

2014-02-26 Thread Sanchez, Cristian A
Hi Adam,
I have created this blueprint: 
https://blueprints.launchpad.net/keystone/+spec/tenant-expiration-dates
Thanks

Cristian

From: Adam Young ayo...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, 26 February 2014 01:06
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone] Tenant expiration dates

On 02/24/2014 08:41 AM, Dina Belova wrote:
Cristian, hello

I believe that should not be done in such a direct way, really.
Why not use the project.extra field in the DB to store this info? Is that not 
appropriate for your ideas, or would there be problems implementing it using 
extras?

  It would not make sense to enforce on something that was not queryable 
directly in the database.  Please don't use extra.  I'd like to see it removed. 
 It certainly should not be used for core behavior.

I think start/end datetimes make sense, and could be part of the project 
itself.  Please write up the blueprint.

Thanks,
Dina


On Mon, Feb 24, 2014 at 5:25 PM, Sanchez, Cristian A 
cristian.a.sanc...@intel.com wrote:
Hi,
I’m thinking about creating a blueprint to allow the creation of tenants with a 
defined start-date and end-date. These dates would define a time window in which 
the tenant is considered ‘enabled’, and auth tokens would be issued only when 
the current time is between those dates.
This can be particularly useful for projects like Climate, where resources are 
reserved. Any resource (like VMs) created for a tenant would have the same 
expiration dates as the tenant.
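The enabled-window check this proposal implies could be sketched as follows (illustrative only; the function name and the treatment of missing bounds as unbounded are assumptions):

```python
from datetime import datetime


def tenant_enabled(now, start=None, end=None):
    """A tenant is 'enabled' only inside its [start, end] window;
    a missing bound is treated as unbounded."""
    if start is not None and now < start:
        return False
    if end is not None and now > end:
        return False
    return True


now = datetime(2014, 3, 1)
print(tenant_enabled(now, datetime(2014, 1, 1), datetime(2014, 6, 1)))  # True
print(tenant_enabled(now, end=datetime(2014, 2, 1)))                    # False
```

Token issuance would then consult this check in addition to the existing boolean `enabled` flag.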

Do you think this is something that can be added to Keystone?

Thanks

Cristian




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.





Re: [openstack-dev] [Ceilometer][SUSE] SUSE OpenStack Havana distribution is different with upstream

2014-02-26 Thread Vincent Untz
Hi,

On Wednesday, 26 February 2014 at 17:20 +0800, ZhiQiang Fan wrote:
 Hi, SUSE OpenStack developers,
 
 After installing OpenStack Havana on SLES 11 SP3 following the
 openstack-manuals guide, I find that
 ceilometer/storage/impl_sqlalchemy.py has implemented the metadata query
 functionality. However, the upstream, i.e. the
 github.com/openstack/ceilometer stable/havana branch, doesn't implement that
 feature yet.
 
 the ceilometer package version is 2013.2.2.dev13
 
 Here is my questions:
 
 1. is this intentional or just a packaging mistake?
 if it is intentional, where can I get the distribution release notes?

You can see the package at
https://build.opensuse.org/package/show/Cloud:OpenStack:Havana/openstack-ceilometer

I think this is added because of the
0001-enable-sql-metadata-query.patch patch (you can get some notes about
history in the openstack-ceilometer.changes file)

 2. is this the only part that differs from the community code, or are there
 other parts?
 if it is not the only part, where can I get the whole diff?

See link above :-) Generally, the only differences are backports from
Icehouse.

 3. where and how can I get help if there is a bug in the divergent code?
 actually, there is one, caused by a MySQL foreign key, which blocks the
 entire ceilometer-collector service.

Feel free to get in touch with opensuse-cl...@opensuse.org, since this
is where the packaging for OpenStack is discussed for openSUSE and SLES.

(btw, with the build service, you can contribute any change you want to
the package)

Cheers,

Vincent

 This problem is really important (and a bit urgent), please help me.
 
 Thanks



-- 
Happy people are not in a hurry.



[openstack-dev] [Mistral] Defining term DSL

2014-02-26 Thread Nikolay Makhotkin
Due to a comment on https://review.openstack.org/#/c/75888/1 there is a
question:

Do we use the term DSL or something else?
I think the term 'DSL' fits well the thing we call 'workbook
definition': some text describing workflows, services, tasks and actions.
The processing module for this is also named 'dsl'.

Thoughts? Dmitri?

Nikolay,
Mirantis Inc.


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-26 Thread Dolph Mathews
On Wed, Feb 26, 2014 at 4:23 AM, Marco Fargetta
marco.farge...@ct.infn.itwrote:

 Hi Morgan,

 On Tue, Feb 25, 2014 at 11:47:43AM -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
 LDAP, mix
  of LDAP and SQL, federation, etc) Keystone is planning to increase the
 maximum
  size of the USER_ID field from an upper limit of 64 to an upper limit of
 255.
  This change would not impact any currently assigned USER_IDs (they would
 remain
  in the old simple UUID format), however, new USER_IDs would be increased
 to
  include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER).
 

 in this case if a user would access with different systems (e.g. SAML with
 portal, LDAP with CLI) it is mapped to two different identities inside
 keystone.
 Is this correct? If so, is there any way to map an individual person with
 two

identities sharing resources?


That's correct - they'd result in different identities and keystone has no
means of presuming they are the same person. But I think you answered it
yourself, in that they would effectively be sharing resources with
themselves, so you'd just have to ensure they had the same authorization on
the same projects using both identities.



 Cheers,
 Marco


  There is the obvious concern that projects are utilizing (and storing)
 the
  user_id in a field that cannot accommodate the increased upper limit.
 Before
  this change is merged in, it is important for the Keystone team to
 understand
  if there are any places that would be overflowed by the increased size.
 
  The review that would implement this change in size is https://
  review.openstack.org/#/c/74214 and is actively being worked on/reviewed.
 
  I have already spoken with the Nova team, and a single instance has been
  identified that would require a migration (that will have a fix proposed
 for
  the I3 timeline).
 
  If there are any other known locations that would have issues with an
 increased
  USER_ID size, or any concerns with this change to USER_ID format, please
  respond so that the issues/concerns can be addressed.  Again, the plan
 is not
  to change current USER_IDs but that new ones could be up to 255
 characters in
  length.
 
  Cheers,
  Morgan Fainberg
 
  —
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 



 --
 
 Eng. Marco Fargetta, PhD

 Istituto Nazionale di Fisica Nucleare (INFN)
 Catania, Italy

 EMail: marco.farge...@ct.infn.it
 




Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-26 Thread Dolph Mathews
On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).

 -1

 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.


Morgan and I talked this suggestion through last night and agreed it's
probably the best approach, and has the benefit of zero impact on other
services, which is something we're obviously trying to avoid. I imagine it
could be as simple as a user_id to domain_id lookup table. All we really
care about is: given a globally unique user ID, which identity backend is
the user from?

On the downside, it would likely become bloated with unused ephemeral user
IDs, so we'll need enough metadata about the mapping to implement a purging
behavior down the line.



 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.

  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.

 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.

 Best,
 -jay

  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  —
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 






[openstack-dev] [TripleO] Tuskar CLI UX

2014-02-26 Thread Jiří Stránský

Hello,

i went through the CLI way of deploying the overcloud, so if you're 
interested in what the workflow looks like, here it is:


https://gist.github.com/jistr/9228638


I'd say it's still an open question whether we'll want to give a better UX 
than that ^^ and at what cost (this is very much tied to the benefits 
and drawbacks of various solutions we discussed in December [1]). All in 
all it's not as bad as i expected it to be back then [1]. The fact that 
we keep Tuskar API as a layer in front of Heat means that a CLI user 
doesn't need to call merge.py and create the Heat stack manually, 
which is great.


In general the CLI workflow is on the same conceptual level as Tuskar 
UI, so that's fine; we just need to use more commands than just tuskar.


There's one naming mismatch though -- Tuskar UI doesn't use Horizon's 
Flavor management, but implements its own and calls it Node Profiles. 
I'm a bit hesitant to do the same thing on the CLI -- the most obvious 
option would be to make python-tuskarclient depend on python-novaclient 
and use a renamed Flavor management CLI. But that's wrong and high cost 
given that it's only about naming :)


The above issue is once again a manifestation of the fact that Tuskar 
UI, despite its name, is not a UI just for Tuskar; it is a UI for several 
services. If this becomes a greater problem, or if we want a top-notch 
CLI experience despite reimplementing bits that can already be done 
(just not in a super-friendly way), we could start thinking about 
building something like the OpenStackClient CLI [2], but directed 
specifically at Undercloud/Tuskar needs and using undercloud naming.


Another option would be to get Tuskar UI a bit closer back to the fact 
that Undercloud is OpenStack too, and keep the name Flavors instead of 
changing it to Node Profiles. I wonder if that would be unwelcome to 
the Tuskar UI UX, though.



Jirka


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient



[openstack-dev] [Heat] Heat use as a standalone component for Cloud Management over multi-IaaS

2014-02-26 Thread Charles Walker
Hi,


I am trying to deploy a proprietary application made in my company on the
cloud. The prerequisite for this is to have an IaaS, which can be either a
public cloud or a private cloud (OpenStack is an option for a private IaaS).


The first prototype I made was based on a homemade Python orchestrator and
Apache libcloud to interact with the IaaS (AWS, Rackspace and GCE).

The orchestrator part is Python code that reads a template file which
contains the info needed to deploy my application. This template file
indicates the number of VMs and the scripts associated with each VM type to
install it.


Now I am having a look at existing open source tools to do the
orchestration part. I found Juju (https://juju.ubuntu.com/) and Heat (
https://wiki.openstack.org/wiki/Heat).

I am investigating Heat more deeply and also had a look at
https://wiki.openstack.org/wiki/Heat/DSL which mentions:

*Cloud Service Provider* - A service entity offering hosted cloud services
on OpenStack or another cloud technology. Also known as a Vendor.


I think Heat in its current version will not match my requirements, but I
have the feeling that it is going to evolve and could cover my needs.


I would like to know if it would be possible to use Heat as a standalone
component in the future (without Nova and other OpenStack modules). The goal
would be to deploy an application from a template file on multiple cloud
services (like AWS, GCE).


Any feedback from people working on HEAT could help me.


Thanks, Charles.


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Mathieu Rohon
To be more precise, the kernel will listen on UDP port 8472 as soon as
you create a vxlan port with iproute2. I doubt it's your case, but I
can't figure out why you want to change the vxlan port; please tell us
about that, it could be interesting.

In your case, your config doesn't seem to be a multi-node one, so you
will have only one vxlan endpoint, which hosts the network node and the
compute node. If this is right, it's normal that no vxlan tunnel is
created, since you don't have any other vxlan endpoint.
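When debugging this kind of thing, a quick local check of whether something (e.g. the kernel vxlan module) already owns a UDP port can help. This is just a debugging aid I'm suggesting, not part of the configs in the thread:

```python
import socket

def udp_port_in_use(port, host='0.0.0.0'):
    """Return True if some process already bound the given UDP port.

    Works by attempting to bind it ourselves: if the bind fails with
    EADDRINUSE (an OSError), someone else holds the port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((host, port))
    except OSError:
        return True
    else:
        return False
    finally:
        sock.close()

# 8472 is the port the Linux vxlan driver uses (4789 is the IANA-assigned one).
print('something already owns UDP 8472:', udp_port_in_use(8472))
```

If this prints True on a box where OVS has not created any vxlan port, the kernel vxlan module (via iproute2) is the likely owner, which matches the ovs-vswitchd.log symptom described above.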


On Wed, Feb 26, 2014 at 11:38 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 Hi,

 FYI, setting the vxlan UDP port doesn't work properly for the moment:
 https://bugs.launchpad.net/neutron/+bug/1241561

 Maybe your kernel has the vxlan module already loaded, which binds UDP
 port 8472. That's a reason why the vxlan port can't be created by
 OVS. Check your ovs-vswitchd.log

 On Tue, Feb 25, 2014 at 10:08 PM, Varadhan, Sowmini
 sowmini.varad...@hp.com wrote:
 Folks,

 I'm trying to set up a simple single-node devstack + ml2 + vxlan
 combination, and though this ought to be a simple RTFM exercise,
 I'm having some trouble setting this up. Perhaps I'm doing something
 wrong- clues would be welcome.

 I made sure to use ovs_version 1.10.2, and followed
 the instructions in https://wiki.openstack.org/wiki/Neutron/ML2
 (and then some, based on various and sundry blogs that google found)

 Can someone share (all) the contents of their localrc,
 and if possible, a description of their VM (virtualbox?  qemu-kvm?)
 setup so that I can compare against my env?

 FWIW, I tried the attached configs.
 localrc.all - sets up
 Q_PLUGIN=ml2
 Q_ML2_TENANT_NETWORK_TYPE=vxlan
 Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
 Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
 Resulting VM boots, but no vxlan interfaces show up (see ovs-ctl.out.all)

 localrc.vxlan.only - disallow anything other than vxlan and gre.
 VM does not boot- I get a binding_failed error. See ovs-ctl.out.vxlan.only

 Thanks in advance,
 Sowmini





Re: [openstack-dev] [Mistral] Renaming action types

2014-02-26 Thread Renat Akhmerov
Thanks Jay.

Regarding underscore naming. If you meant using underscore naming for 
“createVM” and “novaURL” then yes: “createVM” is just a task name and it’s a 
user preference. The same goes for “novaURL”, which will be defined by users. As 
for keywords, we seemingly follow underscore naming.

Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 17:58, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-02-26 at 14:38 +0700, Renat Akhmerov wrote:
 Folks,
 
 I’m proposing to rename these two action types REST_API and
 MISTRAL_REST_API to HTTP and MISTRAL_HTTP. Words “REST” and “API”
 don’t look correct to me, if you look at
 
 
  Services:
    Nova:
      type: REST_API
      parameters:
        baseUrl: {$.novaURL}
      actions:
        createVM:
          parameters:
            url: /servers/{$.vm_id}
            method: POST
 
 There’s no information about “REST” or “API” here. It’s just a spec
 how to form an HTTP request.
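The spec above really is just request-forming data. A hypothetical sketch of that idea, with an invented base URL and a simplified `{name}` substitution in place of the real `{$.name}` expression syntax:

```python
def build_request(service, action, context):
    """Turn a service/action spec (dicts mirroring the YAML) into
    an (HTTP method, URL) pair."""
    params = service['actions'][action]['parameters']
    url = service['parameters']['baseUrl'] + params['url'].format(**context)
    return params.get('method', 'GET'), url

# Dict form of the YAML spec; the base URL is made up for illustration.
nova = {
    'parameters': {'baseUrl': 'http://nova.example.com/v2'},
    'actions': {
        'createVM': {'parameters': {'url': '/servers/{vm_id}',
                                    'method': 'POST'}},
    },
}

method, url = build_request(nova, 'createVM', {'vm_id': '42'})
assert (method, url) == ('POST', 'http://nova.example.com/v2/servers/42')
```

Nothing in it is REST- or API-specific, which is the argument for calling the type HTTP.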
 
 +1 on HTTP and MISTRAL_HTTP.
 
 On an unrelated note, would it be possible to use under_score_naming
 instead of camelCase naming?
 
 Best,
 -jay
 
 
 




Re: [openstack-dev] [Mistral] Renaming action types

2014-02-26 Thread Renat Akhmerov
Ooh, I was wrong. Sorry. We use dash naming. We have “on-success”, “on-error” 
and so forth.

Please let us know if you see other inconsistencies.

Thanks

Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 21:00, Renat Akhmerov rakhme...@mirantis.com wrote:

 Thanks Jay.
 
 Regarding underscore naming. If you meant using underscore naming for 
 “createVM” and “novaURL” then yes, “createVM” is just a task name and it’s a 
 user preference. The same about “novaURL” which will be defined by users. As 
 for keywords, seemingly we follow underscore naming.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 26 Feb 2014, at 17:58, Jay Pipes jaypi...@gmail.com wrote:
 
 On Wed, 2014-02-26 at 14:38 +0700, Renat Akhmerov wrote:
 Folks,
 
 I’m proposing to rename these two action types REST_API and
 MISTRAL_REST_API to HTTP and MISTRAL_HTTP. Words “REST” and “API”
 don’t look correct to me, if you look at
 
 
  Services:
    Nova:
      type: REST_API
      parameters:
        baseUrl: {$.novaURL}
      actions:
        createVM:
          parameters:
            url: /servers/{$.vm_id}
            method: POST
 
 There’s no information about “REST” or “API” here. It’s just a spec
 how to form an HTTP request.
 
 +1 on HTTP and MISTRAL_HTTP.
 
 On an unrelated note, would it be possible to use under_score_naming
 instead of camelCase naming?
 
 Best,
 -jay
 
 
 
 




Re: [openstack-dev] [Heat] Heat use as a standalone component for Cloud Management over multi-IaaS

2014-02-26 Thread Steven Dake

On 02/26/2014 06:47 AM, Charles Walker wrote:


Hi,


I am trying to deploy the proprietary application made in my company 
on the cloud. The pre requisite for this is to have a IAAS which can 
be either a public cloud or private cloud (openstack is an option for 
a private IAAS).



The first prototype I made was based on a homemade python orchestrator 
and apache libCloud to interact with IAAS (AWS and Rackspace and GCE).


The orchestrator part is a python code reading a template file which 
contains the info needed to deploy my application. This template file 
indicates the number of VM and the scripts associated to each VM type 
to install it.



Now I was trying to have a look on existing open source tool to do the 
orchestration part. I find JUJU (https://juju.ubuntu.com/) or HEAT 
(https://wiki.openstack.org/wiki/Heat).


I am investigating deeper HEAT and also had a look on 
https://wiki.openstack.org/wiki/Heat/DSL which mentioned:




You will notice at the top of this page, it is clearly labeled Proposal 
Only.  Just a tip, but I'd recommend taking anything on the wiki with a 
grain of salt (vs what is actually put on docs.openstack.org, which is a 
more accurate world view).


The Heat developers have coalesced around a de-facto standard DSL called 
HOT instead:


http://docs.openstack.org/developer/heat/template_guide/hot_spec.html
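For reference, a minimal HOT template looks roughly like this (the image and flavor values are illustrative, not recommendations):

```yaml
heat_template_version: 2013-05-23

description: Minimal example of the HOT DSL

parameters:
  image:
    type: string
    default: cirros-0.3.1-x86_64

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
```

Note that resource types like OS::Nova::Server are exactly where the hard dependency on OpenStack services comes from, which is relevant to the standalone question below.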

*Cloud Service Provider* - A service entity offering hosted cloud 
services on OpenStack or another cloud technology. Also known as a 
Vendor.



I think HEAT as its actual version will not match my requirement but I 
have the feeling that it is going to evolve and could cover my needs.



I would like to know if it would be possible to use HEAT as a 
standalone component in the future (without Nova and other Ostack 
modules)? The goal would be to deploy an application from a template 
file on multiple cloud service (like AWS, GCE).



First, Heat has a hard dependency on keystone.  Second, it wouldn't be 
very useful in this configuration.  Heat provides built-in resources for 
managing things like servers, floating ips, and other types of 
resources.  These resource plugins expect to communicate with openstack 
nova, neutron, etc.  If you were really motivated, you could write 
bespoke plugins for all of the AWS/GCE services to run a hybrid cloud 
using Heat.  If you were even more motivated, you could get these merged 
upstream.  But hybrid cloud is not in scope for the Orchestration 
program.  We don't stop people from trying to use Heat in this way, but 
we don't directly enable it in the resources either.


In the future, I'd recommend asking general questions like this on 
ask.openstack.org so the entire community can share and record the 
experience, rather than having it lost on a mailing list.


Thanks!
-steve


Any feedback from people working on HEAT could help me.


Thanks, Charles.







Re: [openstack-dev] [Neutron] DOWN and INACTIVE status in FWaaS and LBaaS

2014-02-26 Thread Xuhan Peng
Oleg,

Thanks a lot for your quick response! I will open a bug to address that
soon. For the LBaaS part, probably I will just make the fix dependent on
the code review you mentioned.

Xu Han
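The behavior being proposed boils down to something like this (the constants mirror those in neutron/plugins/common/constants.py; the function itself is only a sketch, not Neutron code):

```python
# Status constants as used across Neutron resources.
ACTIVE, DOWN, INACTIVE, ERROR = 'ACTIVE', 'DOWN', 'INACTIVE', 'ERROR'

def firewall_status(admin_state_up, error=False):
    """Derive the firewall status from its admin state, the way other
    network resources already do."""
    if error:
        return ERROR
    return ACTIVE if admin_state_up else DOWN

assert firewall_status(True) == ACTIVE
assert firewall_status(False) == DOWN  # instead of staying ACTIVE
```

Replacing the remaining uses of INACTIVE with DOWN would then make FWaaS and LBaaS consistent with this.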

On Wed, Feb 26, 2014 at 6:43 PM, Oleg Bondarev obonda...@mirantis.comwrote:

 Hi,

 For LBaaS the background is simple: it uses statuses from
 neutron/plugins/common/constants.py; INACTIVE was there initially, while
 DOWN appeared later (with the first VPNaaS commit). So LBaaS doesn't use
 DOWN at all.
 As for INACTIVE, it is currently used only for members that stop
 responding to health checks.
 Also there is a patch on review (https://review.openstack.org/#/c/55032)
 which sets INACTIVE
 status for resources with admin state down.

 My personal opinion is that we can easily fix that for LBaaS and replace
 INACTIVE with DOWN
 to be consistent with other network resources.

 Thanks,
 Oleg


 On Wed, Feb 26, 2014 at 1:50 PM, Xuhan Peng pengxu...@gmail.com wrote:

 Hello,

 This email is triggered by the comments I received in my patch [1] when
 trying to fix bug [2].

  The problem I was trying to fix is that currently a firewall remains in
  status ACTIVE after its admin state is changed to DOWN. My plan is to
  change the status of the firewall from ACTIVE to DOWN when the admin state
  is down, as other network resources currently do.

  But I noticed that besides the DOWN state, the INACTIVE state is also used
  in FWaaS and LBaaS. So I hope someone can help me understand any background
  of this. If this is not particularly by design and is inconsistent with
  other network resources, I can open a bug to fix this in FWaaS and LBaaS.

 Thanks,
 Xu Han

 [1]: https://review.openstack.org/#/c/73944/
 [2]: https://launchpad.net/bugs/1279213










[openstack-dev] How do I mark one option as deprecating another one ?

2014-02-26 Thread Day, Phil
Hi Folks,

I could do with some pointers on config value deprecation.

All of the examples in the code and documentation seem to deal with the case 
of old_opt being replaced by new_opt but still returning the same value.
Here, using deprecated_name and/or deprecated_opts in the definition of 
new_opt lets me still get the value (and log a warning) if the config still 
uses old_opt.

However, my use case is different because while I want to deprecate old_opt, 
new_opt doesn't take the same value and I need to do different things depending 
on which is specified, i.e. if old_opt is specified and new_opt isn't, I still 
want to do some processing specific to old_opt and log a deprecation warning.

Clearly I can code this up as a special case at the point where I look for the 
options - but I was wondering if there is some clever magic in oslo.config that 
lets me declare this as part of the option definition ?
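For what it's worth, the special-case handling described above could look roughly like this. This is a plain-Python sketch rather than oslo.config machinery, and the option names and the legacy transform are made up:

```python
import warnings

def legacy_transform(value):
    # Stand-in for whatever old_opt-specific processing is needed before
    # the value can be used in place of new_opt.
    return value.upper()

def effective_opt(conf):
    """Pick the effective option from a dict of explicitly-set options,
    preferring new_opt and warning when old_opt is the fallback."""
    if 'new_opt' in conf:
        return 'new_opt', conf['new_opt']
    if 'old_opt' in conf:
        warnings.warn("old_opt is deprecated; please switch to new_opt",
                      DeprecationWarning)
        return 'old_opt', legacy_transform(conf['old_opt'])
    return 'new_opt', 'default-value'

assert effective_opt({'new_opt': 'a', 'old_opt': 'b'}) == ('new_opt', 'a')
assert effective_opt({'old_opt': 'b'}) == ('old_opt', 'B')
```

Whether oslo.config can express the "different value, different processing" part declaratively is exactly the open question.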



As a second point,  I thought that using a deprecated option automatically 
logged a warning, but in the latest Devstack wait_soft_reboot_seconds is 
defined as:

cfg.IntOpt('wait_soft_reboot_seconds',
           default=120,
           help='Number of seconds to wait for instance to shut down after'
                ' soft reboot request is made. We fall back to hard reboot'
                ' if instance does not shutdown within this window.',
           deprecated_name='libvirt_wait_soft_reboot_seconds',
           deprecated_group='DEFAULT'),



but if I include the following in nova.conf

libvirt_wait_soft_reboot_seconds = 20


I can see the new value of 20 being used, but there is no warning logged that 
I'm using a deprecated name ?

Thanks
Phil



Re: [openstack-dev] How do I mark one option as deprecating another one ?

2014-02-26 Thread Denis Makogon
Here is what the oslo.config documentation says.

Represents a Deprecated option. Here's how you can use it

oldopts = [cfg.DeprecatedOpt('oldfoo', group='oldgroup'),
           cfg.DeprecatedOpt('oldfoo2', group='oldgroup2')]
cfg.CONF.register_group(cfg.OptGroup('blaa'))
cfg.CONF.register_opt(cfg.StrOpt('foo', deprecated_opts=oldopts),
                      group='blaa')

Multi-value options will return all new and deprecated
options.  For single options, if the new option is present
([blaa]/foo above) it will override any deprecated options
present.  If the new option is not present and multiple
deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.

I hope that it'll help you.

Best regards,
Denis Makogon.


On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil philip@hp.com wrote:

  Hi Folks,



 I could do with some pointers on config value deprecation.



 All of the examples in the code and documentation seem to deal with  the
 case of old_opt being replaced by new_opt but still returning the same
 value

 Here using deprecated_name and  / or deprecated_opts in the definition of
 new_opt lets me still get the value (and log a warning) if the config
 still uses old_opt



 However, my use case is different because while I want to deprecate old_opt,
 new_opt doesn't take the same value and I need to do different things
 depending on which is specified, i.e. if old_opt is specified and new_opt
 isn't, I still want to do some processing specific to old_opt and log a
 deprecation warning.



 Clearly I can code this up as a special case at the point where I look for
 the options - but I was wondering if there is some clever magic in
 oslo.config that lets me declare this as part of the option definition ?







 As a second point,  I thought that using a deprecated option automatically
 logged a warning, but in the latest Devstack wait_soft_reboot_seconds is
 defined as:



 cfg.IntOpt('wait_soft_reboot_seconds',

default=120,

help='Number of seconds to wait for instance to shut down
 after'

 ' soft reboot request is made. We fall back to hard
 reboot'

 ' if instance does not shutdown within this window.',

deprecated_name='libvirt_wait_soft_reboot_seconds',

deprecated_group='DEFAULT'),







 but if I include the following in nova.conf



 libvirt_wait_soft_reboot_seconds = 20





 I can see the new value of 20 being used, but there is no warning logged
 that I'm using a deprecated name ?



 Thanks

 Phil







Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Dan Smith
 So I was thinking about this and Ken'ichi has basically said pretty
 much the same thing in his reply to this thread. I don't think it
 makes client moves any easier though - this is all about lowering our
 maintenance costs. 

So, in the other fork of this thread, I think you said we can't improve
v2 because we're concerned about its incredible fragility. The above
statement seems to imply that we can totally rewrite it as decorators on
top of v3? I don't get that :)

--Dan



Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-26 Thread Ziad Sawalha

On Feb 25, 2014, at 4:08 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Feb 24, 2014 at 4:56 PM, Ziad Sawalha
 ziad.sawa...@rackspace.com wrote:
 Seeking some clarification on the OpenStack hacking guidelines for
 multi-string docstrings.
 
 Q: In OpenStack projects, is a blank line before the triple closing quotes
 recommended (and therefore optional - this is what PEP-257 seems to
 suggest), required, or explicitly rejected (which could be one way to
 interpret the hacking guidelines since they omit the blank line).
 
 This came up in a commit review, and here are some references on the topic:
 
 Link?

https://review.openstack.org/#/c/73515/4/heat/api/aws/exception.py

 Style should never ever be enforced by a human,

Agreed. I’d be happy to include a PEP-257 plugin to flake8 (or
a change to the hacking library) customized for our rules in the
hacking doc.

I’m actually testing a fork of a flake8 plugin here: 
https://github.com/ziadsawalha/flake8_docstrings
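A toy version of such a check (not the actual flake8_docstrings plugin, just an illustration of the rule being debated) is only a few lines:

```python
def has_blank_line_before_close(docstring):
    """Return True if a multi-line docstring ends with a blank line
    before its closing triple quotes (the PEP-257/Emacs style)."""
    lines = docstring.splitlines()
    return len(lines) > 1 and lines[-1].strip() == ''

# PEP-257 style: blank line before the closing quotes.
pep257_style = """Form a complex number.

Keyword arguments:
real -- the real part (default 0.0)

"""

# Hacking-doc style: no trailing blank line.
hacking_style = """A one-line summary.

More detail here, with no trailing blank line.
"""

assert has_blank_line_before_close(pep257_style) is True
assert has_blank_line_before_close(hacking_style) is False
```

A hacking rule would then simply assert one style or the other instead of leaving it to reviewers.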

 if the code passed
 the pep8 job, then its acceptable.

PEP-8 does not cover this; PEP-257 does, combined with our OpenStack hacking 
standards.

 
 Quoting PEP-257: The BDFL [3] recommends inserting a blank line between the
 last paragraph in a multi-line docstring and its closing quotes, placing the
 closing quotes on a line by themselves. This way, Emacs' fill-paragraph
 command can be used on it.
 
 Sample from pep257 (with extra blank line):
 
  def complex(real=0.0, imag=0.0):
      """Form a complex number.
 
      Keyword arguments:
      real -- the real part (default 0.0)
      imag -- the imaginary part (default 0.0)
 
      """
      if imag == 0.0 and real == 0.0: return complex_zero
      ...
 
 
 The multi-line docstring example in
 http://docs.openstack.org/developer/hacking/ has no extra blank line before
 the ending triple-quotes:
 
  """A multi line docstring has a one-line summary, less than 80 characters.
 
  Then a new paragraph after a newline that explains in more detail any
  general information about the function, class or method. Example usages
  are also great to have here if it is a complex class or function.
 
  When writing the docstring for a class, an extra line should be placed
  after the closing quotations. For more in-depth explanations for these
  decisions see http://www.python.org/dev/peps/pep-0257/
 
  If you are going to describe parameters and return values, use Sphinx, the
  appropriate syntax is as follows.
 
  :param foo: the foo parameter
  :param bar: the bar parameter
  :returns: return_type -- description of the return value
  :returns: description of the return value
  :raises: AttributeError, KeyError
  """
 
 
 Regards,
 
 Ziad
 
 
 




Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-26 Thread Ziad Sawalha

On Feb 25, 2014, at 5:18 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:

 On Tue, 2014-02-25 at 00:56 +, Ziad Sawalha wrote:
 Seeking some clarification on the OpenStack hacking guidelines for
 multi-string docstrings. 
 
 
 Q: In OpenStack projects, is a blank line before the triple closing
 quotes recommended (and therefore optional - this is what PEP-257
 seems to suggest), required, or explicitly rejected (which could be
 one way to interpret the hacking guidelines since they omit the blank
 line).
 
 
 I lobbied to relax that restriction, because I happen to use Emacs, and
 know that that limitation no longer exists with Emacs.

Given the recommendation in PEP-257 is based solely on that, should
we explicitly recommend not including that extra space?

Should we add that rule to hacking?

  I submitted the
 change that eliminated that language from nova's HACKING at the time…

Thanks, Kevin - based on that I’ll resubmit my change without the blank line.


 -- 
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace
 
 




Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Dan Smith
 We may need to differentiate between breaking the API and breaking
 corner-case behavior.

Totally agreed.

 In one case you force everyone in the ecosystem to
 adapt (the libraries, the end user code). In the other you only
 (potentially) affect those that were not following the API correctly.

The API spec is too loose right now, which is definitely a problem. However,
I think I'd much rather tighten things down and deal with the potential
fallout of someone's client breaking and saying oh, I thought 'red' was a
valid uuid than do whole rewrites.

 We could make a V3 that doesn't break the API, only breaks behavior in
 error cases due to its stronger input validation. A V3 that shouldn't
 break code that was following the API, nor require heavy library
 changes. It's still a major API bump because behavior may change and
 some end users will be screwed in the process, but damage is more
 limited, so V2 could go away after a shorter deprecation period.

What's the difference between saying /v2 will return a 404 after K and
saying If your client doesn't declare support for revision 2 of these
calls we'll return a 405, 406, 410, etc? Actually, 412 seems to be
exactly this case.
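That idea can be sketched as follows. The header name, call identifiers, and the choice of 412 here are illustrative, not an implemented Nova API:

```python
# Minimum revision of each call the server is willing to speak.
SUPPORTED = {'servers.create': 2}

def dispatch(call, headers):
    """Refuse a request if the client does not declare a recent enough
    revision of this call, instead of standing up a whole /v3 endpoint."""
    declared = int(headers.get('X-Compute-API-Revision', 1))
    if declared < SUPPORTED[call]:
        return 412  # Precondition Failed: client too old for this call
    return 200

assert dispatch('servers.create', {'X-Compute-API-Revision': '2'}) == 200
assert dispatch('servers.create', {}) == 412
```

The deprecation question then becomes per-call ("revision 1 of this call goes away") rather than per-endpoint ("/v2 returns 404 after K").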

--Dan



[openstack-dev] [nova][neutron] SR-IOV networking patches available

2014-02-26 Thread Robert Li (baoli)
Hi,

The following two Work In Progress patches are available for end-to-end SR-IOV 
networking:
nova client: https://review.openstack.org/#/c/67503/
nova: https://review.openstack.org/#/c/67500/

Please check the commit messages for how to use them.

Neutron changes required to support SR-IOV have already been merged. Many 
thanks to the developers working on them and having them merged in a very short 
time! They are:

https://blueprints.launchpad.net/neutron/+spec/vif-details
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type

The above patches combined can be used to develop a neutron plugin that 
supports SR-IOV. Please note that although the nova patches are WIP, they can 
be used for your integration testing if you are developing an SR-IOV-capable 
neutron plugin.

If you use devstack, you may need the following patch for devstack to define 
the PCI whitelist entries:

diff --git a/lib/nova b/lib/nova
index fefeda1..995873a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -475,6 +475,10 @@ function create_nova_conf() {
 iniset $NOVA_CONF DEFAULT ${I/=/ }
 done

+    if [ -n "$PCI_LIST" ]; then
+        iniset_multiline $NOVA_CONF DEFAULT pci_passthrough_whitelist "${PCI_LIST[@]}"
+    fi
+
 # All nova-compute workers need to know the vnc configuration options
 # These settings don't hurt anything if n-xvnc and n-novnc are disabled
 if is_service_enabled n-cpu; then

And define something like the following in your localrc file:
PCI_LIST=('{"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}'
          '{"vendor_id":"1137","product_id":"0071"}')
Basically it's a bash array of strings, with each string being a JSON dict. 
Check out https://review.openstack.org/#/c/67500 for the syntax.
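Each whitelist entry is plain JSON once the shell quoting is unwrapped; for example (the values are illustrative):

```python
import json

# One whitelist entry as nova would parse it from the config value.
entry = ('{"vendor_id": "1137", "product_id": "0071",'
         ' "address": "*:0a:00.*", "physical_network": "physnet1"}')

spec = json.loads(entry)
assert spec['vendor_id'] == '1137'
assert spec['physical_network'] == 'physnet1'
```

If an entry fails to parse like this, double-check the quoting that survived your localrc.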

Thanks,
Robert



Re: [openstack-dev] [Mistral] Defining term DSL

2014-02-26 Thread Renat Akhmerov
I don’t see any issues with the term DSL (Domain Specific Language). This is really 
a language in which 'workbook definitions' are written.

Dmitri, could you please provide more details on why you question it?

Thanks

Renat Akhmerov
@ Mirantis Inc.
 

On 26 Feb 2014, at 20:12, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Due to the comment on https://review.openstack.org/#/c/75888/1 there is a 
 question: 
 
 Do we use the term DSL or something else? 
 I think the word 'DSL' better fits the thing that we call 'workbook definition': 
 some text describing workflows, services, tasks and actions. The processing 
 module for this also has the name 'dsl'.
 
 Thoughts? Dmitri?
 
 Nikolay,
 Mirantis Inc.



Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-26 Thread Lingxian Kong
2014-02-25 19:48 GMT+08:00 Salvatore Orlando sorla...@nicira.com:

 I understand that the fact that resources with invalid tenant_ids can be
 created (with admin rights only, at least for Neutron) can be annoying.

 However, I support Jay's point on cross-project interactions. If tenant_id
 validation (and orphaned resource management) can't be efficiently handled,
 then I'd rather let 3rd party scripts dealing with orphaned and invalid
 resources.

 I reckon that it might be worth experimenting whether the notifications
 sent by Keystone (see Dolph's post on this thread) can be used to deal with
 orphaned resources.
 For tenant_id validation, anything involving an extra round trip to
 keystone would not be efficient in my opinion. If there is a way to perform
 this validation in the same call which validates the tenant auth_token then
 it's a different story.
 Notifications from keystone *could* be used to build a local (persistent
 perhaps) cache of active tenant identifiers. However, this would require
 reliable notifications, as well as appropriate cache management, which is
 often less simple than what it looks like.

 Salvatore


Thanks for your explanation and suggestion, Salvatore. I still think it's
a problem that we should handle, either in OpenStack or outside it (through
what you said, e.g. 3rd party scripts). Maybe we could add some content to
the wiki or docs? Any ideas?
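As a rough illustration of the local cache Salvatore mentions, a minimal sketch (the class and the event names are assumptions for illustration, not actual Keystone or Neutron code):

```python
# Keep a local set of active tenant IDs, updated from notification
# events, so tenant_id validation needs no extra round trip to Keystone.
class TenantCache(object):
    def __init__(self):
        self._active = set()

    def handle_event(self, event_type, tenant_id):
        # Event names are assumptions; real notification topics may differ.
        if event_type == "identity.project.created":
            self._active.add(tenant_id)
        elif event_type == "identity.project.deleted":
            self._active.discard(tenant_id)

    def is_valid(self, tenant_id):
        return tenant_id in self._active

cache = TenantCache()
cache.handle_event("identity.project.created", "tenant-a")
print(cache.is_valid("tenant-a"))   # True
cache.handle_event("identity.project.deleted", "tenant-a")
print(cache.is_valid("tenant-a"))   # False
```

As Salvatore notes, the hard part is not this lookup but making the notifications reliable and keeping the cache consistent.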


-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Neutron][third-party-testing] Third Party Test setup and details

2014-02-26 Thread Sukhdev Kapur
On Wed, Feb 26, 2014 at 4:12 AM, trinath.soman...@freescale.com 
trinath.soman...@freescale.com wrote:

  Hi Sukhdev-



 Really a good document to go with.



 In the 'System flow' section of the document, where does this testRunner
 script come from?


This is a very simple Python script that we wrote - all it does is execute
the steps/actions described in the system flow.
As an example, here is one of the functions that it invokes to prepare for
testing.

def sanitize(nodeType, repoDict, branchDict, aristaConfig=None):

    unstack()
    cleanstack()
    removeDevstack()
    downloadDevstack(branchDict)

    print "Devstack downloaded"
    serviceHost = 'localhost'
    if nodeType != TEMPEST_CONTROLLER:
        serviceHost = determineServiceHost(socket.gethostname())

    if aristaConfig:
        generateAristaConfig(aristaConfig)

    generateLocalrc(nodeType, serviceHost, repoDict, branchDict,
                    aristaConfig)
    generateLocalConf()
    return stack()





 And the actions below are specified to be run with the testRunner script.



 Can you elaborate on that part?



 Also, I'm looking into the document as a way to set up a CI. Will this help
 me in that regard?


Follow the steps on the link provided in the document to install the Jenkins
Gerrit plugin. Once you install Jenkins, just follow the configuration
steps I describe, including installing the modified Gerrit plugin. If you
have any issue, post it here; this will help me modify the document to make
it more useful for others to follow.

best of luck




 --

 Trinath Somanchi - B39208

 trinath.soman...@freescale.com | extn: 4048



 *From:* Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
 *Sent:* Wednesday, February 26, 2014 3:39 AM

 *To:* openstack-in...@lists.openstack.org; OpenStack Development Mailing
 List (not for usage questions)
 *Subject:* [OpenStack-Infra] [Neutron][third-party-testing] Third Party
 Test setup and details



 Fellow developers,



 I just put together a wiki describing the Arista Third Party Setup.

 In the attached document we provide a link to the modified Gerrit Plugin
 to handle the regex matching for the Comment Added event so that
 recheck/reverify no bug/ can be handled.



 https://wiki.openstack.org/wiki/Arista-third-party-testing



 Have a look. Your feedback/comments will be appreciated.



 regards..

 -Sukhdev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TROVE] Trove capabilities

2014-02-26 Thread Daniel Salinas
I'll be honest: after reading your proposal, Denis, and looking over what
Kaleb has, it feels like you are both calling two different things
"capabilities".  Denis, your proposal provides a way of blocking or
enabling API paths for users.  Kaleb's proposal is more about informing
consumers of the API what things are required or not when making those
API calls.

One example that Kaleb is implementing is the volume on create.  In the
end, this will give a user the ability to query the API for a datastore,
redis for example, and find out that redis does *not* use an ephemeral
volume, so they cannot pass a volume size to the create call.  Another
good example is users, again using redis as an example.  If I am a company
making a UI for trove, I need to know, when constructing the page for
managing a redis instance in trove, whether to display the users tab of
my UI or not.

Now I see some definite overlap with some of the pathing stuff, but if you
dig further into it, like with ephemeral volumes, I don't see a logical
way to fit that into your DSL.  Further, I don't like changing the model
of things that are displayed to users via the API and having them stored
in a flat file anywhere.  Whether or not you come up with a clever way to
reload that file without restarting the trove process is irrelevant to
me.  In my mind, if data is for displaying to users then it gets put in
the database, plain and simple.  Then we don't need to do any tricks with
looking at a database entry and then looking in some dictionary that was
deserialized from a yaml file to see which ones are actually shown or used.
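To make the distinction concrete, a hypothetical sketch of the kind of per-datastore capabilities data Kaleb's proposal would expose to API consumers (the field names and values here are mine, for illustration only):

```python
# A UI or client queries capabilities before rendering controls or
# building a create call, instead of hard-coding per-datastore behavior.
capabilities = {
    "mysql": {"volume_support": True, "users": True},
    "redis": {"volume_support": False, "users": False},
}

def can_pass_volume(datastore):
    """Should a client offer a volume size for this datastore's create call?"""
    return capabilities.get(datastore, {}).get("volume_support", False)

print(can_pass_volume("redis"))  # False
print(can_pass_volume("mysql"))  # True
```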

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Defining term DSL

2014-02-26 Thread Dmitri Zimine
We do use the term DSL; I invite you guys to clarify how exactly. 

Based on the terminology from [1], it's not part of the model, but the language 
that describes the model in the file. And theoretically this may not be the 
only language to express the workflow. Once the file is parsed, we operate on 
the model, not on the language. 

I am afraid we are breaking an abstraction when we begin to call things 
DSLWorkbook or DSLWorkflow. What is the difference between a Workbook and a 
DSLWorkbook, and how is DSL relevant here? 

[1] https://wiki.openstack.org/wiki/Mistral

DZ 
On Feb 26, 2014, at 7:19 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 I don't see any issues with the term DSL (Domain Specific Language). This is 
 really the language in which 'workbook definitions' are written.
 
 Dmitri, could you please provide more details on why you question it?
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
  
 
 On 26 Feb 2014, at 20:12, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Due to the comment on https://review.openstack.org/#/c/75888/1 there is a 
 question: 
 
 Do we use the term DSL or something else? 
 I think the word 'DSL' better fits what we call a 'workbook definition': 
 some text describing workflows, services, tasks and actions. The processing 
 module for this is also named 'dsl'.
 
 Thoughts? Dmitri?
 
 Nikolay,
 Mirantis Inc.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Live migration

2014-02-26 Thread Dmitry Borodaenko
On Tue, Feb 25, 2014 at 1:40 AM, Matthias Runge mru...@redhat.com wrote:
 I think that the blueprint to add live migrations support to
 Horizon[0] was incorrectly labeled as a duplicate of the earlier
 migrate-instance blueprint[1].

 [0] https://blueprints.launchpad.net/horizon/+spec/live-migration
 [1] https://blueprints.launchpad.net/horizon/+spec/migrate-instance
 I think your [0] is a duplicate of [2], which was implemented during Icehouse.

Indeed it is! I don't have rights to modify the original blueprint;
can you fix it to refer to the superseding blueprint properly? All I
could do was add a link in the whiteboard, so that no one else gets
confused.

Thanks!

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sqlalchemy-migrate release impending

2014-02-26 Thread David Ripton
I'd like to release a new version of sqlalchemy-migrate in the next 
couple of days.  The only major new feature is DB2 support.  If anyone 
thinks this is a bad time, please let me know.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron][Keystone] Where to implement VPC API

2014-02-26 Thread Martin, JC
Hi,

  There was some discussion a while back around the VPC implementation in 
OpenStack. There is a proposal to implement the AWS VPC features in the Nova 
EC2 APIs, but this makes sense for the EC2-compatible API only and may not be 
appropriate for an OpenStack-specific one.

I would like to know the recommendation for the implementation of APIs that 
orchestrate across Keystone, Nova, Neutron, Designate, … Which project should 
host them, or should they be a separate project?

Thanks,

JC
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-novaclient 2.16.0 released

2014-02-26 Thread Russell Bryant
I just pushed a new novaclient release.  It contains several bug fixes
and new features made over the last 4-5 months.  I expect another
release closer to the release of Icehouse to capture any last fixes and
features that get merged for Icehouse.

Please report any issues to our bug tracker:

https://bugs.launchpad.net/python-novaclient

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-26 Thread Joe Gordon
On Wed, Feb 26, 2014 at 6:39 AM, Ziad Sawalha
ziad.sawa...@rackspace.com wrote:

 On Feb 25, 2014, at 5:18 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
 wrote:

 On Tue, 2014-02-25 at 00:56 +, Ziad Sawalha wrote:
 Seeking some clarification on the OpenStack hacking guidelines for
 multi-string docstrings.


 Q: In OpenStack projects, is a blank line before the triple closing
 quotes recommended (and therefore optional - this is what PEP-257
 seems to suggest), required, or explicitly rejected (which could be
 one way to interpret the hacking guidelines since they omit the blank
 line).


 I lobbied to relax that restriction, because I happen to use Emacs, and
 know that that limitation no longer exists with Emacs.

 Given the recommendation in PEP-257 is based solely on that, should
 we explicitly recommend not including that extra space?

 Should we add that rule to hacking?

If we do want to explicitly add that to hacking (I am not too keen on
doing that), it should come with a rule to enforce it.  But before
doing so, I would be interested to see how many docstrings would need
to be changed to enforce this in different projects.
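For reference, the two multi-line docstring styles under discussion look like this (function names are mine, for illustration):

```python
def with_blank_line():
    """Summary line.

    Longer description, followed by the PEP 257-optional blank line
    before the closing triple quotes.

    """

def without_blank_line():
    """Summary line.

    Longer description, with the closing triple quotes immediately
    after the last line of text.
    """
```

A hacking rule enforcing the second style would flag every docstring written in the first.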


  I submitted the
 change that eliminated that language from nova's HACKING at the time...

 Thanks, Kevin - based on that I'll resubmit my change without the blank line.

This is missing the point about manually enforcing style. If you pass
the 'pep8' job there is no need to change any style.



 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Websockify Release

2014-02-26 Thread Solly Ross
Greetings!

We (the websockify/noVNC team) have released a new version of websockify 
(0.6.0).  It contains several fixes and features relating to OpenStack (a 
couple of bugs were fixed, and native support for the `logging` module was 
added).  Unfortunately, to integrate it into OpenStack, a patch is needed to 
the websocketproxy code in Nova (https://gist.github.com/DirectXMan12/9233369) 
due to a refactoring of the websockify API.  My concern is that the various 
distros most likely have not had time to update the package in their package 
repositories.  What is the appropriate timescale for updating Nova to work with 
the new version?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Can somebody describe the all the rolls about networks' admin_state_up

2014-02-26 Thread Nir Yechiel


- Original Message -
 A thread [1] was also initiated on the ML by Syvlain but no answers/comment
 for the moment.
 
 [1] http://openstack.markmail.org/thread/qy6ikldtq2o4imzl
 
 Édouard.
 

IMHO admin_state_up = false should bring the network down and the traffic needs 
to be stopped. This is of course a risky operation, so there should be some 
clear warning describing the action and asking the user to confirm that.

With regards to the implementation of this, I think that there is a difference 
between the network admin_state and the individual ports admin_state; setting 
the network admin_state to false should not change the port's state to false. 
Instead, I am in favor of the second solution [1] described by Sylvain in the 
ML. I also included this in the bug.

/Nir

[1] do not change the admin_state_up value of ports; instead, introduce a new 
field in the get_device_details rpc call to indicate that the admin_state_up 
of the network is down, and then set the port as dead
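A rough sketch of what agent-side handling of such a field might look like (the field name and the dead-VLAN convention here are assumptions for illustration, not the actual Neutron implementation):

```python
DEAD_VLAN = 4095  # a conventional way for an agent to "kill" a port

def treat_port(details):
    """Pick the VLAN a port should get from a get_device_details() reply."""
    if not details.get("network_admin_state_up", True):
        return DEAD_VLAN          # network is administratively down
    return details["segmentation_id"]

print(treat_port({"segmentation_id": 101, "network_admin_state_up": False}))  # 4095
print(treat_port({"segmentation_id": 101, "network_admin_state_up": True}))   # 101
```

The point of this option is that port admin_state_up itself is never rewritten; the agent derives the dead state from the extra network-level field.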


 
 On Mon, Feb 24, 2014 at 9:35 AM, 黎林果 lilinguo8...@gmail.com wrote:
 
  Thanks you very much.
 
   IMHO when admin_state_up is false that entity should be down, meaning
   the network should be down.
   Otherwise, what is the use of admin_state_up? The same is true for port
   admin_state_up.
 
   Is it like a switch's power button?
 
  2014-02-24 16:03 GMT+08:00 Assaf Muller amul...@redhat.com:
  
  
   - Original Message -
   Hi,
  
   I want to know the admin_state_up attribute about networks but I
   have not found any describes.
  
   Can you help me to understand it? Thank you very much.
  
  
   There's a discussion about this in this bug [1].
   From what I gather, nobody knows what admin_state_up is actually supposed
   to do with respect to networks.
  
   [1] https://bugs.launchpad.net/neutron/+bug/1237807
  
  
   Regard,
  
   Lee Li
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-26 Thread David Ripton

On 02/26/2014 11:40 AM, Joe Gordon wrote:


This is missing the point about manually enforcing style. If you pass
the 'pep8' job there is no need to change any style.


In a perfect world, yes.

In the real world, there are several things in PEP8 or our project 
guidelines that the tools don't enforce perfectly.  I think it's fine 
for human reviewers to point such things out.  (And then submit a patch 
to hacking to avoid the need to do so in the future.)


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Samuel Bercovici
Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip-pool relationship also needs to 
become any-to-any.
Eugene has rightfully pointed out that the current state management will not 
handle such a relationship well.
To me this means that the state management is broken, not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of 
implementation details.  One of the concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-26 Thread Dean Troyer
On Tue, Feb 25, 2014 at 12:01 PM, Ben Nemec openst...@nemebean.com wrote:

 So is there some git magic that also keeps the repos in sync, or do you
 have to commit/pull/restart service every time you make changes?  I ask
 because experience tells me I would inevitably forget one of those steps at
 some point and be stymied by old code still running in my devstack.  Heck,
 I occasionally forget just the restart service step. ;-)


This is why I set RECLONE=True and set the *_REPO and *_BRANCH as required
to point to my working repo so re-running stack.sh just does the Right
Thing.  And it still allows short dev cycles in /opt/stack with manual
restarts of the service(s) involved in screen.  But the real work happens
elsewhere, in my case on a different machine.

Remember, DevStack's opinion of your work area is that it is a disposable
VM.  Doing long-term work in a disposable VM is just asking for trouble in
a number of ways.  Sean's local mounts via VBox and my remote git repos are
only two ways to operate in that environment, I'm sure there are many
others.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Joe Gordon
On Wed, Feb 26, 2014 at 6:38 AM, Dan Smith d...@danplanet.com wrote:
 We may need to differentiate between breaking the API and breaking
 corner-case behavior.

 Totally agreed.

 In one case you force everyone in the ecosystem to
 adapt (the libraries, the end user code). In the other you only
 (potentially) affect those that were not following the API correctly.

 The problem is that the API spec is too loose right now, which is
 definitely a problem. However, I think I'd much rather tighten things
 down and deal with the potential fallout of someone's client breaking
 and saying oh, I thought 'red' was a valid uuid than whole rewrites.

Quietly changing things sounds like a recipe for upset users.  There
are two contracts that we produce every time we release an API:

* The specs we document
* The actual implementation and all of its bugs and quirks.

Users actually care about the latter. If the API accepts 'red' as a
valid UUID then that is part of the implicit contract.
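To illustrate the kind of tightening under discussion, a minimal sketch of strict input validation using only the standard library, where "red" stops being accepted where a UUID is expected:

```python
import uuid

def is_valid_uuid(value):
    """Return True only for strings the uuid module can parse."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False

# A lax API might pass "red" straight through to the database;
# strict validation rejects it at the API boundary instead.
print(is_valid_uuid("red"))                                   # False
print(is_valid_uuid("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))  # True
```

Any client that was relying on the lax behavior would start seeing 400s, which is exactly the implicit-contract breakage being debated.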

If we do go down this route of changing the v2 API like this, it would
be nice to bump the minor version as well, so when a user has to
distinguish between the two versions they don't have to say Havana V2
vs Icehouse V2, they can just say v2.1.

So either way we should rev some version number when making these
changes, so we don't surprise the end users.


 We could make a V3 that doesn't break the API, only breaks behavior in
 error cases due to its stronger input validation. A V3 that shouldn't
 break code that was following the API, nor require heavy library
 changes. It's still a major API bump because behavior may change and
 some end users will be screwed in the process, but damage is more
 limited, so V2 could go away after a shorter deprecation period.

 What's the difference between saying /v2 will return a 404 after K and
 saying If your client doesn't declare support for revision 2 of these
 calls we'll return a 405, 406, 410, etc? Actually, 412 seems to be
 exactly this case.

 --Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-*client] moving to WebOb exceptions

2014-02-26 Thread Andrey Kurilin
Hi,

While working on unification of clients code we try to unify various
exceptions in different client projects.
We have the module apiclient.exceptions in oslo-incubator[1]. Since our
apiclient is an oslo-incubator library and not a standalone lib, this
doesn't help in case we need to process exceptions from several clients.

Please, look at horizon module exceptions:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/exceptions.py
From the interpreter's point of view, apiclient exceptions will be different
classes, since they are copy-pasted between projects.

The solution would be to use exceptions from an external library - module
WebOb.exc[2], for example (since WebOb is already used in other OpenStack
projects). These exceptions cover all our custom HTTP exceptions.

We propose to move to webob.exc in three stages (I already have patches for
this in oslo-incubator and I've added links here as examples):
1) In clients: create aliases in module `exceptions` for all http
exceptions which are duplicated with webob.exc. This will help us safely
move to webob.exc without breaking tempest, horizon and other projects.
Usage of such exceptions will not cause significant changes. -
https://review.openstack.org/#/c/71916/
2) In all projects: importing exceptions and use them directly from
webob.exc - https://review.openstack.org/#/c/76198/
3) In clients: remove aliases for webob.exc. (at the end of backwards
compatibility period)
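A minimal sketch of the stage-1 aliasing idea (with a stand-in class in case WebOb is not installed, so this is illustrative rather than the actual patch):

```python
# Alias a client's exception to the WebOb one so that
# `except NotFound` and `except webob.exc.HTTPNotFound`
# catch the same class during the transition.
try:
    from webob import exc as webob_exc
    HTTPNotFound = webob_exc.HTTPNotFound
except ImportError:
    class HTTPNotFound(Exception):  # stand-in for environments without WebOb
        code = 404

# Old name kept for backwards compatibility until stage 3 removes it:
NotFound = HTTPNotFound

print(NotFound is HTTPNotFound)  # True
```

Because the alias is the same class object, code catching the old name keeps working while callers migrate to webob.exc directly.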

Please share your thoughts about this topic.

[1] -
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/apiclient/exceptions.py
[2] -
http://turbogears.org/2.0/docs/modules/thirdparty/webob.html#module-webob.exc

-- 

Looking forward for your reply,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Dan Smith
 Users actually care about the latter. If the API accepts 'red' as a
 valid UUID then that is part of the implicit contract.

Yeah, but right now, many of those things would fail on postgres
and succeed on mysql (not uuids necessarily, but others).  Since we
can't honor them in all cases, I don't see the changes (versioned, for
warning) as quite so bad as the alternative.  My point was that unless we
decide to stop working on the core, we're going to have minor changes
like that regardless.  If we decide to freeze v2 and move to v3, then we
have to basically emulate those bugs _and_ maintain v3.  If we version v2
and move forward, we're in better shape, IMHO.

 If we do go down this route of changing the v2 API like this, it would
 be nice to bump the minor version as well, so when a user has to
 distinguish between the two versions they don't have to say Havana V2
 vs Icehouse V2, they can just say v2.1.

Yes, I'm totally all for introducing versioning into v2. Definitely.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate release impending

2014-02-26 Thread Sean Dague
On 02/26/2014 11:24 AM, David Ripton wrote:
 I'd like to release a new version of sqlalchemy-migrate in the next
 couple of days.  The only major new feature is DB2 support.  If anyone
 thinks this is a bad time, please let me know.
 

So it would be nice if someone could actually work through the 0.9 sqla
support, because I think it's basically just a change in quoting
behavior that's left (mostly where quoting gets called) -
https://review.openstack.org/#/c/66156/

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Varadhan, Sowmini
On 2/26/14 5:47 AM, Mathieu Rohon wrote:
 Hi,

 FYI setting the vxlan UDP doesn't work properly for the moment :
 https://bugs.launchpad.net/neutron/+bug/1241561

So I checked this again by going back to 13.10, still no luck.


 May be your kernel has the vxlan module already loaded, which bind the
 udp port 8472. that a reason why the vxlan port can't be created by
 ovs. Check your ovs-vswitchd.log

Yes, vxlan is loaded (as indicated by lsmod), but I didn't
see any messages around 8472 in the ovs-vswitchd.log, so it must
be something else in my config.  To double check, I even tried some
other port (8474) for vxlan_udp_port; still no luck.

So is there a template stack.sh around for this?  That would
help me eliminate the obvious config errors I may have made.

--Sowmini




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-26 Thread Joe Gordon
On Wed, Feb 26, 2014 at 9:05 AM, David Ripton drip...@redhat.com wrote:
 On 02/26/2014 11:40 AM, Joe Gordon wrote:

 This is missing the point about manually enforcing style. If you pass
 the 'pep8' job there is no need to change any style.


 In a perfect world, yes.

While there are exceptions to this, this just sounds like being extra
nit-picky.  The important aspect here is the mindset: if we don't gate
on style rule x, then we shouldn't waste valuable human review time
and patch revisions on trying to manually enforce it (and yes, there
are exceptions to this).


 In the real world, there are several things in PEP8 or our project
 guidelines that the tools don't enforce perfectly.  I think it's fine for
 human reviewers to point such things out.  (And then submit a patch to
 hacking to avoid the need to do so in the future.)

To clarify: we shouldn't rely on long-term human enforcement for something
that a computer can do.


 --
 David Ripton   Red Hat   drip...@redhat.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-26 Thread Tim Hinrichs
Hi Jay and Sylvain,

The solver-scheduler sounds like a good fit to me as well.  It clearly 
provisions resources in accordance with policy.  Does it monitor those 
resources and adjust them if the system falls out of compliance with the policy?

I mentioned Congress for two reasons. (i) It does monitoring.  (ii) There was 
mention of compute, networking, and storage, and I couldn't tell if the idea 
was for policy that spans OS components or not.  Congress was designed for 
policies spanning OS components.

Tim

- Original Message -
| From: Jay Lau jay.lau@gmail.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Tuesday, February 25, 2014 10:13:14 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage
| compute/storage resource
| 
| 
| 
| 
| 
| Thanks Sylvain and Tim for the great sharing.
| 
| @Tim, I also went through Congress and have the same feeling as
| Sylvain: it is likely that Congress is doing something similar to
| Gantt, providing a holistic way of deploying. What I want to do is
| to provide some functions very similar to VMware DRS that
| can do some adaptive scheduling automatically.
| 
| @Sylvain, can you please explain in more detail what the Pets vs. Cattle
| analogy means?
| 
| 
| 
| 
| 2014-02-26 9:11 GMT+08:00 Sylvain Bauza  sylvain.ba...@gmail.com  :
| 
| 
| 
| Hi Tim,
| 
| 
| As I read your design document, it sounds more likely
| related to something the Solver Scheduler subteam is trying to
| focus on, i.e. intelligent, agnostic resource placement in a
| holistic way [1].
| IIRC, Jay is more likely talking about adaptive scheduling decisions
| based on feedback with potential counter-measures that can be done
| for decreasing load and preserving QoS of nodes.
| 
| 
| That said, maybe I'm wrong ?
| 
| 
| [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
| 
| 
| 
| 2014-02-26 1:09 GMT+01:00 Tim Hinrichs  thinri...@vmware.com  :
| 
| 
| 
| 
| Hi Jay,
| 
| The Congress project aims to handle something similar to your use
| cases. I just sent a note to the ML with a Congress status update
| with the tag [Congress]. It includes links to our design docs. Let
| me know if you have trouble finding it or want to follow up.
| 
| Tim
| 
| 
| 
| - Original Message -
| | From: Sylvain Bauza  sylvain.ba...@gmail.com 
| | To: OpenStack Development Mailing List (not for usage questions)
| |  openstack-dev@lists.openstack.org 
| | Sent: Tuesday, February 25, 2014 3:58:07 PM
| | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
| | for OpenStack run time policy to manage
| | compute/storage resource
| | 
| | 
| | 
| | Hi Jay,
| | 
| | 
| | Currently, the Nova scheduler only acts upon user request (either
| | live migration or booting an instance). IMHO, that's something Gantt
| | should scope later on (or at least there could be some space within
| | the Scheduler) so that the Scheduler would be responsible for
| | managing resources in a dynamic way.
| | 
| | 
| | I'm thinking of the Pets vs. Cattle analogy, and I definitely
| | think
| | that Compute resources could be treated like Pets, provided the
| | Scheduler does a move.
| | 
| | 
| | -Sylvain
| | 
| | 
| | 
| | 2014-02-26 0:40 GMT+01:00 Jay Lau  jay.lau@gmail.com  :
| | 
| | 
| | 
| | 
| | Greetings,
| | 
| | 
| | Here I want to bring up an old topic here and want to get some
| | input
| | from you experts.
| | 
| | 
| | Currently in nova and cinder, we only have some initial placement
| | policies to help the customer deploy a VM instance or create volume
| | storage on a specified host, but after the VM or the volume is
| | created, there is no policy to monitor the hypervisors or the
| | storage servers and take some actions in the following cases:
| | 
| | 
| | 1) Load Balance Policy: If the load of one server is too heavy,
| | then
| | probably we need to migrate some VMs from high load servers to some
| | idle servers automatically to make sure the system resource usage
| | can be balanced.
| | 
| | 2) HA Policy: If one server goes down due to hardware failure or
| | whatever reason, there is no policy to make sure the VMs can be
| | evacuated or live migrated (make sure to migrate the VM before the
| | server goes down) to other available servers, so that customer
| | applications will not be affected too much.
| | 
| | 3) Energy Saving Policy: If a single host's load is lower than a
| | configured threshold, then lower the CPU frequency to save
| | energy; otherwise, increase the CPU frequency. If the average load
| | is lower than a configured threshold, then shut down some hypervisors
| | to save energy; otherwise, power on some hypervisors to balance the
| | load. Before powering off a hypervisor host, the energy policy needs
| | to live migrate all VMs on the hypervisor to other available
| | hypervisors; after powering on a hypervisor host, the 

[openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-26 Thread Brian Haley
While trying to track down why Jenkins was handing out -1's in a Neutron patch,
I was seeing errors in the devstack tests it runs.  When I dug deeper it looked
like it wasn't properly determining that Neutron was enabled - ENABLED_SERVICES
had multiple q-* entries, but 'is_service_enabled neutron' was returning 0.

I boiled it down to a simple reproducer based on the many is_*_enabled() 
functions:

#!/usr/bin/env bash
set -x

function is_foo_enabled {
[[ ,${ENABLED_SERVICES} =~ ,f- ]] && return 0
return 1
}

ENABLED_SERVICES=f-svc

is_foo_enabled

$ ./is_foo_enabled.sh
+ ENABLED_SERVICES=f-svc
+ is_foo_enabled
+ [[ ,f-svc =~ ,f- ]]
+ return 0

So either the return values need to be swapped, or && changed to ||.  I haven't
tested is_service_enabled() but at least all the is_*_enabled() functions look
wrong.

Is anyone else seeing this besides me?  And/or is someone already working on
fixing it?  Couldn't find a bug for it.

Thanks,

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [heat] bug 1203680 - fix requires doc

2014-02-26 Thread Dean Troyer
On Wed, Feb 26, 2014 at 12:53 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 (I added some tags in the subject line, probably should have been there
 from the start.)

 Thanks guys, for an informative discussion.  I have updated
 https://wiki.openstack.org/wiki/Gerrit_Workflow and
 https://wiki.openstack.org/wiki/Testing to incorporate what I have
 learned.


Thanks for the updates, but I've massaged the project bits and
restored/expanded the reasons to consider one or the other option.


 But I still do not detect consensus on what to do about bug 1203680.  It
 looks like Sean thinks the fix is just wrong, but I'm not hearing much
 about what a good fix would look like.  As best I can tell, it would
 involve wrapping tox in a script that recapitulates the system package
 install logic from DevStack (I know of no other scripting for installing
 needed system packages).


That bug is closed and against Grizzly, is that the one you meant to
reference?  I added a note about INSTALL_TESTONLY_PACKAGES to the wiki page
above and it will be in the next rev of devstack.org.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [python-*client] moving to WebOb exceptions

2014-02-26 Thread Dean Troyer
On Wed, Feb 26, 2014 at 11:20 AM, Andrey Kurilin akuri...@mirantis.com wrote:

 While working on unification of clients code we try to unify various
 exceptions in different client projects.
 We have module apiclient.exceptions in oslo-incubator[1]. Since our
 apiclient is an oslo-incubator library and not a standalone lib this
 doesn't help in case we need to process exceptions from several clients.

[...]

 The solution would be to use exceptions from an external library - module
 WebOb.exc[2] for example (since WebOb is already used in other OpenStack
 projects). These exceptions cover all our custom HTTP exceptions.


I would oppose adding WebOb as a requirement for the client libraries.  I
see keystoneclient has it today but that is only because the middleware is
still in that repo (moving those is another topic).

The pain of installing the existing client libs and their prereqs is bad
enough, adding to it is not tenable and is part of what is motivating the
SDK efforts.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-26 Thread Dean Troyer
On Wed, Feb 26, 2014 at 11:51 AM, Brian Haley brian.ha...@hp.com wrote:

 While trying to track down why Jenkins was handing out -1's in a Neutron
 patch,
 I was seeing errors in the devstack tests it runs.  When I dug deeper it
 looked
 like it wasn't properly determining that Neutron was enabled -
 ENABLED_SERVICES
 had multiple q-* entries, but 'is_service_enabled neutron' was returning
 0.


This is the correct return, 0 == success.
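Dean's point can be sketched in a few lines (reusing Brian's hypothetical is_foo_enabled reproducer):

```shell
# In shell, a return/exit status of 0 means success ("true"), so
# returning 0 here signals that the service IS enabled.
function is_foo_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,f- ]] && return 0
    return 1
}

ENABLED_SERVICES=f-svc

if is_foo_enabled; then
    echo "foo is enabled"
else
    echo "foo is disabled"
fi
# prints: foo is enabled
```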

Can you point to a specific log example?

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-26 Thread Petr Blaho
Hi,

I am wondering what is the OpenStack way of returning json from
apiclient.

I have got 2 different JSON response examples from http://api.openstack.org/:

json output with namespace:
{
  "volume":
  {
    "status": "available",
    "availability_zone": "nova",
    "id": "5aa119a8-d25b-45a7-8d1b-88e127885635",
    "name": "vol-002",
    "volume_type": "None",
    "metadata": {
      "contents": "not junk"
    }
  }
}
(example for GET 'v2/{tenant_id}/volumes/{volume_id}' of Block Storage API v2.0 
taken from
http://api.openstack.org/api-ref-blockstorage.html [most values omitted])

json output without namespace:
{
  "alarm_actions": [
    "http://site:8000/alarm"
  ],
  "alarm_id": null,
  "combination_rule": null,
  "description": "An alarm",
  "enabled": true,
  "type": "threshold",
  "user_id": "c96c887c216949acbdfbd8b494863567"
}
(example for GET 'v2/alarms/{alarm_id}' of Telemetry API v2.0 taken from
http://api.openstack.org/api-ref-telemetry.html [most values omitted])

Tuskar API now uses without namespace variant.

By looking at the API docs at http://api.openstack.org/ I can say that
projects use both ways, although what I would describe as the nicer APIs
use the namespaced variant.

So, returning to my question, does OpenStack have some rules what
format of JSON (namespaced or not) should APIs return?
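For what it's worth, the practical difference for a consumer is just one level of unwrapping; a small illustration (field values trimmed, names taken from the examples above):

```python
import json

# The same logical resource in the two styles quoted above.
namespaced = json.loads('{"volume": {"name": "vol-002", "status": "available"}}')
flat = json.loads('{"name": "vol-002", "status": "available"}')

# With a namespace, the caller unwraps one extra level before reaching data:
print(namespaced["volume"]["name"])   # prints: vol-002
print(flat["name"])                   # prints: vol-002
```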

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-26 Thread W Chan
Thanks.  I'll start making the changes.  The other transports currently
implemented in oslo.messaging are located at
https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/_drivers,
prefixed with impl.  There are qpid and zmq.


On Wed, Feb 26, 2014 at 12:03 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Winson, nice job!

 Now it totally makes sense to me. You're good to go with this unless
 others have objections.

 Just one technical dummy question (sorry, I'm not yet familiar with
 oslo.messaging): in your picture you have Transport, so what
 specifically can it be except RabbitMQ?

 Renat Akhmerov
 @ Mirantis Inc.



 On 26 Feb 2014, at 14:30, Nikolay Makhotkin nmakhot...@mirantis.com
 wrote:

 Looks good. Thanks, Winson!

 Renat, What do you think?


 On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:

 The following link is the google doc of the proposed engine/executor
 message flow architecture.
 https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

 The diagram on the right is the scalable engine where one or more engine
 sends requests over a transport to one or more executors.  The executor
 client, transport, and executor server follow the RPC client/server
 design pattern
 (https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc)
 in oslo.messaging.

 The diagram represents the local engine.  In reality, it's following the
 same RPC client/server design pattern.  The only difference is that it'll
 be configured to use a fake RPC backend driver
 (https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py).
 The fake driver uses in-process queues
 (http://docs.python.org/2/library/queue.html#module-Queue) shared
 between a pair of engine and executor.
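As a rough, toy illustration of that client/transport/server split (an in-process analogue of the fake-driver setup; this is not the actual oslo.messaging API, and every class name below is made up):

```python
import queue
import threading

class FakeTransport:
    """In-process stand-in for a messaging transport (toy, not oslo.messaging)."""
    def __init__(self):
        self.requests = queue.Queue()

class ExecutorClient:
    """Engine-side proxy: serializes a task request onto the transport."""
    def __init__(self, transport):
        self.transport = transport

    def handle_task(self, task):
        reply_q = queue.Queue()               # per-call reply queue, as in an RPC "call"
        self.transport.requests.put((task, reply_q))
        return reply_q.get(timeout=5)         # sync call: block until the server replies

class ExecutorServer(threading.Thread):
    """Executor-side worker: picks requests off the transport and replies."""
    def __init__(self, transport):
        super().__init__(daemon=True)
        self.transport = transport

    def run(self):
        while True:
            task, reply_q = self.transport.requests.get()
            reply_q.put("done: %s" % task)    # "process" the task, send the reply

transport = FakeTransport()
ExecutorServer(transport).start()
client = ExecutorClient(transport)
print(client.handle_task("create_vm"))        # prints: done: create_vm
```

A "cast" (async) would simply skip the reply queue and return immediately after enqueueing.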

 The following are the stepwise changes I will make.
 1) Keep the local and scalable engine structure intact.  Create the
 Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
 the Executor Server at ./mistral/engine/scalable/executor/service.py and
 implement the task operations under
 ./mistral/engine/scalable/executor/executor.py.  Delete
 ./mistral/engine/scalable/executor/executor.py.  Modify the launcher
 ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
 to use the Executor Client instead of sending the message directly to
 rabbit via pika.  The sum of this is the atomic change that keeps existing
 structure and without breaking the code.
 2) Remove the local engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
 3) Implement versioning for the engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
 4) Port abstract engine to use oslo.messaging and implement the engine
 client, engine server, and modify the API layer to consume the engine
 client.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 .

 Winson


 On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding your comments on change
 https://review.openstack.org/#/c/75609/, I don't think the port to
 oslo.messaging is just a swap from pika to oslo.messaging.  OpenStack
 services, as I understand, are usually implemented as an RPC client/server
 over a messaging transport.  Sync vs async calls are done via the RPC
 client call and cast respectively.  The messaging transport is abstracted
 and concrete implementation is done via drivers/plugins.  So the
 architecture of the executor if ported to oslo.messaging needs to include a
 client, a server, and a transport.  The consumer (in this case the mistral
 engine) instantiates an instance of the client for the executor, makes the
 method call to handle task, the client then sends the request over the
 transport to the server.  The server picks up the request from the exchange
 and processes the request.  If cast (async), the client side returns
 immediately.  If call (sync), the client side waits for a response from the
 server over a reply_q (a unique queue for the session in the transport).
  Also, oslo.messaging allows versioning in the message. Major version
 change indicates API contract changes.  Minor version indicates backend
 changes but with API compatibility.


 My main concern about this patch is not related to the messaging
 infrastructure. I believe you know better than me how it should look.
 I'm mostly concerned with the way of making changes you chose. From my
 perspective, it's much better to make atomic changes where every change
 doesn't affect too much of the existing architecture. So the first step could
 be to change pika to oslo.messaging with minimal structural changes, without
 introducing versioning (could be just a TODO comment saying that the
 framework allows it and we may want to use it in the future, to be decided),
 without getting rid of the current engine structure 

Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-26 Thread Doug Hellmann
On Tue, Feb 25, 2014 at 5:44 PM, Sylvain Bauza sylvain.ba...@gmail.com wrote:

 Thanks Doug for replying,



 2014-02-25 23:10 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:




  Do you have any idea how I could achieve my goal, i.e. having static
  inputs plus some extra variable inputs? I was also thinking about playing
  with __getattr__ and __setattr__ but I'm not sure the Registry could handle
  that.


 Why don't you know what the data is going to look like before you receive
 it?

 One last important point, this API endpoint (Host) is admin-only in case
 of you mention the potential security issues it could lead.


 My issue with this sort of API isn't security, it's that describing how
 to use it for an end user is more difficult than having a clearly defined
 static set of inputs and outputs.



  tl;dr: Admin can provide extra key/value pairs when defining a single Host
  through the API, so we should support dynamic key/value pairs for Host.

 Ok, sounds like I have to explain the use-case there. Basically, Climate
 provides an API where admin has to enroll hosts for provisioning purposes.
 The thing is, we only need to get the hostname because we place a call to
 Nova for getting the metrics.
 Based on these metrics, we do allow users to put requests for leases based
 on given metrics (like VCPUs or memory limits) and we elect some hosts.

 As the Nova scheduler is not yet available as a service, we do need to
 implement our own possibilities for adding metrics that are not provided by
 Nova, and thus we allow the possibility to add extra key/value pairs within
 the API call for adding a Host.

  With API v1 (Flask with no input validation), this was quite easy,
  as we were getting the dict and directly passing it to the Manager.
 Now, I have to find some way to still leave the possibility to add extra
 metrics.

  Example of a Host request body is:
  { 'name': 'foo',
    'fruits': 'bananas',
    'vgpus': 2 }

 As 'fruits' and 'vgpus' are dynamic keys, I should be able to accept them
 anyway using WSME.

 Hope it's clearer now, because at the moment I'm thinking of bypassing
 WSME for handling the POST/PUT requests...
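One possible workaround, sketched outside WSME entirely (a hypothetical helper, just to show the static/dynamic split, not an existing Climate function):

```python
# Hypothetical helper (not WSME): split a Host payload into the statically
# validated part and the free-form extra metrics.
STATIC_KEYS = {'name'}

def parse_host(payload):
    static = {k: v for k, v in payload.items() if k in STATIC_KEYS}
    extras = {k: v for k, v in payload.items() if k not in STATIC_KEYS}
    if 'name' not in static:
        raise ValueError("'name' is required")   # validate the static part only
    return static, extras

static, extras = parse_host({'name': 'foo', 'fruits': 'bananas', 'vgpus': 2})
print(static)   # {'name': 'foo'}
print(extras)   # {'fruits': 'bananas', 'vgpus': 2}
```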


So you're not segregating the dynamic part of the API at all from the
static part?

Doug




 -Sylvain



[openstack-dev] [Neutron][LBaaS] Enterprise Ready Features

2014-02-26 Thread Brandon Logan
TL;DR: Are enterprise-grade features (HA, scalability, resource management, 
etc.) on this project's roadmap?  If so, how much of a priority are they?

I've been doing some research on Neutron LBaaS to determine the viability and 
what needs to be done to allow for it to become an enterprise ready solution.  
Since I am fairly new to this project please forgive me, and also correct me, 
if my understanding of some of these things is false.  I've already spoken to 
Eugene about some of this, but I think it would be nice to get everyone's 
opinion.  And since the object model discussions are going on right now I 
believe this to be a good time to bring it up.

As of its current incarnation Neutron LBaaS does not seem to be HA, scalable, 
and doesn't isolate resources for each load balancer.  I know there is a 
blueprint for HA for the agent 
(https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-agent) and HA for 
HaProxy (https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy).  
That is only for HaProxy, though, and sounds like it has to be implemented at 
the driver level.  Is that the intended direction for implementing these goals, 
to implement them at the driver level?  I can definitely see why that is the 
way to do it because some drivers may already implement these features, while 
others don't.  It would be nice if there was a way to give those features to 
drivers that do not have it out of the box.

Basically, I'd like this project to have these enterprise-level features so 
that it can be adopted in an enterprise cloud.  It will require a lot of work 
to achieve these goals, and I believe it should be a stated goal.


[openstack-dev] Savanna graduation status

2014-02-26 Thread Sergey Lukjanov
Hi folks,

I'd like to follow up on Savanna's graduation status. All requirements
listed in [0] have been addressed. You can find details in [1].

So, I'd like to ask for Savanna graduation review topic addition to
the TC meetings backlog. :)

All questions raised in the incubation review were covered in the mid
graduation review. There were no new questions/concerns raised in the
mid graduation review except diversity, which is currently
significantly better than 2 months ago and especially better than at
the time of the incubation review. You can find more details in the
etherpad.

Thanks.

[0] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements#n58
[1] https://etherpad.openstack.org/p/savanna-graduation-status

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [nova] [glance] [heat] bug 1203680 - fix requires doc

2014-02-26 Thread Mike Spreitzer
Dean Troyer dtro...@gmail.com wrote on 02/26/2014 12:53:31 PM:

 Thanks for the updates, but I've massaged the project bits and 
 restored/expanded the reasons to consider one or the other option.

Thanks for the further updates.  I have just one question about those. One 
way to do both unit testing and system (integration) testing is to: git 
clone your favorite project to make a working local repo somewhere on your 
testing machine, edit and commit there, then use DevStack to create a 
running system in /opt/stack using your modified project in place of the 
copy at git.openstack.org.  I think this is what Sean Dague advocates 
(based on his remarks earlier in this thread), have tried it myself, and 
have documented it at 
https://wiki.openstack.org/wiki/Testing#Indirect_Approach .  I think 
https://wiki.openstack.org/wiki/Gerrit_Workflow#Project_Setup should 
reference that somehow; I think the most direct approach would be to 
generalize the title of 
https://wiki.openstack.org/wiki/Gerrit_Workflow#Unit_Tests_Only and 
generalize the introductory remark to include a reference to 
https://wiki.openstack.org/wiki/Testing#Indirect_Approach .  Does this 
make sense to you?

 That bug is closed and against Grizzly, is that the one you meant to
 reference?  I added a note about INSTALL_TESTONLY_PACKAGES to the 
 wiki page above and it will be in the next rev of devstack.org.

Yes, I am referring to https://bugs.launchpad.net/devstack/+bug/1203680 
--- and also to https://bugs.launchpad.net/devstack/+bug/1203723

As far as my experience goes, the fix to 1203680 would also cover 1203723: 
the only thing I find lacking for nova unit testing in Ubuntu is 
libmysqlclient-dev, which is among the testing requirements of glance (see 
DevStack's files/apts/glance).

As far as I can tell, Sean Dague is saying the fix to 1203680 is wrong 
because it binds unit testing to DevStack and he thinks unit testing 
should be independent of DevStack.

Interestingly, installing DevStack with INSTALL_TESTONLY_PACKAGES set to 
True will have a global side-effect and so will provide the unit testing 
requirements even if the indirect procedure (
https://wiki.openstack.org/wiki/Testing#Indirect_Approach) that Sean 
advocates is used.  In this way the only tie between unit testing and 
DevStack is that they be done on the same machine.  Maybe this is the way 
to go?
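If so, the opt-in would presumably be a single localrc setting (a sketch; check the DevStack docs for the authoritative spelling):

```shell
# Hypothetical DevStack localrc fragment: also install the distro
# packages needed only for testing (e.g. libmysqlclient-dev) as a
# global side-effect of stack.sh.
INSTALL_TESTONLY_PACKAGES=True

# Unit tests could then be run from the DevStack-cloned tree, e.g.:
#   cd /opt/stack/nova && tox -e py27
```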

Thanks,
Mike


Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-26 Thread Jay Dobies
This is a new concept to me in JSON; I've never heard of a wrapper 
element like that being called a namespace.


My first impression is that is looks like cruft. If there's nothing else 
at the root of the JSON document besides the namespace, all it means is 
that every time I go to access relevant data I have an extra layer of 
indirection. Something like:


volume_wrapper = get_volume(url)
volume = volume_wrapper['volume']

or

volume = get_volume(url)
name = volume['volume']['name']

If we ever foresee an aggregate API, I can see some value in it. For 
instance, a single call that aggregates a volume with some relevant 
metrics from ceilometer. In that case, I could see leaving both distinct 
data sets separate at the root with some form of namespace rather than 
attempting to merge the data.


Even in that case, I think it'd be up to the aggregate API to introduce 
that.


Looking at api.openstack.org, there doesn't appear to be any high level 
resource get that would aggregate the different subcollections.


For instance, {tenant_id}/volumes stuffs everything inside of an element 
called volumes. {tenant_id}/types stuffs everything inside of an 
element called volume_types. If a call to {tenant_id} aggregated both of 
those, then I can see leaving the namespace in on the single ID look ups 
for consistency (even if it's redundant). However, the API doesn't 
appear to support that, so just looking at the examples given it looks 
like an added layer of depth that carries no extra information and makes 
using the returned result a bit awkward IMO.



On 02/26/2014 01:38 PM, Petr Blaho wrote:

Hi,

I am wondering what is the OpenStack way of returning json from
apiclient.

I have got 2 different JSON response examples from http://api.openstack.org/:

json output with namespace:
{
  "volume":
  {
    "status": "available",
    "availability_zone": "nova",
    "id": "5aa119a8-d25b-45a7-8d1b-88e127885635",
    "name": "vol-002",
    "volume_type": "None",
    "metadata": {
      "contents": "not junk"
    }
  }
}
(example for GET 'v2/{tenant_id}/volumes/{volume_id}' of Block Storage API v2.0 
taken from
http://api.openstack.org/api-ref-blockstorage.html [most values omitted])

json output without namespace:
{
  "alarm_actions": [
    "http://site:8000/alarm"
  ],
  "alarm_id": null,
  "combination_rule": null,
  "description": "An alarm",
  "enabled": true,
  "type": "threshold",
  "user_id": "c96c887c216949acbdfbd8b494863567"
}
(example for GET 'v2/alarms/{alarm_id}' of Telemetry API v2.0 taken from
http://api.openstack.org/api-ref-telemetry.html [most values omitted])

Tuskar API now uses without namespace variant.

By looking at the API docs at http://api.openstack.org/ I can say that
projects use both ways, although what I would describe as the nicer APIs
use the namespaced variant.

So, returning to my question, does OpenStack have some rules what
format of JSON (namespaced or not) should APIs return?





[openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-02-26 Thread Nachi Ueno
Hi folks

I wrote a bookmarklet for neutron gerrit review.
This bookmarklet makes the comment title for 3rd-party CI gray.

javascript:(function(){list =
document.querySelectorAll('td.GJEA35ODGC'); for(i in
list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
0){list[i].style.color='#66'}else{list[i].style.color='red'}};})()

enjoy :)
Nachi



Re: [openstack-dev] supported dependency versioning and testing

2014-02-26 Thread Joe Gordon
On Fri, Feb 21, 2014 at 10:17 AM, Sean Dague s...@dague.net wrote:
 On 02/21/2014 11:02 AM, Daniel P. Berrange wrote:
 On Fri, Feb 21, 2014 at 10:46:22AM -0500, Sean Dague wrote:
 On 02/21/2014 09:45 AM, Daniel P. Berrange wrote:
 On Thu, Feb 20, 2014 at 02:45:03PM -0500, Sean Dague wrote:

 So I'm one of the first people to utter if it isn't tested, it's
 probably broken, however I also think we need to be realistic about the
 fact that if you did out the permutations of dependencies and config
 options, we'd have as many test matrix scenarios as grains of sand on
 the planet.

 I do think in some ways this is unique to OpenStack, in that our
 automated testing is head and shoulders above any other Open Source
 project out there, and most proprietary software systems I've seen.

 So this is about being pragmatic. In our dependency testing we are
 actually testing with most recent versions of everything. So I would
 think that even with libvirt, we should err in that direction.

 I'm very much against that, because IME, time  time again across
 all open source projects I've worked on, people silently introduce
 use of features/apis that only exist in newer versions without anyone
 ever noticing until it is too late.

 That being said, we also need to be a little bit careful about taking
 such a hard line about supported vs. not based on only what's in the
 gate. Because if we did the following things would be listed as
 unsupported (in increasing level of ridiculousness):

  * Live migration
  * Using qpid or zmq
  * Running on anything other than Ubuntu 12.04
  * Running on multiple nodes

 Supported to me means we think it should work, and if it doesn't, it's a
 high priority bug that will get fixed quickly. Testing is our sanity
 check. But it can't be considered that it will catch everything, at
 least not before the heat death of the universe.

 I agree we should be pragmatic here to some extent. We shouldn't aim to
 test every single intermediate version, or every possible permutation of
 versions - just a representative sample. Testing both lowest and highest
 versions is a reasonable sample set IMHO.

 Testing lower bounds is interesting, because of the way pip works. That
 being said, if someone wants to take ownership of building that job to
 start as a periodic job, I'm happy to point in the right direction. Just
 right now, it's a lower priority item than things like Tempest self
 testing, Heat actually gating, Neutron running in parallel, Nova API
 coverage.

 If it would be hard work to do it for python modules, we can at least
 not remove the existing testing of an old libvirt version - simply add
 an additional test with newer libvirt.

 Simply adding a test with newer libvirt isn't all that simple at the end
 of the day, as it requires building a new nodepool image. Because
 getting new libvirt in the existing test environment means cloud
 archive, and cloud archive means a ton of other new code as well. Plus
 in Juno we're presumably going to jump to 14.04 as our test base, which
 is going to be its own big transition.

 So, I'm not opposed, but I also think bifurcating libvirt testing is a
 big enough change in the pipeline that it needs some pretty dedicated
 folks looking at it, and the implications there in. This isn't just a
 yaml change, set and forget it. And given where we are in the
 development cycle, I'm not sure trying to keep the gate stable with a
 new libvirt which we've known to be problematic, is the right time to do
 this.

 But, if someone is stepping up to work through it, can definitely mentor
 them on the right places to be poking.



So it sounds like the consensus here is:

* We should have a uniform policy (unless we take the platform vs app
distinction)
* Long term we want to have a lower bound gate job as well, but that
no one has stepped up to work on it yet
* Setting up libvirt min and libvirt max tests is non-trivial and
needs someone to work on it

So in the short term we shouldn't be forced to hold libvirt back to
the minimal supported version in the
gate (https://blueprints.launchpad.net/nova/+spec/support-libvirt-1x),
hopefully while someone steps up to get a minimal libvirt (and python
deps?) job into the gate.
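A lower-bound job for the Python deps could, very roughly, pin each ">=" floor to an exact version before installing; here is a sketch of the transformation only (the sed expressions and the sample requirements are illustrative, not an existing gate job):

```shell
# Hypothetical sketch: turn each ">=" lower bound into an exact pin,
# dropping any extra specifiers after a comma (e.g. "!=" exclusions),
# to produce input for a "minimum versions" test job.
printf 'pbr>=0.6\nsix>=1.5.2,!=1.6.0\n' |
    sed -e 's/>=/==/' -e 's/,.*$//'
# prints:
#   pbr==0.6
#   six==1.5.2
```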


 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net




[openstack-dev] [nova][ceph] Reviews needed for ceph rbd ephemeral bugs

2014-02-26 Thread Andrew Woodward
Hi, I'm writing to call for some core-reviewers to give any eye to
some rbd related reviews that are open.

https://review.openstack.org/#/c/59149/
^parent https://review.openstack.org/#/c/59148/
^^parent https://review.openstack.org/#/c/33409/

Thanks,

Andrew W.
Mirantis.



Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-26 Thread Jay Dobies

Hello,

i went through the CLI way of deploying overcloud, so if you're
interested what's the workflow, here it is:

https://gist.github.com/jistr/9228638


This is excellent to see it all laid out like this, thanks for writing 
it up.



I'd say it's still an open question whether we'll want to give better UX
than that ^^ and at what cost (this is very much tied to the benefits
and drawbacks of various solutions we discussed in December [1]). All in
all it's not as bad as i expected it to be back then [1]. The fact that
we keep Tuskar API as a layer in front of Heat means that CLI user
doesn't care about calling merge.py and creating Heat stack manually,
which is great.


I agree that it's great that Heat is abstracted away. I also agree that 
it's not as bad as I too expected it to be.


But generally speaking, I think it's not an ideal user experience. A few 
things jump out at me:


* We currently have glance, nova, and tuskar represented. We'll likely 
need something for ceilometer as well, for gathering metrics and 
configuring notifications (I assume the notifications will fall under 
that, but come with me on it).


That's a lot for an end user to comprehend and remember, which concerns 
me for both adoption and long term usage. Even in the interim when a 
user remembers nova is related to node stuff, doing a --help on nova is 
huge.


That's going to put a lot of stress on our ability to document our 
prescribed path. It will be tricky for us to keep track of the relevant 
commands and still point to the other project client documentation so as 
to not duplicate it all.


* Even at this level, it exposes the underlying guts. There are calls to 
nova baremetal listed in there, but eventually those will turn into 
ironic calls. It doesn't give us a ton of flexibility in terms of 
underlying technology if that knowledge bubbles up to the end user that way.


* This is a good view into what third-party integrators are going to 
face if they choose to skip our UIs and go directly to the REST APIs.



I like the notion of OpenStackClient. I'll talk ideals for a second. If 
we had a standard framework and each project provided a command 
abstraction that plugged in, we could pick and choose what we included 
under the Tuskar umbrella. Advanced users with particular needs could go 
directly to the project clients if needed.


I think this could go beyond usefulness for Tuskar as well. On a 
previous project, I wrote a pluggable client framework, allowing the end 
user to add their own commands that put a custom spin on what data was 
returned or how it was rendered. That's a level between being locked 
into what we decide the UX should be and having to go directly to the 
REST APIs themselves.


That said, I know that's a huge undertaking to get OpenStack in general 
to buy into. I'll leave it more that I think it is a lesser UX (not even 
saying bad, just not great) to have so much for the end user to digest 
to attempt to even play with it. I'm more of the mentality of a unified 
TripleO CLI that would be catered toward handling TripleO stuff. Short 
of OpenStackClient, I realize I'm not exactly in the majority here, but 
figured it didn't hurt to spell out my opinion  :)




In general the CLI workflow is on the same conceptual level as Tuskar
UI, so that's fine, we just need to use more commands than tuskar.

There's one naming mismatch though -- Tuskar UI doesn't use Horizon's
Flavor management, but implements its own and calls it Node Profiles.
I'm a bit hesitant to do the same thing on CLI -- the most obvious
option would be to make python-tuskarclient depend on python-novaclient
and use a renamed Flavor management CLI. But that's wrong and high cost
given that it's only about naming :)

The above issue is once again a manifestation of the fact that Tuskar
UI, despite its name, is not a UI for Tuskar alone; it is a UI for several
more services. If this becomes a greater problem, or if we want a top-notch
CLI experience despite reimplementing bits that can be already done
(just not in a super-friendly way), we could start thinking about
building something like OpenStackClient CLI [2], but directed
specifically at Undercloud/Tuskar needs and using undercloud naming.

Another option would be to get Tuskar UI a bit closer back to the fact
that Undercloud is OpenStack too, and keep the name Flavors instead of
changing it to Node Profiles. I wonder if that would be unwelcome to
the Tuskar UI UX, though.


Jirka


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient



Re: [openstack-dev] [Neutron] L3 HA VRRP concerns

2014-02-26 Thread Carl Baldwin
Assaf,

It would be helpful if these notes were on the reviews [1].  I think
there are concerns in this email that I have not noticed in the
review.  Maybe I missed them.

Carl

[1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability


On Mon, Feb 24, 2014 at 8:58 AM, Assaf Muller amul...@redhat.com wrote:
 Hi everyone,

 A few concerns have popped up recently about [1] which I'd like to share and 
 discuss,
 and would love to hear your thoughts Sylvain.

 1) Is there a way through the API to know, for a given router, what agent is 
 hosting
 the active instance? This might be very important for admins to know.

 2) The current approach is to create an administrative network and subnet for 
 VRRP traffic per router group /
 per router. Is this network counted in the quota for the tenant? (Clearly it 
 shouldn't). Same
 question for the HA ports created for each router instance.

 3) The administrative network is created per router and takes away from the 
 VLAN ranges if using
 VLAN tenant networks (For a tunneling based deployment this is a non-issue). 
 Maybe we could
 consider a change that creates an administrative network per tenant (Which 
 would then limit
 the solution to up to 255 routers because of VRRP's group limit), or an admin 
 network per 255
 routers?

 4) Maybe the VRRP hello and dead times should be configurable? I can see 
 admins that would love to
 up or down these numbers.

 5) The administrative / VRRP networks, subnets and ports that are created - 
 Will they be marked in any way
 as an 'internal' network or some equivalent tag? Otherwise they'd show up 
 when running neutron net-list,
 in the Horizon networks listing as well as the graphical topology drawing 
 (Which, personally, is what
 bothers me most about this). I'd love them tagged and hidden from the normal 
 net-list output,
 and something like a 'neutron net-list --all' introduced.

 6) The IP subnet chosen for VRRP traffic is specified in neutron.conf. If a 
 tenant creates a subnet
 with the same range, and attaches a HA router to that subnet, the operation 
 will fail as the router
 cannot have different interfaces belonging to the same subnet. Nir suggested 
 to look into using
 the 169.254.0.0/16 range as the default because we know it will (hopefully) 
 not be allocated by tenants.

 [1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
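The range collision described in point 6 can be demonstrated mechanically; a minimal sketch using Python's stdlib ipaddress module (the tenant subnet here is made up for illustration):

```python
# Check whether a tenant subnet collides with the VRRP range configured
# in neutron.conf. Both ranges below are illustrative.
import ipaddress

vrrp_range = ipaddress.ip_network(u'169.254.0.0/16')
tenant_subnet = ipaddress.ip_network(u'169.254.192.0/18')

collides = tenant_subnet.overlaps(vrrp_range)
print(collides)  # True: attaching an HA router to this subnet would fail
```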


 Assaf Muller, Cloud Networking Engineer
 Red Hat



Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-26 Thread Joshua Harlow
So this design is starting to look pretty similar to what we have in 
taskflow.

Any reason why it can't just be used instead?

https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine

This code is in a functional state right now, using kombu (for the moment, 
until oslo.messaging becomes py3 compliant).

The concept of an engine which puts messages on a queue for a remote executor is 
in fact exactly what taskflow is doing (the remote executor/worker will 
then respond when it is done and the engine will then initiate the next piece 
of work to do) in the above-listed etherpad (and which is implemented).

Is it the case that in mistral the engine will be maintaining the 
'orchestration' of the workflow during the lifetime of that workflow? In the 
case of mistral what is an engine server? Is this a server that has engines in 
it (where each engine is 'orchestrating' the remote/local workflows and 
monitoring and recording the state transitions and data flow that is 
occurring)? The details @ 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 seems to be already what taskflow provides via its engine object; creating an 
application which runs engines, and having those engines initiate workflows, is 
made to be dead simple.

From previous discussions with the mistral folks it seems like the overlap 
here keeps growing, which seems to be bad (and means something is 
broken/wrong). In fact most of the concepts that you have blueprints for have 
already been completed in taskflow (data-flow, engine being disconnected from 
the rest api…) as well as ones you don't have listed (resumption, reversion…).

What can we do to fix this situation?

-Josh

From: Nikolay Makhotkin nmakhot...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, February 25, 2014 at 11:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Looks good. Thanks, Winson!

Renat, What do you think?


On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:
The following link is the google doc of the proposed engine/executor message 
flow architecture.  
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine where one or more engines send 
requests over a transport to one or more executors.  The executor client, 
transport, and executor server follow the RPC client/server design pattern 
(https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc) 
in oslo.messaging.

The diagram represents the local engine.  In reality, it's following the same 
RPC client/server design pattern.  The only difference is that it'll be 
configured to use a fake RPC backend driver 
(https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py).  
The fake driver uses in-process queues 
(http://docs.python.org/2/library/queue.html#module-Queue) shared between 
a pair of engine and executor.
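For readers unfamiliar with the fake driver, its in-process-queue idea can be sketched without oslo.messaging at all; the names below are illustrative, not Mistral's actual code:

```python
# Sketch of the fake-driver idea: engine and executor share in-process
# queues instead of a real broker such as RabbitMQ.
import queue
import threading

requests = queue.Queue()
replies = queue.Queue()

def executor_server():
    # Executor side: take one task request, run it, post the result back.
    method, kwargs = requests.get()
    if method == 'run_task':
        replies.put('done: %s' % kwargs['task_id'])

threading.Thread(target=executor_server, daemon=True).start()

# Engine side: enqueue a request and block on the reply, mirroring the
# synchronous call() semantics of an RPC client.
requests.put(('run_task', {'task_id': 'task-1'}))
result = replies.get(timeout=5)
print(result)  # done: task-1
```

With a real transport, the two queues are replaced by broker queues and the same client/server code runs in separate processes.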

The following are the stepwise changes I will make.
1) Keep the local and scalable engine structure intact.  Create the Executor 
Client at ./mistral/engine/scalable/executor/client.py.  Create the Executor 
Server at ./mistral/engine/scalable/executor/service.py and implement the task 
operations under ./mistral/engine/scalable/executor/executor.py.  Delete 
./mistral/engine/scalable/executor/executor.py.  Modify the launcher 
./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py to 
use the Executor Client instead of sending the message directly to rabbit via 
pika.  The sum of this is the atomic change that keeps existing structure and 
without breaking the code.
2) Remove the local engine. 
https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
3) Implement versioning for the engine.  
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
4) Port abstract engine to use oslo.messaging and implement the engine client, 
engine server, and modify the API layer to consume the engine client. 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process.

Winson


On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:

Renat,

Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
don't think the port to oslo.messaging is just a swap from pika to 
oslo.messaging.  OpenStack services, as I understand, are usually implemented as 
an RPC client/server over a messaging transport.  Sync vs async calls are done 
via the RPC client call and cast 

[openstack-dev] [savanna] team meeting Feb 27 1800 UTC

2014-02-26 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_February.2C_27

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140227T18

The main topics are project renaming and Icehouse 3 dev milestone.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [nova] [glance] [heat] bug 1203680 - fix requires doc

2014-02-26 Thread Dean Troyer
On Wed, Feb 26, 2014 at 1:09 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Thanks for the further updates.  I have just one question about those.
  One way to do both unit testing and system (integration) testing is to:
 git clone your favorite project to make a working local repo somewhere on
 your testing machine, edit and commit there, then use DevStack to create a
 running system in /opt/stack using your modified project in place of the
 copy at git.openstack.org.


Yes, that is similar to what Sean mentioned, the difference being that his
repos are actually in VirtualBox Shared Folders so the Linux VM has access
to them. He works on them natively on his laptop so the VM doesn't need his
desktop toolset installed.


  I think the most direct approach would be to generalize the title of
 https://wiki.openstack.org/wiki/Gerrit_Workflow#Unit_Tests_Only and
 generalize the introductory remark to include a reference to
 https://wiki.openstack.org/wiki/Testing#Indirect_Approach .  Does this
 make sense to you?


Sure.  My edits were prompted by the loss of some information useful to
making a choice between the two and tweaking the phrasing and sections.

There are a lot of ways to do this, enumerating them is beyond the scope of
that doc IMHO.  I think having the basic components/requirements available
should be enough, but then I also have my workflow figured out, so I 
probably am still missing something.


 As far as my experience goes, the fix to 1203680 would also cover 1203723:
 the only thing I find lacking for nova unit testing in Ubuntu is
 libmysqlclient-dev, which is among the testing requirements of glance (see
 DevStack's files/apts/glance).

 As far as I can tell, Sean Dague is saying the fix to 1203680 is wrong
 because it binds unit testing to DevStack and he thinks unit testing should
 be independent of DevStack.


Let me summarize: Unit testing is not a default requirement for DevStack.
 If you want it, packages tagged #testonly in the system packaging prereq
files will be installed when INSTALL_TESTONLY_PACKAGES=True.

Following that, missing system package requirements that affect only unit
testing can be added with the #testonly tag.  It is good form to attempt to
verify those added on both Ubuntu and Fedora, even better form to get RHEL
and SuSE.
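Concretely, the flag and the tag look roughly like this (a sketch; libmysqlclient-dev is the package mentioned earlier in the thread):

```shell
# In DevStack's localrc (or the [[local|localrc]] section of local.conf),
# opt in to the unit-test-only system packages:
INSTALL_TESTONLY_PACKAGES=True

# And in a prereq file such as files/apts/glance, a package needed only
# for unit tests carries the tag, e.g.:
#   libmysqlclient-dev # testonly
```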


 Interestingly, installing DevStack with INSTALL_TESTONLY_PACKAGES set to
 True will have a global side-effect and so will provide the unit testing
 requirements even if the indirect procedure (
 https://wiki.openstack.org/wiki/Testing#Indirect_Approach) that Sean
 advocates is used.  In this way the only tie between unit testing and
 DevStack is that they be done on the same machine.  Maybe this is the way
 to go?


That is exactly what that variable is for.  There are times we don't want
those packages installed and it is MUCH easier to add them than to remove
them.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-26 Thread Gordon Chung
hi,

just so this issue doesn't get lost again. Mehdi's bp seems like a good 
place to track this issue: 
https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification.

Joe, I agree with you that it's too late for this iteration... maybe it's 
something we should mark as low priority for the J cycle.

adding participants to the bp just so we get eyes on it.

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-26 Thread Dean Troyer
On Wed, Feb 26, 2014 at 1:43 PM, Jay Dobies jason.dob...@redhat.com wrote:

 I like the notion of OpenStackClient. I'll talk ideals for a second. If we
 had a standard framework and each project provided a command abstraction
 that plugged in, we could pick and choose what we included under the Tuskar
 umbrella. Advanced users with particular needs could go directly to the
 project clients if needed.


This is a thing.  https://github.com/dtroyer/python-oscplugin is an example
of a stand-alone OSC plugin that only needs to be installed to be
recognized.  FWIW, four of the built-in API command sets in OSC also are
loaded in this manner even though they are in the OSC repo so they
represent additional examples of writing plugins.
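The discovery mechanism is plain setuptools entry points; a hypothetical plugin's packaging might look like the following (the group and class names are illustrative, loosely modeled on python-oscplugin, not copied from it):

```python
# Hypothetical setup.py for a stand-alone OSC plugin; installing the
# package is all `openstack` needs to pick up the new commands.
import setuptools

setuptools.setup(
    name='oscplugin',
    version='0.1',
    packages=['oscplugin', 'oscplugin.v1'],
    entry_points={
        # Tells OSC that a plugin module exists.
        'openstack.cli.extension': [
            'oscplugin = oscplugin.plugin',
        ],
        # One entry per command, mapped to a cliff command class.
        'openstack.oscplugin.v1': [
            'plugin_list = oscplugin.v1.plugin:ListPlugin',
            'plugin_show = oscplugin.v1.plugin:ShowPlugin',
        ],
    },
)
```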

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-26 Thread Sylvain Bauza
2014-02-26 19:40 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:


 So you're not segregating the dynamic part of the API at all from the
 static part?

 Doug


No, you're right, at least with API V1 (Flask). As per our discussion, it
seems our use-case hasn't yet been implemented in WSME, so I'll provide a
different way to add extra attributes with API V2 (Pecan/WSME) by using a
single key with JSON-serialized extra attributes as value.

Something like this:
{ 'name': 'foo',
  'extra_capabilities': {u'fruits':u'bananas',u'vgpus':u'2'}}

Using that body will allow us to keep WSME validation for extra attributes,
but I would really like having the logic I mentioned within WSME in the
next releases. Do you think this is manageable in some way, or is it
a tricky feature?
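The single-key approach is simple on the wire; a minimal sketch of the round trip (field names follow the example above; serializing the value as a JSON string is an assumption about the implementation):

```python
# The extra attributes travel as one JSON-serialized value under a single
# known key, so the static WSME type only has to declare that one field.
import json

request_body = {
    'name': 'foo',
    'extra_capabilities': json.dumps({'fruits': 'bananas', 'vgpus': '2'}),
}

# Server side: validate the known keys, then unpack the dynamic part.
extras = json.loads(request_body['extra_capabilities'])
print(extras['vgpus'])  # 2
```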

Thanks for your support Doug,
-Sylvain


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Mathieu Rohon
Please look at my last reply. Do you have several compute nodes? If
you have only one node, you won't have any vxlan port on br-tun.

On Wed, Feb 26, 2014 at 6:44 PM, Varadhan, Sowmini
sowmini.varad...@hp.com wrote:
 On 2/26/14 5:47 AM, Mathieu Rohon wrote:
 Hi,

 FYI setting the vxlan UDP doesn't work properly for the moment :
 https://bugs.launchpad.net/neutron/+bug/1241561

 So I checked this again by going back to 13.10, still no luck.


 Maybe your kernel has the vxlan module already loaded, which binds
 UDP port 8472. That's a reason why the vxlan port can't be created by
 ovs. Check your ovs-vswitchd.log

 Yes vxlan is loaded (as indicated by lsmod) but I didn't
 see any messages around 8472 in the ovs-vswitchd.log, so it must
 be something else in my config. To double check, I even tried some
 other port (8474) for vxlan_udp_port, still no luck.

 So is there a template stack.sh around for this? That would
 help me eliminate the obvious config errors I may have made?

 --Sowmini






[openstack-dev] [QA] Meeting Thursday February 27th at 17:00UTC

2014-02-26 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 27th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish



Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-26 Thread Sean Dague
On 02/26/2014 03:33 PM, Gordon Chung wrote:
 hi,
 
 just so this issue doesn't get lost again. Mehdi's bp seems like a good
 place to track this issue:
 https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification.
 
 Joe, i agree with you that it's too late for this iteration... maybe
 it's something we should mark  low priority for J cycle.
 
 adding participants to the bp just so we get eyes on it.
 
 cheers,
 gordon chung
 openstack, ibm software standards

Ceilometer really needs to stop importing server projects in unit tests.
By nature this is just going to break all the time.

Cross project interactions need to be tested by something in the gate
which is cross project gating - like tempest/devstack.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-26 Thread Brian Haley

On 02/26/2014 01:36 PM, Dean Troyer wrote:

On Wed, Feb 26, 2014 at 11:51 AM, Brian Haley brian.ha...@hp.com
mailto:brian.ha...@hp.com wrote:

While trying to track down why Jenkins was handing out -1's in a
Neutron patch,
I was seeing errors in the devstack tests it runs.  When I dug
deeper it looked
like it wasn't properly determining that Neutron was enabled -
ENABLED_SERVICES
had multiple q-* entries, but 'is_service_enabled neutron' was
returning 0.


This is the correct return, 0 == success.
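To illustrate the shell semantics at play (0 is success, so the if-branch runs), a quick sketch with a stub in place of DevStack's real function:

```shell
# Stub standing in for DevStack's is_service_enabled; returning 0 is
# "true" in shell, so the if-branch below executes.
is_service_enabled() { return 0; }

if is_service_enabled neutron; then
    result="enabled"
else
    result="disabled"
fi
echo "$result"   # prints: enabled
```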


But, at least in lib/neutron, there is a check like this:

if is_service_enabled neutron; then
...
fi

Which will fail with a 0 return code and miss some config.


Can you point to a specific log example?


http://logs.openstack.org/89/70689/24/check/check-devstack-dsvm-neutron/8e137e0/console.html

And a snippet from the log where the above call caused the test to not put a 
'sudo' at the front of a command correctly:


2014-02-26 13:11:58.382 | + ping_check private 10.1.0.4 90
2014-02-26 13:11:58.382 | + is_service_enabled neutron
2014-02-26 13:11:58.385 | ++ set +o
2014-02-26 13:11:58.386 | ++ grep xtrace
2014-02-26 13:11:58.391 | + local 'xtrace=set -o xtrace'
2014-02-26 13:11:58.391 | + set +o xtrace
2014-02-26 13:11:58.392 | + return 0
2014-02-26 13:11:58.410 | + _ping_check_neutron private 10.1.0.4 90
2014-02-26 13:11:58.410 | + local from_net=private
2014-02-26 13:11:58.410 | + local ip=10.1.0.4
2014-02-26 13:11:58.410 | + local timeout_sec=90
2014-02-26 13:11:58.411 | + local expected=True
2014-02-26 13:11:58.411 | + local check_command=
2014-02-26 13:11:58.411 | ++ _get_probe_cmd_prefix private
2014-02-26 13:11:58.411 | ++ local from_net=private
2014-02-26 13:11:58.412 | +++ _get_net_id private
2014-02-26 13:11:58.412 | +++ neutron --os-tenant-name admin 
--os-username admin --os-password secret net-list

2014-02-26 13:11:58.412 | +++ grep private
2014-02-26 13:11:58.412 | +++ awk '{print $2}'
2014-02-26 13:11:59.586 | ++ net_id=779f6b6a-f477-494b-9a7a-6d81c731c4f8
2014-02-26 13:11:59.590 | +++ neutron-debug --os-tenant-name admin 
--os-username admin --os-password secret probe-list -c id -c network_id

2014-02-26 13:11:59.591 | +++ grep 779f6b6a-f477-494b-9a7a-6d81c731c4f8
2014-02-26 13:11:59.592 | +++ awk '{print $2}'
2014-02-26 13:11:59.594 | +++ head -n 1
2014-02-26 13:12:00.906 | 2014-02-26 13:12:00.876 28681 ERROR 
neutron.common.legacy [-] Skipping unknown group key: firewall_driver

2014-02-26 13:12:01.183 | ++ probe_id=ace800b2-5753-4609-b2c3-f9c87d0cc004
2014-02-26 13:12:01.184 | ++ echo ' ip netns exec 
qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004'
2014-02-26 13:12:01.184 | + probe_cmd=' ip netns exec 
qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004'

2014-02-26 13:12:01.185 | + [[ True = \T\r\u\e ]]
2014-02-26 13:12:01.185 | + check_command='while !  ip netns exec 
qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004 ping -w 1 -c 1 10.1.0.4; do 
sleep 1; done'
2014-02-26 13:12:01.185 | + timeout 90 sh -c 'while !  ip netns exec 
qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004 ping -w 1 -c 1 10.1.0.4; do 
sleep 1; done'

2014-02-26 13:12:01.194 | Cannot open network namespace: Permission denied
2014-02-26 13:12:02.201 | Cannot open network namespace: Permission denied
2014-02-26 13:12:03.207 | Cannot open network namespace: Permission denied
2014-02-26 13:12:04.213 | Cannot open network namespace: Permission denied

Thanks for any help on this.

-Brian



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-26 Thread Eugene Nikanorov
Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow a real multiple-vips-per-pool feature, we need the
listener concept.
It's not just a different tcp port, but also a protocol, so session
persistence and all ssl-related parameters should move to listener.

2) ProviderResourceAssociation - remains on the instance object (our
instance object is VIP) as a relation attribute.
Though it is removed from public API, so it could not be specified on
creation.
Remember provider is needed for REST call dispatching. The value of
provider attribute (e.g. ProviderResourceAssociation) is result of
scheduling.

3) As we discussed before, pool-vip relation will be removed, but pool
reuse by different vips (e.g. different backends) will be forbidden for
implementation simplicity, because this is definitely not a priority right
now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP with another
vip is not possible for the following reason:
a vip is created (with the new model) with a flavor (or provider) and scheduled
to a provider (and then to a particular backend); doing so for 2 vips makes
address reuse impossible if we want to maintain a logical API, or otherwise
we would need to expose implementation details that would allow us to
connect two vips to the same backend.

On the open discussion questions:
I think most of them are resolved by following existing API expectations
about status fields, etc.
The main thing that allows us to go with existing API expectations is the
notion of a 'root object'.
The root object is the object whose status and admin_state show the real
operability of the configuration, while from an implementation perspective it
is a mounting point between the logical config and the backend.

The real challenge of model #3 is the ability to share pools between different
VIPs, e.g. between different flavors/providers/backends.
The user may be unaware of it, but it requires really complex logic to handle
statistics, healthchecks, etc.
I think that while we may leave this ability at the object model and API level,
we will limit it, as I said previously.

Thanks,
Eugene.



On Wed, Feb 26, 2014 at 9:06 PM, Samuel Bercovici samu...@radware.com wrote:

  Hi,



  I have added to the wiki page:
  https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
  that points to a document that includes the current model + L7 + SSL.

 Please review.



 Regards,

 -Sam.





 *From:* Samuel Bercovici
 *Sent:* Monday, February 24, 2014 7:36 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Samuel Bercovici
 *Subject:* RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion



 Hi,



 I also agree that the model should be pure logical.

  I think that the existing model is almost correct, but the pool should be
  made pure logical. This means that the vip <-> pool relationship also needs
  to become any-to-any.

  Eugene has rightfully pointed out that the current state management will
  not handle such a relationship well.

 To me this means that the state management is broken and not the model.

 I will propose an update to the state management in the next few days.



 Regards,

 -Sam.









  *From:* Mark McClain [mailto:mmccl...@yahoo-inc.com]

 *Sent:* Monday, February 24, 2014 6:32 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion





 On Feb 21, 2014, at 1:29 PM, Jay Pipes jaypi...@gmail.com wrote:



 I disagree on this point. I believe that the more implementation details
 bleed into the API, the harder the API is to evolve and improve, and the
 less flexible the API becomes.

 I'd personally love to see the next version of the LBaaS API be a
 complete breakaway from any implementation specifics and refocus itself
 to be a control plane API that is written from the perspective of the
 *user* of a load balancing service, not the perspective of developers of
 load balancer products.



  I agree with Jay.  The API needs to be user-centric and free of
  implementation details.  One of the concerns I've voiced in some of the IRC
  discussions is that too many implementation details are exposed to the user.



 mark



Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-26 Thread Doug Hellmann
On Wed, Feb 26, 2014 at 3:34 PM, Sylvain Bauza sylvain.ba...@gmail.com wrote:




 2014-02-26 19:40 GMT+01:00 Doug Hellmann doug.hellm...@dreamhost.com:


 So you're not segregating the dynamic part of the API at all from the
 static part?

 Doug


 No, you're right, at least with API V1 (Flask). As per our discussion, it
 seems our use-case hasn't yet been implemented in WSME, so I'll provide a
 different way to add extra attributes with API V2 (Pecan/WSME) by using a
 single key with JSON-serialized extra attributes as value.

 Something like this:
 { 'name': 'foo',
   'extra_capabilities': {u'fruits':u'bananas',u'vgpus':u'2'}}

 Using that body will allow us to keep WSME validation for extra
 attributes, but I would really like having the logic I mentioned within
 WSME in the next releases. Do you think this is something manageable some
 way, or is it a tricky feature ?


It's probably something we can do, but it will take some thought. When
receiving data we can stick the attributes on the model, but model will
have no way to know what extra attributes need to be serialized, so
returning the data may be a little challenging.

Doug




 Thanks for your support Doug,
 -Sylvain



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Chris Behrens

This thread is many messages deep now and I’m busy with a conference this week, 
but I wanted to carry over my opinion from the other “v3 API in Icehouse” 
thread and add a little to it.

Bumping versions is painful. v2 is going to need to live for “a long time” to 
create the least amount of pain. I would think that at least anyone running a 
decent sized Public Cloud would agree, if not anyone just running any sort of 
decent sized cloud. I don’t think there’s a compelling enough reason to 
deprecate v2 and cause havoc with what we currently have in v3. I’d like us to 
spend more time on the proposed “tasks” changes. And I think we need more time 
to figure out if we’re doing versioning in the correct way. If we’ve got it 
wrong, a v3 doesn’t fix the problem and we’ll just be causing more havoc with a 
v4.

- Chris




[openstack-dev] [Trove] datastore version-specific disk-image-builder element

2014-02-26 Thread Lowery, Mathew
We have a need to support multiple versions of a single datastore with a single 
set of disk-image-builder elements. Let's take MySQL on Ubuntu as an example. 
5.5 is already present in trove-integration. Let's say we want to add 5.6 to 
coexist. Let's also assume that simply asking the existing apt repository
for MySQL 5.6 is not possible (i.e. another apt repo is required). In
trove-integration/scripts/files/elements, two approaches come to mind.

Approach #1: modify ubuntu-mysql/install.d/10-mysql with a case statement

Pros:

  *   No new directories or files

Cons:

  *   New variable DATASTORE_VERSION or SERVICE_VERSION is required to pivot on
  *   File contention (i.e. merges) as all versions will be in the same file
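
A hypothetical sketch of what Approach #1's case statement could look like (the package names and repo handling are illustrative placeholders, not the actual element contents):

```bash
# Sketch of Approach #1: a single ubuntu-mysql/install.d/10-mysql that
# pivots on a new DATASTORE_VERSION variable. Echoes stand in for the
# real install steps.
install_mysql() {
    local version=${1:-${DATASTORE_VERSION:-5.5}}
    case "$version" in
        5.5)
            echo "apt-get install mysql-server-5.5"   # stock Ubuntu repo
            ;;
        5.6)
            echo "add dedicated apt repo; apt-get install mysql-server-5.6"
            ;;
        *)
            echo "unsupported MySQL version: $version" >&2
            return 1
            ;;
    esac
}

install_mysql 5.6
```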

Approach #2: add ubuntu-mysql-5.6/install.d/10-mysql

Pros:

  *   There is already a precedent for using a separate element for each 
variant (i.e. fedora vs. ubuntu)
  *   Files are fine-grained; purpose is obvious from filename
  *   Less file contention as there is a separate file for each version
  *   No new variable

Cons:

  *   Possible explosion in the number of directories due to number of variants

Questions:

  *   Has this been discussed already? What is the desired approach?
  *   Will the community accept a MySQL 5.6 disk-image-builder element? And a 
Mongo 2.4.9 one? (Using whatever approach above is agreed on.)


[openstack-dev] [Neutron] Flavor Framework

2014-02-26 Thread Eugene Nikanorov
Hi neutron folks,

I know that there are patches on gerrit for VPN, FWaaS and L3 services that
are leveraging Provider Framework.
Recently we've been discussing a more comprehensive approach that will allow
users to choose service capabilities rather than a vendor or provider.

I've started creating design draft of Flavor Framework, please take a look:
https://wiki.openstack.org/wiki/Neutron/FlavorFramework

It also now looks clear to me that the code that introduces providers for
vpn, fwaas, and l3 is really necessary to move forward to Flavors, with one
exception: providers should not be exposed in the public API.
While the provider attribute could be visible to an administrator (like
segmentation_id of a network), it can't be specified on creation and it's not
available to a regular user.

Looking forward to getting your feedback.

Thanks,
Eugene.


Re: [openstack-dev] [python-*client] moving to WebOb exceptions

2014-02-26 Thread Alexei Kornienko
Hi,

Could you please explain in more detail what causes pain in adding an
additional requirement to the clients (in most cases, when a client is used
inside another OpenStack project, WebOb is there already)?
IMHO, having to write ugly hacks to handle several exception classes with
the same name causes much more pain.

Regards,
Alexei
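
To illustrate Alexei's point, a hypothetical sketch of the "same name, different class" problem: NotFoundA and NotFoundB stand in for per-client exception modules (they are not real client classes), and the except tuple has to grow with every new client library.

```python
# Hypothetical stand-ins: each client library defines its own NotFound,
# so callers must enumerate every variant to handle "the same" error.

class NotFoundA(Exception):  # e.g. one client's NotFound
    pass

class NotFoundB(Exception):  # e.g. another client's NotFound
    pass

def delete_resource(fail_with):
    # simulate a client call that raises its library-specific exception
    raise fail_with('resource missing')

def robust_delete(fail_with):
    try:
        delete_resource(fail_with)
    except (NotFoundA, NotFoundB):  # this tuple grows with every new client
        return 'ignored missing resource'

print(robust_delete(NotFoundA))
print(robust_delete(NotFoundB))
```

With a single shared exception hierarchy, the except clause would name one class instead of a per-client tuple.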


2014-02-26 20:02 GMT+02:00 Dean Troyer dtro...@gmail.com:

 On Wed, Feb 26, 2014 at 11:20 AM, Andrey Kurilin akuri...@mirantis.com wrote:

 While working on unification of clients code we try to unify various
 exceptions in different client projects.
 We have module apiclient.exceptions in oslo-incubator[1]. Since our
 apiclient is an oslo-inclubator library and not a standalone lib this
 doesn't help in case we need to process exceptions from several clients.

 [...]

 The solution would be to use exceptions from external library - Module
 WebOb.exc[2] for example (since WebOb is already used in other openstack
 projects). This exceptions cover all our custom http exceptions.


 I would oppose adding WebOb as a requirement for the client libraries.  I
 see keystoneclient has it today but that is only because the middleware is
 still in that repo (moving those is another topic).

 The pain of installing the existing client libs and their prereqs is bad
 enough; adding to it is not tenable and is part of what is motivating the
 SDK efforts.

 dt

 --

 Dean Troyer
 dtro...@gmail.com





Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-26 Thread Gordon Chung
 Ceilometer really needs to stop importing server projects in unit tests.
 By nature this is just going to break all the time.

i believe that was the takeaway from the thread -- it's an old thread and 
i was just doing some back-reading. 

 Cross project interactions need to be tested by something in the gate
 which is cross project gating - like tempest/devstack.

that said, we currently import swift for unit tests as well, to test a swift 
middleware solution which gathers metrics. i've added this as a bug here 
for discussion: https://bugs.launchpad.net/ceilometer/+bug/1285388

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-26 Thread Carl Baldwin
Brian,

In shell it is correct to return 0 for success and non-zero for failure.

Carl
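
A short sketch of the convention Carl refers to: a predicate function "returns true" by exiting 0, so callers test it directly with if or &&, never by comparing the status to 1.

```bash
# Exit status 0 means success/"true" in shell; the [[ ]] test's own
# status is the function's return value, so no explicit return is needed.
function is_foo_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,f- ]]
}

ENABLED_SERVICES=f-svc
if is_foo_enabled; then
    echo "foo is enabled"   # reached because the regex matched (status 0)
fi
```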
On Feb 26, 2014 10:54 AM, Brian Haley brian.ha...@hp.com wrote:

 While trying to track down why Jenkins was handing out -1's in a Neutron
 patch,
 I was seeing errors in the devstack tests it runs.  When I dug deeper it
 looked
 like it wasn't properly determining that Neutron was enabled -
 ENABLED_SERVICES
 had multiple q-* entries, but 'is_service_enabled neutron' was returning
 0.

 I boiled it down to a simple reproducer based on the many is_*_enabled()
 functions:

 #!/usr/bin/env bash
 set -x

 function is_foo_enabled {
 [[ ,${ENABLED_SERVICES} =~ ,f- ]] && return 0
 return 1
 }

 ENABLED_SERVICES=f-svc

 is_foo_enabled

 $ ./is_foo_enabled.sh
 + ENABLED_SERVICES=f-svc
 + is_foo_enabled
 + [[ ,f-svc =~ ,f- ]]
 + return 0

 So either the return values need to be swapped, or the && changed to ||.  I
 haven't
 tested is_service_enabled() but all the is_*_enabled() functions are wrong
 at least.

 Is anyone else seeing this besides me?  And/or is someone already working
 on
 fixing it?  Couldn't find a bug for it.

 Thanks,

 -Brian



