[openstack-dev] [ironic] Baremetal provisioning using ironic iLO driver

2014-02-25 Thread Faizan Barmawer
Hi All,

We are working on the following blueprints for developing an Ironic iLO driver
for power management and deployment:
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-virtualmedia-driver
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-power-driver

I have outlined the overall flow of how the ironic-ilo-virtualmedia-driver works in
this etherpad:
https://etherpad.openstack.org/p/iLODriverIronicDevstack

I would appreciate your suggestions/comments/feedback on the iLO deploy
process we have adopted in this driver.

Regards,
Barmawer


[openstack-dev] [Mistral] Renaming action types

2014-02-25 Thread Renat Akhmerov
Folks,

I’m proposing to rename the two action types REST_API and MISTRAL_REST_API to 
HTTP and MISTRAL_HTTP. The words “REST” and “API” don’t look correct to me if you 
look at:

Services:
  Nova:
    type: REST_API
    parameters:
      baseUrl: {$.novaURL}
    actions:
      createVM:
        parameters:
          url: /servers/{$.vm_id}
          method: POST
   
There’s no information about “REST” or “API” here. It’s just a spec for how to form 
an HTTP request.
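
For illustration, the same declaration with the proposed naming (only the type value
changes, nothing else):

Services:
  Nova:
    type: HTTP
    parameters:
      baseUrl: {$.novaURL}
    actions:
      createVM:
        parameters:
          url: /servers/{$.vm_id}
          method: POST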

Thoughts?

Renat Akhmerov
@ Mirantis Inc.





[openstack-dev] [Mistral] Understanding parameters for tasks and actions

2014-02-25 Thread Renat Akhmerov
Hi team,

I’m currently working on the first version of Data Flow and I would like to 
make sure we all clearly understand how to interpret “parameters” for tasks and 
actions when we declare them in Mistral DSL. I feel like I’m getting lost here 
a little bit. The problem is that we still don’t have a solid DSL spec since we 
keep changing our vision (especially after new members joined the team). But 
that may be fine, it’s life.

I also have a couple of suggestions that I’d like to discuss with you. Sorry if 
that seems verbose, I’ll try to be as concise as possible.

I took a couple of snippets from [1] and put them in here.

# Snippet 1.
Services:
  Nova:
    type: REST_API
    parameters:
      baseUrl: $.novaURL
    actions:
      createVM:
        parameters:
          url: /servers/{$.vm_id}
          method: POST
        output:
          select: $.server.id
          store-as: vm_id

# Snippet 2.
Workflow:
  tasks:
    createVM:
      action: Nova:createVM
      on-success: waitForIP
      on-error: sendCreateVMError

“$.” - handle to workflow execution storage (what we call ‘context’ now) where 
we keep workflow variables.

Let’s say our workflow input is JSON like this:
{
  "novaURL": "http://localhost:123",
  "image_id": "123"
}

Questions

So the things that I don’t like or am not sure about:

1. Task “createVM” needs to use “image_id” but it doesn’t have any information 
about it in its declaration.
According to the current vision it should look like:
createVM:
  action: Nova:createVM
  parameters:
    image_id: $.image_id
And at runtime “image_id” should be resolved to “123”, get passed to the action and, 
in fact, become a kind of third parameter along with “url” and “method”. This is 
specifically interesting because we have different types of parameters: on one hand, 
“url” and “method” for a REST_API action define the nature of the action itself; 
“image_id”, on the other hand, is dynamic data coming eventually from user input.
So the question is: do we need to differentiate between these types of 
parameters explicitly and make that part of the specification?
We also had a notion of “task-parameters” for action declarations which is 
supposed to be used to declare this second type of parameters (dynamic) but do 
we really need it? I guess if we clearly declare input and output at task level 
then actions should be able to use them according to their nature.
2. The action declaration “createVM” has an “output” section, which may not be OK in 
terms of level of abstraction.
My current vision is that actions should not declare how we store the result 
(“output”) in the execution. Ideally, looking at tasks only should give us a 
comprehensive understanding of how workflow data flows. So I would move 
“store-as” to the task level.

Suggestions

1. Define “input” and “output” at task level like this:

createVM:
  input:
    image_id: $.image_id
  output: vm_id

Where “output: vm_id” is basically a replacement for “store-as: vm_id” at the 
action level, i.e. it’s a hint to Mistral to store the output of this task 
under the “vm_id” key in the execution context. Again, the idea is to define task and 
action responsibilities more strictly:
A task is a high-level workflow building block which defines a logical workflow 
step and how it modifies workflow data. A task doesn’t contain technical details 
on how it’s implemented.
An action is an implementor of the logical workflow step defined by a task. An 
action defines the specific algorithm of how the task is implemented.

2. Use “parameters” only for actions, to specify additional properties 
influencing their nature (like “method” for HTTP actions).
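
To make the proposed split concrete, here is a rough sketch combining both suggestions
(purely illustrative, reusing the names from the snippets above):

Services:
  Nova:
    type: REST_API
    parameters:
      baseUrl: $.novaURL
    actions:
      createVM:
        parameters:
          url: /servers/{$.vm_id}
          method: POST

Workflow:
  tasks:
    createVM:
      action: Nova:createVM
      input:
        image_id: $.image_id
      output: vm_id
      on-success: waitForIP
      on-error: sendCreateVMError

The action now only describes how to form the HTTP request, while the task carries the
dynamic input and tells Mistral where to store the result.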


Please let me know your thoughts. We can make required adjustments right now.


[1] https://etherpad.openstack.org/p/mistral-poc

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread Nikolay Makhotkin
Looks good. Thanks, Winson!

Renat, What do you think?


On Wed, Feb 26, 2014 at 10:00 AM, W Chan  wrote:

> The following link is the google doc of the proposed engine/executor
> message flow architecture.
> https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing
>
> The diagram on the right is the scalable engine where one or more engine
> sends requests over a transport to one or more executors.  The executor
> client, transport, and executor server follow the RPC client/server
> design pattern in oslo.messaging.
>
> The other diagram represents the local engine.  In reality, it's following the
> same RPC client/server design pattern.  The only difference is that it'll
> be configured to use a fake RPC backend driver.  The fake driver uses in-process
> queues shared between a pair of engine and executor.
>
> The following are the stepwise changes I will make.
> 1) Keep the local and scalable engine structure intact.  Create the
> Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
> the Executor Server at ./mistral/engine/scalable/executor/service.py and
> implement the task operations under
> ./mistral/engine/scalable/executor/executor.py.  Delete
> ./mistral/engine/scalable/executor/executor.py.  Modify the launcher
> ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
> to use the Executor Client instead of sending the message directly to
> rabbit via pika.  The sum of this is an atomic change that keeps the existing
> structure without breaking the code.
> 2) Remove the local engine.
> https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
> 3) Implement versioning for the engine.
> https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
> 4) Port abstract engine to use oslo.messaging and implement the engine
> client, engine server, and modify the API layer to consume the engine
> client.
> https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
> .
>
> Winson
>
>
> On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov wrote:
>
>>
>> On 25 Feb 2014, at 02:21, W Chan  wrote:
>>
>> Renat,
>>
>> Regarding your comments on change https://review.openstack.org/#/c/75609/,
>> I don't think the port to oslo.messaging is just a swap from pika to
>> oslo.messaging.  OpenStack services as I understand is usually implemented
>> as an RPC client/server over a messaging transport.  Sync vs async calls
>> are done via the RPC client call and cast respectively.  The messaging
>> transport is abstracted and concrete implementation is done via
>> drivers/plugins.  So the architecture of the executor if ported to
>> oslo.messaging needs to include a client, a server, and a transport.  The
>> consumer (in this case the mistral engine) instantiates an instance of the
>> client for the executor, makes the method call to handle task, the client
>> then sends the request over the transport to the server.  The server picks
>> up the request from the exchange and processes the request.  If cast
>> (async), the client side returns immediately.  If call (sync), the client
>> side waits for a response from the server over a reply_q (a unique queue
>> for the session in the transport).  Also, oslo.messaging allows versioning
>> in the message. Major version change indicates API contract changes.  Minor
>> version indicates backend changes but with API compatibility.
>>
>>
>> My main concern about this patch is not related with messaging
>> infrastructure. I believe you know better than me how it should look like.
>> I'm mostly concerned with the way of making changes you chose. From my
>> perspective, it's much better to make atomic changes where every change
>> doesn't affect too much of the existing architecture. So the first step could
>> be to change pika to oslo.messaging with minimal structural changes, without
>> introducing versioning (could be just a TODO comment saying that the
>> framework allows it and we may want to use it in the future, to be decided),
>> without getting rid of the current engine structure (local, scalable). Some
>> of the things in the file structure and architecture came from the
>> decisions made by many people and we need to be careful about changing them.
>>
>>
>> So, where I'm headed with this change...  I'm implementing the basic
>> structure/scaffolding for the new executor service using oslo.messaging
>> (default transport with rabbit).  Since the whole change will take a few
>> rounds, I don't want to disrupt any changes that the team is making at the
>> moment and so I'm building the structure separately.  I'm also adding
>> versioning (v1) in the module structure to anticipate any versioning
>> changes in the future.  I expect the change request will lead to some
>> discussion as we are doing here.

Re: [openstack-dev] [Nova][VMWare] VMware VM snapshot

2014-02-25 Thread Gary Kotton
Hi,
This is something that would be nice to add as an extension. At the moment it 
is not part of our near-term development plans. It would be nice to engage the 
community to discuss and see how this can be exposed correctly using the Nova 
APIs. It may be something worth discussing at the upcoming summit.
Thanks
Gary

From: Qin Zhao <chaoc...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, February 25, 2014 2:05 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][VMWare] VMware VM snapshot

What I mean is the vSphere snapshot, which is described on this page:
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-CA948C69-7F58-4519-AEB1-739545EA94E5.html

It is very useful if the user plans to perform some risky operations in a VM. I 
am not quite sure whether we can model it in Nova and let the user create a 
snapshot chain via the Nova API. Has this been discussed in a design session or on 
the mailing list? Does anybody know?


On Tue, Feb 25, 2014 at 6:40 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 25 February 2014 09:27, Qin Zhao <chaoc...@gmail.com> wrote:
> Hi,
>
> One simple question about the VCenter driver. I feel the VM snapshot function of
> VCenter is very useful and is loved by VCenter users. Has anybody thought
> about letting the VCenter driver support it?

It depends on whether that can be modelled well with the current
Nova/Cinder/Glance primitives.

If you do boot from volume, and you see the volume snapshots, and they
behave how cinder expects, and you can model that snapshot as an image
in glance that you can boot new instances from, then maybe it would
work just fine. But we need to take care not to bend the current API
primitives too far out of place.

I remember there being some talk about this at the last summit. How did that go?

John




--
Qin Zhao


Re: [openstack-dev] [nova] [glance] [heat] bug 1203680 - fix requires doc

2014-02-25 Thread Mike Spreitzer
(I added some tags in the subject line, probably should have been there 
from the start.)

Thanks guys, for an informative discussion.  I have updated 
https://wiki.openstack.org/wiki/Gerrit_Workflow and 
https://wiki.openstack.org/wiki/Testing to incorporate what I have 
learned.

Like Ben, I know of no better-greased path from an update to integration testing 
than git pull and manually recycling the right processes.  If there is 
something better, please let us know.

But I still do not detect consensus on what to do about bug 1203680.  It 
looks like Sean thinks the fix is just wrong, but I'm not hearing much 
about what a good fix would look like.  As best I can tell, it would 
involve wrapping tox in a script that recapitulates the system package 
install logic from DevStack (I know of no other scripting for installing 
needed system packages).

Thanks,
Mike


Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-25 Thread 黎林果
> I would not expect a similar feature to be implemented for the openvswitch 
> monolithic plugin, since that is being deprecated.

What is the relation between ML2 and the other plugins?

I found that create_subnet is only implemented in ML2 but not in openvswitch.


2014-02-25 11:54 GMT+08:00 黎林果 :
> Yes. You are right.
>
> The bp has implemented this function.
>
> Thank you very much.
>
> 2014-02-25 11:01 GMT+08:00 Robert Kukura :
>> On 02/24/2014 09:11 PM, 黎林果 wrote:
>>> Bob,
>>>
>>> Thank you very much. I have understood.
>>>
>>> Another question:
>>> When create network with provider, if the network type is VLAN, the
>>> provider:segmentation_id must be specified.
>>>
>>> In function: def _process_provider_create(self, context, attrs)
>>>
>>> I think it can come from the db too. If getting it from the db fails, then throw
>>> an exception.
>>
>> I think you are suggesting that if the provider:network_type and
>> provider:physical_network are specified, but provider:segmentation_id is
>> not specified, then a value should be allocated from the tenant network
>> pool. Is that correct?
>>
>> If so, that sounds similar to
>> https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs,
>> which is being implemented in the ML2 plugin for icehouse. I would not
>> expect a similar feature to be implemented for the openvswitch
>> monolithic plugin, since that is being deprecated.
>>
>>>
>>> what's your opinion?
>>
>> If I understand it correctly, I agree this feature could be useful.
>>
>> -Bob
>>
>>>
>>> Thanks!
>>>
>>> 2014-02-24 21:50 GMT+08:00 Robert Kukura :
 On 02/24/2014 07:09 AM, 黎林果 wrote:
> Hi stackers,
>
>   When creating a network, if we don't set provider:network_type,
> provider:physical_network or provider:segmentation_id, the
> network_type will come from cfg, but the other two come from the db's first
> record. The code is:
>
> (physical_network,
>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>
>
>
>   There are two questions.
>   1, network_vlan_ranges = physnet1:100:200
>  Can we configure multiple physical_networks via cfg?

 Hi Lee,

 You can configure multiple physical_networks. For example:

 network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3

 This makes ranges of VLAN tags on physnet1 and physnet2 available for
 allocation as tenant networks (assuming tenant_network_type = vlan).

 This also makes physnet1, physnet2, and physnet3 available for
 allocation of VLAN (and flat for OVS) provider networks (with admin
 privilege). Note that physnet3 is available for allocation of provider
 networks, but not for tenant networks because it does not have a range
 of VLANs specified.

>
>   2, If yes, the physical_network would be uncertain. Is this logical?

 Each physical_network is considered to be a separate VLAN trunk, so VLAN
 2345 on physnet1 is a different isolated network than VLAN 2345 on
 physnet2. All the specified (physical_network,segmentation_id) tuples
 form a pool of available tenant networks. Normal tenants have no
 visibility of which physical_network trunk their networks get allocated on.

 -Bob

>
>
> Regards!
>
> Lee Li
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
@Zhangleiqiang, thanks for the info. Yes, it does provide load balancing and
DPM.

What I want to do covers not only those two policies but also HA and some
customized policies, just like the OpenStack Nova filters. I also hope that this
policy framework can manage not only compute resources, but also storage, network, etc.





2014-02-26 12:16 GMT+08:00 Zhangleiqiang :

>  Hi, Jay & Sylvain:
>
>
>
> I found the OpenStack Neat project (http://openstack-neat.org/) has
> already aimed to do things similar to DRS and DPM.
>
>
>
> Hope it will be helpful.
>
>
>
>
>
> --
>
> Leiqzhang
>
>
>
> Best Regards
>
>
>
> *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
> *Sent:* Wednesday, February 26, 2014 9:11 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
> OpenStack run time policy to manage compute/storage resource
>
>
>
> Hi Tim,
>
>
>
> As I read your design document, it sounds more closely related to
> what the Solver Scheduler subteam is trying to focus on, i.e.
> intelligent, agnostic resource placement in a holistic way [1]
>
> IIRC, Jay is more likely talking about adaptive scheduling decisions based
> on feedback with potential counter-measures that can be done for decreasing
> load and preserving QoS of nodes.
>
>
>
> That said, maybe I'm wrong ?
>
>
>
> [1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler
>
>
>
> 2014-02-26 1:09 GMT+01:00 Tim Hinrichs :
>
> Hi Jay,
>
> The Congress project aims to handle something similar to your use cases.
>  I just sent a note to the ML with a Congress status update with the tag
> [Congress].  It includes links to our design docs.  Let me know if you have
> trouble finding it or want to follow up.
>
> Tim
>
>
> - Original Message -
> | From: "Sylvain Bauza" 
> | To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> | Sent: Tuesday, February 25, 2014 3:58:07 PM
> | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
> OpenStack run time policy to manage
> | compute/storage resource
> |
> |
> |
> | Hi Jay,
> |
> |
> | Currently, the Nova scheduler only acts upon user request (either
> | live migration or boot an instance). IMHO, that's something Gantt
> | should scope later on (or at least there could be some space within
> | the Scheduler) so that Scheduler would be responsible for managing
> | resources on a dynamic way.
> |
> |
> | I'm thinking of the Pets vs. Cattles analogy, and I definitely think
> | that Compute resources could be treated like Pets, provided the
> | Scheduler does a move.
> |
> |
> | -Sylvain
> |
> |
> |
> | 2014-02-26 0:40 GMT+01:00 Jay Lau < jay.lau@gmail.com > :
> |
> |
> |
> |
> | Greetings,
> |
> |
> | Here I want to bring up an old topic here and want to get some input
> | from you experts.
> |
> |
> | Currently in nova and cinder, we only have some initial placement
> | polices to help customer deploy VM instance or create volume storage
> | to a specified host, but after the VM or the volume was created,
> | there was no policy to monitor the hypervisors or the storage
> | servers to take some actions in the following case:
> |
> |
> | 1) Load Balance Policy: If the load of one server is too heavy, then
> | probably we need to migrate some VMs from high load servers to some
> | idle servers automatically to make sure the system resource usage
> | can be balanced.
> |
> | 2) HA Policy: If one server get down for some hardware failure or
> | whatever reasons, there is no policy to make sure the VMs can be
> | evacuated or live migrated (Make sure migrate the VM before server
> | goes down) to other available servers to make sure customer
> | applications will not be affect too much.
> |
> | 3) Energy Saving Policy: If a single host load is lower than
> | configured threshold, then low down the frequency of the CPU to save
> | energy; otherwise, increase the CPU frequency. If the average load
> | is lower than configured threshold, then shutdown some hypervisors
> | to save energy; otherwise, power on some hypervisors to load
> | balance. Before power off a hypervisor host, the energy policy need
> | to live migrate all VMs on the hypervisor to other available
> | hypervisors; After Power on a hypervisor host, the Load Balance
> | Policy will help live migrate some VMs to the new powered
> | hypervisor.
> |
> | 4) Customized Policy: Customer can also define some customized
> | policies based on their specified requirement.
> |
> | 5) Some run-time policies for block storage or even network.
> |
> |
> |
> | I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
> | indeed many customers want such features.
> |
> |
> |
> | I have filed a bp here [1] long ago, but after some discussion with
> | Russell, we think that this should not belong to nova but other
> | projects. Till now, I did not find a good place where we can put
> |

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Joe Gordon
So it turns out nova isn't the only OpenStack project to attempt a
full API revision.

Keystone v3 - Grizzly
Glance v2 - Folsom
Cinder v2 - Grizzly

Out of those 3, nova doesn't use any of them! (Although there are
blueprints and patches up for cinder and glance v2, but they are still
in review).

This discussion about the nova API can easily be applied to the other
projects as well. But more importantly, I don't think a two-year
deprecation cycle is enough: if it takes almost a year and a half just
to get Nova to use Glance v2, then a two-year deprecation cycle seems
awfully short.

best,
Joe

On Tue, Feb 25, 2014 at 8:52 PM, Dan Smith  wrote:
>> This would reduce the amount of duplication which is required (I doubt
>> we could remove all duplication though) and whether its worth it for say
>> the rescue example is debatable. But for those cases you'd only need to make
>> the modification in one file.
>
> Don't forget the cases where the call chain changes -- where we end up
> calling into conductor instead of compute, or changing how we fetch
> complicated information (like BDMs) that we end up needing to send to
> something complicated like the run_instance call. As we try to evolve
> compute/api.py to do different things, the changes we have to drive into
> the api/openstack/ code will continue.
>
> However, remember I've maintained that I think we can unify a lot of the
> work to make using almost the same code work for multiple ways of
> accessing the API. I think it's much better to structure it as multiple
> views into the same data. My concern with the v2 -> v3 approach, is that
> I think instead of explicitly duplicating 100% of everything and then
> looking for ways to squash the two pieces back together, we should be
> making calculated changes and supporting just the delta. I think that if
> we did that, we'd avoid making simple naming and CamelCase changes as
> I'm sure we'll try to avoid starting from v3. Why not start with v2?
>
> We've already established that we can get a version from the client on
> an incoming request, so why wouldn't we start with v2 and evolve the
> important pieces instead of explicitly making the decision to break
> everyone by moving to v3 and *then* start with that practice?
>
> --Dan
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
Thanks Sylvain and Tim for the great sharing.

@Tim, I also went through Congress and have the same feeling as
Sylvain: it is likely that Congress is doing something similar to Gantt,
providing a holistic way of deploying. What I want to do is provide
some functions, very similar to VMware DRS, that can do some
adaptive scheduling automatically.

@Sylvain, can you please give more detail on what the "Pets vs. Cattle
analogy" means?


2014-02-26 9:11 GMT+08:00 Sylvain Bauza :

> Hi Tim,
>
> As I read your design document, it sounds more closely related to
> what the Solver Scheduler subteam is trying to focus on, i.e.
> intelligent, agnostic resource placement in a holistic way [1]
> IIRC, Jay is more likely talking about adaptive scheduling decisions based
> on feedback with potential counter-measures that can be done for decreasing
> load and preserving QoS of nodes.
>
> That said, maybe I'm wrong ?
>
> [1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler
>
>
> 2014-02-26 1:09 GMT+01:00 Tim Hinrichs :
>
> Hi Jay,
>>
>> The Congress project aims to handle something similar to your use cases.
>>  I just sent a note to the ML with a Congress status update with the tag
>> [Congress].  It includes links to our design docs.  Let me know if you have
>> trouble finding it or want to follow up.
>>
>> Tim
>>
>> - Original Message -
>> | From: "Sylvain Bauza" 
>> | To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> | Sent: Tuesday, February 25, 2014 3:58:07 PM
>> | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
>> OpenStack run time policy to manage
>> | compute/storage resource
>> |
>> |
>> |
>> | Hi Jay,
>> |
>> |
>> | Currently, the Nova scheduler only acts upon user request (either
>> | live migration or boot an instance). IMHO, that's something Gantt
>> | should scope later on (or at least there could be some space within
>> | the Scheduler) so that Scheduler would be responsible for managing
>> | resources on a dynamic way.
>> |
>> |
>> | I'm thinking of the Pets vs. Cattles analogy, and I definitely think
>> | that Compute resources could be treated like Pets, provided the
>> | Scheduler does a move.
>> |
>> |
>> | -Sylvain
>> |
>> |
>> |
>> | 2014-02-26 0:40 GMT+01:00 Jay Lau < jay.lau@gmail.com > :
>> |
>> |
>> |
>> |
>> | Greetings,
>> |
>> |
>> | Here I want to bring up an old topic here and want to get some input
>> | from you experts.
>> |
>> |
>> | Currently in nova and cinder, we only have some initial placement
>> | polices to help customer deploy VM instance or create volume storage
>> | to a specified host, but after the VM or the volume was created,
>> | there was no policy to monitor the hypervisors or the storage
>> | servers to take some actions in the following case:
>> |
>> |
>> | 1) Load Balance Policy: If the load of one server is too heavy, then
>> | probably we need to migrate some VMs from high load servers to some
>> | idle servers automatically to make sure the system resource usage
>> | can be balanced.
>> |
>> | 2) HA Policy: If one server get down for some hardware failure or
>> | whatever reasons, there is no policy to make sure the VMs can be
>> | evacuated or live migrated (Make sure migrate the VM before server
>> | goes down) to other available servers to make sure customer
>> | applications will not be affect too much.
>> |
>> | 3) Energy Saving Policy: If a single host load is lower than
>> | configured threshold, then low down the frequency of the CPU to save
>> | energy; otherwise, increase the CPU frequency. If the average load
>> | is lower than configured threshold, then shutdown some hypervisors
>> | to save energy; otherwise, power on some hypervisors to load
>> | balance. Before power off a hypervisor host, the energy policy need
>> | to live migrate all VMs on the hypervisor to other available
>> | hypervisors; After Power on a hypervisor host, the Load Balance
>> | Policy will help live migrate some VMs to the new powered
>> | hypervisor.
>> |
>> | 4) Customized Policy: Customer can also define some customized
>> | policies based on their specified requirement.
>> |
>> | 5) Some run-time policies for block storage or even network.
>> |
>> |
>> |
>> | I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
>> | indeed many customers want such features.
>> |
>> |
>> |
>> | I have filed a bp here [1] long ago, but after some discussion with
>> | Russell, we think that this should not belong to nova but other
>> | projects. Till now, I did not find a good place where we can put
>> | this in, can any of you show some comments?
>> |
>> |
>> |
>> | [1]
>> |
>> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
>> |
>> | --
>> |
>> |
>> | Thanks,
>> |
>> | Jay
>> |
>> | ___
>> | OpenStack-dev mailing list
>> | OpenStack-dev@lists.openstack.org
>> | http://lists.op

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Tue, 25 Feb 2014 20:52:14 -0800
Dan Smith  wrote:

> > This would reduce the amount of duplication which is required (I
> > doubt we could remove all duplication though) and whether its worth
> > it for say the rescue example is debatable. But for those cases
> > you'd only need to make the modification in one file.
> 
> Don't forget the cases where the call chain changes -- where we end up
> calling into conductor instead of compute, or changing how we fetch
> complicated information (like BDMs) that we end up needing to send to
> something complicated like the run_instance call. As we try to evolve
> compute/api.py to do different things, the changes we have to drive
> into the api/openstack/ code will continue.

Sure, but won't most of those continue to be abstracted away by this
new common layer? It's not like the rework will expect any new data in
the request, or ultimately any new data returned in the response
(because these would all involve API changes).

> We've already established that we can get a version from the client on
> an incoming request, so why wouldn't we start with v2 and evolve the
> important pieces instead of explicitly making the decision to break
> everyone by moving to v3 and *then* start with that practice?

Because the V2 API code, which we want to keep stable, is really fragile.
The cost of breaking it is high, as we probably won't catch many
cases in the gate or review, and we'll end up breaking clients when
providers deploy it. So doing a backport is a huge amount of work
with a large amount of risk attached, and compared to starting afresh
with the V3 work we'd be making infrastructure changes to a live, moving
target.

It's the same sort of reason that library developers on occasion
decide to release a new major version which they know is not backwards
compatible, even if it means some sort of dual support for a period.

Chris



Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread W Chan
The following link is the google doc of the proposed engine/executor
message flow architecture.
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine where one or more engine
sends requests over a transport to one or more executors.  The executor
client, transport, and executor server follow the RPC client/server design
pattern in oslo.messaging.

The other diagram represents the local engine.  In reality, it's following the
same RPC client/server design pattern.  The only difference is that it'll
be configured to use a fake RPC backend driver.  The fake driver uses in-process
queues shared between a pair of engine and executor.
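
For illustration only, here is a minimal sketch of that RPC client/server pattern with
oslo.messaging (the topic name 'mistral_executor' and the endpoint/method names are my
assumptions, not the final Mistral code):

# Sketch only -- assumes oslo.messaging with the default rabbit driver;
# pointing the transport at a fake:// URL gives the in-process local engine.
from oslo.config import cfg
from oslo import messaging

class ExecutorEndpoint(object):
    # Server-side endpoint exposed over RPC; the real handler would run the
    # task's action and report the result back to the engine.
    def handle_task(self, ctxt, task):
        return {'task_id': task['id'], 'state': 'SUCCESS'}

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='mistral_executor', server='executor-1',
                          version='1.0')

# Executor server, started by the executor launcher.
server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()],
                                  executor='blocking')
server.start()

# Engine side: the executor client; cast() is async, call() is sync.
client = messaging.RPCClient(
    transport, messaging.Target(topic='mistral_executor', version='1.0'))
client.cast({}, 'handle_task', task={'id': '123'})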

The following are the stepwise changes I will make.
1) Keep the local and scalable engine structure intact.  Create the
Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
the Executor Server at ./mistral/engine/scalable/executor/service.py and
implement the task operations under
./mistral/engine/scalable/executor/executor.py.  Delete
./mistral/engine/scalable/executor/executor.py.  Modify the launcher
./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
to use the Executor Client instead of sending the message directly to
rabbit via pika.  The sum of this is an atomic change that keeps the existing
structure without breaking the code.
2) Remove the local engine.
https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
3) Implement versioning for the engine.
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
4) Port abstract engine to use oslo.messaging and implement the engine
client, engine server, and modify the API layer to consume the engine
client.
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
.

Winson


On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov wrote:

>
> On 25 Feb 2014, at 02:21, W Chan  wrote:
>
> Renat,
>
> Regarding your comments on change https://review.openstack.org/#/c/75609/,
> I don't think the port to oslo.messaging is just a swap from pika to
> oslo.messaging.  OpenStack services as I understand is usually implemented
> as an RPC client/server over a messaging transport.  Sync vs async calls
> are done via the RPC client call and cast respectively.  The messaging
> transport is abstracted and concrete implementation is done via
> drivers/plugins.  So the architecture of the executor if ported to
> oslo.messaging needs to include a client, a server, and a transport.  The
> consumer (in this case the mistral engine) instantiates an instance of the
> client for the executor, makes the method call to handle task, the client
> then sends the request over the transport to the server.  The server picks
> up the request from the exchange and processes the request.  If cast
> (async), the client side returns immediately.  If call (sync), the client
> side waits for a response from the server over a reply_q (a unique queue
> for the session in the transport).  Also, oslo.messaging allows versioning
> in the message. Major version change indicates API contract changes.  Minor
> version indicates backend changes but with API compatibility.
>
>
> My main concern about this patch is not related with messaging
> infrastructure. I believe you know better than me how it should look like.
> I'm mostly concerned with the way of making changes you chose. From my
> perspective, it's much better to make atomic changes where every change
> doesn't affect too much of the existing architecture. So the first step could
> be to change pika to oslo.messaging with minimal structural changes, without
> introducing versioning (could be just a TODO comment saying that the
> framework allows it and we may want to use it in the future, to be decided),
> without getting rid of the current engine structure (local, scalable). Some
> of the things in the file structure and architecture came from the
> decisions made by many people and we need to be careful about changing them.
>
>
> So, where I'm headed with this change...  I'm implementing the basic
> structure/scaffolding for the new executor service using oslo.messaging
> (default transport with rabbit).  Since the whole change will take a few
> rounds, I don't want to disrupt any changes that the team is making at the
> moment and so I'm building the structure separately.  I'm also adding
> versioning (v1) in the module structure to anticipate any versioning
> changes in the future.   I expect the change request will lead to some
> discussion as we are doing here.  I will migrate the core operations of the
> executor (handle_task, handle_task_error, do_task_action) to the server
> component when we agree on the architecture and switch the con

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-25 Thread Irena Berezovsky
Hi Sandhya,
I mentioned the port state with regard to the expected operations that can be 
applied to a neutron port after it is already bound to a certain virtual 
interface. 
Since in my case there will be a neutron L2 agent on the host, it will manage the 
port admin state locally. I am not sure how it should work in your case, or whether 
you need an L2 agent for this.

BR,
Irena

-Original Message-
From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com] 
Sent: Tuesday, February 25, 2014 4:19 PM
To: OpenStack Development Mailing List (not for usage questions); Irena 
Berezovsky; Robert Kukura; Robert Li (baoli); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi,
As a follow up from today's IRC, Irena, are you looking to write the below 
mentioned Base/Mixin class that inherits from AgentMechanismDriverBase class? 
When you mentioned port state, were you referring to the 
validate_port_binding() method?

Pls clarify.

Thanks,
Sandhya

On 2/6/14 7:57 AM, "Sandhya Dasu (sadasu)"  wrote:

>Hi Bob and Irena,
>   Thanks for the clarification. Irena, I am not opposed to a 
>SriovMechanismDriverBase/Mixin approach, but I want to first figure out 
>how much common functionality there is. Have you already looked at this?
>
>Thanks,
>Sandhya
>
>On 2/5/14 1:58 AM, "Irena Berezovsky"  wrote:
>
>>Please see inline my understanding
>>
>>-Original Message-
>>From: Robert Kukura [mailto:rkuk...@redhat.com]
>>Sent: Tuesday, February 04, 2014 11:57 PM
>>To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for 
>>usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
>>(brbowen)
>>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
>>binding of ports
>>
>>On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
>>> Hi,
>>>  I have a couple of questions for ML2 experts regarding support 
>>> of SR-IOV ports.
>>
>>I'll try, but I think these questions might be more about how the 
>>various SR-IOV implementations will work than about ML2 itself...
>>
>>> 1. The SR-IOV ports would not be managed by ova or linuxbridge L2 
>>> agents. So, how does a MD for SR-IOV ports bind/unbind its ports to 
>>> the host? Will it just be a db update?
>>
>>I think whether or not to use an L2 agent depends on the specific 
>>SR-IOV implementation. Some (Mellanox?) might use an L2 agent, while 
>>others
>>(Cisco?) might put information in binding:vif_details that lets the 
>>nova VIF driver take care of setting up the port without an L2 agent.
>>[IrenaB] Based on VIF_Type that MD defines, and going forward with 
>>other binding:vif_details attributes, VIFDriver should do the VIF pluging 
>>part.
>>As for required networking configuration is required, it is usually 
>>done either by L2 Agent or external Controller, depends on MD.
>>
>>> 
>>> 2. Also, how do we handle the functionality in mech_agent.py, within 
>>> the SR-IOV context?
>>
>>My guess is that those SR-IOV MechanismDrivers that use an L2 agent 
>>would inherit the AgentMechanismDriverBase class if it provides useful 
>>functionality, but any MechanismDriver implementation is free to not 
>>use this base class if its not applicable. I'm not sure if an 
>>SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being 
>>planned, and how that would relate to AgentMechanismDriverBase.
>>
>>[IrenaB] Agree with Bob, and as I stated before I think there is a 
>>need for SriovMechanismDriverBase/Mixin that provides all the generic 
>>functionality and helper methods that are common to SRIOV ports.
>>-Bob
>>
>>> 
>>> Thanks,
>>> Sandhya
>>> 
>>> From: Sandhya Dasu mailto:sad...@cisco.com>>
>>> Reply-To: "OpenStack Development Mailing List (not for usage 
>>>questions)"
>>> >> >
>>> Date: Monday, February 3, 2014 3:14 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> >> >, Irena Berezovsky  
>>>mailto:ire...@mellanox.com>>, "Robert Li (baoli)"
>>> mailto:ba...@cisco.com>>, Robert Kukura  
>>>mailto:rkuk...@redhat.com>>, "Brian Bowen  
>>>(brbowen)" mailto:brbo...@cisco.com>>
>>> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV  
>>>extra hr of discussion today
>>> 
>>> Hi,
>>> Since, openstack-meeting-alt seems to be in use, baoli and 
>>> myself are moving to openstack-meeting. Hopefully, Bob Kukura & 
>>> Irena can join soon.
>>> 
>>> Thanks,
>>> Sandhya
>>> 
>>> From: Sandhya Dasu mailto:sad...@cisco.com>>
>>> Reply-To: "OpenStack Development Mailing List (not for usage 
>>>questions)"
>>> >> >
>>> Date: Monday, February 3, 2014 1:26 PM
>>> To: Irena Berezovsky >>>, "Robert Li (baoli)" >>>, Robert Kukura >>>, "OpenStack Development Mailing List 
>>>(not for usage questions)"
>>> >> 

Re: [openstack-dev] Libvirt Resize/Cold Migrations and SSH

2014-02-25 Thread Jesse Pretorius
On 25 February 2014 23:05, Solly Ross  wrote:

> 1. to detect shared storage
> 2. to create the directory for the instance on the destination system
> 3. to copy the disk image from the source to the destination system (uses
> either rysnc over ssh or scp)
> 4. to remove the directory created in (2) in case of an error during the
> process
>
> So here's my question: can number 3 be "eliminated", so to speak?  Having
> to give full SSH permissions for a file copy seems a bit overkill (we
> could, for example, run an rsync daemon, in which case
> rsync would connect via the daemon and not ssh).  Is it worth it?
>  Additionally, if we do not eliminate number 3, is it worth it to refactor
> the code to eliminate numbers 2 and 4 (I already have code
> to eliminate number 1 -- see https://gist.github.com/DirectXMan12/9217699
> ).
>

Running an rsync daemon would seem to me like a deployer preference, and
some would perhaps prefer not to.

I would rather see rsync (via a daemon) as an additional option than as a
wholesale replacement. Perhaps over time the others can be deprecated
if they're not really being used and the rsync option becomes the
preferred one.

Perhaps in the future other options would be preferred as well?


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Kenichi Oomichi

> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Wednesday, February 26, 2014 11:33 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
> 
> On Tue, 25 Feb 2014 09:26:09 -0500
> Sean Dague  wrote:
> >
> > What I want out of Nova API at the end of the day:
> >
> > 1. a way to discover what the API is
> >
> > because this massively simplifies writing clients, SDKs, tests, and
> > documentation. All those pipelines are terribly manual, and have
> > errors in them because of it. Like has been said before you actually
> > need to read the Nova source code to figure out how to use parts of
> > the API.
> >
> > I think this is a great example of that -
> >
> https://blog.heroku.com/archives/2014/1/8/json_schema_for_heroku_platform_api?utm_source=newsletter&utm_medium=email
> &utm_campaign=januarynewsletter&mkt_tok=3RkMMJWWfF9wsRonuKzNZKXonjHpfsX57OQtX6SxlMI%2F0ER3fOvrPUfGjI4AScJrI%2BSLDwEY
> GJlv6SgFQrjAMapmyLgLUhE%3D
> >
> 
> So from what I understand I think the jsonschema work that Ken'ichi has
> been working on for V3 goes a fair way in being able to support this
> sort of thing.

Yes, right.
In the sample Sean pointed to, the API reference documentation is
maintained using jsonschema. That is the best situation I could hope for for
Nova API documentation: we would be able to generate complete API
documentation that stays synchronized with the API implementation.

As a first step, I've proposed an API sample generator [1] which auto-
generates API sample files from the API schema. I created a prototype [2], and
I've confirmed it is not so difficult to implement.


> The jsonschema we'd be able to provide for V2 however is a bit trickier
> as we'd have to leave the input validation pretty loose (and probably in
> rather wierdly inconsistent ways because that's how its implemented) in
> most cases.

Right. If we applied strict jsonschema validation to the v2 API, backward
incompatibility issues would happen, so we would have to apply loose validation
to the v2 API if we need it, but I am not sure such v2 API validation is
worth it.
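
Just to illustrate the strict vs. loose distinction (this is not the Nova validation
code; the decorator and schema below are assumptions), a jsonschema-based body check
looks roughly like this, where "additionalProperties" decides whether extra keys are
rejected (v3-style) or ignored (v2-style):

# Sketch only, using the plain jsonschema library.
import functools
import jsonschema

create_server_schema = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
        'imageRef': {'type': 'string'},
    },
    'required': ['name'],
    # False -> strict (v3-style): unexpected keys cause a validation error.
    # True  -> loose (v2-style): unexpected keys are silently accepted.
    'additionalProperties': False,
}

def validated(schema):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, body, *args, **kwargs):
            try:
                jsonschema.validate(body, schema)
            except jsonschema.ValidationError as e:
                # In the API layer this would be turned into a 400 BadRequest.
                raise ValueError('Invalid request body: %s' % e.message)
            return func(self, body, *args, **kwargs)
        return wrapper
    return decorator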


> > = Current tensions =
> >
> > Cloud providers want v2 "for a long time" (which, honestly, was news).
> > I'm not sure I completely know what that means. Are we talking about
> > the bug for bug surface that is today? Are we talking about not
> > wanting to change endpoints?
> 
> So the problem here is what we consider a "bug" becomes a feature from
> a user of the API point of view. Eg they really shouldn't be passing
> some data in a request, but its ignored and doesn't cause any issues
> and the request ends up doing what they expect.

In addition, the current v2 API behavior is not consistent when receiving
unexpected API parameters. Most v2 APIs ignore unexpected API parameters,
but some v2 APIs return a BadRequest response. For example, the "update host"
API does so in this case:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/hosts.py#L185

Through the v3 API development, we are making all v3 APIs return a BadRequest
in this case. I think we cannot apply this kind of strict validation to
the running v2 API.


Thanks
ken'ichi Ohmichi

---
[1]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026537.html
[2]: https://review.openstack.org/#/c/71465/



Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova
Also, we have to mention Adam's letter - he now says he would love to see
start/end date functionality in Keystone.

If so, we may store this info in Keystone - but anyway, I suppose it
might be somewhat duplicated in Climate so that we don't have to make one more
request to Keystone when it's needed. I still have no clear idea how that will
look with regard to user rights.

Now we're using trusts + a special admin user. We'll get rid of this special
user in the future, but to work with projects we still need admin rights.

Any ideas?

On Wednesday, February 26, 2014, Dina Belova  wrote:

> Don't think it's needed in this case. We may store this info in Climate
> not to intersect with Keystone without serious reasons.
>
> On Wednesday, February 26, 2014, Sanchez, Cristian A <
> cristian.a.sanc...@intel.com>
> wrote:
>
>> One question to clarify: the project will be marked as reservable by
>> calling Keystone API (from Climate) to store that info in the project extra
>> specs in Keystone DB.
>> Is this correct?
>>
>> From: Sylvain Bauza > sylvain.ba...@gmail.com>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org> openstack-dev@lists.openstack.org>>
>> Date: martes, 25 de febrero de 2014 17:55
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org> openstack-dev@lists.openstack.org>>
>> Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design
>>
>>
>>
>>
>> 2014-02-25 17:42 GMT+01:00 Dina Belova > dbel...@mirantis.com>>:
>>
>> >>> I think it should be a Climate "policy" (be careful, the name is
>> confusing) : if admin wants to grant any new project for reservations, he
>> should place a call to Climate. That's up to Climate-Nova (ie. Nova
>> extension) to query Climate in order to see if project has been granted or
>> not.
>>
>> Now I think that it'll be better, yes.
>> I see some workflow like:
>>
>> 1) Mark project as reservable in Climate
>> 2) When some resource is created (like Nova instance) it should be
>> checked (in the API extensions, for example) via Climate if project is
>> reservable. If is, and there is no special reservation flags passed, it
>> should be used default_reservation stuff for this instance
>>
>> Sylvain, is that the idea you're talking about?
>>
>>
>> tl;dr : Yes, let's define/create a new endpoint for the need.
>>
>> That's exactly what I'm thinking, Climate should manage reservations on
>> its own (including any new model) and projects using it for reserving
>> resources should place a call to it in order to get some information.
>>
>> -Sylvain
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>

-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova
I don't think it's needed in this case. We may store this info in Climate so as not
to intersect with Keystone without serious reason.

On Wednesday, February 26, 2014, Sanchez, Cristian A <
cristian.a.sanc...@intel.com> wrote:

> One question to clarify: the project will be marked as reservable by
> calling Keystone API (from Climate) to store that info in the project extra
> specs in Keystone DB.
> Is this correct?
>
> From: Sylvain Bauza  sylvain.ba...@gmail.com >>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org  openstack-dev@lists.openstack.org >>
> Date: martes, 25 de febrero de 2014 17:55
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org  openstack-dev@lists.openstack.org >>
> Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design
>
>
>
>
> 2014-02-25 17:42 GMT+01:00 Dina Belova 
> >:
>
> >>> I think it should be a Climate "policy" (be careful, the name is
> confusing) : if admin wants to grant any new project for reservations, he
> should place a call to Climate. That's up to Climate-Nova (ie. Nova
> extension) to query Climate in order to see if project has been granted or
> not.
>
> Now I think that it'll be better, yes.
> I see some workflow like:
>
> 1) Mark project as reservable in Climate
> 2) When some resource is created (like Nova instance) it should be checked
> (in the API extensions, for example) via Climate if project is reservable.
> If is, and there is no special reservation flags passed, it should be used
> default_reservation stuff for this instance
>
> Sylvain, is that the idea you're talking about?
>
>
> tl;dr : Yes, let's define/create a new endpoint for the need.
>
> That's exactly what I'm thinking, Climate should manage reservations on
> its own (including any new model) and projects using it for reserving
> resources should place a call to it in order to get some information.
>
> -Sylvain
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Wed, 26 Feb 2014 15:14:55 +1030
Christopher Yeoh  wrote:

> On Tue, 25 Feb 2014 10:37:14 +
> John Garbutt  wrote:
> > 
> 
> So I was pondering if its possible to write a decorator for v3 (not
> json schema because we have to do some crazy stuff) that does the
> equivalent of V2 input validation. Getting it right for perfectly good
> input would not be that hard. But getting all the quirks of V2 just
> right would be very tricky and error prone, with no tests.

So I sent this a bit prematurely and meant to say here that this
decorator, in addition to doing V2-type input validation, would also
transform the incoming data into the form that V3 expects, and
also filter out the junk that the strong input validation of V3 would
reject.

On the way out it could transform success return codes (ick! But it
might cover all cases) to keep backwards compatibility.
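
Purely as a sketch of the shape such a decorator could take (hypothetical names, not
actual Nova code):

# Hypothetical sketch: wrap a v3-style handler so it tolerates v2-style input
# and returns the status code v2 used to return.
import functools

def v2_compat(allowed_keys, v2_success_code=200):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, req, body, *args, **kwargs):
            # Drop the junk keys v2 silently ignored but strict v3
            # validation would reject.
            filtered = dict((k, v) for k, v in body.items()
                            if k in allowed_keys)
            resp = func(self, req, filtered, *args, **kwargs)
            # Remap the v3 success code back to the v2 one on the way out
            # (assuming a webob-style response object).
            resp.status_int = v2_success_code
            return resp
        return wrapper
    return decorator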

Yeah, it's really ugly, but it's way better than trying to get v2 and v3
(or v3 improvements) to co-exist in the same place.

Chris



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
> This would reduce the amount of duplication which is required (I doubt
> we could remove all duplication though) and whether its worth it for say
> the rescue example is debatable. But for those cases you'd only need to make
> the modification in one file.

Don't forget the cases where the call chain changes -- where we end up
calling into conductor instead of compute, or changing how we fetch
complicated information (like BDMs) that we end up needing to send to
something complicated like the run_instance call. As we try to evolve
compute/api.py to do different things, the changes we have to drive into
the api/openstack/ code will continue.

However, remember I've maintained that I think we can unify a lot of the
work to make using almost the same code work for multiple ways of
accessing the API. I think it's much better to structure it as multiple
views into the same data. My concern with the v2 -> v3 approach, is that
I think instead of explicitly duplicating 100% of everything and then
looking for ways to squash the two pieces back together, we should be
making calculated changes and supporting just the delta. I think that if
we did that, we'd avoid making simple naming and CamelCase changes as
I'm sure we'll try to avoid starting from v3. Why not start with v2?

We've already established that we can get a version from the client on
an incoming request, so why wouldn't we start with v2 and evolve the
important pieces instead of explicitly making the decision to break
everyone by moving to v3 and *then* start with that practice?

--Dan




Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Tue, 25 Feb 2014 10:37:14 +
John Garbutt  wrote:
> 
> Now I am tempted to say we morph the V3 code to also produce the V2
> responses. And change the v3 API, so thats easier to do, and easier
> for clients to move (like don't change URLs unless we really have to).
> I know the risk for screwing that up is enormous, but maybe that makes
> the most sense?

So I was thinking about this and Ken'ichi has basically said pretty
much the same thing in his reply to this thread. I don't think it
makes client moves any easier though - this is all about lowering our
maintenance costs. 

So for the V3 API, where the jsonschema input validation patches have
merged, static input validation (ie not things like "does this
server exist") is all done in the decorator now, outside of the actual
method. This makes the v3 API code much cleaner.
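
(For anyone who hasn't looked at those patches, the pattern is roughly the
following -- a simplified sketch rather than the actual Nova implementation,
and the schema shown here is made up.)

# Simplified sketch of decorator-based input validation; not the actual
# Nova code, and this schema is only an example.
import functools
import jsonschema

rescue_schema = {
    'type': 'object',
    'properties': {
        'rescue': {
            'type': ['object', 'null'],
            'properties': {'adminPass': {'type': 'string'}},
            'additionalProperties': False,
        },
    },
    'required': ['rescue'],
    'additionalProperties': False,
}

def validated(schema):
    """Reject malformed request bodies before the API method ever runs."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, req, body=None, **kwargs):
            jsonschema.validate(body, schema)  # raises ValidationError
            return method(self, req, body=body, **kwargs)
        return wrapper
    return decorator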

So I was pondering if it's possible to write a decorator for v3 (not
json schema, because we have to do some crazy stuff) that does the
equivalent of V2 input validation. Getting it right for perfectly good
input would not be that hard, but getting all the quirks of V2 just
right would be very tricky and error prone, with no tests.

Mind you, that's not a lot different from porting v3 back to v2 either.
There's also significant risk of accidentally changing the v2 API in a
way we don't intend.

But regardless, I think it's only something that would be feasible to
attempt once V2 is frozen, and only worth considering if the
deprecation period for V2 is very long (>2 years).

I think we really do need to get a better quantitative grip on what this
dual maintenance burden actually is. I don't think it's too hard
to measure the gate/check impact (more VMs please!), but in terms of
developer and review time overhead, what do we think it will be, and what
is acceptable and what isn't?

Because that needs to be somehow balanced against how long (hand wave)
we'll have to keep dual support for, and the developer cost and
review time to do any backport work. Plus throw in what sort of code
base we end up with in the end.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread Renat Akhmerov
Yes, right. Thanks Winson.

Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 01:39, W Chan  wrote:

> Sure.  Let me give this some thoughts and work with you separately.  Before 
> we speak up, we should have a proposal for discussion.
> 
> 
> On Mon, Feb 24, 2014 at 9:53 PM, Dmitri Zimine  wrote:
> Winson, 
> 
> While you're looking into this and working on the design, may be also think 
> through other executor/engine communications.
> 
> We talked about executor communicating to engine over 3 channels (DB, REST, 
> RabbitMQ) which I wasn't happy about ;) and put it off for some time. May be 
> it can be rationalized as part of your design. 
> 
> DZ. 
> 
> On Feb 24, 2014, at 11:21 AM, W Chan  wrote:
> 
>> Renat,
>> 
>> Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
>> don't think the port to oslo.messaging is just a swap from pika to 
>> oslo.messaging.  OpenStack services as I understand is usually implemented 
>> as an RPC client/server over a messaging transport.  Sync vs async calls are 
>> done via the RPC client call and cast respectively.  The messaging transport 
>> is abstracted and concrete implementation is done via drivers/plugins.  So 
>> the architecture of the executor if ported to oslo.messaging needs to 
>> include a client, a server, and a transport.  The consumer (in this case the 
>> mistral engine) instantiates an instance of the client for the executor, 
>> makes the method call to handle task, the client then sends the request over 
>> the transport to the server.  The server picks up the request from the 
>> exchange and processes the request.  If cast (async), the client side 
>> returns immediately.  If call (sync), the client side waits for a response 
>> from the server over a reply_q (a unique queue for the session in the 
>> transport).  Also, oslo.messaging allows versioning in the message. Major 
>> version change indicates API contract changes.  Minor version indicates 
>> backend changes but with API compatibility.  
>> 
>> So, where I'm headed with this change...  I'm implementing the basic 
>> structure/scaffolding for the new executor service using oslo.messaging 
>> (default transport with rabbit).  Since the whole change will take a few 
>> rounds, I don't want to disrupt any changes that the team is making at the 
>> moment and so I'm building the structure separately.  I'm also adding 
>> versioning (v1) in the module structure to anticipate any versioning changes 
>> in the future.   I expect the change request will lead to some discussion as 
>> we are doing here.  I will migrate the core operations of the executor 
>> (handle_task, handle_task_error, do_task_action) to the server component 
>> when we agree on the architecture and switch the consumer (engine) to use 
>> the new RPC client for the executor instead of sending the message to the 
>> queue over pika.  Also, the launcher for ./mistral/cmd/task_executor.py will 
>> change as well in subsequent round.  An example launcher is here 
>> https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
>>   The interceptor project here is what I use to research how oslo.messaging 
>> works.  I hope this is clear. The blueprint only changes how the request and 
>> response are being transported.  It shouldn't change how the executor 
>> currently works.
>> 
>> Finally, can you clarify the difference between local vs scalable engine?  I 
>> personally do not prefer to explicitly name the engine scalable because this 
>> requirement should be in the engine by default and we do not need to 
>> explicitly state/separate that.  But if this is a roadblock for the change, 
>> I can put the scalable structure back in the change to move this forward.
>> 
>> Thanks.
>> Winson
>> 
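
[For reference, a rough sketch of the client/server split Winson describes
above, using oslo.messaging. Simplified and not actual Mistral code; the
endpoint class, topic and method names are placeholders.]

# Rough sketch of an oslo.messaging RPC server and client; illustrative only.
from oslo.config import cfg
from oslo import messaging


class ExecutorEndpoint(object):
    """Server side: picks task requests off the transport and runs them."""
    target = messaging.Target(version='1.0')

    def handle_task(self, ctxt, task):
        # ... run the action and report the result back to the engine ...
        pass


def run_executor_server():
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='mistral.executor', server='executor-1')
    server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()])
    server.start()
    server.wait()


def make_executor_client():
    """Client side: what the engine would hold to hand tasks off."""
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='mistral.executor', version='1.0')
    return messaging.RPCClient(transport, target)

# client.cast(ctxt, 'handle_task', task=task)  -> async, returns immediately
# client.call(ctxt, 'handle_task', task=task)  -> sync, waits on the reply queue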

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral]

2014-02-25 Thread Renat Akhmerov
Thanks, Dmitri.

I like the idea; what you suggested mostly looks fine to me. Just want to
clarify something about the snippet you provided.

Workflow:
  tasks:
    timeInToronto:
      action: std:REST_API
      parameters:
        baseUrl: "http://api.timezonedb.com"
        method: "GET"
        parameters: "zone=/America/Toronto&key="

Services:
  TimeService:
    type: REST_API
    parameters:
      baseUrl: http://api.timezonedb.com
      key:
    actions:
      get-time:
        task-parameters:
          zone:

Task “timeInToronto” has property “parameters” which in turn also has property 
“parameters”. Could you please explain your intentions here? Maybe we should 
have just one section “parameters”?

Btw, the more I think about all these parameters the more I come to realize 
that we need to redesign this part significantly. The reason is that I’m 
currently working on the first Data Flow implementation and I feel that not 
everything is good with our understanding of what parameters are, at least with
my understanding :). But I think it makes sense to start a new thread to discuss
this in detail.

Thanks
 
Renat Akhmerov
@ Mirantis Inc.



On 26 Feb 2014, at 07:32, Dmitri Zimine  wrote:

> I have created a blueprint to capture the intention to simplify calling 
> standard actions:

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Zhangleiqiang
Hi, Jay & Sylvain:

I found that the OpenStack-Neat project (http://openstack-neat.org/) already
aims to do things similar to DRS and DPM.

Hope it will be helpful.


--
Leiqzhang

Best Regards

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Wednesday, February 26, 2014 9:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage compute/storage resource

Hi Tim,

As I read your design document, it sounds more closely related to
something the Solver Scheduler subteam is trying to focus on, i.e. intelligent,
agnostic resource placement in a holistic way [1].
IIRC, Jay is more likely talking about adaptive scheduling decisions based on
feedback, with potential counter-measures that can be taken to decrease load
and preserve QoS of nodes.

That said, maybe I'm wrong ?

[1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler

2014-02-26 1:09 GMT+01:00 Tim Hinrichs 
mailto:thinri...@vmware.com>>:
Hi Jay,

The Congress project aims to handle something similar to your use cases.  I 
just sent a note to the ML with a Congress status update with the tag 
[Congress].  It includes links to our design docs.  Let me know if you have 
trouble finding it or want to follow up.

Tim

- Original Message -
| From: "Sylvain Bauza" 
mailto:sylvain.ba...@gmail.com>>
| To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
| Sent: Tuesday, February 25, 2014 3:58:07 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage
| compute/storage resource
|
|
|
| Hi Jay,
|
|
| Currently, the Nova scheduler only acts upon user request (either
| live migration or boot an instance). IMHO, that's something Gantt
| should scope later on (or at least there could be some space within
| the Scheduler) so that Scheduler would be responsible for managing
| resources on a dynamic way.
|
|
| I'm thinking of the Pets vs. Cattle analogy, and I definitely think
| that Compute resources could be treated like Pets, provided the
| Scheduler does a move.
|
|
| -Sylvain
|
|
|
| 2014-02-26 0:40 GMT+01:00 Jay Lau < 
jay.lau@gmail.com > :
|
|
|
|
| Greetings,
|
|
| Here I want to bring up an old topic here and want to get some input
| from you experts.
|
|
| Currently in nova and cinder, we only have some initial placement
| polices to help customer deploy VM instance or create volume storage
| to a specified host, but after the VM or the volume was created,
| there was no policy to monitor the hypervisors or the storage
| servers to take some actions in the following case:
|
|
| 1) Load Balance Policy: If the load of one server is too heavy, then
| probably we need to migrate some VMs from high load servers to some
| idle servers automatically to make sure the system resource usage
| can be balanced.
|
| 2) HA Policy: If one server goes down for some hardware failure or
| whatever reasons, there is no policy to make sure the VMs can be
| evacuated or live migrated (Make sure migrate the VM before server
| goes down) to other available servers to make sure customer
| applications will not be affect too much.
|
| 3) Energy Saving Policy: If a single host load is lower than
| configured threshold, then low down the frequency of the CPU to save
| energy; otherwise, increase the CPU frequency. If the average load
| is lower than configured threshold, then shutdown some hypervisors
| to save energy; otherwise, power on some hypervisors to load
| balance. Before power off a hypervisor host, the energy policy need
| to live migrate all VMs on the hypervisor to other available
| hypervisors; After Power on a hypervisor host, the Load Balance
| Policy will help live migrate some VMs to the new powered
| hypervisor.
|
| 4) Customized Policy: Customer can also define some customized
| policies based on their specified requirement.
|
| 5) Some run-time policies for block storage or even network.
|
|
|
| I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there are
| indeed many customers who want such features.
|
|
|
| I have filed a bp here [1] long ago, but after some discussion with
| Russell, we think that this should not belong to nova but other
| projects. Till now, I did not find a good place where we can put
| this in, can any of you show some comments?
|
|
|
| [1]
| https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
|
| --
|
|
| Thanks,
|
| Jay
|

Re: [openstack-dev] [Keystone] Tenant expiration dates

2014-02-25 Thread Adam Young

On 02/24/2014 08:41 AM, Dina Belova wrote:

Cristian, hello

I believe that should not be done in such a direct way, really.
Why not use the project.extra field in the DB to store this info? Is that
not appropriate for your ideas, or will there be problems with
implementing it using extras?


  It would not make sense to enforce on something that was not 
queryable directly in the database.  Please don't use extra.  I'd like 
to see it removed.  It certainly should not be used for core behavior.


I think start/end datetimes make sense, and could be part of the project 
itself.  Please write up the blueprint.
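
(Purely as an illustration of what enforcing such a window could look like at
token-issue time -- the start_date/end_date attributes are hypothetical new
project columns, and the exception is a stand-in for whatever Keystone would
actually raise.)

# Illustration only; not Keystone code.
import datetime

def assert_project_active(project_ref):
    now = datetime.datetime.utcnow()
    start = project_ref.get('start_date')
    end = project_ref.get('end_date')
    if (start and now < start) or (end and now > end):
        # Keystone would raise its own Unauthorized/NotFound here instead.
        raise ValueError('Project %s is outside its validity window'
                         % project_ref['id'])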



Thanks,
Dina


On Mon, Feb 24, 2014 at 5:25 PM, Sanchez, Cristian A 
mailto:cristian.a.sanc...@intel.com>> 
wrote:


Hi,
I'm thinking about creating a blueprint to allow the creation of
tenants with a start-date and end-date. These
dates will define a time window in which the tenant is considered
'enabled' and auth tokens will be given only when the current time is
between those dates.
This can be particularly useful for projects like Climate where
resources are reserved. And any resource (like VMs) created for a
tenant will have the same expiration dates as the tenant.

Do you think this is something that can be added to Keystone?

Thanks

Cristian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread zhangyu (AI)
Got it. Thanks for clarification!~

From: Jay S Bryant [mailto:jsbry...@us.ibm.com]
Sent: Wednesday, February 26, 2014 11:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

I would agree.  I don't think that Cinder should/could be able to act upon 
Nova's state for the VM.  Force-delete is really in place as a backup to 
clean-up after certain failures in Cinder.  Other mechanisms are in place to 
handle issues in Nova.


Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey




From:"zhangyu (AI)" mailto:zhangy...@huawei.com>>
To:"OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>,
Cc:"Luohao \(brian\)" 
mailto:brian.luo...@huawei.com>>
Date:02/25/2014 08:20 PM
Subject:Re: [openstack-dev] [Cinder] [Nova]Do you think volume force 
delete operation should not apply to the volume being used?




IMHO, Attach/detach operations can only be issued from the Nova side because 
they are in fact VM/instance management operations.
Meanwhile, volume create/delete are volume management stuff, therefore Cinder
exposes APIs for them.

Also, according to current Cinder code base, no nova detach-volume action is 
issued from the execution flow of a volume deletion.

Thank you for suggestions~

From: Yuzhou (C) [mailto:vitas.yuz...@huawei.com]
Sent: Wednesday, February 26, 2014 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

I think force delete = nova detach volume, then cinder delete volume.

Volume status in the db should be modified after nova detach volume.

Thanks!


From: zhangyu (AI) [mailto:zhangy...@huawei.com]
Sent: Wednesday, February 26, 2014 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

If I understand your question correctly, the case you describe should be like 
the following:

Assume we have created both an instance and a volume, then we try to  attach 
that volume to the instance.
Before that operation is completed (the status of the volume is “attaching” 
now), for whatever reasons we decide to apply a “force delete” operation on 
that volume.
Then, after we applied that force delete, we come to see that, from the Cinder 
side, the volume has been successfully deleted and the status is surely 
“deleted”.
However, from the Nova side, we see that the status of the deleted volume 
remains to be “attaching”.

If this is truly your case, I think it is a bug. The reason might be that
Cinder forgets to refresh the attach_status attribute of a volume in the DB when
applying a “force delete” operation.
Are there any other suggestions?

Thanks!



From: yunling [mailto:yunlingz...@hotmail.com]
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete operation 
should not apply to the volume being used?

Hi stackers:


I found that the volume status becomes inconsistent (nova volume status is
attaching, versus cinder volume status is deleted) between nova and cinder when
doing a volume force delete operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being
used, which includes the attach statuses of attaching, attached and detached.


What do you think?


thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread Jay S Bryant
I would agree.  I don't think that Cinder should/could be able to act upon 
Nova's state for the VM.  Force-delete is really in place as a backup to 
clean-up after certain failures in Cinder.  Other mechanisms are in place 
to handle issues in Nova.


Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   "zhangyu (AI)" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Cc: "Luohao \(brian\)" 
Date:   02/25/2014 08:20 PM
Subject:Re: [openstack-dev] [Cinder] [Nova]Do you think volume 
force delete operation should not apply to the volume being used?



IMHO, Attach/detach operations can only be issued from the Nova side 
because they are in fact VM/instance management operations. 
Meanwhile, volume create/delete are volume management stuff, therefore
Cinder exposes APIs for them.
 
Also, according to current Cinder code base, no nova detach-volume action 
is issued from the execution flow of a volume deletion.
 
Thank you for suggestions~
 
From: Yuzhou (C) [mailto:vitas.yuz...@huawei.com] 
Sent: Wednesday, February 26, 2014 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force 
delete operation should not apply to the volume being used?
 
I think force delete = nova detach volume, then cinder delete volume.


Volume status in the db should be modified after nova detach volume.
 
Thanks!
 
 
From: zhangyu (AI) [mailto:zhangy...@huawei.com] 
Sent: Wednesday, February 26, 2014 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force 
delete operation should not apply to the volume being used?
 
If I understand your question correctly, the case you describe should be 
like the following:
 
Assume we have created both an instance and a volume, then we try to 
attach that volume to the instance.
Before that operation is completed (the status of the volume is 
“attaching” now), for whatever reasons we decide to apply a “force delete” 
operation on that volume.
Then, after we applied that force delete, we come to see that, from the 
Cinder side, the volume has been successfully deleted and the status is 
surely “deleted”.
However, from the Nova side, we see that the status of the deleted volume 
remains to be “attaching”.
 
If this is truly your case, I think it is a bug. The reason might be that
Cinder forgets to refresh the attach_status attribute of a volume in the
DB when applying a “force delete” operation.
Are there any other suggestions?
 
Thanks!
 
 
 
From: yunling [mailto:yunlingz...@hotmail.com] 
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete 
operation should not apply to the volume being used?
 
Hi stackers: 
 
 
I found that the volume status becomes inconsistent (nova volume status is
attaching, versus cinder volume status is deleted) between nova and cinder
when doing a volume force delete operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being
used, which includes the attach statuses of attaching, attached and
detached.


What do you think?
 
 
thanks


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Yuzhou (C)
If you want to know exactly how the ML2 plugin running on the neutron
server communicates with the openvswitch agents, you should read the
code: rpc.py, plugin.py and ovs_neutron_agent.py.

In rpc.py, the RPC entry points are defined.
The RpcCallbacks class defines the RPCs that the agent sends to the ML2 plugin.
The AgentNotifierApi class defines the RPCs that the ML2 plugin sends to the agent.

In plugin.py, you should pay close attention to "def _setup_rpc(self)"

The ML2 plugin sends RPCs to the agent (the agent processes RPCs from the plugin):
network_delete
port_update
security_groups_rule_updated
security_groups_member_updated
security_groups_provider_updated
tunnel_update

The ML2 plugin processes RPCs from the agent (the agent sends RPCs to the plugin):
report_state
get_device_details
update_device_down
update_device_up
tunnel_sync
security_group_rules_for_devices
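
(As an aside, for the question quoted further down about hooking in by writing
a new mechanism driver: the skeleton is roughly the following. A simplified,
untested sketch -- the class name and log messages are made up; the driver
would be registered under the neutron.ml2.mechanism_drivers entry point and
enabled via the mechanism_drivers option in ml2_conf.ini.)

# Simplified mechanism driver skeleton; illustrative only.
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


class MyMechanismDriver(api.MechanismDriver):

    def initialize(self):
        # One-time setup, e.g. open a connection to your backend.
        LOG.info("MyMechanismDriver initialized")

    def create_port_postcommit(self, context):
        # Called by the ML2 plugin after a port is created and committed to
        # the DB; context.current is the port dict.
        LOG.debug("port created: %s", context.current['id'])

    def update_port_postcommit(self, context):
        # Plugin-side hook on port updates; anything the agent itself must do
        # still goes over the RPC notifications listed above.
        LOG.debug("port updated: %s", context.current['id'])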






>Re: [openstack-dev] Neutron ML2 and openvswitch agent
>Sławek Kapłoński Tue, 25 Feb 2014 12:31:54 -0800
>Hello,

>Trinath, I saw this presentation before you sent it to me. There is a nice
>explanation of what methods are (and should be) in a type driver and mech driver,
>but I needed exactly the information that Assaf sent me. Thanks to both of you for
>your help :)

--
>Best regards
>Sławek Kapłoński
>On Tuesday, 25 February 2014 at 12:18:50, Assaf Muller wrote:

> - Original Message -
> 
> > Hi
> > 
> > Hope this helps
> > 
> > http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
> > 
> > ___
> > 
> > Trinath Somanchi
> > 
> > _
> > From: Sławek Kapłoński [sla...@kaplonski.pl]
> > Sent: Tuesday, February 25, 2014 9:24 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] Neutron ML2 and openvswitch agent
> > 
> > Hello,
> > 
> > I have a question for you guys. Can someone explain to me (or send a link
> > to such an explanation) how exactly the ML2 plugin running on the
> > neutron server communicates with compute hosts running openvswitch
> > agents?
> 
> Maybe this will set you on your way:
> ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
> uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
> the topic stated in that file.
> 
> When the message is received by the OVS agent, it calls:
> neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
> update.
> > I suppose that this works with rabbitmq queues, but I need
> > to add my own function which will be called in this agent, and I don't know
> > how to do that. It would be perfect if such a thing were possible by
> > writing, for example, a new mechanism driver in the ML2 plugin (but how?).
> > Thanks in advance for any help from you :)
> > 
> > --
> > Best regards
> > Slawek Kaplonski
> > sla...@kaplonski.pl
> > 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Tue, 25 Feb 2014 14:47:16 -0800
Dan Smith  wrote:
> > Yeah, so objects is the big one here.
> 
> Objects, and everything else. With no-db-compute we did it for a
> couple cycles, then objects, next it will be retooling flows to
> conductor, then dealing with tasks, talking to gantt, etc. It's not
> going to end any time soon.

So I think there are some areas where the burden of making
changes to two APIs is very low. Take for example adding object support
for unrescue:

diff 
ec78b42d7b7e9da99ba0638a4f6d0aa7c8e5^..ec78b42d7b7e9da99ba0638a4f6d0aa7c8e5
 | diffstat
 api/openstack/compute/contrib/rescue.py|2 +-
 api/openstack/compute/plugins/v3/rescue.py |3 ++-
 compute/manager.py |   14 +++---
 compute/rpcapi.py  |   12 
 tests/compute/test_compute.py  |8 +---
 tests/compute/test_rpcapi.py   |2 +-
 6 files changed, 24 insertions(+), 17 deletions(-)

And the delta for the v2/v3 parts is:

diff --git a/nova/api/openstack/compute/contrib/rescue.py 
b/nova/api/openstack/compute/con
index fe31f2c..0233be2 100644
--- a/nova/api/openstack/compute/contrib/rescue.py
+++ b/nova/api/openstack/compute/contrib/rescue.py
@@ -75,7 +75,7 @@ class RescueController(wsgi.Controller):
 """Unrescue an instance."""
 context = req.environ["nova.context"]
 authorize(context)
-instance = self._get_instance(context, id)
+instance = self._get_instance(context, id, want_objects=True)
 try:
 self.compute_api.unrescue(context, instance)
 except exception.InstanceInvalidState as state_error:
diff --git a/nova/api/openstack/compute/plugins/v3/rescue.py 
b/nova/api/openstack/compute/
index 5ae876b..66b4c17 100644
--- a/nova/api/openstack/compute/plugins/v3/rescue.py
+++ b/nova/api/openstack/compute/plugins/v3/rescue.py
@@ -77,7 +77,8 @@ class RescueController(wsgi.Controller):
 """Unrescue an instance."""
 context = req.environ["nova.context"]
 authorize(context)
-instance = common.get_instance(self.compute_api, context, id)
+instance = common.get_instance(self.compute_api, context, id,
+   want_objects=True)
 try:
 self.compute_api.unrescue(context, instance)
 except exception.InstanceInvalidState as state_error:

eg a one line trivial change in a patch with
 6 files changed, 24 insertions(+), 17 deletions(-)

So in those specific cases I think the v2/v3 dual maintenance burden is very 
low.

But there are also other cases (such as some of the flavors apis) where
the extension basically does:

1. parse incoming data
2. call some flavor code
3. get what is returned and mangle it into a temporary data structure
4. format data for returning to the user

Now 1 and 4 are very v2 and v3 API specific. But 2 and 3 tend to be more
generic (this is not always the case with error paths etc) and do need to be
changed with object transition (and perhaps some of the other changes you are
talking about). eg foo['aaa'] -> foo.aaa. Or adding want_objects=True to 
a method.

Now I still maintain that trying to squeeze both v2 and v3 parsing/formatting
into the same file/method is the wrong thing to do. But we could possibly
expand on nova/api/openstack/common.py and, where we can, push cases of
2 and 3 into it as common methods which the v2/v3 APIs call down into.
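
(Very roughly, something like the following -- an illustrative sketch, not
existing code; get_flavor_info() is a made-up helper name.)

# nova/api/openstack/common.py -- illustrative sketch only
def get_flavor_info(context, flavor_api, flavor_id):
    """Steps 2 and 3: fetch the flavor and mangle it into a neutral dict."""
    flavor = flavor_api.get_flavor_by_flavor_id(flavor_id, ctxt=context)
    return {
        'id': flavor['flavorid'],
        'name': flavor['name'],
        'ram': flavor['memory_mb'],
        'vcpus': flavor['vcpus'],
    }

# The v2 and v3 controllers then keep only steps 1 and 4 (their own parsing
# and response formatting) and both call down into the helper, e.g.:
#   info = common.get_flavor_info(context, self.flavor_api, flavor_id)
#   return self._view_builder.show(req, info)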

This would reduce the amount of duplication which is required (I doubt
we could remove all duplication though), and whether it's worth it for, say,
the rescue example is debatable. But for those cases you'd only need to make
the modification in one file.

However we would still have the unittest and tempest burden (I don't see
how we avoid that if we are ever going to fix the v2 API).

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Christopher Yeoh
On Tue, 25 Feb 2014 09:26:09 -0500
Sean Dague  wrote:

> On 02/25/2014 08:17 AM, Ken'ichi Ohmichi wrote:
> > 2014-02-25 19:48 GMT+09:00 Thierry Carrez :
> >> Sean Dague wrote:
> >>> So, that begs a new approach. Because I think at this point even
> >>> if we did put out Nova v3, there can never be a v4. It's too
> >>> much, too big, and doesn't fit in the incremental nature of the
> >>> project. So whatever gets decided about v3, the thing that's
> >>> important to me is a sane way to be able to add backwards
> >>> compatible changes (which we actually don't have today, and I
> >>> don't think any other service in OpenStack does either), as well
> >>> a mechanism for deprecating parts of the API. With some future
> >>> decision about whether removing them makes sense.
> >>
> >> I agree with Sean. Whatever solution we pick, we need to make sure
> >> it's solid enough that it can handle further evolutions of the
> >> Nova API without repeating this dilemma tomorrow. V2 or V3, we
> >> would stick to it for the foreseeable future.
> >>
> >> Between the cleanup of the API, the drop of XML support, and
> >> including a sane mechanism for supporting further changes without
> >> major bumps of the API, we may have enough to technically justify
> >> v3 at this point. However from a user standpoint, given the
> >> surface of the API, it can't be deprecated fast -- so this ideal
> >> solution only works in a world with infinite maintenance resources.
> >>
> >> Keeping V2 forever is more like a trade-off, taking into account
> >> the available maintenance resources and the reality of Nova's API
> >> huge surface. It's less satisfying technically, especially if
> >> you're deeply aware of the API incoherent bits, and the prospect
> >> of living with some of this incoherence forever is not really
> >> appealing.
> > 
> > What is the maintenance cost for keeping both APIs?
> > I think Chris and his team have already paid most of it; the
> > work of porting
> > the existing v2 APIs to v3 APIs is almost done.
> > So I'd like to clarify the maintenance cost we are discussing.
> > 
> > If the cost means that we should implement both API methods when
> > creating a new API, how about implementing an internal proxy from v2
> > to v3 API? When creating a new API, it is enough to implement the API
> > method for the v3 API, and when receiving a v2 request, Nova translates
> > it to the v3 API. The request styles (url, body) of v2 and v3 are
> > different, and this idea makes new v2 APIs v3 style, but the v2 API
> > already has a lot of inconsistencies, so it does not seem like such a big
> > problem.
> > 
> > 
> > From the viewpoint of OpenStack interoperability also, I believe we
> > need a new API.
> > Many v2 API parameters are not validated. If implementing strict
> > validation for v2 API,
> > incompatibility issues happen. That is why we are implementing input
> > validation for
> > v3 API. If staying v2 API forever, we should have this kind of
> > problem forever. v2 API is fragile now. So the interoperability
> > should depend on v2 API, that seems
> > sandbox.. (I know that it is a little overstatement, but we have
> > found a lot of this kind
> > of problem already..)
> 
> So I think this remains a good question about what keeping v2 forever
> means. Because it does mean keeping the fact that we don't validate
> input at the surface and depend on database specific errors to trickle
> back up correctly. So if MySQL changes how it handles certain things,
> you'll get different errors on the surface.
> 
> I'm gong to non-sequitor for a minute, because I think it's important
> to step back some times.
> 
> What I want out of Nova API at the end of the day:
> 
> 1. a way to discover what the API is
> 
> because this massively simplifies writing clients, SDKs, tests, and
> documentation. All those pipelines are terribly manual, and have
> errors in them because of it. Like has been said before you actually
> need to read the Nova source code to figure out how to use parts of
> the API.
> 
> I think this is a great example of that -
> https://blog.heroku.com/archives/2014/1/8/json_schema_for_heroku_platform_api?utm_source=newsletter&utm_medium=email&utm_campaign=januarynewsletter&mkt_tok=3RkMMJWWfF9wsRonuKzNZKXonjHpfsX57OQtX6SxlMI%2F0ER3fOvrPUfGjI4AScJrI%2BSLDwEYGJlv6SgFQrjAMapmyLgLUhE%3D
>

So from what I understand, I think the jsonschema work that Ken'ichi has
been working on for V3 goes a fair way towards being able to support this
sort of thing. The jsonschema we'd be able to provide for V2, however, is
a bit trickier, as we'd have to leave the input validation pretty loose
(and probably in rather weirdly inconsistent ways, because that's how
it's implemented) in most cases.

> Extensions thus far have largely just been used as a cheat to get
> around API compatibility changes based on the theory that users could
> list extensions to figure out what the API would look like. It's a bad
> theory, and not even nova command line does this. So users will 

Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread zhangyu (AI)
IMHO, Attach/detach operations can only be issued from the Nova side because 
they are in fact VM/instance management operations.
Meanwhile, volume create/delete are volume management stuff, therefore Cinder
exposes APIs for them.

Also, according to current Cinder code base, no nova detach-volume action is 
issued from the execution flow of a volume deletion.

Thank you for suggestions~

From: Yuzhou (C) [mailto:vitas.yuz...@huawei.com]
Sent: Wednesday, February 26, 2014 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

I think force delete = nova detach volume, then cinder delete volume.

Volume status in the db should be modified after nova detach volume.

Thanks!


From: zhangyu (AI) [mailto:zhangy...@huawei.com]
Sent: Wednesday, February 26, 2014 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

If I understand your question correctly, the case you describe should be like 
the following:

Assume we have created both an instance and a volume, then we try to  attach 
that volume to the instance.
Before that operation is completed (the status of the volume is "attaching" 
now), for whatever reasons we decide to apply a "force delete" operation on 
that volume.
Then, after we applied that force delete, we come to see that, from the Cinder 
side, the volume has been successfully deleted and the status is surely 
"deleted".
However, from the Nova side, we see that the status of the deleted volume 
remains to be "attaching".

If this is truly your case, I think it is a bug. The reason might be that
Cinder forgets to refresh the attach_status attribute of a volume in the DB when
applying a "force delete" operation.
Are there any other suggestions?

Thanks!



From: yunling [mailto:yunlingz...@hotmail.com]
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete operation 
should not apply to the volume being used?

Hi stackers:


I found that the volume status becomes inconsistent (nova volume status is
attaching, versus cinder volume status is deleted) between nova and cinder when
doing a volume force delete operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being
used, which includes the attach statuses of attaching, attached and detached.


What do you think?


thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread Yuzhou (C)
I think force delete = nova detach volume, then cinder delete volume.

Volume status in the db should be modified after nova detach volume.

Thanks!


From: zhangyu (AI) [mailto:zhangy...@huawei.com]
Sent: Wednesday, February 26, 2014 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

If I understand your question correctly, the case you describe should be like 
the following:

Assume we have created both an instance and a volume, then we try to  attach 
that volume to the instance.
Before that operation is completed (the status of the volume is "attaching" 
now), for whatever reasons we decide to apply a "force delete" operation on 
that volume.
Then, after we applied that force delete, we come to see that, from the Cinder 
side, the volume has been successfully deleted and the status is surely 
"deleted".
However, from the Nova side, we see that the status of the deleted volume 
remains to be "attaching".

If this is truly your case, I think it is a bug. The reason might be that
Cinder forgets to refresh the attach_status attribute of a volume in the DB when
applying a "force delete" operation.
Are there any other suggestions?

Thanks!



From: yunling [mailto:yunlingz...@hotmail.com]
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete operation 
should not apply to the volume being used?

Hi stackers:


I found that the volume status becomes inconsistent (nova volume status is
attaching, versus cinder volume status is deleted) between nova and cinder when
doing a volume force delete operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being
used, which includes the attach statuses of attaching, attached and detached.


What do you think?


thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-25 Thread Matt Riedemann



On 2/12/2014 1:57 PM, Matthew Treinish wrote:

On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:



On 1/17/2014 8:34 AM, Matthew Treinish wrote:

On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:

On 01/16/2014 10:56 PM, Matthew Treinish wrote:

Hi everyone,

With some recent changes made to Tempest, compatibility with nosetests is going
away. We've started using newer features that nose just doesn't support. One
example of this is that we've started using testscenarios and we're planning to
do this in more places moving forward.

So at Icehouse-3 I'm planning to push the patch out to remove nosetests from the
requirements list and all the workarounds and references to nose will be pulled
out of the tree. Tempest will also start raising an unsupported exception when
you try to run it with nose so that there isn't any confusion on this moving
forward. We talked about doing this at summit briefly and I've brought it up a
couple of times before, but I believe it is time to do this now. I feel for
tempest to move forward we need to do this now so that there isn't any ambiguity
as we add even more features and new types of testing.

I'm with you up to here.


Now, this will have implications for people running tempest with python 2.6
since up until now we've set nosetests. There is a workaround for getting
tempest to run with python 2.6 and testr see:

https://review.openstack.org/#/c/59007/1/README.rst

but essentially this means that when nose is marked as unsupported on tempest
python 2.6 will also be unsupported by Tempest. (which honestly it basically has
been for while now just we've gone without making it official)

The way we handle different runners/os can be categorized as "tested
in gate", "unsupported" (should work, possibly some hacks needed),
and "hostile". At present, both nose and py2.6 I would say are in
the unsupported category. The title of this message and the content
up to here says we are moving nose to the hostile category. With
only 2 months to feature freeze I see no justification in moving
py2.6 to the hostile category. I don't see what new testing features
scheduled for the next two months will be enabled by saying that
tempest cannot and will not run on 2.6. It has been agreed I think
by all projects that py2.6 will be dropped in J. It is OK that py2.6
will require some hacks to work and if in the next few months it
needs a few more then that is ok. If I am missing another connection
between the py2.6 and nose issues, please explain.



So honestly we're already at this point in tempest. Nose really just doesn't
work with tempest, and we're adding more features to tempest, your negative test
generator being one of them, that interfere further with nose. I've seen several


I disagree here, my team is running Tempest API, CLI and scenario
tests every day with nose on RHEL 6 with minimal issues.  I had to
workaround the negative test discovery by simply sed'ing that out of
the tests before running it, but that's acceptable to me until we
can start testing on RHEL 7.  Otherwise I'm completely OK with
saying py26 isn't really supported and isn't used in the gate, and
it's a buyer beware situation to make it work, which includes
pushing up trivial patches to make it work (which I did a few of
last week, and they were small syntax changes or usages of
testtools).

I don't understand how the core projects can be running unit tests
in the gate on py26 but our functional integration project is going
to actively go out and make it harder to run Tempest with py26, that
sucks.

If we really want to move the test project away from py26, let's
make the concerted effort to get the core projects to move with it.


So as I said before the python 2.6 story for tempest remains the same after this
change. The only thing that we'll be doing is actively preventing nose from
working with tempest.



And FWIW, I tried the discover.py patch with unittest2 and
testscenarios last week and either I botched it, it's not documented
properly on how to apply it, or I screwed something up, but it
didn't work for me, so I'm not convinced that's the workaround.

What's the other option for running Tempest on py26 (keeping RHEL 6
in mind)?  Using tox with testr and pip?  I'm doing this all
single-node.


Yes, that is what the discover patch is used to enable. By disabling nose the
only path to run tempest with py2.6 is to use testr. (which is what it always
should have been)

Attila confirmed it was working here:
http://fpaste.org/76651/32143139/
in that example he applies 2 patches the second one is currently in the gate for
tempest. (https://review.openstack.org/#/c/72388/ ) So all that needs to be done
is to apply that discover patch:

https://code.google.com/p/unittest-ext/issues/detail?id=79

(which I linked to before)

Then tempest should run more or less the same between 2.7 and 2.6. (The only
difference I've seen is in how skips are handled)




patches this cycle that attempted to i

Re: [openstack-dev] [OpenStack-Infra] [Neutron][third-party-testing] Third Party Test setup and details

2014-02-25 Thread trinath.soman...@freescale.com
Excellent!

Will check this



From: Sukhdev Kapur [sukhdevka...@gmail.com]
Sent: Wednesday, February 26, 2014 3:38 AM
To: openstack-in...@lists.openstack.org; OpenStack Development Mailing List 
(not for usage questions)
Subject: [OpenStack-Infra] [Neutron][third-party-testing] Third Party Test  
setup and details

Fellow developers,

I just put together a wiki describing the Arista Third Party Setup.
In the attached document we provide a link to the modified Gerrit Plugin to 
handle the regex matching for the "Comment Added" event so that 
"recheck/reverify no bug/" can be handled.

https://wiki.openstack.org/wiki/Arista-third-party-testing

Have a look. Your feedback/comments will be appreciated.

regards..
-Sukhdev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Sylvain Bauza
Hi Tim,

As I read your design document, it sounds more closely related to
something the Solver Scheduler subteam is trying to focus on, i.e.
intelligent, agnostic resource placement in a holistic way [1].
IIRC, Jay is more likely talking about adaptive scheduling decisions based
on feedback, with potential counter-measures that can be taken to decrease
load and preserve QoS of nodes.

That said, maybe I'm wrong ?

[1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler


2014-02-26 1:09 GMT+01:00 Tim Hinrichs :

> Hi Jay,
>
> The Congress project aims to handle something similar to your use cases.
>  I just sent a note to the ML with a Congress status update with the tag
> [Congress].  It includes links to our design docs.  Let me know if you have
> trouble finding it or want to follow up.
>
> Tim
>
> - Original Message -
> | From: "Sylvain Bauza" 
> | To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> | Sent: Tuesday, February 25, 2014 3:58:07 PM
> | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
> OpenStack run time policy to manage
> | compute/storage resource
> |
> |
> |
> | Hi Jay,
> |
> |
> | Currently, the Nova scheduler only acts upon user request (either
> | live migration or boot an instance). IMHO, that's something Gantt
> | should scope later on (or at least there could be some space within
> | the Scheduler) so that Scheduler would be responsible for managing
> | resources on a dynamic way.
> |
> |
> | I'm thinking of the Pets vs. Cattle analogy, and I definitely think
> | that Compute resources could be treated like Pets, provided the
> | Scheduler does a move.
> |
> |
> | -Sylvain
> |
> |
> |
> | 2014-02-26 0:40 GMT+01:00 Jay Lau < jay.lau@gmail.com > :
> |
> |
> |
> |
> | Greetings,
> |
> |
> | Here I want to bring up an old topic here and want to get some input
> | from you experts.
> |
> |
> | Currently in nova and cinder, we only have some initial placement
> | polices to help customer deploy VM instance or create volume storage
> | to a specified host, but after the VM or the volume was created,
> | there was no policy to monitor the hypervisors or the storage
> | servers to take some actions in the following case:
> |
> |
> | 1) Load Balance Policy: If the load of one server is too heavy, then
> | probably we need to migrate some VMs from high load servers to some
> | idle servers automatically to make sure the system resource usage
> | can be balanced.
> |
> | 2) HA Policy: If one server goes down for some hardware failure or
> | whatever reasons, there is no policy to make sure the VMs can be
> | evacuated or live migrated (Make sure migrate the VM before server
> | goes down) to other available servers to make sure customer
> | applications will not be affect too much.
> |
> | 3) Energy Saving Policy: If a single host load is lower than
> | configured threshold, then low down the frequency of the CPU to save
> | energy; otherwise, increase the CPU frequency. If the average load
> | is lower than configured threshold, then shutdown some hypervisors
> | to save energy; otherwise, power on some hypervisors to load
> | balance. Before power off a hypervisor host, the energy policy need
> | to live migrate all VMs on the hypervisor to other available
> | hypervisors; After Power on a hypervisor host, the Load Balance
> | Policy will help live migrate some VMs to the new powered
> | hypervisor.
> |
> | 4) Customized Policy: Customer can also define some customized
> | policies based on their specified requirement.
> |
> | 5) Some run-time policies for block storage or even network.
> |
> |
> |
> | I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there are
> | indeed many customers who want such features.
> |
> |
> |
> | I have filed a bp here [1] long ago, but after some discussion with
> | Russell, we think that this should not belong to nova but other
> | projects. Till now, I did not find a good place where we can put
> | this in, can any of you show some comments?
> |
> |
> |
> | [1]
> |
> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
> |
> | --
> |
> |
> | Thanks,
> |
> | Jay
> |

Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread zhangyu (AI)
If I understand your question correctly, the case you describe should be like 
the following:

Assume we have created both an instance and a volume, then we try to  attach 
that volume to the instance.
Before that operation is completed (the status of the volume is "attaching" 
now), for whatever reasons we decide to apply a "force delete" operation on 
that volume.
Then, after we applied that force delete, we come to see that, from the Cinder 
side, the volume has been successfully deleted and the status is surely 
"deleted".
However, from the Nova side, we see that the status of the deleted volume 
remains to be "attaching".

If this is truly your case, I think it is a bug. The reason might be that
Cinder forgets to refresh the attach_status attribute of a volume in the DB when
applying a "force delete" operation.
Are there any other suggestions?
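
(If instead we blocked the operation, as proposed earlier in the thread, the
guard would be a small check in the delete path -- an illustrative sketch, not
actual Cinder code, and where exactly the check would live is hypothetical.)

# Illustrative sketch of the proposed guard; not actual Cinder code.
IN_USE_ATTACH_STATUSES = ('attaching', 'attached')

def check_force_delete_allowed(volume):
    """Refuse to force-delete a volume that is attached or attaching."""
    if volume.get('attach_status') in IN_USE_ATTACH_STATUSES:
        raise ValueError(
            'Volume %s is in use (attach_status=%s); detach it before '
            'force-deleting.' % (volume['id'], volume['attach_status']))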

Thanks!



From: yunling [mailto:yunlingz...@hotmail.com]
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete operation 
should not apply to the volume being used?

Hi stackers:


I found that the volume status becomes inconsistent (nova volume status is
attaching, versus cinder volume status is deleted) between nova and cinder when
doing a volume force delete operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being
used, which includes the attach statuses of attaching, attached and detached.


What do you think?


thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][TripleO] Neutron DB migrations best practice

2014-02-25 Thread Robert Collins
So we had this bug earlier in the week;
https://bugs.launchpad.net/tripleo/+bug/1283921

   Table 'ovs_neutron.ml2_vlan_allocations' doesn't exist" in neutron-server.log

We fixed this by running neutron-db-migrate upgrade head... which we
figured out by googling and asking weird questions in
#openstack-neutron.

But what are we meant to do? Nova etc are dead easy: nova-manage db
sync every time the code changes, done.

Neutron seems to do something special and different here, and it's not
documented from an ops perspective AFAICT - so - please help, cluebats
needed!

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral]

2014-02-25 Thread Dmitri Zimine
I have created a blueprint to capture the intention to simplify calling 
standard actions:

https://blueprints.launchpad.net/mistral/+spec/mistral-shorthand-action-in-dsl

DZ> 

On Feb 11, 2014, at 7:40 AM, Dmitri Zimine  wrote:

> Yes it makes sense, let's draft how it may look;
> 
> and also think over implementation implications - now we separate task 
> parameters, action parameters, and service parameters, we may need to merge 
> them when instantiating the action. 
> 
> DZ. 
> 
> On Feb 11, 2014, at 6:19 AM, Renat Akhmerov  wrote:
> 
>> Dmitry, I think you are right here. I think for the simple case we should be
>> able to use an in-place action definition without having to define the action
>> separately. Like you said, it's only valuable if we need to reuse it.
>> 
>> The only difference I see between std:send-email and something like REST_API 
>> is that a set of parameters for the latter is dynamic (versus std:send-email 
>> where it’s always “recipients”, “subject”, “body”). Even though it’s still 
>> the same protocol (HTTP) but a particular request representation may be 
>> different (i.e. query string, headers, the structure of body in case POST 
>> etc.). But I think that doesn’t cancel the idea of being able to define the 
>> action along with the task itself.
>> 
>> So good point. As for the syntax itself, we need to think it over. The
>> snippet you provided uses "action: std:REST_API", so we need to make sure not to
>> have ambiguities in the ways we can refer to actions. A convention could be
>> “if we don’t use a namespace we assume that there’s a separate action 
>> definition included into the same workbook, otherwise it should be 
>> considered in-place action definition and task property “action” refers to 
>> an action type rather than the action itself”. Does that make sense?
> 
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> On 11 Feb 2014, at 16:23, Dmitri Zimine  wrote:
>> 
>>> Do we have (or have we thought about) a shorthand for calling the REST_API action,
>>> without defining a service?
>>> 
>>> FULL  DSL:
>>> 
>>> Services:
>>>   TimeService:
>>>     type: REST_API
>>>     parameters:
>>>       baseUrl: http://api.timezonedb.com
>>>       key:
>>>     actions:
>>>       get-time:
>>>         task-parameters:
>>>           zone:
>>> Workflow:
>>>   tasks:
>>>     timeInToronto:
>>>       action: TimeService:get-time
>>>       parameters:
>>>         zone: "America/Toronto"
>>> 
>>> SHORTCUT - may look something like this: 
>>> 
>>> Workflow:
>>>   tasks:
>>>     timeInToronto:
>>>       action: std:REST_API
>>>       parameters:
>>>         baseUrl: "http://api.timezonedb.com"
>>>         method: "GET"
>>>         parameters: "zone=/America/Toronto&key="
>>>   
>>> Why asking:  
>>> 
>>> 1) Analogy with the std:send-email action. I wonder, do we have to make the user
>>> define a Service for std:send-email? I think that for standard tasks we
>>> shouldn't have to. If there is any thinking on REST_API, it may apply here.
>>>
>>> 2) For one-off web service calls the complete syntax may be overkill
>>> (but yes, it comes in handy for reuse). See examples below.
>>> 
>>> 
>>> 
>>> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sanchez, Cristian A
One question to clarify: the project will be marked as reservable by calling the 
Keystone API (from Climate) to store that info in the project extra specs in the 
Keystone DB.
Is this correct?

From: Sylvain Bauza mailto:sylvain.ba...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: martes, 25 de febrero de 2014 17:55
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design




2014-02-25 17:42 GMT+01:00 Dina Belova 
mailto:dbel...@mirantis.com>>:

>>> I think it should be a Climate "policy" (be careful, the name is confusing) 
>>> : if admin wants to grant any new project for reservations, he should place 
>>> a call to Climate. That's up to Climate-Nova (ie. Nova extension) to query 
>>> Climate in order to see if project has been granted or not.

Now I think that it'll be better, yes.
I see some workflow like:

1) Mark project as reservable in Climate
2) When some resource is created (like a Nova instance), it should be checked (in 
the API extensions, for example) via Climate whether the project is reservable. If it 
is, and no special reservation flags are passed, the default_reservation stuff 
should be used for this instance

Sylvain, is that the idea you're talking about?


tl;dr : Yes, let's define/create a new endpoint for the need.

That's exactly what I'm thinking, Climate should manage reservations on its own 
(including any new model) and projects using it for reserving resources should 
place a call to it in order to get some information.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Tim Hinrichs
Hi Jay,

The Congress project aims to handle something similar to your use cases.  I 
just sent a note to the ML with a Congress status update with the tag 
[Congress].  It includes links to our design docs.  Let me know if you have 
trouble finding it or want to follow up.

Tim

- Original Message -
| From: "Sylvain Bauza" 
| To: "OpenStack Development Mailing List (not for usage questions)" 

| Sent: Tuesday, February 25, 2014 3:58:07 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage
| compute/storage resource
| 
| 
| 
| Hi Jay,
| 
| 
| Currently, the Nova scheduler only acts upon user request (either
| live migration or boot an instance). IMHO, that's something Gantt
| should scope later on (or at least there could be some space within
| the Scheduler) so that Scheduler would be responsible for managing
| resources on a dynamic way.
| 
| 
| I'm thinking of the Pets vs. Cattles analogy, and I definitely think
| that Compute resources could be treated like Pets, provided the
| Scheduler does a move.
| 
| 
| -Sylvain
| 
| 
| 
| 2014-02-26 0:40 GMT+01:00 Jay Lau < jay.lau@gmail.com > :
| 
| 
| 
| 
| Greetings,
| 
| 
| Here I want to bring up an old topic here and want to get some input
| from you experts.
| 
| 
| Currently in nova and cinder, we only have some initial placement
| polices to help customer deploy VM instance or create volume storage
| to a specified host, but after the VM or the volume was created,
| there was no policy to monitor the hypervisors or the storage
| servers to take some actions in the following case:
| 
| 
| 1) Load Balance Policy: If the load of one server is too heavy, then
| probably we need to migrate some VMs from high load servers to some
| idle servers automatically to make sure the system resource usage
| can be balanced.
| 
| 2) HA Policy: If one server get down for some hardware failure or
| whatever reasons, there is no policy to make sure the VMs can be
| evacuated or live migrated (Make sure migrate the VM before server
| goes down) to other available servers to make sure customer
| applications will not be affect too much.
| 
| 3) Energy Saving Policy: If a single host load is lower than
| configured threshold, then low down the frequency of the CPU to save
| energy; otherwise, increase the CPU frequency. If the average load
| is lower than configured threshold, then shutdown some hypervisors
| to save energy; otherwise, power on some hypervisors to load
| balance. Before power off a hypervisor host, the energy policy need
| to live migrate all VMs on the hypervisor to other available
| hypervisors; After Power on a hypervisor host, the Load Balance
| Policy will help live migrate some VMs to the new powered
| hypervisor.
| 
| 4) Customized Policy: Customer can also define some customized
| policies based on their specified requirement.
| 
| 5) Some run-time policies for block storage or even network.
| 
| 
| 
| I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
| indeed many customers want such features.
| 
| 
| 
| I have filed a bp here [1] long ago, but after some discussion with
| Russell, we think that this should not belong to nova but other
| projects. Till now, I did not find a good place where we can put
| this in, can any of you show some comments?
| 
| 
| 
| [1]
| https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
| 
| --
| 
| 
| Thanks,
| 
| Jay
| 
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 
| 
| 
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| 
https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2bqaf%2FKm29ZfiqAKXxeo%3D%0A&m=XDB3hT4WE2iDrNVK0sQ8qKooX2r1T4E%2BVHek3GREhnE%3D%0A&s=e2346cd017c9d8108c12a101892492e2ac75953e4a5ea5c17394c775cf086d7f
| 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Sylvain Bauza
Hi Jay,

Currently, the Nova scheduler only acts upon user request (either live
migration or boot an instance). IMHO, that's something Gantt should scope
later on (or at least there could be some space within the Scheduler) so
that the Scheduler would be responsible for managing resources in a dynamic way.

I'm thinking of the Pets vs. Cattle analogy, and I definitely think that
Compute resources could be treated like Pets, provided the Scheduler does a
move.

-Sylvain


2014-02-26 0:40 GMT+01:00 Jay Lau :

> Greetings,
>
> Here I want to bring up an old topic here and want to get some input from
> you experts.
>
> Currently in nova and cinder, we only have some initial placement polices
> to help customer deploy VM instance or create volume storage to a specified
> host, but after the VM or the volume was created, there was no policy to
> monitor the hypervisors or the storage servers to take some actions in the
> following case:
>
> 1) Load Balance Policy: If the load of one server is too heavy, then
> probably we need to  migrate some VMs from high load servers to some idle
> servers automatically to make sure the system resource usage can be
> balanced.
> 2) HA Policy: If one server get down for some hardware failure or whatever
> reasons, there is no policy to make sure the VMs can be evacuated or live
> migrated (Make sure migrate the VM before server goes down) to other
> available servers to make sure customer applications will not be affect too
> much.
> 3) Energy Saving Policy: If a single host load is lower than configured
> threshold, then low down the frequency of the CPU to save energy;
> otherwise, increase the CPU frequency. If the average load is lower than
> configured threshold, then shutdown some hypervisors to save energy;
> otherwise, power on some hypervisors to load balance.  Before power off a
> hypervisor host, the energy policy need to live migrate all VMs on the
> hypervisor to other available hypervisors; After Power on a hypervisor
> host, the Load Balance Policy will help live migrate some VMs to the new
> powered hypervisor.
> 4) Customized Policy: Customer can also define some customized policies
> based on their specified requirement.
> 5) Some run-time policies for block storage or even network.
>
> I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there indeed
> many customers want such features.
>
> I have filed a bp here [1] long ago, but after some discussion with
> Russell, we think that this should not belong to nova but other projects.
> Till now, I did not find a good place where we can put this in, can any of
> you show some comments?
>
> [1]
> https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
>
> --
> Thanks,
>
> Jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-25 Thread Rabi Mishra

- Original Message -
> From: "Mark McClain" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, February 26, 2014 3:43:59 AM
> Subject: Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS 
> Pool
> 
> 
> On Feb 25, 2014, at 1:06 AM, Rabi Mishra  wrote:
> 
> > Hi All,
> > 
> > 'subnet_id' attribute of LBaaS Pool resource has been documented as "The
> > network that pool members belong to"
> > 
> > However, with 'HAProxy' driver, it allows to add members belonging to
> > different subnets/networks to a lbaas Pool.
> > 
> Rabi-
> 
> The documentation is a bit misleading here.  The subnet_id in the pool is
> used to create the port that the load balancer instance uses to connect with
> the members.

I assume then that the validation in Horizon forcing the VIP IP to be from this pool 
subnet is incorrect, i.e. the VIP address can be from a different subnet.

> 
> mark
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][FWaaS] Rescheduling this week's IRC meeting

2014-02-25 Thread Sumit Naiksatam
Hi, On account of the ongoing RSA conference, some members of our
neutron firewall sub-team will not be able to attend this Wednesday's
IRC. So we will have the meeting on Feb 28th (Friday) at 1800 UTC on:
#openstack-meeting

Hope you can attend.

Thanks,
~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Jay Pipes
On Tue, 2014-02-25 at 09:26 -0500, Sean Dague wrote:
> What I want out of Nova API at the end of the day:
> 
> 1. a way to discover what the API is
> 
> because this massively simplifies writing clients, SDKs, tests, and
> documentation. All those pipelines are terribly manual, and have errors
> in them because of it. Like has been said before you actually need to
> read the Nova source code to figure out how to use parts of the API.

++

The key here, IMO, is to have JSON-Schema documents returned in the same
manner as Glance's v2 API and Heroku's API does, with separate schema
documents returned for each resource exposed in the API.
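
Purely for illustration (the attribute names below are made up, not the actual
Glance or Nova schema), such a per-resource schema document might look roughly
like this, written here as a Python dict:

    server_schema = {
        "name": "server",
        "properties": {
            "id": {"type": "string", "maxLength": 36},
            "name": {"type": "string", "maxLength": 255},
            "status": {"type": "string",
                       "enum": ["ACTIVE", "BUILD", "ERROR"]},
        },
        "additionalProperties": False,
    }

A client or SDK could fetch that once per resource and generate its bindings,
validation and documentation from it instead of reading the server source.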

> 2. stop being optional
> 
> If we ever want interoperability between Nova implementations we need to
> stop allowing the API to be optional. That means getting rid of
> extensions. Content is either part of the Nova API, or it isn't in the
> tree. Without this we'll never get an ecosystem around the API because
> anything more complicated than basic server lifecycle is not guaranteed
> to exist in an OpenStack implementation.
> 
> Extensions thus far have largely just been used as a cheat to get around
> API compatibility changes based on the theory that users could list
> extensions to figure out what the API would look like. It's a bad
> theory, and not even nova command line does this. So users will get
> errors on nova cli with clouds because features aren't enabled, and the
> user has no idea why their commands don't work. Because it's right there
> in the nova help.

No surprise... 100% agreement from me on this.

> 3. a solid validation surface
> 
> We really need to be far more defensive on our API validation surface.
> Right now bad data makes it far too far down the code stack. That's just
> a recipe for security issues.

++. WSME makes this kind of thing much more solid. What is/was the
status of WSME/Pecan integration in Nova?
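
(For anyone unfamiliar with it, the attraction is that the declared types do the
defensive work for you. A rough sketch only, not actual Nova code, using the
wsmeext.pecan integration:)

    from pecan import rest
    import wsme.types as wtypes
    from wsmeext.pecan import wsexpose

    class Server(wtypes.Base):
        name = wtypes.wsattr(wtypes.text, mandatory=True)
        flavor_id = int

    class ServersController(rest.RestController):
        @wsexpose(Server, body=Server)
        def post(self, server):
            # a request body with a missing name or a non-integer
            # flavor_id is rejected by WSME before we ever get here
            return server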

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
Greetings,

Here I want to bring up an old topic and get some input from
you experts.

Currently in nova and cinder, we only have some initial placement policies
to help customers deploy a VM instance or create volume storage on a specified
host, but after the VM or the volume was created, there was no policy to
monitor the hypervisors or the storage servers and take some actions in the
following cases:

1) Load Balance Policy: If the load of one server is too heavy, then
probably we need to  migrate some VMs from high load servers to some idle
servers automatically to make sure the system resource usage can be
balanced.
2) HA Policy: If one server goes down due to some hardware failure or other
reasons, there is no policy to make sure the VMs can be evacuated or live
migrated (making sure to migrate the VM before the server goes down) to other
available servers so that customer applications will not be affected too
much.
3) Energy Saving Policy: If a single host's load is lower than a configured
threshold, then lower the CPU frequency to save energy;
otherwise, increase the CPU frequency. If the average load is lower than a
configured threshold, then shut down some hypervisors to save energy;
otherwise, power on some hypervisors to load balance. Before powering off a
hypervisor host, the energy policy needs to live migrate all VMs on the
hypervisor to other available hypervisors; after powering on a hypervisor
host, the Load Balance Policy will help live migrate some VMs to the newly
powered hypervisor.
4) Customized Policy: Customer can also define some customized policies
based on their specified requirement.
5) Some run-time policies for block storage or even network.

I borrowed the idea from VMware DRS (thanks, VMware DRS), and indeed
many customers want such features.

I filed a bp here [1] long ago, but after some discussion with
Russell, we think that this should not belong to nova but to other projects.
Till now, I have not found a good place where we can put this; can any of
you share some comments?

[1]
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-25 Thread Kevin L. Mitchell
On Tue, 2014-02-25 at 00:56 +, Ziad Sawalha wrote:
> Seeking some clarification on the OpenStack hacking guidelines for
> multi-string docstrings. 
> 
> 
> Q: In OpenStack projects, is a blank line before the triple closing
> quotes recommended (and therefore optional - this is what PEP-257
> seems to suggest), required, or explicitly rejected (which could be
> one way to interpret the hacking guidelines since they omit the blank
> line).


I lobbied to relax that restriction, because I happen to use Emacs, and
know that that limitation no longer exists with Emacs.  I submitted the
change that eliminated that language from nova's HACKING at the time…
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party-ci] Proposing a regular workshop/meeting to help folks set up CI environments

2014-02-25 Thread Jay Pipes
On Tue, 2014-02-25 at 20:02 -0300, Arx Cruz wrote:
> Hello,
> 
> Great Idea, I'm very interested!
> 
> I wasn't able to see the Google Hangout Event, is the url correct?

Hi Arx!

We changed from Google Hangout to using IRC. See here for more info:

http://lists.openstack.org/pipermail/openstack-dev/2014-February/028124.html

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
> Yeah, so objects is the big one here.

Objects, and everything else. With no-db-compute we did it for a couple
cycles, then objects, next it will be retooling flows to conductor, then
dealing with tasks, talking to gantt, etc. It's not going to end any
time soon.

> So what kind of reaction are the Keystone people getting to that?  Do
> they plan on removing their V2 API at some point?  Or just maintain it
> with bug fixes forever?

Yep, that would be good data. We also need to factor in the relative
deployment scale of nova installations vs. keystone installations in the
world (AFAIK, RAX doesn't use keystone for example).

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Sylvain Bauza
Thanks Doug for replying,



2014-02-25 23:10 GMT+01:00 Doug Hellmann :

>
>
>
>> Do you have any idea on how I could get my goal, ie. having a static
>> input plus some extra variable inputs ? I was also thinking about playing
>> with __getattr__ and __setattr__ but I'm not sure the Registry could handle
>> that.
>>
>
> Why don't you know what the data is going to look like before you receive
> it?
>
> One last important point, this API endpoint (Host) is admin-only in case
>> of you mention the potential security issues it could lead.
>>
>
> My issue with this sort of API isn't security, it's that describing how to
> use it for an end user is more difficult than having a clearly defined
> static set of inputs and outputs.
>
>>

tl;dr: The admin can provide extra key/value pairs when defining a single Host
via the API, so we need the possibility of dynamic key/value pairs for Host.

Ok, sounds like I have to explain the use-case there. Basically, Climate
provides an API where admin has to enroll hosts for provisioning purposes.
The thing is, we only need to get the hostname because we place a call to
Nova for getting the metrics.
Based on these metrics, we do allow users to put requests for leases based
on given metrics (like VCPUs or memory limits) and we elect some hosts.

As the Nova scheduler is not yet available as a service, we do need to
implement our own possibilities for adding metrics that are not provided by
Nova, and thus we allow the possibility to add extra key/value pairs within
the API call for adding a Host.

With API v1 (Flask with no input validation), this was quite
easy, as we were getting the dict and directly passing it to the Manager.
Now, I have to find some way to still leave the possibility to add extra
metrics.

Example of a Host request body is :
{ 'name': foo,
  'fruits': 'bananas',
  'vgpus': 2}

As 'fruits' and 'vgpus' are dynamic keys, I should be able to accept them
anyway using WSME.
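
To make that concrete, here is a minimal sketch of how the dynamic attributes
get registered today (class and method names as I understand the WSME 0.6 API,
so the exact signatures may differ slightly; 'fruits'/'vgpus' are just example
metric names):

    import wsme.types as wtypes

    class Host(wtypes.DynamicBase):
        # static part of the type
        name = wtypes.text

    # normally done once at application startup, e.g. by a plugin that
    # knows which extra metrics this deployment supports
    Host.add_attributes(fruits=wtypes.text, vgpus=int)

The trouble described above is that the extra keys are only known when the POST
body arrives, so there is no startup hook where add_attributes() could be called
with the right names.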

Hope it's clearer now, because at the moment I'm thinking of bypassing WSME
for handling the POST/PUT requests...

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Matt Riedemann



On Tuesday, February 25, 2014 4:02:13 PM, Dan Smith wrote:

+1, seems would could explore for another cycle just to find out that
backporting everything to V2 isn't going to be what we want, and now
we've just wasted more time.



If we say it's just deprecated and frozen against new features, then
it's maintenance is just limited to bug fixes right?


No, any time we make a change to how the api communicates with compute,
conductor, or the scheduler, both copies of the API code have to be
changed. If we never get rid of v2 (and I don't think we have a good
reason to right now) then we're doing that *forever*. I do not want to
sign up for that.


Yeah, so objects is the big one here.  And it doesn't sound like we're 
talking about getting rid of V2 *right now*, we're talking about 
deprecating it after V3 is released (plan would be Juno for 
nova-network and tasks) and then maintaining it for some amount of time 
before it could be removed, and it doesn't sound like we know what that 
number is until we get some input from deployers/operators.




I'm really curious what deployers like RAX, HP Cloud, etc think about
freezing V2 to features and having to deploying V3 to get them. Does RAX
expose V3 right now? Also curious if RAX/HP/etc see the V3 value
statement when compared to what it will mean for their users.


I'd also be interested to see what happens with the Keystone V2 API 
because as I understand it, it's deprecated already and there is no V3 
support in python-keystoneclient, that's all moved to 
python-openstackclient, which I don't think even Tempest is using yet, 
at least not for API tests.


So what kind of reaction are the Keystone people getting to that?  Do 
they plan on removing their V2 API at some point?  Or just maintain it 
with bug fixes forever?




--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] need advice on how to supply automated testing with bugfix patch

2014-02-25 Thread Chris Friesen


I'm in the process of putting together a bug report and a patch for 
properly handling resource tracking on live migration.


The change involves code that will run on the destination compute node 
in order to properly account for the resources that the instance to be 
migrated will consume.


Testing it manually is really simple...start with an instance on one 
compute node, check the hypervisor stats on the destination node, 
trigger a live-migration, and immediately check the hypervisor stats 
again.  With the current code the hypervisor doesn't update until the 
audit runs, with the patch it updates right away.


I can see how to do a tempest testcase for this, but I don't have a good 
handle on how to set this up as a unit test.  I *think* it should be 
possible to modify _test_check_can_live_migrate_destination() but it 
would mean setting up fake resource tracking and adding fake resources 
(cpu/memory/disk) to the fake instance being fake migrated and I don't 
have any experience with that.


Anyone have any suggestions?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question

2014-02-25 Thread Robert Collins
In the tripleo meeting today we re-affirmed that the tripleo-cd-admins
team is aimed at delivering production-availability clouds - that's how
we know the tripleo program is succeeding (or not!).

So if you're a member of that team, you're on the hook - effectively
on call, where production issues will take precedence over development
/ bug fixing etc.

We have the following clouds today:
cd-undercloud (baremetal, one per region)
cd-overcloud (KVM in the HP region, not sure yet for the RH region) -
multi region.
ci-overcloud (same as cd-overcloud, and will go away when cd-overcloud
is robust enough).

And we have two users:
 - TripleO ATCs, all of whom are eligible for accounts on *-overcloud
 - TripleO reviewers, indirectly via openstack-infra who provide 99%
of the load on the clouds

Right now when there is a problem, there's no clearly defined 'get
hold of someone' mechanism other than IRC in #tripleo.

And that's pretty good since most of the admins are on IRC most of the time.

But.

There are two holes - a) what if its sunday evening :) and b) what if
someone (for instance Derek) has been troubleshooting a problem, but
needs to go do personal stuff, or you know, sleep. There's no reliable
defined handoff mechanism.

So - I think we need to define two things:
  - a stock way for $randoms to ask for support w/ these clouds that
will be fairly low latency and reliable.
  - a way for us to escalate to each other *even if folk happen to be
away from the keyboard at the time*.
And possibly a third:
  - a way for openstack-infra admins to escalate to us in the event of
OMG things happening. Like, we send 1000 VMs all at once at their git
mirrors or something.

And with that lets open the door for ideas!

-Rob
-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-25 Thread Mark McClain

On Feb 25, 2014, at 1:06 AM, Rabi Mishra  wrote:

> Hi All,
> 
> 'subnet_id' attribute of LBaaS Pool resource has been documented as "The 
> network that pool members belong to"
> 
> However, with 'HAProxy' driver, it allows to add members belonging to 
> different subnets/networks to a lbaas Pool.  
> 
Rabi-

The documentation is a bit misleading here.  The subnet_id in the pool is used 
to create the port that the load balancer instance uses to connect with the 
members.

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] client 0.5.0 release

2014-02-25 Thread Sergey Lukjanov
Will be available in global requirements after merging
https://review.openstack.org/76357.

On Wed, Feb 26, 2014 at 12:50 AM, Sergey Lukjanov
 wrote:
> Hi folks,
>
> I'm glad to announce that python-savannaclient v0.5.0 released!
>
> pypi: https://pypi.python.org/pypi/python-savannaclient/0.5.0
> tarball: 
> http://tarballs.openstack.org/python-savannaclient/python-savannaclient-0.5.0.tar.gz
> launchpad: https://launchpad.net/python-savannaclient/0.5.x/0.5.0
>
> Notes:
>
> * it's first release with CLI covers mostly all features;
> * dev docs moved to client from the main repo;
> * support for all new Savanna features introduced in Icehouse release cycle;
> * single common entrypoint, actual - savannaclient.client.Client('1.1);
> * auth improvements;
> * base resource class improvements;
> * 93 commits from the prev. release.
>
> Thanks.
>
> On Thu, Feb 20, 2014 at 3:53 AM, Sergey Lukjanov  
> wrote:
>> Additionally, it contains support for the latest EDP features.
>>
>>
>> On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov 
>> wrote:
>>>
>>> Hi folks,
>>>
>>> I'd like to make a 0.5.0 release of savanna client soon, please, share
>>> your thoughts about stuff that should be included to it.
>>>
>>> Currently we have the following major changes/fixes:
>>>
>>> * mostly implemented CLI;
>>> * unified entry point for python bindings like other OpenStack clients;
>>> * auth improvements;
>>> * base resource class improvements.
>>>
>>> Full diff:
>>> https://github.com/openstack/python-savannaclient/compare/0.4.1...master
>>>
>>> Thanks.
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-25 Thread Sukhdev Kapur
Folks,

I just sent out another email.

Here is the link to the wiki which has details about this patch.

https://wiki.openstack.org/wiki/Arista-third-party-testing

Hope this helps.

-Sukhdev




On Fri, Feb 14, 2014 at 6:01 PM, Sukhdev Kapur
wrote:

>
>
>
> On Thu, Feb 13, 2014 at 12:39 PM, Jay Pipes  wrote:
>
>> On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
>> > Jay,
>> >
>> > Just an FYI. We have modified the Gerrit plugin to accept/match a regex
>> > to generate notifications for "recheck no bug/bug ###". It turned
>> > out to be a very simple fix and we (Arista Testing) are now triggering on
>> > recheck comments as well.
>>
>> Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
>> where other folks can use it?
>>
>
>
> Yes the patch is ready.  I am documenting it as a part of overall
> description of Arista Testing Setup and will be releasing soon as part of
> the document that I am writing.
> Hopefully next week.
>
> regards..
> -Sukhdev
>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Doug Hellmann
On Tue, Feb 25, 2014 at 3:47 PM, Sylvain Bauza wrote:

> Well, I agreed with the fact I switched some way the use of this feature
> to match my needs, but then let me ask you a quick question : how can
> handle WSME variable request body ?
>
> The first glance I have is that WSME is requiring a static API in terms of
> inputs, could you then confirm ?
>

Yes, that's correct.



> Do you have any idea on how I could get my goal, ie. having a static input
> plus some extra variable inputs ? I was also thinking about playing with
> __getattr__ and __setattr__ but I'm not sure the Registry could handle that.
>

Why don't you know what the data is going to look like before you receive
it?

One last important point, this API endpoint (Host) is admin-only in case of
> you mention the potential security issues it could lead.
>

My issue with this sort of API isn't security, it's that describing how to
use it for an end user is more difficult than having a clearly defined
static set of inputs and outputs.

Doug



>
> Thanks for your help,
> -Sylvain
>
>
> 2014-02-25 18:55 GMT+01:00 Doug Hellmann :
>
> OK, that's not how that feature is meant to be used.
>>
>> The idea is that on application startup plugins or extensions will be
>> loaded that configure the extra attributes for the class. That happens one
>> time, and the configuration does not depend on data that appears in the
>> request itself.
>>
>> Doug
>>
>>
>> On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza 
>> wrote:
>>
>>> Let me give you a bit of code then, that's currently WIP with heavy
>>> rewrites planned on the Controller side thanks to Pecan hooks [1]
>>>
>>> So, L102 (GET request) the convert() method is passing the result dict
>>> as kwargs, where the Host.__init__() method is adding dynamic attributes.
>>> That does work :-)
>>>
>>> L108, I'm specifying that my body string is basically an Host object.
>>> Unfortunately, I can provide extra keys to that where I expect to be extra
>>> attributes. WSME will then convert the body into an Host [2], but as the
>>> Host class doesn't yet know which extra attributes are allowed, none of my
>>> extra keys are taken.
>>> As a result, the 'host' (instance of Host) argument of the post() method
>>> is not containing the extra attributes and thus, not passed for creation to
>>> my Manager.
>>>
>>> As said, I can still get the request body using Pecan directly within
>>> the post() method, but I then would have to manage the mimetype, and do the
>>> adding of the extra attributes there. That's pretty ugly IMHO.
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>> [1] http://paste.openstack.org/show/69418/
>>>
>>> [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71
>>>
>>>
>>> 2014-02-25 14:39 GMT+01:00 Doug Hellmann :
>>>
>>>


 On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza >>> > wrote:

> Hi,
>
> Thanks to WSME 0.6, there is now possibility to add extra attributes
> to a Dynamic basetype.
> I successfully ended up showing my extra attributes from a dict to a
> DynamicType using add_attributes() but I'm now stuck with POST requests
> having dynamic body data.
>
> Although I'm declaring in wsexpose() my DynamicType, I can't say to
> WSME to map the pecan.request.body dict with my wsattrs and create new
> attributes if none matched.
>
> Any idea on how to do this ? I looked at WSME and the type is
> registered at API startup, not when being called, so the get_arg() method
> fails to fill in the gaps.
>
> I can possibly do a workaround within my post function, where I could
> introspect pecan.request.body and add extra attributes, so it sounds a bit
> crappy as I have to handle the mimetype already managed by WSME.
>

 I'm not sure I understand the question. Are you saying that the dynamic
 type feature works for GET arguments but not POST body content?

 Doug



>
>
> Thanks,
> -Sylvain
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/op

[openstack-dev] [Neutron][third-party-testing] Third Party Test setup and details

2014-02-25 Thread Sukhdev Kapur
Fellow developers,

I just put together a wiki describing the Arista Third Party Setup.
In the attached document we provide a link to the modified Gerrit Plugin to
handle the regex matching for the "Comment Added" event so that
"recheck/reverify no bug/" can be handled.

https://wiki.openstack.org/wiki/Arista-third-party-testing
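
For reference, the kind of pattern involved looks roughly like the following
(this exact regex is only an illustration, not necessarily the one used in the
plugin):

    import re

    RECHECK_RE = re.compile(r'^(recheck|reverify)( (no bug|bug \d+))?\s*$')

    for comment in ('recheck no bug', 'reverify bug 1234567', 'recheck'):
        print('%s -> %s' % (comment, bool(RECHECK_RE.match(comment))))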

Have a look. Your feedback/comments will be appreciated.

regards..
-Sukhdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-25 Thread Joe Gordon
On Mon, Feb 24, 2014 at 4:56 PM, Ziad Sawalha
 wrote:
> Seeking some clarification on the OpenStack hacking guidelines for
> multi-string docstrings.
>
> Q: In OpenStack projects, is a blank line before the triple closing quotes
> recommended (and therefore optional - this is what PEP-257 seems to
> suggest), required, or explicitly rejected (which could be one way to
> interpret the hacking guidelines since they omit the blank line).
>
> This came up in a commit review, and here are some references on the topic:

Link?

Style should never ever be enforced by a human, if the code passed
the pep8 job, then its acceptable.

>
> Quoting PEP-257: "The BDFL [3] recommends inserting a blank line between the
> last paragraph in a multi-line docstring and its closing quotes, placing the
> closing quotes on a line by themselves. This way, Emacs' fill-paragraph
> command can be used on it."
>
> Sample from pep257 (with extra blank line):
>
> def complex(real=0.0, imag=0.0):
> """Form a complex number.
>
> Keyword arguments:
> real -- the real part (default 0.0)
> imag -- the imaginary part (default 0.0)
>
> """
> if imag == 0.0 and real == 0.0: return complex_zero
> ...
>
>
> The multi-line docstring example in
> http://docs.openstack.org/developer/hacking/ has no extra blank line before
> the ending triple-quotes:
>
> """A multi line docstring has a one-line summary, less than 80 characters.
>
> Then a new paragraph after a newline that explains in more detail any
> general information about the function, class or method. Example usages
> are also great to have here if it is a complex class for function.
>
> When writing the docstring for a class, an extra line should be placed
> after the closing quotations. For more in-depth explanations for these
> decisions see http://www.python.org/dev/peps/pep-0257/
>
> If you are going to describe parameters and return values, use Sphinx, the
> appropriate syntax is as follows.
>
> :param foo: the foo parameter
> :param bar: the bar parameter
> :returns: return_type -- description of the return value
> :returns: description of the return value
> :raises: AttributeError, KeyError
> """
>
> Regards,
>
> Ziad
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Dan Smith
> +1, seems would could explore for another cycle just to find out that
> backporting everything to V2 isn't going to be what we want, and now
> we've just wasted more time.

> If we say it's just deprecated and frozen against new features, then
> it's maintenance is just limited to bug fixes right?

No, any time we make a change to how the api communicates with compute,
conductor, or the scheduler, both copies of the API code have to be
changed. If we never get rid of v2 (and I don't think we have a good
reason to right now) then we're doing that *forever*. I do not want to
sign up for that.

I'm really curious what deployers like RAX, HP Cloud, etc think about
freezing V2 to features and having to deploying V3 to get them. Does RAX
expose V3 right now? Also curious if RAX/HP/etc see the V3 value
statement when compared to what it will mean for their users.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Matt Riedemann



On 2/25/2014 6:00 AM, Christopher Yeoh wrote:

On Tue, 25 Feb 2014 10:31:42 +
John Garbutt  wrote:


On 25 February 2014 06:11, Christopher Yeoh  wrote:

On Mon, 24 Feb 2014 17:37:04 -0800
Dan Smith  wrote:


onSharedStorage = True
on_shared_storage = False


This is a good example. I'm not sure it's worth breaking users _or_
introducing a new microversion for something like this. This is
definitely what I would call a "purity" concern as opposed to
"usability".


I thought micro versioning was so we could make backwards compatible
changes. If we make breaking changes we need to support the old and
the new for a little while.


Isn't the period that we have to support the old and the new for these
sorts of breaking changes exactly the same period of time that we'd
have to keep V2 around if we released V3? Either way we're forcing
people off the old behaviour.


I am tempted to say the breaking changes just create a new extension,
but there are other ways...


Oh, please no :-) Essentially that is no different to creating a new
extension in the v3 namespace except it makes the v2 namespace even
more confusing?


For return values:
* get new clients to send Accepts headers, to version the response
* this amounts to the "major" version
* for those request the new format, they get the new format
* for those getting the old format, they get the old format

For this case, on requests:
* we can accept both formats, or maybe that also depends on the
Accepts headers (with is a bit funky, granted).
* only document the new one
* maybe in two years remove the old format? maybe never?



So the idea of accept headers seems to me like just an alternative to
using a different namespace except a new namespace is much cleaner.


Same for URLs, we could have the old a new names, with the new URL
always returning the new format (think instace_actions ->
server_actions).

If the code only differers in presentation, that implies much less
double testing that two full versions of the API. It seems like we
could make some of these clean ups in, and keep the old version, with
relatively few changes.


As I've said before the API layer is very thin. Essentially most of it
is just about parsing the input, calling something, then formatting the
output. But we still do double testing even though the difference
between them most of the time is just "presentation".  Theoretically if
the unittests were good enough in terms of checking the API we'd only
have to tempest test a single API but I think experience has shown that
we're not that good at doing exhaustive unittests. So we use the
fallback of throwing tempest at both APIs


Not even Tempest in a lot of cases; for example, the host_maintenance_mode virt 
driver APIs are only implemented in the VMware and XenAPI virt drivers, and 
we have no Tempest coverage there because the libvirt driver doesn't 
implement that API.





We could port the V2 classes over to the V3 code, to get the code
benefits.


I'm not exactly sure what you mean here. If you mean backporting say
the V3 infrastructure so V2 can use it, I don't want people
underestimating the difficulty of that. When we developed the new
architecture we had the benefit of being able to bootstrap it without
it having to work for a while. Eg. getting core bits like servers and
images up and running without having to have the additional parts which
depend on it working with it yet. With V2 we can't do that, so
operating on a "active" system is going to be more difficult. The CD
people will not be happy with breakage :-)

But even then it took a considerable amount of effort - both coding and
review to get the changes merged, and that was back in Havana when it
was easier to review bandwidth. And we also discovered that especially
with that sort of infrastructure work its very difficult to get many
people working parallel - or even one person working on too many things
at one time. Because you end up in merge confict/rebase hell. I've been
there a lot in Havana and Icehouse.


+1 to not backporting all of the internal improvements from V3 to V2. 
That'd be a ton of duplicate code and review time and if we aren't 
making backwards incompatible changes to V2 I don't see the point, we're 
just kicking the can down the road on when we do need to make backwards 
incompatible changes and require a new API major version bump for 
.





Return codes are a bit harder, it seems odd to change those based on
Accepts headers, but maybe I could live with that.


Maybe this is the code mess we were trying to avoid, but I feel we
should at least see how bad this kind of approach would look?


So to me this approach really doesn't look a whole lot different to
just having a separate v2/v3 codebase in terms of maintenance. LOC
would be lower, but testing load is similar if we make the same sorts
of changes. Some things like input validation are a bit harder to
implement (because you need quite lax input validation for v2-old and
strict for v2-new).

Re: [openstack-dev] [infra] Meeting Tuesday February 25th at 19:00 UTC

2014-02-25 Thread Elizabeth Krumbach Joseph
On Mon, Feb 24, 2014 at 11:21 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday February 25th, at 19:00 UTC in
> #openstack-meeting

Thanks to everyone who was able to make it to our meeting, minutes and log here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-25-19.02.log.htm

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Weekly meetings

2014-02-25 Thread Vladimir Kozhukalov
Hi folks,

Fuel team is glad to announce that we scheduled weekly IRC meeting. It is
supposed to be held on Thursdays at 19:00 UTC in #openstack-meeting. Our
first meeting is scheduled on 2/27/2014.

We are trying to become even more open. Please feel free to add topics to
meeting agenda https://wiki.openstack.org/wiki/Meetings/Fuel#Agenda.

Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-25 Thread Varadhan, Sowmini
Folks,

I'm trying to set up a simple single-node devstack + ml2 + vxlan
combination, and though this ought to be a simple RTFM exercise,
I'm having some trouble setting this up. Perhaps I'm doing something
wrong- clues would be welcome.

I made sure to use ovs_version 1.10.2, and followed
the instructions in https://wiki.openstack.org/wiki/Neutron/ML2
(and then some, based on various and sundry blogs that google found)

Can someone share (all) the contents of their localrc,
and if possible, a description of their VM (virtualbox?  qemu-kvm?)
setup so that I can compare against my env?

FWIW, I tried the attached configs.
localrc.all - sets up
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
Resulting VM boots, but no vxlan interfaces show up (see ovs-ctl.out.all)

localrc.vxlan.only - disallow anything other than vxlan and gre.
VM does not boot- I get a "binding_failed" error. See ovs-ctl.out.vxlan.only

Thanks in advance,
Sowmini
OFFLINE=False
RECLONE=yes

HOST_IP=192.168.122.198
PUBLIC_INTERFACE=eth1
SERVICE_HOST=$HOST_IP

MULTI_HOST=1
LOGFILE=$HOME/logs/devstack.log
LOGDAYS=7
SCREEN_LOGDIR=$HOME/logs/screen
LOG_COLOR=False

DATABASE_USER=root
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password

ADMIN_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)


disable_service n-net
disable_service tempest
disable_service horizon
disable_service cinder
disable_service heat

enable_service  neutron
enable_service  q-agt
enable_service  q-svc
enable_service  q-l3
enable_service  q-dhcp


SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# disable_all_services
#
# enable_service   g-api
# enable_service   glance
# enable_service   keystone
# enable_service   nova
# enable_service   quantum
# enable_service   rabbit

disable_service  n-net

CINDER_BRANCH=master
GLANCE_BRANCH=master
HEAT_BRANCH=master
HORIZON_BRANCH=master
KEYSTONE_BRANCH=master
NOVA_BRANCH=master
QUANTUM_BRANCH=master
SWIFT_BRANCH=master
TEMPEST_BRANCH=master



#FLOATING_RANGE=10.10.37.0/24
FLOATING_RANGE=10.10.30.0/24
Q_FLOATING_ALLOCATION_POOL="start=10.10.30.64,end=10.10.30.127"
FIXED_NETWORK_SIZE=256
SWIFT_HASH=password


OFFLINE=False
RECLONE=yes

HOST_IP=192.168.122.198
# PUBLIC_INTERFACE=eth1
SERVICE_HOST=$HOST_IP

#MULTI_HOST=1
LOGFILE=$HOME/logs/devstack.log
LOGDAYS=7
SCREEN_LOGDIR=$HOME/logs/screen
LOG_COLOR=False

DATABASE_USER=root
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password

ADMIN_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
#
# fails with "vif_type=binding_failed" for the router interface?
#
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPES=vxlan
Q_ML2_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS="vxlan,gre"
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_types=vxlan)


disable_service n-net
disable_service tempest
disable_service horizon
disable_service cinder
disable_service heat
disable_service swift

enable_service  neutron
enable_service  q-agt
enable_service  q-svc
enable_service  q-l3
enable_service  q-dhcp
enable_service  q-meta


SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# disable_all_services
#
# enable_service   g-api
# enable_service   glance
# enable_service   keystone
# enable_service   nova
# enable_service   quantum
# enable_service   rabbit

disable_service  n-net

CINDER_BRANCH=master
GLANCE_BRANCH=master
HEAT_BRANCH=master
HORIZON_BRANCH=master
KEYSTONE_BRANCH=master
NOVA_BRANCH=master
QUANTUM_BRANCH=/home/sowmini/devstack/neutron
SWIFT_BRANCH=master
TEMPEST_BRANCH=master


FLAT_INTERFACE=eth1
OVS_PHYSICAL_BRIDGE=br-int
Q_USE_SECGROUP=True

#FLOATING_RANGE=10.10.37.0/24
FLOATING_RANGE=10.10.30.0/24
Q_FLOATING_ALLOCATION_POOL="start=10.10.30.64,end=10.10.30.127"
FIXED_NETWORK_SIZE=256
SWIFT_HASH=password


sowmini@sowmini-virtual-machine:~/devstack/devstack$ sudo ovs-vsctl show
0352c6e8-cced-4f21-8cff-36550186b4b8
Bridge br-int
Port "qr-c4d5a7c3-69"
tag: 1
Interface "qr-c4d5a7c3-69"
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Port "qg-f70ef8ee-65"
Interface "qg-f70ef8ee-65"
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port br-tun
Interface br-tun
  

[openstack-dev] Libvirt Resize/Cold Migrations and SSH

2014-02-25 Thread Solly Ross
Hi All,
I've been working on/thinking about a bug filed a while ago related to libvirt 
resize/cold migrations.  The bug ended up being roughly as such:

On a Packstack install, cold migrations and resizes fail under the default 
setup with an error about not being able to do an SSH `mkdir` operation.
The case ended up being that Nova was failing to do the resize because the 
individual compute nodes didn't have passwordless (key-based) ssh permissions
into the other compute nodes.

The proposed temporary fix was to manually give the compute nodes SSH 
permissions into each other, with the moderate-term
fix being to have Packstack distribute SSH keys among the compute nodes and set 
up permissions.

While these fixes work, they left me with a certain dirty taste in my mouth, 
since it doesn't seem quite elegant to have Nova SSH-ing around
between compute nodes, and the upstream community seemed to agree with this 
(there was a thread a while ago, but I got sidetracked with other
work).  Upon further investigation, I found four points at which the Nova 
libvirt driver uses SSH, all of which revolve around the method
`migrate_disk_and_power_off` (the main part of the resize/cold migration code):

1. to detect shared storage
2. to create the directory for the instance on the destination system
3. to copy the disk image from the source to the destination system (uses 
either rsync over ssh or scp)
4. to remove the directory created in (2) in case of an error during the process

Number 1 can be trivially eliminated by using the existing 
'_is_instance_storage_shared' method in the RPCAPI from the compute manager, 
and passing that value to the driver (with the other drivers
most likely ignoring it) instead of checking from within the driver code.  
Numbers 2 and 4 can be eliminated by using a "pre_x, x, cleanup_x" flow, 
similarly to how live migrations are handled (with
"pre_x" and "cleanup_x" being run on the destination machines via the RPCAPI).  
That only leaves number 3.  Note that these are only used when we are going 
between machines without shared storage.
Shared storage eliminates cases 2-4.

So here's my question: can number 3 be "eliminated", so to speak?  Having to 
give full SSH permissions for a file copy seems a bit overkill (we could, for 
example, run an rsync daemon, in which case
rsync would connect via the daemon and not ssh).  Is it worth it?  
Additionally, if we do not eliminate number 3, is it worth it to refactor the 
code to eliminate numbers 2 and 4 (I already have code
to eliminate number 1 -- see https://gist.github.com/DirectXMan12/9217699).
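
To illustrate the difference for number 3 (a sketch only -- the paths and the
'nova-disks' rsync module name are made up, and this is not how the driver code
is actually structured):

    import subprocess

    inst = "/var/lib/nova/instances/UUID"

    # today: rsync (or scp) over ssh, which is what forces the compute
    # nodes to have passwordless ssh access to each other
    subprocess.check_call(
        ["rsync", "-a", inst + "/disk", "dest-host:" + inst + "/"])

    # alternative: talk to an rsync daemon on the destination instead,
    # so no ssh access between compute nodes is needed
    subprocess.check_call(
        ["rsync", "-a", inst + "/disk", "rsync://dest-host/nova-disks/UUID/"])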

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Steven Hardy
On Tue, Feb 25, 2014 at 11:47:43AM -0800, Morgan Fainberg wrote:
> For purposes of supporting multiple backends for Identity (multiple LDAP, mix 
> of LDAP and SQL, federation, etc) Keystone is planning to increase the 
> maximum size of the USER_ID field from an upper limit of 64 to an upper limit 
> of 255. This change would not impact any currently assigned USER_IDs (they 
> would remain in the old simple UUID format), however, new USER_IDs would be 
> increased to include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 

Hmm, my immediate reaction is there must be a better way than mangling both
bits of data into the ID field, considering pretty much everything
everywhere else in openstack uses uuids for user-visible object identifiers.

> There is the obvious concern that projects are utilizing (and storing) the 
> user_id in a field that cannot accommodate the increased upper limit. Before 
> this change is merged in, it is important for the Keystone team to understand 
> if there are any places that would be overflowed by the increased size.
> 
> The review that would implement this change in size is 
> https://review.openstack.org/#/c/74214 and is actively being worked 
> on/reviewed.
> 
> I have already spoken with the Nova team, and a single instance has been 
> identified that would require a migration (that will have a fix proposed for 
> the I3 timeline). 
> 
> If there are any other known locations that would have issues with an 
> increased USER_ID size, or any concerns with this change to USER_ID format, 
> please respond so that the issues/concerns can be addressed.  Again, the plan 
> is not to change current USER_IDs but that new ones could be up to 255 
> characters in length.

Yes, this will affect Heat in at least one place - we store the trustor
user ID when we create a trust between the stack owner and the heat service
user:

https://github.com/openstack/heat/blob/master/heat/db/sqlalchemy/migrate_repo/versions/027_user_creds_trusts.py#L23
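
If the change does go in, the follow-up on our side would be roughly a
column-widening migration along these lines (a sketch only, in the
sqlalchemy-migrate style of the file linked above; the exact column name needs
double-checking against the user_creds table):

    import sqlalchemy


    def upgrade(migrate_engine):
        meta = sqlalchemy.MetaData(bind=migrate_engine)
        user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
        # widen the stored trustor user id from 64 to 255 characters;
        # Column.alter() is provided by sqlalchemy-migrate at migration time
        user_creds.c.trustor_user_id.alter(type=sqlalchemy.String(255))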

IMHO this is coming pretty late in the cycle considering the potential
impact, but if this is definitely happening we can go ahead and update our
schema.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sylvain Bauza
2014-02-25 17:42 GMT+01:00 Dina Belova :

>
> >>> I think it should be a Climate "policy" (be careful, the name is
> confusing) : if admin wants to grant any new project for reservations, he
> should place a call to Climate. That's up to Climate-Nova (ie. Nova
> extension) to query Climate in order to see if project has been granted or
> not.
>
> Now I think that it'll be better, yes.
> I see some workflow like:
>
> 1) Mark project as reservable in Climate
> 2) When some resource is created (like a Nova instance), it should be checked
> (in the API extensions, for example) via Climate whether the project is reservable.
> If it is, and no special reservation flags are passed, the default_reservation
> stuff should be used for this instance
>
> Sylvain, is that the idea you're talking about?
>
>
tl;dr : Yes, let's define/create a new endpoint for the need.

That's exactly what I'm thinking: Climate should manage reservations on its
own (including any new model), and projects using it for reserving resources
should place a call to it in order to get some information.
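
As a very rough illustration of what such a call from a Nova extension could look
like (the endpoint path, port and field names below are all made up for the
example; none of this is existing Climate or Nova code):

import requests

CLIMATE_URL = 'http://climate-host:1234/v1'  # hypothetical endpoint


def project_is_reservable(project_id, token):
    """Ask Climate whether reservations have been granted to a project."""
    resp = requests.get('%s/projects/%s' % (CLIMATE_URL, project_id),
                        headers={'X-Auth-Token': token})
    if resp.status_code == 404:
        return False
    return resp.json().get('reservable', False)

The Nova-side extension would then fall back to the default_reservation behaviour
when the project is reservable and no explicit reservation hints were passed.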

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] client 0.5.0 release

2014-02-25 Thread Sergey Lukjanov
Hi folks,

I'm glad to announce that python-savannaclient v0.5.0 released!

pypi: https://pypi.python.org/pypi/python-savannaclient/0.5.0
tarball: 
http://tarballs.openstack.org/python-savannaclient/python-savannaclient-0.5.0.tar.gz
launchpad: https://launchpad.net/python-savannaclient/0.5.x/0.5.0

Notes:

* it's the first release with a CLI covering almost all features;
* dev docs moved to client from the main repo;
* support for all new Savanna features introduced in Icehouse release cycle;
* single common entrypoint, actual - savannaclient.client.Client('1.1') (see the short usage sketch after this list);
* auth improvements;
* base resource class improvements;
* 93 commits from the prev. release.
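
A minimal usage sketch of that entry point (the keyword arguments below are only
a guess modelled on other OpenStack clients, so treat them as placeholders rather
than the documented signature):

from savannaclient import client

# '1.1' selects the API version via the new single entry point; the
# credential arguments are assumptions for illustration only.
savanna = client.Client('1.1',
                        username='demo',
                        api_key='secret',
                        project_name='demo',
                        auth_url='http://127.0.0.1:5000/v2.0/')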

Thanks.

On Thu, Feb 20, 2014 at 3:53 AM, Sergey Lukjanov  wrote:
> Additionally, it contains support for the latest EDP features.
>
>
> On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov 
> wrote:
>>
>> Hi folks,
>>
>> I'd like to make a 0.5.0 release of savanna client soon, please, share
>> your thoughts about stuff that should be included to it.
>>
>> Currently we have the following major changes/fixes:
>>
>> * mostly implemented CLI;
>> * unified entry point for python bindings like other OpenStack clients;
>> * auth improvements;
>> * base resource class improvements.
>>
>> Full diff:
>> https://github.com/openstack/python-savannaclient/compare/0.4.1...master
>>
>> Thanks.
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Sylvain Bauza
Well, I agree that I have somewhat bent the intended use of this feature to
match my needs, but then let me ask you a quick question: how can WSME handle
a variable request body?

My first impression is that WSME requires a static API in terms of
inputs -- could you confirm?
Do you have any idea how I could reach my goal, i.e. having static inputs
plus some extra variable inputs? I was also thinking about playing with
__getattr__ and __setattr__, but I'm not sure the Registry could handle that.

One last important point: this API endpoint (Host) is admin-only, in case
you were about to mention the potential security issues it could lead to.

Thanks for your help,
-Sylvain


2014-02-25 18:55 GMT+01:00 Doug Hellmann :

> OK, that's not how that feature is meant to be used.
>
> The idea is that on application startup plugins or extensions will be
> loaded that configure the extra attributes for the class. That happens one
> time, and the configuration does not depend on data that appears in the
> request itself.
>
> Doug
>
>
> On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza wrote:
>
>> Let me give you a bit of code then, that's currently WIP with heavy
>> rewrites planned on the Controller side thanks to Pecan hooks [1]
>>
>> So, L102 (GET request) the convert() method is passing the result dict as
>> kwargs, where the Host.__init__() method is adding dynamic attributes.
>> That does work :-)
>>
>> L108, I'm specifying that my body string is basically an Host object.
>> Unfortunately, I can provide extra keys to that where I expect to be extra
>> attributes. WSME will then convert the body into an Host [2], but as the
>> Host class doesn't yet know which extra attributes are allowed, none of my
>> extra keys are taken.
>> As a result, the 'host' (instance of Host) argument of the post() method
>> is not containing the extra attributes and thus, not passed for creation to
>> my Manager.
>>
>> As said, I can still get the request body using Pecan directly within the
>> post() method, but I then would have to manage the mimetype, and do the
>> adding of the extra attributes there. That's pretty ugly IMHO.
>>
>> Thanks,
>> -Sylvain
>>
>> [1] http://paste.openstack.org/show/69418/
>>
>> [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71
>>
>>
>> 2014-02-25 14:39 GMT+01:00 Doug Hellmann :
>>
>>
>>>
>>>
>>> On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza 
>>> wrote:
>>>
 Hi,

 Thanks to WSME 0.6, there is now possibility to add extra attributes to
 a Dynamic basetype.
 I successfully ended up showing my extra attributes from a dict to a
 DynamicType using add_attributes() but I'm now stuck with POST requests
 having dynamic body data.

 Although I'm declaring in wsexpose() my DynamicType, I can't say to
 WSME to map the pecan.request.body dict with my wsattrs and create new
 attributes if none matched.

 Any idea on how to do this ? I looked at WSME and the type is
 registered at API startup, not when being called, so the get_arg() method
 fails to fill in the gaps.

 I can possibly do a workaround within my post function, where I could
 introspect pecan.request.body and add extra attributes, so it sounds a bit
 crappy as I have to handle the mimetype already managed by WSME.

>>>
>>> I'm not sure I understand the question. Are you saying that the dynamic
>>> type feature works for GET arguments but not POST body content?
>>>
>>> Doug
>>>
>>>
>>>


 Thanks,
 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Jay Pipes
On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
> For purposes of supporting multiple backends for Identity (multiple
> LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
> increase the maximum size of the USER_ID field from an upper limit of
> 64 to an upper limit of 255. This change would not impact any
> currently assigned USER_IDs (they would remain in the old simple UUID
> format), however, new USER_IDs would be increased to include the IDP
> identifier (e.g. USER_ID@@IDP_IDENTIFIER).

-1

I think a better solution would be to have a simple translation table
only in Keystone that would store this longer identifier (for folks
using federation and/or LDAP) along with the Keystone user UUID that is
used in foreign key relations and other mapping tables through Keystone
and other projects.

The only identifiers that would ever be communicated to any non-Keystone
OpenStack endpoint would be the UUID user and tenant IDs.
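
For illustration, such a translation table could be as small as the following
SQLAlchemy sketch (the table and column names are invented for the example; this
is not an actual Keystone model):

import uuid

import sqlalchemy as sa
from sqlalchemy.ext import declarative

Base = declarative.declarative_base()


class IdMapping(Base):
    """Maps a long LDAP/federated identifier to a local 64-char public id."""
    __tablename__ = 'id_mapping'

    public_id = sa.Column(sa.String(64), primary_key=True,
                          default=lambda: uuid.uuid4().hex)
    external_id = sa.Column(sa.String(255), nullable=False, unique=True)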

> There is the obvious concern that projects are utilizing (and storing)
> the user_id in a field that cannot accommodate the increased upper
> limit. Before this change is merged in, it is important for the
> Keystone team to understand if there are any places that would be
> overflowed by the increased size.

I would go so far as to say the user_id and tenant_id fields should be
*reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
performance reasons. Lengthening commonly-used and frequently-joined
identifier fields is not a good option, IMO.

Best,
-jay

> The review that would implement this change in size
> is https://review.openstack.org/#/c/74214 and is actively being worked
> on/reviewed.
> 
> 
> I have already spoken with the Nova team, and a single instance has
> been identified that would require a migration (that will have a fix
> proposed for the I3 timeline). 
> 
> 
> If there are any other known locations that would have issues with an
> increased USER_ID size, or any concerns with this change to USER_ID
> format, please respond so that the issues/concerns can be addressed.
>  Again, the plan is not to change current USER_IDs but that new ones
> could be up to 255 characters in length.
> 
> 
> Cheers,
> Morgan Fainberg
> —
> Morgan Fainberg
> Principal Software Engineer
> Core Developer, Keystone
> m...@metacloud.com
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] status and feedback

2014-02-25 Thread Tim Hinrichs
Hi all,

I wanted to send out a quick status update on Congress and start a discussion 
about short-term goals.

1) Logistics

IRC Meeting time
Tuesday 1700 UTC in openstack-meeting-3
Every other week starting Feb 25
(Note this is a new meeting time/location.)

2) We have two design docs, which we would like feedback on.

Toplevel design doc:
https://docs.google.com/a/vmware.com/document/d/1f2xokl9Tc47aV67KEua0PBRz4jsdSDLXth7dYe-jz6Q/edit

Data integration design doc:
https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit

3) Short term goals.

I think it's useful to get an end-to-end system up and running to make it 
easier to communicate what we're driving at.  To that end, I'm suggesting we 
take the following policy and build up enough of Congress to make it work.

"Every network connected to a VM must either be public or owned by someone in 
the same group as the VM owner"
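
To make that concrete, here is a toy Python rendering of the rule over data pulled
from the three services (plain Python for illustration only -- this is not Congress
policy syntax):

def violations(networks, ports, servers, groups):
    """Yield (network_id, vm_id) pairs that break the rule above.

    networks: {net_id: {'shared': bool, 'owner': user_id}}  # from Neutron
    ports:    [(net_id, vm_id)]                             # from Neutron
    servers:  {vm_id: owner_user_id}                        # from Nova
    groups:   {user_id: set of group names}                 # from Keystone/AD
    """
    for net_id, vm_id in ports:
        net = networks[net_id]
        if net['shared']:
            continue  # public networks are always allowed
        if groups[net['owner']] & groups[servers[vm_id]]:
            continue  # network owner and VM owner share a group
        yield (net_id, vm_id)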

This example is compelling because it combines information from Neutron, Nova, 
and Keystone (or another group-management data-source such as ActiveDirectory). 
 To that end, I suggest focusing on the following tasks in the short-term.

- Data integration framework, including read-only support for 
Neutron/Nova/Keystone
- Exposing policy engine via API
- Policy engine error handling
- Simple scaling tests
- Basic user docs
- Execution framework: it would be nice if we could actually execute some of 
the actions we tell Congress about, but this is lowest priority on the list, 
for me.

Thoughts?
Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Update on new project creation

2014-02-25 Thread Monty Taylor

Hey all!

You may or may not have noticed that there's been a backlog with new 
project creation in infra. There are some specific issues with our 
automation that are causing this, and we've got a plan in place to fix 
them. Until we do, which is targeted to be done by the end of March, 
we're expecting there to continue to be delays in project creation.


To help mitigate the pain around that, we've decided two new things:

** Topic name: new-project **

First of all, if you are submitting a change to create a new project, 
we're going to require that you set the topic name to new-project. This 
will allow us to easily batch-review and process the requests.


** New Project Fridays **

We're going to have to manually run some scripts for new projects now 
until the automation is fixed. To keep the crazy down, we will be having 
New Project Fridays. Which means we'll get down and dirty with approving 
and running the scripts for new projects on Fridays.


Sorry for the inconvenience. There are a bunch of moving pieces to 
adding a new project, and we've kinda hit the limit of the current 
automation, but hope to have it all fixed up soon.


Thanks!
Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Ed,

That sounds good to me, actually:  As long as 'cloud admin' API functions
are represented as well as 'simple user workflows', then I'm all for a
unified API that simply exposes more depending on permissions.

Stephen


On Tue, Feb 25, 2014 at 12:15 PM, Ed Hall  wrote:

>
>  On Feb 25, 2014, at 10:10 AM, Stephen Balukoff 
> wrote:
>
>On Feb 25, 2014 at 3:39 AM, enikano...@mirantis.com wrote:
>
>> Agree, however actual hardware is beyond logical LBaaS API but could
>> be a part of admin LBaaS API.
>>
>
>  Aah yes--  In my opinion, users should almost never be exposed to
> anything that represents a specific piece of hardware, but cloud
> administrators must be. The logical constructs the user is exposed to can
> "come close" to what an actual piece of hardware is, but again, we should
> be abstract enough that a cloud admin can swap out one piece of hardware
> for another without affecting the user's workflow, application
> configuration, (hopefully) availability, etc.
>
>  I recall you said previously that the concept of having an 'admin API'
> had been discussed earlier, but I forget the resolution behind this (if
> there was one). Maybe we should revisit this discussion?
>
>  I tend to think that if we acknowledge the need for an admin API, as
> well as some of the core features it's going to need, and contrast this
> with the user API (which I think is mostly what Jay and Mark McClain are
> rightly concerned about), it'll start to become obvious which features
> belong where, and what kind of data model will emerge which supports both
> APIs.
>
>
>  [I’m new to this discussion; my role at my employer has been shifted from
> an internal to a community focus and I’m madly
> attempting to come up to speed. I’m a software developer with an
> operations focus; I’ve worked with OpenStack since Diablo
> as Yahoo’s team lead for network integration.]
>
> Two levels (user and admin) would be the minimum. But our experience over
> time is that even administrators occasionally
> need to be saved from themselves. This suggests that, rather than two or
> more separate APIs, a single API with multiple
> roles is needed. Certain operations and attributes would only be
> accessible to someone acting in an appropriate role.
>
>  This might seem over-elaborate at first glance, but there are other
> dividends: a single API is more likely to be consistent,
> and maintained consistently as it evolves. By taking a role-wise view the
> hierarchy of concerns is clarified. If you focus on
> the data model first you are more likely to produce an arrangement that
> mirrors the hardware but presents difficulties in
> representing and implementing user and operator intent.
>
>  Just some general insights/opinions — take for what they’re worth.
>
>   -Ed
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Sławek Kapłoński
Hello,

Trinath, I had seen this presentation before you sent it to me. It gives a nice
explanation of what methods are (and should be) in a type driver and a mechanism
driver, but I needed exactly the information Assaf sent me. Thanks to both of you
for your help :)

--
Best regards
Sławek Kapłoński
On Tuesday, 25 February 2014 12:18:50, Assaf Muller wrote:

> - Original Message -
> 
> > Hi
> > 
> > Hope this helps
> > 
> > http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
> > 
> > ___
> > 
> > Trinath Somanchi
> > 
> > _
> > From: Sławek Kapłoński [sla...@kaplonski.pl]
> > Sent: Tuesday, February 25, 2014 9:24 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] Neutron ML2 and openvswitch agent
> > 
> > Hello,
> > 
> > I have a question for you guys. Can someone explain to me (or send a link
> > with such an explanation) how exactly the ML2 plugin, which runs on the
> > neutron server, communicates with the compute hosts running openvswitch
> > agents?
> 
> Maybe this will set you on your way:
> ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
> uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
> the topic stated in that file.
> 
> When the message is received by the OVS agent, it calls:
> neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
> update.
> > I suppose that this works with rabbitmq queues, but I need
> > to add my own function which will be called in this agent and I don't know
> > how to do that. It would be perfect if such a thing were possible by
> > writing, for example, a new mechanism driver in the ML2 plugin (but how?).
> > Thanks in advance for any help from You :)
> > 
> > --
> > Best regards
> > Slawek Kaplonski
> > sla...@kaplonski.pl
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Jay Pipes
On Mon, 2014-02-24 at 18:07 -0800, Stephen Balukoff wrote:
> Hi y'all,
> 
> Jay, in the L7 example you give, it looks like you're setting SSL
> parameters for a given load balancer front-end. 

Correct. The example comes straight out of the same example in the ELB
API documentation. The only difference is that in my CLI commands, there's
no mention of a listener, whereas in the ELB examples, there is (since
the ELB API can only configure this on the load balancer by adding or
removing listener objects to/from the load balancer object).

> Do you have an example you can share where where certain traffic is
> sent to one set of back-end nodes, and other traffic is sent to a
> different set of back-end nodes based on the URL in the client
> request? (I'm trying to understand how this can work without the
> concept of 'pools'.)  

Great example. This is quite a common scenario -- consider serving
requests for static images or content from one set of nginx servers and
non-static content from another set of, say, Apache servers running
Tomcat or similar.

OK, I'll try to work through my ongoing CLI suggestions for the
following scenario:

* User has 3 Nova instances running nginx and serving static files.
These instances all have private IP addresses in subnet 192.168.1.0/24.
* User has 3 Nova instances running Apache and tomcat and serving
dynamic content. These instances all have private IP addresses in subnet
192.168.2.0/24
* User wants any traffic coming in to the balancer's front-end IP with a
URI beginning with "static.example.com" to get directed to any of the
nginx nodes
* User wants any other traffic coming in to the balancer's front-end IP
to get directed to any of the Apache nodes
* User wants sticky session handling enabled ONLY for traffic going to
the Apache nodes

Here is what some proposed CLI commands might look like in my
"user-centric flow of things":

# Assume we've created a load balancer with ID $BALANCER_ID using
# Something like I showed in my original response:
 
neutron balancer-create --type=advanced --front= \
 --back= --algorithm="least-connections" \
 --topology="active-standby"

Note that in the above call,  includes **all of the Nova
instances that would be balanced across**, including all of the nginx
and all of the Apache instances.

Now, let's set up our static balancing. First, we'd create a new L7
policy, just like the SSL negotiation one in the previous example:

neutron l7-policy-create --type="uri-regex-matching" \
 --attr=URIRegex="static\.example\.com.*"

Presume above returns an ID for the policy $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer and
spreading load to the nginx nodes by doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
 --subnet-cidr=192.168.1.0/24

We could then indicate to the balancer that all other traffic should be
sent to only the Apache nodes:

neutron l7-policy-create --type="uri-regex-matching" \
 --attr=URIRegex="static\.example\.com.*" \
 --attr="RegexMatchReverse=true"

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID \
 --subnet-cidr=192.168.2.0/24

> Also, what if the first group of nodes needs a different health check
> run against it than the second group of nodes?

neutron balancer-apply-healthcheck $BALANCER_ID $HEALTHCHECK_ID \
 --subnet-cidr=192.168.1.0/24

where $HEALTHCHECK_ID would be the ID of a simple healthcheck object.

The biggest advantage to this proposed API and CLI is that we are not
introducing any terminology into the Neutron LBaaS API that is not
necessary when existing terms in the main Neutron API already exist to
describe such things. You will note that I do not use the term "pool"
above, since the concept of a subnet (and its associated CIDR) are
already well-established objects in the Neutron API and can serve the
exact same purpose for Neutron LBaaS API.

> As far as hiding implementation details from the user:  To a certain
> degree I agree with this, and to a certain degree I do not: OpenStack
> is a cloud OS fulfilling the needs of supplying IaaS. It is not a
> PaaS. As such, the objects that users deal with largely are analogous
> to physical pieces of hardware that make up a cluster, albeit these
> are virtualized or conceptualized. Users can then use these conceptual
> components of a cluster to build the (virtual) infrastructure they
> need to support whatever application they want. These objects have
> attributes and are expected to act in a certain way, which again, are
> usually analogous to actual hardware.

I disagree. A cloud API should strive to shield users of the cloud from
having to understand underlying hardware APIs or object models.

> If we were building a PaaS, the story would be a lot different--  but
> what we are building is a cloud OS that provides Infrastructure (as a
> service).

I still think we need to simplify the APIs as much as we can, and remove
the underlying implementation (which includes the da

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Ed Hall

On Feb 25, 2014, at 10:10 AM, Stephen Balukoff 
<sbaluk...@bluebox.net> wrote:
 On Feb 25, 2014 at 3:39 AM, enikano...@mirantis.com wrote:
Agree, however actual hardware is beyond logical LBaaS API but could be a part 
of admin LBaaS API.

Aah yes--  In my opinion, users should almost never be exposed to anything that 
represents a specific piece of hardware, but cloud administrators must be. The 
logical constructs the user is exposed to can "come close" to what an actual 
piece of hardware is, but again, we should be abstract enough that a cloud 
admin can swap out one piece of hardware for another without affecting the 
user's workflow, application configuration, (hopefully) availability, etc.

I recall you said previously that the concept of having an 'admin API' had been 
discussed earlier, but I forget the resolution behind this (if there was one). 
Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well as 
some of the core features it's going to need, and contrast this with the user 
API (which I think is mostly what Jay and Mark McClain are rightly concerned 
about), it'll start to become obvious which features belong where, and what 
kind of data model will emerge which supports both APIs.

[I’m new to this discussion; my role at my employer has been shifted from an 
internal to a community focus and I’m madly
attempting to come up to speed. I’m a software developer with an operations 
focus; I’ve worked with OpenStack since Diablo
as Yahoo’s team lead for network integration.]

Two levels (user and admin) would be the minimum. But our experience over time 
is that even administrators occasionally
need to be saved from themselves. This suggests that, rather than two or more 
separate APIs, a single API with multiple
roles is needed. Certain operations and attributes would only be accessible to 
someone acting in an appropriate role.

This might seem over-elaborate at first glance, but there are other dividends: 
a single API is more likely to be consistent,
and maintained consistently as it evolves. By taking a role-wise view the 
hierarchy of concerns is clarified. If you focus on
the data model first you are more likely to produce an arrangement that mirrors 
the hardware but presents difficulties in
representing and implementing user and operator intent.

Just some general insights/opinions — take for what they’re worth.

 -Ed

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Morgan Fainberg
For purposes of supporting multiple backends for Identity (multiple LDAP, mix 
of LDAP and SQL, federation, etc) Keystone is planning to increase the maximum 
size of the USER_ID field from an upper limit of 64 to an upper limit of 255. 
This change would not impact any currently assigned USER_IDs (they would remain 
in the old simple UUID format), however, new USER_IDs would be increased to 
include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 

There is the obvious concern that projects are utilizing (and storing) the 
user_id in a field that cannot accommodate the increased upper limit. Before 
this change is merged in, it is important for the Keystone team to understand 
if there are any places that would be overflowed by the increased size.

The review that would implement this change in size is 
https://review.openstack.org/#/c/74214 and is actively being worked on/reviewed.

I have already spoken with the Nova team, and a single instance has been 
identified that would require a migration (that will have a fix proposed for 
the I3 timeline). 

If there are any other known locations that would have issues with an increased 
USER_ID size, or any concerns with this change to USER_ID format, please 
respond so that the issues/concerns can be addressed.  Again, the plan is not 
to change current USER_IDs but that new ones could be up to 255 characters in 
length.

Cheers,
Morgan Fainberg
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party-testing] CHANGED TIME and VENUE: Workshop/Q&A session on third party testing will be on IRC now!

2014-02-25 Thread Jay Pipes
Hi again Stackers,

After discussions with folks on the infrastructure team, I'm making some
changes to the proposed workshop venue and time. As was rightly pointed
out by Jim B and others, we want to encourage folks that are setting up
their CI systems to use the standard communication tools to interact
with the OpenStack community. That standard tool is IRC, with meetings
on Freenode. In addition, Google Hangout is not a free/libre piece of
software, and we want to encourage free and open source contribution and
participation.

Alright, with that said, we will conduct the first 3rd party OpenStack
CI workshop/Q&A session on Freenode IRC, #openstack-meeting on Monday,
March 3rd, at 13:00 EST (18:00 UTC):

https://wiki.openstack.org/wiki/Meetings#Third_Party_OpenStack_CI_Workshop_and_Q.26A_Meetings

Unlike regular OpenStack team meetings on IRC, there will not be a set
agenda. Instead, the IRC channel will be reserved for folks eager to get
questions about their CI installation answered and are looking for some
debugging assistance with Jenkins, Zuul, Nodepool et al.

I look forward to seeing you there!

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Artifact dependencies: Strict vs Soft

2014-02-25 Thread Arnaud Legendre
Hi Alexander, Thank you for your input. 

I think we need to clearly define what a version means for an artifact. 
First, I would like to come back to the definition of an artifact: this broader 
audience might not be aware of this concept. 
As of today, my understanding is the following: 
An artifact is a set of metadata without any pre-defined structure. The only 
contract is that these artifacts will reference one or many blocks of bits 
(potentially images) stored in the Glance storage backends. 
With that in mind, I can see two types of versions: metadata version and the 
version of the actual bits. 
I think the version you are talking about is a mix of the two versions I 
mention above. Could you confirm? 

Now, I have another question: you mention that you can have several versions of 
an artifact accessible in the system: does that mean that the previous versions 
are still available (i.e. both metadata and actual blocks of data are 
available)? Can I roll back and use version #1 if the latest version of my 
artifact is version #2? Based on your question, I think the answer is Yes, in 
which case this comes with a lot of other issues: we are dealing with blocks of 
data that can be large, so you need to give the user the ability to say: 
"I want to store only the last 2 versions and not the full history". So, to 
answer your question, I would like to see an API which provides all the 
versions available (accessible) for a given artifact. Then, it's up to the 
artifact using it to decide which one it should "import". 
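
As a rough illustration of that last point (everything here is hypothetical,
including the assumption that versions are plain dotted numbers):

def pick_versions(available, keep_last=2):
    """Decide which artifact version to import and which ones to retain.

    'available' is the list of version strings the API reports for one
    artifact, e.g. ['0.1', '0.2', '1.0'].
    """
    ordered = sorted(available, key=lambda v: [int(p) for p in v.split('.')])
    return {'import': ordered[-1],         # use the latest version
            'keep': ordered[-keep_last:]}  # e.g. retain only the last two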

Thanks, 
Arnaud 



- Original Message -

From: "Alexander Tivelkov"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Tuesday, February 25, 2014 3:57:41 AM 
Subject: [openstack-dev] [Glance][Artifacts] Artifact dependencies: Strict vs 
Soft 

Hi folks, 

While I am still working on designing artifact-related APIs (sorry, the task is 
taking me longer than expected due to a heavy load in Murano, related to the 
preparation of the incubation request), I've got a topic I wanted to discuss with 
the broader audience. 

It seems like we have agreed on the idea that the artifact storage should 
support dependencies between the artifacts: the ability for any given artifact to 
reference some other artifacts as its dependencies, and an API call which will 
allow retrieving the whole dependency graph of a given artifact (i.e. its 
direct and transitive dependencies). 

Another idea which was always kept in mind when we were designing the artifact 
concept was artifact versioning: the system should allow storing different 
artifacts having identical names but different versions, and the API should 
be able to return the latest (based on some notation) version of the artifact. 
Being able to construct such queries actually gives the ability to define a kind 
of alias, so a url like /v2/artifacts?type=image&name=ubuntu&version=latest 
will always return the latest version of the given artifact (the ubuntu image in 
this case). The need to be able to define such "aliases" was expressed in [1], 
and the ability to satisfy this need with the artifact API was mentioned in [2] 

But combining these two ideas brings up an interesting question: how should 
artifacts define their dependencies? Should this be an explicit strict 
reference (i.e. referencing the specific artifact by its id), or should it be 
an implicit soft reference, similar to the "alias" described above (i.e. 
specifying the dependency as "A requires the latest version of B" or even "A 
requires 0.2<=B<0.3")? 
The latter seems familiar: it is similar to pip dependency specifications, right? 
This approach obviously may be very useful (at least I clearly see its 
benefits for Murano's application packages), but it implies lazy evaluation, 
which may dramatically impact performance. 
By contrast, the former approach - with explicit references - requires much 
less computation. Moreover, if we decide that the artifact dependencies are 
immutable, this will allow us to denormalize the storage of the dependency 
graph and store all the transitive dependencies of the given artifact in a flat 
table, so the dependency graph may be returned by a single SQL query, without a 
need for recursive calls, which are otherwise unavoidable in a normalized 
database storing such hierarchical structures. 

Meanwhile, the mutability of dependencies is also unclear to me: the ability to 
modify them seems to have its own pros and cons, so this is another topic to 
discuss. 

I'd like to hear your opinion on all of these. Any feedback is welcome, and we 
may come back to this topic at Thursday's meeting. 


Thanks! 


[1] https://blueprints.launchpad.net/glance/+spec/glance-image-aliases 
[2] https://blueprints.launchpad.net/glance/+spec/artifact-repository-api 


-- 
Regards, 
Alexander Tivelkov 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread W Chan
Sure.  Let me give this some thought and work with you separately.  Before
we speak up, we should have a proposal for discussion.
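
For readers following along, here is a bare-bones sketch of the client/server
split described in the quoted message below, using oslo.messaging with its
default rabbit transport (the topic, server and method names are illustrative
only, not the actual Mistral code):

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)   # rabbit by default
target = messaging.Target(topic='mistral.executor', server='executor-1',
                          version='1.0')


class ExecutorEndpoint(object):
    def handle_task(self, ctxt, task):
        # run the action and report the result back to the engine
        pass


# Executor service side:
server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()],
                                  executor='blocking')
# server.start(); server.wait()

# Engine side: cast() is asynchronous, call() blocks for a reply.
client = messaging.RPCClient(transport, target)
# client.cast({}, 'handle_task', task={'name': 'createVM'})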


On Mon, Feb 24, 2014 at 9:53 PM, Dmitri Zimine  wrote:

> Winson,
>
> While you're looking into this and working on the design, may be also
> think through other executor/engine communications.
>
> We talked about executor communicating to engine over 3 channels (DB,
> REST, RabbitMQ) which I wasn't happy about ;) and put it off for some time.
> May be it can be rationalized as part of your design.
>
> DZ.
>
> On Feb 24, 2014, at 11:21 AM, W Chan  wrote:
>
> Renat,
>
> Regarding your comments on change https://review.openstack.org/#/c/75609/,
> I don't think the port to oslo.messaging is just a swap from pika to
> oslo.messaging.  OpenStack services as I understand is usually implemented
> as an RPC client/server over a messaging transport.  Sync vs async calls
> are done via the RPC client call and cast respectively.  The messaging
> transport is abstracted and concrete implementation is done via
> drivers/plugins.  So the architecture of the executor if ported to
> oslo.messaging needs to include a client, a server, and a transport.  The
> consumer (in this case the mistral engine) instantiates an instance of the
> client for the executor, makes the method call to handle task, the client
> then sends the request over the transport to the server.  The server picks
> up the request from the exchange and processes the request.  If cast
> (async), the client side returns immediately.  If call (sync), the client
> side waits for a response from the server over a reply_q (a unique queue
> for the session in the transport).  Also, oslo.messaging allows versioning
> in the message. Major version change indicates API contract changes.  Minor
> version indicates backend changes but with API compatibility.
>
> So, where I'm headed with this change...  I'm implementing the basic
> structure/scaffolding for the new executor service using oslo.messaging
> (default transport with rabbit).  Since the whole change will take a few
> rounds, I don't want to disrupt any changes that the team is making at the
> moment and so I'm building the structure separately.  I'm also adding
> versioning (v1) in the module structure to anticipate any versioning
> changes in the future.   I expect the change request will lead to some
> discussion as we are doing here.  I will migrate the core operations of the
> executor (handle_task, handle_task_error, do_task_action) to the server
> component when we agree on the architecture and switch the consumer
> (engine) to use the new RPC client for the executor instead of sending the
> message to the queue over pika.  Also, the launcher for
> ./mistral/cmd/task_executor.py will change as well in subsequent round.  An
> example launcher is here
> https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
>  The interceptor project here is what I use to research how oslo.messaging
> works.  I hope this is clear. The blueprint only changes how the request
> and response are being transported.  It shouldn't change how the executor
> currently works.
>
> Finally, can you clarify the difference between local vs scalable engine?
>  I personally do not prefer to explicitly name the engine scalable because
> this requirement should be in the engine by default and we do not need to
> explicitly state/separate that.  But if this is a roadblock for the change,
> I can put the scalable structure back in the change to move this forward.
>
> Thanks.
> Winson
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-25 Thread W Chan
Thanks.  I will do that today and follow up with a description of the
proposal.


On Mon, Feb 24, 2014 at 10:21 PM, Renat Akhmerov wrote:

> "In process" is fine to me.
>
> Winson, please register a blueprint for this change and put the link in
> here so that everyone can see what it all means exactly. My feeling is that
> we can approve and get it done pretty soon.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 25 Feb 2014, at 12:40, Dmitri Zimine  wrote:
>
> > I agree with Winson's points. Inline.
> >
> > On Feb 24, 2014, at 8:31 PM, Renat Akhmerov 
> wrote:
> >
> >>
> >> On 25 Feb 2014, at 07:12, W Chan  wrote:
> >>
> >>> As I understand, the local engine runs the task immediately whereas
> the scalable engine sends it over the message queue to one or more
> executors.
> >>
> >> Correct.
> >
> > Note: that "local" is confusing here, "in process" will reflect what it
> is doing better.
> >
> >>
> >>> In what circumstances would we see a Mistral user using a local engine
> (other than testing) instead of the scalable engine?
> >>
> >> Yes, mostly testing, but it could also be used for demonstration purposes
> >> or in environments where installing RabbitMQ is not desirable.
> >>
> >>> If we are keeping the local engine, can we move the abstraction to the
> executor instead, having drivers for a local executor and remote executor?
>  The message flow from the engine to the executor would be consistent, it's
> just where the request will be processed.
> >>
> >> I think I get the idea and it sounds good to me. We could really have
> executor in both cases but the transport from engine to executor can be
> different. Is that what you're suggesting? And what do you call driver here?
> >
> > +1 to "abstraction to the executor", indeed the local and remote engines
> today differ only by how they invoke executor, e.g. transport / driver.
> >
> >>
> >>> And since we are porting to oslo.messaging, there's already a fake
> driver that allows for an in process Queue for local execution.  The local
> executor can be a derivative of that fake driver for non-testing purposes.
>  And if we don't want to use an in process queue here to avoid the
> complexity, we can have the client side module of the executor determine
> whether to dispatch to a local executor vs. RPC call to a remote executor.
> >>
> >> Yes, that sounds interesting. Could you please write up some etherpad
> with details explaining your idea?
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Extending Sparklines to Horizon tables

2014-02-25 Thread Sayali Lunkad
Hello everyone,

I am working on this bp
https://blueprints.launchpad.net/horizon/+spec/sparklines in which I
am extending the tables in horizon to show sparklines. I have listed down
some of the columns that I think will be appropriate to extend and the
possible meters we could use for the same. Please let me know of any other
variations that can be added or removed in the design.


Table: Usage Summary
  Columns: vcpu, disk, RAM
  Possible meters: vcpu; disk.read.requests, disk.write.requests; memory (MB)

Table: Instance
  Columns: status, power state
  Possible meters: instance

Table: Hypervisors
  Columns: vcpu, RAM, storage, instances
  Possible meters: vcpu; memory (MB); disk.read.requests, disk.write.requests;
  instance

Table: Volume
  Columns: status
  Possible meters: volume

Table: Image
  Columns: status
  Possible meters: image.upload, image.delete, image.download
  Thank you,

Sayali
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-25 Thread Clark Boylan
On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 wrote:
> Hi all,
>
> [1] made it possible for openstack_citest MySQL user to create new
> databases in tests on demand (which is very useful for parallel
> running of tests on MySQL and PostgreSQL, thank you, guys!).
>
> Unfortunately, openstack_citest user can only create tables in the
> created databases, but not to perform SELECT/UPDATE/INSERT queries.
> Please see the bug [2] filed by Joshua Harlow.
>
> In PostgreSQL the user who creates a database, becomes the owner of
> the database (and can do everything within this database), and in
> MySQL we have to GRANT those privileges explicitly. But
> openstack_citest doesn't have the permission to do GRANT (even on its
> own databases).
>
> I think, we could overcome this issue by doing something like this
> while provisioning a node:
> GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
> 'openstack_citest'@'localhost';
>
> and then create databases giving them names starting with the prefix value.
>
> Is it an acceptable solution? Or am I missing something?
>
> Thanks,
> Roman
>
> [1] https://review.openstack.org/#/c/69519/
> [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The problem with the prefix approach is it doesn't scale. At some
point we will decide we need a new prefix then a third and so on
(which is basically what happened at the schema level). That said we
recently switched to using single use slaves for all unittesting so I
think we can safely GRANT ALL on *.* to openstack_citest@localhost and
call that good enough. This should work fine for upstream testing but
may not be super friendly to others using the puppet manifests on
permanent slaves. We can wrap the GRANT in a condition in puppet that
is set only on single use slaves if this is a problem.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-25 Thread Ben Nemec

On 2014-02-24 14:51, Sean Dague wrote:

On 02/24/2014 03:10 PM, Ben Nemec wrote:

On 2014-02-21 17:09, Sean Dague wrote:

On 02/21/2014 05:28 PM, Clark Boylan wrote:

On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec 
wrote:

On 2014-02-21 13:01, Mike Spreitzer wrote:

https://bugs.launchpad.net/devstack/+bug/1203680 is literally about Glance
but Nova has the same problem.  There is a fix released, but just merging
that fix accomplishes nothing --- we need people who run DevStack to set the
new variable (INSTALL_TESTONLY_PACKAGES).  This is something that needs to
be documented (in http://devstack.org/configuration.html and all the places
that tell people how to do unit testing, for examples), so that people know
to do it, right?



IMHO, that should be enabled by default.  Every developer using devstack is
going to want to run unit tests at some point (or should anyway...), and if
the gate doesn't want the extra install time for something like tempest that
probably doesn't need these packages, then it's much simpler to disable it
in that one config instead of every separate config used by every developer.

-Ben



I would be wary of relying on devstack to configure your unittest
environments. Just like it takes over the node you run it on, devstack
takes full ownership of the repos it clones and will do potentially
lossy things like `git reset --hard` when you don't expect it to. +1
to documenting the requirements for unittesting, not sure I would
include devstack in that documentation.


Agreed, I never run unit tests in the devstack tree. I run them on my
laptop or other non dedicated computers. That's why we do unit tests in
virtual envs, they don't need a full environment.

Also many of the unit tests can't be run when openstack services are
actually running, because they try to bind to ports that openstack
services use.

It's one of the reasons I've never considered that path a priority in
devstack.

-Sean



What is the point of devstack if we can't use it for development?


It builds you a consistent cloud.


Are we really telling people that they shouldn't be altering the code in
/opt/stack because it's owned by devstack, and devstack reserves the
right to blow it away any time it feels the urge?


Actually, I tell people that all that time. Most of them don't listen to
me. :)

Devstack defaults to RECLONE=False, but that tends to break people in
other ways (like having month old trees they are building against). But
the reality is I've watched tons of people have their work reset on them
because they were developing in /opt/stack, so I tell people don't do
that (and if they do it anyway, at least they realize it's dangerous).


How would you feel about doing a git stash before doing reclones?  
Granted, that still requires people to know that the changes were 
stashed, but at least if someone reclones, loses their changes, and 
freaks out on #openstack-dev or something we can tell them how to get 
the changes back. :-)





And if that's not
what we're saying, aren't they going to want to run unit tests before
they push their changes from /opt/stack?  I don't think it's reasonable
to tell them that they have to copy their code to another system to run
unit tests on it.


Devstack can clone from alternate sources, and that's my approach on
anything long running. For instance, keeping trees in ~/code/ and adjusting
localrc to use those trees/branches that I'm using (with the added
benefit of being able to easily reclone the rest of the tree).

Lots of people use devstack + vagrant, and do basically the same thing
with their laptop repos being mounted up into the guest.


So is there some git magic that also keeps the repos in sync, or do you 
have to commit/pull/restart service every time you make changes?  I ask 
because experience tells me I would inevitably forget one of those steps 
at some point and be stymied by old code still running in my devstack.  
Heck, I occasionally forget just the "restart service" step. ;-)




And some people do it the way you are suggesting above.

The point is, for better or worse, what we have is a set of tools from
which you can assemble a workflow that suits your needs. We don't have a
prescribed "this is the one way to develop" approach. There is some
assumption that you'll pull together something from the tools provided.

-Sean


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-25 Thread Stephen Balukoff
Hi Eugene!

Responses inline:

On Tue, Feb 25, 2014 at 3:33 AM, Eugene Nikanorov
wrote:
>
> I'm really not sure what Mark McClain on some other folks see as
> implementation details. To me the 'instance' concept is as logical as
> others (vips/pool/etc). But anyway, it looks like majority of those who
> discuss, sees it as redundant concept.
>

Maybe we should have a discussion around what qualifies as a 'logical
concept' or 'logical construct,' and why the 'loadbalancer' concept you've
been championing either does or does not qualify, so we're all (closer to
being) on the same page before we discuss model changes?



> Agree, however actual hardware is beyond logical LBaaS API but could be a
> part of admin LBaaS API.
>

Aah yes--  In my opinion, users should almost never be exposed to anything
that represents a specific piece of hardware, but cloud administrators must
be. The logical constructs the user is exposed to can "come close" to what
an actual piece of hardware is, but again, we should be abstract enough
that a cloud admin can swap out one piece of hardware for another without
affecting the user's workflow, application configuration, (hopefully)
availability, etc.

I recall you said previously that the concept of having an 'admin API' had
been discussed earlier, but I forget the resolution behind this (if there
was one). Maybe we should revisit this discussion?

I tend to think that if we acknowledge the need for an admin API, as well
as some of the core features it's going to need, and contrast this with the
user API (which I think is mostly what Jay and Mark McClain are rightly
concerned about), it'll start to become obvious which features belong
where, and what kind of data model will emerge which supports both APIs.


Thanks,
Stephen



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Community meeting minutes - 02/25/2014

2014-02-25 Thread Alexander Tivelkov
Hi,

Thanks for joining murano weekly meeting.
Here are the meeting minutes and the logs:

http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-02-25-17.00.html
http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-02-25-17.00.log.html

See you next week!

--
Regards,
Alexander Tivelkov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [WSME] Dynamic types and POST requests

2014-02-25 Thread Doug Hellmann
OK, that's not how that feature is meant to be used.

The idea is that on application startup plugins or extensions will be
loaded that configure the extra attributes for the class. That happens one
time, and the configuration does not depend on data that appears in the
request itself.
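
A minimal sketch of that pattern (the attribute names are invented, and the
keyword-argument form of add_attributes() is an assumption based on how it is
described in this thread, so check the WSME docs for the exact signature):

from wsme import types as wtypes


class Host(wtypes.DynamicBase):
    """Static part of the type; extra attributes are added at startup."""
    name = wtypes.text


def load_extensions():
    # Called once when the application starts (e.g. by a plugin loader),
    # not per request.
    Host.add_attributes(rack=wtypes.text, vcpus=int)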

Doug


On Tue, Feb 25, 2014 at 9:07 AM, Sylvain Bauza wrote:

> Let me give you a bit of code then, that's currently WIP with heavy
> rewrites planned on the Controller side thanks to Pecan hooks [1]
>
> So, L102 (GET request) the convert() method is passing the result dict as
> kwargs, where the Host.__init__() method is adding dynamic attributes.
> That does work :-)
>
> L108, I'm specifying that my body string is basically an Host object.
> Unfortunately, I can provide extra keys to that where I expect to be extra
> attributes. WSME will then convert the body into an Host [2], but as the
> Host class doesn't yet know which extra attributes are allowed, none of my
> extra keys are taken.
> As a result, the 'host' (instance of Host) argument of the post() method
> is not containing the extra attributes and thus, not passed for creation to
> my Manager.
>
> As said, I can still get the request body using Pecan directly within the
> post() method, but I then would have to manage the mimetype, and do the
> adding of the extra attributes there. That's pretty ugly IMHO.
>
> Thanks,
> -Sylvain
>
> [1] http://paste.openstack.org/show/69418/
>
> [2] https://github.com/stackforge/wsme/blob/master/wsmeext/pecan.py#L71
>
>
> 2014-02-25 14:39 GMT+01:00 Doug Hellmann :
>
>
>>
>>
>> On Tue, Feb 25, 2014 at 6:55 AM, Sylvain Bauza 
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks to WSME 0.6, there is now possibility to add extra attributes to
>>> a Dynamic basetype.
>>> I successfully ended up showing my extra attributes from a dict to a
>>> DynamicType using add_attributes() but I'm now stuck with POST requests
>>> having dynamic body data.
>>>
>>> Although I'm declaring in wsexpose() my DynamicType, I can't say to WSME
>>> to map the pecan.request.body dict with my wsattrs and create new
>>> attributes if none matched.
>>>
>>> Any idea on how to do this ? I looked at WSME and the type is registered
>>> at API startup, not when being called, so the get_arg() method fails to
>>> fill in the gaps.
>>>
>>> I can possibly do a workaround within my post function, where I could
>>> introspect pecan.request.body and add extra attributes, so it sounds a bit
>>> crappy as I have to handle the mimetype already managed by WSME.
>>>
>>
>> I'm not sure I understand the question. Are you saying that the dynamic
>> type feature works for GET arguments but not POST body content?
>>
>> Doug
>>
>>
>>
>>>
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GSoC 2014

2014-02-25 Thread Davanum Srinivas
Andrew,

Please see some details regarding projects/ideas/mentors etc in our
wiki - https://wiki.openstack.org/wiki/GSoC2014 You can also talk to
some of us on #openstack-gsoc irc channel.

-- dims

On Tue, Feb 25, 2014 at 11:55 AM, Andrew Chul  wrote:
> Hi, guys! My name is Andrew Chul and I'm from Russia. I graduated from National
> Research University "Moscow Power Engineering Institute" a few years ago,
> and then started postgraduate studies at "Smolensk
> University of Humanities".
>
>
> The time for filing an application to participate in projects is
> coming and I'm looking forward to 10th March. I've seen your project in the
> list of organizations which will take part in Google Summer of Code 2014,
> and I have to say your project immediately sparked my interest. Why? I have
> dreamed about such a project. I'm very interested in areas such as machine
> learning and artificial intelligence. Primarily I'm a PHP developer, but
> I'm actively developing my Python skills.
>
>
> So, 10th March is coming soon and I will file an application to
> participate in your project. I hope that I will be able to work side by
> side with you on such an interesting and educational project. Thank you for
> your attention.
>
>
> --
> Best regards, Andrew Chul.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Sanchez, Cristian A
+1 to Dina on the workflow

From: Dina Belova mailto:dbel...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: martes, 25 de febrero de 2014 13:42
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

Ok, so

>>> I'm just asking why we should hack the Keystone workflow by adding a hook, 
>>> like we did for Nova. From my POV, that's not worth it.

The idea was about some extra specs that will be processed by Climate anyway. 
Keystone will know nothing about reservations or anything like that.

>>> I think it should be a Climate "policy" (be careful, the name is confusing): 
>>> if the admin wants to grant any new project for reservations, he should place 
>>> a call to Climate. It's up to Climate-Nova (i.e. the Nova extension) to query 
>>> Climate in order to see whether the project has been granted or not.

Now I think that it'll be better, yes.
I see a workflow like this:

1) Mark the project as reservable in Climate
2) When some resource is created (like a Nova instance), it should be checked (in 
the API extensions, for example) via Climate whether the project is reservable. If 
it is, and no special reservation flags were passed, the default_reservation 
settings should be used for this instance

Sylvain, is that the idea you're talking about?

Dina



On Tue, Feb 25, 2014 at 7:53 PM, Sylvain Bauza 
mailto:sylvain.ba...@gmail.com>> wrote:



2014-02-25 16:25 GMT+01:00 Dina Belova 
mailto:dbel...@mirantis.com>>:

Why should it require to be part of Keystone to hook up on Climate ?

Sorry, can't get your point.



I'm just asking why we should hack the Keystone workflow by adding a hook, like we 
did for Nova. From my POV, that's not worth it.


Provided we consider some projects as 'reservable', we could say this should be 
a Climate API endpoint like CRUD /project/, and it would be up to the admin 
to populate it.
If we say that new projects should automatically be 'reservable', that's only a 
policy in Climate to whiteboard these.

So you propose making some API requests to Climate (like for hosts) and marking 
some already existing projects as reserved. But how will we automate the process of 
reserving resources belonging to that tenant? Or do you still propose adding some 
checks to, for example, the climate-nova extensions to check this there?

Thanks



I think it should be a Climate "policy" (be careful, the name is confusing): 
if the admin wants to grant any new project for reservations, he should place a 
call to Climate. It's up to Climate-Nova (i.e. the Nova extension) to query 
Climate in order to see whether the project has been granted or not.

Conceptually, this 'reservation' information is tied to Climate and should not 
be present within the projects.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

2014-02-25 Thread Dugger, Donald D
Sylvain-

Good point and, since you drove the discussion, we did talk about it.  For 
those that weren't there on IRC the log is at:

http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-02-25-15.00.log.html

and the etherpad where we are collecting the BPs (don't be daunted by the size 
of the etherpad, the good stuff is at the bottom) is at:

https://etherpad.openstack.org/p/icehouse-external-scheduler

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Tuesday, February 25, 2014 6:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

Hi Don,

Maybe it would be worth discussing how we could share the blueprints with 
people willing to help?

-Sylvain

2014-02-24 18:08 GMT+01:00 Dugger, Donald D 
mailto:donald.d.dug...@intel.com>>:
All-

I'm tempted to cancel the gantt meeting for tomorrow.  The only topics I have 
are the no-db scheduler update (we can probably do that via email) and the 
gantt code forklift (I've been out with the flu and there's no progress on 
that).

I'm willing to chair but I'd like to have some specific topics to talk about.

Suggestions anyone?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Can somebody describe the all the rolls about networks' admin_state_up

2014-02-25 Thread Édouard Thuleau
A thread [1] was also initiated on the ML by Sylvain, but there are no
answers/comments for the moment.

[1] http://openstack.markmail.org/thread/qy6ikldtq2o4imzl

Édouard.


On Mon, Feb 24, 2014 at 9:35 AM, 黎林果  wrote:

> Thank you very much.
>
> "IMHO when admin_state_up is false that entity should be down, meaning
> network should be down.
> otherwise what it the usage of admin_state_up ? same is true for port
> admin_state_up"
>
> Is it like a switch's power button?
>
> 2014-02-24 16:03 GMT+08:00 Assaf Muller :
> >
> >
> > - Original Message -
> >> Hi,
> >>
> >> I want to understand the admin_state_up attribute of networks, but I
> >> have not found any description of it.
> >>
> >> Can you help me to understand it? Thank you very much.
> >>
> >
> > There's a discussion about this in this bug [1].
> > From what I gather, nobody knows what admin_state_up is actually supposed
> > to do with respect to networks.
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1237807
> >
> >>
> >> Regard,
> >>
> >> Lee Li
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party-ci] Proposing a regular workshop/meeting to help folks set up CI environments

2014-02-25 Thread Jay Pipes
Hi Stackers,

I've been contacted by a number of folks with questions about setting up
a third-party CI system, and while I'm very happy to help anyone who
contacts me, I figured it would be a good idea to have a regular meeting
on Google Hangouts that would be used as a Q&A session or workshop for
folks struggling to set up their own environments.

I think Google Hangouts are ideal because we can share our screens (yes,
even on Linux systems) and get real-time feedback to the folks who have
questions.

I propose we have the first weekly meeting this coming Monday, March
3rd, at 10:00 EST (07:00 PST, 15:00 UTC).

I created a Google Hangout event here:

http://bit.ly/1cLVnkv

Feel free to sign up for the event by selecting "Yes" in the "Are you
going?" dropdown.

If Google Hangouts works well for this first week, we'll use it again.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Assaf Muller


- Original Message -
> Hi
> 
> Hope this helps
> 
> http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
> 
> ___
> 
> Trinath Somanchi
> 
> _
> From: Sławek Kapłoński [sla...@kaplonski.pl]
> Sent: Tuesday, February 25, 2014 9:24 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] Neutron ML2 and openvswitch agent
> 
> Hello,
> 
> I have a question for you guys. Can someone explain to me (or send a link
> with such an explanation) how exactly the ML2 plugin, which runs on the
> neutron server, communicates with the compute hosts running openvswitch
> agents?

Maybe this will set you on your way:
ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then uses
ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with the topic
stated in that file.

When the message is received by the OVS agent, it calls:
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_update.
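
A stripped-down sketch of that pattern, for a custom notification, could look
roughly like this (it assumes the neutron tree of this era and is loosely
modeled on AgentNotifierApi; the class names, the 'my_event' message and the
topic string are illustrative, and the agent also has to register a consumer
for the topic, as the OVS agent does in setup_rpc() via create_consumers()):

from neutron.openstack.common.rpc import proxy as rpc_proxy


class MyAgentNotifierApi(rpc_proxy.RpcProxy):
    # Server side: fanout-cast a message to all listening agents.  This
    # could be called, for example, from a mechanism driver's
    # update_port_postcommit().

    BASE_RPC_API_VERSION = '1.0'

    def __init__(self, topic):
        super(MyAgentNotifierApi, self).__init__(
            topic=topic, default_version=self.BASE_RPC_API_VERSION)
        self.topic_my_event = topic + '-my-event'  # illustrative topic name

    def my_event(self, context, port):
        # The message name ('my_event') must match the method name exposed
        # by the agent-side class below.
        self.fanout_cast(context,
                         self.make_msg('my_event', port=port),
                         topic=self.topic_my_event)


class MyEventCallbackMixin(object):
    # Agent side: mix this into the agent class that consumes the topic.

    def my_event(self, context, **kwargs):
        port = kwargs.get('port')
        # ... do the custom per-agent work for this port here ...
        return port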

> I suppose that this works with rabbitmq queues, but I need
> to add my own function which will be called in this agent, and I don't know
> how to do that. It would be perfect if such a thing were possible by
> writing, for example, a new mechanism driver in the ML2 plugin (but how?).
> Thanks in advance for any help from you :)
> 
> --
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] IRC meeting moved

2014-02-25 Thread Tim Hinrichs
Hi all,

Last week we moved the Congress IRC meeting time to every other Tuesday at 
17:00 UTC in #openstack-meeting-3 (that's an hour earlier than it was 
previously).  But we neglected to mail out the new time, and it doesn't look 
like anyone remembered the time change.  So I'll hang around at both our old 
and new time slots this week.

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread trinath.soman...@freescale.com
Hi

Hope this helps

http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron

___

Trinath Somanchi

_
From: Sławek Kapłoński [sla...@kaplonski.pl]
Sent: Tuesday, February 25, 2014 9:24 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Neutron ML2 and openvswitch agent

Hello,

I have a question for you guys. Can someone explain to me (or send a link
with such an explanation) how exactly the ML2 plugin, which runs on the
neutron server, communicates with the compute hosts running openvswitch
agents? I suppose that this works with rabbitmq queues, but I need
to add my own function which will be called in this agent, and I don't know
how to do that. It would be perfect if such a thing were possible by
writing, for example, a new mechanism driver in the ML2 plugin (but how?).
Thanks in advance for any help from you :)

--
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] GSoC 2014

2014-02-25 Thread Andrew Chul
Hi, guys! My name is Andrew Chul, I'm from Russia. I graduated from the
National Research University "Moscow Power Engineering Institute" a few
years ago, and since then I have been doing postgraduate studies at the
"Smolensk University of Humanities".

The application period for projects is approaching and I'm looking forward
to 10th March. I've seen your project in the list of organizations that
will take part in Google Summer of Code 2014, and I have to say it really
caught my interest. Why? I have dreamed about such a project. I'm very
interested in areas such as machine learning and artificial intelligence.
Primarily I'm a PHP developer, but I'm actively developing my Python
skills.

So, 10th March is coming soon and I will file an application to
participate in your project. I hope that I will be able to work side by
side with you on such an interesting and instructive project. Thank you
for your attention.

-- 
Best regards, Andrew Chul.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-25 Thread Dina Belova
Ok, so

>>> I'm just asking why we should hack the Keystone workflow by adding a hook,
like we did for Nova. From my POV, that's not worth it.

The idea was about some extra specs that will be processed by Climate anyway.
Keystone will know nothing about reservations or anything like that.

>>> I think it should be a Climate "policy" (be careful, the name is
confusing): if the admin wants to grant any new project for reservations, he
should place a call to Climate. It's up to Climate-Nova (i.e. the Nova
extension) to query Climate in order to see whether the project has been
granted or not.

Now I think that it'll be better, yes.
I see a workflow like this:

1) Mark the project as reservable in Climate
2) When some resource is created (like a Nova instance), it should be checked
(in the API extensions, for example) via Climate whether the project is
reservable. If it is, and no special reservation flags were passed, the
default_reservation settings should be used for this instance (a rough sketch
of such a check follows below)

Sylvain, is that the idea you're talking about?
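
To make step 2 a bit more concrete, a very rough sketch of such a check in a
climate-nova style extension could look like this (the climate_client calls,
the 'reservable' and 'default_reservation' fields and the 'reservation' hint
name are all hypothetical; the real names would come from the blueprint and
python-climateclient):

def maybe_apply_default_reservation(climate_client, project_id,
                                    scheduler_hints):
    # Return the scheduler hints, adding default reservation info if the
    # project was marked as reservable in Climate.  All attribute and hint
    # names here are hypothetical.
    project = climate_client.projects.get(project_id)  # hypothetical call
    if not project or not project.get('reservable', False):
        # The project was never marked as reservable: nothing to do.
        return scheduler_hints

    if 'reservation' in scheduler_hints:
        # The user passed explicit reservation flags: keep them untouched.
        return scheduler_hints

    # Otherwise fall back to the project's default reservation settings.
    hints = dict(scheduler_hints)
    hints['reservation'] = project.get('default_reservation', {})
    return hints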

Dina



On Tue, Feb 25, 2014 at 7:53 PM, Sylvain Bauza wrote:

>
>
>
> 2014-02-25 16:25 GMT+01:00 Dina Belova :
>
>  Why should it require to be part of Keystone to hook up on Climate ?
>>
>>
>> Sorry, can't get your point.
>>
>>
>
> I'm just asking why we should hack the Keystone workflow by adding a hook,
> like we did for Nova. From my POV, that's not worth it.
>
>
>
>> Provided we consider some projects as 'reservable', we could say this
>>> should be a Climate API endpoint like CRUD /project/, and it would be up
>>> to the admin to populate it.
>>> If we say that new projects should automatically be 'reservable', that's
>>> only a policy in Climate to whiteboard these.
>>
>>
>> So you propose making some API requests to Climate (like for hosts) and
>> marking some already existing projects as reserved. But how will we automate
>> the process of reserving resources belonging to that tenant? Or do you still
>> propose adding some checks to, for example, the climate-nova
>> extensions to check this there?
>>
>> Thanks
>>
>>
>
> I think it should be a Climate "policy" (be careful, the name is
> confusing): if the admin wants to grant any new project for reservations, he
> should place a call to Climate. It's up to Climate-Nova (i.e. the Nova
> extension) to query Climate in order to see whether the project has been
> granted or not.
>
> Conceptually, this 'reservation' information is tied to Climate and should
> not be present within the projects.
>
> -Sylvain
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-25 Thread yunhong jiang
On Tue, 2014-02-25 at 10:45 +, John Garbutt wrote:
> 
> As a heads up, the overheads of DB calls turned out to dwarf any
> algorithmic improvements I managed. There will clearly be some RPC
> overhead, but it didn't stand out as much as the DB issue.
> 
> The move to conductor work should certainly stop the scheduler making
> those pesky DB calls to update the nova instance. And then,
> improvements like no-db-scheduler and improvements to scheduling
> algorithms should shine through much more.
> 
Although DB access is surely the key to performance, do we really
want to pursue a conductor-based scheduler?

--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-25 Thread yunhong jiang
On Tue, 2014-02-25 at 03:05 +, Liuji (Jeremy) wrote:
> Now that USB devices are so widely used in private/hybrid clouds, for
> example as USB keys, and there are no technical issues in libvirt/qemu,
> I think it is a valuable feature for openstack.

A USB key is an interesting scenario. I assume the USB key is just for
some specific VM; I'm wondering how the admin/user knows which USB disk
goes to which VM?
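
For reference, the libvirt side that the original mail refers to is indeed
simple; a rough sketch with libvirt-python (the domain name and the USB
vendor/product IDs below are placeholders, and Nova would still need its own
way to express the disk-to-VM mapping):

import libvirt

# Hot-plug a host USB device into a running guest, selected by its USB
# vendor/product ID (placeholder values below).
USB_HOSTDEV_XML = """
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0951'/>
    <product id='0x1666'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # placeholder domain name
dom.attachDevice(USB_HOSTDEV_XML)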

--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack Error

2014-02-25 Thread Denis Makogon
Hi, Trinath.

The ideal solution is to rebuild your dev environment.

But for future discussions and questions, please use the #openstack-dev IRC
channel to ask any questions about setting up a development environment
(devstack).

Best regards,
Denis Makogon.




On Tue, Feb 25, 2014 at 5:53 PM, Ben Nemec  wrote:

>  On 2014-02-25 08:19, trinath.soman...@freescale.com wrote:
>
>  Hi Stackers-
>
> When I configured Jenkins to run the Sandbox tempest testing, While
> devstack is running,
>
> I have seen error
>
> "ERROR: Invalid Openstack Nova credentials"
>
> and another error
>
> "ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries
> exceeded with url: /v2/91dd(caused by : [Errno 111]
> Connection refused)
>
> I feel devstack automates the openstack environment.
>
> Kindly guide me resolve the issue.
>
> Thanks in advance.
>
> --
>
> Trinath Somanchi - B39208
>
> trinath.soman...@freescale.com | extn: 4048
>
>  Those are both symptoms of an underlying problem.  It sounds like a
> service didn't start or wasn't configured correctly, but it's impossible to
> say for sure what went wrong based on this information.
>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] can't set rules in the common policy

2014-02-25 Thread Ben Nemec
 

On 2014-02-25 02:00, Tian, Shuangtai wrote: 

> Hi, Stackers 
> 
> When I init an Enforcer class with rules, I find the rules get overwritten by 
> the rules from the configured policy file: because the policy file is seen as modified, 
> 
> load_rules() always tries to reload the rules from the cache or the configured file 
> when the policy is checked in the enforce function, and 
> 
> the in-memory rules are always overwritten with the ones from the configured policy. 
> 
> I think this problem also exists when we use set_rules to set rules 
> before we use enforce to load rules for the first time. 
> 
> Has anyone else met this problem, or am I using it the wrong way? I proposed a 
> patch for this problem: https://review.openstack.org/#/c/72848/ [1] 
> 
> Best regards, 
> 
> Tian, Shuangtai

 I don't think you're doing anything wrong. You can see I worked around
the same issue in the test cases when I was working on the Oslo parallel
testing:
https://review.openstack.org/#/c/70483/1/tests/unit/test_policy.py

Your proposed change looks reasonable to me. I'd probably like to see it
used to remove some of the less pleasant parts of my change, but I'll
leave detailed feedback on the review.
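
For anyone else hitting this, a minimal sketch of the pattern that triggers
it might look like the following (this assumes the oslo-incubator policy
module of this era; the rule name is illustrative and helper names such as
parse_rule() may differ slightly between project copies of the incubator
code):

from openstack.common import policy

enforcer = policy.Enforcer()

# Seed the enforcer with in-memory rules before the first enforce() call.
enforcer.set_rules(policy.Rules(
    {'compute:get': policy.parse_rule('role:admin')}))

# On the first check, enforce() calls load_rules(); since the policy file
# has never been read it is treated as modified, and the rules set above
# are overwritten by whatever the configured policy.json contains.
enforcer.enforce('compute:get', target={}, creds={'roles': ['admin']})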

Thanks!

-Ben

 

Links:
--
[1] https://review.openstack.org/#/c/72848/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack Error

2014-02-25 Thread Ben Nemec
 

On 2014-02-25 08:19, trinath.soman...@freescale.com wrote: 

> Hi Stackers- 
> 
> When I configured Jenkins to run the Sandbox tempest testing, While devstack 
> is running, 
> 
> I have seen error 
> 
> "ERROR: Invalid Openstack Nova credentials" 
> 
> and another error 
> 
> "ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries 
> exceeded with url: /v2/91dd….(caused by : [Errno 111] 
> Connection refused) 
> 
> I feel devstack automates the openstack environment. 
> 
> Kindly guide me resolve the issue. 
> 
> Thanks in advance. 
> 
> -- 
> 
> Trinath Somanchi - B39208 
> 
> trinath.soman...@freescale.com | extn: 4048

Those are both symptoms of an underlying problem. It sounds like a
service didn't start or wasn't configured correctly, but it's impossible
to say for sure what went wrong based on this information. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

