Re: [openstack-dev] [ironic bare metal installation issue]

2014-06-04 Thread Clint Byrum
Excerpts from 严超's message of 2014-06-03 21:23:25 -0700:
 Hi, All :
 I've deployed Ironic following this link:
 http://ma.ttwagner.com/bare-metal-deploys-with-devstack-and-ironic/ , and all
 steps are completed.
 Now node-show reports provision_state as active for one of my nodes. But why
 is this node still in an installing state, as shown below?
  [image: inline image 1]


Ironic has done all that it can for the machine: it has booted the kernel
and ramdisk from the image, and Ironic has no real way to check that
the deploy succeeds. It is on the same level as checking to see whether your
VM actually boots after KVM has been spawned.



[openstack-dev] question about createbackup API in nova

2014-06-04 Thread Bohai (ricky)
Hi stackers,

When I use the createBackup API, I found it only snapshots the root disk of the
instance.
For an instance with multiple Cinder backend volumes, it will not snapshot them.
This is a little different from the behavior of the current createImage API.

My question is whether this is a reasonable, deliberately discussed decision?
I searched but couldn't find the reasoning.
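
For reference, a rough sketch of the two calls with python-novaclient (an
illustration only; the credentials, server and backup names are placeholders):

    # createBackup snapshots only the root disk; attached Cinder volumes
    # are not included (the difference from createImage noted above).
    from novaclient.v1_1 import client

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)   # placeholders
    server = nova.servers.find(name='my-instance')

    # POST /servers/{id}/action  {"createBackup": {...}}
    nova.servers.backup(server, 'my-backup', 'daily', rotation=3)

    # POST /servers/{id}/action  {"createImage": {...}}
    nova.servers.create_image(server, 'my-image')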

Best regards to you.
Ricky





[openstack-dev] [heat] Resource action API

2014-06-04 Thread yang zhang






Hi all,

Now heat only supports suspending/resuming a whole stack: all the resources of
the stack will be suspended/resumed. But sometimes we just want to suspend or
resume only a part of the resources in the stack, so I think adding a
resource-action API for heat is necessary. This API will be helpful to solve
two problems:
- If we want to suspend/resume the resources of the stack, we need to get the
phy_id first and then call the APIs of other services, and this won't update
the status of the resource in heat, which often causes unexpected problems.
- This API could offer a turn on/off function for some native resources, e.g.,
we can turn on/off an autoscaling group or a single policy with the API; this
is like the suspend/resume services feature [1] in AWS.

I registered a bp for it, and you are welcome to discuss it; a hypothetical
sketch of what such a call could look like follows the links below.
https://blueprints.launchpad.net/heat/+spec/resource-action-api
[1]  
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
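
A purely hypothetical request shape, modeled on the existing stack actions
endpoint (this resource-level URL does not exist yet; it only illustrates the
proposal, and the endpoint, stack ID and token are placeholders):

    # Hypothetical: suspend a single resource instead of the whole stack.
    import json, requests

    url = (HEAT_ENDPOINT + '/stacks/mystack/' + STACK_ID +
           '/resources/server1/actions')
    requests.post(url,
                  headers={'X-Auth-Token': TOKEN,
                           'Content-Type': 'application/json'},
                  data=json.dumps({'suspend': None}))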

Regards,
Zhang Yang



Re: [openstack-dev] [Ceilometer] Question of necessary queries for Event implemented on HBase

2014-06-04 Thread Igor Degtiarov
Hello,

As of today I have prepared a spec for the events feature in HBase,
https://review.openstack.org/#/c/96417/,
and it has been successfully merged. The spec describes the details of the
events feature in HBase.
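
For reference, a minimal sketch of the 'timestamp + event_id' rowkey layout
discussed below (an illustration only, not the code from the spec):

    # Rowkeys prefixed with a fixed-width timestamp sort by time, so a
    # [start_time, end_time] query becomes a simple range scan.
    import struct

    def event_rowkey(timestamp_ms, event_id):
        return struct.pack('>Q', int(timestamp_ms)) + event_id.encode('utf-8')

    def scan_bounds(start_time_ms, end_time_ms):
        return (struct.pack('>Q', int(start_time_ms)),
                struct.pack('>Q', int(end_time_ms) + 1))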

Yours,
Igor D.

On Wed, May 21, 2014 at 3:03 PM, Dmitriy Ukhlov dukh...@mirantis.com
wrote:

 Hello Igor,

 Sounds reasonable.


 On 05/21/2014 02:38 PM, Igor Degtiarov wrote:


 Hi,

 I have found that the filter model for Events has mandatory parameters
 start_time and end_time
 for the events period. So it seems that a rowkey structure of
 'timestamp + event_id' will be more suitable.

  --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.




Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Yingjun Li
+1, and if we do so, a related bug may be solved as well: 
https://bugs.launchpad.net/nova/+bug/1323538

On Jun 3, 2014, at 21:29, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,
 
 tl;dr
 =
 
 Move CPU and RAM allocation ratio definition out of the Nova scheduler and 
 into the resource tracker. Remove the calculations for overcommit out of the 
 core_filter and ram_filter scheduler pieces.
 
 Details
 ===
 
 Currently, in the Nova code base, the thing that controls whether or not the 
 scheduler places an instance on a compute host that is already full (in 
 terms of memory or vCPU usage) is a pair of configuration options* called 
 cpu_allocation_ratio and ram_allocation_ratio.
 
 These configuration options are defined in, respectively, 
 nova/scheduler/filters/core_filter.py and 
 nova/scheduler/filters/ram_filter.py.
 
 Every time an instance is launched, the scheduler loops through a collection 
 of host state structures that contain resource consumption figures for each 
 compute node. For each compute host, the core_filter and ram_filter's 
 host_passes() method is called. In the host_passes() method, the host's 
 reported total amount of CPU or RAM is multiplied by this configuration 
 option, and the product is then subtracted from the reported used amount of 
 CPU or RAM. If the result is greater than or equal to the number of vCPUs 
 needed by the instance being launched, True is returned and the host 
 continues to be considered during scheduling decisions.
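
(For clarity, the check described above boils down to something like the
following; a simplified sketch, not the literal nova filter code:)

    # The used amount is subtracted from total * allocation_ratio, and the
    # remainder must cover what the instance requests.
    def host_passes(total_vcpus, used_vcpus, requested_vcpus,
                    cpu_allocation_ratio=16.0):
        limit = total_vcpus * cpu_allocation_ratio
        return (limit - used_vcpus) >= requested_vcpus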
 
 I propose we move the definition of the allocation ratios out of the 
 scheduler entirely, as well as the calculation of the total amount of 
 resources each compute node contains. The resource tracker is the most 
 appropriate place to define these configuration options, as the resource 
 tracker is what is responsible for keeping track of total and used resource 
 amounts for all compute nodes.
 
 Benefits:
 
 * Allocation ratios determine the amount of resources that a compute node 
 advertises. The resource tracker is what determines the amount of resources 
 that each compute node has, and how much of a particular type of resource 
 have been used on a compute node. It therefore makes sense to put 
 calculations and definition of allocation ratios where they naturally belong.
 * The scheduler currently needlessly re-calculates total resource amounts on 
 every call to the scheduler. This isn't necessary. The total resource amounts 
 don't change unless either a configuration option is changed on a compute 
 node (or host aggregate), and this calculation can be done more efficiently 
 once in the resource tracker.
 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily evolve 
 to defining all resource-related options in the same place (instead of in 
 different filter files in the scheduler...)
 
 Thoughts?
 
 Best,
 -jay
 
 * Host aggregates may also have a separate allocation ratio that overrides 
 any configuration setting that a particular host may have
 


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Wang, Yalei
Hi, Xurong,

How do you test it with PostgreSQL? Do you use tox to run a unit test and get
the result (1 h)?


/Yalei

From: Xurong Yang [mailto:ido...@gmail.com]
Sent: Thursday, May 29, 2014 6:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] One performance issue about VXLAN pool 
initiation

Hi, Folks,

When we configure a VXLAN range of [1, 16M], the neutron-server service takes a
long time and the CPU rate is very high (100%) during initialization. One test
based on PostgreSQL has been verified: more than 1 h when the VXLAN range is
[1, 1M].
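
For context, the time presumably goes into populating one allocation row per
VXLAN ID; a minimal sketch of that kind of loop (assuming an allocation table
similar to Neutron's, heavily simplified):

    # One ORM insert per VNI means millions of inserts for a [1, 16M] range.
    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class VxlanAllocation(Base):              # stand-in for Neutron's table
        __tablename__ = 'vxlan_allocations'
        vxlan_vni = sa.Column(sa.Integer, primary_key=True)
        allocated = sa.Column(sa.Boolean, default=False)

    def sync_allocations(session, vni_min, vni_max):
        for vni in range(vni_min, vni_max + 1):
            session.add(VxlanAllocation(vxlan_vni=vni))
        session.commit()

    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)
    sync_allocations(orm.Session(bind=engine), 1, 1 << 24)   # [1, 16M]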

So, any good solution about this performance issue?

Thanks,
Xurong Yang




Re: [openstack-dev] [Spam] [heat] Resource action API

2014-06-04 Thread Clint Byrum
Excerpts from yang zhang's message of 2014-06-04 00:01:41 -0700:
 
 
 
 
 
 
 Hi all,   Now heat only supports suspending/resuming a whole stack, all the 
 resources of the stack will be suspended/resumed,but sometime we just want to 
 suspend or resume only a part of resources in the stack, so I think adding 
 resource-action API for heat isnecessary. this API will be helpful to solve 2 
 problems:- If we want to suspend/resume the resources of the stack, you 
 need to get the phy_id first and then call the API of other services, and 
 this won't update the statusof the resource in heat, which often cause some 
 unexpected problem.- this API could offer a turn on/off function for some 
 native resources, e.g., we can turn on/off the autoscalinggroup or a single 
 policy with the API, this is like the suspend/resume services feature[1] in 
 AWS.   I registered a bp for it, and you are welcome for discussing it.   
  https://blueprints.launchpad.net/heat/+spec/resource-action-api
 [1]  
 http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
 Regards!Zhang Yang

Hi zhang. I'd rather we model the intended states of each resource, and
ensure that Heat can assert them. Actions are tricky things to model.

So if you want your nova server to be stopped, how about

resources:
  server1:
type: OS::Nova::Server
properties:
  flavor: superbig
  image: TheBestOS
  state: STOPPED

We don't really need to model actions then, just the APIs we have
available.



Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Eugene Nikanorov
Unit tests use the SQLite DB backend, so they might be much faster than a
production environment where the DB is on a different server.

Eugene.


On Wed, Jun 4, 2014 at 11:14 AM, Wang, Yalei yalei.w...@intel.com wrote:

  Hi, Xurong,



 How do you test it in postgresql? Do you use tox to do a unittest and get
 the result(1h)?





 /Yalei



 *From:* Xurong Yang [mailto:ido...@gmail.com]
 *Sent:* Thursday, May 29, 2014 6:01 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [Neutron] One performance issue about VXLAN
 pool initiation



 Hi, Folks,



 When we configure VXLAN range [1,16M], neutron-server service costs long
 time and cpu rate is very high(100%) when initiation. One test base on
 postgresql has been verified: more than 1h when VXLAN range is [1, 1M].



 So, any good solution about this performance issue?



 Thanks,

 Xurong Yang







Re: [openstack-dev] [Spam] [heat] Resource action API

2014-06-04 Thread yang zhang


 From: cl...@fewbar.com
 To: openstack-dev@lists.openstack.org
 Date: Wed, 4 Jun 2014 00:09:39 -0700
 Subject: Re: [openstack-dev] [Spam]  [heat] Resource action API
 
 Excerpts from yang zhang's message of 2014-06-04 00:01:41 -0700:
  
  
  
  
  
  
  Hi all,   Now heat only supports suspending/resuming a whole stack, all the 
  resources of the stack will be suspended/resumed,but sometime we just want 
  to suspend or resume only a part of resources in the stack, so I think 
  adding resource-action API for heat isnecessary. this API will be helpful 
  to solve 2 problems:- If we want to suspend/resume the resources of the 
  stack, you need to get the phy_id first and then call the API of other 
  services, and this won't update the statusof the resource in heat, which 
  often cause some unexpected problem.- this API could offer a turn 
  on/off function for some native resources, e.g., we can turn on/off the 
  autoscalinggroup or a single policy with the API, this is like the 
  suspend/resume services feature[1] in AWS.   I registered a bp for it, and 
  you are welcome for discussing it.
  https://blueprints.launchpad.net/heat/+spec/resource-action-api
  [1]  
  http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
  Regards!Zhang Yang
 
 Hi zhang. I'd rather we model the intended states of each resource, and
 ensure that Heat can assert them. Actions are tricky things to model.
 
 So if you want your nova server to be stopped, how about
 
 resources:
   server1:
 type: OS::Nova::Server
 properties:
   flavor: superbig
   image: TheBestOS
   state: STOPPED  We don't really need to model actions then, just 
 the API's we have
 available.
 

At first I wanted to do it like this, using a resource parameter, but this
needs a stack update in order to suspend the resource. It means we can't stop
another resource while a resource is stopping, but that seems not a big deal,
since stopping a resource is usually quick. Compared to a new API, using a
resource parameter is easy to implement thanks to the mature stack-update code,
so we could finish it in a short period.
Does anyone else have good ideas?





Re: [openstack-dev] [Spam] Re: [Spam] [heat] Resource action API

2014-06-04 Thread Clint Byrum
Excerpts from yang zhang's message of 2014-06-04 01:14:37 -0700:
 
  From: cl...@fewbar.com
  To: openstack-dev@lists.openstack.org
  Date: Wed, 4 Jun 2014 00:09:39 -0700
  Subject: Re: [openstack-dev] [Spam]  [heat] Resource action API
  
  Excerpts from yang zhang's message of 2014-06-04 00:01:41 -0700:
   
   
   
   
   
   
   Hi all,   Now heat only supports suspending/resuming a whole stack, all 
   the resources of the stack will be suspended/resumed,but sometime we just 
   want to suspend or resume only a part of resources in the stack, so I 
   think adding resource-action API for heat isnecessary. this API will be 
   helpful to solve 2 problems:- If we want to suspend/resume the 
   resources of the stack, you need to get the phy_id first and then call 
   the API of other services, and this won't update the statusof the 
   resource in heat, which often cause some unexpected problem.- this 
   API could offer a turn on/off function for some native resources, e.g., 
   we can turn on/off the autoscalinggroup or a single policy with the API, 
   this is like the suspend/resume services feature[1] in AWS.   I 
   registered a bp for it, and you are welcome for discussing it.
   https://blueprints.launchpad.net/heat/+spec/resource-action-api
   [1]  
   http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
   Regards!Zhang Yang
  
  Hi zhang. I'd rather we model the intended states of each resource, and
  ensure that Heat can assert them. Actions are tricky things to model.
  
  So if you want your nova server to be stopped, how about
  
  resources:
server1:
  type: OS::Nova::Server
  properties:
flavor: superbig
image: TheBestOS
state: STOPPED  We don't really need to model actions then, just 
  the API's we have
  available.
  
 
 At first, I want to do it like this, using a resource parameter, but this 
 need to update the stack  in order to suspend the Resource, It means we can't 
 stop another resource when a resource is stopping, but it seems not a big 
 deal, stopping resource usually is soon, compare to API,  using resource 
 parameter is easy to implement as the result of mature code of stack-update, 
 we could finish it in a short period. 
 Does anyone else have good ideas?
 

It's a bit far off, but the eventual goal of the convergence effort is
to make it so you _can_ update two things concurrently, since updates
will just be recording intended state in the db, not waiting for all of
that to complete.



Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Zhenguo Niu
+1, makes sense to me.


On Wed, Jun 4, 2014 at 3:08 PM, Yingjun Li liyingjun1...@gmail.com wrote:

 +1, if doing so, a related bug related bug may be solved as well:
 https://bugs.launchpad.net/nova/+bug/1323538

 On Jun 3, 2014, at 21:29, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 tl;dr
 =

 Move CPU and RAM allocation ratio definition out of the Nova scheduler and
 into the resource tracker. Remove the calculations for overcommit out of
 the core_filter and ram_filter scheduler pieces.

 Details
 ===

 Currently, in the Nova code base, the thing that controls whether or not
 the scheduler places an instance on a compute host that is already full
 (in terms of memory or vCPU usage) is a pair of configuration options*
 called cpu_allocation_ratio and ram_allocation_ratio.

 These configuration options are defined in, respectively,
 nova/scheduler/filters/core_filter.py and
 nova/scheduler/filters/ram_filter.py.

 Every time an instance is launched, the scheduler loops through a
 collection of host state structures that contain resource consumption
 figures for each compute node. For each compute host, the core_filter and
 ram_filter's host_passes() method is called. In the host_passes() method,
 the host's reported total amount of CPU or RAM is multiplied by this
 configuration option, and the product is then subtracted from the reported
 used amount of CPU or RAM. If the result is greater than or equal to the
 number of vCPUs needed by the instance being launched, True is returned and
 the host continues to be considered during scheduling decisions.

 I propose we move the definition of the allocation ratios out of the
 scheduler entirely, as well as the calculation of the total amount of
 resources each compute node contains. The resource tracker is the most
 appropriate place to define these configuration options, as the resource
 tracker is what is responsible for keeping track of total and used resource
 amounts for all compute nodes.

 Benefits:

 * Allocation ratios determine the amount of resources that a compute node
 advertises. The resource tracker is what determines the amount of resources
 that each compute node has, and how much of a particular type of resource
 have been used on a compute node. It therefore makes sense to put
 calculations and definition of allocation ratios where they naturally
 belong.
 * The scheduler currently needlessly re-calculates total resource amounts
 on every call to the scheduler. This isn't necessary. The total resource
 amounts don't change unless either a configuration option is changed on a
 compute node (or host aggregate), and this calculation can be done more
 efficiently once in the resource tracker.
 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily
 evolve to defining all resource-related options in the same place (instead
 of in different filter files in the scheduler...)

 Thoughts?

 Best,
 -jay

 * Host aggregates may also have a separate allocation ratio that overrides
 any configuration setting that a particular host may have





-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread John Garbutt
On 3 June 2014 14:29, Jay Pipes jaypi...@gmail.com wrote:
 tl;dr
 =

 Move CPU and RAM allocation ratio definition out of the Nova scheduler and
 into the resource tracker. Remove the calculations for overcommit out of the
 core_filter and ram_filter scheduler pieces.

+1

I hope to see us send more specific stats to the scheduler that each
filter/weigher can then interpret.

The extensible system then means you can optimise what you send down
the scheduler to a minimum. The next step is doing differential
updates, with more infrequent full sync updates. But we are getting
there.

As you say, I love that we do the calculation once per host, not once
per request. It plays really well with the caching scheduler work, and
the new build-and-run-instance flow that helps work towards the
scheduler process(es) doing the bare minimum on each user request.

 Details
 ===

 Currently, in the Nova code base, the thing that controls whether or not the
 scheduler places an instance on a compute host that is already full (in
 terms of memory or vCPU usage) is a pair of configuration options* called
 cpu_allocation_ratio and ram_allocation_ratio.

 These configuration options are defined in, respectively,
 nova/scheduler/filters/core_filter.py and
 nova/scheduler/filters/ram_filter.py.

 Every time an instance is launched, the scheduler loops through a collection
 of host state structures that contain resource consumption figures for each
 compute node. For each compute host, the core_filter and ram_filter's
 host_passes() method is called. In the host_passes() method, the host's
 reported total amount of CPU or RAM is multiplied by this configuration
 option, and the product is then subtracted from the reported used amount of
 CPU or RAM. If the result is greater than or equal to the number of vCPUs
 needed by the instance being launched, True is returned and the host
 continues to be considered during scheduling decisions.

 I propose we move the definition of the allocation ratios out of the
 scheduler entirely, as well as the calculation of the total amount of
 resources each compute node contains. The resource tracker is the most
 appropriate place to define these configuration options, as the resource
 tracker is what is responsible for keeping track of total and used resource
 amounts for all compute nodes.

+1

 Benefits:

  * Allocation ratios determine the amount of resources that a compute node
 advertises. The resource tracker is what determines the amount of resources
 that each compute node has, and how much of a particular type of resource
 have been used on a compute node. It therefore makes sense to put
 calculations and definition of allocation ratios where they naturally
 belong.
  * The scheduler currently needlessly re-calculates total resource amounts
 on every call to the scheduler. This isn't necessary. The total resource
 amounts don't change unless either a configuration option is changed on a
 compute node (or host aggregate), and this calculation can be done more
 efficiently once in the resource tracker.
  * Move more logic out of the scheduler
  * With the move to an extensible resource tracker, we can more easily
 evolve to defining all resource-related options in the same place (instead
 of in different filter files in the scheduler...)

+1

That's a much nicer solution than shoving info from the aggregate into
the scheduler. Great to avoid that where possible.


Now there are limits to this, I think. Some examples that come to mind:
* For per aggregate ratios, we just report the free resources, taking
into account the ratio. (as above)
* For the availability zone filter, each host should report its
availability zone to the scheduler
* If we have filters that adjust the ratio per flavour, we will still
need that calculation in the scheduler, but that's cool


In general, the approach I am advocating is:
* each host provides the data needed for the filter / weigher
* ideally in a way that requires minimal processing

And after some IRC discussions with Dan Smith, he pointed out that we
need to think about:
* with data versioned in a way that supports live-upgrades


Thanks,
John



Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jaesuk Ahn
+1 for this proposal.   

We are highly interested in this one.  


Jaesuk Ahn, Ph.D.
Team Lead, Next Generation Cloud Platform Dev. Project  
KT  


… active member of openstack community  



On Jun 4, 2014, 5:29:36 PM, John Garbutt j...@johngarbutt.com wrote:  
On 3 June 2014 14:29, Jay Pipes jaypi...@gmail.com wrote:
 tl;dr
 =

 Move CPU and RAM allocation ratio definition out of the Nova scheduler and
 into the resource tracker. Remove the calculations for overcommit out of the
 core_filter and ram_filter scheduler pieces.

+1

I hope to see us send more specific stats to the scheduler, that each
filter/weigher can then interpret.

The extensible system then means you can optimise what you send down
the scheduler to a minimum. The next step is doing differential
updates, with more infrequent full sync updates. But we are getting
there.

As you say, I love that we do the calculation once per host, not once
per request. It plays really well with the caching scheduler work, and
the new build and run instance flow that help work towards the
scheduler process(es) doing the bare minimum on each user request.

 Details
 ===

 Currently, in the Nova code base, the thing that controls whether or not the
 scheduler places an instance on a compute host that is already full (in
 terms of memory or vCPU usage) is a pair of configuration options* called
 cpu_allocation_ratio and ram_allocation_ratio.

 These configuration options are defined in, respectively,
 nova/scheduler/filters/core_filter.py and
 nova/scheduler/filters/ram_filter.py.

 Every time an instance is launched, the scheduler loops through a collection
 of host state structures that contain resource consumption figures for each
 compute node. For each compute host, the core_filter and ram_filter's
 host_passes() method is called. In the host_passes() method, the host's
 reported total amount of CPU or RAM is multiplied by this configuration
 option, and the product is then subtracted from the reported used amount of
 CPU or RAM. If the result is greater than or equal to the number of vCPUs
 needed by the instance being launched, True is returned and the host
 continues to be considered during scheduling decisions.

 I propose we move the definition of the allocation ratios out of the
 scheduler entirely, as well as the calculation of the total amount of
 resources each compute node contains. The resource tracker is the most
 appropriate place to define these configuration options, as the resource
 tracker is what is responsible for keeping track of total and used resource
 amounts for all compute nodes.

+1

 Benefits:

 * Allocation ratios determine the amount of resources that a compute node
 advertises. The resource tracker is what determines the amount of resources
 that each compute node has, and how much of a particular type of resource
 have been used on a compute node. It therefore makes sense to put
 calculations and definition of allocation ratios where they naturally
 belong.
 * The scheduler currently needlessly re-calculates total resource amounts
 on every call to the scheduler. This isn't necessary. The total resource
 amounts don't change unless either a configuration option is changed on a
 compute node (or host aggregate), and this calculation can be done more
 efficiently once in the resource tracker.
 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily
 evolve to defining all resource-related options in the same place (instead
 of in different filter files in the scheduler...)

+1

Thats a much nicer solution than shoving info from the aggregate into
the scheduler. Great to avoid that were possible.


Now there are limits to this, I think. Some examples that to mind:
* For per aggregate ratios, we just report the free resources, taking
into account the ratio. (as above)
* For the availability zone filter, each host should report its
availability zone to the scheduler
* If we have filters that adjust the ratio per flavour, we will still
need that calculation in the scheduler, but thats cool


In general, the approach I am advocating is:
* each host provides the data needed for the filter / weightier
* ideally in a way that requires minimal processing

And after some IRC discussions with Dan Smith, he pointed out that we
need to think about:
* with data versioned in a way that supports live-upgrades


Thanks,
John



[openstack-dev] [Horizon] request to review bug 1301359

2014-06-04 Thread Harshada Kakad
Hi All,
Could someone please review the bug
https://bugs.launchpad.net/horizon/+bug/1301359


This change makes the size parameter optional while creating a DB instance.

While creating a Database instance, the size parameter depends on
whether trove_volume_support is set: the size parameter is
mandatory if trove_volume_support is set, else it is kept optional.

Here is the link for review :  https://review.openstack.org/#/c/86295/
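
A rough sketch of the approach (illustration only, not the actual patch; the
settings flag name below is assumed):

    # Make the volume size field required only when volume support is on.
    from django import forms
    from django.conf import settings

    class LaunchDatabaseForm(forms.Form):
        volume = forms.IntegerField(label="Volume Size (GB)", required=False)

        def clean_volume(self):
            value = self.cleaned_data.get('volume')
            if getattr(settings, 'TROVE_VOLUME_SUPPORT', True) and not value:
                raise forms.ValidationError("Volume size is required.")
            return value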

--
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
Mobile: 9689187388
Email-Id: harshada.ka...@izeltech.com
Website: www.izeltech.com



Re: [openstack-dev] [Marconi] Kafka support and high throughput

2014-06-04 Thread Flavio Percoco

On 02/06/14 07:52 -0700, Keith Newstadt wrote:

Thanks for the responses Flavio, Roland.

Some background on why I'm asking:  we're using Kafka as the message queue for 
a stream processing service we're building, which we're delivering to our 
internal customers as a service along with OpenStack.  We're considering 
building a high throughput ingest API to get the clients' data streams into the 
stream processing service.  It occurs to me that this API is simply a messaging 
API, and so I'm wondering if we should consider building this high throughput 
API as part of the Marconi project.

Has this topic come up in the Marconi team's discussions, and would it fit into 
the vision of the Marconi roadmap?


Yes it has and I'm happy to see this coming up in the ML, thanks.

Some things that we're considering in order to have a more flexible
architecture that will support a higher throughput are:

- Queue Flavors (Terrible name). This is for marconi what flavors are
 for Nova. It basically defines a set of properties that will belong
 to a queue. Some of those properties may be related to the messages
 lifetime or the storage capabilities (in-memory, freaking fast,
 durable, etc). This is yet to be done.

- 2 new drivers (AMQP, redis). The former adds support for brokers and
 the latter for, well, redis, which brings in support for in-memory
 queues. Work In Progress.

- A new transport. This is something we've discussed but we haven't
 reached an agreement yet on when this should be done nor what it
 should be based on. The gist of this feature is adding support for
 another protocol that can serve Marconi's API alongside the HTTP
 one. We've considered TCP and websocket so far. The former is
 perfect for lower-level communications without the HTTP overhead
 whereas the latter is useful for web apps.

That said, a Kafka plugin is something we heard a lot about at the
summit and we've discussed it a bit. I'd love to see that happen as
an external plugin for now. There's no need to wait for the rest to
happen.

I'm more than happy to help with guidance and support on the repo
creation, driver structure etc.

Cheers,
Flavio



Thanks,
Keith Newstadt
keith_newst...@symantec.com
@knewstadt


Date: Sun, 1 Jun 2014 15:01:40 +
From: Hochmuth, Roland M roland.hochm...@hp.com
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Kafka support and high
throughput
Message-ID: cfae6524.762da%roland.hochm...@hp.com
Content-Type: text/plain; charset=us-ascii

There are some folks in HP evaluating different messaging technologies for
Marconi, such as RabbitMQ and Kafka. I'll ping them and maybe they can
share
some information.

On a related note, the Monitoring as a Service solution we are working
on uses Kafka. This was just open-sourced at,
https://github.com/hpcloud-mon,
and will be moving over to StackForge starting next week. The architecture
is at,
https://github.com/hpcloud-mon/mon-arch.

I haven't really looked at Marconi. If you are interested in
throughput, low latency, durability, scale and fault-tolerance Kafka
seems like a great choice.

It has been also pointed out from various sources that possibly Kafka
could be another oslo.messaging transport. Are you looking into that as
that would be very interesting to me and something that is on my task
list that I haven't gotten to yet.


On 5/30/14, 7:03 AM, Keith Newstadt keith_newst...@symantec.com wrote:


Has anyone given thought to using Kafka to back Marconi?  And has there
been discussion about adding high throughput APIs to Marconi.

We're looking at providing Kafka as a messaging service for our
customers, in a scenario where throughput is a priority.  We've had good
luck using both streaming HTTP interfaces and long poll interfaces to get
high throughput for other web services we've built.  Would this use case
be appropriate in the context of the Marconi roadmap?

Thanks,
Keith Newstadt
keith_newst...@symantec.com






Keith Newstadt
Cloud Services Architect
Cloud Platform Engineering
Symantec Corporation 
www.symantec.com


Office: (781) 530-2299  Mobile: (617) 513-1321
Email: keith_newst...@symantec.com
Twitter: @knewstadt








Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Lau
Is there any blueprint related to this? Thanks.


2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 Hi Stackers,

 tl;dr
 =

 Move CPU and RAM allocation ratio definition out of the Nova scheduler and
 into the resource tracker. Remove the calculations for overcommit out of
 the core_filter and ram_filter scheduler pieces.

 Details
 ===

 Currently, in the Nova code base, the thing that controls whether or not
 the scheduler places an instance on a compute host that is already full
 (in terms of memory or vCPU usage) is a pair of configuration options*
 called cpu_allocation_ratio and ram_allocation_ratio.

 These configuration options are defined in, respectively,
 nova/scheduler/filters/core_filter.py and nova/scheduler/filters/ram_
 filter.py.

 Every time an instance is launched, the scheduler loops through a
 collection of host state structures that contain resource consumption
 figures for each compute node. For each compute host, the core_filter and
 ram_filter's host_passes() method is called. In the host_passes() method,
 the host's reported total amount of CPU or RAM is multiplied by this
 configuration option, and the product is then subtracted from the reported
 used amount of CPU or RAM. If the result is greater than or equal to the
 number of vCPUs needed by the instance being launched, True is returned and
 the host continues to be considered during scheduling decisions.

 I propose we move the definition of the allocation ratios out of the
 scheduler entirely, as well as the calculation of the total amount of
 resources each compute node contains. The resource tracker is the most
 appropriate place to define these configuration options, as the resource
 tracker is what is responsible for keeping track of total and used resource
 amounts for all compute nodes.

 Benefits:

  * Allocation ratios determine the amount of resources that a compute node
 advertises. The resource tracker is what determines the amount of resources
 that each compute node has, and how much of a particular type of resource
 have been used on a compute node. It therefore makes sense to put
 calculations and definition of allocation ratios where they naturally
 belong.
  * The scheduler currently needlessly re-calculates total resource amounts
 on every call to the scheduler. This isn't necessary. The total resource
 amounts don't change unless either a configuration option is changed on a
 compute node (or host aggregate), and this calculation can be done more
 efficiently once in the resource tracker.
  * Move more logic out of the scheduler
  * With the move to an extensible resource tracker, we can more easily
 evolve to defining all resource-related options in the same place (instead
 of in different filter files in the scheduler...)

 Thoughts?

 Best,
 -jay

 * Host aggregates may also have a separate allocation ratio that overrides
 any configuration setting that a particular host may have





-- 
Thanks,

Jay


Re: [openstack-dev] [Rally] Tempest + Rally: first success

2014-06-04 Thread Andrey Kurilin
Thanks for the traceback. I reported a bug on Launchpad [1] based on your logs.

Below you can find several possible solutions at the moment:
1) Wait for the bug fix
2) Create the neutron subnet manually (some tutorials: [2], [3])
3) Copy an existing Tempest configuration file into Rally (Rally stores the
Tempest configuration file in
~/.rally/tempest/for-deployment-<deployment-uuid>/tempest.conf)

[1] - https://bugs.launchpad.net/rally/+bug/1326297
[2] -
http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron_initial-external-network.html
[3] -
http://docs.openstack.org/admin-guide-cloud/content/advanced_networking.html
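
For option 2, a minimal sketch with python-neutronclient (credentials are
placeholders; the network ID is the one from the log below):

    # Create a subnet on the external network so Rally's Tempest config
    # generation can find one.
    from neutronclient.v2_0 import client

    neutron = client.Client(username=USER, password=PASSWORD,
                            tenant_name=TENANT, auth_url=AUTH_URL)
    neutron.create_subnet({'subnet': {
        'network_id': '13d63b58-57c9-4ba2-ae63-733836257636',
        'ip_version': 4,
        'cidr': '203.0.113.0/24',        # example range only
        'enable_dhcp': False}})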


On Wed, Jun 4, 2014 at 12:16 AM, om prakash pandey pande...@gmail.com
wrote:

 Thanks Andrey! Please see below the logs(Environment specific output has
 been snipped):

 2014-06-04 02:32:07.303 1939 DEBUG rally.cmd.cliutils [-] INFO logs from
 urllib3 and requests module are hide. run
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py:137
 2014-06-04 02:32:07.363 1939 INFO rally.orchestrator.api [-] Starting
 verification of deployment: 9c023039-211e-4794-84df-4ada68c656dd
 2014-06-04 02:32:07.378 1939 INFO
 rally.verification.verifiers.tempest.tempest [-] Verification
 d596caf4-feb7-455c-832e-b6b77b1dcb9c | Starting:  Run verification.
 2014-06-04 02:32:07.378 1939 DEBUG
 rally.verification.verifiers.tempest.tempest [-] Tempest config file:
 /home/om/.rally/tempest/for-deployment-9c023039-211e-4794-84df-4ada68c656dd/tempest.conf
  generate_config_file
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py:90
 2014-06-04 02:32:07.379 1939 INFO
 rally.verification.verifiers.tempest.tempest [-] Starting: Creation of
 configuration file for tempest.
 2014-06-04 02:32:07.385 1939 DEBUG keystoneclient.session [-] REQ: curl -i
 -X POST
 ---
 2014-06-04 02:32:09.003 1939 DEBUG glanceclient.common.http [-]
 HTTP/1.1 200 OK
 content-length: 9276
 via: 1.1 xyzcloud.com
 server: Apache/2.4.9 (Ubuntu)
 connection: close
 date: Tue, 03 Jun 2014 21:02:08 GMT
 content-type: application/json; charset=UTF-8
 x-openstack-request-id: req-5fe73b11-85c7-49b7-bf31-34a44ceaaf6b

 
 2014-06-04 02:32:13.399 1939 DEBUG neutronclient.client [-]
 REQ: curl -i
 https://xyzcloud.com:443//v2.0/subnets.json?network_id=13d63b58-57c9-4ba2-ae63-733836257636
 -X GET -H X-Auth-Token: bc604fb2a258429f99ed8940064fb1cb -H
 Content-Type: application/json -H Accept: application/json -H
 User-Agent: python-neutronclient
  http_log_req
 /usr/local/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
 2014-06-04 02:32:14.068 1939 DEBUG neutronclient.client [-] RESP:{'date':
 'Tue, 03 Jun 2014 21:02:14 GMT', 'status': '200', 'content-length': '15',
 'content-type': 'application/json; charset=UTF-8', 'content-location': '
 https://xyzcloud.com:443//v2.0/subnets.json?network_id=13d63b58-57c9-4ba2-ae63-733836257636'}
 {subnets: []}
  http_log_resp
 /usr/local/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
 2014-06-04 02:32:14.071 1939 CRITICAL rally [-] IndexError: list index out
 of range
 2014-06-04 02:32:14.071 1939 TRACE rally Traceback (most recent call last):
 2014-06-04 02:32:14.071 1939 TRACE rally   File /usr/local/bin/rally,
 line 10, in module
 2014-06-04 02:32:14.071 1939 TRACE rally sys.exit(main())
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 44, in main
 2014-06-04 02:32:14.071 1939 TRACE rally return cliutils.run(sys.argv,
 categories)
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line 193,
 in run
 2014-06-04 02:32:14.071 1939 TRACE rally ret = fn(*fn_args,
 **fn_kwargs)
 2014-06-04 02:32:14.071 1939 TRACE rally   File string, line 2, in
 start
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 63, in
 default_from_global
 2014-06-04 02:32:14.071 1939 TRACE rally return f(*args, **kwargs)
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py, line
 58, in start
 2014-06-04 02:32:14.071 1939 TRACE rally api.verify(deploy_id,
 set_name, regex)
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py, line
 150, in verify
 2014-06-04 02:32:14.071 1939 TRACE rally
 verifier.verify(set_name=set_name, regex=regex)
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 271, in verify
 2014-06-04 02:32:14.071 1939 TRACE rally
 self._prepare_and_run(set_name, regex)
 2014-06-04 02:32:14.071 1939 TRACE rally   File
 

Re: [openstack-dev] [Murano] Murano API improvements

2014-06-04 Thread Alexander Tivelkov
+2 to Stan: there is a strong need for a v2 API which would satisfy the needs
of Murano 0.5+. The current one is definitely outdated.

I'd like to point that there are at least two major flaws in the v1 design:

1. Sessions. Actually, they violate one of the core principles of RESTful
design - the client's state should not be stored on the server, while our
sessions implicitly do so.
I would suggest dropping client-bound sessions entirely and instead
introducing a concept of environment modification drafts: each environment
may have an associated draft containing changes which are currently
pending. These changes may include new components which have to be added to
the environment or modified copies of already existing components which have
to be changed.
There should be just one draft per environment, and any user of the
tenant may see it, interact with it, apply it (i.e. run the deployment)
etc.
When the deployment is started, the draft is blocked (i.e. nobody can
plan any other changes), and when it finishes the new configuration is
saved within the environment, and the draft is cleaned up.
In the UI this may be implemented as having two tables for the components view
of the environment, where one of the tables contains the current state of the
environment, while the second lists the pending changes.
These are just brief suggestions on this topic (a purely hypothetical sketch of
such a draft flow follows after point 2 below); if there are other opinions,
please post them here. Or should we create an etherpad to discuss it more
actively?

2. Currently our API acts like an interactive JSON editor: it allows creating
arbitrary nested JSON structures.
Instead, it should act as an interactive editor of the valid Murano object
model, and the API itself should be aware of Murano's specifics: i.e. it
should be able to differentiate between nesting objects and setting a link
to an object existing at another model location, validate contracts, etc.
Also there was an idea of introducing MuranoPL wizards to initialize the
object model. This is worth a separate email, but, briefly speaking, there
should be a way to define some MuranoPL code which will construct a complex
Murano object model based on simple input, and this code has to be
executed on the API side. This will also require significant modifications of
the API and making it aware of the available MuranoPL wizard initializers and
their semantics.
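
To make the draft idea from point 1 concrete, a purely hypothetical sketch of
what such a flow could look like over REST (none of these endpoints exist
today; URLs, payloads and placeholder names only illustrate the suggestion):

    # Hypothetical v2 endpoints for environment modification drafts.
    import json, requests

    base = MURANO_ENDPOINT + '/v2/environments/' + ENV_ID
    headers = {'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'}

    # Stage a pending change in the environment's single shared draft.
    requests.post(base + '/draft/components', headers=headers,
                  data=json.dumps({'type': 'io.murano.apps.SomeApp'}))

    # Inspect the pending changes, then apply (deploy) the draft.
    requests.get(base + '/draft', headers=headers)
    requests.post(base + '/draft/deploy', headers=headers)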

So, this will require a significant modification of the API, which means we
have to design and deliver a v2 API spec.

--
Regards,
Alexander Tivelkov


On Mon, Jun 2, 2014 at 1:06 PM, Stan Lagun sla...@mirantis.com wrote:

 I think API need to be redesigned at some point. There is a blueprint for
 this: https://blueprints.launchpad.net/murano/+spec/api-vnext
 It seems reasonable to implement new API on new framework at once

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Mon, Jun 2, 2014 at 12:21 PM, Ruslan Kamaldinov 
 rkamaldi...@mirantis.com wrote:

 Let's follow the standard procedure. Both blueprints lack specification of
 implementation details. There also has to be someone willing to implement
 these
  blueprints in the near future.

 I'm not opposed to these ideas and I'd really like to see Pecan added
 during
 Juno, but we still need to follow the procedure. I cannot approve an
 idea, it
 should be a specification. Let's work together on the new API
 specification
 first, then we'll need to find a volunteer to implement it on top of
 Pecan.


 --
 Ruslan

 On Mon, Jun 2, 2014 at 8:35 AM, Timur Nurlygayanov
 tnurlygaya...@mirantis.com wrote:
  Hi all,
 
  We need to rewrite Murano API on new API framework and we have the
 commit:
  https://review.openstack.org/#/c/60787
   (Sergey, sorry, but -1 from me, need to fix small issues)
 
  Also, today I created blueprint:
  https://blueprints.launchpad.net/murano/+spec/murano-api-workers
  this feature allows to run many API threads on one host and this allows
 to
  scale Murano API processes.
 
  I suggest to update and merge this commit with migration to Pecan
 framework
  and after that we can easily implement this blueprint and add many other
  improvements to Murano API and Murano python agent.
 
  Ruslan, could you please approve these blueprints and target them to
 some
  milestone?
 
 
  Thank you!
 
  --
 
  Timur,
  QA Engineer
  OpenStack Projects
  Mirantis Inc
 
 







Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Murray, Paul (HP Cloud)
Hi Jay,

This sounds good to me. You left out the part about limits from the discussion –
these filters set the limits used at the resource tracker. You also left out
force-to-host and its effect on limits. Yes, I would agree with doing this
at the resource tracker too.

And of course the extensible resource tracker is the right way to do it ☺

Paul.

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: 04 June 2014 10:04
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
ratio out of scheduler

Does there is any blueprint related to this? Thanks.

2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com:
Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova scheduler and into 
the resource tracker. Remove the calculations for overcommit out of the 
core_filter and ram_filter scheduler pieces.

Details
===

Currently, in the Nova code base, the thing that controls whether or not the 
scheduler places an instance on a compute host that is already full (in terms 
of memory or vCPU usage) is a pair of configuration options* called 
cpu_allocation_ratio and ram_allocation_ratio.

These configuration options are defined in, respectively, 
nova/scheduler/filters/core_filter.py and nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a collection of 
host state structures that contain resource consumption figures for each 
compute node. For each compute host, the core_filter and ram_filter's 
host_passes() method is called. In the host_passes() method, the host's 
reported total amount of CPU or RAM is multiplied by this configuration option, 
and the product is then subtracted from the reported used amount of CPU or RAM. 
If the result is greater than or equal to the number of vCPUs needed by the 
instance being launched, True is returned and the host continues to be 
considered during scheduling decisions.

I propose we move the definition of the allocation ratios out of the scheduler 
entirely, as well as the calculation of the total amount of resources each 
compute node contains. The resource tracker is the most appropriate place to 
define these configuration options, as the resource tracker is what is 
responsible for keeping track of total and used resource amounts for all 
compute nodes.

Benefits:

 * Allocation ratios determine the amount of resources that a compute node 
advertises. The resource tracker is what determines the amount of resources 
that each compute node has, and how much of a particular type of resource have 
been used on a compute node. It therefore makes sense to put calculations and 
definition of allocation ratios where they naturally belong.
 * The scheduler currently needlessly re-calculates total resource amounts on 
every call to the scheduler. This isn't necessary. The total resource amounts 
don't change unless either a configuration option is changed on a compute node 
(or host aggregate), and this calculation can be done more efficiently once in 
the resource tracker.
 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily evolve 
to defining all resource-related options in the same place (instead of in 
different filter files in the scheduler...)

Thoughts?

Best,
-jay

* Host aggregates may also have a separate allocation ratio that overrides any 
configuration setting that a particular host may have




--
Thanks,
Jay


[openstack-dev] [ironic workflow question]

2014-06-04 Thread 严超
Hi, All:
I searched a lot about how Ironic automatically installs an image on
bare metal, but there seems to be no clear workflow out there.
What I know is that, in traditional PXE, a bare metal node pulls an image from
the PXE server using TFTP. In the TFTP root there is a ks.conf which tells the
installer which image to kickstart.
But in Ironic there is no ks.conf referenced in TFTP. How does a bare
metal node know which image to install? Is there any clear workflow that I can
read?



Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727  https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-04 Thread Jiri Tomasek

On 05/31/2014 11:13 PM, Jeremy Stanley wrote:

On 2014-05-29 20:55:01 + (+), Lyle, David wrote:
[...]

There are several more xstatic packages that horizon will pull in that are
maintained outside openstack. The packages added are only those that did
not have existing xstatic packages. These packages will be updated very
sparingly, only when updating say bootstrap or jquery versions.

[...]

I'll admit that my Web development expertise is probably almost 20
years stale at this point, so forgive me if this is a silly
question: what is the reasoning against working with the upstreams
who do not yet distribute needed Javascript library packages to help
them participate in the distribution channels you need? This strikes
me as similar to forking a Python library which doesn't publish to
PyPI, just so you can publish it to PyPI. When some of these
dependencies begin to publish xstatic packages themselves, do the
equivalent repositories in Gerrit get decommissioned at that point?
The standard way to publish JavaScript libraries these days is to publish
minified JavaScript file(s) (usually in the dist part of the repository), and
the standard way to include them in a project is to use Node.js tools such as
Bower to list the JS dependencies and have them installed automatically.

In our case it is more convenient to use xstatic packages, which we have
to create if someone hasn't done it already. I think it might happen
that some of 'our' packages turn into the official ones.


Jirka



Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
Hi!

The workflow is not entirely documented by now, AFAIK. After PXE boots the
deploy kernel and ramdisk, the ramdisk exposes the hard drive via iSCSI and
notifies Ironic. After that, Ironic partitions the disk, copies the image and
reboots the node with the final kernel and ramdisk.
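
Very roughly, and only as an illustration of the iSCSI-based deploy described
above (not the actual Ironic code; partitioning and target details are
simplified), the conductor side amounts to something like:

    # The deploy ramdisk exports the node's disk as an iSCSI target; the
    # conductor attaches it, writes the Glance image, detaches, and reboots
    # the node into the final (user image's) kernel/ramdisk.
    import subprocess

    def write_image(iqn, node_ip, image_path, device):
        subprocess.check_call(['iscsiadm', '-m', 'node', '-T', iqn,
                               '-p', node_ip, '--login'])
        subprocess.check_call(['dd', 'if=' + image_path, 'of=' + device,
                               'bs=1M', 'oflag=direct'])
        subprocess.check_call(['iscsiadm', '-m', 'node', '-T', iqn,
                               '-p', node_ip, '--logout'])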

On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
 Hi, All:
 
 I searched a lot about how ironic automatically install image
 on bare metal. But there seems to be no clear workflow out there.
 
 What I know is, in traditional PXE, a bare metal pull image
 from PXE server using tftp. In tftp root, there is a ks.conf which
 tells tftp which image to kick start.
 
 But in ironic there is no ks.conf pointed in tftp. How do bare
 metal know which image to install ? Is there any clear workflow where
 I can read ?
 
 
 
 
 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --
 





[openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Angus Salkeld
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So, a first note: this is an early posting and it is still not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First, the terminology:
   I have a pipeline, meaning the association between a mistral workbook,
   a trigger url and a plan. This is a running entity, not just a
   different workbook.

The main issue seems to be the extent to which I am exposing the mistral
workbook. Many of you expected a simpler workflow DSL that would be
converted into the mistral workbook.

The reasons for me doing it this way are:
1) so we don't have to write much code
2) this is an iterative process. Let's try it the simple way first and
   only make it more complicated if we really need to (the agile way?).
3) to be consistent in the way we treat heat templates, mistral
   workbooks and language packs - i.e. we provide standard ones and
   allow you to customize down to the underlying openstack primitives
   if you want (we should aim for this to be only a small percentage
   of our users).
   eg. pipeline == (check-build-deploy mistral workbook +
basic-docker heat template + custom plan)
   here the user just choose the heat template and workbook from a list
   of options.

4) if the mistral dsl is difficult for our users to work with we should
   give the mistral devs a chance to improve it before working around
   it.
5) our users are primary developers and I don't think the current
   mistral DSL is tricky to figure out for someone that can code.
6) doing it this way we can make use of heat and mistral's horizon
   plugins and link to them from the pipeline instead of having to
   redo all of the same pages. In a similar way to how heat links to
   servers/volumes etc from a running stack.

- -Angus


Some things to note:
- - the entire mistral engine can be replaced with an engine level plugin

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjwz2AAoJEFrDYBLxZjWoEr0H/3nh66Umdw2nGUEs+SikigXa
XAN90NHHPuf1ssEqrF/rMjRKg+GvrLx+31x4oFfHEj7oplzGeVA9TJC0HOp4h6dh
iCeXAHF7KX+t4M4VuZ0y9TJB/jLxfxg4Qge7ENJpNDD/gggjMYSNhcWzBG87QBE/
Mi4YAvxNk1/C3/YZYx2Iujq7oM+6tflTeuoG6Ld72JMHryWT5/tdYZrCMnuD4F7Q
8a6Ge3t1dQh7ZlNHEuRDAg3G5oy+FInXyFasXYlYbtdpTxDL8/HbXegyAcsw42on
2ZKRDYBubQr1MJKvSV5I3jjOe4lxXXFylbWpYpoU8Y5ZXEKp69R4wrcVISF1jQQ=
=P0Sl
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread 严超
Hi,
Thank you very much for your reply!
But there are still some questions for me. Now I've come to the step where
ironic partitions the disk, as you replied.
Then, how does ironic copy an image? I know the image comes from glance.
But how do we know the image is really available when it reboots?
And, what are the differences between the final kernel (ramdisk) and the original
kernel (ramdisk)?

Best Regards!
Chao Yan
--
My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>
My Weibo: http://weibo.com/herewearenow
--


2014-06-04 19:36 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:

 Hi!

 Workflow is not entirely documented by now AFAIK. After PXE boots deploy
 kernel and ramdisk, it exposes hard drive via iSCSI and notifies Ironic.
 After that Ironic partitions the disk, copies an image and reboots node
 with final kernel and ramdisk.

 On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
  Hi, All:
 
  I searched a lot about how ironic automatically install image
  on bare metal. But there seems to be no clear workflow out there.
 
  What I know is, in traditional PXE, a bare metal pull image
  from PXE server using tftp. In tftp root, there is a ks.conf which
  tells tftp which image to kick start.
 
  But in ironic there is no ks.conf pointed in tftp. How do bare
  metal know which image to install ? Is there any clear workflow where
  I can read ?
 
 
 
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] mocking policy

2014-06-04 Thread Radomir Dopieralski
Hello,

I'd like to start a discussion about the use of mocking libraries in
Horizon's tests, in particular, mox and mock.

As you may know, Mox is the library that has been used so far, and we
have a lot of tests written using it. It is based on a similar Java
library and does very strict checking, although its error reporting may
leave something to be desired.

Mock is a more pythonic library, included in the stdlib of recent Python
versions, but also available as a separate library for older pythons. It
has a much more relaxed approach, allowing you to only test the things
that you actually care about and to write tests that don't have to be
rewritten after each and every refactoring.

Some OpenStack projects, such as Nova, seem to have adopted an approach
that favors Mock in newly written tests, but allows use of Mox for older
tests, or when it's more suitable for the job.

In Horizon we only use Mox, and Mock is not even in requirements.txt. I
would like to propose to add Mock to requirements.txt and start using it
in new tests where it makes more sense than Mox -- in particular, when
we are writing unit tests only testing a small part of the code.
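
To make that concrete, here is a minimal sketch of the mock style; the code
under test and every name in it are invented for the example (not real
Horizon code), the point is just that only the interaction we care about
gets stubbed and asserted:

    import unittest

    import mock


    def get_server_count(request, api):
        # code under test: only calls api.server_list()
        servers, _more = api.server_list(request)
        return len(servers)


    class GetServerCountTests(unittest.TestCase):
        def test_counts_servers(self):
            api = mock.Mock()
            api.server_list.return_value = (['vm1', 'vm2'], False)

            self.assertEqual(2, get_server_count('fake-request', api))
            # Only the call we care about is asserted; there is no strict
            # record/replay of every interaction as with mox.
            api.server_list.assert_called_once_with('fake-request')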

Thoughts?
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] request to review bug 1301359

2014-06-04 Thread Matthias Runge
On Wed, Jun 04, 2014 at 04:13:33PM +0530, Harshada Kakad wrote:
 HI Matthias Runge,
 
 Which feature in trove are you talking about?
 And even which capabilities are missing which will make the patch fail?
 I believe the patch has nothing to do with
 https://review.openstack.org/#/c/83503/

If your patch relies on a not-yet-approved feature of a client, and we did
full integration testing with Horizon, your patch would fail, because
the underlying client does not have the required feature.

Does that make sense?
-- 
Matthias Runge mru...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] mocking policy

2014-06-04 Thread Ana Krivokapic

On 06/04/2014 02:41 PM, Radomir Dopieralski wrote:
 Hello,

 I'd like to start a discussion about the use of mocking libraries in
 Horizon's tests, in particular, mox and mock.

 As you may know, Mox is the library that has been used so far, and we
 have a lot of tests written using it. It is based on a similar Java
 library and does very strict checking, although its error reporting may
 leave something more to be desired.

 Mock is a more pythonic library, insluded in the stdlib of recent Python
 versions, but also available as a separate library for older pythons. It
 has a much more relaxed approach, allowing you to only test the things
 that you actually care about and to write tests that don't have to be
 rewritten after each and every refactoring.

 Some OpenStack projects, such as Nova, seem to have adopted an approach
 that favors Mock in newly written tests, but allows use of Mox for older
 tests, or when it's more suitable for the job.

 In Horizon we only use Mox, and Mock is not even in requirements.txt. I
 would like to propose to add Mock to requirements.txt and start using it
 in new tests where it makes more sense than Mox -- in particular, when
 we are writing unit tests only testing small part of the code.

 Thoughts?

Makes sense to me.

+1 for adding Mock to the requirements (test-requirements.txt rather
than requirements.txt, right?) and using it in newly written tests.

-- 
Regards,

Ana Krivokapic
Software Engineer
OpenStack team
Red Hat Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
 Hi,
 
 Thank you very much for your reply !
 
 But there are still some questions for me. Now I've come to the step
 where ironic partitions the disk as you replied.
 
 Then, how does ironic copies an image ? I know the image comes from
 glance. But how to know image is really available when reboot? 
I don't quite understand your question; what do you mean by available?
Anyway, before deploying, Ironic downloads the image from Glance, caches it
and just copies it to a mounted iSCSI partition (using dd or so).

 
 And, what are the differences between final kernel (ramdisk) and
 original kernel (ramdisk) ? 
We have 2 sets of kernel+ramdisk:
1. Deploy k+r: these are used only for deploy process itself to provide
iSCSI volume and call back to Ironic. There's an ongoing effort to create a
smarter ramdisk, called Ironic Python Agent, but it's WIP.
2. Your k+r as stated in Glance metadata for an image - they will be
used for booting after deployment.

 
 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --
 
 
 
 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:
 Hi!
 
 Workflow is not entirely documented by now AFAIK. After PXE
 boots deploy
 kernel and ramdisk, it exposes hard drive via iSCSI and
 notifies Ironic.
 After that Ironic partitions the disk, copies an image and
 reboots node
 with final kernel and ramdisk.
 
 On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
  Hi, All:
 
  I searched a lot about how ironic automatically
 install image
  on bare metal. But there seems to be no clear workflow out
 there.
 
  What I know is, in traditional PXE, a bare metal
 pull image
  from PXE server using tftp. In tftp root, there is a ks.conf
 which
  tells tftp which image to kick start.
 
  But in ironic there is no ks.conf pointed in tftp.
 How do bare
  metal know which image to install ? Is there any clear
 workflow where
  I can read ?
 
 
 
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
 Thank you !
 
 I noticed the two sets of k+r in tftp configuration of ironic.
 
 Should the two sets be the same k+r ?
Deploy images are created for you by DevStack/whatever. If you do it by
hand, you may use diskimage-builder. Currently they are stored in flavor
metadata, will be stored in node metadata later.

And then you have production images, which are whatever you want to
deploy; they are referenced in the Glance metadata for the instance image.

TFTP configuration should be created automatically; I doubt you should need to
change it anyway.

 
 The first set is defined in the ironic node definition. 
 
 How do we define the second set correctly ? 
 
 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --
 
 
 
 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:
 On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
  Hi,
 
  Thank you very much for your reply !
 
  But there are still some questions for me. Now I've come to
 the step
  where ironic partitions the disk as you replied.
 
  Then, how does ironic copies an image ? I know the image
 comes from
  glance. But how to know image is really available when
 reboot?
 
 I don't quite understand your question, what do you mean by
 available?
 Anyway, before deploying Ironic downloads image from Glance,
 caches it
 and just copies to a mounted iSCSI partition (using dd or so).
 
 
  And, what are the differences between final kernel (ramdisk)
 and
  original kernel (ramdisk) ?
 
 We have 2 sets of kernel+ramdisk:
 1. Deploy k+r: these are used only for deploy process itself
 to provide
 iSCSI volume and call back to Ironic. There's ongoing effort
 to create
 smarted ramdisk, called Ironic Python Agent, but it's WIP.
 2. Your k+r as stated in Glance metadata for an image - they
 will be
 used for booting after deployment.
 
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
 
 
  2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
 dtant...@redhat.com:
  Hi!
 
  Workflow is not entirely documented by now AFAIK.
 After PXE
  boots deploy
  kernel and ramdisk, it exposes hard drive via iSCSI
 and
  notifies Ironic.
  After that Ironic partitions the disk, copies an
 image and
  reboots node
  with final kernel and ramdisk.
 
  On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
   Hi, All:
  
   I searched a lot about how ironic
 automatically
  install image
   on bare metal. But there seems to be no clear
 workflow out
  there.
  
   What I know is, in traditional PXE, a bare
 metal
  pull image
   from PXE server using tftp. In tftp root, there is
 a ks.conf
  which
   tells tftp which image to kick start.
  
   But in ironic there is no ks.conf pointed
 in tftp.
  How do bare
   metal know which image to install ? Is there any
 clear
  workflow where
   I can read ?
  
  
  
  
   Best Regards!
   Chao Yan
   --
   My twitter:Andy Yan @yanchao727
   My Weibo:http://weibo.com/herewearenow
   --
  
 
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 

Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread 严超
Thank you!
I noticed the two sets of k+r in the tftp configuration of ironic.
Should the two sets be the same k+r?
The first set is defined in the ironic node definition.
How do we define the second set correctly?

Best Regards!
Chao Yan
--
My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>
My Weibo: http://weibo.com/herewearenow
--


2014-06-04 21:00 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:

 On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
  Hi,
 
  Thank you very much for your reply !
 
  But there are still some questions for me. Now I've come to the step
  where ironic partitions the disk as you replied.
 
  Then, how does ironic copies an image ? I know the image comes from
  glance. But how to know image is really available when reboot?
 I don't quite understand your question, what do you mean by available?
 Anyway, before deploying Ironic downloads image from Glance, caches it
 and just copies to a mounted iSCSI partition (using dd or so).

 
  And, what are the differences between final kernel (ramdisk) and
  original kernel (ramdisk) ?
 We have 2 sets of kernel+ramdisk:
 1. Deploy k+r: these are used only for deploy process itself to provide
 iSCSI volume and call back to Ironic. There's ongoing effort to create
 smarted ramdisk, called Ironic Python Agent, but it's WIP.
 2. Your k+r as stated in Glance metadata for an image - they will be
 used for booting after deployment.

 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
 
 
  2014-06-04 19:36 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:
  Hi!
 
  Workflow is not entirely documented by now AFAIK. After PXE
  boots deploy
  kernel and ramdisk, it exposes hard drive via iSCSI and
  notifies Ironic.
  After that Ironic partitions the disk, copies an image and
  reboots node
  with final kernel and ramdisk.
 
  On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
   Hi, All:
  
   I searched a lot about how ironic automatically
  install image
   on bare metal. But there seems to be no clear workflow out
  there.
  
   What I know is, in traditional PXE, a bare metal
  pull image
   from PXE server using tftp. In tftp root, there is a ks.conf
  which
   tells tftp which image to kick start.
  
   But in ironic there is no ks.conf pointed in tftp.
  How do bare
   metal know which image to install ? Is there any clear
  workflow where
   I can read ?
  
  
  
  
   Best Regards!
   Chao Yan
   --
   My twitter:Andy Yan @yanchao727
   My Weibo:http://weibo.com/herewearenow
   --
  
 
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] mocking policy

2014-06-04 Thread Christian Berendt
On 06/04/2014 02:56 PM, Ana Krivokapic wrote:
 +1 for adding Mock to the requirements (test-requirements.txt rather
 than requirements.txt, right?) and using it in newly written tests.

+1 for the usage of mock.

Also, the use of mox3 should be preferred: mox is not compatible with
Python 3 and it's not under active development.

Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday June 5th at 17:00UTC

2014-06-04 Thread Matthew Treinish
Hi Everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, June 5th at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish


pgp4tt013ucZb.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Sub-team Meeting Reminder - Wednesday June 4 @ 1400 utc

2014-06-04 Thread Steve Gordon
Hi all,

Just a heads-up - apparently this clashed with the docs meeting; as a result 
the NFV meeting will occur in #openstack-meeting-alt.

Apologies for any inconvenience,

Steve

- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, June 3, 2014 10:18:57 AM
 Subject: [NFV] Sub-team Meeting Reminder - Wednesday June 4 @ 1400 utc
 
 Hi all,
 
 Just a reminder that the first post-summit meeting of the sub-team is
 scheduled for Wednesday June 4 @ 1400 UTC in #openstack-meeting.
 
 Agenda:
 
 First meeting!
 Meet and greet
 Review Mission
 Review our current blueprint list and fill in anything we're not tracking
 yet
 Review use case prioritization
 Discuss tracking approaches:
 Use cases
 Blueprints
 Bugs
 
 The agenda is also available at https://wiki.openstack.org/wiki/Meetings/NFV
 for editing.
 
 Thanks!
 
 Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread 严超
Yes, but when you assign a production image to an ironic bare metal node,
you should provide ramdisk_id and kernel_id.
Should the ramdisk_id and kernel_id be the same as the deploy images (aka the
first set of k+r)?
You didn't answer whether the two sets of k+r should be the same.

Best Regards!
Chao Yan
--
My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>
My Weibo: http://weibo.com/herewearenow
--


2014-06-04 21:27 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:

 On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
  Thank you !
 
  I noticed the two sets of k+r in tftp configuration of ironic.
 
  Should the two sets be the same k+r ?
 Deploy images are created for you by DevStack/whatever. If you do it by
 hand, you may use diskimage-builder. Currently they are stored in flavor
 metadata, will be stored in node metadata later.

 And than you have production images that are whatever you want to
 deploy and they are stored in Glance metadata for the instance image.

 TFTP configuration should be created automatically, I doubt you should
 change it anyway.

 
  The first set is defined in the ironic node definition.
 
  How do we define the second set correctly ?
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
 
 
  2014-06-04 21:00 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:
  On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
   Hi,
  
   Thank you very much for your reply !
  
   But there are still some questions for me. Now I've come to
  the step
   where ironic partitions the disk as you replied.
  
   Then, how does ironic copies an image ? I know the image
  comes from
   glance. But how to know image is really available when
  reboot?
 
  I don't quite understand your question, what do you mean by
  available?
  Anyway, before deploying Ironic downloads image from Glance,
  caches it
  and just copies to a mounted iSCSI partition (using dd or so).
 
  
   And, what are the differences between final kernel (ramdisk)
  and
   original kernel (ramdisk) ?
 
  We have 2 sets of kernel+ramdisk:
  1. Deploy k+r: these are used only for deploy process itself
  to provide
  iSCSI volume and call back to Ironic. There's ongoing effort
  to create
  smarted ramdisk, called Ironic Python Agent, but it's WIP.
  2. Your k+r as stated in Glance metadata for an image - they
  will be
  used for booting after deployment.
 
  
   Best Regards!
   Chao Yan
   --
   My twitter:Andy Yan @yanchao727
   My Weibo:http://weibo.com/herewearenow
   --
  
  
  
   2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
  dtant...@redhat.com:
   Hi!
  
   Workflow is not entirely documented by now AFAIK.
  After PXE
   boots deploy
   kernel and ramdisk, it exposes hard drive via iSCSI
  and
   notifies Ironic.
   After that Ironic partitions the disk, copies an
  image and
   reboots node
   with final kernel and ramdisk.
  
   On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
Hi, All:
   
I searched a lot about how ironic
  automatically
   install image
on bare metal. But there seems to be no clear
  workflow out
   there.
   
What I know is, in traditional PXE, a bare
  metal
   pull image
from PXE server using tftp. In tftp root, there is
  a ks.conf
   which
tells tftp which image to kick start.
   
But in ironic there is no ks.conf pointed
  in tftp.
   How do bare
metal know which image to install ? Is there any
  clear
   workflow where
I can read ?
   
   
   
   
Best Regards!
Chao Yan
--
My twitter:Andy Yan @yanchao727
My Weibo:http://weibo.com/herewearenow
--
   
  
___
OpenStack-dev mailing list

[openstack-dev] [Nova] Different tenants can assign the same hostname to different machines without an error

2014-06-04 Thread samuel

Hi everyone,

Concerning the bug described at [1], where n different machines may have 
the same hostname and then n different DNS entries with that hostname 
are written, some points have to be discussed with the nova community.


On the bug review [2], Andrew Laski pointed out that we should discuss 
whether users should get errors due to the display name they choose. 
He thinks that the name should be used purely for aesthetic purposes, so I 
think a better approach to this problem would be to decouple the display 
name from DNS entries. Also, the display name has never needed to be globally 
unique before.


The patch [2] proposes changing the default DNS driver from 
'nova.network.noop_dns_driver.NoopDNSDriver' to one that verifies whether 
DNS entries already exist before adding them, such as 
'nova.network.minidns.MiniDNS'.
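
To make the intent concrete, here is a rough sketch of the kind of check a
verifying driver performs, in contrast to the no-op driver; the method names
follow my reading of nova's DNS driver interface, so treat them as
assumptions rather than the real signatures:

    # Rough sketch only: method names mirror my reading of nova's DNS
    # driver interface; the wrapped backend is any driver implementation.
    class DuplicateDNSEntry(Exception):
        pass


    class VerifyingDNSDriver(object):
        """Refuse to add a DNS entry whose name is already taken."""

        def __init__(self, backend):
            # backend: an object implementing the same driver interface
            self.backend = backend

        def create_entry(self, name, address, type, domain):
            if self.backend.get_entries_by_name(name, domain):
                # Surface the conflict instead of silently writing a second
                # record with the same hostname for another tenant.
                raise DuplicateDNSEntry(name)
            return self.backend.create_entry(name, address, type, domain)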


What are your thoughts on this?

Sincerely,
Samuel Queiroz

[1] https://bugs.launchpad.net/nova/+bug/1283538
[2] https://review.openstack.org/#/c/94252/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] question about createbackup API in nova

2014-06-04 Thread Andrew Laski


On 06/04/2014 02:56 AM, Bohai (ricky) wrote:

Hi stackers,

When I use the createBackup API, I found it just snapshots the root disk of the 
instance.
For an instance with multiple cinder backend volumes, it will not snapshot them.
It's a little different to the things in current createImage API.

My question is whether it's reasonable and discussed decision?
I tried but can't find the reason.


I don't know if the original reasoning has been discussed, but there has 
been some discussion on improving this behavior.  For some of the summit 
discussion around this, see 
https://etherpad.openstack.org/p/juno-nova-multi-volume-snapshots .




Best regards to you.
Ricky



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:51 +0800, 严超 wrote:
 Yes, but when you assign a production image to an ironic bare metal
 node. You should provide ramdisk_id and kernel_id. 
What do you mean by assign here? Could you quote some documentation?
The instance image is assigned using the --image argument to `nova boot`; its
kernel and ramdisk are fetched from its metadata.

The deploy kernel and ramdisk are currently taken from the flavor provided by
the --flavor argument (this will change eventually).
If you're using e.g. DevStack, you don't even touch the deploy kernel/ramdisk;
they're bound to the baremetal flavor.

Please see the quick start guide for hints on this:
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html
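
In python-novaclient terms the association is roughly the sketch below;
the credentials and UUIDs are placeholders, not values from this thread:

    # Rough sketch with python-novaclient; all values are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'demo',
                         'http://127.0.0.1:5000/v2.0')

    # --flavor: the baremetal flavor whose metadata currently points at
    # the *deploy* kernel/ramdisk used only during provisioning.
    flavor = nova.flavors.find(name='baremetal')

    # --image: the user image; its kernel_id/ramdisk_id properties in
    # Glance are what the node boots after deployment.
    nova.servers.create(name='bm-node-1', image='IMAGE_UUID', flavor=flavor)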

 
 Should the ramdisk_id and kernel_id be the same as deploy images (aka
 the first set of k+r) ?
 
 You didn't answer me if the two sets of r + k should be the same ? 
 
 
 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --
 
 
 
 2014-06-04 21:27 GMT+08:00 Dmitry Tantsur dtant...@redhat.com:
 On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
  Thank you !
 
  I noticed the two sets of k+r in tftp configuration of
 ironic.
 
  Should the two sets be the same k+r ?
 
 Deploy images are created for you by DevStack/whatever. If you
 do it by
 hand, you may use diskimage-builder. Currently they are stored
 in flavor
 metadata, will be stored in node metadata later.
 
 And than you have production images that are whatever you
 want to
 deploy and they are stored in Glance metadata for the instance
 image.
 
 TFTP configuration should be created automatically, I doubt
 you should
 change it anyway.
 
 
  The first set is defined in the ironic node definition.
 
  How do we define the second set correctly ?
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
 
 
  2014-06-04 21:00 GMT+08:00 Dmitry Tantsur
 dtant...@redhat.com:
  On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
   Hi,
  
   Thank you very much for your reply !
  
   But there are still some questions for me. Now
 I've come to
  the step
   where ironic partitions the disk as you replied.
  
   Then, how does ironic copies an image ? I know the
 image
  comes from
   glance. But how to know image is really available
 when
  reboot?
 
  I don't quite understand your question, what do you
 mean by
  available?
  Anyway, before deploying Ironic downloads image from
 Glance,
  caches it
  and just copies to a mounted iSCSI partition (using
 dd or so).
 
  
   And, what are the differences between final kernel
 (ramdisk)
  and
   original kernel (ramdisk) ?
 
  We have 2 sets of kernel+ramdisk:
  1. Deploy k+r: these are used only for deploy
 process itself
  to provide
  iSCSI volume and call back to Ironic. There's
 ongoing effort
  to create
  smarted ramdisk, called Ironic Python Agent, but
 it's WIP.
  2. Your k+r as stated in Glance metadata for an
 image - they
  will be
  used for booting after deployment.
 
  
   Best Regards!
   Chao Yan
   --
   My twitter:Andy Yan @yanchao727
   My Weibo:http://weibo.com/herewearenow
   --
  
  
  
   2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
  dtant...@redhat.com:
   Hi!
  
   Workflow is not entirely documented by now
 AFAIK.
  After PXE
   boots deploy
   kernel and ramdisk, it exposes hard drive
 via iSCSI
  and
   notifies Ironic.
   After that Ironic partitions the disk,
 copies an
  image and
   reboots node
   with final kernel and ramdisk.
  
   On Wed, 2014-06-04 

Re: [openstack-dev] [ironic bare metal install problem]

2014-06-04 Thread Matt Wagner

On 04/06/14 17:21 +0800, 严超 wrote:

   Hi, All:

 When I tried to deploy a bare metal using ironic and following
http://ma.ttwagner.com/bare-metal-deploys-with-devstack-and-ironic/, I face
the one issue:


It looks like this message came through several times, and that
someone helped you in another thread.

But, as the author of that linked blog post, I should disclaim that
it's seriously old and probably not entirely accurate anymore. I wrote
it back in January when it was hard to find any documentation, but
thankfully the situation has improved.

I'll have to circle back and make sure that blog post is still
accurate and not spreading misinformation, but you may find the
developer quickstart guide at
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html
to be useful as well. It's official documentation and should be more
up-to-date.

-- Matt


pgpWg5b9deJNj.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [UX] [ceilometer] Design for Alarming and Alarm Management

2014-06-04 Thread Martinez, Christian
I'm adding the ceilometer tag so the Ceilometer guys can participate as well.
Cheers,
H

From: Martinez, Christian [mailto:christian.marti...@intel.com]
Sent: Tuesday, June 3, 2014 5:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm 
Management

Hi Liz,
The designs look really cool and I think that we should consider a couple of 
things (more related to the alarm's implementation made at Ceilometer):

* There are combined alarms, which are a combination of two or more 
alarms. We need to see how they work and how we can show/modify them (or even 
if we want to show them)

* Currently, the alarms don't have a severity field. What would be 
the intention of having this? Is it to be able to filter by alarm severity? Is it to 
have a way to distinguish the not-so-critical alarms from the ones that are 
critical?

* The alarms have a list of actions to be executed based on their 
current state. I think that the intention of that feature was to create alarms 
that could manage and trigger different actions based on their alarm state. 
For instance, if an alarm is created but doesn't have enough data to be 
evaluated, the state is insufficient data, and you can add actions to be 
triggered when this happens, for instance writing a LOG file or calling a URL. 
Maybe we could use this functionality to notify the user whenever an alarm 
is triggered, and we should also consider that when creating or updating the 
alarms as well.

More related to Alarms in general :

* What are the ideas around the alarm notifications? I saw that your 
intention is to have some sort of g+ notifications, but what about other 
solutions/options, like email (using Mistral, perhaps?) or logs? What do you guys 
think about that?

* The alarms could be created by the users as well. I would add that 
CRUD functionality to the alarms tab in the overview section as well.

Hope it helps

Regards,
H
From: Liz Blanchard [mailto:lsure...@redhat.com]
Sent: Tuesday, June 3, 2014 3:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

Hi All,

I've recently put together a set of wireframes[1] around Alarm Management that 
would support the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page

If you have a chance it would be great to hear any feedback that folks have on 
this direction moving forward with Alarms.

Best,
Liz

[1] 
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Devananda van der Veen
On Wed, Jun 4, 2014 at 6:51 AM, 严超 yanchao...@gmail.com wrote:
 Yes, but when you assign a production image to an ironic bare metal node.
 You should provide ramdisk_id and kernel_id.
 Should the ramdisk_id and kernel_id be the same as deploy images (aka the
 first set of k+r) ?
 You didn't answer me if the two sets of r + k should be the same ?

The deploy kernel  ramdisk are created by diskimage-builder, and
contain a small agent used by Ironic when provisioning the node.

The user kernel  ramdisk must be separate. These should be the same
kernel  ramdisk contained in the user-specified image, eg. the image
you are requesting when issuing nova boot --image $UUID.

-Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Kafka support and high throughput

2014-06-04 Thread Hochmuth, Roland M
Hi Flavio, in your discussions around developing a Kafka plugin for
Marconi, would that potentially be done by adding a Kafka transport to
oslo.messaging? That is something I'm very interested in for the
monitoring-as-a-service project I'm working on.

Thanks --Roland


On 6/4/14, 3:06 AM, Flavio Percoco fla...@redhat.com wrote:

On 02/06/14 07:52 -0700, Keith Newstadt wrote:
Thanks for the responses Flavio, Roland.

Some background on why I'm asking:  we're using Kafka as the message
queue for a stream processing service we're building, which we're
delivering to our internal customers as a service along with OpenStack.
We're considering building a high throughput ingest API to get the
clients' data streams into the stream processing service.  It occurs to
me that this API is simply a messaging API, and so I'm wondering if we
should consider building this high throughput API as part of the Marconi
project.

Has this topic come up in the Marconi team's discussions, and would it
fit into the vision of the Marconi roadmap?

Yes it has and I'm happy to see this coming up in the ML, thanks.

Some things that we're considering in order to have a more flexible
architecture that will support a higher throughput are:

- Queue Flavors (Terrible name). This is for marconi what flavors are
  for Nova. It basically defines a set of properties that will belong
  to a queue. Some of those properties may be related to the messages
  lifetime or the storage capabilities (in-memory, freaking fast,
  durable, etc). This is yet to be done.

- 2 new drivers (AMQP, redis). The former adds support to brokers and
  the later to well, redis, which brings in support for in-memory
  queues. Work In Progress.

- A new transport. This is something we've discussed but we haven't
  reached an agreement yet on when this should be done nor what it
  should be based on. The gist of this feature is adding support for
  another protocol that can serve Marconi's API alongside the HTTP
  one. We've considered TCP and websocket so far. The former is
  perfect for lower level communications without the HTTP overhead
  whereas the later is useful for web apps.

That said, a Kafka plugin is something we heard a lot about at the
summit and we've discussed it a bit. I'd love to see that happen as
an external plugin for now. There's no need to wait for the rest to
happen.

I'm more than happy to help with guidance and support on the repo
creation, driver structure etc.

Cheers,
Flavio


Thanks,
Keith Newstadt
keith_newst...@symantec.com
@knewstadt


Date: Sun, 1 Jun 2014 15:01:40 +
From: Hochmuth, Roland M roland.hochm...@hp.com
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Kafka support and high
  throughput
Message-ID: cfae6524.762da%roland.hochm...@hp.com
Content-Type: text/plain; charset=us-ascii

There are some folks in HP evaluating different messaging technologies
for
Marconi, such as RabbitMQ and Kafka. I'll ping them and maybe they can
share
some information.

On a related note, the Monitoring as a Service solution we are working
on uses Kafka. This was just open-sourced at,
https://github.com/hpcloud-mon,
and will be moving over to StackForge starting next week. The
architecture
is at,
https://github.com/hpcloud-mon/mon-arch.

I haven't really looked at Marconi. If you are interested in
throughput, low latency, durability, scale and fault-tolerance Kafka
seems like a great choice.

It has been also pointed out from various sources that possibly Kafka
could be another oslo.messaging transport. Are you looking into that as
that would be very interesting to me and something that is on my task
list that I haven't gotten to yet.


On 5/30/14, 7:03 AM, Keith Newstadt keith_newst...@symantec.com
wrote:

Has anyone given thought to using Kafka to back Marconi?  And has there
been discussion about adding high throughput APIs to Marconi.

We're looking at providing Kafka as a messaging service for our
customers, in a scenario where throughput is a priority.  We've had good
luck using both streaming HTTP interfaces and long poll interfaces to
get
high throughput for other web services we've built.  Would this use case
be appropriate in the context of the Marconi roadmap?

Thanks,
Keith Newstadt
keith_newst...@symantec.com





Keith Newstadt
Cloud Services Architect
Cloud Platform Engineering
Symantec Corporation
www.symantec.com


Office: (781) 530-2299  Mobile: (617) 513-1321
Email: keith_newst...@symantec.com
Twitter: @knewstadt





Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Lucas Alvares Gomes
On Wed, Jun 4, 2014 at 2:51 PM, 严超 yanchao...@gmail.com wrote:
 Yes, but when you assign a production image to an ironic bare metal node.
 You should provide ramdisk_id and kernel_id.
 Should the ramdisk_id and kernel_id be the same as deploy images (aka the
 first set of k+r) ?
 You didn't answer me if the two sets of r + k should be the same ?

No, it should _not_ be the same kernel and ramdisk; the deploy ramdisk
is a special ramdisk with a modified init script to help with the
deployment of the node[1]. For the image you're deploying you should
use its own ramdisk and kernel; you can take a look at how the
load-image script[2] from TripleO extracts the kernel and ramdisk from
the image and registers everything with Glance.

[1] 
https://github.com/openstack/diskimage-builder/tree/master/elements/deploy-ironic
[2] 
https://github.com/openstack/tripleo-incubator/blob/master/scripts/load-image
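
For reference, the Glance side of that registration looks roughly like the
sketch below (python-glanceclient v1 API assumed; the endpoint, token and
file names are placeholders, not values from this thread):

    # Rough sketch only: v1 glanceclient API assumed, all names and
    # paths below are placeholders.
    from glanceclient import Client

    glance = Client('1', endpoint='http://127.0.0.1:9292',
                    token='ADMIN_TOKEN')

    kernel = glance.images.create(name='my-kernel', disk_format='aki',
                                  container_format='aki',
                                  data=open('my-image.vmlinuz', 'rb'))
    ramdisk = glance.images.create(name='my-ramdisk', disk_format='ari',
                                   container_format='ari',
                                   data=open('my-image.initrd', 'rb'))

    # The user image points at *its own* kernel/ramdisk; these are what
    # the node boots after Ironic finishes the deploy.
    glance.images.create(name='my-image', disk_format='qcow2',
                         container_format='bare',
                         properties={'kernel_id': kernel.id,
                                     'ramdisk_id': ramdisk.id},
                         data=open('my-image.qcow2', 'rb'))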

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] KMIP support

2014-06-04 Thread Clark, Robert Graham
Thanks guys, you've answered everything I needed to know!

I'll look to see what help I can provide to the KMIP efforts.

-Rob



On 04/06/2014 15:18, Becker, Bill bill.bec...@safenet-inc.com wrote:

Regarding:
 Also, is the "OpenStack KMIP Client" ever going to be a thing?
 (https://wiki.openstack.org/wiki/KMIPclient)

We made some progress with some prototype code, but never completed the
effort. We subsequently marked the corresponding blueprint as obsolete:
https://blueprints.launchpad.net/nova/+spec/kmip-client-for-volume-encryption

The work that JHU/APL is doing to integrate KMIP into barbican supersedes
the original idea.

--Bill

-Original Message-
From: Nathan Reller [mailto:rellerrel...@yahoo.com]
Sent: Tuesday, June 03, 2014 8:50 AM
To: Openstack-Dev
Subject: Re: [openstack-dev] [Barbican] KMIP support

 I was wondering about the progress of KMIP support in Barbican?

As John pointed out, JHU/APL is working on adding KMIP support to
Barbican.
We submitted the first CR to add a Secret Store interface into Barbican.
The next step is to add a KMIP implementation of the Secret Store.

 Is this waiting on an open python KMIP support?

We are working in parallel to add KMIP support to Barbican and to release
an open source version of a Python KMIP library. We would like to have
both out by Juno.

 Also, is the "OpenStack KMIP Client" ever going to be a thing?
 (https://wiki.openstack.org/wiki/KMIPclient)

That work was not proposed by us, so I can't comment on the status of
that.
Right now our path forward is to support Barbican by adding a KMIP Secret
Store.

-Nate

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] failed to load customised image

2014-06-04 Thread sonia verma
Hi All,

I want to upload my customised rootfs and kernel image for powerpc to the
OpenStack image service.

For this I followed the following document:

http://docs.openstack.org/grizzly/openstack-compute/admin/content/creating-custom-images.html#d6e6473

I followed the following steps as described in the document:

1. Copied the kernel and rootfs for powerpc onto the OpenStack controller.

kernel-image - uImage

rootfs - guest.rootfs.ext2.gz

2. sudo tune2fs -L uec-rootfs serverfinal.img

3. When I tried to upload the images to the OpenStack image service using
the following command:

cloud-publish-image -t image --kernel-file uImage --ramdisk-file
guest.rootfs.ext2.gz ppc64 disk.img bucket1

I got the following error:

failed to check for existing manifest
failed to register uImage

Please help me regarding this.

Thanks Sonia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] test configuration for ml2/ovs L2 and L3 agents

2014-06-04 Thread Carlino, Chuck
Hey Carl,

Thanks for the quick response.

I'm missing something, because the version in your review is quite different from the 
version I see when I clone devstack on my test machine, or when I browse 
https://github.com/openstack-dev/devstack/blob/master/samples/local.conf.  I'm 
not referring to your changes, just the base code.

Chuck


On Jun 3, 2014, at 1:55 PM, Carl Baldwin 
c...@ecbaldwin.netmailto:c...@ecbaldwin.net wrote:

Chuck,

I accidentally uploaded by local.conf changes to gerrit [1].  I
immediately abandoned them so that reviewers wouldn't waste time
thinking I was trying to get changes upstream.  But, since they're up
there now, you could take a look.

I am currently running a multi-node devstack on a couple of cloud VMs
with these changes.

Carl

[1] https://review.openstack.org/#/c/96972/

On Tue, Jun 3, 2014 at 9:23 AM, Carlino, Chuck 
chuck.carl...@hp.commailto:chuck.carl...@hp.com wrote:
Hi all,

I'm struggling a bit to get a test set up working for L2/L3 work (ml2/ovs).  
I've been trying multi-host devstack (just controller node for now), and I must 
be missing something important because n-sch bombs out.  Single node devstack 
works fine, but it's not very useful for L2/L3.

Any suggestions, or maybe someone has some local.conf files they'd care to 
share?

Many thanks,
Chuck

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Different tenants can assign the same hostname to different machines without an error

2014-06-04 Thread Day, Phil

 The patch [2] proposes changing the default DNS driver from
 'nova.network.noop_dns_driver.NoopDNSDriver' to other that verifies if
 DNS entries already exists before adding them, such as the
 'nova.network.minidns.MiniDNS'.

Changing a default setting in a way that isn't backwards compatible, when the 
cloud admin can already make that config change if they want it, doesn't seem 
like the right thing to do. I think you need to do this in two stages:
1) In Juno, deprecate the existing default value first (i.e. add a warning for one 
cycle that says that the default will change) 
2) In K, change the default

 

 -Original Message-
 From: samuel [mailto:sam...@lsd.ufcg.edu.br]
 Sent: 04 June 2014 15:01
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova] Different tenants can assign the same
 hostname to different machines without an error
 
 Hi everyone,
 
 Concerning the bug described at [1], where n different machines may have
 the same hostname and then n different DNS entries with that hostname are
 written; some points have to be discussed with the nova community.
 
 On the bug review [2], Andrew Laski pointed out that we should discuss
 about having users getting errors due to the display name they choose.
 He thinks that name should be used purely for aesthetic purposes so I think a
 better approach to this problem would be to decouple display name and DNS
 entries. And display name has never needed to be globally unique before.
 
 The patch [2] proposes changing the default DNS driver from
 'nova.network.noop_dns_driver.NoopDNSDriver' to other that verifies if
 DNS entries already exists before adding them, such as the
 'nova.network.minidns.MiniDNS'.
 
 What are your thoughts up there?
 
 Sincerely,
 Samuel Queiroz
 
 [1] https://bugs.launchpad.net/nova/+bug/1283538
 [2] https://review.openstack.org/#/c/94252/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-04 Thread Mark Washenberger
I will provide a little more context for the TC audience. I asked Hemanth
to tag this message [TC] because at the Juno summit in the cross-project
track there was discussion of cross-project api consistency [1]. The main
outcome of that meeting was that TC should recommend API conventions via
openstack/governance as defined by those interested in the community. If
you dig further into that etherpad, I believe there is a writeup of
actions but I don't think we actually found time to hit that point during
the discussion.

Thanks!


[1] -
https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis


On Fri, May 30, 2014 at 11:22 AM, Hemanth Makkapati 
hemanth.makkap...@rackspace.com wrote:

  Hello All,
 I'm writing to notify you of the approach the Glance community has decided
 to take for doing functional API.  Also, I'm writing to solicit your
 feedback on this approach in the light of cross-project API consistency.

 At the Atlanta Summit, the Glance team has discussed introducing
 functional API in Glance so as to be able to expose operations/actions that
 do not naturally fit into the CRUD-style. A few approaches are proposed and
 discussed here
 https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api.
 We have all converged on the approach to include 'action' and action type
 in the URL. For instance, 'POST /images/{image_id}/actions/{action_type}'.

 However, this is different from the way Nova does actions. Nova includes
 action type in the payload. For instance, 'POST /servers/{server_id}/action
 {type: action_type, ...}'. At this point, we hit a cross-project API
 consistency issue mentioned here
 https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis
 (under the heading 'How to act on resource - cloud perform on resources').
 Though we are differing from the way Nova does actions, and hence adding another
 source of cross-project API inconsistency, we have a few reasons to
 believe that Glance's way is helpful in certain ways.


 The reasons are as follows:
 1. Discoverability of operations.  It'll be easier to expose permitted
 actions through schemas or a JSON home document living at
 /images/{image_id}/actions/.
 2. More conducive for rate-limiting. It'll be easier to rate-limit actions
 in different ways if the action type is available in the URL.
 3. Makes more sense for functional actions that don't require a request
 body (e.g., image deactivation).

 At this point we are curious to see if the API conventions group believes
 this is a valid and reasonable approach.

 Any feedback is much appreciated. Thank you!

 Regards,
 Hemanth Makkapati

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Mike Spreitzer
John Garbutt j...@johngarbutt.com wrote on 06/04/2014 04:29:36 AM:

 On 3 June 2014 14:29, Jay Pipes jaypi...@gmail.com wrote:
  tl;dr
  =
 
  Move CPU and RAM allocation ratio definition out of the Nova scheduler and
  into the resource tracker. Remove the calculations for overcommit out of the
  core_filter and ram_filter scheduler pieces.
 ...
 * If we have filters that adjust the ratio per flavour, we will still
 need that calculation in the scheduler, but thats cool
 
 
 In general, the approach I am advocating is:
 * each host provides the data needed for the filter / weigher
 * ideally in a way that requires minimal processing
 
 And after some IRC discussions with Dan Smith, he pointed out that we
 need to think about:
 * with data versioned in a way that supports live-upgrades

Not only live upgrades but also dynamic reconfiguration.

Overcommitting affects the quality of service delivered to the cloud user. 
 In this situation in particular, as in many situations in general, I 
think we want to enable the service provider to offer multiple qualities 
of service.  That is, enable the cloud provider to offer a selectable 
level of overcommit.  A given instance would be placed in a pool that is 
dedicated to the relevant level of overcommit (or, possibly, a better pool 
if the selected one is currently full).  Ideally the pool sizes would be 
dynamic.  That's the dynamic reconfiguration I mentioned preparing for.
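
For what it's worth, the closest approximation I know of today is host 
aggregates carrying per-pool allocation ratios, roughly as sketched below 
with python-novaclient; the metadata keys assume the AggregateCoreFilter and 
AggregateInstanceExtraSpecsFilter are enabled, so treat the details as 
assumptions rather than a recipe:

    # Rough sketch; credentials are placeholders and the metadata keys
    # assume AggregateCoreFilter / AggregateInstanceExtraSpecsFilter.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')

    # One pool per quality of service, each with its own overcommit level.
    gold = nova.aggregates.create('gold', None)
    nova.aggregates.set_metadata(gold, {'cpu_allocation_ratio': '1.0',
                                        'qos': 'gold'})
    bronze = nova.aggregates.create('bronze', None)
    nova.aggregates.set_metadata(bronze, {'cpu_allocation_ratio': '16.0',
                                          'qos': 'bronze'})
    nova.aggregates.add_host(gold, 'compute-01')
    nova.aggregates.add_host(bronze, 'compute-02')

    # A flavor is pinned to a pool via extra specs, so the user's flavor
    # choice selects the QoS level.
    small_gold = nova.flavors.create('m1.small.gold', 2048, 1, 20)
    small_gold.set_keys({'aggregate_instance_extra_specs:qos': 'gold'})

Resizing a pool is then a matter of moving hosts between aggregates, which 
is still a manual step but should not require restarting nova-compute.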

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-04 Thread Eoghan Glynn

Comments inline ...

 Hi Liz,
 
 The designs look really cool and I think that we should consider a couple of
 things (more related to the alarm’s implementation made at Ceilometer):
 
 · There are combined alarms, which are a combination of two or more alarms.
 We need to see how they work and how we can show/modify them (or even if we
 want to show them)

+1

Combined alarms allow a meta-alarm to be layered over several underpinning
alarms, with their state combined using logical AND or OR.
 
 · Currently, the alarms doesn’t have a severity field. Which will be the
 intention to have this? Is to be able to filter by “alarm severity”? Is to
 have a way to distinguish the “not-so-critical” alarms that the ones that
 are critical?

No such concept currently.

 · The alarms have a “list of actions” to be executed based on their current
 state. I think that the intention of that feature was to create alarms that
 could manage and trigger different actions based on their “alarm state”. For
 instance, if an alarm is created but doesn’t have enough data to be
 evaluated, the state is “insufficient data”, and you can add actions to be
 triggered when this happens, for instance writing a LOG file or calling an
 URL. Maybe we could use this functionality that to notify the user whenever
 an alarm is triggered and we also should consider that when creating or
 updating the alarms as well.

Alarm actions are currently either:

1. log the event

2. POST out to a webhook with a notification of the state change and related
   data (e.g. the recent datapoints).

In reality, all non-toy alarms would have an action of form #2.

This is the form used by Heat for example when autoscaling is driven by
ceilometer alarms.

Re. the authorization of such actions in the alarm notification consumer,
one of two approaches are generally used:

1. pre-sign the webhook URL with the EC2 signer (this depends on the
   physical security of the URL being maintained, i.e. the URL not being
   leaked by ceilometer, or in this case horizon)

2. use the new-fangled keystone trusts

Heat originally used approach #1, but is changing over to approach #2 for Juno.

Actions are then associated with a target state (alarm, ok, insufficient_data)
with most alarm actions in practice being associated with the transition into
the alarm state. Multiple actions can be associated with a single target state.

By default, actions are only executed when the alarm state transition fires.

However, a continuous notification mode can be enabled on the alarm (such
that the actions are repeated on each alarm evaluation cycle as long as the
alarm *remains* in the target state).
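
Roughly, with python-ceilometerclient (attribute names as I recall them
from the v2 API, so worth verifying; the webhook URL is a placeholder):

    # Assumes `cc` is a v2 ceilometerclient handle as in the sketch above.
    alarm = cc.alarms.create(
        name='cpu-high',
        type='threshold',
        threshold_rule={'meter_name': 'cpu_util',
                        'comparison_operator': 'gt',
                        'threshold': 70.0,
                        'statistic': 'avg',
                        'period': 600,
                        'evaluation_periods': 3},
        # fire this webhook on the transition into the 'alarm' state ...
        alarm_actions=['http://example.com/scale-out'],
        # ... and keep firing it on every evaluation cycle while the alarm
        # remains in that state (the continuous notification mode).
        repeat_actions=True)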

 
 
 More related to Alarms in general :
 
 · What are the ideas around the alarm notifications? I saw that your
 intention is to have some sort of “g+ notifications” but what about other
 solutions/options, like email (using Mistral, perhaps?), logs. What do you
 guys think about that?

Currently only webhook notifications are supported.

But the idea for the last couple of cycles has been to leverage Marconi
SNS-style user-consumable notifications (email, SMS, tweets etc.) when and
if this becomes available.

 · The alarms could be created by the users as well.. I would add that CRUD
 functionality on the alarms tab on the overview section as well.

+1

Cheers,
Eoghan

 
 
 
 Hope it helps
 
 
 
 Regards,
 
 H
 
 
 From: Liz Blanchard [mailto:lsure...@redhat.com]
 Sent: Tuesday, June 3, 2014 3:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm
 Management
 
 
 
 
 Hi All,
 
 
 
 
 
 I’ve recently put together a set of wireframes[1] around Alarm Management
 that would support the following blueprint:
 
 
 https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page
 
 
 
 
 
 If you have a chance it would be great to hear any feedback that folks have
 on this direction moving forward with Alarms.
 
 
 
 
 
 Best,
 
 
 Liz
 
 
 
 
 
 [1]
 http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Scott Devoid

 Not only live upgrades but also dynamic reconfiguration.

 Overcommitting affects the quality of service delivered to the cloud user.
  In this situation in particular, as in many situations in general, I think
 we want to enable the service provider to offer multiple qualities of
 service.  That is, enable the cloud provider to offer a selectable level of
 overcommit.  A given instance would be placed in a pool that is dedicated
 to the relevant level of overcommit (or, possibly, a better pool if the
 selected one is currently full).  Ideally the pool sizes would be dynamic.
  That's the dynamic reconfiguration I mentioned preparing for.


+1 This is exactly the situation I'm in as an operator. You can do
different levels of overcommit with host-aggregates and different flavors,
but this has several drawbacks:

   1. The nature of this is *slightly* exposed to the end-user, through
   extra-specs and the fact that two flavors cannot have the same name. One
   scenario we have is that we want to be able to document our flavor
   names--what each name means, but we want to provide different QoS standards
   for different projects. Since flavor names must be unique, we have to
   create different flavors for different levels of service. *Sometimes you
   do want to lie to your users!*
   2. If I have two pools of nova-compute HVs with different overcommit
   settings, I have to manage the pool sizes manually. Even if I use puppet to
   change the config and flip an instance into a different pool, that requires
   me to restart nova-compute. Not an ideal situation.
   3. If I want to do anything complicated, like 3 overcommit tiers with
   good, better, best performance and allow the scheduler to pick
   better for a good instance if the good pool is full, this is very
   hard and complicated to do with the current system.
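
For reference, the aggregate-based setup described above looks roughly like
this with python-novaclient; all names are invented, and it relies on the
AggregateCoreFilter and AggregateInstanceExtraSpecsFilter being enabled in
the scheduler:

    from novaclient import client as novaclient

    nova = novaclient.Client(2, 'admin', 'secret', 'admin',
                             'http://127.0.0.1:5000/v2.0')

    # One aggregate per overcommit tier; AggregateCoreFilter reads the
    # cpu_allocation_ratio key from the aggregate metadata.
    best = nova.aggregates.create('tier-best', None)
    nova.aggregates.set_metadata(best.id, {'cpu_allocation_ratio': '1.0',
                                           'tier': 'best'})
    good = nova.aggregates.create('tier-good', None)
    nova.aggregates.set_metadata(good.id, {'cpu_allocation_ratio': '4.0',
                                           'tier': 'good'})

    # Pin flavors to tiers via extra specs, hence the duplicated flavor
    # names problem mentioned in point 1.
    flavor = nova.flavors.create('m1.small.best', ram=2048, vcpus=1, disk=20)
    flavor.set_keys({'aggregate_instance_extra_specs:tier': 'best'})

    # Pool membership is still managed by hand (point 2).
    nova.aggregates.add_host(best.id, 'compute-01')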


I'm looking forward to seeing this in nova-specs!
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-04 Thread Eoghan Glynn

Hi Liz,

Two further thoughts occurred to me after hitting send on
my previous mail.

First, there is the concept of alarm dimensioning; see my RDO Ceilometer
getting started guide[1] for an explanation of that notion.

A key associated concept is the notion of dimensioning which defines the set 
of matching meters that feed into an alarm evaluation. Recall that meters are 
per-resource-instance, so in the simplest case an alarm might be defined over a 
particular meter applied to all resources visible to a particular user. More 
useful, however, would be the option to explicitly select which specific resources
we're interested in alarming on. On one extreme we would have narrowly 
dimensioned alarms where this selection would have only a single target 
(identified by resource ID). On the other extreme, we'd have widely dimensioned 
alarms where this selection identifies many resources over which the statistic 
is aggregated, for example all instances booted from a particular image or all 
instances with matching user metadata (the latter is how Heat identifies 
autoscaling groups).
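
In API terms the dimensioning is just the query embedded in the alarm's
threshold rule; the field paths below are illustrative rather than
authoritative:

    # Narrowly dimensioned: a single instance.
    narrow_query = [{'field': 'resource_id', 'op': 'eq',
                     'value': '<instance-uuid>'}]

    # Widely dimensioned: every instance carrying matching user metadata
    # (roughly how Heat tags autoscaling group members).
    wide_query = [{'field': 'metadata.user_metadata.server_group',
                   'op': 'eq', 'value': 'my-scaling-group'}]

    # Either list would be passed as threshold_rule['query'] when
    # creating the alarm.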

We'd have to think about how that concept is captured in the
UX for alarm creation/update.

Second, there are a couple of more advanced alarming features 
that were added in Icehouse:

1. The ability to constrain alarms on time ranges, such that they
   would only fire say during 9-to-5 on a weekday. This would
   allow for example different autoscaling policies to be applied
   out-of-hours, when resource usage is likely to be cheaper and
   manual remediation less straight-forward.

2. The ability to exclude low-quality datapoints with anomalously
   low sample counts. This allows the leading edge of the trend of
   widely dimensioned alarms not to be skewed by eagerly-reporting
   outliers.

Perhaps not in a first iteration, but at some point it may make sense
to expose these more advanced features in the UI.
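
For the record, both are expressed as plain alarm attributes; the names
below are from memory of the Icehouse API, so treat them as indicative:

    # Assumes `cc` is a v2 ceilometerclient handle.

    # 1. Only evaluate/fire the alarm 9-to-5, Monday to Friday.
    time_constraints = [{'name': 'office-hours',
                         'start': '0 9 * * 1-5',   # cron-style start
                         'duration': 8 * 3600,     # seconds
                         'timezone': 'Europe/Dublin'}]

    # 2. Drop anomalously low-sample-count datapoints from evaluation.
    threshold_rule = {'meter_name': 'cpu_util',
                      'comparison_operator': 'gt',
                      'threshold': 70.0,
                      'statistic': 'avg',
                      'period': 600,
                      'evaluation_periods': 3,
                      'exclude_outliers': True}

    alarm = cc.alarms.create(name='cpu-high-office-hours',
                             type='threshold',
                             time_constraints=time_constraints,
                             threshold_rule=threshold_rule)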

Cheers,
Eoghan

[1] http://openstack.redhat.com/CeilometerQuickStart



- Original Message -
 
 Hi Liz,
 
 Looks great!
 
 Some thoughts on the wireframe doc:
 
 * The description of form:
 
 If CPU Utilization exceeds 80%, send alarm.
   
   misses the time-window aspect of the alarm definition.
 
   Whereas the boilerplate default descriptions generated by
   ceilometer itself:
 
 cpu_util > 70.0 during 3 x 600s
 
   captures this important info.
 
 * The metric names, e.g. CPU Utilization, are not an exact
   match for the meter names used by ceilometer, e.g. cpu_util.
 
 * Non-admin users can create alarms in ceilometer:
 
   This is where admins can come in and
define and edit any alarms they want
the environment to use.
 
   (though these alarms will only have visibility onto the stats
that would be accessible to the user on behalf of whom the
alarm is being evaluated)
 
 * There's no concept currently of alarm severity.
 
 * Should users be able to enable/dis-able alarms.
 
   Yes, the API allows for disabled (i.e. non-evaluated) alarms.
 
 * Should users be able to own/assign alarms?
 
   Only admin users can create an alarm on behalf of another
   user/tenant.
 
 * Should users be able to acknowledge, close alarms?
 
   No, we have no concept of ACKing an alarm.
 
 * Admins can also see a full list of all Alarms that have
taken place in the past.
 
   In ceilometer terminology, we refer to this as alarm history
   or alarm change events.
 
 * CPU Utilization exceeded 80%.
 
   Again good to capture the duration in that description of the
   event.
 
 * Within the Overview section, there should be a new tab that allows the
user to click and view all Alarms that have occurred in their
environment.
 
   Not sure really what environment means here. Non-admin tenants only
   have visibility to their own alarm, whereas admins have visibility to
   all alarms.
 
 * This list would keep the latest  alarms.
 
   Presumably this would be based on querying the alarm-history API,
   as opposed to an assumption that Horizon is consuming the actual
   alarm notifications?
 
 Cheers,
 Eoghan
 
 - Original Message -
  Hi All,
  
  I’ve recently put together a set of wireframes[1] around Alarm Management
  that would support the following blueprint:
  https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page
  
  If you have a chance it would be great to hear any feedback that folks have
  on this direction moving forward with Alarms.
  
  Best,
  Liz
  
  [1]
  http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___

Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-04 Thread Kyle Mestery
On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 This is an LBaaS topic but I'd like to get some Neutron Core members to
 give their opinions on this matter so I've just directed this to Neutron
 proper.

 The design for the new API and object model for LBaaS needs to be locked
 down before the hackathon in a couple of weeks and there are some
 questions that need to be answered.  It is pretty urgent that we come to a
 decision and get a clear strategy defined so we can actually write
 real code during the hackathon instead of wasting some of that valuable
 time discussing this.


 Implementation must be backwards compatible

 There are 2 ways that have come up on how to do this:

 1) New API and object model are created in the same extension and plugin
 as the old.  Any API requests structured for the old API will be
 translated/adapted into the new object model.
 PROS:
 -Only one extension and plugin
 -Mostly true backwards compatibility
 -Do not have to rename unchanged resources and models
 CONS:
 -May end up being confusing to an end-user.
 -Separation of old api and new api is less clear
 -Deprecating and removing old api and object model will take a bit more
 work
 -This is basically API versioning the wrong way

 2) A new extension and plugin are created for the new API and object
 model.  Each API would live side by side.  New API would need to have
 different names for resources and object models from Old API resources
 and object models.
 PROS:
 -Clean demarcation point between old and new
 -No translation layer needed
 -Do not need to modify existing API and object model, no new bugs
 -Drivers do not need to be immediately modified
 -Easy to deprecate and remove old API and object model later
 CONS:
 -Separate extensions and object model will be confusing to end-users
 -Code reuse by copy paste since old extension and plugin will be
 deprecated and removed.
 -This is basically API versioning the wrong way

 Now if #2 is chosen to be feasible and acceptable then there are a
 number of ways to actually do that.  I won't bring those up until a
 clear decision is made on which strategy above is the most acceptable.

Thanks for sending this out Brandon. I'm in favor of option #2 above,
especially considering the long-term plans to remove LBaaS from
Neutron. That approach will help the eventual end goal there. I am
also curious on what others think, and to this end, I've added this as
an agenda item for the team meeting next Monday. Brandon, it would be
great to get you there for the part of the meeting where we'll discuss
this.

Thanks!
Kyle

 Thanks,
 Brandon






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [meetings] Murano bug scrub

2014-06-04 Thread Timur Nurlygayanov
Today we continued our 'bug scrub' meeting and discussed all issues which
were assigned to the 'juno-1' milestone, and also all 'new' issues.

The meeting minutes are available by the following links:

Minutes:
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-04-15.59.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-04-15.59.txt
Log:
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-04-15.59.log.html


Thank you all who participated in today's meeting!




On Mon, Jun 2, 2014 at 10:13 PM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Thanks for today's bug scrub meeting!

 The meeting minutes are available by the following links:

 Minutes:
 http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.html
 Minutes (text):
 http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.txt
 Log:
 http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.log.html

 We plan to continue our meeting 4 June, at 4:00 - 6:00 PM UTC, in
 *#murano* IRC.

 You are welcome!



 On Tue, May 27, 2014 at 2:41 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi all,

 We want to schedule the bug scrub meeting for Murano project to 06/02/14
 (June 2), at 1700 UTC.
 On this meeting we will discuss all new bugs, which we plan to fix in
 juno-1 release cycle.

 All actual descriptions of Murano bugs are available here:
 https://bugs.launchpad.net/murano

 If you want to participate, welcome to the IRC *#murano* channel!


 Thank you!

 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc




 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc




-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Murali Allada
Angus/Julien,

I would disagree that we should expose the mistral DSL to end users.

What if we decide to use something other than Mistral in the future? We should 
be able to plug in any workflow system we want without changing what we expose 
to the end user.

To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
template to our end users.

Murali



On Jun 4, 2014, at 10:58 AM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com
 wrote:

Hi Angus,

I really agree with you. I would insist on #3, most of our users will use the 
default workbook, and only advanced users will want to customize the workflow. 
advanced users should easily understand a mistral workbook, cause they are 
advanced

To add to the cons of creating our own DSL, it will require a lot more work, 
more design discussions, more maintenance... We might end up doing what mistral 
is already doing. If we have some difficulties with Mistral's DSL, we can talk 
with the team, and contribute back our experience of using Mistral.

Julien





2014-06-04 14:11 GMT+02:00 Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So first note is this is an early posting and still is not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First the terminology
   I have a pipeline meaning the association between a mistral workbook,
   a trigger url and a plan. This is a running entity not just a
   different workbook.

The main issue seems to be the extent to which I am exposing the mistral
workbook. Many of you expected a simpler workflow DSL that would be
converted into the mistral workbook.

The reasons for me doing it this way are:
1) so we don't have to write much code
2) this is an iterative process. Let's try it the simple way first and
   only make it more complicated if we really need to (the agile way?).
3) to be consistent in the way we treat heat templates, mistral
   workbooks and language packs - i.e. we provide standard ones and
   allow you to customize down to the underlying openstack primitives
   if you want (we should aim for this to be only a small percentage
   of our users).
   eg. pipeline == (check-build-deploy mistral workbook +
basic-docker heat template + custom plan)
   here the user just chooses the heat template and workbook from a list
   of options.

4) if the mistral dsl is difficult for our users to work with we should
   give the mistral devs a chance to improve it before working around
   it.
5) our users are primarily developers and I don't think the current
   mistral DSL is tricky to figure out for someone that can code.
6) doing it this way we can make use of heat and mistral's horizon
   plugins and link to them from the pipeline instead of having to
   redo all of the same pages. In a similar way to how heat links to
   servers/volumes etc from a running stack.

- -Angus


Some things to note:
- - the entire mistral engine can be replaced with an engine level plugin

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjwz2AAoJEFrDYBLxZjWoEr0H/3nh66Umdw2nGUEs+SikigXa
XAN90NHHPuf1ssEqrF/rMjRKg+GvrLx+31x4oFfHEj7oplzGeVA9TJC0HOp4h6dh
iCeXAHF7KX+t4M4VuZ0y9TJB/jLxfxg4Qge7ENJpNDD/gggjMYSNhcWzBG87QBE/
Mi4YAvxNk1/C3/YZYx2Iujq7oM+6tflTeuoG6Ld72JMHryWT5/tdYZrCMnuD4F7Q
8a6Ge3t1dQh7ZlNHEuRDAg3G5oy+FInXyFasXYlYbtdpTxDL8/HbXegyAcsw42on
2ZKRDYBubQr1MJKvSV5I3jjOe4lxXXFylbWpYpoU8Y5ZXEKp69R4wrcVISF1jQQ=
=P0Sl
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Devdatta Kulkarni
Hi Angus, Julien,

No major disagreements.

My thinking is that we should provide a more application-developer-focused
mechanism for customizing workflows (point #3). This may not necessarily be an
entirely new DSL.
It could just be additions to the current Plan structure. For example, we could 
add a section that
defines what stages we want a particular assembly to go through (unit testing, 
functional testing vs. just unit testing).
These stages could actually be just the task names from a predefined Mistral 
workbook.
Btw, the stages could be listed in a different file (so not tied with a Plan).
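
Purely to illustrate the idea (nothing like this exists today, and every
name below is invented), the parsed Plan might gain something like:

    # Hypothetical sketch only; invented names, not an existing Solum format.
    plan = {
        'name': 'my-python-app',
        'artifacts': ['...'],                  # existing Plan content
        'pipeline': {
            'workbook': 'check-build-deploy',  # predefined Mistral workbook
            # which of that workbook's tasks this assembly should run
            'stages': ['unittest', 'build', 'deploy'],
        },
    }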

I guess the main point is, requiring application developers to define a 
complete workflow
consisting of what the entry point is, what should happen on failure, how many
times a task should
be re-tried on failure, etc. seems too low level for application developers to
be describing while deploying their apps to Solum.
Shouldn't application developers be more concerned with 'what' not 'how'?

- Devdatta


From: Julien Vey [vey.jul...@gmail.com]
Sent: Wednesday, June 04, 2014 10:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API

Hi Angus,

I really agree with you. I would insist on #3, most of our users will use the 
default workbook, and only advanced users will want to customize the workflow. 
advanced users should easily understand a mistral workbook, cause they are 
advanced

To add to the cons of creating our own DSL, it will require a lot more work, 
more design discussions, more maintenance... We might end up doing what mistral 
is already doing. If we have some difficulties with Mistral's DSL, we can talk 
with the team, and contribute back our experience of using Mistral.

Julien


2014-06-04 14:11 GMT+02:00 Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So first note is this is an early posting and still is not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First the terminology
   I have a pipeline meaning the association between a mistral workbook,
   a trigger url and a plan. This is a running entity not just a
   different workbook.

The main issue seems to be the extent to which I am exposing the mistral
workbook. Many of you expected a simpler workflow DSL that would be
converted into the mistral workbook.

The reasons for me doing it this way are:
1) so we don't have to write much code
2) this is an iterative process. Let's try it the simple way first and
   only make it more complicated if we really need to (the agile way?).
3) to be consistent in the way we treat heat templates, mistral
   workbooks and language packs - i.e. we provide standard ones and
   allow you to customize down to the underlying openstack primitives
   if you want (we should aim for this to be only a small percentage
   of our users).
   eg. pipeline == (check-build-deploy mistral workbook +
basic-docker heat template + custom plan)
   here the user just chooses the heat template and workbook from a list
   of options.

4) if the mistral dsl is difficult for our users to work with we should
   give the mistral devs a chance to improve it before working around
   it.
5) our users are primarily developers and I don't think the current
   mistral DSL is tricky to figure out for someone that can code.
6) doing it this way we can make use of heat and mistral's horizon
   plugins and link to them from the pipeline instead of having to
   redo all of the same pages. In a similar way to how heat links to
   servers/volumes etc from a running stack.

- -Angus


Some things to note:
- - the entire mistral engine can be replaced with an engine level plugin

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjwz2AAoJEFrDYBLxZjWoEr0H/3nh66Umdw2nGUEs+SikigXa
XAN90NHHPuf1ssEqrF/rMjRKg+GvrLx+31x4oFfHEj7oplzGeVA9TJC0HOp4h6dh
iCeXAHF7KX+t4M4VuZ0y9TJB/jLxfxg4Qge7ENJpNDD/gggjMYSNhcWzBG87QBE/
Mi4YAvxNk1/C3/YZYx2Iujq7oM+6tflTeuoG6Ld72JMHryWT5/tdYZrCMnuD4F7Q
8a6Ge3t1dQh7ZlNHEuRDAg3G5oy+FInXyFasXYlYbtdpTxDL8/HbXegyAcsw42on
2ZKRDYBubQr1MJKvSV5I3jjOe4lxXXFylbWpYpoU8Y5ZXEKp69R4wrcVISF1jQQ=
=P0Sl
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-04 Thread Brandon Logan
Thanks for your feedback Kyle.  I will be at that meeting on Monday.

Thanks,
Brandon

On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
 On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
  This is an LBaaS topic but I'd like to get some Neutron Core members to
  give their opinions on this matter so I've just directed this to Neutron
  proper.
 
  The design for the new API and object model for LBaaS needs to be locked
  down before the hackathon in a couple of weeks and there are some
  questions that need to be answered.  It is pretty urgent that we come to a
  decision and get a clear strategy defined so we can actually write
  real code during the hackathon instead of wasting some of that valuable
  time discussing this.
 
 
  Implementation must be backwards compatible
 
  There are 2 ways that have come up on how to do this:
 
  1) New API and object model are created in the same extension and plugin
  as the old.  Any API requests structured for the old API will be
  translated/adapted into the new object model.
  PROS:
  -Only one extension and plugin
  -Mostly true backwards compatibility
  -Do not have to rename unchanged resources and models
  CONS:
  -May end up being confusing to an end-user.
  -Separation of old api and new api is less clear
  -Deprecating and removing old api and object model will take a bit more
  work
  -This is basically API versioning the wrong way
 
  2) A new extension and plugin are created for the new API and object
  model.  Each API would live side by side.  New API would need to have
  different names for resources and object models from Old API resources
  and object models.
  PROS:
  -Clean demarcation point between old and new
  -No translation layer needed
  -Do not need to modify existing API and object model, no new bugs
  -Drivers do not need to be immediately modified
  -Easy to deprecate and remove old API and object model later
  CONS:
  -Separate extensions and object model will be confusing to end-users
  -Code reuse by copy paste since old extension and plugin will be
  deprecated and removed.
  -This is basically API versioning the wrong way
 
  Now if #2 is chosen to be feasible and acceptable then there are a
  number of ways to actually do that.  I won't bring those up until a
  clear decision is made on which strategy above is the most acceptable.
 
 Thanks for sending this out Brandon. I'm in favor of option #2 above,
 especially considering the long-term plans to remove LBaaS from
 Neutron. That approach will help the eventual end goal there. I am
 also curious on what others think, and to this end, I've added this as
 an agenda item for the team meeting next Monday. Brandon, it would be
 great to get you there for the part of the meeting where we'll discuss
 this.
 
 Thanks!
 Kyle
 
  Thanks,
  Brandon
 
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] test message - please ignore

2014-06-04 Thread ramki Krishnan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-04 Thread Liz Blanchard
Thanks for the excellent feedback on these, guys! I’ll be working on making 
updates over the next week and will send a fresh link out when done. Anyone 
else with feedback, please feel free to fire away.

Best,
Liz
On Jun 4, 2014, at 12:33 PM, Eoghan Glynn egl...@redhat.com wrote:

 
 Hi Liz,
 
 Two further thoughts occurred to me after hitting send on
 my previous mail.
 
 First, there is the concept of alarm dimensioning; see my RDO Ceilometer
 getting started guide[1] for an explanation of that notion.
 
 A key associated concept is the notion of dimensioning which defines the set 
 of matching meters that feed into an alarm evaluation. Recall that meters are 
 per-resource-instance, so in the simplest case an alarm might be defined over 
 a particular meter applied to all resources visible to a particular user. 
 More useful however would the option to explicitly select which specific 
 resources we're interested in alarming on. On one extreme we would have 
 narrowly dimensioned alarms where this selection would have only a single 
 target (identified by resource ID). On the other extreme, we'd have widely 
 dimensioned alarms where this selection identifies many resources over which 
 the statistic is aggregated, for example all instances booted from a 
 particular image or all instances with matching user metadata (the latter is 
 how Heat identifies autoscaling groups).
 
 We'd have to think about how that concept is captured in the
 UX for alarm creation/update.
 
 Second, there are a couple of more advanced alarming features 
 that were added in Icehouse:
 
 1. The ability to constrain alarms on time ranges, such that they
   would only fire say during 9-to-5 on a weekday. This would
   allow for example different autoscaling policies to be applied
   out-of-hours, when resource usage is likely to be cheaper and
   manual remediation less straight-forward.
 
 2. The ability to exclude low-quality datapoints with anomalously
   low sample counts. This allows the leading edge of the trend of
   widely dimensioned alarms not to be skewed by eagerly-reporting
   outliers.
 
 Perhaps not in a first iteration, but at some point it may make sense
 to expose these more advanced features in the UI.
 
 Cheers,
 Eoghan
 
 [1] http://openstack.redhat.com/CeilometerQuickStart
 
 
 
 - Original Message -
 
 Hi Liz,
 
 Looks great!
 
 Some thoughts on the wireframe doc:
 
 * The description of form:
 
If CPU Utilization exceeds 80%, send alarm.
 
  misses the time-window aspect of the alarm definition.
 
  Whereas the boilerplate default descriptions generated by
  ceilometer itself:
 
cpu_util > 70.0 during 3 x 600s
 
  captures this important info.
 
 * The metric names, e.g. CPU Utilization, are not an exact
  match for the meter names used by ceilometer, e.g. cpu_util.
 
 * Non-admin users can create alarms in ceilometer:
 
  This is where admins can come in and
   define and edit any alarms they want
   the environment to use.
 
  (though these alarms will only have visibility onto the stats
   that would be accessible to the user on behalf of whom the
   alarm is being evaluated)
 
 * There's no concept currently of alarm severity.
 
 * Should users be able to enable/dis-able alarms.
 
  Yes, the API allows for disabled (i.e. non-evaluated) alarms.
 
 * Should users be able to own/assign alarms?
 
  Only admin users can create an alarm on behalf of another
  user/tenant.
 
 * Should users be able to acknowledge, close alarms?
 
  No, we have no concept of ACKing an alarm.
 
 * Admins can also see a full list of all Alarms that have
   taken place in the past.
 
  In ceilometer terminology, we refer to this as alarm history
  or alarm change events.
 
 * CPU Utilization exceeded 80%.
 
  Again good to capture the duration in that description of the
  event.
 
 * Within the Overview section, there should be a new tab that allows the
   user to click and view all Alarms that have occurred in their
   environment.
 
  Not sure really what environment means here. Non-admin tenants only
  have visibility to their own alarm, whereas admins have visibility to
  all alarms.
 
 * This list would keep the latest  alarms.
 
  Presumably this would be based on querying the alarm-history API,
  as opposed to an assumption that Horizon is consuming the actual
  alarm notifications?
 
 Cheers,
 Eoghan
 
 - Original Message -
 Hi All,
 
 I’ve recently put together a set of wireframes[1] around Alarm Management
 that would support the following blueprint:
 https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page
 
 If you have a chance it would be great to hear any feedback that folks have
 on this direction moving forward with Alarms.
 
 Best,
 Liz
 
 [1]
 http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Qin Zhao
On Thu, Jun 5, 2014 at 12:19 AM, Mike Spreitzer mspre...@us.ibm.com wrote:


 Overcommitting affects the quality of service delivered to the cloud user.
  In this situation in particular, as in many situations in general, I think
 we want to enable the service provider to offer multiple qualities of
 service.  That is, enable the cloud provider to offer a selectable level of
 overcommit.  A given instance would be placed in a pool that is dedicated
 to the relevant level of overcommit (or, possibly, a better pool if the
 selected one is currently full).  Ideally the pool sizes would be dynamic.
  That's the dynamic reconfiguration I mentioned preparing for.


+1  I do agree that we need a dynamic overcommit setting per compute
node. Or maybe we also need a pluggable resource calculation method for
the resource tracker. Since each compute node may have a different
hypervisor, hardware or quality-of-service commitment, it does not make
sense to have unified settings in the scheduler or resource tracker.

The way I would prefer is: 1) Make our default cpu/ram filters simpler. No
need to care about any overcommit ratio. Users can change the filter if
they want to. 2) The resource tracker of each compute node can do the
calculation according to its settings (or its calculation method, if it is
pluggable), so that resource usage behavior can be tailored to meet each
node's unique requirements.

-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Adrian Otto
Murali,

I’d like to explore the possibility that Solum users don’t need to see a 
pipeline DSL at all in the general case, and that only power users are 
interacting with it. If that UX is possible, then what DSL is used is much less 
important as long as it allows customizations to at least the main workflow 
points, and allows new ones to be inserted, and for default steps to be skipped.

I know that some contributors expressed concern that exposing a lower level DSL 
may constrain us if an alternate direction is desired in the future. I think we 
can use API versions to deal with that.

In general, I agree with Angus’ position that if we can’t find a simple way to 
express the Pipeline using a Mistral DSL, that we should raise that concern 
with the Mistral team
to see what can be adjusted to make it a better fit.

Adrian

On Jun 4, 2014, at 10:08 AM, Murali Allada 
murali.all...@rackspace.commailto:murali.all...@rackspace.com wrote:

Angus/Julien,

I would disagree that we should expose the mistral DSL to end users.

What if we decide to use something other than Mistral in the future? We should 
be able to plug in any workflow system we want without changing what we expose 
to the end user.

To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
template to our end users.

Murali



On Jun 4, 2014, at 10:58 AM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com
 wrote:

Hi Angus,

I really agree with you. I would insist on #3, most of our users will use the 
default workbook, and only advanced users will want to customize the workflow. 
advanced users should easily understand a mistral workbook, cause they are 
advanced

To add to the cons of creating our own DSL, it will require a lot more work, 
more design discussions, more maintenance... We might end up doing what mistral 
is already doing. If we have some difficulties with Mistral's DSL, we can talk 
with the team, and contribute back our experience of using Mistral.

Julien





2014-06-04 14:11 GMT+02:00 Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So first note is this is an early posting and still is not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First the terminology
   I have a pipeline meaning the association between a mistral workbook,
   a trigger url and a plan. This is a running entity not just a
   different workbook.

The main issue seems to be the extent to which I am exposing the mistral
workbook. Many of you expected a simpler workflow DSL that would be
converted into the mistral workbook.

The reasons for me doing it this way are:
1) so we don't have to write much code
2) this is an iterative process. Let's try it the simple way first and
   only make it more complicated if we really need to (the agile way?).
3) to be consistent in the way we treat heat templates, mistral
   workbooks and language packs - i.e. we provide standard ones and
   allow you to customize down to the underlying openstack primitives
   if you want (we should aim for this to be only a small percentage
   of our users).
   eg. pipeline == (check-build-deploy mistral workbook +
basic-docker heat template + custom plan)
   here the user just chooses the heat template and workbook from a list
   of options.

4) if the mistral dsl is difficult for our users to work with we should
   give the mistral devs a chance to improve it before working around
   it.
5) our users are primarily developers and I don't think the current
   mistral DSL is tricky to figure out for someone that can code.
6) doing it this way we can make use of heat and mistral's horizon
   plugins and link to them from the pipeline instead of having to
   redo all of the same pages. In a similar way to how heat links to
   servers/volumes etc from a running stack.

- -Angus


Some things to note:
- - the entire mistral engine can be replaced with an engine level plugin

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjwz2AAoJEFrDYBLxZjWoEr0H/3nh66Umdw2nGUEs+SikigXa
XAN90NHHPuf1ssEqrF/rMjRKg+GvrLx+31x4oFfHEj7oplzGeVA9TJC0HOp4h6dh
iCeXAHF7KX+t4M4VuZ0y9TJB/jLxfxg4Qge7ENJpNDD/gggjMYSNhcWzBG87QBE/
Mi4YAvxNk1/C3/YZYx2Iujq7oM+6tflTeuoG6Ld72JMHryWT5/tdYZrCMnuD4F7Q
8a6Ge3t1dQh7ZlNHEuRDAg3G5oy+FInXyFasXYlYbtdpTxDL8/HbXegyAcsw42on
2ZKRDYBubQr1MJKvSV5I3jjOe4lxXXFylbWpYpoU8Y5ZXEKp69R4wrcVISF1jQQ=
=P0Sl
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Roshan Agrawal
Agreeing with what Murali said below. We should make things really simple for 
99 percent of users, and not force the complexity needed by the
minority of the advanced users on that 99 percent.

Mistral is a generic workflow DSL, we do not need to expose all that complexity 
to the Solum user that wants to customize the pipeline. Non-advanced users 
will have a need to customize the pipeline. In this case, the user is not 
necessarily the developer persona, but typically an admin/release manager 
persona.

Pipeline customization should be doable easily, without having to understand
or author a generic workflow DSL.

For the really advanced user who has a finer-grained need to tweak
the mistral workflow DSL (I am not sure if there will be a use case for this if 
we have the right customizations exposed via the pipeline API), we should have 
the option for the user to tweak the mistral DSL directly, but we should not 
expect 99.9% (or more) of the users to deal with a generic workflow.


From: Murali Allada [mailto:murali.all...@rackspace.com]
Sent: Wednesday, June 04, 2014 12:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API

Angus/Julien,

I would disagree that we should expose the mistral DSL to end users.

What if we decide to use something other than Mistral in the future? We should 
be able to plug in any workflow system we want without changing what we expose 
to the end user.

To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
template to our end users.

Murali



On Jun 4, 2014, at 10:58 AM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com
 wrote:


Hi Angus,

I really agree with you. I would insist on #3, most of our users will use the 
default workbook, and only advanced users will want to customize the workflow. 
advanced users should easily understand a mistral workbook, cause they are 
advanced

To add to the cons of creating our own DSL, it will require a lot more work, 
more design discussions, more maintenance... We might end up doing what mistral 
is already doing. If we have some difficulties with Mistral's DSL, we can talk 
with the team, and contribute back our experience of using Mistral.

Julien




2014-06-04 14:11 GMT+02:00 Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So first note is this is an early posting and still is not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First the terminology
   I have a pipeline meaning the association between a mistral workbook,
   a trigger url and a plan. This is a running entity not just a
   different workbook.

The main issue seems to be the extent to which I am exposing the mistral
workbook. Many of you expected a simpler workflow DSL that would be
converted into the mistral workbook.

The reasons for me doing it this way are:
1) so we don't have to write much code
2) this is an iterative process. Let's try it the simple way first and
   only make it more complicated if we really need to (the agile way?).
3) to be consistent in the way we treat heat templates, mistral
   workbooks and language packs - i.e. we provide standard ones and
   allow you to customize down to the underlying openstack primitives
   if you want (we should aim for this to be only a small percentage
   of our users).
   eg. pipeline == (check-build-deploy mistral workbook +
basic-docker heat template + custom plan)
   here the user just choose the heat template and workbook from a list
   of options.

4) if the mistral dsl is difficult for our users to work with we should
   give the mistral devs a chance to improve it before working around
   it.
5) our users are primary developers and I don't think the current
   mistral DSL is tricky to figure out for someone that can code.
6) doing it this way we can make use of heat and mistral's horizon
   plugins and link to them from the pipeline instead of having to
   redo all of the same pages. In a similar why that heat links to
   servers/volumes etc from a running stack.

- -Angus


Some things to note:
- - the entire mistral engine can be replaced with an engine level plugin

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjwz2AAoJEFrDYBLxZjWoEr0H/3nh66Umdw2nGUEs+SikigXa
XAN90NHHPuf1ssEqrF/rMjRKg+GvrLx+31x4oFfHEj7oplzGeVA9TJC0HOp4h6dh
iCeXAHF7KX+t4M4VuZ0y9TJB/jLxfxg4Qge7ENJpNDD/gggjMYSNhcWzBG87QBE/
Mi4YAvxNk1/C3/YZYx2Iujq7oM+6tflTeuoG6Ld72JMHryWT5/tdYZrCMnuD4F7Q
8a6Ge3t1dQh7ZlNHEuRDAg3G5oy+FInXyFasXYlYbtdpTxDL8/HbXegyAcsw42on

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Pipes

On 06/03/2014 10:40 PM, ChangBo Guo wrote:

Jay, thanks for raising this up.
+1 for this.
  A related question about the CPU and RAM allocation ratios: shall we
apply them when getting hypervisor information with the command nova
hypervisor-show ${hypervisor-name}?
The output looks like:
| memory_mb | 15824 |
| memory_mb_used| 1024 |
| running_vms   | 1 |
| service_host  | node-6 |
| service_id| 39 |
| vcpus | 4 |
| vcpus_used| 1

vcpus is showing the number of physical CPUs; I think that's not
correct.  Any thoughts?


Yes, I believe it would be appropriate to return the adjusted total of 
vCPU and memory. This would be trivial if we actually stored the 
allocation ratios in each compute node record, where they naturally 
belong (as the ratios describe an attribute of the compute node, not any 
scheduling policy), instead of in the scheduler filters.
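
Purely as an illustration of that (invented names, not actual Nova code),
the tracker-side adjustment amounts to:

    # The ratios live on the compute node; the tracker advertises the
    # adjusted totals once, instead of every filter re-deriving them.
    cpu_allocation_ratio = 16.0   # from the compute node's nova.conf
    ram_allocation_ratio = 1.5

    def advertised_totals(raw_vcpus, raw_memory_mb):
        return {'vcpus': int(raw_vcpus * cpu_allocation_ratio),
                'memory_mb': int(raw_memory_mb * ram_allocation_ratio)}

    # The 4-vCPU / 15824 MB node shown above would then report
    # 64 vCPUs and 23736 MB.
    print(advertised_totals(4, 15824))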


Best,
-jay


2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com:

Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova
scheduler and into the resource tracker. Remove the calculations for
overcommit out of the core_filter and ram_filter scheduler pieces.

Details
===

Currently, in the Nova code base, the thing that controls whether or
not the scheduler places an instance on a compute host that is
already full (in terms of memory or vCPU usage) is a pair of
configuration options* called cpu_allocation_ratio and
ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource
consumption figures for each compute node. For each compute host,
the core_filter and ram_filter's host_passes() method is called. In
the host_passes() method, the host's reported total amount of CPU or
RAM is multiplied by this configuration option, and the reported used
amount of CPU or RAM is then subtracted from the product. If the
result is greater than or equal to the number of vCPUs needed by the
instance being launched, True is returned and the host continues to
be considered during scheduling decisions.
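
In code, that check is roughly the following (simplified; limit
bookkeeping and error handling are omitted):

    cpu_allocation_ratio = 16.0   # CONF.cpu_allocation_ratio

    def host_passes(host_state, instance_vcpus):
        if not host_state.vcpus_total:
            return True   # fail-safe when a host reports no vCPU count
        adjusted_total = host_state.vcpus_total * cpu_allocation_ratio
        return adjusted_total - host_state.vcpus_used >= instance_vcpus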

I propose we move the definition of the allocation ratios out of the
scheduler entirely, as well as the calculation of the total amount
of resources each compute node contains. The resource tracker is the
most appropriate place to define these configuration options, as the
resource tracker is what is responsible for keeping track of total
and used resource amounts for all compute nodes.

Benefits:

  * Allocation ratios determine the amount of resources that a
compute node advertises. The resource tracker is what determines the
amount of resources that each compute node has, and how much of a
particular type of resource have been used on a compute node. It
therefore makes sense to put calculations and definition of
allocation ratios where they naturally belong.
  * The scheduler currently needlessly re-calculates total resource
amounts on every call to the scheduler. This isn't necessary. The
total resource amounts don't change unless a configuration
option is changed on a compute node (or host aggregate), and this
calculation can be done more efficiently once in the resource tracker.
  * Move more logic out of the scheduler
  * With the move to an extensible resource tracker, we can more
easily evolve to defining all resource-related options in the same
place (instead of in different filter files in the scheduler...)

Thoughts?

Best,
-jay

* Host aggregates may also have a separate allocation ratio that
overrides any configuration setting that a particular host may have

_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
ChangBo Guo(gcb)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Pipes

On 06/04/2014 03:08 AM, Yingjun Li wrote:

+1. If we do so, a related bug may be solved as well:
https://bugs.launchpad.net/nova/+bug/1323538


Yep, I agree that the above bug would be addressed.

Best,
-jay


On Jun 3, 2014, at 21:29, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:


Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova scheduler
and into the resource tracker. Remove the calculations for overcommit
out of the core_filter and ram_filter scheduler pieces.

Details
===

Currently, in the Nova code base, the thing that controls whether or
not the scheduler places an instance on a compute host that is already
full (in terms of memory or vCPU usage) is a pair of configuration
options* called cpu_allocation_ratio and ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource consumption
figures for each compute node. For each compute host, the core_filter
and ram_filter's host_passes() method is called. In the host_passes()
method, the host's reported total amount of CPU or RAM is multiplied
by this configuration option, and the reported used amount of CPU or
RAM is then subtracted from the product. If the result is greater than
or equal to the number of vCPUs needed by the instance being launched,
True is returned and the host continues to be considered during
scheduling decisions.

I propose we move the definition of the allocation ratios out of the
scheduler entirely, as well as the calculation of the total amount of
resources each compute node contains. The resource tracker is the most
appropriate place to define these configuration options, as the
resource tracker is what is responsible for keeping track of total and
used resource amounts for all compute nodes.

Benefits:

* Allocation ratios determine the amount of resources that a compute
node advertises. The resource tracker is what determines the amount of
resources that each compute node has, and how much of a particular
type of resource have been used on a compute node. It therefore makes
sense to put calculations and definition of allocation ratios where
they naturally belong.
* The scheduler currently needlessly re-calculates total resource
amounts on every call to the scheduler. This isn't necessary. The
total resource amounts don't change unless a configuration
option is changed on a compute node (or host aggregate), and this
calculation can be done more efficiently once in the resource tracker.
* Move more logic out of the scheduler
* With the move to an extensible resource tracker, we can more easily
evolve to defining all resource-related options in the same place
(instead of in different filter files in the scheduler...)

Thoughts?

Best,
-jay

* Host aggregates may also have a separate allocation ratio that
overrides any configuration setting that a particular host may have

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Pipes

On 06/04/2014 12:19 PM, Mike Spreitzer wrote:

John Garbutt j...@johngarbutt.com wrote on 06/04/2014 04:29:36 AM:

  On 3 June 2014 14:29, Jay Pipes jaypi...@gmail.com wrote:
   tl;dr
   =
  
   Move CPU and RAM allocation ratio definition out of the Nova
scheduler and
   into the resource tracker. Remove the calculations for overcommit
out of the
   core_filter and ram_filter scheduler pieces.
  ...
  * If we have filters that adjust the ratio per flavour, we will still
  need that calculation in the scheduler, but that's cool
 
 
  In general, the approach I am advocating is:
  * each host provides the data needed for the filter / weightier
  * ideally in a way that requires minimal processing
 
  And after some IRC discussions with Dan Smith, he pointed out that we
  need to think about:
  * with data versioned in a way that supports live-upgrades

Not only live upgrades but also dynamic reconfiguration.

Overcommitting affects the quality of service delivered to the cloud
user.  In this situation in particular, as in many situations in
general, I think we want to enable the service provider to offer
multiple qualities of service.  That is, enable the cloud provider to
offer a selectable level of overcommit.  A given instance would be
placed in a pool that is dedicated to the relevant level of overcommit
(or, possibly, a better pool if the selected one is currently full).
  Ideally the pool sizes would be dynamic.  That's the dynamic
reconfiguration I mentioned preparing for.


What you describe above is what the host aggregates functionality is 
for. Unfortunately, host aggregates are an API extension and so Nova 
can't rely on them as a general and reliable way of grouping host 
resources together.


All the more reason why adding integral things like host 
aggregates/groups as an extension was a mistake.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Julien Vey
Murali, Roshan.

I think there is a misunderstanding. By default, the user wouldn't see any
workflow dsl. If the user does not specify anything, we would use a
pre-defined mistral workbook defined by Solum, as Adrian described

If the user needs more, mistral is not so complicated. Have a look at this
example Angus has done
https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
We can define anything as solum actions, and the users would just have to
call one of this actions. Solum takes care of the implementation. If we
have comments about the DSL, Mistral's team is willing to help.

Our end-users will be developers, and a lot of them will need a custom
workflow at some point. For instance, if Jenkins has so many plugins, it's
because there are as many custom build workflows, specific to each company.
If we add an abstraction on top of mistral or any other workflow engine, we
will lock developers into our own decisions, and any additional feature would
require new development in Solum, whereas by exposing mistral (when users want
it) we would allow for any customization.

Julien




2014-06-04 19:50 GMT+02:00 Roshan Agrawal roshan.agra...@rackspace.com:

  Agreeing with what Murali said below. We should make things really
 simple for 99 percent of users, and not force the complexity
 needed by the minority of the “advanced users” on that 99
 percent.



 Mistral is a generic workflow DSL, we do not need to expose all that
 complexity to the Solum user that wants to customize the pipeline.
 “Non-advanced” users will have a need to customize the pipeline. In this
 case, the user is not necessarily the developer persona, but typically an
 admin/release manager persona.



 Pipeline customization should be doable easily, without having to
 understand or author a generic workflow DSL.



 For the really advanced user who has a finer-grained need to
 tweak the mistral workflow DSL (I am not sure if there will be a use case
 for this if we have the right customizations exposed via the pipeline API),
 we should have the “option” for the user to tweak the mistral DSL directly,
 but we should not expect 99.9% (or more) of the users to deal with a
 generic workflow.





 *From:* Murali Allada [mailto:murali.all...@rackspace.com]
 *Sent:* Wednesday, June 04, 2014 12:09 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [solum] reviews for the new API



 Angus/Julien,



 I would disagree that we should expose the mistral DSL to end users.



 What if we decide to use something other than Mistral in the future? We
 should be able to plug in any workflow system we want without changing what
 we expose to the end user.



 To me, the pipeline DSL is similar to our plan file. We don't expose a
 heat template to our end users.



 Murali







 On Jun 4, 2014, at 10:58 AM, Julien Vey vey.jul...@gmail.com

  wrote:



  Hi Angus,



 I really agree with you. I would insist on #3, most of our users will use
 the default workbook, and only advanced users will want to customize the
 workflow. advanced users should easily understand a mistral workbook,
 cause they are advanced



 To add to the cons of creating our own DSL, it will require a lot more
 work, more design discussions, more maintenance... We might end up doing
 what mistral is already doing. If we have some difficulties with Mistral's
 DSL, we can talk with the team, and contribute back our experience of using
 Mistral.



 Julien









 2014-06-04 14:11 GMT+02:00 Angus Salkeld angus.salk...@rackspace.com:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi all

 I have posted this series and it has evoked a surprising (to me) amount
 of discussion so I wanted to clarify some things and talk to some of the
 issues so we can move forward.

 So first note is this is an early posting and still is not tested much.


 https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

 First the terminology
I have a pipeline meaning the association between a mistral workbook,
a trigger url and a plan. This is a running entity not just a
different workbook.

 The main issue seems to be the extent to which I am exposing the mistral
 workbook. Many of you expected a simpler workflow DSL that would be
 converted into the mistral workbook.

 The reason for me doing it this way are:
 1) so we don't have to write much code
 2) this is an iterative process. Let's try it the simple way first and
only make it more complicated if we really need to (the agile way?).
 3) to be consistent in the way we treat heat templates, mistral
workbooks and language packs - i.e. we provide standard ones and
allow you to customize down to the underlying openstack primitives
if you want (we should aim for this to be only a small percentage
of our users).
eg. pipeline == (check-build-deploy mistral workbook +
 

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Pipes

On 06/04/2014 06:10 AM, Murray, Paul (HP Cloud) wrote:

Hi Jay,

This sounds good to me. You left out the part of limits from the
discussion – these filters set the limits used at the resource tracker.


Yes, and that is, IMO, bad design. Allocation ratios are the domain of 
the compute node and the resource tracker. Not the scheduler. The 
allocation ratios simply adjust the amount of resources that the compute 
node advertises to others. Allocation ratios are *not* scheduler policy, 
and they aren't related to flavours.



You also left out the force-to-host and its effect on limits.


force-to-host is definitively non-cloudy. It was a bad idea that should 
never have been added to Nova in the first place.


That said, I don't see how force-to-host has any affect on limits. 
Limits should not be output from the scheduler. In fact, they shouldn't 
be anything other than an *input* to the scheduler, provided in each 
host state struct that gets built from records updated in the resource 
tracker and the Nova database.


 Yes, I

would agree with doing this at the resource tracker too.

And of course the extensible resource tracker is the right way to do it J


:) Yes, clearly this is something that I ran into while brainstorming 
around the extensible resource tracker patches.


Best,
-jay


Paul.

*From:*Jay Lau [mailto:jay.lau@gmail.com]
*Sent:* 04 June 2014 10:04
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova] Proposal: Move CPU and memory
allocation ratio out of scheduler

Does there is any blueprint related to this? Thanks.

2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com:

Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova scheduler
and into the resource tracker. Remove the calculations for overcommit
out of the core_filter and ram_filter scheduler pieces.

Details
===

Currently, in the Nova code base, the thing that controls whether or not
the scheduler places an instance on a compute host that is already
full (in terms of memory or vCPU usage) is a pair of configuration
options* called cpu_allocation_ratio and ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource consumption
figures for each compute node. For each compute host, the core_filter
and ram_filter's host_passes() method is called. In the host_passes()
method, the host's reported total amount of CPU or RAM is multiplied by
this configuration option, and the reported used amount of CPU or RAM is
then subtracted from that product. If the result is greater than or equal
to the amount of CPU or RAM needed by the instance being launched, True
is returned and the host continues to be considered during scheduling
decisions.
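
For reference, a rough sketch of that per-host check (simplified, and with
illustrative names rather than the actual HostState attributes used in
core_filter.py / ram_filter.py):

    # Simplified sketch of the overcommit check described above.
    def host_passes(host_total, host_used, requested, allocation_ratio):
        limit = host_total * allocation_ratio   # e.g. 4 pCPUs * 16.0 = 64
        return (limit - host_used) >= requested

    # A host with 4 pCPUs and 60 vCPUs already allocated still accepts a
    # 4-vCPU instance under the default cpu_allocation_ratio of 16.0:
    print(host_passes(4, 60, 4, 16.0))   # True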

I propose we move the definition of the allocation ratios out of the
scheduler entirely, as well as the calculation of the total amount of
resources each compute node contains. The resource tracker is the most
appropriate place to define these configuration options, as the resource
tracker is what is responsible for keeping track of total and used
resource amounts for all compute nodes.

Benefits:

  * Allocation ratios determine the amount of resources that a compute
node advertises. The resource tracker is what determines the amount of
resources that each compute node has, and how much of a particular type
of resource have been used on a compute node. It therefore makes sense
to put calculations and definition of allocation ratios where they
naturally belong.
  * The scheduler currently needlessly re-calculates total resource
amounts on every call to the scheduler. This isn't necessary. The total
resource amounts don't change unless either a configuration option is
changed on a compute node (or host aggregate), and this calculation can
be done more efficiently once in the resource tracker.
  * Move more logic out of the scheduler
  * With the move to an extensible resource tracker, we can more easily
evolve to defining all resource-related options in the same place
(instead of in different filter files in the scheduler...)

Thoughts?

Best,
-jay

* Host aggregates may also have a separate allocation ratio that
overrides any configuration setting that a particular host may have

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Jay Pipes

On 06/04/2014 11:56 AM, Day, Phil wrote:

Hi Jay,


* Host aggregates may also have a separate allocation ratio that
overrides any configuration setting that a particular host may
have


So with your proposal would the resource tracker be responsible for
picking and using override values defined as part of an aggregate
that includes the host ?


Not quite sure what you're asking, but I *think* you are asking whether 
I am proposing that the allocation ratio of a host aggregate that a compute 
node belongs to would override any allocation ratio that might be set 
on the compute node? I would say no; the idea would be that the 
compute node's allocation ratio would override that of any host aggregate it 
might belong to.
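
As a rough sketch of that precedence (a hypothetical helper, not existing
resource tracker code):

    # Hypothetical resolution order for the effective allocation ratio:
    # the per-node setting wins, then the aggregate setting, then the default.
    def effective_cpu_allocation_ratio(node_ratio, aggregate_ratio,
                                       default=16.0):
        if node_ratio is not None:
            return node_ratio
        if aggregate_ratio is not None:
            return aggregate_ratio
        return default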



I don't think at the moment hosts have any logic which checks which
aggregate they are in, so that adds another DB query per compute host
every time the resource tracker needs that info - is that going to be
more load on the DB for a large system that the current logic ?


It uses objects, and there would be an extra join to the host_aggregates 
table, but not another query. For the record, the way that the scheduler 
currently works with host aggregates is poorly-performing due to a DB 
call that occurs *on every single call to launch instances*:


http://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/core_filter.py#n81


I like the idea of nodes controlling overcommit and reporting the
adjusted resources to the scheduler - just want to be sure of the
impact.


Cool, good to have you on board with the idea.


(This is beginning to feel like a nova-specs review ;-)


LOL, yes indeed. I like to socialize the idea before pushing up the 
spec, though, and this gets me some good feedback with which to write 
the spec...


Best,
-jay


-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: 03 June 2014 14:29 To: OpenStack
Development Mailing List Subject: [openstack-dev] [nova] Proposal:
Move CPU and memory allocation ratio out of scheduler

Hi Stackers,

tl;dr =

Move CPU and RAM allocation ratio definition out of the Nova
scheduler and into the resource tracker. Remove the calculations
for overcommit out of the core_filter and ram_filter scheduler
pieces.

Details ===

Currently, in the Nova code base, the thing that controls whether
or not the scheduler places an instance on a compute host that is
already full (in terms of memory or vCPU usage) is a pair of
configuration options* called cpu_allocation_ratio and
ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource
consumption figures for each compute node. For each compute host,
the core_filter and ram_filter's host_passes() method is called. In
the host_passes() method, the host's reported total amount of CPU
or RAM is multiplied by this configuration option, and the reported
used amount of CPU or RAM is then subtracted from that product. If
the result is greater than or equal to the amount of CPU or RAM needed
by the instance being launched, True is returned and the host
continues to be considered during scheduling decisions.

I propose we move the definition of the allocation ratios out of
the scheduler entirely, as well as the calculation of the total
amount of resources each compute node contains. The resource
tracker is the most appropriate place to define these configuration
options, as the resource tracker is what is responsible for keeping
track of total and used resource amounts for all compute nodes.

Benefits:

* Allocation ratios determine the amount of resources that a
compute node advertises. The resource tracker is what determines
the amount of resources that each compute node has, and how much of
a particular type of resource have been used on a compute node. It
therefore makes sense to put calculations and definition of
allocation ratios where they naturally belong. * The scheduler
currently needlessly re-calculates total resource amounts on every
call to the scheduler. This isn't necessary. The total resource
amounts don't change unless either a configuration option is
changed on a compute node (or host aggregate), and this calculation
can be done more efficiently once in the resource tracker. * Move
more logic out of the scheduler * With the move to an extensible
resource tracker, we can more easily evolve to defining all
resource-related options in the same place (instead of in different
filter files in the scheduler...)

Thoughts?

Best, -jay

* Host aggregates may also have a separate allocation ratio that
overrides any configuration setting that a particular host may
have

___ OpenStack-dev
mailing list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [NFV] - follow up on scheduling discussion

2014-06-04 Thread ramki Krishnan
All,

Thanks for the interest in the NFV scheduling topic. Please find a proposal on 
Smart Scheduler (Solver Scheduler) enhancements for NFV: Use Cases, 
Constraints, etc.:
https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit#heading=h.wlbclagujw8c

Based on this proposal, we are planning to enhance the existing 
solver-scheduler blueprint 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler.

Would love to hear your comments and thoughts. Would be glad to arrange a 
conference call if needed.

Thanks,
Ramki


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Steve Gordon
- Original Message -
 From: ChangBo Guo glongw...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, June 3, 2014 10:40:19 PM
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
 ratio out of scheduler
 
 Jay, thanks for raising this up .
 +1 for this .
  A related question about the CPU and RAM allocation ratio, shall we apply
 them when get hypervisor information with command nova hypervisor-show
 ${hypervisor-name}
 The output  shows like
 | memory_mb      | 15824  |
 | memory_mb_used | 1024   |
 | running_vms    | 1      |
 | service_host   | node-6 |
 | service_id     | 39     |
 | vcpus          | 4      |
 | vcpus_used     | 1      |
 
 vcpus is showing the number of physical CPU, I think that's not correct.
 Any thoughts ?

Yes, this is a frequent source of confusion, particularly as this ultimately 
ends up being reflected in the dashboard as vCPUs used/vCPUs available, even 
though the available figure is based on pCPUs, not vCPUs (so no factoring in of 
overcommit). See also:

https://bugs.launchpad.net/nova/+bug/1202965

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Resource action API

2014-06-04 Thread Zane Bitter

On 04/06/14 03:01, yang zhang wrote:

Hi all,
Now heat only supports suspending/resuming a whole stack, all the
resources of the stack will be suspended/resumed,
but sometime we just want to suspend or resume only a part of resources


Any reason you wouldn't put that subset of resources into a nested stack 
and suspend/resume that?



in the stack, so I think adding resource-action API for heat is
necessary. this API will be helpful to solve 2 problems:


I'm sceptical of this idea because the whole justification for having 
suspend/resume in Heat is that it's something that needs to follow the 
same dependency tree as stack delete/create.


Are you suggesting that if you suspend an individual resource, all of 
the resources dependent on it will also be suspended?



 - If we want to suspend/resume the resources of the stack, you need
to get the phy_id first and then call the API of other services, and
this won't update the status
of the resource in heat, which often cause some unexpected problem.


This is true, except for stack resources, which obviously _do_ store the 
state.



 - this API could offer a turn on/off function for some native
resources, e.g., we can turn on/off the autoscalinggroup or a single
policy with
the API, this is like the suspend/resume services feature[1] in AWS.


Which, I notice, is not exposed in CloudFormation.


  I registered a bp for it, and you are welcome for discussing it.
https://blueprints.launchpad.net/heat/+spec/resource-action-api


Please propose blueprints to the heat-specs repo:
http://lists.openstack.org/pipermail/openstack-dev/2014-May/036432.html

thanks,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][glance] Consistency around proposed server instance tagging API

2014-06-04 Thread Jay Pipes

Hi Stackers,

I'm looking to get consensus on a proposed API for server instance 
tagging in Nova:


https://review.openstack.org/#/c/91444/

In the proposal, the REST API for the proposed server instance tagging 
looks like so:


Get list of tags for server:

GET /v2/{project_id}/servers/{server_id}/tags

Replace the set of tags for a server:

POST /v2/{project_id}/servers/{server_id}/tags

Add a single tag to a server:

PUT /v2/{project_id}/servers/{server_id}/tags/{tag}

Remove all tags on a server

DELETE /v2/{project_id}/servers/{server_id}/tags

Remove a tag on a server:

DELETE /v2/{project_id}/servers/{server_id}/tags/{tag}

It is this last API call that has drawn the attention of John Garbutt 
(cc'd). In Glance v2 API, if you attempt to delete a tag that does not 
exist, then a 404 Not Found is returned. In my proposal, if you attempt 
to delete a tag that does not exist for the server, a 204 No Content is 
returned.
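
To make the difference concrete, this is roughly what a client would see in
each case (an illustrative sketch only; the placeholder values and endpoint
below are not real):

    # Illustrative client-side view of the two DELETE semantics.
    import requests

    project_id, server_id, token = 'PROJECT', 'SERVER', 'TOKEN'  # placeholders
    url = 'http://nova-api:8774/v2/%s/servers/%s/tags/no-such-tag' % (
        project_id, server_id)

    resp = requests.delete(url, headers={'X-Auth-Token': token})

    # Glance v2 image-tag semantics: deleting a missing tag is an error,
    # so resp.status_code would be 404.
    # Proposed server-tag semantics: the delete is idempotent,
    # so resp.status_code would be 204.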


John would like to gain some consensus on what approach is best going 
forward for simple string tagging APIs.


Please let us know your thoughts.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Kerberization of Horizon (kerbhorizon?)

2014-06-04 Thread Adam Young
OK, so I'm cranking on all of the Kerberos stuff, plus the S4U2Proxy work 
etc., except that I have never worked with Django directly before.  I 
want to get a sanity check on my approach:


Instead of authenticating to Keystone, Horizon will use mod_auth_krb5 
and REMOTE_USER to authenticate the user.  Then, in order to get a 
Keystone token, the code in 
openstack_dashboard/api/keystone.py:keystoneclient   needs to fetch a 
token for the user.


This will be done using a Kerberized Keystone and S4U2Proxy setup. There 
are alternatives using TGT delegation that I really want to have nothing 
to do with.


The keystoneclient call currently does:


conn = api_version['client'].Client(token=user.token.id,
endpoint=endpoint,
original_ip=remote_addr,
insecure=insecure,
cacert=cacert,
auth_url=endpoint,
debug=settings.DEBUG)

when I am done it would do:
from keystoneclient.contrib.auth.v3 import kerberos
...

if REMOTE_USER:
    auth = kerberos.Kerberos(OS_AUTH_URL)
else:
    auth = v3.auth.Token(token=user.token.id)

sess = session.Session(kerb_auth, verify=OS_CACERT)
conn = client.Client(session=sess, region_name='RegionOne')



(with the other parameters from the original call going into auth, 
session. or client as appropriate)



Am I on track?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

2014-06-04 Thread Gabriel Hurley
I've implemented Kerberos (via Apache) + Django once before, and yes, taking 
this as pseudo-code you're on the right track. Obviously the devil is in the 
details and you'll work out the particulars as you go.

The most important bit (obviously) is just making absolutely sure your 
REMOTE_USER header/environment variable is trusted, but that's outside the 
Django layer.

Assuming that you can work out with the other parameters from the original 
call going into auth, session, or client as appropriate as you said then you 
should be fine.

All the best,


-  Gabriel

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Wednesday, June 04, 2014 11:53 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

OK,  so I'm cranking on All of the Kerberso stuff: plus S4U2Proxy work 
etcexcept that I have never worked with DJango directly before.  I want to 
get a sanity check on my approach:

Instead of authenticating to Keystone, Horizon will use mod_auth_krb5 and 
REMOTE_USER to authenticate the user.  Then, in order to get a Keystone token, 
the code in openstack_dashboard/api/keystone.py:keystoneclient   needs to fetch 
a token for the user.

This will be done using a Kerberized Keystone and S4U2Proxy setup.  There are 
alternatives using TGT delegation that I really want to have nothing to do with.

The keystoneclient call currently does:


conn = api_version['client'].Client(token=user.token.id,
endpoint=endpoint,
original_ip=remote_addr,
insecure=insecure,
cacert=cacert,
auth_url=endpoint,
debug=settings.DEBUG)

when I am done it would do:
from keystoneclient.contrib.auth.v3 import kerberos
...

if  REMOTE_USER:
auth = kerberos.Kerberos(OS_AUTH_URL)
else:
auth = v3.auth.Token(token=user.token.id)

sess=session.Session(kerb_auth, verify=OS_CACERT)
conn = client.Client(session=sess, region_name='RegionOne')



(with the other parameters from the original call going into auth, session. or 
client as appropriate)


Am I on track?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Murali Allada
The problem with exposing Mistral DSL the way it is, is that there are many 
things the user should not be aware of.

Lets take the example Mistral DSL created here, 
https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml

Why should the end user have to know and define things like auth_token, 
base_image_id (which is the language pack id in the plan file), on-success 
actions, etc.? Users should not be concerned with these things. All they 
should be able to do is say 'yes/no' to a task which Solum supports and, if 
yes, configure it further. Dependencies between tasks, internal 
solum/glance/keystone ids, etc., should be hidden, but they are required by 
the Mistral DSL.




On Jun 4, 2014, at 1:10 PM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com
 wrote:

Murali, Roshan.

I think there is a misunderstood. By default, the user wouldn't see any 
workflow dsl. If the user does not specify anything, we would use a 
pre-defined mistral workbook defined by Solum, as Adrian described

If the user needs more, mistral is not so complicated. Have a look at this 
example Angus has done 
https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
We can define anything as solum actions, and the users would just have to call 
one of this actions. Solum takes care of the implementation. If we have 
comments about the DSL, Mistral's team is willing to help.

Our end-users will be developers, and a lot of them will need a custom workflow 
at some point. For instance, if Jenkins has so many plugins, it's because there 
are as many custom build workflows, specific to each company. If we add an 
abstraction on top of mistral or any other workflow engine, we will lock 
developers in our own decisions, and any additional feature would require a new 
development in Solum, whereas exposing (when users want it) mistral, we would 
allow for any customization.

Julien




2014-06-04 19:50 GMT+02:00 Roshan Agrawal 
roshan.agra...@rackspace.commailto:roshan.agra...@rackspace.com:
Agreeing with what Murali said below. We should make things really simple for 
the 99 percentile of the users, and not force the complexity needed by the 
minority of the “advanced users” on the rest of the 99 percentile users.

Mistral is a generic workflow DSL, we do not need to expose all that complexity 
to the Solum user that wants to customize the pipeline. “Non-advanced” users 
will have a need to customize the pipeline. In this case, the user is not 
necessarily the developer persona, but typically an admin/release manager 
persona.

Pipeline customization should be doable easily, without having the understand 
or author a generic workflow DSL.

For the really advanced user who needs to have a finer grained need to tweak 
the mistral workflow DSL (I am not sure if there will be a use case for this if 
we have the right customizations exposed via the pipeline API), we should have 
the “option” for the user to tweak the mistral DSL directly, but we should not 
expect 99.9% (or more) of the users to deal with a generic workflow.


From: Murali Allada 
[mailto:murali.all...@rackspace.commailto:murali.all...@rackspace.com]
Sent: Wednesday, June 04, 2014 12:09 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API


Angus/Julien,

I would disagree that we should expose the mistral DSL to end users.

What if we decide to use something other than Mistral in the future? We should 
be able to plug in any workflow system we want without changing what we expose 
to the end user.

To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
template to our end users.

Murali



On Jun 4, 2014, at 10:58 AM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com
 wrote:


Hi Angus,

I really agree with you. I would insist on #3, most of our users will use the 
default workbook, and only advanced users will want to customize the workflow. 
advanced users should easily understand a mistral workbook, cause they are 
advanced

To add to the cons of creating our own DSL, it will require a lot more work, 
more design discussions, more maintenance... We might end up doing what mistral 
is already doing. If we have some difficulties with Mistral's DSL, we can talk 
with the team, and contribute back our experience of using Mistral.

Julien




2014-06-04 14:11 GMT+02:00 Angus Salkeld 
angus.salk...@rackspace.commailto:angus.salk...@rackspace.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I have posted this series and it has evoked a surprising (to me) amount
of discussion so I wanted to clarify some things and talk to some of the
issues so we can move forward.

So first note is this is an early posting and still is not tested much.

https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-api,n,z

First the terminology
   I have a pipeline meaning the association between a mistral workbook,
   a 

Re: [openstack-dev] [Horizon] How to conditionally modify attributes in CreateNetwork class.

2014-06-04 Thread Nader Lahouti
Hi Timur,

Really appreciate your reply.
Will try your suggestions.

Thanks,
Nader.



On Tue, Jun 3, 2014 at 4:22 AM, Timur Sufiev tsuf...@mirantis.com wrote:

 Hello, Nader!

 As for `contributes` attribute, you could override `contribute(self,
 data, context)` method in your descendant of `workflows.Step` which by
 default simply iterates over all keys in `contributes`.

 Either you could use even more flexible approach (which also fits for
 `default_steps`): define in your `workflows.Step` descendants methods
 `contributes(self)` and `default_steps(self)` (with the conditional
 logic you need) and then decorate them with @property.
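
A minimal sketch of the @property approach, building on the CreateSubnetInfo
example quoted below (the settings flag and the extra key are hypothetical,
purely for illustration):

    from django.conf import settings
    from horizon import workflows

    from openstack_dashboard.dashboards.project.networks.workflows import (
        CreateSubnetInfoAction)


    class MyCreateSubnetInfo(workflows.Step):
        action_class = CreateSubnetInfoAction

        @property
        def contributes(self):
            keys = ["with_subnet", "subnet_name", "cidr",
                    "ip_version", "gateway_ip", "no_gateway"]
            # Hypothetical toggle: only contribute the extra key when the
            # deployment enables it.
            if getattr(settings, "ENABLE_EXTRA_SUBNET_KEY", False):
                keys.append("extra_key")
            return tuple(keys)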

 On Fri, May 30, 2014 at 10:15 AM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi All,
 
  Currently in the
  horizon/openstack_dashboard/dashboards/project/networks/workflows.py in
  classes such as CreateNetwork, CreateNetworkInfo and CreateSubnetInfo,
 the
  contributes or default_steps as shown below are fixed. Is it possible to
 add
  entries to those attributes conditionally?
 
  156class CreateSubnetInfo(workflows.Step):
  157action_class = CreateSubnetInfoAction
  158contributes = (with_subnet, subnet_name, cidr,
  159   ip_version, gateway_ip, no_gateway)
  160
 
  262class CreateNetwork(workflows.Workflow):
  263slug = create_network
  264name = _(Create Network)
  265finalize_button_name = _(Create)
  266success_message = _('Created network %s.')
  267failure_message = _('Unable to create network %s.')
  268default_steps = (CreateNetworkInfo,
  269 CreateSubnetInfo,
  270 CreateSubnetDetail)
 
  Thanks for your input.
 
  Nader.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Need help with a gnarly Object Version issue

2014-06-04 Thread Day, Phil
Hi Folks,

I've been working on a change to make the user_data field an optional part of 
the Instance object, since passing it around everywhere seems a bad idea:

-  It can be huge

-  It's only used when getting metadata

-  It can contain user sensitive data

-  https://review.openstack.org/#/c/92623/9

I've included the object version changes, and that all works fine - but I'm 
left with one issue that I'm not sure how to proceed with:

On a compute manager that is still running the old version of the code (i.e. 
using the previous object version), if a method that hasn't yet been converted 
to objects gets a dict created from the new version of the object (e.g. 
rescue, get_console_output), then the object_compat() decorator will call the 
_from_db_object() method in objects.Instance. Because this is the old 
version of the object code, it expects user_data to be a field in the dict, 
and throws a KeyError.

I can think of a number of possible fixes - but I'm not sure any of them are 
very elegant (and of course they have to fix the problem before the data is 
sent to the compute manager):


1)  Rather than removing the user_data field from the object, just set it to 
a null value if it's not requested.


2)  Add object versioning in the client side of the RPC layer for those 
methods that don't take objects.

I'm open to other ideas, and to general guidance around how deletion of 
fields from Objects is meant to be handled.
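
For what it's worth, a hedged sketch of how option 1 might look if combined
with an obj_make_compatible() back-level hook (assuming that mechanism applies
here; the version number and details below are placeholders, not the actual
patch):

    # Sketch only: hand old-version consumers the key back as a null so
    # their _from_db_object() does not trip over the missing field.
    from nova import utils
    from nova.objects import base


    class Instance(base.NovaObject):   # illustrative excerpt only

        def obj_make_compatible(self, primitive, target_version):
            super(Instance, self).obj_make_compatible(primitive,
                                                      target_version)
            target = utils.convert_version_to_tuple(target_version)
            if target < (1, 15) and 'user_data' not in primitive:
                primitive['user_data'] = None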

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Randall Burt
Sorry to poke my head in, but doesn't that beg the question of why you'd want 
to expose some third party DSL in the first place? If its an advanced feature, 
I wonder why it would be even considered before the 90% solution works, much 
less take a dependency on another non-integrated service. IMO, the value of 
Solum initially lies in addressing that 90% CI/CD/ALM solution well and in a 
way that doesn't require me to deploy and maintain an additional service that's 
not part of integrated OpenStack. At best, it would seem prudent to me to 
simply allow for a basic way for me to insert my own CI/CD steps and be 
prescriptive about how those custom steps participate in Solum's workflow 
rather than locking me into some specific workflow DSL. If I then choose to use 
Mistral to do some custom work, I can, but Solum shouldn't care what I use.

If Solum isn't fairly opinionated (at least early on) about the basic 
CI/CD-ALM lifecycle and the steps therein, I would question its utility if it's 
a wrapper over my existing Jenkins jobs and a workflow service.

On Jun 4, 2014, at 1:10 PM, Julien Vey vey.jul...@gmail.com wrote:

 Murali, Roshan.
 
 I think there is a misunderstood. By default, the user wouldn't see any 
 workflow dsl. If the user does not specify anything, we would use a 
 pre-defined mistral workbook defined by Solum, as Adrian described
 
 If the user needs more, mistral is not so complicated. Have a look at this 
 example Angus has done 
 https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
 We can define anything as solum actions, and the users would just have to 
 call one of this actions. Solum takes care of the implementation. If we have 
 comments about the DSL, Mistral's team is willing to help.
 
 Our end-users will be developers, and a lot of them will need a custom 
 workflow at some point. For instance, if Jenkins has so many plugins, it's 
 because there are as many custom build workflows, specific to each company. 
 If we add an abstraction on top of mistral or any other workflow engine, we 
 will lock developers in our own decisions, and any additional feature would 
 require a new development in Solum, whereas exposing (when users want it) 
 mistral, we would allow for any customization.
 
 Julien
 
 
 
 
 2014-06-04 19:50 GMT+02:00 Roshan Agrawal roshan.agra...@rackspace.com:
 Agreeing with what Murali said below. We should make things really simple for 
 the 99 percentile of the users, and not force the complexity needed by the 
 minority of the “advanced users” on the rest of the 99 percentile users.  
 
  
 
 Mistral is a generic workflow DSL, we do not need to expose all that 
 complexity to the Solum user that wants to customize the pipeline. 
 “Non-advanced” users will have a need to customize the pipeline. In this 
 case, the user is not necessarily the developer persona, but typically an 
 admin/release manager persona.  
 
  
 
 Pipeline customization should be doable easily, without having the understand 
 or author a generic workflow DSL.
 
  
 
 For the really advanced user who needs to have a finer grained need to tweak 
 the mistral workflow DSL (I am not sure if there will be a use case for this 
 if we have the right customizations exposed via the pipeline API), we should 
 have the “option” for the user to tweak the mistral DSL directly, but we 
 should not expect 99.9% (or more) of the users to deal with a generic 
 workflow.
 
  
 
  
 
 From: Murali Allada [mailto:murali.all...@rackspace.com] 
 Sent: Wednesday, June 04, 2014 12:09 PM
 
 
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [solum] reviews for the new API
 
  
 
 Angus/Julien,
 
  
 
 I would disagree that we should expose the mistral DSL to end users.
 
  
 
 What if we decide to use something other than Mistral in the future? We 
 should be able to plug in any workflow system we want without changing what 
 we expose to the end user.
 
  
 
 To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
 template to our end users.
 
  
 
 Murali
 
  
 
  
 
  
 
 On Jun 4, 2014, at 10:58 AM, Julien Vey vey.jul...@gmail.com
 
  wrote:
 
 
 
 
 Hi Angus,
 
  
 
 I really agree with you. I would insist on #3, most of our users will use the 
 default workbook, and only advanced users will want to customize the 
 workflow. advanced users should easily understand a mistral workbook, cause 
 they are advanced
 
  
 
 To add to the cons of creating our own DSL, it will require a lot more work, 
 more design discussions, more maintenance... We might end up doing what 
 mistral is already doing. If we have some difficulties with Mistral's DSL, we 
 can talk with the team, and contribute back our experience of using Mistral.
 
  
 
 Julien
 
  
 
  
 
  
 
  
 
 2014-06-04 14:11 GMT+02:00 Angus Salkeld angus.salk...@rackspace.com:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Hi all
 
 I have posted this series and it has 

[openstack-dev] [sahara] team meeting June 5 1800 UTC

2014-06-04 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140605T18


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

2014-06-04 Thread Adam Young

On 06/04/2014 03:10 PM, Gabriel Hurley wrote:


I've implemented Kerberos (via Apache) + Django once before, and yes, 
taking this as pseudo-code you're on the right track. Obviously the 
devil is in the details and you'll work out the particulars as you go.


The most important bit (obviously) is just making absolutely sure your 
REMOTE_USER header/environment variable is trusted, but that's outside 
the Django layer.


Assuming that you can work out with the other parameters from the 
original call going into auth, session, or client as appropriate as 
you said then you should be fine.




Thanks.  One part I'm not really sure about is whether it is OK to skip 
adding a token to the session before calling on the keystone code. It 
seems like the django_openstack_auth code creates a user object and adds 
that to the session.  I don't want any of the login forms from that 
package.  I'm guessing that I would really need to write a 
django-openstack-kerberos-backend to merge the logic from
RemoteUserBackend with django_openstack_auth; I think I want the logic 
of django_openstack_auth.backend.KeystoneBackend.authenticate.
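
Roughly, the sort of shape being discussed (very much a sketch: the
kerberos/session usage comes from the snippet earlier in this thread, and the
hand-off back into the existing KeystoneBackend logic is left as a
hypothetical hook):

    from keystoneclient import session
    from keystoneclient.contrib.auth.v3 import kerberos


    class KerberosBackend(object):
        """Django auth backend that trusts REMOTE_USER set by the Apache
        Kerberos module."""

        def authenticate(self, request=None, auth_url=None, **kwargs):
            if not request or not request.META.get('REMOTE_USER'):
                return None   # let the regular KeystoneBackend handle it
            auth = kerberos.Kerberos(auth_url)
            sess = session.Session(auth=auth)
            token = sess.get_token()   # S4U2Proxy delegation happens here
            # Hypothetical hook: reuse KeystoneBackend.authenticate()'s
            # user-building / session-populating logic with this token.
            return self._build_user_from_token(request, token, auth_url)

        def _build_user_from_token(self, request, token, auth_url):
            raise NotImplementedError("sketch only")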





All the best,

-Gabriel

*From:*Adam Young [mailto:ayo...@redhat.com]
*Sent:* Wednesday, June 04, 2014 11:53 AM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] Kerberization of Horizon (kerbhorizon?)

OK,  so I'm cranking on All of the Kerberso stuff: plus S4U2Proxy work 
etcexcept that I have never worked with DJango directly before.  I 
want to get a sanity check on my approach:


Instead of authenticating to Keystone, Horizon will use 
mod_auth_krb5 and REMOTE_USER to authenticate the user. Then, in order 
to get a Keystone token, the code in 
openstack_dashboard/api/keystone.py:keystoneclient   needs to fetch a 
token for the user.


This will be done using a Kerberized Keystone and S4U2Proxy setup.  
There are alternatives using TGT delegation that I really want to have 
nothing to do with.


The keystoneclient call currently does:


conn = api_version['client'].Client(token=user.token.id,
endpoint=endpoint,
original_ip=remote_addr,
insecure=insecure,
cacert=cacert,
auth_url=endpoint,
debug=settings.DEBUG)

when I am done it would do:

from keystoneclient.contrib.auth.v3 import kerberos

...

if  REMOTE_USER:
auth = kerberos.Kerberos(OS_AUTH_URL)
else:
auth = v3.auth.Token(token=user.token.id)

sess=session.Session(kerb_auth,verify=OS_CACERT)
conn=client.Client(session=sess,region_name='RegionOne')



(with the other parameters from the original call going into auth, 
session. or client as appropriate)



Am I on track?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

2014-06-04 Thread Gabriel Hurley
I suspect that to be true. Just adding a second authentication backend to the 
django-openstack-auth package would be fine. At least some of the logic should 
be reusable and creating a whole additional package seems like an unnecessary 
separation.


-  Gabriel

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Wednesday, June 04, 2014 12:43 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

On 06/04/2014 03:10 PM, Gabriel Hurley wrote:
I've implemented Kerberos (via Apache) + Django once before, and yes, taking 
this as pseudo-code you're on the right track. Obviously the devil is in the 
details and you'll work out the particulars as you go.

The most important bit (obviously) is just making absolutely sure your 
REMOTE_USER header/environment variable is trusted, but that's outside the 
Django layer.

Assuming that you can work out with the other parameters from the original 
call going into auth, session, or client as appropriate as you said then you 
should be fine.

Thanks.  One part I'm not really sure about was if it is OK to skip adding a 
token to the session before calling on the keystone code.   It seems like the 
django_openstack_auth code creates a user object and adds that to the session.  
I don't want any of the login forms from that package.  I'm guessing that I 
would really need to write django-openstack-kerberos-backend to merge the logic 
from
  RemoteUserBackend with django_openstack_auth; I think I want the  logic of 
django_openstack_auth.backend.KeystoneBackend.authenticate





All the best,


-  Gabriel

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Wednesday, June 04, 2014 11:53 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Kerberization of Horizon (kerbhorizon?)

OK,  so I'm cranking on All of the Kerberso stuff: plus S4U2Proxy work 
etcexcept that I have never worked with DJango directly before.  I want to 
get a sanity check on my approach:

Instead of authenticating to Keystone, Horizon will use mod_auth_krb5 and 
REMOTE_USER to authenticate the user.  Then, in order to get a Keystone token, 
the code in openstack_dashboard/api/keystone.py:keystoneclient   needs to fetch 
a token for the user.

This will be done using a Kerberized Keystone and S4U2Proxy setup.  There are 
alternatives using TGT delegation that I really want to have nothing to do with.

The keystoneclient call currently does:


conn = api_version['client'].Client(token=user.token.id,
endpoint=endpoint,
original_ip=remote_addr,
insecure=insecure,
cacert=cacert,
auth_url=endpoint,
debug=settings.DEBUG)

when I am done it would do:
from keystoneclient.contrib.auth.v3 import kerberos
...

if  REMOTE_USER:
auth = kerberos.Kerberos(OS_AUTH_URL)
else:
auth = v3.auth.Token(token=user.token.id)

sess=session.Session(kerb_auth,verify=OS_CACERT)
conn=client.Client(session=sess,region_name='RegionOne')



(with the other parameters from the original call going into auth, session. or 
client as appropriate)


Am I on track?







___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Adrian Otto
Good question Randall.

Either we implement some workflow ourselves (which is what we do now, and it 
is not configurable), or we adopt another workflow system. We figured that 
using an existing workflow system would be smarter than duplicating one by 
making Solum any more configurable.

We have plans to allow the workflow component to be pluggable, so if you prefer 
to have Jenkins or xyz underneath, that would be possible.

We selected Mistral as a first implementation attempt since we already use a 
devstack environment for the basic deployment, and adding Mistral to that is 
rather easy.

Adrian

Sent from my Verizon Wireless 4G LTE smartphone


 Original message 
From: Randall Burt
Date:06/04/2014 12:35 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API

Sorry to poke my head in, but doesn't that beg the question of why you'd want 
to expose some third party DSL in the first place? If its an advanced feature, 
I wonder why it would be even considered before the 90% solution works, much 
less take a dependency on another non-integrated service. IMO, the value of 
Solum initially lies in addressing that 90% CI/CD/ALM solution well and in a 
way that doesn't require me to deploy and maintain an additional service that's 
not part of integrated OpenStack. At best, it would seem prudent to me to 
simply allow for a basic way for me to insert my own CI/CD steps and be 
prescriptive about how those custom steps participate in Solum's workflow 
rather than locking me into some specific workflow DSL. If I then choose to use 
Mistral to do some custom work, I can, but Solum shouldn't care what I use.

If Solum isn't fairly opinionated (at least early on) about the basic 
CI/CD-ALM lifecycle and the steps therein, I would question its utility if its 
a wrapper over my existing Jenkins jobs and a workflow service.

On Jun 4, 2014, at 1:10 PM, Julien Vey vey.jul...@gmail.com wrote:

 Murali, Roshan.

 I think there is a misunderstood. By default, the user wouldn't see any 
 workflow dsl. If the user does not specify anything, we would use a 
 pre-defined mistral workbook defined by Solum, as Adrian described

 If the user needs more, mistral is not so complicated. Have a look at this 
 example Angus has done 
 https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
 We can define anything as solum actions, and the users would just have to 
 call one of this actions. Solum takes care of the implementation. If we have 
 comments about the DSL, Mistral's team is willing to help.

 Our end-users will be developers, and a lot of them will need a custom 
 workflow at some point. For instance, if Jenkins has so many plugins, it's 
 because there are as many custom build workflows, specific to each company. 
 If we add an abstraction on top of mistral or any other workflow engine, we 
 will lock developers in our own decisions, and any additional feature would 
 require a new development in Solum, whereas exposing (when users want it) 
 mistral, we would allow for any customization.

 Julien




 2014-06-04 19:50 GMT+02:00 Roshan Agrawal roshan.agra...@rackspace.com:
 Agreeing with what Murali said below. We should make things really simple for 
 the 99 percentile of the users, and not force the complexity needed by the 
 minority of the “advanced users” on the rest of the 99 percentile users.



 Mistral is a generic workflow DSL, we do not need to expose all that 
 complexity to the Solum user that wants to customize the pipeline. 
 “Non-advanced” users will have a need to customize the pipeline. In this 
 case, the user is not necessarily the developer persona, but typically an 
 admin/release manager persona.



 Pipeline customization should be doable easily, without having the understand 
 or author a generic workflow DSL.



 For the really advanced user who needs to have a finer grained need to tweak 
 the mistral workflow DSL (I am not sure if there will be a use case for this 
 if we have the right customizations exposed via the pipeline API), we should 
 have the “option” for the user to tweak the mistral DSL directly, but we 
 should not expect 99.9% (or more) of the users to deal with a generic 
 workflow.





 From: Murali Allada [mailto:murali.all...@rackspace.com]
 Sent: Wednesday, June 04, 2014 12:09 PM


 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [solum] reviews for the new API



 Angus/Julien,



 I would disagree that we should expose the mistral DSL to end users.



 What if we decide to use something other than Mistral in the future? We 
 should be able to plug in any workflow system we want without changing what 
 we expose to the end user.



 To me, the pipeline DSL is similar to our plan file. We don't expose a heat 
 template to our end users.



 Murali







 On Jun 4, 2014, at 10:58 AM, Julien Vey vey.jul...@gmail.com

  wrote:


[openstack-dev] [Heat]Heat template parameters encryption

2014-06-04 Thread Vijendar Komalla
Hi Devs,
I have submitted an WIP review (https://review.openstack.org/#/c/97900/)
for Heat parameters encryption blueprint
https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
This quick and dirty implementation encrypts all the parameters on
Stack 'store' and decrypts them on Stack 'load'.
Following are couple of improvements I am thinking about;
1. Instead of encrypting individual parameters, on Stack 'store' encrypt
all the parameters together as a dictionary  [something like
crypt.encrypt(json.dumps(param_dictionary))]
2. Just encrypt parameters that were marked as 'hidden', instead of
encrypting all parameters

I would like to hear your feedback/suggestions.
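
For illustration, a small sketch of option 2 (a hypothetical helper; it
assumes an encrypt() callable like the crypt.encrypt mentioned above and a
per-parameter 'hidden' flag on the schema):

    import json


    def encrypt_hidden_params(params, param_schemata, encrypt):
        """Return a copy of params with only the hidden ones encrypted."""
        result = dict(params)
        for name, schema in param_schemata.items():
            if getattr(schema, 'hidden', False) and name in result:
                result[name] = encrypt(json.dumps(result[name]))
        return result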


Thanks,
Vijendar


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-06-04 Thread Carl Baldwin
We'll meet tomorrow at the regular time in #openstack-meeting-3.

Juno-1 is just one week away.  We will discuss the distributed
virtual router (DVR) work to see what the community can do to help the
DVR team land the hard work that they've been doing.  I believe we also
have some IPAM stuff to discuss (see the agenda [1]).

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Backporting bugfixes to stable releases

2014-06-04 Thread Dmitry Borodaenko
The backporting rules are now part of Fuel wiki:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

There is also a new page on code review rules, please also review and
make use of it:
https://wiki.openstack.org/wiki/Fuel/Code_Review_Rules

On Mon, Jun 2, 2014 at 12:33 PM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
 Our experience in backporting leftover bugfixes from MOST 5.0 to 4.1
 was not pleasant, primarily because too many backport commits had to
 be dealt with at the same time.

 We can do better next time if we follow a couple of simple rules:

 1) When you create a new bug with High or Critical priority or upgrade
 an existing bug, always check if this bug is present in the supported
 stable release series (at least one most recent stable release). If it
 is present there, target it to all affected series (even if you don't
 expect to be able to eventually backport a fix). If it's not present
 in stable releases, document that on the bug so that other people
 don't have to re-check.

 2) When you propose a fix for a bug, cherry-pick the fix commit onto
 the stable/x.x branch for each series it is targeted to. Use the same
 Change-Id and topic (git review -t bug/id) to make it easier to
 track down all backports for that bug.

 3) When you approve a bugfix commit for master branch, use the
 information available so far on the bug and in the review request to
 review and maybe update backporting status of the bug. Is its priority
 high enough to need a backport? Is it targeted to all affected release
 series? Are there backport commits already? For all series where
 backport should exist and doesn't, create a backport review request
 yourself. For all other affected series, change bug status to Won't
 Fix and explain in bug comments.

 Needless to say, we should keep following the rule #0, too: do not
 merge commits into stable branches until it was merged into master or
 documented why it doesn't apply to master.

 --
 Dmitry Borodaenko



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Carl Baldwin
Yes, memcached is a candidate that looks promising.  First things first,
though.  I think we need the abstraction of an ipam interface merged.  That
will take some more discussion and work on its own.
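
(For illustration, the sort of narrow interface being talked about; this is a
hypothetical shape, not the actual proposal under review:)

    import abc


    class AllocationPool(object):
        """Hypothetical minimal interface for allocating ids/addresses from
        a large space (IP addresses, VXLAN VNIs, ...), so the backing store
        (database ranges, an in-memory agent, memcached) can be swapped."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def allocate(self, resource_id=None):
            """Return a free id/address and mark it allocated (any free one
            when resource_id is None)."""

        @abc.abstractmethod
        def release(self, resource_id):
            """Return a previously allocated id/address to the free pool."""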

Carl
On May 30, 2014 4:37 PM, Eugene Nikanorov enikano...@mirantis.com wrote:

  I was thinking it would be a separate process that would communicate over
 the RPC channel or something.
 memcached?

 Eugene.


 On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Eugene,

 That was part of the whole new set of complications that I
 dismissively waved my hands at.  :)

 I was thinking it would be a separate process that would communicate
 over the RPC channel or something.  More complications come when you
 think about making this process HA, etc.  It would mean going over RPC
 to rabbit to get an allocation which would be slow.  But the current
 implementation is slow.  At least going over RPC is greenthread
 friendly where going to the database doesn't seem to be.

 Carl

 On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  Hi Carl,
 
  The idea of in-memory storage was discussed for similar problem, but
 might
  not work for multiple server deployment.
  Some hybrid approach though may be used, I think.
 
  Thanks,
  Eugene.
 
 
  On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
  This is very similar to IPAM...  There is a space of possible ids or
  addresses that can grow very large.  We need to track the allocation
  of individual ids or addresses from that space and be able to quickly
  come up with a new allocations and recycle old ones.  I've had this in
  the back of my mind for a week or two now.
 
  A similar problem came up when the database would get populated with
  the entire free space worth of ip addresses to reflect the
  availability of all of the individual addresses.  With a large space
  (like an ip4 /8 or practically any ip6 subnet) this would take a very
  long time or never finish.
 
  Neutron was a little smarter about this.  It compressed availability
  in to availability ranges in a separate table.  This solved the
  original problem but is not problem free.  It turns out that writing
  database operations to manipulate both the allocations table and the
  availability table atomically is very difficult and ends up being very
  slow and has caused us some grief.  The free space also gets
  fragmented which degrades performance.  This is what led me --
  somewhat reluctantly -- to change how IPs get recycled back in to the
  free pool which hasn't been very popular.
 
  I wonder if we can discuss a good pattern for handling allocations
  where the free space can grow very large.  We could use the pattern
  for the allocation of both IP addresses, VXlan ids, and other similar
  resource spaces.
 
  For IPAM, I have been entertaining the idea of creating an allocation
  agent that would manage the availability of IPs in memory rather than
  in the database.  I hesitate, because that brings up a whole new set
  of complications.  I'm sure there are other potential solutions that I
  haven't yet considered.
 
  The L3 subteam is currently working on a pluggable IPAM model.  Once
  the initial framework for this is done, we can more easily play around
  with changing the underlying IPAM implementation.
 
  Thoughts?
 
  Carl
 
  On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com wrote:
   Hi, Folks,
  
   When we configure VXLAN range [1,16M], neutron-server service costs
 long
   time and cpu rate is very high(100%) when initiation. One test base
 on
   postgresql has been verified: more than 1h when VXLAN range is [1,
 1M].
  
   So, any good solution about this performance issue?
  
   Thanks,
   Xurong Yang
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-04 Thread Buraschi, Andres
Hi Brandon, hi Kyle!
I'm a bit confused about the deprecation (btw, thanks for sending this 
Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new API 
implementation. I understand the proposal and #2 sounds actually cleaner. 

Just out of curiosity, Kyle, where is LBaaS functionality intended to end up, 
if long-term plans are to remove it from Neutron?

(Nit question, I must clarify)

Thank you!
Andres

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Wednesday, June 04, 2014 2:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API

Thanks for your feedback Kyle.  I will be at that meeting on Monday.

Thanks,
Brandon

On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
 On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
  This is an LBaaS topic but I'd like to get some Neutron Core members 
  to give their opinions on this matter so I've just directed this to 
  Neutron proper.
 
  The design for the new API and object model for LBaaS needs to be 
  locked down before the hackathon in a couple of weeks and there are 
  some questions that need to be answered.  It is pretty urgent to come 
  to a decision on this and to get a clear strategy defined so we can 
  actually do real code during the hackathon instead of wasting some 
  of that valuable time discussing this.
 
 
  Implementation must be backwards compatible
 
  There are 2 ways that have come up on how to do this:
 
  1) New API and object model are created in the same extension and 
  plugin as the old.  Any API requests structured for the old API will 
  be translated/adapted into the new object model.
  PROS:
  -Only one extension and plugin
  -Mostly true backwards compatibility
  -Do not have to rename unchanged resources and models
  CONS:
  -May end up being confusing to an end-user.
  -Separation of old api and new api is less clear
  -Deprecating and removing old api and object model will take a bit more work
  -This is basically API versioning the wrong way
 
  2) A new extension and plugin are created for the new API and object 
  model.  Each API would live side by side.  New API would need to 
  have different names for resources and object models from Old API 
  resources and object models.
  PROS:
  -Clean demarcation point between old and new
  -No translation layer needed
  -Do not need to modify existing API and object model, no new bugs
  -Drivers do not need to be immediately modified
  -Easy to deprecate and remove old API and object model later
  CONS:
  -Separate extensions and object model will be confusing to end-users
  -Code reuse by copy paste since old extension and plugin will be deprecated and removed.
  -This is basically API versioning the wrong way
 
  Now if #2 is chosen to be feasible and acceptable then there are a 
  number of ways to actually do that.  I won't bring those up until a 
  clear decision is made on which strategy above is the most acceptable.
 
 Thanks for sending this out Brandon. I'm in favor of option #2 above, 
 especially considering the long-term plans to remove LBaaS from 
 Neutron. That approach will help the eventual end goal there. I am 
 also curious on what others think, and to this end, I've added this as 
 an agenda item for the team meeting next Monday. Brandon, it would be 
 great to get you there for the part of the meeting where we'll discuss 
 this.
 
 Thanks!
 Kyle
 
  Thanks,
  Brandon
 
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Issues on dnsmasq

2014-06-04 Thread Shixiong Shang
Hi, Xu Han:

You mentioned in the weekly meeting that you guys saw some issues in the lab, 
which may pertain to the dnsmasq source code I wrote. Would you please share 
with me the symptoms and the steps to reproduce them? I would like to take 
a look and fix the issue if necessary.

Thanks!

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Carl Baldwin
You are right.  I did feel a bit bad about hijacking the thread.  But
most of the discussion was related closely enough that I never decided to
fork into a new thread.

I think I'm done now.  I'll have a look at your review and we'll put
IPAM to rest for now.  :)

Carl
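
As a point of reference, the change Eugene mentions below (review 97774) is
essentially about seeding the allocation table with chunked bulk inserts
instead of adding one ORM object per VNI. A rough sketch under that assumption,
with hypothetical table and column names, so the review itself should be
treated as authoritative:

import sqlalchemy as sa

metadata = sa.MetaData()
vxlan_allocations = sa.Table(
    'vxlan_allocations', metadata,
    sa.Column('vxlan_vni', sa.Integer, primary_key=True),
    sa.Column('allocated', sa.Boolean, nullable=False, default=False),
)

def sync_allocations(session, vni_min, vni_max, chunk_size=10000):
    # Insert one row per VNI in [vni_min, vni_max], chunk_size rows at a time.
    for start in range(vni_min, vni_max + 1, chunk_size):
        end = min(start + chunk_size, vni_max + 1)
        chunk = [{'vxlan_vni': vni, 'allocated': False}
                 for vni in range(start, end)]
        # executemany-style bulk insert: one round trip per chunk rather
        # than one ORM flush per row.
        session.execute(vxlan_allocations.insert(), chunk)
    session.commit()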

On Wed, Jun 4, 2014 at 2:36 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 We hijacked the vxlan initialization performance thread with ipam! :)
 I've tried to address the initial problem with some simple sqla stuff:
 https://review.openstack.org/97774
 With sqlite it gives ~3x benefit over existing code in master.
 Need to do a little bit more testing with real backends to make sure
 parameters are optimal.

 Thanks,
 Eugene.


 On Thu, Jun 5, 2014 at 12:29 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Yes, memcached is a candidate that looks promising.  First things first,
 though.  I think we need the abstraction of an ipam interface merged.  That
 will take some more discussion and work on its own.

 Carl

 On May 30, 2014 4:37 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

  I was thinking it would be a separate process that would communicate
  over the RPC channel or something.
 memcached?

 Eugene.


 On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Eugene,

 That was part of the whole new set of complications that I
 dismissively waved my hands at.  :)

 I was thinking it would be a separate process that would communicate
 over the RPC channel or something.  More complications come when you
 think about making this process HA, etc.  It would mean going over RPC
 to rabbit to get an allocation which would be slow.  But the current
 implementation is slow.  At least going over RPC is greenthread
 friendly, whereas going to the database doesn't seem to be.

 Carl

 On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  Hi Carl,
 
  The idea of in-memory storage was discussed for similar problem, but
  might
  not work for multiple server deployment.
  Some hybrid approach though may be used, I think.
 
  Thanks,
  Eugene.
 
 
  On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net
  wrote:
 
  This is very similar to IPAM...  There is a space of possible ids or
  addresses that can grow very large.  We need to track the allocation
  of individual ids or addresses from that space and be able to quickly
  come up with new allocations and recycle old ones.  I've had this
  in
  the back of my mind for a week or two now.
 
  A similar problem came up when the database would get populated with
  the entire free space worth of ip addresses to reflect the
  availability of all of the individual addresses.  With a large space
  (like an ip4 /8 or practically any ip6 subnet) this would take a very
  long time or never finish.
 
  Neutron was a little smarter about this.  It compressed availability
  in to availability ranges in a separate table.  This solved the
  original problem but is not problem free.  It turns out that writing
  database operations to manipulate both the allocations table and the
  availability table atomically is very difficult and ends up being
  very
  slow and has caused us some grief.  The free space also gets
  fragmented which degrades performance.  This is what led me --
  somewhat reluctantly -- to change how IPs get recycled back in to the
  free pool which hasn't been very popular.
 
  I wonder if we can discuss a good pattern for handling allocations
  where the free space can grow very large.  We could use the pattern
  for the allocation of both IP addresses, VXlan ids, and other similar
  resource spaces.
 
  For IPAM, I have been entertaining the idea of creating an allocation
  agent that would manage the availability of IPs in memory rather than
  in the database.  I hesitate, because that brings up a whole new set
  of complications.  I'm sure there are other potential solutions that
  I
  haven't yet considered.
 
  The L3 subteam is currently working on a pluggable IPAM model.  Once
  the initial framework for this is done, we can more easily play
  around
  with changing the underlying IPAM implementation.
 
  Thoughts?
 
  Carl
 
  On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com
  wrote:
   Hi, Folks,
  
   When we configure a VXLAN range of [1, 16M], the neutron-server service
   takes a long time and CPU usage is very high (100%) during initialization.
   One test based on PostgreSQL has been verified: more than 1 hour when the
   VXLAN range is [1, 1M].
  
   So, is there any good solution to this performance issue?
  
   Thanks,
   Xurong Yang
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  

Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-04 Thread Brandon Logan
Hi Andres,
I've assumed (and we know how assumptions work) that the deprecation
would take place in Juno and after a cycle or two it would totally be
removed from the code.  Even if #1 is the way to go, the old /vips
resource would be deprecated in favor of /loadbalancers and /listeners.

I agree #2 is cleaner, but I don't want to start on an implementation
(though I kind of already have) that will fail to be merged in because
of the strategy.  The strategies are pretty different so one needs to be
decided on.

As for where LBaaS is intended to end up, I don't want to speak for
Kyle, so this is my understanding; It will end up outside of the Neutron
code base but Neutron and LBaaS and other services will all fall under a
Networking (or Network) program.  That is my understanding and I could
be totally wrong.

Thanks,
Brandon
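
To make option #2 slightly more concrete, the side-by-side approach would
roughly mean a second extension exposing the new resource names next to the
old ones, along the lines of the fragment below. The attributes are trimmed
placeholders, not the agreed object model:

# Illustrative fragment only; not the actual proposed extension code.
RESOURCE_ATTRIBUTE_MAP = {
    'loadbalancers': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'name': {'allow_post': True, 'allow_put': True, 'is_visible': True,
                 'default': ''},
        'vip_subnet_id': {'allow_post': True, 'allow_put': False,
                          'is_visible': True},
    },
    'listeners': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'loadbalancer_id': {'allow_post': True, 'allow_put': False,
                            'is_visible': True},
        'protocol': {'allow_post': True, 'allow_put': False,
                     'is_visible': True},
        'protocol_port': {'allow_post': True, 'allow_put': False,
                          'is_visible': True},
    },
}
# The existing /vips, /pools, etc. keep their current definitions in the old
# extension until they are deprecated and removed.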

On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
 Hi Brandon, hi Kyle!
 I'm a bit confused about the deprecation (btw, thanks for sending this 
 Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new API 
 implementation. I understand the proposal and #2 sounds actually cleaner. 
 
 Just out of curiosity, Kyle, where is LBaaS functionality intended to end up, 
 if long-term plans are to remove it from Neutron?
 
 (Nit question, I must clarify)
 
 Thank you!
 Andres
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Wednesday, June 04, 2014 2:18 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
 Thanks for your feedback Kyle.  I will be at that meeting on Monday.
 
 Thanks,
 Brandon
 
 On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
  On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan 
  brandon.lo...@rackspace.com wrote:
   This is an LBaaS topic but I'd like to get some Neutron Core members 
   to give their opinions on this matter so I've just directed this to 
   Neutron proper.
  
   The design for the new API and object model for LBaaS needs to be 
   locked down before the hackathon in a couple of weeks and there are 
   some questions that need to be answered.  It is pretty urgent to come 
   to a decision on this and to get a clear strategy defined so we can 
   actually do real code during the hackathon instead of wasting some 
   of that valuable time discussing this.
  
  
   Implementation must be backwards compatible
  
   There are 2 ways that have come up on how to do this:
  
   1) New API and object model are created in the same extension and 
   plugin as the old.  Any API requests structured for the old API will 
   be translated/adapted into the new object model.
   PROS:
   -Only one extension and plugin
   -Mostly true backwards compatibility
   -Do not have to rename unchanged resources and models
   CONS:
   -May end up being confusing to an end-user.
   -Separation of old api and new api is less clear
   -Deprecating and removing old api and object model will take a bit more work
   -This is basically API versioning the wrong way
  
   2) A new extension and plugin are created for the new API and object 
   model.  Each API would live side by side.  New API would need to 
   have different names for resources and object models from Old API 
   resources and object models.
   PROS:
   -Clean demarcation point between old and new
   -No translation layer needed
   -Do not need to modify existing API and object model, no new bugs
   -Drivers do not need to be immediately modified
   -Easy to deprecate and remove old API and object model later
   CONS:
   -Separate extensions and object model will be confusing to end-users
   -Code reuse by copy paste since old extension and plugin will be deprecated and removed.
   -This is basically API versioning the wrong way
  
   Now if #2 is chosen to be feasible and acceptable then there are a 
   number of ways to actually do that.  I won't bring those up until a 
   clear decision is made on which strategy above is the most acceptable.
  
  Thanks for sending this out Brandon. I'm in favor of option #2 above, 
  especially considering the long-term plans to remove LBaaS from 
  Neutron. That approach will help the eventual end goal there. I am 
  also curious on what others think, and to this end, I've added this as 
  an agenda item for the team meeting next Monday. Brandon, it would be 
  great to get you there for the part of the meeting where we'll discuss 
  this.
  
  Thanks!
  Kyle
  
   Thanks,
   Brandon
  
  
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [Infra] Meeting Tuesday June 3rd at 19:00 UTC

2014-06-04 Thread Elizabeth K. Joseph
On Mon, Jun 2, 2014 at 9:39 AM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday June 3rd, at 19:00 UTC in #openstack-meeting

Meeting minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-03-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-03-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-03-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-04 Thread Kyle Mestery
Hi all:

I was curious if people are having issues booking the room from the
block I have setup. I received word from the hotel that only one (1!)
person has booked yet. Given the mid-cycle is approaching in a month,
I wanted to make sure that people are making plans for travel. Are
people booking in places other than the one I had setup as reserved?
If so, I'll remove the room block. Keep in mind the hotel I had a
block reserved at is very convenient in that it's literally walking
distance to the mid-cycle location at the Bloomington, MN Cisco
offices.

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-04 Thread Edgar Magana Perdomo (eperdomo)
I tried to book online and it seems that the pre-payment is non-refundable:

Hyatt.Com Rate - Rate Rules: Full prepayment required, non-refundable, no
date changes.


The price is $149 USD per night. Is that what you have blocked?

Edgar

On 6/4/14, 2:47 PM, Kyle Mestery mest...@noironetworks.com wrote:

Hi all:

I was curious if people are having issues booking the room from the
block I have setup. I received word from the hotel that only one (1!)
person has booked yet. Given the mid-cycle is approaching in a month,
I wanted to make sure that people are making plans for travel. Are
people booking in places other than the one I had setup as reserved?
If so, I'll remove the room block. Keep in mind the hotel I had a
block reserved at is very convenient in that it's literally walking
distance to the mid-cycle location at the Bloomington, MN Cisco
offices.

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Roshan Agrawal
I documented a starting point for "What can be customized in the CI Pipeline".
Hopefully this helps in the design discussion.

Take a look at the google docs below:
https://docs.google.com/document/d/1a0yjxKWbwnY7g9NZtYALEZdm1g8Uf4fixDZLAgRBZCU/edit?pli=1#

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Wednesday, June 04, 2014 2:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API

Good question Randall.

Either we implement some workflow, which is what we do now and which is not 
configurable, or we adopt another workflow system. We figured that using an 
existing workflow system would be smarter than duplicating one by making Solum 
any more configurable.

We have plans to allow the workflow component to be pluggable, so if you prefer 
to have Jenkins or xyz underneath, that would be possible.

We selected Mistral as a first implementation attempt since we already use a 
devstack environment for the basic deployment, and adding Mistral to that is 
rather easy.

Adrian

Sent from my Verizon Wireless 4G LTE smartphone

 Original message 
From: Randall Burt
Date:06/04/2014 12:35 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [solum] reviews for the new API

Sorry to poke my head in, but doesn't that beg the question of why you'd want 
to expose some third party DSL in the first place? If it's an advanced feature, 
I wonder why it would even be considered before the 90% solution works, much 
less take a dependency on another non-integrated service. IMO, the value of 
Solum initially lies in addressing that 90% CI/CD/ALM solution well and in a 
way that doesn't require me to deploy and maintain an additional service that's 
not part of integrated OpenStack. At best, it would seem prudent to me to 
simply allow for a basic way for me to insert my own CI/CD steps and be 
prescriptive about how those custom steps participate in Solum's workflow 
rather than locking me into some specific workflow DSL. If I then choose to use 
Mistral to do some custom work, I can, but Solum shouldn't care what I use.

If Solum isn't fairly opinionated (at least early on) about the basic 
CI/CD-ALM lifecycle and the steps therein, I would question its utility if its 
a wrapper over my existing Jenkins jobs and a workflow service.

On Jun 4, 2014, at 1:10 PM, Julien Vey 
vey.jul...@gmail.commailto:vey.jul...@gmail.com wrote:

 Murali, Roshan.

 I think there is a misunderstanding. By default, the user wouldn't see any 
 workflow dsl. If the user does not specify anything, we would use a 
 pre-defined mistral workbook defined by Solum, as Adrian described

 If the user needs more, mistral is not so complicated. Have a look at this 
 example Angus has done 
 https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
 We can define anything as solum actions, and the users would just have to 
 call one of these actions. Solum takes care of the implementation. If we have 
 comments about the DSL, Mistral's team is willing to help.

 Our end-users will be developers, and a lot of them will need a custom 
 workflow at some point. For instance, if Jenkins has so many plugins, it's 
 because there are as many custom build workflows, specific to each company. 
 If we add an abstraction on top of mistral or any other workflow engine, we 
 will lock developers into our own decisions, and any additional feature would 
 require new development in Solum, whereas by exposing (when users want it) 
 mistral, we would allow for any customization.

 Julien




 2014-06-04 19:50 GMT+02:00 Roshan Agrawal 
 roshan.agra...@rackspace.commailto:roshan.agra...@rackspace.com:
 Agreeing with what Murali said below. We should make things really simple for 
 the 99th percentile of users, and not force the complexity needed by the 
 minority of advanced users on the rest of the 99th percentile.



 Mistral is a generic workflow DSL; we do not need to expose all that 
 complexity to the Solum user that wants to customize the pipeline. 
 Non-advanced users will have a need to customize the pipeline. In this 
 case, the user is not necessarily the developer persona, but typically an 
 admin/release manager persona.



 Pipeline customization should be doable easily, without having to understand 
 or author a generic workflow DSL.



 For the really advanced user who has a finer-grained need to tweak 
 the mistral workflow DSL (I am not sure if there will be a use case for this 
 if we have the right customizations exposed via the pipeline API), we should 
 have the option for the user to tweak the mistral DSL directly, but we 
 should not expect 99.9% (or more) of the users to deal with a generic 
 workflow.





 From: Murali Allada [mailto:murali.all...@rackspace.com]
 Sent: Wednesday, June 04, 2014 12:09 PM


 To: OpenStack Development Mailing List (not for usage questions)
 

[openstack-dev] [Barbican] KMIP support

2014-06-04 Thread Benjamin, Bruce P.
  All,

  I'm researching a bunch of HSM applications and I'm struggling to find much 
 info. I was wondering about the progress of KMIP support in Barbican? Is this 
  waiting on open Python KMIP support?



Just for a bit more clarification, APL is supporting a KMIP implementation as a 
backend to Barbican via the following activities:

-  Update to Barbican, called 'Create Secret Store Resource' 
https://wiki.openstack.org/wiki/Barbican/Blueprints/create-secret-store, allows 
many different secret store backend implementations to be supported, including 
PKCS#11, PKCS#12, and KMIP

-  Backend plug-in interface to Barbican called 'Implement KMIP Secret 
Store' that specifically supports a KMIP server (HSM) as a backend.  
https://github.com/cloudkeep/barbican/wiki/Blueprint:-Implement-KMIP-Secret-Store

-  New Python library, which will probably be called PyKMIP, will 
support an open source implementation of KMIP client and server (not yet posted 
- will probably reside in github.)

Bruce
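
For readers who have not looked at the blueprints yet, below is a very rough
sketch of the shape a KMIP-backed secret store plug-in could take. The class
and method names are assumptions based on the blueprint's direction, not
Barbican's actual interface, and the client object stands in for the
not-yet-published PyKMIP library:

# Hypothetical sketch only: delegate secret storage to a KMIP server behind
# a generic secret-store plug-in interface.
class KMIPSecretStore(object):

    def __init__(self, kmip_client):
        # kmip_client is assumed to expose register/get/destroy operations
        # roughly matching KMIP; PyKMIP would eventually fill this role.
        self.client = kmip_client

    def store_secret(self, secret_bytes, secret_type):
        # Register the secret with the KMIP server and keep only the
        # returned unique identifier on the Barbican side.
        uid = self.client.register(secret_bytes, secret_type)
        return {'kmip_uid': uid}

    def get_secret(self, secret_metadata):
        return self.client.get(secret_metadata['kmip_uid'])

    def delete_secret(self, secret_metadata):
        self.client.destroy(secret_metadata['kmip_uid'])
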
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-04 Thread Kyle Mestery
I think it's even cheaper than that. Try calling the hotel to get the
better rate, I think Carl was able to successfully acquire the room at
the cheaper rate (something like $115 a night or so).

On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
eperd...@cisco.com wrote:
 I tried to book online and it seems that the pre-payment is non-refundable:

 Hyatt.Com Rate - Rate Rules: Full prepayment required, non-refundable, no
 date changes.


 The price is $149 USD per night. Is that what you have blocked?

 Edgar

 On 6/4/14, 2:47 PM, Kyle Mestery mest...@noironetworks.com wrote:

Hi all:

I was curious if people are having issues booking the room from the
block I have setup. I received word from the hotel that only one (1!)
person has booked yet. Given the mid-cycle is approaching in a month,
I wanted to make sure that people are making plans for travel. Are
people booking in places other than the one I had setup as reserved?
If so, I'll remove the room block. Keep in mind the hotel I had a
block reserved at is very convenient in that it's literally walking
distance to the mid-cycle location at the Bloomington, MN Cisco
offices.

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Missing button to send Ctrl+Alt+Del for SPICE Console

2014-06-04 Thread Martinx - ジェームズ
Hello Stackers!

I'm using SPICE consoles now, but there is no button to send Ctrl + Alt +
Del to a Windows instance, so it becomes very hard to log in to those
guests...

Can you guys enable it in Horizon?!

Tks!
Thiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-04 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
 On 04/06/14 15:58, Vijendar Komalla wrote:
  Hi Devs,
  I have submitted a WIP review (https://review.openstack.org/#/c/97900/)
  for Heat parameters encryption blueprint
  https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
  This quick and dirty implementation encrypts all the parameters on
  Stack 'store' and decrypts on Stack 'load'.
  Following are a couple of improvements I am thinking about:
  1. Instead of encrypting individual parameters, on Stack 'store' encrypt
  all the parameters together as a dictionary  [something like
  crypt.encrypt(json.dumps(param_dictionary))]
 
 Yeah, definitely don't encrypt them individually.
 
  2. Just encrypt parameters that were marked as 'hidden', instead of
  encrypting all parameters
 
  I would like to hear your feedback/suggestions.
 
 Just as a heads-up, we will soon need to store the properties of 
 resources too, at which point parameters become the least of our 
 problems. (In fact, in theory we wouldn't even need to store 
 parameters... and probably by the time convergence is completely 
 implemented, we won't.) Which is to say that there's almost certainly no 
 point in discriminating between hidden and non-hidden parameters.
 
 I'll refrain from commenting on whether the extra security this affords 
 is worth the giant pain it causes in debugging, except to say that IMO 
 there should be a config option to disable the feature (and if it's 
 enabled by default, it should probably be disabled by default in e.g. 
 devstack).

Storing secrets seems like a job for Barbican. That handles the giant
pain problem because in devstack you can just tell Barbican to have an
open read policy.

I'd rather see good hooks for Barbican than blanket encryption. I've
worked with a few things like this and they are despised and worked
around universally because of the reason Zane has expressed concern about:
debugging gets ridiculous.

How about this:

parameters:
  secrets:
type: sensitive
resources:
  sensitive_deployment:
type: OS::Heat::StructuredDeployment
properties:
  config: weverConfig
  server: myserver
  input_values:
secret_handle: { get_param: secrets }

The sensitive type would, on the client side, store the value in Barbican,
never in Heat. Instead it would just pass in a handle which the user
can then build policy around. Obviously this implies the user would set
up Barbican's in-instance tools to access the secrets value. But the
idea is, let Heat worry about being high performing and introspectable,
and then let Barbican worry about sensitive things.
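
For comparison, the encrypt-on-store idea from the WIP review boils down to
something like the sketch below. Fernet from the cryptography package is used
here purely as an illustration of a symmetric primitive; the actual review may
well use different helpers and key handling:

import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in Heat this would come from configuration
fernet = Fernet(key)

def encrypt_params(params):
    # Serialize and encrypt the whole parameter dict as one blob,
    # rather than encrypting parameters individually.
    return fernet.encrypt(json.dumps(params).encode('utf-8'))

def decrypt_params(blob):
    return json.loads(fernet.decrypt(blob).decode('utf-8'))

stored = encrypt_params({'db_password': 's3cret', 'flavor': 'm1.small'})
assert decrypt_params(stored)['db_password'] == 's3cret'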

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-04 Thread Randall Burt
On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
 On 04/06/14 15:58, Vijendar Komalla wrote:
 Hi Devs,
 I have submitted a WIP review (https://review.openstack.org/#/c/97900/)
 for Heat parameters encryption blueprint
 https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
 This quick and dirty implementation encrypts all the parameters on
 Stack 'store' and decrypts on Stack 'load'.
 Following are a couple of improvements I am thinking about:
 1. Instead of encrypting individual parameters, on Stack 'store' encrypt
 all the parameters together as a dictionary  [something like
 crypt.encrypt(json.dumps(param_dictionary))]
 
 Yeah, definitely don't encrypt them individually.
 
 2. Just encrypt parameters that were marked as 'hidden', instead of
 encrypting all parameters
 
 I would like to hear your feedback/suggestions.
 
 Just as a heads-up, we will soon need to store the properties of 
 resources too, at which point parameters become the least of our 
 problems. (In fact, in theory we wouldn't even need to store 
 parameters... and probably by the time convergence is completely 
 implemented, we won't.) Which is to say that there's almost certainly no 
 point in discriminating between hidden and non-hidden parameters.
 
 I'll refrain from commenting on whether the extra security this affords 
 is worth the giant pain it causes in debugging, except to say that IMO 
 there should be a config option to disable the feature (and if it's 
 enabled by default, it should probably be disabled by default in e.g. 
 devstack).
 
 Storing secrets seems like a job for Barbican. That handles the giant
 pain problem because in devstack you can just tell Barbican to have an
 open read policy.
 
 I'd rather see good hooks for Barbican than blanket encryption. I've
 worked with a few things like this and they are despised and worked
 around universally because of the reason Zane has expressed concern about:
 debugging gets ridiculous.
 
 How about this:
 
 parameters:
  secrets:
type: sensitive
 resources:
  sensitive_deployment:
type: OS::Heat::StructuredDeployment
properties:
  config: weverConfig
  server: myserver
  input_values:
secret_handle: { get_param: secrets }
 
 The sensitive type would, on the client side, store the value in Barbican,
 never in Heat. Instead it would just pass in a handle which the user
 can then build policy around. Obviously this implies the user would set
 up Barbican's in-instance tools to access the secrets value. But the
 idea is, let Heat worry about being high performing and introspectable,
 and then let Barbican worry about sensitive things.

While certainly ideal, it doesn't solve the current problem since we can't yet 
guarantee Barbican will even be available in a given release of OpenStack. In 
the meantime, Heat continues to store sensitive user information unencrypted in 
its database. Once Barbican is integrated, I'd be all for changing this 
implementation, but until then, we do need an interim solution. Sure, debugging 
is a pain and as developers we can certainly grumble, but leaking sensitive 
user information because we were too fussed to protect data at rest seems worse 
IMO. Additionally, the solution as described sounds like we're imposing a 
pretty awkward process on a user to save ourselves from having to decrypt some 
data in the cases where we can't access the stack information directly from the 
API or via debugging running Heat code (where the data isn't encrypted anymore).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Consistency around proposed server instance tagging API

2014-06-04 Thread Christopher Yeoh
On Thu, Jun 5, 2014 at 4:14 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 I'm looking to get consensus on a proposed API for server instance tagging
 in Nova:

 https://review.openstack.org/#/c/91444/

 In the proposal, the REST API for the proposed server instance tagging
 looks like so:

 Remove a tag on a server:

 DELETE /v2/{project_id}/servers/{server_id}/tags/{tag}

 It is this last API call that has drawn the attention of John Garbutt
 (cc'd). In Glance v2 API, if you attempt to delete a tag that does not
 exist, then a 404 Not Found is returned. In my proposal, if you attempt to
 delete a tag that does not exist for the server, a 204 No Content is
 returned.


I think attempting to delete a tag that doesn't exist should return a 404.
The user can always ignore the error if they know there's a
chance that the tag they want to get rid of doesn't exist. But for those
that believe it should exist, an error is more useful to them, as
it may be an indicator that something is wrong with their logic.

Chris
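
A tiny sketch of the behaviour argued for above, i.e. DELETE of a missing tag
is an error the client can simply ignore if it wants delete-if-present
semantics (hypothetical handler, not Nova's actual controller code):

server_tags = {'web01': {'prod', 'ssd'}}

def delete_tag(server_id, tag):
    tags = server_tags.get(server_id, set())
    if tag not in tags:
        return 404   # unknown server or tag: Not Found
    tags.discard(tag)
    return 204       # deleted: No Content

assert delete_tag('web01', 'ssd') == 204
assert delete_tag('web01', 'ssd') == 404   # second delete signals a logic error
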
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Consistency around proposed server instance tagging API

2014-06-04 Thread Davanum Srinivas
+1 to 404 for a DELETE if the tag does not exist.

There's a good discussion in this paragraph from RESTful Web APIs
book - 
http://books.google.com/books?id=wWnGQBAJlpg=PA36ots=Ff9jCI293bdq=restful%20http%20delete%20404%20sam%20rubypg=PA36#v=onepageq=restful%20http%20delete%20404%20sam%20rubyf=false

-- dims

On Wed, Jun 4, 2014 at 8:21 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Thu, Jun 5, 2014 at 4:14 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 I'm looking to get consensus on a proposed API for server instance tagging
 in Nova:

 https://review.openstack.org/#/c/91444/

 In the proposal, the REST API for the proposed server instance tagging
 looks like so:

 Remove a tag on a server:

 DELETE /v2/{project_id}/servers/{server_id}/tags/{tag}

 It is this last API call that has drawn the attention of John Garbutt
 (cc'd). In Glance v2 API, if you attempt to delete a tag that does not
 exist, then a 404 Not Found is returned. In my proposal, if you attempt to
 delete a tag that does not exist for the server, a 204 No Content is
 returned.


 I think attempting to delete a tag that doesn't exist should return a 404.
 The user can always ignore the error if they know there's a
 chance that the tag they want to get rid of doesn't exist. But for those
  that believe it should exist, an error is more useful to them, as
 it may be an indicator that something is wrong with their logic.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

