Re: [openstack-dev] Support for multiple sort keys and sort directions in REST GET APIs

2014-04-06 Thread Duncan Thomas
Steven

Mike is right, it is mostly (possibly only?) extensions that do double
lookups. Your plan looks sensible, and definitely useful. I guess I'll
see if I can actually break it once the review is up :-) I mostly
wanted to give a heads-up - there are people who are way better at
reviewing this than me.



On 3 April 2014 19:15, Mike Perez thin...@gmail.com wrote:
 Duncan, I think the point you raise could happen even without this change. In
 the example of listing volumes, you would first query for the list with some
 multi-key sort. API extensions that add additional response keys will then do
 another lookup on that resource for the appropriate column they are
 retrieving. Some extensions unfortunately still do this, but quite a few were
 fixed in Havana to use a cache instead of doing these wasteful lookups.

 Overall, Steven, I think this change is useful, especially given one of the
 Horizon sessions on filtering/sorting I heard in Hong Kong.

 --
 Mike Perez

 On 11:18 Thu 03 Apr , Duncan Thomas wrote:
 Some of the cinder APIs do weird database joins, double lookups and the
 like, so making every field sortable might have a serious database
 performance impact and open up a DoS vector. This will need more
 investigation to be sure.

 On 2 April 2014 19:42, Steven Kaufer kau...@us.ibm.com wrote:
  I have proposed blueprints in both nova and cinder for supporting multiple
  sort keys and sort directions for the GET APIs (servers and volumes).  I am
  trying to get feedback from other projects in order to have a more uniform
  API across services.
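As a rough sketch of what the multiple-sort-keys proposal could look like server-side, the snippet below pairs repeated sort_key/sort_dir query parameters and validates them against a whitelist (which also speaks to the concern, later in this thread, about making every field sortable). All names here are illustrative assumptions, not the actual Nova/Cinder implementation.

```python
# Illustrative sketch only: ALLOWED_SORT_KEYS and parse_sort_params
# are assumptions, not actual Nova/Cinder code.

ALLOWED_SORT_KEYS = {"created_at", "display_name", "status"}

def parse_sort_params(sort_keys, sort_dirs, default_dir="desc"):
    """Pair each sort key with a direction, validating both."""
    pairs = []
    for i, key in enumerate(sort_keys):
        if key not in ALLOWED_SORT_KEYS:
            raise ValueError("Unsortable key: %s" % key)
        direction = sort_dirs[i] if i < len(sort_dirs) else default_dir
        if direction not in ("asc", "desc"):
            raise ValueError("Invalid sort direction: %s" % direction)
        pairs.append((key, direction))
    return pairs

def multi_key_sort(rows, pairs):
    """Apply the (key, direction) pairs; later keys break ties.

    Python's sort is stable, so sorting by the keys in reverse
    order yields the desired precedence.
    """
    for key, direction in reversed(pairs):
        rows = sorted(rows, key=lambda r: r[key],
                      reverse=(direction == "desc"))
    return rows
```

In a real driver the validated pairs would be translated into SQL ORDER BY clauses rather than sorted in Python; the in-memory version just illustrates the pairing and precedence rules.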

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas



Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-06 Thread Duncan Thomas
Mike

Glance metadata gets used, among other things, for billing tags that we
would like to stay attached to a volume as far as possible. Windows images
use this too - which is why cinder copies all of the glance metadata in the
first place, rather than just a bootable flag.

Apparently protected properties (glance metadata items that are
immutable once set) won't cause a problem here since we will be
setting them on upload, which is allowed.
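A minimal sketch of the metadata carry-over being described, assuming hypothetical key names rather than Cinder's actual volume_glance_metadata schema:

```python
# Hedged sketch: key names below are assumptions for illustration,
# not Cinder's actual schema.

DERIVED_KEYS = {"container_format", "disk_format", "size", "checksum"}

def volume_metadata_from_image(image_meta):
    """Copy image properties (e.g. billing tags) onto the volume,
    dropping fields that describe the image payload itself and
    would have to be regenerated on a later upload."""
    meta = dict(image_meta.get("properties", {}))
    for key in ("min_ram", "min_disk"):
        if key in image_meta:
            meta[key] = image_meta[key]
    return {k: v for k, v in meta.items() if k not in DERIVED_KEYS}
```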


On 3 April 2014 18:19, Mike Perez thin...@gmail.com wrote:
 On 18:37 Thu 03 Apr , Lingxian Kong wrote:
 Thanks Duncan for your answer.

 I am very interested in making a contribution towards this effort, but
 what should I do next? Wait for the blueprint to be approved? Or gather
 others' opinions before we put more effort into achieving this? I just
 want to make sure that we handle other people's use cases and not just
 our own.

 What use case is that exactly? I mentioned earlier that the original
 purpose was knowing whether something was bootable. I'm curious how else
 this is being used.

 --
 Mike Perez




-- 
Duncan Thomas



Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-06 Thread Duncan Thomas
My advice is two-fold:
 - No need to wait for the blueprint to be approved before submitting
a review - put a review up and let people see the details, then
respond to the discussion as necessary

 - Drop into the Wednesday (16:00 UTC) IRC meeting for Cinder - most,
if not all, of the core team are usually on and we can answer any
specific questions then. The agenda is on the wiki if you want to add
to it. All are welcome.


On 3 April 2014 11:37, Lingxian Kong anlin.k...@gmail.com wrote:
 Thanks Duncan for your answer.

 I am very interested in making a contribution towards this effort, but
 what should I do next? Wait for the blueprint to be approved? Or gather
 others' opinions before we put more effort into achieving this? I just
 want to make sure that we handle other people's use cases and not just
 our own.




 2014-04-03 18:12 GMT+08:00 Duncan Thomas duncan.tho...@gmail.com:

 On 3 April 2014 08:28, 王宏 w.wangho...@gmail.com wrote:
  I agree. Actually, I already have a BP on it:
  https://blueprints.launchpad.net/cinder/+spec/restore-image.
  I am happy for any suggestion.

 Needs a little thought, since container_format and some other fields
 will need to be regenerated (e.g. the source can be a QCOW image, but
 we always upload raw), but as long as these are dealt with it seems
 like a useful feature.
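The field regeneration mentioned above could look roughly like the sketch below; the field names follow glance loosely, and the exact choices are assumptions for illustration:

```python
# Sketch of regenerating image fields when restoring a volume back to
# an image; the specific field handling here is an assumption.

def restored_image_meta(original_meta):
    """Build metadata for re-uploading a volume as an image."""
    meta = dict(original_meta)
    # Cinder uploads raw data regardless of the source image format,
    # so the format fields are rewritten rather than copied.
    meta["disk_format"] = "raw"
    meta["container_format"] = "bare"
    # Size and checksum described the old payload; the image service
    # will recompute them for the new upload.
    for stale in ("size", "checksum"):
        meta.pop(stale, None)
    return meta
```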





 --
 ---
 Lingxian Kong
 Huawei Technologies Co.,LTD.
 IT Product Line CloudOS PDU
 China, Xi'an
 Mobile: +86-18602962792
 Email: konglingx...@huawei.com; anlin.k...@gmail.com





-- 
Duncan Thomas



Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-06 Thread Nandavar, Divakar Padiyar
 Well, it seems to me that the problem is the above blueprint and the code it 
 introduced. This is an anti-feature IMO, and probably the best solution 
 would be to remove the above code and go back to having a single  
 nova-compute managing a single vCenter cluster, not multiple ones.

The problem is not introduced by managing multiple clusters from a single
nova-compute proxy node. Internally, this proxy driver still presents a
compute node for each of the clusters it manages. What we need to think
about is the applicability of the live-migration use case when a cluster is
modelled as a compute node. Since the cluster is modelled as a compute node,
it is assumed that the typical live-move use case is taken care of by the
underlying cluster itself. With this, there are other use cases which are
no-ops today, like host maintenance mode, live move, setting instance
affinity, etc. To resolve this, I was thinking of a way to expose operations
on individual ESX hosts (putting a host in maintenance mode, live move,
instance affinity, etc.) by introducing a parent-child compute node concept.
Scheduling can be restricted to the parent compute node, while child compute
nodes provide more drill-down on the compute and enable additional compute
operations. Any thoughts on this?
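The parent-child split being proposed could be modelled as in the toy sketch below. Every class and helper here is hypothetical; nothing like this exists in Nova as described.

```python
# Toy model of the parent-child compute node idea: the scheduler sees
# only parent (cluster-level) nodes, while host-level operations such
# as maintenance mode drill down through the children.

class ComputeNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def schedulable_nodes(nodes):
    """Scheduling is restricted to parent (cluster-level) nodes."""
    return [n for n in nodes if n.parent is None]

def hosts_for_maintenance(cluster):
    """Host-level operations drill down through the children."""
    return [child.name for child in cluster.children]
```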

Thanks,
Divakar


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Sunday, April 06, 2014 2:02 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute
Importance: High

On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
 
 
 
 2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:
 On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
  Thanks Jay and Chris for the comments!
 
  @Jay Pipes, I think that we still need to enable one nova
 compute
  live migration as one nova compute can manage multiple
 clusters and
  VMs can be migrated between those clusters managed by one
 nova
  compute.
 
 
 Why, though? That is what I am asking... seems to me like this
 is an
 anti-feature. What benefit does the user get from moving an
 instance
 from one VCenter cluster to another VCenter cluster if the two
 clusters
 are on the same physical machine?
 @Jay Pipes, for VMWare, one physical machine (ESX server) can only
 belong to one VCenter cluster, so we may have the following scenario:

 DC
  |
  |---Cluster1
  |      |
  |      |---host1
  |
  |---Cluster2
         |
         |---host2

 Then when using VCDriver, one nova compute can manage both Cluster1 and
 Cluster2, but this means I cannot migrate a VM from host2 to host1 ;-(
 
 
 The bp was introduced by
 https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service

Well, it seems to me that the problem is the above blueprint and the code it 
introduced. This is an anti-feature IMO, and probably the best solution would 
be to remove the above code and go back to having a single nova-compute 
managing a single vCenter cluster, not multiple ones.

-jay





Re: [openstack-dev] [Openstack] [NOVA] Missing network info in nova list

2014-04-06 Thread Sławek Kapłoński
Hello,

There were no errors or TRACEs in the nova logs; everything looks OK
there. But I probably found the reason for, and a solution to, the
problem. It was probably related to bug
https://bugs.launchpad.net/nova/+bug/1254320. Since applying this patch on
my compute nodes, no new instance has had missing network information in
nova list or nova show.
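The failure mode here amounts to a stale cache entry that needs repopulating from the ports backend. As a rough illustration only (the cache layout and the query_ports callable are stand-ins, not Nova's actual instance_info_caches handling):

```python
# Illustrative sketch: repopulate an instance's cached network info by
# re-querying the ports backend. Names are assumptions, not Nova code.

def refresh_network_cache(cache, instance_id, query_ports):
    """Overwrite the cached entry with freshly queried port info."""
    ports = query_ports(instance_id)
    cache[instance_id] = [{"port_id": p["id"], "ip": p["ip"]}
                          for p in ports]
    return cache[instance_id]
```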

-- 
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl


Dnia Wed, 2 Apr 2014 19:14:30 -0700
Aaron Rosen aaronoro...@gmail.com napisał:

 Hi Slawek,
 
 Interesting, I haven't seen this issue of network info not showing up
 in nova list while the instance is in the ACTIVE state. Could you check
 the nova logs and see if there are any TRACEs there?  If you're
 using icehouse you should be able to run neutron port-update on the
 port that maps to the instance, and doing that will send a
 notification to nova to update its cache for the instance.
 
 Best,
 
 Aaron
 
 
 On Tue, Apr 1, 2014 at 12:10 AM, Sławek Kapłoński
 sla...@kaplonski.pl wrote:
 
  Hello,
 
  Maybe the problem is not missing data in nova's database, because when I
  run:
  nova --debug list
  I see that it asks neutron about ports, and the missing IP is correctly
  sent from neutron.
  Also, when I run:
  nova interface-list instance_id
  there is no problem and the IP is displayed. But this IP is still missing
  from the list of instances and when displaying instance details (also in
  horizon).
 
  --
  Best regards
  Sławek Kapłoński
 
  Dnia poniedziałek, 31 marca 2014 18:17:18 Sławek Kapłoński pisze:
   Hello,
  
   I have an openstack installation with neutron. When I ran a test
   creating many instances in one request (using --num-instances), all
   was OK, but one instance (out of 80 created) had no IP address in
   nova list or nova show. I found that a value is missing from the
   network info in the nova database, in the instance info cache
   table. Everything else is working fine: the port with this IP is
   assigned to the instance in neutron, the binding is OK, the
   instance has this IP configured and I can ping it. Maybe someone
   knows why this information is missing from the nova database and
   how to refresh it?
 
 



Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-06 Thread Jay Lau
Hi Divakar,

Can I say that bare metal provisioning is now using a kind of parent-child
compute mode? I was also thinking that we could use host:node to identify a
kind of parent-child or hierarchical compute. So could you please explain
how your parent-child compute node concept differs from bare metal
provisioning?

Thanks!


2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar 
divakar.padiyar-nanda...@hp.com:

  Well, it seems to me that the problem is the above blueprint and the
 code it introduced. This is an anti-feature IMO, and probably the best
 solution would be to remove the above code and go back to having a single
  nova-compute managing a single vCenter cluster, not multiple ones.

 The problem is not introduced by managing multiple clusters from a single
 nova-compute proxy node. Internally, this proxy driver still presents a
 compute node for each of the clusters it manages. What we need to think
 about is the applicability of the live-migration use case when a cluster
 is modelled as a compute node. Since the cluster is modelled as a compute
 node, it is assumed that the typical live-move use case is taken care of
 by the underlying cluster itself. With this, there are other use cases
 which are no-ops today, like host maintenance mode, live move, setting
 instance affinity, etc. To resolve this, I was thinking of a way to
 expose operations on individual ESX hosts (putting a host in maintenance
 mode, live move, instance affinity, etc.) by introducing a parent-child
 compute node concept. Scheduling can be restricted to the parent compute
 node, while child compute nodes provide more drill-down on the compute
 and enable additional compute operations. Any thoughts on this?

 Thanks,
 Divakar


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Sunday, April 06, 2014 2:02 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 Importance: High

 On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
 
 
 
  2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:
  On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
   Thanks Jay and Chris for the comments!
  
   @Jay Pipes, I think that we still need to enable one nova
  compute
   live migration as one nova compute can manage multiple
  clusters and
   VMs can be migrated between those clusters managed by one
  nova
   compute.
 
 
  Why, though? That is what I am asking... seems to me like this
  is an
  anti-feature. What benefit does the user get from moving an
  instance
  from one VCenter cluster to another VCenter cluster if the two
  clusters
  are on the same physical machine?
  @Jay Pipes, for VMWare, one physical machine (ESX server) can only
  belong to one VCenter cluster, so we may have the following scenario:

  DC
   |
   |---Cluster1
   |      |
   |      |---host1
   |
   |---Cluster2
          |
          |---host2

  Then when using VCDriver, one nova compute can manage both Cluster1 and
  Cluster2, but this means I cannot migrate a VM from host2 to host1 ;-(
 
 
  The bp was introduced by
  https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service

 Well, it seems to me that the problem is the above blueprint and the code
 it introduced. This is an anti-feature IMO, and probably the best solution
 would be to remove the above code and go back to having a single
 nova-compute managing a single vCenter cluster, not multiple ones.

 -jay







-- 
Thanks,

Jay


[openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-06 Thread Samuel Bercovici
Per the last LBaaS meeting.


1.   Please find a list of use cases.
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing


a)  Please review and see if you have additional ones for the project-user

b)  We can then choose 2-3 use cases to play around with to see how the CLI,
API, etc. would look


2.   Please find a document to place screen captures of web UI. I took the
liberty of placing a few links showing ELB.
https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing


Regards,
-Sam.






Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-06 Thread Christopher Yeoh
On Sun, Apr 6, 2014 at 10:06 AM, Hopper, Justin justin.hop...@hp.com wrote:

 Russell,

 At this point the guard that Nova needs to provide around the instance
 does not need to be complex.  It would even suffice to keep those
 instances hidden from operations such as nova list when invoked
 directly by the user.


Are you looking for something to prevent accidental manipulation of an
instance created by Trove, or intentional changes as well? Whilst doing some
filtering in nova list is simple on the surface, we don't try to keep
server uuids secret in the API, so it's likely that sort of information will
leak through other parts of the API, say through volume or networking
interfaces. Having to enforce another level of permissions throughout the
API would be a considerable change. It would also introduce inconsistencies
into the information returned by Nova - e.g. does quota/usage information
returned to the user include the server that Trove created, or is that meant
to be adjusted as well?

If you need a high level of support from the Nova API to hide servers, and
if it's possible, as Russell suggests, to get what you want by building on
top of the Nova API using additional identities, then I think that would be
the way to go. If you're just looking for a simple way to offer Trove
clients a filtered list of servers, then perhaps Trove could offer a server
list call which proxies to Nova and filters out the Trove-specific servers,
since Trove knows which ones it created.
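The suggested Trove-side proxy could be as simple as the sketch below. Here nova_list stands in for a call to the Nova API and trove_instance_ids for Trove's own record of the instances it created; both are assumptions for illustration.

```python
# Sketch of a Trove-side filtered server list: list servers via the
# Nova API, then keep only the ones Trove itself created.

def trove_server_list(nova_list, trove_instance_ids):
    """Return only the servers that Trove itself created."""
    return [s for s in nova_list() if s["id"] in trove_instance_ids]
```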

Chris


 Thanks,

 Justin Hopper
 Software Engineer - DBaaS
 irc: juice | gpg: EA238CF3 | twt: @justinhopper




 On 4/5/14, 14:20, Russell Bryant rbry...@redhat.com wrote:

 On 04/04/2014 08:12 PM, Hopper, Justin wrote:
  Greetings,
 
  I am trying to address an issue from certain perspectives and I think
  some support from Nova may be needed.
 
  _Problem_
  Services like Trove run in Nova compute instances.  These services
  try to provide an integrated and stable platform on which the service
  can run in a predictable manner.  Such elements include configuration of
  the service, networking, installed packages, etc.  In today's world,
  when Trove spins up an instance to deploy a database on, it creates that
  instance with the user's credentials.  Thus, to Nova, the user has full
  access to that instance through Nova's API.  This access can be used in
  ways which unintentionally compromise the service.
 
  _Solution_
  A proposal is being formed that would put such Instances in a read-only
  or invisible mode from the perspective of Nova.  That is, the Instance
  can only be managed from the Service from which it was created.  At this
  point, we do not need any granular controls.  A simple lock-down of the
  Nova API for these Instances would suffice.  However, Trove would still
  need to interact with this Instance via Nova API.
 
  The basic requirements for Nova would be:
 
  A way to identify a request originating from a Service vs coming
  directly from an end-user
  A way to Identify which instances are being managed by a Service
  A way to prevent some or all access to the Instance unless the
  Service ID in the request matches that attached to the Instance
 
  Any feedback on this would be appreciated.
 
 The use case makes sense to me.  I'm thinking we should expect an
 identity to be created in Keystone for trove and have trove use that for
 managing all of its instances.
 
 If that is sufficient, trove would need some changes to use its service
 credentials instead of the user credentials.  I don't think any changes
 are needed in Nova.
 
 Is there anything missing to support your use case using that approach?
 
 --
 Russell Bryant
 




Re: [openstack-dev] [Nova][Neutron] API inconsistencies with security groups

2014-04-06 Thread Christopher Yeoh
On Sat, Apr 5, 2014 at 10:17 PM, Joshua Hesketh 
joshua.hesk...@rackspace.com wrote:

 Hi Chris,

 Thanks for your input.


 On 4/5/14 9:56 PM, Christopher Yeoh wrote:

 On Sat, 5 Apr 2014 15:16:33 +1100
 Joshua Hesketh joshua.hesk...@rackspace.com wrote:

  I'm moving a conversation that has begun on a review to this mailing
  list as it is perhaps symptomatic of a larger issue regarding API
  compatibility (specifically between neutron and nova-networking).
  Unfortunately these are areas I don't have much experience with, so
  I'm hoping to gain some clarity here.

  There is a bug in nova where launching an instance with a given
  security group is case-insensitive for nova-networks but
  case-sensitive for neutron. This highlights inconsistencies, but I
  also think this is a legitimate bug[0]. Specifically, the 'nova boot'
  command accepts the incorrectly cased security group, but the
  instance enters an error state as it has been unable to boot.
  There is an inherent mistake here: the initial check approves the
  security group name, but when it comes time to assign the security
  group (at the scheduler level) it fails.

 I think this should be fixed but then the nova CLI behaves
 differently with different tasks. For example, `nova
 secgroup-add-rule` is case sensitive. So in reality it is unclear if
 security groups should, or should not, be case sensitive. The API
 implies that they should not. The CLI has methods where some are and
 some are not.

 I've addressed the initial bug as a patch to the neutron driver[1]
 and also amended the case-sensitive lookup in the
 python-novaclient[2] but both reviews are being held up by this issue.

 I guess the questions are:
- are people aware of this inconsistency?
- is there some documentation on the inconsistencies?
- is a fix of this nature considered an API compatibility break?
- and what are the expectations (in terms of case-sensitivity)?

  I don't know the history behind making security group names case
  insensitive for nova-network, but without that knowledge it seems a
  little odd to me. The Nova API is in general case sensitive - with the
  exception of when you supply typed values, e.g. True/False,
  Enabled/Disabled.

  If someone thinks there's a good reason for having it case insensitive
  then I'd like to hear what that is. But otherwise, in an ideal world, I
  think they should be case sensitive.

  Working with what we have, however, I think it would also be bad if
  security groups were case sensitive when using the neutron API directly
  but case insensitive when talking to it via Nova. Put this down as one
  of the risks of doing proxying-type work in Nova.

 I think the proposed patches are backwards incompatible API changes.


 I agree that changing python-novaclient[2] is new functionality and
 perhaps more controversial, but it is not directly related to an API
 change. The change I proposed to nova[1] stops the scheduler from getting
 stuck when it tries to launch an instance with an already accepted
 security group.

 Perhaps the fix here should be that the nova API never accepted the
 security
 group to begin with. However, that would be an API change. The change I've
 proposed at the moment stops instances from entering an error state, but it
 doesn't do anything to help with the inconsistencies.


So if Nova can detect earlier in the process that an instance launch is
definitely going to fail because the security group is invalid, then I think
it's OK to return an error to the user earlier rather than return success
and have it fail later on anyway.
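The fail-fast validation being discussed could be sketched as below, with an explicit case-sensitivity switch mirroring the neutron (strict) vs nova-network (insensitive) behaviour. The function and flag names are illustrative, not code from either project.

```python
# Sketch of up-front security group validation, applying the same case
# sensitivity the backend will later use so a boot request fails fast
# instead of erroring at the scheduler.

def validate_secgroup(requested, existing, case_sensitive=True):
    """Return the canonical name of a matching group or raise."""
    if case_sensitive:
        if requested in existing:
            return requested
    else:
        for name in existing:
            if name.lower() == requested.lower():
                return name
    raise ValueError("Security group not found: %s" % requested)
```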

 That's likely true. However, I would appreciate reviews on 77347 with the
 above in mind.



I might be misunderstanding exactly what is going on here, but I'll comment
directly on 77347.

Regards,

Chris


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-06 Thread Alun Champion
How do these use cases relate to availability zones or cells? Is the
assumption that the same private network is available across both? An
application owner could look to protect availability, not just provide
scalability.

On 6 April 2014 07:51, Samuel Bercovici samu...@radware.com wrote:
 Per the last LBaaS meeting.



 1.   Please find a list of use cases.

 https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing



 a)  Please review and see if you have additional ones for the
 project-user

  b)  We can then choose 2-3 use cases to play around with to see how the
  CLI, API, etc. would look



  2.   Please find a document to place screen captures of web UI. I took
  the liberty of placing a few links showing ELB.

 https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing





 Regards,

 -Sam.














Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-06 Thread Russell Bryant
On 04/06/2014 09:02 AM, Christopher Yeoh wrote:
 On Sun, Apr 6, 2014 at 10:06 AM, Hopper, Justin justin.hop...@hp.com wrote:
 
 Russell,
 
 At this point the guard that Nova needs to provide around the instance
 does not need to be complex.  It would even suffice to keep those
 instances hidden from operations such as nova list when invoked
 directly by the user.
 
 
 Are you looking for something to prevent accidental manipulation of an
 instance created by Trove, or intentional changes as well? Whilst doing
 some filtering in nova list is simple on the surface, we don't try to
 keep server uuids secret in the API, so it's likely that sort of
 information will leak through other parts of the API, say through volume
 or networking interfaces. Having to enforce another level of permissions
 throughout the API would be a considerable change. It would also
 introduce inconsistencies into the information returned by Nova - e.g.
 does quota/usage information returned to the user include the server
 that Trove created, or is that meant to be adjusted as well?
 
 If you need a high level of support from the Nova API to hide servers,
 and if it's possible, as Russell suggests, to get what you want by
 building on top of the Nova API using additional identities, then I
 think that would be the way to go. If you're just looking for a simple
 way to offer Trove clients a filtered list of servers, then perhaps
 Trove could offer a server list call which proxies to Nova and filters
 out the Trove-specific servers, since Trove knows which ones it created.

Yeah, I would *really* prefer to go the route of having trove own all
instances from the perspective of Nova.  Trove is what is really
managing these instances, and it already has to keep track of what
instances are associated with which user.

It sounds like what you really want is for Trove to own the instances, so I
think we need to get down to very specifically what won't work with that
approach.

For example, is it a billing thing?  As it stands, all notifications for
trove-managed instances will have the user's info in them. Is it that you
don't want to lose that?  If that's the problem, it seems solvable with a
much simpler approach.

-- 
Russell Bryant



Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Tim Bell

Anne,

From my understanding, Trove is due to graduate in the Juno release.

Is documentation for developers, operators and users not one of the criteria
(http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)?


* Documentation / User support
** Project must have end-user docs such as API use, CLI use, Dashboard use
** Project should have installation docs providing install/deployment in an
   integrated manner similar to other OpenStack projects, including
   configuration reference information for all options
** Project should have a proven history of providing user support (on the
   openstack@ mailing list and on Ask OpenStack)



If this is not provided in time for the Juno release on docs.openstack.org,
does that mean that graduation is delayed until the K release?

Tim

On 4 Apr 2014, at 18:56, Anne Gentle anne.gen...@rackspace.com wrote:



4. New incoming doc requests:

Lots of users want Trove documentation.




[openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-06 Thread Steven Dake

Hi folks,

There are two problems we should address regarding the growth and change 
to the HOT specification.


First, our +2/+A process for normal changes doesn't totally make sense
for hot_spec.rst.  We generally have some informal bar for controversial
changes (which changes to hot_spec.rst are generally considered :).  I
would suggest raising the bar on hot_spec.rst to at least what is
required for a heat-core team addition (currently 5 approval votes).
This gives folks plenty of time to review and makes sure the heat core
team is committed to the changes, rather than a very small two-member
subset.  Of course a -2 vote from any heat-core member would terminate
the review as usual.


Second, there is a window where we say we want this sweet new
functionality yet it remains unimplemented.  I suggest we create a
special tag for these intrinsics/sections/features, so folks know they
are unimplemented and NOT officially part of the specification until
that is the case.


We can call this tag something simple like 
*standardization_pending_implementation* for each section which is 
unimplemented.  A review which proposes this semantic is here:

https://review.openstack.org/85610

My goal is not to add to people's review workload, but I really
believe any changes to the HOT specification have a profound impact on
all things Heat, and we should take special care when considering these
changes.


Thoughts or concerns?

Regards,
-steve




Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Anne Gentle
On Sun, Apr 6, 2014 at 12:44 PM, Tim Bell tim.b...@cern.ch wrote:


  Anne,

  From my understanding, Trove is due to graduate in the Juno release.

  Is documentation for developers, operators and users not one of the
 criteria (
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)
 ?

  * Documentation / User support
 ** Project must have end-user docs such as API use, CLI use, Dashboard use
 ** Project should have installation docs providing install/deployment in an
integrated manner similar to other OpenStack projects, including
configuration reference information for all options
 ** Project should have a proven history of providing user support (on the
openstack@ mailing list and on Ask OpenStack)



  If this is not provided in time for the Juno release on
 docs.openstack.org, does that mean that the graduation status is delayed
 until K ?


Hi Tim,
We have trove on the agenda for review at the next TC meeting.
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee

Here's what I know is completed for docs, as well as what is needed to be
done:

Needs:
There is a writer at Tesora working on adding Trove to the install guides. *
API reference info to be added to http://api.openstack.org/api-ref.html through
the api-site/api-ref/ process. *
We still need a section added to the Virtual Machine Image Guide so that
deployers know how to make DBaaS (trove) work for their users:
http://docs.openstack.org/image-guide/content/ *
I don't see an indicator of Dashboard use in the docs. This would go into
the End User Guide or Admin User Guide. *

There is configuration reference doc:
http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html


There is command-line interface reference:
http://docs.openstack.org/cli-reference/content/troveclient_commands.html

There is API documentation:
http://git.openstack.org/cgit/openstack/database-api/tree/openstack-database-api/src/markdown/database-api-v1.md

There is contributor dev doc at:
http://docs.openstack.org/developer/trove/

Thanks for asking, Tim.
Anne

* Indicates a gap and need in the docs.


  Tim

   On 4 Apr 2014, at 18:56, Anne Gentle anne.gen...@rackspace.com wrote:



  4. New incoming doc requests:

  Lots of users want Trove documentation.





Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Steve Gordon
- Original Message -
 
 Anne,
 
 From my understanding, Trove is due to graduate in the Juno release.
 
 Is documentation for developers, operators and users not one of the criteria
 (http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)
 ?
 
 
 * Documentation / User support
 ** Project must have end-user docs such as API use, CLI use, Dashboard use
 ** Project should have installation docs providing install/deployment in an
integrated manner similar to other OpenStack projects, including
configuration reference information for all options
 ** Project should have a proven history of providing user support (on the
openstack@ mailing list and on Ask OpenStack)
 
 
 
 If this is not provided in time for the Juno release on
 docs.openstack.org, does that mean that the graduation status is delayed
 graduation status is delayed until K ?
 
 Tim

There seems to be a bit of a chicken-and-egg problem here, in that 
documentation isn't typically accepted into openstack-manuals until the 
relevant project is officially moved to integrated by the TC. As a result 
there's a limited amount of time to integrate any documentation the project is 
carrying separately into the formal guides.

Speaking specifically of Trove, some of the low-hanging fruit is done:

* Configuration Reference: 
http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html
* Command Line Reference: 
http://docs.openstack.org/cli-reference/content/troveclient_commands.html

But by my reckoning that means we're still missing coverage of the following 
content in the official documentation project:

* trove coverage in the installation guide (Add the X module - similar to 
what we have for Orchestration and Telemetry)
* end user documentation
* API documentation

Thanks,

Steve



Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Tim Bell

Anne,

Thanks... I'm pleased to see that the documentation is in the pipeline, that 
gaps have been raised with the TC, and that this is not being left as an 
optional afterthought.

Is it planned to add something to the installation guides (as for the other 
projects) such as 
http://docs.openstack.org/trunk/install-guide/install/yum/content/ ?

Tim

From: Anne Gentle [mailto:a...@openstack.org]
Sent: 06 April 2014 20:33
To: OpenStack Development Mailing List (not for usage questions)
Cc: Glen Campbell; openstack-d...@lists.openstack.org
Subject: Re: [openstack-dev] Doc for Trove ?



On Sun, Apr 6, 2014 at 12:44 PM, Tim Bell 
tim.b...@cern.ch wrote:

Anne,

From my understanding, Trove is due to graduate in the Juno release.

Is documentation for developers, operators and users not one of the criteria 
(http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)
 ?


* Documentation / User support

** Project must have end-user docs such as API use, CLI use, Dashboard use

** Project should have installation docs providing install/deployment in an

   integrated manner similar to other OpenStack projects, including

   configuration reference information for all options

** Project should have a proven history of providing user support (on the

   openstack@ mailing list and on Ask OpenStack)


If this is not provided in time for the Juno release on 
docs.openstack.org, does that mean that the 
graduation status is delayed until K ?

Hi Tim,
We have trove on the agenda for review at the next TC meeting. 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee

Here's what I know is completed for docs, as well as what is needed to be done:

Needs:
There is a writer at Tesora working on adding Trove to the install guides. *
API reference info to be added to http://api.openstack.org/api-ref.html through 
the api-site/api-ref/ process. *
We still need a section added to the Virtual Machine Image Guide so that 
deployers know how to make DBaaS (trove) work for their users:
http://docs.openstack.org/image-guide/content/ *
I don't see an indicator of Dashboard use in the docs. This would go into the 
End User Guide or Admin User Guide. *

There is configuration reference doc:
http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html

There is command-line interface reference:
http://docs.openstack.org/cli-reference/content/troveclient_commands.html

There is API documentation:
http://git.openstack.org/cgit/openstack/database-api/tree/openstack-database-api/src/markdown/database-api-v1.md

There is contributor dev doc at:
http://docs.openstack.org/developer/trove/

Thanks for asking, Tim.
Anne

* Indicates a gap and need in the docs.


Tim

On 4 Apr 2014, at 18:56, Anne Gentle 
anne.gen...@rackspace.com wrote:




4. New incoming doc requests:

Lots of users want Trove documentation.





Re: [openstack-dev] [Openstack-docs] Doc for Trove ?

2014-04-06 Thread Anne Gentle
On Sun, Apr 6, 2014 at 1:40 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
 
  Anne,
 
  From my understanding, Trove is due to graduate in the Juno release.
 
  Is documentation for developers, operators and users not one of the
 criteria
  (
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements
 )
  ?
 
 
  * Documentation / User support
  ** Project must have end-user docs such as API use, CLI use, Dashboard
 use
  ** Project should have installation docs providing install/deployment in
 an
 integrated manner similar to other OpenStack projects, including
 configuration reference information for all options
  ** Project should have a proven history of providing user support (on the
 openstack@ mailing list and on Ask OpenStack)
 
 
 
  If this is not provided in time for the Juno release on
  docs.openstack.org, does that mean that the
  graduation status is delayed until K ?
 
  Tim

 There seems to be a bit of a chicken and egg problem here, in that
 documentation isn't typically accepted into openstack-manuals until the
 relevant project is officially moved to integrated by the TC.


Yes, we have several issues to work through. A few off the top of my head:

- The OpenStack community needs project teams to scale their doc efforts
towards end users and deployers.
- Trove is the first project being placed in the scrutiny of the new
integration graduation requirements. Heat and Ceilometer were integrated
prior to these requirements being outlined.
- The core docs team has not signed up for non-core projects based on our
mission statement.
- Probably other issues I'm forgetting but please fill them in. :)

It's a tough problem to unknot for sure, we all need to work towards what
we want here and what it's going to take to get what we all want.
Thanks,
Anne


 As a result there's a limited amount of time to integrate any
 documentation the project is carrying separately into the formal guides.

 Specifically speaking of Trove some of the low hanging fruit is done:

 * Configuration Reference:
 http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html
 * Command Line Reference:
 http://docs.openstack.org/cli-reference/content/troveclient_commands.html

 But by my reckoning that means we're still missing coverage of the
 following content in the official documentation project:

 * trove coverage in the installation guide (Add the X module - similar
 to what we have for Orchestration and Telemetry)
 * end user documentation
 * API documentation

 Thanks,

 Steve

 ___
 Openstack-docs mailing list
 openstack-d...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs



Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Anne Gentle
On Sun, Apr 6, 2014 at 1:45 PM, Tim Bell tim.b...@cern.ch wrote:



 Anne,



 Thanks... pleased to see the documentation is in the pipeline, gaps raised
 with the TC and that this is not left as an optional afterthought.



 Is it planned to add something to the installation guides (as for the
 other projects) such as
 http://docs.openstack.org/trunk/install-guide/install/yum/content/ ?




Yes, since the Trove midcycle meetup in February we've had a writer at
Tesora assigned to this task. It's a huge task though since we document
four distros so all the helping hands we can get would be great.
Thanks,
Anne


  Tim



 *From:* Anne Gentle [mailto:a...@openstack.org]
 *Sent:* 06 April 2014 20:33

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Glen Campbell; openstack-d...@lists.openstack.org

 *Subject:* Re: [openstack-dev] Doc for Trove ?







 On Sun, Apr 6, 2014 at 12:44 PM, Tim Bell tim.b...@cern.ch wrote:



 Anne,



 From my understanding, Trove is due to graduate in the Juno release.



 Is documentation for developers, operators and users not one of the
 criteria (
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)
 ?



 * Documentation / User support

 ** Project must have end-user docs such as API use, CLI use, Dashboard use

 ** Project should have installation docs providing install/deployment in an

integrated manner similar to other OpenStack projects, including

configuration reference information for all options

 ** Project should have a proven history of providing user support (on the

openstack@ mailing list and on Ask OpenStack)





 If this is not provided in time for the Juno release on docs.openstack.org,
 does that mean that the graduation status is delayed until K ?



 Hi Tim,

 We have trove on the agenda for review at the next TC meeting.
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee



 Here's what I know is completed for docs, as well as what is needed to be
 done:


 Needs:

 There is a writer at Tesora working on adding Trove to the install guides.
 *

 API reference info to be added to http://api.openstack.org/api-ref.html 
 through
 the api-site/api-ref/ process. *

 We still need a section added to the Virtual Machine Image Guide so that
 deployers know how to make DBaaS (trove) work for their users:

 http://docs.openstack.org/image-guide/content/ *

 I don't see an indicator of Dashboard use in the docs. This would go into
 the End User Guide or Admin User Guide. *



 There is configuration reference doc:


 http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html




 There is command-line interface reference:

 http://docs.openstack.org/cli-reference/content/troveclient_commands.html



 There is API documentation:


 http://git.openstack.org/cgit/openstack/database-api/tree/openstack-database-api/src/markdown/database-api-v1.md



 There is contributor dev doc at:

 http://docs.openstack.org/developer/trove/



 Thanks for asking, Tim.

 Anne



 * Indicates a gap and need in the docs.





 Tim



 On 4 Apr 2014, at 18:56, Anne Gentle anne.gen...@rackspace.com wrote:







 4. New incoming doc requests:



 Lots of users want Trove documentation.








Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Tim Bell

My worry is that many deployers are waiting for programs to reach integrated 
before looking at them in detail. When something is announced as integrated, 
there is an expectation that the TC criteria are met.

When there is no installation documentation or end user CLI or dashboard 
information, it can give a negative experience that delays deployment 
significantly.

Can we find a way for the incubation documentation to be provided to those 
who are interested in testing both the code and the documentation in 
different environments? Like the code, it should naturally follow a 
release-candidate cycle so we can all give input on documentation content as 
well as functionality.

Tim

 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: 06 April 2014 20:40
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Glen Campbell; openstack-d...@lists.openstack.org
 Subject: Re: [openstack-dev] Doc for Trove ?
 
 - Original Message -
 
  Anne,
 
  From my understanding, Trove is due to graduate in the Juno release.
 
  Is documentation for developers, operators and users not one of the
  criteria
  (http://git.openstack.org/cgit/openstack/governance/tree/reference/inc
  ubation-integration-requirements)
  ?
 
 
  * Documentation / User support
  ** Project must have end-user docs such as API use, CLI use, Dashboard
  use
  ** Project should have installation docs providing install/deployment in an
 integrated manner similar to other OpenStack projects, including
 configuration reference information for all options
  ** Project should have a proven history of providing user support (on the
 openstack@ mailing list and on Ask OpenStack)
 
 
 
  If this is not provided in time for the Juno release on
  docs.openstack.org, does that mean that the
  graduation status is delayed until K ?
 
  Tim
 
 There seems to be a bit of a chicken and egg problem here, in that 
 documentation isn't typically accepted into openstack-manuals
 until the relevant project is officially moved to integrated by the TC. As a 
 result there's a limited amount of time to integrate any
 documentation the project is carrying separately into the formal guides.
 
 Specifically speaking of Trove some of the low hanging fruit is done:
 
 * Configuration Reference: 
 http://docs.openstack.org/trunk/config-reference/content/ch_configuring-trove.html
 * Command Line Reference: 
 http://docs.openstack.org/cli-reference/content/troveclient_commands.html
 
 But by my reckoning that means we're still missing coverage of the following 
 content in the official documentation project:
 
 * trove coverage in the installation guide (Add the X module - similar to 
 what we have for Orchestration and Telemetry)
 * end user documentation
 * API documentation
 
 Thanks,
 
 Steve
 


Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-06 Thread Vipul Sabhaya
On Sun, Apr 6, 2014 at 9:36 AM, Russell Bryant rbry...@redhat.com wrote:

 On 04/06/2014 09:02 AM, Christopher Yeoh wrote:
  On Sun, Apr 6, 2014 at 10:06 AM, Hopper, Justin justin.hop...@hp.com
  mailto:justin.hop...@hp.com wrote:
 
  Russell,
 
  At this point the guard that Nova needs to provide around the
 instance
  does not need to be complex.  It would even suffice to keep those
  instances hidden from such operations as ³nova list² when invoked by
  directly by the user.
 
 
  Are you looking for something to prevent accidental manipulation of an
  instance created by Trove or intentional changes as well? Whilst doing
  some filtering in nova list is simple on the surface, we don't try to
  keep server uuids secret in the API, so its likely that sort of
  information will leak through other parts of the API say through volume
  or networking interfaces. Having to enforce another level of permissions
  throughout the API would be a considerable change. Also it would
  introduce inconsistencies into the information returned by Nova - eg
  does quota/usage information returned to the user include the server
  that Trove created or is that meant to be adjusted as well?
 
  If you need a high level of support from the Nova API to hide servers,
  then if its possible, as Russell suggests to get what you want by
  building on top of the Nova API using additional identities then I think
  that would be the way to go. If you're just looking for a simple way to
  offer to Trove clients a filtered list of servers, then perhaps Trove
  could offer a server list call which is a proxy to Nova and filters out
  the servers which are Trove specific since Trove knows which ones it
  created.

 Yeah, I would *really* prefer to go the route of having trove own all
 instances from the perspective of Nova.  Trove is what is really
 managing these instances, and it already has to keep track of what
 instances are associated with which user.

Although this approach would work, there are some manageability issues
with it.  When Trove is managing hundreds of Nova instances, things tend
to break down when looking directly at the Trove tenant through the Nova
API and trying to piece together the associations, what resource failed to
provision, etc.


 It sounds like what you really want is for Trove to own the instances,
 so I think we need to get down to very specifically won't work with that
 approach.

 For example, is it a billing thing?  As it stands, all notifications for
 trove managed instances will have the user's info in them.  Do you not
 want to lose that?  If that's the problem, that seems solvable with a
 much simpler approach.


We have for the most part solved the billing issue, since Trove does
maintain the association and is able to send events on behalf of the correct
user.  We would, however, lose the additional layer of checks that Nova
provides, such as per-project rate limiting and quotas enforced at the Nova
layer.  The Trove tenant would essentially need full access without any
such limits.

Since we'd prefer to keep these checks at the Infrastructure layer intact
for Users that interact with the Trove API, I think the issue goes beyond
just filtering them out from the API.

One idea that we've floated around is possibly introducing a 'shadow'
tenant, that allows Services like Trove to create Nova / Cinder / Neutron
resources on behalf of the actual tenant.  The resources owned by this
shadow tenant would only be visible / manipulated by a higher-level
Service.  This could require some Service token to be provided along with
the original tenant token.

Example: POST /v2/{shadow_tenant_id}/servers
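As a concrete sketch of how such a request might be assembled by a
higher-level service: everything below is hypothetical -- the
``X-Service-Token`` header, the shadow tenant, and the helper function are
illustrations of the floated idea, not an existing Nova API:

```python
def shadow_create_server_request(nova_endpoint, shadow_tenant_id,
                                 user_token, service_token, server_body):
    """Build a (hypothetical) server-create request on behalf of a tenant.

    The original tenant's token identifies who the resource is really for,
    while the service token proves a trusted service (e.g. Trove) is acting.
    """
    return {
        'method': 'POST',
        'url': '%s/v2/%s/servers' % (nova_endpoint, shadow_tenant_id),
        'headers': {
            'X-Auth-Token': user_token,        # the real tenant's token
            'X-Service-Token': service_token,  # the higher-level service's token
            'Content-Type': 'application/json',
        },
        'body': server_body,
    }
```

Rate limiting and quotas could then still be enforced against the original
tenant identified by the user token, while the resources themselves stay
visible only to the higher-level service via the shadow tenant.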


 --
 Russell Bryant



Re: [openstack-dev] [fuel-dev] [Fuel] Issues about OSTF tests

2014-04-06 Thread Mike Scherbakov
Timur,
sorry for missing your question.
We are switching from fuel-dev ML to openstack-dev, and will keep watching
for all Fuel-related questions.

Back to your question, you can use GET request like this:
GET /api/nodes?cluster_id=1

Following is just part of the response:
[{"model": "WDC WD3200BPVT-7", "disk": "sda", "name": "sda", "size":
320072933376}], "cpu": {"real": 1, "total": 8, "spec": [{"model": "Intel(R)
Core(TM) i7-2670QM CPU @ 2.20GHz", "frequency": 2201}, {"model": "Intel(R)
Core(TM) i7-2670QM CPU @ 2.20GHz", "frequency": 2201}, {"model": "Intel(R)
Core(TM) i7-2670QM CPU @ 2.20GHz", "frequency": 2201}, {"model": "Intel(R)
Core(TM) i7-2670QM CPU @ 2.20GHz", "frequency": 2201},

.....
 "memory": {"slots": 2, "total": 8589934592, "maximum_capacity":
17179869184, "devices": [{"frequency": 1333, "type": "DDR3", "size":
4294967296}, {"frequency": 1333, "type": "DDR3", "size": 4294967296}]}},
"pending_deletion": false, "online": true, "progress": 0, "pending_roles":
["controller"], "os_platform": "ubuntu", "id": 2, "manufacturer": "80AD"}]

See REST API refs here:
http://docs.mirantis.com/fuel-dev/develop/api_doc.html
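To the original question about summing RAM across compute nodes: once the
node list above is parsed, the summing itself is only a few lines. This is a
sketch against the response excerpt shown; the key layout ("memory" at the
top level vs. nested under a "meta" key) differs between Nailgun versions, so
treat the field names as assumptions to verify against your deployment:

```python
def total_compute_ram(nodes, role='compute'):
    """Sum reported RAM (bytes) of nodes assigned or pending the given role."""
    total = 0
    for node in nodes:
        roles = node.get('roles', []) + node.get('pending_roles', [])
        if role in roles:
            meta = node.get('meta', node)  # some versions nest hw info in 'meta'
            total += meta.get('memory', {}).get('total', 0)
    return total

# Fetching the list itself is a plain GET against Nailgun, e.g.:
#   import requests
#   nodes = requests.get('http://fuel-master:8000/api/nodes?cluster_id=1').json()
```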

Thanks,



On Mon, Mar 31, 2014 at 6:22 PM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Hi Fuel team,

 I have a questions about the OSTF tests for next release 5.0.

 In this release we plan to include Murano 0.5, which will have significant
 architecture changes and it will require significant changes in OSTF tests
 for Murano.
 For example, we will not support all services, which Murano supports in
 the last stable release (we have moved to another engine).


 About the changes in OSTF tests for Murano:
 1. We will remove all tests for not-supported services
 2. We want to add some additional checks for OSTF tests, which will
 collect information about OpenStack cluster configuration and will skip
 some tests, if we have no required resources.

 And now I have a question: how we can get the information about the
 OpenStack cloud resources, like summary size of RAM on compute nodes? (I
 know that we can use Nailgan API, but some working examples will be very
 useful)

 Thank you! :)

 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc





-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-06 Thread Steve Baker
On 07/04/14 06:23, Steven Dake wrote:
 Hi folks,

 There are two problems we should address regarding the growth and
 change to the HOT specification.

 First our +2/+A process for normal changes doesn't totally make sense
 for hot_spec.rst.  We generally have some informal bar for
 controversial changes (of which changes to hot_spec.rst is generally
 considered:).  I would suggest raising the bar on hot_spec.rst to
 at-least what is required for a heat-core team addition (currently 5
 approval votes).  This gives folks plenty of time to review and make
 sure the heat core team is committed to the changes, rather then a
 very small 2 member subset.  Of course a -2 vote from any heat-core
 would terminate the review as usual.

 Second, There is a window where we say hey we want this sweet new
 functionality yet it remains unimplemented.  I suggest we create
 some special tag for these intrinsics/sections/features, so folks know
 they are unimplemented and NOT officially part of the specification
 until that is the case.

 We can call this tag something simple like
 *standardization_pending_implementation* for each section which is
 unimplemented.  A review which proposes this semantic is here:
 https://review.openstack.org/85610

 My goal is not to add more review work to people's time, but I really
 believe any changes to the HOT specification have a profound impact on
 all things Heat, and we should take special care when considering
 these changes.

 Thoughts or concerns?
How about we just use the existing blueprint approval process for
changes to the HOT spec? The PTL can make the call whether the change
can be approved by the PTL or whether it requires discussion and
consensus first.



[openstack-dev] [Fuel][Neutron] Networking Discussions last week

2014-04-06 Thread Mike Scherbakov
Hi all,
we had a number of discussions last week in Moscow, with participation of
guys from Russia, Ukraine and Poland.
That was a great time!! Thanks everyone who participated.

Special thanks to Przemek for great preparations, including the following:
https://docs.google.com/a/mirantis.com/presentation/d/115vCujjWoQ0cLKgVclV59_y1sLDhn2zwjxEDmLYsTzI/edit#slide=id.p

I've searched over blueprints which require update after meetings:
https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks
https://blueprints.launchpad.net/fuel/+spec/fuel-multiple-l3-agents
https://blueprints.launchpad.net/fuel/+spec/fuel-storage-networks
https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
https://blueprints.launchpad.net/fuel/+spec/advanced-networking

We will need to create one for UI.

Neutron blueprints which are in the interest of large and thus complex
deployments, with the requirements of scalability and high availability:
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
https://blueprints.launchpad.net/neutron/+spec/quantum-multihost

The last one was rejected... might there be another way of achieving the
same use cases? The use case, I think, was explained in great detail here:
https://wiki.openstack.org/wiki/NovaNeutronGapHighlights
Any thoughts on this?

Thanks,
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-06 Thread Steve Baker
On 05/04/14 04:47, Tomas Sedovic wrote:
 Hi All,

 I was wondering if the time has come to document what exactly are we
 doing with tripleo-heat-templates and merge.py[1], figure out what needs
 to happen to move away and raise the necessary blueprints on Heat and
 TripleO side.

 (merge.py is a script we use to build the final TripleO Heat templates
 from smaller chunks)

 There probably isn't an immediate need for us to drop merge.py, but its
 existence either indicates deficiencies within Heat or our unfamiliarity
 with some of Heat's features (possibly both).

 I worry that the longer we stay with merge.py the harder it will be to
 move forward. We're still adding new features and fixing bugs in it (at
 a slow pace but still).

 Below is my understanding of the main marge.py functionality and a rough
 plan of what I think might be a good direction to move to. It is almost
 certainly incomplete -- please do poke holes in this. I'm hoping we'll
 get to a point where everyone's clear on what exactly merge.py does and
 why. We can then document that and raise the appropriate blueprints.


 ## merge.py features ##


 1. Merging parameters and resources

 Any uniquely-named parameters and resources from multiple templates are
 put together into the final template.

 If a resource of the same name is in multiple templates, an error is
 raised. Unless it's of a whitelisted type (nova server, launch
 configuration, etc.) in which case they're all merged into a single
 resource.

 For example: merge.py overcloud-source.yaml swift-source.yaml

 The final template has all the parameters from both. Moreover, these two
 resources will be joined together:

  overcloud-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP


  swift-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Metadata:
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


 The final template will contain:

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


 We use this to keep the templates more manageable (instead of having one
 huge file) and also to be able to pick the components we want: instead
 of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
 uses the VirtualPowerManager driver) or `ironic-vm-source`.
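 As a rough illustration, the merging behaviour described above boils down
 to something like the following. This is a simplified reimplementation for
 discussion, not the actual merge.py code, and the whitelist contents are an
 assumption:

```python
MERGABLE_TYPES = {
    'AWS::AutoScaling::LaunchConfiguration',  # assumed whitelist entries
    'AWS::EC2::Instance',
}

def deep_merge(a, b):
    """Recursively merge mapping b into a copy of mapping a."""
    out = dict(a)
    for key, value in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def merge_templates(templates):
    """Combine parsed templates; only whitelisted duplicate resources merge."""
    merged = {'Parameters': {}, 'Resources': {}}
    for template in templates:
        merged['Parameters'].update(template.get('Parameters', {}))
        for name, resource in template.get('Resources', {}).items():
            existing = merged['Resources'].get(name)
            if existing is None:
                merged['Resources'][name] = resource
            elif (existing.get('Type') == resource.get('Type')
                  and existing.get('Type') in MERGABLE_TYPES):
                merged['Resources'][name] = deep_merge(existing, resource)
            else:
                raise ValueError('conflicting resource: %s' % name)
    return merged
```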



 2. FileInclude

 If you have a pseudo resource with the type of `FileInclude`, we will
 look at the specified Path and SubKey and put the resulting dictionary in:

  overcloud-source.yaml 

   NovaCompute0Config:
 Type: FileInclude
 Path: nova-compute-instance.yaml
 SubKey: Resources.NovaCompute0Config
 Parameters:
   NeutronNetworkType: gre
   NeutronEnableTunnelling: True


  nova-compute-instance.yaml 

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: {Ref: NeutronNetworkType}
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: {Ref: NeutronEnableTunnelling}
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

 The result:

   NovaCompute0Config:
     Type: AWS::AutoScaling::LaunchConfiguration
     Properties:
       InstanceType: '0'
       ImageId: '0'
     Metadata:
       keystone:
         host: {Ref: KeystoneHost}
       neutron:
         host: {Ref: NeutronHost}
         tenant_network_type: gre
         network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
         bridge_mappings: {Ref: NeutronBridgeMappings}
         enable_tunneling: True
         physical_bridge: {Ref: NeutronPhysicalBridge}
         public_interface: {Ref: NeutronPublicInterface}
         service-password:
           Ref: NeutronPassword
       admin-password: {Ref: AdminPassword}

 Note the `NeutronNetworkType` and `NeutronEnableTunnelling` parameter
 substitution.
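 The Parameters substitution above can be sketched in a few lines of
 Python. This is an illustrative model only — the function names are
 mine, not the actual merge.py API — and it takes an already-loaded
 template dictionary rather than reading the Path from disk:

```python
def get_subkey(doc, subkey):
    """Walk a dotted SubKey path such as 'Resources.NovaCompute0Config'."""
    for part in subkey.split('.'):
        doc = doc[part]
    return doc


def substitute_params(node, params):
    """Replace {Ref: name} nodes with a literal when name is in Parameters;
    leave all other Refs untouched."""
    if isinstance(node, dict):
        if list(node) == ['Ref'] and node['Ref'] in params:
            return params[node['Ref']]
        return dict((k, substitute_params(v, params)) for k, v in node.items())
    if isinstance(node, list):
        return [substitute_params(v, params) for v in node]
    return node


def expand_file_include(include, source_doc):
    """Expand a FileInclude pseudo resource against the (already-loaded)
    source template dictionary named by its Path."""
    target = get_subkey(source_doc, include['SubKey'])
    return substitute_params(target, include.get('Parameters', {}))
```

 Running this against the nova-compute-instance.yaml fragment above
 replaces the NeutronNetworkType and NeutronEnableTunnelling Refs with
 'gre' and True while leaving Refs like NeutronHost intact.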

 This is useful when 

Re: [openstack-dev] Doc for Trove ?

2014-04-06 Thread Steve Gordon
- Original Message -
 
 My worry is that many deployers are waiting for programs to reach integrated
 before looking at them in detail. When something is announced as integrated,
 there is an expectation that the TC criteria are met.
 
 When there is no installation documentation or end user CLI or dashboard
 information, it can give a negative experience that delays deployment
 significantly.
 
 Can we find a way that the incubation documentation can be provided for those
 who are interested in testing both the code and the documentation in the
 different environments. Thus, like the code, this should naturally follow a
 release candidate cycle so we can all give input on documentation content as
 well as  functionality.
 
 Tim

In the ideal case it seems to me that the incubating project would be working 
on the documentation in a way that it can easily be dropped into 
openstack-manuals upon integration in the same way as support is dropped into 
Horizon. They would probably also need to be exposing builds of said 
documentation somewhere in the interim before moving into trunk as well. 
Currently though as far as I know the pre-incubation documentation is only 
worked up in the context of the developer site [1].

-Steve

[1] http://docs.openstack.org/developer/trove/dev/install.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] event notifications issue

2014-04-06 Thread Nader Lahouti
Hi All,

I was able to get keystone notification when creating/deleting a tenant by
setting these parameters in keystone.conf:

(NOTE: the branch that I was using:
git branch -v
* (no branch) 0d83e7e Bump stable/havana next version to 2013.2.2
)
notification_topics = Key_Notify
rpc_backend = keystone.openstack.common.rpc.impl_kombu
control_exchange = Key_openstack
notification_driver = keystone.openstack.common.notifier.rpc_notifier

Now I changed the branch to:
git branch -v
* master e45ff9e Merge "Updated from global requirements"

And I cannot get any notifications. It seems notifications.py changed.

What do I need to do to get event notifications working with the current
code?
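[Editor's note: on master at that time the keystone notifier had been
ported to oslo.messaging, so the old keystone.openstack.common.notifier
settings are silently ignored. A hedged sketch of roughly equivalent
keystone.conf settings follows — option names assume the
oslo.messaging-based notifier, so verify them against the
keystone.conf.sample in your tree:]

```ini
[DEFAULT]
# oslo.messaging-based notifier; replaces
# notification_driver = keystone.openstack.common.notifier.rpc_notifier
notification_driver = messaging
notification_topics = Key_Notify
control_exchange = Key_openstack
rpc_backend = rabbit
```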

Thanks,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-06 Thread Michael Elder
Adam, 

I was imprecise, thank you for correcting that error. 

I think the net of the statement still holds though: the Keystone token 
mechanism defines a mechanism for authorization, why doesn't the heat 
stack manage a token for any behavior that requires authorization? 


-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer’s problem." -Mark Cook



From:   Adam Young ayo...@redhat.com
To: openstack-dev@lists.openstack.org
Date:   04/04/2014 09:54 PM
Subject:Re: [openstack-dev] [heat] Problems with Heat software 
configurations and KeystoneV2



On 04/04/2014 02:46 PM, Clint Byrum wrote:
 Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:
 Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624

 I still have concerns though about the design approach of creating a 
new
 project for every stack and new users for every resource.

 If I provision 1000 patterns a day with an average of 10 resources per
 pattern, you're looking at 10,000 users per day. How can that scale?

 If that can't scale, then keystone is not viable at all. I like to think
 we can scale keystone to the many millions of users level.

 How can we ensure that all stale projects and users are cleaned up as
 instances are destroyed? When users choose to go through horizon or nova
 to tear down instances, what cleans up the project & users associated with
 that heat stack?

 So, they created these things via Heat, but have now left the dangling
 references in Heat, and expect things to work properly?

 If they create it via Heat, they need to delete it via Heat.

 Keystone defines the notion of tokens to support authentication, why
 doesn't the design provision and store a token for the stack and its
 equivalent management?

 Tokens are _authentication_, not _authorization_.

Tokens are authorization, not authentication.  For Authentication you 
should be using a real cryptographically secure authentication 
mechanism:  either Kerberos or X509.


 For the latter, we
 need to have a way to lock down access to an individual resource in
 Heat. This allows putting secrets in deployments and knowing that only
 the instance which has been deployed to will have access to the secrets.
 I do see an optimization possible, which is to just create a user for 
the
 box that is given access to any deployments on the box. That would make
 sense if users are going to create many many deployments per server. But
 even at 10 per server, having 10 users is simpler than trying to manage
 shared users and edit their authorization rules.

 Now, I actually think that OAUTH tokens _are_ intended to be 
authorization
 as well as authentication, so that is probably where the focus should
 be long term. But really, you're talking about the same thing: a single
 key lookup in keystone.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilomete]

2014-04-06 Thread Hachem Chraiti
Hi, how can I use the Ceilometer API in Python programs (to show
meters, alarms, ...)? Please give some example Python code.
Sincerely, Chraiti Hachem, software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-06 Thread Steve Baker
On 07/04/14 12:52, Michael Elder wrote:


 I think the net of the statement still holds though: the Keystone
 token mechanism defines a mechanism for authorization, why doesn't the
 heat stack manage a token for any behavior that requires authorization?
Heat does use a token, but that token is associated with a user which
can only perform limited operations on one heat resource. This reduces
the risk that an unauthorized action can be performed due to using some
form of shared user.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] servicevm: weekly servicevm IRC meeting(April 8, 2014)

2014-04-06 Thread Isaku Yamahata
Hi. This is a reminder mail for the servicevm IRC meeting
April 8, 2014 (Tuesday), 5:00 AM UTC
#openstack-meeting on freenode

- status update
- details of dividing the blueprints
  (Sorry I'm going to write it up from now on.)
- design summit plan
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-06 Thread Michael Elder
If Keystone is configured with an external identity provider (LDAP, 
OpenID, etc), how does the creation of a new user per resource affect that 
external identity source? 

My suggestion is broader, but in the same spirit: Could we consider 
defining an _authorization_ stack token (thanks Adam), which acts like 
an OAuth token (by delegating a set of actionable behaviors that a token 
holder may perform). The stack token would be managed within the stack 
in some protected form and used for any activities later performed on 
resources which are managed by the stack. Instead of imposing user 
administration tasks like creating users, deleting users, etc against the 
Keystone database, Heat would instead provide these stack tokens to any 
service which it connects to when managing a resource. In fact, there's no 
real reason that the stack token couldn't piggyback on the existing 
Keystone token mechanism, except that it would be potentially longer lived 
and restricted to the specific set of resources for which it was granted. 

Not sure if email is the best medium for this discussion, so if there's a 
better option, I'm happy to follow that path as well. 

-M 


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer’s problem." -Mark Cook



From:   Steve Baker sba...@redhat.com
To: openstack-dev@lists.openstack.org
Date:   04/06/2014 09:16 PM
Subject:Re: [openstack-dev] [heat] Problems with Heat software 
configurations and KeystoneV2



On 07/04/14 12:52, Michael Elder wrote:


I think the net of the statement still holds though: the Keystone token 
mechanism defines a mechanism for authorization, why doesn't the heat 
stack manage a token for any behavior that requires authorization? 
Heat does use a token, but that token is associated with a user which can 
only perform limited operations on one heat resource. This reduces the 
risk that an unauthorized action can be performed due to using some form 
of shared user.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators Design Summit ideas for Atlanta

2014-04-06 Thread Tom Fifield
So far, there's been no comment from anyone working on nova, so there's 
been no session proposed.


I can, of course, propose a session ... but without buy-in from the 
project team it's unlikely to be accepted.



Regards,


Tom


On 01/04/14 22:44, Matt Van Winkle wrote:

So, I've been watching the etherpad and the summit submissions and I
noticed that there isn't anything for nova.  Maybe I'm off base, but it
seems like we'd be missing the mark to not have a Developer/Operator's
exchange on the key product.  Is there anything we can do to get a session
slotted like these other products?

Thanks!
Matt

On 3/28/14 2:01 AM, Tom Fifield t...@openstack.org wrote:


Thanks to those projects that responded. I've proposed sessions in
swift, ceilometer, tripleO and horizon.

On 17/03/14 07:54, Tom Fifield wrote:

All,

Many times we've heard a desire for more feedback and interaction from
users. However, their attendance at design summit sessions is met with
varied success.

However, last summit, by happy accident, a swift session turned into a
something a lot more user driven. A competent user was able to describe
their use case, and the developers were able to stage a number of
question to them. In this way, some of the assumptions about the way
certain things were implemented, and the various priorities of future
plans became clearer. It worked really well ... perhaps this is
something we'd like to have happen for all the projects?

*Idea*: Add an ops session for each project in the design summit

https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions


Most operators running OpenStack tend to treat it more holistically than
those coding it. They are aware of, but don't necessarily think or work
in terms of, project breakdowns. To this end, I'd imagine such
sessions would:

   * have a primary purpose for developers to ask the operators to answer
 questions, and request information

   * allow operators to tell the developers things (give feedback) as a
 secondary purpose that could potentially be covered better in a
 cross-project session

   * need good moderation, for example to push operator-to-operator
 discussion into forums with more time available (eg
 https://etherpad.openstack.org/p/ATL-ops-unconference-RFC )

   * be reinforced by having volunteer good users in potentially every
 design summit session
 (https://etherpad.openstack.org/p/ATL-ops-in-design-sessions )


Anyway, just a strawman - please jump on the etherpad

(https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-session
s)
or leave your replies here!


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators Design Summit ideas for Atlanta

2014-04-06 Thread Michael Still
It might be that this is happening because there is no clear incumbent
for the Nova PTL position. Is it ok to hold off on this until after
the outcome of the election is known?

Michael

On Mon, Apr 7, 2014 at 2:23 PM, Tom Fifield t...@openstack.org wrote:
 So far, there's been no comment from anyone working on nova, so there's been
 no session proposed.

 I can, of course, propose a session ... but without buy-in from the project
 team it's unlikely to be accepted.


 Regards,


 Tom



 On 01/04/14 22:44, Matt Van Winkle wrote:

 So, I've been watching the etherpad and the summit submissions and I
 noticed that there isn't anything for nova.  Maybe I'm off base, but it
 seems like we'd be missing the mark to not have a Developer/Operator's
 exchange on the key product.  Is there anything we can do to get a session
 slotted like these other products?

 Thanks!
 Matt

 On 3/28/14 2:01 AM, Tom Fifield t...@openstack.org wrote:

 Thanks to those projects that responded. I've proposed sessions in
 swift, ceilometer, tripleO and horizon.

 On 17/03/14 07:54, Tom Fifield wrote:

 All,

 Many times we've heard a desire for more feedback and interaction from
 users. However, their attendance at design summit sessions is met with
 varied success.

 However, last summit, by happy accident, a swift session turned into a
 something a lot more user driven. A competent user was able to describe
 their use case, and the developers were able to stage a number of
 question to them. In this way, some of the assumptions about the way
 certain things were implemented, and the various priorities of future
 plans became clearer. It worked really well ... perhaps this is
 something we'd like to have happen for all the projects?

 *Idea*: Add an ops session for each project in the design summit


 https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions


 Most operators running OpenStack tend to treat it more holistically than
 those coding it. They are aware of, but don't necessarily think or work
 in terms of project  breakdowns. To this end, I'd imagine the such
 sessions would:

* have a primary purpose for developers to ask the operators to
 answer
  questions, and request information

* allow operators to tell the developers things (give feedback) as a
  secondary purpose that could potentially be covered better in a
  cross-project session

* need good moderation, for example to push operator-to-operator
  discussion into forums with more time available (eg
  https://etherpad.openstack.org/p/ATL-ops-unconference-RFC )

* be reinforced by having volunteer good users in potentially every
  design summit session
  (https://etherpad.openstack.org/p/ATL-ops-in-design-sessions )


 Anyway, just a strawman - please jump on the etherpad


 (https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-session
 s)
 or leave your replies here!


 Regards,


 Tom


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators Design Summit ideas for Atlanta

2014-04-06 Thread Tom Fifield

If the timing works, that seems fine :)

Regards,


Tom

On 07/04/14 10:32, Michael Still wrote:

It might be that this is happening because there is no clear incumbent
for the Nova PTL position. Is it ok to hold off on this until after
the outcome of the election is known?

Michael

On Mon, Apr 7, 2014 at 2:23 PM, Tom Fifield t...@openstack.org wrote:

So far, there's been no comment from anyone working on nova, so there's been
no session proposed.

I can, of course, propose a session ... but without buy-in from the project
team it's unlikely to be accepted.


Regards,


Tom



On 01/04/14 22:44, Matt Van Winkle wrote:


So, I've been watching the etherpad and the summit submissions and I
noticed that there isn't anything for nova.  Maybe I'm off base, but it
seems like we'd be missing the mark to not have a Developer/Operator's
exchange on the key product.  Is there anything we can do to get a session
slotted like these other products?

Thanks!
Matt

On 3/28/14 2:01 AM, Tom Fifield t...@openstack.org wrote:


Thanks to those projects that responded. I've proposed sessions in
swift, ceilometer, tripleO and horizon.

On 17/03/14 07:54, Tom Fifield wrote:


All,

Many times we've heard a desire for more feedback and interaction from
users. However, their attendance at design summit sessions is met with
varied success.

However, last summit, by happy accident, a swift session turned into a
something a lot more user driven. A competent user was able to describe
their use case, and the developers were able to stage a number of
question to them. In this way, some of the assumptions about the way
certain things were implemented, and the various priorities of future
plans became clearer. It worked really well ... perhaps this is
something we'd like to have happen for all the projects?

*Idea*: Add an ops session for each project in the design summit


https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions


Most operators running OpenStack tend to treat it more holistically than
those coding it. They are aware of, but don't necessarily think or work
in terms of project  breakdowns. To this end, I'd imagine the such
sessions would:

* have a primary purpose for developers to ask the operators to
answer
  questions, and request information

* allow operators to tell the developers things (give feedback) as a
  secondary purpose that could potentially be covered better in a
  cross-project session

* need good moderation, for example to push operator-to-operator
  discussion into forums with more time available (eg
  https://etherpad.openstack.org/p/ATL-ops-unconference-RFC )

* be reinforced by having volunteer good users in potentially every
  design summit session
  (https://etherpad.openstack.org/p/ATL-ops-in-design-sessions )


Anyway, just a strawman - please jump on the etherpad


(https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-session
s)
or leave your replies here!


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilomete]

2014-04-06 Thread Steve Martinelli
This seems like a good place to start:
https://github.com/openstack/python-ceilometerclient/blob/master/doc/source/index.rst



Regards,

Steve Martinelli
Software Developer - Openstack
Keystone Core Member





Phone:
1-905-413-2851
E-mail: steve...@ca.ibm.com

8200 Warden Ave
Markham, ON L6G 1C7
Canada




From:   Hachem Chraiti hachem...@gmail.com
To: openstack-dev@lists.openstack.org
Date:   04/06/2014 08:57 PM
Subject:    [openstack-dev] [Ceilomete]




hi, How can I use the Ceilometer API in Python programs? (to
show meters, alarms, ...)
Please give some example Python code.

Sincerely,
Chraiti Hachem, software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: Whats the way to do cleanup during service shutdown / restart ?

2014-04-06 Thread Deepak Shetty
Duncan,
Thanks for your response. Though I agree with what you said, I am still
trying to understand why I see what I see, i.e. why the base class
variable (_mounted_shares) shows up empty in __del__.
I am assuming here that the obj is not completely gone/deleted, so its vars
must still be in scope and valid.. but the debug prints suggest otherwise
:(


On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas duncan.tho...@gmail.comwrote:

 I'm not yet sure of the right way to do cleanup on shutdown, but any
 driver should do as much checking as possible on startup - the service
 might not have gone down cleanly (kill -9, SEGFAULT, etc), or
 something might have gone wrong during clean shutdown. The driver
 coming up should therefore not make any assumptions it doesn't
 absolutely have to, but rather should check and attempt cleanup
 itself, on startup.

 On 3 April 2014 15:14, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi,
  I am looking to umount the glusterfs shares that are mounted as part
 of
  gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
 devstack
  env) or when c-vol service is being shutdown.
 
  I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
 didn't
  work
 
   def __del__(self):
       LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                self._mounted_shares)
       for share in self._mounted_shares:
           mount_path = self._get_mount_point_for_share(share)
           command = ['umount', mount_path]
           self._do_umount(command, True, share)
 
  self._mounted_shares is defined in the base class (RemoteFsDriver)
 
  ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught
  SIGINT, stopping children
  2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught
  SIGTERM, exiting
  2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught
  SIGTERM, exiting
  2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting
 on
  2 children to exit
  2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child
 30185
  exited with status 1
  2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS:
  Inside __del__ Hurray!, shares=[]
  2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child
 30186
  exited with status 1
  Exception TypeError: "'NoneType' object is not callable" in bound method
  GlusterfsDriver.__del__ of
  <cinder.volume.drivers.glusterfs.GlusterfsDriver
  object at 0x2777ed0> ignored
  [stack@devstack-vm tempest]$
 
  So the _mounted_shares is empty ([]) which isn't true since I have 2
  glusterfs shares mounted and when I print _mounted_shares in other parts
 of
  code, it does show me the right thing.. as below...
 
  From volume/drivers/glusterfs.py @ line 1062:
  LOG.debug(_('Available shares: %s') % self._mounted_shares)
 
  which dumps the debugprint  as below...
 
  2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
  [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
  [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1']
 from
  (pid=30185) _ensure_shares_mounted
  /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
 
  This brings up a few Qs (I am using devstack env) ...
 
  1) Is __del__ the right way to do cleanup for a cinder driver ? I have 2
  gluster backends setup, hence 2 cinder-volume instances, but i see
 __del__
  being called once only (as per above debug prints)
  2) I tried atexit and registering a function to do the cleanup.
 Ctrl-C'ing
  c-vol (from screen ) gives the same issue.. shares is empty ([]), but
 this
  time i see that my atexit handler called twice (once for each backend)
  3) In general, whats the right way to do cleanup inside cinder volume
 driver
  when a service is going down or being restarted ?
  4) The solution should work in both devstack (ctrl-c to shutdown c-vol
  service) and production (where we do service restart c-vol)
 
  Would appreciate a response
 
  thanx,
  deepak
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: Whats the way to do cleanup during service shutdown / restart ?

2014-04-06 Thread Deepak Shetty
To add:
I was looking at the Nova code and it seems there is a framework for
cleanup using the terminate calls. IIUC this works because libvirt calls
terminate on the Nova instance when the VM is shutting down/being
destroyed, hence terminate seems to be a good place to do cleanup on the
Nova side. Something similar is missing on the Cinder side, and the
__del__ way of cleanup isn't working, as I posted above.
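[Editor's note: one way around the __del__ pitfall — instance and module
state may already be torn down by the time __del__ runs at interpreter
exit — is to do the cleanup explicitly from an atexit handler and route
signals through sys.exit(). A minimal sketch follows; the class and
method names are illustrative, not the real cinder driver API:]

```python
import atexit
import signal
import sys


class GlusterLikeDriver(object):
    """Minimal stand-in for a driver holding mounted shares
    (hypothetical names; not the real cinder GlusterfsDriver)."""

    def __init__(self):
        self._mounted_shares = []

    def mount(self, share):
        self._mounted_shares.append(share)

    def cleanup(self):
        # Tear down while object state is still intact, instead of
        # hoping __del__ runs before interpreter teardown clears it.
        while self._mounted_shares:
            share = self._mounted_shares.pop()
            # the real driver would call self._do_umount(...) here
            print('umount %s' % share)


driver = GlusterLikeDriver()
driver.mount('devstack-vm.localdomain:/gvol1')

# Run cleanup on normal exit; calling sys.exit() from the signal
# handler makes SIGTERM/SIGINT go through the same atexit path.
atexit.register(driver.cleanup)


def _on_signal(signum, frame):
    sys.exit(0)


signal.signal(signal.SIGTERM, _on_signal)
signal.signal(signal.SIGINT, _on_signal)
```

[This covers both the Ctrl-C devstack case and `service restart`
(SIGTERM); it still cannot help after kill -9, which is why Duncan's
advice to also verify and clean up on startup stands.]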


On Mon, Apr 7, 2014 at 10:24 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Duncan,
 Thanks for your response. Tho' i agree to what you said.. I am still
 trying to understand why i see what i see .. i.e. why the base class
 variables (_mount_shared) shows up empty in __del__
 I am assuming here that the obj is not completely gone/deleted, so its
 vars must still be in scope and valid.. but debug prints suggests the
 otherwise :(


 On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas duncan.tho...@gmail.comwrote:

 I'm not yet sure of the right way to do cleanup on shutdown, but any
 driver should do as much checking as possible on startup - the service
 might not have gone down cleanly (kill -9, SEGFAULT, etc), or
 something might have gone wrong during clean shutdown. The driver
 coming up should therefore not make any assumptions it doesn't
 absolutely have to, but rather should check and attempt cleanup
 itself, on startup.

 On 3 April 2014 15:14, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi,
   I am looking to umount the glusterfs shares that are mounted as
 part of
  gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
 devstack
  env) or when c-vol service is being shutdown.
 
  I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
 didn't
  work
 
   def __del__(self):
       LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                self._mounted_shares)
       for share in self._mounted_shares:
           mount_path = self._get_mount_point_for_share(share)
           command = ['umount', mount_path]
           self._do_umount(command, True, share)
 
  self._mounted_shares is defined in the base class (RemoteFsDriver)
 
  ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-]
 Caught
  SIGINT, stopping children
  2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught
  SIGTERM, exiting
  2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught
  SIGTERM, exiting
  2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-]
 Waiting on
  2 children to exit
  2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child
 30185
  exited with status 1
  2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS:
  Inside __del__ Hurray!, shares=[]
  2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child
 30186
  exited with status 1
  Exception TypeError: "'NoneType' object is not callable" in bound
 method
  GlusterfsDriver.__del__ of
 <cinder.volume.drivers.glusterfs.GlusterfsDriver
  object at 0x2777ed0> ignored
  [stack@devstack-vm tempest]$
 
  So the _mounted_shares is empty ([]) which isn't true since I have 2
  glusterfs shares mounted and when I print _mounted_shares in other
 parts of
  code, it does show me the right thing.. as below...
 
  From volume/drivers/glusterfs.py @ line 1062:
  LOG.debug(_('Available shares: %s') % self._mounted_shares)
 
  which dumps the debugprint  as below...
 
  2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
  [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
  [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1']
 from
  (pid=30185) _ensure_shares_mounted
  /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
 
  This brings up a few Qs (I am using devstack env) ...
 
  1) Is __del__ the right way to do cleanup for a cinder driver ? I have 2
  gluster backends setup, hence 2 cinder-volume instances, but i see
 __del__
  being called once only (as per above debug prints)
  2) I tried atexit and registering a function to do the cleanup.
 Ctrl-C'ing
  c-vol (from screen ) gives the same issue.. shares is empty ([]), but
 this
  time i see that my atexit handler called twice (once for each backend)
  3) In general, whats the right way to do cleanup inside cinder volume
 driver
  when a service is going down or being restarted ?
  4) The solution should work in both devstack (ctrl-c to shutdown c-vol
  service) and production (where we do service restart c-vol)
 
  Would appreciate a response
 
  thanx,
  deepak
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-06 Thread Jay Pipes
On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
  Well, it seems to me that the problem is the above blueprint and the code 
  it introduced. This is an anti-feature IMO, and probably the best solution 
  would be to remove the above code and go back to having a single  
  nova-compute managing a single vCenter cluster, not multiple ones.
 
 Problem is not introduced by managing multiple clusters from single 
 nova-compute proxy node.  

I strongly disagree.

 Internally this proxy driver still presents a compute node for each of
 the clusters it is managing.

In what way?

  What we need to think about is the applicability of the live migration
 use case when a cluster is modelled as a compute node. Since the cluster
 is modelled as a compute node, it is assumed that the typical live-move
 use case is taken care of by the underlying cluster itself. With this
 there are other use cases which are no-ops today, like host maintenance
 mode, live move, setting instance affinity, etc. In order to resolve this
 I was thinking of a way to expose operations on individual ESX hosts
 (putting a host in maintenance mode, live move, instance affinity, etc.)
 by introducing a parent-child compute node concept. Scheduling can be
 restricted to the parent compute node, and the child compute nodes can be
 used to provide more drill-down on compute and also enable additional
 compute operations. Any thoughts on this?

The fundamental problem is that hacks were put in place in order to make
Nova defer control to vCenter, when the design of Nova and vCenter are
not compatible, and we're paying the price for that right now.

All of the operations you describe above -- putting a host in
maintenance mode, live-migration of an instance, ensuring a new instance
is launched near or not-near another instance -- depend on a fundamental
design feature in Nova: that a nova-compute worker fully controls and
manages a host that provides a place to put server instances. We have
internal driver interfaces for the *hypervisor*, not for the *manager of
hypervisors*, because, you know, that's what Nova does.
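That one-driver-per-host assumption can be sketched as an interface. The following is a paraphrase for illustration, not the actual `nova.virt.driver.ComputeDriver` signatures:

```python
import abc


class ComputeDriver(abc.ABC):
    """Paraphrased sketch of the per-hypervisor contract that
    nova-compute assumes: one driver instance manages one host."""

    @abc.abstractmethod
    def spawn(self, context, instance):
        """Create a running instance on *this* host."""

    @abc.abstractmethod
    def live_migration(self, context, instance, dest_host):
        """Nova's scheduler picks dest_host; the driver only moves
        the guest. It does not choose the destination itself."""

    @abc.abstractmethod
    def get_available_resource(self):
        """Report this one host's capacity to the scheduler."""


class FakeHypervisorDriver(ComputeDriver):
    """Toy implementation. A driver fronting a whole vCenter cluster
    breaks the 1:1 driver-to-host mapping this interface encodes."""

    def __init__(self):
        self.instances = {}

    def spawn(self, context, instance):
        self.instances[instance] = "running"

    def live_migration(self, context, instance, dest_host):
        # Hand the guest off; it no longer lives on this host.
        self.instances.pop(instance, None)

    def get_available_resource(self):
        return {"vcpus": 8, "memory_mb": 16384}
```

When vCenter sits behind this interface, scheduling and placement decisions happen on both sides of the boundary, which is the conflict described above.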

The problem with all of the vCenter stuff is that it is trying to say to
Nova "don't worry, I got this", but unfortunately, Nova wants and needs
to manage these things, not surrender control to a different system that
handles orchestration and scheduling in its own unique way.

If a shop really wants to use vCenter for scheduling and orchestration
of server instances, what exactly is the point of using OpenStack Nova
to begin with? What exactly is the point of trying to use OpenStack Nova
for scheduling and host operations when you've already shelled out US
$6,000 for vCenter Server and a boatload more money for ESX licensing?

Sorry, I'm just at a loss as to why Nova was changed to accommodate
vCenter cluster and management concepts to begin with. I just don't
understand the use case here.

Best,
-jay





