Re: [openstack-dev] Summit session for config validation and diagnostic

2014-04-07 Thread Ezra Silvera
Oleg,

I'm very much interested in this. 
I believe that as this distributed environment becomes more and more 
complex, involving many different modules and teams, it's becoming crucial 
to have some common view (and maybe framework) for diagnostics and problem 
resolution.

Thanks,

Ezra 
IBM Research Labs Haifa


 
Boris Pavlovic bo...@pavlovic.me wrote on 02/04/2014 10:52:50 PM:
 
  From: Boris Pavlovic bo...@pavlovic.me
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org, 
  Date: 02/04/2014 10:55 PM
  Subject: Re: [openstack-dev] Summit session for config validation 
  and diagnostic
  
  Oleg,
  
  Seems like a very interesting topic.
  
  I am especially interested in how this could be integrated with 
  production clouds (could it be run on controllers?). What API does it 
  present, and is there integration with Horizon? 
  
  Best regards,
  Boris Pavlovic
  

  On Wed, Apr 2, 2014 at 11:20 PM, Oleg Gelbukh ogelb...@mirantis.com 
wrote:
  Hello, OpenStack developers
  
  Everyone knows that finding out the root cause of some failure in 
  OpenStack, with many projects interacting with each other and 
  external components, can be a serious challenge even for a seasoned 
  stacker. I've seen more than one attempt to attack this task since I
  joined this community (including our own project, Rubick).
  
  Now I want to make a somewhat bold attempt to bring all interested 
  parties face to face in Atlanta, during a summit session in the 'Other 
  projects' track:
  
  http://summit.openstack.org/cfp/details/208
  
  I see it as a good place to coordinate efforts currently scattered across 
  multiple projects. Let's find out what the most important and 
  common use cases are for us, and outline how we're going to solve them.
  
  What do you think?
  
  --
  Best regards,
  Oleg Gelbukh
  Mirantis Labs
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Saju M
Hi,

Amazon provides option to Import/Export VM.
http://aws.amazon.com/ec2/vm-import/

Does OpenStack have the same feature?
Has anyone started to implement this in OpenStack? If yes, please point
me to the blueprint. I would like to work on that.


Regards
Saju Madhavan
+91 09535134654
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Jesse Pretorius
On 7 April 2014 09:06, Saju M sajup...@gmail.com wrote:

 Amazon provides option to Import/Export VM.
 http://aws.amazon.com/ec2/vm-import/

 Does OpenStack have the same feature?
 Has anyone started to implement this in OpenStack? If yes, please point
 me to the blueprint. I would like to work on that.


To my knowledge, there are no blueprints for anything like this. I'm not
sure whether it would fit into the realm of Glance or another project, but
we would certainly love to see something like this work its way into
OpenStack. Initially import only, but eventually export too.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron] Networking Discussions last week

2014-04-07 Thread Assaf Muller


- Original Message -
 Hi all,
 we had a number of discussions last week in Moscow, with participation of
 guys from Russia, Ukraine and Poland.
 That was a great time!! Thanks to everyone who participated.
 
 Special thanks to Przemek for great preparations, including the following:
 https://docs.google.com/a/mirantis.com/presentation/d/115vCujjWoQ0cLKgVclV59_y1sLDhn2zwjxEDmLYsTzI/edit#slide=id.p
 
 I've searched for blueprints which require updates after the meetings:
 https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks
 https://blueprints.launchpad.net/fuel/+spec/fuel-multiple-l3-agents
 https://blueprints.launchpad.net/fuel/+spec/fuel-storage-networks
 https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
 https://blueprints.launchpad.net/fuel/+spec/advanced-networking
 
 We will need to create one for UI.
 
 Neutron blueprints which are in the interest of large and thus complex
 deployments, with the requirements of scalability and high availability:
 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
 
 The last one was rejected... might there be another way of achieving the same
 use cases? The use case, I think, was explained in great detail here:
 https://wiki.openstack.org/wiki/NovaNeutronGapHighlights
 Any thoughts on this?
 

https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
This is the up-to-date blueprint, called Distributed Virtual
Router, or DVR. It's in early implementation reviews and is
targeted for the Juno release.

 Thanks,
 --
 Mike Scherbakov
 #mihgen
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel-dev][fuel-ostf] Extending networks diagnostic toolkit

2014-04-07 Thread Dmitriy Shulyak
Hi,

There are a number of additional network verifications that can improve
the troubleshooting experience or even cluster performance, such as:

1. multicast group verification for corosync messaging
2. network connectivity with jumbo packets
3. L3 connectivity verification
4. some fencing verification
5. allocated IP verification
https://bugs.launchpad.net/fuel/+bug/1275641
6. network performance measurement with iperf (sketched below)
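
For illustration, checks 2 and 6 above could be scripted roughly like this
(a sketch, not fuel-ostf code; host names and MTU values are assumptions):

    import subprocess

    def check_jumbo_frames(host, mtu=9000):
        # Ping with fragmentation prohibited; payload is the MTU minus
        # 28 bytes of IP + ICMP headers.
        rc = subprocess.call(
            ['ping', '-c', '3', '-M', 'do', '-s', str(mtu - 28), host])
        return rc == 0

    def measure_bandwidth(host, seconds=5):
        # Requires 'iperf -s' already running on the target node;
        # returns the raw iperf report (parsing omitted for brevity).
        return subprocess.check_output(
            ['iperf', '-c', host, '-t', str(seconds), '-f', 'm'])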

Adding all of this to the fuel-web network tab would significantly worsen the
UX; it is also not friendly enough to extend the current model with additional
verifications.

The whole approach looks like a networking health check for the deployment, so
in my opinion it should be done as a separate tab, similar to the OSTF health check.

fuel-ostf already has the necessary DB and REST API code to support such
extensions, and with some work this could be used as a diagnostic tool not only
for Fuel, but for TripleO as well.

In my opinion this feature should be split into two main parts:

PART 1 - a new plugin-executor for OSTF, a UI tab in fuel-web, and extending this
plugin with the existing verifications:

1. For now OSTF has one plugin-executor, which uses nose for
running tests; add a new executor named something like distributed.
Astute will still perform the role of orchestrator.

2. Add a new reporter to Astute that will publish messages to the OSTF
queue.

3. Add an OSTF AMQP receiver.

4. Extend the current plugin with the verifications listed above.

After this part of the refactoring, it should be possible to support rapid
extension of distributed cluster diagnostics.

PART 2 - make the integration with Fuel pluggable, meaning:

1. Remove the proxy dependency from OSTF; this can be done with the SOCKS
protocol, which provides an HTTP proxy over SSH (it is supported by the OpenSSH
server).

2. Make the integration with Nailgun pluggable.

3. Replace Astute/MCollective with a custom agent or some community
solution.


I would appreciate comments or suggestions, so don't hesitate to share your
thoughts.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] Doc for Trove ?

2014-04-07 Thread Andreas Jaeger
On 04/06/2014 08:47 PM, Anne Gentle wrote:
 [...]
 Yes, since the Trove midcycle meetup in February we've had a writer at
 Tesora assigned to this task. It's a huge task though since we document
 four distros so all the helping hands we can get would be great. 

The distro experts on the team can help with that. I suggest getting the
document ready for a single distribution initially, and then others can help
add the rest of the distros.

Is there any other way to help?

We could also add the files into git today - and just not include
them in the published documents. That way more people can work on them.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo][Neutron] Tripleo Neutron

2014-04-07 Thread mar...@redhat.com
Hello Tripleo/Neutron:

I've recently found some cycles to look into Neutron. Mostly because
networking rocks, but also so we can perhaps better address Neutron-related
issues/needs down the line. I thought it may be good to ask the
wider team if there are others who are also interested in
Neutron + TripleO. We could form a loose focus group to discuss blueprints
and review each other's code/chase up with cores. My search may have
missed earlier discussions in openstack-dev [Tripleo][Neutron] and
Tripleo blueprints, so my apologies if this has already been started
somewhere. If any of the above is of interest then:

* is the following list sane - does it make sense to pick these off, or
are these 'nice to haves' that are not of immediate concern? Even just
validating, prioritizing and recording concerns could be worthwhile, for
example?
* are you interested in discussing any of the following further and
perhaps investigating and/or helping with blueprints where/if necessary?

Right now I have:

[Undercloud]:

1. Define a neutron node (tripleo-image-elements/disk-image-builder) and
make sure it deploys and scales OK (tripleo-heat-templates/tuskar). This
comes under the blueprint by lifeless at
https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies

2. HA the neutron node. For each of the neutron services/agents of interest
(neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any
issues with running them in HA - perhaps there are none \o/? Useful
whether using a dedicated Neutron node or just for HA on the
undercloud-control node.

3. Does it play with Ironic OK? I know there were some issues with
Ironic and Neutron DHCP, though I think this has now been addressed.
Other known/unknown bugs/issues with Ironic/Neutron - the baremetal
driver will be deprecated at some point...

4. Subnetting. Right now the undercloud uses a single subnet. Does it
make sense to have multiple subnets here - one point I've heard is for
segregation of your undercloud nodes (i.e. 1 broadcast domain).

5. Security. Are we at least using Neutron as we should be in the
Undercloud, security-groups, firewall rules etc?

[Overcloud]:

1. Configuration. In the overcloud it's just Neutron. So one concern
is which neutron configuration options to expose via Tuskar-UI, and how.
We would pass these through the deployment heat-template for the definition
of Neutron plugin-specific .conf files (like dnsmasq-neutron.conf), for
example, or for the initial definition of tenant subnets and router(s) for
access to external networks.

2. 3. ???


thanks! marios

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-07 Thread Thomas Herve

 Hi folks,
 
 There are two problems we should address regarding the growth and change
 to the HOT specification.
 
 First, our +2/+A process for normal changes doesn't totally make sense
 for hot_spec.rst.  We generally have some informal bar for controversial
 changes (which changes to hot_spec.rst generally are :).  I
 would suggest raising the bar on hot_spec.rst to at least what is
 required for a heat-core team addition (currently 5 approval votes).
 This gives folks plenty of time to review and make sure the heat core
 team is committed to the changes, rather than a very small 2-member
 subset.  Of course a -2 vote from any heat-core would terminate the
 review as usual.
 
 Second, there is a window where we say hey, we want this sweet new
 functionality, yet it remains unimplemented.  I suggest we create some
 special tag for these intrinsics/sections/features, so folks know they
 are unimplemented and NOT officially part of the specification until
 that is the case.
 
 We can call this tag something simple like
 *standardization_pending_implementation* for each section which is
 unimplemented.  A review which proposes this semantic is here:
 https://review.openstack.org/85610
 
 My goal is not to add more review work to people's time, but I really
 believe any changes to the HOT specification have a profound impact on
 all things Heat, and we should take special care when considering these
 changes.
 
 Thoughts or concerns?

Hi Steve, 

I'm -1 on merging the docs. Regardless of the warnings we put in, people are going 
to be confused seeing features here that they can't use. There is also a huge 
chance that the implementation will change from the original doc, thus 
forcing us to update it if/when we merge.

AFAIK gerrit is persistent, so we can keep the doc patch in it forever and link 
it from a blueprint, then merge the doc change alongside the implementation.

Cheers,

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-07 Thread Juan Manuel Rey
Hi,

I'm fairly new to this list (actually this is my first email sent) and to
OpenStack in general, but I'm not new at all to VMware, so I'll try to give
you my point of view about a possible use case here.

Jay, you are saying that by using Nova to manage ESXi hosts we don't need
vCenter because they basically overlap in their capabilities. I agree with
you to some extent: Nova may have similar capabilities to vCenter Server,
but as you know, OpenStack as a full cloud solution adds a lot more features
that vCenter lacks, like multitenancy, just to name one.

Also, in any vSphere environment, managing ESXi hosts individually - that is,
without vCenter - is completely out of the question. vCenter is the enabler
of many vSphere features. And precisely that is, IMHO, the use case for
using Nova to manage vCenter to manage vSphere. Without vCenter we only
have a bunch of hypervisors and none of the HA or DRS (Distributed Resource
Scheduler) capabilities that a vSphere cluster provides; this, in my
experience with vSphere users/customers, is a no-go scenario.

I don't know why the decision to manage vCenter with Nova was made but
based on the above I understand the reasoning.

Best,
---
Juan Manuel Rey
@jreypo


On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
   Well, it seems to me that the problem is the above blueprint and the
 code it introduced. This is an anti-feature IMO, and probably the best
 solution would be to remove the above code and go back to having a single
  nova-compute managing a single vCenter cluster, not multiple ones.
 
  The problem is not introduced by managing multiple clusters from a single
 nova-compute proxy node.

 I strongly disagree.

  Internally this proxy driver is still presenting the compute-node for
 each of the clusters it's managing.

 In what way?

   What we need to think about is applicability of the live migration use
 case when a cluster is modelled as a compute.   Since the cluster is
 modelled as a compute, it is assumed that a typical use case of live-move
 is taken care of by the underlying cluster itself.   With this there are
 other use cases which are no-op today like host maintenance mode, live
 move, setting instance affinity, etc. In order to resolve this I was
 thinking of
 a way to expose operations on individual ESX hosts, like putting a host in
 maintenance mode,  live move, instance affinity etc., by introducing Parent
 - Child compute node concept.   Scheduling can be restricted to Parent
 compute node and Child compute node can be used for providing more drill
 down on compute and also enable additional compute operations.   Any
 thoughts on this?

 The fundamental problem is that hacks were put in place in order to make
 Nova defer control to vCenter, when the design of Nova and vCenter are
 not compatible, and we're paying the price for that right now.

 All of the operations you describe above -- putting a host in
 maintenance mode, live-migration of an instance, ensuring a new instance
 is launched near or not-near another instance -- depend on a fundamental
 design feature in Nova: that a nova-compute worker fully controls and
 manages a host that provides a place to put server instances. We have
 internal driver interfaces for the *hypervisor*, not for the *manager of
 hypervisors*, because, you know, that's what Nova does.

 The problem with all of the vCenter stuff is that it is trying to say to
 Nova "don't worry, I got this", but unfortunately, Nova wants and needs
 to manage these things, not surrender control to a different system that
 handles orchestration and scheduling in its own unique way.

 If a shop really wants to use vCenter for scheduling and orchestration
 of server instances, what exactly is the point of using OpenStack Nova
 to begin with? What exactly is the point of trying to use OpenStack Nova
 for scheduling and host operations when you've already shelled out US
 $6,000 for vCenter Server and a boatload more money for ESX licensing?

 Sorry, I'm just at a loss why Nova was changed to accommodate vCenter
 cluster and management concepts to begin with. I just don't understand
 the use case here.

 Best,
 -jay






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Status of ovs-firewall-driver blueprint?

2014-04-07 Thread Thapar, Vishal (HP Networking)
Hi,

I am working on an OVS-based implementation of Neutron security groups and came 
across this blueprint:
https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

I've gone through every mail, document, and IRC chat log on this to get a good 
grasp of its history, but couldn't find any recent activity on the 
blueprint. It is listed on the Meetings page on the wiki, but the last meeting 
seems to have been held last December. I've just started prototyping this 
and would like to work with the community to see it through to completion.

Could anyone suggest how to proceed on this? Do I need to request a meeting 
for this?

Thanks and Regards,
Vishal.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-07 Thread mar...@redhat.com
On 07/04/14 00:27, Steve Baker wrote:
 On 05/04/14 04:47, Tomas Sedovic wrote:
 Hi All,

 I was wondering if the time has come to document what exactly we are
 doing with tripleo-heat-templates and merge.py[1], figure out what needs
 to happen to move away, and raise the necessary blueprints on the Heat and
 TripleO sides.

 (merge.py is a script we use to build the final TripleO Heat templates
 from smaller chunks)

 There probably isn't an immediate need for us to drop merge.py, but its
 existence either indicates deficiencies within Heat or our unfamiliarity
 with some of Heat's features (possibly both).

 I worry that the longer we stay with merge.py the harder it will be to
 move forward. We're still adding new features and fixing bugs in it (at
 a slow pace but still).

 Below is my understanding of the main merge.py functionality and a rough
 plan of what I think might be a good direction to move to. It is almost
 certainly incomplete -- please do poke holes in this. I'm hoping we'll
 get to a point where everyone's clear on what exactly merge.py does and
 why. We can then document that and raise the appropriate blueprints.


 ## merge.py features ##


 1. Merging parameters and resources

 Any uniquely-named parameters and resources from multiple templates are
 put together into the final template.

 If a resource of the same name is in multiple templates, an error is
 raised - unless it's of a whitelisted type (nova server, launch
 configuration, etc.), in which case they're all merged into a single
 resource.

 For example: merge.py overcloud-source.yaml swift-source.yaml

 The final template has all the parameters from both. Moreover, these two
 resources will be joined together:

  overcloud-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP


  swift-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Metadata:
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


 The final template will contain:

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


 We use this to keep the templates more manageable (instead of having one
 huge file) and also to be able to pick the components we want: instead
 of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
 uses the VirtualPowerManager driver) or `ironic-vm-source`.
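
 As a rough illustration of the merging semantics just described (this is
 not the actual merge.py code, and MERGEABLE_TYPES below is only an
 illustrative subset of the real whitelist):

     import yaml

     MERGEABLE_TYPES = ('AWS::EC2::Instance',
                        'AWS::AutoScaling::LaunchConfiguration')

     def deep_merge(a, b):
         # Recursively fold dict b into dict a.
         for key, value in b.items():
             if isinstance(a.get(key), dict) and isinstance(value, dict):
                 deep_merge(a[key], value)
             else:
                 a[key] = value
         return a

     def merge_templates(paths):
         final = {'Parameters': {}, 'Resources': {}}
         for path in paths:
             with open(path) as f:
                 template = yaml.safe_load(f)
             final['Parameters'].update(template.get('Parameters', {}))
             for name, resource in template.get('Resources', {}).items():
                 if name not in final['Resources']:
                     final['Resources'][name] = resource
                 elif resource['Type'] in MERGEABLE_TYPES:
                     deep_merge(final['Resources'][name], resource)
                 else:
                     raise ValueError('Duplicate resource: %s' % name)
         return final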



 2. FileInclude

 If you have a pseudo-resource with the type `FileInclude`, we will
 look at the specified Path and SubKey and put the resulting dictionary in
 its place:

  overcloud-source.yaml 

   NovaCompute0Config:
 Type: FileInclude
 Path: nova-compute-instance.yaml
 SubKey: Resources.NovaCompute0Config
 Parameters:
   NeutronNetworkType: gre
   NeutronEnableTunnelling: True


  nova-compute-instance.yaml 

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: {Ref: NeutronNetworkType}
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: {Ref: NeutronEnableTunnelling}
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

 The result:

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: gre
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: True
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

  Note the `NeutronNetworkType` and `NeutronEnableTunneling` parameter
  substitution.
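
  Roughly, the include resolution works like this (again only a sketch, not
  the actual merge.py implementation):

      import yaml

      def resolve_file_include(include):
          # Load the Path, walk down the dotted SubKey, then substitute
          # any {Ref: name} whose name appears in the Parameters block.
          with open(include['Path']) as f:
              node = yaml.safe_load(f)
          for key in include['SubKey'].split('.'):
              node = node[key]
          params = include.get('Parameters', {})

          def substitute(value):
              if isinstance(value, dict):
                  if list(value) == ['Ref'] and value['Ref'] in params:
                      return params[value['Ref']]
                  return dict((k, substitute(v)) for k, v in value.items())
              if isinstance(value, list):
                  return [substitute(v) for v in value]
              return value

          return substitute(node)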

Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-07 Thread Samuel Bercovici
Please elaborate: do you mean that the nodes could be in different zones/cells 
or something else?


-Original Message-
From: Alun Champion [mailto:p...@achampion.net] 
Sent: Sunday, April 06, 2014 4:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web 
ui screen captures

How do these use-cases relate to availability zones or cells? Is the assumption 
that the same private network is available across both? An application owner 
could look to protect availability, not just provide scalability.

On 6 April 2014 07:51, Samuel Bercovici samu...@radware.com wrote:
 Per the last LBaaS meeting.



 1.   Please find a list of use cases.

 https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1
 -mXuSINis/edit?usp=sharing



 a)  Please review and see if you have additional ones for the
 project-user

 b)  We can then choose 2-3 use cases to play around with how the CLI,
 API, etc. would look



 2.   Please find a document to place screen captures of web UI. I took
 the liberty to place a few links showing ELB.

 https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uH
 erSq3pLQA/edit?usp=sharing





 Regards,

 -Sam.










 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting reminder - 04/07/2014

2014-04-07 Thread Renat Akhmerov
Hi,

This is a reminder that we have another community meeting today in 
#openstack-meeting at 16:00 UTC.

Agenda:
Review action items
Current status (quickly by team members)
POC demo scenario readiness and ways to improve it
TaskFlow integration status
Open discussion

The agenda can also be found at https://wiki.openstack.org/wiki/Meetings/MistralAgenda, 
along with links to the previous meeting minutes and logs.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Doc for Trove ?

2014-04-07 Thread Andreas Jaeger
On 04/06/2014 08:32 PM, Anne Gentle wrote:
 [...]
 There is API documentation:
 http://git.openstack.org/cgit/openstack/database-api/tree/openstack-database-api/src/markdown/database-api-v1.md
 

We should publish that one on docs.openstack.org

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Deepak Shetty
Cinder provides a backup/restore API for Cinder volumes. Would that be used as
part of the higher-level VM import/export orchestration, or are the two
totally different? If different, how?
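
For reference, a rough sketch of those calls via python-cinderclient
(credentials, endpoint, and IDs are placeholder assumptions):

    from cinderclient.v1 import client

    cinder = client.Client('admin', 'PASSWORD', 'demo',
                           'http://controller:5000/v2.0')

    # Back up a volume to the configured backup store (e.g. Swift) ...
    backup = cinder.backups.create('VOLUME_UUID', name='pre-export')

    # ... and later restore that backup into a (new) volume.
    cinder.restores.restore(backup.id)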


On Mon, Apr 7, 2014 at 12:54 PM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:

 On 7 April 2014 09:06, Saju M sajup...@gmail.com wrote:

 Amazon provides option to Import/Export VM.
 http://aws.amazon.com/ec2/vm-import/

 Does OpenStack have the same feature?
 Has anyone started to implement this in OpenStack? If yes, please
 point me to the blueprint. I would like to work on that.


 To my knowledge, there are no blueprints for anything like this. I'm not
 sure whether it would fit into the realm of Glance or another project, but
 we would certainly love to see something like this work its way into
 OpenStack. Initially import only, but eventually export too.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-07 Thread Alun Champion
Yes, to ensure availability the same application could be deployed
across multiple availability zones/cells, as these can offer some
level of independence of risk. The use-cases seemed to expect the same
network, which may not be achievable given the above, but ideally one
should be able to load-balance across zones/cells. Other cloud
management solutions have moved to name-based LB because of the above;
is this something being considered?

On 7 April 2014 05:27, Samuel Bercovici samu...@radware.com wrote:
 Please elaborate: do you mean that the nodes could be in different 
 zones/cells or something else?


 -Original Message-
 From: Alun Champion [mailto:p...@achampion.net]
 Sent: Sunday, April 06, 2014 4:51 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and 
 web ui screen captures

 How do these use-cases relate to availability zones or cells? Is the 
 assumption that the same private network is available across both? An 
 application owner could look to protect availability, not just provide 
 scalability.

 On 6 April 2014 07:51, Samuel Bercovici samu...@radware.com wrote:
 Per the last LBaaS meeting.



 1.   Please find a list of use cases.

 https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1
 -mXuSINis/edit?usp=sharing



 a)  Please review and see if you have additional ones for the
 project-user

 b)  We can then choose 2-3 use cases to play around with how the CLI,
 API, etc. would look



 2.   Please find a document to place screen captures of web UI. I took
 the liberty to place a few links showing ELB.

 https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uH
 erSq3pLQA/edit?usp=sharing





 Regards,

 -Sam.










 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Carlos Gonçalves
It might be of help to you to know that python-glanceclient provides an 
'image-download' option.
Also, see 
https://blueprints.launchpad.net/horizon/+spec/download-images-and-snapshots
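
A rough sketch of such an export with python-glanceclient (the endpoint,
token, and image ID are placeholder assumptions):

    from glanceclient import Client

    glance = Client('1', 'http://controller:9292', token='AUTH_TOKEN')

    # Stream the image data to a local file, chunk by chunk; roughly
    # what 'glance image-download' does on the CLI.
    with open('exported-vm.img', 'wb') as f:
        for chunk in glance.images.data('IMAGE_UUID'):
            f.write(chunk)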

On 07 Apr 2014, at 08:06, Saju M sajup...@gmail.com wrote:

 Hi,
 
 Amazon provides option to Import/Export VM.
 http://aws.amazon.com/ec2/vm-import/
 
 Does OpenStack have the same feature?
 Has anyone started to implement this in OpenStack? If yes, please point me 
 to the blueprint. I would like to work on that.
 
 
 Regards
 Saju Madhavan
 +91 09535134654
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-07 Thread Russell Bryant
On 04/06/2014 03:22 PM, Vipul Sabhaya wrote:
 
 
 
 On Sun, Apr 6, 2014 at 9:36 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 04/06/2014 09:02 AM, Christopher Yeoh wrote:
  On Sun, Apr 6, 2014 at 10:06 AM, Hopper, Justin
 justin.hop...@hp.com wrote:
 
  Russell,
 
  At this point the guard that Nova needs to provide around the
 instance
  does not need to be complex.  It would even suffice to keep those
  instances hidden from such operations as "nova list" when invoked
  directly by the user.
 
 
  Are you looking for something to prevent accidental manipulation of an
  instance created by Trove or intentional changes as well? Whilst doing
  some filtering in nova list is simple on the surface, we don't try to
  keep server uuids secret in the API, so its likely that sort of
  information will leak through other parts of the API say through
 volume
  or networking interfaces. Having to enforce another level of
 permissions
  throughout the API would be a considerable change. Also it would
  introduce inconsistencies into the information returned by Nova - eg
  does quota/usage information returned to the user include the server
  that Trove created or is that meant to be adjusted as well?
 
  If you need a high level of support from the Nova API to hide servers,
  then if its possible, as Russell suggests to get what you want by
  building on top of the Nova API using additional identities then I
 think
  that would be the way to go. If you're just looking for a simple
 way to
  offer to Trove clients a filtered list of servers, then perhaps Trove
  could offer a server list call which is a proxy to Nova and
 filters out
  the servers which are Trove specific since Trove knows which ones it
  created.
 
 Yeah, I would *really* prefer to go the route of having trove own all
 instances from the perspective of Nova.  Trove is what is really
 managing these instances, and it already has to keep track of what
 instances are associated with which user.
 
 Although this approach would work, there are some manageability issues
 with it.  When Trove is managing hundreds of Nova instances, things
 tend to break down when looking directly at the Trove tenant through the
 Nova API and trying to piece together the associations, what resource
 failed to provision, etc.

This isn't specific enough to understand what the problem is.

 It sounds like what you really want is for Trove to own the instances,
 so I think we need to get down to very specifically what won't work with that
 approach.
 
 For example, is it a billing thing?  As it stands, all notifications for
 trove managed instances will have the user's info in them.  Do you not
 want to lose that?  If that's the problem, that seems solvable with a
 much simpler approach.
 
 
 We have for the most part solved the billing issue, since Trove does
 maintain the association and is able to send events on behalf of the
 correct user.  We would lose out on the additional layer of checks that
 Nova provides, such as rate limiting per project and quotas enforced at the
 Nova layer.  The Trove tenant would essentially need full access without
 any such limits.

Don't you get rate limiting and quotas through the trove API, instead?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Neutron] Tripleo Neutron

2014-04-07 Thread Dmitriy Shulyak
Hi Marios, thanks for raising this.

There is an in-progress blueprint that should address some issues with neutron
HA deployment:
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability.

Right now the neutron-dhcp-agent can be configured as active/active.

But the l3-agent and metadata-agent should still be active/passive;
AFAIK the best approach would be to use corosync+pacemaker, which is also
what the official documentation recommends:
http://docs.openstack.org/high-availability-guide/content/ch-network.html.

What choices other than corosync+pacemaker do we have for neutron HA?

Thanks



On Mon, Apr 7, 2014 at 11:18 AM, mar...@redhat.com mandr...@redhat.com wrote:

 Hello Tripleo/Neutron:

 I've recently found some cycles to look into Neutron. Mostly because
 networking rocks, but also so we can perhaps better address Neutron-related
 issues/needs down the line. I thought it may be good to ask the
 wider team if there are others who are also interested in
 Neutron + TripleO. We could form a loose focus group to discuss blueprints
 and review each other's code/chase up with cores. My search may have
 missed earlier discussions in openstack-dev [Tripleo][Neutron] and
 Tripleo blueprints, so my apologies if this has already been started
 somewhere. If any of the above is of interest then:

 * is the following list sane - does it make sense to pick these off, or
 are these 'nice to haves' that are not of immediate concern? Even just
 validating, prioritizing and recording concerns could be worthwhile, for
 example?
 * are you interested in discussing any of the following further and
 perhaps investigating and/or helping with blueprints where/if necessary?

 Right now I have:

 [Undercloud]:

 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and
 make sure it deploys and scales OK (tripleo-heat-templates/tuskar). This
 comes under the blueprint by lifeless at

 https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies

 2. HA the neutron node. For each of the neutron services/agents of interest
 (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any
 issues with running them in HA - perhaps there are none \o/? Useful
 whether using a dedicated Neutron node or just for HA on the
 undercloud-control node.

 3. Does it play with Ironic OK? I know there were some issues with
 Ironic and Neutron DHCP, though I think this has now been addressed.
 Other known/unknown bugs/issues with Ironic/Neutron - the baremetal
 driver will be deprecated at some point...

 4. Subnetting. Right now the undercloud uses a single subnet. Does it
 make sense to have multiple subnets here - one point I've heard is for
 segregation of your undercloud nodes (i.e. 1 broadcast domain).

 5. Security. Are we at least using Neutron as we should be in the
 Undercloud, security-groups, firewall rules etc?

 [Overcloud]:

 1. Configuration. In the overcloud it's just Neutron. So one concern
 is which neutron configuration options to expose via Tuskar-UI, and how.
 We would pass these through the deployment heat-template for the definition
 of Neutron plugin-specific .conf files (like dnsmasq-neutron.conf), for
 example, or for the initial definition of tenant subnets and router(s) for
 access to external networks.

 2. 3. ???


 thanks! marios

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-07 Thread Steven Hardy
On Sun, Apr 06, 2014 at 11:23:28AM -0700, Steven Dake wrote:
 Hi folks,
 
 There are two problems we should address regarding the growth and
 change to the HOT specification.
 
  First, our +2/+A process for normal changes doesn't totally make
  sense for hot_spec.rst.  We generally have some informal bar for
  controversial changes (which changes to hot_spec.rst generally are :).
  I would suggest raising the bar on hot_spec.rst to at least what is
  required for a heat-core team addition (currently 5 approval votes).
  This gives folks plenty of time to review and make sure the heat core
  team is committed to the changes, rather than a very small 2-member
  subset.  Of course a -2 vote from any heat-core would terminate the
  review as usual.

I agree with Steve Baker here. I think the first step is to get an approved
blueprint, with discussion before approval if needed; the second step
is the review (which should primarily evaluate the implementation, not
discuss the feature, IMO).

I do understand the motivation behind what you're proposing, and in general I
think it's already happening informally - I generally just +1 any change
that I think is controversial (or sometimes +2 with no +A if it's been
posted for a while and looks nearly ready to merge).

So how about:
- All hot spec functional changes must have an associated blueprint
- Where discussion is required, the blueprint can link a wiki page and we
  can discuss on the ML, during this phase the BP will be unapproved and in
  Discussion state for definition.
- heat-core should never approve any functional changes to the hot spec
  without an *approved* associated blueprint
- heat-core should never approve any functional change to the hot spec
  without positive feedback from a significant subset of the core team

On the last point, I think it's largely down to common sense along with the
existing process - if we've got feedback from many folks during the review
iterations, I personally don't think we need to strictly enforce 5 +2s on
the final patch if, e.g., some minor change was needed in the final
iteration but overall the change has had feedback indicating consensus.

 Second, there is a window where we say hey, we want this sweet new
 functionality, yet it remains unimplemented.  I suggest we create
 some special tag for these intrinsics/sections/features, so folks
 know they are unimplemented and NOT officially part of the
 specification until that is the case.

IMO we should not merge documentation/spec changes before the
implementation.  There should be a series of patches implementing the
feature, which ends with a documentation update to the spec and template
guide.

I think the place for documenting functionality which we want but is not
yet implemented is the blueprint, or a wiki page linked from the blueprint.

The review process should cater for this already IMO, if we only approve
HOT patches with approved blueprints (which sufficiently define the new
interface), and don't merge patches changing implementation affecting the
HOT interfaces unless an associated docs/spec patch is also posted at the
same time (or even the same patch if it's a simple change).

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-07 Thread Ladislav Smola

On 04/06/2014 11:27 PM, Steve Baker wrote:

On 05/04/14 04:47, Tomas Sedovic wrote:

Hi All,

I was wondering if the time has come to document what exactly we are
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away, and raise the necessary blueprints on the Heat and
TripleO sides.

(merge.py is a script we use to build the final TripleO Heat templates
from smaller chunks)

There probably isn't an immediate need for us to drop merge.py, but its
existence either indicates deficiencies within Heat or our unfamiliarity
with some of Heat's features (possibly both).

I worry that the longer we stay with merge.py the harder it will be to
move forward. We're still adding new features and fixing bugs in it (at
a slow pace but still).

Below is my understanding of the main merge.py functionality and a rough
plan of what I think might be a good direction to move to. It is almost
certainly incomplete -- please do poke holes in this. I'm hoping we'll
get to a point where everyone's clear on what exactly merge.py does and
why. We can then document that and raise the appropriate blueprints.


## merge.py features ##


1. Merging parameters and resources

Any uniquely-named parameters and resources from multiple templates are
put together into the final template.

If a resource of the same name is in multiple templates, an error is
raised - unless it's of a whitelisted type (nova server, launch
configuration, etc.), in which case they're all merged into a single
resource.

For example: merge.py overcloud-source.yaml swift-source.yaml

The final template has all the parameters from both. Moreover, these two
resources will be joined together:

 overcloud-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP


 swift-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Metadata:
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


The final template will contain:

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


We use this to keep the templates more manageable (instead of having one
huge file) and also to be able to pick the components we want: instead
of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
uses the VirtualPowerManager driver) or `ironic-vm-source`.



2. FileInclude

If you have a pseudo-resource with the type `FileInclude`, we will
look at the specified Path and SubKey and put the resulting dictionary in
its place:

 overcloud-source.yaml 

   NovaCompute0Config:
 Type: FileInclude
 Path: nova-compute-instance.yaml
 SubKey: Resources.NovaCompute0Config
 Parameters:
   NeutronNetworkType: gre
   NeutronEnableTunnelling: True


 nova-compute-instance.yaml 

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: {Ref: NeutronNetworkType}
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: {Ref: NeutronEnableTunnelling}
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

The result:

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: gre
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: True
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

Note the `NeutronNetworkType` and `NeutronEnableTunneling` parameter
substitution.

This is useful when 

[openstack-dev] [Tuskar][TripleO] Tuskar Planning for Juno

2014-04-07 Thread Tzu-Mainn Chen
Hi all,

One of the topics of discussion during the TripleO midcycle meetup a few weeks
ago was the direction we'd like to take Tuskar during Juno.  Based on the ideas
presented there, we've created a tentative list of items we'd like to address:

https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning 

Please feel free to take a look and question, comment, or criticize!


Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-07 Thread Steven Hardy
On Mon, Apr 07, 2014 at 09:30:50AM +0200, Thomas Spatzier wrote:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 06/04/2014 22:32
  Subject: Re: [openstack-dev] [heat] Managing changes to the Hot
  Specification (hot_spec.rst)
 
  On 07/04/14 06:23, Steven Dake wrote:
   Hi folks,
  
   There are two problems we should address regarding the growth and
   change to the HOT specification.
  
   First, our +2/+A process for normal changes doesn't totally make sense
   for hot_spec.rst.  We generally have some informal bar for
   controversial changes (which changes to hot_spec.rst generally are :).
   I would suggest raising the bar on hot_spec.rst to at least what is
   required for a heat-core team addition (currently 5 approval votes).
   This gives folks plenty of time to review and make sure the heat core
   team is committed to the changes, rather than a very small 2-member
   subset.  Of course a -2 vote from any heat-core would terminate the
   review as usual.
  
   Second, there is a window where we say hey, we want this sweet new
   functionality, yet it remains unimplemented.  I suggest we create
   some special tag for these intrinsics/sections/features, so folks know
   they are unimplemented and NOT officially part of the specification
   until that is the case.
  
   We can call this tag something simple like
   *standardization_pending_implementation* for each section which is
   unimplemented.  A review which proposes this semantic is here:
   https://review.openstack.org/85610
  
   My goal is not to add more review work to people's time, but I really
   believe any changes to the HOT specification have a profound impact on
   all things Heat, and we should take special care when considering
   these changes.
  
   Thoughts or concerns?
 
 So in general, I think that might be a good approach, since doing this through
 gerrit reviews seems to work pretty efficiently.
 However, we must be careful to really not confuse users, never forget to
 update the flags (probably by ensuring in the feature implementation
 patches that the flag is removed), etc.
 
  How about we just use the existing blueprint approval process for
  changes to the HOT spec? The PTL can make the call whether the change
  can be approved by the PTL or whether it requires discussion and
  consensus first.
 
 I'm just a normal Heater (not core), so I might not have the right level of
 insight into the process, but to me it looks like the relation between BPs
 and patches is not 100% strict. I.e., even if BPs exist but are in a
 somewhat abstract shape, changes get in, or changes get in without BPs. So
 for the BP-based approach to work, this would have to be handled more
 tightly... but I'm not suggesting unpragmatic development processes here -
 maybe just apply this to hot_spec?

IMO patches which add or modify interfaces should always have a blueprint,
and generally any patch fixing a user-visible problem should have a bug.

Sometimes, if a patch is refactoring, doing cosmetic cleanups, or fixing a
non-user-visible problem, I think it's fine to merge it without a BP or a
bug, but if we're routinely merging user-visible changes without either,
then we need to stop (I don't actually think we are btw..)

So perhaps we just communicate a reminder to heat-core that they should pay
particular attention to the hot spec (and any other user-visible changes
such as additions to the API) to ensure changes are tracked appropriately
in launchpad?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Neutron] Tripleo Neutron

2014-04-07 Thread Roman Podoliaka
Hi all,

Perhaps we should file a design session for Neutron-specific questions?

 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and 
 make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This 
 comes under the blueprint by lifeless at 
 https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies

As far as I understand, this must be pretty straightforward: just
reuse the neutron elements we need when building an image for a
neutron node.

 2. HA the neutron node. For each neutron services/agents of interest 
 (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any 
 issues with running these in HA - perhaps there are none \o/? Useful 
 whether using a dedicated Neutron node or just for HA the 
 undercloud-control node

- HA for the DHCP-agent is provided out of the box - we can just use the
'dhcp_agents_per_network' option
(https://github.com/openstack/tripleo-image-elements/blob/master/elements/neutron/os-apply-config/etc/neutron/neutron.conf#L59)

- for L3-agent there is a BP started, but the patches haven't been
merged yet  - 
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

- the API must be no different from the other API services we have

 3. Does it play with Ironic OK? I know there were some issues with Ironic 
 and Neutron DHCP, though I think this has now been addressed. Other 
 known/unkown bugs/issues with Ironic/Neutron - the baremetal driver will be 
 deprecated at some point...

You must be talking about specifying PXE boot options by means of the
neutron-dhcp-agent. Yes, this has been merged into Neutron for a while
now (https://review.openstack.org/#/c/30441/).
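
For example, a port's PXE options can be set through that extension with
python-neutronclient; a rough sketch (IDs, credentials, and addresses are
placeholder assumptions):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='PASSWORD',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Create a port whose DHCP lease points the booting node at a
    # PXE bootfile on a given TFTP server.
    neutron.create_port({'port': {
        'network_id': 'NETWORK_UUID',
        'extra_dhcp_opts': [
            {'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'},
            {'opt_name': 'tftp-server', 'opt_value': '192.0.2.1'},
        ],
    }})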

Thanks,
Roman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-07 Thread Matthew Booth
On 07/04/14 06:20, Jay Pipes wrote:
 On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
 Well, it seems to me that the problem is the above blueprint and the code 
 it introduced. This is an anti-feature IMO, and probably the best solution 
 would be to remove the above code and go back to having a single  
 nova-compute managing a single vCenter cluster, not multiple ones.

 The problem is not introduced by managing multiple clusters from a single 
 nova-compute proxy node.  
 
 I strongly disagree.
 
 Internally this proxy driver is still presenting the compute-node for each 
 of the clusters it's managing.
 
 In what way?
 
  What we need to think about is applicability of the live migration use case 
 when a cluster is modelled as a compute.   Since the cluster is modelled 
 as a compute, it is assumed that a typical use case of live-move is taken 
 care of by the underlying cluster itself.   With this there are other use 
 cases which are no-op today like host maintenance mode, live move, setting 
 instance affinity, etc. In order to resolve this I was thinking of 
 a way to expose operations on individual ESX hosts, like putting a host in 
 maintenance mode,  live move, instance affinity etc., by introducing Parent 
 - Child compute node concept.   Scheduling can be restricted to Parent 
 compute node and Child compute node can be used for providing more drill 
 down on compute and also enable additional compute operations.   Any 
 thoughts on this?
 
 The fundamental problem is that hacks were put in place in order to make
 Nova defer control to vCenter, when the design of Nova and vCenter are
 not compatible, and we're paying the price for that right now.
 
 All of the operations you describe above -- putting a host in
 maintenance mode, live-migration of an instance, ensuring a new instance
 is launched near or not-near another instance -- depend on a fundamental
 design feature in Nova: that a nova-compute worker fully controls and
 manages a host that provides a place to put server instances. We have
 internal driver interfaces for the *hypervisor*, not for the *manager of
 hypervisors*, because, you know, that's what Nova does.

I'm going to take you to task here for use of the word 'fundamental'.
What does Nova do? Apparently: 'OpenStack Nova provides a cloud
computing fabric controller, supporting a wide variety of virtualization
technologies, including KVM, Xen, LXC, VMware, and more. In addition to
its native API, it includes compatibility with the commonly encountered
Amazon EC2 and S3 APIs.' There's nothing in there about the ratio of
Nova instances to hypervisors: that's an implementation detail. Now this
change may or may not sit well with design decisions which have been
made in the past, but the concept of managing multiple clusters from a
single Nova instance is certainly not fundamentally wrong. It may not be
pragmatic; it may require further changes to Nova which were not made,
but there is nothing about it which is fundamentally at odds with the
stated goals of the project.

Why did I bother with that? I think it's in danger of being lost. Nova
has been around for a while now and it has a lot of code and a lot of
developers behind it. We need to remember, though, that it's all for
nothing if nobody wants to use it. VMware is different, but not wrong.
Let's stay fresh.

 The problem with all of the vCenter stuff is that it is trying to say to
 Nova don't worry, I got this but unfortunately, Nova wants and needs
 to manage these things, not surrender control to a different system that
 handles orchestration and scheduling in its own unique way.

Again, I'll flip that round. Nova *currently* manages these things, and
working efficiently with a platform which also does these things would
require rethinking some design above the driver level. It's not
something we want to do naively, which the VMware driver is suffering
from in this area. It may take time to get this right, but we shouldn't
write it off as fundamentally wrong. It's useful to users and not
fundamentally at odds with the project's goals.

 If a shop really wants to use vCenter for scheduling and orchestration
 of server instances, what exactly is the point of using OpenStack Nova
 to begin with? What exactly is the point of trying to use OpenStack Nova
 for scheduling and host operations when you've already shelled out US
 $6,000 for vCenter Server and a boatload more money for ESX licensing?

I confess I wondered this myself. However, I have now spoken to real
people who are spending real money doing exactly this. The drivers seem
to be:

* The external API
* A heterogeneous cloud

vSphere isn't really designed for the former and doesn't do it well. It
obviously doesn't help with the latter at all. For example, users want
to be able to give non-admin customers the ability to deploy across both
KVM and VMware.

To my mind, a VMware cluster is an obvious deployment target. I think
it's reasonable for Nova 

[openstack-dev] [Cinder] Icehouse RC2 available

2014-04-07 Thread Thierry Carrez
Hello everyone,

Due to various release-critical issues detected in Cinder icehouse RC1,
a new release candidate was just generated. You can find a list of the
10 bugs fixed and a link to the RC2 source tarball at:

https://launchpad.net/cinder/icehouse/icehouse-rc2

Unless new release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the 2014.1 final
version on April 17 next week. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/cinder/tree/milestone-proposed

If you find an issue that could be considered release-critical and
justify a release candidate respin, please file it at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-07 Thread Eugene Nikanorov
Hi Sam,

I find Google Docs a little too heavy for ML discussion.
So I'd like to extract the gist of the overall use case that we want to use
when discussing an API (for ex, single-call API).
Here it is as I see it (Sam, correct me if I'm wrong):

A user wants to set up a web application that is available via both HTTP and
HTTPS and consists of several parts.
One part, http://ip-addr/part1, is only available via HTTP; another part,
http://ip-addr/part2, is available via both HTTP and HTTPS.
The webapp parts part1 and part2 are served by two different groups of nodes
which reside on different (option: same) private networks.

Please provide a call or a set of calls that would allow configuring
balancing for such an app. Consider additional options like HA.
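
For concreteness, here is one hypothetical shape such a single-call
configuration could take. This is purely illustrative - the field names,
structure, and addresses below are invented for discussion, not a proposed
API:

    # Hypothetical single-call payload for the use case above; every field
    # name and value here is invented for illustration only.
    load_balancer = {
        "vip_address": "203.0.113.10",
        "listeners": [
            {"protocol": "HTTP", "port": 80,
             "l7_rules": [
                 {"path": "/part1", "pool": "pool-part1"},
                 {"path": "/part2", "pool": "pool-part2"},
             ]},
            # HTTPS serves only part2 in this scenario.
            {"protocol": "HTTPS", "port": 443,
             "l7_rules": [
                 {"path": "/part2", "pool": "pool-part2"},
             ]},
        ],
        "pools": [
            {"name": "pool-part1", "subnet": "private-subnet-1",
             "members": ["10.0.1.10", "10.0.1.11"]},
            {"name": "pool-part2", "subnet": "private-subnet-2",
             "members": ["10.0.2.10", "10.0.2.11"]},
        ],
        "ha": {"enabled": True},
    }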

Thanks,
Eugene.



On Mon, Apr 7, 2014 at 4:12 PM, Alun Champion p...@achampion.net wrote:

 Yes, to ensure availability the same application could be deployed
 across multiple availability zones/cells, as these can offer some
 level of independence of risk. The use-cases seemed to expect same
 network, which may not be achievable given the above, but ideally
 should be able to load-balance across zones/cells. Other cloud
 management solutions have moved to name based LB because of the above,
 is this something being considered?

 On 7 April 2014 05:27, Samuel Bercovici samu...@radware.com wrote:
  Please elaborate, do you mean that the nodes could be on different
 zones/cells or something else?
 
 
  -Original Message-
  From: Alun Champion [mailto:p...@achampion.net]
  Sent: Sunday, April 06, 2014 4:51 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases
 and web ui screen captures
 
  How do these use-cases relate to availability zones or cells, is the
 assumption that the same private network is available across both? An
 application owner could look to protect availability not just provide
 scalability.
 
  On 6 April 2014 07:51, Samuel Bercovici samu...@radware.com wrote:
  Per the last LBaaS meeting.
 
 
 
  1.   Please find a list of use cases.
 
  https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1
  -mXuSINis/edit?usp=sharing
 
 
 
  a)  Please review and see if you have additional ones for the
  project-user
 
  b)  We can then chose 2-3 use cases to play around with how the CLI,
  API, etc. would look
 
 
 
  2.   Please find a document to place screen captures of web UI. I
 took
  the liberty to place a few links showing ELB.
 
  https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uH
  erSq3pLQA/edit?usp=sharing
 
 
 
 
 
  Regards,
 
  -Sam.
 
 
 
 
 
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-07 Thread Solly Ross
@Matt Booth: I think you make a lot of good points, but I think the main gist 
of the opposing argument, so to speak,
is that currently, the way we differentiate between potential compute resources 
(whether they be an individual
hypervisor or a cluster) is by having each have its own compute node.

I think some of the reluctance here is to change that model -- the idea that a 
Nova compute node represents one
resource which is, for all intents and purposes, atomic to OpenStack.  While I 
get your point that this is an
implementation detail, I think it's a rather large one, and a fundamental 
assumption in current OpenStack code
(for the most part).  If we change that assumption, we shouldn't really change 
it piecemeal.

IMHO, this model (compute nodes as atomic resources) fits the overall design 
well.  That being said,
I personally would not be averse to something like expanding a NUMA-style API 
to cover the cluster, as
I think this continues to fit the existing model -- a NUMA-style API breaks 
down an atomic resource,
so for a VMWare cluster that would allow tuning to individual hypervisors, 
while for an individual
hypervisor that would allow tuning to individual cores, etc.

Best Regards,
Solly Ross

- Original Message -
From: Matthew Booth mbo...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, April 7, 2014 10:47:35 AM
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute

On 07/04/14 06:20, Jay Pipes wrote:
 On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
 Well, it seems to me that the problem is the above blueprint and the code 
 it introduced. This is an anti-feature IMO, and probably the best solution 
 would be to remove the above code and go back to having a single  
 nova-compute managing a single vCenter cluster, not multiple ones.

 Problem is not introduced by managing multiple clusters from single 
 nova-compute proxy node.  
 
 I strongly disagree.
 
 Internally this proxy driver is still presenting the compute-node for each 
 of the cluster its managing.
 
 In what way?
 
  What we need to think about is applicability of the live migration use case 
 when a cluster is modelled as a compute.   Since the cluster is modelled 
 as a compute, it is assumed that a typical use case of live-move is taken 
 care by the underlying cluster itself.   With this there are other use 
 cases which are no-op today like host maintenance mode, live move, setting 
 instance affinity etc., In order to resolve this I was thinking of 
 A way to expose operations on individual ESX Hosts like Putting host in 
 maintenance mode,  live move, instance affinity etc., by introducing Parent 
 - Child compute node concept.   Scheduling can be restricted to Parent 
 compute node and Child compute node can be used for providing more drill 
 down on compute and also enable additional compute operations.Any 
 thoughts on this?
 
 The fundamental problem is that hacks were put in place in order to make
 Nova defer control to vCenter, when the design of Nova and vCenter are
 not compatible, and we're paying the price for that right now.
 
 All of the operations you describe above -- putting a host in
 maintenance mode, live-migration of an instance, ensuring a new instance
 is launched near or not-near another instance -- depend on a fundamental
 design feature in Nova: that a nova-compute worker fully controls and
 manages a host that provides a place to put server instances. We have
 internal driver interfaces for the *hypervisor*, not for the *manager of
 hypervisors*, because, you know, that's what Nova does.

I'm going to take you to task here for use of the word 'fundamental'.
What does Nova do? Apparently: 'OpenStack Nova provides a cloud
computing fabric controller, supporting a wide variety of virtualization
technologies, including KVM, Xen, LXC, VMware, and more. In addition to
its native API, it includes compatibility with the commonly encountered
Amazon EC2 and S3 APIs.' There's nothing in there about the ratio of
Nova instances to hypervisors: that's an implementation detail. Now this
change may or may not sit well with design decisions which have been
made in the past, but the concept of managing multiple clusters from a
single Nova instance is certainly not fundamentally wrong. It may not be
pragmatic; it may require further changes to Nova which were not made,
but there is nothing about it which is fundamentally at odds with the
stated goals of the project.

Why did I bother with that? I think it's in danger of being lost. Nova
has been around for a while now and it has a lot of code and a lot of
developers behind it. We need to remember, though, that it's all for
nothing if nobody wants to use it. VMware is different, but not wrong.
Let's stay fresh.

 The problem with all of the vCenter stuff is that it is trying to say to
 Nova don't 

Re: [openstack-dev] [Cinder] a question about os-volume_upload_image

2014-04-07 Thread Mike Perez
On 22:55 Wed 02 Apr , Mike Perez wrote:
 On 02:58 Thu 03 Apr , Bohai (ricky) wrote:
  Hi,
  
  When I use an image to create a cinder volume, I found image's metadata is 
  saved into DB table volume_glance_metadata.
  
  But when I use os-volume_upload_image to create an image from a cinder
  volume, the metadata in volume_glance_metadata
  is not set back to the newly created image. I can't find the reason for
  this.
  Anyone can give me a hint?
  
  I am a newbie for cinder and i apologize if I am missing something very 
  obvious.
  
  Best regards to you.
  Ricky
 
 Hi Ricky,
 
 The glance metadata is currently only stored for creating new volumes, new
 snapshots or backups. It's used for in the volume creation to know if the
 volume is bootable.
 
 -- 
 Mike Perez

Correction to myself, the glance metadata is no longer used for determining if
a volume is bootable. 

https://github.com/openstack/cinder/commit/c8f814569d07544734f10f134e146ce981639e07

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-07 Thread Tomas Sedovic
On 06/04/14 23:27, Steve Baker wrote:
 On 05/04/14 04:47, Tomas Sedovic wrote:
 Hi All,

snip

 The maintenance burden of merge.py can be gradually reduced if features
 in it can be deleted when they are no longer needed. At some point in
 this process merge.py will need to accept HOT templates, and risk of
 breakage during this changeover would be reduced the smaller merge.py is.
 
 How about this for the task order?
 1. remove OpenStack::ImageBuilder::Elements support from merge.py
 2. move to software-config based templates
 3. remove the following from merge.py
3.1. merging params and resources
3.2. FileInclude
3.3. OpenStack::Role
 4. port tripleo templates and merge.py to HOT
 5. use some HOT replacement for Merge::Map, delete Merge::Map from tripleo
 6. move to resource providers/scaling groups for scaling
 7. rm -f merge.py

I like this.

Clint's already working on #2. I can tackle #1 and help review  test
the software config changes. We can deal with the rest afterwards.

One note on 3.1: until we switch to provider resources or get_file, we
can't drop the merging params and resources feature.

We can drop FileInclude, OpenStack::Role and deep merge (e.g. joining
`notCompute0Config` from `overcloud-source.yaml` and `swift-source.yaml`
example from my email), but we'll have to keep the functionality of
putting multiple templates together for a bit longer.

That said, I don't think switching to provider resources is going to be
a drastic change.

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-07 Thread Steven Hardy
On Sun, Apr 06, 2014 at 10:22:15PM -0400, Michael Elder wrote:
 If Keystone is configured with an external identity provider (LDAP, 
 OpenID, etc), how does the creation of a new user per resource affect that 
 external identity source? 

My understanding is that it should be possible to configure keystone to use
multiple (domain specific) identity backends.

So a possible configuration could be to have real users backed by the
LDAP directory, and have all projects/users associated with heat (which are
created in a special heat domain, completely separated from real users)
backed by some other identity backend, e.g. SQL.

http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers

This is something we should definitely test, and I'd welcome feedback from
the keystone folks, or anyone who has experience with this functionality,
as to how well it works in Icehouse.
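
For illustration, the kind of configuration described above would look
roughly like this - a sketch only; the option names and driver path are
taken from memory of the documentation linked above and should be verified
against the actual Keystone release:

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = true
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.heat.conf
    # The heat-only domain is backed by SQL while real users stay in LDAP.
    [identity]
    driver = keystone.identity.backends.sql.Identity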

 My suggestion is broader, but in the same spirit: Could we consider 
 defining an _authorization_ stack token (thanks Adam), which acts like 
 an OAuth token (by delegating a set of actionable behaviors that a token 
 holder may perform). The stack token would be managed within the stack 
 in some protected form and used for any activities later performed on 
 resources which are managed by the stack. Instead of imposing user 
 administration tasks like creating users, deleting users, etc against the 
 Keystone database, Heat would instead provide these stack tokens to any 
 service which it connects to when managing a resource. In fact, there's no 
 real reason that the stack token couldn't piggyback on the existing 
 Keystone token mechanism, except that it would be potentially longer lived 
 and restricted to the specific set of resources for which it was granted. 

So oauth was considered before we implemented the domain-isolated users,
but it was not possible to pursue due to lack of client support:

https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers
https://blueprints.launchpad.net/heat/+spec/oauth-credentials-resource

The main issue with tokens as provided by keystone today, is that they will
expire.  That is the main reason for choosing to create a user rather than,
e.g., a token limited in scope via a trust - if you put it in the instance,
you have to refresh it before expiry, which may not always be possible.

Additionally, you don't really want the credentials deployed inside an
(implicitly untrusted) instance to be derived from the operator who created the
stack - you want something associated with the stack but completely
isolated from real users.

Your stack token approach above appears to indicate that Heat would
somehow generate, and maintain the lifecycle of, some special token which
is not owned by keystone.  This idea has been discussed, and rejected,
because we would prefer to make use of keystone functionality instead of
having the burden of maintaining our own bespoke authorization system.

If implementing something based on oauth, or some sort of scope-limited
non-expiring token, becomes possible, it's quite likely we can provide the
option to do something other than the domain isolated users which has been
implemented for Icehouse.

Ultimately, we had to use what was available in keystone *now* to enable
delivery of something which worked for Icehouse, hence the decision to use
what is available in the keystone v3 API.

 Not sure if email is the best medium for this discussion, so if there's a 
 better option, I'm happy to follow that path as well. 

I think it's fine, and I'm happy to get constructive feedback on the
current approach, along with ideas for roadmap items which can potentially
improve it.

I have proposed this summit session which may provide more opportunity for
discussion, if accepted:

http://summit.openstack.org/cfp/details/190

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of ovs-firewall-driver blueprint?

2014-04-07 Thread Amir Sadoughi
Hi Vishal,

I’ve restarted my work on the blueprint last week now that Juno is open for 
development and OVS 2.1.0 is available, targeting juno-1. My plan is to have a 
working implementation available by the summit design discussion to make sure 
we cover all our bases. Between the blueprint page you referenced[0], this wiki 
page[1], and the Gerrit review[2] site, that is all the up-to-date information. 
Outside of those links, any other updates will be on this mailing list or the 
ML2 weekly IRC meeting.

I’m open to collaboration and interested in anything you are able to 
contribute. Do you have any existing work to share or feedback?

Amir

[0] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
[1] https://wiki.openstack.org/wiki/Neutron/blueprint_ovs-firewall-driver
[2] https://review.openstack.org/#/q/topic:bp/ovs-firewall-driver,n,z

On Apr 7, 2014, at 3:33 AM, Thapar, Vishal (HP Networking) 
vtha...@hp.com wrote:

Hi,

I am working on an OVS based implementation of Neutron Security Groups and came 
across this blueprint:
https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

I’ve gone through every mail, document and IRC chat log on this to get a good 
grasp of history on this, but couldn’t find any recent activity on this 
blueprint. It is listed on Meetings page on wiki but last meeting seems to have 
been held last year in December. I’ve just started working on prototyping this 
and would like to work with community to see it to completion.

Could anyone suggest on how to proceed on this? Do I need to request a meeting 
for this?

Thanks and Regards,
Vishal.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Security audit of OpenStack projects

2014-04-07 Thread Nathan Kinder
Hi,

We don't currently collect high-level security related information about
the projects for OpenStack releases.  Things like the crypto algorithms
that are used or how we handle sensitive data aren't documented anywhere
that I could see.  I did some thinking on how we can improve this.  I
wrote up my thoughts in a blog post, which I'll link to instead of
repeating everything here:

  http://blog-nkinder.rhcloud.com/?p=51

tl;dr - I'd like to have the development teams for each project keep a
wiki page updated that collects some basic security information.  Here's
an example I put together for Keystone for Icehouse:

  https://wiki.openstack.org/wiki/Security/Icehouse/Keystone

There would need to be an initial effort to gather this information for
each project, but it shouldn't be a large effort to keep it updated once
we have that first pass completed.  We would then be able to have a
comprehensive overview of this security information for each OpenStack
release, which is really useful for those evaluating and deploying
OpenStack.

I see some really nice benefits in collecting this information for
developers as well.  We will be able to identify areas of weakness,
inconsistency, and duplication across the projects.  We would be able to
use this information to drive security related improvements in future
OpenStack releases.  It likely would even make sense to have something
like a cross-project security hackfest once we have taken a pass through
all of the integrated projects so we can have some coordination around
security related functionality.

For this effort to succeed, it needs buy-in from each individual
project.  I'd like to gauge the interest on this.  What do others think?
 Any and all feedback is welcome!

Thanks,
-NGK

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Meeting Monday April 7th at 20:00 UTC

2014-04-07 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday April 7, at
20:00 UTC  in #openstack-meeting-alt

The meeting agenda is available here:
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcome
to add agenda items.

You can check this link
http://time.is/0800PM_7_Apr_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
if you need to figure out what 20:00 UTC means in your time.

-Douglas Mendizabal




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Some Thoughts on Log Message ID Generation Blueprint

2014-04-07 Thread Ben Nemec

On 04/03/2014 10:19 PM, Peng Wu wrote:

Hi,

   Recently I read the Separate translation domain for log messages
blueprint[1], and I found that we can store both English Message Log and
Translated Message Log with some configurations.

   I am an i18n Software Engineer, and we are thinking about Add message
IDs for log messages blueprint[2]. My thought is that if we can store
both English Message Log and Translated Message Log, we can skip the
need of Log Message ID Generation.

   I also commented the Add message IDs for log messages blueprint[2].

   If the servers always store English Log Messages, maybe we don't need
the Add message IDs for log messages blueprint[2] any more.

   Feel free to comment this proposal.

Thanks,
   Peng Wu

Refer URL:
[1]
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
[2] https://blueprints.launchpad.net/oslo/+spec/log-messages-id


As I recall, there were more reasons for log message ids than just i18n 
issues.  There's also the fact that an error message might change 
significantly from one release to another, but if it's still addressing 
the same issue then the message id could be left alone so searching for 
it would still return relevant results, regardless of the release.


That said, I don't know if anyone is actually working on the message id 
blueprint so I'm not sure how much it matters at this point. :-)


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Mark Washenberger
Hi Saju,

VM imports are likely to show up in Glance under this blueprint:
https://blueprints.launchpad.net/glance/+spec/new-upload-workflow

Cheers,
markwash


On Mon, Apr 7, 2014 at 12:06 AM, Saju M sajup...@gmail.com wrote:

 Hi,

 Amazon provides option to Import/Export VM.
 http://aws.amazon.com/ec2/vm-import/

 does OpenStack has same feature ?
 Have anyone started to implement this in Openstack ?. If yes, Please point
 me to the blueprint. I would like to work on that.


 Regards
 Saju Madhavan
 +91 09535134654

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Problem with kombu version.

2014-04-07 Thread Dmitry Teselkin
Hello,

I'm working on Murano integration into FUEL-5.0, and have faced the
following problem: our current implementation depends on the 'kombu.five'
module, but this module (actually a single file) is only available starting
with kombu 3.0. So this means that the murano-api component depends on kombu
>=3.0. This meets the OpenStack global requirements list, where kombu
>=2.4.8 is declared. Unfortunately, this also means that a system-wide
version upgrade is required.

So the question is - what is the right way to solve the problem? I see the
following options:
1. change the kombu version requirement to >=3.0 for the entire FUEL
installation - it doesn't break the global requirements constraint, but some
other FUEL components could be affected.
2. replace the calls to functions from 'kombu.five' and use the existing
version - I'm not sure if that's possible; I'm awaiting an answer from our
developers.

Which is the most suitable variant, or are there any other solutions for
the problem?
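
If option 2 turns out to be feasible, a guarded import is one common pattern.
A minimal sketch, assuming the dependency is on something like
kombu.five.monotonic - the actual kombu.five symbols Murano uses may differ:

    # Illustrative shim only; adapt it to whichever kombu.five functions
    # are actually called.
    try:
        from kombu.five import monotonic  # kombu >= 3.0
    except ImportError:
        from time import time as monotonic  # fallback for kombu 2.x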


-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-07 Thread Aditya Thatte
We are implementing that use case. My talk has been selected for the summit;
please do visit:
http://openstacksummitmay2014atlanta.sched.org/mobile/#session:c0d9f8aefb90f93cfc8fc66b67b8403d
On 07-Apr-2014 6:37 PM, Mark Washenberger mark.washenber...@markwash.net
wrote:

 Hi Saju,

 VM imports are likely to show up in Glance under this blueprint:
 https://blueprints.launchpad.net/glance/+spec/new-upload-workflow

 Cheers,
 markwash


 On Mon, Apr 7, 2014 at 12:06 AM, Saju M sajup...@gmail.com wrote:

 Hi,

 Amazon provides option to Import/Export VM.
 http://aws.amazon.com/ec2/vm-import/

 does OpenStack has same feature ?
 Have anyone started to implement this in Openstack ?. If yes, Please
 point me to the blueprint. I would like to work on that.


 Regards
 Saju Madhavan
 +91 09535134654

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of ovs-firewall-driver blueprint?

2014-04-07 Thread Kyle Mestery
On Mon, Apr 7, 2014 at 11:01 AM, Amir Sadoughi
amir.sadou...@rackspace.com wrote:
 Hi Vishal,

 I've restarted my work on the blueprint last week now that Juno is open for
 development and OVS 2.1.0 is available, targeting juno-1. My plan is to have
 a working implementation available by the summit design discussion to make
 sure we cover all our bases. Between the blueprint page you referenced[0],
 this wiki page[1], and the Gerrit review[2] site, that is all the up-to-date
 information. Outside of those links, any other updates will be on this
 mailing list or the ML2 weekly IRC meeting.

 I'm open to collaboration and interested in anything you are able to
 contribute. Do you have any existing work to share or feedback?

 Amir

Amir, I looked at the review, and it's coming along nicely! Any chance
you can post a
rebased version this week to get a clean Jenkins run?

Thanks!
Kyle

 [0] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
 [1] https://wiki.openstack.org/wiki/Neutron/blueprint_ovs-firewall-driver
 [2] https://review.openstack.org/#/q/topic:bp/ovs-firewall-driver,n,z

 On Apr 7, 2014, at 3:33 AM, Thapar, Vishal (HP Networking) vtha...@hp.com
 wrote:

 Hi,

 I am working on an OVS based implementation of Neutron Security Groups and
 came across this blueprint:
 https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

 I've gone through every mail, document and IRC chat log on this to get a
 good grasp of history on this, but couldn't find any recent activity on this
 blueprint. It is listed on Meetings page on wiki but last meeting seems to
 have been held last year in December. I've just started working on
 prototyping this and would like to work with community to see it to
 completion.

 Could anyone suggest on how to proceed on this? Do I need to request a
 meeting for this?

 Thanks and Regards,
 Vishal.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of ovs-firewall-driver blueprint?

2014-04-07 Thread Kyle Mestery
Also, forgot to add, please add an item on this week's ML2 meeting agenda [1]
so we can discuss this during the meeting this week.

[1] https://wiki.openstack.org/wiki/Meetings/ML2

On Mon, Apr 7, 2014 at 11:01 AM, Amir Sadoughi
amir.sadou...@rackspace.com wrote:
 Hi Vishal,

 I've restarted my work on the blueprint last week now that Juno is open for
 development and OVS 2.1.0 is available, targeting juno-1. My plan is to have
 a working implementation available by the summit design discussion to make
 sure we cover all our bases. Between the blueprint page you referenced[0],
 this wiki page[1], and the Gerrit review[2] site, that is all the up-to-date
 information. Outside of those links, any other updates will be on this
 mailing list or the ML2 weekly IRC meeting.

 I'm open to collaboration and interested in anything you are able to
 contribute. Do you have any existing work to share or feedback?

 Amir

 [0] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
 [1] https://wiki.openstack.org/wiki/Neutron/blueprint_ovs-firewall-driver
 [2] https://review.openstack.org/#/q/topic:bp/ovs-firewall-driver,n,z

 On Apr 7, 2014, at 3:33 AM, Thapar, Vishal (HP Networking) vtha...@hp.com
 wrote:

 Hi,

 I am working on an OVS based implementation of Neutron Security Groups and
 came across this blueprint:
 https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

 I've gone through every mail, document and IRC chat log on this to get a
 good grasp of history on this, but couldn't find any recent activity on this
 blueprint. It is listed on Meetings page on wiki but last meeting seems to
 have been held last year in December. I've just started working on
 prototyping this and would like to work with community to see it to
 completion.

 Could anyone suggest on how to proceed on this? Do I need to request a
 meeting for this?

 Thanks and Regards,
 Vishal.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-07 Thread Day, Phil
Hi Sylvain,

There was a similar thread on this recently - which might be worth reviewing:   
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html

Some interesting use cases were posted, and I don't think a conclusion was
reached, which seems to suggest this might be a good case for a session in 
Atlanta.

Personally I'm not sure that selecting more than one AZ really makes a lot of 
sense - they are generally objects which are few in number and large in scale, 
so if for example there are 3 AZs and you want to create two servers in 
different AZs, does it really help if you can do the sequence:


-  Create a server in any AZ

-  Find the AZ the server is in

-  Create a new server in any of the two remaining AZs

Rather than just picking two from the list to start with ?

If you envisage a system with many AZs, and thereby allow users some pretty
fine-grained choices about where to place their instances, then I think you'll
end up with capacity management issues.

If the use case is more to get some form of server isolation, then 
server-groups might be worth looking at, as these are dynamic and per user.

I can see a case for allowing more than one set of mutually exclusive host 
aggregates - at the moment that's a property implemented just for the set of 
aggregates that are designated as AZs, and generalizing that concept so that 
there can be other sets (where host overlap is allowed between sets, but not 
within a set) might be useful.

Phil

From: Murray, Paul (HP Cloud Services)
Sent: 03 April 2014 16:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

Hi Sylvain,

I would go with keeping AZs exclusive. It is a well-established concept even if 
it is up to providers to implement what it actually means in terms of 
isolation. Some good use cases have been presented on this topic recently, but 
for me they suggest we should develop a better concept rather than bend the 
meaning of the old one. We certainly don't have hosts in more than one AZ in HP 
Cloud and I think some of our users would be very surprised if we changed that.

Paul.

From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 03 April 2014 15:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

+1 for AZs not sharing hosts.

Because it's the only mechanism that allows us to segment the datacenter.
Otherwise we cannot provide redundancy to clients, except by using Regions,
which are dedicated, network-separated infrastructure, or the anti-affinity
filter, which IMO is not pragmatic as it has a tendency toward abusive usage.
Why sacrifice this power just so that users can select the types of physical
hosts they desire? The latter can be exposed using flavor metadata, which is a
lot safer and more controllable than using AZs. If someone insists that we
really need to let users choose the types of physical hosts, then I suggest
creating a new hint and using aggregates with it. Don't sacrifice AZ
exclusivity!

Btw, there is a datacenter design called dual-room [1] which I think fits AZs
best, making your cloud redundant even with one datacenter.

Best regards,

Toan

[1] IBM and Cisco: Together for a World Class Data Center, Page 141. 
http://books.google.fr/books?id=DHjJAgAAQBAJ&pg=PA141#v=onepage&q&f=false



De : Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Envoyé : jeudi 3 avril 2014 15:52
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [Nova] Hosts within two Availability Zones : possible 
or not ?

Hi,

I'm currently trying to reproduce [1]. This bug requires having the same host
in two different aggregates, each one having an AZ.

IIRC, Nova API prevents hosts of being part of two distinct AZs [2], so IMHO 
this request should not be possible.
That said, there are two flaws where I can identify that no validation is done:
 - when specifying an AZ in nova.conf, the host overrides the existing AZ with
its own
 - when adding a host to an aggregate without an AZ defined, and afterwards
updating the aggregate to add an AZ


So, I need direction. Either we consider it is not possible to share 2 AZs for 
the same host and then we need to fix the two above scenarios, or we say it's 
nice to have 2 AZs for the same host and then we both remove the validation 
check in the API and we fix the output issue reported in the original bug [1].


Your comments are welcome.
Thanks,
-Sylvain


[1] : https://bugs.launchpad.net/nova/+bug/1277230

[2] : 
https://github.com/openstack/nova/blob/9d45e9cef624a4a972c24c47c7abd57a72d74432/nova/compute/api.py#L3378
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-07 Thread Martinx - ジェームズ
Amazing!!   :-D

I'll do my best to try to make this a reality, as fast as I can!

We really need to start evaluating Neutron IPv6, even on its simplest
topology (like Flat - provider network with external RA)...

Cheers!
Thiago


On 3 April 2014 16:43, Simon Leinen simon.lei...@switch.ch wrote:

 Martinx  writes:
  1- Create and maintain a Ubuntu PPA Archive to host Neutron with IPv6
  patches (from Nephos6 / Shixiong?).
 [...]
  Let me know if there are interest on this...

 Great initiative! We're building a new Icehouse cluster soon and are
 very interested in trying these packages, because we really want to
 support IPv6 properly.

 I see you already got some help from the developers - cool!
 --
 Simon.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes - 04/07/2014

2014-04-07 Thread Renat Akhmerov
Thanks for joining today’s meeting!

As usual,

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-04-07-16.00.html
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-04-07-16.00.log.html

Let’s meet next time on April 14th.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Doc for Trove ?

2014-04-07 Thread Devananda van der Veen
On Sun, Apr 6, 2014 at 10:44 AM, Tim Bell tim.b...@cern.ch wrote:


 From my understanding, Trove is due to graduate in the Juno release.


Since I didn't see it called out elsewhere in this thread yet, I'd like to
point out that Trove graduated at the end of the Havana cycle and should be
included in the integrated release of Icehouse. See
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n108

The questions you all have raised are very good ones, even though Trove was
graduated before the new criteria were in place. These questions are also
applicable to Ironic -- which is still incubated, aiming to graduate in the
Juno release, does not yet have any content in the openstack docs repo, and
is being held to the new requirements.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday April 8th at 19:00 UTC

2014-04-07 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday April 8th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of ovs-firewall-driver blueprint?

2014-04-07 Thread Amir Sadoughi
Kyle,

Yes. I plan on getting all 4 existing patches up-to-date over the next 9 days. 
I’m not working on upstream 100% of the time so that’s not a hard commitment, 
but I think it’s definitely doable.

Amir

On Apr 7, 2014, at 11:42 AM, Kyle Mestery 
mest...@noironetworks.com wrote:

On Mon, Apr 7, 2014 at 11:01 AM, Amir Sadoughi
amir.sadou...@rackspace.com wrote:
Hi Vishal,

I've restarted my work on the blueprint last week now that Juno is open for
development and OVS 2.1.0 is available, targeting juno-1. My plan is to have
a working implementation available by the summit design discussion to make
sure we cover all our bases. Between the blueprint page you referenced[0],
this wiki page[1], and the Gerrit review[2] site, that is all the up-to-date
information. Outside of those links, any other updates will be on this
mailing list or the ML2 weekly IRC meeting.

I'm open to collaboration and interested in anything you are able to
contribute. Do you have any existing work to share or feedback?

Amir

Amir, I looked at the review, and it's coming along nicely! Any chance
you can post a
rebased version this week to get a clean Jenkins run?

Thanks!
Kyle

[0] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
[1] https://wiki.openstack.org/wiki/Neutron/blueprint_ovs-firewall-driver
[2] https://review.openstack.org/#/q/topic:bp/ovs-firewall-driver,n,z

On Apr 7, 2014, at 3:33 AM, Thapar, Vishal (HP Networking) 
vtha...@hp.com
wrote:

Hi,

I am working on an OVS based implementation of Neutron Security Groups and
came across this blueprint:
https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

I've gone through every mail, document and IRC chat log on this to get a
good grasp of history on this, but couldn't find any recent activity on this
blueprint. It is listed on Meetings page on wiki but last meeting seems to
have been held last year in December. I've just started working on
prototyping this and would like to work with community to see it to
completion.

Could anyone suggest on how to proceed on this? Do I need to request a
meeting for this?

Thanks and Regards,
Vishal.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up cross-project unit test jobs for oslo libs

2014-04-07 Thread Brant Knudson
Doug -

Are the projects going to be responsible for keeping this up to date, or is
the oslo team going to take care of it? We could reject a change to add a
new oslo library dependency if the test isn't there. If there was a process
that you followed to generate the dependencies, maybe this could be
automated.

Also, do you want the projects to make sure that they have tests that use
the libraries? For example, keystone doesn't have any tests calling into
stevedore; it's used by the sample config generator. Some projects might
also have fully mocked out the library calls.

- Brant



On Fri, Apr 4, 2014 at 4:29 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:

 I have submitted a patch to add jobs to run the unit tests of projects
 using oslo libraries with the unreleased master HEAD version of those
 libraries, to gate both the project and the library. If you are a PTL
 or Oslo liaison, and are interested in following its progress, please
 subscribe: https://review.openstack.org/#/c/85487/3

 Doug

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack] [nova] admin user create instance for another user/tenant

2014-04-07 Thread Solly Ross
Simon, please use the operators list or general list for questions such as this 
in the future.
https://wiki.openstack.org/wiki/Mailing_Lists#General_List

Best Regards,
Solly Ross

- Original Message -
From: Xu (Simon) Chen xche...@gmail.com
To: openstack-dev@lists.openstack.org
Sent: Saturday, April 5, 2014 12:17:05 AM
Subject: [openstack-dev] [openstack] [nova] admin user create instance for  
another user/tenant

I wonder if there is a way to do the following. I have a user A with admin role 
in tenant A, and I want to create a VM in/for tenant B as user A. Obviously, I 
can use A's admin privilege to add itself to tenant B, but I want to avoid 
that. 

Based on the policy.json file, it seems doable: 
https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L8 

I read this as: as long as a user is an admin, it can create an instance, just
like an admin user can remove an instance from another tenant.
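
For reference, the kind of rule being read that way might be written like
this in policy.json - a sketch in the generic policy syntax, not the rule
Nova actually ships:

    "compute:create": "is_admin:True or project_id:%(project_id)s",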

But here, it looks like as long as the context project ID and target project
ID don't match, an action would be rejected: 
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L968 

Indeed, when I try to use user A's token to create a VM (POST to 
v2/tenant_b/servers), I get the exception from the above link.

On the other hand, according to this, the VM's project_id only comes from the
context: 
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L767 

I wonder if it makes sense to allow admin users to specify a project_id field 
(which overrides context.project_id) when creating a VM. This probably requires
a non-trivial code change.

Or maybe there is another way of doing what I want? 

Thanks. 
-Simon 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Server Groups are not an optional element, bug or feature?

2014-04-07 Thread Day, Phil
Hi Folks,

Generally the scheduler's capabilities that are exposed via hints can be 
enabled or disabled in a Nova install by choosing the set of filters that are 
configured. However the server group feature doesn't fit that pattern - 
even if the affinity filter isn't configured the anti-affinity check on the 
server will still impose the anti-affinity behavior via throwing the request 
back to the scheduler.

I appreciate that you can always disable the server-groups API extension, in
which case users can't create a group (and so the server create will fail if
one is specified), but that seems kind of at odds with other types of
scheduling that have to be specifically configured in, rather than out of, a
base system.
In particular, having the API extension in by default but the ServerGroup
Affinity and AntiAffinity filters not in by default seems an odd combination
(it kind of works, but only via retries from the host, and those are limited
in number).
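
For illustration, enabling the filters explicitly would mean listing them in
nova.conf, something like the sketch below - the surrounding default filter
list varies by release and deployment, so treat the exact list as an
assumption:

    [DEFAULT]
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter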

Given that the server group work isn't complete yet (for example, the list of
instances in a group isn't tidied up when an instance is deleted), I feel a tad
worried that the current default configuration exposes this rather than keeping
it as something that has to be explicitly enabled - what do others think?

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] resources ID in configuration files

2014-04-07 Thread Emilien Macchi
Hi folks,

I've been looking at how to deploy Neutron Server (on the Icehouse release)
and I can see we now have to provide the admin tenant ID in neutron.conf
to be able to send notifications to Nova about port updates [1] through the
Nova API.

Having worked on configuration management for a while, I can say it's a tough
task to put IDs in configuration files instead of resource names, but it is
still doable.
I understand that the Keystone API v3 now requires using IDs because of
domains, but I would like to think about a smarter way in OpenStack
components (Neutron in my case) where we could get the resource ID (here
the admin tenant) using the Keystone API and consume it directly in the code.

I sent a first patch [2] which unfortunately won't work when using
the Keystone API v3, so I would like to discuss another approach here.

Should we continue that way and add a new parameter to specify which
domain (in the case of Keystone v3 use)? Or am I wrong altogether?
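
To make the alternative concrete, here is a minimal sketch of resolving the
admin tenant ID by name at startup, assuming the Keystone v2.0 API and
python-keystoneclient; the credentials are placeholders, and - as noted
above - this exact approach breaks down with v3 domains:

    from keystoneclient.v2_0 import client as ks_client

    # Placeholder service credentials - not real values.
    keystone = ks_client.Client(username='neutron',
                                password='secret',
                                tenant_name='service',
                                auth_url='http://127.0.0.1:35357/v2.0')

    # Look the tenant up by name instead of hard-coding its ID.
    admin_tenant = next(t for t in keystone.tenants.list()
                        if t.name == 'admin')
    nova_admin_tenant_id = admin_tenant.id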

[1] https://bugs.launchpad.net/neutron/+bug/1302814
[2] https://review.openstack.org/#/c/85492/

Thanks,

-- 
Emilien Macchi




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-07 Thread Zane Bitter

On 06/04/14 14:23, Steven Dake wrote:

Hi folks,

There are two problems we should address regarding the growth and change
to the HOT specification.

First our +2/+A process for normal changes doesn't totally make sense
for hot_spec.rst.  We generally have some informal bar for controversial
changes (of which changes to hot_spec.rst is generally considered:).  I
would suggest raising the bar on hot_spec.rst to at-least what is
required for a heat-core team addition (currently 5 approval votes).
This gives folks plenty of time to review and make sure the heat core
team is committed to the changes, rather then a very small 2 member
subset.  Of course a -2 vote from any heat-core would terminate the
review as usual.

Second, There is a window where we say hey we want this sweet new
functionality yet it remains unimplemented.  I suggest we create some
special tag for these intrinsics/sections/features, so folks know they
are unimplemented and NOT officially part of the specification until
that is the case.

We can call this tag something simple like
*standardization_pending_implementation* for each section which is
unimplemented.  A review which proposes this semantic is here:
https://review.openstack.org/85610


This part sounds highly problematic to me.

I agree with you and Thomas S that using Gerrit to review proposed 
specifications is a nice workflow, even if the proper place to do this 
is on the wiki and linked to a blueprint. I would probably go along with 
everything you suggested provided that anything pending implementation 
goes in a separate file or files that are _not_ included in the 
generated docs.


cheers,
Zane.


My goal is not to add more review work to people's time, but I really
believe any changes to the HOT specification have a profound impact on
all things Heat, and we should take special care when considering these
changes.

Thoughts or concerns?

Regards,
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up cross-project unit test jobs for oslo libs

2014-04-07 Thread Doug Hellmann
On Mon, Apr 7, 2014 at 1:43 PM, Brant Knudson b...@acm.org wrote:
 Doug -

 Are the projects going to be responsible for keeping this up to date, or is
 the oslo team going to take care of it?

Some of each. Adding the test jobs will be part of the work the
liaison from a project does when bringing an oslo library in. We'll
help where needed, especially by creating a check-list of the steps to
follow (I'm working on that list now).

 We could reject a change to add a
 new oslo library dependency if the test isn't there. If there was a process
 that you followed to generate the dependencies, maybe this could be
 automated.

I looked through the requirements lists by hand and created jobs for
any oslo libraries in use in any requirements file. We could automate
that, but as you say it could also be handled as part of the review
for requirements on a project.

 Also, do you want the projects to make sure that they have tests that use
 the libraries? For example, keystone doesn't have any tests calling into
 stevedore; it's used by the sample config generator. Some projects might
 also have fully mocked out the library calls.

The point is to make sure if any of the tests *do* call the library,
we don't break the tests by making a change in another repository. I
don't think we need to add tests just for the sake of having them. If
we know that keystone isn't using stevedore in its tests, we could
remove that job. To start, I erred on the side of adding a job for
every relationship I found.

Doug


 - Brant



 On Fri, Apr 4, 2014 at 4:29 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

 I have submitted a patch to add jobs to run the unit tests of projects
 using oslo libraries with the unreleased master HEAD version of those
 libraries, to gate both the project and the library. If you are a PTL
 or Oslo liaison, and are interested in following its progress, please
 subscribe: https://review.openstack.org/#/c/85487/3

 Doug

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-SDK-PHP] First IRC meeting

2014-04-07 Thread Matthew Farina
To make the OpenStack APIs easily accessible via PHP, the most popular
server side language of the web, we've been working on a PHP SDK in
Stackforge.

This week we are going to have our first IRC meeting on Wednesday. If
you're interested in a PHP SDK please come join in the discussion.

More information is available on the Wiki at
https://wiki.openstack.org/wiki/Meetings#PHP_SDK_Team_Meeting
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-07 Thread Martinx - ジェームズ
Hi Thomas!

It will be an honor for me to join the Debian OpenStack packaging team! I'm in!!
:-D

Listen, the neutron-ipv6.patch I have doesn't apply against
neutron-2014.1.rc1; here it is:

neutron-ipv6.patch: http://paste.openstack.org/show/74857/

I generated it from the commands that Xuhan Peng told me to run, a few posts
back, which are:

--
git fetch https://review.openstack.org/openstack/neutron refs/changes/49/70649/15
git format-patch -1 --stdout FETCH_HEAD > neutron-ipv6.patch
--

But, as Collins said, even if the patch applies successfully against
neutron-2014.1.rc1 (or newer), it will not pass the tests, so there is
still a lot of work to do to enable Neutron with IPv6. Still, I think we can
start working on these patches and start testing whatever is already there
(related to IPv6).

Best!
Thiago


On 5 April 2014 03:36, Thomas Goirand z...@debian.org wrote:

 On 04/02/2014 02:33 AM, Martinx - ジェームズ wrote:
  Guys!
 
  I would like to do this:
 
 
  1- Create and maintain a Ubuntu PPA Archive to host Neutron with IPv6
  patches (from Nephos6 / Shixiong?).
 
 
  Why?
 
 
  Well, I'm feeling that Neutron with native and complete IPv6 support
  will be only available in October (or maybe later, am I right?) but, I
  really need this (Neutron IPv6) ASAP, so, I'm volunteering myself to
  create / maintain this PPA for Neutron with IPv6, until it reaches
 mainline.
 
  To be able to achieve it, I just need to know which files do I need to
  patch (the diff), then repackage Neutron deb packages but, I'll need
  help here, because I don't know where are those Neutron IPv6 patches
  (links?)...
 
  Let me know if there are interest on this...
 
  Thanks!
  Thiago

 Hi Martinx,

 If you would like to take care of maintaining the IPv6 patch for the
 life of Icehouse, then I'll happily use them in the Debian packages
 (note: I also produce Ubuntu packages, and maintain 10 repository mirrors).

 Also, if you would like to join the OpenStack packaging team in
 alioth.debian.org, and contribute to it at least for this IPv6 support,
 that'd be just great! I'm available if you need my help.

 Could you please point to me to the list of needed patches? I would need
 to keep them separated, in debian/patches, rather than pulling from a
 different git repository.

 Cheers,

 Thomas Goirand (zigo)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-07 Thread Collins, Sean
I am currently working on a patch that allows upstream RAs and SLAAC
configuration. Currently testing it in devstack - it's based on chunks
of patchset 33 of this review that were skipped when Mark McClain
committed patchset 34.

https://review.openstack.org/#/c/56184/

Xu Han and Dazhao - do I have your permission to post a rebased version
of this patch into Gerrit - I have set myself as the author and added
you both as Co-Authors.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-04-07 Thread Randall Burt

On Apr 4, 2014, at 1:56 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 19/02/14 02:48, Clint Byrum wrote:
 Since picking up Heat and trying to think about how to express clusters
 of things, I've been troubled by how poorly the CFN language supports
 using lists. There has always been the Fn::Select function for
 dereferencing arrays and maps, and recently we added a nice enhancement
 to HOT to allow referencing these directly in get_attr and get_param.
 
 However, this does not help us when we want to do something with all of
 the members of a list.
 
 In many applications I suspect the template authors will want to do what
 we want to do now in TripleO. We have a list of identical servers and
 we'd like to fetch the same attribute from them all, join it with other
 attributes, and return that as a string.
 
 The specific case is that we need to have all of the hosts in a cluster
 of machines addressable in /etc/hosts (please, Designate, save us,
 eventually. ;). The way to do this if we had just explicit resources
 named NovaCompute0, NovaCompute1, would be:
 
   str_join:
 - \n
 - - str_join:
 - ' '
 - get_attr:
   - NovaCompute0
   - networks.ctlplane.0
 - get_attr:
   - NovaCompute0
   - name
   - str_join:
 - ' '
 - get_attr:
   - NovaCompute1
    - networks.ctlplane.0
 - get_attr:
   - NovaCompute1
   - name
 
 Now, what I'd really like to do is this:
 
 map:
   - str_join:
 - \n
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - - NovaCompute0
 - NovaCompute1
 
 This would be helpful for the instances of resource groups too, as we
 can make sure they return a list. The above then becomes:
 
 
 map:
   - str_join:
 - \n
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - get_attr:
   - NovaComputeGroup
   - member_resources
 
 Thoughts on this idea? I will throw together an implementation soon but
 wanted to get this idea out there into the hive mind ASAP.
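
To make the proposed semantics concrete, here is a rough Python sketch (not
Heat code; the names are illustrative) of how such a map intrinsic could
work - produce one copy of the snippet per list item, substituting $1:

def map_function(snippet, items):
    # Produce one copy of the template snippet per item, with every
    # occurrence of '$1' replaced by that item.
    def substitute(node, value):
        if isinstance(node, dict):
            return dict((k, substitute(v, value)) for k, v in node.items())
        if isinstance(node, list):
            return [substitute(v, value) for v in node]
        return value if node == '$1' else node
    return [substitute(snippet, item) for item in items]

# map_function({'str_join': [' ', ['$1', '.local']]}, ['host0', 'host1'])
# -> [{'str_join': [' ', ['host0', '.local']]},
#     {'str_join': [' ', ['host1', '.local']]}]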
 
 Apparently I read this at the time, but completely forgot about it. Sorry 
 about that! Since it has come up again in the context of the TripleO Heat 
 templates and merge.py thread, allow me to contribute my 2c.
 
 Without expressing an opinion on this proposal specifically, consensus within 
 the Heat core team has been heavily -1 on any sort of for-each functionality. 
 I'm happy to have the debate again (and TBH I don't really know what the 
 right answer is), but I wouldn't consider the lack of comment on this as a 
 reliable indicator of lazy consensus in favour; equivalent proposals have 
 been considered and rejected on multiple occasions.
 
 Since it looks like TripleO will soon be able to move over to using 
 AutoscalingGroups (or ResourceGroups, or something) for groups of similar 
 servers, maybe we could consider baking this functionality into Autoscaling 
 groups instead of as an intrinsic function.
 
 For example, when you do get_attr on an autoscaling resource it could fetch 
 the corresponding attribute from each member of the group and return them as 
 a list. (It might be wise to prepend Output. or something similar - maybe 
 Members. - to the attribute names, as AWS::CloudFormation::Stack does, so 
 that attributes of the autoscaling group itself can remain in a separate 
 namespace.)

FWIW, ResourceGroup supports this now as well as getting the attribute value 
from a given indexed member of the group.

 
 Since members of your NovaComputeGroup will be nested stacks anyway (using 
 ResourceGroup or some equivalent feature - preferably autoscaling with 
 rolling updates), in the case above you'd define in the scaled template:
 
  outputs:
hosts_entry:
  description: An /etc/hosts entry for the NovaComputeServer
  value:
- str_join:
  - ' '
  - - get_attr:
  - NovaComputeServer
  - networks
  - ctlplane
  - 0
- get_attr:
  - NovaComputeServer
  - name
 
 And then in the main template (containing the autoscaling group):
 
str_join:
  - \n
  - get_attr:
- NovaComputeGroup
- Members.hosts_entry
 
 would give the same output as your example would.
 
 IMHO we should do something like this regardless of whether it solves your 
 use case, because it's fairly easy, requires no changes to the template 
 format, and users have been asking for ways to access e.g. a list of IP 
 addresses from a scaling group. That said, it seems very likely that making 
 the other changes required for TripleO to get rid of merge.py (i.e. switching 
 to scaling groups of templates instead of by multiplying resources in 
 templates) will make this a viable solution for 

Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-07 Thread Russell Bryant
On 04/07/2014 02:12 PM, Russell Bryant wrote:
 On 04/07/2014 01:43 PM, Day, Phil wrote:
 Generally the scheduler's capabilities that are exposed via hints can be
 enabled or disabled in a Nova install by choosing the set of filters
 that are configured. However, the server group feature doesn't fit
 that pattern - even if the affinity filter isn't configured, the
 anti-affinity check on the server will still impose the anti-affinity
 behavior by throwing the request back to the scheduler.

 I appreciate that you can always disable the server-groups API
 extension, in which case users can't create a group (and so the server
 create will fail if one is specified), but that seems kind of at odds
 with other types of scheduling that have to be specifically configured in
 rather than out of a base system. In particular, having the API
 extension in by default but the ServerGroup Affinity and AntiAffinity
 filters not in by default seems an odd combination (it kind of works,
 but only by a retry from the host, and that's limited to a number of
 retries).

 Given that the server group work isn't complete yet (for example the
 list of instances in a group isn't tidied up when an instance is deleted),
 I feel a tad worried that the current default configuration exposes this
 rather than keeping it as something that has to be explicitly enabled -
 what do others think?
 
 I consider it a complete working feature.  It makes sense to enable the
 filters by default.  It's harmless when the API isn't used.  That was
 just an oversight.
 
 The list of instances in a group through the API only shows non-deleted
 instances.
 
 There are some implementation details that could be improved (the check
 on the server is the big one).
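
For readers unfamiliar with the behavior described above, the late check is,
schematically, something like this (a sketch of the pattern only, not the
actual nova code; the data structures are invented for illustration):

import collections

# Simplified stand-in: policies plus a mapping of instance uuid -> host.
ServerGroup = collections.namedtuple('ServerGroup', ['policies', 'members'])

class RescheduleRequired(Exception):
    # Raised to throw the request back to the scheduler for another host.
    pass

def check_anti_affinity(group, instance_uuid, chosen_host):
    # Runs on the compute host at build time, so the policy is enforced
    # (via a limited number of retries) even when the scheduler's
    # ServerGroupAntiAffinityFilter is not configured.
    if 'anti-affinity' not in group.policies:
        return
    for uuid, host in group.members.items():
        if uuid != instance_uuid and host == chosen_host:
            raise RescheduleRequired('another group member is on %s'
                                     % chosen_host)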
 

https://bugs.launchpad.net/nova/+bug/1303983

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-07 Thread Steven Dake

On 04/07/2014 11:01 AM, Zane Bitter wrote:

On 06/04/14 14:23, Steven Dake wrote:

Hi folks,

There are two problems we should address regarding the growth and change
to the HOT specification.

First, our +2/+A process for normal changes doesn't totally make sense
for hot_spec.rst.  We generally have some informal bar for controversial
changes (which changes to hot_spec.rst are generally considered to be :).  I
would suggest raising the bar on hot_spec.rst to at least what is
required for a heat-core team addition (currently 5 approval votes).
This gives folks plenty of time to review and make sure the heat core
team is committed to the changes, rather than a very small 2-member
subset.  Of course a -2 vote from any heat-core would terminate the
review as usual.

Second, there is a window where we say hey, we want this sweet new
functionality yet it remains unimplemented.  I suggest we create some
special tag for these intrinsics/sections/features, so folks know they
are unimplemented and NOT officially part of the specification until
that is the case.

We can call this tag something simple like
*standardization_pending_implementation* for each section which is
unimplemented.  A review which proposes this semantic is here:
https://review.openstack.org/85610


This part sounds highly problematic to me.

I agree with you and Thomas S that using Gerrit to review proposed 
specifications is a nice workflow, even if the proper place to do 
this is on the wiki and linked to a blueprint. I would probably go 
along with everything you suggested provided that anything pending 
implementation goes in a separate file or files that are _not_ 
included in the generated docs.


This is a really nice idea.  We could have a hot_spec_pending.rst which 
lists out the pending ideas so we can have a gerrit review of this doc.  
The doc wouldn't be generated into the externally rendered documentation.


We could still use blueprints before/after the discussion is had on the 
hot_spec_pending.rst doc, but hot_spec_pending.rst would allow us to 
collaborate properly on the changes.


The problem I have with blueprints is they suck for collaborative 
discussion, whereas gerrit rocks for this purpose.  In essence, I just 
want a tidier way to discuss the changes than blueprints provide.


Other folks on this thread, how do you feel about this approach?

Regards
-steve

cheers,
Zane.


My goal is not to add more review work to people's time, but I really
believe any changes to the HOT specification have a profound impact on
all things Heat, and we should take special care when considering these
changes.

Thoughts or concerns?

Regards,
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack] [nova] admin user create instance for another user/tenant

2014-04-07 Thread Xu (Simon) Chen
Solly,

My point is that this feature (creating a VM for a tenant as an admin in
another project) might not be possible given the current implementation.
I've pointed out two places in nova code, from which I drew my conclusion.

Since this potentially requires a code change, I think the dev mailing list
is somewhat appropriate...

Thanks.
-Simon



On Mon, Apr 7, 2014 at 1:44 PM, Solly Ross sr...@redhat.com wrote:

 Simon, please use the operators list or general list for questions such as
 this in the future.
 https://wiki.openstack.org/wiki/Mailing_Lists#General_List

 Best Regards,
 Solly Ross

 - Original Message -
 From: Xu (Simon) Chen xche...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Saturday, April 5, 2014 12:17:05 AM
 Subject: [openstack-dev] [openstack] [nova] admin user create instance for
  another user/tenant

 I wonder if there is a way to do the following. I have a user A with admin
 role in tenant A, and I want to create a VM in/for tenant B as user A.
 Obviously, I can use A's admin privilege to add itself to tenant B, but I
 want to avoid that.

 Based on the policy.json file, it seems doable:
 https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L8

 I read this as: as long as a user is an admin, it can create an instance,
 just like an admin user can remove an instance from another tenant.

 But in here, it looks like as long as the context project ID and target
 project ID don't match, an action would be rejected:

 https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L968

 Indeed, when I try to use user A's token to create a VM (POST to
 v2/tenant_b/servers), I got the exception from the above link.

 On the other hand, according to the code here, a VM's project_id only comes from the
 context:
 https://github.com/openstack/nova/blob/master/nova/compute/api.py#L767

 I wonder if it makes sense to allow admin users to specify a project_id
 field (which overrides context.project_id) when creating a VM. This
 probably requires non-trivial code change.

 Or maybe there is another way of doing what I want?

 Thanks.
 -Simon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-07 Thread Martinx - ジェームズ
Awesome! I have a perfect lab to evaluate it...:-)

Just out of curiosity, will it work with ML2 and a Flat Network (dual-stacked
with IPv4)? I would like to try to fit this into a running lab environment, if
possible...

I mean, currently I have a lab with a Flat Network topology (Havana without
ML2); my lab network was created with:

---
neutron net-create --tenant-id $ADMIN_TENTANT_ID sharednet1 --shared
--provider:network_type flat --provider:physical_network physnet1

neutron subnet-create --ip-version 4 --tenant-id $ADMIN_TENANT_ID
sharednet1 10.33.14.0/24 --dns_nameservers list=true 8.8.8.8 8.8.4.4
---

Where physnet1 is a bridge_mappings = physnet1:br-eth0 pointing to my OVS
bridge br-eth0. IPv4 router 10.33.14.1 is upstream (provider /
external)...

Reference: https://gist.github.com/tmartinx/7019826

So, I'm wondering: at my IPv4 router 10.33.14.1 (gateway of the sharednet1
10.33.14.0/24 network), I already have an up-and-running RA daemon
(radvd.conf) working in a dual-stacked environment BUT, currently, of
course, the OpenStack instances only get an IPv4 address from the
10.33.14.0/24 subnet (via the dhcp-agent on the network+controller node).

Anyway, I would like to try this upstream RAs and SLAAC, like this:

---
neutron subnet-create --ip-version 6 --ipv6_ra_mode slaac
--ipv6_address_mode slaac --tenant-id $ADMIN_TENANT_ID sharednet1
2001:db8:1:1::/64
---

Does it work that way, or am I thinking about it the wrong way?!

Also, my radvd.conf provides RDNSS/DNSSL, and my Ubuntu instances will have
the package `rdnssd` installed to deal with resolv.conf properly.

Cheers!
Thiago


On 7 April 2014 16:24, Collins, Sean sean_colli...@cable.comcast.comwrote:

 I am currently working on a patch that allows upstream RAs and SLAAC
 configuration. Currently testing it in devstack - it's based on chunks
 of patchset 33 of this review that were skipped when Mark McClain
 committed patchset 34.

 https://review.openstack.org/#/c/56184/

 Xu Han and Dazhao - do I have your permission to post a rebased version
 of this patch into Gerrit - I have set myself as the author and added
 you both as Co-Authors.

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the oslo namespace package

2014-04-07 Thread Vishvananda Ishaya
I dealt with this myself the other day and it was a huge pain. That said,
changing all the packages seems like a nuclear option. Is there any way
we could change python that would make it smarter about searching multiple
locations for namespace packages?

Vish

On Apr 7, 2014, at 12:24 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 Some of the production Oslo libraries are currently being installed
 into the oslo namespace package (oslo.config, oslo.messaging,
 oslo.vmware, oslo.rootwrap, and oslo.version). Over the course of the
 last 2 release cycles, we have seen an increase in the number of
 developers who end up with broken systems, where an oslo library (most
 often oslo.config) cannot be imported. This is usually caused by
 having one copy of a library installed normally (via a system package
 or via pip) and another version in development (a.k.a., editable)
 mode as installed by devstack. The symptom is most often an error
 about importing oslo.config, although that is almost never the library
 causing the problem.
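 
 For background, the shared namespace works by having every oslo.*
 distribution ship an identical top-level __init__.py; a minimal sketch of
 the pkg_resources-style declaration (illustrative):
 
     # oslo/__init__.py, shipped by each oslo.* distribution:
     __import__('pkg_resources').declare_namespace(__name__)
 
 When one copy is installed normally and another in editable mode, two
 different 'oslo' directories end up competing on the import path, and the
 one that wins may not contain the submodule being imported - which is
 roughly how the broken oslo.config imports come about.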
 
 We have already worked around this issue with the non-production
 libraries by installing them into their own packages, without using
 the namespace (oslotest, oslosphinx, etc.). We have also changed the
 way packages are installed in nova's tox.ini, to force installation of
 packages into the virtualenv (since exposing the global site-packages
 was a common source of the problem). And very recently, Sean Dague
 changed devstack to install the oslo libraries not in editable mode,
 so that installing from source should replace any existing installed
 version of the same library.
 
 However, the problems seem to persist, and so I think it's time to
 revisit our decision to use a namespace package.
 
 After experimenting with non-namespace packages, I wasn't able to
 reproduce the same import issues. I did find one case that may cause
 us some trouble, though. Installing a package and then installing an
 editable version from source leaves both installed and the editable
 version appears first in the import path. That might cause surprising
 issues if the source is older than the package, which happens when a
 devstack system isn't updated regularly and a new library is released.
 However, surprise due to having an old version of code should occur
 less frequently than, and have less of an impact than, having a
 completely broken set of oslo libraries.
 
 We can avoid adding to the problem by putting each new library in its
 own package. We still want the Oslo name attached for libraries that
 are really only meant to be used by OpenStack projects, and so we need
 a naming convention. I'm not entirely happy with the crammed
 together approach for oslotest and oslosphinx. At one point Dims and
 I talked about using a prefix oslo_ instead of just oslo, so we
 would have oslo_db, oslo_i18n, etc. That's also a bit ugly,
 though. Opinions?
 
 Given the number of problems we have now (I help about 1 dev per week
 unbreak their system), I think we should also consider renaming the
 existing libraries to not use the namespace package. That isn't a
 trivial change, since it will mean updating every consumer as well as
 the packaging done by distros. If we do decide to move them, I will
 need someone to help put together a migration plan. Does anyone want
 to volunteer to work on that?
 
 Before we make any changes, it would be good to know how bad this
 problem still is. Do developers still see issues on clean systems, or
 are all of the problems related to updating devstack boxes? Are people
 figuring out how to fix or work around the situation on their own? Can
 we make devstack more aggressive about deleting oslo libraries before
 re-installing them? Are there other changes we can make that would be
 less invasive?
 
 Doug
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-07 Thread Collins, Sean
Hi Martin,

I previously posted to the mailing list with some information about our
IPv6 lab environment and devstack setup. 

http://lists.openstack.org/pipermail/openstack-dev/2014-February/026589.html

Keep in mind that code differs from what was eventually merged in
upstream, so I would ask for your patience while I rebase some patches
and submit them for review, to work with the two new IPv6 attributes.

Please join us on the IRC channel tomorrow, if you are available.

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-07 Thread Martinx - ジェームズ
Okay Collins! Got it... I remember that e-mail from Feb...
I understand it, no rush...   ^_^
Chat tomorrow, tks!

On 7 April 2014 17:35, Collins, Sean sean_colli...@cable.comcast.comwrote:

 Hi Martin,

 I previously posted to the mailing list with some information about our
 IPv6 lab environment and devstack setup.


 http://lists.openstack.org/pipermail/openstack-dev/2014-February/026589.html

 Keep in mind that code differs from what was eventually merged in
 upstream, so I would ask for your patience while I rebase some patches
 and submit them for review, to work with the two new IPv6 attributes.

 Please join us on the IRC channel tomorrow, if you are available.

 https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Location of 'enable_security_group' key in ml2_conf.ini

2014-04-07 Thread Matt Kassawara
I'm writing the ML2 configuration sections for the installation guide and
found a potential location conflict for the 'enable_security_group' key in
ml2_conf.ini. In the patch associated with this feature, the example
configuration file has this key under [security_group].

https://review.openstack.org/#/c/67281/33/etc/neutron/plugins/ml2/ml2_conf.ini

The most recent gate from the milestone-proposed branch also has this key
under [security_group].

http://logs.openstack.org/76/85676/1/gate/gate-tempest-dsvm-neutron/80af0f6/logs/etc/neutron/plugins/ml2/ml2_conf.ini.txt.gz

However, the code has this key under [securitygroup] with the
'firewall_driver' key.

https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py
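
For context, the section name comes from how the option is registered with
oslo.config: options live in the section named by the group they are
registered under. A minimal sketch of that pattern (illustrative, not the
exact Neutron code):

from oslo.config import cfg

security_group_opts = [
    cfg.BoolOpt('enable_security_group', default=True,
                help='Whether to enable neutron security groups.'),
]

# Registering under the 'securitygroup' group is what makes the option
# belong in the [securitygroup] section of the config file.
cfg.CONF.register_opts(security_group_opts, 'securitygroup')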

What's the proper location for the 'enable_security_group' key?

Thanks,
Matt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-07 Thread Jay Pipes
On Mon, 2014-04-07 at 16:48 +, Day, Phil wrote:
 Hi Sylvain,
 
 There was a similar thread on this recently - which might be worth
 reviewing:
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html

 Some interesting use cases were posted, and I don't think a
 conclusion was reached, which seems to suggest this might be a good
 case for a session in Atlanta.

 Personally I'm not sure that selecting more than one AZ really makes a
 lot of sense - they are generally objects which are few in number and
 large in scale, so if for example there are 3 AZs and you want to
 create two servers in different AZs, does it really help if you can do
 the sequence:

 - Create a server in any AZ
 - Find the AZ the server is in
 - Create a new server in any of the two remaining AZs

 rather than just picking two from the list to start with?

Or doing this in Heat, where orchestration belongs?

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Agenda for tomorrow - please add topics

2014-04-07 Thread Collins, Sean
Hi,

I've added a section for tomorrow's agenda, please do add topics that
you'd like to discuss.

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_April_8th


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-07 Thread Sylvain Bauza
Hi Phil,



2014-04-07 18:48 GMT+02:00 Day, Phil philip@hp.com:

  Hi Sylvain,



 There was a similar thread on this recently - which might be worth
 reviewing:
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html



 Some interesting use cases were posted, and I don't think a conclusion
 was reached, which seems to suggest this might be a good case for a session
 in Atlanta.



The funny thing is that I was already part of this discussion, as the owner of
a bug related to it (see the original link I provided).
It was only when reviewing the code itself that I found some
discrepancies and raised the question here, before committing.





 Personally I'm not sure that selecting more than one AZ really makes a lot
 of sense - they are generally objects which are few in number and large in
 scale, so if for example there are 3 AZs and you want to create two servers
 in different AZs, does it really help if you can do the sequence:



 -  Create a server in any AZ

 -  Find the AZ the server is in

 -  Create a new server in any of the two remaining AZs



 Rather than just picking two from the list to start with?



 If you envisage a system with many AZs, and thereby allow users some
 pretty fine-grained choices about where to place their instances, then I
 think you'll end up with capacity management issues.



 If the use case is more to get some form of server isolation, then
 server-groups might be worth looking at, as these are dynamic and per user.



 I can see a case for allowing more than one set of mutually exclusive host
 aggregates - at the moment that's a property implemented just for the set
 of aggregates that are designated as AZs, and generalizing that concept so
 that there can be other sets (where host overlap is allowed between sets,
 but not within a set) might be useful.



 Phil




That's a good point to discuss at the Summit. I don't yet have an
opinion on this; I'm just trying to stabilize things now :-)
At the moment, I'm pretty close to submitting a change which will fix two
things:
 - the decision logic will be the same both for adding a server to an aggregate
and for updating metadata on an existing aggregate (there was duplicate code
leading to a few differences)
 - when checking existing AZs for a host, we will also fetch the aggregates
to know whether the default AZ relates to an existing aggregate with the same
name or is just something unrelated

Thanks,
-Sylvain



   *From:* Murray, Paul (HP Cloud Services)
 *Sent:* 03 April 2014 16:34
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Nova] Hosts within two Availability Zones
 : possible or not ?



 Hi Sylvain,



 I would go with keeping AZs exclusive. It is a well-established concept
 even if it is up to providers to implement what it actually means in terms
 of isolation. Some good use cases have been presented on this topic
 recently, but for me they suggest we should develop a better concept rather
 than bend the meaning of the old one. We certainly don't have hosts in more
 than one AZ in HP Cloud and I think some of our users would be very
 surprised if we changed that.



 Paul.



 *From:* Khanh-Toan Tran 
 [mailto:khanh-toan.t...@cloudwatt.comkhanh-toan.t...@cloudwatt.com]

 *Sent:* 03 April 2014 15:53
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Nova] Hosts within two Availability Zones
 : possible or not ?



 +1 for AZs not sharing hosts.



 Because it's the only mechanism that allows us to segment the datacenter.
 Otherwise we cannot provide redundancy to clients except by using Regions,
 which are dedicated, network-separated infrastructure, and the anti-affinity
 filter, which IMO is not pragmatic as it has a tendency of abusive usage.  Why
 sacrifice this power so that users can select the types of physical hosts
 they desire? The latter can be exposed using flavor metadata, which is
 a lot safer and more controllable than using AZs. If someone insists that
 we really need to let users choose the types of physical hosts, then I
 suggest creating a new hint and using aggregates with it. Don't sacrifice AZ
 exclusivity!



 Btw, there is a datacenter design called dual-room [1] which I think
 fits AZs best, to make your cloud redundant even with one datacenter.



 Best regards,



 Toan



 [1] IBM and Cisco: Together for a World Class Data Center, Page 141.
 http://books.google.fr/books?id=DHjJAgAAQBAJpg=PA141#v=onepageqf=false







 *De :* Sylvain Bauza [mailto:sylvain.ba...@gmail.comsylvain.ba...@gmail.com]

 *Envoyé :* jeudi 3 avril 2014 15:52
 *À :* OpenStack Development Mailing List (not for usage questions)
 *Objet :* [openstack-dev] [Nova] Hosts within two Availability Zones :
 possible or not ?



 Hi,



 I'm currently trying to reproduce [1]. This bug requires having the same
 host in two different aggregates, each one having an AZ.



 IIRC, Nova API 

[openstack-dev] [Ceilometer]

2014-04-07 Thread Hachem Chraiti
Hi everyone, here's some Python code:

from ceilometerclient.v2 import client

ceilometer = client.Client(endpoint='http://controller:8777/v2/resources',
token='e8e70342225d64d1d20a')

print ceilometer.resources.list(q)


What's this q parameter??
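
For reference, in python-ceilometerclient's v2 API the q argument is an
optional list of filter dictionaries, each naming a field, an operator and a
value. A minimal sketch (the endpoint, token and resource id are
placeholders):

from ceilometerclient.v2 import client

ceilometer = client.Client(endpoint='http://controller:8777',
                           token='e8e70342225d64d1d20a')

# Each filter narrows the result set; here: resources whose resource_id
# equals the given value.
query = [{'field': 'resource_id', 'op': 'eq', 'value': 'some-resource-id'}]
print(ceilometer.resources.list(q=query))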

Sincerely,
Chraiti Hachem, software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Doc for Trove ?

2014-04-07 Thread Nikhil Manchanda

Tim Bell writes:

 From my understanding, Trove is due to graduate in the Juno release.

 Is documentation for developers, operators and users not one of the
 criteria
 (http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements)
 ?


Hi Tim:

Just to clarify, Trove graduated at the end of the Havana cycle, and has
its first integrated release in IceHouse. Even though Trove graduated
before the release criteria were put in place, we've worked to ensure
that we are aligned with them. We've been tracking our progress against
this at https://etherpad.openstack.org/p/TroveIntegrationRequirements.

That said, docs is probably the one area that we still have a deficit
in. We have Developer, API, and Client docs, but still need to do work
on our deployment documentation. We have folks currently working on
this, and hope to fill in this gap soon.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators Design Summit ideas for Atlanta

2014-04-07 Thread Matt Van Winkle
Based on some off-list chatter with Michael, Tom and others, I went ahead
and submitted a proposal for a nova session -
http://summit.openstack.org/cfp/details/245 - and used Tom's wording from
those submitted to other products.  This will hold a place while the Nova
PTL election finishes and we'll go from there.

Thanks!
Matt

On 4/6/14 9:32 PM, Michael Still mi...@stillhq.com wrote:

It might be that this is happening because there is no clear incumbent
for the Nova PTL position. Is it ok to hold off on this until after
the outcome of the election is known?

Michael

On Mon, Apr 7, 2014 at 2:23 PM, Tom Fifield t...@openstack.org wrote:
 So far, there's been no comment from anyone working on nova, so there's been
 no session proposed.

 I can, of course, propose a session ... but without buy-in from the project
 team it's unlikely to be accepted.


 Regards,


 Tom



 On 01/04/14 22:44, Matt Van Winkle wrote:

 So, I've been watching the etherpad and the summit submissions and I
 noticed that there isn't anything for nova.  Maybe I'm off base, but it
 seems like we'd be missing the mark to not have a Developer/Operator's
 exchange on the key product.  Is there anything we can do to get a session
 slotted like these other products?

 Thanks!
 Matt

 On 3/28/14 2:01 AM, Tom Fifield t...@openstack.org wrote:

 Thanks to those projects that responded. I've proposed sessions in
 swift, ceilometer, tripleO and horizon.

 On 17/03/14 07:54, Tom Fifield wrote:

 All,

 Many times we've heard a desire for more feedback and interaction from
 users. However, their attendance at design summit sessions is met with
 varied success.

 However, last summit, by happy accident, a swift session turned into
 something a lot more user-driven. A competent user was able to describe
 their use case, and the developers were able to stage a number of
 questions to them. In this way, some of the assumptions about the way
 certain things were implemented, and the various priorities of future
 plans became clearer. It worked really well ... perhaps this is
 something we'd like to have happen for all the projects?

 *Idea*: Add an ops session for each project in the design summit


 
https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions


 Most operators running OpenStack tend to treat it more holistically than
 those coding it. They are aware of, but don't necessarily think or work
 in terms of, project breakdowns. To this end, I'd imagine such
 sessions would:

* have a primary purpose for developers to ask the operators to answer
  questions, and request information

* allow operators to tell the developers things (give feedback) as a
  secondary purpose that could potentially be covered better in a
  cross-project session

* need good moderation, for example to push operator-to-operator
  discussion into forums with more time available (eg
  https://etherpad.openstack.org/p/ATL-ops-unconference-RFC )

* be reinforced by having volunteer good users in potentially every
  design summit session
  (https://etherpad.openstack.org/p/ATL-ops-in-design-sessions )


 Anyway, just a strawman - please jump on the etherpad


 
(https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions)
 or leave your replies here!


 Regards,


 Tom


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Location of 'enable_security_group' key in ml2_conf.ini

2014-04-07 Thread Akihiro Motoki
Hi Matt,

Thanks for raising this. Both should be in the same section.
The [securitygroup] section exists in Havana and previous releases, and
it is the right section name.

When we introduced the enable_security_group option, we seem to have added
a new section accidentally. We didn't intend to introduce a new section name.

IMO, both firewall_driver and enable_security_group should be placed in
[securitygroup].
It should be fixed ASAP. I will take care of it.

Thanks,
Akihiro


On Tue, Apr 8, 2014 at 5:51 AM, Matt Kassawara mkassaw...@gmail.com wrote:
 I'm writing the ML2 configuration sections for the installation guide and
 found a potential location conflict for the 'enable_security_group' key in
 ml2_conf.ini. In the patch associated with this feature, the example
 configuration file has this key under [security_group].

 https://review.openstack.org/#/c/67281/33/etc/neutron/plugins/ml2/ml2_conf.ini

 The most recent gate from the milestone-proposed branch also has this key
 under [security_group].

 http://logs.openstack.org/76/85676/1/gate/gate-tempest-dsvm-neutron/80af0f6/logs/etc/neutron/plugins/ml2/ml2_conf.ini.txt.gz

 However, the code has this key under [securitygroup] with the
 'firewall_driver' key.

 https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py

 What's the proper location for the 'enable_security_group' key?

 Thanks,
 Matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-07 Thread Ian Wells
On 3 April 2014 08:21, Khanh-Toan Tran khanh-toan.t...@cloudwatt.comwrote:

 Otherwise we cannot provide redundancy to client except using Region which
 is dedicated infrastructure and networked separated and anti-affinity
 filter which IMO is not pragmatic as it has tendency of abusive usage.


I'm sorry, could you explain what you mean here by 'abusive usage'?
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Location of 'enable_security_group' key in ml2_conf.ini

2014-04-07 Thread Martinx - ジェームズ
Hi!

I faced this problem this weekend, look:
https://bugs.launchpad.net/bugs/1303517

Currently, my ml2_conf.ini contains:

---
[security_group]
enable_security_group = True

[securitygroup]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
---

Best!
Thiago


On 7 April 2014 19:50, Akihiro Motoki amot...@gmail.com wrote:

 Hi Matt,

 Thanks for raising this. Both should be in the same section.
 The [securitygroup] section exists in Havana and previous releases, and
 it is the right section name.

 When we introduced the enable_security_group option, we seem to have added
 a new section accidentally. We didn't intend to introduce a new section name.

 IMO, both firewall_driver and enable_security_group should be placed in
 [securitygroup].
 It should be fixed ASAP. I will take care of it.

 Thanks,
 Akihiro


 On Tue, Apr 8, 2014 at 5:51 AM, Matt Kassawara mkassaw...@gmail.com
 wrote:
  I'm writing the ML2 configuration sections for the installation guide and
  found a potential location conflict for the 'enable_security_group' key
 in
  ml2_conf.ini. In the patch associated with this feature, the example
  configuration file has this key under [security_group].
 
 
 https://review.openstack.org/#/c/67281/33/etc/neutron/plugins/ml2/ml2_conf.ini
 
  The most recent gate from the milestone-proposed branch also has this key
  under [security_group].
 
 
 http://logs.openstack.org/76/85676/1/gate/gate-tempest-dsvm-neutron/80af0f6/logs/etc/neutron/plugins/ml2/ml2_conf.ini.txt.gz
 
  However, the code has this key under [securitygroup] with the
  'firewall_driver' key.
 
 
 https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py
 
  What's the proper location for the 'enable_security_group' key?
 
  Thanks,
  Matt
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-07 Thread Robert Collins
tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdob


On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote:
 Hi

 +1 for your proposed -core changes.

 Re your question about whether we should retroactively apply the 3-a-day
 rule to the 3 month review stats, my suggestion would be a qualified no.

 I think we've established an agile approach to the member list of -core, so
 if there are one or two people who we would have added to -core before the
 goalposts moved, I'd say look at their review quality. If they're showing
 the right stuff, let's get them in and helping. If they don't feel our new
 goalposts are achievable with their workload, they'll fall out again
 naturally before long.

So I've actioned the prior vote.

I said: Bnemec, jdob, greg etc - good stuff, I value your reviews
already, but...

So... looking at a few things - long period of reviews:
60 days:
| Reviewer     | Reviews  -2  -1  +1  +2  +A  +/- %  | Disagreements  |
| greghaynes   |     121   0  22  99   0   0  81.8%  |  14 ( 11.6%)   |
| bnemec       |     116   0  38  78   0   0  67.2%  |  10 (  8.6%)   |
| jdob         |      87   0  15  72   0   0  82.8%  |   4 (  4.6%)   |

90 days:
| Reviewer     | Reviews  -2  -1  +1  +2  +A  +/- %  | Disagreements  |
| bnemec       |     145   0  40 105   0   0  72.4%  |  17 ( 11.7%)   |
| greghaynes   |     142   0  23 119   0   0  83.8%  |  22 ( 15.5%)   |
| jdob         |     106   0  17  89   0   0  84.0%  |   7 (  6.6%)   |

Ben's reviews are thorough, he reviews across all contributors, he
shows good depth of knowledge and awareness across tripleo, and is
sensitive to the pragmatic balance between 'right' and 'good enough'.
I'm delighted to support him for core now.

Greg is very active, reviewing across all contributors with pretty
good knowledge and awareness. I'd like to see a little more contextual
awareness though - there are a few (but not many) reviews where looking
more at the big picture of how things fit together would have been
beneficial. *However*, I think that's a room-to-improve issue vs
not-good-enough-for-core - to me it makes sense to propose him for
core too.

Jay's reviews are also very good and consistent, somewhere between
Greg and Ben in terms of bigger-context awareness - so another
definite +1 from me.

-Rob




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-07 Thread Dan Prince


- Original Message -
 From: Robert Collins robe...@robertcollins.net
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Monday, April 7, 2014 4:00:30 PM
 Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
 
 So one interesting thing from the influx of new reviews is lots of
 patches exposing all the various plumbing bits of OpenStack. This is
 good in some ways (yay, we can configure more stuff), but in some ways
 its kindof odd - like - its not clear when
 https://review.openstack.org/#/c/83122/ is needed.
 
 I'm keen to expose things that are really needed, but i'm not sure
 that /all/ options are needed - what do folk think?

I think we can learn much from some of the more mature configuration management 
tools in the community on this front. Using puppet as an example here (although 
I'm sure other tools may do similar things as well)... Take configuration of 
the Nova API server. There is a direct configuration parameter for 
'neutron_metadata_proxy_shared_secret' in the Puppet nova::api class. This 
parameter is exposed in the class (sort of the equivalent of a TripleO element) 
directly because it is convenient and many users may want to customize the 
value. There are however hundreds of Nova config options and most of them 
aren't exposed as parameters in the various Nova puppet classes. For these it 
is possible to define a nova_config resource to configure *any* nova.conf 
parameter in an ad hoc style for your own installation tuning purposes.

I could see us using a similar model in TripleO where our elements support 
configuring common config elements directly, but we also allow people to tune 
extra undocumented options for their own use. There is always going to be a 
need for this as people need to tune things for their own installations with 
options that may not be appropriate for the common set of elements.

Standardizing this mechanism across many of the OpenStack service elements 
would also make a lot of sense. Today we have this for Nova:

nova:
  verbose: False
    - Print more verbose output (set logging level to INFO instead of
      default WARNING level).
  debug: False
    - Print debugging output (set logging level to DEBUG instead of
      default WARNING level).
  baremetal:
    pxe_deploy_timeout: 1200
  ...

I could see us adding a generic mechanism like this to overlay with the 
existing (documented) data structure:

nova:
  config:
    default.compute_manager: ironic.nova.compute.manager.ClusterComputeManager
    cells.driver: nova.cells.rpc_driver.CellsRPCDriver

And in this manner a user might be able to add *any* supported config param to 
the element.


 Also, some things
 really should be higher order operations - like the neutron callback
 to nova right - that should be either set to timeout in nova and
 configured in neutron, *or* set on both sides appropriately, never
 one-half or the other.
 
 I think we need to sort out our approach here to be systematic quite
 quickly to deal with these reviews.

I totally agree. I was also planning to email the list about this very issue 
this week :) My email subject was going to be TripleO templates... an upstream 
maintenance problem.

For the existing reviews today I think we should be somewhat selective about 
what parameters we expose as top level within the elements. That said we are 
missing some rather fundamental features to allow users to configure 
undocumented parameters as well. So we need to solve this problem quickly 
because there are certainly some configuration corner cases that users will need.

As it stands today, we are missing some rather fundamental features in os-apply-config 
and the elements to be able to pull this off. What we really need is a generic 
INI-style template generator. Or perhaps we could use something like Augeas or 
even devstack's simple INI editing functions to pull this off. In any case the 
idea would be that we allow users to inject their own undocumented config 
parameters into the various service config files. Or perhaps we could 
auto-generate mustache templates based off of the upstream sample config files. 
Many approaches would work here, I think...
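
As a rough illustration of the simple INI-editing approach, a minimal Python
sketch (the file path, section and option names are placeholders):

try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2

def set_config_option(path, section, key, value):
    # Inject a single (possibly undocumented) option into an INI file.
    parser = configparser.RawConfigParser()
    parser.read(path)
    if section != 'DEFAULT' and not parser.has_section(section):
        parser.add_section(section)
    parser.set(section, key, value)
    with open(path, 'w') as config_file:
        parser.write(config_file)

# e.g. set_config_option('/etc/nova/nova.conf', 'cells',
#                        'driver', 'nova.cells.rpc_driver.CellsRPCDriver')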


 
 Here's an attempt to do so - this could become a developers guide patch.
 
 Config options in TripleO
=========================
 
 Non-API driven configuration falls into four categories:
 A - fixed at buildtime (e.g. ld.so path)
 B - cluster state derived
 C - local machine derived
 D - deployer choices
 
 For A, it should be entirely done within the elements concerned.
 
 For B, the heat template should accept parameters to choose the
desired config (e.g. the Neutron-Nova example above) but then express
 the config in basic primitives in the instance metadata.
 
 For C, elements should introspect the machine (e.g. memory size to
 determine mysql memory footprint) inside os-refresh-config scripts;
 longer term we should make this an input layer to os-collect-config.
 
 For D, we need a 

[openstack-dev] [python-openstacksdk] Meeting Tuesday April 8 - 1900 UTC

2014-04-07 Thread Brian Curtin
Tomorrow is a scheduled python-openstacksdk meeting, although at least
Alex Gaynor and I will not be available, as we're in transit to
PyCon. I didn't hear from any others that they couldn't make the
meeting, so I'm guessing it will go on, just with someone else leading
it.

As Ed Leafe's proposal is the only thing that has changed (I got
bogged down in last-minute conference prep and don't have any code ready to
discuss yet), that's probably one topic to cover, but the rest of the
agenda is up to whoever shows up :)

https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting

Date/Time: Tuesday 8 April - 1900 UTC / 1400 CDT

IRC channel: #openstack-meeting-3

About the project:
https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK

If you have questions, all of us lurk in #openstack-sdks on freenode!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-07 Thread Ghe Rivero
+1 for the -core changes

On 04/08/2014 01:50 AM, Robert Collins wrote:
 tl;dr: 3 more core members to propose:
 bnemec
 greghaynes
 jdob


 On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote:
 Hi

 +1 for your proposed -core changes.

 Re your question about whether we should retroactively apply the 3-a-day
 rule to the 3 month review stats, my suggestion would be a qualified no.

 I think we've established an agile approach to the member list of -core, so
 if there are one or two people who we would have added to -core before the
 goalposts moved, I'd say look at their review quality. If they're showing
 the right stuff, let's get them in and helping. If they don't feel our new
 goalposts are achievable with their workload, they'll fall out again
 naturally before long.
 So I've actioned the prior vote.

 I said: Bnemec, jdob, greg etc - good stuff, I value your reviews
 already, but...

 So... looking at a few things - long period of reviews:
 60 days:
 | Reviewer     | Reviews  -2  -1  +1  +2  +A  +/- %  | Disagreements  |
 | greghaynes   |     121   0  22  99   0   0  81.8%  |  14 ( 11.6%)   |
 | bnemec       |     116   0  38  78   0   0  67.2%  |  10 (  8.6%)   |
 | jdob         |      87   0  15  72   0   0  82.8%  |   4 (  4.6%)   |

 90 days:

 | Reviewer     | Reviews  -2  -1  +1  +2  +A  +/- %  | Disagreements  |
 | bnemec       |     145   0  40 105   0   0  72.4%  |  17 ( 11.7%)   |
 | greghaynes   |     142   0  23 119   0   0  83.8%  |  22 ( 15.5%)   |
 | jdob         |     106   0  17  89   0   0  84.0%  |   7 (  6.6%)   |

 Ben's reviews are thorough, he reviews across all contributors, he
 shows good depth of knowledge and awareness across tripleo, and is
 sensitive to the pragmatic balance between 'right' and 'good enough'.
 I'm delighted to support him for core now.

 Greg is very active, reviewing across all contributors with pretty
 good knowledge and awareness. I'd like to see a little more contextual
 awareness though - there are a few (but not many) reviews where looking
 more at the big picture of how things fit together would have been
 beneficial. *However*, I think that's a room-to-improve issue vs
 not-good-enough-for-core - to me it makes sense to propose him for
 core too.

 Jay's reviews are also very good and consistent, somewhere between
 Greg and Ben in terms of bigger-context awareness - so another
 definite +1 from me.

 -Rob






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Location of 'enable_security_group' key in ml2_conf.ini

2014-04-07 Thread Matt Kassawara
Thiago,

My current ml2_conf.ini looks like your example. My environment also
continues to work if I omit the entire [security_group] section. However,
it stops working if I omit the [securitygroup] section.

Matt


On Mon, Apr 7, 2014 at 5:36 PM, Martinx - ジェームズ
thiagocmarti...@gmail.comwrote:

 Hi!

 I faced this problem this weekend, look:
 https://bugs.launchpad.net/bugs/1303517

 Currently, my ml2_conf.ini contains:

 ---
 [security_group]
 enable_security_group = True

 [securitygroup]
 firewall_driver =
 neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 ---

 Best!
 Thiago


 On 7 April 2014 19:50, Akihiro Motoki amot...@gmail.com wrote:

 Hi Matt,

 Thanks for raising this. Both should be in the same section.
 The [securitygroup] section exists in Havana and previous releases, and
 it is the right section name.

 When we introduced the enable_security_group option, we seem to have added
 a new section accidentally. We didn't intend to introduce a new section name.

 IMO, both firewall_driver and enable_security_group should be placed in
 [securitygroup].
 It should be fixed ASAP. I will take care of it.

 Thanks,
 Akihiro


 On Tue, Apr 8, 2014 at 5:51 AM, Matt Kassawara mkassaw...@gmail.com
 wrote:
  I'm writing the ML2 configuration sections for the installation guide
 and
  found a potential location conflict for the 'enable_security_group' key
 in
  ml2_conf.ini. In the patch associated with this feature, the example
  configuration file has this key under [security_group].
 
 
 https://review.openstack.org/#/c/67281/33/etc/neutron/plugins/ml2/ml2_conf.ini
 
  The most recent gate from the milestone-proposed branch also has this
 key
  under [security_group].
 
 
 http://logs.openstack.org/76/85676/1/gate/gate-tempest-dsvm-neutron/80af0f6/logs/etc/neutron/plugins/ml2/ml2_conf.ini.txt.gz
 
  However, the code has this key under [securitygroup] with the
  'firewall_driver' key.
 
 
 https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py
 
  What's the proper location for the 'enable_security_group' key?
 
  Thanks,
  Matt
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-07 Thread Zane Bitter
The Neutron API is a constant cause of pain for us as Heat developers, 
but afaik we've never attempted to bring up the issues we have found in 
a cross-project forum. I've recently been doing some more investigation 
and I want to document the exact ways in which the current Neutron API 
breaks orchestration, both in the hope that a future version of it might 
be better and as a guide for other API authors.


BTW it's my contention that an API that is bad for orchestration is also
hard to use for the ordinary user. When you're trying to figure
out the order of operations you need to do, there are two times at which 
you could find out you've got it wrong:


1) Before you run the command, when you realise you don't have all of 
the required data yet; or

2) After you run the command, when you get a cryptic error message.

Not only is (1) *mandatory* for a data-driven orchestration system like 
Heat, it offers orders-of-magnitude better user experience for everyone.


I should say at the outset that I know next to nothing about Neutron, 
and one of the goals of this message is to find out which parts I am 
completely wrong about. I did know a little bit about traditional 
networking at one time, and even remember some of it ;)



Neutron has a little documentation on workflow, so let's begin there: 
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#Theory


(1) Create a network
Instinctively, I want a Network to be something like a virtual VRF 
(VVRF?): a separate namespace with its own route table, within which
subnet prefixes are not overlapping, but which is completely independent 
of other Networks that may contain overlapping subnets. As far as I can 
tell, this basically seems to be the case. The difference, of course, is 
that instead of having to configure a VRF on every switch/router and 
make sure they're all in sync and connected up in the right ways, I just 
define it in one place globally and Neutron does the rest. I call this 
#winning. Nice work, Neutron.


(2) Associate a subnet with the network
Slightly odd choice of words, because you're actually creating a new 
Subnet (there's no such thing as a Subnet not associated with a 
Network), but this is probably just a minor documentation nit. 
Instinctively, I want a Subnet to be something like a virtual VLAN 
(VVLAN?): at its most basic level, just a group of ports that share a 
broadcast domain, but also having other properties (e.g. if L3 is in 
use, all IP addresses in the subnet should be in the same CIDR). This 
doesn't seem to be the case, though, it's just a CIDR prefix, which 
leaves me wondering how L2 traffic will be treated, as well as how I 
would do things like use both IPv4 and IPv6 on a single port (by 
assigning a port to multiple Subnets?). Looking at the docs, there is a 
much bigger emphasis on DHCP client settings than I expected - surely I 
might want to give two sets of ports in the same Subnet
different DHCP configs? Still, this is not bad - the DHCP configuration 
is done by the time the Subnet is created, so there's no problem in 
connecting stuff to it immediately after.


(3) Boot a VM and attach it to the network
Here's where you completely lost me. I just created a Subnet - maybe a 
bunch of Subnets. I don't want to attach my VM just anywhere in the 
*Network*, I want to attach it to a *particular* Subnet. It's not at all 
obvious where my instance will get attached (at random?), because this 
API just plain takes the Wrong data type. As a user, I'm irritated and 
confused.


The situation for orchestration, though, is much, much worse. Because 
the server takes a reference to a network, the dependency graph 
generated from my template will look like this:


   Network <---- Subnet
      ^
       \
        Server

And yet if the Server is created before the Subnet (as will happen ~50% 
of the time), it will fail. And vice-versa on delete, where the server 
must be removed before the subnet. The dependency graph we needed to 
create was this:


   Network <---- Subnet <---- Server

The solution used here was to jury-rig the resource types in Heat with a 
hidden dependency. We can't know which Subnet the server will end up 
attached to, so we create hidden dependencies on all of the ones defined 
in the same template. There's nothing we can do about Subnets defined in 
different templates (Heat allows a tree of templates to be instantiated 
with a single command) - I'm not sure, but it may be possible even now 
to create a tree of stacks that in practice could never be successfully 
deleted.
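
To make the workaround concrete, here is roughly what the hidden
dependency looks like (a sketch with assumed names and simplified logic,
not Heat's exact code):

    from heat.engine import resource

    class Server(resource.Resource):
        def add_dependencies(self, deps):
            super(Server, self).add_dependencies(deps)
            # We can't know which Subnet the server will land on, so
            # conservatively depend on every Subnet in the template.
            for res in self.stack.itervalues():
                if res.has_interface('OS::Neutron::Subnet'):
                    deps += (self, res)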


The Neutron models in Heat are so riddled with these kinds of invisible 
special-case hacks that all of our public documentation about how Heat 
can be expected to respond to a particular template is rendered 
effectively meaningless with respect to Neutron.


I should add that we can't blame Nova here, because explicitly creating 
a Port 

[openstack-dev] [neutron] [odl] FYHI: The OpenDaylight CI is broken due to an OpenDaylight bug

2014-04-07 Thread Kyle Mestery
The ODL CI is broken at the moment due to an upstream ODL bug around
the addition of some IPv6 parameters into the subnet create API on the
Neutron side. See the thread in the ODL lists here [1]. This is
actually a bug in the MoXy JsonProvider which ODL uses. A workaround
exists here [2], which was merged into ODL upstream master today. This
workaround is currently being backported into the stable Hydrogen
release. Until this happens, the ODL Jenkins CI will be broken. I may
temporarily change the ODL Jenkins CI to use the nightly master for
testing in the meantime.

Just an FYI for people who see this happening upstream.

Thanks,
Kyle

[1] https://lists.opendaylight.org/pipermail/discuss/2014-April/001934.html
[2] https://git.opendaylight.org/gerrit/#/c/5930/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-07 Thread Zhongyue Luo
Hi,

I know I'm not the only person who has had this problem, so here are two
simple steps to get the lines and line numbers aligned.

1. Install the stylebot extension

https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha

2. Click on the download icon to install the custom style for
git.openstack.org

http://stylebot.me/styles/5369

Thanks!

-- 
*Intel SSG/STO/DCST/CBE*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-07 Thread Michael Elder
Hi Steve,

Thank you -- this clarifies things quite a bit. 

I'd like to join that discussion at the summit if possible. 

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

“Success is not delivering a feature; success is learning how to solve the
customer’s problem.” -Mark Cook



From:   Steven Hardy sha...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   04/07/2014 12:00 PM
Subject:Re: [openstack-dev] [heat] Problems with Heat software 
configurations and KeystoneV2



On Sun, Apr 06, 2014 at 10:22:15PM -0400, Michael Elder wrote:
 If Keystone is configured with an external identity provider (LDAP, 
 OpenID, etc), how does the creation of a new user per resource affect 
that 
 external identity source? 

My understanding is that it should be possible to configure keystone to 
use
multiple (domain specific) identity backends.

So a possible configuration could be to have real users backed by the
LDAP directory, and have all projects/users associated with heat (which
are created in a special heat domain, completely separated from real
users) backed by some other identity backend, e.g. SQL.

http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers


This is something we should definitely test, and I'd welcome feedback from
the keystone folks, or anyone who has experience with this functionality,
as to how well it works in Icehouse.
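
As a rough sketch of such a configuration (the option names follow the
keystone docs above; the file layout and driver value are assumptions):

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = True
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.heat.conf
    [identity]
    driver = keystone.identity.backends.sql.Identity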

 My suggestion is broader, but in the same spirit: Could we consider 
 defining an _authorization_ stack token (thanks Adam), which acts like 

 an OAuth token (by delegating a set of actionable behaviors that a token 

 holder may perform). The stack token would be managed within the stack 

 in some protected form and used for any activities later performed on 
 resources which are managed by the stack. Instead of imposing user 
 administration tasks like creating users, deleting users, etc against 
the 
 Keystone database, Heat would instead provide these stack tokens to 
any 
 service which it connects to when managing a resource. In fact, there's 
no 
 real reason that the stack token couldn't piggyback on the existing 
 Keystone token mechanism, except that it would be potentially longer 
lived 
 and restricted to the specific set of resources for which it was 
granted. 

So oauth was considered before we implemented the domain-isolated users,
but it was not possible to pursue due to lack of client support:

https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers
https://blueprints.launchpad.net/heat/+spec/oauth-credentials-resource

The main issue with tokens as provided by keystone today is that they will
expire.  That is the main reason for choosing to create a user rather than,
e.g., a token limited in scope via a trust - if you put it in the instance,
you have to refresh it before expiry, which may not always be possible.

Additionally, you don't really want the credentials deployed inside an
(implicitly untrusted) instance derived from the operator who created the
stack - you want something associated with the stack but completely
isolated from real users.

Your stack token approach above appears to indicate that Heat would
somehow generate, and maintain the lifecycle of, some special token which
is not owned by keystone.  This idea has been discussed, and rejected,
because we would prefer to make use of keystone functionality instead of
having the burden of maintaining our own bespoke authorization system.

If implementing something based on oauth, or some sort of scope-limited
non-expiring token, becomes possible, it's quite likely we can provide the
option to do something other than the domain-isolated users approach which
has been implemented for Icehouse.

Ultimately, we had to use what was available in keystone *now* to enable
delivery of something which worked for Icehouse, hence the decision to use
what is available in the keystone v3 API.

 Not sure if email is the best medium for this discussion, so if there's 
a 
 better option, I'm happy to follow that path as well. 

I think it's fine, and I'm happy to get constructive feedback on the
current approach, along with ideas for roadmap items which can potentially
improve it.

I have proposed this summit session which may provide more opportunity for
discussion, if accepted:

http://summit.openstack.org/cfp/details/190

Steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-07 Thread Kevin Benton
I will just provide a few quick points of clarity.

Instinctively, I want a Subnet to be something like a virtual VLAN

That's what a network is. The network is the broadcast domain. That's why
you attach ports to the network. A subnet is just a block of IP addresses
to use on this network. If you have two subnets on the same network, they
will be sharing a broadcast domain. Since DHCP servers do not answer
anonymous queries, there are no conflicts, which is where the subnet
requirement comes in when creating a port.

To attach a port to a network and give it an IP from a specific subnet on
that network, you would use the *--fixed-ip subnet_id* option. Otherwise,
the create port request will use the first subnet it finds attached to that
network to allocate the port an IP address. This is why you are
encountering the port -> subnet -> network chain. Subnets provide the
addresses. Networks are the actual layer 2 boundaries.
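
To make that concrete, a sketch with python-neutronclient (credentials
and IDs here are placeholders of mine, not real values):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
    network_id = '...'   # existing network UUID
    subnet_id = '...'    # the particular subnet to pin the port to
    # Pin the port to one particular subnet on the network.
    port = neutron.create_port({'port': {
        'network_id': network_id,
        'fixed_ips': [{'subnet_id': subnet_id}],
    }})
    # The port id can then be handed to nova boot via --nic port-id=...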


 if I can do a create call immediately followed by an update call then the
Neutron API can certainly do this internally

Are you sure you can do that in an update_router call? There are separate
methods to add and remove router interfaces, none of which seem to be
referenced from the update_router method.
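
For reference, with python-neutronclient those are indeed separate calls
(a sketch, reusing the client and subnet_id from the snippet above;
router_id is an existing router UUID):

    neutron.add_interface_router(router_id, {'subnet_id': subnet_id})
    neutron.remove_interface_router(router_id, {'subnet_id': subnet_id})

Neither goes through update_router.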

It's not exactly clear why the external gateway is special enough that you
can have only one interface of this type on a Router, but not so special
that it would be considered a separate thing.

This is because it currently doubles as an indicator of the default route.
If you had multiple external networks, you would need another method of
specifying which to use for outbound traffic. The reason it's not a regular
port is because the port created for the external gateway cannot be owned
by the tenant since it's attaching to a network that the tenant does not
own. A special port is created for this gateway which the tenant does not
have direct control over so they can't mess with the external network.


An extra route doesn't behave at all like a static RIB entry (with a
weight and an administrative distance)

You and I have discussed this at lengths, but I will document it here for
the mailing list. :-)

This allows you to create static routes, which most certainly may live in
the RIB with IP addresses as the next hop. It's up to the neighbor (or
adjacency) discovery components to translate this to an L2 address (or
interface) when the route is installed in the FIB. It is very rare to find a modern
router that doesn't let you configure static routes with IP addresses.

Correct me if I'm wrong, but the fundamental contention is that you think
routes should never be allowed to point to anything not managed by
OpenStack. This constraint gives heat the ability to reference neutron port
objects as next hops, which is very useful for resolving dependencies.
However, this gain in dependency management comes at the cost of tenant
routers never being allowed to use devices outside of neutron as next-hop
devices. This may cover many of the use cases, but it is a breaking change
due to the loss of generality.


--
Kevin Benton


On Mon, Apr 7, 2014 at 5:28 PM, Zane Bitter zbit...@redhat.com wrote:

 The Neutron API is a constant cause of pain for us as Heat developers, but
 afaik we've never attempted to bring up the issues we have found in a
 cross-project forum. I've recently been doing some more investigation and I
 want to document the exact ways in which the current Neutron API breaks
 orchestration, both in the hope that a future version of it might be better
 and as a guide for other API authors.

 BTW it's my contention that an API that is bad for orchestration is also
 hard to use for the ordinary user as well. When you're trying to figure out
 the order of operations you need to do, there are two times at which you
 could find out you've got it wrong:

 1) Before you run the command, when you realise you don't have all of the
 required data yet; or
 2) After you run the command, when you get a cryptic error message.

 Not only is (1) *mandatory* for a data-driven orchestration system like
 Heat, it offers orders-of-magnitude better user experience for everyone.

 I should say at the outset that I know next to nothing about Neutron, and
 one of the goals of this message is to find out which parts I am completely
 wrong about. I did know a little bit about traditional networking at one
 time, and even remember some of it ;)


 Neutron has a little documentation on workflow, so let's begin there:
 http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#Theory

 (1) Create a network
 Instinctively, I want a Network to be something like a virtual VRF
 (VVRF?): a separate namespace with its own route table, within which
 subnet prefixes are not overlapping, but which is completely independent of
 other Networks that may contain overlapping subnets. As far as I can tell,
 this basically seems to be the case. The difference, of course, is that
 instead of having to configure 

Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-07 Thread Nachi Ueno
Hi Zane

Thank you for your very valuable post.
We should convert your suggest to multiple bps.

2014-04-07 17:28 GMT-07:00 Zane Bitter zbit...@redhat.com:
 The Neutron API is a constant cause of pain for us as Heat developers, but
 afaik we've never attempted to bring up the issues we have found in a
 cross-project forum. I've recently been doing some more investigation and I
 want to document the exact ways in which the current Neutron API breaks
 orchestration, both in the hope that a future version of it might be better
 and as a guide for other API authors.

 BTW it's my contention that an API that is bad for orchestration is also
 hard to use for the ordinary user as well. When you're trying to figure out
 the order of operations you need to do, there are two times at which you
 could find out you've got it wrong:

 1) Before you run the command, when you realise you don't have all of the
 required data yet; or
 2) After you run the command, when you get a cryptic error message.

 Not only is (1) *mandatory* for a data-driven orchestration system like
 Heat, it offers orders-of-magnitude better user experience for everyone.

 I should say at the outset that I know next to nothing about Neutron, and
 one of the goals of this message is to find out which parts I am completely
 wrong about. I did know a little bit about traditional networking at one
 time, and even remember some of it ;)


 Neutron has a little documentation on workflow, so let's begin there:
 http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#Theory

 (1) Create a network
 Instinctively, I want a Network to be something like a virtual VRF (VVRF?):
 a separate namespace with its own route table, within which subnet prefixes
 are not overlapping, but which is completely independent of other Networks
 that may contain overlapping subnets. As far as I can tell, this basically
 seems to be the case. The difference, of course, is that instead of having
 to configure a VRF on every switch/router and make sure they're all in sync
 and connected up in the right ways, I just define it in one place globally
 and Neutron does the rest. I call this #winning. Nice work, Neutron.

In Neutron, a network is an isolated virtual layer-2 broadcast domain:
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#subnet
so the model doesn't have any L3 stuff.

 (2) Associate a subnet with the network
 Slightly odd choice of words, because you're actually creating a new Subnet
 (there's no such thing as a Subnet not associated with a Network), but this
 is probably just a minor documentation nit. Instinctively, I want a Subnet
 to be something like a virtual VLAN (VVLAN?): at its most basic level, just
 a group of ports that share a broadcast domain, but also having other
 properties (e.g. if L3 is in use, all IP addresses in the subnet should be
 in the same CIDR). This doesn't seem to be the case, though, it's just a
 CIDR prefix, which leaves me wondering how L2 traffic will be treated, as
 well as how I would do things like use both IPv4 and IPv6 on a single port
 (by assigning a port to multiple Subnets?). Looking at the docs, there is a
 much bigger emphasis on DHCP client settings than I expected - surely I
 might want to give two sets of ports in the same Subnet different
 DHCP configs? Still, this is not bad - the DHCP configuration is done by the
 time the Subnet is created, so there's no problem in connecting stuff to it
 immediately after.

So, subnet has many meanings.
In Neutron, it means:
A subnet represents an IP address block that can be used to assign IP
addresses to virtual instances.
http://docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html#subnet

So a subnet in your definition is more like a network in Neutron.


 (3) Boot a VM and attach it to the network
 Here's where you completely lost me. I just created a Subnet - maybe a bunch
 of Subnets. I don't want to attach my VM just anywhere in the *Network*, I
 want to attach it to a *particular* Subnet. It's not at all obvious where my
 instance will get attached (at random?), because this API just plain takes
 the Wrong data type. As a user, I'm irritated and confused.

+1 for specifying the subnet when booting a server.
We should have a bp on the nova side and one in neutron.

 The situation for orchestration, though, is much, much worse. Because the
 server takes a reference to a network, the dependency graph generated from
 my template will look like this:

    Network <---- Subnet
       ^
        \
         Server

 And yet if the Server is created before the Subnet (as will happen ~50% of
 the time), it will fail. And vice-versa on delete, where the server must be
 removed before the subnet. The dependency graph we needed to create was
 this:

    Network <---- Subnet <---- Server

 The solution used here was to jury-rig the resource types in Heat with a
 hidden dependency. We can't know 

[openstack-dev] [gantt] scheduler sub-group meeting agenda 4/7

2014-04-07 Thread Dugger, Donald D

1) Scheduler forklift efforts
2) Atlanta summit scheduler sessions
3) Opens


Topic vault (so we don't forget)

1 - no-db scheduler

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance Icehouse RC bugs

2014-04-07 Thread Sam Morrison
Hi,

We’ve found a couple of bugs in the glance RC. They both have simple fixes that
restore some major features; I’m wondering if some glance experts can cast their
eye over them and decide whether they qualify for icehouse.

glance registry v2 doesn't work (This has got some attention already, thanks)
https://bugs.launchpad.net/glance/+bug/1302351
https://review.openstack.org/#/c/85313/

v2 API can't create image
https://bugs.launchpad.net/glance/+bug/1302345
https://review.openstack.org/#/c/85918/


Thanks,
Sam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Ironic][Nova-BM] avoiding self-power-off scenarios

2014-04-07 Thread Devananda van der Veen
In case it isn't clear to others, or in case I've misunderstood, I'd like
to start by rephrasing the problem statement.

* It is possible to use Ironic to deploy an instance of ironic-conductor on
bare metal, which joins the same cluster that deployed it.
* This, or some other event, could cause the hash ring distribution to
change such that the instance of ironic-conductor is managed by itself.
* A request to do any management (e.g., power off) of that instance will fail
in interesting ways...

Adding a CONF setting that a conductor may optionally advertise, which
alters the hash mapping and prevents self-management, is reasonable. The
ironic.common.hash_ring code will need to avoid mapping a node onto a conductor
with the same advertised UUID, but I think that will be easy. We can't
assume the driver has a pm_address key, though - some drivers may not.
Since the hash ring already knows node UUID, and a node's UUID is known
before an instance can be deployed to it, I think this will work. You can
pass that node's UUID in via heat when deploying Ironic via Ironic, and the
config will be present the first time the service starts, regardless of
which power driver is used.
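
A minimal sketch of the mapping guard (assumed names, not the actual
ironic.common.hash_ring code):

    def conductors_for_node(hash_ring, node_uuid, replicas=2):
        """Map a node onto conductors, never onto itself."""
        candidates = hash_ring.get_hosts(node_uuid)
        # Skip any conductor that advertised this node's UUID as the
        # machine it is itself running on.
        safe = [c for c in candidates
                if getattr(c, 'own_node_uuid', None) != node_uuid]
        return safe[:replicas]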

Also, the node UUID is already pushed out to Nova instance metadata :)

--
Devananda
On Apr 5, 2014 2:01 PM, Robert Collins robe...@robertcollins.net wrote:

 One fairly common failure mode folk run into is registering a node
 with a nova-bm/ironic environment that is itself part of that
 environment. E.g. if you deploy ironic-conductor using Ironic (scaling
 out a cluster say), that conductor can then potentially power itself
 off if the node that represents that conductor happens to map to it in the
 hash ring. It happens manually too when folk are just entering lots of
 nodes and don't realise one of them is also a deployment server :).

 I'm thinking that a good solution will have the following properties:
  - it's possible to manually guard against this
  - we can easily make the guard work for nova deployed machines

 And that we don't need to worry about:
  - arbitrary other machines in the cluster (because that's a heat
 responsibility, to not request redeploy of too many machines at once).

 For now, I only want to think about solving this for Ironic :).

 I think the following design might work:
  - a config knob in ironic-conductor that specifies its own pm_address
  - we push that back up as part of the hash ring metadata
  - in the hash ring don't set a primary or fallback conductor if the
 node pm address matches the conductor self pm address
  - in the Nova Ironic driver add instance metadata with the pm address
 (only) of the node

 Then we can just glue the instance metadata field to the conductor config
 key.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Some Thoughts on Log Message ID Generation Blueprint

2014-04-07 Thread Peng Wu
Thanks for the comments.
Maybe we could just search the English log. :-)

But I find it hard to meet all the requirements of log message IDs.
This is just a thought: we could avoid message ID generation entirely by
keeping the English log.
For debugging purposes, we can just read the English log.
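
A minimal sketch of the double-log idea with plain gettext (my
assumption, not oslo's actual API):

    import gettext
    import logging

    LOG = logging.getLogger(__name__)
    trans = gettext.translation('nova-log', localedir='/usr/share/locale',
                                languages=['ja'], fallback=True)

    volume_id = 'vol-0001'                    # placeholder value
    msg = 'Failed to attach volume %s'        # English msgid, kept searchable
    LOG.error(msg, volume_id)                 # English record for debugging
    LOG.error(trans.gettext(msg), volume_id)  # operator-locale record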

Regards,
  Peng Wu

On Mon, 2014-04-07 at 11:19 -0500, Ben Nemec wrote:
 On 04/03/2014 10:19 PM, Peng Wu wrote:
  Hi,
 
 Recently I read the Separate translation domain for log messages
  blueprint[1], and I found that we can store both English Message Log and
  Translated Message Log with some configurations.
 
 I am an i18n Software Engineer, and we are thinking about Add message
  IDs for log messages blueprint[2]. My thought is that if we can store
  both English Message Log and Translated Message Log, we can skip the
  need of Log Message ID Generation.
 
 I also commented the Add message IDs for log messages blueprint[2].
 
 If the servers always store English Log Messages, maybe we don't need
  the Add message IDs for log messages blueprint[2] any more.
 
 Feel free to comment this proposal.
 
  Thanks,
 Peng Wu
 
  Refer URL:
  [1]
  https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
  [2] https://blueprints.launchpad.net/oslo/+spec/log-messages-id
 
 As I recall, there were more reasons for log message ids than just i18n 
 issues.  There's also the fact that an error message might change 
 significantly from one release to another, but if it's still addressing 
 the same issue then the message id could be left alone so searching for 
 it would still return relevant results, regardless of the release.
 
 That said, I don't know if anyone is actually working on the message id 
 blueprint so I'm not sure how much it matters at this point. :-)
 
 -Ben
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev