Re: [openstack-dev] question about createbackup API in nova

2014-06-05 Thread Bohai (ricky)
 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: Wednesday, June 04, 2014 10:15 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] question about createbackup API in nova
 
 
 On 06/04/2014 02:56 AM, Bohai (ricky) wrote:
  Hi stackers,
 
  When I use the createBackup API, I found that it only snapshots the root
  disk of the instance.
  For an instance with multiple Cinder-backed volumes, it will not snapshot
  them.
  This is a little different from the behavior of the current createImage API.

  My question is whether this is a deliberate, previously discussed decision.
  I tried to find the reason but could not.
 
 I don't know if the original reasoning has been discussed, but there has been
 some discussion on improving this behavior.  For some of the summit
 discussion around this, see
 https://etherpad.openstack.org/p/juno-nova-multi-volume-snapshots .
 

Hi Andrew,

Thanks for your time and the URL you provided.

Best regards to you.
Ricky
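
For anyone comparing the two calls being discussed, here is a small
illustrative sketch using python-novaclient; the credentials and server name
are placeholders, and per the discussion above both calls snapshot only the
instance's root disk:

    # Illustrative only: compare createImage vs createBackup via novaclient.
    # Credentials, names and the keystone URL below are placeholders.
    from novaclient import client

    nova = client.Client("2", "user", "password", "project",
                         "http://keystone.example.com:5000/v2.0")
    server = nova.servers.find(name="my-instance")

    # createImage: a one-off snapshot of the instance's root disk
    image_id = nova.servers.create_image(server, "my-instance-snap")

    # createBackup: a rotated snapshot, still of the root disk only
    nova.servers.backup(server, "my-instance-backup", "daily", rotation=3)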

 
  Best regards to you.
  Ricky
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] request to review bug 1301359

2014-06-05 Thread Harshada Kakad
My patch does not actually rely on any unapproved feature of a client,
and I have not used any such features in my patch.
Could you let me know which feature you think I have used that would make
the patch fail in integration testing?


On Wed, Jun 4, 2014 at 6:14 PM, Matthias Runge mru...@redhat.com wrote:

 On Wed, Jun 04, 2014 at 04:13:33PM +0530, Harshada Kakad wrote:
  HI Matthias Runge,
 
  Which feature in trove are you talking about?
  And even which capabilities are missing which will make the patch fail?
  I believe the patch has nothing to do with
  https://review.openstack.org/#/c/83503/

 If your patch relies on an unapproved feature of a client and we did
 full integration testing with Horizon, your patch would fail, because
 the underlying client does not have the required feature.

 Does that make sense?
 --
 Matthias Runge mru...@redhat.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
Mobile: 9689187388
Email: harshada.ka...@izeltech.com
Website: www.izeltech.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Booting a Vm Failed VirtualMachineInterfaceFailed

2014-06-05 Thread Kevin Benton
This mailing list is dedicated to OpenStack development.
I would try your question on https://ask.openstack.org/ or the general list
mentioned here: https://wiki.openstack.org/wiki/Mailing_Lists#General_List

Cheers,
Kevin Benton


On Wed, Jun 4, 2014 at 9:11 PM, Sachi Gupta sachi.gu...@tcs.com wrote:

  Hi,

 1. While booting a virtual machine from the OpenStack dashboard, the call
 comes to OpenStack compute, where it shows the vif_binding details as
 vif_type = ovs.
 2. Please suggest how to change the vif_type to vrouter.


 Thanks & Regards
 Sachi Gupta







-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread YAMAMOTO Takashi
hi,

 ExaBGP was our first choice because we thought that running something in
 library mode would be much easier to deal with (especially the
 exceptions and corner cases) and the code would be much cleaner. But it
 seems that Ryu BGP can also fit this requirement. And having the help of
 a Ryu developer like you turns it into a promising candidate!
 
 I'll now start working on a proof of concept to run the agent with these
 implementations and see whether we need more requirements to compare the
 speakers.

we (the ryu team) would love to hear any suggestions and/or requests.
we are currently working on our bgp api refinement and documentation;
hopefully they will be available early next week.

for both of the bgp blueprints, it would be possible, and might be desirable,
to create reference implementations in python using ryu or exabgp.
(i prefer ryu. :-)

YAMAMOTO Takashi
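
For a rough idea of what "library mode" usage could look like, here is a
sketch based on Ryu's BGPSpeaker class (keyword names follow the current Ryu
API and may change once the refined API and docs mentioned above are
published; the AS numbers and addresses are placeholders):

    # Sketch only: run a BGP speaker in-process, peer with one neighbor and
    # advertise a prefix. Values below are placeholders.
    import eventlet
    eventlet.monkey_patch()

    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

    def best_path_change(event):
        # invoked whenever the best path for a prefix changes
        print(event.remote_as, event.prefix, event.nexthop, event.is_withdraw)

    speaker = BGPSpeaker(as_number=64512, router_id="192.0.2.1",
                         best_path_change_handler=best_path_change)
    speaker.neighbor_add(address="192.0.2.2", remote_as=64513)
    speaker.prefix_add(prefix="203.0.113.0/24")

    eventlet.sleep(60)   # let the speaker run for a while
    speaker.shutdown()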

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Kafka support and high throughput

2014-06-05 Thread Flavio Percoco

On 04/06/14 14:58 +, Hochmuth, Roland M wrote:

Hi Flavio, in your discussions around developing a Kafka plugin for
Marconi, would that potentially be done by adding a Kafka transport to
oslo.messaging? That is something I'm very interested in for the
monitoring-as-a-service project I'm working on.



Hey Roland,

No, oslo.messaging is a different project and Marconi doesn't rely on
it.

Cheers,
Flavio



Thanks --Roland


On 6/4/14, 3:06 AM, Flavio Percoco fla...@redhat.com wrote:


On 02/06/14 07:52 -0700, Keith Newstadt wrote:

Thanks for the responses Flavio, Roland.

Some background on why I'm asking:  we're using Kafka as the message
queue for a stream processing service we're building, which we're
delivering to our internal customers as a service along with OpenStack.
We're considering building a high throughput ingest API to get the
clients' data streams into the stream processing service.  It occurs to
me that this API is simply a messaging API, and so I'm wondering if we
should consider building this high throughput API as part of the Marconi
project.

Has this topic come up in the Marconi team's discussions, and would it
fit into the vision of the Marconi roadmap?


Yes it has and I'm happy to see this coming up in the ML, thanks.

Some things that we're considering in order to have a more flexible
architecture that will support a higher throughput are:

- Queue Flavors (terrible name). This is for Marconi what flavors are
 for Nova. It basically defines a set of properties that will belong
 to a queue. Some of those properties may be related to the messages'
 lifetime or the storage capabilities (in-memory, freaking fast,
 durable, etc). This is yet to be done.

- 2 new drivers (AMQP, Redis). The former adds support for brokers and
 the latter for, well, Redis, which brings in support for in-memory
 queues. Work in progress.

- A new transport. This is something we've discussed but we haven't
 reached an agreement yet on when this should be done nor what it
 should be based on. The gist of this feature is adding support for
 another protocol that can serve Marconi's API alongside the HTTP
 one. We've considered TCP and websockets so far. The former is
 perfect for lower-level communications without the HTTP overhead,
 whereas the latter is useful for web apps.

That said, a Kafka plugin is something we heard a lot about at the
summit and we've discussed it a bit. I'd love to see it happen as
an external plugin for now. There's no need to wait for the rest to
happen.

I'm more than happy to help with guidance and support on the repo
creation, driver structure etc.

Cheers,
Flavio



Thanks,
Keith Newstadt
keith_newst...@symantec.com
@knewstadt


Date: Sun, 1 Jun 2014 15:01:40 +
From: Hochmuth, Roland M roland.hochm...@hp.com
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Kafka support and high
throughput
Message-ID: cfae6524.762da%roland.hochm...@hp.com
Content-Type: text/plain; charset=us-ascii

There are some folks in HP evaluating different messaging technologies
for
Marconi, such as RabbitMQ and Kafka. I'll ping them and maybe they can
share
some information.

On a related note, the Monitoring as a Service solution we are working
on uses Kafka. This was just open-sourced at,
https://github.com/hpcloud-mon,
and will be moving over to StackForge starting next week. The
architecture
is at,
https://github.com/hpcloud-mon/mon-arch.

I haven't really looked at Marconi. If you are interested in
throughput, low latency, durability, scale and fault-tolerance Kafka
seems like a great choice.

It has also been pointed out by various sources that Kafka could possibly
be another oslo.messaging transport. Are you looking into that? That
would be very interesting to me and is something on my task list that I
haven't gotten to yet.


On 5/30/14, 7:03 AM, Keith Newstadt keith_newst...@symantec.com
wrote:


Has anyone given thought to using Kafka to back Marconi?  And has there
been discussion about adding high-throughput APIs to Marconi?

We're looking at providing Kafka as a messaging service for our
customers, in a scenario where throughput is a priority.  We've had good
luck using both streaming HTTP interfaces and long poll interfaces to
get
high throughput for other web services we've built.  Would this use case
be appropriate in the context of the Marconi roadmap?

Thanks,
Keith Newstadt
keith_newst...@symantec.com






Keith Newstadt
Cloud Services Architect
Cloud Platform Engineering
Symantec Corporation
www.symantec.com


Office: (781) 530-2299  Mobile: (617) 513-1321
Email: keith_newst...@symantec.com
Twitter: @knewstadt





Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread Yongsheng Gong
I think maybe we can devise a kind of framework so that we can plug in
different BGP speakers.


On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
wrote:

 hi,

  ExaBGP was our first choice because we thought that running something in
  library mode would be much easier to deal with (especially the
  exceptions and corner cases) and the code would be much cleaner. But it
  seems that Ryu BGP can also fit this requirement. And having the help of
  a Ryu developer like you turns it into a promising candidate!
 
  I'll now start working on a proof of concept to run the agent with these
  implementations and see whether we need more requirements to compare the
  speakers.

 we (the ryu team) would love to hear any suggestions and/or requests.
 we are currently working on our bgp api refinement and documentation;
 hopefully they will be available early next week.

 for both of the bgp blueprints, it would be possible, and might be desirable,
 to create reference implementations in python using ryu or exabgp.
 (i prefer ryu. :-)

 YAMAMOTO Takashi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Resource action API

2014-06-05 Thread yang zhang

Thanks so much for your comments.
 Date: Wed, 4 Jun 2014 14:39:30 -0400
 From: zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [heat] Resource action API
 
 On 04/06/14 03:01, yang zhang wrote:
  Hi all,
  Now heat only supports suspending/resuming a whole stack, all the
  resources of the stack will be suspended/resumed,
  but sometime we just want to suspend or resume only a part of resources
 
 Any reason you wouldn't put that subset of resources into a nested stack 
 and suspend/resume that?
   I think that using a nested stack is a little complicated, and we can't build
a nested stack for each resource; I hope this bp could make it easier.
  in the stack, so I think adding resource-action API for heat is
  necessary. this API will be helpful to solve 2 problems:
 
 I'm sceptical of this idea because the whole justification for having 
 suspend/resume in Heat is that it's something that needs to follow the 
 same dependency tree as stack delete/create.
  Are you suggesting that if you suspend an individual resource, all of 
 the resources dependent on it will also be suspended?
I thought about this, and I think just suspending an individual resource
without its dependents is OK. Right now the resources that can be suspended
are very few, and almost all of them (Server, alarm, user, etc.) could be
suspended individually.
 
   - If we want to suspend/resume the resources of the stack, you need
  to get the phy_id first and then call the API of other services, and
  this won't update the status
  of the resource in heat, which often cause some unexpected problem.
 
 This is true, except for stack resources, which obviously _do_ store the 
 state.
- this API could offer a turn on/off function for some native
  resources, e.g., we can turn on/off the autoscaling group or a single
  policy with the API; this is like the suspend/resume services
  feature [1] in AWS.
 
 Which, I notice, is not exposed in CloudFormation.
 I found it in the AWS docs. It seems to be an autoscaling group feature; it
may not be exposed in CloudFormation, but I think it's really a good idea.
 
I registered a bp for it, and you are welcome to discuss it.
  https://blueprints.launchpad.net/heat/+spec/resource-action-api
 
 Please propose blueprints to the heat-specs repo:
 http://lists.openstack.org/pipermail/openstack-dev/2014-May/036432.html
 
I'm sorry about that; I didn't notice the mail, and I will do it soon.
 thanks,
 Zane.
 

Regards,
Zhang Yang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] two confused part about Ironic

2014-06-05 Thread Jander lu
Hi, Devvananda

I searched a lot about the installation of Ironic, but there is little
material about this; there is only devstack with Ironic
(http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html).

Are there any docs about how to deploy Ironic in a production physical node
environment?

thx



2014-05-30 1:49 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 On Wed, May 28, 2014 at 8:14 PM, Jander lu lhcxx0...@gmail.com wrote:

 Hi, guys, I have two confused part in Ironic.



 (1) If I use the nova boot API to launch a physical instance, how does the
 nova boot command differentiate between VM and physical node provisioning?
 According to this article, nova bare metal uses a PlacementFilter instead of
 the FilterScheduler. Does Ironic use the same method? (
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/
 )


 That blog post is now more than three releases old. I would strongly
 encourage you to use Ironic, instead of nova-baremetal, today. To my
 knowledge, that PlacementFilter was not made publicly available. There are
 filters available for the FilterScheduler that work with Ironic.

 As I understand it, you should use host aggregates to differentiate the
 nova-compute services configured to use different hypervisor drivers (eg,
 nova.virt.libvirt vs nova.virt.ironic).



 (2) Does Ironic only support flat networking? If not, how does Ironic
 implement tenant isolation in a virtual network? Say, if one tenant has two
 virtual network namespaces, how does the created bare metal node instance
 send the DHCP request to the right namespace?


 Ironic does not yet perform tenant isolation when using the PXE driver,
 and should not be used in an untrusted multitenant environment today. There
 are other issues with untrusted tenants as well (such as firmware exploits)
 that make it generally unsuitable to untrusted multitenancy (though
 specialized hardware platforms may mitigate this).

 There have been discussions with Neutron, and work is being started to
 perform physical network isolation, but this is still some ways off.

 Regards,
 Devananda




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][multi-node installation]Compute node can't connnected to qpid and mysql.

2014-06-05 Thread Li, Chen
Hi list,

I'm trying to install OpenStack with the Neutron ML2 plugin via devstack on 2
CentOS nodes.

Basically, I have followed http://devstack.org/guides/multinode-lab.html.
That is really helpful.

Things look fine on the controller node.
But on the compute node, I hit many issues.

This is my local.conf on compute node:
http://paste.openstack.org/show/82898/

After run ./stack.sh, I get:
http://paste.openstack.org/show/82899/

So, I ran ./rejoin-stack.sh.
Three services have been started: 1$ q-agt 2$ n-cpu 3$ c-vol

But neutron-agent and nova-compute complain about being unable to connect to
the AMQP server,
and cinder-volume complains that the SQL connection failed.

Because services on the controller node work well, I first thought this was
caused by iptables,
but after I ran:
sudo iptables -I INPUT 1 -p tcp --dport 5672
sudo iptables -I INPUT 1 -p tcp --dport 3306
on the controller node,
the issue is still there.

Does anyone know why this is happening?

Thanks.
-chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-05 Thread Kuvaja, Erno
Hi,

+1 for the mission statement, but indeed why 2 changes?


-  Erno (jokke)

From: Mark Washenberger [mailto:mark.washenber...@markwash.net]
Sent: 05 June 2014 02:04
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog
Importance: High

Hi folks,

I'd like to propose the Images program to adopt a mission statement [1] and 
then change it to reflect our new aspirations of acting as a Catalog that works 
with artifacts beyond just disk images [2].

Since the Glance mini-summit early this year, momentum has been building
significantly behind the catalog effort, and I think it's time we recognized it
officially, to ensure further growth can proceed and to clarify the
interactions the Glance Catalog will have with other OpenStack projects.

Please see the linked openstack/governance changes, and provide your feedback 
either in this thread, on the changes themselves, or in the next TC meeting 
when we get a chance to discuss.

Thanks to Georgy Okrokvertskhov for coming up with the new mission statement.

Cheers
-markwash

[1] - https://review.openstack.org/98001
[2] - https://review.openstack.org/98002

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-06-05 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow,
Friday, at 0000 UTC.

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-05 Thread Mark McLoughlin
On Thu, 2014-05-29 at 15:29 -0400, Anita Kuno wrote:
 On 05/28/2014 08:54 AM, Radomir Dopieralski wrote:
  Hello,
  
  we plan to finally do the split in this cycle, and I started some
  preparations for that. I also started to prepare a detailed plan for the
  whole operation, as it seems to be a rather big endeavor.
  
  You can view and amend the plan at the etherpad at:
  https://etherpad.openstack.org/p/horizon-split-plan
  
  It's still a little vague, but I plan to gradually get it more detailed.
  All the points are up for discussion, if anybody has any good ideas or
  suggestions, or can help in any way, please don't hesitate to add to
  this document.
  
  We still don't have any dates or anything -- I suppose we will work that
  out soonish.
  
  Oh, and great thanks to all the people who have helped me so far with
  it, I wouldn't even dream about trying such a thing without you. Also
  thanks in advance to anybody who plans to help!
  
 I'd like to confirm that we are all aware that this patch creates 16 new
 repos under the administration of horizon-ptl and horizon-core:
 https://review.openstack.org/#/c/95716/
 
 If I'm late to the party and the only one that this is news to, that is
 fine. Sixteen additional repos seems like a lot of additional reviews
 will be needed.

One slightly odd thing about this is that these repos are managed by
horizon-core, so presumably part of the Horizon program, yet the
repos are under the stackforge/ namespace.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-05 Thread Xurong Yang
Great.
I will do more testing based on Eugene Nikanorov's modification.

Thanks,


2014-06-05 11:01 GMT+08:00 Isaku Yamahata isaku.yamah...@gmail.com:

 Wow, great.
 I think the same applies to the GRE type driver,
 so we should create a similar one after the VXLAN case is resolved.

 thanks,


 On Thu, Jun 05, 2014 at 12:36:54AM +0400,
 Eugene Nikanorov enikano...@mirantis.com wrote:

  We hijacked the vxlan initialization performance thread with ipam! :)
   I've tried to address the initial problem with some simple sqla stuff:
  https://review.openstack.org/97774
  With sqlite it gives ~3x benefit over existing code in master.
  Need to do a little bit more testing with real backends to make sure
  parameters are optimal.
 
  Thanks,
  Eugene.
 
 
  On Thu, Jun 5, 2014 at 12:29 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
    Yes, memcached is a candidate that looks promising.  First things first,
    though.  I think we need the abstraction of an ipam interface merged.
    That will take some more discussion and work on its own.
  
   Carl
   On May 30, 2014 4:37 PM, Eugene Nikanorov enikano...@mirantis.com
   wrote:
  
I was thinking it would be a separate process that would
 communicate over
   the RPC channel or something.
   memcached?
  
   Eugene.
  
  
   On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  
   Eugene,
  
   That was part of the whole new set of complications that I
   dismissively waved my hands at.  :)
  
    I was thinking it would be a separate process that would communicate
    over the RPC channel or something.  More complications come when you
    think about making this process HA, etc.  It would mean going over RPC
    to rabbit to get an allocation which would be slow.  But the current
    implementation is slow.  At least going over RPC is greenthread
    friendly where going to the database doesn't seem to be.
  
   Carl
  
   On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
   enikano...@mirantis.com wrote:
Hi Carl,
   
The idea of in-memory storage was discussed for similar problem,
 but
   might
not work for multiple server deployment.
Some hybrid approach though may be used, I think.
   
Thanks,
Eugene.
   
   
On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net
   wrote:
   
 This is very similar to IPAM...  There is a space of possible ids or
 addresses that can grow very large.  We need to track the allocation
 of individual ids or addresses from that space and be able to quickly
 come up with new allocations and recycle old ones.  I've had this in
 the back of my mind for a week or two now.

 A similar problem came up when the database would get populated with
 the entire free space worth of ip addresses to reflect the
 availability of all of the individual addresses.  With a large space
 (like an ip4 /8 or practically any ip6 subnet) this would take a very
 long time or never finish.

 Neutron was a little smarter about this.  It compressed availability
 into availability ranges in a separate table.  This solved the
 original problem but is not problem free.  It turns out that writing
 database operations to manipulate both the allocations table and the
 availability table atomically is very difficult and ends up being very
 slow, and it has caused us some grief.  The free space also gets
 fragmented, which degrades performance.  This is what led me --
 somewhat reluctantly -- to change how IPs get recycled back into the
 free pool, which hasn't been very popular.

 I wonder if we can discuss a good pattern for handling allocations
 where the free space can grow very large.  We could use the pattern
 for the allocation of IP addresses, VXLAN ids, and other similar
 resource spaces.

 For IPAM, I have been entertaining the idea of creating an allocation
 agent that would manage the availability of IPs in memory rather than
 in the database.  I hesitate, because that brings up a whole new set
 of complications.  I'm sure there are other potential solutions that I
 haven't yet considered.

 The L3 subteam is currently working on a pluggable IPAM model.  Once
 the initial framework for this is done, we can more easily play around
 with changing the underlying IPAM implementation.

 Thoughts?

 Carl

 On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com wrote:
  Hi, Folks,

  When we configure a VXLAN range of [1, 16M], the neutron-server service
  takes a long time and the CPU usage is very high (100%) during
  initialization. One test based on PostgreSQL has been verified: more
  than 1h when the VXLAN range is [1, 1M].

  So, any good solution for this performance issue?

  Thanks,
  Xurong Yang
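
 For illustration, the kind of single bulk INSERT Eugene refers to above
 could look roughly like the sketch below (the table and column names are
 illustrative only, not Neutron's actual schema or the change in review
 97774):

    # Minimal sketch: populate a large allocation table with one
    # executemany-style INSERT instead of one INSERT per VNI.
    from sqlalchemy import (Boolean, Column, Integer, MetaData, Table,
                            create_engine)

    engine = create_engine("sqlite://")
    metadata = MetaData()
    vxlan_allocations = Table(
        "vxlan_allocations", metadata,
        Column("vxlan_vni", Integer, primary_key=True),
        Column("allocated", Boolean, default=False))
    metadata.create_all(engine)

    # one round trip with a list of parameter dicts, rather than a
    # separate add() and flush for every VNI
    with engine.begin() as conn:
        conn.execute(
            vxlan_allocations.insert(),
            [{"vxlan_vni": vni, "allocated": False}
             for vni in range(1, 100001)])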



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

 

Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-05 Thread Radomir Dopieralski
On 06/05/2014 10:59 AM, Mark McLoughlin wrote:

 If I'm late to the party and the only one that this is news to, that is
 fine. Sixteen additional repos seems like a lot of additional reviews
 will be needed.
 
 One slightly odd thing about this is that these repos are managed by
 horizon-core, so presumably part of the Horizon program, but yet the
 repos are under the stackforge/ namespace.

What would you propose instead?
Keeping them in repositories external to OpenStack, on github or
bitbucket sounds wrong.
Getting them under openstack/ doesn't sound good either, as the
projects they are packaging are not related to OpenStack.

Have them be managed by someone else? Who?

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-05 Thread Steven Hardy
On Thu, Jun 05, 2014 at 12:17:07AM +, Randall Burt wrote:
 On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
  wrote:
 
  Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
  On 04/06/14 15:58, Vijendar Komalla wrote:
  Hi Devs,
  I have submitted an WIP review (https://review.openstack.org/#/c/97900/)
  for Heat parameters encryption blueprint
  https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
   This quick and dirty implementation encrypts all the parameters on
   Stack 'store' and decrypts them on Stack 'load'.
   Following are a couple of improvements I am thinking about:
   1. Instead of encrypting individual parameters, on Stack 'store' encrypt
   all the parameters together as a dictionary  [something like
   crypt.encrypt(json.dumps(param_dictionary))]
  
  Yeah, definitely don't encrypt them individually.
  
  2. Just encrypt parameters that were marked as 'hidden', instead of
  encrypting all parameters
  
  I would like to hear your feedback/suggestions.
  
  Just as a heads-up, we will soon need to store the properties of 
  resources too, at which point parameters become the least of our 
  problems. (In fact, in theory we wouldn't even need to store 
  parameters... and probably by the time convergence is completely 
  implemented, we won't.) Which is to say that there's almost certainly no 
  point in discriminating between hidden and non-hidden parameters.
  
  I'll refrain from commenting on whether the extra security this affords 
  is worth the giant pain it causes in debugging, except to say that IMO 
  there should be a config option to disable the feature (and if it's 
  enabled by default, it should probably be disabled by default in e.g. 
  devstack).
  
  Storing secrets seems like a job for Barbican. That handles the giant
  pain problem because in devstack you can just tell Barbican to have an
  open read policy.
  
  I'd rather see good hooks for Barbican than blanket encryption. I've
  worked with a few things like this and they are despised and worked
  around universally because of the reason Zane has expressed concern about:
  debugging gets ridiculous.
  
  How about this:
  
  parameters:
   secrets:
 type: sensitive
  resources:
   sensitive_deployment:
 type: OS::Heat::StructuredDeployment
 properties:
   config: weverConfig
   server: myserver
   input_values:
 secret_handle: { get_param: secrets }
  
  The sensitive type would, on the client side, store the value in Barbican,
  never in Heat. Instead it would just pass in a handle which the user
  can then build policy around. Obviously this implies the user would set
  up Barbican's in-instance tools to access the secrets value. But the
  idea is, let Heat worry about being high performing and introspectable,
  and then let Barbican worry about sensitive things.
 
 While certainly ideal, it doesn't solve the current problem since we can't 
 yet guarantee Barbican will even be available in a given release of 
 OpenStack. In the meantime, Heat continues to store sensitive user 
 information unencrypted in its database. Once Barbican is integrated, I'd be 
 all for changing this implementation, but until then, we do need an interim 
 solution. Sure, debugging is a pain and as developers we can certainly 
 grumble, but leaking sensitive user information because we were too fussed to 
 protect data at rest seems worse IMO. Additionally, the solution as described 
 sounds like we're imposing a pretty awkward process on a user to save 
 ourselves from having to decrypt some data in the cases where we can't access 
 the stack information directly from the API or via debugging running Heat 
 code (where the data isn't encrypted anymore).

Under what circumstances are we leaking sensitive user information?

Are you just trying to mitigate a potential attack vector, in the event of
a bug which leaks data from the DB?  If so, is the user-data encrypted in
the nova DB?

It seems to me that this will only be a worthwhile exercise if the
sensitive stuff is encrypted everywhere, and many/most use-cases I can
think of which require sensitive data involve that data ending up in nova
user|meta-data?

Steve
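
For reference, the "encrypt the whole parameter dict on store" idea quoted
earlier in this thread could look roughly like the sketch below; the Fernet
key is a stand-in for whatever key management the real change settles on,
and this is not the code under review:

    # Sketch only: encrypt/decrypt the stack parameters as a single blob.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice this would come from config/KMS
    fernet = Fernet(key)

    def encrypt_params(params):
        """On stack 'store': serialize and encrypt the whole dict."""
        return fernet.encrypt(json.dumps(params).encode("utf-8"))

    def decrypt_params(blob):
        """On stack 'load': decrypt and deserialize."""
        return json.loads(fernet.decrypt(blob).decode("utf-8"))

    stored = encrypt_params({"db_password": "s3cret", "flavor": "m1.small"})
    print(decrypt_params(stored))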

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Name proposals

2014-06-05 Thread Radomir Dopieralski
On 06/03/2014 06:44 PM, Radomir Dopieralski wrote:
 We decided that we need to pick the name for the splitting of Horizon
 properly. From now up to the next meeting on June 10 we will be
 collecting name proposals at:
 
 https://etherpad.openstack.org/p/horizon-name-proposals
 
 After that, until next meeting on June 17, we will be voting for
 the proposed names. In case the most popular name is impossible to use
 (due to trademark issues), we will use the next most popular. In case of
 a tie, we will pick randomly.

Just a quick remark. I allowed myself to clean the etherpad up a little,
putting all the proposed names on a single list, so that we can vote on
them later easily.

I would like to remind all of you, that the plan for the split itself is
posted on a separate etherpad, and that if anybody has any suggestions
for any additions or changes to the plan, I'd love to see them there:

https://etherpad.openstack.org/p/horizon-split-plan

That includes the suggestions for renaming the other part, and the
changes in the plan and in the later organization that would stem from
such a change.

Thank you,

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] A question about firewall

2014-06-05 Thread Xurong Yang
Hi, Stackers
My use case:

under project_id A:
1. create firewall rule default (share=false).
2. create firewall policy default (share=false).
3. attach the rule to the policy.
4. update the policy (share=true).

under project_id B:
1. create a firewall with the policy (share=true) from project A.
The firewall creation then fails and hangs with status=PENDING_CREATE.

openstack@openstack03:~/Vega$ neutron firewall-policy-list
+--------------------------------------+------+----------------------------------------+
| id                                   | name | firewall_rules                         |
+--------------------------------------+------+----------------------------------------+
| 7884fb78-1903-4af6-af3f-55e5c7c047c9 | Demo | [d5578ab5-869b-48cb-be54-85ee9f15d9b2] |
| 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | Test | [8679da8d-200e-4311-bb7d-7febd3f46e37, |
|                                      |      |  86ce188d-18ab-49f2-b664-96c497318056] |
+--------------------------------------+------+----------------------------------------+
openstack@openstack03:~/Vega$ neutron firewall-rule-list
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
| id                                   | name     | firewall_policy_id                   | summary                        | enabled |
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
| 8679da8d-200e-4311-bb7d-7febd3f46e37 | DenyOne  | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
|                                      |          |                                      |  source: none(none),           |         |
|                                      |          |                                      |  dest: 192.168.0.101/32(none), |         |
|                                      |          |                                      |  deny                          |         |
| 86ce188d-18ab-49f2-b664-96c497318056 | AllowAll | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
|                                      |          |                                      |  source: none(none),           |         |
|                                      |          |                                      |  dest: none(none),             |         |
|                                      |          |                                      |  allow                         |         |
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
openstack@openstack03:~/Vega$ neutron firewall-create --name Test Demo
Firewall Rule d5578ab5-869b-48cb-be54-85ee9f15d9b2 could not be found.
openstack@openstack03:~/Vega$ neutron firewall-show Test
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 7884fb78-1903-4af6-af3f-55e5c7c047c9 |
| id                 | 7c59c7da-ace1-4dfa-8b04-2bc6013dbc0a |
| name               | Test                                 |
| status             | PENDING_CREATE                       |
| tenant_id          | a0794fca47de4631b8e414beea4bd51b     |
+--------------------+--------------------------------------+
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mid-cycle sprints: a tracking page to rule them all

2014-06-05 Thread Thierry Carrez
Hey everyone,

With all those mid-cycle sprints being proposed I found it a bit hard to
track them all (and, like, see if I could just attend one). I tried to
list them all at:

https://wiki.openstack.org/wiki/Sprints

I'm pretty sure I missed some (in particular I couldn't find a Nova
mid-cycle meetup), so if you have one set up and participation is still
open, please add it there !

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread laserjetyang
  Will this Python patch fix your problem? http://bugs.python.org/issue7213

On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!  I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and one external command is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a green thread, rather than in another native thread. But that would hurt
 performance very much, so I do not think that is an acceptable solution.
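
 To make the two call styles concrete, here is a standalone sketch (not nova
 code; blocking_launch() and run_command() are stand-ins for the libguestfs
 launch and the concurrent external command):

    # Sketch only: contrast offloading a blocking call to a native thread
    # (tpool) with calling it directly in the green thread.
    import subprocess
    import eventlet
    eventlet.monkey_patch()
    from eventlet import tpool

    def blocking_launch():
        # placeholder for a long, blocking native-code call (e.g. a
        # libguestfs launch)
        return sum(i * i for i in range(10 ** 6))

    def run_command():
        # a concurrent fork/exec; in the deadlock scenario described above,
        # the child can inherit pipe fds that the native thread still holds
        return subprocess.check_output(["true"])

    cmd = eventlet.spawn(run_command)
    # style 1: offload to a native thread -- keeps the hub responsive, but
    # this is the pattern exposed to the fd-inheritance race
    result = tpool.execute(blocking_launch)
    # style 2 (the "simple solution" above): call blocking_launch() directly
    # in the green thread -- safe, but it blocks every other green thread
    print(result, cmd.wait())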



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
 ), if that's the case, the issue is very likely to happen when multiple
 concurrent KVM instance spawns (with both config drive and data injection
 enabled) are triggered.
 As in the libvirt/driver.py _create_image method, right after the ISO is made
 with cdb.make_drive, the driver will attempt data injection, which will call
 the libguestfs launch in another thread.

 It looks like there were also a couple of libguestfs hang issues on Launchpad,
 as below. I am not sure whether libguestfs itself can have some mechanism to
 free/close the fds inherited from the parent process instead of requiring an
 explicit tear-down call. Maybe open a defect against libguestfs to see what
 their thoughts are?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  *From:* Qin Zhao chaoc...@gmail.com
 *Date:* 2014-05-31 01:25
  *To:* OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run the Icehouse code, I encounter a strange problem. The
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know the root cause. This
 bug should be a deadlock problem caused by pipe fd leaking.  I drew a diagram
 to illustrate the problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720

 However, I have not found a very good solution to prevent this deadlock.
 This problem is related to the Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me to
 look for a solution? I will appreciate your help!

 --
 Qin Zhao






 --
 Qin Zhao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Team meeting this week

2014-06-05 Thread Michael Still
Hi.

This is a reminder that we will have a meeting today at 21:00 UTC. The agenda is
at https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mid-cycle sprints: a tracking page to rule them all

2014-06-05 Thread Michael Still
Thierry, you couldn't find the nova midcycle meetup because the final
details haven't been announced yet. I hope to fix that tomorrow.

Cheers,
Michael

On Thu, Jun 5, 2014 at 7:48 PM, Thierry Carrez thie...@openstack.org wrote:
 Hey everyone,

 With all those mid-cycle sprints being proposed I found it a bit hard to
 track them all (and, like, see if I could just attend one). I tried to
 list them all at:

 https://wiki.openstack.org/wiki/Sprints

 I'm pretty sure I missed some (in particular I couldn't find a Nova
 mid-cycle meetup), so if you have one set up and participation is still
 open, please add it there !

 --
 Thierry Carrez (ttx)




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-05 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-06-05 02:23:40 -0700:
 On Thu, Jun 05, 2014 at 12:17:07AM +, Randall Burt wrote:
  On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
   wrote:
  
   Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
   On 04/06/14 15:58, Vijendar Komalla wrote:
   Hi Devs,
   I have submitted an WIP review (https://review.openstack.org/#/c/97900/)
   for Heat parameters encryption blueprint
   https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
    This quick and dirty implementation encrypts all the parameters on
    Stack 'store' and decrypts them on Stack 'load'.
    Following are a couple of improvements I am thinking about:
   1. Instead of encrypting individual parameters, on Stack 'store' encrypt
   all the parameters together as a dictionary  [something like
   crypt.encrypt(json.dumps(param_dictionary))]
   
   Yeah, definitely don't encrypt them individually.
   
   2. Just encrypt parameters that were marked as 'hidden', instead of
   encrypting all parameters
   
   I would like to hear your feedback/suggestions.
   
   Just as a heads-up, we will soon need to store the properties of 
   resources too, at which point parameters become the least of our 
   problems. (In fact, in theory we wouldn't even need to store 
   parameters... and probably by the time convergence is completely 
   implemented, we won't.) Which is to say that there's almost certainly no 
   point in discriminating between hidden and non-hidden parameters.
   
   I'll refrain from commenting on whether the extra security this affords 
   is worth the giant pain it causes in debugging, except to say that IMO 
   there should be a config option to disable the feature (and if it's 
   enabled by default, it should probably be disabled by default in e.g. 
   devstack).
   
   Storing secrets seems like a job for Barbican. That handles the giant
   pain problem because in devstack you can just tell Barbican to have an
   open read policy.
   
   I'd rather see good hooks for Barbican than blanket encryption. I've
   worked with a few things like this and they are despised and worked
   around universally because of the reason Zane has expressed concern about:
   debugging gets ridiculous.
   
   How about this:
   
   parameters:
secrets:
  type: sensitive
   resources:
sensitive_deployment:
  type: OS::Heat::StructuredDeployment
  properties:
config: weverConfig
server: myserver
input_values:
  secret_handle: { get_param: secrets }
   
   The sensitive type would, on the client side, store the value in Barbican,
   never in Heat. Instead it would just pass in a handle which the user
   can then build policy around. Obviously this implies the user would set
   up Barbican's in-instance tools to access the secrets value. But the
   idea is, let Heat worry about being high performing and introspectable,
   and then let Barbican worry about sensitive things.
  
  While certainly ideal, it doesn't solve the current problem since we can't 
  yet guarantee Barbican will even be available in a given release of 
  OpenStack. In the meantime, Heat continues to store sensitive user 
  information unencrypted in its database. Once Barbican is integrated, I'd 
  be all for changing this implementation, but until then, we do need an 
  interim solution. Sure, debugging is a pain and as developers we can 
  certainly grumble, but leaking sensitive user information because we were 
  too fussed to protect data at rest seems worse IMO. Additionally, the 
  solution as described sounds like we're imposing a pretty awkward process 
  on a user to save ourselves from having to decrypt some data in the cases 
  where we can't access the stack information directly from the API or via 
  debugging running Heat code (where the data isn't encrypted anymore).
 
 Under what circumstances are we leaking sensitive user information?
 
 Are you just trying to mitigate a potential attack vector, in the event of
 a bug which leaks data from the DB?  If so, is the user-data encrypted in
 the nova DB?
 
 It seems to me that this will only be a worthwhile exercise if the
 sensitive stuff is encrypted everywhere, and many/most use-cases I can
 think of which require sensitive data involve that data ending up in nova
 user|meta-data?

I tend to agree Steve. The strategy to move things into a system with
strong policy controls like Barbican will mitigate these risks, as even
compromise of the given secret access information may not yield access
to the actual secrets. Basically, let's help facilitate end-to-end
encryption and access control, not just mitigate one attack vector
because the end-to-end one is hard.

Until then, our DBs will have sensitive information, and such is life.

(Of course, this also reminds me that I think we should probably add a
one-time-pad type of access method that we can use to prevent compromise
of our credentials 

Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-05 Thread Paul Michali (pcm)
I booked through our company travel and got a comparable rate ($111 or $114, I 
can’t recall the exact price).

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jun 5, 2014, at 12:48 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Yes, I was able to book it for $114 a night with no prepayment.  I had
 to call.  The agent found the block under Cisco and the date range.
 
 Carl
 
 On Wed, Jun 4, 2014 at 4:43 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 I think it's even cheaper than that. Try calling the hotel to get the
 better rate, I think Carl was able to successfully acquire the room at
 the cheaper rate (something like $115 a night or so).
 
 On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
 eperd...@cisco.com wrote:
 I tried to book online and it seems that the pre-payment is non-refundable:
 
  Hyatt.Com Rate - Rate Rules: Full prepayment required, non-refundable, no
  date changes.
 
 
 The price is $149 USD per night. Is that what you have blocked?
 
 Edgar
 
 On 6/4/14, 2:47 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
 Hi all:
 
 I was curious if people are having issues booking the room from the
 block I have setup. I received word from the hotel that only one (1!)
 person has booked yet. Given the mid-cycle is approaching in a month,
 I wanted to make sure that people are making plans for travel. Are
 people booking in places other than the one I had setup as reserved?
 If so, I'll remove the room block. Keep in mind the hotel I had a
 block reserved at is very convenient in that it's literally walking
 distance to the mid-cycle location at the Bloomington, MN Cisco
 offices.
 
 Thanks!
 Kyle
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Firewall is ineffective with floating ip?

2014-06-05 Thread Xurong Yang
Hi, Stackers,

Use case description:

The firewall is not working when setting the destination-ip-address to the
VM's floating IP.
Steps to Reproduce:
1. create one network and attached it to the newly created router
2. Create VMs on the above network
3. create security group rule for icmp
4. create an external network and attach it to the router as gateway
5. create floating ip and associate it to the VMs
6. create a first firewall rule with protocol=icmp, action=deny and
destination-ip-address set to the floating IP
7. create a second firewall rule with protocol=any, action=allow
8. attach the rule to the policy and the policy to the firewall
9. ping the VMs floating ip from network node which is having the external
network configured.

Actual Results:
Ping succeeds

Expected Results:
Ping should fail as per the firewall rule

The router provides both NAT and firewall functionality, so although we have
created the firewall rule, DNAT acts first (changing the floating IP to the
fixed IP) in the PREROUTING chain when the network node pings the VM's
floating IP; the firewall rules in the FORWARD chain therefore cannot match,
because the packet's IP has already been changed to the fixed IP.

additional case:
if we change the firewall rule to protocol=icmp, action=deny and
destination-ip-address set to the fixed IP, the ping fails.

In short, the router firewall cannot take effect on floating IPs.

what do you think?

Cheers,

Xurong Yang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Firewall is ineffective with floating ip?

2014-06-05 Thread ZZelle
Hi,

When the router receives packets from the external network, iptables
sequentially does:
 1) NAT PREROUTING table: translate the floating IP to the fixed IP
 2) FILTER FORWARD table: apply the FW rules ... on fixed IPs, because the
floating IP has already been translated to the fixed IP


So denying pings to the floating IP has no effect; you should instead
deny pings to the associated fixed IP.


More generally, in (iptables) FW rules you should use fixed IPs/CIDRs as
source/target, not floating IPs.


Cheers,

Cedric


On Thu, Jun 5, 2014 at 1:15 PM, Xurong Yang ido...@gmail.com wrote:

 Hi, Stackers,

 Use case description:

 Firewal is not working when setting the destination-ip-address as VM's
 floating ip
 Steps to Reproduce:
 1. create one network and attached it to the newly created router
 2. Create VMs on the above network
 3. create security group rule for icmp
 4. create an external network and attach it to the router as gateway
 5. create floating ip and associate it to the VMs
 6. create a first firewall rule as protocol=icmp , action =deny and
 desitination-ip-address as floatingip
 7. create second firewall rule as protocol=any action=allow
 8. attach the rule to the policy and the policy to the firewall
 9. ping the VMs floating ip from network node which is having the external
 network configured.

 Actual Results:
 Ping succeeds

 Expected Results:
 Ping should fail as per the firewall rule

 router's functionality both NAT and Firewall, so , although we have
 created firewall rule, DNAT will take action(change floating ip to fix ip)
 in PREROUTING chain preferentially when network node ping vm's floating ip,
 so firewall rules in FORWARD chain couldn't match because packet's ip has
 been changed to fix ip.

 additional case:
 if we change firewall rule protocol=icmp , action =deny and
 desitination-ip-address as fix ip, ping fail.

 in short , router firewall can't take effect about floating ip.

 what do you think?

 Cheers,

 Xurong Yang






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-05 Thread 严超
Hi, All:
When deploying with devstack and Ironic+Nova, we set:
compute_driver = nova.virt.ironic.IronicDriver
This means we can no longer use Nova to boot VMs.
Is there a way to manage both Ironic bare metal nodes and KVM VMs in
Nova?
I followed this link:
https://etherpad.openstack.org/p/IronicDeployDevstack


Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Is ironic support EXSI when boot a bare metal ?

2014-06-05 Thread 严超
Hi, All:
Does ironic support EXSI when booting a bare metal node? If it can, how do we
make a vmware EXSI ami bare metal image?

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Is ironic support EXSI when boot a bare metal ?

2014-06-05 Thread 严超
Sorry, it was ESXI.

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--


2014-06-05 19:39 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, All:
 Is ironic support EXSI when boot a bare metal ? If we can, how to
 make vmware EXSI ami bare metal image ?

 *Best Regards!*


 *Chao Yan--**My twitter:Andy Yan @yanchao727
 https://twitter.com/yanchao727*


 *My Weibo:http://weibo.com/herewearenow
 http://weibo.com/herewearenow--*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Sean Dague
You may all have noticed things are really backed up in the gate right
now, and you would be correct. (Top of gate is about 30 hrs, but if you
do the math on ingress / egress rates the gate is probably really double
that in transit time right now).

We've hit another threshold where there are so many really small races
in the gate that they are compounding to the point where fixing one is
often failed by another one killing your job. This whole situation was
exacerbated by the fact that while the transition from HP Cloud 1.0 to
1.1 was happening and we were under capacity, the check queue grew to
500 with lots of stuff being approved.

That flush all hit the gate at once. But it also means that those jobs
passed in a very specific timing situation, which is different on the
new HP cloud nodes. And the normal statistical distribution of some jobs
on RAX and some on HP that shake out different races didn't happen.

At this point we could really use help getting focus on only recheck
bugs. The current list of bugs is here:
http://status.openstack.org/elastic-recheck/

Also our categorization rate is only 75% so there are probably at least
2 critical bugs we don't even know about yet hiding in the failures.
Helping categorize here -
http://status.openstack.org/elastic-recheck/data/uncategorized.html
would be handy.

We're coordinating changes via an etherpad here -
https://etherpad.openstack.org/p/gatetriage-june2014

If you want to help, jumping in #openstack-infra would be the place to go.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-05 Thread Buraschi, Andres
Hi Brandon, thanks for your reply. Your explanation makes total sense to me. 
So, let's see what the consensus is. :)

Regards and have a nice day!
Andres

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Wednesday, June 04, 2014 6:28 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API

Hi Andres,
I've assumed (and we know how assumptions work) that the deprecation would take 
place in Juno and after a cycle or two it would totally be removed from the 
code.  Even if #1 is the way to go, the old /vips resource would be deprecated 
in favor of /loadbalancers and /listeners.

I agree #2 is cleaner, but I don't want to start on an implementation (though I 
kind of already have) that will fail to be merged in because of the strategy.  
The strategies are pretty different so one needs to be decided on.

As for where LBaaS is intended to end up, I don't want to speak for Kyle, so 
this is my understanding; It will end up outside of the Neutron code base but 
Neutron and LBaaS and other services will all fall under a Networking (or 
Network) program.  That is my understanding and I could be totally wrong.

Thanks,
Brandon

On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
 Hi Brandon, hi Kyle!
 I'm a bit confused about the deprecation (btw, thanks for sending this 
 Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new API 
 implementation. I understand the proposal and #2 sounds actually cleaner. 
 
 Just out of curiosity, Kyle, where is LBaaS functionality intended to end up, 
 if long-term plans are to remove it from Neutron?
 
 (Nit question, I must clarify)
 
 Thank you!
 Andres
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Wednesday, June 04, 2014 2:18 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
 Thanks for your feedback Kyle.  I will be at that meeting on Monday.
 
 Thanks,
 Brandon
 
 On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
  On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan 
  brandon.lo...@rackspace.com wrote:
   This is an LBaaS topic but I'd like to get some Neutron Core 
   members to give their opinions on this matter so I've just 
   directed this to Neutron proper.
  
   The design for the new API and object model for LBaaS needs to be 
   locked down before the hackathon in a couple of weeks and there 
   are some questions that need answered.  This is pretty urgent to 
   come on to a decision on and to get a clear strategy defined so we 
   can actually do real code during the hackathon instead of wasting 
   some of that valuable time discussing this.
  
  
   Implementation must be backwards compatible
  
   There are 2 ways that have come up on how to do this:
  
   1) New API and object model are created in the same extension and 
   plugin as the old.  Any API requests structured for the old API 
   will be translated/adapted to the into the new object model.
   PROS:
   -Only one extension and plugin
   -Mostly true backwards compatibility
   -Do not have to rename unchanged resources and models
   CONS:
   -May end up being confusing to an end-user.
   -Separation of old api and new api is less clear
   -Deprecating and removing old api and object model will take a bit more work
   -This is basically API versioning the wrong way
  
   2) A new extension and plugin are created for the new API and 
   object model.  Each API would live side by side.  New API would 
   need to have different names for resources and object models from 
   Old API resources and object models.
   PROS:
   -Clean demarcation point between old and new
   -No translation layer needed
   -Do not need to modify existing API and object model, no new bugs
   -Drivers do not need to be immediately modified
   -Easy to deprecate and remove old API and object model later
   CONS:
   -Separate extensions and object model will be confusing to end-users
   -Code reuse by copy paste since old extension and plugin will be deprecated and removed.
   -This is basically API versioning the wrong way
  
   Now if #2 is chosen to be feasible and acceptable then there are a 
   number of ways to actually do that.  I won't bring those up until 
   a clear decision is made on which strategy above is the most acceptable.
  
  Thanks for sending this out Brandon. I'm in favor of option #2 
  above, especially considering the long-term plans to remove LBaaS 
  from Neutron. That approach will help the eventual end goal there. I 
  am also curious on what others think, and to this end, I've added 
  this as an agenda item for the team meeting next Monday. Brandon, it 
  would be great to get you there for the part of the meeting where 
  we'll discuss this.
  
  Thanks!
  Kyle
  
   Thanks,
   Brandon
  
  
  
  
  
  
   ___
   

Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-05 Thread Mark McLoughlin
On Thu, 2014-06-05 at 11:19 +0200, Radomir Dopieralski wrote:
 On 06/05/2014 10:59 AM, Mark McLoughlin wrote:
 
  If I'm late to the party and the only one that this is news to, that is
  fine. Sixteen additional repos seems like a lot of additional reviews
  will be needed.
  
  One slightly odd thing about this is that these repos are managed by
  horizon-core, so presumably part of the Horizon program, but yet the
  repos are under the stackforge/ namespace.
 
 What would you propose instead?
 Keeping them in repositories external to OpenStack, on github or
 bitbucket sounds wrong.
 Getting them under openstack/ doesn't sound good either, as the
 projects they are packaging are not related to OpenStack.
 
 Have them be managed by someone else? Who?

If they're to be part of the Horizon program, I'd say they should be
under openstack/. If not, perhaps create a new team to manage them.

Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-05 Thread Sergey Lukjanov
All patches are now approved but stuck in the gate, so the 2014.1.1
release for Sahara will happen right after all changes are merged into the
stable/icehouse branch.

On Tue, Jun 3, 2014 at 2:30 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Okay, it makes sense, I've updated the etherpad -
 https://etherpad.openstack.org/p/sahara-2014.1.1

 Here is the chain of backports for 2014.1.1 -
 https://review.openstack.org/#/q/topic:sahara-2014.1.1,n,z

 Reviews appreciated; all changes are cherry-picked and only one conflict
 was in 
 https://review.openstack.org/#/c/97458/1/sahara/swift/swift_helper.py,cm
 due to the multi-region support addition.

 Thanks.

 On Tue, Jun 3, 2014 at 1:48 PM, Dmitry Mescheryakov
 dmescherya...@mirantis.com wrote:
 I agree with Andrew and actually think that we do need to have
 https://review.openstack.org/#/c/87573 (Fix running EDP job on
 transient cluster) fixed in stable branch.

 We also might want to add https://review.openstack.org/#/c/93322/
 (Create trusts for admin user with correct tenant name). This is
 another fix for transient clusters, but it is not even merged into
 master branch yet.

 Thanks,

 Dmitry

 2014-06-03 13:27 GMT+04:00 Sergey Lukjanov slukja...@mirantis.com:
 Here is etherpad to track preparation -
 https://etherpad.openstack.org/p/sahara-2014.1.1

 On Tue, Jun 3, 2014 at 10:08 AM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 /me proposing to backport:

 Docs:

 https://review.openstack.org/#/c/87531/ Change IRC channel name to
 #openstack-sahara
 https://review.openstack.org/#/c/96621/ Added validate_edp method to
 Plugin SPI doc
 https://review.openstack.org/#/c/89647/ Updated architecture diagram in 
 docs

 EDP:

 https://review.openstack.org/#/c/93564/ 
 https://review.openstack.org/#/c/93564/

 On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Hey folks,

 this Thu, June 5 is the date for 2014.1.1 release. We already have
 some back ported patches to the stable/icehouse branch, so, the
 question is do we need some more patches to back port? Please, propose
 them here.

 2014.1 - stable/icehouse diff:
 https://github.com/openstack/sahara/compare/2014.1...stable/icehouse

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [cinder] Do we now require schema response validation in tempest clients?

2014-06-05 Thread Ollie Leahy
I've submitted a patch that copies the compute http response validation 
structure for the 'cinder list' request.


https://review.openstack.org/96440

I'm having trouble getting it through the gate, but input from the 
cinder group about whether this is useful work that I should persist 
with would be appreciated.


Thanks,
Ollie


On 01/05/14 05:02, Ken'ichi Ohmichi wrote:

Hi David,

2014-05-01 5:44 GMT+09:00 David Kranz dkr...@redhat.com:

There have been a lot of patches that add the validation of response dicts.
We need a policy on whether this is required or not. For example, this patch

https://review.openstack.org/#/c/87438/5

is for the equivalent of 'cinder service-list' and is a basically a copy of
the nova test which now does the validation. So two questions:

Is cinder going to do this kind of checking?
If so, should new tests be required to do it on submission?

I'm not sure whether someone will add the same validation we are adding to the
Nova API tests to the Cinder API tests as well, but it would be nice for Cinder
and Tempest. The validation can be applied to the other projects (Cinder, etc.)
easily because the base framework is implemented in the common rest client
of Tempest.

When adding new tests like https://review.openstack.org/#/c/87438 , I don't
have a strong opinion about including the validation as well. These schemas will
sometimes be large, and the combination in the same patch would make reviews
difficult. In the current Nova API test implementations, we are separating them
into different patches.


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
This behavior of os.pipe() has changed in Python 3.x so it won't be an
issue on newer Python (if only it was accessible for us).

From the looks of it you can mitigate the problem by running libguestfs
requests in a separate process (multiprocessing.managers comes to mind).
This way the only descriptors child process could theoretically inherit
would be long-lived pipes to main process although they won't leak because
they should be marked with CLOEXEC before any libguestfs request is run.
The other benefit is that this separate process won't be busy opening and
closing tons of fds so the problem with inheriting will be avoided.
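
A minimal sketch of that isolation, assuming plain multiprocessing and a
hypothetical callable standing in for the libguestfs work (illustrative only,
not the Nova driver code):

import multiprocessing


def _worker(func, args, queue):
    # Runs in the child process; ship back either the result or the exception.
    try:
        queue.put((True, func(*args)))
    except Exception as exc:
        queue.put((False, exc))


def run_isolated(func, *args):
    # Execute func(*args) in a short-lived child so none of the parent's
    # transient pipe fds are around to be inherited by exec'd helpers.
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker, args=(func, args, queue))
    proc.start()
    ok, value = queue.get()
    proc.join()
    if ok:
        return value
    raise value

# e.g. run_isolated(inspect_image, '/path/to/disk')  # inspect_image is hypothetical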


On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com wrote:

   Will this patch of Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and one external command is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a greenthread, rather than in another native thread. But it will impact the
 performance very much. So I do not think that is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ), if that's the case, the issue is very likely to happen when multiple KVM
 instances are spawned concurrently (with both config drive and data
 injection enabled).
 As in the libvirt/driver.py _create_image method, right after the ISO is made
 by cdb.make_drive, the driver will attempt data injection, which will launch
 libguestfs in another thread.

 It looks like there were also a couple of libguestfs hang issues on Launchpad,
 listed below. I am not sure whether libguestfs itself can have a mechanism
 to free/close the fds inherited from the parent process instead of requiring
 the teardown to be called explicitly. Maybe open a defect against libguestfs
 to see what they think?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


 From: Qin Zhao chaoc...@gmail.com
 Date: 2014-05-31 01:25
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run the Icehouse code, I encounter a strange problem: the
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause. This
 bug should be a deadlock caused by pipe fd leaking.  I drew a diagram to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not found a very good solution to prevent this deadlock.
 This problem involves the Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me
 look for a solution? I will appreciate your help!

 --
 Qin Zhao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Qin Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] A question about firewall

2014-06-05 Thread Gary Duan
Xurong,

The firewall is colocated with the router. You need to create a router first;
then the firewall state will be updated.

Gary


On Thu, Jun 5, 2014 at 2:48 AM, Xurong Yang ido...@gmail.com wrote:

 Hi, Stackers
 My use case:

 under project_id A:
 1.create firewall rule default(share=false).
 2.create firewall policy default(share=false).
 3.attach rule to policy.
 4.update policy(share=true)

 under project_id B:
 1.create firewall with policy(share=true) based on project A.
 then create firewall fail and suspend with status=PENDING_CREATE

 openstack@openstack03:~/Vega$ neutron firewall-policy-list
 +--------------------------------------+------+----------------------------------------+
 | id                                   | name | firewall_rules                         |
 +--------------------------------------+------+----------------------------------------+
 | 7884fb78-1903-4af6-af3f-55e5c7c047c9 | Demo | [d5578ab5-869b-48cb-be54-85ee9f15d9b2] |
 | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | Test | [8679da8d-200e-4311-bb7d-7febd3f46e37, |
 |                                      |      |  86ce188d-18ab-49f2-b664-96c497318056] |
 +--------------------------------------+------+----------------------------------------+
 openstack@openstack03:~/Vega$ neutron firewall-rule-list
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 | id                                   | name     | firewall_policy_id                   | summary                        | enabled |
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 | 8679da8d-200e-4311-bb7d-7febd3f46e37 | DenyOne  | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
 |                                      |          |                                      |  source: none(none),           |         |
 |                                      |          |                                      |  dest: 192.168.0.101/32(none), |         |
 |                                      |          |                                      |  deny                          |         |
 | 86ce188d-18ab-49f2-b664-96c497318056 | AllowAll | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
 |                                      |          |                                      |  source: none(none),           |         |
 |                                      |          |                                      |  dest: none(none),             |         |
 |                                      |          |                                      |  allow                         |         |
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 openstack@openstack03:~/Vega$ neutron firewall-create --name Test Demo
 Firewall Rule d5578ab5-869b-48cb-be54-85ee9f15d9b2 could not be found.
 openstack@openstack03:~/Vega$ neutron firewall-show Test
 +--------------------+--------------------------------------+
 | Field              | Value                                |
 +--------------------+--------------------------------------+
 | admin_state_up     | True                                 |
 | description        |                                      |
 | firewall_policy_id | 7884fb78-1903-4af6-af3f-55e5c7c047c9 |
 | id                 | 7c59c7da-ace1-4dfa-8b04-2bc6013dbc0a |
 | name               | Test                                 |
 | status             | PENDING_CREATE                       |
 | tenant_id          | a0794fca47de4631b8e414beea4bd51b     |
 +--------------------+--------------------------------------+


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-05 Thread Day, Phil


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 04 June 2014 19:23
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
 allocation ratio out of scheduler
 
 On 06/04/2014 11:56 AM, Day, Phil wrote:
  Hi Jay,
 
  * Host aggregates may also have a separate allocation ratio that
  overrides any configuration setting that a particular host may have
 
  So with your proposal would the resource tracker be responsible for
  picking and using override values defined as part of an aggregate that
  includes the host ?
 
 Not quite sure what you're asking, but I *think* you are asking whether I am
 proposing that the host aggregate's allocation ratio that a compute node
 might be in would override any allocation ratio that might be set on the
 compute node? I would say that no, the idea would be that the compute
 node's allocation ratio would override any host aggregate it might belong to.
 

I'm not sure why you would want it that way round - aggregates let me 
set/change the value for a number of hosts, and change the set of hosts that the 
values apply to.    That in general seems a much better model for operators 
than having to manage things on a per-host basis.

Why not keep the current model, where an aggregate setting overrides the 
default - which will now come from the host config rather than the scheduler 
config?

Cheers,
Phil
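
For illustration only (not Nova code), the precedence argued for above could
be expressed as a small helper; the function name and default value are
placeholders:

def effective_allocation_ratio(aggregate_ratio, host_ratio, default=16.0):
    # An aggregate-level override wins over the per-host setting,
    # which in turn wins over a global default.
    if aggregate_ratio is not None:
        return aggregate_ratio
    if host_ratio is not None:
        return host_ratio
    return default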

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Is ironic support EXSI when boot a bare metal ?

2014-06-05 Thread Chris K
Hi Chao,

The ironic ssh driver does support vmware. See
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ssh.py#L69-L89.
Have you seen the Triple-O tools, mainly Disk Image Builder (
https://github.com/openstack/diskimage-builder). This is how I build images
I use for testing. I have not tested the VMware parts of Ironic, as I do not
have a VMware server to test with, but others have tested them.

Hope this helps.

Chris Krelle
--NobodyCam


On Thu, Jun 5, 2014 at 5:54 AM, 严超 yanchao...@gmail.com wrote:

 Sorry, it was ESXI.

 Best Regards!

 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --


 2014-06-05 19:39 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, All:
 Does Ironic support EXSI when booting a bare metal node? If it does, how do
 we make a VMware EXSI AMI bare-metal image?

 Best Regards!

 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-05 Thread Kyle Mestery
On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 Hi Andres,
 I've assumed (and we know how assumptions work) that the deprecation
 would take place in Juno and after a cycle or two it would totally be
 removed from the code.  Even if #1 is the way to go, the old /vips
 resource would be deprecated in favor of /loadbalancers and /listeners.

 I agree #2 is cleaner, but I don't want to start on an implementation
 (though I kind of already have) that will fail to be merged in because
 of the strategy.  The strategies are pretty different so one needs to be
 decided on.

 As for where LBaaS is intended to end up, I don't want to speak for
 Kyle, so this is my understanding; It will end up outside of the Neutron
 code base but Neutron and LBaaS and other services will all fall under a
 Networking (or Network) program.  That is my understanding and I could
 be totally wrong.

That's my understanding as well, I think Brandon worded it perfectly.

 Thanks,
 Brandon

 On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
 Hi Brandon, hi Kyle!
 I'm a bit confused about the deprecation (btw, thanks for sending this 
 Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new 
 API implementation. I understand the proposal and #2 sounds actually cleaner.

 Just out of curiosity, Kyle, where is LBaaS functionality intended to end 
 up, if long-term plans are to remove it from Neutron?

 (Nit question, I must clarify)

 Thank you!
 Andres

 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Wednesday, June 04, 2014 2:18 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API

 Thanks for your feedback Kyle.  I will be at that meeting on Monday.

 Thanks,
 Brandon

 On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
  On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
   This is an LBaaS topic but I'd like to get some Neutron Core members
   to give their opinions on this matter so I've just directed this to
   Neutron proper.
  
   The design for the new API and object model for LBaaS needs to be
   locked down before the hackathon in a couple of weeks and there are
   some questions that need answered.  This is pretty urgent to come on
   to a decision on and to get a clear strategy defined so we can
   actually do real code during the hackathon instead of wasting some
   of that valuable time discussing this.
  
  
   Implementation must be backwards compatible
  
   There are 2 ways that have come up on how to do this:
  
   1) New API and object model are created in the same extension and
   plugin as the old.  Any API requests structured for the old API will
   be translated/adapted to the into the new object model.
   PROS:
   -Only one extension and plugin
   -Mostly true backwards compatibility
   -Do not have to rename unchanged resources and models
   CONS:
   -May end up being confusing to an end-user.
   -Separation of old api and new api is less clear
   -Deprecating and removing old api and object model will take a bit more work
   -This is basically API versioning the wrong way
  
   2) A new extension and plugin are created for the new API and object
   model.  Each API would live side by side.  New API would need to
   have different names for resources and object models from Old API
   resources and object models.
   PROS:
   -Clean demarcation point between old and new
   -No translation layer needed
   -Do not need to modify existing API and object model, no new bugs
   -Drivers do not need to be immediately modified
   -Easy to deprecate and remove old API and object model later
   CONS:
   -Separate extensions and object model will be confusing to end-users
   -Code reuse by copy paste since old extension and plugin will be deprecated and removed.
   -This is basically API versioning the wrong way
  
   Now if #2 is chosen to be feasible and acceptable then there are a
   number of ways to actually do that.  I won't bring those up until a
   clear decision is made on which strategy above is the most acceptable.
  
  Thanks for sending this out Brandon. I'm in favor of option #2 above,
  especially considering the long-term plans to remove LBaaS from
  Neutron. That approach will help the eventual end goal there. I am
  also curious on what others think, and to this end, I've added this as
  an agenda item for the team meeting next Monday. Brandon, it would be
  great to get you there for the part of the meeting where we'll discuss
  this.
 
  Thanks!
  Kyle
 
   Thanks,
   Brandon
  
  
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

[openstack-dev] [FUEL] OpenStack patching and FUEL upgrade follow-up meeting minutes

2014-06-05 Thread Vladimir Kuklin
Hi, fuelers

We had a meeting today to discuss how we gonna meet all the requirements
for OpenStack patching (
https://blueprints.launchpad.net/fuel/+spec/patch-openstack) and FUEL
Upgrade blueprints.
These are the main points:

1. We need strict EOS and EOL rules to decide how many maintenance releases
there will be for each series, or our QA team and infrastructure will never
be able to digest it.
2. We need to separate versioning for components. We will have several
versions in each release:
a. version of OpenStack we support
b. version of FUEL and all its components
c. each deployed environment will have its own version which will consist
of FUEL library version and OpenStack version
d. Nailgun engine should have versioning of serializers which pass data to
provisioning and deployment engines in order to support backward
compatibilities between FUEL versions (e.g. to add node to 5.0.x
environment from 5.1 FUEL node)
3. We need to clearly specify the restrictions on which patching and upgrade
processes we support:
a. New environments can only be deployed with the latest version of
OpenStack and FUEL Library supported
b. Old environments can only be updated within the only minor release (e.g.
5.0.1-5.0.2 is allowed, 5.0.1-5.1 is not)
4. We have some devops tasks we need to finish to feel more comfortable in
the future to make testing of patching much easier:
a. we need to finish devops bare metal and distributed environments setup to
make CI and testing process easier
b. we need to implement elastic-recheck like feature to analyze our CI
results in order to allow developers to retrigger checks in case of
floating bugs
c. we need to start using a more sophisticated scheduler



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-05 Thread Kyle Mestery
It would be ideal if folks could use the room block I reserved when
booking, if their company policy allows it. I've gotten word from the
hotel they may release the block if more people don't use it, just
FYI.

On Thu, Jun 5, 2014 at 5:46 AM, Paul Michali (pcm) p...@cisco.com wrote:
 I booked through our company travel and got a comparable rate ($111 or $114, 
 I can’t recall the exact price).

 Regards,

 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Jun 5, 2014, at 12:48 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Yes, I was able to book it for $114 a night with no prepayment.  I had
 to call.  The agent found the block under Cisco and the date range.

 Carl

 On Wed, Jun 4, 2014 at 4:43 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 I think it's even cheaper than that. Try calling the hotel to get the
 better rate, I think Carl was able to successfully acquire the room at
 the cheaper rate (something like $115 a night or so).

 On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
 eperd...@cisco.com wrote:
 I tried to book online and it seems that the pre-payment is non-refundable:

 Hyatt.com Rate (Rate Rules): Full prepayment required, non-refundable, no
 date changes.


 The price is $149 USD per night. Is that what you have blocked?

 Edgar

 On 6/4/14, 2:47 PM, Kyle Mestery mest...@noironetworks.com wrote:

 Hi all:

 I was curious if people are having issues booking the room from the
 block I have setup. I received word from the hotel that only one (1!)
 person has booked yet. Given the mid-cycle is approaching in a month,
 I wanted to make sure that people are making plans for travel. Are
 people booking in places other than the one I had setup as reserved?
 If so, I'll remove the room block. Keep in mind the hotel I had a
 block reserved at is very convenient in that it's literally walking
 distance to the mid-cycle location at the Bloomington, MN Cisco
 offices.

 Thanks!
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Should Keystone emit notifications by default?

2014-06-05 Thread Assaf Muller
Hello everyone,

Keystone started emitting notifications [1] for users and tenants being created 
/ updated /
deleted during the Havana cycle. This was in response to bug [2], the fact that 
OpenStack
doesn't clean up after itself when tenants are deleted. Currently, Keystone 
does not emit
these notifications by default, and I propose it should. According to the 
principle of least
surprise, I would imagine that if an admin deleted a tenant he would expect 
that all of its
resources would be deleted, making the default configuration values in Keystone 
and the other
projects very important. I wouldn't want to rely on the different deployment 
tools to change
the needed configuration values.
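
For reference, enabling these notifications in an Icehouse-era keystone.conf
is roughly a one-line change; the option name below is the oslo notifier
option of that era and may differ in later releases:

[DEFAULT]
notification_driver = messaging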

I was hoping to get some feedback from operators regarding this.


[1] Keystone blueprint - 
https://blueprints.launchpad.net/keystone/+spec/notifications
[2] Resources not cleaned up bug - 
https://bugs.launchpad.net/keystone/+bug/967832
[3] Neutron spec - https://review.openstack.org/#/c/98097/


Assaf Muller, Cloud Networking Engineer 
Red Hat 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Need help with a gnarly Object Version issue

2014-06-05 Thread Dan Smith
 On a compute manager that is still running the old version of the code
 (i.e. using the previous object version), if a method that hasn't yet
 been converted to objects gets a dict created from the new version of
 the object (e.g. rescue, get_console_output), then the object_compat()
 decorator will call the _from_db_object() method in
 objects.Instance. Because this is the old version of the object
 code, it expects user_data to be a field in the dict, and throws a KeyError.

Yeah, so the versioning rules are that for a minor version, you can only
add things to the object, not remove them.

 1)  Rather than removing the user_data field from the object, just
 set it to a null value if it's not requested.

Objects have a notion of unset which is what you'd want here. Fields
that are not set can be lazy-loaded when touched, which might be a
reasonable way out of the box here if user_data is really only used in
one place. It would mean that older clients would lazy-load it when
needed, and going forward we'd be specific about asking for it when we want.

However, the problem is that instance defines the fields it's willing to
lazy-load, and user_data isn't one of them. That'd mean that we need to
backport a change to allow it to be lazy-loaded, which means we should
probably just backport the thing that requests user_data when needed
instead.
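
A toy illustration of the unset/lazy-load idea (not Nova's actual object
code; the loader callable and field names are hypothetical):

class LazyInstance(object):
    def __init__(self, loader, **known_fields):
        self._loader = loader            # stand-in for a per-field DB fetch
        self.__dict__.update(known_fields)

    def __getattr__(self, name):
        # Only reached when the field was never set, i.e. it is "unset".
        value = self._loader(name)
        setattr(self, name, value)       # cache so we only load once
        return value

# inst = LazyInstance(fetch_field_from_db, uuid='...', host='node1')
# inst.user_data   # triggers a lazy load on first access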

 2)  Add object versioning in the client side of the RPC layer for
 those methods that don’t take objects.

I'm not sure what you mean here.

 I’m open to other ideas, and general guidance around how deletion of
 fields from Objects is meant to be handled ?

It's meant to be handled by rev-ing the major version, since removing
something isn't a compatible operation.

Note that *conductor* has knowledge of the client-side version of an
object on which the remotable_classmethod is being called, but that is
not exposed to the actual object implementation in any way. We could,
perhaps, figure out a sneaky way to expose that, which would let you
honor the old behavior if we know the object is old, or the new behavior
otherwise.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Qin Zhao
Hi, thanks for reading my bug!  I think this patch cannot fix this problem
for now, because pipe2() requires Python 3.3.


On Thu, Jun 5, 2014 at 6:17 PM, laserjetyang laserjety...@gmail.com wrote:

   Will this patch of Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and one external command is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a greenthread, rather than in another native thread. But it will impact the
 performance very much. So I do not think that is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ), if that's the case, the issue is very likely to happen when multiple KVM
 instances are spawned concurrently (with both config drive and data
 injection enabled).
 As in the libvirt/driver.py _create_image method, right after the ISO is made
 by cdb.make_drive, the driver will attempt data injection, which will launch
 libguestfs in another thread.

 It looks like there were also a couple of libguestfs hang issues on Launchpad,
 listed below. I am not sure whether libguestfs itself can have a mechanism
 to free/close the fds inherited from the parent process instead of requiring
 the teardown to be called explicitly. Maybe open a defect against libguestfs
 to see what they think?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


 From: Qin Zhao chaoc...@gmail.com
 Date: 2014-05-31 01:25
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run the Icehouse code, I encounter a strange problem: the
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause. This
 bug should be a deadlock caused by pipe fd leaking.  I drew a diagram to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not found a very good solution to prevent this deadlock.
 This problem involves the Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me
 look for a solution? I will appreciate your help!

 --
 Qin Zhao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Qin Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] OpenStack patching and FUEL upgrade follow-up meeting minutes

2014-06-05 Thread Jesse Pretorius
On 5 June 2014 16:27, Vladimir Kuklin vkuk...@mirantis.com wrote:

 1. We need strict EOS and EOL rules to decide how many maintenance
 releases there will be for each series or our QA team and infrastructure
 will not ever be available to digest it.


Agreed. Would it not be prudent to keep with the OpenStack support standard
- support latest version and the -1 version?


 3. We need to clearly specify the restrictions which patching and upgrade
 process we support:
 a. New environments can only be deployed with the latest version of
 OpenStack and FUEL Library supported
 b. Old environments can only be updated within the only minor release
 (e.g. 5.0.1-5.0.2 is allowed, 5.0.1-5.1 is not)


Assuming that the major upgrades will be handled in
https://blueprints.launchpad.net/fuel/+spec/upgrade-major-openstack-environment
then I agree. If not, then we have a sticking point here. I would agree
that this is a good start, but in the medium to long term it is important
to be able to upgrade from perhaps the latest minor version of the platform
to the next available major version.


 4. We have some devops tasks we need to finish to feel more comfortable in
 the future to make testing of patching much easier:
 a. we need to finish devops bare metal and distributed enviroments setup
 to make CI and testing process easier
 b. we need to implement elastic-recheck like feature to analyze our CI
 results in order to allow developers to retrigger checks in case of
 floating bugs
 c. we need to start using more sophisticated scheduler


I find the scheduler statement a curiosity. Can you elaborate?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] specs pilot

2014-06-05 Thread Sergey Lukjanov
Hey folks,

as we agreed on weekly irc meeting two weeks ago, we are going to have
a specs repo pilot for juno-2. So, I'm requesting review for the
initial setup of the sahara-specs project:
https://review.openstack.org/#/q/sahara-specs,n,z

I hope to have initial stuff landed this week and start writing specs
for selected blueprints.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ServiceVM] IRC meeting minutes June 3, 2014 5:00(AM)UTC-)

2014-06-05 Thread Dmitry
Hi Isaku,
In order to make it possible for the European audience to join the ServiceVM
meetings, could you please move it 2-3 hours later (7-8AM UTC)?
Thank you very much,
Dmitry


On Tue, Jun 3, 2014 at 10:00 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 Here is the meeting minutes of the meeting.

 ServiceVM/device manager
 meeting minutes on June 3, 2014:
   https://wiki.openstack.org/wiki/Meetings/ServiceVM

 next meeting:
   June 10, 2014 5:00AM UTC (Tuesday)

 agreement:
- include NFV conformance into the servicevm project
  => will continue the discussion on nomenclature in gerrit (tacker-specs)
 - we have to define the relationship between NFV team and servicevm team
 - consolidate floating implementations

 Action Items:
 - everyone add your name/bio to contributor of incubation page
 - yamahata create tacker-specs repo in stackforge for further discussion
   on terminology
 - yamahata update draft to include NFV conformance
 - s3wong look into vif creation/network connection
 - everyone review incubation page

 Detailed logs:

 http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.html

 http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.log.html

 thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Should Keystone emit notifications by default?

2014-06-05 Thread Jesse Pretorius
On 5 June 2014 16:46, Assaf Muller amul...@redhat.com wrote:

 Keystone started emitting notifications [1] for users and tenants being
 created / updated /
 deleted during the Havana cycle. This was in response to bug [2], the fact
 that OpenStack
 doesn't clean up after itself when tenants are deleted. Currently,
 Keystone does not emit
 these notifications by default, and I propose it should. According to the
 principle of least
 surprise, I would imagine that if an admin deleted a tenant he would
 expect that all of its
 resources would be deleted, making the default configuration values in
 Keystone and the other
 projects very important. I wouldn't want to rely on the different
 deployment tools to change
 the needed configuration values.

 I was hoping to get some feedback from operators regarding this.


As a deployer, I most certainly would prefer that this is enabled by
default.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Nilakhya Chatterjee
Hi guys,

It was great to find your interest in solving the nested stack resource
listing.

Let's move ahead by finishing any discussions left on the BP and getting
it approved.

So far, what makes sense to me is:

a) an additional flag in the client call: --nested (randall)
b) a flattened data structure in the output (tim), sketched below


Thanks all !
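
Purely as an illustration of option (b), and not an agreed API, a flattened
listing that keeps a back-reference to the owning stack could look something
like this (field names are placeholders):

# hypothetical shape of a flattened resource listing
flattened_resources = [
    {'resource_name': 'my_group', 'resource_type': 'OS::Heat::ResourceGroup',
     'stack_id': 'TOP-STACK-UUID'},
    {'resource_name': 'server_0', 'resource_type': 'OS::Nova::Server',
     'stack_id': 'NESTED-STACK-UUID', 'parent_stack_id': 'TOP-STACK-UUID'},
]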


On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com
wrote:

 Bartosz, would that be in addition to --nested? Seems like id want to be
 able to say all of it as well as some of it.

 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
  wrote:

  Hi Tim,
 
  Maybe instead of just a flag like --nested (bool value) to resource-list
 we can add optional argument like --depth X or --nested-level X (X -
 integer value) to limit the depth for recursive listing of nested resources?
 
  Best,
  Bartosz
 
  On 05/19/2014 09:13 PM, Tim Schnell wrote:
  Blueprint:
  https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
  Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
  Tim
 
  On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
  On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com
 wrote:
 
 
  On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
  wrote:
 
  On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
  Hi Nilakhya,
 
  As Randall mentioned we did discuss this exact issue at the summit.
 I
  was
  planning on putting a blueprint together today to continue the
  discussion.
  The Stack Preview call is already doing the necessary recursion to
  gather
  the resources so we discussed being able to pass a stack id to the
  preview
  endpoint to get all of the resources.
 
  However, after thinking about it some more, I agree with Randall
 that
  maybe this should be an extra query parameter passed to the
  resource-list
  call. I'Ll have the blueprint up later today, unless you have
 already
  started on it.
  Note there is a patch from Anderson/Richard which may help with this:
 
  https://review.openstack.org/#/c/85781/
 
  The idea was to enable easier introspection of resources backed by
  nested
  stacks in a UI, but it could be equally useful to generate a tree
  resource view in the CLI client by walking the links.
 
  This would obviously be less efficient than recursing inside the
  engine,
  but arguably the output would be much more useful if it retains the
  nesting
  structure, as opposed to presenting a fully flattened soup of
  resources
  with no idea which stack/layer they belong to.
 
  Steve
  Could we simply add stack name/id to this output if the flag is
 passed? I
  agree that we currently have the capability to traverse the tree
  structure of nested stacks, but several folks have requested this
  capability, mostly for UI/UX purposes. It would be faster if you want
 the
  flat structure and we still retain the capability to create your own
  tree/widget/whatever by following the links. Also, I think its best to
  include this in the API directly since not all users are integrating
  using the python-heatclient.
  +1 for adding the stack name/id to the output to maintain a reference
 to
  the initial stack that the resource belongs to. The original stated
  use-case that I am aware of was to have a flat list of all resources
  associated with a stack to be displayed in the UI when the user
 prompts to
  delete a stack. This would prevent confusion about what and why
 different
  resources are being deleted due to the stack delete.
 
  This use-case does not require any information about the nested stacks
 but
  I can foresee that information being useful in the future. I think a
  flattened data structure (with a reference to stack id) is still the
 most
  efficient solution. The patch landed by Anderson/Richard provides an
  alternate method to drill down into nested stacks if the hierarchy is
  important information though this is not the optimal solution in this
  case.
 
  Tim
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Nilakhya | Consultant Engineering
GlobalLogic

[openstack-dev] [openstack-sdk-php] Transport Clients, Service Clients, and state

2014-06-05 Thread Matthew Farina
We've started to talk about the interactions between transport
clients, service clients, and state. I've noticed we're not on the
same page so I wanted to start a dialog. Here's my starting point...

A Transport Client is about transporting data. It sends and receives data.

A Service Client handles the interactions with a service (e.g., swift,
nova, keystone).

A Service Client uses a Transport Client when it needs to transport
data to and from a service.

When it comes to state, a Transport Client knows about transporting
things. That means it knows things like whether there is a proxy and how to
work with it. A Service Client knows about a service, which includes
any state for that service.

In the realm of separation of concerns, a Service Client doesn't know
about transport state and a Transport Client doesn't know about
service state. They are separate.

A Service Client doesn't care what Transport Client is used as long as
the API (interface) is compliant. A Transport Client doesn't care what
code calls it as long as it uses the public API defined by an
interface.
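
A language-agnostic sketch of that separation, shown in Python only for
brevity; the class and method names are illustrative, not the SDK's actual
API:

class TransportClient(object):
    """Knows only how to move bytes: proxy settings, TLS, retries."""

    def request(self, method, url, headers=None, body=None):
        raise NotImplementedError  # concrete transports implement this


class SwiftServiceClient(object):
    """Knows Swift's API and service state, not how bytes are moved."""

    def __init__(self, transport, endpoint, token):
        self.transport = transport   # any object honouring the interface above
        self.endpoint = endpoint
        self.token = token

    def list_containers(self):
        return self.transport.request(
            'GET', self.endpoint, headers={'X-Auth-Token': self.token})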

This is my take. If someone has a different take please share it with
the reasoning.

- Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Idea of sample admin_{user, tenant_name, password} in api-paste.ini instead of admin_token

2014-06-05 Thread Shuichiro Makigaki
Hi developers,

If you have a chance it would be great to hear any feedback.

I opened a new ticket about  admin_{user,tenant_name,
password}:
https://bugs.launchpad.net/trove/+bug/1299332

In this ticket, I suggest following changes in api-paste.ini:
+admin_tenant_name = %SERVICE_TENANT_NAME%
+admin_user = %SERVICE_USER%
+admin_password = %SERVICE_PASSWORD%

Some other OpenStack components have them in api-paste.ini by default.
However, Trove doesn't have them anymore since admin_token was removed by
https://bugs.launchpad.net/trove/+bug/1325482.
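
For comparison, the corresponding section in other projects' api-paste.ini of
that era looks roughly like the following; option names vary slightly between
releases, so treat it as a sketch:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%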

Actually, I don't think it's a critical bug. Tests and devstack will
add them automatically.
However, it's important for packaging, and also helpful for Trove
beginners ( like me :-) ) who try to install Trove from source code
following documents.

In addition, this is mentioned by Thomas Goirand in
https://bugs.launchpad.net/trove/+bug/1299332. He is also concerned about the
complexity of package validation with Tempest.

Again, please give me feedback if you are interested.

Regards,
Makkie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Qin Zhao
Hi Yuriy,

Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
have this issue, since they can secure the file descriptors. Before
OpenStack moves to Python 3, we may still need a solution. Calling
libguestfs in a separate process seems to be a way. This way, the Nova code can
close those fds by itself, not depending upon CLOEXEC. However, that will be
an expensive solution, since it requires a lot of code changes. At least we
need to write code to pass the return value and exception between these two
processes. That will make this solution very complex. Do you agree?


On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

   Will this patch of Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and one external command is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a greenthread, rather than in another native thread. But it will impact the
 performance very much. So I do not think that is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ), if that's the case, the issue is very likely to happen when multiple KVM
 instances are spawned concurrently (with both config drive and data
 injection enabled).
 As in the libvirt/driver.py _create_image method, right after the ISO is made
 by cdb.make_drive, the driver will attempt data injection, which will launch
 libguestfs in another thread.

 It looks like there were also a couple of libguestfs hang issues on Launchpad,
 listed below. I am not sure whether libguestfs itself can have a mechanism
 to free/close the fds inherited from the parent process instead of requiring
 the teardown to be called explicitly. Maybe open a defect against libguestfs
 to see what they think?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


 From: Qin Zhao chaoc...@gmail.com
 Date: 2014-05-31 01:25
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run the Icehouse code, I encounter a strange problem: the
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause. This
 bug should be a deadlock caused by pipe fd leaking.  I drew a diagram to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not found a very good solution to prevent this deadlock.
 This problem involves the Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me
 look for a solution? I will appreciate your help!

 --
 Qin Zhao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Qin Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Kind regards, Yuriy.

 

Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-05 Thread Buraschi, Andres
Thanks, Kyle. Great.

-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com] 
Sent: Thursday, June 05, 2014 11:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API

On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan brandon.lo...@rackspace.com 
wrote:
 Hi Andres,
 I've assumed (and we know how assumptions work) that the deprecation 
 would take place in Juno and after a cycle or two it would totally be 
 removed from the code.  Even if #1 is the way to go, the old /vips 
 resource would be deprecated in favor of /loadbalancers and /listeners.

 I agree #2 is cleaner, but I don't want to start on an implementation 
 (though I kind of already have) that will fail to be merged in because 
 of the strategy.  The strategies are pretty different so one needs to 
 be decided on.

 As for where LBaaS is intended to end up, I don't want to speak for 
 Kyle, so this is my understanding; It will end up outside of the 
 Neutron code base but Neutron and LBaaS and other services will all 
 fall under a Networking (or Network) program.  That is my 
 understanding and I could be totally wrong.

That's my understanding as well, I think Brandon worded it perfectly.

 Thanks,
 Brandon

 On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
 Hi Brandon, hi Kyle!
 I'm a bit confused about the deprecation (btw, thanks for sending this 
 Brandon!), as I (wrongly) assumed #1 would be the chosen path for the new 
 API implementation. I understand the proposal and #2 sounds actually cleaner.

 Just out of curiosity, Kyle, where is LBaaS functionality intended to end 
 up, if long-term plans are to remove it from Neutron?

 (Nit question, I must clarify)

 Thank you!
 Andres

 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Wednesday, June 04, 2014 2:18 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API

 Thanks for your feedback Kyle.  I will be at that meeting on Monday.

 Thanks,
 Brandon

 On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
  On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan 
  brandon.lo...@rackspace.com wrote:
   This is an LBaaS topic but I'd like to get some Neutron Core 
   members to give their opinions on this matter so I've just 
   directed this to Neutron proper.
  
   The design for the new API and object model for LBaaS needs to be 
   locked down before the hackathon in a couple of weeks and there 
   are some questions that need to be answered.  It is pretty urgent to 
   come to a decision on this and to get a clear strategy defined so 
   we can actually do real code during the hackathon instead of 
   wasting some of that valuable time discussing this.
  
  
   Implementation must be backwards compatible
  
   There are 2 ways that have come up on how to do this:
  
   1) New API and object model are created in the same extension and 
   plugin as the old.  Any API requests structured for the old API 
   will be translated/adapted into the new object model.
   PROS:
   -Only one extension and plugin
   -Mostly true backwards compatibility
   -Do not have to rename unchanged resources and models
   CONS:
   -May end up being confusing to an end-user.
   -Separation of old api and new api is less clear
   -Deprecating and removing old api and object model will take a bit more work
   -This is basically API versioning the wrong way
  
   2) A new extension and plugin are created for the new API and 
   object model.  Each API would live side by side.  New API would 
   need to have different names for resources and object models from 
   Old API resources and object models.
   PROS:
   -Clean demarcation point between old and new
   -No translation layer needed
   -Do not need to modify existing API and object model, no new bugs
   -Drivers do not need to be immediately modified
   -Easy to deprecate and remove old API and object model later
   CONS:
   -Separate extensions and object model will be confusing to end-users
   -Code reuse by copy paste since old extension and plugin will be deprecated and removed.
   -This is basically API versioning the wrong way
  
   Now if #2 is chosen to be feasible and acceptable then there are 
   a number of ways to actually do that.  I won't bring those up 
   until a clear decision is made on which strategy above is the most 
   acceptable.
  
  Thanks for sending this out Brandon. I'm in favor of option #2 
  above, especially considering the long-term plans to remove LBaaS 
  from Neutron. That approach will help the eventual end goal there. 
  I am also curious on what others think, and to this end, I've added 
  this as an agenda item for the team meeting next Monday. Brandon, 
  it would be great to get you there for the part of the meeting 
  where we'll discuss this.
 
  Thanks!
  Kyle
 
   Thanks,
   Brandon
  
  
  
  
  
  
   

[openstack-dev] [sahara] Etherpad for discussion of spark EDP implementation

2014-06-05 Thread Trevor McKay
Hi folks,

  We've just started an investigation of spark EDP for Sahara. Please
visit the etherpad and share your insights.  Spark experts especially
welcome!

https://etherpad.openstack.org/p/sahara_spark_edp


  There are some design/roadmap decisions we need to make -- how are we
going to do this, in what timeframe, with what steps?  Do we employ a
short-term solution and replace it in the long term with something more
supportable, etc.?

Best,

Trevor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
Please take a look at
https://docs.python.org/2.7/library/multiprocessing.html#managers -
everything is already implemented there.
All you need is to start one manager that would serve all your requests to
libguestfs. The implementation in the stdlib will provide you with all
exceptions and return values with minimal code changes on the Nova side.
Create a new Manager, register a libguestfs endpoint in it and call
start(). It will spawn a separate process that will speak with the calling
process over a very simple RPC.
From the looks of it, all you need to do is replace the tpool.Proxy calls in
the VFSGuestFS.setup method with calls to this new Manager.
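
For illustration, a minimal sketch of that approach using only the stdlib
(the GuestFSProxy/GuestFSManager names are made up for this example and the
guestfs calls are only indicative; this is not actual Nova code):

    from multiprocessing.managers import BaseManager

    import guestfs


    class GuestFSProxy(object):
        """Wraps a libguestfs handle; lives entirely in the helper process."""

        def __init__(self):
            self.handle = guestfs.GuestFS()

        def add_drive(self, path):
            self.handle.add_drive(path)

        def launch(self):
            # Any fds libguestfs opens here belong to the helper process only,
            # so they cannot be inherited by commands forked from the main
            # nova-compute process.
            self.handle.launch()


    class GuestFSManager(BaseManager):
        pass


    GuestFSManager.register('GuestFS', GuestFSProxy)

    manager = GuestFSManager()
    manager.start()            # forks the long-lived helper process
    gfs = manager.GuestFS()    # proxy object; calls travel over the manager IPC
    gfs.add_drive('/path/to/disk.img')
    gfs.launch()

Return values come back through the proxy, and exceptions raised in the helper
are re-raised on the calling side, which is what keeps the Nova-side changes
small.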


On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
 have this issue, since they can secure the file descriptors. Before
 OpenStack moves to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be one way. That way, Nova code can
 close those fds by itself, without depending upon CLOEXEC. However, that will be
 an expensive solution, since it requires a lot of code change. At the least we
 need to write code to pass return values and exceptions between the two
 processes. That will make this solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

    Will this Python patch fix your problem?
  http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses
 tpool to invoke libguestfs and one external command is executed in another
 green thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs
 routine in a greenthread, rather than in another native thread. But that would
 impact performance very much, so I do not think it is an acceptable
 solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ), if that's the case, when multiple concurrent KVM instance spawns (*with
 both config drive and data injection enabled*) are triggered, the
 issue is very likely to happen.
 As in the libvirt/driver.py _create_image method, right after the ISO is made by
 cdb.make_drive,
 the driver will attempt data injection, which will call the libguestfs
 launch in another thread.

 Looks like there were also a couple of libguestfs hang issues on Launchpad,
 as below. I am not sure if libguestfs itself can have a certain
 mechanism to free/close the fds inherited from the parent process instead
 of requiring the tear down to be called explicitly. Maybe open a defect against
 libguestfs to see what their thoughts are?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  *From:* Qin Zhao chaoc...@gmail.com
 *Date:* 2014-05-31 01:25
  *To:* OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When running Icehouse code, I encountered a strange problem. The
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause. This bug
 should be a deadlock problem caused by pipe fd leaking.  I drew a diagram
 to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not found a very good solution to prevent this
 deadlock. This problem is related to the Python runtime, libguestfs, and
 eventlet. The situation is a little complicated. Is there any 

[openstack-dev] [Neutron][L3] - L3 High availability blueprint v/s pure L3 packet

2014-06-05 Thread A, Keshava
Carl,

In the L3 High Availability blueprint I want to make the following observations.


1.   Just before the active router goes down, if there was a fragmented packet 
being reassembled, how do we recover it?

If that packet is application-related, the application will try to resend those 
packets so that they get reassembled on the new board.
Issue:
The issue is with packets which are pure L3-level packets.
   Examples: IPsec packets (where there is recursive encapsulation), 
Neighbor Discovery packets, and ICMP packets, which are pure L3 packets.
If these packets are lost during the 
reassembly process, how do we recover them?
The worst thing is that no one reports that these 
packets are lost, because there is no session-based socket for pure L3-level 
packets.


2.   Since this HA mechanism is stateless (a VRRP mechanism), what is the 
impact on those L3 protocols which do have state?

How exactly are NAT sessions handled?


3.   I think we need to have a serious discussion about distributed 
functionality (like DVR) for a sessionless high availability solution.


Thanks & Regards,
Keshava

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-05 Thread Kurt Griffiths
I just learned that some projects are thinking about having the specs process 
be the channel for submitting new feature ideas, rather than registering 
blueprints. I must admit, that would be kind of nice because it would provide 
some much-needed structure around the triaging process.

I wonder if we can get some benefit out of the spec process while still keeping 
it light? The temptation will be to start documenting everything in 
excruciating detail, but we can mitigate that by codifying some guidelines on 
our wiki and baking it into the team culture.

What does everyone think?

From: Kurt Griffiths 
kurt.griffi...@rackspace.commailto:kurt.griffi...@rackspace.com
Date: Tuesday, June 3, 2014 at 9:34 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I think it becomes more useful the larger your team. With a smaller team it is 
easier to keep everyone on the same page just through the mailing list and IRC. 
As for where to document design decisions, the trick there is more one of being 
diligent about capturing and recording the why of every decision made in 
discussions and such; gerrit review history can help with that, but it isn’t 
free.

If we’d like to give the specs process a try, I think we could do an experiment 
in j-2 with a single bp. Depending on how that goes, we may do more in the K 
cycle. What does everyone think?

From: Malini Kamalambal 
malini.kamalam...@rackspace.commailto:malini.kamalam...@rackspace.com
Reply-To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 at 2:45 PM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

+1 – Requiring specs for every blueprint is going to make the development 
process very cumbersome, and will take us back to waterfall days.
I like how the Marconi team operates now, with design decisions being made in 
IRC/ team meetings.
So Spec might become more of an overhead than add value, given how our team 
functions.

'If' we agree to use Specs, we should use them only for the blueprints that 
make sense.
For example, the unit test decoupling that we are working on now – this one 
will be a good candidate to use specs, since there is a lot of back and forth 
going on about how to do this.
On the other hand, something like Tempest integration for Marconi will not 
warrant a spec, since it is pretty straightforward what needs to be done.
In the past we have had discussions around where to document certain design 
decisions (e.g. which endpoint/verb is the best fit for the pop operation?)
Maybe a spec is the place for these?

We should leave it to the implementor to decide if the bp warrants a spec or 
not, and what should be in the spec.


From: Kurt Griffiths 
kurt.griffi...@rackspace.commailto:kurt.griffi...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 1:33 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs were non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are staying on track and can address 
design problems early and often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal 
malini.kamalam...@rackspace.commailto:malini.kamalam...@rackspace.com
Reply-To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more  more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more and start using 
specs.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini


Re: [openstack-dev] [openstack-sdk-php] Transport Clients, Service Clients, and state

2014-06-05 Thread Jamie Hannaford
I completely agree with you regarding separation of concerns.

I also agree with your definitions: a transport client is for managing HTTP 
transactions, a service client contains all the domain logic for an API service 
(Swift, Nova, etc.). A service knows nothing about HTTP, a transport client 
knows nothing about Swift. A transport client is injected into the service 
client, satisfying the type hint. So any transport client implementing our 
interface is fine.

Up to this point I’m in 100% agreement. The area which I think I misunderstood 
was the creation process of service clients. My take was that you were 
advocating a shared transport client instance - i.e. a transport client 
instantiated once, and re-used for every service client. If we did that, there 
would be global state.

My opinion is that we create a new transport client instance for every service 
client, not re-use existing instances. What’s your take on this?

Jamie


On June 5, 2014 at 5:17:57 PM, Matthew Farina 
(m...@mattfarina.commailto:m...@mattfarina.com) wrote:

We've started to talk about the interactions between transport
clients, service clients, and state. I've noticed we're not on the
same page so I wanted to start a dialog. Here's my starting point...

A Transport Client is about transporting data. It sends and receives data.

A Service Client handles the interactions with a service (e.g., swift,
nova, keystone).

A Service Client uses a Transport Client when it needs to transport
data to and from a service.

When it comes to state, a Transport Client knows about transporting
things. That means it knows things like if there is a proxy and how to
work with it. A Service Client knows about a service, which includes
any state for that service.

In the realm of separation of concerns, a Service Client doesn't know
about transport state and a Transport Client doesn't know about
service state. They are separate.

A Service Client doesn't care what Transport Client is used as long as
the API (interface) is compliant. A Transport Client doesn't care what
code calls it as long as it uses the public API defined by an
interface.

This is my take. If someone has a different take please share it with
the reasoning.

- Matt



Jamie Hannaford
Software Developer III - CH [experience Fanatical Support]

Tel:+41434303908
Mob:+41791009767
[Rackspace]



Rackspace International GmbH a company registered in the Canton of Zurich, 
Switzerland (company identification number CH-020.4.047.077-1) whose registered 
office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace 
International GmbH privacy policy can be viewed at 
www.rackspace.co.uk/legal/swiss-privacy-policy
-
Rackspace Hosting Australia PTY LTD a company registered in the state of 
Victoria, Australia (company registered number ACN 153 275 524) whose 
registered office is at Suite 3, Level 7, 210 George Street, Sydney, NSW 2000, 
Australia. Rackspace Hosting Australia PTY LTD privacy policy can be viewed at 
www.rackspace.com.au/company/legal-privacy-statement.php
-
Rackspace US, Inc, 5000 Walzem Road, San Antonio, Texas 78218, United States of 
America
Rackspace US, Inc privacy policy can be viewed at 
www.rackspace.com/information/legal/privacystatement
-
Rackspace Limited is a company registered in England  Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ.
Rackspace Limited privacy policy can be viewed at 
www.rackspace.co.uk/legal/privacy-policy
-
Rackspace Benelux B.V. is a company registered in the Netherlands (company KvK 
nummer 34276327) whose registered office is at Teleportboulevard 110, 1043 EJ 
Amsterdam.
Rackspace Benelux B.V privacy policy can be viewed at 
www.rackspace.nl/juridisch/privacy-policy
-
Rackspace Asia Limited is a company registered in Hong Kong (Company no: 
1211294) whose registered office is at 9/F, Cambridge House, Taikoo Place, 979 
King's Road, Quarry Bay, Hong Kong.
Rackspace Asia Limited privacy policy can be viewed at 
www.rackspace.com.hk/company/legal-privacy-statement.php
-
This e-mail message (including any attachments or embedded documents) is 
intended for the exclusive and confidential use of the individual or entity to 
which this message is addressed, and unless otherwise expressly indicated, is 
confidential and privileged information of Rackspace. Any dissemination, 
distribution or copying of the enclosed material is prohibited. If you receive 
this transmission in error, please notify us immediately by e-mail at 
ab...@rackspace.com and delete the original message. Your cooperation is 
appreciated.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-05 Thread Steve Gordon
Just adding openstack-dev to the CC for now :).

- Original Message -
 From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
 Subject: Re: NFV in OpenStack use cases and context
 
 Can we look at them one by one?
 
 Use case 1 - It's pure IaaS
 Use case 2 - Virtual network function as a service. It's actually about
 exposing services to end customers (enterprises) by the service provider.
 Use case 3 - VNPaaS - is similar to #2 but at the service level. At larger
 scale and not at the app level only.
 Use case 4 - VNF forwarding graphs. It's actually about dynamic
 connectivity between apps.
 Use case 5 - vEPC and vIMS - Those are very specific (good) examples of SP
 services to be deployed.
 Use case 6 - virtual mobile base station. Another very specific example,
 with different characteristics than the other two above.
 Use case 7 - Home virtualisation.
 Use case 8 - Virtual CDN
 
 As I see it those have totally different relevancy to OpenStack.
 Assuming we don't want to boil the ocean here...
 
 1-3 seems to me less relevant here.
 4 seems to be a Neutron area.
 5-8 seems to be usefully to understand the needs of the NFV apps. The use
 case can help to map those needs.
 
 For 4 I guess the main part is about chaining and Neutron between DCs.
 Some may call it SDN in the WAN...
 
 For 5-8 at the end an option is to map all those into:
 -performance (mainly network BW and storage BW). That can be mapped to SR-IOV,
 NUMA, etc.
 -determinism, especially minimising noisy neighbours. Not sure
 how NFV is special here, but for sure it's a major concern for a lot of SPs.
 That can be mapped to huge pages, cache QoS, etc.
 -overcoming short-term hurdles (just because of app migration
 issues). A small example is the need to define the tick policy of KVM just
 because that's what the app needs. Again, not sure how NFV-special it is,
 and again a major concern mainly of application owners in the NFV domain.
 
 Make sense?
 
 Itai
 
 
 On 6/5/14 8:20 AM, Nicolas Lemieux nlemi...@redhat.com wrote:
 
 At high-level I propose to split things up while looking at the use cases
 and at a minimum address requirements for compute node (NFVI/hypervisor)
 to some extend decoupled from the controller/scheduler (VIM). This should
 simplify mapping to the ETSI ISG as well as provide better insertion
 points for the vendor eco-system.
 
 Nic
 
  On Jun 5, 2014, at 0:08, Chris Wright chr...@sous-sol.org wrote:
  
  labelled as the tl;dr version of nfv context for OpenStack developers
  
  ETSI use case doc:
 http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001
 v010101p.pdf
  
  I think goal is to get some translation from these high level type of
 use cases
  into context for the existing blueprints:
  
  https://wiki.openstack.org/wiki/Meetings/NFV#Active_Blueprints
  
  Tried to capture the relevant thread on IRC here:
  
  07:13  sgordon but we should collaborate when it comes to terminology
 and use
  cases
  07:13  sgordon if it makes sense
  
  07:13  russellb #topic use cases
  07:13  sgordon rather than independently creating two etsi -
 openstack
  glossaries for example
  07:13  russellb we've done a nice job early on with gathering a big
 list of
   blueprints
  -
  07:13  russellb i think one big work area for us is the use cases and
   terminology translation for openstack
  -
  07:14  sgordon right, and the question there is do we want to target
 specific
  VNFs or more generally translate the ETSI NFV use cases
  07:14  russellb what would we like to accomplish in this area?
  07:14  sgordon in a way it's, how do we define success
  07:14  imendel I thought we want to drive requirements. The ETSI use
 cases
  are far too high level
  -
  07:14  russellb from an openstack developer perspective, I feel like
 we need
   a tl;dr of why this stuff is important
  -
  07:15  sgordon imendel, agree
  07:15  sgordon imendel, i am just trying to get a feel for how we get
 to
  something more specific
  07:15  imendel i gree, we need somthing specifc
  07:15  cdub russellb: should we aim to have that for next week?
  07:15  adrian-hoban OpenStack fits _mostly_ in what ETSI-NFV describe
 as a
   Virtualisation Infrastructure Manager (VIM)
  -
  07:15  russellb it would be nice to have something, even brief,
 written up
   that ties blueprints to the why
  -
  07:15  nijaba ijw:  would be happy to work with you on writing about
 this
  07:15  russellb cdub: yeah
  07:16  russellb maybe a few people can go off and start a first cut
 at
   something?
  07:16  cdub russellb: ok, i'll help with that
  07:16  nijaba russellb: volunteering
  07:16  adrian-hoban russellb: Are you looking for a terminology
 translation
   or use cases?
  07:16  russellb ok great, who wants to coordinate it?
  07:17  

Re: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-05 Thread Mark Washenberger
On Thu, Jun 5, 2014 at 1:43 AM, Kuvaja, Erno kuv...@hp.com wrote:

  Hi,



 +1 for the mission statement, but indeed why 2 changes?


I thought perhaps this way is more explicit. First we're adopting a
straightforward mission statement which has been lacking for some time.
Next we're proposing a new mission in line with our broader aspirations.
However I'm quite happy to squash them together if folks prefer.




 -  Erno (jokke)



 *From:* Mark Washenberger [mailto:mark.washenber...@markwash.net]
 *Sent:* 05 June 2014 02:04
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Glance] [TC] Program Mission Statement and
 the Catalog
 *Importance:* High



 Hi folks,



 I'd like to propose the Images program to adopt a mission statement [1]
 and then change it to reflect our new aspirations of acting as a Catalog
 that works with artifacts beyond just disk images [2].



 Since the Glance mini summit early this year, momentum has been building
 significantly behind the catalog effort and I think it's time we recognize it
 officially, to ensure further growth can proceed and to clarify the
 interactions the Glance Catalog will have with other OpenStack projects.



 Please see the linked openstack/governance changes, and provide your
 feedback either in this thread, on the changes themselves, or in the next
 TC meeting when we get a chance to discuss.



 Thanks to Georgy Okrokvertskhov for coming up with the new mission
 statement.



 Cheers

 -markwash



 [1] - https://review.openstack.org/98001

 [2] - https://review.openstack.org/98002



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread Nachi Ueno
 Yamamoto
Cool! OK, I'll make ryu based bgpspeaker as ref impl for my bp.

Yong
Ya, we have already decided to have the driver architecture.
IMO, this discussion is for reference impl.

2014-06-05 0:24 GMT-07:00 Yongsheng Gong gong...@unitedstack.com:
 I think maybe we can device a kind of framework so that we can plugin
 different BGP speakers.


 On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
 wrote:

 hi,

  ExaBgp was our first choice because we thought that running something in
  library mode would be much easier to deal with (especially the
  exceptions and corner cases) and the code would be much cleaner. But it
  seems
  that Ryu BGP can also fit this requirement. And having the help from
  a
  Ryu developer like you turns it into a promising candidate!
 
  I'll start working now in a proof of concept to run the agent with these
  implementations and see if we need more requirements to compare between
  the
  speakers.

 we (ryu team) love to hear any suggestions and/or requests.
 we are currently working on our bgp api refinement and documentation.
 hopefully they will be available early next week.

 for both of bgp blueprints, it would be possible, and might be desirable,
 to create reference implementations in python using ryu or exabgp.
 (i prefer ryu. :-)

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-sdk-php] Use of final and private keywords to limit extending

2014-06-05 Thread Matthew Farina
Some recent reviews have started to include the use of the private
keyword for methods and talk of using final on classes. I don't think
we have consistent agreement on how we should do this.

My take is that we should not use private or final unless we can
articulate the design decision to intentionally do so.

To limit the public API for a class we can use protected.
Moving from protected to private or the use of final should have a
good reason.

In open source software code is extended in ways we often don't think
of up front. Using private and final limits how those things can
happen. When we use them we are intentionally limiting extending so we
should be able to articulate why we want to put that limitation in
place.

Given the reviews that have been put forth I think there is a
different stance. If there is one please share it.

- Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Transport Clients, Service Clients, and state

2014-06-05 Thread Matthew Farina
 My opinion is that we create a *new* transport client instance for every
service client, not re-use existing instances. What’s your take on this?

I'm not in agreement and here is why (with a use case).

A transport client is concerned with transporting only. Whether the same
one is used for each service or a new one is used for each service doesn't
matter.

An example of using two transport clients would be a case where an
application communicates with two different OpenStack clouds. One is a
public cloud and the application communicates through a proxy. A transport
client would know how to talk through the proxy to the public cloud. A
second OpenStack cloud is a private cloud that is on the same company
network. A second transport client would know how to talk to that without
communicating through the proxy.

The service clients communicating with each cloud would use the appropriate
transport client.

The mapping of transport client to service client doesn't need to be 1:1 if
they operate in different spaces. Only creating transport client instances
as needed decreases the use of resources and the time needed to
manage them.

If a transport client is only concerned with transporting, then what is the
need to have more than one per transport case?
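
To make the wiring concrete, here is a rough sketch in Python (used only
because it is compact; the SDK itself is PHP and none of these class names
are real):

    class TransportClient(object):
        """Knows only how to move bytes, e.g. through an optional proxy."""

        def __init__(self, proxy=None):
            self.proxy = proxy

        def request(self, method, url, **kwargs):
            # A real implementation would perform the HTTP call honouring
            # self.proxy; here we just echo what would be sent.
            return (method, url, self.proxy)


    class ObjectStoreService(object):
        """Knows Swift semantics; HTTP details come from the injected client."""

        def __init__(self, transport):
            self.transport = transport

        def list_containers(self, endpoint):
            return self.transport.request('GET', endpoint)


    # Two clouds, two transport configurations, no shared global state:
    public_transport = TransportClient(proxy='http://proxy.corp.example:3128')
    private_transport = TransportClient()

    public_swift = ObjectStoreService(public_transport)
    private_swift = ObjectStoreService(private_transport)

Whether each service client gets its own transport instance or shares one then
becomes a composition decision made where the objects are wired together, not
something baked into either class.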

- Matt

On Thu, Jun 5, 2014 at 12:09 PM, Jamie Hannaford 
jamie.hannaf...@rackspace.com wrote:

  I completely agree with you regarding separation of concerns.

  I also agree with your definitions: a transport client is for managing
 HTTP transactions, a service client contains all the domain logic for an
 API service (Swift, Nova, etc.). A service knows nothing about HTTP, a
 transport client knows nothing about Swift. A transport client is injected
 into the service client, satisfying the type hint. So any transport client
 implementing our interface is fine.

  Up to this point I’m in 100% agreement. The area which I think I
 misunderstood was the *creation process* of service clients. My take was
 that you were advocating a shared transport client instance - i.e. a
 transport client instantiated once, and re-used for every service client.
 If we did that, there would be global state.

  My opinion is that we create a *new* transport client instance for every
 service client, not re-use existing instances. What’s your take on this?

  Jamie

 On June 5, 2014 at 5:17:57 PM, Matthew Farina (m...@mattfarina.com) wrote:

  We've started to talk about the interactions between transport
 clients, service clients, and state. I've noticed we're not on the
 same page so I wanted to start a dialog. Here's my starting point...

 A Transport Client is about transporting data. It sends and receives data.

 A Service Client handles the interactions with a service (e.g., swift,
 nova, keystone).

 A Service Client uses a Transport Client when it needs to transport
 data to and from a service.

 When it comes to state, a Transport Client knows about transporting
 things. That means it knows things like if there is a proxy and how to
 work with it. A Service Client knows about a service, which includes
 any state for that service.

 In the realm of separation of concerns, a Service Client doesn't know
 about transport state and a Transport Client doesn't know about
 service state. They are separate.

 A Service Client doesn't care what Transport Client is used as long as
 the API (interface) is compliant. A Transport Client doesn't care what
 code calls it as long as it uses the public API defined by an
 interface.

 This is my take. If someone has a different take please share it with
 the reasoning.

 - Matt



   Jamie Hannaford
 Software Developer III - CH [image: experience Fanatical Support] [image:
 LINE] Tel: +41434303908Mob: +41791009767 [image: Rackspace]



 Rackspace International GmbH a company registered in the Canton of Zurich,
 Switzerland (company identification number CH-020.4.047.077-1) whose
 registered office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland.
 Rackspace International GmbH privacy policy can be viewed at
 www.rackspace.co.uk/legal/swiss-privacy-policy
 -
 Rackspace Hosting Australia PTY LTD a company registered in the state of
 Victoria, Australia (company registered number ACN 153 275 524) whose
 registered office is at Suite 3, Level 7, 210 George Street, Sydney, NSW
 2000, Australia. Rackspace Hosting Australia PTY LTD privacy policy can be
 viewed at www.rackspace.com.au/company/legal-privacy-statement.php
 -
 Rackspace US, Inc, 5000 Walzem Road, San Antonio, Texas 78218, United
 States of America
 Rackspace US, Inc privacy policy can be viewed at
 www.rackspace.com/information/legal/privacystatement
 -
 Rackspace Limited is a company registered in England  Wales (company
 registered number 03897010) whose registered office is at 5 Millington
 Road, Hyde Park Hayes, Middlesex UB3 4AZ.
 Rackspace Limited privacy policy can be viewed at
 www.rackspace.co.uk/legal/privacy-policy
 -
 Rackspace Benelux B.V. is a company 

Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
Hey, sorry for the slow follow. I have to put some finishing touches on a spec 
and submit that for review. I'll reply to the list with the link later today. 
Hope to have an initial patch up as well in the next day or so.

On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
nilakhya.chatter...@globallogic.com
 wrote:

 Hi guys, 
 
 It was great to find your interest in solving the nested stack resource listing.
 
 Let's move ahead by finishing any discussions left on the BP and getting it 
 approved.
 
 So far what makes sense to me is: 
 
 a) an additional flag in the client call, --nested (randall)
 b) a flattened data structure in the output (tim) 
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be able 
 to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
  wrote:
 
  Hi Tim,
 
  Maybe instead of just a flag like --nested (bool value) to resource-list we 
  can add an optional argument like --depth X or --nested-level X (X - integer 
  value) to limit the depth for recursive listing of nested resources?
 
  Best,
  Bartosz
 
  On 05/19/2014 09:13 PM, Tim Schnell wrote:
  Blueprint:
  https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
  Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
  Tim
 
  On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
  On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
  On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
  wrote:
 
  On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
  Hi Nilakhya,
 
  As Randall mentioned we did discuss this exact issue at the summit. I
  was
  planning on putting a blueprint together today to continue the
  discussion.
  The Stack Preview call is already doing the necessary recursion to
  gather
  the resources so we discussed being able to pass a stack id to the
  preview
  endpoint to get all of the resources.
 
  However, after thinking about it some more, I agree with Randall that
  maybe this should be an extra query parameter passed to the
  resource-list
  call. I'll have the blueprint up later today, unless you have already
  started on it.
  Note there is a patch from Anderson/Richard which may help with this:
 
  https://review.openstack.org/#/c/85781/
 
  The idea was to enable easier introspection of resources backed by
  nested
  stacks in a UI, but it could be equally useful to generate a tree
  resource view in the CLI client by walking the links.
 
  This would obviously be less efficient than recursing inside the
  engine,
  but arguably the output would be much more useful if it retains the
  nesting
  structure, as opposed to presenting a fully flattened soup of
  resources
  with no idea which stack/layer they belong to.
 
  Steve
  Could we simply add stack name/id to this output if the flag is passed? I
  agree that we currently have the capability to traverse the tree
  structure of nested stacks, but several folks have requested this
  capability, mostly for UI/UX purposes. It would be faster if you want the
  flat structure and we still retain the capability to create your own
  tree/widget/whatever by following the links. Also, I think it's best to
  include this in the API directly since not all users are integrating
  using the python-heatclient.
  +1 for adding the stack name/id to the output to maintain a reference to
  the initial stack that the resource belongs to. The original stated
  use-case that I am aware of was to have a flat list of all resources
  associated with a stack to be displayed in the UI when the user prompts to
  delete a stack. This would prevent confusion about what and why different
  resources are being deleted due to the stack delete.
 
  This use-case does not require any information about the nested stacks but
  I can foresee that information being useful in the future. I think a
  flattened data structure (with a reference to stack id) is still the most
  efficient solution. The patch landed by Anderson/Richard provides an
  alternate method to drill down into nested stacks if the hierarchy is
  important information though this is not the optimal solution in this
  case.
 
  Tim
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  

Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread Jaume Devesa
After going through the documentation and the code of *exabgp* and *Ryu*, I find
the *Ryu* speaker much easier to integrate and more pythonic than *exabgp*. I
will use it as the reference implementation in the Dynamic Routing bp.
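
For anyone curious, driving the Ryu speaker from Python looks roughly like the
sketch below (based on Ryu's documented BGPSpeaker API; treat the exact
signatures as indicative, since the Ryu team mentioned the API is still being
refined, and all addresses/AS numbers are made up):

    import eventlet
    eventlet.monkey_patch()

    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker


    def best_path_change_handler(event):
        # Called whenever the best path for a prefix changes; this is where
        # an agent could feed route updates back into Neutron.
        print(event.remote_as, event.prefix, event.nexthop, event.is_withdraw)


    speaker = BGPSpeaker(as_number=64512,
                         router_id='192.0.2.1',
                         best_path_change_handler=best_path_change_handler)

    # Peer with an upstream router and advertise a tenant prefix.
    speaker.neighbor_add('192.0.2.2', remote_as=64513)
    speaker.prefix_add('203.0.113.0/24')

    eventlet.sleep(60)
    speaker.shutdown()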

Regards,


On 5 June 2014 18:23, Nachi Ueno na...@ntti3.com wrote:

  Yamamoto
 Cool! OK, I'll make ryu based bgpspeaker as ref impl for my bp.

 Yong
 Ya, we have already decided to have the driver architecture.
 IMO, this discussion is for reference impl.

 2014-06-05 0:24 GMT-07:00 Yongsheng Gong gong...@unitedstack.com:
  I think maybe we can devise a kind of framework so that we can plug in
  different BGP speakers.
 
 
  On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
 
  wrote:
 
  hi,
 
   ExaBgp was our first choice because we thought that running something in
   library mode would be much easier to deal with (especially the
   exceptions and corner cases) and the code would be much cleaner. But it
   seems
   that Ryu BGP can also fit this requirement. And having the help
 from
   a
   Ryu developer like you turns it into a promising candidate!
  
   I'll start working now in a proof of concept to run the agent with
 these
   implementations and see if we need more requirements to compare
 between
   the
   speakers.
 
  we (ryu team) love to hear any suggestions and/or requests.
  we are currently working on our bgp api refinement and documentation.
  hopefully they will be available early next week.
 
  for both of bgp blueprints, it would be possible, and might be
 desirable,
  to create reference implementations in python using ryu or exabgp.
  (i prefer ryu. :-)
 
  YAMAMOTO Takashi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-05 Thread Vijendar Komalla
I am not sure when Barbican would be stable/ready. As an interim solution,
what do you guys think about having a config option to enable/disable
parameter encryption (along with my current implementation)?
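
For concreteness, the interim switch I have in mind would be a single boolean
in heat.conf, along these lines (the option name is just an example, not
merged code):

    from oslo.config import cfg

    opts = [
        cfg.BoolOpt('encrypt_parameters_and_properties',
                    default=False,
                    help='Encrypt template parameters that were marked as '
                         'hidden before storing them in the database.'),
    ]
    cfg.CONF.register_opts(opts)

    # The stack store/load paths would then only call the encryption helpers
    # when cfg.CONF.encrypt_parameters_and_properties is True.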



On 6/5/14 4:23 AM, Steven Hardy sha...@redhat.com wrote:

On Thu, Jun 05, 2014 at 12:17:07AM +, Randall Burt wrote:
 On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
  wrote:
 
  Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
  On 04/06/14 15:58, Vijendar Komalla wrote:
  Hi Devs,
  I have submitted an WIP review
(https://review.openstack.org/#/c/97900/)
  for Heat parameters encryption blueprint
  
https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
  This quick and dirty implementation encrypts all the parameters on
  Stack 'store' and decrypts on Stack 'load'.
  Following are couple of improvements I am thinking about;
  1. Instead of encrypting individual parameters, on Stack 'store'
encrypt
  all the parameters together as a dictionary  [something like
  crypt.encrypt(json.dumps(param_dictionary))]
  
  Yeah, definitely don't encrypt them individually.
  
  2. Just encrypt parameters that were marked as 'hidden', instead of
  encrypting all parameters
  
  I would like to hear your feedback/suggestions.
  
  Just as a heads-up, we will soon need to store the properties of
  resources too, at which point parameters become the least of our
  problems. (In fact, in theory we wouldn't even need to store
  parameters... and probably by the time convergence is completely
  implemented, we won't.) Which is to say that there's almost
certainly no 
  point in discriminating between hidden and non-hidden parameters.
  
  I'll refrain from commenting on whether the extra security this
affords 
  is worth the giant pain it causes in debugging, except to say that
IMO 
  there should be a config option to disable the feature (and if it's
  enabled by default, it should probably be disabled by default in
e.g. 
  devstack).
  
  Storing secrets seems like a job for Barbican. That handles the giant
  pain problem because in devstack you can just tell Barbican to have an
  open read policy.
  
  I'd rather see good hooks for Barbican than blanket encryption. I've
  worked with a few things like this and they are despised and worked
  around universally because of the reason Zane has expressed concern
about:
  debugging gets ridiculous.
  
  How about this:
  
  parameters:
   secrets:
 type: sensitive
  resources:
   sensitive_deployment:
 type: OS::Heat::StructuredDeployment
 properties:
   config: weverConfig
   server: myserver
   input_values:
 secret_handle: { get_param: secrets }
  
  The sensitive type would, on the client side, store the value in
Barbican,
  never in Heat. Instead it would just pass in a handle which the user
  can then build policy around. Obviously this implies the user would
set
  up Barbican's in-instance tools to access the secrets value. But the
  idea is, let Heat worry about being high performing and
introspectable,
  and then let Barbican worry about sensitive things.
 
 While certainly ideal, it doesn't solve the current problem since we
can't yet guarantee Barbican will even be available in a given release
of OpenStack. In the meantime, Heat continues to store sensitive user
information unencrypted in its database. Once Barbican is integrated,
I'd be all for changing this implementation, but until then, we do need
an interim solution. Sure, debugging is a pain and as developers we can
certainly grumble, but leaking sensitive user information because we
were too fussed to protect data at rest seems worse IMO. Additionally,
the solution as described sounds like we're imposing a pretty awkward
process on a user to save ourselves from having to decrypt some data in
the cases where we can't access the stack information directly from the
API or via debugging running Heat code (where the data isn't encrypted
anymore).

Under what circumstances are we leaking sensitive user information?

Are you just trying to mitigate a potential attack vector, in the event of
a bug which leaks data from the DB?  If so, is the user-data encrypted in
the nova DB?

It seems to me that this will only be a worthwhile exercise if the
sensitive stuff is encrypted everywhere, and many/most use-cases I can
think of which require sensitive data involve that data ending up in nova
user|meta-data?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-05 Thread Paul Ward

Carl,

I haven't been able to try this yet as it requires us to run a pretty
big scale test.

But to try to summarize the current feeling on this thread... the
retry logic is being put into the neutronclient already (via
https://review.openstack.org/#/c/71464/), it's just that it's not
automatic and is being left up to the invoker to decide when to use retry.
The idea of doing the retries automatically isn't the way to go because it
is dangerous for non-idempotent operations.
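
As a sketch of what that invoker-side retry could look like for read-only
calls (the helper name and the wrapped list_ports call are illustrative only,
not actual nova code):

    import ssl
    import time


    def call_with_retries(func, retries=3, delay=1, retry_on=(ssl.SSLError,)):
        """Retry an idempotent (read-only) call on transient errors."""
        for attempt in range(retries + 1):
            try:
                return func()
            except retry_on:
                if attempt == retries:
                    raise
                time.sleep(delay)


    # Only wrap GETs, e.g. listing the ports attached to an instance:
    # ports = call_with_retries(
    #     lambda: neutron.list_ports(device_id=instance_uuid))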

So... I think we leave the proposed change as is and will potentially need to
enhance the callers as we see fit.  The invoker in our failure case is nova trying
to get network info, so this seems like a good first one to try out.

Thoughts?

Thanks,
  Paul

Quoting Carl Baldwin c...@ecbaldwin.net:


Paul,

I'm curious.  Have you been able to update to a client using requests?
 Has it solved your problem?

Carl

On Thu, May 29, 2014 at 11:15 AM, Paul Ward wpw...@us.ibm.com wrote:

Yes, we're still on a code level that uses httplib2.  I noticed that as
well, but wasn't sure if that would really
help here as it seems like an ssl thing itself.  But... who knows??  I'm not
sure how consistently we can
recreate this, but if we can, I'll try using that patch to use requests and
see if that helps.



Armando M. arma...@gmail.com wrote on 05/29/2014 11:52:34 AM:


From: Armando M. arma...@gmail.com




To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 05/29/2014 11:58 AM



Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

Hi Paul,

Just out of curiosity, I am assuming you are using the client that
still relies on httplib2. Patch [1] replaced httplib2 with requests,
but I believe that a new client that incorporates this change has not
yet been published. I wonder if the failures you are referring to
manifest themselves with the former http library rather than the
latter. Could you clarify?

Thanks,
Armando

[1] - https://review.openstack.org/#/c/89879/

On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
 Well, for my specific error, it was an intermittent ssl handshake error
 before the request was ever sent to the
 neutron-server.  In our case, we saw that 4 out of 5 resize operations
 worked, the fifth failed with this ssl
 handshake error in neutronclient.

 I certainly think a GET is safe to retry, and I agree with your
 statement
 that PUTs and DELETEs probably
 are as well.  This still leaves a change in nova needing to be made to
 actually a) specify a conf option and
 b) pass it to neutronclient where appropriate.


 Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:

 From: Aaron Rosen aaronoro...@gmail.com


 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/28/2014 07:44 PM

 Subject: Re: [openstack-dev] [neutron] Supporting retries in
 neutronclient

 Hi,

 I'm curious if other openstack clients implement this type of retry
 thing. I think retrying on GET/DELETES/PUT's should probably be okay.

 What types of errors do you see in the neutron-server when it fails
 to respond? I think it would be better to move the retry logic into
 the server around the failures rather than the client (or better yet
 if we fixed the server :)). Most of the times I've seen this type of
 failure is due to deadlock errors caused between (sqlalchemy and
 eventlet *i think*) which cause the client to eventually timeout.

 Best,

 Aaron


 On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
 Would it be feasible to make the retry logic only apply to read-only
 operations?  This would still require a nova change to specify the
 number of retries, but it'd also prevent invokers from shooting
 themselves in the foot if they call for a write operation.



 Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:

  From: Aaron Rosen aaronoro...@gmail.com

  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/27/2014 09:44 PM

  Subject: Re: [openstack-dev] [neutron] Supporting retries in
  neutronclient
 
  Hi,

 
  Is it possible to detect when the ssl handshaking error occurs on
  the client side (and only retry for that)? If so I think we should
  do that rather than retrying multiple times. The danger here is
  mostly for POST operations (as Eugene pointed out) where it's
  possible for the response to not make it back to the client and for
  the operation to actually succeed.
 
  Having this retry logic nested in the client also prevents things
  like nova from handling these types of failures individually since
  this retry logic is happening inside of the client. I think it would
  be better not to have this internal mechanism in the client and
  instead make the user of the client implement retry so they are
  aware of failures.
 
  Aaron
 

  On Tue, May 27, 2014 at 10:48 AM, Paul Ward 

[openstack-dev] [Neutron][dhcp] Agent manager customization

2014-06-05 Thread ZZelle
Hi everyone,

I would like to propose a change to allow/simplify dhcp agent manager
customization (like the l3-agent-consolidation spec) and I would like
the community's feedback.


To clarify my context: I deploy OpenStack for small, specific business
use cases and I often customize it because of specific use case needs.
In particular, sometimes I must customize dhcp agent behavior in order
to:
- add custom iptables rules in the dhcp namespace (on dhcp post-deployment),
- remove custom iptables rules in the dhcp namespace (on dhcp pre-undeployment),
- start an application like the metadata-proxy in the dhcp namespace
for isolated networks (on dhcp post-deployment/update),
- stop an application in the dhcp namespace for isolated networks (on
dhcp pre-undeployment/update),
- etc ...
Currently (Havana, Icehouse), I create my own DHCP agent manager which extends
the neutron one and allows defining pre/post dhcp (un)deployment hooks,
and I replace the neutron-dhcp-agent binary, since it's not possible to
change/hook the dhcp agent manager implementation by configuration.


What would be the correct way to allow dhcp agent manager customization ?

For my need, allowing to:
 - specify the dhcp agent manager implementation through configuration, and
 - add 4 methods (pre/post dhcp (un)deployment) to the dhcp manager
workflow, with empty implementations that can be replaced in a subclass
(a rough sketch follows below),

would be enough.
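
To give an idea, the kind of subclass I maintain today looks roughly like the
following (the hook names are mine, not existing neutron API; the base class
and call_driver signature are taken from the Icehouse tree and should be
treated as an assumption):

    from neutron.agent import dhcp_agent


    class CustomDhcpAgentManager(dhcp_agent.DhcpAgentWithStateReport):
        """Wrap driver calls with pre/post hooks (hypothetical names)."""

        def call_driver(self, action, network, **action_kwargs):
            if action == 'enable':
                self.pre_dhcp_deployment(network)
            elif action == 'disable':
                self.pre_dhcp_undeployment(network)
            result = super(CustomDhcpAgentManager, self).call_driver(
                action, network, **action_kwargs)
            if result and action == 'enable':
                self.post_dhcp_deployment(network)
            return result

        def pre_dhcp_deployment(self, network):
            """E.g. add custom iptables rules in the dhcp namespace."""

        def post_dhcp_deployment(self, network):
            """E.g. start a metadata-proxy-like helper for isolated networks."""

        def pre_dhcp_undeployment(self, network):
            """E.g. stop helpers and remove the custom iptables rules."""

Being able to point neutron-dhcp-agent at such a class through a config option
would remove the need to ship a replacement binary.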

Based on other needs, a mechanism-driver-like approach could be better, or a
monkey_patch approach (as in nova) could be more generic.



I have the feeling that the correct way mostly depends on how much such a
feature could interest the community.



Thanks for your feedback,

Cedric (zzelle at irc)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] two confused part about Ironic

2014-06-05 Thread Devananda van der Veen
There is documentation available here:
  http://docs.openstack.org/developer/ironic/deploy/install-guide.html

On Thu, Jun 5, 2014 at 1:25 AM, Jander lu lhcxx0...@gmail.com wrote:
 Hi, Devvananda

 I searched a lot about the installation of Ironic, but there is little
 material about this; there is only devstack with
 ironic (http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html)

 Are there any docs about how to deploy Ironic on a production physical node
 environment?

 thx



 2014-05-30 1:49 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 On Wed, May 28, 2014 at 8:14 PM, Jander lu lhcxx0...@gmail.com wrote:

 Hi, guys, I have two confused part in Ironic.



  (1) if I use the nova boot api to launch a physical instance, how does the nova
  boot command differentiate between VM and physical node provisioning? From this
  article, nova bare metal uses a PlacementFilter instead of the FilterScheduler, so
  does Ironic use the same method?
  (http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/)


 That blog post is now more than three releases old. I would strongly
 encourage you to use Ironic, instead of nova-baremetal, today. To my
 knowledge, that PlacementFilter was not made publicly available. There are
 filters available for the FilterScheduler that work with Ironic.

 As I understand it, you should use host aggregates to differentiate the
 nova-compute services configured to use different hypervisor drivers (eg,
 nova.virt.libvirt vs nova.virt.ironic).



  (2) does Ironic only support flat networking? If not, how does Ironic
  implement tenant isolation in a virtual network? Say, if one tenant has two
  virtual network namespaces, how does the created bare metal node instance send
  the dhcp request to the right namespace?


 Ironic does not yet perform tenant isolation when using the PXE driver,
 and should not be used in an untrusted multitenant environment today. There
 are other issues with untrusted tenants as well (such as firmware exploits)
 that make it generally unsuitable to untrusted multitenancy (though
 specialized hardware platforms may mitigate this).

 There have been discussions with Neutron, and work is being started to
 perform physical network isolation, but this is still some ways off.

 Regards,
 Devananda


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Is ironic support EXSI when boot a bare metal ?

2014-06-05 Thread Devananda van der Veen
ChaoYan,

Are you asking about using vmware as a test platform for developing
Ironic, or as a platform on which to run a production workload managed
by Ironic? I do not understand your question -- why would you use
Ironic to manage a VMWare cluster, when there is a separate Nova
driver specifically designed for managing vmware? While I am not
familiar with it, I believe more information may be found here:
  https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide

Best,
Devananda

On Thu, Jun 5, 2014 at 4:39 AM, 严超 yanchao...@gmail.com wrote:
 Hi, All:
 Does Ironic support ESXi when booting a bare metal node? If it can, how do we
 make a VMware ESXi AMI bare metal image?

 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-05 Thread Duncan Thomas
The best thing to do with the code is push up a gerrit review! No need
to be shy, and you're very welcome to push up code before the
blueprint is in, it just won't get merged.

I'm very interested in this code.

On 3 June 2014 09:06, Philipp Marek philipp.ma...@linbit.com wrote:
 Hi everybody,

 at the Juno Design Summit we held a presentation about using DRBD 9
 within OpenStack.

 Here's an overview about the situation; I apologize in advance that the
 mail got a bit longer, but I think it makes sense to capture all that
 information in a single piece.



  WHAT WE HAVE


 Design Summit notes:
 https://etherpad.openstack.org/p/juno-cinder-DRBD


 As promised we've got a proof-of-concept implementation for the simplest
 case, using DRBD to access data on all nodes - the DRBDmanage volume
 driver as per the Etherpad notes (link see below).


 As both DRBD 9 and DRBDmanage are still in heavy development, there are
 quite a few rough edges; in case anyone's interested in setting that up
 on some testsystem, I can offer RPMs and DEBs of drbd-utils and
 drbdmanage, and for the DRBD 9 kernel module for a small set of kernel
 versions:

 Ubuntu 12.04       3.8.0-34-generic
 RHEL6 (& compat)   2.6.32_431.11.2.el6.x86_64

 If there's consensus that some specific kernel version should be used
 for testing instead I can try to build packages for that, too.


 There's a cinder git clone with our changes at
 https://github.com/phmarek/cinder
 so that all developments can be discussed easily.
 (Should I use some branch in github.com/OpenStack/Cinder instead?)



  FUTURE PLANS


 The (/our) plans are:

  * LINBIT will continue DRBD 9 and DRBDmanage development,
so that these get production-ready ASAP.
Note: DRBDmanage is heavily influenced by outside
requirements, eg. OpenStack Cinder Consistency Groups...
So the sooner we're aware of such needs the better;
I'd like to avoid changing the DBUS api multiple times ;)

  * LINBIT continues to work on the DRBD Cinder volume driver,
as this is

  * LINBIT starts to work to provide DRBD 9 integration
between the LVM and iSCSI layer.
That needs the Replication API to be more or less finished.

 There are a few dependencies, though ... please see below.


 All help - ideas, comments (both for design and code), all feedback,
 and, last but not least, patches or pull requests - are *really*
 welcome, of course.

 (For real-time communication I'm available in the #openstack-cinder
 channel too, mostly during European working hours; I'm flip\d+.)



  WHAT WE NEED


 Now, while I filled out the CLA, I haven't read through all the
 documentation regarding Processes & Workflow yet ... and that'll take
 some time, I gather.


 Furthermore, on the technical side there's a lot to discuss, too;
 eg. regarding snapshots there are quite a few things to decide.

  * Should snapshots be taken on _one_ of the storage nodes,
  * on some subset of them, or
  * on all of them?

 I'm not sure whether the same redundancy that's defined for the volume
 is wanted for the snapshots, too.
 (I guess one usecase that should be possible is to take at least one
 snapshot of the volume in _each_ data center?)


 Please note that having volume groups would be good-to-have (if not
 essential) for a DRBD integration, because only then DRBD could ensure
 data integrity *across* volumes (by using a single resource for all of
 them).
 See also 
 https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups; 
 basically, the volume driver just needs to be given an
 additional value to associate the volumes into this group.



  EULA


 Now, there'll be quite a few things I forgot to mention, or that I'm
 simply missing. Please bear with me, I'm fairly new to OpenStack.


 So ... ideas, comments, other feedback?


 Regards,

 Phil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Need help with a gnarly Object Version issue

2014-06-05 Thread Day, Phil
Hi Dan,

 
  On a compute manager that is still running the old version of the code
  (i.e using the previous object version), if a method that hasn't yet
  been converted to objects gets a dict created from the new  version of
  the object (e.g. rescue, get_console_output), then object_compat()
  decorator will call the _/from_db/_object() method in
  objects.Instance. Because this is the old version of the object
  code, it expects user_data to be a field in dict, and throws a key error.
 
 Yeah, so the versioning rules are that for a minor version, you can only add
 things to the object, not remove them.

Ah - Ok.  That probably explains why it doesn't work then ;-(

 
  1)  Rather than removing the user_data field from the object just
  set it to a null value if its not requested.
 
 Objects have a notion of unset which is what you'd want here. Fields that
 are not set can be lazy-loaded when touched, which might be a reasonable
 way out of the box here if user_data is really only used in one place. It 
 would
 mean that older clients would lazy-load it when needed, and going forward
 we'd be specific about asking for it when we want.
 
 However, the problem is that instance defines the fields it's willing to lazy-
 load, and user_data isn't one of them. That'd mean that we need to backport
 a change to allow it to be lazy-loaded, which means we should probably just
 backport the thing that requests user_data when needed instead.
 
Not quite sure I follow.  The list of fields that can be lazy-loaded is defined by
INSTANCE_OPTIONAL_ATTRS, right?   I moved user_data into that set of fields as
part of my patch, but the problem I have is with the mix of objects and
non-objects, such as the following sequence:

Client:   Gets an Object (of new version)
RPCAPI:  Converts Object to a Dict (because the specific RPC method hasn't been 
converted to take an Object yet)
Manager:  Converts dict to an Object (of the old version) via the 
@object_compat decorator

The last step fails because _from_db_object() runs just in the 
not-yet-updated manager, and hence hits a KeyError.

I don't think lazy loading helps here, because the code that fails is trying to
create the object from a dict, not trying to access a field on an Object - or am I
missing something?


  2)  Add object versioning in the client side of the RPC layer for
  those methods that don't take objects.
 
 I'm not sure what you mean here.
 
In terms of the above scenario I was thinking that the RPCAPI layer could make 
sure the object was the right version before it converts it to a dict.



  I'm open to other ideas, and general guidance around how deletion of
  fields from Objects is meant to be handled ?
 
 It's meant to be handled by rev-ing the major version, since removing
 something isn't a compatible operation.
 
 Note that *conductor* has knowledge of the client-side version of an object
 on which the remotable_classmethod is being called, but that is not exposed
 to the actual object implementation in any way. We could, perhaps, figure
 out a sneaky way to expose that, which would let you honor the old behavior
 if we know the object is old, or the new behavior otherwise.
 
I think the problem is that I don't have an object at the point where I get the 
failure, I have a dict that is trying to be mapped into an object, so it 
doesn't call back into conductor.

I'm thinking now that, as the problem is the size and/or data in user_data - and
that is only needed in a very few specific places - I could just set the user_data
contents in the instance Object to None or 'X' if it's not requested when the
object is created.  (Setting it to 'X' would probably make it easier to debug
if something that does need it gets missed.)
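A rough illustration of what I mean (not a tested patch; this treats db_inst
as the dict that arrives in the failing path):

    # in Instance._from_db_object(), illustration only:
    # guard the lookup instead of assuming the key is present, and use a
    # sentinel so anything that still reads user_data is easy to spot
    instance['user_data'] = db_inst.get('user_data', 'X')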

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Mid Cycle Meetup

2014-06-05 Thread Arnaud Legendre
Hi Folks 

We are currently working on the logistics of organizing the Glance mid-cycle 
meetup. 
What we know so far: 
- it will happen in California, USA (either in Palo Alto or San Francisco), 
- it will be a 3 days event in the week Jul 28th - Aug 1st (either Monday to 
Wednesday or Tuesday to Thursday), 

With that in mind, please add yourself to this etherpad if you think you will 
be able to attend: 
https://etherpad.openstack.org/p/glance-juno-mid-cycle-meeting 

This will help a lot for the organization! 

Thank you, 
Arnaud 



- Original Message -

From: Mark Washenberger mark.washenber...@markwash.net 
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
Sent: Thursday, May 15, 2014 10:26:47 AM 
Subject: [openstack-dev] [Glance] Mid Cycle Meetup Survey 

Hi Folks! 

Ashwhini has put together a great survey to help us plan our Glance mid cycle 
meetup. Please fill it out if you think you might be interested in attending! 
In particular we're trying to figure out sponsorship and location. If you have 
no location preference, feel free to leave those check boxes blank. 

https://docs.google.com/forms/d/1rygMU1fXcBYn9_NgvEtjoCXlRQtlIA1UCqsQByxbTA8/viewform
 

Cheers, 
markwash 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes June 5

2014-06-05 Thread Sergey Lukjanov
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-06-05-18.02.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-06-05-18.02.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Qin Zhao
Hi Yuriy,

I read the multiprocessing source code just now.  Now I feel it may not solve
this problem very easily.  For example, let us assume that we will use the
proxy object in the Manager's process to call libguestfs.  In manager.py, I see
it needs to create a pipe before forking the child process. The write end of
this pipe is required by the child process.

http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1managers_1_1_base_manager.html#a57fe9abe7a3d281286556c4bf3fbf4d5

And in Process._bootstrap(), I think we will need to register a function to
be called by _run_after_forkers(), in order to close the fds inherited
from the Nova process.

http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1process_1_1_process.html#ae594800e7bdef288d9bfbf8b79019d2e

And we also cannot close the write end fd created by the Manager in
_run_after_forkers(). One feasible way may be getting that fd from the 5th
element of the _args attribute of the Process object, then skipping the close
of that fd.  I have not investigated whether or not the Manager needs to use
other fds besides this pipe. Personally, I feel such an implementation will be a
little tricky and risky, because it tightly depends on the Manager code. If
the Manager opens other files, or changes the argument order, our code will fail
to run. Am I wrong?  Is there any other safer way?
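Just so we are talking about the same thing, I think the minimal shape of the
suggestion below is something like this (illustrative only; whether it can be
wired cleanly into nova.virt.disk.vfs.guestfs without the fd problems above is
exactly what I am unsure about):

    from multiprocessing.managers import BaseManager

    import guestfs


    class GuestFSManager(BaseManager):
        pass

    # Calls made through the proxy run inside the manager's child process,
    # so the libguestfs appliance is forked from a process holding far
    # fewer inherited descriptors than nova-compute itself.
    GuestFSManager.register('GuestFS', guestfs.GuestFS)

    manager = GuestFSManager()
    manager.start()                # spawns the helper process once
    g = manager.GuestFS()          # proxy object; method calls go over IPC
    g.add_drive_opts('/tmp/disk.img', readonly=1)
    g.launch()
    # ... exceptions and return values are forwarded back to the caller ...
    manager.shutdown()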


On Thu, Jun 5, 2014 at 11:40 PM, Yuriy Taraday yorik@gmail.com wrote:

 Please take a look at
 https://docs.python.org/2.7/library/multiprocessing.html#managers -
 everything is already implemented there.
 All you need is to start one manager that would serve all your requests to
 libguestfs. The implementation in stdlib will provide you with all
 exceptions and return values with minimum code changes on Nova side.
 Create a new Manager, register an libguestfs endpoint in it and call
 start(). It will spawn a separate process that will speak with calling
 process over very simple RPC.
 From the looks of it all you need to do is replace tpool.Proxy calls in
 VFSGuestFS.setup method to calls to this new Manager.


 On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
 have this issue, since they can secure the file descriptor. Before
 OpenStack move to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be a way. This way, Nova code can
 close those fd by itself, not depending upon CLOEXEC. However, that will be
 an expensive solution, since it requires a lot of code change. At least we
 need to write code to pass the return value and exception between these two
 processes. That will make this solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com
 wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

   Will this patch of Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this
 problem does not occur during data injection.  Before creating the ISO, 
 the
 driver code will extend the disk. Libguestfs is invoked in that time 
 frame.

 And now I think this problem may occur at any time, if the code use
 tpool to invoke libguestfs, and one external commend is executed in 
 another
 green thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call libguestfs
 routine in greenthread, rather than another native thread. But it will
 impact the performance very much. So I do not think that is an acceptable
 solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
  description and the scenario in which it happens (
  https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
  ),  if that's the case,  concurrent multiple KVM spawn instances (*with
 both config drive and data injection enabled*) are triggered, the
 issue can be very likely to happen.
 As in libvirt/driver.py _create_image method, right after iso making
 cdb.make_drive, the driver will 

Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
I have submitted a new/expanded spec for this feature: 
https://review.openstack.org/#/c/98219/. I hope to start some WiP patches this 
afternoon/tomorrow morning. Spec reviews and input most welcome.
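For context while reviewing, the client-side usage I have in mind is roughly
the following (the nested_depth parameter name and the per-resource stack
reference are placeholders until the spec settles):

    # hypothetical usage once the spec and patches land
    from heatclient.client import Client

    heat = Client('1', endpoint='http://heat.example.com:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')
    # flat list of resources, recursing two levels into nested stacks
    resources = heat.resources.list('my_stack', nested_depth=2)
    for res in resources:
        # the stack name/id per resource is part of the proposal
        print('%s (%s) in %s' % (res.resource_name, res.resource_type,
                                 res.stack_name))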

On Jun 5, 2014, at 11:35 AM, Randall Burt randall.b...@rackspace.com wrote:

 Hey, sorry for the slow follow. I have to put some finishing touches on a 
 spec and submit that for review. I'll reply to the list with the link later 
 today. Hope to have an initial patch up as well in the next day or so.
 
 On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
 nilakhya.chatter...@globallogic.com
 wrote:
 
 HI Guys, 
 
 It was great to find your interest in solving the nested stack resource
 listing.

 Let's move ahead by finishing any discussions left over the BP and getting
 approval on it.

 Till now, what makes sense to me is:

 a) an additional flag in the client call, --nested (Randall)
 b) a flattened data structure in the output (Tim)
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be
 able to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:
 
 Hi Tim,
 
 Maybe instead of just a flag like --nested (bool value) to resource-list we 
 can add optional argument like --depth X or --nested-level X (X - integer 
 value) to limit the depth for recursive listing of nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
 call. I'Ll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user prompts to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 

Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Derek Higgins
On 05/06/14 13:07, Sean Dague wrote:
 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).
 
 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.
 
 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.
 
 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/

Hitting that link gives a different page when compared to navigating to
the Rechecks tag from http://status.openstack.org and I can't find a
way to navigate to the page you linked, is this intentional?

just curious, ignore me if I'm distracting from the current issues.

 
 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.
 
 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014
 
 If you want to help, jumping in #openstack-infra would be the place to go.
 
   -Sean
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread Carl Baldwin
I have seen that the Ryu team is involved and responsive to the community.
That goes a long way to support it as the reference implementation for
BGP speaking in Neutron.  Thank you for your support.  I'll look
forward to the API and documentation refinement.

Let's be sure to document any work that needs to be done so that it
will support the features we need.  We can use the comparison page for
now [1] to gather that information (or links).  If Ryu is lacking in
any area, it will be good to understand the timeline on which the
features can be delivered and stable before we make a formal decision
on the reference implementation.
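For anyone who wants to start experimenting before the refined API and docs
land, the Ryu speaker can already be driven from plain Python roughly like
this (based on my reading of the current ryu.services.protocols.bgp code; the
API may still change with the refinement mentioned above):

    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker


    def best_path_change(event):
        # invoked whenever the best path for a prefix changes
        print('%s via %s (withdraw=%s)' % (event.prefix, event.nexthop,
                                           event.is_withdraw))

    speaker = BGPSpeaker(as_number=64512, router_id='10.0.0.1',
                         best_path_change_handler=best_path_change)
    speaker.neighbor_add('10.0.0.2', remote_as=64513)
    # advertise a route, e.g. a tenant network for the dynamic routing case
    speaker.prefix_add('192.168.100.0/24', next_hop='10.0.0.1')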

Carl

[1] https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

On Thu, Jun 5, 2014 at 10:36 AM, Jaume Devesa devv...@gmail.com wrote:
 After watching the documentation and the code of exabgp and Ryu, I find the Ryu
 speaker much easier to integrate and more pythonic than exabgp. I will use it
 as the reference implementation in the Dynamic Routing bp as well.

 Regards,


 On 5 June 2014 18:23, Nachi Ueno na...@ntti3.com wrote:

  Yamamoto
 Cool! OK, I'll make ryu based bgpspeaker as ref impl for my bp.

 Yong
 Ya, we have already decided to have the driver architecture.
 IMO, this discussion is for reference impl.

 2014-06-05 0:24 GMT-07:00 Yongsheng Gong gong...@unitedstack.com:
  I think maybe we can device a kind of framework so that we can plugin
  different BGP speakers.
 
 
  On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi
  yamam...@valinux.co.jp
  wrote:
 
  hi,
 
   ExaBgp was our first choice because we thought that run something in
   library mode would be much more easy to deal with (especially the
   exceptions and corner cases) and the code would be much cleaner. But
   seems
   that Ryu BGP also can fit in this requirement. And having the help
   from
   a
   Ryu developer like you turns it into a promising candidate!
  
   I'll start working now in a proof of concept to run the agent with
   these
   implementations and see if we need more requirements to compare
   between
   the
   speakers.
 
  we (ryu team) love to hear any suggestions and/or requests.
  we are currently working on our bgp api refinement and documentation.
  hopefully they will be available early next week.
 
  for both of bgp blueprints, it would be possible, and might be
  desirable,
  to create reference implementations in python using ryu or exabgp.
  (i prefer ryu. :-)
 
  YAMAMOTO Takashi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Jaume Devesa
 Software Engineer at Midokura

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Russell Bryant
On 06/05/2014 03:59 PM, Derek Higgins wrote:
 On 05/06/14 13:07, Sean Dague wrote:
 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/
 
 Hitting that link gives a different page when compared to navigating to
 the Rechecks tag from http://status.openstack.org and I can't find a
 way to navigate to the page you linked, is this intentional?
 
 just curious, ignore me if I'm distracting from the current issues.

Refresh perhaps?  I think the change to update the status.openstack.org
page to point to elastic-recheck instead of the old rechecks page was
fairly recent.  You could have an out of date cached page.  It matches
for me.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Anita Kuno
On 06/05/2014 03:59 PM, Derek Higgins wrote:
 On 05/06/14 13:07, Sean Dague wrote:
 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/
 
 Hitting that link gives a different page when compared to navigating to
 the Rechecks tag from http://status.openstack.org and I can't find a
 way to navigate to the page you linked, is this intentional?
 
 just curious, ignore me if I'm distracting from the current issues.

The elastic-recheck page is different from the rechecks page, so yes
navigating from status.openstack.org takes you to rechecks and that is
intentional. It is intentional inasmuch as elastic-recheck is still
considered a work in progress (so wear your hard hat) rather than ready
for public viewing (bring your camera). It is more about managing
expectations than anything.

Thanks,
Anita.

 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.

 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014

 If you want to help, jumping in #openstack-infra would be the place to go.

  -Sean



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Regarding Configuring keystone to integrate neo4j

2014-06-05 Thread Tahmina Ahmed
Dear all,

I am trying to use the neo4j graph database instead of mysql for my keystone.
Would you please help with any documentation or ideas on how I need to write
the driver?




Thanks & Regards,
Tahmina
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Regarding Configuring keystone to integrate neo4j

2014-06-05 Thread Adam Young

On 06/05/2014 05:25 PM, Tahmina Ahmed wrote:


Dear all,

I am trying to use neo4j graph database instead of mysql for  my 
keystone would you please help with any documentation or ideas how I 
need to write the driver.


No clue about Graph Databases myself.  You should pick one backend, 
though, and implement for that.  I would suggest leaving mysql for all 
but policy, and implementing a driver for that.  It is by far the simplest 
subsystem.


It would be in keystone/policy/backends/.
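Very roughly, the skeleton would be something like the sketch below (check the 
abstract methods against keystone.policy.core.Driver in the release you are 
targeting; the neo4j persistence itself is left as comments since it depends 
on which client library you pick), and then point the [policy] driver option 
in keystone.conf at the new class:

    # keystone/policy/backends/neo4j.py  (sketch only)
    from keystone.policy import core


    class Policy(core.Driver):
        """Store policy blobs in a neo4j graph instead of SQL."""

        def create_policy(self, policy_id, policy):
            # e.g. CREATE (:Policy {id: ..., type: ..., blob: ...})
            raise NotImplementedError()

        def get_policy(self, policy_id):
            # e.g. MATCH (p:Policy {id: ...}) RETURN p
            raise NotImplementedError()

        def list_policies(self):
            raise NotImplementedError()

        def update_policy(self, policy_id, policy):
            raise NotImplementedError()

        def delete_policy(self, policy_id):
            raise NotImplementedError()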








Thanks & Regards,
Tahmina



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Resource action API

2014-06-05 Thread Zane Bitter

On 05/06/14 03:32, yang zhang wrote:


Thanks so much for your comments.

  Date: Wed, 4 Jun 2014 14:39:30 -0400
  From: zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [heat] Resource action API
 
  On 04/06/14 03:01, yang zhang wrote:
   Hi all,
   Now heat only supports suspending/resuming a whole stack, all the
   resources of the stack will be suspended/resumed,
   but sometime we just want to suspend or resume only a part of resources
 
  Any reason you wouldn't put that subset of resources into a nested stack
  and suspend/resume that?

I think that using a nested stack is a little complicated, and we
can't build a nested stack
for each resource; I hope this bp could make it easier.
 
   in the stack, so I think adding resource-action API for heat is
   necessary. this API will be helpful to solve 2 problems:
 
  I'm sceptical of this idea because the whole justification for having
  suspend/resume in Heat is that it's something that needs to follow the
  same dependency tree as stack delete/create.
 
  Are you suggesting that if you suspend an individual resource, all of
  the resources dependent on it will also be suspended?

 I thought about this, and I think just suspending an individual
resource without its dependents
is OK. Right now the resources that can be suspended are very few, and almost
all of those resources
(Server, alarm, user, etc.) could be suspended individually.


Then just suspend them individually using their own APIs. If there's no 
orchestration involved then it doesn't belong in Heat.



   - If we want to suspend/resume the resources of the stack, you need
   to get the phy_id first and then call the API of other services, and
   this won't update the status
   of the resource in heat, which often cause some unexpected problem.
 
  This is true, except for stack resources, which obviously _do_ store the
  state.
 
   - this API could offer a turn on/off function for some native
   resources, e.g., we can turn on/off the autoscalinggroup or a single
   policy with
   the API, this is like the suspend/resume services feature[1] in AWS.
 
  Which, I notice, is not exposed in CloudFormation.

  I found it on the AWS website. It seems to be an autoscaling group feature; this may
not be
  exposed in CloudFormation, but I think it's really a good idea.


Sure, but the solution here is to have a separate Autoscaling API (this 
is a long-term goal for us already) that exposes this feature.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Kyle Mestery
On Thu, Jun 5, 2014 at 7:07 AM, Sean Dague s...@dague.net wrote:
 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/

 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.

 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014

 If you want to help, jumping in #openstack-infra would be the place to go.

For the Neutron ssh timeout issue [1], we think we know why it's
spiked recently. This tempest change [2] may have made the situation
worse. We'd like to propose reverting that change with the review here
[3], at which point we can resubmit it and continue debugging this.
But this should help relieve the pressure caused by the recent surge
in this bug.

Does this sound like a workable plan to get things moving again?

Thanks,
Kyle

[1] https://bugs.launchpad.net/bugs/1323658
[2] https://review.openstack.org/#/c/90427/
[3] https://review.openstack.org/#/c/97245/

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
I've submitted the spec (finally) and will work on some initial patches this 
afternoon/tomorrow. Please provide any feedback and thanks!

https://review.openstack.org/#/c/98219

On Jun 5, 2014, at 11:35 AM, Randall Burt randall.b...@rackspace.com wrote:

 Hey, sorry for the slow follow. I have to put some finishing touches on a 
 spec and submit that for review. I'll reply to the list with the link later 
 today. Hope to have an initial patch up as well in the next day or so.
 
 On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
 nilakhya.chatter...@globallogic.com
 wrote:
 
 HI Guys, 
 
 It was great to find your interest in solving the nested stack resource
 listing.

 Let's move ahead by finishing any discussions left over the BP and getting
 approval on it.

 Till now, what makes sense to me is:

 a) an additional flag in the client call, --nested (Randall)
 b) a flattened data structure in the output (Tim)
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be
 able to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:
 
 Hi Tim,
 
 Maybe instead of just a flag like --nested (bool value) to resource-list we 
 can add optional argument like --depth X or --nested-level X (X - integer 
 value) to limit the depth for recursive listing of nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
 call. I'Ll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user prompts to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Kevin Benton
Is it possible to make the depth of patches running tests in the gate very
shallow during this high-probability of failure time? e.g. Allow only the
top 4 to run tests and put the rest in the 'queued' state. Otherwise the
already elevated probability of a patch failing is exacerbated by the fact
that it gets retested every time a patch ahead of it in the queue fails.

--
Kevin Benton


On Thu, Jun 5, 2014 at 5:07 AM, Sean Dague s...@dague.net wrote:

 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/

 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.

 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014

 If you want to help, jumping in #openstack-infra would be the place to go.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Joe Gordon
On Thu, Jun 5, 2014 at 3:29 PM, Kevin Benton blak...@gmail.com wrote:

 Is it possible to make the depth of patches running tests in the gate very
 shallow during this high-probability of failure time? e.g. Allow only the
 top 4 to run tests and put the rest in the 'queued' state. Otherwise the
 already elevated probability of a patch failing is exacerbated by the fact
 that it gets retested every time a patch ahead of it in the queue fails.

 Such a good idea that we already do it.

http://status.openstack.org/zuul/

The grey circles refer to patches that are in the queued state. But this
only keeps us from hitting resource starvation; it doesn't help us get
patches through the gate. We haven't been landing many patches this week
[0].

[0] https://github.com/openstack/openstack/graphs/commit-activity


 --
 Kevin Benton


 On Thu, Jun 5, 2014 at 5:07 AM, Sean Dague s...@dague.net wrote:

 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/

 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.

 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014

 If you want to help, jumping in #openstack-infra would be the place to go.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Joe Gordon
On Thu, Jun 5, 2014 at 3:05 PM, Kyle Mestery mest...@noironetworks.com
wrote:

 On Thu, Jun 5, 2014 at 7:07 AM, Sean Dague s...@dague.net wrote:
  You may all have noticed things are really backed up in the gate right
  now, and you would be correct. (Top of gate is about 30 hrs, but if you
  do the math on ingress / egress rates the gate is probably really double
  that in transit time right now).
 
  We've hit another threshold where there are so many really small races
  in the gate that they are compounding to the point where fixing one is
  often failed by another one killing your job. This whole situation was
  exacerbated by the fact that while the transition from HP cloud 1.0 -
  1.1 was happening and we were under capacity, the check queue grew to
  500 with lots of stuff being approved.
 
  That flush all hit the gate at once. But it also means that those jobs
  passed in a very specific timing situation, which is different on the
  new HP cloud nodes. And the normal statistical distribution of some jobs
  on RAX and some on HP that shake out different races didn't happen.
 
  At this point we could really use help getting focus on only recheck
  bugs. The current list of bugs is here:
  http://status.openstack.org/elastic-recheck/
 
  Also our categorization rate is only 75% so there are probably at least
  2 critical bugs we don't even know about yet hiding in the failures.
  Helping categorize here -
  http://status.openstack.org/elastic-recheck/data/uncategorized.html
  would be handy.
 
  We're coordinating changes via an etherpad here -
  https://etherpad.openstack.org/p/gatetriage-june2014
 
  If you want to help, jumping in #openstack-infra would be the place to
 go.
 
 For the Neutron ssh timeout issue [1], we think we know why it's
 spiked recently. This tempest change [2] may have made the situation
 worse. We'd like to propose reverting that change with the review here
 [3], at which point we can resubmit it and continue debugging this.
 But this should help relieve the pressure caused by the recent surge
 in this bug.

 Does this sound like a workable plan to get things moving again?



As we discussed on IRC yes, and thank you for hunting this one down.




 Thanks,
 Kyle

 [1] https://bugs.launchpad.net/bugs/1323658
 [2] https://review.openstack.org/#/c/90427/
 [3] https://review.openstack.org/#/c/97245/

  -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-05 Thread Zane Bitter

On 05/06/14 12:03, Kurt Griffiths wrote:

I just learned that some projects are thinking about having the specs
process be the channel for submitting new feature ideas, rather than
registering blueprints. I must admit, that would be kind of nice because
it would provide some much-needed structure around the triaging process.

I wonder if we can get some benefit out of the spec process while still
keeping it light? The temptation will be to start documenting everything
in excruciating detail, but we can mitigate that by codifying some
guidelines on our wiki and baking it into the team culture.

What does everyone think?


FWIW we just adopted a specs repo in Heat, and all of us feel exactly 
the same way as you do:


http://lists.openstack.org/pipermail/openstack-dev/2014-May/036432.html

I can't speak for every project, but you are far from the only ones 
wanting to use this as lightweight process. Hopefully we'll all figure 
out together how to make that happen :)


cheers,
Zane.


From: Kurt Griffiths kurt.griffi...@rackspace.com
Date: Tuesday, June 3, 2014 at 9:34 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I think it becomes more useful the larger your team. With a smaller team
it is easier to keep everyone on the same page just through the mailing
list and IRC. As for where to document design decisions, the trick there
is more one of being diligent about capturing and recording the why of
every decision made in discussions and such; gerrit review history can
help with that, but it isn’t free.

If we’d like to give the specs process a try, I think we could do an
experiment in j-2 with a single bp. Depending on how that goes, we may
do more in the K cycle. What does everyone think?

From: Malini Kamalambal malini.kamalam...@rackspace.com
mailto:malini.kamalam...@rackspace.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 at 2:45 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

+1 – Requiring specs for every blueprint is going to make the
development process very cumbersome, and will take us back to waterfall
days.
I like how the Marconi team operates now, with design decisions being
made in IRC/team meetings.
So Specs might add more overhead than value, given how our
team functions.

_'If'_ we agree to use Specs, we should use that only for the blueprints
that make sense.
For example, the unit test decoupling that we are working on now – this
one will be a good candidate to use specs, since there is a lot of back
and forth going on how to do this.
On the other hand something like Tempest Integration for Marconi will
not warrant a spec, since it is pretty straightforward what needs to be
done.
In the past we have had discussions around where to document certain
design decisions (e.g. Which endpoint/verb is the best fit for pop
operation?)
Maybe spec is the place for these?

We should leave it to the implementor to decide if the bp warrants a
spec or not, and what should be in the spec.


From: Kurt Griffiths kurt.griffi...@rackspace.com
mailto:kurt.griffi...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 1:33 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing
specs, and in roles where specs where non-existent. Like most things,
I’ve become convinced that success lies in moderation between the two
extremes.

I think it would make sense for big specs, but I want to be careful we
use it judiciously so that we don’t simply apply more process for the
sake of more process. It is tempting to spend too much time recording
every little detail in a spec, when that time could be better spent in
regular communication between team members and with customers, and on
iterating the code (/short/ iterations between demo/testing, so you
ensure you are staying on track and can address design problems
early and often).

IMO, specs are best used more as summaries, containing useful
big-picture ideas, diagrams, and specific “memory pegs” to help us
remember what was discussed and decided, and calling out specific
“promises” for future conversations where certain design points are TBD.

From: Malini Kamalambal malini.kamalam...@rackspace.com
mailto:malini.kamalam...@rackspace.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev 

[openstack-dev] [Ironic] Our gate status // getting better at rechecks

2014-06-05 Thread Devananda van der Veen
Quick update for those who are following along but may not be on IRC right now.

The gate (not just ours -- the gate for all of openstack, which Ironic
is a part of) is having issues right now. See Sean's email for details
on that, and what you can do to help

  http://lists.openstack.org/pipermail/openstack-dev/2014-June/036810.html

Also, a patch landed in Nova which completely broke Ironic's unit and
tempest tests two days ago. Blame me if you need to blame someone -- I
looked at the patch and thought it was fine, and so did a couple
nova-core. That is why all your Ironic patches are failing unit tests
and tempest tests, and they will keep failing until 97757 lands.
Unfortunately, the whole gate queue is backed up, so this fix has
already been in the queue for ~24hrs, and will probably take another
day, at least, to land.

In the meantime, what can you do to help? Keep working on bug fixes in
Ironic and in the nova.virt.ironic driver, and help review incoming
specifications. See Sean's email, look at the elastic recheck status
page, write E-R queries, and help fix those bugs. If you're not sure
how to help with rechecks, join #openstack-qa.

If you're an ironic-core member, please don't approve any patches
until after 97757 lands -- and then, I think we should only be
approving important bug fixes until the gate stabilizes.

Thanks,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Mid cycle meetup

2014-06-05 Thread Michael Still
Hi!

Nova will hold its Juno mid cycle meetup between July 28 and 30, at an
Intel campus in Beaverton, OR (near Portland). There is a wiki page
with more details here:

https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

I'll update that wiki page as we nail down more details. There's an
eventbrite RSVP system setup at:


https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Does ironic support ESXi when booting a bare metal node?

2014-06-05 Thread 严超
Hi, thank you for your help!
I was asking about a platform on which to run a production workload managed
by Ironic.
Yes, there is a separate Nova driver specifically designed for managing
VMware.
But can we deploy bare metal onto VMware ESXi?
Or can we use the VMware driver and, at the same time, the libvirt driver?
Can we manage both bare metal and KVM (or ESXi or Xen) at the same time in
an OpenStack cluster?


Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--


2014-06-06 1:31 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 ChaoYan,

 Are you asking about using vmware as a test platform for developing
 Ironic, or as a platform on which to run a production workload managed
 by Ironic? I do not understand your question -- why would you use
 Ironic to manage a VMWare cluster, when there is a separate Nova
 driver specifically designed for managing vmware? While I am not
 familiar with it, I believe more information may be found here:
   https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide

 Best,
 Devananda

 On Thu, Jun 5, 2014 at 4:39 AM, 严超 yanchao...@gmail.com wrote:
  Hi, All:
  Does ironic support ESXi when booting bare metal? If so, how do we
  make a VMware ESXi AMI bare metal image?
 
  Best Regards!
  Chao Yan
  --
  My twitter:Andy Yan @yanchao727
  My Weibo:http://weibo.com/herewearenow
  --
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Our gate status // getting better at rechecks

2014-06-05 Thread David Shrewsbury
FYI for all,

I have posted http://review.openstack.org/98201 in an attempt to at least
give *some* warning to developers that a change to one of the public
methods on classes we derive from may have consequences to Ironic.
I have WIP'd it until 97757 lands.



On Thu, Jun 5, 2014 at 7:51 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Quick update for those who are following along but may not be on IRC right
 now.

 The gate (not just ours -- the gate for all of openstack, which Ironic
 is a part of) is having issues right now. See Sean's email for details
 on that, and what you can do to help

   http://lists.openstack.org/pipermail/openstack-dev/2014-June/036810.html

 Also, a patch landed in Nova which completely broke Ironic's unit and
 tempest tests two days ago. Blame me if you need to blame someone -- I
 looked at the patch and thought it was fine, and so did a couple
 nova-core. That is why all your Ironic patches are failing unit tests
 and tempest tests, and they will keep failing until 97757 lands.
 Unfortunately, the whole gate queue is backed up, so this fix has
 already been in the queue for ~24hrs, and will probably take another
 day, at least, to land.

 In the meantime, what can you do to help? Keep working on bug fixes in
 Ironic and in the nova.virt.ironic driver, and help review incoming
 specifications. See Sean's email, look at the elastic recheck status
 page, write E-R queries, and help fix those bugs. If you're not sure
 how to help with rechecks, join #openstack-qa.

 If you're an ironic-core member, please don't approve any patches
 until after 97757 lands -- and then, I think we should only be
 approving important bug fixes until the gate stabilizes.

 Thanks,
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Kevin Benton
Oh cool. I didn't realize it was deliberately limited already. I had
assumed it was just hitting the resource limits for that queue.

So it looks like it's around 20 now. However, I would argue that shortening
it more would help get patches through the gate.

For the sake of discussion, let's assume there is an 80% chance of success
in one test run on a patch. So a given patch's probability of success is
.8^n where n is the number of runs.

For the 1st patch in the queue, n is just one.
For the 2nd patch, n is 1 + the probability of a failure from patch 1.
For the 3rd patch, n is 1 + the probability of a failure in patch 2 or 1.
For the 4th patch, n is 1 + the probability of a failure in patch 3, 2, or
1.
...

Unfortunately my conditional probability skills are too shaky to trust an
equation I come up with to represent the above scenario so I wrote a gate
failure simulator [1].

At a queue size of 20 and an 80% success rate, the patch in position 20
only has a ~44% chance of getting merged.
However, with a queue size of 4, the patch in position 4 has a ~71% chance
of getting merged.

You can try the simulator out yourself with various numbers. Maybe the odds
of success are much better than 80% in one run and my point is moot, but I
have several patches waiting to be merged that haven't made it through
after ~3 tries each.


Cheers,
Kevin Benton

1. http://paste.openstack.org/show/83039/
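
For reference, a minimal simulation along these lines might look like the
sketch below. This is not the contents of [1]; it is a rough reconstruction
that assumes any failed run ejects a change and that every change behind the
first failure in a round is fully re-tested, so exact numbers may differ.

    import random

    def simulate(queue_size=20, p=0.8, trials=20000):
        # For each starting position, count how often that change eventually
        # merges under the pessimistic model above: a change is ejected as
        # soon as any of its runs fails, and every surviving change behind
        # the first failure is re-tested from scratch next round.
        merged = [0] * queue_size
        for _ in range(trials):
            queue = list(range(queue_size))
            while queue:
                results = [random.random() < p for _ in queue]
                if all(results):
                    for change in queue:
                        merged[change] += 1
                    break
                first_fail = results.index(False)
                for change in queue[:first_fail]:
                    merged[change] += 1   # clean passes ahead of the failure merge
                # everything that failed this round is dropped; passers behind
                # the failure are kept but must be re-tested
                queue = [change for change, ok in zip(queue[first_fail + 1:],
                                                      results[first_fail + 1:]) if ok]
        return [count / float(trials) for count in merged]

    if __name__ == "__main__":
        for position, rate in enumerate(simulate(), 1):
            print("position %2d: ~%2.0f%% chance of merging" % (position, rate * 100))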


On Thu, Jun 5, 2014 at 4:04 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Jun 5, 2014 at 3:29 PM, Kevin Benton blak...@gmail.com wrote:

 Is it possible to make the depth of patches running tests in the gate
 very shallow during this high-probability of failure time? e.g. Allow only
 the top 4 to run tests and put the rest in the 'queued' state. Otherwise
 the already elevated probability of a patch failing is exacerbated by the
 fact that it gets retested every time a patch ahead of it in the queue
 fails.

 Such a good idea that we already do it.

 http://status.openstack.org/zuul/

 The grey circles refer to patches that are in the queued state. But this
 only keeps us from hitting resource starvation; it doesn't help us get
 patches through the gate. We haven't been landing many patches this week
 [0]

 [0] https://github.com/openstack/openstack/graphs/commit-activity


 --
 Kevin Benton


 On Thu, Jun 5, 2014 at 5:07 AM, Sean Dague s...@dague.net wrote:

 You may all have noticed things are really backed up in the gate right
 now, and you would be correct. (Top of gate is about 30 hrs, but if you
 do the math on ingress / egress rates the gate is probably really double
 that in transit time right now).

 We've hit another threshold where there are so many really small races
 in the gate that they are compounding to the point where fixing one is
 often failed by another one killing your job. This whole situation was
 exacerbated by the fact that while the transition from HP cloud 1.0 -
 1.1 was happening and we were under capacity, the check queue grew to
 500 with lots of stuff being approved.

 That flush all hit the gate at once. But it also means that those jobs
 passed in a very specific timing situation, which is different on the
 new HP cloud nodes. And the normal statistical distribution of some jobs
 on RAX and some on HP that shake out different races didn't happen.

 At this point we could really use help getting focus on only recheck
 bugs. The current list of bugs is here:
 http://status.openstack.org/elastic-recheck/

 Also our categorization rate is only 75% so there are probably at least
 2 critical bugs we don't even know about yet hiding in the failures.
 Helping categorize here -
 http://status.openstack.org/elastic-recheck/data/uncategorized.html
 would be handy.

 We're coordinating changes via an etherpad here -
 https://etherpad.openstack.org/p/gatetriage-june2014

 If you want to help, jumping in #openstack-infra would be the place to
 go.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-05 Thread Jeremy Stanley
On 2014-06-05 19:50:30 -0700 (-0700), Kevin Benton wrote:
[...]
 At a queue size of 20 and an 80% success rate, the patch in
 position 20 only has a ~44% chance of getting merged. However,
 with a queue size of 4, the patch in position 4 has a ~71% chance
 of getting merged.
[...]

Your theory misses an important detail. A change is not ejected from
a dependent Zuul queue like that of the gate pipeline simply for
failing a voting job. It only gets ejected IF all the changes ahead
of it on which it's being tested also pass all their jobs (or if
it's at the head of the queue). This makes the length of the queue
irrelevant to the likelihood of a change eventually making it
through on its own, and only a factor on the quantity of resources
and time we spend testing it.

The reason we implemented dynamic queue windowing was to help
conserve the donated resources we use at times when there are a lot
of changes being gated and failure rates climb (without this measure
we were entering something akin to virtual memory swap paging
hysteresis patterns, but with virtual machine quotas in providers
instead of virtual memory).
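
(A side note on the toy simulation sketched after [1] in the previous
message: the behaviour described here corresponds to changing only its
re-queue step, so that changes behind the first failure are kept and
re-tested rather than dropped for their own failed run, e.g.:

    # only the first failing change is ejected; everything behind it,
    # pass or fail, is simply re-tested on the next round
    queue = queue[first_fail + 1:]

With that one change the toy model gives every position roughly the same
chance of eventually merging -- the single-run success rate -- and queue
depth only shows up as a growing number of test runs per change.)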
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-05 Thread Jander lu
Hi Chao,
I have met the same problem. I read this article:
http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/


2014-06-05 19:26 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, All:
 In deploying with devstack and Ironic+Nova, we set:
 compute_driver = nova.virt.ironic.IronicDriver
 This means we can no longer use nova to boot VMs.
 Is there a way to manage both ironic bare metal nodes and kvm VMs in
 Nova ?
  I followed this Link:
 https://etherpad.openstack.org/p/IronicDeployDevstack


 Best Regards!

 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-05 Thread Jander lu
Use host aggregates to differentiate the nova-compute services configured
to use different hypervisor drivers (e.g., nova.virt.libvirt vs
nova.virt.ironic).
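
A rough sketch of that setup (host names, aggregate names and flavor keys
below are only illustrative, not from any particular deployment): run
separate nova-compute services, each with its own compute_driver, group them
with host aggregates, and steer flavors to them with aggregate metadata plus
the AggregateInstanceExtraSpecsFilter scheduler filter.

    # nova.conf on the KVM compute host(s)
    [DEFAULT]
    compute_driver = nova.virt.libvirt.LibvirtDriver

    # nova.conf on the Ironic-facing compute host(s)
    [DEFAULT]
    compute_driver = nova.virt.ironic.IronicDriver

    # with AggregateInstanceExtraSpecsFilter enabled in scheduler_default_filters:
    nova aggregate-create baremetal-hosts
    nova aggregate-set-metadata baremetal-hosts baremetal=true
    nova aggregate-add-host baremetal-hosts ironic-compute-1

    nova aggregate-create kvm-hosts
    nova aggregate-set-metadata kvm-hosts baremetal=false
    nova aggregate-add-host kvm-hosts kvm-compute-1

    # tag flavors so the scheduler sends them to the matching aggregate
    nova flavor-key bm.small set baremetal=true
    nova flavor-key m1.small set baremetal=false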


2014-06-06 11:40 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 Hi Chao,
 I have met the same problem. I read this article:
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/


 2014-06-05 19:26 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, All:
 In deploying with devstack and Ironic+Nova, we set:
 compute_driver = nova.virt.ironic.IronicDriver
 This means we can no longer use nova to boot VMs.
 Is there a way to manage both ironic bare metal nodes and kvm VMs in
 Nova ?
  I followed this Link:
 https://etherpad.openstack.org/p/IronicDeployDevstack


 Best Regards!

 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Please help to review https://review.openstack.org/#/c/96679/

2014-06-05 Thread Jian Hua Geng


Hi All,

Can anyone help to review this patch
https://review.openstack.org/#/c/96679/ and provide your comments?
Thanks a lot!


--
Best regards,
David Geng
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-05 Thread 严超
Hi, Jander:
Thank you very much.
Does this work after you follow these steps?

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--


2014-06-06 11:40 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 Hi Chao,
 I have met the same problem. I read this article:
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/


 2014-06-05 19:26 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, All:
 In deploying with devstack and Ironic+Nova, we set:
 compute_driver = nova.virt.ironic.IronicDriver
 This means we can no longer use nova to boot VMs.
 Is there a way to manage both ironic bare metal nodes and kvm VMs in
 Nova ?
  I followed this Link:
 https://etherpad.openstack.org/p/IronicDeployDevstack


 Best Regards!

 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ServiceVM] IRC meeting minutes June 3, 2014 5:00(AM)UTC-)

2014-06-05 Thread Isaku Yamahata
Hi Dmitry. Thanks for your interest.

What's your time zone? In fact we already have many time zones.
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030405.html
If desirable, we could think about rotating timezones.

Do you have specific items to discuss?
We could also arrange ad-hoc irc meetings for specific topics.

thanks,

On Thu, Jun 05, 2014 at 05:58:53PM +0300,
Dmitry mey...@gmail.com wrote:

 Hi Isaku,
 In order to make it possible for the European audience to join ServiceVM meetings,
 could you please move it 2-3 hours later (7-8AM UTC)?
 Thank you very much,
 Dmitry
 
 
 On Tue, Jun 3, 2014 at 10:00 AM, Isaku Yamahata isaku.yamah...@gmail.com
 wrote:
 
  Here is the meeting minutes of the meeting.
 
  ServiceVM/device manager
  meeting minutes on June 3, 2014:
https://wiki.openstack.org/wiki/Meetings/ServiceVM
 
  next meeting:
June 10, 2014 5:00AM UTC (Tuesday)
 
  agreement:
  - include NFV conformance into the servicevm project
    = will continue discussion on nomenclature at gerrit (tacker-specs)
  - we have to define the relationship between NFV team and servicevm team
  - consolidate floating implementations
 
  Action Items:
  - everyone add your name/bio to contributor of incubation page
  - yamahata create tacker-specs repo in stackforge for further discussion
on terminology
  - yamahata update draft to include NFV conformance
  - s3wong look into vif creation/network connection
  - everyone review incubation page
 
  Detailed logs:
 
  http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.html
 
  http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.log.html
 
  thanks,
  --
  Isaku Yamahata isaku.yamah...@gmail.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] A question about firewall

2014-06-05 Thread Xurong Yang
Hi, Gary,
   Thanks for your response. I have created a router. The fact is that the
firewall rules' shared status is not updated when the corresponding
firewall policy is updated to shared=true, so creating a firewall under
another project fails.
So I think it's a bug.
What do you think?
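
For reference, the sequence that triggers it looks roughly like this (names
are illustrative, and exact client flags may vary by neutronclient version):

    # as project A
    neutron firewall-rule-create --name deny-icmp --protocol icmp --action deny
    neutron firewall-policy-create shared-policy
    neutron firewall-policy-insert-rule shared-policy deny-icmp
    neutron firewall-policy-update shared-policy --shared True

    # as project B: the shared policy is visible, but its non-shared rule is
    # not, so the backend reports "Firewall Rule ... could not be found" and
    # the firewall sits in PENDING_CREATE
    neutron firewall-create shared-policy --name fw-from-b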

cheers,
Xurong


2014-06-05 22:00 GMT+08:00 Gary Duan garyd...@gmail.com:

 Xurong,

 Firewall is colocated with router. You need to create a router, then the
 firewall state will be updated.

 Gary


 On Thu, Jun 5, 2014 at 2:48 AM, Xurong Yang ido...@gmail.com wrote:

 Hi, Stackers
 My use case:

 under project_id A:
 1. create a firewall rule (shared=false).
 2. create a firewall policy (shared=false).
 3. attach the rule to the policy.
 4. update the policy (shared=true).

 under project_id B:
 1. create a firewall with the shared policy (shared=true) from project A.
 The firewall creation then fails and hangs with status=PENDING_CREATE.

 openstack@openstack03:~/Vega$ neutron firewall-policy-list
 +--------------------------------------+------+----------------------------------------+
 | id                                   | name | firewall_rules                         |
 +--------------------------------------+------+----------------------------------------+
 | 7884fb78-1903-4af6-af3f-55e5c7c047c9 | Demo | [d5578ab5-869b-48cb-be54-85ee9f15d9b2] |
 | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | Test | [8679da8d-200e-4311-bb7d-7febd3f46e37, |
 |                                      |      |  86ce188d-18ab-49f2-b664-96c497318056] |
 +--------------------------------------+------+----------------------------------------+
 openstack@openstack03:~/Vega$ neutron firewall-rule-list
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 | id                                   | name     | firewall_policy_id                   | summary                        | enabled |
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 | 8679da8d-200e-4311-bb7d-7febd3f46e37 | DenyOne  | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
 |                                      |          |                                      |  source: none(none),           |         |
 |                                      |          |                                      |  dest: 192.168.0.101/32(none), |         |
 |                                      |          |                                      |  deny                          |         |
 | 86ce188d-18ab-49f2-b664-96c497318056 | AllowAll | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
 |                                      |          |                                      |  source: none(none),           |         |
 |                                      |          |                                      |  dest: none(none),             |         |
 |                                      |          |                                      |  allow                         |         |
 +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
 openstack@openstack03:~/Vega$ neutron firewall-create --name Test Demo
 Firewall Rule d5578ab5-869b-48cb-be54-85ee9f15d9b2 could not be found.
 openstack@openstack03:~/Vega$ neutron firewall-show Test
 +--------------------+--------------------------------------+
 | Field              | Value                                |
 +--------------------+--------------------------------------+
 | admin_state_up     | True                                 |
 | description        |                                      |
 | firewall_policy_id | 7884fb78-1903-4af6-af3f-55e5c7c047c9 |
 | id                 | 7c59c7da-ace1-4dfa-8b04-2bc6013dbc0a |
 | name               | Test                                 |
 | status             | PENDING_CREATE                       |
 | tenant_id          | a0794fca47de4631b8e414beea4bd51b     |
 +--------------------+--------------------------------------+


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >