Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Ghanshyam Mann
Hi Sean,

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Monday, April 14, 2014 10:22 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
 
 As we're coming up on the stable/icehouse release the QA team is looking
 pretty positive at no longer branching Tempest. The QA Spec draft for this is
 here - http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-
 docs/3f84796/doc/build/html/specs/branchless-tempest.html
 and hopefully address a lot of the questions we've seen so far.
 
 Additional comments are welcome on the review -
 https://review.openstack.org/#/c/86577/
 or as responses on this ML thread.
 
   -Sean
 
 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

There is one more scenario, where the interface of an existing API gets changed,
perhaps only for experimental APIs (the Nova V3 APIs). I have the following
questions regarding those:
1. How are we going to tag those in branchless Tempest? Should we keep
two versions of the tests, for the old and the new API interface?
2. Until branchless Tempest is implemented, the Tempest tests
for those have to be skipped, as they fail on the
Icehouse gate tests.
So for the Tempest part of those changes, should we wait for the
implementation of branchless Tempest to complete?

For example,
the 'os-instance_actions' v3 API was changed to 'os-server-actions'
(https://review.openstack.org/#/c/57614/)
after the Icehouse release, and its respective tests are skipped in Tempest.
Now https://review.openstack.org/#/c/85666/ adds the response validation for
this API in Tempest.
As the API Tempest tests are skipped (they cannot be unskipped, as the
Icehouse gate tests fail), the response validation code
will be untested on the gate.
My question is how I should proceed on these:
1. Should I wait for the implementation of branchless Tempest to complete?
2. Is it OK to merge even though this is going to be untested on the gate?
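Until then, the usual Tempest pattern for question 2 is a config-driven skip, so a single test tree can run against both releases. A minimal sketch of the idea (the feature flag and test names here are hypothetical, not actual Tempest config options):

```python
import unittest

# Hypothetical feature flag; in Tempest this would come from tempest.conf
# (e.g. a feature-enabled option) and default to False on Icehouse.
API_V3_SERVER_ACTIONS_ENABLED = False


class ServerActionsV3Test(unittest.TestCase):
    """Illustrative tests for the renamed os-server-actions v3 API."""

    def setUp(self):
        super().setUp()
        if not API_V3_SERVER_ACTIONS_ENABLED:
            # Skip on branches where the API does not exist,
            # instead of failing the whole gate run.
            self.skipTest("os-server-actions is not available in this release")

    def test_list_server_actions(self):
        # A real test would call the v3 endpoint and validate the
        # response schema here.
        self.assertTrue(True)
```

With such a flag, the test is skipped on the Icehouse gate but exercised on master, so the response-validation code does not stay untested indefinitely.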


Thanks
Ghanshyam Mann



DISCLAIMER:
---
The contents of this e-mail and any attachment(s) are confidential and
intended
for the named recipient(s) only. 
It shall not attach any liability on the originator or NEC or its
affiliates. Any views or opinions presented in 
this email are solely those of the author and may not necessarily reflect the
opinions of NEC or its affiliates. 
Any form of reproduction, dissemination, copying, disclosure, modification,
distribution and / or publication of 
this message without the prior written consent of the author of this e-mail is
strictly prohibited. If you have 
received this email in error please delete it and notify the sender
immediately.
---
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] turbo-hipster question: where to get nova.sql

2014-05-09 Thread Peng GP Gu

 Hi,

 I recently ran turbo-hipster to test nova db migrations, but I can't find
nova.sql in the dataset cloned from github.

 Can anyone tell me where to get nova.sql?

 Thanks!

 Peng Gu


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-09 Thread John Wood
Hello folks,

Barbican would like to assist in managing the workflow between a CA, crypto 
plugins and event subscribers to order, generate and securely store SSL 
certificate information, as per this blueprint: 
https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates

Note however that the 'event subscribers' above would probably be specific to 
the openstack deployment and run outside of the Barbican service. So for 
Rackspace, these subscribers might be issuing tickets to our support folks, to 
then install the SSL certificates where they need to go.

Thanks,
John




From: Eichberger, German [german.eichber...@hp.com]
Sent: Thursday, May 08, 2014 3:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

Carlos,

+1

My impression of barbican is that they indeed see themselves as sending updates 
to the LBs/VPN/X – but I am not too excited about that. Any marginally 
sophisticated user wants to control when we burst out new certificates so they 
can tie that to their maintenance window (in case client certs need to be 
updated).

German

From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Thursday, May 08, 2014 11:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN


On May 8, 2014, at 1:41 PM, Eichberger, German 
german.eichber...@hp.commailto:german.eichber...@hp.com
 wrote:


Hi,

Some of our users are not that organized and certificate expirations seem to 
sneak up on them. So they are looking for a single place where they can 
“manage” their certificates. I am not sure if splitting storage between 
Barbican and Neutron will allow that. I am also wondering if Barbican is 
aspiring to become that central place…

Ok, but in our implementation we may decide to duplicate the X509s in our 
database so that we can do searches on them without going through a 
separate API service. So we don't mind storing them in both locations. We just 
worry about the case where people update their (x509, priv_key) via Barbican but 
existing load balancers in Neutron would then be unaware of the newly updated 
cert. This would seem to imply that Barbican would need to trigger an update on 
Neutron/LBaaS. :(




German

From: Carlos Garza [mailto:carlos.ga...@rackspace.comhttp://rackspace.com]
Sent: Thursday, May 08, 2014 9:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN


On May 8, 2014, at 5:19 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com
 wrote:



Hi,

Please note as commented also by other XaaS services that managing SSL 
certificates is not a sole LBaaS challenge.
This calls for either an OpenStack wide service or at least a Neutron wide 
service to implement such use cases.

So here are the options as far as I have seen so far.
1.   Barbican as a central service to store and retrieve SSL certificates. 
I think the Certificates generation is currently a lower priority requirement
2.   Using Heat as a centralized service
3.   Implementing such service in Neutron
4.   LBaaS private implementation

BTW, on all the options above, Barbican can optionally be used to store the 
certificates or the private part of the certificates.

   Is your statement equivalent to "On all the options above, Barbican can 
optionally be used to store the (X509, private_key) or just the private_key"? 
If that's what you mean then we are on the same page. "Private part of a 
certificate" is not a valid statement for me, since X509 certs don't contain 
private parts.

I'm advocating the latter, where Barbican stores the key only and we store the 
X509 in our own database.



I think that we should follow option 3 if SSL management is only a Neutron 
requirement (LBaaS, VPNaaS, FWaaS), maybe as a transition project toward an 
OpenStack-wide solution (1 or 2).
Option 1 or 2 might be the ultimate goal.

Regards,
-Sam.





From: Clark, Robert Graham [mailto:robert.cl...@hp.comhttp://hp.com]
Sent: Thursday, May 08, 2014 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

The certificate management that LBaaS requires might be slightly different from 
the normal flow of things in OpenStack services; after all, you are talking 
about externally provided certificates and private keys.

There’s already a standard for a nice way to bundle those two elements 
together, it’s described in PKCS#12 and you’ve likely come across it in the 
form of ‘.pfx’ files. I’d suggest that perhaps it would make sense for LBaaS to 
store pfx files in the LBaaS DB and store the 

Re: [openstack-dev] Hierarchical administrative boundary [keystone]

2014-05-09 Thread Frittoli, Andrea (HP Cloud)
From: Adam Young [mailto:ayo...@redhat.com] 
Sent: 09 May 2014 04:19
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Hierarchical administrative boundary [keystone]

 

On 05/08/2014 07:55 PM, Tiwari, Arvind wrote:

Hi All,

 

Below is my proposal to address VPC use case using hierarchical
administrative boundary. This topic is scheduled in Hierarchical
Multitenancy
http://junodesignsummit.sched.org/event/20465cd62e9054d4043dda156da5070e#.U
2wYXXKLR_9  session of Atlanta design summit.

 

https://wiki.openstack.org/wiki/Hierarchical_administrative_boundary

 

Please take a look.

 

Thanks,

Arvind

 







Looks very good.  One question:  Why hierarchical domains and not Projects.
I'm not disagreeing, mind you, just that I think the Nova team is going for
hierarchical Projects. 

 

  _  

Looks good, thank you!

 

But for this to be even more interesting nova (and other services) should be
domain aware - e.g. so that a domain admin could have control on all
resources which belong to users and projects in that domain.

 

andrea

 





Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-05-09 Thread Thomas Goirand
On 05/07/2014 01:26 AM, Jeremy Stanley wrote:
 Given that's Sean's session before the break (assuming it doesn't
 get reshuffled) and he's signed up for the key signing too, he could
 probably be persuaded to end on time and allow us to quickly free up
 the room for that. If we go this route we'll probably still need to
 plan to begin promptly at 10:35 AM EDT (five minutes into the break)
 so that the projector can be attached and confirmed working while
 attendees for the infra session in there clear out.
 
 Any objections to this (particularly from Sean)? Preferences for
 this plan over the separate room one floor up? Note that one down
 side I see to this option is if anyone who isn't an ATC wants to
 participate, we may need someone to help get them into the Design
 Summit area.

What's the final decision? Which room will we use?

Thomas




Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Eugene Nikanorov
Carlos,

The general objection is that if we don't need multiple VIPs (different IPs,
not just TCP ports) per single logical loadbalancer, then we don't need a
loadbalancer object, because everything else is addressed by the VIP playing
the role of the loadbalancer.
Regarding conclusions: I think we've heard enough negative opinions on the
idea of a 'container' to at least postpone this discussion until we get some
important use cases that could not be addressed by 'VIP as loadbalancer'.

Eugene.

On Fri, May 9, 2014 at 8:33 AM, Carlos Garza carlos.ga...@rackspace.comwrote:


  On May 8, 2014, at 2:45 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

  Hi Carlos,

   Are you saying that we should only have a loadbalancer resource
 only in the case where we want it to span multiple L2 networks as if it
 were a router? I don't see how you arrived at that conclusion. Can you
 explain further.

 No, I mean that loadbalancer instance is needed if we need several
 *different* L2 endpoints for several front ends.
 That's basically 'virtual appliance' functionality that we've discussed on
 today's meeting.


 From looking at the IRC log it looks like nothing conclusive came out
 of the meeting. I don't understand a lot of the conclusions you arrive at.
 For example, you're rejecting the notion of a loadbalancer concrete object
 unless it's needed to include multi-L2-network support. Will you make an
 honest effort to describe your objections here on the ML? Because if we can't
 resolve it here it's going to spill over into the summit. I certainly don't
 want this to dominate the summit.



Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Eugene Nikanorov
Hi Brandon

Let me know if I am misunderstanding this,and please explain it
  further.
 A single neutron port can have many fixed ips on many subnets.  Since
 this is the case you're saying that there is no need for the API to
 define multiple VIPs since a single neutron port can represent all the
 IPs that all the VIPs require?

Right, if you want to have both IPv4 and IPv6 addresses on the VIP, then
it's possible with a single Neutron port.
So multiple VIPs are not needed for this case.
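What Eugene describes — one Neutron port carrying both address families — looks roughly like this in the port's `fixed_ips` field (the IDs and addresses below are invented for illustration):

```python
# One Neutron port: a single L2 attachment that carries several fixed IPs
# on different subnets, so one "VIP" object can serve IPv4 and IPv6.
vip_port = {
    "id": "vip-port-1",                       # hypothetical identifiers
    "network_id": "frontend-net",
    "fixed_ips": [
        {"subnet_id": "subnet-v4", "ip_address": "203.0.113.10"},
        {"subnet_id": "subnet-v6", "ip_address": "2001:db8::10"},
    ],
}

# All addresses reachable through this one port/VIP.
addresses = [ip["ip_address"] for ip in vip_port["fixed_ips"]]
```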

Eugene.


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-09 Thread mar...@redhat.com
On 08/05/14 21:29, Henry Gessau wrote:
 Have any of you javascript gurus respun this for the new gerrit version?
 Or can this now be done on the backend somehow?

haha, have been thinking this since the gerrit upgrade a couple days
ago. It was very useful for reviews... I am NOT a javascript guru but
since it's Friday I gave myself 15 minutes to play with it - this works
for me:


javascript:(function(){
list = document.querySelectorAll('table.commentPanelHeader');
for(i in list) {
title = list[i];
if(! title.innerHTML) { continue; }
text = title.nextSibling;
if (text.innerHTML.search('Build failed') > 0) {
title.style.color='red'
} else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
title.style.color='#666666'
} else {
title.style.color='blue'
}
}
})()


marios

 
 On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:
 
 Nachi,

 Great!  I'd been meaning to do something like this.  I took yours and
 tweaked it a bit to highlight failed Jenkins builds in red and grey
 other Jenkins messages.  Human reviews are left in blue.

 javascript:(function(){
 list = document.querySelectorAll('td.GJEA35ODGC');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
 title.style.color='#666666'
 } else {
 title.style.color='blue'
 }
 }
 })()

 Carl

 On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 I wrote an bookmarklet for neutron gerrit review.
 This bookmarklet make the comment title for 3rd party ci as gray.

 javascript:(function(){list =
 document.querySelectorAll('td.GJEA35ODGC'); for(i in
 list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
 list[i].innerHTML.search('CI|Ryu|Testing|Mine') >=
 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()

 enjoy :)
 Nachi



Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-09 Thread mar...@redhat.com
On 09/05/14 12:33, mar...@redhat.com wrote:
 On 08/05/14 21:29, Henry Gessau wrote:
 Have any of you javascript gurus respun this for the new gerrit version?
 Or can this now be done on the backend somehow?
 
 haha, have been thinking this since the gerrit upgrade a couple days
 ago. It was very useful for reviews... I am NOT a javascript guru but
 since it's Friday I gave myself 15 minutes to play with it - this works
 for me:
 
 
 javascript:(function(){
 list = document.querySelectorAll('table.commentPanelHeader');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine')
 >= 0) {

just noticed this ^^^ remove the extra newline here as it breaks, thanks

 title.style.color='#666666'
 } else {
 title.style.color='blue'
 }
 }
 })()
 
 
 marios
 

 On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:

 Nachi,

 Great!  I'd been meaning to do something like this.  I took yours and
 tweaked it a bit to highlight failed Jenkins builds in red and grey
 other Jenkins messages.  Human reviews are left in blue.

 javascript:(function(){
 list = document.querySelectorAll('td.GJEA35ODGC');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
 title.style.color='#666666'
 } else {
 title.style.color='blue'
 }
 }
 })()

 Carl

 On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 I wrote an bookmarklet for neutron gerrit review.
 This bookmarklet make the comment title for 3rd party ci as gray.

 javascript:(function(){list =
 document.querySelectorAll('td.GJEA35ODGC'); for(i in
 list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
 list[i].innerHTML.search('CI|Ryu|Testing|Mine') >=
 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()

 enjoy :)
 Nachi



[openstack-dev] [Cinder]Persistance layer for cinder create_volume api + taskflow

2014-05-09 Thread Kekane, Abhishek
Hello Everyone,



Currently NTT Data team is working on adding persistence layer for 
create_volume api using taskflow.



I have added a new section Cinder create_volume api + taskflow persistance in 
the etherpad https://etherpad.openstack.org/p/taskflow-cinder with the issues 
encountered while resuming the taskflows,

which we wanted to bring to the notice of the community in order to seek 
suggestions for fixing some of these issues.



Please have a look and let me know your suggestions on the same.
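The resume problem described in the etherpad boils down to persisting which tasks of a flow have completed, so a restarted engine can skip them. A minimal pure-Python sketch of that checkpointing pattern (this illustrates the concept only; it is not the actual taskflow API or Cinder code):

```python
import json


class Checkpoint:
    """Toy persistence layer: records completed task names and results,
    standing in for taskflow's logbook/flow-detail storage backend."""

    def __init__(self):
        self.done = {}  # task name -> result

    def save(self, name, result):
        self.done[name] = result

    def dump(self):
        return json.dumps(self.done)

    @classmethod
    def load(cls, blob):
        cp = cls()
        cp.done = json.loads(blob)
        return cp


def run_flow(tasks, checkpoint, log):
    """Run tasks in order, skipping any already recorded as done."""
    for name, fn in tasks:
        if name in checkpoint.done:
            log.append("skip " + name)
            continue
        checkpoint.save(name, fn())
        log.append("run " + name)


tasks = [
    ("reserve_volume", lambda: "reserved"),   # hypothetical task names
    ("create_volume", lambda: "created"),
]

# First run "crashes" after the first task; its state was persisted.
cp = Checkpoint()
cp.save("reserve_volume", "reserved")
blob = cp.dump()

# On resume, only the unfinished task actually runs.
log = []
run_flow(tasks, Checkpoint.load(blob), log)
```

The tricky part the etherpad raises is exactly what this sketch glosses over: deciding what state each real task must record so that re-running (or skipping) it is safe.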



Thanks & Regards,



Abhishek Kekane

NTT DATA






__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding.




Re: [openstack-dev] [Neutron] ML2 extensions info propagation

2014-05-09 Thread Mathieu Rohon
Hi mohammad,


On Thu, May 8, 2014 at 5:11 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Hi Mathieu,

 Yes, the enhancement of the get_device_details method sounds like an
 interesting and useful option.
 The option of using drivers in the agent for supporting extensions is to
 make the agent more modular and allow for selectively supporting extensions
 as needed by a given agent.If we take the approach you are suggesting and
 eliminate or reduce the use of extension specific RPCs how can we achieve
 the modularity goal above? Is there a way to make these options useful
 together? More broadly, what would be the impact of your proposal on the
 modularity of the agent (if any)?


I don't think this approach breaks the modularity architecture you
proposed. It is more about building a driver context, as is done in the
ML2 plugin, based on information received through RPC (while ML2 builds its
driver context based on information received through an API call). This driver
context, populated with the RPC information, would then be spread among the
drivers. I think that for readability and code understanding, it would be
great to share this architecture with the ML2 plugin. Moreover, this might be
needed if you want to support several implementations of the same extension
in one agent.




 Please note that as per discussion during the ML2 meeting yesterday we are
 going to have a single etherpad for each of ML2 sessions. The etherpad for
 the Modular Layer 2 Agent session can be found at [2] from your original
 email below. We may reorganize the information that is already there but
 please do add your comments there.

done! I will also update the etherpad [1]

thanks Kyle and Mohammad for your replies

Mathieu

 Thanks,

 Mohammad



 From: Mathieu Rohon mathieu.ro...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,

 Date: 05/07/2014 10:25 AM
 Subject: [openstack-dev] [Neutron] ML2 extensions info propagation

--



 Hi ML2er and others,

 I'm considering discussions around ML2 for the summit. Unfortunately I
 won't attend the summit, so I'll try to participate through the
 mailing list and etherpads.

 I'm especially interested in extension support by Mechanism Drivers [1]
 and the modular agent [2]. During the Juno cycle I'll work on the capability
 to propagate IPVPN information (route-target) down to the agent, so
 that the agent can manage MPLS encapsulation.
 I think that the easiest way to do that is to enhance the
 get_device_details() RPC message to add the network extension information
 of the concerned port to the dict sent.

 Moreover, I think this approach could be generalized, and
 get_device_details() in the agent should return serialized information
 about a port with all of its extension information (security_group,
 port_binding...). When the core datamodel or the extension datamodel
 is modified, this would result in a port_update() with the
 updated serialization of the datamodel. This way, we could get rid of
 the security-group and l2pop RPCs. The modular agent wouldn't need to deal
 with one driver per extension that registers its own RPC callbacks.

 That information should also be stored in the ML2 driver context. When a
 port is created by the ML2 plugin, it calls super() to create the core
 datamodel, which will return a dict without extension information,
 because the extension information in the REST call has not been processed
 yet. But once the plugin calls its core extensions, it should call the
 registered MD extensions as proposed by Nader here [4], and then call
 make_port_dict (with extensions), or an equivalent serialization
 function, to create the driver context. This serialization function
 would be used by the get_device_details() RPC callbacks too.

 Regards,

 Mathieu

 [1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support
 [2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent
 [3]http://summit.openstack.org/cfp/details/240
 [4]https://review.openstack.org/#/c/89211/
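The serialization Mathieu describes — one port dict carrying core fields plus per-extension data, shared by the plugin's driver context and the get_device_details() RPC — could be sketched roughly like this (the registry and field names are illustrative, not the actual ML2 code):

```python
# Registry of extension drivers; each contributes its own keys to the
# serialized port, mirroring how ML2 extension drivers would extend the
# port dict.
EXTENSION_DRIVERS = []


def register_extension(fn):
    EXTENSION_DRIVERS.append(fn)
    return fn


@register_extension
def security_group_ext(port):
    # Illustrative: a real driver would look up the port's security groups.
    return {"security_groups": ["default"]}


@register_extension
def ipvpn_ext(port):
    # Illustrative: the route-target info the agent needs for MPLS.
    return {"route_targets": ["64512:10"]}


def make_port_dict(core_port):
    """Serialize a port with all extension information attached.

    The same function could build the driver context in the plugin and
    the payload of the get_device_details() RPC answer.
    """
    port = dict(core_port)
    for ext in EXTENSION_DRIVERS:
        port.update(ext(port))
    return port


details = make_port_dict({"id": "port-1", "network_id": "net-1"})
```

With one serialization path, a change to either the core or an extension datamodel produces a consistent port_update() payload for free.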




[openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Eugene Nikanorov
Hi folks,

I'm pulling this question out of another discussion:

Is there a need to have multiple VIPs (e.g. multiple L2 ports/IP addresses)
per logical loadbalancer?
If so, we need the description of such cases to evaluate them.

Thanks,
Eugene.


Re: [openstack-dev] Hierarchical administrative boundary [keystone]

2014-05-09 Thread Yaguang Tang
Frittoli,

I think for the other services we could achieve that by modifying the
policy.json (add a domain admin role and control what the cloud admin can do)
so that a domain admin user is able to manage resources belonging to
users and projects in that domain.
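As a rough illustration, such a policy.json tweak could take this shape, loosely modeled on keystone's policy.v3cloudsample.json (the rule names and target below are a trimmed sketch, not a drop-in policy file):

```json
{
    "cloud_admin": "role:admin and domain_id:admin_domain_id",
    "domain_admin": "role:admin and domain_id:%(domain_id)s",
    "identity:list_users": "rule:cloud_admin or rule:domain_admin"
}
```

The open question in this thread remains: for nova and friends to enforce a rule like `domain_admin`, the service has to carry a domain_id on its resources in the first place.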


2014-05-09 15:24 GMT+08:00 Frittoli, Andrea (HP Cloud) fritt...@hp.com:

 *From:* Adam Young [mailto:ayo...@redhat.com]
 *Sent:* 09 May 2014 04:19
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] Hierarchical administrative boundary
 [keystone]



 On 05/08/2014 07:55 PM, Tiwari, Arvind wrote:

 Hi All,



 Below is my proposal to address VPC use case using hierarchical
 administrative boundary. This topic is scheduled in Hierarchical
 Multitenancyhttp://junodesignsummit.sched.org/event/20465cd62e9054d4043dda156da5070e#.U2wYXXKLR_9session
  of Atlanta design summit.



 https://wiki.openstack.org/wiki/Hierarchical_administrative_boundary



 Please take a look.



 Thanks,

 Arvind







 Looks very good.  One question:  Why hierarchical domains and not
 Projects.  I'm not disagreeing, mind you, just that I think the Nova team
 is going for hierarchical Projects.


 *--*

 Looks good, thank you!



 But for this to be even more interesting nova (and other services) should
 be domain aware – e.g. so that a domain admin could have control on all
 resources which belong to users and projects in that domain.



 andrea







-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F


[openstack-dev] (no subject)

2014-05-09 Thread Shyam Prasad N
Hi,

I have a two node swift cluster receiving continuous traffic (mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following traceback from
some transactions...
Traceback (most recent call last):
  File /home/eightkpc/swift/swift/proxy/controllers/obj.py, line 692, in
PUT
chunk = next(data_source)
  File /home/eightkpc/swift/swift/proxy/controllers/obj.py, line 559, in
lambda
data_source = iter(lambda: reader(self.app.client_chunk_size), '')
  File /home/eightkpc/swift/swift/common/utils.py, line 2362, in read
chunk = self.wsgi_input.read(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 147, in
read
return self._chunked_read(self.rfile, length)
  File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 137, in
_chunked_read
    self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)
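That ValueError comes from the chunked transfer-encoding parser: each body chunk is preceded by a hex size line, and `int('', 16)` means an empty line arrived where a chunk size was expected — typically because the client connection stalled or closed mid-upload. A simplified sketch of that parsing step (not the actual eventlet code):

```python
def parse_chunk_size(line):
    """Parse the size line of an HTTP/1.1 chunked body.

    The line looks like "1f4\r\n" or "1f4;ext=val\r\n"; the size is hex
    and may be followed by chunk extensions after a semicolon.
    """
    return int(line.split(";", 1)[0], 16)


assert parse_chunk_size("1f4\r\n") == 500
assert parse_chunk_size("0\r\n") == 0   # terminating chunk

# An empty read (client went away or timed out) reproduces the traceback:
try:
    parse_chunk_size("")
except ValueError:
    pass
```

If that reading is right, the 408s on the object servers may be the server-side symptom of the same stalled client uploads rather than slow disks.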

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
408 - PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data;
txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

It succeeds sometimes, but mostly 408 errors. I don't see any other logs
for the transaction ID, or around these 408 errors in the log files. Is
this a disk timeout issue? These are only 1GB files, and normal writes to
files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

-- 
-Shyam


[openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Sean Dague
Based on some of my blog posts on gerrit queries, I've built and gotten
integrated a custom inbox zero dashboard which is per project in gerrit.

ex:
https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

(replace openstack/nova with the project of your choice).

This provides 3 sections.

= Needs Final +2 =

This is code that has an existing +2, no negative code review feedback,
and positive jenkins score. So it's mergable if you provide the final +2.

(Gerrit Query: status:open NOT label:Code-Review<=0,self
label:Verified>=1,jenkins NOT label:Code-Review<=-1 label:Code-Review>=2
NOT label:Workflow<=-1 limit:50 )

= No negative feedback =

Changes that have no negative code review feedback, and positive jenkins
score.

(Gerrit Query: status:open NOT label:Code-Review=0,self
label:Verified=1,jenkins NOT label:Code-Review=-1 NOT
label:Workflow=-1 limit:50 )

= Wayward changes =

Changes that have no code review feedback at all (no one has looked at
it), a positive jenkins score, and are older than 2 days.

(Gerrit Query: status:open label:Verified=1,jenkins NOT
label:Workflow=-1 NOT label:Code-Review=2 age:2d)


In all cases it filters out patches that you've commented on in the most
recent revision. So as you vote on these things they will disappear
from your list.

Hopefully people will also find this dashboard useful.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Andreas Jaeger

On 05/09/2014 02:20 PM, Sean Dague wrote:

Based on some of my blog posts on gerrit queries, I've built and gotten
integrated a custom inbox zero dashboard which is per project in gerrit.

ex:
https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

(replace openstack/nova with the project of your choice).
[...]


Great, thanks! Is it possible to have more than one project shown?

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Sean Dague
On 05/09/2014 08:28 AM, Andreas Jaeger wrote:
 On 05/09/2014 02:20 PM, Sean Dague wrote:
 Based on some of my blog posts on gerrit queries, I've built and gotten
 integrated a custom inbox zero dashboard which is per project in gerrit.

 ex:
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard


 (replace openstack/nova with the project of your choice).
 [...]
 
 Great, thanks! Is it possible to have more than one project shown?
 
 Andreas

Unfortunately not. Gerrit's got a limitation that custom dashboards
either apply to All-Projects, or with a per project iterator.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Doug Shelley
Sean,

Very cool - thanks for providing this. One question - for Trove, we have
Jenkins but also voting jobs from reddwarf. Is there any way to change the
No Negative Feedback section to take reddwarf failures into account?

Regards,
Doug

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: May-09-14 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] custom gerrit dashboard - per project review 
inbox zero

Based on some of my blog posts on gerrit queries, I've built and gotten 
integrated a custom inbox zero dashboard which is per project in gerrit.

ex:
https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

(replace openstack/nova with the project of your choice).

This provides 3 sections.

= Needs Final +2 =

This is code that has an existing +2, no negative code review feedback, and 
positive jenkins score. So it's mergable if you provide the final +2.

(Gerrit Query: status:open NOT label:Code-Review=0,self 
label:Verified=1,jenkins NOT label:Code-Review=-1 label:Code-Review=2 NOT 
label:Workflow=-1 limit:50 )

= No negative feedback =

Changes that have no negative code review feedback, and positive jenkins score.

(Gerrit Query: status:open NOT label:Code-Review=0,self 
label:Verified=1,jenkins NOT label:Code-Review=-1 NOT
label:Workflow=-1 limit:50 )

= Wayward changes =

Changes that have no code review feedback at all (no one has looked at it), a 
positive jenkins score, and are older than 2 days.

(Gerrit Query: status:open label:Verified=1,jenkins NOT
label:Workflow=-1 NOT label:Code-Review=2 age:2d)


In all cases it filters out patches that you've commented on in the most 
recently revision. So as you vote on these things they will disappear from your 
list.

Hopefully people will find this dashboard also useful.

-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Ken'ichi Ohmichi
2014-05-09 15:00 GMT+09:00 Ghanshyam Mann ghanshyam.m...@nectechnologies.in:
 Hi Sean,

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Monday, April 14, 2014 10:22 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

 As we're coming up on the stable/icehouse release the QA team is looking
 pretty positive at no longer branching Tempest. The QA Spec draft for this is
 here - http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-
 docs/3f84796/doc/build/html/specs/branchless-tempest.html
 and hopefully address a lot of the questions we've seen so far.

 Additional comments are welcome on the review -
 https://review.openstack.org/#/c/86577/
 or as responses on this ML thread.

   -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 There is one more scenario where interface of the existing APIs gets changed
 may be only for experimental APIs (Nova V3 APIs). I have following question
 regarding those.
 1. How we are going to tag those in Branchless Tempest? Should we 
 keep two versions of tests for old and new API interface.
 2. Till Branchless Tempest is not Implemented, tempest test
  has to be skipped for those as they get failed on 
 Ice-House gate tests.
 So for tempest part of those changes should we wait for 
 implementation of
 Branchless tempest to complete?

 For example,
  'os-instance_actions' v3 API changed to 'os-server-actions' 
 (https://review.openstack.org/#/c/57614/)
  after Ice-house release and its respective tests are skipped in tempest.
 Now https://review.openstack.org/#/c/85666/  adds the response validation for 
 this API in tempest.
 As API tempest tests are skipped (cannot unskip as ice-house gate test 
 fails), response validation code
 will be untested on gate.
 My question is how I should go on these-
 1. Should I wait for implementation of Branchless tempest to complete?
 2. OK to merge even this is going to be untested on gate.

I am also facing the same issue.
A patch which fixes a Nova v3 API inconsistency has been merged
into *Nova* master, and I then tried to push another patch which blocks
this kind of inconsistency into *Tempest* master.
However, check-tempest-dsvm-full-icehouse failed because the Nova
patch has not been backported to the Icehouse branch yet.
I'm not sure we can backport this kind of patch to Icehouse,
because the Nova v3 API of Icehouse must remain experimental.
So how about disabling the v3 API tests in check-tempest-dsvm-full-icehouse?


Thanks
Ken'ichi Ohmichi
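
A minimal sketch of what disabling the v3 tests on the icehouse jobs could
look like, assuming Tempest's compute feature-flag mechanism (the exact option
name here is an assumption, not taken from this thread):

```ini
# tempest.conf for the icehouse job (illustrative; option name is an assumption)
[compute-feature-enabled]
# Skip the Nova v3 API tests, since the v3 interface is still experimental
# on stable/icehouse and the fixes were not backported.
api_v3 = False
```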



[openstack-dev] Tempest on Python 2.6

2014-05-09 Thread Adrian Smith
Hi,

Is anyone running Tempest on Python 2.6?

The documentation suggests it should work (leaving aside the unit
tests), http://docs.openstack.org/developer/tempest/overview.html#python-2-6

However the install_venv_common.py script in the tempest tools
directory dies if it's running under 2.6 or lower,
https://github.com/openstack/tempest/blob/e3010feedac03abe8973393f45a52e0c7b4f2649/tools/install_venv_common.py#L49

Adrian
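
For context, the guard in install_venv_common.py amounts to a version check
along the lines of the following sketch (the function name and message here
are mine, not the script's):

```python
import sys

def check_python_version(min_version=(2, 7)):
    # Hypothetical mirror of install_venv_common.py's behavior: bail out
    # when running under an interpreter older than the supported minimum.
    if sys.version_info[:2] < min_version:
        raise SystemExit(
            "ERROR: this script requires Python %d.%d or later" % min_version)
    return True
```

If 2.6 support is really intended, that check and the documentation need to be
reconciled one way or the other.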



Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Sean Dague
There are a lot of 3rd party CI systems, and strictly speaking projects
only need to pass jenkins to make it through, so I don't think that's
fundamentally something we want to add here.

There is some ability to create per project dashboards as well, if you
wanted to build something just for trove in the trove repository.
Documentation on custom dashboards is here -
https://review.openstack.org/Documentation/user-dashboards.html
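
For reference, a project dashboard of the kind documented there is a
git-config style file committed on a refs/meta/dashboards/* branch of the
project; a minimal sketch reusing the queries from this thread (the file path
and section titles are illustrative):

```ini
# refs/meta/dashboards/main:review-inbox (illustrative path)
[dashboard]
  title = Review Inbox
  description = Changes waiting on reviewers, not on their authors
[section "Needs final +2"]
  query = status:open NOT label:Code-Review=0,self label:Verified=1,jenkins NOT label:Code-Review=-1 label:Code-Review=2 NOT label:Workflow=-1 limit:50
[section "No negative feedback"]
  query = status:open NOT label:Code-Review=0,self label:Verified=1,jenkins NOT label:Code-Review=-1 NOT label:Workflow=-1 limit:50
[section "Wayward changes"]
  query = status:open label:Verified=1,jenkins NOT label:Workflow=-1 NOT label:Code-Review=2 age:2d
```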

Realize that every project is going to have its own class of concerns, so
these are attempting to be generically applicable rules that would apply
anywhere, highlighting code that's basically waiting on reviewers, not
waiting on the authors to take an action.

-Sean

On 05/09/2014 08:42 AM, Doug Shelley wrote:
 Sean,
 
 Very cool - thanks for providing. One question - for Trove, we have Jenkins 
 but also voting jobs on reddwarf - any way to change the No Negative 
 Feedback section to take into account reddwarf failures?
 
 Regards,
 Doug
 
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net] 
 Sent: May-09-14 8:21 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] custom gerrit dashboard - per project review 
 inbox zero
 
 Based on some of my blog posts on gerrit queries, I've built and gotten 
 integrated a custom inbox zero dashboard which is per project in gerrit.
 
 ex:
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
 
 (replace openstack/nova with the project of your choice).
 
 This provides 3 sections.
 
 = Needs Final +2 =
 
 This is code that has an existing +2, no negative code review feedback, and 
 positive jenkins score. So it's mergable if you provide the final +2.
 
 (Gerrit Query: status:open NOT label:Code-Review=0,self 
 label:Verified=1,jenkins NOT label:Code-Review=-1 label:Code-Review=2 NOT 
 label:Workflow=-1 limit:50 )
 
 = No negative feedback =
 
 Changes that have no negative code review feedback, and positive jenkins 
 score.
 
 (Gerrit Query: status:open NOT label:Code-Review=0,self 
 label:Verified=1,jenkins NOT label:Code-Review=-1 NOT
 label:Workflow=-1 limit:50 )
 
 = Wayward changes =
 
 Changes that have no code review feedback at all (no one has looked at it), a 
 positive jenkins score, and are older than 2 days.
 
 (Gerrit Query: status:open label:Verified=1,jenkins NOT
 label:Workflow=-1 NOT label:Code-Review=2 age:2d)
 
 
 In all cases it filters out patches that you've commented on in the most 
 recently revision. So as you vote on these things they will disappear from 
 your list.
 
 Hopefully people will find this dashboard also useful.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-05-09 Thread Raildo Mascena
Hello Vish,

The implementation was done that way because it facilitates compatibility of
hierarchical projects with Keystone. For example, to get a token, or to list
roles, among other features, I would otherwise have had to change the whole
implementation to fetch the inherited roles. For this POC this was quick and
simple, but I believe we can discuss the best solution.


2014-05-06 20:08 GMT-03:00 Vishvananda Ishaya vishvana...@gmail.com:

 This is a bit different from how I would have expected it to work. It
 appears that you are adding the role assignment when the project is
 created. IMO the role should be added to the list when the roles are
 checked. In other words, when getting the list of roles for a user/project,
 it walks up the tree to find all parent projects and creates a list of
 roles that includes all of the parent roles for that user that are marked
 inheritable. The implementation you made seems very fragile if parent
 projects are changed etc.

 Vish

 On Apr 14, 2014, at 12:17 PM, Raildo Mascena rail...@gmail.com wrote:

 Hi all,

 As I had promised, here is the repository of Telles Nobrega (
 https://github.com/tellesnobrega/keystone/tree/multitenancy) updated now
 with inherited roles working with hierarchical projects.

 How does it work?

 Inherited roles operate in the following way:
 - A role to be inherited should be added to a domain using the API:
 PUT
 localhost:35357/v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects.
 - A hierarchical project should be created as described above by
 Telles.
 - List the role assignments (GET localhost:35357/v3/role_assignments)
 and check that the inherited role is already associated with this new
 project.
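
The grant step above is a plain HTTP call; the helper below only builds the
URL from the walkthrough (the function name is mine, and the IDs are
placeholders):

```python
def inherited_grant_url(base_url, domain_id, user_id, role_id):
    # Builds the OS-INHERIT grant URL from the walkthrough above; issue it
    # with an HTTP PUT carrying a valid X-Auth-Token header.
    return ("%s/v3/OS-INHERIT/domains/%s/users/%s/roles/%s"
            "/inherited_to_projects"
            % (base_url, domain_id, user_id, role_id))

# e.g. requests.put(inherited_grant_url("http://localhost:35357",
#                                       domain_id, user_id, role_id),
#                   headers={"X-Auth-Token": token})
```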

 What was implemented?
 The implementation consists of filtering the roles which are associated with
 the parent project and marked to be inherited (this is done by checking the
 assignment table) so that they can be added to the child project. A filter
 has also been created to ensure that a role inherited from another domain
 does not interfere with the inheritance of this project.

 What remains to implement?

 Role inheritance has been implemented to work with domains, so a role
 will be inherited by all projects contained in the domain; i.e., a role that
 is marked to be inherited will be inherited by the child project even if it
 is not associated with the parent project. In my opinion, a project column
 should be created in the assignment table to indicate where project
 inheritance starts; that would make it possible to finish this feature.
 (This is just a suggestion; I believe there are other ways to make it
 work.)


 2014-03-17 8:04 GMT-03:00 Telles Nobrega tellesnobr...@gmail.com:

 That is good news. I can have both pieces of information sent to nova really
 easily; I just need to add a field to the token, or more than one if needed.
 Right now I send IDs; it could be names just as easily, and we can add a new
 field so both pieces of information are sent. I'm not sure which is the best
 option for us, but I would think that sending both for now would keep
 compatibility while we still use the names for display purposes.


 On Sun, Mar 16, 2014 at 9:18 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-03-14 at 13:43 -0700, Vishvananda Ishaya wrote:
  Awesome, this is exactly what I was thinking. I think this is really
  close to being usable on the nova side. First of all the
  dot.sperated.form looks better imo, and I think my code should still
  work that way as well. The other piece that is needed is mapping ids
  to names for display purposes. I did something like this for a
  prototype of names in dns caching that should work nicely. The
  question simply becomes how do we expose those names. I’m thinking we
  have to add an output field to the display of objects in the system
  showing the fully qualified name.  We can then switch the display in
  novaclient to show names instead of ids.  That way an admin listing
  all the projects in orga would see the owner as orga.projb instead of
  the id string.
 
  The other option would be to pass names instead of ids from keystone
  and store those instead. That seems simpler at first glance, it is not
  backwards compatible with the current model so it will be painful for
  providers to switch.

 -1 for instead of. in addition to would have been fine, IMO.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 --
 Telles Mota Vidal Nobrega
 Bsc in Computer Science at UFCG
 Software Engineer at PulsarOpenStack Project - HP/LSD-UFCG

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Raildo Mascena
 Bachelor of 

Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Sean Dague
On 05/09/2014 09:31 AM, Ken'ichi Ohmichi wrote:
 2014-05-09 15:00 GMT+09:00 Ghanshyam Mann ghanshyam.m...@nectechnologies.in:
 Hi Sean,

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Monday, April 14, 2014 10:22 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

 As we're coming up on the stable/icehouse release the QA team is looking
 pretty positive at no longer branching Tempest. The QA Spec draft for this 
 is
 here - http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-
 docs/3f84796/doc/build/html/specs/branchless-tempest.html
 and hopefully address a lot of the questions we've seen so far.

 Additional comments are welcome on the review -
 https://review.openstack.org/#/c/86577/
 or as responses on this ML thread.

   -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 There is one more scenario where interface of the existing APIs gets changed
 may be only for experimental APIs (Nova V3 APIs). I have following question
 regarding those.
 1. How we are going to tag those in Branchless Tempest? Should we 
 keep two versions of tests for old and new API interface.
 2. Till Branchless Tempest is not Implemented, tempest test
  has to be skipped for those as they get failed on 
 Ice-House gate tests.
 So for tempest part of those changes should we wait for 
 implementation of
 Branchless tempest to complete?

 For example,
  'os-instance_actions' v3 API changed to 'os-server-actions' 
 (https://review.openstack.org/#/c/57614/)
  after Ice-house release and its respective tests are skipped in tempest.
 Now https://review.openstack.org/#/c/85666/  adds the response validation 
 for this API in tempest.
 As API tempest tests are skipped (cannot unskip as ice-house gate test 
 fails), response validation code
 will be untested on gate.
 My question is how I should go on these-
 1. Should I wait for implementation of Branchless tempest to complete?
 2. OK to merge even this is going to be untested on gate.
 
 I also am facing the same issue.
 The patch which fixes Nova v3 API inconsistency has been merged
 into *Nova* master, then I tried to push the other patch which blocks
 this kind of inconsistencies into *Tempest* master.
 However, check-tempest-dsvm-full-icehouse failed because of not
 backporting the Nova patch into Icehouse branch yet.
 I'm not sure we can backport this kind of patches into Icehouse
 because Nova v3 API of Icehouse must be experimental in future.
 So how about disabling v3 API tests in check-tempest-dsvm-full-icehouse?

This raises a very real question:

Why are we adding Tempest integration tests for non-stable interfaces?

My current leaning is that if you want tests in Tempest, your interface
has to be versioned. If it's not, it doesn't belong in Tempest at all.
But I think that's a good discussion point for the branchless Tempest
session in Atlanta.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Matthew Treinish
On Fri, May 09, 2014 at 10:31:00PM +0900, Ken'ichi Ohmichi wrote:
 2014-05-09 15:00 GMT+09:00 Ghanshyam Mann ghanshyam.m...@nectechnologies.in:
  Hi Sean,
 
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: Monday, April 14, 2014 10:22 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
 
  As we're coming up on the stable/icehouse release the QA team is looking
  pretty positive at no longer branching Tempest. The QA Spec draft for this 
  is
  here - http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-
  docs/3f84796/doc/build/html/specs/branchless-tempest.html
  and hopefully address a lot of the questions we've seen so far.
 
  Additional comments are welcome on the review -
  https://review.openstack.org/#/c/86577/
  or as responses on this ML thread.
 
-Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
  There is one more scenario where interface of the existing APIs gets changed
  may be only for experimental APIs (Nova V3 APIs). I have following question
  regarding those.
  1. How we are going to tag those in Branchless Tempest? Should we 
  keep two versions of tests for old and new API interface.
  2. Till Branchless Tempest is not Implemented, tempest test
   has to be skipped for those as they get failed on 
  Ice-House gate tests.
  So for tempest part of those changes should we wait for 
  implementation of
  Branchless tempest to complete?
 
  For example,
   'os-instance_actions' v3 API changed to 'os-server-actions' 
  (https://review.openstack.org/#/c/57614/)
   after Ice-house release and its respective tests are skipped in tempest.
  Now https://review.openstack.org/#/c/85666/  adds the response validation 
  for this API in tempest.
  As API tempest tests are skipped (cannot unskip as ice-house gate test 
  fails), response validation code
  will be untested on gate.
  My question is how I should go on these-
  1. Should I wait for implementation of Branchless tempest to complete?
  2. OK to merge even this is going to be untested on gate.
 
 I also am facing the same issue.
 The patch which fixes Nova v3 API inconsistency has been merged
 into *Nova* master, then I tried to push the other patch which blocks
 this kind of inconsistencies into *Tempest* master.
 However, check-tempest-dsvm-full-icehouse failed because of not
 backporting the Nova patch into Icehouse branch yet.
 I'm not sure we can backport this kind of patches into Icehouse
 because Nova v3 API of Icehouse must be experimental in future.
 So how about disabling v3 API tests in check-tempest-dsvm-full-icehouse?

I'm thinking this is the only option we have right now. It's not like the v3
API was marked as stable for icehouse, so you're not going to be backporting
anything to it; I think we should just disable it on icehouse runs. Part of
the branchless Tempest proposal is to mark tests by feature and, for the
stable branch jobs, run only the features available on icehouse rather than
everything on master. This is probably the first such case we've hit.

Moving forward I think we're going to have to discuss how we're going to handle
major api revisions with branchless tempest, and the path to getting them
stable and gating with all versions. This way we'll have a plan on how to handle
this kind of friction when a project starts working on a new api major version.
I'll put a note on the summit session for branchless tempest about this.

-Matt Treinish




Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Sean Dague
You have to be logged in for it to work, as it references self. It's a
limitation that we couldn't find a nice way around.

On 05/09/2014 10:00 AM, Jesus M. Gonzalez-Barahona wrote:
 Hi, Sean,
 
 I'm getting:
 
 Code Review - Error
 Error in operator label:Code-Review=0,self
 
 And maybe some more text that I cannot read, because of black
 background, and a button to Continue, which if I click leads me to the
 usual Gerrit page...
 
 Am I missing something?
 
   Jesus.
 
 On Fri, 2014-05-09 at 08:20 -0400, Sean Dague wrote:
 Based on some of my blog posts on gerrit queries, I've built and gotten
 integrated a custom inbox zero dashboard which is per project in gerrit.

 ex:
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard

 (replace openstack/nova with the project of your choice).

 This provides 3 sections.

 = Needs Final +2 =

 This is code that has an existing +2, no negative code review feedback,
 and positive jenkins score. So it's mergable if you provide the final +2.

 (Gerrit Query: status:open NOT label:Code-Review=0,self
 label:Verified=1,jenkins NOT label:Code-Review=-1 label:Code-Review=2
 NOT label:Workflow=-1 limit:50 )

 = No negative feedback =

 Changes that have no negative code review feedback, and positive jenkins
 score.

 (Gerrit Query: status:open NOT label:Code-Review=0,self
 label:Verified=1,jenkins NOT label:Code-Review=-1 NOT
 label:Workflow=-1 limit:50 )

 = Wayward changes =

 Changes that have no code review feedback at all (no one has looked at
 it), a positive jenkins score, and are older than 2 days.

 (Gerrit Query: status:open label:Verified=1,jenkins NOT
 label:Workflow=-1 NOT label:Code-Review=2 age:2d)


 In all cases it filters out patches that you've commented on in the most
 recently revision. So as you vote on these things they will disappear
 from your list.

 Hopefully people will find this dashboard also useful.

  -Sean

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net





[openstack-dev] [Designate] Design Summit Etherpads

2014-05-09 Thread Hayes, Graham
Hi All,

Here are the 2 etherpads for the Designate Design sessions next week:


  *   Tuesday 12:05-12:45
  *   https://etherpad.openstack.org/p/juno-design-summit-designate-session-1
  *   Room B305
  *   http://sched.co/1eJqs6h
  *   Tuesday 14:50 - 18:10
  *   https://etherpad.openstack.org/p/juno-design-summit-designate-session-2
  *   Room B308
  *   http://sched.co/1kczUAK


Looking forward to seeing you there!

Graham


Re: [openstack-dev] [neutron] Mid-Cycle Meeting Location

2014-05-09 Thread Kyle Mestery
On Thu, May 8, 2014 at 9:21 PM, Mohammad Banikazemi m...@us.ibm.com wrote:
 Kyle Mestery mest...@noironetworks.com wrote on 05/08/2014 04:45:43 PM:

 From: Kyle Mestery mest...@noironetworks.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/08/2014 04:46 PM
 Subject: [openstack-dev] [neutron] Mid-Cycle Meeting Location



 Hi everyone:

 I've settled on a date, location, and agenda for the Neutron mid-cycle
 Meeting. The logistical information is at the etherpad referenced
 below [1]. The tl;dr for those curious:

 Date: July 9-11
 Location: Cisco office in Bloomington, MN
 Agenda: nova-network parity sprint and completion of tasks

 I've setup a hotel block of 15 rooms at this point, please add
 yourself to the list of attendees so I can track the room block
 reservation. The hotel is within walking distance of the Cisco office.
 There are lots of other hotels all very close as well.

 Thanks, looking forward to seeing everyone next week as well as at the
 Mid-Cycle Meeting!

 Kyle

 [1] https://etherpad.openstack.org/p/neutron-juno-mid-cycle-meeting


 Please bear with me as I ask a question on nova-network parity work items
 that may have been addressed earlier. Is the auto-associating floating IPs
 part of the work items? I see a blueprint on this topic [2] but do not see
 it listed in [3] or [4].

This is a good point, Mohammad. This wasn't called out explicitly at
all in the coverage gaps, but I imagine it's something we should close
on. Looking at the BP referenced below, there is some movement towards
a way to implement this. It would be good to hear from some Nova folks
on the importance of this nova-network functionality as well to gauge
where it should fall in the parity effort priority wise.

Thanks,
Kyle

 Thanks,

 Mohammad


 [2]
 https://blueprints.launchpad.net/neutron/+spec/auto-associate-floating-ip
 [3] https://etherpad.openstack.org/p/neutron-nova-parity
 [4]
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-05-09 Thread Hayes, Graham
Hi,

It looks like us 'non-ATC' folk will have access to the project pods - so
should we nail down a time on Monday?

It looks like 16:30 onwards is the most popular choice - shall we say 16:30
on Monday in the Neutron pod?

Thanks,

Graham

On Tue, 2014-05-06 at 17:45 +, Veiga, Anthony wrote:
Hi,

The only issue I would see with the pod is that not all of us are ATCs, so we 
may or may not have access to that area (I am open to correction on that point 
- in fact I hope someone does ;) )


I’ll second this.  I have an interest in attending and assisting here, but I 
don’t have ATC status yet (though I’m an active contributor technically, just 
not via code.)




I could see it fitting in with our design session, but maybe if we meet on the 
Monday to do some initial hashing out as well, I think that would be good.

I am around for the morning, and later on in the afternoon on Monday, if that 
suits.

Graham

On Tue, 2014-05-06 at 11:21 -0600, Carl Baldwin wrote:


I have just updated my etherpad [1] with some proposed times.  Not
knowing much about the venue, I could only propose the pod area as
the location.

I also updated the designate session etherpad [2] per your suggestion.
 If there is time during the Designate sessions to include this in the
discussion then that may work out well.

Thanks,
Carl

[1] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate
[2] https://etherpad.openstack.org/p/DesignateAtlantaDesignSession

On Tue, May 6, 2014 at 8:58 AM, Joe Mcbride jmcbr...@rackspace.com wrote:
 On 4/29/14, 3:09 PM, Carl Baldwin c...@ecbaldwin.net wrote:
  I feel this is an important subject to discuss because the end result
  will be a better cloud user experience overall.  The design summit
  could be a great time to bring together interested parties from
  Neutron, Nova, and Designate to discuss the integration that I propose
  in these blueprints.

 Do you have a time/location planned for these discussions? If not, we
 may have some time in one of the Designate sessions.  The priorities
 and details for our design session will be pulled from
 https://etherpad.openstack.org/p/DesignateAtlantaDesignSession. If you
 are interested in joining us, can you add your proposed blueprints in
 the format noted there?

 Thanks,
 joe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-09 Thread Kyle Mestery
On Fri, May 9, 2014 at 4:39 AM, mar...@redhat.com mandr...@redhat.com wrote:
 On 09/05/14 12:33, mar...@redhat.com wrote:
 On 08/05/14 21:29, Henry Gessau wrote:
 Have any of you javascript gurus respun this for the new gerrit version?
 Or can this now be done on the backend somehow?

 haha, have been thinking this since the gerrit upgrade a couple days
 ago. It was very useful for reviews... I am NOT a javascript guru but
 since it's Friday I gave myself 15 minutes to play with it - this works
 for me:


 javascript:(function(){
 list = document.querySelectorAll('table.commentPanelHeader');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine')
 >= 0) {

 just noticed this ^^^ remove the extra newline here as it breaks, thanks

 title.style.color='#66'
 } else {
 title.style.color='blue'
 }
 }
 })()


Thanks Marios, this works for me now again!

Kyle

 marios


 On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:

 Nachi,

 Great!  I'd been meaning to do something like this.  I took yours and
 tweaked it a bit to highlight failed Jenkins builds in red and grey
 other Jenkins messages.  Human reviews are left in blue.

 javascript:(function(){
 list = document.querySelectorAll('td.GJEA35ODGC');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
 title.style.color='#66'
 } else {
 title.style.color='blue'
 }
 }
 })()

 Carl

 On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 I wrote a bookmarklet for neutron gerrit review.
 This bookmarklet makes the comment titles for 3rd party CI gray.

 javascript:(function(){list =
 document.querySelectorAll('td.GJEA35ODGC'); for(i in
 list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
 list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
 0){list[i].style.color='#66'}else{list[i].style.color='red'}};})()

 enjoy :)
 Nachi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Today's weekly meeting

2014-05-09 Thread sbauza
Hi,

Most of us are preparing Summit, and Dina is currently UTC-8 (so that's
pretty early for her) but I can open a meeting today. I don't have a
clear agenda, so I'll only leave the open discussion topic.

Folks who want to join, the room will be there for around 30 mins and
then I'll free it, unless we have more discussions.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unit test test_ovs_neutron_agent fails with dependency errors

2014-05-09 Thread Ihar Hrachyshka



On 30/04/14 23:21, Narasimhan, Vivekanandan wrote:
 Hi,
 
 
 
 I 've been trying to run test_ovs_neutron_agent.py unit test from
 openstack neutron master 'tip', and
 
 am hitting  dependency errors for novaclient as here:

Yes, neutron requires python-novaclient to be present in PYTHONPATH.

 
 
 
 Do we need to clone python-novaclient repo as well and point
 PYTHONPATH to it and
 
 try re-running the same?
 

Whichever you like most. If you use 'tox' or run_tests.sh with a
virtualenv to run the unit tests, the module will be installed for you
automagically.
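As an aside, the reason every test fails inside setUp() rather than at import time is that mock.patch() resolves its target lazily. A minimal sketch of that behavior (the patch target below is a made-up name, not a real package):

```python
from unittest import mock  # stdlib on Python 3; the mock-1.0.1 egg in the trace is the py2 package

# mock.patch() only imports its target when start() is called, so a module
# missing from PYTHONPATH (novaclient in the trace above) surfaces as an
# ImportError inside each test's setUp(), once per test.
patcher = mock.patch("no_such_package.SomeClass")  # hypothetical target
try:
    patcher.start()
    failed = None
except ImportError as exc:  # ModuleNotFoundError on modern Pythons
    failed = exc
print("patch failed at start():", failed)
```

Once the missing dependency is installed (which tox does from the requirements files), the same patch target imports cleanly and the errors disappear.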

 
 
 Please help.
 
 
 
 ==

  ERROR:
 neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_tunnel_update

  
 --

  Empty attachments:
 
 pythonlogging:''
 
 pythonlogging:'neutron.api.extensions'
 
 
 
 Traceback (most recent call last):
   File "neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py", line 89, in setUp
     notifier_cls = notifier_p.start()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1396, in start
     result = self.__enter__()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1252, in __enter__
     self.target = self.getter()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1414, in <lambda>
     getter = lambda: _importer(target)
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1102, in _importer
     thing = _dot_lookup(thing, comp, import_path)
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1091, in _dot_lookup
     __import__(import_path)
   File "neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 29, in <module>
     from neutron.db import agents_db
   File "neutron/db/agents_db.py", line 24, in <module>
     from neutron.extensions import agent as ext_agent
   File "neutron/extensions/agent.py", line 20, in <module>
     from neutron.api.v2 import base
   File "neutron/api/v2/base.py", line 30, in <module>
     from neutron.notifiers import nova
   File "neutron/notifiers/nova.py", line 19, in <module>
     from novaclient.v1_1.contrib import server_external_events
 ImportError: cannot import name server_external_events
 
 ==

  ERROR:
 neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_update_ports_returns_changed_vlan

  
 --

  Empty attachments:
 
 pythonlogging:''
 
 pythonlogging:'neutron.api.extensions'
 
 
 
 Traceback (most recent call last):
   File "neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py", line 89, in setUp
     notifier_cls = notifier_p.start()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1396, in start
     result = self.__enter__()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1252, in __enter__
     self.target = self.getter()
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1414, in <lambda>
     getter = lambda: _importer(target)
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1102, in _importer
     thing = _dot_lookup(thing, comp, import_path)
   File "/usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py", line 1091, in _dot_lookup
     __import__(import_path)
   File "neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 29, in <module>
     from neutron.db import agents_db
   File "neutron/db/agents_db.py", line 24, in <module>
     from neutron.extensions import agent as ext_agent
   File "neutron/extensions/agent.py", line 20, in <module>
     from neutron.api.v2 import base
   File "neutron/api/v2/base.py", line 30, in <module>
     from neutron.notifiers import nova
   File "neutron/notifiers/nova.py", line 19, in <module>
     from novaclient.v1_1.contrib import server_external_events
 ImportError: cannot import name server_external_events
 
 
 
 Ran 52 tests in 0.239s
 
 FAILED (failures=46)
 
 
 
 --
 
 Thanks,
 
 
 
 Vivek
 
 
 
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-05-09 Thread Carl Baldwin
Graham,

Agreed.  I'll update the etherpad to reflect the decision.  See you
all at or near the Neutron pod at 4:30pm.

Carl

On Fri, May 9, 2014 at 8:17 AM, Hayes, Graham graham.ha...@hp.com wrote:
 Hi,

 It looks like we 'non-ATC' folk will have access to the project pods - so 
 should we nail down a time on Monday?

 It looks like the 16:30 onwards is the most popular choice - will we say 
 16:30 on Monday in the Neutron pod?

 Thanks,

 Graham

 On Tue, 2014-05-06 at 17:45 +, Veiga, Anthony wrote:
 Hi,

 The only issue I would see with the pod is that not all of us are ATCs, so we 
 may or may not have access to that area (I am open to correction on that 
 point - in fact I hope someone does ;) )


 I’ll second this.  I have an interest in attending and assisting here, but I 
 don’t have ATC status yet (though I’m an active contributor technically, just 
 not via code.)




 I could see it fitting in with our design session, but maybe if we meet on 
 the Monday to do some initial hashing out as well, I think that would be good.

 I am around for the morning, and later on in the afternoon on Monday, if that 
 suits.

 Graham

 On Tue, 2014-05-06 at 11:21 -0600, Carl Baldwin wrote:


 I have just updated my etherpad [1] with some proposed times.  Not
 knowing much about the venue, I could only propose the pod area as
 the location.

 I also updated the designate session etherpad [2] per your suggestion.
  If there is time during the Designate sessions to include this in the
 discussion then that may work out well.

 Thanks,
 Carl

 [1] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate
 [2] https://etherpad.openstack.org/p/DesignateAtlantaDesignSession

 On Tue, May 6, 2014 at 8:58 AM, Joe Mcbride 
 jmcbr...@rackspace.com wrote:
 On 4/29/14, 3:09 PM, Carl Baldwin 
 c...@ecbaldwin.net wrote: I feel this is an 
 important subject to discuss because the end result will be a better cloud 
 user experience overall.  The design summit could be a great time to bring 
 together interested parties from Neutron, Nova, and Designate to discuss 
 the integration that I propose in these blueprints. Do you have a 
 time/location planned for these discussions? If not, we may have some time 
 in one of the Designate sessions.  The priorities and details for our 
 design session will be pulled from 
 https://etherpad.openstack.org/p/DesignateAtlantaDesignSession. If you are 
 interested in joining us, can you add your proposed blueprints in the 
 format noted there? Thanks, joe 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread Matthew Treinish
On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
 On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
 Hi Sean,
 
 2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
 On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
 Hi David,
 
 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which 
 was
 given a -1 due to not checking that every call to list_hosts returns 200. 
 I
 realized that we don't have a shared understanding or policy about this. 
 We
 need to make sure that each api is tested to return the right response, 
 but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:
 
 1. Move all response code checking to the tempest clients. They are 
 already
 checking for failure codes and are now doing validation of json response 
 and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.
 
 2. Have a clear policy of when callers should check response codes and 
 apply
 it.
 
 I think the first approach has a lot of advantages. Thoughts?
 Thanks for proposing this, I also prefer the first approach.
 We will be able to remove a lot of status code checks if going on
 this direction.
 It is necessary for bp/nova-api-test-inheritance tasks also.
 Current https://review.openstack.org/#/c/92536/ removes status code checks
 because some Nova v2/v3 APIs return different codes and the codes are 
 already
 checked in client side.
 
 but it is necessary to create a lot of patch for covering all API tests.
 So for now, I feel it is OK to skip status code checks in API tests
 only if client side checks are already implemented.
 After implementing all client validations, we can remove them of API
 tests.
 Do we still have instances where we want to make a call that we know
 will fail and not throw the exception?
 
 I agree there is a certain clarity in putting this down in the rest
 client. I just haven't figured out if it's going to break some behavior
 that we currently expect.
 If a server returns unexpected status code, Tempest fails with client
 validations
 like the following sample:
 
 Traceback (most recent call last):
   File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py", line 36, in test_create_server_with_admin_password
     resp, server = self.create_test_server(adminPass='testpassword')
   File "/opt/stack/tempest/tempest/api/compute/base.py", line 211, in create_test_server
     name, image_id, flavor, **kwargs)
   File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", line 95, in create_server
     self.validate_response(schema.create_server, resp, body)
   File "/opt/stack/tempest/tempest/common/rest_client.py", line 596, in validate_response
     raise exceptions.InvalidHttpSuccessCode(msg)
 InvalidHttpSuccessCode: The success code is different than the expected one
 Details: The status code(202) is different than the expected one([200])
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Note that there are currently two different methods on RestClient
 that do this sort of thing. Your stacktrace shows
 validate_response which expects to be passed a schema. The other
 is expected_success which takes the expected response code and is
 only used by the image clients.
 Both of these will need to stay around since not all APIs have
 defined schemas but the expected_success method should probably be
 changed to accept a list of valid success responses rather than just
 one as it does at present.

So expected_success() is just a better way of doing something like:

assert.Equals(resp.status, 200)

There isn't anything specific about the images clients with it.
validate_response() should just call expected_success(), which I pushed out
here:
https://review.openstack.org/93035
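For readers following along, the shape being discussed can be sketched roughly like this. This is an illustration only, not Tempest's actual implementation; the function and exception names simply mirror the ones used in the thread, and the check accepts either a single success code or a list of them, as David suggests:

```python
# Hypothetical sketch of a client-side status check, NOT Tempest's code.

class InvalidHttpSuccessCode(Exception):
    pass

def expected_success(expected_code, read_code):
    # Normalize a single code into a list so callers can pass either form.
    expected = expected_code if isinstance(expected_code, list) else [expected_code]
    if read_code not in expected:
        raise InvalidHttpSuccessCode(
            "The status code(%s) is different than the expected one(%s)"
            % (read_code, expected))

expected_success([200, 202], 202)  # both codes count as success; no error
try:
    expected_success(200, 202)     # single expected code; 202 is rejected
except InvalidHttpSuccessCode as exc:
    print("rejected:", exc)
```

With a check like this inside the client, test code no longer needs to assert on every response code; only calls with unusual success semantics would check explicitly.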


 
 I hope we can get agreement to move response checking to the client.
 There was no opposition when we started doing this in nova to check
 schema. Does any one see a reason to not do this? It would both
 simplify the code and make sure responses are checked in all cases.

 Sean, do you have a concrete example of what you are concerned about
 here? Moving the check from the value returned by a client call to
 inside the client code should not have any visible effect unless the
 value was actually wrong but not checked by the caller. But this
 would be a bug that was just found if a test started failing.
 

Please draft a spec/bp for doing this, we can sort out the implementation
details in the spec review. There is definitely some overlap with the 

Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Brandon Logan
Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a
single load balancer.  However, I don't think it is a matter of it being
needed.  It's a matter of having an API that makes sense to a user.
Just because the API has multiple VIPs doesn't mean every VIP needs its
own port.  In fact creating a port is an implementation detail (you know
that phrase that everyone throws out to stonewall any discussions?).
The user doesn't care how many neutron ports are set up underneath, they
only care about the VIPs.   

Also, the load balancer wouldn't just be a container, the load balancer
would have flavor, affinity, and other metadata on it.  Plus, a user
will expect to get a load balancer back.  Since this object can only be
described as a load balancer, the name of it shouldn't be up for debate.

The API is meant to be a generic language that can be translated into a
working load balancer and should be driver agnostic.  We believe this is
the most generic and flexible API structure.  Each driver will be able
to translate this into what makes sense for that product.
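To make the idea concrete, here is a purely hypothetical request body. The field names are invented for illustration and are not a proposed schema: the load balancer is the root object carrying flavor and other metadata, and the VIPs are a list under it rather than separate root resources:

```python
# Hypothetical load balancer representation -- field names are illustrative
# only, not an agreed-upon Neutron LBaaS API.
load_balancer = {
    "name": "web-lb",
    "flavor": "gold",            # flavor/affinity live on the load balancer
    "vips": [                    # multiple VIPs, IPv4 and IPv6, on one LB
        {"address": "203.0.113.10", "protocol": "HTTP", "port": 80},
        {"address": "2001:db8::10", "protocol": "HTTP", "port": 80},
    ],
}
print(len(load_balancer["vips"]))  # 2
```

How many Neutron ports back those two VIPs stays a driver decision, which is the point being argued above.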

On a side note, if this is too disruptive for the current LBaaS then why
couldn't this go into Neutron V3?  I thought that was the plan all along
anyway with redesigning the API.

Thanks,
Brandon  

On Fri, 2014-05-09 at 14:30 +0400, Eugene Nikanorov wrote:
 Hi folks,
 
 
 I'm pulling this question out of another discussion:
 
 
 Is there a need to have multiple VIPs (e.g. multiple L2 ports/IP
 addresses) per logical loadbalancer?
 If so, we need the description of such cases to evaluate them.
 
 
 Thanks,
 Eugene.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Steve Gordon
- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference session
   before the actual summit session on SR-IOV. During the previous IRC
   meeting, we were really close to identifying the different use cases.
   There was a dangling discussion on introducing another level of
   indirection between the vnic_types exposed via the nova boot API and
 how
   it would be represented internally. It would be ideal to have these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, a
  large part of being able to solve this problem is getting everyone up to
  speed on what this means, what the caveats are, and what we're trying to
  solve. If we do some of that outside the scope of the larger audience, I
  expect we'll get less interaction (or end up covering it again) in the
  session.
  
  That said, if there's something I'm missing that needs to be resolved
  ahead of time, then that's fine, but I expect the best plan is to just
  keep the discussion to the session. Afterwards, additional things can be
  discussed in a one-off manner, but getting everyone on the same page is
  largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response...looking at the etherpad there
 is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing
 another session/meeting?
 
 Steve
 
 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-05-09 Thread Koderer, Marc
Hi,

7:30pm at Ted's Montana Grill. I called them and they put us on the list.
Again the location: 
https://plus.google.com/100773563660993493024/posts/P9YSgT8AVXh

It's quite near to all the hotels.

See you there!
Marc 

From: Koderer, Marc [m.kode...@telekom.de]
Sent: Wednesday, May 07, 2014 7:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

Hi folks,

ok seems to me more complex than expected ;)
Monday is the OpenStack speakers dinner.

Let's do it on Sunday. Matthew already suggested a location:
 Ted's Montana Grill
 https://plus.google.com/100773563660993493024/posts/P9YSgT8AVXh

For all the people interested just send me your mobile number..

Regards
Marc


From: Koderer, Marc [m.kode...@telekom.de]
Sent: Monday, May 05, 2014 8:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

All right, let’s meet on Monday ;)
We can discuss the details after the QA meeting this week.

From: Frittoli, Andrea (HP Cloud) [mailto:fritt...@hp.com]
Sent: Thursday, May 1, 2014 18:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

I will arrive Sunday late.
If you meet on Monday I’ll see you there ^_^

From: Miguel Lavalle [mailto:mig...@mlavalle.com]
Sent: 01 May 2014 17:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

I arrive Sunday at 3:30pm. Either Sunday or Monday is fine with me. Looking 
forward to it :-)

On Wed, Apr 30, 2014 at 5:11 AM, Koderer, Marc 
m.kode...@telekom.de wrote:
Hi folks,

last time we met one day before the Summit started for a short meet-up.
Should we do the same this time?

I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be 
fine for me.

Regards,
Marc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-09 Thread Ben Nemec
This is a development list, and your question sounds more usage-related. 
 Please ask your question on the users list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Thanks.

-Ben

On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic (mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following traceback from
some transactions...
Traceback (most recent call last):
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
     chunk = next(data_source)
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
     data_source = iter(lambda: reader(self.app.client_chunk_size), '')
   File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
     chunk = self.wsgi_input.read(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
     return self._chunked_read(self.rfile, length)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
     self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)
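For context on where that ValueError comes from: in HTTP/1.1 chunked transfer encoding, each chunk is prefixed by its size written as a hex number, optionally followed by ";extension". A rough sketch of the parsing step (not Swift/eventlet code):

```python
# The chunk-size line of a chunked HTTP body is hex, e.g. "1a2b;chunk-ext".
# If the client goes away or stalls mid-upload, readline() returns '' and
# int('', 16) raises exactly the ValueError shown in the traceback above.
def parse_chunk_size(line):
    return int(line.split(";", 1)[0], 16)

print(parse_chunk_size("1a2b;chunk-ext"))  # 6699 bytes in this chunk
try:
    parse_chunk_size("")  # what an aborted or timed-out upload produces
except ValueError as exc:
    print("ValueError:", exc)
```

So the trace is consistent with the client side of the connection disappearing or timing out while the proxy was still reading the request body, which also fits the 408s in the storage logs.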

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
408 - PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data;
txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

It succeeds sometimes, but mostly returns 408 errors. I don't see any other
logs for the transaction ID, or around these 408 errors in the log
files. Is this a disk timeout issue? These are only 1GB files and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-05-09 Thread Jeremy Stanley
On 2014-05-09 15:51:19 +0800 (+0800), Thomas Goirand wrote:
 What's the final decision? Which room will we use?

I'm personally leaning toward B301 (in the Design Summit area) since
I think that will make it easier for participants to arrive on time.
As Thierry also suggested this and nobody else has expressed a
preference or raised any objections, if I don't hear any within the
next few hours I'll update the instructions to say B301 (and cancel
the other dedicated room with the conference organizers so they can
free up resources and correct the schedule before the weekend).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread Ken'ichi Ohmichi
2014-05-10 0:29 GMT+09:00 Matthew Treinish mtrein...@kortar.org:
 On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
 On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
 Hi Sean,
 
 2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
 On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
 Hi David,
 
 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which 
 was
 given a -1 due to not checking that every call to list_hosts returns 
 200. I
 realized that we don't have a shared understanding or policy about this. 
 We
 need to make sure that each api is tested to return the right response, 
 but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:
 
 1. Move all response code checking to the tempest clients. They are 
 already
 checking for failure codes and are now doing validation of json response 
 and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.
 
 2. Have a clear policy of when callers should check response codes and 
 apply
 it.
 
 I think the first approach has a lot of advantages. Thoughts?
 Thanks for proposing this, I also prefer the first approach.
 We will be able to remove a lot of status code checks if going on
 this direction.
 It is necessary for bp/nova-api-test-inheritance tasks also.
 Current https://review.openstack.org/#/c/92536/ removes status code checks
 because some Nova v2/v3 APIs return different codes and the codes are 
 already
 checked in client side.
 
 but it is necessary to create a lot of patch for covering all API tests.
 So for now, I feel it is OK to skip status code checks in API tests
 only if client side checks are already implemented.
 After implementing all client validations, we can remove them of API
 tests.
 Do we still have instances where we want to make a call that we know
 will fail and not throw the exception?
 
 I agree there is a certain clarity in putting this down in the rest
 client. I just haven't figured out if it's going to break some behavior
 that we currently expect.
 If a server returns unexpected status code, Tempest fails with client
 validations
 like the following sample:
 
 Traceback (most recent call last):
   File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py", line 36, in test_create_server_with_admin_password
     resp, server = self.create_test_server(adminPass='testpassword')
   File "/opt/stack/tempest/tempest/api/compute/base.py", line 211, in create_test_server
     name, image_id, flavor, **kwargs)
   File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", line 95, in create_server
     self.validate_response(schema.create_server, resp, body)
   File "/opt/stack/tempest/tempest/common/rest_client.py", line 596, in validate_response
     raise exceptions.InvalidHttpSuccessCode(msg)
 InvalidHttpSuccessCode: The success code is different than the expected one
 Details: The status code(202) is different than the expected one([200])
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Note that there are currently two different methods on RestClient
 that do this sort of thing. Your stacktrace shows
 validate_response which expects to be passed a schema. The other
 is expected_success which takes the expected response code and is
 only used by the image clients.
 Both of these will need to stay around since not all APIs have
 defined schemas but the expected_success method should probably be
 changed to accept a list of valid success responses rather than just
 one as it does at present.

 So expected_success() is just a better way of doing something like:

 assert.Equals(resp.status, 200)

 There isn't anything specific about the images clients with it.
 validate_response() should just call expected_success(), which I pushed out
 here:
 https://review.openstack.org/93035



 I hope we can get agreement to move response checking to the client.
 There was no opposition when we started doing this in nova to check
 schema. Does any one see a reason to not do this? It would both
 simplify the code and make sure responses are checked in all cases.

 Sean, do you have a concrete example of what you are concerned about
 here? Moving the check from the value returned by a client call to
 inside the client code should not have any visible effect unless the
 value was actually wrong but not checked by the caller. But this
 would be a bug that was just found if a test started failing.


 Please draft a spec/bp for doing this, we can sort out the 

Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Ken'ichi Ohmichi
2014-05-09 22:57 GMT+09:00 Matthew Treinish mtrein...@kortar.org:
 On Fri, May 09, 2014 at 10:31:00PM +0900, Ken'ichi Ohmichi wrote:
 2014-05-09 15:00 GMT+09:00 Ghanshyam Mann 
 ghanshyam.m...@nectechnologies.in:
  Hi Sean,
 
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: Monday, April 14, 2014 10:22 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
 
  As we're coming up on the stable/icehouse release the QA team is looking
  pretty positive at no longer branching Tempest. The QA Spec draft for 
  this is
  here - http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-
  docs/3f84796/doc/build/html/specs/branchless-tempest.html
  and hopefully address a lot of the questions we've seen so far.
 
  Additional comments are welcome on the review -
  https://review.openstack.org/#/c/86577/
  or as responses on this ML thread.
 
-Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
  There is one more scenario, where the interface of an existing API gets
  changed, perhaps only for experimental APIs (Nova v3 APIs). I have the
  following questions regarding those:
  1. How are we going to tag those in branchless Tempest? Should we
  keep two versions of tests, for the old and the new API interface?
  2. Until branchless Tempest is implemented, the Tempest tests for those
  have to be skipped, as they fail on the Icehouse gate tests.
  So for the Tempest part of those changes, should we wait for the
  implementation of branchless Tempest to complete?
 
  For example, the 'os-instance_actions' v3 API changed to
  'os-server-actions' (https://review.openstack.org/#/c/57614/)
  after the Icehouse release, and its respective tests are skipped in
  Tempest. Now https://review.openstack.org/#/c/85666/ adds the response
  validation for this API in Tempest. As the API tests are skipped (they
  cannot be unskipped because the Icehouse gate test fails), the
  response-validation code will be untested on the gate.
  My questions are:
  1. Should I wait for the implementation of branchless Tempest to complete?
  2. Is it OK to merge even though this is going to be untested on the gate?

 I am also facing the same issue.
 The patch which fixes a Nova v3 API inconsistency has been merged
 into *Nova* master, and then I tried to push another patch, which blocks
 this kind of inconsistency, into *Tempest* master.
 However, check-tempest-dsvm-full-icehouse failed because the Nova patch
 has not been backported to the Icehouse branch yet.
 I'm not sure we can backport this kind of patch to Icehouse,
 because the Nova v3 API of Icehouse must remain experimental in future.
 So how about disabling the v3 API tests in check-tempest-dsvm-full-icehouse?

 I'm thinking this is the only option we have right now. It's not like the v3
 api was marked as stable for icehouse. So you're not going to be backporting
 anything to it, so I think we should just disable it on icehouse runs. Part
 of the branchless tempest proposal is to mark and run with only the features
 that run on icehouse but not master for the stable branch jobs. This is
 probably the first one we've hit.

 Moving forward I think we're going to have to discuss how we're going to
 handle major api revisions with branchless tempest, and the path to getting
 them stable and gating with all versions. This way we'll have a plan on how
 to handle this kind of friction when a project starts working on a new api
 major version. I'll put a note on the summit session for branchless tempest
 about this.

Matt, Sean, thanks.
This topic is already interesting for me.
I'd like to join the session.

See you in ATL!

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread David Kranz

On 05/09/2014 11:29 AM, Matthew Treinish wrote:

On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:

On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:

Hi Sean,

2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:

On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:

Hi David,

2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:

I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
given a -1 due to not checking that every call to list_hosts returns 200. I
realized that we don't have a shared understanding or policy about this. We
need to make sure that each api is tested to return the right response, but
many tests need to call multiple apis in support of the one they are
actually testing. It seems silly to have the caller check the response of
every api call. Currently there are many, if not the majority of, cases
where api calls are made without checking the response code. I see a few
possibilities:

1. Move all response code checking to the tempest clients. They are already
checking for failure codes and are now doing validation of json response and
headers as well. Callers would only do an explicit check if there were
multiple success codes possible.

2. Have a clear policy of when callers should check response codes and apply
it.

I think the first approach has a lot of advantages. Thoughts?

Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in
this direction.
It is also necessary for the bp/nova-api-test-inheritance tasks.
The current https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are already
checked on the client side.

But it is necessary to create a lot of patches to cover all API tests.
So for now, I feel it is OK to skip status code checks in API tests
only if client-side checks are already implemented.
After implementing all client validations, we can remove them from the API
tests.

Do we still have instances where we want to make a call that we know
will fail and not throw the exception?

I agree there is a certain clarity in putting this down in the rest
client. I just haven't figured out if it's going to break some behavior
that we currently expect.

If a server returns unexpected status code, Tempest fails with client
validations
like the following sample:

Traceback (most recent call last):
   File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py",
line 36, in test_create_server_with_admin_password
     resp, server = self.create_test_server(adminPass='testpassword')
   File "/opt/stack/tempest/tempest/api/compute/base.py", line 211, in
create_test_server
     name, image_id, flavor, **kwargs)
   File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py",
line 95, in create_server
     self.validate_response(schema.create_server, resp, body)
   File "/opt/stack/tempest/tempest/common/rest_client.py", line 596,
in validate_response
     raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])
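For readers following along, the failure above comes from schema-driven status checking. A minimal, purely illustrative sketch of the idea (the exception name mirrors the traceback; everything else, including the schema shape, is an assumption, not Tempest's actual code):

```python
# Hypothetical sketch of schema-driven response validation, modeled on
# the traceback above. Illustrative only, not Tempest's implementation.

class InvalidHttpSuccessCode(Exception):
    pass


def validate_response(schema, resp):
    """Raise if the response status is not one the schema allows."""
    expected = schema['status_code']          # assumed shape, e.g. [200]
    if resp['status'] not in expected:
        raise InvalidHttpSuccessCode(
            "The status code(%s) is different than the expected one(%s)"
            % (resp['status'], expected))


create_server = {'status_code': [200]}        # assumed schema fragment

validate_response(create_server, {'status': 200})   # passes silently
try:
    validate_response(create_server, {'status': 202})
except InvalidHttpSuccessCode as exc:
    print(exc)   # a 202 is rejected, as in the traceback above
```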


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Note that there are currently two different methods on RestClient
that do this sort of thing. Your stacktrace shows
validate_response which expects to be passed a schema. The other
is expected_success which takes the expected response code and is
only used by the image clients.
Both of these will need to stay around since not all APIs have
defined schemas but the expected_success method should probably be
changed to accept a list of valid success responses rather than just
one as it does at present.

So expected_success() is just a better way of doing something like:

assert.Equals(resp.status, 200)

There isn't anything specific about the images clients with it.
validate_response() should just call expected_success(), which I pushed out
here:
https://review.openstack.org/93035
Right, I was just observing that it was only used by the image clients 
at present.
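To make the suggested change concrete, a hedged sketch of expected_success() accepting either a single success code or a list of them — the shape of the idea, not the actual patch under review:

```python
# Sketch only: expected_success() normalized to take an int or a list
# of ints, as the thread proposes (e.g. 200, or [200, 207]).

def expected_success(expected_code, read_code):
    if isinstance(expected_code, int):
        expected_code = [expected_code]       # normalize the single-code form
    if read_code not in expected_code:
        raise AssertionError(
            "Unexpected status code %s; expected one of %s"
            % (read_code, expected_code))


expected_success(200, 200)          # single code, as it works today
expected_success([200, 207], 207)   # list form, as proposed
```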




I hope we can get agreement to move response checking to the client.
There was no opposition when we started doing this in nova to check
schema. Does anyone see a reason not to do this? It would both
simplify the code and make sure responses are checked in all cases.

Sean, do you have a concrete example of what you are concerned about
here? Moving the check from the value returned by a client call to
inside the client code should not have any visible effect unless the
value was actually wrong but not checked by the caller. But this
would be a bug that was just found if a test started failing.


Please draft a spec/bp for doing this, we can sort out the implementation
details in the spec review. There is 

[openstack-dev] Summit etherpad and discussion about making docs easier for developers

2014-05-09 Thread Nick Chase
I've put together an etherpad to hopefully get a little discussion going
about what people would like to happen to make it easier for them to
provide more docs related to the code they're providing.  (
http://junodesignsummit.sched.org/event/19381e6ad48e05abc9099eb7ff956231#.U20CR3Wx17Q
)

The etherpad is here:
https://etherpad.openstack.org/p/easier_documentation_for_developers

Please feel free to contribute any ideas to discuss.  I'd love for us to
come out of the session with something we can implement during the Juno
cycle.

Thanks!

  Nick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-09 Thread Matthew Treinish
On Sat, May 03, 2014 at 07:41:54AM +, Kenichi Oomichi wrote:
 
 Hi Matthew,
 
  -Original Message-
  From: Matthew Treinish [mailto:mtrein...@kortar.org]
  Sent: Friday, May 02, 2014 12:36 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
  
When adding new API parameters to the existing APIs, these parameters 
should
be API extensions according to the above guidelines. So we have three 
options
for handling API extensions in Tempest:
   
1. Consider them as optional, and cannot block the incompatible
changes of them. (Current)
2. Consider them as required based on tempest.conf, and can block the
incompatible changes.
3. Consider them as required automatically with microversioning, and
can block the incompatible changes.
  
   I investigated the way of the above option 3, then have one question
   about current Tempest implementation.
  
   Now verify_tempest_config tool gets API extension list from each
   service including Nova and verifies API extension config of tempest.conf
   based on the list.
   Can we use the list for selecting what extension tests run instead of
   the verification?
   As you said In the previous IRC meeting, current API tests will be
   skipped if the test which is decorated with requires_ext() and the
   extension is not specified in tempest.conf. I feel it would be nice
   that Tempest gets API extension list and selects API tests automatically
   based on the list.
  
  So we used to do this type of autodiscovery in tempest, but we stopped
  because it let bugs slip through the gate. This topic has come up several
  times in the past, most recently in discussing reorganizing the config
  file. [1] This is why we put [2] in the tempest README. I agree
  autodiscovery would be simpler, but the problem is that because we use
  tempest as the gate, if there was a bug that caused autodiscovery to be
  different from what was expected, the tests would just silently skip. This
  would often go unnoticed because of the sheer volume of tempest tests. (I
  think we're currently at ~2300) I also feel that explicitly defining what
  is expected to be enabled is a key requirement for branchless tempest for
  the same reason.
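The skip behavior being debated — tests decorated for an extension silently skipping when the config doesn't list it — can be sketched like this (a simplified, hypothetical stand-in: real Tempest reads its config file rather than a dict, and also understands the special value 'all'):

```python
import functools
import unittest

# Stand-in for values parsed from tempest.conf; hypothetical, for
# illustration only.
CONF_EXTENSIONS = {'compute': ['os-keypairs']}


def requires_ext(extension, service):
    """Skip a test unless `extension` is enabled for `service`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if extension not in CONF_EXTENSIONS.get(service, []):
                raise unittest.SkipTest(
                    "%s extension %r not enabled" % (service, extension))
            return func(*args, **kwargs)
        return wrapper
    return decorator


@requires_ext('os-keypairs', 'compute')
def test_keypairs():
    return 'ran'                 # extension enabled above, so this runs


@requires_ext('os-server-actions', 'compute')
def test_server_actions():
    return 'ran'                 # not in CONF_EXTENSIONS, so this skips
```

This illustrates the gate concern: if discovery populated CONF_EXTENSIONS incorrectly, whole swaths of tests would skip without any failure to flag it.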
 
 Thanks for the explanation, I understand the purpose of a static config for
 the gate. We could not notice some unexpected skips due to the test volume,
 as you said. But autodiscovery still seems attractive to me, as it would
 make it easy to run Tempest on production environments. So how about
 implementing autodiscovery as an option which is disabled by default in
 tempest.conf?
 For example, the current config of nova v3 API extensions is
 
  api_v3_extensions=all
 
 and we would be able to specify "auto" instead of "all" if autodiscovery
 is necessary:
 
  api_v3_extensions=auto
 
 It would be nice to define it as experimental on the gate, and to check the
 number of test skips from time to time by comparing against the legitimate
 gate runs.
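For context, the option under discussion lives in tempest.conf; a fragment along these lines illustrates the proposal (the section name is an assumption about that era's config layout, and 'auto' is the *proposed* value, not an existing one):

```ini
[compute-feature-enabled]
# explicit list, 'all', or -- under the proposal -- 'auto'
api_v3_extensions = all
```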

So the problem with this is actually the same one we have in the gate. Even
though it's not part of the automated testing system, the issue with using
discovery as part of the testing is that as an end user you'll never know
what's expected to be running. How can you know if the results of your test
run are ever valid if the set of things you're trying to verify isn't fixed?
If there was a configuration error and some things were disabled by accident,
wouldn't you want to catch that when you're using tempest to verify your
deployment? For this reason feature discovery should not really ever be a
run-time decision. This is really why we currently have discovery decoupled
into outside tooling. Which currently is only verify_tempest_config, but that
will probably grow into other things as well.

I understand that this a current pain point with using tempest. Heck, I even
tried to put together an example of configuring tempest manually as part of my
summit talk and realized it was far too large for the time window I would have
during the presentation. We really need to come up with a good solution for
handling tempest configuration outside of devstack. But, I don't want to rush
into the wrong solution just because we are having issues with it right now.

As it stands now, within the last few weeks I've seen 2 proposals for very
different configuration tools, in addition to one we apparently have in tree,
tempest_auto_config (which I'm probably going to rip out because I don't see
any value in it). I think one of the results I want to get out of summit next
week is to have a plan for doing this and get a spec drafted from that
discussion. I feel that this probably does warrant its own session, but the
schedule is locked down now. Depending on how the discussions go during the
week I may re-purpose a good chunk of my last session to discuss this.

 
  The verify_tempest_config tool 

[openstack-dev] Point of Contact Request for MagnetoDB

2014-05-09 Thread Mathews, Tiffany J. (LARC-E301)[SCIENCE SYSTEMS AND APPLICATIONS, INC]
I am interested in establishing an expert POC for questions and concerns
regarding MagnetoDB, as I am working on creating a technology repository for
one of the NASA Data Centers to identify and track technologies that we may
not be currently using but would like to consider for potential future use.
MagnetoDB is a technology that we are interested in learning more about,
especially with regard to security. We are also interested in seeing if there
are any white papers or demonstrations that could help us better understand
this technology.

Any guidance is greatly appreciated!

Tiffany Mathews


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread GHANSHYAM MANN
Hi Matthew,

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Saturday, May 10, 2014 12:29 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [qa] Checking for return codes in tempest
 client calls

 On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
  On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
  Hi Sean,
 
  2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
  On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
  Hi David,
 
  2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
  I just looked at a patch https://review.openstack.org/#/c/90310/3
  which was given a -1 due to not checking that every call to
  list_hosts returns 200. I realized that we don't have a shared
  understanding or policy about this. We need to make sure that each
  api is tested to return the right response, but many tests need to
  call multiple apis in support of the one they are actually
  testing. It seems silly to have the caller check the response of
  every api call. Currently there are many, if not the majority of,
  cases where api calls are made without checking the response code.
  I see a few possibilities:
 
  1. Move all response code checking to the tempest clients. They
  are already checking for failure codes and are now doing
  validation of json response and headers as well. Callers would
  only do an explicit check if there were multiple success codes
  possible.
 
  2. Have a clear policy of when callers should check response codes
  and apply it.
 
  I think the first approach has a lot of advantages. Thoughts?
  Thanks for proposing this, I also prefer the first approach.
  We will be able to remove a lot of status code checks if going on
  this direction.
  It is necessary for bp/nova-api-test-inheritance tasks also.
  Current https://review.openstack.org/#/c/92536/ removes status code
  checks because some Nova v2/v3 APIs return different codes and the
  codes are already checked in client side.
 
  but it is necessary to create a lot of patch for covering all API tests.
  So for now, I feel it is OK to skip status code checks in API tests
  only if client side checks are already implemented.
  After implementing all client validations, we can remove them of
  API tests.
  Do we still have instances where we want to make a call that we know
  will fail and not throw the exception?
 
  I agree there is a certain clarity in putting this down in the rest
  client. I just haven't figured out if it's going to break some
  behavior that we currently expect.
  If a server returns unexpected status code, Tempest fails with client
  validations like the following sample:
 
  Traceback (most recent call last):
     File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py",
  line 36, in test_create_server_with_admin_password
       resp, server = self.create_test_server(adminPass='testpassword')
     File "/opt/stack/tempest/tempest/api/compute/base.py", line 211,
  in create_test_server
       name, image_id, flavor, **kwargs)
     File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py",
  line 95, in create_server
       self.validate_response(schema.create_server, resp, body)
     File "/opt/stack/tempest/tempest/common/rest_client.py", line 596,
  in validate_response
       raise exceptions.InvalidHttpSuccessCode(msg)
  InvalidHttpSuccessCode: The success code is different than the
  expected one
  Details: The status code(202) is different than the expected
  one([200])
 
 
  Thanks
  Ken'ichi Ohmichi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  Note that there are currently two different methods on RestClient that
  do this sort of thing. Your stacktrace shows validate_response which
  expects to be passed a schema. The other is expected_success which
  takes the expected response code and is only used by the image
  clients.
  Both of these will need to stay around since not all APIs have defined
  schemas but the expected_success method should probably be changed to
  accept a list of valid success responses rather than just one as it
  does at present.

 So expected_success() is just a better way of doing something like:

 assert.Equals(resp.status, 200)

 There isn't anything specific about the images clients with it.
 validate_response() should just call expected_success(), which I pushed
 out here:
 https://review.openstack.org/93035

There can be a possibility of multiple success return codes (the Nova server
external events API returns 200 & 207 as success codes). Currently there is
no such API schema, but we need to consider this case. In validate_response()
it was handled, and we should expand the expected_success() 

Re: [openstack-dev] (no subject)

2014-05-09 Thread Clay Gerrard
I thought those tracebacks only showed up with old versions of eventlet, or
with eventlet_debug = true?

In my experience that normally indicates a client disconnect on a chunked
transfer-encoding request (a request without a content-length).  Do you know
if your clients are using Transfer-Encoding: chunked?

Are you seeing the 408 make its way out to the client?  It wasn't clear to
me if you only see these tracebacks on the object-servers or in the proxy
logs as well.  Perhaps only one of the three disks involved in the PUT is
timing out and the client still gets a successful response?

As the disks fill up replication and auditing is going to consume more disk
resources - you may have to tune the concurrency and rate settings on those
daemons.  If the errors happen consistently you could try running with
background consistency processes temporarily disabled and rule out if
they're causing disk contention on your setup with your config.
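For reference, the concurrency and rate settings mentioned above live in the object server's daemon config sections; the option names below appear in Swift's sample configs, but the values here are purely illustrative starting points, not recommendations:

```ini
[object-replicator]
concurrency = 1
run_pause = 60

[object-auditor]
files_per_second = 10
bytes_per_second = 5000000
```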

-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec openst...@nemebean.com wrote:

 This is a development list, and your question sounds more usage-related.
  Please ask your question on the users list: http://lists.openstack.org/
 cgi-bin/mailman/listinfo/openstack

 Thanks.

 -Ben


 On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

 Hi,

 I have a two node swift cluster receiving continuous traffic (mostly
 overwrites for existing objects) of 1GB files each.

 Soon after the traffic started, I'm seeing the following traceback from
 some transactions...
 Traceback (most recent call last):
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692,
 in PUT
  chunk = next(data_source)
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559,
 in lambda
  data_source = iter(lambda: reader(self.app.client_chunk_size), '')
   File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
  chunk = self.wsgi_input.read(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147,
 in read
  return self._chunked_read(self.rfile, length)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137,
 in _chunked_read
  self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
 ValueError: invalid literal for int() with base 16: '' (txn:
 tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

 Seeing the following errors on storage logs...
 object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
 /xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 408 - PUT
 http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

 It succeeds sometimes, but mostly returns 408 errors. I don't see any
 other logs for the transaction ID, or around these 408 errors in the log
 files. Is this a disk timeout issue? These are only 1GB files, and normal
 writes to files on these disks are quite fast.

 The timeouts from the swift proxy files are...
 root@bulkstore-112:~# grep -R timeout /etc/swift/*
 /etc/swift/proxy-server.conf:client_timeout = 600
 /etc/swift/proxy-server.conf:node_timeout = 600
 /etc/swift/proxy-server.conf:recoverable_node_timeout = 600

 Can someone help me troubleshoot this issue?

 --
 -Shyam


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Beware Jenkins loops when commenting on old/draft patch sets

2014-05-09 Thread Ben Nemec

Hi,

As you can see on https://review.openstack.org/#/c/77924/ I was 
fortunate enough to find a new way to trigger a Jenkins loop.  In 
talking to infra, it seems the problem occurs when a comment is added to 
an older patch set (as in patch set 2 of a change with 3 patch sets) of 
a change that hasn't had a Jenkins vote in 72 hours.


Note that in the case of the patch linked above, I intended to comment 
on the latest patch set but it went to an older patch set because of the 
draft status of the patch, so I would suggest not commenting on draft 
patches (drafts are evil anyway, so please don't use them.  WIP is your 
friend. :-).


Infra is working on a solution to the problem, but in the meantime be 
very careful about commenting on old or draft patch sets.  This loop can 
generate a _lot_ of comments in a short time.  I got 123 Gerrit e-mails 
in the minute that the loop was running.


Note that abandoning the patch seemed to break the loop, but that won't 
always be an option so it's really better to just avoid it in the first 
place.


If you have any questions, I suggest you ask in #openstack-infra since 
they know more about this than I do.


Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread GHANSHYAM MANN
Hi Sean,

That’s pretty cool and helpful. Thanks for providing this.

  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: Friday, May 9, 2014 9:21 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [all] custom gerrit dashboard - per project
 review
  inbox zero
 
  Based on some of my blog posts on gerrit queries, I've built and gotten
  integrated a custom inbox zero dashboard which is per project in gerrit.
 
  ex:
  https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
 
  (replace openstack/nova with the project of your choice).
 
  This provides 3 sections.
 
  = Needs Final +2 =
 
  This is code that has an existing +2, no negative code review feedback,
 and
  positive jenkins score. So it's mergable if you provide the final +2.
 
  (Gerrit Query: status:open NOT label:Code-Review=0,self
  label:Verified=1,jenkins NOT label:Code-Review=-1 label:Code-Review=2
  NOT label:Workflow=-1 limit:50 )
 
  = No negative feedback =
 
  Changes that have no negative code review feedback, and positive jenkins
  score.
 
  (Gerrit Query: status:open NOT label:Code-Review=0,self
  label:Verified=1,jenkins NOT label:Code-Review=-1 NOT
  label:Workflow=-1 limit:50 )
 
  = Wayward changes =
 
  Changes that have no code review feedback at all (no one has looked at
 it), a
  positive jenkins score, and are older than 2 days.
 
  (Gerrit Query: status:open label:Verified=1,jenkins NOT
  label:Workflow=-1 NOT label:Code-Review=2 age:2d)
 
 
  In all cases it filters out patches that you've commented on in the most
  recent revision. So as you vote on these things they will disappear from
  your list.
 
  Hopefully people will find this dashboard also useful.
 
  -Sean
 
  --
  Sean Dague
  http://dague.net
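For reference, per-project Gerrit dashboards like the one quoted above are defined as sections of named queries in a file on the project's refs/meta/dashboards branch; a rough sketch of the format (file name illustrative, queries abridged from the ones in the message):

```ini
# file: important-changes, on the refs/meta/dashboards/important-changes ref
[dashboard]
title = Review Inbox
description = Per-project review inbox zero

[section "Needs Final +2"]
query = status:open label:Code-Review=2 label:Verified=1,jenkins NOT label:Workflow=-1

[section "Wayward Changes"]
query = status:open label:Verified=1,jenkins NOT label:Code-Review=2 age:2d
```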


-- 
Thanks & Regards
Ghanshyam Mann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Point of Contact Request for MagnetoDB

2014-05-09 Thread Ilya Sviridov
Hello Tiffany,

Thank you for your interest to MagnetoDB.
It is great to see you here!

My name is Ilya Sviridov and I'm leading the project.

There is a wiki page: https://wiki.openstack.org/wiki/MagnetoDB. MagnetoDB
is under active development right now, so the page is a bit outdated.
Also take a look at our screencast:
http://www.mirantis.com/blog/introducing-magnetodb-nosql-database-service-openstack/

If you have any questions, feel free to ask on the mailing list with the
[MagnetoDB] tag or in the #magnetodb IRC channel.

Are you planning to attend Atlanta Summit?
We are having the design session on *Tuesday*

http://junodesignsummit.sched.org/event/c6474b1697193bcb88598209fe929f93#.U20Vgfl5N-c




On Fri, May 9, 2014 at 10:10 AM, Mathews, Tiffany J. (LARC-E301)[SCIENCE
SYSTEMS AND APPLICATIONS, INC] tiffany.j.math...@nasa.gov wrote:

  I am interested in establishing an expert POC for questions and concerns
 regarding MagnetoDB as I am working on creating a technology repository
 for one of the NASA Data Centers to identify and track technologies that we
 may not be currently using, however, would like to consider for potential
 future use. MagnetoDB is a technology that we are interested in learning
 about more- especially with regard to security. We are also interested in
 seeing if there are any white papers or demonstrations that could help us
 better understand this technology.

  Any guidance is greatly appreciated!

  Tiffany Mathews





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Carlos Garza

On May 9, 2014, at 3:26 AM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com
 wrote:

Carlos,

The general objection is that if we don't need multiple VIPs (different ip, not 
just tcp ports) per single logical loadbalancer, then we don't need 
loadbalancer because everything else is addressed by VIP playing a role of 
loadbalancer.

That's pretty much our objection. You seem to be masquerading vips as if 
they were loadbalancers. APIs that don't model reality are not a good fit as 
far as we're concerned.

We do not recognize the logical connection in "we will use a loadbalancer 
top-level object if and only if it will contain multiple ports or vips". We 
view this as a straw-man attempt to get those in favor of a loadbalancer 
top-level object to somehow re-form an argument that we now need multiple 
ports, vips, etc., which isn't what we are arguing at all.

I have no doubt that even if we ever did have a use case for this you'll just 
reject the use case or come up with another bizarre constraint as to why we 
Don't need a loadbalancer top level object.
That was never the argument we were trying to make in the first place.

Regarding conclusions - I think we've heard enough negative opinions on the 
idea of 'container' to at least postpone this discussion to the point when 
we'll get some important use cases that could not be addressed by 'VIP as 
loadbalancer'

We haven't really heard any negative opinions other than what is coming 
from you and Sam. And it looks like Sam's objection is that he has predefined 
physical loadbalancers already sitting on a rack. For example, if he has a rack 
of 8 physical loadbalancers then he only has 8 loadbalancer_ids that are 
shared by many users, and for some reason this is locking him into the belief 
that he shouldn't expose loadbalancer objects directly to the customer. This is 
somewhat alien to us, as we also have physicals in our CLB1.0 product but we 
still use the notion of loadbalancer objects that are shared across a single 
Stingray host. We don't equate a loadbalancer with an actual Stingray host.

If Sam needs help wrapping a virtual loadbalancer object in his API, let us 
know; we would like to help with that, as we firmly know it's awkward to take 
something such as neutron/lbaas and interpret it to be Virtual IPs as a 
service.  We've done that with our API in CLB1.0.

Carlos.

Eugene.

On Fri, May 9, 2014 at 8:33 AM, Carlos Garza 
carlos.ga...@rackspace.com wrote:

On May 8, 2014, at 2:45 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi Carlos,

Are you saying that we should have a loadbalancer resource only in the 
case where we want it to span multiple L2 networks as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further?
No, I mean that a loadbalancer instance is needed if we need several *different* 
L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we've discussed on 
today's meeting.

   From looking at the IRC log it looks like nothing conclusive came out of the 
meeting. I don't understand a lot of the conclusions you arrive at. For example, 
you're rejecting the notion of a loadbalancer concrete object unless it's needed 
to include multi-L2-network support. Will you make an honest effort to describe 
your objections here on the ML, because if we can't resolve it here it's going 
to spill over into the summit. I certainly don't want this to dominate the summit.



Eugene.







Re: [openstack-dev] powervc-driver project in Stackforge

2014-05-09 Thread Xiandong Meng
We have fixed the copyright headers and added the LICENSE and README to
the project. Thanks for your kind reminder.

Xiandong Meng
mengxiand...@gmail.com


On Thu, May 8, 2014 at 11:02 AM, Sean Dague s...@dague.net wrote:

 Copyright is fine and appropriate:

 (C) Copyright IBM Corp. 2013 All Rights Reserved

 However, the OCO declaration is not compatible with Open Source. And it
 should also do the copyright like it is done in other projects: a comment
 header, not in a variable.

 -Sean

 On 05/08/2014 11:55 AM, Xiandong Meng wrote:
  We will fix it by adding in the Apache License file and the readme file
  to the repo. BTW, I feel it is OK to retain copyright in the source code.
 
 
  On Thu, May 8, 2014 at 10:32 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 05/08/2014 11:14 AM, Xiandong Meng wrote:
   Hi,
  
   Just a heads up that we have upstreamed the powervc-driver project
 to
   Stackforge. PowerVC is the strategic product for IBM Power
   virtualization management, and the powervc-driver project contains a
   set of OpenStack drivers (nova, cinder and neutron) and utilities for
   managing Power via PowerVC. With this project, community OpenStack
   users can build a mixed cloud environment with both x86 and Power
   servers.  More documentation will be provided in the launchpad and
 the
   initial CI support for the powervc-driver project will be launched
   within next 1-2 weeks.  Comments/feedback are welcome.
  
   PowerVC driver repository stackforge:
   https://github.com/stackforge/powervc-driver
   PowerVC driver project on launchpad:
   https://launchpad.net/powervc-driver
 
  This repository does not include even a README at this point.
 Starting
  with a README would be really nice.
 
  Also it looks like it's not actually Open Source -
 
 https://github.com/stackforge/powervc-driver/blob/master/nova-powervc/powervc/utils.py#L1-L9
  (that's just an example, it's littered throughout the code).
 
  Which I assume is actually a violation of being on stackforge. So
 that
  needs to be fixed ASAP or we should delete it.
 
  -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
 
 
 
 
 


 --
 Sean Dague
 http://dague.net






-- 
Regards,

Xiandong Meng mengxiand...@gmail.com


Re: [openstack-dev] Hierarchical administrative boundary [keystone]

2014-05-09 Thread Tiwari, Arvind
Hi All,

Thanks for looking into my proposal. Below are my comments and answers to 
questions, which are based on “my personal opinion”.

Why domain hierarchy, why not project hierarchy? Because project hierarchy is 
more impactful and needs cross-project changes.

As per my understanding, we are all trying to solve one business problem, 
which is how to support a “VPC or Reseller” model on an OS-based cloud deployment.  
As per the problem described in the different proposals, it is purely an IAM use 
case, where different identities (users, admins, resellers ….) have different 
perceptions of the system/resources (IAM and non-IAM) and want the ability 
to manage them.

Keystone (the OS IAM service) abstracts all the IAM complexity from lower level 
services (Nova, Swift, Cinder …) by providing a unified integration model (auth 
token and verification by auth middleware). Lower level services trust 
Keystone and allow access (for particular requests) to the actual resource based 
on the subject’s roles provided by Keystone.

Each service supports multi-tenancy, and the tenancy mapping is established by 
Keystone through projects.  If hierarchy is enforced at the project level then we 
need to propagate the hierarchy info to all lower level services, where the 
hierarchy info serves no good purpose but is just used to map one 
tenant. Enforcing the hierarchy at the project level is more impactful because all 
services have to change their implementation to consume the notion of 
hierarchy. Propagating the project hierarchy to services would make sense if end 
resources (VMs, Cinder volumes, Swift resources ….) obeyed the hierarchy 
based on projects; I think that is not the case.

As per definition, domains are containers for projects, users and groups, and map 
well to business entities (ProductionIT, SuperDevShop, WidgetMaster, SPI, 
reseller …). Using domains to establish the hierarchy (as per my design) will 
abstract the complexity from lower level services. Services don’t have to worry 
about the domain hierarchy, we can retain the current integration model (Keystone 
project -> service tenant), and there is no need to make a big change in different 
services: mostly a one-place change, which is Keystone.

Services have to be domain aware

IMO services (Nova, Swift …) don’t have to be domain aware (unless I am missing 
something), as they manage resources for Keystone projects. Domain is an IAM 
concept used to scope IAM resources and not very useful for end services. 
I think what we are lacking is a unique role (role name) per service; having 
unique role names for each service (IAM, Nova, Swift ….) will resolve the 
problem mentioned below by Yaguang Tang.

Please let me know why services would have to be domain aware.

Thoughts?

Thanks,
Arvind

Note:
IAM Resources – Users, groups, projects …
Non IAM resources – VMs, Swift objects, …….

From: Yaguang Tang [mailto:yaguang.t...@canonical.com]
Sent: Friday, May 09, 2014 4:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Hierarchical administrative boundary [keystone]

Frittoli,

I think for other services we could achieve that by modifying the 
policy.json (add a domain admin role and control what the cloud admin can do) so 
that a domain admin user is able to manage resources belonging to
users and projects in that domain.
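Yaguang's suggestion can be sketched concretely. The fragment below is a hypothetical policy.json excerpt in the style of Keystone's v3 sample policy; the rule names (domain_admin, admin_or_domain_admin) and the specific identity: targets are illustrative assumptions, not shipped defaults. Note that for non-Keystone services the policy targets would need to expose a domain_id attribute to check against, which is exactly the domain-awareness question debated in this thread:

```json
{
    "admin_required": "role:admin or is_admin:1",
    "domain_admin": "role:admin and domain_id:%(domain_id)s",
    "admin_or_domain_admin": "rule:admin_required or rule:domain_admin",

    "identity:list_projects": "rule:admin_or_domain_admin",
    "identity:create_project": "rule:admin_or_domain_admin",
    "identity:delete_project": "rule:admin_or_domain_admin"
}
```

With rules along these lines, a user holding the admin role on a domain could manage projects within that domain, while cloud-admin access stays unchanged.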

2014-05-09 15:24 GMT+08:00 Frittoli, Andrea (HP Cloud) 
fritt...@hp.com:
From: Adam Young [mailto:ayo...@redhat.com]
Sent: 09 May 2014 04:19
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Hierarchical administrative boundary [keystone]

On 05/08/2014 07:55 PM, Tiwari, Arvind wrote:
Hi All,

Below is my proposal to address VPC use case using hierarchical administrative 
boundary. This topic is scheduled in the Hierarchical Multitenancy session 
(http://junodesignsummit.sched.org/event/20465cd62e9054d4043dda156da5070e)
of the Atlanta design summit.

https://wiki.openstack.org/wiki/Hierarchical_administrative_boundary

Please take a look.

Thanks,
Arvind




Looks very good.  One question: why hierarchical domains and not projects?
I'm not disagreeing, mind you; it's just that I think the Nova team is going for 
hierarchical projects.


Looks good, thank you!

But for this to be even more interesting, nova (and other services) should be 
domain aware – e.g. so that a domain admin could have control over all resources 
which belong to users and projects in that domain.

andrea



Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Stephen Balukoff
Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address
and an IPv6 address. This is how it is in the API proposal and
corresponding object model that I suggested, but it is a slight
re-definition of the term virtual IP as it's used in the rest of the
industry. (And again, we're not yet in agreement that 'VIP' should actually
contain two ip addresses like this.)

In my mind, the main reasons I would like to see the container object are:


   - It solves the colocation / apolocation (or affinity / anti-affinity)
   problem for VIPs in a way that is much more intuitive to understand and
   less confusing for users than either the hints included in my API, or
   something based off the nova blueprint for doing the same for virtual
   servers/containers. (Full disclosure: There probably would still be a need
   for some anti-affinity logic at the logical load balancer level as well,
   though at this point it would be an operator concern only and expressed to
   the user in the flavor of the logical load balancer object, and probably
   be associated with different billing strategies. The user wants a
   dedicated physical load balancer? Then he should create one with this
   flavor, and note that it costs this much more...)
   - From my experience, users are already familiar with the concept of
   what a logical load balancer actually is (ie. something that resembles a
   physical or virtual appliance from their perspective). So this probably
   fits into their view of the world better.
   - It makes sense for Load Balancer as a Service to hand out logical
   load balancer objects. I think this will aid in a more intuitive
   understanding of the service for users who otherwise don't want to be
   concerned with operations.
   - This opens up the option for private cloud operators / providers to
   bill based on number of physical load balancers used (if the logical load
   balancer happens to coincide with physical load balancer appliances in
   their implementation) in a way that is going to be seen as more fair and
   more predictable to the user because the user has more control over it.
   And it seems to me this is accomplished without producing any undue burden
   on public cloud providers, those who don't bill this way, or those for whom
   the logical load balancer doesn't coincide with physical load balancer
   appliances.
   - Attaching a flavor attribute to a logical load balancer seems like a
   better idea than attaching it to the VIP. What if the user wants to change
   the flavor on which their VIP is deployed (ie. without changing IP
   addresses)? What if they want to do this for several VIPs at once? I can
   definitely see this happening in our customer base through the lifecycle of
   many of our customers' applications.
   - Having flavors associated with load balancers and not VIPs also allows
   for operators to provide a lot more differing product offerings to the user
   in a way that is simple for the user to understand. For example:
  - Flavor A is the cheap load balancer option, deployed on a
  shared platform used by many tenants that has fewer guarantees around
  performance and costs X.
  - Flavor B is guaranteed to be deployed on vendor Q's Super
  Special Product (tm) but to keep down costs, may be shared with other
  tenants, though not among a single tenant's load balancers unless the
  tenant uses the same load balancer id when deploying their VIPs (ie. user
  has control of affinity among their own VIPs, but no control over whether
  affinity happens with other tenants). It may experience variable
  performance as a result, but has higher guarantees than the
above and costs
  a little more.
  - Flavor C is guaranteed to be deployed on vendor P's Even Better
  Super Special Product (tm) and is also guaranteed not to be shared among
  tenants. This is essentially the dedicated load balancer
option that gets
  you the best guaranteed performance, but costs a lot more than the above.
  - ...and so on.
   - A logical load balancer object is a great demarcation point
   (http://en.wikipedia.org/wiki/Demarcation_point) between
   operator concerns and user concerns. It seems likely that there will be an
   operator API created, and this will need to interface with the user API at
   some well-defined interface. (If you like, I can provide a couple specific
   operator concerns which are much more easily accomplished without
   disrupting the user experience using the demarc at the 'load balancer'
   instead of at the 'VIP'.)


So what are the main arguments against having this container object? In
answering this question, please keep in mind:


   - If you say "implementation details", please just go ahead and be more
   specific, because that's what I'm going to ask you to do anyway. If
   "implementation details" is the concern, please follow this with a
   hypothetical or concrete example as 

Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-09 Thread Carl Baldwin
Fantastic!  Works for me.

Thanks,
Carl

On Fri, May 9, 2014 at 3:33 AM, mar...@redhat.com mandr...@redhat.com wrote:
 On 08/05/14 21:29, Henry Gessau wrote:
 Have any of you javascript gurus respun this for the new gerrit version?
 Or can this now be done on the backend somehow?

 haha, have been thinking this since the gerrit upgrade a couple days
 ago. It was very useful for reviews... I am NOT a javascript guru but
 since it's Friday I gave myself 15 minutes to play with it - this works
 for me:


 javascript:(function(){
 list = document.querySelectorAll('table.commentPanelHeader');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
 if (text.innerHTML.search('Build failed') > 0) {
 title.style.color='red'
 } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
 title.style.color='#666666'
 } else {
 title.style.color='blue'
 }
 }
 })()


 marios
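For anyone who finds the minified bookmarklets hard to follow, the classification rule they all share can be pulled out into a plain function. This is only a sketch: pickColor is a name invented here, the color values mirror the reconstructed ones above, and the DOM selectors that feed it ('table.commentPanelHeader', 'td.GJEA35ODGC') are Gerrit-version-specific:

```javascript
// Decide the display color for a Gerrit comment header, mirroring the
// bookmarklet logic in this thread: failed CI builds red, other automated
// comments grey, human reviews blue.
function pickColor(title, text) {
  if (/Build failed/.test(text)) {
    return 'red';       // CI reported a failed build
  }
  if (/Jenkins|CI|Ryu|Testing|Mine/.test(title)) {
    return '#666666';   // other automated (CI) comments are greyed out
  }
  return 'blue';        // human reviews stay prominent
}
```

The bookmarklet itself then only has to loop over the comment header cells and assign element.style.color = pickColor(title, text).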


 On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:

 Nachi,

 Great!  I'd been meaning to do something like this.  I took yours and
 tweaked it a bit to highlight failed Jenkins builds in red and grey
 other Jenkins messages.  Human reviews are left in blue.

 javascript:(function(){
 list = document.querySelectorAll('td.GJEA35ODGC');
 for(i in list) {
 title = list[i];
 if(! title.innerHTML) { continue; }
 text = title.nextSibling;
  if (text.innerHTML.search('Build failed') > 0) {
  title.style.color='red'
  } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
  title.style.color='#666666'
 } else {
 title.style.color='blue'
 }
 }
 })()

 Carl

 On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 I wrote an bookmarklet for neutron gerrit review.
 This bookmarklet make the comment title for 3rd party ci as gray.

 javascript:(function(){list =
 document.querySelectorAll('td.GJEA35ODGC'); for(i in
  list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
  list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
  0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()

 enjoy :)
 Nachi











Re: [openstack-dev] [Neutron][QA] Partial fix for unittest memory consumption

2014-05-09 Thread Edgar Magana Perdomo (eperdomo)
I ran the tests in my dev server and I noticed the decrease in memory 
consumption. Very good job Maru!
The patch has been approved!

Edgar

From: Salvatore Orlando sorla...@nicira.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, May 8, 2014 at 7:30 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][QA] Partial fix for unittest memory 
consumption

Thanks Maru,

I've added this patch to my list of patches to review.
I've also targeted the bug at J-1 because I love being a pedantic bookkeeper.

Salvatore


On 8 May 2014 11:41, Maru Newby ma...@redhat.com 
wrote:
Memory usage due to plugin+mock leakage is addressed by the following patch:

https://review.openstack.org/#/c/92793/

I'm seeing residual (post-test) memory usage decrease from ~4.5gb to ~1.3gb 
with 12 concurrent test runners.  Of the 1.3gb, sqlalchemy is taking the lion's 
share at ~500mb, so that will be my next target.


Maru



Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Sandhya Dasu (sadasu)
Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the etherpad
so it becomes easier for people to get on board. We will add author indent
to track the source of the changes. Will work on cleaning that up.

Regarding the session itself, as you probably know, there was an attempt
in Icehouse to get the sr-iov work going. We found that the time allotted
for the session was not sufficient to get to all the use cases and discuss
alternate views. 

This time around we want to be better prepared and so would like to keep
only a couple of open times for the actual session. Hence, the request for
the early meeting. 

How does Monday 1pm sound?

Thanks,
Sandhya

On 5/9/14 11:44 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki
page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference
session
   before the actual summit session on SR-IOV. During the previous IRC
   meeting, we were really close to identifying the different use
cases.
   There was a dangling discussion on introducing another level of
   indirection between the vnic_types exposed via the nova boot API
and
 how
   it would be represented internally. It would be ideal to have
these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, a
  large part of being able to solve this problem is getting everyone
up to
  speed on what this means, what the caveats are, and what we're
trying to
  solve. If we do some of that outside the scope of the larger
audience, I
  expect we'll get less interaction (or end up covering it again) in
the
  session.
  
  That said, if there's something I'm missing that needs to be resolved
  ahead of time, then that's fine, but I expect the best plan is to
just
  keep the discussion to the session. Afterwards, additional things
can be
  discussed in a one-off manner, but getting everyone on the same page
is
  largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response...looking at the etherpad there
 is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing
 another session/meeting?
 
 Steve
 
 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)




Re: [openstack-dev] [infra] elements vs. openstack-infra puppet for CI infra nodes

2014-05-09 Thread Elizabeth K. Joseph
On Mon, May 5, 2014 at 5:51 AM, Dan Prince dpri...@redhat.com wrote:
 I originally sent this to TripleO but perhaps [infra] would have been a 
 better choice.

Adding the -infra mailing list on this too so folks see it as they
rush around doing pre-summit things.

 The short version is I'd like to run a lightweight (unofficial) mirror for 
 Fedora in infra:

  https://review.openstack.org/#/c/90875/

On the Debian side, I also have a bug (with some mirror discussion and
an attached review) here:

https://bugs.launchpad.net/openstack-ci/+bug/1311855

After discussing this particular patch+bug with the rest of the -infra
team, there wasn't a ton of interest in running an infra-based mirror
due to the package index out of sync issue in unofficial mirrors,
which would be a problem for us.

I had hoped we could sit down and chat about this at the summit for
both Fedora and Debian mirrors, but unfortunately I won't be able to
attend (been very sick this week, doctor didn't approve getting on a
plane on Sunday). So I'm hoping some other infra folks can sync up
with Dan and the TripleO crew to chat about how we can best get these
changes in so they'll work effectively for everyone. Also happy to
continue this discussion here on list or resume at a meeting after
summit.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Robert Li (baoli)
It sounds good to me.

Thanks Sandhya for organizing it.

--Robert

On 5/9/14, 2:51 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the etherpad
so it becomes easier for people to get on board. We will add author indent
to track the source of the changes. Will work on cleaning that up.

Regarding the session itself, as you probably know, there was an attempt
in Icehouse to get the sr-iov work going. We found that the time allotted
for the session was not sufficient to get to all the use cases and discuss
alternate views. 

This time around we want to be better prepared and so would like to keep
only a couple of open times for the actual session. Hence, the request for
the early meeting.

How does Monday 1pm sound?

Thanks,
Sandhya

On 5/9/14 11:44 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki
page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference
session
   before the actual summit session on SR-IOV. During the previous
IRC
   meeting, we were really close to identifying the different use
cases.
   There was a dangling discussion on introducing another level of
   indirection between the vnic_types exposed via the nova boot API
and
 how
   it would be represented internally. It would be ideal to have
these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, a
  large part of being able to solve this problem is getting everyone
up to
  speed on what this means, what the caveats are, and what we're
trying to
  solve. If we do some of that outside the scope of the larger
audience, I
  expect we'll get less interaction (or end up covering it again) in
the
  session.
  
  That said, if there's something I'm missing that needs to be
resolved
  ahead of time, then that's fine, but I expect the best plan is to
just
  keep the discussion to the session. Afterwards, additional things
can be
  discussed in a one-off manner, but getting everyone on the same page
is
  largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response...looking at the etherpad
there
 is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing
 another session/meeting?
 
 Steve
 
 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)





Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Brent Eagles

On 09/05/14 04:21 PM, Sandhya Dasu (sadasu) wrote:

Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the etherpad
so it becomes easier for people to get on board. We will add author indent
to track the source of the changes. Will work on cleaning that up.

Regarding the session itself, as you probably know, there was an attempt
in Icehouse to get the sr-iov work going. We found that the time allotted
for the session was not sufficient to get to all the use cases and discuss
alternate views.

This time around we want to be better prepared and so would like to keep
only a couple of open times for the actual session. Hence, the request for
the early meeting.

How does Monday 1pm sound?

Thanks,
Sandhya


That time is good with me.

Cheers,

Brent




Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Irena Berezovsky
Works for me as well.
What would be the meeting place? 

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Friday, May 09, 2014 10:13 PM
To: Sandhya Dasu (sadasu); Steve Gordon
Cc: Dan Smith; OpenStack Development Mailing List (not for usage questions); 
John Garbutt; Russell Bryant; yunhong-jiang; Itzik Brown; Brent Eagles; Yongli 
He; Jay Pipes; Irena Berezovsky
Subject: Re: Informal meeting before SR-IOV summit presentation

It sounds good to me.

Thanks Sandhya for organizing it.

Robert

On 5/9/14, 2:51 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the 
etherpad so it becomes easier for people to get on board. We will add 
author indent to track the source of the changes. Will work on cleaning that 
up.

Regarding the session itself, as you probably know, there was an 
attempt in Icehouse to get the sr-iov work going. We found that the 
time allotted for the session was not sufficient to get to all the use 
cases and discuss alternate views.

This time around we want to be better prepared and so would like to 
keep only a couple of open times for the actual session. Hence, the 
request for the early meeting.

How does Monday 1pm sound?

Thanks,
Sandhya

On 5/9/14 11:44 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki 
page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference
session
   before the actual summit session on SR-IOV. During the previous
IRC
   meeting, we were really close to identifying the different use
cases.
   There was a dangling discussion on introducing another level of 
   indirection between the vnic_types exposed via the nova boot 
   API
and
 how
   it would be represented internally. It would be ideal to have
these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, a
  large part of being able to solve this problem is getting everyone up
  to speed on what this means, what the caveats are, and what we're
  trying to solve. If we do some of that outside the scope of the larger
  audience, I expect we'll get less interaction (or end up covering it
  again) in the session.
  
  That said, if there's something I'm missing that needs to be resolved
  ahead of time, then that's fine, but I expect the best plan is to just
  keep the discussion to the session. Afterwards, additional things can
  be discussed in a one-off manner, but getting everyone on the same
  page is largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response... looking at the etherpad
 there is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing 
 another session/meeting?
 
 Steve
 
 

--
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform Red Hat 
Canada (Toronto, Ontario)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Sandhya Dasu (sadasu)
I have no idea how to pick a location.
Should we meet at the Cisco booth at 1pm and then take it from there?

Any other ideas?

Thanks,
sandhya

On 5/9/14 3:17 PM, Brent Eagles beag...@redhat.com wrote:

On 09/05/14 04:21 PM, Sandhya Dasu (sadasu) wrote:
 Thanks for all your replies.

 Thanks for the great inputs on how to frame the discussion in the
 etherpad so it becomes easier for people to get on board. We will add
 author indent to track the source of the changes. Will work on cleaning
 that up.

 Regarding the session itself, as you probably know, there was an attempt
 in Icehouse to get the sr-iov work going. We found that the time allotted
 for the session was not sufficient to get to all the use cases and
 discuss alternate views.

 This time around we want to be better prepared and so would like to keep
 only a couple of open times for the actual session. Hence, the request
 for the early meeting.

 How does Monday 1pm sound?

 Thanks,
 Sandhya

That time is good with me.

Cheers,

Brent



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Stephen Balukoff
+1 to everything Brandon just said. :)

Stephen


On Fri, May 9, 2014 at 8:40 AM, Brandon Logan
brandon.lo...@rackspace.com wrote:

 Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a
 single load balancer.  However, I don't think it is a matter of it being
 needed.  It's a matter of having an API that makes sense to a user.
 Just because the API has multiple VIPs doesn't mean every VIP needs its
 own port.  In fact creating a port is an implementation detail (you know
 that phrase that everyone throws out to stonewall any discussions?).
 The user doesn't care how many neutron ports are set up underneath, they
 only care about the VIPs.

 Also, the load balancer wouldn't just be a container, the load balancer
 would have flavor, affinity, and other metadata on it.  Plus, a user
 will expect to get a load balancer back.  Since this object can only be
 described as a load balancer, the name of it shouldn't be up for debate.

 The API is meant to be a generic language that can be translated into a
 working load balancer and should be driver agnostic.  We believe this is
 the most generic and flexible API structure.  Each driver will be able
 to translate this into what makes sense for that product.

 On a side note, if this is too disruptive for the current LBaaS then why
 couldn't this go into Neutron V3?  I thought that was the plan all along
 anyway with redesigning the API.

 Thanks,
 Brandon

 On Fri, 2014-05-09 at 14:30 +0400, Eugene Nikanorov wrote:
  Hi folks,
 
 
  I'm pulling this question out of another discussion:
 
 
  Is there a need to have multiple VIPs (e.g. multiple L2 ports/IP
  addresses) per logical loadbalancer?
  If so, we need the description of such cases to evaluate them.
 
 
  Thanks,
  Eugene.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-09 Thread Stephen Balukoff
Sam,

That deadline seems reasonable to me. I should have time later today or
later this weekend to fill it out.

Thanks,
Stephen


On Fri, May 9, 2014 at 9:21 AM, Samuel Bercovici samu...@radware.com wrote:

  Hi,



 9 people have filled the survey so far.

 See attached pdf.



 Regards,

 -Sam.





 *From:* Samuel Bercovici
 *Sent:* Thursday, May 08, 2014 2:51 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey



 Hi everyone,



 I think this is good feedback for prioritizing features.

 I can publish the results in time for the 2nd LBaaS meeting (so deadline
 would be end of 15th May “summit time”)

 Is this acceptable?



 -Sam.







 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]

 *Sent:* Thursday, May 08, 2014 2:17 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey



 Hi Samuel--



 I've been heads down working on API proposal review documentation, and
 haven't had time to fill it out yet. Do you have a deadline by which we
 should have filled out the survey to get our voices heard?



 Thanks,

 Stephen



 On Wed, May 7, 2014 at 2:16 PM, Samuel Bercovici samu...@radware.com
 wrote:

 6 people have completed the survey so far.





 *From:* Samuel Bercovici
 *Sent:* Tuesday, May 06, 2014 10:56 AM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey



 Hi Everyone,



 The survey is now live via: http://eSurv.org?u=lbaas_project_user

 The password is: lbaas



 The survey includes all the tenant facing use cases from
 https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing

 Please try and fill the survey this week so we can have enough information
 to base decisions next week.



 Regards,

 -Sam.







 *From:* Samuel Bercovici
 *Sent:* Monday, May 05, 2014 4:52 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey



 Hi,



 I will not freeze the document to allow people to work on requirements
 which are not tenant facing (ex: operator, etc.)

 I think that we have enough use cases for tenant facing capabilities to
 reflect most common use cases.

 I am in the process of creating a survey in SurveyMonkey for tenant-facing
 use cases and hope to send it to the ML ASAP.



 Regards,

 -Sam.





 *From:* Samuel Bercovici
 *Sent:* Thursday, May 01, 2014 8:40 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Samuel Bercovici
 *Subject:* [openstack-dev] [Neutron][LBaaS] User Stories and survey



 Hi Everyone!



 To assist in evaluating the use cases that matter and since we now have
 ~45 use cases, I would like to propose to conduct a survey using something
 like surveymonkey.

 The idea is to have a non-anonymous survey listing the use cases and ask
 you to identify and vote.

 Then we will publish the results and can prioritize based on this.



 To do so in a timely manner, I would like to freeze the document for
 editing and allow only comments by Monday May 5th 08:00 AM UTC and publish
 the survey link to ML ASAP after that.



 Please let me know if this is acceptable.



 Regards,

 -Sam.








 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Samuel Bercovici
Brandon,

Can you please provide statistics on the distribution of VIPs per load
balancer in your environment?

-Sam.


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Friday, May 09, 2014 6:40 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a single load 
balancer.  However, I don't think it is a matter of it being needed.  It's a 
matter of having an API that makes sense to a user.
Just because the API has multiple VIPs doesn't mean every VIP needs its own 
port.  In fact creating a port is an implementation detail (you know that 
phrase that everyone throws out to stonewall any discussions?).
The user doesn't care how many neutron ports are set up underneath, they
only care about the VIPs.   

Also, the load balancer wouldn't just be a container, the load balancer would 
have flavor, affinity, and other metadata on it.  Plus, a user will expect to 
get a load balancer back.  Since this object can only be described as a load 
balancer, the name of it shouldn't be up for debate.

The API is meant to be a generic language that can be translated into a working 
load balancer and should be driver agnostic.  We believe this is the most 
generic and flexible API structure.  Each driver will be able to translate this 
into what makes sense for that product.

On a side note, if this is too disruptive for the current LBaaS then why 
couldn't this go into Neutron V3?  I thought that was the plan all along anyway 
with redesigning the API.

Thanks,
Brandon  

On Fri, 2014-05-09 at 14:30 +0400, Eugene Nikanorov wrote:
 Hi folks,
 
 
 I'm pulling this question out of another discussion:
 
 
 Is there a need to have multiple VIPs (e.g. multiple L2 ports/IP
 addresses) per logical loadbalancer?
 If so, we need the description of such cases to evaluate them.
 
 
 Thanks,
 Eugene.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Eugene Nikanorov
On Fri, May 9, 2014 at 7:40 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:

 Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a
 single load balancer.

For sure that can be supported by particular physical appliance, but I
doubt we need to translate it to logical loadbalancer.


 However, I don't think it is a matter of it being
 needed.  It's a matter of having an API that makes sense to a user.
 Just because the API has multiple VIPs doesn't mean every VIP needs its
 own port. In fact creating a port is an implementation detail (you know

that phrase that everyone throws out to stonewall any discussions?).
 The user doesn't care how many neutron ports are set up underneath, they
 only care about the VIPs.

Right, port creation is an implementation detail; however, L2 connectivity for
the frontend is a definite API expectation.
I think VIP creation should have clear semantics: the user creates an L2
endpoint, e.g. an L2 port + an IPv4 [+ IPv6] address.
If we agree that we only need 1 L2 port per logical loadbalancer, then it
could be handled by two API/objmodel approaches:

1) loadbalancer + VIPs,  1:n relationship
2) VIP + listeners, 1:n relationship
You see that from the API and object model structure perspective, those
approaches are exactly the same.
However, in (1) we would need to specify L3 information (IPv4 + IPv6
addresses, subnet_id) for the loadbalancer, and that would be inherited by the
VIPs, which would keep info about L4+.
To me that seems a little bit confusing (per our glossary).

In the second approach, the VIP remains the keeper of L2/L3 information,
while listeners keep L4+ information.
That seems clearer.
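To make the comparison concrete, the two 1:n shapes described above could be sketched roughly as follows (class and field names are illustrative only, not the actual Neutron object model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Approach 1: loadbalancer + VIPs (1:n).
# L2/L3 information lives on the loadbalancer and is inherited by its
# VIPs, which then only carry L4+ information.

@dataclass
class Vip1:
    protocol: str        # L4+ info, e.g. "HTTP"
    protocol_port: int

@dataclass
class LoadBalancer:
    subnet_id: str                     # L2/L3 info on the container
    ipv4_address: str
    ipv6_address: Optional[str] = None
    vips: List[Vip1] = field(default_factory=list)

# Approach 2: VIP + listeners (1:n).
# The VIP remains the keeper of L2/L3 information; listeners carry L4+.

@dataclass
class Listener:
    protocol: str
    protocol_port: int

@dataclass
class Vip2:
    subnet_id: str                     # L2/L3 info on the VIP itself
    ipv4_address: str
    ipv6_address: Optional[str] = None
    listeners: List[Listener] = field(default_factory=list)
```

Structurally the two trees are identical; the only difference is which object carries the L2/L3 attributes, which is exactly the question of where the semantics read more clearly.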

In case we want more than one L2 port, we need to combine those
approaches and have loadbalancer+VIPs+Listeners, where the loadbalancer is a
container that maps to a backend.
However, as discussed at the last meeting, we don't want to let the user have
direct control over the backend.
Also, we've heard objections to this approach several times from other core
team members (this discussion has been going on for more than half a year
now), so I would suggest moving forward with the single L2 port approach. Then
the question comes down to terminology: loadbalancer/VIPs or VIP/Listeners.


Also, the load balancer wouldn't just be a container, the load balancer
 would have flavor, affinity, and other metadata on it.  Plus, a user
 will expect to get a load balancer back.  Since this object can only be
 described as a load balancer, the name of it shouldn't be up for debate.

Per comments above - VIP can also play this role.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Samuel Bercovici
It boils down to two aspects:

1.   How common is it for a tenant to care about affinity, or to have more than
a single VIP, in a way that makes an additional (mandatory) construct worth
handling?

For example, if 99% of users do not care about affinity or will only use a
single VIP (with multiple listeners), does adding an additional object that
tenants need to know about make sense?

2.   How to schedule this so that it can be handled efficiently by different
vendors and SLAs. We can elaborate on this F2F next week.

Can providers share their statistics to assist to understand how common are 
those use cases?

Regards,
-Sam.



From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, May 09, 2014 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address and 
an IPv6 address. This is how it is in the API proposal and corresponding object 
model that I suggested, but it is a slight re-definition of the term 'virtual 
IP' as it's used in the rest of the industry. (And again, we're not yet in 
agreement that 'VIP' should actually contain two IP addresses like this.)

In my mind, the main reasons I would like to see the container object are:


  *   It solves the colocation / apolication (or affinity / anti-affinity) 
problem for VIPs in a way that is much more intuitive to understand and less 
confusing for users than either the hints included in my API, or something 
based off the nova blueprint for doing the same for virtual servers/containers. 
(Full disclosure: There probably would still be a need for some anti-affinity 
logic at the logical load balancer level as well, though at this point it would 
be an operator concern only and expressed to the user in the flavor of the 
logical load balancer object, and probably be associated with different billing 
strategies. The user wants a dedicated physical load balancer? Then he should 
create one with this flavor, and note that it costs this much more...)
  *   From my experience, users are already familiar with the concept of what a 
logical load balancer actually is (ie. something that resembles a physical or 
virtual appliance from their perspective). So this probably fits into their 
view of the world better.
  *   It makes sense for Load Balancer as a Service to hand out logical load 
balancer objects. I think this will aid in a more intuitive understanding of 
the service for users who otherwise don't want to be concerned with operations.
  *   This opens up the option for private cloud operators / providers to bill 
based on number of physical load balancers used (if the logical load balancer 
happens to coincide with physical load balancer appliances in their 
implementation) in a way that is going to be seen as more fair and more 
predictable to the user because the user has more control over it. And it 
seems to me this is accomplished without producing any undue burden on public 
cloud providers, those who don't bill this way, or those for whom the logical 
load balancer doesn't coincide with physical load balancer appliances.
  *   Attaching a flavor attribute to a logical load balancer seems like a 
better idea than attaching it to the VIP. What if the user wants to change the 
flavor on which their VIP is deployed (ie. without changing IP addresses)? What 
if they want to do this for several VIPs at once? I can definitely see this 
happening in our customer base through the lifecycle of many of our customers' 
applications.
  *   Having flavors associated with load balancers and not VIPs also allows 
for operators to provide a lot more differing product offerings to the user in 
a way that is simple for the user to understand. For example:

 *   Flavor A is the cheap load balancer option, deployed on a shared 
platform used by many tenants that has fewer guarantees around performance and 
costs X.
 *   Flavor B is guaranteed to be deployed on vendor Q's Super Special 
Product (tm) but to keep down costs, may be shared with other tenants, though 
not among a single tenant's load balancers unless the tenant uses the same 
load balancer id when deploying their VIPs (ie. user has control of affinity 
among their own VIPs, but no control over whether affinity happens with other 
tenants). It may experience variable performance as a result, but has higher 
guarantees than the above and costs a little more.
 *   Flavor C is guaranteed to be deployed on vendor P's Even Better 
Super Special Product (tm) and is also guaranteed not to be shared among 
tenants. This is essentially the dedicated load balancer option that gets you 
the best guaranteed performance, but costs a lot more than the above.
 *   ...and so on.

  *   A logical load balancer object is a great demarcation point 

Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Eichberger, German
Our current HP LBaaS implementation only supports one VIP (aka one IP address), 
so statistics on multiple VIPs will be hard to find. We have been asked for use 
cases to support IPv4 and IPv6, as Rackspace has. 

I have heard of some use cases where a load balancer (loosely defined as a 
container) might have an intranet IP and a public IP... admittedly that can be 
solved with two load balancers, so that might not be too big an issue...

German


-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Friday, May 09, 2014 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

Brandon,

Can you please provide statistics on the distribution of VIPs per load 
balancer in your environment?

-Sam.


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Friday, May 09, 2014 6:40 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a single load 
balancer.  However, I don't think it is a matter of it being needed.  It's a 
matter of having an API that makes sense to a user.
Just because the API has multiple VIPs doesn't mean every VIP needs its own 
port.  In fact creating a port is an implementation detail (you know that 
phrase that everyone throws out to stonewall any discussions?).
The user doesn't care how many neutron ports are set up underneath, they
only care about the VIPs.   

Also, the load balancer wouldn't just be a container, the load balancer would 
have flavor, affinity, and other metadata on it.  Plus, a user will expect to 
get a load balancer back.  Since this object can only be described as a load 
balancer, the name of it shouldn't be up for debate.

The API is meant to be a generic language that can be translated into a working 
load balancer and should be driver agnostic.  We believe this is the most 
generic and flexible API structure.  Each driver will be able to translate this 
into what makes sense for that product.

On a side note, if this is too disruptive for the current LBaaS then why 
couldn't this go into Neutron V3?  I thought that was the plan all along anyway 
with redesigning the API.

Thanks,
Brandon  

On Fri, 2014-05-09 at 14:30 +0400, Eugene Nikanorov wrote:
 Hi folks,
 
 
 I'm pulling this question out of another discussion:
 
 
 Is there a need to have multiple VIPs (e.g. multiple L2 ports/IP
 addresses) per logical loadbalancer?
 If so, we need the description of such cases to evaluate them.
 
 
 Thanks,
 Eugene.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Docker] Master node bootstrapping issues

2014-05-09 Thread Vladimir Kuklin
Hi all

We are still experiencing some issues with master node bootstrapping after
moving to container-based installation.

First of all, these issues are related to our system tests. We use rather
small nodes as the master node - only 1 GB of RAM and 1 virtual CPU. As we
are using a strongly lrzipped archive, this is not quite enough and leads to
timeouts during deployment of the master node.

I have several suggestions:

1) Increase the amount of RAM for the master node to at least 8 gigabytes (or
do some PCI virtual memory hotplug during master node bootstrapping) and add
an additional vCPU for the master node.
2) Run system tests with a non-containerized environment (env variable
PRODUCTION=prod set)
3) Split our system tests so that no more than 2 master nodes bootstrap
simultaneously on a single hardware node.
4) Use the weakest lrzip compression during the development phase and compress
strongly only when we do a release
5) Increase the bootstrap timeout for the master node in system tests
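As a rough illustration of suggestion 4: lrzip exposes both the compression backend and the level on the command line, so the development/release split could look something like the following (flags per lrzip's manual; the archive name is hypothetical):

```shell
# Development builds: fast, weak compression (lzo backend, lowest level)
lrzip -l -L1 -o fuel-master.tar.lrz fuel-master.tar

# Release builds: slow, strong compression (zpaq backend, highest level)
lrzip -z -L9 -o fuel-master.tar.lrz fuel-master.tar
```

The weak variant decompresses far faster on a 1 vCPU / 1 GB node, which is where the bootstrap timeouts are being hit.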


Any input would be appreciated.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] elements vs. openstack-infra puppet for CI infra nodes

2014-05-09 Thread James E. Blair
Elizabeth K. Joseph l...@princessleia.com writes:

 On the Debian side, I also have a bug (with some mirror discussion and
 an attached review) here:

 https://bugs.launchpad.net/openstack-ci/+bug/1311855

 After discussing this particular patch+bug with the rest of the -infra
 team, there wasn't a ton of interest in running an infra-based mirror
 due to the package index out of sync issue in unofficial mirrors,
 which would be a problem for us.

 I had hoped we could sit down and chat about this at the summit for
 both Fedora and Debian mirrors, but unfortunately I won't be able to
 attend (been very sick this week, doctor didn't approve getting on a
 plane on Sunday). So I'm hoping some other infra folks can sync up
 with Dan and the TripleO crew to chat about how we can best get these
 changes in so they'll work effectively for everyone. Also happy to
 continue this discussion here on list or resume at a meeting after
 summit.

Thanks Dan and Liz.  Let's do try to sync up on this at the summit.  I
think this is important and there are good arguments for both
approaches.  I don't think it's an easy question to answer so let's
get together and consider the options.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-05-09 Thread Sergey Lukjanov
++ for B301

We'll be on summit for the whole week, probably, we could have two key
signing parties? bad_idea

On Fri, May 9, 2014 at 9:08 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-05-09 15:51:19 +0800 (+0800), Thomas Goirand wrote:
 What's the final decision? Which room will we use?

 I'm personally leaning toward B301 (in the Design Summit area) since
 I think that will make it easier for participants to arrive on time.
 As Thierry also suggested this and nobody else has expressed a
 preference or raised any objections, if I don't hear any within the
 next few hours I'll update the instructions to say B301 (and cancel
 the other dedicated room with the conference organizers so they can
 free up resources and correct the schedule before the weekend).
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New location for Juno Summit Key Signing

2014-05-09 Thread Jeremy Stanley
On 2014-05-09 16:08:52 + (+), Jeremy Stanley wrote:
[...]
 if I don't hear any within the next few hours I'll update the
 instructions to say B301 (and cancel the other dedicated room with
 the conference organizers so they can free up resources and
 correct the schedule before the weekend).

Sergey and the crickets have spoken! We will meet at the OpenStack
Juno Design Summit on Wednesday the 14th of May starting at 10:35 AM
EDT (14:35 UTC) in room B301 of the Georgia World Congress Center
(in between Infrastructure topic sessions). I have finalized details
at https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
accordingly. See you there!
-- 
Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-09 Thread Jiang, Yunhong
 
 This is why there is a distinction between properties set on images
 vs properties set on flavours. Image properties, which a normal user
 can set, are restricted to aspects of the VM which don't involve
 consumption of compute host resources. Flavour properties, which
 only a user with 'flavourmanage' permission can change, control
 aspects of the VM  config which consume finite compute resources.

I think the VM property should express the resource requirements, including CPU 
features like AES-NI, HVM/PV vCPU type, and PCI device type, because the image 
may have some special requirement for the resource, or a minimal RAM size 
required, etc. But IMHO it's not easy to say that image properties don't involve 
consumption of compute host resources. After all, they define the type of 
resources. 
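As a concrete illustration of the distinction being discussed — using the then-current CLI clients, with property keys chosen as examples rather than a definitive list — an image property can be set by a normal user, while a flavor extra spec requires the flavor-manage permission:

```shell
# Image property: a normal user may set guest-config hints on their own image
glance image-update --property hw_vif_model=virtio my-image

# Flavor extra spec: requires 'flavormanage' (admin by default), since it
# constrains consumption of finite compute host resources
nova flavor-key m1.custom set hw:cpu_policy=dedicated
```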
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Point of Contact Request for MagnetoDB

2014-05-09 Thread Endre Karlson
Is NASA still using OpenStack or ?


2014-05-09 19:10 GMT+02:00 Mathews, Tiffany J. (LARC-E301)[SCIENCE SYSTEMS
AND APPLICATIONS, INC] tiffany.j.math...@nasa.gov:

  I am interested in establishing an expert POC for questions and concerns
 regarding MagnetoDB as I am working on creating a technology repository
 for one of the NASA Data Centers to identify and track technologies that we
 may not be currently using, however, would like to consider for potential
 future use. MagnetoDB is a technology that we are interested in learning
 about more- especially with regard to security. We are also interested in
 seeing if there are any white papers or demonstrations that could help us
 better understand this technology.

  Any guidance is greatly appreciated!

  Tiffany Mathews





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How-to and best practices for developing new OpenStack service

2014-05-09 Thread Stefano Maffulli
On 05/08/2014 12:16 AM, Ruslan Kiianchuk wrote:
 Short tutorial for developers not familiar with OpenStack infrastructure
 still would be useful. My task is to help Python developers that haven't
 worked with OpenStack before start an OpenStack service development in
 the shortest possible period. If none will come up, I'll be thinking
 about writing one :)

I thought about a short tutorial and I realized that it wasn't going to
be short at all. So I opted for a full size training program instead :)
We're going to have the first 'full size' OpenStack Upstream Training
edition in Atlanta tomorrow so we'll know how it goes soon.

The reason why I opted for a longer training instead of a short tutorial
is that OpenStack is huge, in terms of code available *and* tools used
*and* social habits. The coexistence and the equal importance of all
these aspects made me decide that a simple tutorial would not adequately
serve the purpose of onboarding new OpenStack developers.

We're already thinking of running Upstream Training again before Paris
so stay tuned. I expect that with experience and more resources we'll
also be able to produce content to consume in 'pills' for more
continuous training of new developers.

Regards,
Stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] custom gerrit dashboard - per project review inbox zero

2014-05-09 Thread Nikhil Manchanda

Doug Shelley writes:

 Very cool - thanks for providing. One question - for Trove, we have
 Jenkins but also voting jobs on reddwarf - any way to change the No
 Negative Feedback section to take into account reddwarf failures?


Hi Doug:

We're working on moving the reddwarf trove tests to OpenStack Infra CI
in Juno, so this shouldn't be an issue long-term.

For the short term, I've set up a URI-based custom dashboard for
Trove that uses Sean's gerrit queries as a base (thanks Sean!), and does
take into account the outcome of the reddwarf tests as well.

http://bit.do/trove-reviews
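For context — and as a purely illustrative sketch, not the actual queries behind the short link above — Gerrit's URI-based custom dashboards encode a title, a base query (`foreach`), and one named query per section directly in the URL, along these lines (wrapped for readability):

```
https://review.openstack.org/#/dashboard/?title=Trove+Reviews
  &foreach=project:openstack/trove+status:open
  &Needs+Final+Approval=label:Code-Review>=2+NOT+label:Workflow<=-1
  &No+Negative+Feedback=NOT+label:Code-Review<=-1+NOT+label:Verified<=-1
```

Adding a clause for an external CI account's vote to a section query is what makes it possible to take the reddwarf results into account.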

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-09 Thread Stephen Balukoff
Hi Eugene,


On Fri, May 9, 2014 at 1:36 PM, Eugene Nikanorov enikano...@mirantis.com wrote:

 On Fri, May 9, 2014 at 7:40 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

 Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a
 single load balancer.

 For sure that can be supported by particular physical appliance, but I
 doubt we need to translate it to logical loadbalancer.


 However, I don't think it is a matter of it being
 needed.  It's a matter of having an API that makes sense to a user.
 Just because the API has multiple VIPs doesn't mean every VIP needs its
 own port. In fact, creating a port is an implementation detail (you know,
 that phrase that everyone throws out to stonewall any discussion?).
 The user doesn't care how many neutron ports are set up underneath, they
 only care about the VIPs.

 Right, port creation is an implementation detail; however, L2 connectivity
 for the frontend is certainly an API expectation.
 I think VIP creation should have clear semantics: the user creates an L2
 endpoint, e.g. an L2 port + an ipv4 [+ ipv6] address.
 If we agree that we only need 1 L2 port per logical loadbalancer, then it
 could be handled by two API/objmodel approaches:

 1) loadbalancer + VIPs,  1:n relationship
 2) VIP + listeners, 1:n relationship
 From an API and object-model structure perspective, those two approaches
 are exactly the same.
 However, in (1) we would need to specify L3 information (ipv4 + ipv6
 addresses, subnet_id) on the loadbalancer, and that would be inherited by
 the VIPs, which would keep the L4+ info.
 To me that seems a little confusing (per our glossary).

 In the second approach, the VIP remains the keeper of L2/L3 information,
 while listeners keep the L4+ information.
 That seems clearer to me.


There's a complication though: Pools may also need some L2/L3 information
(per the discussion of adding subnet_id as an attribute of the pool, eh.)


 If we want more than one L2 port, then we need to combine those
 approaches and have loadbalancer+VIPs+Listeners, where the loadbalancer is
 a container that maps to a backend.
 However, as discussed at the last meeting, we don't want to give the user
 direct control over the backend.


If the VIP subnet/neutron network and the Pool subnet/neutron network are not
the same, then the load balancer is going to need separate L2 interfaces to
each. In fact, a VIP with a Listener that references several different pools
via L7 policies, where those pools are on different subnets, is going to need
an L2 interface on each of them. Unless I'm totally misunderstanding something
(which is always a possibility-- this stuff is hard, eh!)

And actually, there are a few cases that have been discussed where
operators do want users to be able to have some (limited) control over the
back end. These almost all have to do with VIP affinity.



 Also, we've heard objections to this approach several times from other core
 team members (this discussion has been going on for more than half a year
 now), so I would suggest moving forward with the single L2 port approach.
 Then the question comes down to terminology: loadbalancer/VIPs or
 VIP/Listeners.


To be fair this is definitely about more than terminology. In the examples
you've listed mentioning loadbalancer objects, it seems to me that you're
ignoring that this model also still contains Listeners.  So, to be more
accurate, it's really about:

loadbalancer/VIPs/Listeners or VIPs/Listeners.
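To make the comparison concrete, the two shapes can be sketched as plain
data structures (a hedged illustration only, not actual Neutron LBaaS code;
all attribute names here are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Listener:
    """Keeper of L4+ information: protocol and port."""
    protocol: str
    protocol_port: int

@dataclass
class VIP:
    """Keeper of L2/L3 information, with listeners hanging off it (1:n).
    On its own, this is the VIPs/Listeners shape."""
    subnet_id: str
    ipv4_address: str
    ipv6_address: Optional[str] = None
    listeners: List[Listener] = field(default_factory=list)

@dataclass
class LoadBalancer:
    """The extra container object: holding one or more VIPs gives the
    loadbalancer/VIPs/Listeners shape."""
    vips: List[VIP] = field(default_factory=list)

# A single-VIP load balancer terminating HTTP and HTTPS on one address.
lb = LoadBalancer(vips=[
    VIP(subnet_id="subnet-1", ipv4_address="10.0.0.5",
        listeners=[Listener("HTTP", 80), Listener("HTTPS", 443)]),
])
```

The structural point is visible in the sketch: dropping the LoadBalancer
wrapper loses nothing for the single-VIP case, while keeping it is what
allows multiple VIPs (e.g. IPv4 + IPv6) to share one logical balancer.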

To me, that says it's all about: Does the loadbalancer object add something
meaningful to this model?  And I think the answer is:

* To smaller users with very basic load balancing needs: No (mostly, though
to many it's still yes)
* To larger customers with advanced load balancing needs:  Yes.
* To operators of any size: Yes.

I've outlined my reasoning for thinking so in the other discussion thread.

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807