Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener be set through separate API/model?

2014-06-24 Thread Evgeny Fedoruk
+1 for option 1. The SNI list is managed by a separate entity, while the default TLS 
container is part of the listener object. It will have a None value when the listener 
does not offload TLS.
Managing another entity for a 1:0-1 relationship just for future use does not seem 
right to me. Breaking the TLS settings apart from the listener can be done when 
needed, if needed.
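
For illustration only, a minimal SQLAlchemy-style sketch of the two shapes being
discussed (hypothetical names and columns, not the actual Neutron LBaaS schema):

    # Hypothetical sketch only -- not the real Neutron LBaaS models.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class ListenerOption1(Base):
        """Option 1: the default TLS container lives on the Listener itself."""
        __tablename__ = 'listeners_option1'
        id = sa.Column(sa.String(36), primary_key=True)
        protocol = sa.Column(sa.String(16), nullable=False)
        # None when the listener does not offload TLS.
        default_tls_container_id = sa.Column(sa.String(36), nullable=True)

    class ListenerTLS(Base):
        """Option 2: a separate entity, 1:0-1 with the Listener."""
        __tablename__ = 'listener_tls'
        id = sa.Column(sa.String(36), primary_key=True)
        listener_id = sa.Column(sa.String(36), unique=True, nullable=False)
        default_tls_container_id = sa.Column(sa.String(36), nullable=False)
        # Future TLS-only attributes (TLS version, ciphers, ...) would go here.

Option 1 keeps one nullable attribute on the listener; option 2 keeps the listener
free of TLS concerns at the cost of one more table.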

Thanks,
Evg


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, June 24, 2014 4:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

Ok, so we've got opinions on both sides of the argument here. I'm actually 
pretty ambivalent about it. Do others have strong opinions on this?

On Mon, Jun 23, 2014 at 6:03 PM, Doug Wiegley 
do...@a10networks.com wrote:
Put me down for being in favor of option 1.

A single attribute in a 1:1 relationship?  Putting that in a new table sounds 
like premature optimization to me; design the database change for the future 
feature when you can see the spec for it.

Thanks,
Doug


From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 23, 2014 at 5:25 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

Also to add to pros for 2:

* Keeping the TLS stuff contained to its own objects means we can have separate 
development resources on each and not worry as much about overlapping domains. 
(TLS-related knowledge and knowledge of dealing with TCP / UDP listeners are 
separate knowledge domains. Or at least, the former is a more specialized 
subset of the latter.)

Note that what we're proposing means there's essentially a 1:0-1 relationship 
between Listener and this new yet-to-be-named object. (0 in case the Listener 
is not terminating TLS.)

Stephen

On Mon, Jun 23, 2014 at 3:38 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Whoops, [Neutron][LBaaS] got taken out of the subject line here.
Putting it back in.

On Mon, 2014-06-23 at 21:10 +, Brandon Logan wrote:
 Okay so we've talked a bit about this in IRC and now I'm sending this
 out as an update.  Here are the options with pros and cons that have
 come from that discussion.

 1) default_certificate_id is an attribute of the Listener object.

 Pros:
 -No extra entity needed

 Cons:
 -May bloat the Listener object when more attributes are needed only for TLS
 termination.  It sounds like TLS version and cipher selection will be
 needed attributes in the future.


 2) A separate TLS Entity is created that is referenced by the Listener
 object.  This entity at first may only contain a certificate_id that
 references barbican.  Name and description can be allowed as well.

 Pros:
 -TLS domain specific attributes contained in its own entity
 -Future attributes would just be added to this entity and not bloat the
 Listener object.

 Cons:
 -It's another entity

 In IRC we (sbalukoff, myself) seemed to agree option 2 is the right way to
 go.  Anyone agree or disagree?

 Thanks,
 Brandon

 On Mon, 2014-06-23 at 12:15 -0700, Stephen Balukoff wrote:
  The separate entity makes sense for certificates participating in an
  SNI configuration, but probably not so much for the 'default'
  certificate used when TLS is being terminated.
 
 
  Vijay: You're also right that other TLS-related attributes will
  probably get added to the Listener object. This probably makes sense
  if they apply to the Listener object as a whole. (This includes things
  like TLS version and cipher selection.)
 
 
  I don't see much of a point in creating a separate object to contain
  these fields, since it would have a 1:1 relationship with the
  Listener. It's true that for non-TLS-terminated Listeners, these
  fields wouldn't be used, but isn't that already the case in many other
  objects (not just in the Neutron LBaaS sub project)?
 
 
  Thanks,
  Stephen
 
 
 
 
 
 
  On Mon, Jun 23, 2014 at 9:54 AM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
  Vijay,
 I think the separate entity is still going to happen.  I don't
 think it
 has been removed.  Or that may just be my assumption.
 
  Thanks,
  Brandon
 
  On Mon, 2014-06-23 at 15:59 +, Vijay Venkatachalam wrote:
   Hi:
  
  
   In the “LBaaS TLS termination capability specification”
  proposal
  
   https://review.openstack.org/#/c/98640/
  
   TLS settings like default certificate container id and SNI
  

[openstack-dev] [Mistral] Mistral test infrastructure proposal

2014-06-24 Thread Anastasia Kuznetsova
(reposting due to lack of subject)

Hello, everyone!

I am happy to announce that the Mistral team has started working on test
infrastructure. To that end, I have prepared an etherpad,
https://etherpad.openstack.org/p/MistralTests, where I analysed what we have
and what we still need to do.

I would like to get your feedback so that I can start creating appropriate
blueprints and implementing them.

Regards,
Anastasia Kuznetsova
QA Engineer at Mirantis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Tatiana Ovtchinnikova
+1 and +1
Thank you Ana and Zhenguo!

--
Kind regards,
Tatiana


2014-06-21 1:17 GMT+04:00 Lyle, David david.l...@hp.com:

 I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.

 Zhenguo has been a prolific reviewer for the past two releases providing
 high quality reviews. And providing a significant number of patches over
 the past three releases.

 Ana has been a significant reviewer in the Icehouse and Juno release
 cycles. She has also contributed several patches in this timeframe to both
 Horizon and tuskar-ui.

 Please feel free to respond in public or private your support or any
 concerns.

 Thanks,
 David


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Tihomir Trifonov
+1
+1

Deserved.


On Tue, Jun 24, 2014 at 10:41 AM, Tatiana Ovtchinnikova 
t.v.ovtchinnik...@gmail.com wrote:

 +1 and +1
 Thank you Ana and Zhenguo!

 --
 Kind regards,
 Tatiana


 2014-06-21 1:17 GMT+04:00 Lyle, David david.l...@hp.com:

 I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.

 Zhenguo has been a prolific reviewer for the past two releases providing
 high quality reviews. And providing a significant number of patches over
 the past three releases.

 Ana has been a significant reviewer in the Icehouse and Juno release
 cycles. She has also contributed several patches in this timeframe to both
 Horizon and tuskar-ui.

 Please feel free to respond in public or private your support or any
 concerns.

 Thanks,
 David



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Tihomir Trifonov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Matthias Runge
On Fri, Jun 20, 2014 at 09:17:41PM +, Lyle, David wrote:
 I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.
 
 Zhenguo has been a prolific reviewer for the past two releases providing
 high quality reviews. And providing a significant number of patches over
 the past three releases.
 
 Ana has been a significant reviewer in the Icehouse and Juno release
 cycles. She has also contributed several patches in this timeframe to both
 Horizon and tuskar-ui.
 
 Please feel free to respond in public or private your support or any
 concerns.
 

Thank you!
+1 for both!

Matthias
-- 
Matthias Runge mru...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-24 Thread thomas.morin
Hi,

keshav...@hp.com :

 If this BaGPipe BGP does not support an MPLS data plane driver, what is 
 the advantage of this BGP over the current one?

Just to avoid any misunderstanding: Bagpipe BGP **does** support an MPLS 
dataplane for IPVPN today.

For E-VPN, bagpipe could support an MPLS dataplane with a new dataplane driver, 
but for now, having just a VXLAN driver is good enough for intra-DC use cases.

(to what current BGP solution do you want to compare with?)

 What you are thinking (if I am right) is something like this below, which is 
 the traditional deployment model of an E-VPN solution.

I see your point, inter-DC would also be addressable with E-VPN, combined or 
not with other techniques.

 https://tools.ietf.org/html/draft-rabadan-l2vpn-dci-evpn-overlay-01#ref-EVPN-Overlays

The above describes ways, among others, to do inter-DC.

 But what I was thinking is something like a PW (pseudowire) right from the CN node 
 itself, so that there will not be any breakage/stitching/mapping related 
 issues.

Sorry, but I don't get your point yet:
- what problem are you trying to solve here?
- what is the motivation to introduce PWs?

[...]

 If we are not thinking of starting MPLS from the CNs, I think the existing BGP (which 
 is underway) will be sufficient.

(Again, I'm not sure which use of BGP you are referring to above.)

Just to be 100% clear: starting an MPLS encap (or VXLAN) from CNs, based on BGP 
VPN routes, *is* a scenario we favor here.

Best,

-Thomas

 -Original Message-
 From: Thomas Morin [mailto:tmmorin.ora...@gmail.com] On Behalf Of Thomas Morin
 Sent: Monday, June 23, 2014 7:25 PM
 To: A, Keshava; OpenStack Dev
 Subject: Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

  

 Hi,

  

 2014-06-22, A, Keshava:

  

  I have some basic questions about the deployment model of using this 
  BaGPipe BGP in a virtual cloud network.

  

  1. We want MPLS to start right from the compute node as part of tenant traffic ?

  

 BaGPipe BGP component is indeed adapted to be run on compute nodes to 
 encapsulate tenant traffic as MPLS traffic toward BGP IP VPNs external to the 
 datacenter. In this case you are interconnecting each VM at once with a /32 
 VPNv4 route.  [A]

  

 But you could use it as well on a network node to interconnect a whole 
 virtual network with one BGP route. However doing so does not simplify 
 deployments and requires additional care to handle redundancy.

  

 And to implement virtual networks with BaGPipe, the proposed target would be 
 to use it on compute nodes; but in that case MPLS is not the only option, and 
 what we currently support is VXLAN (E-VPN with a VXLAN encapsulation).

  

  

  2. We want L3 VRF separation right on Compute nodes (or NN Node) ?

      Tenant = VRF ?

      Tenant span can be across multiple CN nodes, then have BGP to 
  full mesh within CN ?

  

 As said in another comment, a tenant (or project depending on the

 terminology) is not a network construct; tenants just own networks.

  

 In [A] above, for a virtual network interconnected with a VPN, there would be 
 one VRF on each compute node with a VM connected to this virtual network.

  

 (I'm not getting your question on having BGP as a full mesh in compute

 nodes)

  

  3. How to have  E-VPN connectivity mapping at NN/CN nodes ?

   Is there an L2 VPN pseudowire thinking from the CN nodes itself ?

  

 I'm not sure I get your question.

 When BaGPipe BGP is used on compute nodes to build a virtual network, NNs 
 don't need to be involved.  They will only be involved once a router port (on 
 a NN) is connected to a virtual network.

  

 Note also that in E-VPN there is no notion of pseudowire; E-VPN does not 
 involve learning on incoming (MPLS- or VXLAN-) encapsulated traffic, and 
 forwarding tables involve dynamically adding an encap header based on a 
 static forwarding table (rather than tunnels or pseudowires).

  

  

  4. Tenant traffic is L2 or L3 or MPLS ? Where will L2 be terminated ?

  

  

 When E-VPN is used, network traffic inside a virtual network is carried as 
 Ethernet in VXLAN, MPLS or MPLS-over-GRE (note that today BaGPipe does not 
 support any MPLS dataplane driver for E-VPN).  When IP VPN is used (eg. 
 between virtual networks, or to/from an external IP VPN), traffic is carried 
 as IP traffic in MPLS or MPLS-GRE.

  

  Help me understand the deployment model for this .

  

  

 Hope that helps,

  

 -Thomas

  

  

  -Original Message-

  From: Thomas Morin [mailto:thomas.mo...@orange.com]

  Sent: Thursday, June 19, 2014 9:32 PM

  To: OpenStack Development Mailing List (not for usage questions)

  Subject: Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing

  Proposal

  

  Hi everyone,

  

  Sorry, I couldn't make it in time for the IRC meeting.

  

  Just saw in the logs:

  15:19:12 yamamoto are orange folks here?  they might want to

    introduce their bgp speaker.

  

  The best intro to BaGPipe BGP is the 

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Julie Pichon
On 20/06/14 22:17, Lyle, David wrote:
 I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.
 
 Zhenguo has been a prolific reviewer for the past two releases providing
 high quality reviews. And providing a significant number of patches over
 the past three releases.
 
 Ana has been a significant reviewer in the Icehouse and Juno release
 cycles. She has also contributed several patches in this timeframe to both
 Horizon and tuskar-ui.
 
 Please feel free to respond in public or private your support or any
 concerns.

+1 to both!

Julie


 
 Thanks,
 David
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on Tu, 24th of June

2014-06-24 Thread Dmitry Pyzhov
Fuelers,

Ok, today is bug squash day. No activities except bugs triage/fix/review/merge
(https://wiki.openstack.org/wiki/BugTriage).

Current count:
17 new bugs
http://fuel-launchpad.mirantis.com/project/fuel/bug_table_for_status/New/None
25 incomplete bugs
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=INCOMPLETE_WITH_RESPONSEfield.status%3Alist=INCOMPLETE_WITHOUT_RESPONSEassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search
92 critical/high bugs in 5.1 in confirmed/triaged state
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=CONFIRMEDfield.status%3Alist=TRIAGEDfield.importance%3Alist=CRITICALfield.importance%3Alist=HIGHassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.milestone%3Alist=63962field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search
238 medium/low/undefined bugs in 5.1 in confirmed/triaged state
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=CONFIRMEDfield.status%3Alist=TRIAGEDfield.importance%3Alist=UNKNOWNfield.importance%3Alist=UNDECIDEDfield.importance%3Alist=MEDIUMfield.importance%3Alist=LOWfield.importance%3Alist=WISHLISTassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.milestone%3Alist=63962field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search
67 bugs in progress in 5.1
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=INPROGRESSassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.milestone%3Alist=63962field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search

27 customer-found bugs in total
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=NEWfield.status%3Alist=CONFIRMEDfield.status%3Alist=TRIAGEDfield.status%3Alist=INPROGRESSassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=customer-found+field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search

Let the mortal kombat begin!



On Sat, Jun 21, 2014 at 2:58 AM, Dmitry Borodaenko dborodae...@mirantis.com
 wrote:

 No, we should have every day as review day. If there's code waiting to
 be reviewed that addresses a bug or a feature, there's simply no good
 reason to write new code for a different bug or feature with the same
 priority until the code that's already out there is reviewed.

 On Fri, Jun 20, 2014 at 11:16 AM, Andrew Woodward xar...@gmail.com
 wrote:
  Should we also have the 25th as review day so we can squish those down
 too?
 
  On Fri, Jun 20, 2014 at 6:30 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
  Fuelers,
  we need to group and enforce bug squashing activities on Tuesday, as
  discussed on IRC meeting this Thursday [1]. Feel free to do bug triaging
  first on Monday if needed.
  We have lots of bugs, and to meet quality criteria for the release we
 really
  need this.
 
  Every dedicated Fuel developer should stop all other activities and
 dedicate
  this day to bugs only. Follow instructions from last bug squashing day
 [2].
  Among other bugs, please give the higher priority to those with
  customer-found tag.
 
  Please, use 

Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener be set through separate API/model?

2014-06-24 Thread Evgeny Fedoruk
The Vipsniassociations table: see line 147 in the last patch of the document.

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: Tuesday, June 24, 2014 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?


"SNI list is managed by separate entity"
What is this entity?

From: Evgeny Fedoruk [mailto:evge...@radware.com]
Sent: Tuesday, June 24, 2014 12:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

+1 for option 1. The SNI list is managed by a separate entity, while the default TLS 
container is part of a listener object. It will have a None value when the listener 
does not offload TLS.
Managing another entity for a 1:0-1 relationship just for future use does not seem 
right to me. Breaking the TLS settings apart from the listener can be done when 
needed, if needed.

Thanks,
Evg


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, June 24, 2014 4:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

Ok, so we've got opinions on both sides of the argument here. I'm actually 
pretty ambivalent about it. Do others have strong opinions on this?

On Mon, Jun 23, 2014 at 6:03 PM, Doug Wiegley 
do...@a10networks.com wrote:
Put me down for being in favor of option 1.

A single attribute in a 1:1 relationship?  Putting that in a new table sounds 
like premature optimization to me; design the database change for the future 
feature when you can see the spec for it.

Thanks,
Doug


From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 23, 2014 at 5:25 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

Also to add to pros for 2:

* Keeping the TLS stuff contained to its own objects means we can have separate 
development resources on each and not worry as much about overlapping domains. 
(TLS-related knowledge and knowledge of dealing with TCP / UDP listeners are 
separate knowledge domains. Or at least, the former is a more specialized 
subset of the latter.)

Note that what we're proposing means there's essentially a 1:0-1 relationship 
between Listener and this new yet-to-be-named object. (0 in case the Listener 
is not terminating TLS.)

Stephen

On Mon, Jun 23, 2014 at 3:38 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Whoops, [Neutron][LBaaS] got taken out of the subject line here.
Putting it back in.

On Mon, 2014-06-23 at 21:10 +, Brandon Logan wrote:
 Okay so we've talked a bit about this in IRC and now I'm sending this
 out as an update.  Here are the options with pros and cons that have
 come from that discussion.

 1) default_certificate_id is an attribute of the Listener object.

 Pros:
 -No extra entity needed

 Cons:
 -May bloat the Listener object when more attributes are needed only for TLS
 termination.  It sounds like TLS version and cipher selection will be
 needed attributes in the future.


 2) A separate TLS Entity is created that is referenced by the Listener
 object.  This entity at first may only contain a certificate_id that
 references barbican.  Name and description can be allowed as well.

 Pros:
 -TLS domain specific attributes contained in its own entity
 -Future attributes would just be added to this entity and not bloat the
 Listener object.

 Cons:
 -It's another entity

 In IRC we (sbalukoff, myself) seemed to agree option 2 is the right way to
 go.  Anyone agree or disagree?

 Thanks,
 Brandon

 On Mon, 2014-06-23 at 12:15 -0700, Stephen Balukoff wrote:
  The separate entity makes sense for certificates participating in an
  SNI configuration, but probably not so much for the 'default'
  certificate used when TLS is being terminated.
 
 
  Vijay: You're also right that other TLS-related attributes will
  probably get added to the Listener object. This probably makes sense
  if they apply to the Listener object as a whole. (This includes things
  like TLS version and cipher selection.)
 
 
  I don't see much of a point in creating a separate object to contain
  these fields, since it would have a 1:1 relationship with the
  Listener. It's true that for non-TLS-terminated Listeners, these
  fields wouldn't be used, but isn't that already the case in many other
  objects (not just in the Neutron LBaaS sub project)?
 
 
  Thanks,
  Stephen
 
 
 
 
 

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread AMIT PRAKASH PANDEY
+1 to both


On Tue, Jun 24, 2014 at 1:48 PM, Julie Pichon jpic...@redhat.com wrote:

 On 20/06/14 22:17, Lyle, David wrote:
  I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.
 
  Zhenguo has been a prolific reviewer for the past two releases providing
  high quality reviews. And providing a significant number of patches over
  the past three releases.
 
  Ana has been a significant reviewer in the Icehouse and Juno release
  cycles. She has also contributed several patches in this timeframe to
 both
  Horizon and tuskar-ui.
 
  Please feel free to respond in public or private your support or any
  concerns.

 +1 to both!

 Julie


 
  Thanks,
  David
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-24 Thread Luke Gorrie
On 18 June 2014 12:00, Carlos Gonçalves m...@cgoncalves.pt wrote:

 I’ve added Joao Soares (Portugal Telecom) and myself (Instituto de
 Telecomunicacoes) to https://wiki.openstack.org/wiki/Sprints/ParisJuno2014 for
 a Neutron and NFV meetup.
 Please add yourselves as well so that we can have a better idea of who’s
 showing interest in participating.


Looks like we have the numbers :-) With 5 people, the NFV part would be one
of the biggest of the sprint.

That's good enough for me. I'm making travel arrangements to be in Paris
next week and I have updated the Wiki to confirm my interest.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] community consensus and removing rules

2014-06-24 Thread Mark McLoughlin
On Mon, 2014-06-23 at 19:55 -0700, Joe Gordon wrote:

   * Add a new directory, contrib, for local rules that multiple
 projects use but are not generally considered acceptable to be
 enabled by default. This way we can reduce the amount of cut
 and pasted code (thank you to Ben Nemec for this idea).

All sounds good to me, apart from a pet peeve on 'contrib' directories.

What does 'contrib' mean? 'contributed'? What exactly *isn't*
contributed? Often it has connotations of 'contributed by outsiders'.

It also often has connotations of 'bucket for crap', 'unmaintained and
untested', YMMV, etc. etc.

Often the name is just chosen out of laziness - "I can't think of a good
name for this, and projects often have a contrib directory with random
stuff in it, so that works."

Let's be precise - these are optional rules, right? How about calling
the directory 'optional'?

Say no to contrib directories! :-P

Thanks,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-v meeting schedule

2014-06-24 Thread Peter Pouliot
Hi All,
The Hyper-v meetings for the next two weeks will need to be canceled due to 
travel and vacations.  We will resume in two weeks.

Best,

P
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Questions about test policy for scenario test

2014-06-24 Thread Sean Dague
On 06/24/2014 01:29 AM, Fei Long Wang wrote:
 Greetings,
 
 We're leveraging the scenario test of Tempest to do the end-to-end
 functional test to make sure everything work great after upgrade,
 patching, etc. And We're happy to fill the gaps we found. However, I'm a
 little bit confused about the test policy from the scenario test
 perspective, especially comparing with the API test. IMHO, scenario test
 will cover some typical work flows of one specific service or mixed
 services, and it would be nice to make sure the function is really
 working instead of just checking the object status from OpenStack
 perspective. Is that correct?
 
 For example, live migration of Nova, it has been covered in API test of
 Tempest (see
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/test_live_block_migration.py).
 But as you see, it just checks if the instance is Active or not instead
 of checking if the instance can be login/ssh successfully. Obviously,
 from an real world view, we'd like to check if it's working indeed. So
 the question is, should this be improved? If so, the enhanced code
 should be in API test, scenario test or any other places? Thanks you.

The fact that computes aren't verified fully during the API testing is
mostly historical. I think they should be. The run_ssh flag used to be
used for this; however, because of some long-standing race conditions in
the networking stack, it couldn't be turned on in upstream
testing. My guess is that it's rotted now.

We've had some conversations in the QA team about a compute verifier
that would be run after any of the compute jobs to make sure they booted
correctly, and more importantly, did a very consistent set of debug
capture when they didn't. Would be great if that's something you'd like
to help out with.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-security] Periodic Security Checks

2014-06-24 Thread Darren J Moffat
Is this intended only for checking the OpenStack infrastructure or for 
checking the hosted guest VMs as well ?


Why does the scheduling of the checks even have to be part of OpenStack 
?  Why can't the operating system that OpenStack is running on provide 
that ?


Any reason this is limited to security rather than being a generic 
mechanism ?  Eg one that can stop scheduling to given nodes based on 
reported hardware faults.


--
Darren J Moffat



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp: nova-ecu-support

2014-06-24 Thread Day, Phil
The basic framework for supporting this kind of resource scheduling is the 
extensible-resource-tracker:

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://review.openstack.org/#/c/86050/
https://review.openstack.org/#/c/71557/

Once that lands, being able to schedule on arbitrary resources (such as an ECU) 
becomes a lot easier to implement.

Phil

 -Original Message-
 From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
 Sent: 03 February 2014 09:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova] bp: nova-ecu-support
 
 Hi,
 
 There is a blueprint for ECU [1], which is an interesting idea to me,
 so I'd like to hear comments about the ECU idea.
 
 After production environments start, the operators will need to add
 compute nodes before exhausting the capacity.
 In that scenario, they'd like to add cost-efficient machines as the compute
 nodes at that time, so the production environments will consist of compute
 nodes of different performance. They also hope to provide virtual machines
 of the same performance on nodes of different performance when specifying
 the same flavor.
 
 Now nova contains flavor_extraspecs [2], which can customize the cpu
 bandwidth for each flavor:
   # nova flavor-key m1.low_cpu set quota:cpu_quota=1
   # nova flavor-key m1.low_cpu set quota:cpu_period=2
 
 However, this feature cannot provide the same vm performance on
 nodes of different performance, because it arranges the vm performance
 with the same ratio (cpu_quota/cpu_period) even if the compute node
 performances are different. So it is necessary to arrange a different ratio
 based on each compute node's performance.
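
A small, purely illustrative calculation of that point (hypothetical host
ratings, not part of the blueprint):

    # Hypothetical illustration: the same cpu_quota/cpu_period ratio yields
    # different absolute VM performance on hosts of different raw speed.
    def vm_capacity(host_units_per_core, cpu_quota, cpu_period):
        return host_units_per_core * (cpu_quota / float(cpu_period))

    fast, slow = 4.0, 2.0                                     # ECU-like host ratings
    print(vm_capacity(fast, 1, 2), vm_capacity(slow, 1, 2))   # 2.0 vs 1.0

    # To deliver one "unit" per vCPU everywhere, the quota would have to be
    # scaled per host instead of being fixed in the flavor:
    for rating in (fast, slow):
        print(rating, 2 * 1.0 / rating)   # quota = period * target / rating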
 
 Amazon EC2 already has the ECU [3] for implementing this, and the blueprint [1]
 is for the same purpose.
 
 Any thoughts?
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ---
 [1]: https://blueprints.launchpad.net/nova/+spec/nova-ecu-support
 [2]: http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-
 to-openstack-compute.html#customize-flavors
 [3]: http://aws.amazon.com/ec2/faqs/  Q: What is a EC2 Compute Unit
 and why did you introduce it?
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-24 Thread Day, Phil
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 23 June 2014 10:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
 as part of resize ?
 
 On 18 June 2014 21:57, Jay Pipes jaypi...@gmail.com wrote:
  On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:
 
  On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:
 
  On 06/13/2014 02:22 PM, Day, Phil wrote:
 
  I guess the question I’m really asking here is:  “Since we know
  resize down won’t work in all cases, and the failure if it does
  occur will be hard for the user to detect, should we just block it
  at the API layer and be consistent across all Hypervisors ?”
 
 
  +1
 
  There is an existing libvirt blueprint:
 
  https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
  which I've never been in favor of:
 https://bugs.launchpad.net/nova/+bug/1270238/comments/1
 
 
  All of the functionality around resizing VMs to match a different
  flavour seem to be a recipe for unleashing a torrent of unfixable
  bugs, whether resizing disks, adding CPUs, RAM or any other aspect.
 
 
  +1
 
  I'm of the opinion that we should plan to rip resize functionality out
  of (the next major version of) the Compute API and have a *single*,
  *consistent* API for migrating resources. No more API extension X for
  migrating this kind of thing, and API extension Y for this kind of
  thing, and API extension Z for migrating /live/ this type of thing.
 
  There should be One move API to Rule Them All, IMHO.
 
 +1 for one move API, the two evolved independently, in different
 drivers, its time to unify them!
 
 That plan got stuck behind the refactoring of live-migrate and migrate to the
 conductor, to help unify the code paths. But it kinda got stalled (I must
 rebase those patches...).
 
 Just to be clear, I am against removing resize down from v2 without a
 deprecation cycle. But I am pro starting that deprecation cycle.
 
 John
 
I'm not sure Daniel and Jay are arguing for the same thing here John:  I 
*think*  Daniel is saying drop resize altogether and Jay is saying unify it 
with migration - so I'm a tad confused which of those you're agreeing with.

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] community consensus and removing rules

2014-06-24 Thread Sean Dague
On 06/24/2014 12:34 AM, Angus Salkeld wrote:
 On 24/06/14 12:59, Joe Gordon wrote:
 Hi All,
 
 After Friday's thread on removing several hacking rules, H402 and H803 are 
 lined up to be removed in the next few days, while Ben has volunteered to work on 
 H305. In addition to helping clarify whether we can remove a few specific rules, the 
 thread touched on the bigger issue of how to make sure the rules in hacking reflect 
 the community's lazy consensus of what should be enforced.
 
 The hacking repo consists primarily of two distinct things: HACKING.rst, the 
 OpenStack Style Guide that is published at [0], and hacking, the tool. The 
 style guide's goal is, as Sean puts it [1]:
 
 OpenStack has a set of style guidelines for clarity. OpenStack is a very
 large code base (over 1 Million lines of python), spanning dozens of git
 trees, with over a thousand developers contributing every 12 months. As 
 such
 common style helps developers understand code in reviews, move between
 projects smoothly, and overall make the code more maintainable. 
 
 Hacking the tool, meanwhile, is there to move the burden of enforcing the style 
 guide off of human reviewers, as human reviewers are our most constrained resource 
 today.
 
 In the past, when evaluating whether we should add a new rule to hacking the tool, 
 we followed the guidelines in [2], where consensus was met if the rule was already 
 in HACKING.rst or there was lazy consensus on this mailing list. In retrospect 
 this was a mistake, as this policy assumes that folks have read 
 HACKING.rst and have reviewed it to decide whether any sections should be changed. 
 For example, we have a few unenforced sections that we may not want to keep 
 [3][4]. Going forward I propose:
 
   * Any addition or removal of a rule requires a ML thread and/or an 
 oslo-specs
 blueprint (I am not sure which one makes the most sense here, but I am
 leaning towards the ML).
   * Only accept new rules that are already being used as a local rule in at
 least one repository.
   * Add a new directory, contrib, for local rules that multiple projects use 
 but
 are not generally considered acceptable to be enabled by default. This 
 way
 we can reduce the amount of cut and pasted code (thank you to Ben Nemec 
 for
 this idea). 
 
 While turning off rules at the repository level has always been recommended 
 as 
 hacking is a tool to help projects and not a mechanism to dictate style, 
 having 
 rules enabled in hacking does have implications. So we need a mechanism to 
 track 
 which rules folks don't find useful so we can make sure they are not enabled 
 by 
 default (either move them to contrib or remove them entirely). We can track 
 individual repositories' hacking ignore lists and periodically reevaluate the 
 rules with the most ignores. This means projects vote on which rules to remove 
 by maintaining their ignore list in tox.ini. I put together a tiny script to do 
 this, and here are my findings so far.
 
 rule: number of ignores
 
 H803: 19
 H302: 14
 H904: 14
 H305: 13
 H405: 12
 H307: 11
 H404: 9
 H402: 6
 H: 4

So the H: excludes really then contribute to other excludes, though
typically they also include specific rules. Is the net result of those
calculated as well?

 H233: 3
 H202: 3
 H306: 2
 H301: 2
 H303: 1
 H703: 1
 H304: 1
 H237: 1
 H4: 1
 H201: 1
 H701: 1
 H102: 1
 H702: 1
 
 Interestingly, of the two rules we just agreed to remove, H402 is not even 
 close to the top of the most skipped list. Of the top 3 most skipped rules
 
 
 
   * H803: The first line of the commit message must not end with a period and
 must be followed by a single blank line.
   o Removing, as previously discussed
   * H302: Do not import objects, only modules (*)
   o This has been around for a long time, should we move this to contrib?
   * H904: Wrap long lines in parentheses and not a backslash for line 
 continuation.
   o Although this has been in HACKING.rst for a while, the rule was 
 added in
  hacking 0.9. So it's still unclear if this is skipped because folks 
 don't
 like it or because they haven't gotten around to fixing it up yet.
 Thoughts? 
 
 I personally like H302, but don't mind either way about the other two.
 H302 requires lots of code changes so if you haven't started with it
 it's a bit painful to enable that check (might be the reason people
 just ignore it).

I agree, H302 has been really handy in making code more readable.
Realistically this is one of those things where, in Tempest, we've told
people not to mass-patch it but to fix it as we go along.
It took a long time to get there, but it was good, and not completely
disruptive.

Also, curiously, neutron is H302 / H304 clean ... but has them in the
ignores. I wonder how many other projects are that way.
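
To make the two rules concrete, a small illustrative snippet (not taken from any
particular repo); a project that wants to opt out simply lists the codes under
"ignore" in the [flake8] section of its tox.ini:

    # H302: import only modules, not objects.
    from os.path import join      # flagged by H302
    import os.path                # preferred; call os.path.join(...) instead

    # H904: wrap long lines with parentheses, not a backslash.
    total = 1 + 2 + \
        3                         # flagged by H904
    total = (1 + 2 +
             3)                   # preferred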

Having it available in an optional area would be good if it goes off by
default, because we 

Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Day, Phil
Hi Michael,

Not sure I understand the need for a gap between Juno Spec approval freeze 
(Jul 10th) and K opening for spec proposals (Sep 4th). I can understand that 
K specs won't get approved in that period, and may not get much feedback from 
the cores - but I don't see the harm in letting specs be submitted to the K 
directory for early review / feedback during that period ?  

Phil

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 June 2014 09:59
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Timeline for the rest of the Juno release
 
 Hi, this came up in the weekly release sync with ttx, and I think its worth
 documenting as clearly as possible.
 
 Here is our proposed timeline for the rest of the Juno release. This is
 important for people with spec proposals either out for review, or intending
 to be sent for review soon.
 
(The numbers in brackets are weeks before the feature freeze).
 
 Jun 12 (-12): Juno-1
 Jun 25 (-10): Spec review day
 (https://etherpad.openstack.org/p/nova-juno-spec-priorities)
 
 Jul  3 (-9): Spec proposal freeze
 Jul 10 (-8): Spec approval freeze
 Jul 24 (-6): Juno-2
 Jul 28 (-5): Nova mid cycle meetup
 (https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint)
 
 Aug 21 (-2): Feature proposal freeze
 
 Sep  4 ( 0): Juno-3
  Feature freeze
  Merged J specs with no code proposed get deleted from nova-specs
 repo
  K opens for spec proposals, unmerged J spec proposals must rebase
 Sep 25 (+3): RC 1 build expected
  K spec review approvals start
 
 Oct 16 (+6): Release!
 (https://wiki.openstack.org/wiki/Juno_Release_Schedule)
 Oct 30: K summit spec proposal freeze
 
 Nov  6: K design summit
 
 Cheers,
 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tripleo, Ironic, the SSH power driver, paramiko and eventlet fun.

2014-06-24 Thread jang
There's a bug on this: 
https://bugs.launchpad.net/ironic/+bug/1321787?comments=all

It seems like it's been well-known for a long time that paramiko 
parallelism doesn't work well with eventlet. Ironic's aggressive use of 
the ssh power driver seems to hit this hard.

The sign that you're hitting a problem with this is the multiple 
simultaneous readers warning, which is spurious (but a sign of trouble).
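
For anyone who hasn't seen it, a minimal sketch of that complaint (assuming
eventlet is installed; nothing Ironic- or paramiko-specific):

    # Two greenthreads blocking on the same green socket make eventlet's hub
    # raise its "second simultaneous read" error -- the warning referred to above.
    import eventlet
    eventlet.monkey_patch()
    import socket

    a, b = socket.socketpair()        # nothing is ever written to 'b'

    def reader():
        a.recv(1)                     # blocks, registering a read listener

    eventlet.spawn(reader)
    eventlet.sleep(0)                 # let the first reader register itself
    try:
        a.recv(1)                     # a second reader on the same fd
    except RuntimeError as exc:
        print(exc)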

I've started to follow up this problem with the eventletdev mailing list, 
since having had a trawl back through tickets it looks like we've seen 
other issues arising from this in various places, going back at least 18 
months. They're not all paramiko-related; not are they necessarily 
caused by the thing mentioned in the TRACE lines - that's just the point 
eventlet can detect the problem. I've seen the glanceclient, at the least, 
also trigger this - as well as Ironic's use of utils.execute to launch 
(again, parallel) qemu-img calls.

I'm just trying a tripleo run with a much reduced workers_pool_size to see 
if I can at least forcibly get a run to complete successfully.


As to where the problem lies: it seems eventlet has a registered listener 
backed by one FD. That FD gets recycled by another thread, which attempts 
to read or write on it. That's why eventlet is carping. Paramiko seems to 
trigger this quite reliably because it uses a worker thread to manage its 
ssh communication.

Fixes to eventlet might be quite tricky - the bug above has a link to some 
quick sketches in github - although it's just struck me that there may be 
a simpler approach to investigate, which I'll pursue after sending this.


It'd be good to get some eyeballs on this eventlet problem - it's been 
hitting us for quite a time - only, previously to ironic, not at a 
sufficiently high rate to cause huge amounts of pain.


Cheers,
jan

PS. A longer, rambling braindump went to the eventletdev mailing list, 
which can be found here:

https://lists.secondlife.com/pipermail/eventletdev/2014-June/thread.html

...I think I've a better handle on the problem now, but I still don't have 
a satisfactory from-first-principles explanation of exactly the state of 
events that causes paramiko to trigger this.

-- 
j...@ioctl.org  http://ioctl.org/jan/
stty intr ^m

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-06-24 Thread Avishay Balderman
Hi
One of the L7 Rule attributes is ‘compare_type’.
This field is the match operator that the rule evaluates against the
value found in the request.
Below is list of the possible values:
- Regexp
- StartsWith
- EndsWith
- Contains
- EqualTo (*)
- GreaterThan (*)
- LessThan (*)

The last 3 operators (*) in the list are used for numerical matches.
The Radware load balancing backend does not support those operators “out of the 
box”, and a significant development effort would be required to support them.
We are afraid of missing the Juno timeframe if we have to focus on supporting the 
numerical operators.
Therefore we ask to support only the non-numerical operators for Juno and add 
support for the numerical operators post-Juno.

See https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst
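
For reference, a hedged sketch (not the actual Neutron LBaaS or Radware code) of
how a backend would evaluate those compare_type values against a request value;
the numerical branch at the bottom is the part we propose to defer:

    import re

    def l7_match(compare_type, rule_value, request_value):
        # String operators -- supported today.
        if compare_type == 'Regexp':
            return re.search(rule_value, request_value) is not None
        if compare_type == 'StartsWith':
            return request_value.startswith(rule_value)
        if compare_type == 'EndsWith':
            return request_value.endswith(rule_value)
        if compare_type == 'Contains':
            return rule_value in request_value
        # Numerical operators -- the ones we ask to defer past Juno.
        if compare_type == 'EqualTo':
            return float(request_value) == float(rule_value)
        if compare_type == 'GreaterThan':
            return float(request_value) > float(rule_value)
        if compare_type == 'LessThan':
            return float(request_value) < float(rule_value)
        raise ValueError('unknown compare_type: %s' % compare_type)

    print(l7_match('StartsWith', '/api/', '/api/v1/instances'))  # True
    print(l7_match('GreaterThan', '100', '42'))                  # False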

Thanks
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-24 Thread Daniel P. Berrange
On Tue, Jun 24, 2014 at 10:55:41AM +, Day, Phil wrote:
  -Original Message-
  From: John Garbutt [mailto:j...@johngarbutt.com]
  Sent: 23 June 2014 10:35
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
  as part of resize ?
  
  On 18 June 2014 21:57, Jay Pipes jaypi...@gmail.com wrote:
   On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:
  
   On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:
  
   On 06/13/2014 02:22 PM, Day, Phil wrote:
  
   I guess the question I’m really asking here is:  “Since we know
   resize down won’t work in all cases, and the failure if it does
   occur will be hard for the user to detect, should we just block it
   at the API layer and be consistent across all Hypervisors ?”
  
  
   +1
  
   There is an existing libvirt blueprint:
  
   https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
   which I've never been in favor of:
  https://bugs.launchpad.net/nova/+bug/1270238/comments/1
  
  
   All of the functionality around resizing VMs to match a different
   flavour seem to be a recipe for unleashing a torrent of unfixable
   bugs, whether resizing disks, adding CPUs, RAM or any other aspect.
  
  
   +1
  
   I'm of the opinion that we should plan to rip resize functionality out
   of (the next major version of) the Compute API and have a *single*,
   *consistent* API for migrating resources. No more API extension X for
   migrating this kind of thing, and API extension Y for this kind of
   thing, and API extension Z for migrating /live/ this type of thing.
  
   There should be One move API to Rule Them All, IMHO.
  
  +1 for one move API, the two evolved independently, in different
  drivers, its time to unify them!
  
  That plan got stuck behind the refactoring of live-migrate and migrate to 
  the
  conductor, to help unify the code paths. But it kinda got stalled (I must
  rebase those patches...).
  
  Just to be clear, I am against removing resize down from v2 without a
  deprecation cycle. But I am pro starting that deprecation cycle.
  
  John
  
 I'm not sure Daniel and Jay are arguing for the same thing here John:
  I *think*  Daniel is saying drop resize altogether and Jay is saying
 unify it with migration - so I'm a tad confused which of those you're
 agreeing with.

Yes, I'm personally for removing resize completely since, IMHO, no matter
how many bugs we fix it is always going to be a mess. That said I realize
that people probably find resize-up useful, so I won't push hard to kill
it - we should just recognize that it is always going to be a mess which
does not result in the same setup you'd get if you booted fresh with the
new settings.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Michael Still
Phil -- the main thing was that I really want people to focus their efforts on
fixing bugs in that period. The theory was that if we encouraged people
to work on specs for the next release, they'd be distracted from
fixing the bugs we need fixed in J.

Cheers,
Michael

On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
 Hi Michael,

 Not sure I understand the need for a gap between Juno Spec approval freeze 
 (Jul 10th) and K opens for spec proposals (Sep 4th).I can understand 
 that K specs won't get approved in that period, and may not get much feedback 
 from the cores - but I don't see the harm in letting specs be submitted to 
 the K directory for early review / feedback during that period ?

 Phil

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 June 2014 09:59
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Timeline for the rest of the Juno release

 Hi, this came up in the weekly release sync with ttx, and I think its worth
 documenting as clearly as possible.

 Here is our proposed timeline for the rest of the Juno release. This is
 important for people with spec proposals either out for review, or intending
 to be sent for review soon.

 (The numbers in brackets are weeks before the feature freeze).

 Jun 12 (-12): Juno-1
 Jun 25 (-10): Spec review day
 (https://etherpad.openstack.org/p/nova-juno-spec-priorities)

 Jul  3 (-9): Spec proposal freeze
 Jul 10 (-8): Spec approval freeze
 Jul 24 (-6): Juno-2
 Jul 28 (-5): Nova mid cycle meetup
 (https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint)

 Aug 21 (-2): Feature proposal freeze

 Sep  4 ( 0): Juno-3
  Feature freeze
  Merged J specs with no code proposed get deleted from nova-specs
 repo
  K opens for spec proposals, unmerged J spec proposals must 
 rebase
 Sep 25 (+3): RC 1 build expected
  K spec review approvals start

 Oct 16 (+6): Release!
 (https://wiki.openstack.org/wiki/Juno_Release_Schedule)
 Oct 30: K summit spec proposal freeze

 Nov  6: K design summit

 Cheers,
 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Questions about test policy for scenario test

2014-06-24 Thread Yair Fried
- Original Message -

 From: Fei Long Wang feil...@catalyst.net.nz
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: br...@catalyst.net.nz
 Sent: Tuesday, June 24, 2014 8:29:03 AM
 Subject: [openstack-dev] [QA] Questions about test policy for scenario test

 Greetings,

 We're leveraging the scenario test of Tempest to do the end-to-end
 functional test to make sure everything work great after upgrade,
 patching, etc. And We're happy to fill the gaps we found. However, I'm a
 little bit confused about the test policy from the scenario test
 perspective, especially comparing with the API test. IMHO, scenario test
 will cover some typical work flows of one specific service or mixed
 services, and it would be nice to make sure the function is really
 working instead of just checking the object status from OpenStack
 perspective. Is that correct?

 For example, live migration of Nova, it has been covered in API test of
 Tempest (see
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/test_live_block_migration.py).
 But as you see, it just checks if the instance is Active or not instead
 of checking if the instance can be login/ssh successfully
Seems to me that what you want is to add a migration test to 
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_network_advanced_server_ops.py
 
This scenario does exactly what you are looking for: 
1. check VM connectivity 
2. mess with VM (reboot, resize, or in your case - migrate) 
3. check VM connectivity 
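
In other words (a purely illustrative stub, not the actual tempest code), the
shape of the test is:

    class FakeCloud(object):
        """Stand-in for the cloud; the real test drives Nova and SSH."""
        def boot_server(self):
            return {'id': 'vm-1', 'ip': '192.0.2.10'}
        def live_migrate(self, server):
            server['host'] = 'other-host'
        def can_ssh(self, server):
            return True               # real code pings/SSHes the floating IP

    def test_migration_keeps_connectivity(cloud):
        server = cloud.boot_server()
        assert cloud.can_ssh(server)  # 1. check VM connectivity
        cloud.live_migrate(server)    # 2. mess with the VM
        assert cloud.can_ssh(server)  # 3. check VM connectivity again

    test_migration_keeps_connectivity(FakeCloud())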

 . Obviously,
 from an real world view, we'd like to check if it's working indeed. So
 the question is, should this be improved? If so, the enhanced code
 should be in API test, scenario test or any other places? Thanks you.

 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral 0.0.4 released

2014-06-24 Thread Renat Akhmerov
Hi,

Mistral version 0.0.4* has just been released! This is an intermediate release; 
however, it contains a series of important changes/fixes.

Here’s the list of the most noticeable changes:
- Sped up tests using testr
- Modified the launch script to start any combination of Mistral components
  (engine, api, executor)
- OpenStack related data is always accessible in actions (currently auth_token,
  project_id) without having to pass it from tasks
- Modified the standard http action to support all http protocol parameters
  (e.g. timeout)
- Implemented pluggable task actions
- Cleaned up configuration settings
- Refactored the engine to use plugins
- Improved integration tests
- A series of improvements in the Mistral Dashboard
- 15 bugs fixed (http error codes, pypi upload for the client, bugs with data
  flow context)

Links:
https://launchpad.net/mistral/juno/0.0.4 - Release page at Launchpad (release 
notes, includes download files, list of implemented blueprints and fixed bugs)
https://wiki.openstack.org/wiki/Mistral/Releases/0.0.4 - Release page at wiki 
(link to a screencast, links to examples, release notes)

Thanks to all the contributors!

*(Please note that version 0.0.3 was corrupted during the release process and 
had to be abandoned.)

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Russell Bryant
On 06/24/2014 07:35 AM, Michael Still wrote:
 Phil -- I really want people to focus their efforts on fixing bugs in
 that period was the main thing. The theory was if we encouraged people
 to work on specs for the next release, then they'd be distracted from
 fixing the bugs we need fixed in J.
 
 Cheers,
 Michael
 
 On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
 Hi Michael,

 Not sure I understand the need for a gap between Juno Spec approval freeze 
 (Jul 10th) and K opens for spec proposals (Sep 4th).I can understand 
 that K specs won't get approved in that period, and may not get much 
 feedback from the cores - but I don't see the harm in letting specs be 
 submitted to the K directory for early review / feedback during that period ?

I agree with both of you.  Priorities need to be finishing up J, but I
don't see any reason not to let people post K specs whenever.
Expectations just need to be set appropriately that it may be a while
before they get reviewed/approved.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Heat] Ceilometer aware people, please advise us on processing notifications..

2014-06-24 Thread Julien Danjou
On Tue, Jun 24 2014, Clint Byrum wrote:
 Basically in Heat when a user boots an instance, we would like to act as
 soon as it is active, and not have to poll the nova API to know when
 that is. Angus has suggested that perhaps we can just tell ceilometer to
 hit Heat with a web hook when that happens.

We have a blueprint for having alarm based on notifications:

  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification

And that would likely do what you need.
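For reference, this is roughly what wiring a webhook into today's v2 alarm API
looks like, sketched with a plain threshold alarm as a stand-in (the
notification-driven alarm type from that blueprint doesn't exist yet; hosts,
token and the Heat signal URL are placeholders):

import json

import requests

alarm = {
    'name': 'heat-signal-example',
    'type': 'threshold',                      # stand-in for the future notification-based type
    'threshold_rule': {
        'meter_name': 'cpu_util',
        'statistic': 'avg',
        'comparison_operator': 'ge',
        'threshold': 1.0,
        'period': 60,
        'evaluation_periods': 1,
    },
    # Ceilometer hits this URL when the alarm fires -- Heat would expose a
    # signal endpoint here instead of polling the Nova API.
    'alarm_actions': ['http://heat.example.com:8000/v1/signal/STACK-RESOURCE'],
}

resp = requests.post('http://ceilometer.example.com:8777/v2/alarms',
                     headers={'X-Auth-Token': 'ADMIN_TOKEN',
                              'Content-Type': 'application/json'},
                     data=json.dumps(alarm))
resp.raise_for_status()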

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-06-24 Thread Salvatore Orlando
There is a long standing patch [1] for enabling the neutron full job.
A little before the Icehouse release date, when we first pushed this, the
neutron full job had a failure rate of less than 10%. However, time has gone
by since then, and perceived failure rates were higher, so we ran this
analysis again.

Here are the findings in a nutshell.
1) If we were to enable the job today we might expect about a 3-fold
increase in neutron job failures when compared with the smoke test. This is
unfortunately not acceptable and we therefore need to identify and fix the
issues causing the additional failure rate.
2) However this also puts us in a position where if we wait until the
failure rate drops under a given threshold we might end up chasing a moving
target as new issues might be introduced at any time since the job is not
voting.
3) When it comes to evaluating failure rates for a non voting job, taking
the raw numbers does not mean much, as they take into account patches
'in progress' which end up failing the tests because of problems in
the patches themselves.

Well, that was pretty much a lot for a nutshell; however if you're not
yet bored to death please go on reading.

The data in this post are a bit skewed because of a rise in neutron job
failures in the past 36 hours. However, this rise affects both the full and
the smoke job so it does not invalidate what we say here. The results shown
below are representative of the gate status 12 hours ago.

- Neutron smoke job failure rates (all queues)
  24 hours: 22.4% 48 hours: 19.3% 7 days: 8.96%
- Neutron smoke job failure rates (gate queue only):
  24 hours: 10.41% 48 hours: 10.20% 7 days: 3.53%
- Neutron full job failure rate (check queue only as it's non voting):
  24 hours: 31.54% 48 hours: 28.87% 7 days: 25.73%

Check/Gate Ratio between neutron smoke failures
24 hours: 2.15 48 hours: 1.89 7 days: 2.53

Estimated job failure rate for neutron full job if it were to run in the
gate:
24 hours: 14.67% 48 hours: 15.27% 7 days: 10.16%
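For clarity, that estimate appears to be just the full job's check-queue
failure rate scaled down by the smoke job's check/gate ratio; a quick sanity
check for the 7-day window (the other windows work the same way):

full_check_rate = 25.73    # % failures, full job, check queue, 7 days
check_gate_ratio = 2.53    # smoke job: all-queues rate / gate-only rate
estimated_gate_rate = full_check_rate / check_gate_ratio
print("%.2f%%" % estimated_gate_rate)   # ~10.17%, matching the 10.16% above up to rounding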

The numbers are therefore not terrible, but definitely not good enough;
looking at the last 7 days the full job will have a failure rate about 3
times higher than the smoke job.

We then took, as is usual for us when we do this kind of evaluation, a
window with a reasonable number of failures (41 in our case), and analysed
them in detail.

Of these 41 failures 17 were excluded because of infra problems, patches
'in progress', or other transient failures; considering that over the same
period of time 160 full job runs succeeded, this would leave us with 24
failures out of 184 runs, and therefore a failure rate of 13.04%, which is not
far from the estimate.

Let's consider now these 24 'real' failures:
A) 2 were for the SSH timeout (8.33% of failures, 1.08% of total full job
runs). This specific failure is being analyzed to see if a specific
fingerprint can be found
B) 2  (8.33% of failures, 1.08% of total full job runs) were for a failure
in test load balancer basic, which is actually a test design issue and is
already being addressed [2]
C) 7 (29.16% of failures, 3.81% of total full job runs) were for an issue
while resizing a server, which has been already spotted and has a bug in
progress [3]
D) 5 (20.83% of failures, 2.72% of total full job runs) manifested as a
failure in test_server_address; however the actual root cause was being
masked by [4]. A bug has been filed [5]; this is the most worrying one in
my opinion as there are many cases where the fault happens but does not
trigger a failure because of the way tempest tests are designed.
E) 6 are because of our friend lock wait timeout. This was initially filed
as [6] but since then we've closed it to file more detailed bug reports as
the lock wait timeout can manifest in various places; Eugene is leading the
effort on this problem with Kevin B.


Summarizing, the only failure modes specific to the full job seem to be C and
D. If we were able to fix those we should reasonably expect a failure rate
of about 6.5%. That's still almost twice that of the smoke job, but I deem it
acceptable for two reasons:
1- By voting, we will prevent new bugs affecting the full job from being
introduced. It is worth reminding people that any bug affecting the full
job is likely to affect production environments
2- patches failing in the gate will spur neutron developers to quickly find
a fix. Patches failing a non voting job will cause some neutron core team
members to write long and boring posts to the mailing list.

Salvatore




[1] https://review.openstack.org/#/c/88289/
[2] https://review.openstack.org/#/c/98065/
[3] https://bugs.launchpad.net/nova/+bug/1329546
[4] https://bugs.launchpad.net/tempest/+bug/1332414
[5] https://bugs.launchpad.net/nova/+bug/1333654
[6] https://bugs.launchpad.net/nova/+bug/1283522
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-06-24 Thread Salvatore Orlando
Oops... I forgot to mention that, in agreement with sdague, we won't enable
this job before Thursday June 26th anyway, in order to give the trusty update
a few days to settle down.

Salvatore


On 24 June 2014 14:14, Salvatore Orlando sorla...@nicira.com wrote:

 There is a long standing patch [1] for enabling the neutron full job.
 A little before the Icehouse release date, when we first pushed this, the
 neutron full job had a failure rate of less than 10%. However, time has gone
 by since then, and perceived failure rates were higher, so we ran this
 analysis again.

 Here are the findings in a nutshell.
 1) If we were to enable the job today we might expect about a 3-fold
 increase in neutron job failures when compared with the smoke test. This is
 unfortunately not acceptable and we therefore need to identify and fix the
 issues causing the additional failure rate.
 2) However this also puts us in a position where if we wait until the
 failure rate drops under a given threshold we might end up chasing a moving
 target as new issues might be introduced at any time since the job is not
 voting.
 3) When it comes to evaluating failure rates for a non voting job, taking
 the rough numbers does not mean anything, as that will take in account
 patches 'in progress' which end up failing the tests because of problems in
 the patch themselves.

 Well, that was pretty much a lot for a nutshell; however if you're not
 yet bored to death please go on reading.

 The data in this post are a bit skewed because of a rise in neutron job
 failures in the past 36 hours. However, this rise affects both the full and
 the smoke job so it does not invalidate what we say here. The results shown
 below are representative of the gate status 12 hours ago.

 - Neutron smoke job failure rates (all queues)
   24 hours: 22.4% 48 hours: 19.3% 7 days: 8.96%
 - Neutron smoke job failure rates (gate queue only):
   24 hours: 10.41% 48 hours: 10.20% 7 days: 3.53%
 - Neutron full job failure rate (check queue only as it's non voting):
   24 hours: 31.54% 48 hours: 28.87% 7 days: 25.73%

 Check/Gate Ratio between neutron smoke failures
 24 hours: 2.15 48 hours: 1.89 7 days: 2.53

 Estimated job failure rate for neutron full job if it were to run in the
 gate:
 24 hours: 14.67% 48 hours: 15.27% 7 days: 10.16%

 The numbers are therefore not terrible, but definitely not good enough;
 looking at the last 7 days the full job will have a failure rate about 3
 times higher than the smoke job.

 We then took, as it's usual for us when we do this kind of evaluation, a
 window with a reasonable number of failures (41 in our case), and analysed
 them in detail.

 Of these 41 failures 17 were excluded because of infra problems, patches
 'in progress', or other transient failures; considering that over the same
 period of time 160 full job runs succeeded, this would leave us with 24
 failures out of 184 runs, and therefore a failure rate of 13.04%, which is not
 far from the estimate.

 Let's consider now these 24 'real' failures:
 A) 2 were for the SSH timeout (8.33% of failures, 1.08% of total full job
 runs). This specific failure is being analyzed to see if a specific
 fingerprint can be found
 B) 2  (8.33% of failures, 1.08% of total full job runs) were for a failure
 in test load balancer basic, which is actually a test design issue and is
 already being addressed [2]
 C) 7 (29.16% of failures, 3.81% of total full job runs) were for an issue
 while resizing a server, which has been already spotted and has a bug in
 progress [3]
 D) 5 (20.83% of failures, 2.72% of total full job runs) manifested as a
 failure in test_server_address; however the actual root cause was being
 masked by [4]. A bug has been filed [5]; this is the most worrying one in
 my opinion as there are many cases where the fault happens but does not
 trigger a failure because of the way tempest tests are designed.
 E) 6 are because of our friend lock wait timeout. This was initially filed
 as [6] but since then we've closed it to file more detailed bug reports as
 the lock wait timeout can manifest in various places; Eugene is leading the
 effort on this problem with Kevin B.


 Summarizing, the only failure modes specific to the full job seem to be C and
 D. If we were able to fix those we should reasonably expect a failure rate
 of about 6.5%. That's still almost twice that of the smoke job, but I deem it
 acceptable for two reasons:
 1- by voting, we will avoid new bugs affecting the full job from being
 introduced. it is worth reminding people that any bug affecting the full
 job is likely to affect production environments
 2- patches failing in the gate will spur neutron developers to quickly
 find a fix. Patches failing a non voting job will cause some neutron core
 team members to write long and boring posts to the mailing list.

 Salvatore




 [1] https://review.openstack.org/#/c/88289/
 [2] https://review.openstack.org/#/c/98065/
 [3] https://bugs.launchpad.net/nova/+bug/1329546
 [4] 

Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Anne Gentle
On Tue, Jun 24, 2014 at 7:07 AM, Russell Bryant rbry...@redhat.com wrote:

 On 06/24/2014 07:35 AM, Michael Still wrote:
  Phil -- I really want people to focus their efforts on fixing bugs in
  that period was the main thing. The theory was if we encouraged people
  to work on specs for the next release, then they'd be distracted from
  fixing the bugs we need fixed in J.
 
  Cheers,
  Michael
 
  On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
  Hi Michael,
 
  Not sure I understand the need for a gap between Juno Spec approval
 freeze (Jul 10th) and K opens for spec proposals (Sep 4th).I can
 understand that K specs won't get approved in that period, and may not get
 much feedback from the cores - but I don't see the harm in letting specs be
 submitted to the K directory for early review / feedback during that period
 ?

 I agree with both of you.  Priorities need to be finishing up J, but I
 don't see any reason not to let people post K specs whenever.
 Expectations just need to be set appropriately that it may be a while
 before they get reviewed/approved.


No, we need more discipline around bug fixing and also to reserve time for
docs.

Anne



 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] dib-utils Release Question

2014-06-24 Thread Jay Dobies
Ahh, ok. I had just assumed it was a Python library, but I admittedly 
didn't look too closely at it. Thanks :)


On 06/23/2014 09:32 PM, Steve Kowalik wrote:

On 24/06/14 06:31, Jay Dobies wrote:

I finished the releases for all of our existing projects and after
poking around tarballs.openstack.org and pypi, it looks like they built
successfully. Yay me \o/

However, it doesn't look like the dib-utils build worked. I don't see it
listed on tarballs.openstack.org. It was the first release for that
project, but I didn't take any extra steps (I just followed the
instructions on the releases wiki and set the version to 0.0.1).

I saw the build for it appear in zuul but I'm not sure how to go back
and view the results of a build once it disappears off the main page.

Can someone with experience releasing a new project offer me any insight?


\o/

I've been dealing with releases of new projects from the os-cloud-config
side recently, so let's see.

dib-utils has a post job of dib-utils-branch-tarball, so the job does
exist, as you pointed out, but it doesn't hurt to double check.

The object the tag points to is commit
45b7cf44bc939ef08afc6b1cb1d855e0a85710ad, so logs can be found at
http://logs.openstack.org/45/45b7cf44bc939ef08afc6b1cb1d855e0a85710ad

And from the log a few levels deep at the above URL, we see:

2014-06-16 07:17:13.122 | + tox -evenv python setup.py sdist
2014-06-16 07:17:13.199 | ERROR: toxini file 'tox.ini' not found
2014-06-16 07:17:13.503 | Build step 'Execute shell' marked build as failure

Since it's not a Python project, no tarball or pypi upload.

Cheers,



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Blueprints process

2014-06-24 Thread Dmitry Pyzhov
Guys,

We have a beautiful contribution guide:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute

However, I would like to address several issues in our blueprints/bugs
processes. Let's discuss and vote on my proposals.

1) First of all, the bug counter is an excellent metric for quality. So
let's use it only for bugs and track all feature requirements as blueprints.
Here is what that means:

1a) If a bug report does not describe a user’s pain, a blueprint should be
created and the bug should be closed as invalid
1b) If a bug report does relate to a user’s pain, a blueprint should be
created and linked to the bug
1c) We have an excellent reporting tool
http://fuel-launchpad.mirantis.com/project/fuel, but it needs more
metrics: the count of critical/high bugs and the count of bugs assigned to each
team. This will require maintaining team member lists, but it seems that we
really need it.


2) We have a huge number of blueprints and it is hard to work with this
list. A good blueprint needs a fixed scope, a spec review and acceptance
criteria. It is obvious to me that we cannot work on blueprints that do
not meet these requirements. Therefore:

2a) Let's copy the nova future series https://launchpad.net/nova/future
and create a fake milestone 'next' as nova does
https://launchpad.net/nova/+milestone/next. All unclear blueprints should
be moved there. We will pick blueprints from there, add spec and other info
and target them to a milestone when we are really ready to work on a
particular blueprint. Our release page
https://launchpad.net/fuel/+milestone/5.1 will look much more close to
reality and much more readable in this case.
2b) Each blueprint in a milestone should contain information about feature
lead, design reviewers, developers, qa, acceptance criteria. Spec is
optional for trivial blueprints. If a spec is created, the designated
reviewer(s) should put (+1) right into the blueprint description.
2c) Every blueprint spec should be updated before feature freeze with the
latest up-to-date information. Actually, I'm not sure we care about the spec
after feature development, but it seems logical to have correct
information in specs.
2d) We should avoid creating interconnected blueprints wherever possible.
Of course we can have several blueprints for one big feature if it can be
split into several shippable blocks for several releases or for several
teams. In most cases, small parts should be tracked as work items of a
single blueprint.


3) Every review request without a bug or blueprint link should be checked
carefully.

3a) It should contain a complete description of what is being done and why
3b) It should not require backports to stable branches (backports are
bugfixes only)
3c) It should not require changes to documentation or be mentioned in
release notes
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Vladimir Kozhukalov
Guys,

What do you think of removing node logs on the master node right after a
node is removed from a cluster?

The issue is that when a user does experiments he creates and deletes clusters,
and old unused directories remain and take up disk space. On the other hand, it
is not so hard to imagine a situation where a user would like to be able to
take a look at old logs.

My suggestion here is to add a boolean parameter to the settings which will
manage this piece of logic (1 - remove old logs, 0 - don't touch old logs).

Thanks for your opinions.

Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Jiri Tomasek

On 06/20/2014 11:17 PM, Lyle, David wrote:

I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.

Zhenguo has been a prolific reviewer for the past two releases providing
high quality reviews. And providing a significant number of patches over
the past three releases.

Ana has been a significant reviewer in the Icehouse and Juno release
cycles. She has also contributed several patches in this timeframe to both
Horizon and tuskar-ui.

Please feel free to respond in public or private your support or any
concerns.

Thanks,
David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1 to both, thanks for your hard work!

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Monty Taylor
On 06/20/2014 02:33 PM, Joe Gordon wrote:
 On Fri, Jun 20, 2014 at 11:07 AM, Sean Dague s...@dague.net wrote:
 
 After seeing a bunch of code changes to enforce new hacking rules, I'd
 like to propose dropping some of the rules we have. The overall patch
 series is here -

 https://review.openstack.org/#/q/status:open+project:openstack-dev/hacking+branch:master+topic:be_less_silly,n,z

 H402 - 1 line doc strings should end in punctuation. The real statement
 is this should be a summary sentence. A sentence is not just a set of
 words that end in a period. Squirrel fast bob. It's something deeper.
 This rule thus isn't really semantically useful, especially when you are
 talking about a 69 character maximum (79 - 4 space indent - 6 quote
 characters).

 
 Thoughts on removing all pep257 (http://legacy.python.org/dev/peps/pep-0257/)
 things from hacking? If projects would still like to enforce it there is a
 flake8 extension for pep257 itself.

I think this is an excellent idea.


 H803 - First line of a commit message must *not* end in a period. This
 was mostly a response to an unreasonable core reviewer that was -1ing
 people for not having periods. I think any core reviewer that -1s for
 this either way should be thrown off the island, or at least made fun
 of, a lot. Again, the clarity of a commit message is not made or lost by
 the lack or existence of a period at the end of the first line.

 
 ++ for removing this; in general the git based rules are funny to enforce.
 You can run 'tox -epep8' before a commit and everything will pass; then
 you write your commit message and now it will fail.

++

 

 H305 - Enforcement of libraries fitting correctly into stdlib, 3rdparty,
 our tree. This biggest issue here is it's built in a world where there
 was only 1 viable python version, 2.7. Python's stdlib is actually
 pretty dynamic and grows over time. As we embrace more python 3, and as
 distros start to make python3 be front and center, what does this even
 mean? The current enforcement can't pass on both python2 and python3 at
 the same time in many cases because of that.

 
 ++ Oh Python 2 vs. 3
 
 For this one I think we should leave the rule in HACKING.rst but explicitly
 document it as a recommendation, and that python2 vs python3 makes this
 unenforceable.
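For reference, the grouping H305 checks looks roughly like this -- standard
library first, then third-party, then the project's own modules, each group
separated by a blank line (module names purely illustrative):

# Standard library imports first...
import os
import sys

# ...then third-party libraries...
import sqlalchemy

# ...then the project's own modules.
from nova import exception
from nova import utils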
 
 

 We have to remember we're all humans, and it's ok to have grey space.
 Like in 305, you *should* group the libraries if you can, but stuff like
 that should be labeled as 'nit' in the review, and only ask the author
 to respin it if there are other more serious issues to be handled.

 Let's optimize a little more for fun, and stop throwing -1s for silly
 things. :)

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Monty Taylor
On 06/22/2014 02:49 PM, Duncan Thomas wrote:
 On 22 June 2014 14:41, Amrith Kumar amr...@tesora.com wrote:
 In addition to making changes to the hacking rules, why don't we mandate also
 that perceived problems in the commit message shall not be an acceptable
 reason to -1 a change.
 
 -1.
 
 There are some /really/ bad commit messages out there, and some of us
 try to use the commit messages to usefully sort through the changes
 (i.e. I often -1 in cinder when a change only affects one driver and that
 isn't clear from the summary).
 
 If the perceived problem is grammatical, I'm a bit more on board with
 it not a reason to rev a patch, but core reviewers can +2/A over the
 top of a -1 anyway...

100% agree. Spelling and grammar are rude to review on - especially
since we have (and want) a LOT of non-native English speakers. It's not
our job to teach people better grammar. Heck - we have people from
different English backgrounds with differing disagreements on what good
grammar _IS_

But learning to put better info into a commit message is worthwhile to
learn. I know that I, for one, have gotten better at this over my time
working on OpenStack.

 Would this improve the situation?
 
 Writing better commit messages in the first place would improve the situation?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Russell Bryant
On 06/24/2014 08:56 AM, Anne Gentle wrote:
 
 
 
 On Tue, Jun 24, 2014 at 7:07 AM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
 
 On 06/24/2014 07:35 AM, Michael Still wrote:
  Phil -- I really want people to focus their efforts on fixing bugs in
  that period was the main thing. The theory was if we encouraged people
  to work on specs for the next release, then they'd be distracted from
  fixing the bugs we need fixed in J.
 
  Cheers,
  Michael
 
  On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com
 mailto:philip@hp.com wrote:
  Hi Michael,
 
  Not sure I understand the need for a gap between Juno Spec
 approval freeze (Jul 10th) and K opens for spec proposals (Sep
 4th).I can understand that K specs won't get approved in that
 period, and may not get much feedback from the cores - but I don't
 see the harm in letting specs be submitted to the K directory for
 early review / feedback during that period ?
 
 I agree with both of you.  Priorities need to be finishing up J, but I
 don't see any reason not to let people post K specs whenever.
 Expectations just need to be set appropriately that it may be a while
 before they get reviewed/approved.
 
 
 No, we need more discipline around bug fixing and also to reserve time
 for docs. 

I agree with the goal, but I don't think treating the specs repo as
off-limits helps.  What are we going to do, disable the project in gerrit?

I'd rather talk about the carrot.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar] Blazar client V2 REST API support and Horizon integration

2014-06-24 Thread Fuente, Pablo A
Blazar cores,
I managed to get our client working against our V2 REST API and I
implemented the first bits of our Blazar Dashboard for Horizon. In order
to continue working on the latter, I need these patches on master. I
suggest this order:
https://review.openstack.org/#/c/99389/ (V2 REST API)
https://review.openstack.org/#/c/93047/ (V2 REST API)
https://review.openstack.org/#/c/91455/ (V2 REST API)
https://review.openstack.org/#/c/100661/ (Blazar Dashboard)
https://review.openstack.org/#/c/100662/ (Blazar Dashboard)

Please take into account that the three V2 REST API patches must be
merged in order to get all working without failures.

Thanks.
Pablo.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Robert Collins
On 23 June 2014 07:04, Jay Pipes jaypi...@gmail.com wrote:

 I would also love to get rid of H404, otherwise known as the dumb rule that
 says if you have a multiline docstring, that there must be a summary line,
 then a blank line, then a detailed description. It makes things like this
 illegal, which, IMHO, is stupid:

 def do_something(self, thing):
     """We do something with the supplied thing, so that something else
     is also done with this other thing. Make sure the other thing is
     of type something.
     """
     pass

 Likewise, I'd love to be able to have a newline start the docstring, like
 so:

 def do_something(self, thing):
     """
     We do something with the supplied thing, so that something else
     is also done with this other thing. Make sure the other thing is
     of type something.
     """
     pass

 But there's a rule that prevents that as well...

 To be clear, I don't think all hacking rules are silly. To the contrary,
 there are many that are reasonable and useful. However, I'd prefer to focus
 on things that make the code more readable, not less readable, and rules
 that enforce Pythonic idioms, not some random hacker's idea of good style.

So

"""Lorem ipsum

Foo bar baz
"""

is a valid PEP-257 docstring, though a bit suspect on context. In fact
*all* leading whitespace is stripped -

"""foo"""

and

"""
foo
"""

are equivalent for docstrings - even though they aren't equivalent for
the mk1 human eyeball reading them.

So in both cases I would have expected you to be bitten by the
first-line rule, which exists for API extractors (such as
help(module)) so that they have a useful, meaningful summary they can
pull out. I think it aids immensely in docstring readability - and it's
certainly the convention throughout the rest of the Python universe, so
IMO it comes as part of the parcel when you ask for Python.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-06-24 Thread Dustin Lundquist
I think the API should provide a richly featured interface, and individual
drivers should indicate whether they support the provided configuration. For
example, there is a spec for a Linux LVS LBaaS driver; this driver would not
support TLS termination or any layer 7 features, but would still be
valuable for some deployments. The user experience of such a solution could
be improved if the driver propagated up a message specifically
identifying the unsupported feature.
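To illustrate the idea (a hypothetical sketch only -- this is not an existing
Neutron interface), a driver could simply declare what it supports and reject
the rest with a clear message:

# Hypothetical driver-side validation; names are illustrative.
SUPPORTED_COMPARE_TYPES = {'Regexp', 'StartsWith', 'EndsWith', 'Contains'}


class UnsupportedDriverFeature(Exception):
    pass


def validate_l7_rule(rule):
    compare_type = rule['compare_type']
    if compare_type not in SUPPORTED_COMPARE_TYPES:
        # propagate a message naming the exact unsupported feature
        raise UnsupportedDriverFeature(
            "compare_type '%s' is not supported by this driver" % compare_type)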


-Dustin


On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman avish...@radware.com
wrote:

  Hi

 One of the L7 Rule attributes is ‘compare_type’.

 This field is the match operator that the rule should activate against the
 value found in the request.

 Below is list of the possible values:

 - Regexp

 - StartsWith

 - EndsWith

 - Contains

 - EqualTo (*)

 - GreaterThan (*)

 - LessThan (*)



 The last 3 operators (*) in the list are used in numerical matches.

 Radware's load balancing backend does not support those operators “out of
 the box”, and a significant development effort would be needed in order to
 support them.

 We are afraid we will miss the Juno timeframe if we have to focus on
 supporting the numerical operators.

 Therefore we ask to support only the non-numerical operators for Juno and add
 the numerical operators support post-Juno.



 See https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst



 Thanks

 Avishay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-24 Thread Jordan OMara

On 20/06/14 16:26 -0400, Charles Crouch wrote:

Any more takers for the tripleo mid-cycle meetup in Raleigh? If so, please
sign up on the etherpad below.

The hotel group room rate will be finalized on Monday Jul 23rd (US time), 
after that time you will be on your own for finding accommodation.


Thanks
Charles



Just an update that I've got us a block of rooms reserved at the
nearest, cheapest hotel (the Marriott in downtown Raleigh, about 200
yards from the Red Hat office) - I'll have details on how to actually
book at this rate in just a few minutes.
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-06-24 Thread Avishay Balderman
Hi Dustin
I agree with the concept you described but as far as I understand it is not 
currently supported in Neutron.
So a driver should be fully compatible with the interface it implements.

Avishay

From: Dustin Lundquist [mailto:dus...@null-ptr.net]
Sent: Tuesday, June 24, 2014 5:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

I think the API should provide an richly featured interface, and individual 
drivers should indicate if they support the provided configuration. For example 
there is a spec for a Linux LVS LBaaS driver, this driver would not support TLS 
termination or any layer 7 features, but would still be valuable for some 
deployments. The user experience of such a solution could be improved if the 
driver to propagate up a message specifically identifying the unsupported 
feature.


-Dustin

On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman 
avish...@radware.commailto:avish...@radware.com wrote:
Hi
One of L7 Rule attributes is ‘compare_type’.
This field is the match operator that the rule should activate against the 
value found in the request.
Below is list of the possible values:
- Regexp
- StartsWith
- EndsWith
- Contains
- EqualTo (*)
- GreaterThan (*)
- LessThan (*)

The last 3 operators (*) in the list are used in numerical matches.
Radware load balancing backend does not support those operators   “out of the 
box” and a significant development effort should be done in order to support it.
We are afraid to miss the Junu timeframe if we will have to focus in supporting 
the numerical operators.
Therefore we ask to support the non-numerical operators for Junu and add the 
numerical operators support post Junu.

See https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst

Thanks
Avishay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Aleksandr Didenko
Hi,

If a user runs some experiments creating/deleting clusters, then taking
care of old logs is the user's responsibility, I suppose. Fuel configures
log rotation with compression for remote logs, so old logs will be gzipped
and will not take much space.

In case of additional boolean parameter, the default value should be
0-don't touch old logs.

--
Regards,
Alex


On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes clusters and
 old unused directories remain and take disk space. On the other hand, it is
 not so hard to imaging the situation when user would like to be able to
 take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which will
 manage this piece of logic (1-remove old logs, 0-don't touch old logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Day, Phil
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 24 June 2014 13:08
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release
 
 On 06/24/2014 07:35 AM, Michael Still wrote:
  Phil -- I really want people to focus their efforts on fixing bugs in
  that period was the main thing. The theory was if we encouraged people
  to work on specs for the next release, then they'd be distracted from
  fixing the bugs we need fixed in J.
 
  Cheers,
  Michael
 
  On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
  Hi Michael,
 
  Not sure I understand the need for a gap between Juno Spec approval
 freeze (Jul 10th) and K opens for spec proposals (Sep 4th).I can
 understand that K specs won't get approved in that period, and may not get
 much feedback from the cores - but I don't see the harm in letting specs be
 submitted to the K directory for early review / feedback during that period ?
 
 I agree with both of you.  Priorities need to be finishing up J, but I don't 
 see
 any reason not to let people post K specs whenever.
 Expectations just need to be set appropriately that it may be a while before
 they get reviewed/approved.
 
Exactly - I think it's reasonable to set the expectation that the focus of 
those that can produce/review code will be elsewhere - but that shouldn't stop 
some small effort going into knocking the rough corners off the specs at the 
same time


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] High bandwidth routers

2014-06-24 Thread Mark McClain

On Jun 23, 2014, at 9:21 AM, CARVER, PAUL 
pc2...@att.commailto:pc2...@att.com wrote:

Is anyone using Neutron for high bandwidth workloads? (For the sake of discussion, 
let’s say “high” = “50Gbps or greater”.)

With routers being implemented as network namespaces within x86 servers it 
seems like Neutron networks would be pretty bandwidth constrained relative to 
“real” routers.

As we start migrating the physical connections on our physical routers from 
multiples of 10G to multiples of 100G, I’m wondering if Neutron has a clear 
roadmap towards networks where the bandwidth requirements exceed what an x86 
box can do.

Is the thinking that x86 boxes will soon be capable of 100G and multi-100G 
throughput? Or does DVR take care of this by spreading the routing function 
over a large number of compute nodes so that we don’t need to channel 
multi-100G flows through single network nodes?

I’m mostly thinking about WAN connectivity here, video and big data 
applications moving huge amounts of traffic into and out of OpenStack based 
datacenters.


There are a few internal implementations of the L3 plugin that are backed by 
dedicated hardware rather than commodity servers + network namespaces.  Of those few, 
all are site specific (due to limited feature support) and not likely to be upstreamed.

mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Jay Pipes

On 06/24/2014 10:34 AM, Robert Collins wrote:

On 23 June 2014 07:04, Jay Pipes jaypi...@gmail.com wrote:


I would also love to get rid of H404, otherwise known as the dumb rule that
says if you have a multiline docstring, that there must be a summary line,
then a blank line, then a detailed description. It makes things like this
illegal, which, IMHO, is stupid:

def do_something(self, thing):
    """We do something with the supplied thing, so that something else
    is also done with this other thing. Make sure the other thing is
    of type something.
    """
    pass

Likewise, I'd love to be able to have a newline start the docstring, like
so:

def do_something(self, thing):
    """
    We do something with the supplied thing, so that something else
    is also done with this other thing. Make sure the other thing is
    of type something.
    """
    pass

But there's a rule that prevents that as well...

To be clear, I don't think all hacking rules are silly. To the contrary,
there are many that are reasonable and useful. However, I'd prefer to focus
on things that make the code more readable, not less readable, and rules
that enforce Pythonic idioms, not some random hacker's idea of good style.


So

"""Lorem ipsum

Foo bar baz
"""

is a valid PEP-257 docstring, though a bit suspect on context. In fact
*all* leading whitespace is stripped -

"""foo"""

and

"""
foo
"""

are equivalent for docstrings - even though they aren't equivalent for
the mk1 human eyeball reading them.

So in both cases I would have expected to you be bitten by the
first-line rule, which exists for API extractors (such as
help(module)) so that they have a useful, meaningful summary they can
pull out. I think it aids immensely in docstring readability - and its
certainly convention throughout the rest of the Python universe, so
IMO it comes part of the parcel when you ask for Python.



"""
This is a summary.

And this is a description
"""

will result in a failure of H404, due to the "This is a summary." not 
being on the first line, like this:


"""This is a summary.

And this is a description
"""

It is that silliness that I deplore, not the summary line followed by a 
newline issue.


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread Clark, Robert Graham
Hi all,

I’m sure this has been discussed somewhere and I’ve just missed it.

Is there any value in creating a basic ‘CA’ and plugin to satisfy 
tests/integration in Barbican? I’m thinking something that probably performs 
OpenSSL certificate operations itself, ugly but perhaps useful for some things?
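For illustration, the guts of such a barebones plugin might be little more
than this (a sketch using pyOpenSSL; the function name and how it would plug
into Barbican are hypothetical, and ca_cert/ca_key are pre-loaded X509/PKey
objects):

from OpenSSL import crypto


def sign_csr(csr_pem, ca_cert, ca_key, serial, days=365):
    # Sign a PEM-encoded CSR with a local CA cert/key; return the cert PEM.
    csr = crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_pem)
    cert = crypto.X509()
    cert.set_serial_number(serial)
    cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(days * 24 * 3600)
    cert.set_issuer(ca_cert.get_subject())
    cert.set_subject(csr.get_subject())
    cert.set_pubkey(csr.get_pubkey())
    cert.sign(ca_key, 'sha256')
    return crypto.dump_certificate(crypto.FILETYPE_PEM, cert)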

-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Aleksandr Didenko
Yeah, I thought about the diagnostic snapshot too. Maybe it would be better to
implement per-environment diagnostic snapshots? I.e. add diagnostic
snapshot generate/download buttons/links in the environment actions tab.
Such a snapshot would contain info/logs about the Fuel master node and only
the nodes assigned to the environment.


On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make a snapshot of the entire /var/log, and obviously
 these old logs will be included in the snapshot. Should we skip them, or is
 such a situation ok?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then
 taking care of old logs is under user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes clusters
 and old unused directories remain and take disk space. On the other hand,
 it is not so hard to imaging the situation when user would like to be able
 to take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which
 will manage this piece of logic (1-remove old logs, 0-don't touch old logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Igor Kalnitsky
Hi guys,

What about our diagnostic snapshot?

I mean we're going to make a snapshot of the entire /var/log, and obviously
these old logs will be included in the snapshot. Should we skip them, or is
such a situation ok?

- Igor




On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then taking
 care of old logs is under user's responsibility, I suppose. Fuel configures
 log rotation with compression for remote logs, so old logs will be gzipped
 and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes clusters and
 old unused directories remain and take disk space. On the other hand, it is
 not so hard to imaging the situation when user would like to be able to
 take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which will
 manage this piece of logic (1-remove old logs, 0-don't touch old logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-24 Thread Jay Pipes



On 06/24/2014 07:32 AM, Daniel P. Berrange wrote:

On Tue, Jun 24, 2014 at 10:55:41AM +, Day, Phil wrote:

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: 23 June 2014 10:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
as part of resize ?

On 18 June 2014 21:57, Jay Pipes jaypi...@gmail.com wrote:

On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:


On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:


On 06/13/2014 02:22 PM, Day, Phil wrote:


I guess the question I’m really asking here is:  “Since we know
resize down won’t work in all cases, and the failure if it does
occur will be hard for the user to detect, should we just block it
at the API layer and be consistent across all Hypervisors ?”



+1

There is an existing libvirt blueprint:

https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
which I've never been in favor of:
https://bugs.launchpad.net/nova/+bug/1270238/comments/1



All of the functionality around resizing VMs to match a different
flavour seem to be a recipe for unleashing a torrent of unfixable
bugs, whether resizing disks, adding CPUs, RAM or any other aspect.



+1

I'm of the opinion that we should plan to rip resize functionality out
of (the next major version of) the Compute API and have a *single*,
*consistent* API for migrating resources. No more API extension X for
migrating this kind of thing, and API extension Y for this kind of
thing, and API extension Z for migrating /live/ this type of thing.

There should be One move API to Rule Them All, IMHO.


+1 for one move API, the two evolved independently, in different
drivers, its time to unify them!

That plan got stuck behind the refactoring of live-migrate and migrate to the
conductor, to help unify the code paths. But it kinda got stalled (I must
rebase those patches...).

Just to be clear, I am against removing resize down from v2 without a
deprecation cycle. But I am pro starting that deprecation cycle.

John


I'm not sure Daniel and Jay are arguing for the same thing here John:
  I *think*  Daniel is saying drop resize altogether and Jay is saying
unify it with migration - so I'm a tad confused which of those you're
agreeing with.


Yes, I'm personally for removing resize completely since, IMHO, no matter
how many bugs we fix it is always going to be a mess. That said I realize
that people probably find resize-up useful, so I won't push hard to kill
it - we should just recognize that it is always going to be a mess which
does not result in the same setup you'd get if you booted fresh with the
new settings.


I am of the opinion that the different API extensions and the fact that 
they have evolved separately have created a giant mess for users, and 
that we should consolidate the API into a single move API that can 
take an optional new set of resources (via a new specified flavor) and 
should automatically live move the instance if it is possible, and 
fall back to a cold move if it isn't possible, with no confusing options 
or additional/variant API calls needed by the user.
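Purely as an illustration of the idea -- none of this exists today, and the
action name and fields are hypothetical -- a single call might look like:

# Hypothetical unified "move" action: one call covering resize, cold
# migration and live migration, with the service picking the cheapest
# mechanism that works.
move_body = {
    "move": {
        "flavor_ref": "new-flavor-id",  # optional: resize as part of the move
        "live": "auto",                 # live-move if possible, cold otherwise
        "host": None,                   # optional: let the scheduler decide
    }
}
# POST /servers/{server_id}/action with the JSON body above.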


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Andrey Danin
What about gzipping old logs with Astute and placing them in a special
directory managed under logrotate.d, so that logrotate will
remove untouched logs after 1 month?


On Tue, Jun 24, 2014 at 6:57 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then taking
 care of old logs is under user's responsibility, I suppose. Fuel configures
 log rotation with compression for remote logs, so old logs will be gzipped
 and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes clusters and
 old unused directories remain and take disk space. On the other hand, it is
 not so hard to imaging the situation when user would like to be able to
 take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which will
 manage this piece of logic (1-remove old logs, 0-don't touch old logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread David Shrewsbury
On Tue, Jun 24, 2014 at 11:28 AM, Jay Pipes jaypi...@gmail.com wrote:


 
 This is a summary.

 And this is a description
 

 will result in a failure of H404, due to the This is a summary. not
 being on the first line, like this:

 This is a summary.

 And this is a description
 

 It is that silliness that I deplore, not the summary line followed by a
 newline issue.



Yes!! That is definitely the most silly silliness, and I deplore it.


--
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread Avishay Traeger
One more reason why block storage management doesn't really work on file
systems.  I'm OK with storing the format, but that just means you fail
migration/backup operations with different formats, right?


On Mon, Jun 23, 2014 at 6:07 PM, Trump.Zhang zhangleiqi...@gmail.com
wrote:

 Hi, all:

 Currently, there are several filesystem-based drivers in Cinder, such
 as nfs, glusterfs, etc. Multiple volume formats other than raw can
 potentially be supported in these drivers, such as qcow2, sparse, etc.

 However, Cinder does not store the actual format of a volume and assumes
 all volumes are raw format. This has, or will have, several problems,
 as follows:

 1. For volume migration, the generic migration implementation in
 Cinder uses the dd command to copy the src volume to the dest volume. If the
 src volume is qcow2 format, the instance will not get the right data from
 the volume after the dest volume is attached to the instance, because the info
 returned from Cinder states that the volume's format is raw rather than
 qcow2
 2. For volume backup, the backup driver also assumes that src volumes
 are raw format; other formats are not supported

 Indeed, the glusterfs driver has used the qemu-img info command to judge the
 format of a volume. However, as the comment from Duncan in [1] says, this
 auto-detection method has many possible error / exploit vectors: if
 the beginning of a raw volume happens to look like a qcow2 header, the
 auto-detection method will wrongly judge the volume to be qcow2.

 I propose that the format info should be added to the admin_metadata
 of volumes, and enforced on all operations, such as create, copy, migrate
 and retype. The format will only be set / updated for filesystem-based
 drivers; other drivers will not contain this metadata and will have a default
 raw format.

 Any advice?

 [1] https://review.openstack.org/#/c/100529/

 --
 ---
 Best Regards

 Trump.Zhang

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-24 Thread Jordan OMara

On 24/06/14 10:55 -0400, Jordan OMara wrote:

On 20/06/14 16:26 -0400, Charles Crouch wrote:

Any more takers for the tripleo mid-cycle meetup in Raleigh? If so, please
sign up on the etherpad below.

The hotel group room rate will be finalized on Monday Jul 23rd (US 
time), after that time you will be on your own for finding 
accommodation.


Thanks
Charles



Just an update that I've got us a block of rooms reserved at the
nearest, cheapest hotel (the Marriott in downtown Raleigh, about 200
yards from the Red Hat office) - I'll have details on how to actually
book at this rate in just a few minutes.


Please use the following link to reserve at the marriott (it's copied
on the etherpad)

http://tinyurl.com/redhat-marriott

We have a 24-room block reserved at that rate from SUN-FRI
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread Duncan Thomas
On 24 June 2014 16:42, Avishay Traeger avis...@stratoscale.com wrote:
 One more reason why block storage management doesn't really work on file
 systems.  I'm OK with storing the format, but that just means you fail
 migration/backup operations with different formats, right?

Actually I think storing the format *fixes* those cases, since the
driver knows what the source format is to get a raw stream of bytes
out. It was in trying to fix backup that this problem was found.
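For instance (a sketch; paths, formats and the helper name are illustrative),
once the format is stored and trusted, a driver can always hand a raw byte
stream to backup or generic migration instead of guessing with 'qemu-img info':

import subprocess


def copy_as_raw(volume_path, stored_format, dst_path):
    # Convert a volume to raw using its trusted, stored format
    # (e.g. 'qcow2' taken from the volume's admin_metadata).
    subprocess.check_call(['qemu-img', 'convert',
                           '-f', stored_format,
                           '-O', 'raw',
                           volume_path, dst_path])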

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread John Griffith
On Tue, Jun 24, 2014 at 9:42 AM, Avishay Traeger avis...@stratoscale.com
wrote:

 One more reason why block storage management doesn't really work on file
 systems.  I'm OK with storing the format, but that just means you fail
 migration/backup operations with different formats, right?


​+1... so nice that somebody else said it for me this time!!

We need to make sure this is completely abstracted from the end user and that
the manager can make the right decisions (i.e. implement a way to work from
one format to the other).



 On Mon, Jun 23, 2014 at 6:07 PM, Trump.Zhang zhangleiqi...@gmail.com
 wrote:

 Hi, all:

 Currently, there are several filesystem-based drivers in Cinder, such
 as nfs, glusterfs, etc. Multiple volume formats other than raw can
 potentially be supported in these drivers, such as qcow2, raw, sparse, etc.

 However, Cinder does not store the actual format of a volume and
 assumes all volumes are in raw format. This has, or will have, several
 problems:

 1. For volume migration, the generic migration implementation in
 Cinder uses the dd command to copy the src volume to the dest volume. If the
 src volume is in qcow2 format, the instance will not get the right data from
 the volume after the dest volume is attached, because the info
 returned from Cinder states that the volume's format is raw rather than
 qcow2.
 2. For volume backup, the backup driver also assumes that src
 volumes are in raw format; other formats are not supported.

 Indeed, the glusterfs driver already uses the qemu-img info command to detect
 the format of a volume. However, as Duncan's comment in [1] says, this
 auto-detection method has many possible error / exploit vectors: if
 the beginning of a raw volume happens to look like a qcow2 header, auto
 detection will wrongly judge the volume to be qcow2.

 I propose that the format info should be added to the admin_metadata
 of volumes and enforced on all operations, such as create, copy, migrate
 and retype. The format will only be set / updated for filesystem-based
 drivers; other drivers will not contain this metadata and will default to
 raw format.

 Any advice?

 [1] https://review.openstack.org/#/c/100529/

 --
 ---
 Best Regards

 Trump.Zhang

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler]

2014-06-24 Thread Abbass MAROUNI

Hi,

I was wondering if there's a way to set a tag (key/value) on a Virtual
Machine from within a scheduler filter?


I want to be able to tag a machine with a specific key/value after it
passes my custom filter.
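
For context, a rough sketch of what a custom filter can and cannot do here,
assuming the Juno-era filter interface (BaseHostFilter.host_passes). The
filter only sees host_state and filter_properties, so persisting a real
instance tag would still need a separate call to the compute API afterwards:

from nova.scheduler import filters


class TaggingFilter(filters.BaseHostFilter):
    """Example filter that records why a host passed."""

    def host_passes(self, host_state, filter_properties):
        # Whatever the custom check happens to be.
        if host_state.free_ram_mb < 1024:
            return False
        # Stash a key/value on the request so later code (e.g. a post-boot
        # step) can turn it into real instance metadata via the compute API
        # (novaclient server.set_meta / the /metadata resource).
        filter_properties.setdefault('my_filter_tags', {})[
            host_state.host] = 'passed-custom-filter'
        return True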


Thanks,

--
--
Abbass MAROUNI
VirtualScale


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-06-24 Thread Dustin Lundquist
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman avish...@radware.com
wrote:

  Hi Dustin

 I agree with the concept you described but as far as I understand it is
 not currently supported in Neutron.

 So a driver should be fully compatible with the interface it implements.



 Avishay



 *From:* Dustin Lundquist [mailto:dus...@null-ptr.net]
 *Sent:* Tuesday, June 24, 2014 5:41 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7
 Rule - comapre_type values



 I think the API should provide a richly featured interface, and
 individual drivers should indicate whether they support the provided
 configuration. For example, there is a spec for a Linux LVS LBaaS driver;
 this driver would not support TLS termination or any layer 7 features, but
 would still be valuable for some deployments. The user experience of such a
 solution could be improved if the driver propagated a message
 specifically identifying the unsupported feature.
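
 As an illustration of that idea (names invented, not an existing Neutron
 interface), a driver could declare the compare types it supports and raise
 a specific error for the rest:

SUPPORTED_COMPARE_TYPES = {'REGEX', 'STARTS_WITH', 'ENDS_WITH',
                           'CONTAINS', 'EQUAL_TO'}


class UnsupportedCompareType(Exception):
    def __init__(self, compare_type):
        super(UnsupportedCompareType, self).__init__(
            "L7 rule compare_type '%s' is not supported by this driver"
            % compare_type)


def validate_l7_rule(rule):
    # Reject rules this backend cannot realize, naming the exact feature.
    if rule['compare_type'] not in SUPPORTED_COMPARE_TYPES:
        raise UnsupportedCompareType(rule['compare_type'])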





 -Dustin



 On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman avish...@radware.com
 wrote:

 Hi

 One of the L7 Rule attributes is ‘compare_type’.

 This field is the match operator that the rule evaluates against the
 value found in the request.

 Below is list of the possible values:

 - Regexp

 - StartsWith

 - EndsWith

 - Contains

 - EqualTo (*)

 - GreaterThan (*)

 - LessThan (*)



 The last 3 operators (*) in the list are used in numerical matches.

 The Radware load balancing backend does not support those operators “out of
 the box”, and a significant development effort would be required in order to
 support them.

 We are afraid of missing the Juno timeframe if we have to focus on
 supporting the numerical operators.

 Therefore we ask to support the non-numerical operators for Juno and add
 support for the numerical operators post-Juno.



 See https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst



 Thanks

 Avishay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread John Griffith
On Tue, Jun 24, 2014 at 9:56 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 On 24 June 2014 16:42, Avishay Traeger avis...@stratoscale.com wrote:
  One more reason why block storage management doesn't really work on file
  systems.  I'm OK with storing the format, but that just means you fail
  migration/backup operations with different formats, right?

 Actually I think storing the format *fixes* those cases, since the
 driver knows what the source format is to get a raw stream of bytes
 out. It was in trying to fix backup that this problem was found.


​Yes, but I was also trying to point out this shouldn't be done in the
driver... but at this point maybe IRC is a better forum to discuss the
impl?​


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stackalytics 0.6 released!

2014-06-24 Thread Herman Narkaytis
Hi Stackers,
  More than a year ago Mirantis announced Stackalytics as a public resource
for the OpenStack community. Initially it was an internal tool for our
performance tracking, but later the resource became the de-facto standard for
measuring contribution statistics. We started with several POCs on
different technologies which were targeted to show decent performance on 1
million records. At that time there were only 50k commit records and we
estimated it would take 3 years to reach 1M. I'm glad to admit that we were
wrong. Now Stackalytics handles not only commits, but reviews, blueprints,
bugs, emails, registrations on openstack.org, etc. We've reached the 1M mark
and are still able to generate reports in seconds!

  Today we'd like to announce 0.6 release with a bunch of new features:

   - Implemented module classification based on programs.yaml with
   retrospective integrated/incubated attribution
   - Added support for co-authored commits
   - Added metrics on filed and resolved bugs
   - Added drill-down report on OpenStack foundation members

  I want to say thank you to our development team and especially to Ilya
Shakhat, Pavel Kholkin and Yury Taraday.

  Please feel free to provide your feedback. It's highly appreciated.

--
Herman Narkaytis
DoO Ru, PhD
Tel.: +7 (8452) 674-555, +7 (8452) 431-555
Tel.: +7 (495) 640-4904
Tel.: +7 (812) 640-5904
Tel.: +38(057)728-4215
Tel.: +1 (408) 715-7897
ext 2002
http://www.mirantis.com

This email (including any attachments) is confidential. If you are not the
intended recipient you must not copy, use, disclose, distribute or rely on
the information contained in it. If you have received this email in error,
please notify the sender immediately by reply email and delete the email
from your system. Confidentiality and legal privilege attached to this
communication are not waived or lost by reason of mistaken delivery to you.
Mirantis does not guarantee (that this email or the attachment's) are
unaffected by computer virus, corruption or other defects. Mirantis may
monitor incoming and outgoing emails for compliance with its Email Policy.
Please note that our servers may not be located in your country.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Periodic Security Checks

2014-06-24 Thread Joe Gordon
On Sat, Jun 21, 2014 at 11:33 AM, Alexandr Naumchev anaumc...@gmail.com
wrote:

 Hello!
 We have blueprints here:

 https://blueprints.launchpad.net/horizon/+spec/periodic-security-checks

 and here:

 https://blueprints.launchpad.net/nova/+spec/periodic-security-checks/

 And we already have some code. Is it necessary to approve the blueprint
 before contributing the code? In any case, could someone review the
 aforementioned blueprints?
 Thanks!



Hi, nova has moved away from using 100% launchpad to approve blueprints.
Our current workflow is documented here
https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Igor Kalnitsky
Hello,

@Aleks, it's a good idea to make a snapshot per environment, but I think
we can keep the functionality to make a snapshot for all nodes at once too.

- Igor


On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Yeah, I thought about diagnostic snapshot too. Maybe it would be better to
 implement per-environment diagnostic snapshots? I.e. add diagnostic
 snapshot generate/download buttons/links in the environment actions tab.
 Such snapshot would contain info/logs about Fuel master node and nodes
 assigned to the environment only.


 On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make a snapshot of the entire /var/log, and obviously
 these old logs will be included in the snapshot. Should we skip them, or is
 that situation OK?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 Hi,

 If a user runs experiments creating/deleting clusters, then
 taking care of old logs is the user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is that when a user experiments he creates and deletes clusters,
 and old unused directories remain and take up disk space. On the other hand,
 it is not hard to imagine a situation where the user would like to be able
 to take a look at old logs.

 My suggestion here is to add a boolean parameter to the settings which
 will manage this piece of logic (1 - remove old logs, 0 - don't touch old
 logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Aleksandr Didenko
Yes, of course, snapshot for all nodes at once (like currently) should also
be available.


On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

 Hello,

 @Aleks, it's a good idea to make snapshot per environment, but I think
 we can keep functionality to make snapshot for all nodes at once too.

 - Igor


 On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Yeah, I thought about diagnostic snapshot too. Maybe it would be better
 to implement per-environment diagnostic snapshots? I.e. add diagnostic
 snapshot generate/download buttons/links in the environment actions tab.
 Such snapshot would contain info/logs about Fuel master node and nodes
 assigned to the environment only.


 On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make a snapshot of the entire /var/log, and obviously
 these old logs will be included in the snapshot. Should we skip them, or is
 that situation OK?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then
 taking care of old logs is under user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes clusters
 and old unused directories remain and take disk space. On the other hand,
 it is not so hard to imaging the situation when user would like to be able
 to take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which
 will manage this piece of logic (1-remove old logs, 0-don't touch old 
 logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-24 Thread Jay Pipes

On 06/11/2014 02:34 AM, Mark Washenberger wrote:

I think the tasks stuff is something different, though. A task is a
(potentially) long-running operation. So it would be possible for an
action to result in the creation of a task. As the proposal stands
today, the actions we've been looking at are an alternative to the
document-oriented PATCH HTTP verb. There was nearly unanimous consensus
that we found POST /resources/actions/verb {inputs to verb} to be a
more expressive and intuitive way of accomplishing some workflows than
trying to use JSON-PATCH documents.


Why do tasks necessarily mean the operation is long-running? As 
mentioned before to Brian R, just have the deactivation action be a 
task. There's no need to create a new faux-resource called 'action', IMO...


Best,
-jay
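
For readers skimming the thread, the difference between the two conventions
is only where the verb lives. A hedged illustration using the requests
library, with example URLs and IDs rather than a settled Glance API:

import requests

GLANCE = 'http://glance.example.com:9292/v2'      # example endpoint
IMAGE = '4be0b5e9-4c11-4e1c-8bbd-1e6a0d3e5b1a'    # example image id
HEADERS = {'X-Auth-Token': 'TOKEN'}

# Glance proposal: the verb is in the URL, the body holds only its inputs.
requests.post('%s/images/%s/actions/deactivate' % (GLANCE, IMAGE),
              headers=HEADERS, json={})

# Nova-style convention: one generic action URL, verb embedded in the body,
# which is harder to tell apart when reading request logs.
requests.post('%s/images/%s/action' % (GLANCE, IMAGE),
              headers=HEADERS, json={'type': 'deactivate'})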


On Tue, Jun 10, 2014 at 4:15 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On Wed, Jun 4, 2014 at 11:54 AM, Sean Dague s...@dague.net
mailto:s...@dague.net wrote:

On 05/30/2014 02:22 PM, Hemanth Makkapati wrote:
  Hello All,
  I'm writing to notify you of the approach the Glance
community has
  decided to take for doing functional API.  Also, I'm writing
to solicit
  your feedback on this approach in the light of cross-project API
  consistency.
 
  At the Atlanta Summit, the Glance team has discussed introducing
  functional API in Glance so as to be able to expose
operations/actions
  that do not naturally fit into the CRUD-style. A few
approaches are
  proposed and discussed here
 

https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api.
  We have all converged on the approach to include 'action' and
action
  type in the URL. For instance, 'POST
  /images/{image_id}/actions/{action_type}'.
 
  However, this is different from the way Nova does actions.
Nova includes
  action type in the payload. For instance, 'POST
  /servers/{server_id}/action {type: action_type, ...}'.
At this
  point, we hit a cross-project API consistency issue mentioned
here
 

https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis
  (under the heading 'How to act on resource - cloud perform on
  resources'). Though we are differing from the way Nova does
actions and
  hence another source of cross-project API inconsistency , we
have a few
  reasons to believe that Glance's way is helpful in certain ways.
 
  The reasons are as following:
  1. Discoverability of operations.  It'll be easier to expose
permitted
  actions through schemas a json home document living at
  /images/{image_id}/actions/.
  2. More conducive for rate-limiting. It'll be easier to
rate-limit
  actions in different ways if the action type is available in
the URL.
  3. Makes more sense for functional actions that don't require
a request
  body (e.g., image deactivation).
 
  At this point we are curious to see if the API conventions group
  believes this is a valid and reasonable approach.
  Any feedback is much appreciated. Thank you!

Honestly, I like POST /images/{image_id}/actions/{action_type} much
better than ACTION being embedded in the body (the way nova
currently
does it), for the simple reason of reading request logs:


I agree that not including the action type in the POST body is much
nicer and easier to read in logs, etc.

That said, I prefer to have resources actually be things that the
software creates. An action isn't created. It is performed.

I would prefer to replace the term action(s) with the term
task(s), as is proposed for Nova [1].

Then, I'd be happy as a pig in, well, you know.

Best,
-jay

[1] https://review.openstack.org/#/c/86938/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Quick Survey: Horizon Mid-Cycle Meetup

2014-06-24 Thread Tzu-Mainn Chen
 On 6/20/14, 6:24 AM, Radomir Dopieralski openst...@sheep.art.pl wrote:
 
 On 20/06/14 13:56, Jaromir Coufal wrote:
  On 2014/19/06 09:58, Matthias Runge wrote:
  On Wed, Jun 18, 2014 at 10:55:59AM +0200, Jaromir Coufal wrote:
  My quick questions are:
  * Who would be interested (and able) to get to the meeting?
  * What topics do we want to discuss?
 
  https://etherpad.openstack.org/p/horizon-juno-meetup
 
  Thanks for bringing this up!
 
  Do we really have items to discuss, where it needs a meeting in person?
 
  Matthias
  
  I am not sure TBH, that's why I added also the Topic section to figure
  out if there is something what needs to be discussed. Though I don't see
  much interest yet.
 
 Apart from the split, I also work on configuration files rework, which
 could benefit from discussion, but i think it's better done here or on
 the wiki/etherpad, as that leaves tangible traces. I will post a
 detailed e-mail in a few days. Other than that, I don't see a compelling
 reason to organize it.
 
 --
 Radomir Dopieralski
 
 
 I don't think the split warrants a mid-cycle meetup. A topic that would
 benefit from several people being in the room is client side architecture,
 but I'm not entirely sure we're ready to work through that yet, and the
 dates are a little aggressive.  If we have interest in that, we could look
 to a slightly later date.
 
 David

This was talked about a bit in today's Horizon weekly IRC meeting, and the
outcome was that it might make sense to see if people have the interest or
the time to attend such a meetup.  In order to gauge interest, here's an
etherpad where interested parties can put down their names next to dates
when they'd be available to attend.

https://etherpad.openstack.org/p/juno-horizon-meetup

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
I'm running Ubuntu 14.04, and rather suddenly I'm unable to run ./stack.sh 
successfully.  Brandon, who is also running Ubuntu 14.04, is seeing no issues 
here.  However, all the same, I'm at a loss as to understand what the problem 
is.  At the bottom of my text is the terminal output from running ./stack.sh

It should be noted, I don't use a python virtual environment.  My reasoning is 
simple: I have a specific partition set up to use devstack, and only devstack.  
I don't think its necessary to use a VE mostly because I would find it weird to 
handle dependencies in an isolated environment rather than the host environment 
I've already dedicated to the project in the first place.  Not sure any of you 
will agree with me, and I'd only really entertain the idea of said VE if its 
the only solution to my problem.  I've installed python-pip as the latest 
version, 1.5.6.  When running ./stack.sh it will uninstall the latest version 
and try using pip 1.4.1, to no avail, and where it would try to install 1.4.1 
escapes me, according to the following output.  If I manually install 1.4.1 and 
add files to the appropriate location for its use according to ./stack.sh it 
still uninstalls the installed packages, and then fails, under what appeared to 
me to be the same output and failure as the following.  If anyone can help me 
sort this out, I'd be very appreciative.  Please feel free to message me on IRC 
(handle TrevorV) if you have a suggestion or are confused about anything I've 
done/tried.

Terminal
Using mysql database backend
2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
2014-06-24 17:16:32.095 | + [[ -t 3 ]]
2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
2014-06-24 17:16:32.095 | + source 
/home/stack/workspace/devstack/tools/install_prereqs.sh
2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
2014-06-24 17:16:32.095 | ++ 
PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
2014-06-24 17:16:32.096 | +++ date +%s
2014-06-24 17:16:32.096 | ++ NOW=1403630192
2014-06-24 17:16:32.096 | +++ head -1 /home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
2014-06-24 17:16:32.096 | ++ DELTA=1285
2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915 seconds 
remaining) '
2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds remaining)
2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
2014-06-24 17:16:32.096 | ++ return 0
2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
2014-06-24 17:16:32.096 | + /home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | +++ dirname 
/home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 |  dirname /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | +++ pwd
2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | ++ source 
/home/stack/workspace/devstack/functions-common
2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.106 | ++ basename https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.107 | + 
LOCAL_PIP=/home/stack/workspace/devstack/files/get-pip.py
2014-06-24 17:16:32.107 | + GetDistro
2014-06-24 17:16:32.107 | + GetOSVersion
2014-06-24 17:16:32.108 | ++ which sw_vers
2014-06-24 17:16:32.111 | + [[ -x '' ]]
2014-06-24 17:16:32.111 | ++ which lsb_release
2014-06-24 17:16:32.114 | + [[ -x /usr/bin/lsb_release ]]
2014-06-24 17:16:32.115 | ++ lsb_release -i -s
2014-06-24 17:16:32.160 | + os_VENDOR=Ubuntu
2014-06-24 17:16:32.161 | ++ lsb_release -r -s
2014-06-24 17:16:32.209 | + os_RELEASE=14.04
2014-06-24 17:16:32.209 | + os_UPDATE=
2014-06-24 17:16:32.209 | + os_PACKAGE=rpm
2014-06-24 17:16:32.209 | + [[ Debian,Ubuntu,LinuxMint =~ Ubuntu ]]
2014-06-24 17:16:32.209 | + os_PACKAGE=deb
2014-06-24 17:16:32.210 | ++ lsb_release -c -s
2014-06-24 17:16:32.262 | + os_CODENAME=trusty
2014-06-24 17:16:32.262 | + 

Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Fawad Khaliq
Hi Trevor,

I ran into the same issue. I worked around quickly by doing the following:

   - After stack.sh uninstalls pip and fails with the
     pkg_resources.DistributionNotFound: pip==1.4.1 error, install pip
     via easy_install:
       # easy_install pip
   - Then re-run stack.sh

Haven't done the investigation yet but this may help you move past this
issue for now.

Thanks,
Fawad Khaliq



On Tue, Jun 24, 2014 at 10:32 AM, Trevor Vardeman 
trevor.varde...@rackspace.com wrote:

  I'm running Ubuntu 14.04, and rather suddenly I'm unable to run
 ./stack.sh successfully.  Brandon, who is also running Ubuntu 14.04, is
 seeing no issues here.  However, all the same, I'm at a loss as to
 understand what the problem is.  At the bottom of my text is the terminal
 output from running ./stack.sh

  It should be noted, I don't use a python virtual environment.  My
 reasoning is simple: I have a specific partition set up to use devstack,
 and only devstack.  I don't think its necessary to use a VE mostly because
 I would find it weird to handle dependencies in an isolated environment
 rather than the host environment I've already dedicated to the project in
 the first place.  Not sure any of you will agree with me, and I'd only
 really entertain the idea of said VE if its the only solution to my
 problem.  I've installed python-pip as the latest version, 1.5.6.  When
 running ./stack.sh it will uninstall the latest version and try using pip
 1.4.1, to no avail, and where it would try to install 1.4.1 escapes me,
 according to the following output.  If I manually install 1.4.1 and add
 files to the appropriate location for its use according to ./stack.sh it
 still uninstalls the installed packages, and then fails, under what
 appeared to me to be the same output and failure as the following.  If
 anyone can help me sort this out, I'd be very appreciative.  Please feel
 free to message me on IRC (handle TrevorV) if you have a suggestion or are
 confused about anything I've done/tried.

  Terminal
  Using mysql database backend
 2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
 2014-06-24 17:16:32.095 | + [[ -t 3 ]]
 2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
 2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
 2014-06-24 17:16:32.095 | + source
 /home/stack/workspace/devstack/tools/install_prereqs.sh
 2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
 2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
 2014-06-24 17:16:32.095 | ++
 PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
 2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
 2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
 2014-06-24 17:16:32.096 | +++ date +%s
 2014-06-24 17:16:32.096 | ++ NOW=1403630192
 2014-06-24 17:16:32.096 | +++ head -1
 /home/stack/workspace/devstack/.prereqs
 2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
  2014-06-24 17:16:32.096 | ++ DELTA=1285
 2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
 2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
 2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915
 seconds remaining) '
 2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds
 remaining)
 2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
 2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
 2014-06-24 17:16:32.096 | ++ return 0
 2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
 2014-06-24 17:16:32.096 | +
 /home/stack/workspace/devstack/tools/install_pip.sh
 2014-06-24 17:16:32.096 | +++ dirname
 /home/stack/workspace/devstack/tools/install_pip.sh
 2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
 2014-06-24 17:16:32.096 | ++ pwd
 2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
 2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
 2014-06-24 17:16:32.096 | ++ pwd
 2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
 2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
 2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
 2014-06-24 17:16:32.096 |  dirname
 /home/stack/workspace/devstack/functions
 2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
 2014-06-24 17:16:32.096 | +++ pwd
 2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
 2014-06-24 17:16:32.096 | ++ source
 /home/stack/workspace/devstack/functions-common
 2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
 2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=
 https://bootstrap.pypa.io/get-pip.py
 2014-06-24 17:16:32.106 | ++ basename https://bootstrap.pypa.io/get-pip.py
 2014-06-24 17:16:32.107 | +
 LOCAL_PIP=/home/stack/workspace/devstack/files/get-pip.py
 2014-06-24 17:16:32.107 | + GetDistro
 2014-06-24 17:16:32.107 | + GetOSVersion
 2014-06-24 17:16:32.108 | ++ which sw_vers
 2014-06-24 17:16:32.111 | + [[ -x '' ]]
 2014-06-24 17:16:32.111 | ++ which lsb_release
 2014-06-24 

Re: [openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread John Wood
Hello Robert,

I would actually hope we have a self-contained certificate plugin 
implementation that runs 'out of the box' to enable certificate generation 
orders to be evaluated and demo-ed on local boxes. 

Is this what you were thinking though?

Thanks,
John




From: Clark, Robert Graham [robert.cl...@hp.com]
Sent: Tuesday, June 24, 2014 10:36 AM
To: OpenStack List
Subject: [openstack-dev] [Barbican] Barebones CA

Hi all,

I’m sure this has been discussed somewhere and I’ve just missed it.

Is there any value in creating a basic ‘CA’ and plugin to satisfy 
tests/integration in Barbican? I’m thinking something that probably performs 
OpenSSL certificate operations itself, ugly but perhaps useful for some things?

-Rob
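
For illustration, a throwaway self-signed CA of the sort such a dev/test
plugin could wrap is only a few lines with pyOpenSSL. This is a sketch under
the assumption that plain pyOpenSSL is acceptable for a development-only
plugin; it is not hardened, and has no revocation or serial management:

from OpenSSL import crypto


def make_test_ca(cn='Barbican Test CA'):
    # Generate a throwaway CA key and self-signed CA certificate.
    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)
    ca = crypto.X509()
    ca.get_subject().CN = cn
    ca.set_serial_number(1)
    ca.gmtime_adj_notBefore(0)
    ca.gmtime_adj_notAfter(365 * 24 * 60 * 60)   # one year
    ca.set_issuer(ca.get_subject())
    ca.set_pubkey(key)
    ca.sign(key, 'sha256')
    return ca, key


def sign_csr(ca_cert, ca_key, csr_pem, serial):
    # Sign a PEM-encoded CSR with the test CA and return the PEM certificate.
    csr = crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_pem)
    cert = crypto.X509()
    cert.set_serial_number(serial)
    cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(90 * 24 * 60 * 60)  # 90 days
    cert.set_issuer(ca_cert.get_subject())
    cert.set_subject(csr.get_subject())
    cert.set_pubkey(csr.get_pubkey())
    cert.sign(ca_key, 'sha256')
    return crypto.dump_certificate(crypto.FILETYPE_PEM, cert)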

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
Fawad,

Thanks Fawad, that seems to have fixed my issue at this point.  Amused me, 
since pip is supposed to replace easy_install, but I won't nitpick if it fixes 
it ha ha.

-Trevor

From: Fawad Khaliq [fa...@plumgrid.com]
Sent: Tuesday, June 24, 2014 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

Hi Trevor,

I ran into the same issue. I worked around quickly by doing the following:

  *   After stack.sh uninstalls pip, and fails with the 
pkg_resources.DistributionNotFound: pip==1.4.1 error, install pip from 
easy_install
 *   # easy_install pip
  *   And re - stack.sh

Haven't done the investigation yet but this may help you move past this issue 
for now.

Thanks,
Fawad Khaliq



On Tue, Jun 24, 2014 at 10:32 AM, Trevor Vardeman 
trevor.varde...@rackspace.commailto:trevor.varde...@rackspace.com wrote:
I'm running Ubuntu 14.04, and rather suddenly I'm unable to run ./stack.sh 
successfully.  Brandon, who is also running Ubuntu 14.04, is seeing no issues 
here.  However, all the same, I'm at a loss as to understand what the problem 
is.  At the bottom of my text is the terminal output from running ./stack.sh

It should be noted, I don't use a python virtual environment.  My reasoning is 
simple: I have a specific partition set up to use devstack, and only devstack.  
I don't think its necessary to use a VE mostly because I would find it weird to 
handle dependencies in an isolated environment rather than the host environment 
I've already dedicated to the project in the first place.  Not sure any of you 
will agree with me, and I'd only really entertain the idea of said VE if its 
the only solution to my problem.  I've installed python-pip as the latest 
version, 1.5.6.  When running ./stack.sh it will uninstall the latest version 
and try using pip 1.4.1, to no avail, and where it would try to install 1.4.1 
escapes me, according to the following output.  If I manually install 1.4.1 and 
add files to the appropriate location for its use according to ./stack.sh it 
still uninstalls the installed packages, and then fails, under what appeared to 
me to be the same output and failure as the following.  If anyone can help me 
sort this out, I'd be very appreciative.  Please feel free to message me on IRC 
(handle TrevorV) if you have a suggestion or are confused about anything I've 
done/tried.

Terminal
Using mysql database backend
2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
2014-06-24 17:16:32.095 | + [[ -t 3 ]]
2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
2014-06-24 17:16:32.095 | + source 
/home/stack/workspace/devstack/tools/install_prereqs.sh
2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
2014-06-24 17:16:32.095 | ++ 
PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
2014-06-24 17:16:32.096 | +++ date +%s
2014-06-24 17:16:32.096 | ++ NOW=1403630192
2014-06-24 17:16:32.096 | +++ head -1 /home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
2014-06-24 17:16:32.096 | ++ DELTA=1285
2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915 seconds 
remaining) '
2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds remaining)
2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
2014-06-24 17:16:32.096 | ++ return 0
2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
2014-06-24 17:16:32.096 | + /home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | +++ dirname 
/home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 |  dirname /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | +++ pwd
2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | ++ source 
/home/stack/workspace/devstack/functions-common
2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
2014-06-24 17:16:32.105 | + 

Re: [openstack-dev] [barbican] Juno Mid-cycle Meetup

2014-06-24 Thread Douglas Mendizabal
Hi Everyone,

Just a reminder that the Barbican mid-cycle meetup is just under two weeks
away.   I just wanted to send out a link to the etherpad we’re using to do
some pre-planning of things that need to be covered during the meetup

https://etherpad.openstack.org/p/barbican-juno-meetup

Also, please be sure to RSVP if you’re planning on coming, so that we can
plan accordingly.

RSVP [ 
https://docs.google.com/forms/d/1iao7mEN6HV3CRCRuCPhxOaF4_tJ-Kqq4_Lli1quft58
/viewform?usp=send_form ]

Thanks,
Doug Mendizábal
IRC: redrobot

From:  Douglas Mendizabal douglas.mendiza...@rackspace.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Monday, June 16, 2014 at 9:29 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [barbican] Juno Mid-cycle Meetup

Hi Everyone,

Just wanted to send a reminder that the Barbican Juno meetup is coming up in
a few weeks.  We’ll be meeting at the new Geekdom location in San Antonio,
TX on  July 7-9 (Monday-Wednesday).  This meetup will overlap with the
Keystone Juno Hackathon being held July 9-11 at the same location.

RSVP [ 
https://docs.google.com/forms/d/1iao7mEN6HV3CRCRuCPhxOaF4_tJ-Kqq4_Lli1quft58
/viewform?usp=send_form ]

LOCATION

Geekdom
110 E Houston St, 7th Floor
San Antonio TX, 78205
( https://goo.gl/maps/skMaI )

DATES

Mon, July 7 – Barbican
Tue, July 8 – Barbican
Wed, July 9 – Barbican/Keystone
Thu, July 10 – Keystone
Fri, July 11 – Keystone

For more information check out the wiki page. [
https://wiki.openstack.org/wiki/Barbican/JunoMeetup ]

Thanks,

Douglas Mendizábal
IRC: redrobot




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Providing a potentially more open interface to statsd statistics

2014-06-24 Thread Seger, Mark (Cloud Services)
I've lamented for a while that while swift/statsd provide a wealth of
information, it's in a somewhat difficult-to-use format.  Specifically, you have
to connect to a socket and listen for messages.  Furthermore, if you're
listening, nobody else can.  I do realize there is a mechanism to send the data
to graphite, but what if I'm not a graphite user OR want to look at the data at
a finer granularity than is being sent to graphite?

What I've put together, and would love to get some feedback on, is a tool I'm
calling 'statsdtee', so named because you can configure statsd to send to
the port it wants to listen on (configurable of course) and statsdtee will then
process the data locally AND tee it out on another socket, making it possible to
forward the data on to graphite and still allow local processing.
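
A minimal sketch of that tee idea (assuming the metrics arrive as UDP
datagrams; the host names, ports and metric format below are made up, not
statsdtee's actual code):

import socket

LISTEN = ('0.0.0.0', 8125)                  # where statsd is told to send
FORWARD = ('graphite.example.com', 8125)    # where the data is teed to

counters = {}


def count(metric_line):
    # e.g. 'object-server.GET.timing:12|ms' -> bump a rolling counter
    name = metric_line.split(':', 1)[0]
    counters[name] = counters.get(name, 0) + 1


def main():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(LISTEN)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _addr = rx.recvfrom(65535)
        tx.sendto(data, FORWARD)              # tee out the other socket
        for line in data.decode('utf-8', 'replace').splitlines():
            count(line)                       # and process locally


if __name__ == '__main__':
    main()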

Local processing consists of calculating rolling counters and writing them to a 
file that looks much like most /proc entries, such as this:

$cat /tmp/statsdtee
V1.0 1403633349.159516
accaudt 0 0 0
accreap 0 0 0 0 0 0 0 0 0
accrepl 0 0 2100 0 0 0 1391 682 0 2100
accsrvr 1 0 0 0 0 2072 0
conaudt 0 0 0
conrepl 0 0 2892 0 0 0 1997 1107 0 2892
consrvr 2700 0 0 1 1 992 0
consync 541036 0 11 0 0
conupdt 0 17 17889
objaudt 0 0
objexpr 0 0
objrepl 0 0 0 0
objsrvr 117190 16325 0 43068 9 996 5 0 6904
objupdt 0 0 0 1704 0

In this format we're looking at data for account, container and object 
services.  There is a similar one for proxy.  The reason for the names on each 
line is what to report on is configurable in a conf file down to the 
granularity of a single line, thereby making it possible to report less 
information, though I'm not sure if one would really do that or not.

To make this mechanism really simple and avoid using internal timers, I'm
simply looking at the time of each record, and every time the value of the
second changes, I write out the current counters.  I could change it to every
10th of a second but am thinking that really isn't necessary.  I could also
drive it off a timer interrupt, but again I'm not sure that would really buy
you anything.

My peeve with /proc is you never know what  each field means and so there is a 
second format in which headers are included and they look like this:

$ cat /tmp/statsdtee
V1.0 140369.410722
#   errs pass fail
accaudt 0 0 0
#   errs cfail cdel cremain cposs_remain ofail odel oremain oposs_remain
accreap 0 0 0 0 0 0 0 0 0
#   diff diff_cap nochg hasmat rsync rem_merge attmpt fail remov succ
accrepl 0 0 2100 0 0 0 1391 682 0 2100
#   put get post del head repl errs
accsrvr 1 0 0 0 0 2069 0
#   errs pass fail
conaudt 0 0 0
#   diff diff_cap nochg hasmat rsync rem_merge attmpt fail remov succ
conrepl 0 0 2793 0 0 0 1934 1083 0 2793
#   put get post del head repl errs
consrvr 2700 0 0 1 1 976 0
#   skip fail sync del put
consync 536193 0 11 0 0
#   succ fail no_chg
conupdt 0 17 17889
#   quar errs
objaudt 0 0
#   obj errs
objexpr 0 0
#   part_del part_upd suff_hashes suff_sync
objrepl 0 0 0 0
#   put get post del head repl errs quar async_pend
objsrvr 117190 16325 0 43068 9 996 5 0 6904
#   errs quar succ fail unlk
objupdt 0 0 0 1704 0

The important thing to remember about rolling counters is that as many people
as wish can read them simultaneously and be assured nobody is stepping on each
other, since they never get zeroed!  You simply read a sample, wait a while and
read another.  The result is the change in the counters over that interval, and
anyone can use any interval they choose.
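
In other words, a consumer just reads the file twice and subtracts. A small
sketch against the file format shown above (assuming the first line is
'V1.0 <timestamp>' and each non-comment line is a name followed by counters):

import time


def read_counters(path='/tmp/statsdtee'):
    stats = {}
    with open(path) as f:
        version, timestamp = f.readline().split()   # 'V1.0 1403633349.159516'
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue                            # skip comment/blank lines
            fields = line.split()
            stats[fields[0]] = [int(v) for v in fields[1:]]
    return float(timestamp), stats


t1, before = read_counters()
time.sleep(10)
t2, after = read_counters()
# Per-interval deltas; any number of readers can do this concurrently
# because the counters are never reset.
for name, new in after.items():
    old = before.get(name, [0] * len(new))
    print(name, [n - o for n, o in zip(new, old)])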

So, how useful do people think this is?  Personally I think it's very useful...

The next step is how to calculate the numbers I'm reporting.  While statsd
reports a lot of timing information, none of that really fits this model, as all
I want are counts.  So when I see a GET timing record, I count it as 1 GET.
Seems to work so far. Is this a legitimate thing to be doing?  Feels right, and
from the preliminary testing I've been doing it seems pretty accurate.

One thing I've found missing is more detailed error information.  For example I 
can tell how many errors there were but I can't tell how many of each type 
there were.  Is this something that can easily be added?  I've found in our 
environment it can be useful when there's an increase in the number of errors 
on a particular server, knowing the type can be quite useful.

While I'm not currently counting everything, such as device specific data which 
would significantly increase the volume of output, I think I have covered quite 
a lot in my model.

Comments?

-mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-24 Thread Maldonado, Facundo N
Hi folks,

I started working on this blueprint [1], but the work to be done
is not limited to the cinder python client.
Volume-image-metadata is immutable in Cinder, and Glance has
RBAC-protected image properties but doesn't provide any way to find out in
advance which properties are protected [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Doug Wiegley
Hi Brandon,

I think just one status is overloading too much onto the LB object (which
is perhaps something that a UI should do for a user, but not something an
API should be doing.)

 1) If an entity exists without a link to a load balancer it is purely
 just a database entry, so it would always be ACTIVE, but not really
 active in a technical sense.

Depends on the driver.  I don't think this is a decision for lbaas proper.


 2) If some of these entities become shareable then how does the status
 reflect that the entity failed to create on one load balancer but was
 successfully created on another.  That logic could get overly complex.

That's a status on the join link, not the object, and I could argue
multiple ways in which that should be one way or another based on the
backend, which to me, again implies a driver question (backend could queue
for later, or error immediately, or let things run degraded, or...)

Thanks,
Doug




On 6/24/14, 11:23 AM, Brandon Logan brandon.lo...@rackspace.com wrote:

I think we missed this discussion at the meet-up but I'd like to bring
it up here.  To me having a status on all entities doesn't make much
sense, and justing having a status on a load balancer (which would be a
provisioning status) and a status on a member (which would be an
operational status) are what makes sense because:

1) If an entity exists without a link to a load balancer it is purely
just a database entry, so it would always be ACTIVE, but not really
active in a technical sense.

2) If some of these entities become shareable then how does the status
reflect that the entity failed to create on one load balancer but was
successfully created on another.  That logic could get overly complex.

I think the best thing to do is to have the load balancer status reflect
the provisioning status of all of its children.  So if a health monitor
is updated then the load balancer that health monitor is linked to would
have its status changed to PENDING_UPDATE.  Conversely, if a load
balancer or any entities linked to it are changed and the load
balancer's status is in a non-ACTIVE state then that update should not
be allowed.

Thoughts?

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Eugene Nikanorov
Hi lbaas folks,

IMO a status is really an important part of the API.
In some old email threads Sam has proposed the solution for lbaas objects:
we need to have several attributes that independently represent different
types of statuses:
- admin_state_up
- operational status
- provisioning state

Not every status needs to be on every object.
Pure-DB objects (like pool) should not have a provisioning state and
operational status; instead, an association object should have them. I
think that resolves both questions (1) and (2).
If some object is shareable, then we'll have an association object anyway, and
that's where provisioning status and operational status can reside. For sure
it's not very simple, but this is the right way to do it.
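
A rough sketch of that layout (names invented for illustration, not the
actual Neutron LBaaS schema): shareable, pure-DB objects carry no status
themselves, while the association row that binds them to a load balancer
carries both statuses:

ACTIVE, PENDING_UPDATE, ERROR = 'ACTIVE', 'PENDING_UPDATE', 'ERROR'
ONLINE, OFFLINE, DEGRADED = 'ONLINE', 'OFFLINE', 'DEGRADED'


class Pool(object):
    """Pure-DB, shareable object: configuration only, no status fields."""
    def __init__(self, pool_id, lb_algorithm):
        self.id = pool_id
        self.lb_algorithm = lb_algorithm
        self.admin_state_up = True


class PoolLoadBalancerBinding(object):
    """Association object: both statuses live here, per load balancer."""
    def __init__(self, pool_id, loadbalancer_id):
        self.pool_id = pool_id
        self.loadbalancer_id = loadbalancer_id
        self.provisioning_status = PENDING_UPDATE   # set by the driver
        self.operating_status = OFFLINE             # set by health monitoring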

Also I'd like to emphasize that statuses are really an API thing, not a
driver thing, so they must be used similarly across all drivers.

Thanks,
Eugene.


On Tue, Jun 24, 2014 at 10:53 PM, Doug Wiegley do...@a10networks.com
wrote:

 Hi Brandon,

 I think just one status is overloading too much onto the LB object (which
 is perhaps something that a UI should do for a user, but not something an
 API should be doing.)

  1) If an entity exists without a link to a load balancer it is purely
  just a database entry, so it would always be ACTIVE, but not really
  active in a technical sense.

 Depends on the driver.  I don't think this is a decision for lbaas proper.


  2) If some of these entities become shareable then how does the status
  reflect that the entity failed to create on one load balancer but was
  successfully created on another.  That logic could get overly complex.

 That's a status on the join link, not the object, and I could argue
 multiple ways in which that should be one way or another based on the
 backend, which to me, again implies driver question (backend could queue
 for later, or error immediately, or let things run degraded, or...)

 Thanks,
 Doug




 On 6/24/14, 11:23 AM, Brandon Logan brandon.lo...@rackspace.com wrote:

 I think we missed this discussion at the meet-up but I'd like to bring
 it up here.  To me having a status on all entities doesn't make much
 sense, and justing having a status on a load balancer (which would be a
 provisioning status) and a status on a member (which would be an
 operational status) are what makes sense because:
 
 1) If an entity exists without a link to a load balancer it is purely
 just a database entry, so it would always be ACTIVE, but not really
 active in a technical sense.
 
 2) If some of these entities become shareable then how does the status
 reflect that the entity failed to create on one load balancer but was
 successfully created on another.  That logic could get overly complex.
 
 I think the best thing to do is to have the load balancer status reflect
 the provisioning status of all of its children.  So if a health monitor
 is updated then the load balancer that health monitor is linked to would
 have its status changed to PENDING_UPDATE.  Conversely, if a load
 balancer or any entities linked to it are changed and the load
 balancer's status is in a non-ACTIVE state then that update should not
 be allowed.
 
 Thoughts?
 
 Thanks,
 Brandon
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Eichberger, German
Hi Doug  Brandon,

1) +1 Doug -- I like the status Building but that's a personal preference. 
It's entirely up to the driver (but it should be reasonable) and we should pick 
the states up front (as we already do with constants)

2) We actually touched upon that with the distinction between status and 
operational status -- that should take care of that.

German

-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: Tuesday, June 24, 2014 11:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

Hi Brandon,

I think just one status is overloading too much onto the LB object (which is 
perhaps something that a UI should do for a user, but not something an API 
should be doing.)

 1) If an entity exists without a link to a load balancer it is purely 
 just a database entry, so it would always be ACTIVE, but not really 
 active in a technical sense.

Depends on the driver.  I don't think this is a decision for lbaas proper.


 2) If some of these entities become shareable then how does the status 
 reflect that the entity failed to create on one load balancer but was 
 successfully created on another.  That logic could get overly complex.

That's a status on the join link, not the object, and I could argue multiple
ways in which that should be one way or another based on the backend, which to
me, again implies driver question (backend could queue for later, or error
immediately, or let things run degraded, or...)

Thanks,
Doug




On 6/24/14, 11:23 AM, Brandon Logan brandon.lo...@rackspace.com wrote:

I think we missed this discussion at the meet-up but I'd like to bring 
it up here.  To me having a status on all entities doesn't make much 
sense, and justing having a status on a load balancer (which would be a 
provisioning status) and a status on a member (which would be an 
operational status) are what makes sense because:

1) If an entity exists without a link to a load balancer it is purely 
just a database entry, so it would always be ACTIVE, but not really 
active in a technical sense.

2) If some of these entities become shareable then how does the status 
reflect that the entity failed to create on one load balancer but was 
successfully created on another.  That logic could get overly complex.

I think the best thing to do is to have the load balancer status 
reflect the provisioning status of all of its children.  So if a health 
monitor is updated then the load balancer that health monitor is linked 
to would have its status changed to PENDING_UPDATE.  Conversely, if a 
load balancer or any entities linked to it are changed and the load 
balancer's status is in a non-ACTIVE state then that update should not 
be allowed.

Thoughts?

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Brandon Logan
Eugene,
Thanks for the feedback.  I have a feeling that's where we will end up
going anyway, so perhaps status on all entities for now is the proper way
to build into that.  I just want my objections to be heard.

Thanks,
Brandon 

On Tue, 2014-06-24 at 23:10 +0400, Eugene Nikanorov wrote:
 Hi lbaas folks,
 
 
 IMO a status is really an important part of the API.
 In some old email threads Sam has proposed the solution for lbaas
 objects: we need to have several attributes that independently
 represent different types of statuses:
 - admin_state_up
 - operational status
 - provisioning state
 
 
 Not every status need to be on every object. 
 Pure-DB objects (like pool) should not have a provisioning state and
 operational status; instead, an association object should have them. I
 think that resolves both questions (1) and (2).
 If some object is shareable, then we'll have an association object
 anyway, and that's where the provisioning status and operational status can
 reside. For sure it's not very simple, but this is the right way to do
 it.
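
As an illustration of the association-object idea (table and column names are
hypothetical, not the actual LBaaS schema), a shareable pool would carry no
status itself while each pool/load-balancer association row would:

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Pool(Base):
        # Pure-DB, shareable object: no provisioning/operational status here.
        __tablename__ = 'pools'
        id = Column(String(36), primary_key=True)
        name = Column(String(255))


    class PoolLoadBalancerAssociation(Base):
        # The association carries the statuses, one row per load balancer
        # the pool is actually deployed on.
        __tablename__ = 'pool_loadbalancer_associations'
        pool_id = Column(String(36), ForeignKey('pools.id'), primary_key=True)
        loadbalancer_id = Column(String(36), primary_key=True)
        provisioning_status = Column(String(16), default='PENDING_CREATE')
        operational_status = Column(String(16), default='OFFLINE')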
 
 
 Also I'd like to emphasize that statuses are really an API thing, not
 a driver thing, so they must be used similarly across all drivers.
 
 
 Thanks,
 Eugene.
 
 
 On Tue, Jun 24, 2014 at 10:53 PM, Doug Wiegley do...@a10networks.com
 wrote:
 Hi Brandon,
 
 I think just one status is overloading too much onto the LB
 object (which
 is perhaps something that a UI should do for a user, but not
 something an
 API should be doing.)
 
  1) If an entity exists without a link to a load balancer it
 is purely
  just a database entry, so it would always be ACTIVE, but not
 really
  active in a technical sense.
 
 
 Depends on the driver.  I don't think this is a decision for
 lbaas proper.
 
 
  2) If some of these entities become shareable then how does
 the status
  reflect that the entity failed to create on one load
 balancer but was
  successfully created on another.  That logic could get
 overly complex.
 
 
 That's a status on the join link, not the object, and I could
 argue
 multiple ways in which that should be one way or another based
 on the
 backend, which to me, again implies driver question (backend
 could queue
 for later, or error immediately, or let things run degraded,
 or…)
 
 Thanks,
 Doug
 
 
 
 
 On 6/24/14, 11:23 AM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 
 I think we missed this discussion at the meet-up but I'd like
 to bring
 it up here.  To me having a status on all entities doesn't
 make much
 sense, and just having a status on a load balancer (which
 would be a
 provisioning status) and a status on a member (which would be
 an
 operational status) are what makes sense because:
 
 1) If an entity exists without a link to a load balancer it
 is purely
 just a database entry, so it would always be ACTIVE, but not
 really
 active in a technical sense.
 
 2) If some of these entities become shareable then how does
 the status
 reflect that the entity failed to create on one load balancer
 but was
 successfully created on another.  That logic could get overly
 complex.
 
 I think the best thing to do is to have the load balancer
 status reflect
 the provisioning status of all of its children.  So if a
 health monitor
 is updated then the load balancer that health monitor is
 linked to would
 have its status changed to PENDING_UPDATE.  Conversely, if a
 load
 balancer or any entities linked to it are changed and the
 load
 balancer's status is in a non-ACTIVE state then that update
 should not
 be allowed.
 
 Thoughts?
 
 Thanks,
 Brandon
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list

[openstack-dev] Cinder pools implementation

2014-06-24 Thread Singh, Navneet
Hi,
  As per our discussions in the last meeting I have made an etherpad which 
details different pool implementations and, at the end, a comparison between the 
approaches. Please go through it and be ready with any questions or opinions 
for tomorrow's meeting. Here is the link for etherpad:

https://etherpad.openstack.org/p/cinder-pool-impl-comparison

Best Regards
Navneet Singh
NetApp

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] nova needs a new release of neutronclient for OverQuotaClient exception

2014-06-24 Thread Kyle Mestery
On Mon, Jun 23, 2014 at 11:08 AM, Kyle Mestery
mest...@noironetworks.com wrote:
 On Mon, Jun 23, 2014 at 8:54 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 There are at least two changes [1][2] proposed to Nova that use the new
 OverQuotaClient exception in python-neutronclient, but the unit test jobs no
 longer test against trunk-level code of the client packages so they fail.
 So I'm here to lobby for a new release of python-neutronclient if possible
 so we can keep these fixes moving.  Are there any issues with that?

 Thanks for bringing this up Matt. I've put this on the agenda for the
 Neutron meeting today, I'll reply on this thread with what comes out
 of that discussion.

 Kyle

As discussed in the meeting, we're going to work on making a new
release of the client, Matt. Ping me in channel later this week; we're
working the details out on that release at the moment.

Thanks,
Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings#Team_Discussion_Topics

 [1] https://review.openstack.org/#/c/62581/
 [2] https://review.openstack.org/#/c/101462/
 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Mark McLoughlin
On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
 Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
  On 06/22/2014 02:49 PM, Duncan Thomas wrote:
   On 22 June 2014 14:41, Amrith Kumar amr...@tesora.com wrote:
   In addition to making changes to the hacking rules, why don't we mandate 
   also
   that perceived problems in the commit message shall not be an acceptable
   reason to -1 a change.
   
   -1.
   
   There are some /really/ bad commit messages out there, and some of us
   try to use the commit messages to usefully sort through the changes
   (i.e. I often -1 in cinder a change only affects one driver and that
   isn't clear from the summary).
   
   If the perceived problem is grammatical, I'm a bit more on board with
   it not a reason to rev a patch, but core reviewers can +2/A over the
   top of a -1 anyway...
  
  100% agree. Spelling and grammar are rude to review on - especially
  since we have (and want) a LOT of non-native English speakers. It's not
  our job to teach people better grammar. Heck - we have people from
  different English backgrounds with differing disagreements on what good
  grammar _IS_
  
 
 We shouldn't quibble over _anything_ grammatical in a commit message. If
 there is a disagreement about it, the comments should be ignored. There
 are definitely a few grammar rules that are loose and those should be
 largely ignored.
 
 However, we should correct grammar when there is a clear solution, as
 those same people who do not speak English as their first language are
 likely to be confused by poor grammar.
 
 We're not doing it to teach grammar. We're doing it to ensure readability.

The importance of clear English varies with context, but commit messages
are a place where we should try hard to just let it go, particularly
with those who do not speak English as their first language.

Commit messages stick around forever and it's important that they are
useful, but they will be read by a small number of people who are going
to be in a position to spend a small amount of time getting over
whatever dissonance is caused by a typo or imperfect grammar.

I think specs are pretty similar and don't warrant much additional
grammar nitpicking. Sure, they're longer pieces of text and slightly
more people will rely on them for information, but they're not intended
to be complete documentation.

Where grammar is so poor that readers would be easily misled in
important ways, then sure that should be fixed. But there comes a point
when we're no longer working to avoid confusion and instead just being
pedants. Taking issue[1] with this:

  whatever scaling mechanism Heat and we end up going with.

because it has a dangling preposition is an example of going way
beyond the point of productive pedantry IMHO :-)

Mark.

[1] - https://review.openstack.org/#/c/97939/5/specs/juno/remove-mergepy.rst


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Specific example NFV use case - ETSI #5, virtual IMS

2014-06-24 Thread Calum Loudon
Hello all

Following on from my contribution last week of a specific NFV use case
(a Session Border Controller) here's another one, this time for an IMS
core (part of ETSI NFV use case #5).

As we touched on at last week's meeting, this is not making claims for
what every example of a virtual IMS core would need, just as last week's
wasn't describing what every SBC would need.  In particular, my IMS core
example is for an application that was designed to be cloud-native from
day one, so the apparent lack of OpenStack gaps is not surprising: other
IMS cores may need more.  However, I think overall these two examples
are reasonably representative of the classes of data plane vs. control
plane apps.

Use case example


Project Clearwater, http://www.projectclearwater.org/.  An open source
implementation of an IMS core designed to run in the cloud and be
massively scalable.  It provides SIP-based call control for voice and 
video as well as SIP-based messaging apps.  As an IMS core it provides
P/I/S-CSCF function together with a BGCF and an HSS cache, and includes
a WebRTC gateway providing interworking between WebRTC and SIP clients.
 

Characteristics relevant to NFV/OpenStack
-

Mainly a compute application: modest demands on storage and networking.

Fully HA, with no SPOFs and service continuity over software and hardware
failures; must be able to offer SLAs.

Elastically scalable by adding/removing instances under the control of the
NFV orchestrator.

Requirements and mapping to blueprints
--

Compute application:
-   OpenStack already provides everything needed; in particular, there are
no requirements for an accelerated data plane, nor for core pinning
nor NUMA

HA:
-   implemented as a series of N+k compute pools; meeting a given SLA
requires being able to limit the impact of a single host failure 
-   we believe there is a scheduler gap here; affinity/anti-affinity
can be expressed pair-wise between VMs, but this needs a concept
equivalent to group anti-affinity i.e. allowing the NFV orchestrator
to assign each VM in a pool to one of X buckets, and requesting
OpenStack to ensure no single host failure can affect more than one
bucket (there are other approaches which achieve the same end e.g.
defining a group where the scheduler ensures every pair of VMs 
within that group are not instantiated on the same host)
-   if anyone is aware of any blueprints that would address this please
insert them here (a sketch of the intended constraint follows below)
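
A rough sketch of the constraint being asked for (purely illustrative, not a
Nova scheduler filter): given an orchestrator-assigned bucket per VM, no single
host may end up carrying VMs from more than one bucket of the same pool.

    from collections import defaultdict


    def violates_bucket_anti_affinity(placements, buckets):
        # placements: vm_id -> host, buckets: vm_id -> bucket within the pool.
        # Returns True if any host carries VMs from more than one bucket,
        # i.e. a single host failure could take out more than one bucket.
        buckets_per_host = defaultdict(set)
        for vm_id, host in placements.items():
            buckets_per_host[host].add(buckets[vm_id])
        return any(len(b) > 1 for b in buckets_per_host.values())


    # Example: two buckets spread over three hosts - no violation.
    assert not violates_bucket_anti_affinity(
        {'vm1': 'host-a', 'vm2': 'host-b', 'vm3': 'host-c'},
        {'vm1': 0, 'vm2': 1, 'vm3': 0})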

Elastic scaling:
-   similarly readily achievable using existing features - no gap.

regards

Calum


Calum Loudon 
Director, Architecture
+44 (0)208 366 1177
 
METASWITCH NETWORKS 
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-24 Thread Andrey Danin
Why not use stevedore?


On Wed, Jun 18, 2014 at 1:42 PM, Igor Kalnitsky ikalnit...@mirantis.com
wrote:

 Hi guys,

 Actually, I'm not a fan of cliff, but I think it's a good solution to use
 it in our fuel client.

 Here are some pros:

 * pluggable design: we can encapsulate an entire command's logic in a separate
 plugin file
 * builtin output formatters: we don't need to implement various formatters to
 represent received data
 * interactive mode: cliff makes it possible to provide a shell mode, just
 like psql does

 Well, I vote to use cliff inside the fuel client. Yeah, I know, we need to
 rewrite a lot of code, but we
 can do it step-by-step.

 - Igor




 On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I am wondering what our story/vision is for plugins in the fuel client [1]?

 We can benefit from using cliff [2] as a framework for the fuel cli; apart from
 common code
 for building cli applications on top of argparse, it provides a nice
 feature that allows us to
 dynamically add actions by means of entry points (stevedore-like).

 So we will be able to add new actions for fuel client simply by
 installing separate packages with correct entry points.
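
As a sketch of that mechanism (package, namespace and command names below are
made up), a new action could live in its own package and be picked up by a
cliff-based fuel client via an entry point:

    # setup.cfg of a hypothetical add-on package
    [entry_points]
    fuelclient =
        node-list-extra = fuel_extra.commands:NodeListExtra

    # fuel_extra/commands.py
    from cliff.command import Command


    class NodeListExtra(Command):
        """Print a placeholder node listing."""

        def get_parser(self, prog_name):
            parser = super(NodeListExtra, self).get_parser(prog_name)
            parser.add_argument('--env', help='environment id to filter on')
            return parser

        def take_action(self, parsed_args):
            self.app.stdout.write('nodes for env %s\n' % parsed_args.env)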

 AFAIK stevedore is not used there, but I think it will be, since it has the same
 author and maintainer.

 Do we need this? Maybe there are other options?

 Thanks

 [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
 [2]  https://github.com/openstack/cliff

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler]

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 9:00 AM, Abbass MAROUNI abbass.maro...@virtualscale.fr
wrote:

 Hi,

 I was wondering if there's a way to set a tag (key/value) of a Virtual
Machine from within a scheduler filter?

The scheduler today is just for placement. And since we are in the process
of trying to split it out, I don't think we want to make the scheduler do
something like this (at least for now).


 I want to be able to tag a machine with a specific key/value after
passing my custom filter

What is your use case? Perhaps we have another way of solving it today.


 Thanks,

 --
 --
 Abbass MAROUNI
 VirtualScale


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] OpenStack patching and FUEL upgrade follow-up meeting minutes

2014-06-24 Thread Andrey Danin
I think Vladimir means that we need to improve our scheduling of the CI
jobs over the available CI resources. As far as I know, we now have dedicated server
groups for separate tests and we cannot use free resources of other server
groups when the load is unbalanced.


On Thu, Jun 5, 2014 at 6:52 PM, Jesse Pretorius jesse.pretor...@gmail.com
wrote:

 On 5 June 2014 16:27, Vladimir Kuklin vkuk...@mirantis.com wrote:

 1. We need strict EOS and EOL rules to decide how many maintenance
 releases there will be for each series or our QA team and infrastructure
 will not ever be available to digest it.


 Agreed. Would it not be prudent to keep with the OpenStack support
 standard - support latest version and the -1 version?


  3. We need to clearly specify the restrictions which patching and
 upgrade process we support:
 a. New environments can only be deployed with the latest version of
 OpenStack and FUEL Library supported
 b. Old environments can only be updated within the only minor release
 (e.g. 5.0.1-5.0.2 is allowed, 5.0.1-5.1 is not)


 Assuming that the major upgrades will be handled in
 https://blueprints.launchpad.net/fuel/+spec/upgrade-major-openstack-environment
 then I agree. If not, then we have a sticking point here. I would agree
 that this is a good start, but in the medium to long term it is important
 to be able to upgrade from perhaps the latest minor version of the platform
 to the next available major version.


  4. We have some devops tasks we need to finish to feel more comfortable
 in the future to make testing of patching much easier:
 a. we need to finish devops bare metal and distributed environments setup
 to make CI and testing process easier
 b. we need to implement elastic-recheck like feature to analyze our CI
 results in order to allow developers to retrigger checks in case of
 floating bugs
 c. we need to start using more sophisticated scheduler


 I find the scheduler statement a curiosity. Can you elaborate?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Set compute_node:hypervisor_nodename as unique and not null

2014-06-24 Thread Joe Gordon
On Jun 18, 2014 11:40 AM, Manickam, Kanagaraj kanagaraj.manic...@hp.com
wrote:

 Hi,



 This mail is regarding the required model change in nova. Please find
more details below:



 As we know, the Nova db has the table “compute_nodes” for modelling the
hypervisors, and it uses the “hypervisor_hostname” field to represent the
hypervisor name.

 This field is significant in the os-hypervisor extension API,
which uses it to uniquely identify the hypervisor.



 Consider the case where a given environment has more than one
hypervisor (KVM, ESX, Xen, etc.) with the same hostname; the os-hypervisor extension, and
thereby the Horizon Hypervisor panel and the nova hypervisor-servers command, will
fail.

 There is a defect (https://bugs.launchpad.net/nova/+bug/1329261) already
filed on the VMware VC driver to address this issue by making sure that a unique
value is generated for the VC driver’s hypervisor.  But it's good to fix this at
the model level as well by making the “hypervisor_hostname” field unique
always. A bug https://bugs.launchpad.net/nova/+bug/1329299 has been filed for
the same.
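
For illustration only, a sqlalchemy-migrate style migration adding such a
constraint might look roughly like this (the constraint name is made up, and a
real change would also have to deal with existing duplicate rows and
soft-deleted records):

    from migrate import UniqueConstraint
    from sqlalchemy import MetaData, Table


    def upgrade(migrate_engine):
        # Disallow two compute_nodes rows sharing one hypervisor_hostname.
        meta = MetaData(bind=migrate_engine)
        compute_nodes = Table('compute_nodes', meta, autoload=True)
        UniqueConstraint('hypervisor_hostname',
                         table=compute_nodes,
                         name='uniq_compute_nodes_hypervisor_hostname').create()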



 Before fixing this bug, I would like to get the opinion of the
community. Could you please help here!

++ to making hypervisor_hostname always unique,  being that we already make
this assumption all over the place.




 Regards

 Kanagaraj M


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] community consensus and removing rules

2014-06-24 Thread Ben Nemec
On 06/24/2014 04:49 AM, Mark McLoughlin wrote:
 On Mon, 2014-06-23 at 19:55 -0700, Joe Gordon wrote:
 
   * Add a new directory, contrib, for local rules that multiple
 projects use but are not generally considered acceptable to be
 enabled by default. This way we can reduce the amount of cut
 and pasted code (thank you to Ben Nemec for this idea).
 
 All sounds good to me, apart from a pet peeve on 'contrib' directories.
 
 What does 'contrib' mean? 'contributed'? What exactly *isn't*
 contributed? Often it has connotations of 'contributed by outsiders'.
 
 It also often has connotations of 'bucket for crap', 'unmaintained and
 untested', YMMV, etc. etc.
 
 Often the name is just chosen out of laziness - I can't think of a good
 name for this, and projects often have a contrib directory with random
 stuff in it, so that works.

That's pretty much what happened here.  Contrib was just a throwaway
name I picked for convenience, but I have no particular attachment to
it. :-)

 
 Let's be precise - these are optional rules, right? How about calling
 the directory 'optional'?

+1

 
 Say no to contrib directories! :-P
 
 Thanks,
 Mark.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Joe Gordon
On Jun 18, 2014 3:03 PM, Chris Friesen chris.frie...@windriver.com
wrote:

 The output of nova list and nova show reflects the current status in
the database, not the actual state on the compute node.

 If the instances in question are on a compute node that is currently
down, then the information is stale and possibly incorrect.  Would there
be any benefit in adding some sort of indication of this in the nova list
output?  Or do we expect the end-user to check nova service-list (or
other health-monitoring mechanisms) to see if the compute node is up
before relying on the output of nova list?

Great question.  In general I don't think a regular user should ever need
to run any health monitoring command. I think the larger question here is
how we handle instances associated with a nova-compute that is
currently being reported as down.  If nova-compute is down we have no way
of knowing the actual state of the instances. Perhaps we should move those
instances to an error state and let the user respond accordingly (delete
instance etc.). And if the Nova-compute service returns we correct the
state.


 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2014-06-24 12:49:52 -0700:
 On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
  Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
   On 06/22/2014 02:49 PM, Duncan Thomas wrote:
On 22 June 2014 14:41, Amrith Kumar amr...@tesora.com wrote:
In addition to making changes to the hacking rules, why don't we 
mandate also
that perceived problems in the commit message shall not be an 
acceptable
reason to -1 a change.

-1.

There are some /really/ bad commit messages out there, and some of us
try to use the commit messages to usefully sort through the changes
(i.e. I often -1 in cinder a change only affects one driver and that
isn't clear from the summary).

If the perceived problem is grammatical, I'm a bit more on board with
it not a reason to rev a patch, but core reviewers can +2/A over the
top of a -1 anyway...
   
   100% agree. Spelling and grammar are rude to review on - especially
   since we have (and want) a LOT of non-native English speakers. It's not
   our job to teach people better grammar. Heck - we have people from
   different English backgrounds with differing disagreements on what good
   grammar _IS_
   
  
  We shouldn't quibble over _anything_ grammatical in a commit message. If
  there is a disagreement about it, the comments should be ignored. There
  are definitely a few grammar rules that are loose and those should be
  largely ignored.
  
  However, we should correct grammar when there is a clear solution, as
  those same people who do not speak English as their first language are
  likely to be confused by poor grammar.
  
  We're not doing it to teach grammar. We're doing it to ensure readability.
 
 The importance of clear English varies with context, but commit messages
 are a place where we should try hard to just let it go, particularly
 with those who do not speak English as their first language.
 
 Commit messages stick around forever and it's important that they are
 useful, but they will be read by a small number of people who are going
 to be in a position to spend a small amount of time getting over
 whatever dissonance is caused by a typo or imperfect grammar.


The times that one is reading git messages are often the most stressful,
such as when a regression has occurred in production.

Given that, I believe it is entirely worth it to me that the commit
messages on my patches are accurate and understandable. I embrace all
feedback which leads to them being more clear. I will of course stand
back from grammar correcting and not block patches if there are many
who disagree.

 I think specs are pretty similar and don't warrant much additional
 grammar nitpicking. Sure, they're longer pieces of text and slightly
 more people will rely on them for information, but they're not intended
 to be complete documentation.


Disagree. I will only state this one more time as I think everyone knows
how I feel: if we are going to grow beyond the english-as-a-first-language
world we simply cannot assume that those reading specs will be native
speakers. Good spelling and grammar helps us grow. Bad spelling and
grammar holds us back.

 Where grammar is so poor that readers would be easily misled in
 important ways, then sure that should be fixed. But there comes a point
 when we're no longer working to avoid confusion and instead just being
 pedants. Taking issue[1] with this:
 
   whatever scaling mechanism Heat and we end up going with.
 
 because it has a dangling preposition is an example of going way
 beyond the point of productive pedantry IMHO :-)

I actually agree that it would not at all be a reason to block a patch.
However, there is some ambiguity in that sentence that may not be clear
to a native speaker. It is not 100% clear if we are going with Heat,
or with the scaling mechanism. That is the only reason for the dangling
preposition debate. However, there is a debate, and thus I would _never_
block a patch based on this rule. It was feedback.. just as sometimes
there is feedback in commit messages that isn't taken and doesn't lead
to a -1.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread Clark, Robert Graham
Yeah pretty much.

That's something I'd be interested to work on, if work isn't ongoing
already.

-Rob





On 24/06/2014 18:57, John Wood john.w...@rackspace.com wrote:

Hello Robert,

I would actually hope we have a self-contained certificate plugin
implementation that runs 'out of the box' to enable certificate
generation orders to be evaluated and demo-ed on local boxes.

Is this what you were thinking though?

Thanks,
John




From: Clark, Robert Graham [robert.cl...@hp.com]
Sent: Tuesday, June 24, 2014 10:36 AM
To: OpenStack List
Subject: [openstack-dev] [Barbican] Barebones CA

Hi all,

I'm sure this has been discussed somewhere and I've just missed it.

Is there any value in creating a basic 'CA' and plugin to satisfy
tests/integration in Barbican? I'm thinking something that probably
performs OpenSSL certificate operations itself, ugly but perhaps useful
for some things?
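
For a sense of what 'barebones' could mean here, a self-signed certificate
takes only a few lines of pyOpenSSL; this is purely illustrative and says
nothing about Barbican's plugin interface:

    from OpenSSL import crypto

    # Throwaway CA key and self-signed certificate.
    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)

    cert = crypto.X509()
    cert.get_subject().CN = 'barbican-test-ca'   # illustrative subject
    cert.set_serial_number(1)
    cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(365 * 24 * 60 * 60)  # valid for one year
    cert.set_issuer(cert.get_subject())           # self-signed
    cert.set_pubkey(key)
    cert.sign(key, 'sha256')

    pem = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)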

-Rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Upgrade of Hadoop components inside released version

2014-06-24 Thread Andrew Lazarev
Hi Team,

I want to raise a topic about upgrading components in a Hadoop version that is
already supported by a released Sahara plugin. The question is raised because
of several change requests [1] and [2]. The topic was discussed in Atlanta
([3]), but we didn't come to a decision.

All of us agreed that existing clusters must continue to work after an
OpenStack upgrade. So if a user creates a cluster with Icehouse Sahara and then
upgrades OpenStack - everything should continue working as before. The most
tricky operation is scaling, and it dictates a list of restrictions on a new
version of a component:

1. the plugin-version pair supported by the plugin must not change
2. if a component upgrade requires DIB to be involved, then the plugin must work with
both versions of the image - old and new
3. a cluster with mixed nodes (created by old code and by new code) should
still be operational

Given that, we should choose a policy for component upgrades. Here are several
options:

1. Prohibit component upgrades in released versions of a plugin. Change the
plugin version even if the hadoop version didn't change. This solves all listed
problems but is a little bit frustrating for users. They will need to recreate
all clusters they have and migrate data just as if it were a hadoop upgrade. They
should also consider a Hadoop upgrade at the same time, to do the migration only once.

2. Disable some operations on a cluster created by the previous version. If
users don't have the option to scale a cluster there will be no problems with
mixed nodes. For this option Sahara needs to know whether the cluster was created
by this version or not.

3. Require the change author to perform all kinds of tests and prove that a mixed
cluster works as well as a non-mixed one. In that case we need some list of
tests that are enough to cover all corner cases.

Ideas are welcome.

[1] https://review.openstack.org/#/c/98260/
[2] https://review.openstack.org/#/c/87723/
[3] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Russell Bryant
On 06/24/2014 04:42 PM, Joe Gordon wrote:
 
 On Jun 18, 2014 3:03 PM, Chris Friesen chris.frie...@windriver.com
 mailto:chris.frie...@windriver.com wrote:

 The output of nova list and nova show reflects the current status
 in the database, not the actual state on the compute node.

 If the instances in question are on a compute node that is currently
 down, then the information is stale and possibly incorrect.  Would
 there be any benefit in adding some sort of indication of this in the
 nova list output?  Or do we expect the end-user to check nova
 service-list (or other health-monitoring mechanisms) to see if the
 compute node is up before relying on the output of nova list?
 
 Great question.  In general I don't think a regular user should ever
 need to run any health monitoring command. I think the larger question
 here is how we handle instances associated with a nova-compute
 that is currently being reported as down.  If nova-compute is down we
 have no way of knowing the actual state of the instances. Perhaps we
 should move those instances to an error state and let the user respond
 accordingly (delete instance etc.). And if the Nova-compute service
 returns we correct the state.

There be dragons here.  Just because Nova doesn't see the node reporting
in, doesn't mean the VMs aren't actually still running.  I think this
needs to be left to logic outside of Nova.

For example, if your deployment monitoring really does think the host is
down, you want to make sure it's *completely* dead before taking further
action such as evacuating the host.  You certainly don't want to risk
having the VM running on two different hosts.  This is just a business I
don't think Nova should be getting in to.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Mark McLoughlin
On Tue, 2014-06-24 at 13:56 -0700, Clint Byrum wrote:
 Excerpts from Mark McLoughlin's message of 2014-06-24 12:49:52 -0700:
  On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
   Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
On 06/22/2014 02:49 PM, Duncan Thomas wrote:
 On 22 June 2014 14:41, Amrith Kumar amr...@tesora.com wrote:
 In addition to making changes to the hacking rules, why don't we 
 mandate also
 that perceived problems in the commit message shall not be an 
 acceptable
 reason to -1 a change.
 
 -1.
 
 There are some /really/ bad commit messages out there, and some of us
 try to use the commit messages to usefully sort through the changes
 (i.e. I often -1 in cinder a change only affects one driver and that
 isn't clear from the summary).
 
 If the perceived problem is grammatical, I'm a bit more on board with
 it not a reason to rev a patch, but core reviewers can +2/A over the
 top of a -1 anyway...

100% agree. Spelling and grammar are rude to review on - especially
since we have (and want) a LOT of non-native English speakers. It's not
our job to teach people better grammar. Heck - we have people from
different English backgrounds with differing disagreements on what good
grammar _IS_

   
   We shouldn't quibble over _anything_ grammatical in a commit message. If
   there is a disagreement about it, the comments should be ignored. There
   are definitely a few grammar rules that are loose and those should be
   largely ignored.
   
   However, we should correct grammar when there is a clear solution, as
   those same people who do not speak English as their first language are
   likely to be confused by poor grammar.
   
   We're not doing it to teach grammar. We're doing it to ensure readability.
  
  The importance of clear English varies with context, but commit messages
  are a place where we should try hard to just let it go, particularly
  with those who do not speak English as their first language.
  
  Commit messages stick around forever and it's important that they are
  useful, but they will be read by a small number of people who are going
  to be in a position to spend a small amount of time getting over
  whatever dissonance is caused by a typo or imperfect grammar.
 
 
 The times that one is reading git messages are often the most stressful
 such as when a regression has occurred in production.
 
 Given that, I believe it is entirely worth it to me that the commit
 messages on my patches are accurate and understandable. I embrace all
 feedback which leads to them being more clear. I will of course stand
 back from grammar correcting and not block patches if there are many
 who disagree.
 
  I think specs are pretty similar and don't warrant much additional
  grammar nitpicking. Sure, they're longer pieces of text and slightly
  more people will rely on them for information, but they're not intended
  to be complete documentation.
 
 
 Disagree. I will only state this one more time as I think everyone knows
 how I feel: if we are going to grow beyond the english-as-a-first-language
 world we simply cannot assume that those reading specs will be native
 speakers. Good spelling and grammar helps us grow. Bad spelling and
 grammar holds us back.

There's two sides to this coin - concern about alienating
non-english-as-a-first-language speakers who feel undervalued because
their language is nitpicked to death and concern about alienating
english-as-a-first-language speakers who struggle to understand unclear
or incorrect language.

Obviously there's a balance to be struck there and different people will
judge that differently, but I'm personally far more concerned about the
former rather than the latter case.

I expect many beyond the english-as-a-first-language world are pretty
used to dealing with imperfect language but aren't so delighted with
being constantly reminded that their use of language is imperfect.

  Where grammar is so poor that readers would be easily misled in
  important ways, then sure that should be fixed. But there comes a point
  when we're no longer working to avoid confusion and instead just being
  pedants. Taking issue[1] with this:
  
whatever scaling mechanism Heat and we end up going with.
  
  because it has a dangling preposition is an example of going way
  beyond the point of productive pedantry IMHO :-)
 
 I actually agree that it would not at all be a reason to block a patch.
 However, there is some ambiguity in that sentence that may not be clear
 to a native speaker. It is not 100% clear if we are going with Heat,
 or with the scaling mechanism. That is the only reason for the dangling
 preposition debate.

I'd wager you'd seriously struggle to find anyone who would interpret
that sentence as we are going with Heat, even if they were
non-english-as-a-first-language speakers who had never heard of
OpenStack or 

Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 2:31 PM, Russell Bryant rbry...@redhat.com wrote:

 On 06/24/2014 04:42 PM, Joe Gordon wrote:
 
  On Jun 18, 2014 3:03 PM, Chris Friesen chris.frie...@windriver.com
  mailto:chris.frie...@windriver.com wrote:
 
  The output of nova list and nova show reflects the current status
  in the database, not the actual state on the compute node.
 
  If the instances in question are on a compute node that is currently
  down, then the information is stale and possibly incorrect.  Would
  there be any benefit in adding some sort of indication of this in the
  nova list output?  Or do we expect the end-user to check nova
  service-list (or other health-monitoring mechanisms) to see if the
  compute node is up before relying on the output of nova list?
 
  Great question.  In general I don't think a regular user should ever
  need to run any health monitoring command. I think the larger question
  here is how we handle instances associated with a nova-compute
  that is currently being reported as down.  If nova-compute is down we
  have no way of knowing the actual state of the instances. Perhaps we
  should move those instances to an error state and let the user respond
  accordingly (delete instance etc.). And if the Nova-compute service
  returns we correct the state.

 There be dragons here.  Just because Nova doesn't see the node reporting
 in, doesn't mean the VMs aren't actually still running.  I think this
 needs to be left to logic outside of Nova.

 For example, if your deployment monitoring really does think the host is
 down, you want to make sure it's *completely* dead before taking further
 action such as evacuating the host.  You certainly don't want to risk
 having the VM running on two different hosts.  This is just a business I
 don't think Nova should be getting in to.

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything. Since
nova knows something *may* be wrong shouldn't we convey that to the user
(I'm not 100% sure we should myself).


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking Hey, 
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?


rick jones

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 2:47 PM, Rick Jones rick.jon...@hp.com wrote:

 On 06/24/2014 02:38 PM, Joe Gordon wrote:

 I agree nova shouldn't take any actions. But I don't think leaving an
 instance as 'active' is right either.  I was thinking move instance to
 error state (maybe an unknown state would be more accurate) and let the
 user deal with it, versus just letting the user deal with everything.
 Since nova knows something *may* be wrong shouldn't we convey that to
 the user (I'm not 100% sure we should myself).


 I suspect the user's first action will be to call Support asking Hey,
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?

True, but the alternative is, why is this dead instance listed as ACTIVE,
and I am being billed for it too. I think this is a lose-lose.


 rick jones


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Steve Gordon
- Original Message -
 From: Rick Jones rick.jon...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 On 06/24/2014 02:38 PM, Joe Gordon wrote:
  I agree nova shouldn't take any actions. But I don't think leaving an
  instance as 'active' is right either.  I was thinking move instance to
  error state (maybe an unknown state would be more accurate) and let the
  user deal with it, versus just letting the user deal with everything.
  Since nova knows something *may* be wrong shouldn't we convey that to
  the user (I'm not 100% sure we should myself).
 
 I suspect the user's first action will be to call Support asking Hey,
 why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?
 
 rick jones

The existing alternative would be having the user call to ask why their 
non-responsive instance is showing as RUNNING, so you are kind of damned if you 
do, damned if you don't.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Kevin L. Mitchell
On Tue, 2014-06-24 at 22:26 +0100, Mark McLoughlin wrote:
 There's two sides to this coin - concern about alienating
 non-english-as-a-first-language speakers who feel undervalued because
 their language is nitpicked to death and concern about alienating
 english-as-a-first-language speakers who struggle to understand unclear
 or incorrect language.

Actually, I think there's a third case which is the one people seem to
be worried about here: non-English-as-a-first-language speakers who are
trying to read English written by other non-English-as-a-first-language
speakers.

 Obviously there's a balance to be struck there and different people will
 judge that differently, but I'm personally far more concerned about the
 former rather than the latter case.

So, my personal experience is that, as long as you express your
corrections kindly, most non-English-as-a-first-language speakers are
receptive and appreciative of corrections.  They should be
*corrections*, though: Actually, I think you meant '…'; that would be
clearer.  Further, unless there are egregious problems, I always try to
express my language suggestions as femtonits, meaning that I don't
down-vote a patch for those issues (unless perhaps there are a *lot* of
them).  The only time I don't suggest corrections is when I really can't
understand what was meant, in which case I try to ask questions to help
clarify the meaning…

 Absolutely, and I try and be clear about that with e.g. not a -1 or
 if you're rebasing anyway, perhaps fix this.
 
 Maybe a convention for such comments would be a good thing? We often do
 'nitpick' or 'femtonit', but they are often still things people are
 -1ing on.

Perhaps we should formalize the terminology, maybe by documenting that
femtonit should mean this in something like the review checklist?  We
could pair that with a glossary of terms that could be referred to by
the your first patch bot and mentioned in the Gerrit workflow page.
That way, reviewers are using a consistent terminology—femtonit is
hardly standard English, after all; it's a specialty term we've invented
—and developers have guidance on what it means and what they should do
in response to it.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-24 Thread Brian Rosmaita
Hi Facundo,

Can you attend the Glance meeting this week at 20:00 UTC on Thursday in 
#openstack-meeting-alt?

I may be misunderstanding what's at stake, but it looks like:
- Glance holds the image metadata (some user-modifiable, some not)
- Cinder copies the image metadata to use as volume metadata (none is 
user-modifiable)
- You want to implement user-modifiable metadata in Cinder, but you don't know 
which items should be mutable and which not.
- You propose to add glance API calls to allow you to figure out property 
protections on a per-property basis.

It looks like the only roles for Glance here are (1) as the original source of 
the image metadata, and then (2) as the source of truth for what image 
properties can be modified on the volume metadata.  For (1), you've already got 
an API call.  For (2), why not use the glance property protection configuration 
file directly?  It's going to be deployed somehow to your glance nodes, you can 
deploy it to your cinder nodes at the same time.  Or you can just use it as the 
basis of a Cinder property protection config file, because I wonder whether in 
the general case, you'll always want volume properties protected exactly the 
same as image properties.  If not, the new API call strategy will force you to 
deal with differences in the code, whereas the config file strategy would move 
dealing with differences to setting up the config file.  So I'm not convinced 
that a new API call is the way to go here.
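
For reference, that file is an ini-style config where each section is a regex
over property names and each option lists the roles allowed for the operation;
a purely illustrative example (patterns and roles made up):

    [^x_billing_.*]
    create = admin
    read = admin,billing
    update = admin
    delete = admin

    [.*]
    create = admin,member
    read = admin,member
    update = admin,member
    delete = admin,member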

But there may be some nuances I'm missing, so it might be easier to discuss at 
the Glance meeting.  The agenda looks pretty light for Thursday if you want to 
add this topic:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

cheers,
brian


From: Maldonado, Facundo N [facundo.n.maldon...@intel.com]
Sent: Tuesday, June 24, 2014 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

Hi folks,

I started working on this blueprint [1] but the work to be done 
is not limited to the cinder python client.
Volume-image-metadata is immutable in Cinder, and Glance has 
RBAC image properties but doesn’t provide any way to find out in advance 
which properties are protected [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Vijay B
Hi Brandon, Eugene, Doug,

During the hackathon, I remember that we had briefly discussed how
listeners would manifest themselves on the LB VM/device, and it turned out
that for some backends like HAProxy it simply meant creating a frontend
entry in the cfg file whereas on other solutions it could mean spawning a
process/equivalent. So we must have status fields to track the state of any
such entities that are actually created. In the listener case, an ACTIVE
state would mean that the appropriate backend processes have been created
or that the required config file entries have been made.
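
As a concrete illustration of the HAProxy case (names and addresses made up),
a listener typically surfaces as nothing more than a frontend stanza in the
generated configuration, which is exactly what a provisioning status needs to
confirm was written out:

    # one frontend stanza per LBaaS listener
    frontend listener_8f3c
        bind 10.0.0.5:443
        mode tcp
        default_backend pool_web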

I like the idea of having relational objects and setting the status on
them, and in our case we can use the status fields
(pool/healthmonitor/listener) in each table to denote the state of the
relationship (configuration/association on backend) to another object like
LoadBalancer. So I think the status fields should stay.

In this scenario, some entities' status could be updated in lbaas proper,
and some in the driver implementation. I don't have a strict preference as
to which among lbaas proper or the driver layer announces the status since
we discussed on the IRC that we'd have helper functions in the driver to do
these updates.


Regards,
Vijay


On Tue, Jun 24, 2014 at 12:16 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 On Tue, 2014-06-24 at 18:53 +, Doug Wiegley wrote:
  Hi Brandon,
 
  I think just one status is overloading too much onto the LB object (which
  is perhaps something that a UI should do for a user, but not something an
  API should be doing.)

 That is a good point and perhaps it's another discussion to just have
 some way to show the status an entity has for each load balancer, which
 is what Mark suggested for the member status at the meet-up.

 
   1) If an entity exists without a link to a load balancer it is purely
   just a database entry, so it would always be ACTIVE, but not really
   active in a technical sense.
 
  Depends on the driver.  I don't think this is a decision for lbaas
 proper.

 Driver is linked to the flavor or provider.  Flavor or provider will/is
 linked to load balancer.  We won't be able to get a driver to send anything
 to if there isn't a load balancer.  Without a driver it is a decision
 for lbaas proper.  I'd be fine with setting the status of these
 orphaned entities to just ACTIVE but I'm just worried about the status
 management in the future.

 
 
   2) If some of these entities become shareable then how does the status
   reflect that the entity failed to create on one load balancer but was
   successfully created on another.  That logic could get overly complex.
 
  That's a status on the join link, not the object, and I could argue
  multiple ways in which that should be one way or another based on the
  backend, which to me, again implies driver question (backend could queue
  for later, or error immediately, or let things run degraded, or…)

 Yeah that is definitely an argument.  I'm just trying to keep in mind
 the complexities that could arise from decisions made now.  Perhaps it
 is the wrong way to look at it to some, but I don't think thinking about
 the future is a bad thing or something that should never be done.

 
  Thanks,
  Doug
 
 
 
 
  On 6/24/14, 11:23 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:
 
  I think we missed this discussion at the meet-up but I'd like to bring
  it up here.  To me having a status on all entities doesn't make much
  sense, and just having a status on a load balancer (which would be a
  provisioning status) and a status on a member (which would be an
  operational status) are what makes sense because:
  
  1) If an entity exists without a link to a load balancer it is purely
  just a database entry, so it would always be ACTIVE, but not really
  active in a technical sense.
  
  2) If some of these entities become shareable then how does the status
  reflect that the entity failed to create on one load balancer but was
  successfully created on another.  That logic could get overly complex.
  
  I think the best thing to do is to have the load balancer status reflect
  the provisioning status of all of its children.  So if a health monitor
  is updated then the load balancer that health monitor is linked to would
  have its status changed to PENDING_UPDATE.  Conversely, if a load
  balancer or any entities linked to it are changed and the load
  balancer's status is in a non-ACTIVE state then that update should not
  be allowed.
  
  Thoughts?
  
  Thanks,
  Brandon
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:53 PM, Steve Gordon wrote:

- Original Message -

From: Rick Jones rick.jon...@hp.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org

On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking Hey,
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?

rick jones


The existing alternative would be having the user call to ask why
their non-responsive instance is showing as RUNNING, so you are kind
of damned if you do, damned if you don't.


There will be a call for a non-responsive instance regardless of what it 
shows.  However, a responsive instance not showing ERROR or UNKNOWN will 
not generate a call.  So, all in all, I think you will get fewer calls if 
you don't mark an instance that is not known to be non-responsive as ERROR or 
UNKNOWN.


rick


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DevStack] neutron config not working

2014-06-24 Thread Rob Crittenden
Before I get punted onto the operators list, I post this here because
this is the default config and I'd expect the defaults to just work.

Running devstack inside a VM with a single NIC configured and this in
localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
Q_USE_DEBUG_COMMAND=True

Results in a successful install, but no DHCP address is assigned to hosts I
launch, and there are other oddities like no CIDR in the nova net-list output.

Is this still the default way to set things up for single node? It is
according to https://wiki.openstack.org/wiki/NeutronDevstack

thanks

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

